Gradient signals have been widely used for the interpretability of convolutional neural networks. To explain the decision on a single input, all channels in a layer contribute to gradient propagation. We hypothesize that not all channels are required to explain a single input and that channel pruning can improve the reliability of a saliency map. To test this hypothesis, we propose the partitioned channel gradient (ParchGrad), which partitions channels into two sets and modifies the gradient signals so that the ratio of their gradient magnitudes is manually controllable. In addition, we propose simple channel partitioning methods to prune channels for ParchGrad. We empirically show that ParchGrad, combined with several saliency methods, yields more reliable saliency maps than the original gradient signal. We also find that (1) only a few channels (~10%) are required to explain a single input and (2) the optimal pruning layers differ across class labels.
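
The core mechanism can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration of the abstract's idea, not the paper's implementation: channels are split into a kept set and a pruned set, and the pruned channels' gradients are rescaled so that the ratio of total gradient magnitude between the two sets equals a user-chosen value. The function name `parchgrad_backward` and the `ratio` parameter are assumptions for illustration.

```python
import numpy as np

def parchgrad_backward(grad, keep_mask, ratio=10.0):
    """Hypothetical sketch of the gradient modification described above.

    grad:      array of shape (C, H, W), gradient w.r.t. a conv layer's
               activation maps during backpropagation.
    keep_mask: boolean array of shape (C,); True for channels kept
               to explain the input, False for pruned channels.
    ratio:     desired ratio of total gradient magnitude
               (kept set / pruned set), manually controllable.
    """
    grad = grad.copy()
    keep_norm = np.abs(grad[keep_mask]).sum()
    prune_norm = np.abs(grad[~keep_mask]).sum()
    # Rescale pruned channels so that kept/pruned magnitude == ratio.
    if prune_norm > 0:
        grad[~keep_mask] *= keep_norm / (ratio * prune_norm)
    return grad
```

In practice such a modification would be attached as a backward hook on the chosen layer, so that any gradient-based saliency method run afterward uses the rescaled signal.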