Semi-supervised learning (SSL) methods for classification achieve significant performance gains by combining regularization with pseudo-labeling. Conventional pseudo-labeling methods rely solely on the model's predictions to assign pseudo-labels, which often produces incorrect pseudo-labels, because the network is biased toward easy classes or the training set contains confusing samples, and these errors further degrade model performance. To address this issue, we propose a novel pseudo-labeling framework that dramatically reduces the ambiguity of pseudo-labels for confusing samples in SSL. Our method, Pruning for Pseudo-Label (P-PseudoLabel), uses an Easy-to-Forget (ETF) Sample Finder, which compares the outputs of the model and its pruned counterpart to identify confusing samples. We then perform negative learning on the confusing samples to reduce the risk of providing incorrect information and to improve performance. Our method outperforms recent state-of-the-art SSL methods on CIFAR-10, CIFAR-100, and Mini-ImageNet, and is on par with the state of the art on SVHN and STL-10.
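As a rough illustration only (the abstract does not spell out the training procedure), the sketch below shows one way the two building blocks could look in PyTorch: a disagreement check between the model and a pruned copy to flag confusing samples, and a standard negative-learning loss on a complementary label. The function names, the disagreement rule, and the single-complementary-label formulation are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def find_confusing_samples(model, pruned_model, inputs):
    """Hypothetical ETF Sample Finder: flag samples on which the full model
    and its pruned counterpart disagree, treating them as 'confusing'."""
    pred_full = model(inputs).argmax(dim=-1)
    pred_pruned = pruned_model(inputs).argmax(dim=-1)
    confusing = pred_full != pred_pruned  # Boolean mask over the batch
    return confusing, pred_full


def negative_learning_loss(logits, complementary_labels):
    """Negative learning: decrease the predicted probability of a class the
    sample is assumed NOT to belong to (its complementary label)."""
    probs = F.softmax(logits, dim=-1)
    p_neg = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_neg + 1e-7).mean()
```

In a training loop, the confusing mask would select the unlabeled samples that receive a complementary label and the negative-learning loss instead of an ordinary pseudo-label cross-entropy; how the complementary labels are chosen and how the pruned model is obtained are details not specified in the abstract.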