Filtering noisy labels is crucial for the robust training of deep neural networks. To train networks under label noise, sampling methods have been introduced that select reliable instances and update the network using only the sampled data. Since they rarely employ the non-sampled data for training, these methods suffer from a fundamental limitation: they reduce the amount of training data. To alleviate this problem, our approach aims to fully utilize the whole dataset by leveraging the information in the sampled data. To this end, we propose a novel graph-based learning framework that enables networks to propagate the label information of the sampled data to adjacent data, whether sampled or not. We also propose a novel self-training strategy that utilizes the non-sampled data without labels and regularizes the network update using the information of the sampled data. Our method outperforms state-of-the-art sampling methods.
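The exact framework is defined in the body of the paper; purely as an illustration of the graph-based propagation idea described above, the sketch below uses a generic label-propagation scheme (in the style of Zhou et al.) on a Gaussian affinity graph. The function name, the `features`/`sampled_mask` inputs, and the affinity construction are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def propagate_labels(features, labels, sampled_mask, n_classes,
                     alpha=0.99, sigma=1.0, n_iter=50):
    """Generic graph-based label propagation (illustrative sketch).

    Spreads the labels of the sampled (reliable) instances to
    adjacent instances in feature space, whether sampled or not.
    """
    n = features.shape[0]
    # Pairwise squared distances -> Gaussian affinity graph.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized adjacency S = D^{-1/2} W D^{-1/2}.
    d = W.sum(1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Seed matrix: one-hot labels for sampled data, zeros elsewhere.
    Y = np.zeros((n, n_classes))
    Y[sampled_mask, labels[sampled_mask]] = 1.0
    # Iterate F <- alpha * S @ F + (1 - alpha) * Y toward the fixed point.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(1)  # pseudo-labels for every instance
```

The returned pseudo-labels cover the whole dataset, so the non-sampled instances can also contribute to training, which is the motivation stated in the abstract.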