Self-knowledge distillation (self-KD) methods, which use the student model itself as the teacher instead of a large and complex teacher model, are currently a subject of active study. Since most previous self-KD approaches relied on the knowledge of a single teacher model, poor-quality knowledge was transferred to the student model whenever the teacher incorrectly predicted confusing samples. Unfortunately, natural images are often ambiguous for teacher models due to multiple objects, mislabeling, or low quality. In this paper, we propose a novel knowledge distillation framework named ambiguity-aware robust teacher knowledge distillation (ART-KD), which uses network pruning to provide refined knowledge that reflects the ambiguity of the samples. Since the pruned teacher model is obtained simply by copying and pruning the teacher model, no re-training is required in ART-KD. The key insight of ART-KD is that, for ambiguous samples, the predictions of the teacher model and the pruned teacher model form different distributions with low similarity. From these two distributions, we obtain a joint distribution that accounts for the ambiguity of the samples and use it as the teacher's knowledge for distillation. We comprehensively evaluate our method on public classification benchmarks, as well as on more challenging fine-grained visual recognition (FGVR) benchmarks, achieving performance clearly superior to state-of-the-art counterparts.
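To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the idea, not the authors' implementation: the pruned teacher is a deep copy with L1 unstructured pruning applied to its linear layers, the per-sample similarity is assumed to be cosine similarity between the two softened distributions, and the joint distribution is assumed to be their normalized element-wise product; the exact pruning scheme, similarity measure, and combination rule used in ART-KD may differ.

```python
# Hypothetical sketch of the ART-KD idea (not the authors' code).
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def make_pruned_copy(teacher: nn.Module, amount: float = 0.3) -> nn.Module:
    """Copy the teacher and prune each linear layer's weights (no re-training)."""
    pruned = copy.deepcopy(teacher)
    for module in pruned.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return pruned


def art_kd_targets(teacher, pruned_teacher, x, temperature=4.0):
    """Build ambiguity-aware soft targets from teacher / pruned-teacher outputs."""
    with torch.no_grad():
        p_t = F.softmax(teacher(x) / temperature, dim=1)
        p_p = F.softmax(pruned_teacher(x) / temperature, dim=1)
    # Per-sample similarity of the two predictive distributions; low similarity
    # signals an ambiguous sample (cosine similarity is an assumption here).
    sim = F.cosine_similarity(p_t, p_p, dim=1, eps=1e-8)  # shape (B,)
    # Assumed combination rule: normalized element-wise product of the two
    # distributions, which downweights classes the models disagree on.
    joint = p_t * p_p
    joint = joint / joint.sum(dim=1, keepdim=True).clamp_min(1e-8)
    return joint, sim


def art_kd_loss(student_logits, joint, temperature=4.0):
    """Standard KD-style KL divergence against the joint soft targets."""
    log_q = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_q, joint, reduction="batchmean") * temperature ** 2


if __name__ == "__main__":
    # Toy models standing in for the real teacher/student networks.
    teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    student = nn.Sequential(nn.Linear(32, 10))
    pruned_teacher = make_pruned_copy(teacher, amount=0.3)

    x = torch.randn(8, 32)
    joint, sim = art_kd_targets(teacher, pruned_teacher, x)
    loss = art_kd_loss(student(x), joint)
    print(f"KD loss: {loss.item():.4f}, mean teacher/pruned similarity: {sim.mean().item():.4f}")
```

The returned per-sample similarity could additionally serve as an ambiguity score, e.g. to reweight samples during distillation, though how (or whether) ART-KD uses it that way is not specified in the abstract.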