Most existing approaches for single image super-resolution (SISR) rely on high-quality low-resolution–high-resolution (LR-HR) pairs and known degradation kernels to train networks for a specific task at hand in a fully supervised manner. The labeled data used for training are, however, usually limited both in quantity and in the diversity of degradation kernels. SR networks learned with a single degradation kernel (e.g., bicubic) do not generalize well, and their performance deteriorates sharply on other kernels (e.g., blur or noise). In this paper, we address a critical challenge for SISR: limited labeled LR images and degradation kernels. We propose a novel Semi-supervised Student-Teacher Super-Resolution approach called STSR that super-resolves both labeled and unlabeled LR images via adversarial learning. To better exploit the information from labeled LR images, we propose a student-teacher (S-T) framework that transfers knowledge from supervised learning (T) to unsupervised learning (S). Specifically, the S-T knowledge transfer is based on a shared SR network, partial weight sharing of dual discriminators, and a pair matching network that also acts as a ‘latent discriminator’. Lastly, to learn better features from the limited labeled LR images, we propose a new SR network built on non-local and attention mechanisms. Experiments demonstrate that our approach substantially improves over unsupervised methods and performs favorably against fully supervised methods.
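The partial weight sharing of the dual discriminators can be illustrated with a minimal sketch: two tiny discriminators (a supervised teacher branch and an unsupervised student branch) reuse the same early-layer weight array while keeping private heads. All names, shapes, and the plain-NumPy MLP here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


class Discriminator:
    """A tiny MLP discriminator whose first layer can be shared.

    Hypothetical sketch: the shared early layer models the common
    low-level features, while the private head is branch-specific.
    """

    def __init__(self, shared_w, out_dim=1):
        self.shared_w = shared_w  # early-layer weights, shared across branches
        self.private_w = rng.normal(size=(shared_w.shape[1], out_dim))

    def forward(self, x):
        h = relu(x @ self.shared_w)  # shared feature extraction
        return h @ self.private_w    # branch-specific real/fake score


shared = rng.normal(size=(8, 16))   # weights common to both discriminators
d_teacher = Discriminator(shared)   # supervised (T) branch
d_student = Discriminator(shared)   # unsupervised (S) branch

x = rng.normal(size=(4, 8))
scores_t = d_teacher.forward(x)
scores_s = d_student.forward(x)

# Both branches reference the very same weight array, so a gradient
# update applied to `shared` would affect T and S simultaneously.
assert d_teacher.shared_w is d_student.shared_w
assert scores_t.shape == (4, 1) and scores_s.shape == (4, 1)
```

Because the shared array is one object, updating it from either branch's loss propagates knowledge between the supervised and unsupervised discriminators, which is the intuition behind the partial weight sharing described above.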