Exemplification modeling is a recently proposed task that aims to produce a fluent sentence in which a target word is used with a specific meaning. This task is particularly challenging for polysemous words, whose intended sense must be distinguished from several possible meanings. In this paper, we propose a one-shot variant of the exemplification modeling task that requires no labeled data during training, making it possible to train our system on a raw text corpus. Given a single example at test time, our approach generates diverse and fluent sentences in which the target word accurately matches its intended meaning. We compare our approach to a fully supervised baseline trained with varying amounts of data, focusing our evaluation on polysemous words. Using both automatic and human evaluations, we show how each model performs on both seen and unseen words. Despite using no labeled data during training, our approach performs comparably to the fully supervised baseline.