We introduce a novel approach that redefines and constructs the input data format for prompt tuning by capitalizing on the pre-training data format used by large language models (LLMs). While prompt tuning has proven to be a powerful parameter-efficient technique for adapting pre-trained language models to downstream tasks, it still falls short of the performance of full fine-tuning. Our proposed approach, PT2TT (Prompt Tuning via Pre-training Task Template Transfer), is motivated by the fact that LLMs are pre-trained to perform well on a diverse set of natural language tasks using preprocessing templates, which are readily available for open-source LLMs such as T5. Given a downstream task, it is therefore natural to format the input data so that it resembles that of a relevant pre-training task, providing the LLM with a context it is already familiar with. We add soft prompts to the input data and tune them to capture the residual context exclusive to the downstream task. Through experiments on standard benchmark tasks, we demonstrate that our method significantly outperforms vanilla prompt tuning and performs on par with state-of-the-art parameter-efficient tuning methods.
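
The following is a minimal sketch of the idea described above, not the paper's implementation: a downstream example is wrapped in a T5-style pre-training task template before trainable soft-prompt embeddings are prepended, and only the soft prompts receive gradient updates. The checkpoint name, template string ("sst2 sentence:"), prompt length, and learning rate are illustrative assumptions.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "t5-base"  # assumed checkpoint, not necessarily the one used in the paper
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Freeze all pre-trained weights; only the soft prompts are tuned.
for p in model.parameters():
    p.requires_grad = False

num_prompt_tokens = 20  # assumed prompt length
embed_dim = model.config.d_model
soft_prompts = torch.nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.5)

def build_input(text):
    # Format the downstream example like a relevant pre-training task
    # (a hypothetical sentiment-style template in the spirit of T5's task mixtures).
    templated = f"sst2 sentence: {text}"
    enc = tokenizer(templated, return_tensors="pt")
    token_embeds = model.get_input_embeddings()(enc.input_ids)  # (1, seq, d_model)
    # Prepend the trainable soft prompts to the templated input embeddings.
    inputs_embeds = torch.cat([soft_prompts.unsqueeze(0), token_embeds], dim=1)
    attention_mask = torch.cat(
        [torch.ones(1, num_prompt_tokens, dtype=enc.attention_mask.dtype),
         enc.attention_mask],
        dim=1,
    )
    return inputs_embeds, attention_mask

# One illustrative training step on a single example.
inputs_embeds, attention_mask = build_input("the movie was a delight")
labels = tokenizer("positive", return_tensors="pt").input_ids
optimizer = torch.optim.AdamW([soft_prompts], lr=0.3)

loss = model(inputs_embeds=inputs_embeds,
             attention_mask=attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()
```

In this sketch, the template supplies the pre-training-style context while the soft prompts are left to learn only the residual, task-specific signal, which is the division of labor the abstract describes.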