Most deep reinforcement learning algorithms are sample-inefficient in complex, rich environments, requiring a large number of samples to adapt to a new task. In the real world, however, it is essential to adapt to a new task quickly from only a small number of samples. One approach to this problem is meta-learning, which learns how to learn, and a variety of meta-learning methods have been studied. However, prior meta-learning methods maintain only a single model for adapting to a new task, and a single model is not sufficient for more complex tasks. In this work, we propose a meta-reinforcement-learning (meta-RL) method that uses multiple models for adapting to a new task. We evaluate the proposed algorithm on a variety of locomotion tasks and show that it learns new tasks more effectively.
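The core idea of adapting with multiple models can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the paper's algorithm: it assumes a MAML-style setup in which several meta-learned initializations are each adapted with a few gradient steps on the new task's samples, and the best-adapting one is selected. A scalar parameter and a quadratic loss stand in for a policy and an RL objective.

```python
# Toy sketch of adaptation with multiple models (hypothetical illustration;
# the actual architectures, losses, and meta-training procedure are not shown).

def loss(theta, target):
    # Quadratic stand-in for a task loss (not an actual RL objective).
    return (theta - target) ** 2

def grad(theta, target):
    # Gradient of the toy loss with respect to theta.
    return 2.0 * (theta - target)

def adapt(theta, target, lr=0.1, steps=5):
    # Inner-loop adaptation: a few gradient steps on the new task's samples.
    for _ in range(steps):
        theta = theta - lr * grad(theta, target)
    return theta

def adapt_with_multiple_models(inits, target):
    # Adapt every candidate initialization, then keep the best performer.
    adapted = [adapt(theta, target) for theta in inits]
    return min(adapted, key=lambda th: loss(th, target))

if __name__ == "__main__":
    inits = [-2.0, 0.0, 3.0]  # three meta-learned initializations (toy values)
    best = adapt_with_multiple_models(inits, target=2.5)
    print(best)
```

With a single initialization, adaptation can start far from the new task's optimum; maintaining several candidates and selecting after adaptation makes fast adaptation more robust across dissimilar tasks, which is the intuition behind using multiple models.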