We propose an algorithm that enables robots to manipulate non-graspable objects. Existing planning-based approaches require complex contact modeling and are relatively slow to find a plan. Our method is based on reinforcement learning, which requires no contact modeling and selects actions quickly. However, a naive application of reinforcement learning is data-inefficient because the robot wastes exploration merely reaching the object. Many existing works address this issue with a reward that encourages the robot to move its end-effector close to the object. In contrast, we introduce a pre-contact policy, which makes initial contact with the object; a second policy, the post-contact policy, then manipulates the object to the goal. We show that using a trained pre-contact policy outperforms both making no initial contact and making random initial contacts. We also test whether our method is more data-efficient than using a contact-encouraging reward on challenging manipulation problems.
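The two-policy decomposition described above can be sketched as a simple phased control loop: the pre-contact policy acts until contact is detected, then control hands off to the post-contact policy. This is a minimal illustrative sketch, not the paper's actual interface; the environment, policies, and the `in_contact` signal are all assumed names for exposition.

```python
class ToyEnv:
    """Hypothetical 1-D stand-in for a manipulation task: the
    end-effector starts at x=0, the object sits at x=5, and the
    goal is to push the object to x=8."""

    def reset(self):
        self.ee, self.obj = 0.0, 5.0
        return (self.ee, self.obj)

    def step(self, action):
        self.ee += action
        in_contact = self.ee >= self.obj
        if in_contact:
            self.obj = self.ee  # object moves with the end-effector
        done = self.obj >= 8.0
        return (self.ee, self.obj), done, in_contact


def run_episode(env, pre_contact_policy, post_contact_policy, max_steps=100):
    """Roll out one episode: the pre-contact policy acts until the
    end-effector touches the object, then the post-contact policy
    takes over and drives the object toward the goal."""
    obs = env.reset()
    phase = "pre"
    for _ in range(max_steps):
        policy = pre_contact_policy if phase == "pre" else post_contact_policy
        action = policy(obs)
        obs, done, in_contact = env.step(action)
        if phase == "pre" and in_contact:
            phase = "post"  # hand off control once contact is made
        if done:
            break
    return phase, obs


# Trivial stand-in policies: both just move in the positive direction.
phase, (ee, obj) = run_episode(ToyEnv(), lambda obs: 1.0, lambda obs: 1.0)
```

In a learned setting, each policy would be trained separately, so that exploration during pre-contact training focuses on reaching the object and post-contact training focuses on manipulating it.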