This paper presents a framework that combines nonlinear model predictive control (NMPC) and reinforcement learning (RL) for locomotion of a legged robot. A neural network trained with RL serves as a footstep planner, deciding where the robot places its feet on the ground. Given the footstep constraints and the model dynamics, the ground reaction forces exerted on each leg are computed by NMPC and applied to the robot. This framework improves sample efficiency compared to end-to-end RL and outperforms a baseline NMPC controller that selects its footsteps heuristically. The proposed framework is validated in a simulation environment on challenging tasks such as push recovery and rough-terrain walking.
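
The hierarchical structure described above can be sketched as a two-layer control loop. This is a minimal illustrative sketch, not the paper's implementation: the policy network and the NMPC solve are replaced by hypothetical stubs (`rl_footstep_planner`, `nmpc_ground_reaction_forces`), and all names, dimensions, and the gravity-compensation heuristic are assumptions for illustration only.

```python
# Hypothetical sketch of the RL-footstep-planner + NMPC-force-solver loop.
# Both layers are stubs: a real system would run a trained policy network
# and solve a constrained optimal control problem at each step.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class RobotState:
    com_position: Tuple[float, float, float]   # center-of-mass position (x, y, z)
    com_velocity: Tuple[float, float, float]   # center-of-mass velocity


def rl_footstep_planner(state: RobotState, n_legs: int) -> List[Tuple[float, float]]:
    """Stand-in for the trained RL policy: maps the robot state to target
    (x, y) footstep locations, one per leg. Here, a simple heuristic steps
    slightly ahead of the CoM in the direction of travel."""
    x, y, _ = state.com_position
    vx, vy, _ = state.com_velocity
    return [(x + 0.1 * vx + 0.2 * (i % 2), y + 0.1 * vy) for i in range(n_legs)]


def nmpc_ground_reaction_forces(
    footsteps: List[Tuple[float, float]], mass: float = 30.0, g: float = 9.81
) -> List[Tuple[float, float, float]]:
    """Stand-in for the NMPC layer: given footstep constraints, return one
    3D ground-reaction-force vector per stance leg. A real NMPC would solve
    an optimal control problem over a horizon; here we merely distribute
    gravity compensation evenly across the legs."""
    fz = mass * g / len(footsteps)
    return [(0.0, 0.0, fz) for _ in footsteps]


def control_step(state: RobotState, n_legs: int = 4):
    """One tick of the hierarchy: plan footsteps, then compute forces."""
    footsteps = rl_footstep_planner(state, n_legs)
    forces = nmpc_ground_reaction_forces(footsteps)
    return footsteps, forces


if __name__ == "__main__":
    state = RobotState((0.0, 0.0, 0.5), (0.3, 0.0, 0.0))
    steps, forces = control_step(state)
    print(f"{len(steps)} footsteps, vertical force per leg: {forces[0][2]:.2f} N")
```

The key design point the paper exploits is this separation of concerns: the learned planner only outputs low-dimensional footstep targets, while the model-based NMPC handles the high-rate force computation, which is what makes learning more sample-efficient than end-to-end RL over raw torques.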