From an early age, human infants learn and build models of the world remarkably quickly by constantly observing and interacting with the objects around them. One of the most fundamental capacities they acquire is intuitive physics, and the models they develop later serve as prior knowledge for further learning. Inspired by this behavior, we introduce a graphical physics network integrated with reinforcement learning. Using the PyBullet 3D physics engine, we show that our graphical physics network infers objects' positions and velocities effectively, and that our reinforcement learning network encourages the agent to improve its model by continuously interacting with objects using only intrinsic motivation. In addition, we introduce a reward normalization trick that allows our agent to efficiently choose the actions that improve its intuitive physics model the most. We evaluate our model on both stationary and non-stationary state problems, measuring the number of distinct actions the agent performs and the accuracy of the agent's intuition model.
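To make the reward normalization trick more concrete, the sketch below shows one plausible reading (not necessarily the paper's exact method): the intrinsic reward is the physics model's prediction error, normalized by per-action running statistics so that no single action's error scale dominates and the agent keeps choosing the actions that still improve its model. All names here (`IntrinsicRewardNormalizer`, `normalize`, the toy loop) are illustrative assumptions.

```python
import numpy as np


class IntrinsicRewardNormalizer:
    """Hypothetical sketch: track running statistics of the physics
    model's prediction error per action, and normalize each new error
    so every action's intrinsic reward lives on a comparable scale."""

    def __init__(self, n_actions, eps=1e-8):
        self.counts = np.zeros(n_actions)
        self.means = np.zeros(n_actions)
        self.m2 = np.zeros(n_actions)  # running sum of squared deviations
        self.eps = eps

    def normalize(self, action, prediction_error):
        # Welford's online update of the per-action mean and variance.
        self.counts[action] += 1
        delta = prediction_error - self.means[action]
        self.means[action] += delta / self.counts[action]
        self.m2[action] += delta * (prediction_error - self.means[action])
        std = np.sqrt(self.m2[action] / self.counts[action])
        # Errors that are large relative to this action's own history
        # yield the most reward, steering the agent toward actions whose
        # outcomes its physics model still predicts poorly.
        return (prediction_error - self.means[action]) / (std + self.eps)


# Toy usage: reward each action by how surprising its outcome was
# compared with past outcomes of the same action.
normalizer = IntrinsicRewardNormalizer(n_actions=4)
rng = np.random.default_rng(0)
for step in range(10):
    action = rng.integers(4)
    prediction_error = rng.random()  # stand-in for the model's error
    r_intrinsic = normalizer.normalize(action, prediction_error)
```

Under this reading, normalizing per action rather than globally prevents the agent from fixating on one action type whose raw prediction errors happen to be numerically large.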