This paper proposes an algorithm for reactive eye movement and corresponding facial expressions that makes a robot more lifelike, thereby improving human-robot interaction (HRI). A difference-image process, an afterimage process, a concentration process, and an eyelid-movement process are proposed to determine the amount of reactive eye movement from single-camera input images. A simple emotion-generation process is then modeled using a potential-energy concept, and the resulting expressions are generated on a face simulator, FRESi. The results show that eye movement and facial expressions complement each other in achieving lifelikeness for HRI.