High-quality visual feedback is important for an immersive medical simulation. Several studies have addressed rendering visual feedback in medical simulation using deep learning and procedure images. These methods, however, focus on static surgical fields with a narrow variety of scenes. An endoscopic procedure involves dynamic movement of the endoscope, which produces a wide variety of scenes. This paper proposes a deep-learning-based rendering method that provides photo-realistic visual feedback for users of an endoscopy simulation. A transformation network based on generative adversarial networks (GAN) is designed and trained to learn a mapping function from simulation depth maps to realistic visual feedback. A mapping from real endoscopy procedure images to depth maps is learned first, and the target depth-to-image mapping is then learned as its inverse. The realism of the visual feedback generated by the proposed method is evaluated with a no-reference image quality assessment (NR-IQA) method.
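The two-stage pipeline described above (learn image-to-depth first, then learn the depth-to-image mapping as its inverse) can be sketched with a deliberately tiny toy example. This is a minimal illustration, not the paper's implementation: the "images" and "depths" are scalars related by an assumed linear rendering rule, both mappings are one-parameter-pair linear models fitted by gradient descent, and the adversarial loss of the actual GAN is replaced here by a plain squared-error reconstruction loss for simplicity.

```python
import random

random.seed(0)

# Toy "scenes": scalar depth values, and an assumed rendering rule
# image = 2*depth + 1. (This rule and all constants are illustrative,
# not taken from the paper.)
depths = [random.uniform(0.0, 1.0) for _ in range(200)]
images = [2.0 * d + 1.0 for d in depths]

def fit_linear(xs, ys, lr=0.1, steps=2000):
    """Fit y ~= a*x + b by gradient descent on mean squared error."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        ga = sum(2.0 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2.0 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Stage 1: learn F : image -> depth from paired real samples.
a1, b1 = fit_linear(images, depths)

# Stage 2: pseudo-label the images with F, then learn the inverse
# G : depth -> image from the (predicted depth, image) pairs.
# In the paper this inverse generator is a GAN; here a squared-error
# fit stands in for the adversarial training.
pseudo_depths = [a1 * i + b1 for i in images]
a2, b2 = fit_linear(pseudo_depths, images)

print(round(a2, 2), round(b2, 2))  # G should recover image ~= 2*depth + 1
```

The key point the sketch preserves is the data flow: stage 1 turns unpaired real images into (depth, image) training pairs, so that stage 2 can be trained as a supervised (here, least-squares; in the paper, adversarial) inverse mapping.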