Depth information, used in SLAM and visual odometry, is essential in robotics. It is often obtained from sensors or learned by networks. While learning-based methods have gained popularity, they are mostly limited to RGB images, which fail in visually degraded environments. Thermal cameras have drawn attention as a way to overcome this limitation. Unlike RGB images, thermal images perceive the environment reliably regardless of illumination variance, but they lack contrast and texture. This low contrast prevents an algorithm from effectively learning the underlying scene details. To tackle these challenges, we propose multi-channel remapping for contrast enhancement. Our method allows a learning-based depth prediction model to predict depth accurately even in low-light conditions. We validate its feasibility and show that our multi-channel remapping method outperforms existing methods both visually and quantitatively on the STheReO dataset.
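To illustrate the idea of multi-channel remapping, the sketch below stacks several differently contrast-stretched views of one raw thermal frame into a multi-channel input. The percentile windows and the function name `multichannel_remap` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def multichannel_remap(raw, percentiles=((1, 99), (5, 95), (20, 80))):
    """Remap a single-channel raw thermal image (e.g. 14-bit counts) into
    several 8-bit channels, one per percentile window. Narrow windows
    amplify local contrast; wide windows preserve the global range.
    Illustrative sketch only -- windows are hypothetical choices."""
    raw = raw.astype(np.float32)
    channels = []
    for lo_p, hi_p in percentiles:
        lo, hi = np.percentile(raw, [lo_p, hi_p])
        # Linearly stretch the chosen intensity window to [0, 1], clipping outliers.
        ch = np.clip((raw - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        channels.append((ch * 255).astype(np.uint8))
    return np.stack(channels, axis=-1)  # H x W x C, network-ready
```

A depth network that normally consumes 3-channel RGB can then take this H x W x 3 tensor in place of a single low-contrast grayscale thermal image.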