This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor
fusion disparity map. We project LRF (Laser Range Finder) 3D points onto image pixel
coordinates using the camera-LRF extrinsic calibration matrices (Φ, Δ) and the camera calibration matrix
(K). An LRF disparity map is then generated by interpolating the projected LRF points. In the stereo
reconstruction, we compensate for invalid points caused by repeated patterns and textureless regions using
the LRF disparity map. The disparity map produced by this compensation process is the multi-sensor fusion
disparity map, which we use to refine the multi-sensor 3D reconstruction based on stereo vision and the
LRF. The refinement algorithm for multi-sensor-based 3D reconstruction is specified in four subsections
covering virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity
map generation, and the 3D reconstruction process. The algorithm has been tested on synchronized stereo
image pairs and LRF 3D scan data.
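As a rough sketch (not the paper's implementation), the projection and fusion steps described above can be illustrated as follows. This assumes a pinhole camera model and a rectified stereo pair; the rotation `R` and translation `t` stand in for the extrinsic calibration (Φ, Δ), and the focal length and baseline values are hypothetical:

```python
import numpy as np

def project_lrf_points(points_lrf, R, t, K):
    """Project LRF 3D points (N, 3) into pixel coordinates.

    R, t play the role of the camera-LRF extrinsics (Phi, Delta);
    K is the 3x3 camera calibration matrix.
    """
    pts_cam = points_lrf @ R.T + t       # LRF frame -> camera frame
    z = pts_cam[:, 2]
    valid = z > 0                        # keep only points in front of the camera
    uvw = pts_cam[valid] @ K.T           # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]        # divide by depth to get pixel coords
    return uv, z[valid]

def depth_to_disparity(z, focal, baseline):
    """Disparity d = f * B / Z for a rectified stereo pair."""
    return focal * baseline / z

def fuse_disparity(stereo_disp, lrf_disp, invalid_mask):
    """Replace invalid stereo disparities (e.g. from textureless or
    repeated-pattern regions) with interpolated LRF disparities."""
    fused = stereo_disp.copy()
    fused[invalid_mask] = lrf_disp[invalid_mask]
    return fused
```

The interpolation that densifies the sparse projected LRF points into a full LRF disparity map is omitted here; only the per-point projection and the mask-based compensation step are shown.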