Stereoscopic virtual view generation has become an active research topic with the emergence of commercial 3D displays. Given an image sequence captured by a moving camera, the camera parameters and 3D structure of the scene can be recovered using a structure-from-motion algorithm. The goal of this work is to create a realistic stereoscopic image sequence along a novel camera path using the recovered depth information. Previous work typically concentrates on view interpolation, which yields good stereoscopic views of the input sequence but only under limited camera movement. Our approach takes full advantage of the recovered 3D points to synthesize a novel view that correctly reflects scene depth. Segment-based dense depth maps, guided by the projected 3D points, are iteratively refined to provide spatially consistent depths between the two images for each eye. Empty regions in the novel view, corresponding to areas occluded in the input images, are unavoidable and are filled using texture synthesis. Experimental results show that the proposed method yields more realistic views than previous approaches.
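The central geometric step above, reprojecting the recovered 3D points into a virtual camera to obtain a sparse depth map for the novel view, can be sketched as follows. This is a minimal z-buffered splat in NumPy under a standard pinhole camera model; the function names and the intrinsics/pose parameters (K, R, t) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project world-space 3D points into a camera with intrinsics K and pose (R, t).

    points_3d: (N, 3) array of recovered scene points (e.g. from structure from motion).
    Returns pixel coordinates (N, 2) and camera-space depths (N,).
    """
    cam = (R @ points_3d.T + t.reshape(3, 1)).T  # world -> camera coordinates
    z = cam[:, 2]                                # depth along the optical axis
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective divide
    return uv, z

def splat_depth(points_3d, K, R, t, h, w):
    """Splat 3D points into an (h, w) depth map for the novel view.

    A z-buffer keeps the nearest surface at each pixel; pixels no point
    reaches stay at infinity (the 'empty regions' to be filled later,
    e.g. by texture synthesis as in the abstract).
    """
    uv, z = project_points(points_3d, K, R, t)
    depth = np.full((h, w), np.inf)
    for (u, v), d in zip(uv, z):
        x, y = int(round(u)), int(round(v))
        if 0 <= x < w and 0 <= y < h and 0 < d < depth[y, x]:
            depth[y, x] = d  # keep the nearest (smallest) depth
    return depth
```

The resulting sparse map would then guide the segment-based dense depth refinement; a real pipeline would splat from both eye views to enforce spatially consistent depths between them.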