Stereoscopic three-dimensional (S3D) content has attracted significant interest from industry and the research community because it can provide an enhanced viewing experience. However, alongside this growing interest, concerns have emerged about the safety of stereoscopic imaging. To address viewing-safety issues in S3D content, it is essential to develop a reliable objective visual comfort assessment (VCA) method that predicts the visual discomfort induced by displayed S3D content. In this thesis, we propose a novel VCA method for S3D content based on a deep convolutional neural network (DCNN). To effectively predict visual discomfort in S3D viewing, the proposed network consists of two main parts: 1) a spatial feature encoding part for monocular images, in which multi-level spatial features are encoded from each view to capture various types of spatial characteristics; and 2) a binocular feature encoding part, in which the multi-level spatial features from the left and right views are combined to encode S3D discomfort factors. During training, disparity information is exploited through knowledge transfer and a regularization method to encode binocular features for S3D VCA, since disparity is the main factor driving S3D visual discomfort. Extensive comparative experiments on the IEEE-SA dataset show that the proposed binocular fusion deep network yields excellent prediction performance.
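The two-part design described above can be sketched as follows. This is a minimal, hypothetical illustration only: it uses dense projections as stand-ins for convolutional layers, and the class name `BinocularFusionNet`, the level count, and all dimensions are assumptions for demonstration, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_block(x, w):
    """Toy stand-in for a convolutional layer: dense projection + ReLU.
    (The actual method uses 2-D convolutions; this only shows the data flow.)"""
    return np.maximum(x @ w, 0.0)

class BinocularFusionNet:
    """Hypothetical sketch of the abstract's two-part structure:
    1) a shared monocular encoder extracts multi-level spatial features
       from each view;
    2) level-wise features from the left and right views are fused and
       regressed to a scalar visual-comfort score."""

    def __init__(self, in_dim=64, hidden=32, levels=3):
        # One weight matrix per encoding level (shared across both views).
        self.enc_ws = [
            rng.standard_normal((in_dim if i == 0 else hidden, hidden)) * 0.1
            for i in range(levels)
        ]
        fused_dim = 2 * hidden * levels  # left + right features at every level
        self.head_w = rng.standard_normal((fused_dim, 1)) * 0.1

    def encode(self, view):
        """Monocular part: collect the feature map after every level."""
        feats, x = [], view
        for w in self.enc_ws:
            x = conv_block(x, w)
            feats.append(x)
        return feats

    def forward(self, left, right):
        """Binocular part: fuse multi-level left/right features, then regress."""
        f_left, f_right = self.encode(left), self.encode(right)
        fused = np.concatenate(f_left + f_right, axis=-1)
        return fused @ self.head_w  # predicted visual-comfort score

net = BinocularFusionNet()
left = rng.standard_normal((4, 64))   # batch of 4 flattened left views
right = rng.standard_normal((4, 64))  # matching right views
score = net.forward(left, right)
print(score.shape)  # one comfort score per stereo pair
```

In the thesis's actual training procedure, a disparity-based loss (knowledge transfer and regularization) would additionally constrain the binocular features; that objective is omitted here.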