Panoramic content is a special type of 2D video that depicts a wide-angle view of a scene. The rich information captured from the surrounding environment provides a viewer with a strong sense of presence. As computing power and digital imaging technology have advanced, the panoramic presentation has become an engaging movie format for emerging immersive theatres and an increasingly popular means of sharing our experiences through social network services. Unlike a typical rectangular video, for which standardized forms of playback and screening exist, the quality of panoramic content varies depending on the display environment and resolution.
This dissertation proposes two systems, ScreenX and Rich360, for optimized image-space representations of panoramic content. For a multi-viewer panoramic viewing environment, ScreenX is a novel movie viewing platform that uses the left and right side walls of a theatre as additional screens through a multi-projection technique. Ordinary movie theatres can thus be converted into public immersive theatres without modification of the existing theatre structure. This surrounding display environment delivers a strong sense of immersion in general movie viewing. In addition, we present a novel image representation model that minimizes the seat-dependent perspective distortion of the content displayed on the side walls. This leads to uniform movie viewing experiences regardless of seating location and across the diverse configurations of movie theatres. ScreenX has been successfully deployed in the movie industry, and various results and user studies demonstrate that it improves the movie viewing experience.

For a single-viewer panoramic viewing environment, Rich360 is a novel approach to creating and viewing a $360^\circ$ panoramic video. The proposed deformable spherical projection surface efficiently stitches videos obtained from multiple cameras placed on a structured rig, with minimal parallax artifacts. To resolve the loss of richness caused by downsampling, which occurs in most $360^\circ$ videos, Rich360 performs non-uniform spherical ray sampling in the rendering step. This novel rendering process preserves the richness of the input videos by assigning more rays to important regions (e.g., salient areas and human faces) and fewer rays to less important homogeneous regions. Various results from Rich360 demonstrate the richness of the output video and the improvement in stitching quality.
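The idea behind non-uniform ray sampling can be illustrated with a minimal sketch: given a saliency profile over one angular dimension, sampling positions are drawn through the inverse CDF of the saliency, so that salient regions receive a denser allocation of rays. The function name, the 1-D simplification, and the toy saliency values below are illustrative assumptions, not the Rich360 implementation.

```python
import numpy as np

def nonuniform_ray_samples(saliency, n_rays):
    """Map n_rays uniformly spaced samples through the inverse CDF of a
    1-D saliency profile, so salient regions receive denser sampling.
    Hypothetical sketch of saliency-weighted sampling, not Rich360's code.
    Returns sample positions in [0, 1)."""
    w = saliency.astype(float) + 1e-6        # avoid zero density
    cdf = np.cumsum(w) / np.sum(w)           # normalized cumulative saliency
    u = (np.arange(n_rays) + 0.5) / n_rays   # uniform samples in (0, 1)
    idx = np.searchsorted(cdf, u)            # invert the CDF
    return idx / len(saliency)

# Toy profile: a salient band occupying the middle 20% of the domain.
saliency = np.array([0.1] * 8 + [1.0] * 4 + [0.1] * 8)
pos = nonuniform_ray_samples(saliency, 100)

# Under uniform sampling the band [0.4, 0.6) would get ~20 of 100 rays;
# saliency weighting concentrates the majority of the rays there.
in_band = int(np.sum((pos >= 0.4) & (pos < 0.6)))
```

A full renderer would apply the same weighting in two dimensions over the sphere and then resample the stitched video along the resulting ray directions, spending the output resolution where it matters most.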
In this dissertation, novel methods based on an image-space optimization scheme resolve the issues in presenting panoramic content in both multi-viewer and single-viewer environments. The core techniques of ScreenX and Rich360 greatly enhance the visual immersion of panoramic content by minimizing stitching artifacts and the perspective distortions arising from different viewpoints, and by fully exploiting the resolution of the source videos. We strongly believe that they will make a significant contribution to the virtual reality and immersive entertainment industries, where panoramic content is employed.