Physically-based neural rendering of multiplane images

DC Field: Value
dc.contributor.advisor: 김민혁
dc.contributor.author: Cho, Jaemin
dc.contributor.author: 조재민
dc.date.accessioned: 2024-07-25T19:31:24Z
dc.date.available: 2024-07-25T19:31:24Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045953&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320721
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST), School of Computing, 2023.8, [iv, 26 p.]
dc.description.abstract: In this work, we explore an alternative method for view synthesis that draws on neural rendering and inverse rendering techniques applied to multiple images. Recent neural rendering approaches use inverse rendering to estimate parameters for physically-based rendering. However, because they rely on volume rendering, which accumulates color and density values from many samples along each ray to determine a pixel's radiance, they are not yet well suited to real-time rendering applications. Multiplane image (MPI) rendering, in contrast, stores color and density in a fixed set of depth layers, offering a more efficient route to real-time view synthesis. Nonetheless, because MPI data are stored in the normalized device coordinate system, conventional inverse rendering methods cannot be applied to MPIs directly, and inverse rendering for physically-based rendering is rarely compatible with MPIs. To address these limitations, we propose an inverse-rendering method that learns scene material information and the lighting environment, enabling high-quality novel view synthesis, physically-based rendering, and scene editing. Our method represents geometry as an MPI and learns material data for each scene point. Furthermore, we split the lighting environment into far-bound and near-bound regions to account for the global and local illumination of real scenes. Our rendering pipeline adopts the spherical Gaussian approximation for reflectance and illumination, which suits real-time rendering and integrates well with the traditional MPI architecture. Our results illustrate the utility of the proposed physically-based neural rendering approach in scene editing tasks such as relighting and seamless changes to material appearance. [See the illustrative MPI compositing sketch after the metadata fields below.]
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: 뉴럴 렌더링; 물리 기반 렌더링; 인버스 렌더링; 다중 평면 이미지
dc.subject: Neural rendering; Physically-based rendering; Inverse rendering; Multiplane image
dc.title: Physically-based neural rendering of multiplane images
dc.title.alternative: 다중 평면 이미지를 이용한 물리 기반 뉴럴 렌더링
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), School of Computing
dc.contributor.alternativeauthor: Kim, Min H.
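
The abstract above contrasts sample-based volume rendering with MPI rendering, where each pixel is obtained by blending a fixed stack of color/alpha planes. As a point of reference only, the following is a minimal Python/NumPy sketch of the standard back-to-front "over" compositing used to render a multiplane image; it is not code from the thesis, and the function name, array shapes, and front-to-back layer ordering are assumptions made for illustration.

    import numpy as np

    def composite_mpi(colors, alphas):
        # colors: (D, H, W, 3) per-layer RGB, ordered front (index 0) to back (index D-1).
        # alphas: (D, H, W, 1) per-layer opacity in [0, 1].
        # Returns the composited (H, W, 3) image.
        out = np.zeros(colors.shape[1:], dtype=np.float32)
        # Walk from the farthest plane toward the camera, blending each layer
        # over the running result with the standard "over" operator.
        for c, a in zip(colors[::-1], alphas[::-1]):
            out = c * a + out * (1.0 - a)
        return out

    # Tiny usage example with random layers (8 depth planes, 4x4 pixels).
    rng = np.random.default_rng(0)
    colors = rng.random((8, 4, 4, 3), dtype=np.float32)
    alphas = rng.random((8, 4, 4, 1), dtype=np.float32)
    print(composite_mpi(colors, alphas).shape)  # (4, 4, 3)

Because the per-pixel cost is fixed by the plane count D rather than by per-ray sampling, this compositing step is what makes MPI rendering attractive for the real-time setting discussed in the abstract.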
Appears in Collection
CS-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
