View-consistent 4D Light Field Depth Estimation

We propose a method to compute depth maps for every sub-aperture image in a light field in a view-consistent way. Previous light field depth estimation methods typically estimate a depth map only for the central sub-aperture view and struggle with view-consistent estimation. Our method precisely defines depth edges via EPIs, then diffuses these edges spatially within the central view. These depth estimates are then propagated to all other views in an occlusion-aware way. Finally, disoccluded regions are completed by diffusion in EPI space. Our method runs efficiently compared with both classical and deep-learning-based approaches, and achieves competitive quantitative metrics and qualitative performance on both synthetic and real-world light fields.
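
For illustration only, the sketch below shows how the first two steps of such a pipeline (estimating disparity at EPI depth edges and diffusing those sparse estimates across the central view) might be written in NumPy. The gradient-based EPI slope estimate, the Jacobi-style diffusion, and all array shapes are assumptions made for this sketch, not the authors' implementation.

    import numpy as np

    def epi_edge_disparity(epi, grad_thresh=0.1):
        """Per-pixel disparity on one horizontal EPI (shape: views x width).
        Along an EPI line of constant intensity, dI/dv + d * dI/dx = 0, so
        d = -(dI/dv) / (dI/dx); only spatially strong-gradient (edge) pixels are kept."""
        gv, gx = np.gradient(epi.astype(np.float64))     # derivative along view axis, spatial axis
        disparity = -gv / (gx + np.copysign(1e-8, gx))   # guard against division by zero
        confidence = np.abs(gx) > grad_thresh            # keep only edge pixels
        return disparity, confidence

    def diffuse_sparse_depth(sparse, mask, iters=500):
        """Fill a dense depth map by iterative (Jacobi-style) diffusion,
        keeping the sparse edge estimates fixed at every iteration."""
        depth = np.where(mask, sparse, 0.0)
        for _ in range(iters):
            avg = 0.25 * (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
                          np.roll(depth, 1, 1) + np.roll(depth, -1, 1))
            depth = np.where(mask, sparse, avg)          # re-impose known samples
        return depth

    # Toy usage on a random light field L[v, u, y, x] (hypothetical 9x9 angular, 64x64 spatial)
    L = np.random.rand(9, 9, 64, 64)
    v_c, u_c = 4, 4                                      # central view indices
    sparse = np.zeros((64, 64))
    mask = np.zeros((64, 64), bool)
    for y in range(64):                                  # one horizontal EPI per image row
        epi = L[v_c, :, y, :]                            # shape: (angular_u, width)
        d, c = epi_edge_disparity(epi)
        sparse[y], mask[y] = d[u_c], c[u_c]              # take the central-view row
    central_depth = diffuse_sparse_depth(sparse, mask)

The occlusion-aware propagation to the other views and the EPI-space completion of disoccluded regions are not shown; they would reuse the same diffusion idea with visibility masks derived from the estimated disparities.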
Publisher
British Machine Vision Association (BMVA)
Issue Date
2020-09-10
Language
English
Citation

British Machine Vision Conference (BMVC 2020)

URI
http://hdl.handle.net/10203/277138
Appears in Collection
CS-Conference Papers(학술회의논문)