DC Field | Value | Language |
---|---|---|
dc.contributor.author | Im, Sunghoon | ko |
dc.contributor.author | Jeon, Hae-Gon | ko |
dc.contributor.author | Kweon, In-So | ko |
dc.date.accessioned | 2019-03-19T01:24:43Z | - |
dc.date.available | 2019-03-19T01:24:43Z | - |
dc.date.created | 2018-12-13 | - |
dc.date.issued | 2019-05 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.28, no.5, pp. 2451-2464 | - |
dc.identifier.issn | 1057-7149 | - |
dc.identifier.uri | http://hdl.handle.net/10203/251612 | - |
dc.description.abstract | As the computing power of hand-held devices grows, there has been increasing interest in the capture of depth information, to enable a variety of photographic applications. However, under low-light conditions, most devices still suffer from low imaging quality and inaccurate depth acquisition. To address this problem, we present a robust depth estimation method from a short burst shot with varied intensity (i.e., auto-exposure bracketing) and/or strong noise (i.e., high ISO). Our key idea synergistically combines deep convolutional neural networks with a geometric understanding of the scene. We introduce a geometric transformation between optical flow and depth tailored for burst images, enabling our learning-based multi-view stereo matching to be performed effectively. We then describe our depth estimation pipeline, which incorporates this geometric transformation into our residual-flow network. It allows our framework to produce an accurate depth map even with a bracketed image sequence. We demonstrate that our method outperforms state-of-the-art methods on various datasets captured by a smartphone and a DSLR camera. Moreover, we show that the estimated depth is applicable to image quality enhancement and photographic editing. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Robust Depth Estimation Using Auto-Exposure Bracketing | - |
dc.type | Article | - |
dc.identifier.wosid | 000458850800005 | - |
dc.identifier.scopusid | 2-s2.0-85058881406 | - |
dc.type.rims | ART | - |
dc.citation.volume | 28 | - |
dc.citation.issue | 5 | - |
dc.citation.beginningpage | 2451 | - |
dc.citation.endingpage | 2464 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON IMAGE PROCESSING | - |
dc.identifier.doi | 10.1109/TIP.2018.2886777 | - |
dc.contributor.localauthor | Kweon, In-So | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Depth estimation | - |
dc.subject.keywordAuthor | exposure fusion | - |
dc.subject.keywordAuthor | image denoising | - |
dc.subject.keywordAuthor | 3D reconstruction | - |
dc.subject.keywordAuthor | geometry | - |
dc.subject.keywordAuthor | convolutional neural network | - |
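The abstract above mentions a geometric transformation between optical flow and depth for burst images. The paper's exact formulation is not given in this record; as a rough intuition only, under the simplifying assumption of a small, purely lateral (fronto-parallel) camera translation between burst frames, flow magnitude behaves like stereo disparity, giving Z = f · b / |flow|. The sketch below illustrates that generic relation; the function name and parameters are hypothetical and are not taken from the authors' method.

```python
import numpy as np

def flow_to_depth(flow, baseline, focal_length, eps=1e-6):
    """Approximate per-pixel depth from optical flow, assuming a purely
    lateral camera translation between two burst frames.

    Under that assumption the flow magnitude acts like stereo disparity,
    so depth follows Z = focal_length * baseline / |flow|.

    flow         : (H, W, 2) array of per-pixel flow vectors in pixels
    baseline     : camera translation between frames, in meters (assumed known)
    focal_length : focal length in pixels
    eps          : floor on flow magnitude to avoid division by zero
    """
    mag = np.linalg.norm(flow, axis=-1)          # (H, W) flow magnitudes
    return focal_length * baseline / np.maximum(mag, eps)
```

For example, a 2-pixel flow with a 2 cm baseline and a 500-pixel focal length yields a depth of 500 × 0.02 / 2 = 5 m. Real burst motion is rarely purely lateral, which is one reason the paper pairs the geometric transform with a learned residual-flow network rather than relying on this closed form alone.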
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.