Robust Depth Estimation using Auto-Exposure Bracketing

Cited 4 times in Web of Science · Cited 4 times in Scopus
  • Hits: 773
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Im, Sunghoon | ko
dc.contributor.author | Jeon, Hae-Gon | ko
dc.contributor.author | Kweon, In-So | ko
dc.date.accessioned | 2019-03-19T01:24:43Z | -
dc.date.available | 2019-03-19T01:24:43Z | -
dc.date.created | 2018-12-13 | -
dc.date.issued | 2019-05 | -
dc.identifier.citation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.28, no.5, pp.2451 - 2464 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10203/251612 | -
dc.description.abstract | As the computing power of hand-held devices grows, there has been increasing interest in capturing depth information to enable a variety of photographic applications. However, under low-light conditions, most devices still suffer from low imaging quality and inaccurate depth acquisition. To address this problem, we present a robust depth estimation method from a short burst shot with varied intensity (i.e., auto-exposure bracketing) and/or strong noise (i.e., high ISO). Our key idea is to synergistically combine deep convolutional neural networks with a geometric understanding of the scene. We introduce a geometric transformation between optical flow and depth tailored for burst images, enabling our learning-based multi-view stereo matching to be performed effectively. We then describe our depth estimation pipeline, which incorporates this geometric transformation into our residual-flow network and allows our framework to produce an accurate depth map even from a bracketed image sequence. We demonstrate that our method outperforms state-of-the-art methods on various datasets captured by a smartphone and a DSLR camera. Moreover, we show that the estimated depth is applicable to image quality enhancement and photographic editing. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Robust Depth Estimation using Auto-Exposure Bracketing | -
dc.type | Article | -
dc.identifier.wosid | 000458850800005 | -
dc.identifier.scopusid | 2-s2.0-85058881406 | -
dc.type.rims | ART | -
dc.citation.volume | 28 | -
dc.citation.issue | 5 | -
dc.citation.beginningpage | 2451 | -
dc.citation.endingpage | 2464 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON IMAGE PROCESSING | -
dc.identifier.doi | 10.1109/TIP.2018.2886777 | -
dc.contributor.localauthor | Kweon, In-So | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Depth estimation | -
dc.subject.keywordAuthor | exposure fusion | -
dc.subject.keywordAuthor | image denoising | -
dc.subject.keywordAuthor | 3D reconstruction | -
dc.subject.keywordAuthor | geometry | -
dc.subject.keywordAuthor | convolutional neural network | -
Appears in Collection
EE - Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
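
No reference code is attached to this record. As a rough, purely illustrative sketch of the flow-to-depth relationship the abstract alludes to (not the authors' released code or their residual-flow network), the snippet below triangulates a per-pixel depth map from a dense optical-flow field and a known relative camera pose using standard two-view geometry. The function name flow_to_depth, the shared intrinsics K, and the pose (R, t) are assumptions made for this example.

# Illustrative sketch only (not the authors' method): depth from optical flow
# via standard two-view triangulation, assuming known intrinsics K and a known
# relative pose (R, t) between two frames of the burst.
import numpy as np
import cv2

def flow_to_depth(flow, K, R, t):
    """Triangulate a depth map from dense optical flow and a known relative pose.

    flow : (H, W, 2) optical flow from frame 0 to frame 1, in pixels
    K    : (3, 3) camera intrinsics (assumed shared by both frames)
    R, t : rotation (3, 3) and translation (3,) of frame 1 w.r.t. frame 0
    """
    H, W = flow.shape[:2]

    # Pixel grid in frame 0 and its flow-displaced correspondences in frame 1.
    xs, ys = np.meshgrid(np.arange(W, dtype=np.float64),
                         np.arange(H, dtype=np.float64))
    pts0 = np.stack([xs.ravel(), ys.ravel()])            # 2 x N
    pts1 = pts0 + flow.reshape(-1, 2).T                  # 2 x N

    # Projection matrices: frame 0 at the origin, frame 1 at (R, t).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, np.asarray(t, dtype=np.float64).reshape(3, 1)])

    # Linear (DLT) triangulation of every pixel correspondence.
    X = cv2.triangulatePoints(P0, P1, pts0, pts1)        # 4 x N, homogeneous
    X /= X[3]                                            # normalise
    return X[2].reshape(H, W)                            # Z in frame 0's camera frame

In the paper's actual pipeline this geometric relationship is folded into a learned residual-flow network rather than triangulated directly, so the sketch above only mirrors the underlying two-view geometry, not the method itself.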
