Data-Driven Depth Map Refinement via Multi-scale Sparse Representation

Cited 81 times in Web of Science; cited 0 times in Scopus
Abstract
Depth maps captured by consumer-level depth cameras such as Kinect are usually degraded by noise, missing values, and quantization. In this paper, we present a data-driven approach for refining degraded RAW depth maps that are coupled with an RGB image. The key idea of our approach is to take advantage of a training set of high-quality depth data and transfer its information to the RAW depth map through multi-scale dictionary learning. Utilizing a sparse representation, our method learns a dictionary of geometric primitives which captures the correlation between high-quality mesh data, RAW depth maps, and RGB images. The dictionary is learned and applied in a manner that accounts for various practical issues that arise in dictionary-based depth refinement. Compared to previous approaches that only utilize the correlation between RAW depth maps and RGB images, our method produces improved depth maps without over-smoothing. Since our approach is data driven, the refinement can be targeted to a specific class of objects by employing a corresponding training set. In our experiments, we show that this leads to additional improvements in recovering depth maps of human faces.
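The dictionary-based refinement described in the abstract can be illustrated with a small, single-scale sparse-coding sketch. This is not the authors' multi-scale implementation (which learns from high-quality mesh data and addresses the practical issues mentioned above); it only shows the general idea of learning a joint dictionary over paired depth/intensity patches and then sparse-coding a degraded RAW depth map against it. The arrays `clean_depth`, `gray`, and `raw_depth`, and all parameter values, are hypothetical placeholders.

```python
# Single-scale, simplified sketch of coupled dictionary-based depth refinement.
# NOT the paper's method: the paper learns multi-scale dictionaries from
# high-quality mesh data; here synthetic arrays stand in for real training data.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.decomposition import DictionaryLearning

patch = 8
rng = np.random.default_rng(0)

# Hypothetical training data: a clean depth map and an aligned grayscale image.
clean_depth = rng.random((64, 64))
gray = rng.random((64, 64))

# Joint patches: depth and intensity stacked so the dictionary can capture
# their correlation (the core idea of the data-driven approach).
d_patches = extract_patches_2d(clean_depth, (patch, patch)).reshape(-1, patch * patch)
g_patches = extract_patches_2d(gray, (patch, patch)).reshape(-1, patch * patch)
X = np.hstack([d_patches, g_patches])
X -= X.mean(axis=1, keepdims=True)          # remove per-patch offset

# Learn a dictionary of joint "geometric primitives" with sparse codes (OMP).
dico = DictionaryLearning(n_components=128, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=10, random_state=0)
dico.fit(X[:2000])

# Refinement of a degraded RAW depth map coupled with the same image.
raw_depth = clean_depth + 0.05 * rng.standard_normal(clean_depth.shape)
r_patches = extract_patches_2d(raw_depth, (patch, patch)).reshape(-1, patch * patch)
Y = np.hstack([r_patches, g_patches])
means = Y.mean(axis=1, keepdims=True)
codes = dico.transform(Y - means)           # sparse codes on the joint dictionary
recon = codes @ dico.components_ + means    # reconstruct joint patches
refined_patches = recon[:, :patch * patch].reshape(-1, patch, patch)
refined = reconstruct_from_patches_2d(refined_patches, clean_depth.shape)
```

In this sketch, targeting a specific object class (e.g., human faces, as in the paper's experiments) would simply mean replacing the synthetic training pair with clean depth/intensity data of that class before fitting the dictionary.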
Publisher
IEEE Computer Society and the Computer Vision Foundation (CVF)
Issue Date
2015-06-08
Language
English
Citation

CVPR 2015: IEEE Conference on Computer Vision and Pattern Recognition

URI
http://hdl.handle.net/10203/199666
Appears in Collection
EE-Conference Papers (Conference Papers)