A Novel Depth-Based Virtual View Synthesis Method for Free Viewpoint Video

Cited 64 times in Web of Science · Cited 64 times in Scopus
Free-viewpoint rendering (FVR) has become a popular topic in 3-D research. A promising approach in FVR is to generate virtual views from a single texture image and its corresponding depth image. A critical problem when generating virtual views is that regions covered by foreground objects in the original view may become disoccluded in the synthesized views. In this paper, a depth-based disocclusion filling algorithm using patch-based texture synthesis is proposed. In contrast to existing patch-based virtual view synthesis methods, the filling priority is driven by a robust structure tensor, which efficiently captures the overall structure of an image region, and by a new confidence term that produces fine synthesis results even near foreground boundaries. Moreover, the best-matched patch is searched for in the background regions and selected through a new patch distance measure. Experimental comparisons show that the proposed method significantly outperforms state-of-the-art methods.
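The paper's exact priority and distance formulations are given in the full text; the minimal Python sketch below only illustrates the general idea of a priority-driven, exemplar-based filling order in which a structure-tensor coherence term and a patch-averaged confidence term jointly decide which hole-boundary patch is synthesized first. All function names, parameters, and formulas here are illustrative assumptions (a Criminisi-style priority with a coherence data term), not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_coherence(gray, sigma=1.0):
    """Per-pixel coherence of the 2x2 structure tensor (near 1 = strong oriented structure)."""
    gx = ndimage.sobel(gray, axis=1, mode="nearest")
    gy = ndimage.sobel(gray, axis=0, mode="nearest")
    # Gaussian-smoothed tensor components J = [[Jxx, Jxy], [Jxy, Jyy]]
    Jxx = ndimage.gaussian_filter(gx * gx, sigma)
    Jxy = ndimage.gaussian_filter(gx * gy, sigma)
    Jyy = ndimage.gaussian_filter(gy * gy, sigma)
    # Closed-form eigenvalues of the symmetric 2x2 tensor
    trace = Jxx + Jyy
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1, lam2 = 0.5 * (trace + diff), 0.5 * (trace - diff)
    return ((lam1 - lam2) / (lam1 + lam2 + 1e-8)) ** 2

def filling_priority(confidence, coherence, hole_mask, patch_size=9):
    """Priority on the hole boundary: patch-averaged confidence times structure term."""
    conf_term = ndimage.uniform_filter(confidence, size=patch_size)
    # Hole boundary = hole pixels adjacent to at least one known pixel
    boundary = hole_mask & ndimage.binary_dilation(~hole_mask)
    return np.where(boundary, conf_term * coherence, -np.inf)

# Illustrative use: gray image in [0, 1], hole_mask True inside the disocclusion
gray = np.random.rand(120, 160)
hole_mask = np.zeros_like(gray, dtype=bool)
hole_mask[40:80, 60:100] = True
confidence = (~hole_mask).astype(float)            # known pixels start with confidence 1
coherence = structure_tensor_coherence(gray)
priority = filling_priority(confidence, coherence, hole_mask)
target = np.unravel_index(np.argmax(priority), priority.shape)  # patch centre to fill first
```

In a full pipeline, the selected target patch would then be matched against candidate patches restricted to background (far-depth) regions and copied in, after which the confidence map and hole mask are updated and the priority is recomputed.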
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2013-12
Language
English
Article Type
Article
Keywords

TEXTURE SYNTHESIS; IMAGE QUALITY

Citation

IEEE TRANSACTIONS ON BROADCASTING, v.59, no.4, pp.614 - 626

ISSN
0018-9316
DOI
10.1109/TBC.2013.2281658
URI
http://hdl.handle.net/10203/188720
Appears in Collection
EE-Journal Papers (Journal Papers)