Video Repairing: Inference of Foreground and Background under Severe Occlusion

We propose a new method, video repairing, to robustly infer missing static background and moving foreground in a video that has been severely damaged or occluded. To recover background pixels, we extend the image repairing method, using layer segmentation and homography blending to preserve temporal coherence and avoid flickering. By exploiting the constraints imposed by periodic motion and a subclass of camera and object motions, we adopt a two-phase approach to repair moving foreground pixels. In the sampling phase, motion data are sampled and regularized by 3D tensor voting to maintain temporal coherence and motion periodicity. In the alignment phase, missing moving foreground pixels are inferred by spatial and temporal alignment of the sampled motion data at multiple scales. We evaluated our system on several difficult examples in which the camera is either stationary or in motion.
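The sketch below illustrates the basic idea behind the background-repair step: transferring pixels from another frame into the occluded region after aligning the two frames with a planar homography. It is a minimal illustration using OpenCV (ORB features and RANSAC homography estimation), not the authors' layer-segmentation and homography-blending implementation; the function name and parameters are hypothetical.

import cv2
import numpy as np

def fill_background_from_frame(damaged, mask, source):
    # Illustrative sketch only: fill the masked region of `damaged` with
    # pixels from `source`, aligned via a single planar homography.
    g1 = cv2.cvtColor(damaged, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    dst = np.float32([k1[m.queryIdx].pt for m in matches])
    src = np.float32([k2[m.trainIdx].pt for m in matches])
    # Robustly estimate the homography mapping source -> damaged.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = damaged.shape[:2]
    warped = cv2.warpPerspective(source, H, (w, h))
    out = damaged.copy()
    out[mask > 0] = warped[mask > 0]   # copy warped pixels into the hole
    return out

The full method goes further, segmenting the scene into layers and blending several homography-warped frames to keep the filled background temporally coherent across the sequence.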
Publisher
IEEE Computer Society and the Computer Vision Foundation (CVF)
Issue Date
2004-07
Language
English
Citation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 364-371
ISSN
1063-6919
URI
http://hdl.handle.net/10203/152027
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
