Deep Video Inpainting Guided by Audio-Visual Self-Supervision

Humans can easily imagine a scene from auditory information based on their prior knowledge of audio-visual events. In this paper, we mimic this innate human ability in deep learning models to improve the quality of video inpainting. To implement the prior knowledge, we first train an audio-visual network that learns the correspondence between auditory and visual information. Then, the audio-visual network is employed as a guider that conveys the prior knowledge of audio-visual correspondence to the video inpainting network. This prior knowledge is transferred through our two proposed losses: an audio-visual attention loss and an audio-visual pseudo-class consistency loss. These two losses further improve video inpainting performance by encouraging the inpainted result to correspond closely to its synchronized audio. Experimental results demonstrate that our proposed method can restore a wider range of video scenes and is particularly effective when the sounding object in the scene is partially occluded.
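
The abstract names the two guidance losses but does not give their formulations here. Below is a minimal PyTorch-style sketch of how such guidance terms could be wired into training; the interface of the frozen audio-visual network (`av_net`), the attention-map and pseudo-class outputs, and both loss definitions are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def audio_visual_guidance_losses(av_net, inpainted_frames, gt_frames, audio):
    """Hypothetical guidance terms from a frozen audio-visual network.

    av_net is assumed to return, for a (video, audio) pair:
      - an audio-visual attention map over spatial locations
      - pseudo-class logits describing the audio-visual event
    Both loss definitions below are assumptions for illustration only.
    """
    # Reference signals come from the ground-truth frames and are not
    # back-propagated through.
    with torch.no_grad():
        attn_gt, logits_gt = av_net(gt_frames, audio)
    attn_pred, logits_pred = av_net(inpainted_frames, audio)

    # Audio-visual attention loss: pull the attention map of the inpainted
    # result toward the map produced for the ground-truth frames.
    loss_attn = F.l1_loss(attn_pred, attn_gt)

    # Audio-visual pseudo-class consistency loss: keep the pseudo-class
    # prediction for the inpainted result consistent with the ground truth's.
    loss_cls = F.kl_div(
        F.log_softmax(logits_pred, dim=-1),
        F.softmax(logits_gt, dim=-1),
        reduction="batchmean",
    )
    return loss_attn, loss_cls
```

In a setup like this, the two terms would presumably be added, with weighting coefficients, to the inpainting network's usual reconstruction and adversarial losses.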
Publisher
IEEE Signal Processing Society
Issue Date
2022-05-09
Language
English
Citation

47th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2022), pp. 1970-1974

ISSN
1520-6149
DOI
10.1109/ICASSP43922.2022.9747073
URI
http://hdl.handle.net/10203/298078
Appears in Collection
CS-Conference Papers (학술회의논문)