Learning to Localize Sound Source in Visual Scenes

Cited 152 times in Web of Science; cited 0 times in Scopus
Visual events in our daily lives are usually accompanied by sounds. We pose the question: can a machine learn the correspondence between a visual scene and its sound, and localize the sound source only by observing sound and visual scene pairs, as humans do? In this paper, we propose a novel unsupervised algorithm for localizing the sound source in visual scenes. We develop a two-stream network, one stream per modality, with an attention mechanism for sound source localization. Moreover, although our network is formulated within the unsupervised learning framework, a simple modification extends it to a unified architecture that also covers the supervised and semi-supervised settings. We also develop a new sound source dataset for performance evaluation. Our empirical evaluation shows that the unsupervised method can arrive at false conclusions in some cases, and that even a small amount of supervision, i.e., a semi-supervised setup, corrects these false conclusions effectively.
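The attention mechanism described in the abstract can be sketched as follows. This is a minimal, hypothetical numpy example, not the paper's actual network: it scores each location of a visual feature grid against a single audio embedding by cosine similarity, then softmax-normalizes the scores into a localization map. All array shapes, names, and the choice of cosine similarity are illustrative assumptions.

```python
import numpy as np

def attention_map(visual_feats, audio_embed):
    """visual_feats: (H, W, D) spatial feature grid; audio_embed: (D,) vector.
    Returns an (H, W) attention map whose entries sum to 1."""
    H, W, D = visual_feats.shape
    v = visual_feats.reshape(-1, D)
    # Cosine similarity between the audio embedding and each spatial location.
    v_norm = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-8)
    a_norm = audio_embed / (np.linalg.norm(audio_embed) + 1e-8)
    sim = v_norm @ a_norm                       # (H*W,) similarity scores
    # Softmax turns the scores into a probability map over locations.
    e = np.exp(sim - sim.max())
    att = e / e.sum()
    return att.reshape(H, W)

# Toy check: plant the "sounding" feature at grid cell (2, 5); the
# attention map should peak there, since its cosine similarity is 1.
rng = np.random.default_rng(0)
feats = rng.standard_normal((7, 7, 16))
audio = feats[2, 5]
att = attention_map(feats, audio)
print(att.shape, np.unravel_index(att.argmax(), att.shape))
```

In the paper's setting the audio embedding and visual features come from learned network streams; this sketch only isolates the attention step, where the similarity map doubles as the predicted sound source location.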
Publisher
IEEE Computer Society and the Computer Vision Foundation (CVF)
Issue Date
2018-06-20
Language
English
Citation

31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4358–4366

DOI
10.1109/CVPR.2018.00458
URI
http://hdl.handle.net/10203/248012
Appears in Collection
EE-Conference Papers
Files in This Item
There are no files associated with this item.