Sound-Guided Semantic Image Manipulation

Cited 2 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Lee, Seung Hyun | ko
dc.contributor.author | Roh, Wonseok | ko
dc.contributor.author | Byeon, Wonmin | ko
dc.contributor.author | Yoon, Sang Ho | ko
dc.contributor.author | Kim, Chanyoung | ko
dc.contributor.author | Kim, Jinkyu | ko
dc.contributor.author | Kim, Sangpil | ko
dc.date.accessioned | 2022-08-24T07:00:16Z | -
dc.date.available | 2022-08-24T07:00:16Z | -
dc.date.created | 2022-06-07 | -
dc.date.issued | 2022-06-21 | -
dc.identifier.citation | 2022 IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), pp.3377 - 3386 | -
dc.identifier.uri | http://hdl.handle.net/10203/298067 | -
dc.description.abstract | The recent success of generative models shows that leveraging a multi-modal embedding space makes it possible to manipulate an image using text information. However, manipulating an image with sources other than text, such as sound, is not easy due to the dynamic characteristics of those sources. In particular, sound can convey vivid emotions and dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multi-modal embedding space. We use a direct latent optimization method based on these aligned embeddings for sound-guided image manipulation. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. We verify the effectiveness of our sound-guided image manipulation quantitatively and qualitatively. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods. | -
dc.language | English | -
dc.publisher | IEEE | -
dc.title | Sound-Guided Semantic Image Manipulation | -
dc.type | Conference | -
dc.identifier.wosid | 000867754203060 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 3377 | -
dc.citation.endingpage | 3386 | -
dc.citation.publicationname | 2022 IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | New Orleans | -
dc.contributor.localauthor | Yoon, Sang Ho | -
dc.contributor.nonIdAuthor | Lee, Seung Hyun | -
dc.contributor.nonIdAuthor | Roh, Wonseok | -
dc.contributor.nonIdAuthor | Byeon, Wonmin | -
dc.contributor.nonIdAuthor | Kim, Chanyoung | -
dc.contributor.nonIdAuthor | Kim, Jinkyu | -
dc.contributor.nonIdAuthor | Kim, Sangpil | -
Appears in Collection
GCT - Conference Papers
Files in This Item
There are no files associated with this item.
