Learning a Joint Embedding Space of Monophonic and Mixed Music Signals for Singing Voice

Previous approaches to singer identification have used either monophonic vocal tracks or mixed tracks containing multiple instruments, leaving a semantic gap between these two audio domains. In this paper, we present a system that learns a joint embedding space of monophonic and mixed tracks for singing voice. We use a metric learning method that ensures tracks from both domains by the same singer are mapped closer to each other than tracks by different singers. We train the system on a large synthetic dataset generated by music mashup to reflect real-world music recordings. Our approach opens up new possibilities for cross-domain tasks, e.g., given a monophonic track of a singer as a query, retrieving mixed tracks sung by the same singer from a database. It also requires no additional vocal enhancement steps such as source separation. We show the effectiveness of our system for singer identification and query-by-singer in both in-domain and cross-domain tasks.
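
As context for the metric-learning objective and cross-domain retrieval described in the abstract, the following is a minimal sketch in PyTorch. The triplet formulation, the margin value, cosine-similarity retrieval, and all function names here are illustrative assumptions on our part, not the paper's exact method.

    import torch
    import torch.nn.functional as F

    def cross_domain_triplet_loss(anchor, positive, negative, margin=0.3):
        # Assumed setup: anchor is the embedding of a monophonic vocal
        # track, positive is a mixed-track embedding of the same singer,
        # negative is a mixed-track embedding of a different singer.
        # The hinge pulls same-singer pairs closer than different-singer
        # pairs by at least the margin, across the two audio domains.
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

    def query_by_singer(query_emb, db_embs, top_k=5):
        # Cross-domain retrieval: rank mixed-track embeddings in the
        # database by cosine similarity to a monophonic query embedding.
        sims = F.cosine_similarity(query_emb.unsqueeze(0), db_embs, dim=1)
        return sims.topk(top_k).indices

    # Example usage with random 128-d embeddings for a batch of 8:
    a, p, n = (torch.randn(8, 128) for _ in range(3))
    loss = cross_domain_triplet_loss(a, p, n)

Because both domains are mapped into one space, the same nearest-neighbor search serves in-domain and cross-domain queries alike, which is what makes the query-by-singer task above possible without source separation.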
Publisher
International Society for Music Information Retrieval Conference (ISMIR)
Issue Date
2019-11-04
Language
English
Citation

The 20th International Society for Music Information Retrieval Conference (ISMIR), pp. 295-302

URI
http://hdl.handle.net/10203/269878
Appears in Collection
GCT-Conference Papers