Neural audio fingerprinting for broadcast monitoring with source separation

Audio fingerprinting systems have evolved from frequency-analysis techniques and have recently shown significantly improved performance in noisy environments through deep neural networks. However, while these systems identify music played in physical spaces well, they perform worse in broadcast monitoring tasks. A major problem is that both deep-neural-network-based and frequency-analysis-based systems often fail to detect music segments, mistaking them for non-musical content, primarily because speech overpowers the music in broadcast audio. To address this, our study employs a pre-trained source separation model to remove vocals from the query audio before feeding it into the fingerprint extraction model, enhancing the performance of the broadcast monitoring system. Furthermore, we fine-tuned the source separation model to optimize it for speech removal by customizing the training dataset, replacing the vocal stems with speech recordings. As a result, we improved speech removal performance, boosting the performance of the broadcast monitoring system.
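The query-side pipeline the abstract describes (separate out speech, then fingerprint and match the remaining music) can be sketched as below. All components are hypothetical stand-ins, not the thesis's actual models: `separate_accompaniment` simply subtracts a given speech estimate in place of a fine-tuned separation network, and `encode` uses a fixed random projection in place of a trained fingerprint encoder; sample rate, segment length, and embedding size are assumptions.

```python
import numpy as np

SR = 8000        # assumed sample rate
SEG_LEN = SR     # assumed 1 s fingerprint segments
EMB_DIM = 128    # assumed embedding size

_rng = np.random.default_rng(0)
# Stand-in for a trained fingerprint encoder: a fixed random projection.
_proj = _rng.standard_normal((SEG_LEN, EMB_DIM)) / np.sqrt(SEG_LEN)

def separate_accompaniment(mix, speech_estimate):
    # Placeholder for the speech-removal model: subtract a given speech
    # estimate. The thesis instead fine-tunes a source separation network.
    return mix - speech_estimate

def encode(audio):
    # Split into fixed-length segments, project, and L2-normalize
    # to produce one fingerprint embedding per segment.
    n = len(audio) // SEG_LEN
    segs = audio[: n * SEG_LEN].reshape(n, SEG_LEN)
    emb = segs @ _proj
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def match(query_emb, db_emb):
    # Nearest database segment by cosine similarity
    # (inner product of unit vectors).
    return int(np.argmax(db_emb @ query_emb))

# Toy run: a 10-segment reference track as the fingerprint database.
ref = _rng.standard_normal(10 * SEG_LEN)
db = encode(ref)

speech = 3.0 * _rng.standard_normal(SEG_LEN)      # loud speech over the music
query = ref[4 * SEG_LEN : 5 * SEG_LEN] + speech   # segment 4 plus speech

noisy_hit = match(encode(query)[0], db)
clean_hit = match(encode(separate_accompaniment(query, speech))[0], db)
print(noisy_hit, clean_hit)  # clean_hit is 4: speech removal recovers segment 4
```

Because the stand-in separator removes the speech exactly, the cleaned query matches its true segment perfectly; the point of the sketch is only the ordering of the stages, separation before fingerprint extraction.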
Advisors
Juhan Nam
Description
Korea Advanced Institute of Science and Technology: Graduate School of Culture Technology
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: Graduate School of Culture Technology, 2024.2, [iv, 35 p.]

Keywords

Audio fingerprinting; Deep neural network; Speech removal; Source separation; Fine-tuning

URI
http://hdl.handle.net/10203/321401
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096186&flag=dissertation
Appears in Collection
GCT-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
