Joint Learning using Denoising Variational Autoencoders for Voice Activity Detection

Cited 23 times in Web of Science; 0 times in Scopus.
Abstract
Voice activity detection (VAD) is a challenging task in very low signal-to-noise ratio (SNR) environments. To address this issue, a promising approach is to map noisy speech features to the corresponding clean features and to perform VAD on the generated clean features. This can be implemented by concatenating a speech enhancement (SE) network and a VAD network, whose parameters are jointly updated. In this paper, we propose denoising variational autoencoder (DVAE)-based speech enhancement in the joint learning framework. Moreover, we feed not only the enhanced feature but also the latent code from the DVAE into the VAD network. We show that the proposed joint learning approach outperforms the conventional denoising autoencoder-based joint learning approach.
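The architecture described in the abstract can be sketched roughly as follows: a DVAE encodes a noisy feature frame into a latent code, decodes it into an enhanced (clean-estimate) feature, and a VAD classifier consumes the concatenation of the enhanced feature and the latent code; all parameters are updated jointly under a combined loss. This is a minimal PyTorch sketch under stated assumptions — the layer sizes, MLP structure, feature dimensions, and the loss weighting `alpha` are illustrative choices, not the paper's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVAE(nn.Module):
    """Denoising VAE: maps a noisy feature frame to a clean estimate.
    All dimensions here are illustrative assumptions."""
    def __init__(self, feat_dim=40, latent_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, noisy):
        h = self.enc(noisy)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        enhanced = self.dec(z)
        return enhanced, z, mu, logvar

class JointVAD(nn.Module):
    """VAD classifier fed with both the enhanced feature and the latent code,
    as described in the abstract."""
    def __init__(self, feat_dim=40, latent_dim=16, hidden=64):
        super().__init__()
        self.dvae = DVAE(feat_dim, latent_dim)
        self.vad = nn.Sequential(nn.Linear(feat_dim + latent_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, noisy):
        enhanced, z, mu, logvar = self.dvae(noisy)
        # Concatenate enhanced feature and latent code for the VAD network
        logit = self.vad(torch.cat([enhanced, z], dim=-1)).squeeze(-1)
        return logit, enhanced, mu, logvar

def joint_loss(logit, label, enhanced, clean, mu, logvar, alpha=1.0):
    """Joint objective: VAD cross-entropy plus the DVAE's ELBO terms
    (reconstruction + KL). The weight alpha is a hypothetical knob."""
    vad = F.binary_cross_entropy_with_logits(logit, label)
    rec = F.mse_loss(enhanced, clean)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return vad + alpha * (rec + kl)

if __name__ == "__main__":
    # One joint update step on a dummy batch of 8 frames
    model = JointVAD()
    noisy = torch.randn(8, 40)
    clean = torch.randn(8, 40)
    label = torch.randint(0, 2, (8,)).float()
    logit, enhanced, mu, logvar = model(noisy)
    loss = joint_loss(logit, label, enhanced, clean, mu, logvar)
    loss.backward()  # gradients flow through both SE and VAD parameters
```

Because a single loss is backpropagated through both sub-networks, the SE front end is trained not only to reconstruct clean features but also to produce features (and a latent code) that are discriminative for speech/non-speech classification — the essence of the joint learning framework.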
Publisher
ISCA
Issue Date
2018-09-04
Language
English
Citation

19th Annual Conference of the International-Speech-Communication-Association (INTERSPEECH 2018), pp.1210 - 1214

DOI
10.21437/Interspeech.2018-1151
URI
http://hdl.handle.net/10203/247930
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.