Deep Least Squares Regression for Speaker Adaptation

Recently, speaker adaptation methods for deep neural networks (DNNs) have been widely studied for automatic speech recognition. However, almost all adaptation methods for DNNs must contend with heuristic conditions such as mini-batch size, learning-rate scheduling, stopping criteria, and initialization, owing to the inherent properties of stochastic gradient descent (SGD)-based training, and these heuristic conditions are hard to tune properly. To alleviate these difficulties, in this paper we propose a least-squares regression-based speaker adaptation method in a DNN framework that utilizes the posterior mean of each class. We also show that the proposed method yields a unique solution that is easy and fast to compute without SGD. The proposed method was evaluated on the TED-LIUM corpus. Experimental results show that it achieves up to a 4.6% relative improvement over a speaker-independent DNN. In addition, we report further performance improvement of the proposed method with speaker-adapted features.
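The abstract's central point is that a least-squares formulation admits a unique closed-form solution, avoiding the mini-batch, learning-rate, and stopping-criterion tuning of SGD. A minimal sketch of that contrast, assuming an illustrative setup (the variable names, shapes, and synthetic targets below are assumptions, not the paper's exact formulation):

```python
import numpy as np

# Hedged sketch: fit an adaptation transform W by closed-form least squares.
# H stands in for a speaker's hidden activations, T for per-frame targets
# (e.g., posterior means of each class); both are synthetic here.
rng = np.random.default_rng(0)
n_frames, n_in, n_out = 200, 16, 8

H = rng.standard_normal((n_frames, n_in))   # assumed hidden activations
W_true = rng.standard_normal((n_in, n_out))
T = H @ W_true                              # assumed regression targets

# Unique minimizer of ||H W - T||^2 is W = (H^T H)^{-1} H^T T, computed
# stably via lstsq -- no mini-batches, learning rates, or stopping criteria.
W, *_ = np.linalg.lstsq(H, T, rcond=None)

print(np.allclose(W, W_true, atol=1e-8))    # recovers the generating transform
```

Because H has full column rank here, the normal equations have exactly one solution, which is what makes the adaptation step deterministic and fast relative to iterative SGD training.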
Publisher
International Speech Communication Association (ISCA)
Issue Date
2017-08-21
Language
English
Citation

18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017), pp. 729-733

ISSN
2308-457X
DOI
10.21437/Interspeech.2017-783
URI
http://hdl.handle.net/10203/227311
Appears in Collection
EE-Conference Papers (conference papers)
