Multi-View Automatic Lip-Reading using Neural Network

It is well known that automatic lip-reading (ALR), also known as visual speech recognition (VSR), enhances the performance of speech recognition in noisy environments and also has applications in its own right. However, ALR is a challenging task due to the variety of lip shapes and the ambiguity of visemes (the basic units of visual speech information). In this paper, we tackle ALR as a classification task using an end-to-end neural network based on convolutional neural network (CNN) and long short-term memory (LSTM) architectures. We conduct single-, cross-, and multi-view experiments in a speaker-independent setting with various network configurations for integrating the multi-view data. We achieve average classification accuracies of 77.9%, 83.8%, and 78.6% on the single-, cross-, and multi-view settings, respectively. This exceeds the best score (76%) among the preliminary single-view results reported by the ACCV 2016 workshop on multi-view lip-reading/audio-visual challenges, and it shows that additional view information helps improve the performance of ALR with a neural network architecture.
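
The abstract only outlines the architecture (per-frame CNN features, LSTM temporal modeling, and fusion of multiple camera views). The sketch below is a minimal PyTorch illustration of that general design, not the authors' code: the layer sizes, the number of views, and the fusion-by-concatenation strategy are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a multi-view CNN+LSTM word classifier (illustrative only;
# all hyperparameters and the concatenation-based fusion are assumptions).
import torch
import torch.nn as nn

class MultiViewLipReader(nn.Module):
    def __init__(self, num_views=5, num_classes=10, feat_dim=128, hidden=256):
        super().__init__()
        # One small CNN per view extracts a frame-level visual feature.
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            ) for _ in range(num_views)
        ])
        # An LSTM models the temporal dynamics of the fused per-frame features.
        self.lstm = nn.LSTM(feat_dim * num_views, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, views):
        # views: list of num_views tensors, each (batch, time, 1, H, W)
        fused = []
        for enc, x in zip(self.encoders, views):
            b, t = x.shape[:2]
            f = enc(x.flatten(0, 1)).view(b, t, -1)  # (batch, time, feat_dim)
            fused.append(f)
        seq = torch.cat(fused, dim=-1)   # concatenate view features per frame
        out, _ = self.lstm(seq)
        return self.classifier(out[:, -1])  # classify from the final time step

# Example: 5 camera views, 2 clips, 20 frames of 64x64 mouth crops each.
views = [torch.randn(2, 20, 1, 64, 64) for _ in range(5)]
logits = MultiViewLipReader()(views)
print(logits.shape)  # torch.Size([2, 10])
```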
Publisher
Asian Federation of Computer Vision (AFCV)
Issue Date
2016-11-20
Language
English
Citation
13th Asian Conference on Computer Vision (ACCV), pp. 290-302
DOI
10.1007/978-3-319-54427-4_22
URI
http://hdl.handle.net/10203/214327
Appears in Collection
AI-Conference Papers(학술대회논문)