Learning to lip read words by watching videos

Cited 35 times in Web of Science; cited 0 times in Scopus
Our aim is to recognise the words being spoken by a talking face, given only the video and not the audio. Existing work in this area has focussed on recognising a small number of utterances in controlled environments (e.g. digits and the letters of the alphabet), partly due to the shortage of suitable datasets. We make three novel contributions: first, we develop a pipeline for fully automated data collection from TV broadcasts, with which we have generated a dataset of over a million word instances spoken by over a thousand different people; second, we develop a two-stream convolutional neural network that learns a joint embedding between the sound and the mouth motions from unlabelled data, and we apply this network to audio-to-video synchronisation and active speaker detection; third, we train convolutional and recurrent networks that effectively learn and recognise hundreds of words from this large-scale dataset. In both lip reading and speaker detection, we demonstrate results that exceed the current state of the art on public benchmark datasets.
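To make the second contribution concrete, below is a minimal sketch of a two-stream network that maps short audio and mouth-video clips into a joint embedding space, trained with a contrastive loss so that in-sync audio/video pairs lie close together and off-sync pairs are pushed apart. This is only an illustration of the idea, not the paper's exact architecture: PyTorch is assumed, and all layer shapes, input sizes (13 MFCC bins x 20 audio frames; 5 stacked mouth crops of 112x112), and the loss margin are illustrative assumptions.

```python
# Illustrative sketch of a two-stream audio-visual embedding network.
# All layer sizes, input shapes, and the margin are assumptions, not the
# paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioStream(nn.Module):
    """Embeds a short spectrogram chunk, e.g. 13 MFCCs x 20 frames (~0.2 s)."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):  # x: (B, 1, 13, 20)
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

class VisualStream(nn.Module):
    """Embeds a stack of 5 grayscale mouth crops, each 112x112."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(5, 64, 5, stride=2, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):  # x: (B, 5, 112, 112)
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def contrastive_loss(a, v, same, margin=0.5):
    """Pull embeddings of in-sync pairs together; push off-sync pairs apart."""
    d = (a - v).pow(2).sum(1)  # squared Euclidean distance per pair
    return (same * d + (1 - same) * F.relu(margin - d.sqrt()).pow(2)).mean()

# Toy usage: positives are genuine audio/video pairs; negatives can be
# time-shifted (off-sync) pairs from the same clip, so no manual labels
# are needed -- synchronisation itself provides the supervision.
audio, video = AudioStream(), VisualStream()
a = audio(torch.randn(8, 1, 13, 20))
v = video(torch.randn(8, 5, 112, 112))
same = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0], dtype=torch.float)
loss = contrastive_loss(a, v, same)
loss.backward()
```

The key design point this sketch captures is that supervision comes for free from unlabelled broadcast video: genuine audio/video alignment yields positive pairs and artificial time shifts yield negatives, which is also what makes the learned embedding directly usable for audio-to-video synchronisation and active speaker detection.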
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
Issue Date
2018-08
Language
English
Article Type
Article
Citation
COMPUTER VISION AND IMAGE UNDERSTANDING, v.173, pp. 76-85
ISSN
1077-3142
DOI
10.1016/j.cviu.2018.02.001
URI
http://hdl.handle.net/10203/289581
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.