Deep Audio-visual Speech Recognition

Cited 320 times in Web of Science; cited 0 times in Scopus
  • Hits: 461
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Afouras, Triantafyllos | ko
dc.contributor.author | Chung, Joon Son | ko
dc.contributor.author | Senior, Andrew | ko
dc.contributor.author | Vinyals, Oriol | ko
dc.contributor.author | Zisserman, Andrew | ko
dc.date.accessioned | 2022-11-30T01:00:27Z | -
dc.date.available | 2022-11-30T01:00:27Z | -
dc.date.created | 2021-11-26 | -
dc.date.issued | 2018-12 | -
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.44, no.12, pp.8717 - 8727 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10203/301318 | -
dc.description.abstract | The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem -- unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release two new datasets for audio-visual speech recognition: LRS2-BBC, consisting of thousands of natural sentences from British television; and LRS3-TED, consisting of hundreds of hours of TED and TEDx talks obtained from YouTube. The models that we train surpass the performance of all previous work on lip reading benchmark datasets by a significant margin. | -
dc.language | English | -
dc.publisher | IEEE COMPUTER SOC | -
dc.title | Deep Audio-visual Speech Recognition | -
dc.type | Article | -
dc.identifier.wosid | 000880661400015 | -
dc.identifier.scopusid | 2-s2.0-85058981464 | -
dc.type.rims | ART | -
dc.citation.volume | 44 | -
dc.citation.issue | 12 | -
dc.citation.beginningpage | 8717 | -
dc.citation.endingpage | 8727 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | -
dc.identifier.doi | 10.1109/TPAMI.2018.2889052 | -
dc.contributor.localauthor | Chung, Joon Son | -
dc.contributor.nonIdAuthor | Afouras, Triantafyllos | -
dc.contributor.nonIdAuthor | Senior, Andrew | -
dc.contributor.nonIdAuthor | Vinyals, Oriol | -
dc.contributor.nonIdAuthor | Zisserman, Andrew | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Hidden Markov models | -
dc.subject.keywordAuthor | Lips | -
dc.subject.keywordAuthor | Speech recognition | -
dc.subject.keywordAuthor | Visualization | -
dc.subject.keywordAuthor | Videos | -
dc.subject.keywordAuthor | Feeds | -
dc.subject.keywordAuthor | Training | -
dc.subject.keywordAuthor | Lip reading | -
dc.subject.keywordAuthor | audio visual speech recognition | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordPlus | NETWORKS | -
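The abstract above compares two transformer-based lip-reading models, one trained with a CTC loss and one with a sequence-to-sequence loss. As a rough illustration only, the sketch below attaches both heads to a shared transformer encoder in PyTorch; every dimension, layer count, and the character vocabulary are hypothetical assumptions, and this is not the authors' released code.

```python
# Minimal sketch of the two losses the abstract compares: a CTC head and a
# sequence-to-sequence (transformer decoder) head sharing one transformer
# self-attention encoder over per-frame visual features. All sizes (512-d
# features, 6 layers, 40-character vocabulary) are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB = 40      # assumed character set size; index 0 reserved as the CTC blank
D_MODEL = 512   # assumed model width

class LipTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(512, D_MODEL)  # lip-crop features -> model width
        enc = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=6)
        self.ctc_head = nn.Linear(D_MODEL, VOCAB)       # per-frame posteriors
        dec = nn.TransformerDecoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=6)
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.s2s_head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, vis_feats, prev_chars):
        # vis_feats: (B, T, 512) visual features; prev_chars: (B, U) for
        # teacher forcing in the seq2seq branch
        memory = self.encoder(self.proj(vis_feats))
        ctc_logits = self.ctc_head(memory)              # (B, T, VOCAB)
        mask = nn.Transformer.generate_square_subsequent_mask(prev_chars.size(1))
        dec_out = self.decoder(self.embed(prev_chars), memory, tgt_mask=mask)
        return ctc_logits, self.s2s_head(dec_out)       # (B, U, VOCAB)

model = LipTransformer()
B, T = 2, 75                                  # batch of 75-frame clips
vis = torch.randn(B, T, 512)
chars = torch.randint(1, VOCAB, (B, 20))      # dummy character transcript
inp, gold = chars[:, :-1], chars[:, 1:]       # shift for teacher forcing
ctc_logits, s2s_logits = model(vis, inp)

# (a) CTC: per-frame log-probs in (T, B, VOCAB) order, monotonic alignment.
ctc_loss = nn.CTCLoss(blank=0)(
    ctc_logits.log_softmax(-1).transpose(0, 1), chars,
    torch.full((B,), T), torch.full((B,), chars.size(1)))
# (b) Seq2seq: next-character cross-entropy under teacher forcing.
s2s_loss = nn.CrossEntropyLoss()(s2s_logits.reshape(-1, VOCAB), gold.reshape(-1))
```

The trade-off this setup exposes is the one the paper studies: CTC assumes a monotonic, frame-synchronous alignment between video frames and characters, while the seq2seq decoder attends over the whole input sequence and its own output history, which the abstract reports evaluating on the LRS2-BBC and LRS3-TED benchmarks.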
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by 320 documents in Web of Science.
