DC Field | Value | Language |
---|---|---|
dc.contributor.author | Afouras, Triantafyllos | ko |
dc.contributor.author | Chung, Joon Son | ko |
dc.contributor.author | Senior, Andrew | ko |
dc.contributor.author | Vinyals, Oriol | ko |
dc.contributor.author | Zisserman, Andrew | ko |
dc.date.accessioned | 2022-11-30T01:00:27Z | - |
dc.date.available | 2022-11-30T01:00:27Z | - |
dc.date.created | 2021-11-26 | - |
dc.date.issued | 2022-12 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.44, no.12, pp.8717 - 8727 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10203/301318 | - |
dc.description.abstract | The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem -- unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release two new datasets for audio-visual speech recognition: LRS2-BBC, consisting of thousands of natural sentences from British television; and LRS3-TED, consisting of hundreds of hours of TED and TEDx talks obtained from YouTube. The models that we train surpass the performance of all previous work on lip reading benchmark datasets by a significant margin. | - |
dc.language | English | - |
dc.publisher | IEEE COMPUTER SOC | - |
dc.title | Deep Audio-visual Speech Recognition | - |
dc.type | Article | - |
dc.identifier.wosid | 000880661400015 | - |
dc.identifier.scopusid | 2-s2.0-85058981464 | - |
dc.type.rims | ART | - |
dc.citation.volume | 44 | - |
dc.citation.issue | 12 | - |
dc.citation.beginningpage | 8717 | - |
dc.citation.endingpage | 8727 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.identifier.doi | 10.1109/TPAMI.2018.2889052 | - |
dc.contributor.localauthor | Chung, Joon Son | - |
dc.contributor.nonIdAuthor | Afouras, Triantafyllos | - |
dc.contributor.nonIdAuthor | Senior, Andrew | - |
dc.contributor.nonIdAuthor | Vinyals, Oriol | - |
dc.contributor.nonIdAuthor | Zisserman, Andrew | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Hidden Markov models | - |
dc.subject.keywordAuthor | Lips | - |
dc.subject.keywordAuthor | Speech recognition | - |
dc.subject.keywordAuthor | Visualization | - |
dc.subject.keywordAuthor | Videos | - |
dc.subject.keywordAuthor | Feeds | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Lip reading | - |
dc.subject.keywordAuthor | audio visual speech recognition | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordPlus | NETWORKS | - |
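The abstract contrasts a CTC-loss model with a sequence-to-sequence model for lip reading. As an illustration only (this is not the paper's implementation, and the variable names are invented for the sketch), the following pure-Python code computes the quantity a CTC loss minimizes: the negative log-likelihood of a label sequence under per-frame log-probabilities, via the standard CTC forward algorithm with an interleaved blank symbol.

```python
import math

def ctc_neg_log_likelihood(log_probs, target, blank=0):
    """CTC negative log-likelihood via the forward algorithm.

    log_probs: list of T lists, each a log-softmax over the vocabulary
               (index `blank` is the CTC blank symbol).
    target:    list of label indices, with no blanks.
    Returns -log P(target | log_probs), summed over all valid alignments.
    """
    # Extended target: a blank before, between, and after every label.
    ext = [blank]
    for c in target:
        ext += [c, blank]
    S, T = len(ext), len(log_probs)
    NEG_INF = float("-inf")

    def logsumexp(*xs):
        m = max(xs)
        if m == NEG_INF:
            return NEG_INF
        return m + math.log(sum(math.exp(x - m) for x in xs))

    # alpha[s]: log-prob of all alignment prefixes ending at ext[s]
    # after the current frame.
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG_INF] * S
        for s in range(S):
            a = alpha[s]                       # stay on the same symbol
            if s > 0:
                a = logsumexp(a, alpha[s - 1])  # advance by one
            # Skip over a blank, allowed only between distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logsumexp(a, alpha[s - 2])
            new[s] = a + log_probs[t][ext[s]]
        alpha = new
    # A complete alignment ends on the last label or the final blank.
    tail = alpha[S - 2] if S > 1 else NEG_INF
    return -logsumexp(alpha[S - 1], tail)
```

For example, with one frame, vocabulary {blank, 1} and P(1) = 0.6, the only alignment is "1", so the loss is -log 0.6; with two uniform frames the alignments "blank,1", "1,blank" and "1,1" sum to 0.75, giving -log 0.75. This dynamic program is what frameworks compute in batched form (e.g. PyTorch's `nn.CTCLoss`), whereas the sequence-to-sequence model in the abstract instead decodes labels autoregressively with cross-entropy.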