Combining residual networks with LSTMs for lipreading

Stafylakis, Themos and Tzimiropoulos, Georgios (2017) Combining residual networks with LSTMs for lipreading. In: Interspeech 2017, 20-24 August 2017, Stockholm, Sweden.



We propose an end-to-end deep learning architecture for word-level visual speech recognition. The system combines spatiotemporal convolutional, residual, and bidirectional Long Short-Term Memory networks. We trained and evaluated it on the Lipreading In-The-Wild benchmark, a challenging database with a 500-word vocabulary consisting of video excerpts from BBC TV broadcasts. The proposed network attains a word accuracy of 83.0%, a 6.8% absolute improvement over the current state-of-the-art.
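The pipeline described in the abstract (a 3D spatiotemporal convolutional front-end, a per-frame residual network, a bidirectional LSTM, and a softmax over the vocabulary) can be sketched as a tensor-shape walkthrough. All concrete values below (clip length, frame size, kernel/stride/padding, feature and hidden widths) are illustrative assumptions, not figures taken from the paper:

```python
# Hypothetical shape walkthrough of the described pipeline:
# 3D conv front-end -> per-frame ResNet -> bidirectional LSTM -> softmax.
# Every hyperparameter below is an assumption for illustration only.

def out_len(size, kernel, stride, padding):
    """Output length of a convolution along one dimension."""
    return (size + 2 * padding - kernel) // stride + 1

T, H, W = 29, 112, 112           # assumed clip: 29 grayscale mouth-region frames

# Spatiotemporal (3D) convolution front-end
# (assumed kernel 5x7x7, stride 1x2x2, padding 2x3x3).
T1 = out_len(T, 5, 1, 2)         # temporal dimension preserved: 29
H1 = out_len(H, 7, 2, 3)         # 112 -> 56
W1 = out_len(W, 7, 2, 3)         # 112 -> 56

# A 2D residual network is applied to each of the T1 frames and ends in
# global average pooling, collapsing each frame to one feature vector.
feat_dim = 512                   # assumed ResNet output width
resnet_out = (T1, feat_dim)      # a sequence of T1 feature vectors

# Bidirectional LSTM: forward and backward hidden states are concatenated,
# doubling the per-step output width.
hidden = 256                     # assumed per-direction hidden size
bilstm_out = (T1, 2 * hidden)

# A final linear layer + softmax scores the 500-word vocabulary.
vocab = 500
logits_shape = (vocab,)
```

The key structural point the sketch illustrates is the hand-off between stages: the 3D front-end downsamples only the spatial dimensions, so the temporal resolution survives intact for the recurrent back-end to model.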

Item Type: Conference or Workshop Item (Paper)
Additional Information: Paper available on pp. 3652-3656. doi:10.21437/Interspeech.2017-85
Keywords: visual speech recognition, lipreading, deep learning
Schools/Departments: University of Nottingham, UK > Faculty of Science > School of Computer Science
Depositing User: Tzimiropoulos, Yorgos
Date Deposited: 10 Aug 2017 11:09
Last Modified: 04 May 2020 18:46
