End-to-end audiovisual speech recognition

Petridis, Stavros, Stafylakis, Themos, Ma, Pingchuan, Cai, Feipeng, Tzimiropoulos, Georgios and Pantic, Maja (2018) End-to-end audiovisual speech recognition. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, 15-20 April 2018, Calgary, Alberta, Canada.


Abstract

Several end-to-end deep learning approaches have recently been presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from the mouth regions and the raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU, and the fusion of the streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only model and an MFCC-based model is reported in clean audio conditions and at low levels of noise. In the presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.
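The abstract describes a two-stream architecture: a visual stream over mouth-region pixels, an audio stream over raw waveforms, each followed by a 2-layer BGRU, with fusion by a further 2-layer BGRU. The sketch below is a minimal, hypothetical PyTorch rendering of that layout; the ResNet-18 backbone, the 1D-convolutional audio front-end, and all layer sizes are illustrative assumptions, since the page does not give the model's exact hyperparameters.

```python
# Minimal sketch of a two-stream audiovisual model with BGRU fusion.
# All dimensions and the specific front-ends are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class AudioVisualModel(nn.Module):
    def __init__(self, num_classes=500, hidden=256):
        super().__init__()
        # Visual stream: a ResNet over grayscale mouth-region frames (assumed ResNet-18).
        resnet = models.resnet18(weights=None)
        resnet.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        resnet.fc = nn.Identity()  # yields a 512-d feature per frame
        self.visual_frontend = resnet
        self.visual_bgru = nn.GRU(512, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)

        # Audio stream: 1D convolutions applied directly to the raw waveform (assumed).
        self.audio_frontend = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=80, stride=4), nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.audio_bgru = nn.GRU(128, hidden, num_layers=2,
                                 batch_first=True, bidirectional=True)

        # Fusion: concatenate the two streams and model them with another 2-layer BGRU.
        self.fusion_bgru = nn.GRU(4 * hidden, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames, waveform):
        # frames:   (batch, time, 1, H, W) grayscale mouth crops
        # waveform: (batch, samples) raw audio
        b, t = frames.shape[:2]
        v = self.visual_frontend(frames.flatten(0, 1)).view(b, t, -1)
        v, _ = self.visual_bgru(v)

        a = self.audio_frontend(waveform.unsqueeze(1))          # (batch, 128, samples')
        a = F.adaptive_avg_pool1d(a, t).transpose(1, 2)          # align to the video time steps
        a, _ = self.audio_bgru(a)

        fused, _ = self.fusion_bgru(torch.cat([v, a], dim=-1))
        return self.classifier(fused[:, -1])                     # word logits from the last step
```

As a usage note, `num_classes=500` reflects the 500-word vocabulary of LRW; the time alignment via adaptive pooling is one simple way to match the audio stream to the video frame rate and is an assumption, not the paper's stated mechanism.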

Item Type: Conference or Workshop Item (Paper)
Schools/Departments: University of Nottingham, UK > Faculty of Science > School of Computer Science
Depositing User: Tzimiropoulos, Yorgos
Date Deposited: 13 Apr 2018 10:54
Last Modified: 15 Apr 2018 04:35
URI: https://eprints.nottingham.ac.uk/id/eprint/51132
