Deep word embeddings for visual speech recognition

Stafylakis, Themos and Tzimiropoulos, Georgios (2018) Deep word embeddings for visual speech recognition. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2018), 15-20 April 2018, Calgary, Alberta, Canada.

Full text: av_speech2.pdf (PDF, 252kB)

Abstract

In this paper we present a deep learning architecture for extracting word embeddings for visual speech recognition. The embeddings summarize the information of the mouth region that is relevant to word recognition, while suppressing other sources of variability such as speaker, pose and illumination. The system comprises a spatiotemporal convolutional layer, a Residual Network and bidirectional LSTMs, and is trained on the Lip Reading in the Wild (LRW) database. We first show that the proposed architecture surpasses the state of the art on closed-set word identification, attaining an 11.92% error rate on a vocabulary of 500 words. We then examine the capacity of the embeddings to model words unseen during training: we deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings and perform low-shot learning experiments on such unseen words. The experiments demonstrate that word-level visual speech recognition is feasible even when the target words are not included in the training set.
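The pipeline described in the abstract (a spatiotemporal convolutional front-end, a per-frame Residual Network, and bidirectional LSTMs pooled into a fixed-length word embedding) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration only: the layer sizes, the choice of ResNet-18, the 112x112 grayscale mouth crops, the 29-frame clips, and the mean-pooling readout are not taken from the paper, whose exact configuration may differ.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class WordEmbeddingNet(nn.Module):
        """Sketch: 3D-conv front-end -> per-frame ResNet -> BiLSTM -> word embedding."""
        def __init__(self, num_words=500, embed_dim=256):
            super().__init__()
            # Spatiotemporal (3D) convolution over the grayscale mouth-crop clip.
            self.front3d = nn.Sequential(
                nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
                nn.BatchNorm3d(64),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
            )
            # A 2D ResNet applied frame by frame; conv1 is swapped to accept the
            # 64-channel front-end output, and the final fc is dropped so the
            # network emits a 512-d pooled feature per frame.
            resnet = models.resnet18(weights=None)
            resnet.conv1 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
            resnet.fc = nn.Identity()
            self.resnet = resnet
            # Bidirectional LSTM over the per-frame feature sequence.
            self.blstm = nn.LSTM(512, embed_dim, num_layers=2,
                                 batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * embed_dim, num_words)

        def forward(self, clips):
            # clips: (batch, 1, time, height, width) grayscale mouth crops
            x = self.front3d(clips)                   # (B, 64, T, H', W')
            b, c, t, h, w = x.shape
            x = x.transpose(1, 2).reshape(b * t, c, h, w)
            feats = self.resnet(x).view(b, t, -1)     # (B, T, 512)
            out, _ = self.blstm(feats)                # (B, T, 2*embed_dim)
            embedding = out.mean(dim=1)               # temporal pooling -> word embedding
            return embedding, self.classifier(embedding)

    # Example: a batch of two 29-frame 112x112 clips.
    net = WordEmbeddingNet()
    emb, logits = net(torch.randn(2, 1, 29, 112, 112))
    print(emb.shape, logits.shape)  # torch.Size([2, 512]) torch.Size([2, 500])

In this sketch, the embedding output plays the role of the fixed-length word representation that a back-end such as PLDA would model in the low-shot experiments, while the logits head corresponds to closed-set identification over the 500-word vocabulary.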

Item Type: Conference or Workshop Item (Paper)
Keywords: Visual Speech Recognition, Lipreading, Word Embeddings, Deep Learning, Low-shot Learning
Schools/Departments: University of Nottingham, UK > Faculty of Science > School of Computer Science
Depositing User: Tzimiropoulos, Yorgos
Date Deposited: 13 Apr 2018 10:48
Last Modified: 15 Apr 2018 05:12
URI: https://eprints.nottingham.ac.uk/id/eprint/51133
