LipNet: End-to-End Lipreading
Jishnu T S1, Anju Antony2
1Jishnu T S, Department of Computer Science, St. Albert’s College, Kochi (Kerala), India.
2Anju Antony, Department of Computer Science, St. Albert’s College, Kochi (Kerala), India.
Manuscript received on 06 March 2024 | Revised Manuscript received on 02 May 2024 | Manuscript Accepted on 15 May 2024 | Manuscript published on 30 May 2024 | PP: 1-4 | Volume-4 Issue-1 May 2024 | Retrieval Number: 100.1/ijdm.A163204010524 | DOI: 10.54105/ijdm.A1632.04010524
© The Authors. Published by Lattice Science Publication (LSP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Lipreading is the task of decoding text from the movement of a speaker’s mouth. This research presents the development of an advanced end-to-end lipreading system. Leveraging deep learning architectures and multimodal fusion techniques, the proposed system interprets spoken language solely from visual cues, such as lip movements. Through meticulous data collection, annotation, preprocessing, model development, and evaluation, diverse datasets encompassing various speakers, accents, languages, and environmental conditions are curated to ensure robustness and generalization. Conventional methods divided the task into two stages: designing or learning visual features, and prediction, whereas most deep lipreading methods are trainable end to end. In the past, lipreading was tackled with tedious and often unsatisfactory techniques that break speech down into smaller units such as phonemes or visemes, but these methods frequently fail when faced with real-world challenges such as contextual factors, accents, and differences in speech patterns. Moreover, prior research on end-to-end trained models performs only word classification; sentence-level sequence prediction is not included. LipNet is an end-to-end trained model that uses spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification (CTC) loss to translate a variable-length sequence of video frames to text. LipNet breaks from the traditional paradigm through an all-encompassing, end-to-end approach built on deep learning: convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which excel at extracting high-level representations and processing sequential data, are fundamental to its architecture. LipNet achieves 95.2% sentence-level accuracy on the GRID corpus overlapped-speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy. These results underscore the transformative potential of the lipreading system in real-world applications, particularly in domains such as assistive technology and human-computer interaction, where it can significantly improve communication accessibility and inclusivity for individuals with hearing impairments.
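As a concrete illustration of the pipeline the abstract describes, the following minimal PyTorch sketch wires spatiotemporal convolutions, a bidirectional GRU, and the CTC loss into a single end-to-end model. It is an assumption-laden sketch rather than the authors’ implementation: the layer widths, GRU size, and 28-character vocabulary are illustrative choices, and the 75-frame, 50×100 mouth-crop input merely mirrors GRID-style clips (3 s at 25 fps), an assumption made for this sketch.

import torch
import torch.nn as nn

class LipNetSketch(nn.Module):
    """Illustrative LipNet-style model: 3D convolutions over the video
    volume, a bidirectional GRU over time, and a per-frame character
    classifier trained with CTC loss. All sizes are assumptions, not
    the paper's exact configuration."""
    def __init__(self, num_chars=28):  # e.g. 26 letters + space + CTC blank
        super().__init__()
        self.stcnn = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # halve space, keep time
            nn.Conv3d(32, 64, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # After two 2x spatial poolings of 50x100 crops: 12 x 25 feature maps.
        self.gru = nn.GRU(input_size=64 * 12 * 25, hidden_size=128,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_chars)

    def forward(self, x):                  # x: (batch, 3, T, H, W)
        f = self.stcnn(x)                  # (batch, 64, T, 12, 25)
        f = f.permute(0, 2, 1, 3, 4).flatten(2)  # (batch, T, 64*12*25)
        h, _ = self.gru(f)                 # (batch, T, 256)
        return self.fc(h).log_softmax(-1)  # per-frame character log-probs

# CTC aligns the frame-level predictions with the transcript without
# requiring frame-level labels.
model = LipNetSketch()
video = torch.randn(2, 3, 75, 50, 100)        # 2 clips, 75 frames of 50x100
log_probs = model(video).permute(1, 0, 2)     # (T, batch, chars) for CTCLoss
targets = torch.randint(1, 28, (2, 20))       # dummy transcripts, blank=0
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((2,), 75, dtype=torch.long),
    target_lengths=torch.full((2,), 20, dtype=torch.long),
)

At test time, decoding would replace the loss with a greedy or beam-search CTC decoder over the per-frame log-probabilities.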
Keywords: Lipreading, LipNet, Sentence-Level Prediction, Deep Lipreading, Convolutional Neural Network, Recurrent Neural Network.
Scope of the Article: Data Science