Sign Language to Text Conversion using CNN
Alan Wilson1, Lenet Steephen2
1Alan Wilson, Department of Computer Science, St. Albert’s College, Kochi (Kerala), India.
2Lenet Steephen, Department of Computer Science, St. Albert’s College, Kochi (Kerala), India.
Manuscript received on 16 April 2024 | Revised Manuscript received on 04 May 2024 | Manuscript Accepted on 15 May 2024 | Manuscript published on 30 May 2024 | PP: 9-12 | Volume-4 Issue-1 May 2024 | Retrieval Number: 100.1/ijdm.A163404010524 | DOI: 10.54105/ijdm.A1634.04010524
© The Authors. Published by Lattice Science Publication (LSP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Sign language is a communication method used by people who are unable to hear, so those who know sign language can communicate with the deaf. However, the majority of people do not know sign language, which creates a communication gap between those who know it and those who do not. The major purpose of this project is to bridge this gap by developing a system that recognizes sign language gestures and translates them into text in real time. We construct this system using machine learning, specifically convolutional neural networks (CNNs), to recognize American Sign Language (ASL) gestures captured by a webcam and translate them into text. The converted text is then displayed on the screen so that individuals can understand and communicate with those who use sign language. The system's performance is evaluated on a dataset of ASL gestures, attaining high accuracy and indicating its potential for practical use in improving communication accessibility for the deaf and hard-of-hearing community.
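The pipeline described in the abstract (webcam frame → preprocessing → CNN → predicted letter → on-screen text) can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation: the weights are random placeholders where a deployed system would load trained parameters, and all function names (`preprocess`, `conv2d`, `predict_letter`) and sizes (32×32 input, one 3×3 filter, 26 classes) are assumptions for illustration.

```python
import numpy as np

LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # 26 ASL letter classes

def preprocess(frame):
    """Convert an RGB frame to a 32x32 grayscale image scaled to [0, 1]."""
    gray = frame.mean(axis=2)                       # naive grayscale conversion
    h, w = gray.shape
    ys = np.linspace(0, h - 1, 32).astype(int)      # nearest-neighbour resize
    xs = np.linspace(0, w - 1, 32).astype(int)
    return gray[np.ix_(ys, xs)] / 255.0

def conv2d(img, kernel):
    """Valid 2-D convolution, single channel, single filter."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_letter(frame, rng):
    """Run one tiny conv -> ReLU -> pool -> dense -> softmax pass."""
    feat = np.maximum(conv2d(preprocess(frame), rng.standard_normal((3, 3))), 0)
    pooled = feat[::2, ::2]                         # stride-2 downsampling
    w = rng.standard_normal((pooled.size, len(LETTERS)))  # placeholder dense weights
    logits = pooled.ravel() @ w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                            # softmax over the 26 letters
    return LETTERS[int(np.argmax(probs))]

rng = np.random.default_rng(0)
# Stand-in for a captured webcam frame (a real system would read from the camera).
frame = rng.integers(0, 256, size=(240, 320, 3)).astype(float)
print(predict_letter(frame, rng))
```

With random weights the predicted letter is meaningless; the point is only the shape of the pipeline. In practice the convolution, pooling, and dense layers would come from a framework such as TensorFlow/Keras with parameters learned on an ASL gesture dataset, and frames would be read from the webcam in a loop for real-time display.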
Keywords: Sign Language, Convolutional Neural Network (CNN), Real-time, American Sign Language (ASL)
Scope of the Article: Data Science