IJFANS International Journal of Food and Nutritional Sciences

ISSN PRINT 2319 1775 Online 2320-7876

Real Time Sign Language Gesture Translation Using Deep Learning


Abdulmateen Pitodiya, Mishty Singha, Srishti Shukla, Vikas Gupta, Sonal Dubal


Deaf and speech-impaired persons use sign language as their primary form of communication; however, sign language is difficult for most others in society to understand, so these people face numerous problems every day. Several models exist in both hardware and software: the former are expensive and hard to use continuously, while some of the latter are already in use but have limitations such as low prediction accuracy and restrictions on background conditions. Our approach proposes a real-time gesture translation system that uses deep learning and image processing techniques. With only a camera required, the objective is to enable signers and non-signers to converse without difficulty. A large dataset of 13,000 images was used to train the model, which predicts a gesture with accuracy close to 99%. The technique involves using a camera to capture sign gestures, pre-processing the images, and applying machine learning to translate the captured gesture into text and speech.
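The pipeline described above (camera capture, image pre-processing, classification, text output) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the label set, the 64×64 input size, and the model interface are all assumptions, and a dummy classifier stands in for the trained deep-learning network.

```python
import numpy as np

# Hypothetical gesture labels; the paper does not list its actual classes.
LABELS = ["A", "B", "C", "HELLO", "THANKS"]

def preprocess(frame, size=64):
    """Pre-process one captured BGR frame for the classifier:
    grayscale -> nearest-neighbour resize -> scale pixel values to [0, 1]."""
    gray = frame.mean(axis=2)                    # naive grayscale conversion
    h, w = gray.shape
    ys = (np.arange(size) * h) // size           # row indices to sample
    xs = (np.arange(size) * w) // size           # column indices to sample
    resized = gray[np.ix_(ys, xs)]               # nearest-neighbour resize
    return (resized / 255.0).astype(np.float32)

def classify(model, frame):
    """Run the (assumed) trained model on one frame and return a text label."""
    x = preprocess(frame)[None, ..., None]       # add batch and channel dims
    probs = model(x)                             # model returns class scores
    return LABELS[int(np.argmax(probs))]

def dummy_model(x):
    """Stand-in for the trained CNN so the sketch runs without model weights."""
    scores = np.zeros(len(LABELS))
    scores[3] = 1.0                              # always predicts "HELLO"
    return scores

# Simulated camera frame (in practice this would come from cv2.VideoCapture).
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(classify(dummy_model, frame))
```

In a live system the predicted text would then be passed to a text-to-speech engine to produce the spoken form mentioned in the abstract.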
