Volume 12 | Special Issue 1
Deaf and speech-impaired persons use sign language as their primary form of communication, but sign language is difficult for most others in society to understand, so these people confront communication barriers every day. Existing solutions fall into hardware and software categories: hardware models are expensive and impractical for continuous use, while software models already in use suffer from limits such as low prediction accuracy and restrictions on background conditions. Our approach proposes a real-time gesture translation system that uses deep learning and image processing methods. With only a camera required, the objective is to enable signers and non-signers to converse without difficulty. A dataset of 13,000 photos was employed to predict gestures with accuracy close to 99%. The technique involves capturing sign gestures with a camera, pre-processing the images, and applying machine learning to translate the recognized gesture into text and spoken form.
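The capture–preprocess–classify pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: frame capture and the trained deep-learning model are stubbed out with placeholders, and the function names, grayscale weights, and stand-in decision rule are all illustrative assumptions.

```python
# Sketch of the pipeline: camera frame -> pre-processing -> gesture label.
# The real system would read frames from a camera and feed them to a trained
# deep-learning model; here both are replaced by simple stand-ins.

def rgb_to_gray(frame):
    """Convert an RGB frame (nested lists of (r, g, b) tuples) to grayscale
    using the standard ITU-R BT.601 luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame]

def normalize(gray):
    """Scale pixel intensities into [0, 1], a common pre-processing step
    before handing the image to a classifier."""
    return [[p / 255.0 for p in row] for row in gray]

def classify(gray):
    """Placeholder for the trained gesture model. A real model would output
    one of the sign classes; this stub decides on mean brightness only."""
    mean = sum(sum(row) for row in gray) / (len(gray) * len(gray[0]))
    return "A" if mean > 0.5 else "B"  # stand-in decision rule

# One synthetic bright 2x2 "frame" run through the full pipeline.
frame = [[(255, 255, 255), (255, 255, 255)],
         [(255, 255, 255), (255, 255, 255)]]
label = classify(normalize(rgb_to_gray(frame)))
print(label)  # the bright frame maps to "A" under the stub rule
```

In the actual system the `classify` stub would be replaced by the trained deep-learning model, and the resulting label would be rendered both as a text message and as synthesized speech.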