Sign Language Recognition Using Python

Authors

Tanudeep Ganguly
Department of Computer Science & Engineering, Techno College of Engineering Agartala
Partha Pratim Deb
Department of Computer Science & Engineering, Techno College of Engineering Agartala

Synopsis

Background: Sign language is the primary mode of communication for the hearing- and speech-impaired population. The Government of India has enacted the Rights of Persons with Disabilities Act, 2016 (RPwD Act 2016), which recognizes Indian Sign Language (ISL) as an important medium for communicating with hearing-impaired people. The Act also mandates sign language interpreters in all Government organizations and public sector undertakings. Beyond India, the communication gap between hearing-impaired people and the rest of society is a universal problem across many sectors. [1] Advances in technology have brought Python programming to scientific research [2] and made convolutional neural networks practical [3]; together these can greatly reduce the isolation of hearing-impaired people from the rest of society.

Objective: Most current approaches in the field of gesture and sign language recognition disregard the need to handle sequence data during both training and evaluation. Using a dynamic programming-based tracking approach, the hands are tracked across a sequence of images; the images are then fed into a Convolutional Neural Network (CNN) for classification. Keras is used for training, provided that lighting conditions and background are suitable.

Methodology: The methodology used in this research consists of spotting and recognizing hand gestures for sign language recognition in Python using the Microsoft Kinect, convolutional neural networks (CNNs), and GPU acceleration. Instead of constructing complex handcrafted features, CNNs automate the process of feature construction. Finally, the approach and the findings are assessed and validated: the sign is extracted from the hand image and classified in order to recognize it.
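To illustrate the automated feature construction mentioned above, the following is a minimal sketch in plain Python of the 2D convolution operation that forms the basis of a CNN's first layer. The kernel values and toy image are illustrative only, not the filters learned in this work; in a trained network, Keras learns such kernels from the sign-image data.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel (Sobel-like); a trained CNN learns filters of this kind.
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]

# Toy 4x4 "hand image": bright left half, dark right half.
image = [[9, 9, 0, 0],
         [9, 9, 0, 0],
         [9, 9, 0, 0],
         [9, 9, 0, 0]]

feature_map = conv2d(image, kernel)  # -> [[27, 27], [27, 27]]
```

The strong, uniform response in the feature map shows the filter detecting the vertical edge between the bright and dark halves; a CNN stacks many such learned filters, followed by pooling and fully connected layers, to classify the hand sign.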

Result and Discussion: In this research, we attempted to refine the accuracy of hand gesture recognition with Kinect [4], and the classification experiment showed the great potential of neural networks for this task. The observed accuracy is 86.45%, more than 34% better than the chosen baseline methods.

Future Work: Our future efforts will focus on developing an accurate and dependable hand-sign gesture recognition system. This work shows that CNNs can be used to accurately recognize different signs of a sign language, and the generalization capacity of CNNs on spatio-temporal data can contribute to the broader research field of automatic sign language recognition.

MISS2021
Published January 28, 2022