Article
A UNIFIED DEEP LEARNING FRAMEWORK FOR INDIAN SIGN LANGUAGE INTERPRETATION AND SPEECH RECOGNITION
This work presents a unified deep learning–based Indian Sign Language (ISL) and speech recognition system designed to bridge communication gaps between hearing-impaired individuals and the general population [1], [7]. The proposed framework integrates image-based gesture recognition with natural speech-to-text conversion to enable seamless two-way interaction, following earlier efforts in sign-to-speech and gesture-to-text translation systems [1], [2], [4]. Convolutional neural networks (CNNs) perform feature extraction from hand-gesture images, while recurrent and transformer-based models handle the speech recognition task [11], [12]. The system aims to deliver high accuracy, robustness to noise, and real-time performance, improving upon traditional ISL recognition and glove-based solutions [2], [6], [9] and making it suitable for assistive communication applications [3], [5], [8].
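The abstract names a CNN front end for gesture feature extraction but does not reproduce implementation details here; the following is a minimal illustrative sketch in PyTorch of such a gesture classifier, assuming 64x64 grayscale hand-gesture crops and a 36-class ISL digit/alphabet label set (both are assumptions for illustration, not details taken from the paper).

# Illustrative sketch only: a small CNN gesture classifier of the kind the
# abstract describes. Input size (1x64x64 grayscale crops) and the number of
# ISL classes (36) are assumed values, not taken from the paper.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Convolutional feature extraction followed by a fully connected classifier.
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = GestureCNN()
    dummy = torch.randn(4, 1, 64, 64)   # batch of 4 grayscale gesture crops
    print(model(dummy).shape)           # torch.Size([4, 36]) -> per-class logits

In a complete pipeline of the kind described, the logits would be trained with cross-entropy on labeled gesture images, and the recognized sign would be passed to the text or speech output stage.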