A Survey of Hand Gesture Recognition Methods for Sign Language: Research Gap, Trends, Challenges and Future Directions
Abstract
Okorie Emmanuel O*, Nachamada V Blamah and Gideon Dadik Bibu
Recent advancements in gesture and sign language recognition can be categorized into non-vision-based and vision-based techniques. The use of sensors, wearable gloves, microcontrollers, deep learning, computer vision and, more recently, virtual and augmented reality has made this an active research area. This paper reviews the trends and techniques used in recent work on gesture-based sign language recognition. The objectives of this study are to critically review state-of-the-art non-vision and vision-based approaches to gesture and sign language recognition, observe the trends in recent work, identify challenges in model design and algorithms, and suggest potential future research directions. A total of 110 relevant papers published between 1998 and 2022 are surveyed. The findings could inform future research plans, while the suggested ideas could help researchers better design and build gesture and sign language recognition systems to support the communication of people with speech and hearing impairments.

