Document Type: Original Research

Authors

1 Department of Computer Science and Applications, Faculty of Prince Al-Hussein Bin Abdullah II for Information Technology, The Hashemite University, Zarqa, Jordan

2 Department of Computer Information Systems, Faculty of Prince Al-Hussein Bin Abdullah II for Information Technology, The Hashemite University, Zarqa, Jordan

Abstract

Background: Arabic Sign Language (ArSL) recognition remains technologically underdeveloped compared to American Sign Language (ASL). This disparity restricts communication accessibility for individuals with hearing impairments in Arabic-speaking regions, particularly in offline environments with limited computational resources.
Objective: This study aimed to develop a robust offline recognition system for ArSL by integrating the Scale-Invariant Feature Transform (SIFT) for feature extraction, Principal Component Analysis (PCA) for dimensionality reduction, and Convolutional Neural Networks (CNNs) for gesture classification.
Material and Methods: This experimental, quantitative study used a curated dataset of ArSL gestures obtained from Kaggle. Preprocessing involved normalization, contrast enhancement, and noise reduction. SIFT was used to extract invariant features, while PCA reduced computational complexity. CNN architectures were trained to classify gestures and were evaluated using accuracy, precision, recall, F1-score, loss, the confusion matrix, and the Receiver Operating Characteristic (ROC) curve (a minimal code sketch of this pipeline follows the abstract).
Results: The system achieved an accuracy of 86.64%, surpassing conventional models such as SIFT combined with Support Vector Machines (SIFT+SVM), which reached 84.45%. The integration of PCA and SIFT improved recognition efficiency and reduced model complexity, and the deep learning approach showed superior adaptability and precision across gesture types.
Conclusion: This study presents a robust offline ArSL recognition system that enhances communication, education, and social participation for individuals with hearing impairments in Arabic-speaking regions.
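
Pipeline sketch. For concreteness, the code below is a minimal sketch of the preprocessing, SIFT feature-extraction, and PCA reduction stages summarized in the Methods, not the authors' implementation. It assumes Python with opencv-python (>= 4.4, where SIFT is in the main module) and scikit-learn; the CLAHE contrast method, component count, and image size are illustrative choices not specified in the abstract, and the CNN classifier stage is omitted.

# Minimal sketch (hypothetical parameters) of preprocessing -> SIFT -> PCA;
# the downstream CNN classifier is omitted for brevity.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def extract_features(gray, n_components=32):
    """Preprocess one grayscale gesture image; return PCA-reduced SIFT descriptors."""
    # Normalization: stretch intensities to the full 0-255 range.
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    # Contrast enhancement via CLAHE (one common choice; the abstract
    # does not name the specific method used).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(norm)
    # Noise reduction: light Gaussian blur.
    denoised = cv2.GaussianBlur(enhanced, (3, 3), 0)
    # SIFT: scale- and rotation-invariant 128-D descriptors per keypoint.
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(denoised, None)
    if descriptors is None or descriptors.shape[0] < n_components:
        return np.zeros((0, n_components), dtype=np.float32)
    # PCA: project each 128-D descriptor onto its top principal components,
    # shrinking the representation fed to the downstream classifier.
    pca = PCA(n_components=n_components)
    return pca.fit_transform(descriptors).astype(np.float32)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (224, 224), dtype=np.uint8)  # stand-in image
    print(extract_features(img).shape)  # (num_keypoints, 32)

In a full system, the PCA basis would be fit once on descriptors pooled across the training set rather than per image, and the reduced features would feed the CNN evaluated with the metrics listed in the Methods.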

Keywords