Rational Design of a Novel Smart Mobile Communication System for Arabian Deaf and Dumb People

Deafness is among the most substantial health problems worldwide and can lead to various personal, economic, and social crises. It is therefore critical to develop an efficient way to facilitate communication between the deaf-dumb impaired and ordinary people. Herein, we have rationally designed a new digitally computerized and mobile smart system as an efficient communication tool between deaf-impaired and ordinary Arab people. The system is based on two main steps. The first creates a digital output for the hand gestures using glove flex sensors equipped with a three-axis accelerometer, controlled by a microcontroller; the digital results are compared against a word-based database, since Arabs use expressions rather than the alphabet in their communication. The second step translates the outputs of the first stage into written text and voice. The newly developed system also allows Arab deaf people to translate the words of ordinary people into gestures using a speech recognition system, with an impressive accuracy of over 90%, without the need for a webcam, colored gloves, or an online translator. The presented system can run on any Android or Windows device.


Introduction
According to the World Health Organization, there are more than 360 million deaf people around the world [1], and this number is expected to increase significantly in the future owing to the uncontrolled growth of chronic diseases. At least 10% of Arabs are deaf impaired (e.g., 23.5% of the population in Saudi Arabia and 16% in Egypt). Various efforts have been dedicated to solving this critical issue and developing an efficient way to facilitate communication between the deaf impaired and ordinary people [2][3][4][5][6][7]. Most deaf and dumb people around the world use American Sign Language (ASL) or Spanish Sign Language [8]. They must be familiar with English, as they use 26 signs for the English alphabet to spell words from the English language [9][10][11][12]. Many articles used a webcam placed in front of an impaired person wearing colored rings on the fingers; alphabet signs are detected and translated into speech [13]. The disadvantages of these systems are that the person must stand in front of the camera at all times, they are time-consuming, and the person cannot manipulate the data himself [14].
Some systems used a laptop camera to capture an image of a gesture made by the right hand at a one-foot distance from the camera, against a background without any other objects. The accuracy was good under different light intensities [15]. Others used video frames, detected the hand shape, and applied a classification system [16].
Research has also employed smart gloves, which depend on flexible resistors and an accelerometer for sensing the tilt of the hand, to overcome communication problems [17][18][19]. The signs are collected to form words and displayed as text or produced as voice. Wireless transmission of the signals is achieved with the help of an RF transceiver [20,21], which makes the system more comfortable than others. The letter sign is sent via a 30-meter wireless module to a PC, where stored sign data exist; the sign is compared, and the letters or words are displayed [9,22]. To avoid cables, gesture image processing was introduced, based on image techniques, hand segmentation, point and color matching with a database, and conversion to text [23,24].
Another study tracks the hand motion and detects its center of gravity, which provides the speed over any conducted time pattern [25][26][27]; by comparison with the stored data, the sign is recognized. Some systems used neural networks for gesture recognition, based on training a neural network (NN) with hand-sign images [28][29][30]. An NN combined with a fuzzy system gave good accuracy in sign recognition and classification [31]. Other systems extract signs from videos: features such as the hand contour are extracted from the image, and the corresponding alphabet or meaning is described [32][33][34]. The data-glove sensor technique has more advantages than the image-based one and has become a promising tool for communication [17,35,36]. Hand gestures can also play a significant role in other fields, such as robotics and automatic control [22,37]. Some projects used capacitive touch sensors, while others mostly used flexible resistors.
However, the limitations of the available data and systems affect the efficiency and speed of the process. For Arab people, the letters differ from the English alphabet, and words with the same letters can have different meanings, pronunciations, and signs. There is therefore a crucial need to develop a new system for Arab deaf people.
Inspired by this, herein we rationally designed a new smart mobile system for effective communication between deaf-impaired and healthy people. This is successfully achieved in two main stages: glove flex sensors equipped with a three-axis accelerometer, controlled by a microcontroller, generate digital outputs for the hand gestures, and these outputs are then converted into written text and voice with an accuracy of 90%. Our newly developed system is mobile, low-cost, digital, fast, and easy to handle, without the need for a webcam, colored gloves, or an online translator. Moreover, the system promptly converts the outputs into whole words instead of alphabets/letters. Finally, the newly designed system can be easily integrated on any Android or Windows system.

Materials and Methods
In this paper, the communication problem of deaf and dumb people who speak Arabic and cannot express themselves is addressed. The proposed system is divided into two stages. The first translates the gestures of deaf people into words and voice: flexible sensors, an accelerometer, and an Arduino Mega 2560 (based on the ATmega2560 microcontroller) were used to receive data and transmit them to a laptop. The second stage translates the words of an ordinary person into sign language using the speech library in C#. To design the smart gloves, we used flex sensors and an accelerometer fitted on a glove, together with a wireless speech recognition system.

Smart Gloves Design
In this stage, gloves capture the hand gesture made by a person, which is converted into speech and text. Flexible sensors are fitted on the gloves, and an accelerometer on the wrist is used for motion capture. The bending degree of the finger gestures is converted into voltage using the voltage divider rule. The microcontroller performs analog-to-digital conversion of the data from the flex sensors. The digitized data are then matched against the stored data. A block diagram of the proposed work is shown in Fig. 1.

Flexible Resistance
A flexible resistor was used to capture the motion of the person's fingers, as shown in Fig. 2. It is a normal resistor whose value changes with bending: approximately 22.5 kΩ without bending and approximately 75.6 kΩ at maximum bending. The output voltage is calculated for the different resistances using a voltage divider with R2 = 20 kΩ. Flexible resistors were used for the five fingers of a glove and then connected to the Arduino Mega analog pins.

Accelerometer
The accelerometer detects the moving directions of the hand. The hand can make a specific shape, but the orientation gives a different meaning, so it is important to detect the hand tilt.
The analog output is converted into digital using the Arduino inertial measurement unit (IMU) shown in Fig. 3, which was used to determine the orientation degree.

Wireless Connection
A wireless connection was used to make the person comfortable and to provide easy movement from one place to another, without the impediment of wires. Fig. 5 shows the prototype used, while Fig. 6 shows the wireless connection.

Arduino Mega and Processing
An SQL Server database was used to store and process the sign data.

First Stage
In the first stage, the flex resistors capture the motion, and a voltage divider converts the change in resistance into voltage. The Arduino ADC digitizes these voltages, while the angle and motion of the person's hand are captured by the MPU-9150; the wireless module then sends the data from both gloves to the laptop to be analyzed. Finally, the output is presented in the form of an image and a word. The system is summarized in the flow chart shown in Fig. 7.
The flexible resistance and MPU-9150 circuit is shown in Fig. 8. The flex resistors f1 to f5 feed the analog inputs at J1, while J2 carries the power source and reset. The wireless TX and RX are connected to J5, the interrupt to J9, and the Arduino TX and RX to J6.

Second Stage: Transferring Voice into Text
As normal people speak, the voice is processed and translated into text, and the corresponding sign is retrieved from the database so that deaf people can read it. This stage was implemented using the System.Speech library in C#. A speech recognition application typically performs the following basic operations:
1. Initialize the speech recognizer.
2. Create a speech recognition grammar.
3. Load the grammar into the speech recognizer.
4. Register for speech recognition event notification.
5. Create a handler for the speech recognition event.
After the voice is recognized, the picture for the meaning appears, as shown in Fig. 9, and the word is displayed in both Arabic and English. A database containing all the words recognized by the program, in both Arabic and English, is used. The system also allows the user to add new words as a binary sign code. The notepad shown in Fig. 10 displays the codes for some words: a one means that the finger is flat and used, while a zero means that it is not used; B stands for back, F for forward, R for right, and U for up.

Results
Ten persons with different hand sizes and different voice pitches performed sign language.
They tested both words and sentences. The accuracy was 71 to 82%, rising to 82 to 92% for persons with large hands. It was observed that the better the gloves fit the hand, the lower the error. The accuracy of transferring the glove signs into voice and text was 89% on average, and the accuracy of speech recognition and translation into signs was similarly high. As training increases, the error decreases.

Conclusion
The accuracy was relatively high: it reached 92.5% for a person who used well-fitted gloves, and most cases achieved an accuracy of at least 75%. The highest accuracy with good preparation was over 85%. The results show good performance and a fast response, with the result appearing in under one second after each gesture. By increasing the dataset, the results could be improved further.
A hybrid of methods may enhance the performance. The system improves on other systems that depend on alphabet sign conversion, as this one relies on word- and sentence-level sign translation.
Another advantage is that it is easy for deaf and dumb users to build their own dataset of signs: the user records his sign, specifies the position of each finger, and stores it in the database. The work is prepared for use as a mobile application and could improve the quality of life and the communication methods of this group of people.
Funding: This work is supported by the Qatar University Grant (IRCC179).

Authors Contributions:
The authors contributed equally to the work. Dr. Amal designed the sensors, Dr. Kamel arranged the data, and Prof. Aboubakr supervised the work.

Conflicts of Interest
There are no conflicts to declare.

Fig. 1: Block diagram illustrating our newly developed system.