Human-Computer Interaction (HCI) is becoming increasingly important in the modern world. The widespread use of computers suggests that the ability to operate them is just as essential for visually impaired persons as for sighted ones. Although a large amount of work has been done on gesture-based human-computer interfaces, blind users still find it difficult to interact with computers. A major obstacle is the lack of knowledge about blind users' preferences regarding hand gestures. The mouse and keyboard are the basic input devices for interacting with a computer, and people who are sightless find it difficult to use these means of HCI. Braille systems are used by blind people, but they also have a drawback: a Braille device has only 64 keys, whereas a standard computer keyboard has 104. In many applications, deep learning techniques have been shown to outperform classic approaches; accordingly, we use a convolutional neural network to classify hand gestures. The proposed system has four main phases: dataset collection, pre-processing, feature extraction, and classification. A hand gesture captured by the camera is recognised, classified, and mapped to the corresponding symbol (alphabets, digits, etc.). The matched output is saved to a file, and audio feedback is given to the blind user. One real-time application of the proposed system is in competitive examinations for blind people. Experimental results show that the prediction accuracy of hand gesture recognition reaches 90% on approximately 332 samples.
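As a minimal sketch of the pre-processing and CNN classification phases described above, the snippet below builds a small convolutional classifier in Keras. The input resolution (64×64 grayscale), the number of classes (36, assuming 26 letters plus 10 digits), and all layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of the pre-processing and classification phases.
# Assumptions (not specified in the abstract): 64x64 grayscale input,
# 36 output classes (26 alphabets + 10 digits), Keras as the framework.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 36          # assumed: 26 letters + 10 digits
IMG_SIZE = (64, 64)       # assumed input resolution

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize a captured grayscale frame and scale pixels to [0, 1]."""
    resized = tf.image.resize(frame[..., None], IMG_SIZE).numpy()
    return resized / 255.0

def build_model() -> tf.keras.Model:
    """Small CNN: conv layers extract features, dense layers classify."""
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

After training, the predicted class index would be mapped to its symbol (letter or digit), written to the output file, and passed to a text-to-speech engine for the audio feedback step.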