Hearing loss is one of the most frequent sensory deficits in the elderly population. Its correct assessment becomes difficult for audiologists when communication with the patient is severely impaired. To ease this task, this paper proposes a methodology for classifying eye-gesture reactions to auditory stimuli using machine learning approaches. After extracting features from the existing videos, we applied several classifiers and improved the detection of the most important classes through a novel use of oversampling techniques. The methodology showed promising results, with true positive rates above 0.96 for the critical classes and overall classification rates above 97%, paving the way for its inclusion in a fully automated tool.
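The abstract does not specify which oversampler or classifier was used, so the sketch below illustrates the general idea only: duplicating minority-class samples (here, plain random oversampling; SMOTE-style variants are common alternatives) before training a classifier, then checking the per-class true positive rate. The `random_oversample` helper, the random-forest classifier, and the synthetic data standing in for the gestural-reaction features are all assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

def random_oversample(X, y, rng):
    """Duplicate minority-class samples until every class matches the majority count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for cls, count in zip(classes, counts):
        if count < target:
            idx = rng.choice(np.flatnonzero(y == cls), size=target - count, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.vstack(Xs), np.concatenate(ys)

# Imbalanced toy data standing in for the eye-gesture features extracted from video.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance only the training split, then evaluate on untouched test data.
rng = np.random.default_rng(0)
X_bal, y_bal = random_oversample(X_tr, y_tr, rng)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
per_class_tpr = recall_score(y_te, clf.predict(X_te), average=None)
print(per_class_tpr)  # one true positive rate per class
```

Oversampling is applied after the train/test split so that duplicated samples never leak into the evaluation set, which would otherwise inflate the reported rates.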