Informatics in Medicine Unlocked

Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods


Abstract

Real-time emotion recognition has been an active field of research over the past several decades. This work aims to classify the emotional expressions of physically disabled people (deaf, dumb, and bedridden) and children with autism, based on facial landmarks and electroencephalography (EEG) signals, using convolutional neural network (CNN) and long short-term memory (LSTM) classifiers. To this end, an algorithm for real-time emotion recognition was developed using virtual markers tracked by an optical flow algorithm that works effectively under uneven lighting, subject head rotation (up to 25°), different backgrounds, and various skin tones. Six facial emotions (happiness, sadness, anger, fear, disgust, and surprise) are collected using ten virtual markers. Fifty-five undergraduate students (35 male and 25 female) with a mean age of 22.9 years voluntarily participated in the experiment for facial emotion recognition. Nineteen undergraduate students volunteered to collect EEG signals. Initially, Haar-like features are used for face and eye detection. Virtual markers are then placed at defined locations on the subject's face based on a facial action coding system using a mathematical model approach, and the markers are tracked with the Lucas-Kanade optical flow algorithm. The distance between the center of the subject's face and each marker position is used as a feature for facial expression classification. This distance feature is statistically validated using a one-way analysis of variance at a significance level of p < 0.01. In addition, fourteen signals collected from the channels of the EEG signal reader (EPOC+) are used as features for emotion classification from EEG signals. Finally, the features are evaluated with five-fold cross-validation and given to the LSTM and CNN classifiers. We achieved a maximum recognition rate of 99.81% using the CNN for emotion detection from facial landmarks, whereas emotion detection from EEG signals achieved a maximum recognition rate of 87.25% using the LSTM classifier.
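
A minimal sketch of the facial-landmark branch described in the abstract, assuming a webcam input, OpenCV's bundled Haar cascade for face detection, and hypothetical marker offsets standing in for the paper's ten FACS-based virtual marker locations; all parameter values are illustrative, not the authors' settings.

```python
# Sketch: Haar-based face detection, virtual markers tracked with Lucas-Kanade
# optical flow, and distance-from-face-centre features for expression classification.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical offsets (fractions of the face box) for ten virtual markers:
# eyebrows, eyes, nose tip, mouth corners, lips, chin.
MARKER_OFFSETS = np.array([
    [0.30, 0.30], [0.70, 0.30],   # eyebrows
    [0.30, 0.45], [0.70, 0.45],   # eyes
    [0.50, 0.55],                 # nose tip
    [0.35, 0.75], [0.65, 0.75],   # mouth corners
    [0.50, 0.70], [0.50, 0.85],   # upper / lower lip
    [0.50, 0.95],                 # chin
], dtype=np.float32)

def init_markers(gray):
    """Detect the face with Haar-like features and place the virtual markers."""
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    pts = MARKER_OFFSETS * np.array([w, h], np.float32) + np.array([x, y], np.float32)
    return pts.reshape(-1, 1, 2)

def distance_features(pts):
    """Distance from the face centre (approximated here as the marker mean) to each marker."""
    pts = pts.reshape(-1, 2)
    centre = pts.mean(axis=0)
    return np.linalg.norm(pts - centre, axis=1)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
markers = init_markers(prev_gray)

while ok and markers is not None:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow tracks each virtual marker between frames.
    markers, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, markers, None)
    feats = distance_features(markers)  # feature vector handed to the CNN classifier
    prev_gray = gray
cap.release()
```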
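Likewise, a minimal sketch of the EEG branch, assuming 14-channel EPOC+ segments of an arbitrary window length, an illustrative LSTM size, and placeholder data; the five-fold cross-validation mirrors the abstract, but the layer sizes and training settings are assumptions rather than the paper's configuration.

```python
# Sketch: LSTM classifier over 14-channel EEG segments with five-fold cross-validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras import layers, models

N_CHANNELS, WINDOW, N_CLASSES = 14, 128, 6  # 14 EPOC+ channels, assumed window length, 6 emotions

def build_lstm():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_CHANNELS)),
        layers.LSTM(64),                              # assumed hidden size
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data: X holds EEG segments (trials, time steps, channels), y holds emotion labels 0..5.
X = np.random.randn(300, WINDOW, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=300)

accs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True).split(X, y):
    model = build_lstm()
    model.fit(X[train_idx], y[train_idx], epochs=10, batch_size=32, verbose=0)
    _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    accs.append(acc)
print("mean 5-fold accuracy:", np.mean(accs))
```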
