Applied Soft Computing

Facial expression recognition of intercepted video sequences based on feature point movement trend and feature block texture variation

Abstract

Facial Expression Recognition (FER) is an important subject of human-computer interaction and has long been a research area of great interest. Accurate Facial Expression Sequence Interception (FESI) and discriminative expression feature extraction are two major challenges for video-based FER. This paper proposes an FER framework for intercepted video sequences that uses feature point movement trend and feature block texture variation. First, facial feature points are marked by an Active Appearance Model (AAM) and the 24 most representative points are selected. Second, the facial expression sequence is intercepted from the face video by determining two key frames whose emotional intensities are the minimum and the maximum, respectively. Third, a trend curve representing the variation of the Euclidean distance between any two selected feature points is fitted, and the slopes at specific points on the curve are calculated. Finally, the Slope Set composed of the calculated slopes is combined with the proposed Feature Block Texture Difference (FBTD), which measures the texture variation of a facial patch, to form the final expression feature, which is input to a One-Dimensional Convolutional Neural Network (1DCNN) for FER. Five experiments are conducted, and average FER rates of 95.2%, 96.5%, and 97% on the Beihang University (BHU) facial expression database, the MMI facial expression database, and the combination of the two databases, respectively, show significant advantages of the proposed method over existing ones. (C) 2019 Elsevier B.V. All rights reserved.
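The pipeline in the abstract builds features from (a) the slopes of inter-point distance trend curves and (b) a texture difference over facial patches. The paper's exact curve-fitting procedure and FBTD formula are not given here, so the sketch below is purely illustrative: it computes the Euclidean-distance trend between two feature points across frames, uses simple finite differences as a stand-in for the fitted-curve slopes, and defines FBTD as the mean absolute gray-level difference between corresponding patches (a hypothetical definition).

```python
import math

def euclidean(p, q):
    """Euclidean distance between two (x, y) feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def distance_trend(frames, i, j):
    """Distance between feature points i and j in each frame of the sequence."""
    return [euclidean(f[i], f[j]) for f in frames]

def slopes(trend):
    """Finite-difference slopes along the trend curve
    (a stand-in for slopes sampled from the paper's fitted curve)."""
    return [trend[k + 1] - trend[k] for k in range(len(trend) - 1)]

def fbtd(patch_a, patch_b):
    """Feature Block Texture Difference, sketched as the mean absolute
    gray-level difference between corresponding patches (hypothetical)."""
    n = len(patch_a) * len(patch_a[0])
    return sum(abs(a - b)
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b)) / n

# Toy sequence: 3 frames, 2 feature points (e.g., mouth corners moving apart).
# Frame 0 plays the role of the minimum-intensity key frame, frame 2 the maximum.
frames = [
    [(10, 20), (14, 20)],
    [(9, 20), (15, 20)],
    [(8, 20), (16, 20)],
]
trend = distance_trend(frames, 0, 1)
print(trend)           # [4.0, 6.0, 8.0]
print(slopes(trend))   # [2.0, 2.0]
```

In the paper the slope set over all selected point pairs is concatenated with the FBTD values of the facial patches to form the feature vector fed to the 1DCNN; the sketch above only shows the two ingredients in isolation.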
