Sensors (Basel, Switzerland)

Automatic vs. Human Recognition of Pain Intensity from Facial Expression on the X-ITE Pain Database



Abstract

Prior work on automated methods demonstrated that it is possible to recognize pain intensity from frontal faces in videos, whereas humans are commonly assumed to be far more adept at this task than machines. In this paper, we test this assumption by comparing the results achieved by two human observers with the results achieved by a Random Forest classifier (RFc) baseline model (called RFc-BL) and by three proposed automated models. The first proposed model is a Random Forest that classifies descriptors of Action Unit (AU) time series; the second is a modified MobileNetV2 CNN that classifies face images combining three points in time; and the third is a custom deep network that combines two CNN branches, using the same input as the MobileNetV2 plus knowledge of the RFc. We conduct experiments with the X-ITE phasic pain database, which comprises videotaped responses to heat and electrical pain stimuli, each at three intensities. Distinguishing these six stimulation types plus no stimulation was the main 7-class classification task for both the human observers and the automated approaches. Further, we conducted reduced 5-class and 3-class classification experiments, applied multi-task learning, and evaluated a newly proposed sample weighting method. Experimental results show that the pain assessments of the human observers are significantly better than guessing and outperform the automatic baseline approach (RFc-BL) by about 1%; however, human performance is quite poor, largely because pain at the intensities that may ethically be induced in experimental studies often does not show in the facial reaction. We discovered that downweighting those samples during training improves the performance for all samples. The proposed RFc and two-CNNs models (using the proposed sample weighting) significantly outperformed the human observers by about 6% and 7%, respectively.
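The abstract mentions that the modified MobileNetV2 classifies "face images that combine three points in time". A minimal sketch of one plausible reading of this, with illustrative (not the authors') frame choices and preprocessing: three grayscale face frames are stacked as the three channels of a single image, so a standard 3-channel CNN input carries temporal context.

```python
# Hypothetical sketch: combine three time points of a video into one
# 3-channel image for a CNN such as MobileNetV2. The frame indices and
# the use of random data are illustrative assumptions, not the authors'
# actual preprocessing pipeline.
import numpy as np

def combine_three_frames(video, t_early, t_mid, t_late):
    """Stack grayscale frames at three times into an H x W x 3 array,
    one time point per channel."""
    frames = [video[t] for t in (t_early, t_mid, t_late)]
    return np.stack(frames, axis=-1).astype(np.float32)

# 100 grayscale frames of a 224x224 face crop (random stand-in data).
video = np.random.rand(100, 224, 224)
img = combine_three_frames(video, 10, 50, 90)
print(img.shape)  # (224, 224, 3)
```

The resulting array has the same shape as an ordinary RGB image, so it can be fed to an off-the-shelf CNN without architectural changes to the input layer.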
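The sample weighting idea in the abstract, downweighting during training those samples in which the induced pain barely shows in the face, can be sketched as follows. The expressiveness score, the threshold, and the weight value here are illustrative assumptions; only the general mechanism (per-sample weights passed to the classifier) reflects the abstract.

```python
# Hypothetical sketch of downweighting weakly expressive samples when
# training a Random Forest. The "expressiveness" proxy, the 0.3 threshold,
# and the 0.25 weight are made-up illustrative values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 10))          # stand-in for AU time-series descriptors
y = rng.integers(0, 7, size=n)        # 7-class task: 6 stimulation types + none
expressiveness = rng.uniform(size=n)  # proxy: how strongly pain shows in the face

# Downweight weakly expressive samples rather than discarding them.
weights = np.where(expressiveness < 0.3, 0.25, 1.0)

rfc = RandomForestClassifier(n_estimators=100, random_state=0)
rfc.fit(X, y, sample_weight=weights)
pred = rfc.predict(X)
print(pred.shape)  # (300,)
```

Keeping the weakly expressive samples with reduced weight, instead of removing them, lets the model still see the full label distribution while limiting the pull of samples whose facial signal contradicts the stimulus label.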
