IEEE Conference on Computer Vision and Pattern Recognition Workshops

Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild



Abstract

Computer classification of facial expressions requires large amounts of data, and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials and were simultaneously filmed using their webcams. They answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature tracker failures, and gender; 2) the locations of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again the stimulus videos; and 4) baseline performance of detection algorithms on this dataset. This data is available for distribution to researchers online; the EULA can be found at: http://www.affectiva.com/facial-expression-dataset-am-fed/.
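The abstract describes frame-by-frame labels (FACS action units, smile, head movements) for each video. As a minimal sketch of how such per-frame labels might be consumed, the snippet below parses a small CSV in a hypothetical layout (the column names `frame`, `AU02`, `AU04`, and `smile` are assumptions, not the dataset's documented schema) and computes the fraction of frames labeled as containing a smile:

```python
import csv
import io

# Hypothetical per-frame label file: AM-FED's actual file layout is not
# specified in the abstract, so these column names are illustrative only.
sample = io.StringIO(
    "frame,AU02,AU04,smile\n"
    "1,0,0,1\n"
    "2,1,0,1\n"
    "3,0,1,0\n"
)

def smile_fraction(label_file):
    """Return the fraction of frames whose 'smile' label is present (1)."""
    rows = list(csv.DictReader(label_file))
    smiling = sum(int(row["smile"]) for row in rows)
    return smiling / len(rows)

print(smile_fraction(sample))  # 2 of 3 frames are labeled smiling
```

The same pattern extends to any of the binary action-unit columns; a per-video summary like this is one common way to turn frame-level annotations into video-level statistics for benchmarking.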
