IEEE Winter Conference on Applications of Computer Vision

Eyemotion: Classifying Facial Expressions in VR Using Eye-Tracking Cameras



Abstract

One of the main challenges of social interaction in virtual reality settings is that head-mounted displays occlude a large portion of the face, blocking facial expressions and thereby restricting social engagement cues among users. We present an algorithm to automatically infer expressions by analyzing only a partially occluded face while the user is engaged in a virtual reality experience. Specifically, we show that images of the user's eyes captured from an IR gaze-tracking camera within a VR headset are sufficient to infer a subset of facial expressions without the use of any fixed external camera. Using these inferences, we can generate dynamic avatars in real time, which function as an expressive surrogate for the user. We propose a novel data collection pipeline as well as a novel approach for increasing CNN accuracy via personalization. Our results show a mean accuracy of 74% (F1 of 0.73) among 5 'emotive' expressions and a mean accuracy of 70% (F1 of 0.68) among 10 distinct facial action units, outperforming human raters.
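The abstract reports mean F1 scores alongside mean accuracy for the multi-class expression classifier. For readers unfamiliar with how such a score is aggregated over classes, here is a minimal sketch of macro-averaged F1 (the unweighted mean of per-class F1 scores); the paper does not specify its exact averaging scheme, so this is an illustrative assumption, and the function and variable names are hypothetical:

```python
def macro_f1(y_true, y_pred, labels):
    """Macro F1: mean over labels of the per-label harmonic
    mean of precision and recall."""
    f1_scores = []
    for label in labels:
        # Count true positives, false positives, false negatives for this label.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    # Unweighted mean over classes.
    return sum(f1_scores) / len(f1_scores)
```

For example, with `y_true = ['smile', 'smile', 'neutral', 'neutral']` and `y_pred = ['smile', 'neutral', 'neutral', 'neutral']`, the per-class F1 scores are 0.667 and 0.8, giving a macro F1 of about 0.733.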
