
Eyemotion: Classifying Facial Expressions in VR Using Eye-Tracking Cameras

IEEE Winter Conference on Applications of Computer Vision

Abstract

One of the main challenges of social interaction in virtual reality settings is that head-mounted displays occlude a large portion of the face, blocking facial expressions and thereby restricting social engagement cues among users. We present an algorithm to automatically infer expressions by analyzing only a partially occluded face while the user is engaged in a virtual reality experience. Specifically, we show that images of the user's eyes captured from an IR gaze-tracking camera within a VR headset are sufficient to infer a subset of facial expressions without the use of any fixed external camera. Using these inferences, we can generate dynamic avatars in real-time which function as an expressive surrogate for the user. We propose a novel data collection pipeline as well as a novel approach for increasing CNN accuracy via personalization. Our results show a mean accuracy of 74% (F1 of 0.73) among 5 'emotive' expressions and a mean accuracy of 70% (F1 of 0.68) among 10 distinct facial action units, outperforming human raters.
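To make the described pipeline concrete, below is a minimal PyTorch sketch of the two ideas the abstract names: a CNN that classifies single-channel IR eye-camera crops into the 5 "emotive" expression classes, and a personalization step that fine-tunes a pretrained model on a few labelled frames from one user. The architecture, the 64x64 input resolution, and the `personalize` hyperparameters are illustrative assumptions; the paper's actual network and personalization method are not specified in this abstract.

```python
# A minimal sketch, assuming a generic small CNN; not the authors' architecture.
import torch
import torch.nn as nn

NUM_EXPRESSIONS = 5  # the 5 "emotive" expressions reported in the abstract


class EyeExpressionCNN(nn.Module):
    """Classifies a single-channel IR eye image into expression classes."""

    def __init__(self, num_classes: int = NUM_EXPRESSIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def personalize(model: nn.Module,
                user_images: torch.Tensor,
                user_labels: torch.Tensor,
                steps: int = 20,
                lr: float = 1e-4) -> nn.Module:
    """Fine-tune a pretrained model on a handful of labelled frames from one
    user -- one plausible reading of 'increasing CNN accuracy via
    personalization'; the paper's actual approach may differ."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(user_images), user_labels)
        loss.backward()
        optimizer.step()
    return model


# Usage on dummy data: a batch of 8 grayscale 64x64 eye crops.
model = EyeExpressionCNN()
frames = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, NUM_EXPRESSIONS, (8,))
model = personalize(model, frames, labels)
print(model(frames).argmax(dim=1))  # predicted expression indices per frame
```

The same classifier head could be widened to 10 outputs for the facial-action-unit task the abstract also reports; everything else in the sketch would stay unchanged.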
