IEEE International Conference on Artificial Intelligence and Virtual Reality

Gesture and Action Discovery for Evaluating Virtual Environments with Semi-Supervised Segmentation of Telemetry Records



Abstract

In this paper, we propose a novel pipeline for semi-supervised behavioral coding of videos of users testing a device or interface, with an eye toward human-computer interaction evaluation for virtual reality. Our system applies existing statistical techniques for time-series classification, including e-divisive change point detection and 'Symbolic Aggregate approXimation' (SAX) with agglomerative hierarchical clustering, to 3D pose telemetry data. These techniques create classes of short segments of single-person video data: short actions of potential interest called 'micro-gestures'. A long short-term memory (LSTM) layer then learns these micro-gestures from pose features generated purely from video via a pre-trained OpenPose convolutional neural network (CNN) to predict their occurrence in unlabeled test videos. We present and discuss the results from testing our system on the single-user pose videos of the CMU Panoptic Dataset.
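
The sketch below illustrates only the unsupervised discovery stage named in the abstract (segment, symbolize with SAX, group with agglomerative hierarchical clustering), as a minimal Python example. The toy signal, the fixed change points standing in for e-divisive output, the hand-rolled PAA/quantile SAX, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: SAX symbolization of pose-telemetry segments followed by
# agglomerative hierarchical clustering into candidate "micro-gesture" classes.
# Everything here (toy signal, change points, parameters) is assumed, not from the paper.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import norm

def sax_word(segment, n_paa=8, alphabet_size=4):
    """Convert one 1-D telemetry segment to a SAX word (vector of symbol indices)."""
    x = (segment - segment.mean()) / (segment.std() + 1e-8)   # z-normalize
    # Piecewise Aggregate Approximation: mean of each of n_paa equal-sized chunks
    paa = np.array([chunk.mean() for chunk in np.array_split(x, n_paa)])
    # Breakpoints from the standard normal so the symbols are roughly equiprobable
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return np.digitize(paa, breakpoints)                       # symbols in 0..alphabet_size-1

# Toy telemetry: one joint coordinate over time, split at hypothetical change
# points such as those an e-divisive detector would return.
rng = np.random.default_rng(0)
signal = np.concatenate([
    np.sin(np.linspace(0, 6, 120)) + 0.05 * rng.standard_normal(120),
    0.5 * rng.standard_normal(80),
    np.sin(np.linspace(0, 6, 120)) + 0.05 * rng.standard_normal(120),
])
change_points = [0, 120, 200, 320]
segments = [signal[a:b] for a, b in zip(change_points[:-1], change_points[1:])]

# Cluster the SAX words with average-linkage agglomerative clustering; each
# resulting cluster is treated as one candidate micro-gesture class.
words = np.array([sax_word(s) for s in segments])
labels = fcluster(linkage(words, method="average"), t=2, criterion="maxclust")
print(labels)   # e.g., [1 2 1]: the two sinusoidal segments land in the same class
```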
