12th IEEE International Conference on Automatic Face and Gesture Recognition

EGGNOG: A Continuous, Multi-modal Data Set of Naturally Occurring Gestures with Ground Truth Labels

Abstract

People communicate through words and gestures, but current voice-based computer interfaces such as Siri exploit only words. This is a shame: human-computer interfaces would be natural if they incorporated gestures as well as words. To support this goal, we present a new dataset of naturally occurring gestures made by people working collaboratively on blocks world tasks. The dataset, called EGGNOG, contains over 8 hours of RGB video, depth video, and Kinect v2 body position data of 40 subjects. The data has been semi-automatically segmented into 24,503 movements, each of which has been labeled according to (1) its physical motion and (2) the intent of the participant. We believe this dataset will stimulate research into natural and gestural human-computer interfaces.
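As a rough illustration of the structure the abstract describes, the Python sketch below models one labeled movement segment: three synchronized modalities (RGB video, depth video, Kinect v2 body positions) plus the two-level label (physical motion and participant intent). All field names, the index-file layout, and the loader are hypothetical assumptions for illustration, not EGGNOG's actual schema or tooling.

```python
# Hypothetical sketch of one EGGNOG segment record.
# Field names and file layout are assumptions, not the dataset's real schema.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class EggnogSegment:
    """One semi-automatically segmented movement (24,503 total in EGGNOG)."""
    subject_id: int      # one of the 40 subjects
    start_frame: int     # segment boundaries within the session recording
    end_frame: int
    motion_label: str    # (1) the physical motion performed
    intent_label: str    # (2) the participant's communicative intent
    rgb_video: Path      # RGB video clip covering this segment
    depth_video: Path    # aligned depth video
    skeleton_csv: Path   # Kinect v2 body-position (joint) stream


def load_segments(index_file: Path) -> list[EggnogSegment]:
    """Parse a hypothetical tab-separated index listing every labeled segment."""
    segments = []
    for line in index_file.read_text().splitlines():
        subj, start, end, motion, intent, rgb, depth, skel = line.split("\t")
        segments.append(EggnogSegment(int(subj), int(start), int(end),
                                      motion, intent,
                                      Path(rgb), Path(depth), Path(skel)))
    return segments
```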
