International Conference on Control, Automation and Systems
First-Person Activity Recognition Based on Three-Stream Deep Features


Abstract

In this paper, we present a novel three-stream deep feature fusion technique for recognizing interaction-level human activities from a first-person viewpoint. Specifically, the proposed approach separates human motion from camera ego-motion in order to focus on the human's movement. Features capturing human motion and camera ego-motion are extracted by the three-stream architecture and fused by considering the relationship between human action and camera ego-motion. To validate the effectiveness of our approach, we perform experiments on the UTKinect-FirstPerson dataset and achieve state-of-the-art performance.
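The fusion step described in the abstract can be sketched in pure Python. This is a minimal illustrative sketch, not the paper's method: the stream names, feature dimensions, scalar re-weighting, and concatenation-based fusion are all assumptions, whereas the paper's actual fusion exploits the relationship between human action and camera ego-motion within a deep architecture.

```python
# Hypothetical sketch of three-stream feature fusion.
# Stream names and weights are illustrative assumptions; the paper's
# fusion models the human-action / camera-ego-motion relationship,
# which is simplified here to fixed scalar weights per stream.

def fuse_streams(human_motion, ego_motion, appearance,
                 w_human=1.0, w_ego=0.5, w_app=1.0):
    """Concatenate per-stream feature vectors after scalar re-weighting."""
    fused = []
    for weight, features in ((w_human, human_motion),
                             (w_ego, ego_motion),
                             (w_app, appearance)):
        fused.extend(weight * f for f in features)
    return fused

# Example with three toy 4-dimensional feature vectors.
fused = fuse_streams([1.0] * 4, [2.0] * 4, [3.0] * 4)
print(len(fused))  # 12
```

In a real pipeline each argument would be the output of one convolutional stream; concatenation followed by a classifier is a common baseline for multi-stream fusion.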
