Pattern Recognition: The Journal of the Pattern Recognition Society

Trajectory aligned features for first person action recognition


Abstract

Egocentric videos are characterized by the first person view they provide. With the popularity of Google Glass and GoPro, use of egocentric videos is on the rise, and with the substantial increase in their number, the value and utility of recognizing the wearer's actions in such videos has increased as well. Unstructured camera movement due to the wearer's natural head motion causes sharp changes in the visual field of the egocentric camera, so many standard third person action recognition techniques perform poorly on such videos. Objects present in the scene and the wearer's hand gestures are the most important cues for first person action recognition, but they are difficult to segment and recognize in an egocentric video. We propose a novel representation of first person actions derived from feature trajectories. The features are simple to compute using standard point tracking and, unlike many previous approaches, do not assume segmentation of hands/objects or recognition of object or hand pose. We train a bag of words classifier with the proposed features and report a performance improvement of more than 11% on publicly available datasets. Although not designed for that particular case, we show that our technique can also recognize the wearer's actions when hands or objects are not visible. (C) 2016 Elsevier Ltd. All rights reserved.
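The pipeline the abstract describes — track points across frames, turn each track into a motion descriptor, then quantize descriptors into a bag-of-words histogram for classification — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the descriptor (path-length-normalized frame-to-frame displacements) and the random stand-in codebook are assumptions; in practice the tracks would come from a point tracker such as pyramidal Lucas-Kanade and the codebook from k-means over training descriptors.

```python
import numpy as np

def trajectory_descriptors(tracks):
    """Convert point tracks of shape (T, N, 2) into per-track descriptors:
    concatenated frame-to-frame displacements, normalized by total path
    length (an illustrative trajectory-shape descriptor, not necessarily
    the paper's)."""
    disp = np.diff(tracks, axis=0)                               # (T-1, N, 2)
    flat = disp.transpose(1, 0, 2).reshape(tracks.shape[1], -1)  # (N, 2*(T-1))
    norms = np.abs(flat).sum(axis=1, keepdims=True) + 1e-8
    return flat / norms

def bow_histogram(desc, codebook):
    """Hard-assign each descriptor to its nearest codeword and return an
    L1-normalized bag-of-words histogram suitable for a BoW classifier."""
    d2 = ((desc[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy example: 8 tracked points over 6 frames, drifting right with noise.
rng = np.random.default_rng(0)
tracks = np.cumsum(rng.normal([1.0, 0.0], 0.1, size=(6, 8, 2)), axis=0)
desc = trajectory_descriptors(tracks)            # shape (8, 10)
codebook = rng.normal(size=(4, desc.shape[1]))   # stand-in for a k-means codebook
hist = bow_histogram(desc, codebook)             # length-4 histogram summing to 1
```

One such histogram per video clip would then be fed to the bag-of-words classifier mentioned in the abstract.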

