International Journal of Computer Vision

First-Person Activity Recognition: Feature, Temporal Structure, and Prediction



Abstract

This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects at the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. Furthermore, we present a novel algorithm for early recognition (i.e., prediction) of activities from first-person videos, which allows us to infer ongoing activities at their early stage. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos and perform early recognition reliably.
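The abstract's "multi-channel kernels to integrate global and local motion information" refers to the standard technique of combining one kernel per feature channel into a single kernel for a classifier. A minimal sketch follows; the paper's actual motion descriptors and kernel choices are not given here, so the exponential χ² kernel, the uniform channel weights, and the toy histogram features below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0):
    """Exponential chi-square kernel between rows of X and Y.

    Rows are non-negative histograms (e.g., bag-of-words motion features).
    """
    eps = 1e-10
    d = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        # pairwise chi-square distance from x to every row of Y
        d[i] = 0.5 * np.sum((x - Y) ** 2 / (x + Y + eps), axis=1)
    return np.exp(-gamma * d)

def multi_channel_kernel(channels_a, channels_b, weights=None):
    """Weighted sum of per-channel kernels (weights default to uniform)."""
    if weights is None:
        weights = [1.0 / len(channels_a)] * len(channels_a)
    return sum(w * chi2_kernel(A, B)
               for w, A, B in zip(weights, channels_a, channels_b))

# Toy example: 4 videos, two hypothetical channels
# (a "global motion" histogram and a "local motion" histogram per video).
rng = np.random.default_rng(0)
glob = rng.random((4, 8));  glob /= glob.sum(axis=1, keepdims=True)
loc  = rng.random((4, 16)); loc  /= loc.sum(axis=1, keepdims=True)
K = multi_channel_kernel([glob, loc], [glob, loc])
```

The combined matrix `K` is symmetric positive semidefinite (a weighted sum of valid kernels), so it can be passed directly to a kernel classifier, e.g. an SVM with a precomputed kernel.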

