IEEE International Conference on Robotics and Automation

Arbitrary view action recognition via transfer dictionary learning on synthetic training data

Abstract

Human action recognition is an important problem in robotic vision. Traditional recognition algorithms usually require knowledge of the view angle, which is not always available in robotic applications such as active vision. In this paper, we propose a new framework to recognize actions from arbitrary views. A main feature of our algorithm is that view-invariance is learned from synthetic 2D and 3D training data using transfer dictionary learning. This guarantees the availability of training data and removes the hassle of obtaining real-world video from specific viewing angles. The result of the process is a dictionary that can project real-world 2D video into a view-invariant sparse representation, which in turn facilitates the training of a view-invariant classifier. Experimental results on the IXMAS and N-UCLA datasets show significant improvements over existing algorithms.
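To make the transfer idea concrete, the following is a minimal, simplified sketch of coupling synthetic 2D and 3D features through a shared dictionary and then encoding 2D-only test data with the 2D part of that dictionary. It is not the authors' formulation: the stacked-feature construction, the scikit-learn classes (DictionaryLearning, SparseCoder, LinearSVC), the feature dimensions, and the random placeholder data are all illustrative assumptions.

    # Illustrative sketch only: placeholder features stand in for descriptors
    # extracted from synthetic 2D renderings and 3D data of the same actions.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, SparseCoder
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_clips, d2, d3, n_atoms = 200, 64, 32, 48
    X2d = rng.standard_normal((n_clips, d2))   # synthetic 2D (view-dependent) features
    X3d = rng.standard_normal((n_clips, d3))   # synthetic 3D (view-invariant) features
    y = rng.integers(0, 5, size=n_clips)       # action labels for the synthetic clips

    # Learn one dictionary over stacked [2D | 3D] features so that a single sparse
    # code must reconstruct both modalities; the code is thereby pushed toward the
    # view-invariant information the two modalities share.
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=20, random_state=0)
    codes_train = dl.fit_transform(np.hstack([X2d, X3d]))
    D_2d = dl.components_[:, :d2]              # sub-dictionary covering the 2D block only

    # Train a classifier on the (approximately view-invariant) sparse codes.
    clf = LinearSVC().fit(codes_train, y)

    # At test time only real-world 2D video features are available; encode them with
    # the 2D sub-dictionary to recover codes in the same space as the training codes.
    X2d_test = rng.standard_normal((10, d2))
    coder = SparseCoder(dictionary=D_2d, transform_algorithm='lasso_lars', transform_alpha=1.0)
    print(clf.predict(coder.transform(X2d_test)))

In practice the 2D and 3D descriptors would come from synthetic renderings of the same action clips, and the sparse codes serve as the view-invariant representation on which the classifier is trained.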
