IEEE Transactions on Image Processing

Action Recognition in Still Images With Minimum Annotation Efforts



Abstract

We focus on the problem of still-image-based human action recognition, which essentially involves making predictions by analyzing human poses and their interactions with objects in the scene. Besides image-level action labels (e.g., riding, phoning), existing works usually require human bounding boxes as additional input during both training and testing to help characterize the underlying human-object interactions. We argue that this additional input requirement may severely limit potential applications and is not strictly necessary. To this end, this paper develops a systematic approach to the challenging problem of minimal annotation effort, i.e., performing recognition with only image-level action labels in the training stage. Experimental results on three benchmark data sets demonstrate that, compared with state-of-the-art methods that have privileged access to additional human bounding-box annotations, our approach achieves comparable or even superior recognition accuracy using only action annotations in training. Interestingly, as a by-product, our approach is in many cases able to segment out the precise regions of the underlying human-object interactions.
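
The abstract does not detail the method, but the setting it describes, training an action classifier from image-level labels only and obtaining rough interaction regions as a by-product, can be illustrated with a generic weakly supervised baseline. The sketch below is not the paper's approach; it uses a standard ResNet-50 classifier with class activation maps for coarse localization, and the class count, dataset, and hyperparameters are illustrative assumptions.

    # Minimal sketch (NOT the paper's method): a classifier trained from
    # image-level action labels only, with class activation maps (CAM) as a
    # rough localization by-product. NUM_ACTIONS and hyperparameters are
    # hypothetical placeholders.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    NUM_ACTIONS = 10  # hypothetical number of action classes

    # Backbone with a global-average-pooled classification head:
    # image-level labels are sufficient to train it.
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ACTIONS)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)

    def train_step(images, action_labels):
        """One optimization step using only image-level action labels."""
        backbone.train()
        logits = backbone(images)              # (B, NUM_ACTIONS)
        loss = criterion(logits, action_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    @torch.no_grad()
    def class_activation_map(image, class_idx):
        """Rough localization by-product: weight the last conv feature maps
        by the classifier weights of a chosen class (standard CAM, not the
        paper's segmentation procedure)."""
        backbone.eval()
        features = {}
        def hook(_module, _inputs, output):
            features["conv"] = output
        handle = backbone.layer4.register_forward_hook(hook)
        backbone(image.unsqueeze(0))           # forward pass to fill the hook
        handle.remove()
        fmap = features["conv"][0]             # (C, H, W)
        weights = backbone.fc.weight[class_idx]  # (C,)
        cam = torch.einsum("c,chw->hw", weights, fmap)
        return torch.relu(cam)                 # higher values = more relevant regions
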


