International Conference on Systems and Informatics

POLAR: Posture-level Action Recognition Dataset



Abstract

Action category definitions in existing datasets are inconsistent, since actions can be labeled at different levels. In this paper, the authors focus on the less-studied posture-level actions and introduce a new image dataset, “POsture-Level Action Recognition” (POLAR). The POLAR dataset contains 35,324 images and, based on the authors' analysis of the VOC dataset, covers 99% of posture-level human actions. For action recognition on the POLAR dataset, irrelevant objects or backgrounds could mislead human action predictions. To address this problem, the authors propose the Human-centric Contextual Regions Network (HCRN), which extracts pose-related contextual features centered on the person in question. HCRN outperforms the prior best method from the VOC action classification competition on the POLAR dataset, sets a new state-of-the-art result of 92.2% on the PASCAL VOC action dataset (a 2.0% relative improvement), and achieves a promising result on the Stanford 40 dataset.
