International Conference on Systems and Informatics
POLAR: Posture-level Action Recognition Dataset

Abstract

Action category definitions in existing datasets are inconsistent, since actions can be labeled at different levels. In this paper, the authors focus on the less-studied posture-level actions and introduce a new image dataset, "POsture-Level Action Recognition" (POLAR). The POLAR dataset contains 35,324 images in total and, based on the authors' analysis of the VOC dataset, covers 99% of posture-level human actions. For action recognition on the POLAR dataset, irrelevant objects or backgrounds can mislead action predictions. To address this problem, the authors propose the Human-centric Contextual Regions Network (HCRN), which extracts pose-related contextual features relative to the person in question. HCRN outperforms the prior best method from the VOC action classification competition on the POLAR dataset, sets a new state-of-the-art result of 92.2% (a 2.0% relative improvement) on the PASCAL VOC action dataset, and achieves a promising result on the Stanford 40 dataset.
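The abstract does not specify how HCRN's contextual regions are constructed; the sketch below only illustrates the general idea of "human-centric" context, i.e. candidate regions generated relative to the target person's bounding box rather than the whole scene. The function name, scale factors, and box format are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of human-centric contextual regions (assumed design,
# not the paper's actual HCRN): context boxes are centered on the
# person's bounding box at several scales and clipped to the image,
# so extracted features stay tied to the person in question.

def _clip(v, lo, hi):
    """Clamp a coordinate to the valid range [lo, hi]."""
    return max(lo, min(hi, v))

def human_centric_regions(person_box, img_w, img_h, scales=(1.0, 1.5, 2.0)):
    """person_box = (x1, y1, x2, y2) in pixels.

    Returns one context box per scale, each centered on the person
    and clipped to the image bounds.  scales are illustrative.
    """
    x1, y1, x2, y2 = person_box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # person center
    w, h = x2 - x1, y2 - y1                     # person extent
    regions = []
    for s in scales:
        hw, hh = s * w / 2.0, s * h / 2.0       # half-width/height at scale s
        regions.append((
            _clip(cx - hw, 0, img_w), _clip(cy - hh, 0, img_h),
            _clip(cx + hw, 0, img_w), _clip(cy + hh, 0, img_h),
        ))
    return regions
```

Features pooled from such person-anchored regions (rather than from arbitrary scene proposals) are one plausible way to keep irrelevant background objects from dominating the prediction.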

