IEEE Conference on Computer Vision and Pattern Recognition

PANDA: Pose Aligned Networks for Deep Attribute Modeling


Abstract

We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.
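For illustration only, below is a minimal sketch (written in PyTorch, not the authors' implementation) of the core idea described in the abstract: each pose-normalized part crop is passed through its own small CNN, the per-part features are concatenated, and a shared linear layer predicts all attributes jointly. The part names, crop size, network depth, and attribute count are illustrative assumptions; in the paper the part crops come from poselet detections warped to a canonical pose, whereas here random tensors stand in for them.

```python
import torch
import torch.nn as nn

class PartCNN(nn.Module):
    """Small CNN applied to one pose-normalized part patch (assumed 56x56 RGB)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class PoseAlignedAttributeNet(nn.Module):
    """Concatenate per-part CNN features and score all attributes jointly."""
    def __init__(self, num_parts=3, num_attributes=9, feat_dim=128):
        super().__init__()
        self.part_nets = nn.ModuleList([PartCNN(feat_dim) for _ in range(num_parts)])
        self.classifier = nn.Linear(num_parts * feat_dim, num_attributes)

    def forward(self, part_crops):
        # part_crops: one (B, 3, 56, 56) batch per part, already cropped and
        # warped to a canonical pose (e.g. from poselet detections).
        feats = [net(crop) for net, crop in zip(self.part_nets, part_crops)]
        return self.classifier(torch.cat(feats, dim=1))  # per-attribute logits

if __name__ == "__main__":
    model = PoseAlignedAttributeNet(num_parts=3, num_attributes=9)
    # Hypothetical parts (e.g. head, torso, legs), random stand-in crops.
    crops = [torch.randn(2, 3, 56, 56) for _ in range(3)]
    logits = model(crops)
    print(logits.shape)  # torch.Size([2, 9])
```

Training such a model with a per-attribute binary loss (e.g. BCEWithLogitsLoss) on pose-aligned crops is the part-based, pose-normalized setup the abstract contrasts with a conventional CNN trained on the full person bounding box.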
