Learning Deep Representation for Imbalanced Classification

IEEE Conference on Computer Vision and Pattern Recognition


Abstract

Data in the vision domain often exhibit a highly skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes contain only a scarce number of instances. To mitigate this issue, contemporary classification methods based on deep convolutional neural networks (CNNs) typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that a more discriminative deep representation can be learned by enforcing a deep network to maintain both inter-cluster and inter-class margins. This tighter constraint effectively reduces the class imbalance inherent in the local data neighborhood. We show that the margins can be easily deployed in a standard deep learning framework through quintuplet instance sampling and the associated triple-header hinge loss. The representation learned by our approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.
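To make the margin idea concrete, the following is a minimal PyTorch sketch of a triple-header hinge loss computed over quintuplet embeddings: three chained hinge terms enforce an ordering of anchor-relative distances. The role assigned to each of the four non-anchor members (same-cluster positive, same-class positive from another cluster, and two other-class samples), the use of squared Euclidean distance, and the margin values g1–g3 are illustrative assumptions; the exact quintuplet sampling rules and margins are defined in the paper itself.

```python
import torch
import torch.nn.functional as F

def triple_header_hinge_loss(anchor, pos_cluster, pos_class, neg_near, neg_far,
                             g1=0.1, g2=0.1, g3=0.1):
    """Sketch of a triple-header hinge loss over one batch of quintuplets.

    All arguments are (batch, dim) embedding tensors produced by the CNN.
    Three hinge terms push the anchor's same-cluster positive closer than its
    same-class positive from another cluster (inter-cluster margin), which in
    turn must be closer than samples from other classes (inter-class margin).
    """
    d1 = (anchor - pos_cluster).pow(2).sum(dim=1)  # anchor <-> same-cluster positive
    d2 = (anchor - pos_class).pow(2).sum(dim=1)    # anchor <-> same-class, other-cluster positive
    d3 = (anchor - neg_near).pow(2).sum(dim=1)     # anchor <-> nearer other-class sample
    d4 = (anchor - neg_far).pow(2).sum(dim=1)      # anchor <-> farther other-class sample

    loss = (F.relu(d1 - d2 + g1)    # inter-cluster margin within the class
            + F.relu(d2 - d3 + g2)  # inter-class margin
            + F.relu(d3 - d4 + g3)) # margin among the other-class samples
    return loss.mean()
```

At test time, the learned embedding would simply be paired with a plain kNN classifier over the training embeddings, as the abstract describes; that step is standard and not sketched here.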
