European Conference on Computer Vision (ECCV)

RBF-Softmax: Learning Deep Representative Prototypes with Radial Basis Function Softmax



Abstract

Deep neural networks have achieved remarkable success in learning feature representations for visual classification. However, deep features learned with the softmax cross-entropy loss generally show excessive intra-class variation. We argue that because traditional softmax losses optimize only the relative differences between intra-class and inter-class distances (logits), they cannot obtain representative class prototypes (class weights/centers) to regularize intra-class distances, even after training has converged. Previous efforts mitigate this problem by introducing auxiliary regularization losses, but these modified losses focus mainly on optimizing intra-class compactness while neglecting to maintain reasonable relations between different class prototypes. This leads to weaker models and ultimately limits their performance. To address this problem, this paper introduces a novel Radial Basis Function (RBF) distance to replace the inner products commonly used in the softmax loss function, so that the loss can adaptively regularize intra-class and inter-class distances by reshaping their relative differences, thereby creating more representative class prototypes and improving optimization. The proposed RBF-Softmax loss function not only effectively reduces intra-class distances, stabilizes training behavior, and preserves ideal relations between prototypes, but also significantly improves testing performance. Experiments on visual recognition benchmarks including MNIST, CIFAR-10/100, and ImageNet demonstrate that the proposed RBF-Softmax achieves better results than cross-entropy and other state-of-the-art classification losses.
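For concreteness, a minimal PyTorch sketch of the core idea follows: the inner-product logit for each class is replaced by an RBF kernel of the Euclidean distance between the feature and that class's learned prototype, and the resulting logits go through ordinary softmax cross-entropy. The class name RBFSoftmaxLoss and the hyperparameter values (scale, gamma) are illustrative assumptions, not the paper's tuned settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RBFSoftmaxLoss(nn.Module):
    # Sketch of an RBF-Softmax head: logit_j = scale * exp(-||x - w_j||^2 / gamma),
    # followed by standard cross-entropy. scale/gamma values are illustrative.
    def __init__(self, feat_dim, num_classes, scale=16.0, gamma=8.0):
        super().__init__()
        # One learnable prototype (class center) per class.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale
        self.gamma = gamma

    def forward(self, features, labels):
        # Pairwise squared Euclidean distances, shape (batch, num_classes).
        dist_sq = torch.cdist(features, self.prototypes).pow(2)
        # RBF logits: the closer a feature is to a prototype, the larger
        # that class's logit, so the loss directly penalizes intra-class distance.
        logits = self.scale * torch.exp(-dist_sq / self.gamma)
        return F.cross_entropy(logits, labels)

# Usage: criterion = RBFSoftmaxLoss(feat_dim=512, num_classes=10)
#        loss = criterion(backbone(images), labels)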
