European Conference on Computer Vision

Collaborative Layer-Wise Discriminative Learning in Deep Neural Networks

Abstract

Intermediate features at different layers of a deep neural network are known to be discriminative for visual patterns of different complexities. However, most existing works ignore such cross-layer heterogeneity when classifying samples of different complexities. For example, if a training sample has already been correctly classified at a specific layer with high confidence, we argue that it is unnecessary to force the remaining layers to classify this sample correctly; a better strategy is to encourage those layers to focus on other samples. In this paper, we propose a layer-wise discriminative learning method that enhances the discriminative capability of a deep network by allowing its layers to work collaboratively for classification. Towards this goal, we introduce multiple classifiers on top of multiple layers. Each classifier not only tries to correctly classify the features from its input layer, but also coordinates with the other classifiers to jointly maximize the final classification performance. Guided by its companion classifiers, each classifier learns to concentrate on certain training examples, which boosts the overall performance. Since it allows end-to-end training, our method can be conveniently embedded into state-of-the-art deep networks. Experiments with several popular deep networks, including Network in Network, GoogLeNet and VGGNet, on object classification benchmarks of various scales (CIFAR100, MNIST and ImageNet) and scene classification benchmarks (MIT67, SUN397 and Places205) demonstrate the effectiveness of our method. In addition, we analyze the relationship between the proposed method and classical conditional random field models.
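The abstract describes attaching companion classifiers to intermediate layers and coupling their objectives so that each layer concentrates on the samples the other layers handle poorly. The sketch below illustrates this general idea in PyTorch on a toy backbone; the stage and channel sizes, the AuxiliaryHead module, and the loss-reweighting scheme in collaborative_loss are illustrative assumptions and do not reproduce the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryHead(nn.Module):
    """Small classifier attached to an intermediate feature map (hypothetical design)."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feat):
        return self.fc(self.pool(feat).flatten(1))

class LayerWiseClassifierNet(nn.Module):
    """Toy backbone with companion classifiers on intermediate layers.

    The paper attaches such heads to networks like Network in Network,
    GoogLeNet and VGGNet; the three stages here are placeholders.
    """
    def __init__(self, num_classes=100):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head1 = AuxiliaryHead(64, num_classes)   # companion classifier on stage 1
        self.head2 = AuxiliaryHead(128, num_classes)  # companion classifier on stage 2
        self.head3 = nn.Linear(256, num_classes)      # final classifier

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2).flatten(1)
        return self.head1(f1), self.head2(f2), self.head3(f3)

def collaborative_loss(logits_list, target):
    """Weight each sample's loss at one head by how poorly the *other* heads
    classify it, so samples already handled with confidence elsewhere stop
    driving every layer. This reweighting is only an illustrative stand-in
    for the paper's coordination scheme.
    """
    losses = torch.stack([F.cross_entropy(l, target, reduction='none')
                          for l in logits_list])            # (num_heads, batch)
    total = 0.0
    for i in range(len(logits_list)):
        with torch.no_grad():
            others = torch.cat([losses[:i], losses[i + 1:]])  # losses of companion heads
            weight = others.mean(0)                           # high if companions struggle
            weight = weight / (weight.mean() + 1e-8)          # normalize within the batch
        total = total + (weight * losses[i]).mean()
    return total

# Toy usage with CIFAR-sized inputs: sum the reweighted losses over all heads.
model = LayerWiseClassifierNet(num_classes=100)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 100, (8,))
loss = collaborative_loss(model(images), labels)
loss.backward()
```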