
Guided Attention Inference Network


Abstract

With only coarse labels, weakly supervised learning typically uses top-down attention maps generated by back-propagating gradients as priors for tasks such as object localization and semantic segmentation. While these attention maps are intuitive and informative explanations of deep neural networks, there is no effective mechanism to manipulate the network's attention during the learning process. In this paper, we address three shortcomings of previous approaches to modeling such attention maps in one common framework. First, we make attention maps a natural and explicit component of the training pipeline, so that they are end-to-end trainable. Second, we provide self-guidance directly on these maps by exploiting supervision from the network itself to improve them towards specific target tasks. Lastly, we propose a design that seamlessly bridges the gap between using weak supervision and extra supervision when available. Despite its simplicity, experiments on the semantic segmentation task demonstrate the effectiveness of our method. Moreover, the proposed framework provides a way not only to explain the focus of the learner but also to feed direct guidance back towards specific tasks. Under mild assumptions, our method can also be understood as a plug-in to existing convolutional neural networks that improves their generalization performance.
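
The self-guidance idea in the abstract can be made concrete with a short sketch: make the gradient-based attention map differentiable, erase the attended regions from the input, and penalize the network if it can still recognize the class from the erased image. Below is a minimal PyTorch illustration under the assumption of a Grad-CAM-style attention map over a ResNet-18 backbone; the function names (conv_features, attention_map, gain_loss) and hyper-parameter values (omega, sigma, alpha) are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of self-guided attention training (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10)

def conv_features(x):
    # Run everything up to the last conv block of ResNet-18.
    x = model.conv1(x); x = model.bn1(x); x = model.relu(x)
    x = model.maxpool(x)
    x = model.layer1(x); x = model.layer2(x)
    x = model.layer3(x); x = model.layer4(x)
    return x                                        # (B, 512, H', W')

def classify(feats):
    return model.fc(torch.flatten(model.avgpool(feats), 1))

def attention_map(feats, logits, target, size):
    # Grad-CAM-style map: gradient of the target class score w.r.t. the
    # conv features, pooled into channel weights.  create_graph=True keeps
    # the map inside the autograd graph, so it is end-to-end trainable.
    score = logits.gather(1, target[:, None]).sum()
    grads = torch.autograd.grad(score, feats, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=size, mode="bilinear", align_corners=False)
    return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)

def gain_loss(image, target, omega=10.0, sigma=0.5, alpha=1.0):
    feats = conv_features(image)
    logits = classify(feats)
    loss_cl = F.cross_entropy(logits, target)       # usual weak supervision

    cam = attention_map(feats, logits, target, image.shape[-2:])
    mask = torch.sigmoid(omega * (cam - sigma))     # soft threshold of the map
    masked = image * (1.0 - mask)                   # erase attended regions

    # Self-guidance: the masked image should no longer be recognizable,
    # which pushes the attention map to cover the whole object.
    logits_m = classify(conv_features(masked))
    loss_am = logits_m.gather(1, target[:, None]).sigmoid().mean()
    return loss_cl + alpha * loss_am

x = torch.randn(2, 3, 224, 224)
y = torch.tensor([3, 7])
loss = gain_loss(x, y)
loss.backward()
```

The key design point in this sketch is create_graph=True: it keeps the attention map differentiable, so minimizing the self-guidance term shapes the attention itself rather than only the classifier, matching the abstract's claim that the maps become an explicit, end-to-end trainable component.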
