European Conference on Computer Vision

Sparsely Aggregated Convolutional Networks

Abstract

We explore a key architectural aspect of deep convolutional neural networks: the pattern of internal skip connections used to aggregate outputs of earlier layers for consumption by deeper layers. Such aggregation is critical to facilitate training of very deep networks in an end-to-end manner. This is a primary reason for the widespread adoption of residual networks, which aggregate outputs via cumulative summation. While subsequent works investigate alternative aggregation operations (e.g. concatenation), we focus on an orthogonal question: which outputs to aggregate at a particular point in the network. We propose a new internal connection structure which aggregates only a sparse set of previous outputs at any given depth. Our experiments demonstrate this simple design change offers superior performance with fewer parameters and lower computational requirements. Moreover, we show that sparse aggregation allows networks to scale more robustly to 1000+ layers, thereby opening future avenues for training long-running visual processes.
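The abstract contrasts dense aggregation (ResNet's cumulative summation, or concatenation as in DenseNet) with aggregating only a sparse subset of earlier outputs at each depth. Below is a minimal PyTorch sketch of one such sparse pattern; the power-of-two offsets, the `SparseAggBlock`/`SparseNetStage` names, and the growth-rate parameterization are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


def sparse_offsets(depth):
    # Hypothetical sparse pattern: layer `depth` aggregates earlier outputs
    # at offsets 1, 2, 4, 8, ... so fan-in grows as O(log depth), not O(depth).
    offs, k = [], 1
    while depth - k >= 0:
        offs.append(depth - k)
        k *= 2
    return offs


class SparseAggBlock(nn.Module):
    # One layer: concatenate the selected earlier outputs, then BN-ReLU-Conv.
    def __init__(self, in_channels, growth):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, inputs):
        # `inputs` is the sparse list of earlier feature maps to aggregate.
        return self.conv(torch.cat(inputs, dim=1))


class SparseNetStage(nn.Module):
    # A stack of layers, each reading only a sparse subset of prior outputs.
    def __init__(self, num_layers, growth=16):
        super().__init__()
        self.stem = nn.Conv2d(3, growth, kernel_size=3, padding=1, bias=False)
        self.fanin = [sparse_offsets(d) for d in range(1, num_layers + 1)]
        self.layers = nn.ModuleList(
            SparseAggBlock(growth * len(offs), growth) for offs in self.fanin
        )

    def forward(self, x):
        outputs = [self.stem(x)]  # outputs[0] is the stem; layer d appends outputs[d]
        for layer, offs in zip(self.layers, self.fanin):
            outputs.append(layer([outputs[i] for i in offs]))
        return outputs[-1]


net = SparseNetStage(num_layers=8)
y = net(torch.randn(2, 3, 32, 32))  # -> shape [2, 16, 32, 32]
```

Under this assumed pattern, each layer's concatenated input width grows logarithmically with depth rather than linearly, which is consistent with the abstract's claim of fewer parameters, lower computational cost, and more robust scaling to 1000+ layers.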
