Conference on Neural Information Processing Systems

CondConv: Conditionally Parameterized Convolutions for Efficient Inference



Abstract

Convolutional layers are one of the basic building blocks of modern deep neural networks. One fundamental assumption is that convolutional kernels should be shared for all examples in a dataset. We propose conditionally parameterized convolutions (CondConv), which learn specialized convolutional kernels for each example. Replacing normal convolutions with CondConv enables us to increase the size and capacity of a network, while maintaining efficient inference. We demonstrate that scaling networks with CondConv improves the performance and inference cost trade-off of several existing convolutional neural network architectures on both classification and detection tasks. On ImageNet classification, our CondConv approach applied to EfficientNet-B0 achieves state-of-the-art performance of 78.3% accuracy with only 413M multiply-adds. Code and checkpoints for the CondConv Tensorflow layer and CondConv-EfficientNet models are available at: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv.
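The core mechanism can be illustrated with a minimal NumPy sketch for the 1×1-convolution case (hypothetical function and variable names; this is not the official TensorFlow layer linked above). Per-example routing weights are computed by global average pooling followed by a sigmoid, the expert kernels are mixed into one kernel per example, and a single convolution is applied. Because convolution is linear in the kernel, this is algebraically equivalent to running every expert's convolution and mixing the outputs, yet needs only one convolution at inference:

```python
import numpy as np


def condconv_1x1(x, experts, routing_w):
    """Conditionally parameterized 1x1 convolution (illustrative sketch).

    x:         (batch, h, w, c_in)   input feature map
    experts:   (n_experts, c_in, c_out)  expert 1x1 kernels
    routing_w: (c_in, n_experts)     routing-function weights
    Returns (output feature map, per-example routing weights).
    """
    # Route: global average pool over spatial dims, then sigmoid.
    pooled = x.mean(axis=(1, 2))                     # (batch, c_in)
    r = 1.0 / (1.0 + np.exp(-pooled @ routing_w))    # (batch, n_experts)

    # Mix experts into one specialized kernel per example.
    kernels = np.einsum('be,eio->bio', r, experts)   # (batch, c_in, c_out)

    # One convolution per example with its specialized kernel
    # (a 1x1 conv is a per-pixel matrix multiply).
    out = np.einsum('bhwi,bio->bhwo', x, kernels)    # (batch, h, w, c_out)
    return out, r
```

Mixing kernels before the convolution, rather than mixing the outputs of `n_experts` separate convolutions, is what keeps inference efficient as capacity grows: the routing and kernel combination are cheap relative to the convolution itself.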
