Conference on Neural Information Processing Systems

Einconv: Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks



Abstract

Tensor decomposition methods are widely used for model compression and fast inference in convolutional neural networks (CNNs). Although many decompositions are conceivable, only CP decomposition and a few others have been applied in practice, and no extensive comparisons have been made between the available methods. Previous studies have determined neither how many decompositions are available nor which of them is optimal. In this study, we first characterize a decomposition class specific to CNNs by adopting a flexible graphical notation. The class includes such well-known CNN modules as depthwise separable convolution layers and bottleneck layers, but also previously unknown modules with nonlinear activations. We also experimentally compare the tradeoff between prediction accuracy and time/space complexity for modules found by enumerating all possible decompositions or by using a neural architecture search. We find that some nonlinear decompositions outperform existing ones.
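As a concrete illustration of the kind of decomposition the abstract refers to (a generic sketch, not code from the paper), the CP decomposition factorizes a 4-way convolution kernel W[out, in, h, w] into four rank-R factor matrices, which typically gives a large reduction in parameters. The shapes and rank below are arbitrary example values:

```python
import numpy as np

def cp_conv_kernel(a, b, c, d):
    """Reconstruct a full convolution kernel W[o, i, h, w] from rank-R
    CP factors: a (O, R), b (I, R), c (K, R), d (K, R)."""
    return np.einsum('or,ir,hr,wr->oihw', a, b, c, d)

# Example sizes: 64 output channels, 32 input channels, 3x3 kernel, rank 8.
O, I, K, R = 64, 32, 3, 8
rng = np.random.default_rng(0)
a = rng.standard_normal((O, R))
b = rng.standard_normal((I, R))
c = rng.standard_normal((K, R))
d = rng.standard_normal((K, R))

W = cp_conv_kernel(a, b, c, d)

full_params = O * I * K * K        # parameters of the dense kernel
cp_params = R * (O + I + K + K)    # parameters of the CP factors
print(W.shape, full_params, cp_params)
```

The reconstructed kernel has the dense shape (64, 32, 3, 3) with 18,432 parameters, while the CP factors store only 816 — the kind of accuracy/complexity tradeoff the paper enumerates across many such tensor network structures.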
