JMLR: Workshop and Conference Proceedings

Convexified Convolutional Neural Networks



Abstract

We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.
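The core relaxation described above can be sketched numerically. For a two-layer CNN, the prediction is linear in a parameter matrix whose rank equals the number of filters; replacing the rank constraint with a convex nuclear-norm penalty yields a problem solvable by proximal gradient descent (singular-value soft-thresholding). The sketch below uses synthetic data, squared loss, and linear patch features; all names, dimensions, and hyperparameters are illustrative and not taken from the paper.

```python
import numpy as np

def nuclear_prox(A, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, P, d = 200, 9, 16                         # samples, patches per image, patch feature dim
Z = rng.standard_normal((n, P, d))           # patch features (kernelized in the actual method)
A_true = rng.standard_normal((d, 1)) @ rng.standard_normal((1, P))  # rank-1 "one-filter CNN"
y = np.einsum('npd,dp->n', Z, A_true)        # labels: predictions are linear in A

# Proximal gradient on 1/(2n) * sum_i (y_i - <Z_i, A>)^2 + lam * ||A||_*
A = np.zeros((d, P))
lam, step = 0.1, 0.1
for _ in range(500):
    resid = np.einsum('npd,dp->n', Z, A) - y
    grad = np.einsum('n,npd->dp', resid, Z) / n
    A = nuclear_prox(A - step * grad, step * lam)

print("training MSE:", np.mean((np.einsum('npd,dp->n', Z, A) - y) ** 2))
```

After solving the convex problem, filters can be recovered from the leading singular vectors of the learned matrix; the nuclear-norm penalty plays the role of the rank (filter-count) constraint.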
