JMLR: Workshop and Conference Proceedings

Optimal approximation of continuous functions by very deep ReLU networks

Abstract

We consider approximations of general continuous functions on finite-dimensional cubes by general deep ReLU neural networks and study the approximation rates with respect to the modulus of continuity of the function and the total number of weights $W$ in the network. We establish the complete phase diagram of feasible approximation rates and show that it includes two distinct phases. One phase corresponds to slower approximations that can be achieved with constant-depth networks and continuous weight assignments. The other phase provides faster approximations at the cost of depths necessarily growing as a power law $L \sim W^{\alpha}$, $0 < \alpha \le 1$, and with necessarily discontinuous weight assignments. In particular, we prove that constant-width fully-connected networks of depth $L \sim W$ provide the fastest possible approximation rate $\|f - \widetilde{f}\|_\infty = O(\omega_f(O(W^{-2/\nu})))$ that cannot be achieved with less deep networks.
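
For readability, here is a brief restatement of the quantities in the abstract, assuming (as in the paper's setting) that $f$ is defined on the cube $[0,1]^\nu$ with $\nu$ the input dimension:

% Modulus of continuity of f on [0,1]^nu (standard definition):
\[
  \omega_f(\delta) \;=\; \sup\bigl\{\, |f(x)-f(y)| \;:\; x,y \in [0,1]^\nu,\ \|x-y\| \le \delta \,\bigr\}.
\]
% Fast-phase rate achieved by constant-width networks of depth L ~ W,
% stated in the uniform norm in terms of the weight budget W:
\[
  \|f - \widetilde{f}\|_\infty \;=\; O\!\bigl(\omega_f\bigl(O(W^{-2/\nu})\bigr)\bigr).
\]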
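
As a minimal illustration of the architecture class named in the abstract (a sketch, not the paper's construction), the following builds a constant-width, fully-connected ReLU network; at fixed width, each added layer contributes a fixed number of weights, so the total weight count $W$ grows linearly with the depth $L$, i.e. $L \sim W$. PyTorch is assumed available.

import torch.nn as nn

def constant_width_relu_net(nu, width, depth):
    # Fully-connected ReLU network mapping [0,1]^nu -> R with `depth` hidden layers.
    layers = [nn.Linear(nu, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

# At fixed width, each extra hidden layer adds width*(width+1) weights,
# so the total weight count W scales linearly with the depth L.
net = constant_width_relu_net(nu=2, width=8, depth=50)
W = sum(p.numel() for p in net.parameters())
print(f"depth L = 50, total weights W = {W}")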
