Electronic Colloquium on Computational Complexity

Bounds for the Computational Power and Learning Complexity of Analog Neural Nets


Abstract

It is shown that high order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated, for boolean inputs and outputs, by neural nets of a somewhat larger size and depth with Heaviside gates and weights from {-1,0,1}. This provides the first known upper bound for the computational power of the former type of neural nets. It is also shown that in the case of first order nets with piecewise linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits, without changing the boolean function that is computed by the neural net. In order to prove these results we introduce two new methods for reducing nonlinear problems about weights in multi-layer neural nets to linear problems for a transformed set of parameters. These transformed parameters can be interpreted as weights in a somewhat larger neural net. As another application of our new proof technique we show that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately learnable (in Valiant's model for PAC-learning).
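The abstract's central claim is that nets with piecewise polynomial (e.g. piecewise linear) activations and arbitrary real weights compute no more boolean functions than somewhat larger nets of Heaviside gates with weights restricted to {-1,0,1}. The following toy sketch (an illustration only, not the paper's construction) shows one boolean function, XOR, computed both ways: once by a piecewise-linear net using a real weight outside {-1,0,1}, and once by a Heaviside-gate net whose weights and biases all lie in {-1,0,1}.

```python
def heaviside(t):
    # H(t) = 1 if t >= 0 else 0 (the convention H(0) = 1 is used below)
    return 1 if t >= 0 else 0

def relu(t):
    # A piecewise linear activation function
    return max(0.0, t)

def xor_relu(x1, x2):
    # Piecewise-linear net with a real weight (-2.0) outside {-1,0,1}:
    # f(x1, x2) = relu(x1 + x2) - 2*relu(x1 + x2 - 1)
    s = x1 + x2
    return int(relu(s) - 2.0 * relu(s - 1.0))

def xor_heaviside(x1, x2):
    # Heaviside-gate net with all weights and biases in {-1, 0, 1}
    g1 = heaviside(x1 - x2 - 1)    # computes x1 AND NOT x2
    g2 = heaviside(x2 - x1 - 1)    # computes x2 AND NOT x1
    return heaviside(g1 + g2 - 1)  # computes g1 OR g2

# Both nets compute the same boolean function on {0,1}^2
for x1 in (0, 1):
    for x2 in (0, 1):
        assert xor_relu(x1, x2) == xor_heaviside(x1, x2) == (x1 ^ x2)
```

The point of the theorem is that such a weight-restricted Heaviside simulation exists in general, with only a polynomial blow-up in size and constant extra depth; this toy example merely makes the two gate models concrete.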
