Electronic Colloquium on Computational Complexity

Agnostic PAC-Learning of Functions on Analog Neural Nets

Abstract

We consider learning on multi-layer neural nets with piecewise polynomial activation functions and a fixed number k of numerical inputs. We exhibit arbitrarily large network architectures for which efficient and provably successful learning algorithms exist in the rather realistic refinement of Valiant's model for probably approximately correct learning ("PAC-learning") where no a priori assumptions are required about the "target function" (agnostic learning), arbitrary noise is permitted in the training sample, and the target outputs as well as the network outputs may be arbitrary reals. The number of computation steps of the learning algorithm LEARN that we construct is bounded by a polynomial in the bit-length n of the fixed number of input variables, in the bound s for the allowed bit-length of weights, in 1/epsilon, where epsilon is some arbitrary given bound for the true error of the neural net after training, and in 1/delta, where delta is some arbitrary given bound for the probability that the learning algorithm fails for a randomly drawn training sample. However, the computation time of LEARN is exponential in the number of weights of the considered network architecture, and is therefore only of interest for neural nets of small size.
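The exponential dependence on the number of weights can be illustrated by a toy sketch of this style of learner: a brute-force empirical risk minimizer that enumerates every weight setting of bit-length at most s for a one-unit net with a piecewise polynomial activation. The activation, the two-weight architecture, and all names below are illustrative assumptions, not the paper's actual construction of LEARN:

```python
from itertools import product

def pw_poly_act(x):
    # Illustrative piecewise polynomial activation:
    # 0 for x < 0, x^2 on [0, 1), and the linear piece 2x - 1 for x >= 1.
    if x < 0:
        return 0.0
    if x < 1:
        return x * x
    return 2 * x - 1

def net_output(weights, x):
    # Toy one-unit "architecture" with two weights (w, b).
    w, b = weights
    return pw_poly_act(w * x + b)

def learn(sample, s):
    """Brute-force empirical risk minimization over all weights of
    bit-length <= s. The grid has O(2^s) points per weight, so the
    search is exponential in the number of weights (here: 2)."""
    grid = [i / 2**s for i in range(-2**s, 2**s + 1)]
    best, best_err = None, float("inf")
    for weights in product(grid, repeat=2):
        err = sum((net_output(weights, x) - y) ** 2
                  for x, y in sample) / len(sample)
        if err < best_err:
            best, best_err = weights, err
    return best, best_err
```

With a sample generated by weights that lie on the grid (e.g. w = 0.5, b = 0.25 at s = 2), the search recovers a zero-error hypothesis; with noisy targets it simply returns the empirical minimizer, matching the agnostic setting in which no assumption is made about the target function.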
