International Conference on Algorithmic Learning Theory

On Approximate Learning by Multi-layered Feedforward Circuits


Abstract

We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3,5]). We show that for single-hidden-layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n^3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function is used in the hidden layer and ε-separation of the output [19] is considered, or if the semilinear activation function substitutes the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for a multilayered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
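As a concrete illustration (not taken from the paper itself), the architecture and the first objective from the abstract can be sketched in a few lines: a single-hidden-layer circuit whose n hidden units and output unit all apply a threshold (Heaviside) activation, scored by the ratio of correctly classified training points. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def threshold_circuit(W, b, v, c, X):
    """Single-hidden-layer threshold circuit.

    Each of the n hidden units fires iff its weighted input plus bias
    is non-negative; the output unit thresholds a weighted sum of the
    hidden activations.
    """
    H = (X @ W.T + b >= 0).astype(int)   # n hidden threshold units
    return (H @ v + c >= 0).astype(int)  # threshold output unit

def classification_ratio(W, b, v, c, X, y):
    """Fraction of training points classified correctly -- the first
    objective function considered in the abstract (to be maximized)."""
    return float(np.mean(threshold_circuit(W, b, v, c, X) == y))

# Example: n = 2 hidden units suffice to realize XOR exactly,
# so the objective attains its maximum value of 1.0.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
W = np.array([[1.0, 1.0], [-1.0, -1.0]])  # hidden unit 1: OR, unit 2: NAND
b = np.array([-0.5, 1.5])
v = np.array([1.0, 1.0])                  # output: AND of the two units
c = -1.5
print(classification_ratio(W, b, v, c, X, y))  # → 1.0
```

The hardness results in the paper say that, in general, even approximately maximizing this ratio (within relative error c/n^3) is NP-hard, so no efficient training procedure can guarantee near-optimal values of `classification_ratio` unless P = NP.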
