Complexity of Learning in Neural Networks - Theory and Application

Abstract

Randomized algorithms are proposed and analyzed for learning binary weights for a neuron, and links to the theory of random graphs are established. Optimal stopping phenomena in gradient-descent learning algorithms are characterized and explained in terms of a time-varying effective machine complexity. The finite-sample performance of the k-nearest-neighbor algorithm for pattern recognition is rigorously characterized. Tradeoffs in learning from mixtures of labeled and unlabeled examples are determined.
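
For illustration only, the following is a minimal sketch of the optimal-stopping phenomenon mentioned in the abstract: gradient-descent training of a single neuron where the stopping time is chosen by validation error. The toy data, model, and hyperparameters are assumptions for the example, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n examples, d features, labels from a noisy linear rule.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.choice([-1.0, 1.0], size=d)            # hidden target weights
y = np.sign(X @ w_true + 0.5 * rng.normal(size=n))

# Split into training and validation sets.
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(d)
lr = 0.01
best_w, best_err, best_step = w.copy(), np.inf, 0

for step in range(1, 501):
    # Gradient of the squared training error for a linear neuron.
    grad = 2.0 / len(y_tr) * X_tr.T @ (X_tr @ w - y_tr)
    w -= lr * grad

    # Track validation error; its minimizer over steps is the stopping
    # point -- training longer tends to increase effective complexity
    # and overfit.
    err = mse(w, X_va, y_va)
    if err < best_err:
        best_w, best_err, best_step = w.copy(), err, step

print(f"best validation MSE {best_err:.3f} reached at step {best_step}")
```

In this sketch the validation curve typically decreases and then flattens or rises, so the recorded `best_step` plays the role of the optimal stopping time.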
