
Direct synthesis of neural networks

Abstract

The paper overviews recent developments of a VLSI-friendly, constructive algorithm and details two extensions. The problem is to construct a neural network when m examples of n inputs are given (a classification problem). The two extensions discussed are: (i) the use of analog comparators; and (ii) digital as well as analog solutions to XOR-like problems. For a simple example (the two-spirals problem), we show that the algorithm produces a very "efficient" encoding of a given problem into the neural network it "builds", when compared both to the entropy of the given problem and to other learning algorithms. We also estimate the number of bits needed to solve any classification problem in the general case. Because we are interested in the VLSI implementation of such networks, the optimality criteria are not only the classical size and depth, but also the connectivity and the number of bits for representing the weights, as these measures are closer estimates of the area and lead to better approximations of AT².
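The two-spirals benchmark mentioned in the abstract is a standard synthetic classification task (two interleaved spirals, one per class). A minimal sketch of how such a dataset is commonly generated is given below; this is a generic parametric construction, not code from the paper, and the point count, number of turns, and noise level are illustrative assumptions.

import numpy as np

def two_spirals(n_points=97, noise=0.0, turns=3.0, seed=0):
    """Generate a two-spirals classification benchmark.

    Returns a (2*n_points, 2) array of inputs and a {0, 1} label vector.
    Point count, turns, and noise are illustrative defaults, not the
    exact settings used in the paper.
    """
    rng = np.random.default_rng(seed)
    # Angle grows along the curve; radius grows with the angle so the curve spirals outward.
    theta = np.sqrt(rng.uniform(0.0, 1.0, n_points)) * turns * 2.0 * np.pi
    r = theta / (turns * 2.0 * np.pi)
    spiral_a = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    spiral_b = -spiral_a  # second class: the first spiral rotated by 180 degrees
    X = np.vstack([spiral_a, spiral_b]) + rng.normal(0.0, noise, (2 * n_points, 2))
    y = np.concatenate([np.zeros(n_points, dtype=int), np.ones(n_points, dtype=int)])
    return X, y

# Example: m = 2 * 97 = 194 examples of n = 2 inputs, matching the
# m-examples / n-inputs formulation of the classification problem above.
X, y = two_spirals(n_points=97, noise=0.02)
print(X.shape, y.shape)  # (194, 2) (194,)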
