Neural Network World

A MODEL OF ARTIFICIAL NEURONAL NETWORKS DESIGNED ACCORDING THE NATURAL NEURONAL BRAIN STRUCTURES



Abstract

The functional structure of our new network is not preset; instead, it comes into existence in a random, stochastic manner. The anatomical structure of our model consists of two input "neurons", from hundreds to five thousand hidden-layer "neurons", and one output "neuron". The process itself is based on iteration, i.e., a mathematical operation governed by a set of rules in which repetition helps to approximate the desired result. Each iteration begins with data being introduced into the input layer and processed according to a particular algorithm in the hidden layer; it continues with the computation of certain, as yet very crude, configurations of images regulated by a genetic code, and ends with the selection of the 10% most accomplished "offspring". The next iteration then applies these new, most successful variants of the results, i.e., the descendants, in the continuing process of image perfection. New variants (descendants) of the genetic algorithm are always generated randomly. The deterministic rule then requires only the choice of 10% of all the variants available (in our case, 20 optimal variants out of 200). The stochastic model is marked by a number of characteristics: for example, the initial conditions are determined by different data dispersion variances, and the evolution of the network organisation is controlled by genetic rules of a purely stochastic nature; Gaussian-distributed noise proved to be the best "organiser". Another analogy between artificial networks and neuronal structures lies in the use of time in network algorithms. For that reason, we gave our network's organisation a kind of temporal development: rather than being instantaneous, the connection between the artificial elements, the neurons, consumes a certain number of time units per synapse or, more precisely, per contact between the preceding and subsequent neurons.
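The iteration scheme just described, randomly generating variants and then deterministically keeping the 10% most successful (20 of 200), can be sketched as follows. The fitness criterion, mutation scale, and weight-vector representation here are hypothetical stand-ins; the abstract does not specify them.

```python
import random

random.seed(0)

POP_SIZE = 200          # variants generated per iteration, as in the abstract
KEEP = POP_SIZE // 10   # the 10% most accomplished "offspring" (20 of 200)

def fitness(variant):
    # Hypothetical success measure: negative squared distance of a candidate
    # weight vector from a stand-in target. The paper's actual image-based
    # criterion is not given in the abstract.
    target = [0.5, -0.3]
    return -sum((v - t) ** 2 for v, t in zip(variant, target))

def mutate(parent, sigma):
    # New variants (descendants) are always generated randomly; Gaussian
    # noise proved to be the best "organiser" according to the abstract.
    return [w + random.gauss(0.0, sigma) for w in parent]

def iterate(parents, sigma=0.1):
    # One iteration: expand the surviving variants back to the full
    # population, then apply the deterministic rule (keep the top 10%).
    offspring = [mutate(random.choice(parents), sigma) for _ in range(POP_SIZE)]
    offspring.sort(key=fitness, reverse=True)
    return offspring[:KEEP]

# Random, stochastic initial conditions, then repeated iterations.
population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(KEEP)]
for _ in range(50):
    population = iterate(population)
```

With repetition, the surviving descendants approximate the desired result, which is the sense in which iteration "helps to approximate" it in the text above.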
The latency of neurons, natural and artificial alike, is very important, as it enables feedback action. Our network becomes organised under the effect of considerable noise; then, however, the amount of noise must subside. If the network's evolution gets stuck in a local minimum, the amount of noise has to be increased again. While this makes the network organisation waver, it also increases the likelihood that the crisis in the local minimum will abate, substantially improving the state of the network's self-organisation. Our system allows the state of the network to be read constantly by establishing the network's energy level, i.e., by ascertaining the progression of the network's rate of success in self-organisation. This is the principal parameter for detecting any jam in a local minimum, and it serves as input to the formator algorithm, which regulates the level of noise in the system.
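The noise regulation described above can be sketched as a simple scheduler: read the recent energy history, and if progress has stalled (a jam in a local minimum), boost the noise, otherwise let it subside. The window size, stall threshold, and boost/decay factors below are hypothetical, since the abstract gives no numbers for the formator algorithm.

```python
def formator(energy_history, sigma, window=5, eps=1e-4,
             boost=2.0, decay=0.95, sigma_min=0.01):
    # Sketch of a noise regulator in the spirit of the abstract's "formator
    # algorithm": the network energy level (rate of success) is its input.
    if len(energy_history) >= window:
        recent = energy_history[-window:]
        if max(recent) - min(recent) < eps:
            # Energy has stalled: a jam in a local minimum, so the
            # amount of noise has to be increased again.
            return sigma * boost
    # Otherwise the noise subsides as the network organises itself,
    # floored so the search never freezes entirely.
    return max(sigma * decay, sigma_min)
```

Called once per iteration, this couples the energy reading to the mutation scale (e.g. the `sigma` of the Gaussian noise), so the organisation wavers only when escaping a local minimum.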
