SPIE Photonics Europe Conference

Design and simulation of optoelectronic neuron-equivalentors as hardware accelerators of self-learning equivalent-convolutional neural structures (SLECNS)



Abstract

In this paper, we consider the urgent need to create highly efficient hardware accelerators for machine learning algorithms, including convolutional and deep neural networks (CNNs and DNNs), associative memory models, clustering, and pattern recognition. These algorithms usually involve a large number of multiply-accumulate (and similar) operations. We give a brief overview of our related work on the advantages of equivalent models (EMs) for describing and designing neural networks and bio-inspired recognition systems. The capacity of neural networks based on EMs and their modifications, including auto- and hetero-associative memories for 2D images, is several times the number of neurons. Such neuroparadigms are very promising for processing, clustering, recognizing, and storing large, strongly correlated, and highly noised images. They are also very promising for solving the problem of unsupervised machine learning. Since the basic operational functional nodes of EMs are vector-matrix or matrix-tensor procedures with continuous-logic operations, such as the normalized vector operations 'equivalence', 'nonequivalence', 'auto-equivalence', and 'auto-nonequivalence', we consider in this paper new conceptual approaches to the design of full-scale arrays of such neuron-equivalentors (NEs) with extended functionality, including different activation functions. Our approach is based on the use of analog and mixed (with special coding) methods for implementing the required operations, building NEs (with 8 to 128 or more synapses) and their base cells, and nodes based on photosensitive elements and CMOS current mirrors. We present the results of modeling the proposed new modular, scalable implementations of NEs, and we estimate and compare them. Simulation results show that the processing time in such circuits does not exceed a few microseconds, and for some variants is 50-100 nanoseconds.
The circuits are simple, have a low supply voltage (1.5 - 3.3 V), low power consumption (milliwatts), and low input signal levels (microwatts); they have an integrated construction and satisfy interconnection and cascading requirements. The signals at the outputs of such neurons can be digital, analog, or hybrid, and can also appear on two complementary outputs. The circuits realize the principle of dualism, which gives such complementary dual NEs a number of advantages.
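The normalized vector 'equivalence' and 'nonequivalence' operations at the core of the EM paradigm can be sketched in software for reference. The sketch below assumes a common continuous-logic definition, eq(a, b) = (1/n) Σ (1 - |a_i - b_i|) for vectors with components in [0, 1]; the exact functions realized by the hardware NEs (and the names `equivalence`, `ne_layer`) are illustrative assumptions, not the paper's circuit-level definitions.

```python
import numpy as np

def equivalence(a, b):
    """Normalized continuous-logic 'equivalence' of two vectors in [0, 1].

    Assumed definition: eq(a, b) = mean(1 - |a_i - b_i|).
    Returns 1.0 for identical vectors and 0.0 for fully complementary ones.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean(1.0 - np.abs(a - b)))

def nonequivalence(a, b):
    """Complementary (dual) output: noneq(a, b) = 1 - eq(a, b)."""
    return 1.0 - equivalence(a, b)

def ne_layer(x, W):
    """One array of neuron-equivalentors: each row of W acts as a synaptic
    weight vector, and each NE outputs its equivalence with the input x."""
    return np.array([equivalence(x, w) for w in W])

x = np.array([1.0, 0.0, 1.0, 1.0])
W = np.array([[1.0, 0.0, 1.0, 1.0],   # matches x exactly -> 1.0
              [0.0, 1.0, 0.0, 0.0]])  # complement of x   -> 0.0
print(ne_layer(x, W))  # [1. 0.]
```

In an optoelectronic NE, each `equivalence` evaluation would correspond to one analog vector operation over the photosensitive inputs and CMOS current mirrors, rather than a software loop; the two functions together illustrate the dual (complementary-output) behavior the abstract describes.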
