IEEE Transactions on Very Large Scale Integration (VLSI) Systems

Interstice: Inverter-Based Memristive Neural Networks Discretization for Function Approximation Applications

Abstract

In this article, the accuracy of inverter-based memristive neural networks (NNs) for function approximation applications is improved in the presence of process variations. The improvement is achieved by a design approach called INTERSTICE (Inverter-based Memristive Neural Networks Discretization for Function Approximation Applications), which discretizes the output values by employing a classifier. More precisely, in the INTERSTICE approach, the output range is divided into K subranges, where each subrange is considered as a class. To train the classifier, the training samples are labeled, where each label indicates membership in a specific class. To evaluate the efficacy of the design technique, function approximation applications such as Black-Scholes, FFT, K-means, and Sobel are considered. Compared to PHAX, a recently published inverter-based memristive NN, INTERSTICE provides lower mean squared error (MSE) values in the presence of memristor and transistor variations. More specifically, the improvements in the mean of the MSE (μ_MSE) are in the range of 40%-80% when considering 10% variations in the memristor resistance and transistor parameters. In addition, for most of the benchmarks, INTERSTICE improves the μ_MSE values of the nominal case (the case where all circuit elements are ideal) compared to PHAX. As another advantage over PHAX, INTERSTICE can generate digital outputs directly from the selected classes, which eliminates the need for an analog-to-digital converter at the output port connected to the digital part of the system. Finally, lower μ_MSE values can also be attained with fewer memristors and lower energy consumption using this design approach.
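As a minimal sketch of the output-discretization idea described in the abstract (not the authors' implementation), the following Python snippet shows how a continuous output range could be split into K equal subranges, how training targets could be relabeled with the index of the subrange they fall into, and how a predicted class could be mapped back to a representative output value. All function and parameter names here are hypothetical.

```python
import numpy as np

def discretize_targets(y, k, y_min, y_max):
    """Map continuous targets y to class labels 0..k-1, one per equal-width subrange."""
    y = np.asarray(y, dtype=float)
    width = (y_max - y_min) / k          # width of each subrange (class)
    labels = np.floor((y - y_min) / width).astype(int)
    return np.clip(labels, 0, k - 1)     # keep boundary values inside the last class

def labels_to_values(labels, k, y_min, y_max):
    """Reconstruct an approximate output as the midpoint of the predicted subrange."""
    width = (y_max - y_min) / k
    return y_min + (np.asarray(labels) + 0.5) * width

if __name__ == "__main__":
    # Toy example: continuous targets of a function-approximation task,
    # discretized into K = 16 classes and mapped back to midpoints.
    rng = np.random.default_rng(0)
    y_train = rng.uniform(-1.0, 1.0, size=1000)
    labels = discretize_targets(y_train, k=16, y_min=-1.0, y_max=1.0)
    y_hat = labels_to_values(labels, k=16, y_min=-1.0, y_max=1.0)
    print("quantization MSE:", np.mean((y_train - y_hat) ** 2))
```

In this reading, the classifier is trained on the integer labels rather than the raw targets, and the chosen class index at inference time already constitutes a digital output, which is consistent with the abstract's claim that no analog-to-digital converter is needed at the output port.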
