
Built-In Self Training of Hardware-Based Neural Networks


Abstract

Artificial neural networks and deep learning are topics of increasing interest in computing. This interest has spurred investigation into dedicated hardware, such as accelerators, to speed up the training and inference processes. This work proposes a new hardware architecture, called Built-In Self Training (BISTr), for both training a network and performing inference. The architecture combines principles from the Built-In Self Testing (BIST) VLSI paradigm with the backpropagation learning algorithm. The primary focus of the work is to present the BISTr architecture and verify its efficacy.

The development of the architecture began with an analysis of the backpropagation algorithm and the derivation of new equations. Once the derivations were complete, the hardware was designed, and all of the functional components were tested in VHDL from the bottom level to the top. An automatic synthesis tool was created to generate the code used and tested in the experimental phase. The application tested during the experiments was function approximation. The new architecture was trained successfully on some of the test cases. The remaining test cases were not successful, but this was due to the data representation used in the VHDL code, not to the hardware design itself. The area overhead of the added hardware and the speed performance were analyzed briefly. The results showed that (1) the area overhead was significant (around 3 times the area without the additional hardware) and (2) the theoretical speed performance of the design is very good.

The new BISTr architecture was shown to work and to have good theoretical speed performance. However, the architecture presented in this work cannot be implemented for large neural networks because of its large area overhead. Further work would be required to expand upon and improve the idea presented here: (1) development of an alternative design that is more practical to implement, (2) more rigorous testing of area and speed, (3) implementation of other training methods and functionality, and (4) additions to the synthesis tool to increase its capability.
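The learning procedure the BISTr architecture realizes in hardware is ordinary backpropagation applied to function approximation. As a minimal software sketch only (not the thesis's design, its derived equations, or its VHDL), the following trains a one-hidden-layer network by gradient descent to approximate f(x) = x²; the network size, learning rate, and target function are illustrative assumptions:

```python
import numpy as np

# Training data: approximate f(x) = x^2 on [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (64, 1))
Y = X ** 2

# One hidden layer of 8 tanh units, linear output.
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # hidden, prediction

for epoch in range(5000):
    h, y_hat = forward(X)
    err = y_hat - Y                 # dL/dy for L = mean squared error / 2
    # Backpropagate: output layer gradients, then through tanh.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    # Gradient-descent weight updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
mse = float(((pred - Y) ** 2).mean())
```

This uses floating point throughout; a hardware implementation must instead commit to a finite data representation (e.g. fixed point), which the abstract identifies as the cause of the unsuccessful test cases.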

Bibliographic record

  • Author: Anderson, Thomas.
  • Author affiliation: University of Cincinnati.
  • Degree-granting institution: University of Cincinnati.
  • Subject: Computer engineering.
  • Degree: M.S.
  • Year: 2017
  • Pages: 123 p.
  • Total pages: 123
  • Format: PDF
  • Language: English
