IEEE International Symposium on Circuits and Systems

Memory-error tolerance of scalable and highly parallel architecture for restricted Boltzmann machines in Deep Belief Network



Abstract

A key aspect of constructing highly scalable deep-learning microelectronic systems is implementing fault tolerance in the learning sequence. Error-injection analysis of memory is performed using a custom hardware model implementing parallelized restricted Boltzmann machines (RBMs). It is confirmed that the RBMs in Deep Belief Networks (DBNs) provide remarkable robustness against memory errors. Fine-tuning significantly recovers the accuracy degraded by static errors injected into the structural data of RBMs during and after learning, at either the cell level or the block level. The memory-error tolerance is observable in our hardware networks with fine-grained memory distribution.
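The cell-level static-error injection described in the abstract can be illustrated in software. The sketch below is hypothetical (the paper uses a custom hardware model; all function names here are illustrative): it corrupts a stored weight matrix by flipping a random bit in the IEEE-754 single-precision encoding of each cell, at a given per-cell error rate.

```python
import random
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0-31) in the IEEE-754 single-precision encoding of value."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

def inject_static_errors(weights, error_rate, rng):
    """Return a copy of a weight matrix (list of rows) with a random
    single-bit flip injected into each cell with probability error_rate,
    modeling cell-level static memory errors."""
    corrupted = [row[:] for row in weights]
    for i, row in enumerate(corrupted):
        for j, w in enumerate(row):
            if rng.random() < error_rate:
                corrupted[i][j] = flip_bit(w, rng.randrange(32))
    return corrupted

# Example: corrupt a small weight matrix at a 25% per-cell error rate.
rng = random.Random(0)
weights = [[0.1 * (i + j + 1) for j in range(4)] for i in range(4)]
corrupted = inject_static_errors(weights, 0.25, rng)
```

In an experiment along the lines the abstract describes, such corrupted weights would be loaded back into the RBM before or after fine-tuning, and classification accuracy compared against the error-free baseline.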
