
Fault tolerance of feedforward artificial neural nets and synthesis of robust nets.



Abstract

A method is proposed to estimate the fault tolerance of feedforward Artificial Neural Nets (ANNs) and synthesize robust nets. The fault model abstracts a variety of failure modes of hardware implementations to permanent stuck-at type faults of single components. A procedure is developed to build fault-tolerant ANNs by replicating the hidden units. It exploits the intrinsic weighted-summation operation performed by the processing units in order to overcome faults. It is simple, robust, and applicable to any feedforward net. Based on this procedure, metrics are devised to quantify fault tolerance as a function of redundancy.

Furthermore, a lower bound on the redundancy required to tolerate all possible single faults is derived analytically. This bound demonstrates that less than Triple Modular Redundancy (TMR) cannot provide complete fault tolerance for all possible single faults. This general result establishes a necessary condition that holds for all feedforward nets, irrespective of the network topology or the task it is trained on. Extensive simulations indicate that the actual redundancy needed to synthesize a completely fault-tolerant net is specific to the problem at hand and is usually much higher than that dictated by the general lower bound. The data imply that the conventional TMR scheme of replication and majority vote is the best way to achieve complete fault tolerance in most ANNs.

Although the redundancy needed for complete fault tolerance is substantial, the results do show that ANNs exhibit good partial fault tolerance to begin with and degrade gracefully. For large nets, exhaustive testing of all possible single faults is prohibitive. Hence, the strategy of randomly testing a small fraction of the total number of links is adopted. It yields partial fault tolerance estimates that are very close to those obtained by exhaustive testing.

The last part of the thesis develops improved learning algorithms that favor fault tolerance.
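The replication procedure summarized above can be sketched as follows. This is a minimal NumPy illustration, not the thesis's exact algorithm: it assumes the natural weight-scaling rule in which each hidden unit is copied k times and its outgoing weights are divided by k, so the fault-free output is unchanged while any single stuck-at fault perturbs the weighted sum by only a 1/k share.

```python
import numpy as np

def replicate_hidden(W_in, W_out, k):
    """Replicate each hidden unit k times and divide its outgoing
    weights by k. The weighted summation at the output then recovers
    the original fault-free response exactly. (Illustrative sketch.)"""
    W_in_r = np.repeat(W_in, k, axis=0)        # copy each unit's input weights k times
    W_out_r = np.repeat(W_out, k, axis=1) / k  # scale outgoing weights by 1/k
    return W_in_r, W_out_r

# Tiny net: 2 inputs -> 3 hidden units -> 1 output (hypothetical sizes)
rng = np.random.default_rng(0)
W_in = rng.standard_normal((3, 2))
W_out = rng.standard_normal((1, 3))
x = rng.standard_normal(2)

h = np.tanh(W_in @ x)
y = W_out @ h                      # original output

W_in_r, W_out_r = replicate_hidden(W_in, W_out, k=3)
h_r = np.tanh(W_in_r @ x)
y_r = W_out_r @ h_r                # replicated net: same fault-free output

# A single stuck-at-0 fault on one replica now shifts the output by
# only 1/k of that unit's original contribution:
h_faulty = h_r.copy()
h_faulty[0] = 0.0
y_faulty = W_out_r @ h_faulty
```

The point of the sketch is that redundancy is absorbed by the summation itself: no voter hardware is needed at the hidden layer, which is why the procedure applies to any feedforward net.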
The objective function for the gradient descent is modified to include extra terms that favor fault tolerance. Simulations indicate that the algorithm works only if the relative weight of the extra terms is small.

There are two different ways to achieve fault tolerance: (1) search for the minimal net and replicate it, or (2) provide redundancy to begin with and use improved training algorithms. A natural question is: which of these two schemes is better? Contrary to expectation, the replication scheme seems to win in almost all cases. We provide a justification as to why this might be true.

Several interesting open problems are discussed and future extensions are suggested.
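The modified objective can be sketched as a task loss plus a small fault-tolerance penalty. The abstract only states that extra terms favoring fault tolerance are added with small relative weight; the specific penalty below (squared output deviation under each single stuck-at-0 hidden-unit fault) is an illustrative choice, not the thesis's exact term.

```python
import numpy as np

def fault_tolerance_loss(W_in, W_out, x, target, lam=0.01):
    """Task loss plus lam times a penalty that discourages reliance on
    any single hidden unit. Penalty form is an assumed illustration:
    sum of squared output deviations when each unit is stuck at 0."""
    h = np.tanh(W_in @ x)
    y = W_out @ h
    task = 0.5 * np.sum((y - target) ** 2)
    penalty = 0.0
    for j in range(h.size):        # simulate a stuck-at-0 fault on unit j
        h_f = h.copy()
        h_f[j] = 0.0
        y_f = W_out @ h_f
        penalty += 0.5 * np.sum((y_f - y) ** 2)
    return task + lam * penalty

# Hypothetical sizes: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(1)
W_in = rng.standard_normal((4, 3))
W_out = rng.standard_normal((2, 4))
x = rng.standard_normal(3)
target = np.zeros(2)

loss = fault_tolerance_loss(W_in, W_out, x, target, lam=0.01)
loss0 = fault_tolerance_loss(W_in, W_out, x, target, lam=0.0)
```

Because the penalty is non-negative, the penalized loss never falls below the plain task loss; keeping `lam` small, as the simulations require, keeps the task term dominant.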

Bibliographic details

  • Author

    Phatak, Dhananjay S.

  • Affiliation

    University of Massachusetts Amherst.

  • Degree grantor University of Massachusetts Amherst.
  • Subjects Electrical engineering; Computer science; Artificial intelligence.
  • Degree Ph.D.
  • Year 1994
  • Pages 102 p.
  • Total pages 102
  • Format PDF
  • Language English
