Journal of Low Power Electronics and Applications

Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

Abstract

This paper considers the problem of how to efficiently measure a large and complex information field with as few observations as possible. Specifically, we investigate how to stochastically estimate the modular criticality values of a large-scale digital circuit from a very limited number of measurements, in order to minimize the total measurement effort and time. We prove that, through sparsity-promoting transform-domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of the measurements required by a conventional approach based on exhaustive measurements. Furthermore, we illustrate that the obtained criticality results can be used to selectively fortify large-scale digital circuits, without excessive hardware overhead, for operation with narrow voltage headroom and in the presence of soft errors arising at near-threshold voltage levels. Our numerical simulation results show that, by optimally allocating only 10% circuit redundancy, some large-scale benchmark circuits achieve more than a threefold reduction in overall error probability, whereas randomly distributing the same 10% of hardware resources improves the target circuit's overall robustness by less than 2%. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to their design and analysis, such as the observability measure in reliability analysis and the path-delay estimation in stochastic timing analysis. The only key requirement of our methodology is that these global information fields exhibit a certain degree of smoothness, which holds for almost any physical phenomenon.
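
To make the idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of the core recovery step: a per-module criticality field that is smooth, and therefore sparse in a transform domain (here the DCT), is reconstructed from roughly 10% randomly chosen point measurements by solving an l1-regularized least-squares problem with iterative soft-thresholding (ISTA). The module count, sparsity level, regularization weight, and iteration count are illustrative assumptions, and the Bayesian-learning component of the paper's method is omitted for brevity.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)

N = 512   # number of circuit modules (hypothetical)
M = 51    # roughly 10% of the modules are actually measured
K = 12    # assumed sparsity of the criticality field in the DCT domain

# Synthesize a smooth criticality field from a few low-frequency DCT coefficients.
c_true = np.zeros(N)
c_true[:K] = rng.normal(0.0, 1.0, K)
x_true = idct(c_true, norm="ortho")            # criticality value of every module

# Measure only M randomly chosen modules.
idx = rng.choice(N, size=M, replace=False)
Phi = idct(np.eye(N), axis=0, norm="ortho")    # inverse-DCT synthesis matrix (orthonormal)
A = Phi[idx, :]                                # sensing matrix: subsampled synthesis rows
y = x_true[idx]                                # the few measurements we actually take

# Sparsity-regularized recovery: min_c 0.5*||y - A c||^2 + lam*||c||_1 via ISTA.
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
c_hat = np.zeros(N)
for _ in range(3000):
    grad = A.T @ (A @ c_hat - y)
    z = c_hat - step * grad
    c_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding

# Reconstruct the full field and compare against the (normally unknown) ground truth.
x_hat = idct(c_hat, norm="ortho")
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"measured {M}/{N} modules, relative reconstruction error = {rel_err:.2%}")
```

ISTA is used here only because it keeps the sketch dependency-free beyond NumPy and SciPy; any Lasso-type or Bayesian sparse-recovery solver could be substituted for the recovery loop without changing the measurement model.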