IEEE Transactions on Microwave Theory and Techniques

Device and circuit-level modeling using neural networks with faster training based on network sparsity

Abstract

Recently, circuit analysis and optimization featuring neural-network models have been proposed, reducing the computational time during optimization while retaining the accuracy of physics-based models. We present a novel approach for fast training of such neural-network models based on the sparse-matrix concept. The new training technique requires no change to the network structure; instead, it exploits an inherent property of neural networks: for each training pattern, some neuron activations are close to zero and therefore have no effect on the network outputs or the weight update. Much of the computational effort of standard training techniques is saved while the same accuracy is achieved. FET device and VLSI interconnect modeling examples verify the proposed technique.
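The core idea in the abstract — skipping neurons whose activations are near zero, since they contribute negligibly to the outputs and to the gradient — can be illustrated with a minimal sketch. This is not the authors' exact algorithm; the network shape, threshold `eps`, and learning rate are assumptions for illustration. One backprop step on a one-hidden-layer network restricts both the output-layer and hidden-layer updates to the "active" neurons:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step_sparse(x, y, W1, b1, W2, b2, lr=0.1, eps=1e-3):
    """One backprop step that skips hidden neurons whose activation is
    near zero for this pattern (a sketch of sparsity-aware training)."""
    h = sigmoid(W1 @ x + b1)        # hidden activations for this pattern
    active = h > eps                # mask: neurons worth updating
    y_hat = W2 @ h + b2             # linear output layer
    err = y_hat - y                 # output error

    # Hidden-layer delta, computed only for active neurons
    # (inactive neurons have h ~ 0, so their gradient terms vanish).
    delta_h = (W2[:, active].T @ err) * h[active] * (1.0 - h[active])

    # Weight updates restricted to the active columns/rows.
    W2[:, active] -= lr * np.outer(err, h[active])
    b2 -= lr * err
    W1[active, :] -= lr * np.outer(delta_h, x)
    b1[active] -= lr * delta_h
    return 0.5 * float(err @ err)   # squared-error loss for monitoring
```

For patterns where many hidden activations fall below `eps`, the masked outer products touch only a fraction of the weight matrices, which is where the computational saving over a dense update comes from; the update itself matches standard backprop on the active subnetwork.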
