IEEE Transactions on Neural Networks and Learning Systems

Using Digital Masks to Enhance the Bandwidth Tolerance and Improve the Performance of On-Chip Reservoir Computing Systems



Abstract

Reservoir computing (RC) is a computing scheme related to recurrent neural network theory. As a model for neural activity in the brain, it attracts a lot of attention, especially because of its very simple training method. However, building a functional, on-chip, photonic implementation of RC remains a challenge. Scaling delay lines down from optical fiber scale to chip scale results in RC systems that compute faster, but at the same time requires that the input signals be scaled up in speed, which might be impractical or expensive. In this brief, we show that this problem can be alleviated by a masked RC system in which the amplitude of the input signal is modulated by a binary-valued mask. For a speech recognition task, we demonstrate that the necessary input sample rate can be a factor of 40 smaller than in a conventional RC system. In addition, we also show that linear discriminant analysis and input matrix optimization are a well-performing alternative to linear regression for reservoir training.
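The masking idea described in the abstract can be illustrated with a short sketch: each slow input sample is held for several reservoir time steps while a fixed binary-valued mask modulates its amplitude, so the reservoir sees a fast, varied drive without the input source itself running faster. The function and variable names below are illustrative, not taken from the paper, and the mask length and alphabet ({+1, −1}) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_binary_mask(u, mask):
    """Hold each input sample for len(mask) reservoir steps and
    modulate its amplitude with a fixed binary-valued mask."""
    held = np.repeat(u, len(mask))        # sample-and-hold: each sample lasts M steps
    return held * np.tile(mask, len(u))   # element-wise binary modulation

u = rng.standard_normal(5)                # slow input signal (5 samples)
mask = rng.choice([-1.0, 1.0], size=8)    # binary mask, one entry per virtual node
x = apply_binary_mask(u, mask)            # fast masked drive: 5 * 8 = 40 samples
```

In this sketch the input source only needs to produce `len(u)` samples, while the masked sequence `x` driving the delay-line reservoir is `len(mask)` times longer, which is the mechanism that lets the input sample rate stay well below the reservoir's internal rate.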


