International Conference on Artificial Intelligence

Separation and Deconvolution of Speech Using Recurrent Neural Networks



Abstract

This paper focuses on improving Speech Recognition or Speech Reading (SR) by combining multiple auditory sources. We present results obtained in the traditional Blind Signal Separation and Deconvolution (BSS) paradigm, using two speaker signals in a two-source setting and investigating artificial linear and convolutive mixtures as well as real recordings. The adaptive algorithm is based on two-dimensional (2D) system theory using recurrent neural networks (RNNs). The structure of an RNN matches the characteristics of convolutively mixed signals (e.g., audio signals): its feedback paths provide a memory of the signals at the relevant delays, so that better separation can be achieved. The cross-correlations of the RNN outputs serve as the separation criterion.
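The separation criterion described above, driving the cross-correlation between the demixing network's outputs toward zero, can be illustrated in a much simpler setting. The sketch below is not the paper's RNN method: it handles only an instantaneous (non-convolutive) linear mixture, uses synthetic stand-in sources rather than speech, and separates by whitening, which zeroes the output cross-covariance exactly. The paper's recurrent network generalizes this idea to convolutive mixtures via delayed feedback.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two synthetic, statistically independent stand-in sources
# (hypothetical placeholders for the two speaker signals).
s1 = rng.laplace(0.0, 1.0, n)
s2 = rng.exponential(1.0, n) - 1.0
S = np.vstack([s1, s2])

# Instantaneous mixing matrix (the simplest BSS case; the paper
# additionally treats convolutive mixtures, which this sketch does not).
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S  # observed mixtures

# Decorrelation-based demixing: whiten the observations so the
# cross-covariance of the outputs becomes zero, mirroring the
# cross-correlation separation criterion in the abstract.
C = np.cov(X)
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs.T / np.sqrt(eigvals)[:, None]  # whitening matrix
Y = W @ X

cross = np.cov(Y)[0, 1]  # output cross-covariance, ~0 after whitening
print(f"output cross-covariance: {cross:.2e}")
```

Note that decorrelation alone determines the outputs only up to a rotation, which is why practical BSS methods, including the RNN approach here, need additional structure (such as memory at multiple delays) to resolve the sources fully.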
