2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)

Listening while speaking: Speech chain by deep learning



Abstract

Despite the close relationship between speech perception and production, research in automatic speech recognition (ASR) and text-to-speech synthesis (TTS) has progressed more or less independently, without exerting much mutual influence. In human communication, on the other hand, a closed-loop speech chain mechanism with auditory feedback from the speaker's mouth to her ear is crucial. In this paper, we take a step further and develop a closed-loop speech chain model based on deep learning. The sequence-to-sequence model in a closed-loop architecture allows us to train our model on the concatenation of both labeled and unlabeled data. While ASR transcribes the unlabeled speech features, TTS attempts to reconstruct the original speech waveform based on the text from ASR. In the opposite direction, ASR also attempts to reconstruct the original text transcription given the synthesized speech. To the best of our knowledge, this is the first deep learning model that integrates human speech perception and production behaviors. Our experimental results show that the proposed approach significantly improves performance over separate systems trained only on labeled data.
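The closed-loop idea described in the abstract can be sketched as a reconstruction objective on unlabeled speech: ASR maps speech features to a text representation, TTS maps that representation back to speech, and both models are updated to minimize the reconstruction error. The sketch below uses toy linear "models" and plain gradient descent purely for illustration; the function names, dimensions, and update rule are assumptions, not the authors' actual sequence-to-sequence implementation.

```python
# Minimal sketch of one direction of the speech chain on unlabeled speech:
# speech -> ASR -> text embedding -> TTS -> reconstructed speech.
# Names and shapes are illustrative, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "models": ASR maps speech features (dim 8) to a text
# embedding (dim 4); TTS maps the text embedding back to speech features.
W_asr = rng.normal(size=(4, 8)) * 0.1
W_tts = rng.normal(size=(8, 4)) * 0.1

def asr(speech):
    """Transcribe: speech features -> text embedding."""
    return W_asr @ speech

def tts(text):
    """Synthesize: text embedding -> speech features."""
    return W_tts @ text

def speech_chain_step(speech_unlabeled, lr=0.05):
    """One closed-loop update on unlabeled speech: reconstruct the input
    through ASR followed by TTS and descend the MSE reconstruction loss."""
    global W_asr, W_tts
    text_hat = asr(speech_unlabeled)
    speech_hat = tts(text_hat)
    err = speech_hat - speech_unlabeled            # reconstruction error
    loss = float(np.mean(err ** 2))
    # Gradients of the mean-squared reconstruction loss w.r.t. both models.
    g_tts = np.outer(err, text_hat) * (2 / err.size)
    g_asr = np.outer(W_tts.T @ err, speech_unlabeled) * (2 / err.size)
    W_tts -= lr * g_tts
    W_asr -= lr * g_asr
    return loss

speech = rng.normal(size=8)                        # one unlabeled utterance
losses = [speech_chain_step(speech) for _ in range(200)]
print(round(losses[0], 4), round(losses[-1], 4))   # loss should shrink
```

The opposite direction in the paper is symmetric: unlabeled text goes through TTS, and ASR is trained to reconstruct the original transcription from the synthesized speech, so both models improve from data that neither could use alone.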

