Multi-Stream End-to-End Speech Recognition

Abstract

Attention-based methods and Connectionist Temporal Classification (CTC) networks have been promising research directions for end-to-end (E2E) Automatic Speech Recognition (ASR). The joint CTC/Attention model has achieved great success by utilizing both architectures during multi-task training and joint decoding. In this article, we present a multi-stream framework based on joint CTC/Attention E2E ASR, with parallel streams represented by separate encoders aiming to capture diverse information. On top of the regular attention networks, a Hierarchical Attention Network (HAN) is introduced to steer the decoder toward the most informative encoders. A separate CTC network is assigned to each stream to enforce monotonic alignments. Two representative frameworks are proposed and discussed: the Multi-Encoder Multi-Resolution (MEM-Res) framework and the Multi-Encoder Multi-Array (MEM-Array) framework. In the MEM-Res framework, two heterogeneous encoders with different architectures and temporal resolutions, each with its own CTC network, work in parallel to extract complementary information from the same acoustics. Experiments are conducted on Wall Street Journal (WSJ) and CHiME-4, resulting in relative Word Error Rate (WER) reductions of 18.0-32.1% and a best WER of 3.6% on the WSJ eval92 test set. The MEM-Array framework aims at improving far-field ASR robustness using multiple microphone arrays, each handled by a separate encoder. Compared with the best single-array results, the proposed framework achieves relative WER reductions of 3.7% and 9.7% on the AMI and DIRHA multi-array corpora, respectively, outperforming conventional fusion strategies.
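The core fusion idea in the abstract can be sketched compactly: each stream's encoder output is summarized into a context vector by frame-level attention, and the HAN then attends over those per-stream contexts to weight the most informative encoder. The following is a minimal NumPy sketch of that two-level attention; the bilinear scoring, parameter shapes, and random inputs are illustrative assumptions, not the paper's exact network definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frame_attention(enc, query, W):
    # enc: (T, d) encoder frames for one stream; query: (d,) decoder state.
    # Bilinear scoring (an illustrative choice) -> frame weights -> context.
    scores = enc @ W @ query          # (T,)
    alpha = softmax(scores)           # frame-level attention weights
    return alpha @ enc                # context vector, shape (d,)

def hierarchical_fusion(streams, query, W, v):
    # streams: list of (T_i, d) encoder outputs, one per parallel stream.
    # First level: a context per stream; second level (HAN): weights over streams.
    contexts = np.stack([frame_attention(s, query, W) for s in streams])  # (N, d)
    beta = softmax(contexts @ v)      # stream-level (HAN) attention weights, (N,)
    return beta @ contexts, beta      # fused context (d,), stream weights

# Toy example: two streams with different temporal resolutions (6 vs. 9 frames).
rng = np.random.default_rng(0)
d = 4
streams = [rng.normal(size=(6, d)), rng.normal(size=(9, d))]
query = rng.normal(size=d)
W = rng.normal(size=(d, d))         # hypothetical frame-attention parameters
v = rng.normal(size=d)              # hypothetical HAN scoring vector
fused, beta = hierarchical_fusion(streams, query, W, v)
```

Because the stream weights `beta` come from a softmax, they are positive and sum to one, so the decoder receives a convex combination of the per-stream contexts; in the full model these weights are recomputed at every decoding step.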
