International Conference on Computational Linguistics

Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation



Abstract

We introduce the dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of this architecture, corresponding to two different levels of dependency between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously reported highest translation performance in the multilingual settings, and also outperform bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture.
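To make the dual-attention mechanism concrete, here is a minimal PyTorch-style sketch of one dual-decoder layer. All class and argument names (DualDecoderLayer, other, enc_out) are illustrative assumptions, not the authors' code. Each decoder layer runs the usual masked self-attention and attention over the speech encoder output, then an extra dual-attention step whose queries come from this decoder and whose keys and values come from the other decoder's hidden states.

```python
import torch
import torch.nn as nn


class DualDecoderLayer(nn.Module):
    """One decoder layer with an extra dual-attention sub-layer (sketch only)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.src_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Dual-attention: queries from this decoder, keys/values from the
        # hidden states of the *other* decoder (ASR <-> ST).
        self.dual_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

    def forward(self, x, other, enc_out, causal_mask=None):
        # x:       this decoder's states,      (batch, tgt_len, d_model)
        # other:   the other decoder's states, (batch, other_len, d_model)
        # enc_out: speech encoder output,      (batch, src_len, d_model)
        x = self.norms[0](x + self.self_attn(x, x, x, attn_mask=causal_mask)[0])
        x = self.norms[1](x + self.src_attn(x, enc_out, enc_out)[0])
        x = self.norms[2](x + self.dual_attn(x, other, other)[0])
        return self.norms[3](x + self.ffn(x))


# Toy usage: advance the ASR and ST decoders one layer each, with each
# decoder attending to the other's states from the previous layer.
asr_layer, st_layer = DualDecoderLayer(), DualDecoderLayer()
enc_out = torch.randn(2, 50, 512)  # speech encoder output
h_asr, h_st = torch.randn(2, 7, 512), torch.randn(2, 9, 512)
h_asr_next = asr_layer(h_asr, other=h_st, enc_out=enc_out)
h_st_next = st_layer(h_st, other=h_asr, enc_out=enc_out)
```

The parallel and cross variants described in the abstract differ in exactly this wiring, i.e., which of the other decoder's hidden states the dual-attention consumes; the sketch leaves that choice to the caller through the other argument rather than committing to either variant.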
