
Bilingual Word-Embedding for Korean and English Without Word Alignments

Abstract

Although many cross-lingual word embedding models exist for various languages, approaches that support cross-lingual word embedding between languages with different word order and different origins are lacking. In this study, we address the problem of cross-lingual word embedding between Korean and English, which differ in both word order and origin, and conduct experiments to examine its performance. Cross-lingual models require different levels of supervision, and when training across languages with different word order it is essential to reduce preprocessing time. We therefore choose two cross-lingual models for our experiments that rely only on sentence-level alignment. Our results show that cross-lingual embedding for Korean and English is possible without word alignment. We also analyze which bilingual tasks each trained model is suited to by comparing the characteristics of the embeddings each model produces.
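To make the sentence-level alignment idea concrete, the sketch below (not the paper's actual models) illustrates a BiCVM-style objective: each sentence in a parallel pair is represented as the average of its word vectors, and the Korean and English representations of aligned sentences are pulled together by gradient descent. The toy corpus, dimensionality, and learning rate are illustrative assumptions, and a real model would add a contrastive (negative-sampling) term against non-parallel sentences to keep the embeddings from collapsing.

```python
import numpy as np

# Hypothetical toy sentence-aligned Korean-English corpus; only sentence
# pairs are needed, no word-level alignment.
parallel = [
    ("나는 사과를 먹었다", "i ate an apple"),
    ("그는 학교에 갔다", "he went to school"),
]

def build_vocab(sentences):
    vocab = {}
    for s in sentences:
        for w in s.split():
            vocab.setdefault(w, len(vocab))
    return vocab

ko_vocab = build_vocab(s for s, _ in parallel)
en_vocab = build_vocab(s for _, s in parallel)

dim, lr = 50, 0.1
rng = np.random.default_rng(0)
ko_emb = rng.normal(scale=0.1, size=(len(ko_vocab), dim))
en_emb = rng.normal(scale=0.1, size=(len(en_vocab), dim))

def sent_repr(sentence, vocab, emb):
    """Additive sentence representation: mean of the sentence's word vectors."""
    idx = [vocab[w] for w in sentence.split()]
    return emb[idx].mean(axis=0), idx

for epoch in range(100):
    for ko_sent, en_sent in parallel:
        ko_vec, ko_idx = sent_repr(ko_sent, ko_vocab, ko_emb)
        en_vec, en_idx = sent_repr(en_sent, en_vocab, en_emb)
        # Gradient step on 0.5 * ||ko_vec - en_vec||^2: word vectors of
        # aligned sentences are pulled toward each other in the shared space.
        diff = ko_vec - en_vec
        np.subtract.at(ko_emb, ko_idx, lr * diff / len(ko_idx))
        np.add.at(en_emb, en_idx, lr * diff / len(en_idx))

# After training, Korean and English words that co-occur across translations
# end up close together; only the attraction term is shown here.
```

Because the supervision signal is the sentence pair itself, this style of training avoids the costly word-alignment preprocessing that word-level models require, which is the property the abstract highlights for languages with divergent word order such as Korean and English.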
