Knowledge-Based Systems

A resource-light method for cross-lingual semantic textual similarity



Abstract

Recognizing semantically similar sentences or paragraphs across languages is beneficial for many tasks, ranging from cross-lingual information retrieval and plagiarism detection to machine translation. Recently proposed methods for predicting cross-lingual semantic similarity of short texts, however, make use of tools and resources (e.g., machine translation systems, syntactic parsers, or named entity recognizers) that do not exist for many languages (or language pairs). In contrast, we propose an unsupervised and very resource-light approach for measuring semantic similarity between texts in different languages. To operate in the bilingual (or multilingual) space, we project continuous word vectors (i.e., word embeddings) from one language to the vector space of the other language via a linear translation model. We then align words according to the similarity of their vectors in the bilingual embedding space and investigate different unsupervised measures of semantic similarity that exploit bilingual embeddings and word alignments. Requiring only a limited-size set of word translation pairs between the languages, the proposed approach is applicable to virtually any pair of languages for which a corpus large enough to learn monolingual word embeddings exists. Experimental results on three different datasets for measuring semantic textual similarity show that our simple resource-light approach reaches performance close to that of supervised and resource-intensive methods, displaying stability across different language pairs. Furthermore, we evaluate the proposed method on two extrinsic tasks, namely extraction of parallel sentences from comparable corpora and cross-lingual plagiarism detection, and show that it yields performance comparable to that of complex, resource-intensive state-of-the-art models for the respective tasks. (C) 2017 Published by Elsevier B.V.
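
To make the projection step concrete, the sketch below shows how a linear translation model of the kind described in the abstract can be estimated from a limited-size set of word translation pairs: a matrix W is fit by least squares so that projected source-language embeddings land close to their target-language counterparts. This is a minimal illustration assuming NumPy and embeddings stored as word-to-vector dictionaries; all identifiers (learn_projection, src_vecs, seed_pairs, ...) are illustrative and not taken from the paper.

import numpy as np

def learn_projection(src_vecs, tgt_vecs, seed_pairs):
    # Keep only translation pairs for which both embeddings exist.
    pairs = [(s, t) for s, t in seed_pairs if s in src_vecs and t in tgt_vecs]
    X = np.vstack([src_vecs[s] for s, t in pairs])  # source vectors, shape (n, d_src)
    Z = np.vstack([tgt_vecs[t] for s, t in pairs])  # target vectors, shape (n, d_tgt)
    # Ordinary least squares: find W minimizing ||X W - Z||_F^2.
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)
    return W

def project(word, src_vecs, W):
    # Map a source-language word vector into the target embedding space.
    return src_vecs[word] @ W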
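Likewise, one plausible instance of the unsupervised, alignment-based similarity measures mentioned in the abstract is to align each word with its most similar counterpart in the other sentence (by cosine similarity in the bilingual embedding space) and average the best-match scores in both directions. The aggregation below is a hedged sketch under that assumption, not necessarily the exact measure used in the paper.

import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def directional_score(vecs_a, vecs_b):
    # Average, over the words of one sentence, of the best cosine match
    # found in the other sentence (a greedy word alignment).
    if not vecs_a or not vecs_b:
        return 0.0
    return float(np.mean([max(cosine(a, b) for b in vecs_b) for a in vecs_a]))

def cross_lingual_similarity(sent_src, sent_tgt, src_vecs, tgt_vecs, W):
    # sent_src / sent_tgt are token lists; W is the projection learned above.
    vecs_a = [src_vecs[w] @ W for w in sent_src if w in src_vecs]  # projected source tokens
    vecs_b = [tgt_vecs[w] for w in sent_tgt if w in tgt_vecs]
    # Symmetric aggregation over both alignment directions.
    return 0.5 * (directional_score(vecs_a, vecs_b) + directional_score(vecs_b, vecs_a))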
