Journal: Computational Intelligence

CONTEXTUAL LANGUAGE MODELS FOR RANKING ANSWERS TO NATURAL LANGUAGE DEFINITION QUESTIONS



Abstract

Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) for responding to definition queries. Typically, systems extract facts relevant to the question from KB articles and then project those facts onto the candidate answers. However, studies have shown that the performance of this kind of method drops sharply whenever the KBs provide only narrow coverage. This work describes a new approach to this problem: it constructs context models for scoring candidate answers, namely statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, these context models are created by capturing the semantics of candidate answers (e.g., "novel," "singer," "coach," and "city"). The work is further extended by investigating the impact on context models of additional linguistic knowledge, such as part-of-speech tagging and named-entity recognition. Results show the effectiveness of context models built from n-gram lexicalized dependency paths, and indicate that such paths are promising context indicators for the presence of definitions in natural-language texts.


