
Vectorial representations of meaning for a computational model of language comprehension.



Abstract

This thesis aims to define and extend a line of computational models for text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that --- models. To the degree that they miss out on information that humans would tap into, they may be improved by considering the human process of language processing in a linguistic, psychological, and cognitive light.

Approaches to constructing vectorial semantic spaces often begin with the distributional hypothesis, i.e., that words can be judged 'by the company they keep.' Typically, words that occur in the same documents are similar, and will have similar vectorial meaning representations. However, this does not in itself provide a way for two distinct meanings to be composed, and it ignores syntactic context.

Both of these problems are solved in Structured Vectorial Semantics (SVS), a new framework that fully unifies vectorial semantics with syntactic parsing. Most approaches that try to combine syntactic and semantic information lack either a cohesive semantic component or a full-fledged parser, but SVS integrates both. Thus, in the SVS framework, interpretation is interactive, considering both syntax and semantics simultaneously.

Cognitively plausible language models would also be incremental, support linear-time inference, and operate in only a bounded store of short-term memory. Each of these characteristics is supported by right-corner Hierarchical Hidden Markov Model (HHMM) parsing; therefore, SVS will be transformed into right-corner form and mapped to an HHMM parser. The resulting representation will then encode a psycholinguistically plausible incremental SVS language model.
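The distributional hypothesis mentioned above can be illustrated with a minimal sketch: represent each word as a vector of its per-document occurrence counts, so that words which keep the same "company" (co-occur in the same documents) end up with similar vectors. The toy corpus and variable names here are illustrative assumptions, not taken from the thesis, and this document-level co-occurrence scheme is only the simplest of the vector-space constructions the abstract alludes to.

```python
from collections import Counter
from math import sqrt

# Toy corpus: each "document" is a short list of tokens (illustrative only).
docs = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "stocks fell on the market".split(),
    "the market rallied as stocks rose".split(),
]

# Distributional sketch: one vector component per document,
# holding that word's occurrence count in the document.
vocab = sorted({w for d in docs for w in d})
counts = [Counter(d) for d in docs]          # Counter returns 0 for absent words
vectors = {w: [c[w] for c in counts] for w in vocab}

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Words sharing documents score higher than words that never co-occur.
print(cosine(vectors["cat"], vectors["dog"]))     # shared document -> positive
print(cosine(vectors["cat"], vectors["stocks"]))  # disjoint documents -> 0.0
```

Note that nothing in this construction says how the vectors for "the" and "cat" should combine into a vector for the phrase "the cat", nor does it distinguish syntactic roles — exactly the two gaps (composition and syntactic context) that the SVS framework is designed to close.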

Bibliographic Record

  • Author

    Wu, Stephen Tze-Inn.

  • Author affiliation

    University of Minnesota.

  • Degree-granting institution: University of Minnesota.
  • Subjects: Language, Linguistics; Artificial Intelligence; Computer Science.
  • Degree: Ph.D.
  • Year: 2010
  • Pages: 147 p.
  • Format: PDF
  • Language: English

