
A neural network model for the representation of natural language.


Abstract

Current research in natural language processing demonstrates the importance of analyzing syntactic relationships (such as word order, topicalization, passivization, dative movement, particle movement, and pronominalization) as dynamic resonant patterns of neuronal activation (Loritz, 1999). Following this line of research, this study demonstrates the importance of also analyzing conceptual relationships (such as polysemy, homonymy, ambiguity, metaphor, neologism, and coreference) as dynamic resonant patterns of neuronal activation. This view has implications for the representation of natural language (NL). By contrast, formal representation methods abstract away from the actual properties of real-time natural language input, and rule-based systems have limited representational power.

Since NL is a human neurocognitive phenomenon, we presume that it is best represented in a neural network model. This study focuses on a neural network simulation, the Cognitive Linguistic Adaptive Resonant Network (CLAR-NET), a model of online, real-time associations among concepts. The CLAR-NET model is a simulated Adaptive Resonance Theory (ART; Grossberg, 1972 et seq.) model. Through a series of experiments, I address particular linguistic problems such as homonymy, neologism, polysemy, metaphor, constructional polysemy, contextual coreference, subject-object control, event-structure metaphor, and negation. The aim of this study is to infer natural-language-specific mappings of concepts in the human neurocognitive system on the basis of known facts and observations provided by conceptual metaphor theory (CMT) and adaptive grammar (AG; Loritz, 1999), theories of linguistic analysis, and known variables drawn from the brain and cognitive sciences as well as from previous neural network systems built for similar purposes.
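The dissertation's CLAR-NET simulation itself is not reproduced in this abstract, but the ART family of models it builds on can be illustrated with a minimal sketch of ART-1, the binary fast-learning variant. Everything below (the function name, the parameter values for vigilance `rho` and choice tie-breaking `alpha`) is an illustrative assumption, not the author's implementation:

```python
# Minimal ART-1 sketch: binary inputs are matched against learned category
# prototypes; a vigilance test decides whether to resonate (and refine the
# prototype) or to reset and recruit a new category.

def art1(patterns, rho=0.5, alpha=0.1):
    """Cluster binary patterns; return (prototypes, assignments)."""
    prototypes = []   # learned LTM weight vectors (binary)
    assignments = []  # category index chosen for each input
    for p in patterns:
        # Rank categories by the choice function T_j = |p AND w_j| / (alpha + |w_j|)
        order = sorted(
            range(len(prototypes)),
            key=lambda j: -sum(a & b for a, b in zip(p, prototypes[j]))
                          / (alpha + sum(prototypes[j])),
        )
        for j in order:
            overlap = [a & b for a, b in zip(p, prototypes[j])]
            # Vigilance test: does the matched prototype cover enough of the input?
            if sum(overlap) / sum(p) >= rho:
                prototypes[j] = overlap       # fast learning: w_j <- p AND w_j
                assignments.append(j)
                break
        else:
            prototypes.append(list(p))        # reset everywhere: new category
            assignments.append(len(prototypes) - 1)
    return prototypes, assignments
```

Fed the three toy patterns `[1,1,1,0,0,0]`, `[1,1,0,0,0,0]`, and `[0,0,0,1,1,1]`, the first two overlapping patterns resonate with one category while the disjoint third recruits a new one, which is the basic reset-versus-resonance dynamic the abstract describes.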
Additionally, this study investigates the extent to which these linguistic phenomena can be plausibly analyzed and accounted for within an ART-like neural network model.

My basic hypothesis is that the association among concepts is primarily an expression of domain-general cognitive mechanisms that depend on continuous learning both of previously presented linguistic input and of everyday, direct experiential (i.e., sensory-physical) behaviors represented in natural language as "common knowledge" (or "common sense"). According to this hypothesis, complex conceptual representations are associated not with pre-postulated feature structures but with time-sensitive dynamic patterns of activation. These patterns can reinforce previous learning and/or create new "place-holders" in the conceptual system for future value binding.

This line of investigation holds implications for language learning, neurolinguistics, metaphor theory, information retrieval, knowledge engineering, case-based reasoning, knowledge-based machine translation systems, and related ontologies.

This study finds that although short-term memory (STM) effects in ART-like networks are significant, long-term memory (LTM) calculation usually yields better semantic discrimination. It is suggested that the internal structure of lexical frames corresponding to clusters of congenial associations (in effect, neuronal subnetworks) is maintained as long as it resonates with new input patterns or is learned in long-term memory traces. Different degrees of similarity to (or deviation from) previously acquired knowledge clusters are computed as activation levels of the corresponding neuronal nodes and may be measured via differential equations of neuronal activity.

The overall conclusion is that ART-like networks can model interesting linguistic phenomena in a neurocognitively plausible way.
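The "differential equations of neuronal activity" invoked above are, in the ART tradition, typically shunting (membrane) equations of the Grossberg form dx/dt = -Ax + (B - x)E - xI, whose solutions stay bounded in [0, B] however large the excitatory input. The sketch below integrates one such equation to its equilibrium; the parameter values are illustrative assumptions, not values from the dissertation:

```python
# Shunting STM equation in the Grossberg/ART tradition:
#   dx/dt = -A*x + (B - x)*E - x*I
# Excitation E drives activation x toward the ceiling B, while decay A and
# inhibition I pull it toward 0, so x remains bounded in [0, B].

def shunting_equilibrium(A, B, E, I, dt=0.01, steps=5000):
    """Euler-integrate the shunting equation from rest; return the final x."""
    x = 0.0
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * E - x * I)
    return x

x = shunting_equilibrium(A=1.0, B=1.0, E=2.0, I=0.5)
# Closed-form equilibrium for comparison: x* = B*E / (A + E + I)
```

Setting dx/dt = 0 gives the equilibrium x* = BE / (A + E + I), so graded "degrees of similarity" in the input translate directly into graded, bounded activation levels of the corresponding nodes.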
