
Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax



Abstract

Whether pattern-parsing mechanisms are specific to language or apply across multiple cognitive domains remains unresolved. Formal language theory provides a mathematical framework for classifying pattern-generating rule sets (or “grammars”) according to complexity. This framework applies to patterns at any level of complexity, stretching from simple sequences, to highly complex tree-like or net-like structures, to any Turing-computable set of strings. Here, we explored human pattern-processing capabilities in the visual domain by generating visual sequences made up of abstract tiles differing in form and color. We constructed different sets of sequences, using artificial “grammars” (rule sets) at three key complexity levels. Because human linguistic syntax is classed as “mildly context-sensitive,” we specifically included a visual grammar at this complexity level. Acquisition of these three grammars was tested in an artificial grammar-learning paradigm: after exposure to a set of well-formed strings, participants were asked to discriminate novel grammatical patterns from non-grammatical patterns. Participants successfully acquired all three grammars after only minutes of exposure, correctly generalizing to novel stimuli and to novel stimulus lengths. A Bayesian analysis excluded multiple alternative hypotheses and showed that the success in rule acquisition applies both at the group level and for most participants analyzed individually. These experimental results demonstrate rapid pattern learning for abstract visual patterns, extending to the mildly context-sensitive level characterizing language. We suggest that a formal equivalence of processing at the mildly context-sensitive level in the visual and linguistic domains implies that cognitive mechanisms with the computational power to process linguistic syntax are not specific to the domain of language, but extend to abstract visual patterns with no meaning.
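The three complexity levels the abstract refers to can be illustrated with string-generation sketches. The abstract does not name the specific grammars used in the study, so the patterns below ((AB)ⁿ for regular, AⁿBⁿ for context-free, and AⁿBⁿCⁿ for beyond-context-free, the canonical test pattern for mildly context-sensitive power) are illustrative assumptions drawn from the standard artificial-grammar-learning literature, not the paper's actual stimuli.

```python
def regular(n: int) -> str:
    """(AB)^n: generable by a finite-state (regular) grammar."""
    return "AB" * n

def context_free(n: int) -> str:
    """A^n B^n: requires counting/center-embedding; context-free but not regular."""
    return "A" * n + "B" * n

def crossed_dependency(n: int) -> str:
    """A^n B^n C^n: not context-free; within reach of mildly
    context-sensitive formalisms, the class argued to cover
    natural-language syntax."""
    return "A" * n + "B" * n + "C" * n

# Example well-formed strings at each level for lengths n = 1..3
for n in range(1, 4):
    print(regular(n), context_free(n), crossed_dependency(n))
```

In the experiment, the A/B/C symbols would correspond to visual tile categories (differing in form and color) rather than letters; participants judge whether a novel sequence conforms to the exposed pattern.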
