Journal of Cognitive Neuroscience

Beat that Word: How Listeners Integrate Beat Gesture and Focus in Multimodal Speech Discourse

Abstract

Communication is facilitated when listeners allocate their attention to important information (focus) in the message, a process called “information structure.” Linguistic cues like the preceding context and pitch accent help listeners to identify focused information. In multimodal communication, relevant information can be emphasized by nonverbal cues like beat gestures, which represent rhythmic nonmeaningful hand movements. Recent studies have found that linguistic and nonverbal attention cues are integrated independently in single sentences. However, it is possible that these two cues interact when information is embedded in context, because context allows listeners to predict what information is important. In an ERP study, we tested this hypothesis and asked listeners to view videos capturing a dialogue. In the critical sentence, focused and nonfocused words were accompanied by beat gestures, grooming hand movements, or no gestures. ERP results showed that focused words are processed more attentively than nonfocused words as reflected in an N1 and P300 component. Hand movements also captured attention and elicited a P300 component. Importantly, beat gesture and focus interacted in a late time window of 600–900 msec relative to target word onset, giving rise to a late positivity when nonfocused words were accompanied by beat gestures. Our results show that listeners integrate beat gesture with the focus of the message and that integration costs arise when beat gesture falls on nonfocused information. This suggests that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.
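
The interaction reported above is a windowed amplitude measure: a late positivity in the 600–900 msec window after target word onset. As a minimal sketch of how such a measure could be computed, the Python/NumPy snippet below averages condition-mean ERPs over that window; the sampling rate, epoch onset, channel count, and all variable names are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the study).
SFREQ = 500    # sampling rate in Hz
TMIN = -0.2    # epoch start relative to target word onset, in seconds

def mean_amplitude(erp, t_start, t_end):
    """Mean amplitude per channel of a (channels x samples) ERP
    within the [t_start, t_end] latency window (in seconds)."""
    i0 = int(round((t_start - TMIN) * SFREQ))
    i1 = int(round((t_end - TMIN) * SFREQ))
    return erp[:, i0:i1].mean(axis=1)

# Placeholder condition-averaged ERPs: 32 channels, 1.4 s epochs
# (-200 to 1200 msec). Real data would come from the EEG recordings.
rng = np.random.default_rng(0)
erp_focus_beat = rng.normal(size=(32, 700))
erp_nonfocus_beat = rng.normal(size=(32, 700))

# Late positivity window from the abstract: 600-900 msec post word onset.
late_focus = mean_amplitude(erp_focus_beat, 0.600, 0.900)
late_nonfocus = mean_amplitude(erp_nonfocus_beat, 0.600, 0.900)
print("Mean late-window difference (nonfocused - focused, beat condition):",
      (late_nonfocus - late_focus).mean())
```

Averaging over a fixed latency window yields one value per channel and condition, which can then enter a standard repeated-measures comparison across the focus and gesture conditions.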