The Journal of the Acoustical Society of America

Specification of cross-modal source information in isolated kinematic displays of speech



Abstract

Information about the acoustic properties of a talker’s voice is available in optical displays of speech, and vice versa, as evidenced by perceivers’ ability to match faces and voices based on vocal identity. The present investigation used point-light displays (PLDs) of visual speech and sinewave replicas of auditory speech in a cross-modal matching task to assess perceivers’ ability to match faces and voices under conditions when only isolated kinematic information about vocal tract articulation was available. These stimuli were also used in a word recognition experiment under auditory-alone and audiovisual conditions. The results showed that isolated kinematic displays provide enough information to match the source of an utterance across sensory modalities. Furthermore, isolated kinematic displays can be integrated to yield better word recognition performance under audiovisual conditions than under auditory-alone conditions. The results are discussed in terms of their implications for describing the nature of speech information and current theories of speech perception and spoken word recognition.
