Source: NIH Literature > Springer Open Choice

Matching novel face and voice identity using static and dynamic facial images



Abstract

Research investigating whether faces and voices share common source-identity information has offered contradictory results. Accurate face–voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might account for these inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced-choice matching task: they either heard a voice and then saw two faces, or saw a face and then heard two voices. Face–voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. Participants saw two face–voice combinations, presented one after the other, and had to decide which combination belonged to the same identity. As in Experiment 1, only dynamic face–voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face–voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source-identity information. Above-chance static face–voice matching therefore appears to be sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.
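The abstract repeatedly compares two-alternative forced-choice (2AFC) matching accuracy against chance (0.5). As a minimal illustration of such a chance-level test (not the authors' multilevel analysis), the sketch below pools hypothetical 2AFC responses and runs an exact two-sided binomial test against the 0.5 chance level; the trial counts are invented for the example.

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test (small-p method): sum the
    probabilities of all outcomes no more likely than the observed one."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# Hypothetical pooled data: 100 2AFC trials, 70 correct (70% accuracy).
correct, total = 70, 100
p_value = binom_two_sided_p(correct, total, p=0.5)
print(f"accuracy = {correct / total:.2f}, p = {p_value:.2e}")
# A small p indicates matching accuracy reliably above the 0.5 chance level.
```

Note that pooling over trials ignores the clustering the paper's multilevel models account for (repeated measures within participants and within face–voice stimulus identities), so this is only the simplest version of the test.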
