Journal of Mathematical Psychology

Regularized models of audiovisual integration of speech with predictive power for sparse behavioral data



Abstract

Audiovisual integration can facilitate speech comprehension by combining information from lip-reading with auditory speech perception. When incongruent acoustic speech is dubbed onto a video of a talking face, this integration can lead to the McGurk illusion of hearing a different phoneme than that spoken by the voice. Several computational models of the information integration process underlying these phenomena exist. All are based on the assumption that the integration process is, in some sense, optimal. They differ, however, in assuming that it is based on either continuous or categorical internal representations. Here we develop models of audiovisual integration of phonetic information based on an internal representation that is continuous and cyclical. We compare these models to the Fuzzy Logical Model of Perception (FLMP), which is based on a categorical internal representation. Using cross-validation, we show that model evaluation criteria based on goodness-of-fit are poor measures of the models' generalization error, even when they take the number of free parameters into account. We also show that the predictive power of all the models benefits from regularization that limits the precision of the internal representation. Finally, we show that, unlike the FLMP, models based on a continuous internal representation have good predictive power when properly regularized. (c) 2020 Published by Elsevier Inc.
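To make the abstract's contrast between goodness-of-fit and cross-validated predictive power concrete, here is a minimal sketch. It uses the standard two-alternative FLMP fusion rule, toy simulated binary responses, and add-one smoothing as a stand-in for regularization; all names, numbers, and the estimator are illustrative assumptions, not the paper's actual models or dataset.

```python
import numpy as np

def flmp(a, v):
    """FLMP two-alternative fusion: multiply unimodal support values
    for a category and renormalize against the alternative."""
    return a * v / (a * v + (1 - a) * (1 - v))

# Toy sparse behavioral data: 50 simulated binary responses drawn from
# the fused probability of one phoneme category (illustrative values).
rng = np.random.default_rng(1)
a_support, v_support = 0.9, 0.2
p = flmp(a_support, v_support)
trials = rng.random(50) < p

def fit_rate(data):
    # Add-one smoothed response-rate estimate: a simple regularizer
    # that limits the precision of the fitted probability.
    return (data.sum() + 1) / (len(data) + 2)

def nll(prob, data):
    # Negative log-likelihood of Bernoulli responses under prob.
    return -np.sum(np.where(data, np.log(prob), np.log(1 - prob)))

# Goodness of fit: score the model on the same data used to fit it.
in_sample = nll(fit_rate(trials), trials)

# Cross-validation: hold out each trial, fit on the rest, and score
# the held-out trial; the sum estimates generalization error.
cv = sum(nll(fit_rate(np.delete(trials, i)), trials[i:i + 1])
         for i in range(len(trials)))

print(f"in-sample NLL: {in_sample:.2f}, cross-validated NLL: {cv:.2f}")
```

The cross-validated loss comes out larger than the in-sample loss, illustrating the abstract's point that goodness-of-fit understates generalization error on sparse data.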
