Transfer of object category knowledge across visual and haptic modalities: Experimental and computational studies

Cognition: International Journal of Cognitive Psychology

Abstract

We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.
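To make the structure of the model concrete, the sketch below is a minimal illustration, not the authors' implementation. It only mirrors the stated architecture: a modality-independent (multisensory) prototype per category, sensory-specific forward models that predict visual or haptic features from that prototype, and Bayesian inference over categories given features observed through either modality. The linear-Gaussian forward models, the dimensionalities, and all names are illustrative assumptions; because both forward models share the same latent prototypes, a category learned through one modality can be recognized through the other, which is the transfer effect the experiment tests.

# Minimal sketch (assumptions only): multisensory prototypes, sensory-specific
# forward models, and Bayesian categorization from either modality.
import numpy as np

rng = np.random.default_rng(0)

N_CATEGORIES = 3   # hypothetical number of Fribble-like categories
D_LATENT = 4       # dimensionality of the multisensory shape representation (assumed)
D_VISUAL = 6       # dimensionality of visual features (assumed)
D_HAPTIC = 5       # dimensionality of haptic features (assumed)
NOISE_SD = 0.1

# One modality-independent prototype per category (assumed latent shape vectors).
prototypes = rng.normal(size=(N_CATEGORIES, D_LATENT))

# Sensory-specific forward models, here simply linear maps (an assumption).
W_visual = rng.normal(size=(D_VISUAL, D_LATENT))
W_haptic = rng.normal(size=(D_HAPTIC, D_LATENT))

def render(prototype, modality):
    """Predict sensory features from a multisensory representation."""
    W = W_visual if modality == "visual" else W_haptic
    return W @ prototype

def log_likelihood(features, prototype, modality, noise_sd=NOISE_SD):
    """Gaussian likelihood of observed features under a category prototype."""
    residual = features - render(prototype, modality)
    return -0.5 * np.sum(residual ** 2) / noise_sd ** 2

def posterior_over_categories(features, modality):
    """Bayes' rule with a uniform prior over categories."""
    log_post = np.array([log_likelihood(features, p, modality) for p in prototypes])
    log_post -= log_post.max()          # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Cross-modal transfer: an object defined by its latent shape is observed
# haptically, yet classified with the same prototypes that explain its visual
# appearance.
true_category = 1
haptic_obs = render(prototypes[true_category], "haptic") \
    + rng.normal(scale=NOISE_SD, size=D_HAPTIC)
print(posterior_over_categories(haptic_obs, "haptic"))  # mass concentrates on category 1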
