Source: ACM Transactions on Multimedia Computing, Communications and Applications

Using Eye Tracking and Heart-Rate Activity to Examine Crossmodal Correspondences QoE in Mulsemedia


Abstract

Different senses provide us with information at various levels of precision and enable us to construct a more precise representation of the world. Rich multisensory simulations are thus beneficial for comprehension, memory reinforcement, or retention of information. Crossmodal mappings refer to the systematic associations often made between different sensory modalities (e.g., high pitch is matched with angular shapes) and govern multisensory processing. A great deal of research effort has been put into exploring crossmodal correspondences in the field of cognitive science. However, the possibilities they open in the digital world have remained relatively unexplored. Multiple sensorial media (mulsemedia) provides a highly immersive experience to users and enhances their Quality of Experience (QoE) in the digital world. Thus, we consider that studying the plasticity and the effects of crossmodal correspondences in a mulsemedia setup can bring interesting insights about improving the human-computer dialogue and experience. In our experiments, we exposed users to videos with certain visual dimensions (brightness, color, and shape), and we investigated whether the pairing with a crossmodally matching sound (high and low pitch) and the corresponding auto-generated vibrotactile effects (produced by a haptic vest) lead to an enhanced QoE. For this, we captured the eye gaze and the heart rate of users while they experienced mulsemedia, and we asked them to fill in a set of questions targeting their enjoyment and perception at the end of the experiment. Results showed differences in eye-gaze patterns and heart rate between the experimental and the control group, indicating changes in participants' engagement when videos were accompanied by matching crossmodal sounds (this effect was strongest for the video displaying angular shapes paired with high-pitch audio) and transitively generated crossmodal vibrotactile effects.
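The abstract reports between-group differences in heart rate as an engagement signal. A minimal sketch of how such a comparison is commonly made is Welch's t-test on the two groups' mean heart rates; the numbers below are entirely made up for illustration and do not come from the paper.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    with possibly unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical per-participant mean heart rates (bpm):
# "experimental" saw crossmodally matched audio/vibrotactile effects,
# "control" saw the same videos without matched effects.
experimental = [78, 82, 85, 80, 84, 79, 83]
control      = [72, 74, 71, 75, 73, 70, 76]

t = welch_t(experimental, control)
print(f"Welch t = {t:.2f}")
```

A positive t here would mean the (fictitious) experimental group's heart rate was higher on average; in practice the statistic would be paired with degrees of freedom and a p-value, e.g., via `scipy.stats.ttest_ind(..., equal_var=False)`.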
