
Emotion Prediction from Physiological Signals: A Comparison Study Between Visual and Auditory Elicitors



Abstract

Compared with visual stimuli, auditory stimuli have received little attention in emotion prediction from physiological signals. This paper aimed to investigate whether auditory stimuli can serve as effective an elicitor as visual stimuli for emotion prediction using physiological channels. For this purpose, a well-controlled experiment was designed in which standardized visual and auditory stimuli were systematically selected and presented to participants to spontaneously induce various emotions in a laboratory setting. Multiple physiological signals, including facial electromyogram, electroencephalography, skin conductivity and respiration data, were recorded while participants were exposed to the stimulus presentation. Two data mining methods, namely decision rules and k-nearest neighbor based on the rough set technique, were applied to construct emotion prediction models from the features extracted from the physiological data. Experimental results demonstrated that auditory stimuli were as effective as visual stimuli in eliciting emotions, in terms of systematic physiological reactivity. This was evidenced by the best prediction accuracy, quantified by the F_1 measure (visual: 76.2% vs. auditory: 76.1%), across six emotion categories (excited, happy, neutral, sad, fearful and disgusted). Furthermore, we constructed culture-specific (Chinese vs. Indian) prediction models; the results showed that prediction accuracy did not differ significantly between the culture-specific models. Finally, the implications of affective auditory stimuli for human-computer interaction, limitations of the study and suggestions for further research are discussed.
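The sketch below illustrates, in rough outline, the kind of pipeline the abstract describes: classifying six emotion categories from physiological features with k-nearest neighbor and scoring the result with the F_1 measure. It is not the authors' implementation; it uses a plain scikit-learn k-nearest neighbor classifier rather than the rough-set-based variant reported in the paper, and the feature matrix, data shapes and train/test split are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): kNN emotion
# classification from physiological features, evaluated with macro F1.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

# Hypothetical feature matrix: one row per stimulus trial, columns standing in
# for summary features from facial EMG, EEG, skin conductance and respiration.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))      # placeholder features
y = rng.integers(0, 6, size=300)    # 0..5 -> excited, happy, neutral, sad, fearful, disgusted

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Macro-averaged F1 over the six emotion classes, analogous in spirit to the
# figures reported in the abstract (visual: 76.2% vs. auditory: 76.1%).
print("F1 =", f1_score(y_test, pred, average="macro"))
```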

Bibliographic Details

  • Source
    Interacting with Computers | 2014, No. 3 | pp. 285-302 | 18 pages
  • Author Affiliations

    Center for Human Factors and Ergonomics, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, North Spine, Nanyang Avenue, Singapore 639798; The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA;

    Center for Human Factors and Ergonomics, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, North Spine, Nanyang Avenue, Singapore 639798;

    The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA;

    Center for Human Factors and Ergonomics, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, North Spine, Nanyang Avenue, Singapore 639798;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

    human computer interaction; interaction paradigms; empirical studies in HCI;


