
A Satisfaction-based Model for Affect Recognition from Conversational Features in Spoken Dialog Systems



Abstract

Detecting user affect automatically during real-time conversation is the main challenge towards our greater aim of infusing social intelligence into a natural-language, mixed-initiative High-Fidelity (Hi-Fi) audio control spoken dialog agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, in which emotions are subtler. However, subtler emotions are harder to perceive, as is evident in tasks such as labelling and machine prediction. This paper addresses part of this challenge by considering the role of user satisfaction ratings and of conversational/dialog features in discriminating contentment and frustration, two emotions known to be prevalent in spoken human-computer interaction. Given the laboratory constraints, however, users might be positively biased when rating the system, which calls the reliability of the satisfaction data into question. Machine learning experiments were conducted on two datasets, one labelled by users and one by annotators, which were then compared in order to assess the reliability of these datasets. Our results indicated that standard classifiers were significantly more successful in discriminating the abovementioned emotions and their intensities (reflected by user satisfaction ratings) from annotator data than from user data. These results corroborate two points: first, satisfaction data can be used directly as an alternative target variable for modelling affect, and can be predicted exclusively from dialog features; second, this holds only when predicting the abovementioned emotions from annotator data, suggesting that user bias does exist in a laboratory-led evaluation.
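The comparison protocol described above can be sketched as follows: train the same classifier on per-dialog conversational features twice, once against user self-ratings and once against annotator labels, and compare predictive accuracy. This is a minimal, hypothetical illustration, not the paper's actual setup: the feature names (turns, ASR rejections, barge-ins), the toy data, and the use of a leave-one-out 1-nearest-neighbour classifier in place of the paper's standard classifiers are all invented for the sketch.

```python
# Hypothetical sketch of the two-dataset comparison: identical features,
# two label sources (user vs. annotator), same classifier for both.
import math

def euclidean(a, b):
    # Plain Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def loo_1nn_accuracy(features, labels):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    correct = 0
    for i, (x, y) in enumerate(zip(features, labels)):
        # Predict from the nearest neighbour among all other dialogs.
        j = min((k for k in range(len(features)) if k != i),
                key=lambda k: euclidean(x, features[k]))
        correct += labels[j] == y
    return correct / len(features)

# Invented per-dialog features: (turns, ASR rejections, barge-ins).
features = [(5, 0, 0), (6, 1, 0), (12, 4, 3), (11, 3, 2),
            (7, 1, 1), (13, 5, 3), (5, 0, 1), (10, 4, 2)]
annotator_labels = ["content", "content", "frustrated", "frustrated",
                    "content", "frustrated", "content", "frustrated"]
# User self-ratings, hypothetically skewed toward "content" (positive bias).
user_labels = ["content", "content", "content", "frustrated",
               "content", "frustrated", "content", "content"]

print(f"annotator labels: {loo_1nn_accuracy(features, annotator_labels):.2f}")
print(f"user labels:      {loo_1nn_accuracy(features, user_labels):.2f}")
# On this toy data the annotator labels are perfectly predictable from the
# dialog features (1.00) while the biased user labels are not (0.50),
# mirroring the direction of the paper's finding.
```

Under this toy setup, the gap in accuracy between the two label sources is the quantity of interest: a label source that the dialog features cannot predict is, by this protocol, the less reliable target variable.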
