International Conference on Automatic Face and Gesture Recognition

Equipping social robots with culturally-sensitive facial expressions of emotion using data-driven methods



Abstract

Social robots must be able to generate realistic and recognizable facial expressions to engage their human users. Many social robots are equipped with standardized facial expressions of emotion that are widely considered to be universally recognized across all cultures. However, mounting evidence shows that these facial expressions are not universally recognized - for example, they elicit significantly lower recognition accuracy in East Asian cultures than they do in Western cultures. Therefore, without culturally sensitive facial expressions, state-of-the-art social robots are restricted in their ability to engage a culturally diverse range of human users, which in turn limits their global marketability. To develop culturally sensitive facial expressions, novel data-driven methods are used to model the dynamic face movement patterns that convey basic emotions (e.g., happy, sad, angry) in a given culture using cultural perception. Here, we tested whether such dynamic facial expression models, derived in an East Asian culture and transferred to a popular social robot, improved the social robot's social signal generation capabilities with East Asian participants. Results showed that, compared to the social robot's existing set of 'universal' facial expressions, the culturally-sensitive facial expression models are recognized with generally higher accuracy and judged as more human-like by East Asian participants. We also detail the specific dynamic face movements (Action Units) that are associated with high recognition accuracy and judgments of human-likeness, including those that further boost performance. Our results therefore demonstrate the utility of using data-driven methods that employ human cultural perception to derive culturally-sensitive facial expressions that improve the social face signal generation capabilities of social robots. We anticipate that these methods will continue to inform the design of social robots and broaden their usability and global marketability.
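The method described in the abstract centers on dynamic Action Unit (AU) time courses derived from cultural perception and transferred onto a robot's face. As a rough illustration of that transfer step only, the Python sketch below samples hypothetical AU activation curves and maps them onto hypothetical actuator channels; the AU set, the temporal parameterization, and the AU-to-actuator mapping are illustrative assumptions, not the paper's actual models or any specific robot's control interface.

```python
# A minimal sketch (not the authors' code) of mapping a culturally-derived dynamic
# facial expression model -- a set of Action Unit (AU) activation time courses --
# onto a robot's facial actuators. All AU parameters, channel names, and the
# AU-to-actuator mapping below are made-up illustrations.

import numpy as np

def au_time_course(peak_amplitude, peak_time, width, duration=2.0, fps=25):
    """Bell-shaped activation curve for one AU over `duration` seconds."""
    t = np.linspace(0.0, duration, int(duration * fps))
    return peak_amplitude * np.exp(-0.5 * ((t - peak_time) / width) ** 2)

# Hypothetical dynamic model for 'happy': each AU has its own amplitude and timing.
happy_model = {
    "AU6_cheek_raiser":     au_time_course(0.8, peak_time=1.0, width=0.35),
    "AU12_lip_corner_pull": au_time_course(1.0, peak_time=0.9, width=0.40),
    "AU43_eyes_closed":     au_time_course(0.3, peak_time=1.1, width=0.25),
}

# Hypothetical mapping from AUs to robot actuator channels (robot-specific in practice).
AU_TO_ACTUATOR = {
    "AU6_cheek_raiser":     "cheek_servo",
    "AU12_lip_corner_pull": "mouth_corner_servo",
    "AU43_eyes_closed":     "eyelid_servo",
}

def to_actuator_frames(model, au_to_actuator):
    """Convert AU time courses into per-frame actuator command dicts in [0, 1]."""
    n_frames = len(next(iter(model.values())))
    frames = []
    for i in range(n_frames):
        frames.append({au_to_actuator[au]: float(np.clip(curve[i], 0.0, 1.0))
                       for au, curve in model.items()})
    return frames

frames = to_actuator_frames(happy_model, AU_TO_ACTUATOR)
print(f"{len(frames)} frames; first frame: {frames[0]}")
```

In practice, the number of AUs, their temporal dynamics, and the mapping to hardware would come from the culturally-derived perceptual models and the target robot's own control API.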
