IEEE Transactions on Multimedia

Learning Personalized Models for Facial Expression Analysis and Gesture Recognition



Abstract

Facial expression and gesture recognition algorithms are key enabling technologies for human-computer interaction (HCI) systems. State-of-the-art approaches for automatically detecting body movements and analyzing emotions from facial features rely heavily on advanced machine learning algorithms. Most of these methods are designed for the average user, but the "one-size-fits-all" assumption ignores diversity in cultural background, gender, ethnicity, and personal behavior, and limits their applicability in real-world scenarios. A possible solution is to build personalized interfaces, which in practice means learning person-specific classifiers and usually collecting a significant amount of labeled samples for each novel user. As data annotation is a tedious and time-consuming process, in this paper we present a framework for personalizing classification models which does not require labeled target data. Personalization is achieved by devising a novel transfer learning approach. Specifically, we propose a regression framework which exploits auxiliary (source) annotated data to learn the relation between person-specific sample distributions and the parameters of the corresponding classifiers. Then, when considering a new target user, the classification model is computed by simply feeding the associated (unlabeled) sample distribution into the learned regression function. We evaluate the proposed approach in different applications: pain recognition and action unit detection using visual data, and gesture classification using inertial measurements, demonstrating the generality of our method with respect to different input data types and base classifiers. We also show the advantages of our approach in terms of accuracy and computational time with respect to both user-independent approaches and previous personalization techniques.
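The core idea in the abstract — learn a regression from per-user sample distributions to classifier parameters on labeled source users, then obtain a target user's classifier from unlabeled data alone — can be illustrated with a minimal sketch. This is not the paper's actual implementation: the feature mean as distribution descriptor, the regularized least-squares classifiers, and the synthetic per-user offsets are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def user_data(offset, n=200):
    """Synthetic user: features shifted by a per-user offset; labels
    depend on the unshifted latent variable (so the optimal decision
    boundary moves with the user)."""
    z = rng.normal(size=(n, 2))
    x = z + offset
    y = np.where(z[:, 0] > 0, 1.0, -1.0)
    return x, y

def fit_linear(x, y, lam=1e-2):
    """Person-specific linear classifier via regularized least squares
    on +/-1 labels, with a bias term. Returns [w1, w2, b]."""
    xb = np.hstack([x, np.ones((len(x), 1))])
    return np.linalg.solve(xb.T @ xb + lam * np.eye(3), xb.T @ y)

# --- source users: labeled data -> per-user classifiers + descriptors ---
offsets = rng.uniform(-2, 2, size=(20, 2))
descs, params = [], []
for o in offsets:
    x, y = user_data(o)
    descs.append(x.mean(axis=0))      # distribution descriptor (here: mean)
    params.append(fit_linear(x, y))   # classifier parameters for this user
descs, params = np.array(descs), np.array(params)

# --- regression: distribution descriptor -> classifier parameters ---
db = np.hstack([descs, np.ones((len(descs), 1))])
reg = np.linalg.solve(db.T @ db + 1e-3 * np.eye(3), db.T @ params)

# --- target user: only UNLABELED samples are used to build the model ---
x_t, y_t = user_data(np.array([1.5, -0.7]), n=500)  # y_t used only to evaluate
d_t = np.append(x_t.mean(axis=0), 1.0)
w_t = d_t @ reg                                     # personalized parameters
pred = np.sign(np.hstack([x_t, np.ones((500, 1))]) @ w_t)
acc = (pred == y_t).mean()
print(f"target-user accuracy: {acc:.2f}")
```

The target user's classifier is produced without any target labels: only the descriptor of the unlabeled target distribution passes through the learned regression, which is the mechanism the abstract describes.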
