International Conference on Enterprise Information Systems

Facial Expressions Animation in Sign Language based on Spatio-temporal Centroid



Abstract

Systems that use virtual environments with avatars for information communication play a fundamental role in contemporary life. They are even more relevant in the context of supporting sign language communication for accessibility purposes. Although facial expressions provide message context and carry part of the information transmitted, e.g., irony or sarcasm, computational systems usually treat them as a static background feature of a primarily gestural language. This article proposes a novel parametric model for synthesizing complex facial expressions on a 3D avatar, leveraging emotion context. Our technique interpolates base expressions in the geometric animation through centroid control and spatio-temporal data. The proposed method automatically generates complex facial expressions with controllers that use region parameterization, as in the manual models used for sign language representation. Our approach adds emotion to the representation, a determining factor in defining the tone of a message. This work contributes a definition of non-manual markers for a sign language 3D avatar and a refinement of the synthesized message in sign languages, proposing a complete model for facial parameters and synthesis based on interpolation of geometric centroid regions. A dataset of facial expressions was generated using the proposed model and validated with machine learning algorithms. In addition, evaluations conducted with the deaf community showed positive acceptance of the facial expressions and synthesized emotions.
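The centroid-based interpolation the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the facial regions, vertex layout, and the temporal parameter below are illustrative assumptions; a real pipeline would interpolate full vertex geometry driven by the per-region centroids.

```python
# Hedged sketch (assumed structure, not the paper's code): blending a
# neutral pose toward a target expression by interpolating the geometric
# centroid of each facial region over a temporal parameter t in [0, 1].

def lerp(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def centroid(points):
    """Geometric centroid of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def blend_expression(neutral_regions, target_regions, t):
    """Move each facial region's centroid from the neutral pose toward
    the target expression; t is the spatio-temporal blend parameter."""
    blended = {}
    for name, pts in neutral_regions.items():
        c0 = centroid(pts)
        c1 = centroid(target_regions[name])
        blended[name] = lerp(c0, c1, t)
    return blended

# Two toy regions (brow and mouth) in neutral and "surprised" poses;
# the region names and coordinates are made up for illustration.
neutral = {
    "brow":  [(0.0, 1.0, 0.0), (0.2, 1.0, 0.0)],
    "mouth": [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0)],
}
surprised = {
    "brow":  [(0.0, 1.2, 0.0), (0.2, 1.2, 0.0)],
    "mouth": [(0.0, -0.1, 0.0), (0.2, -0.1, 0.0)],
}

frame = blend_expression(neutral, surprised, 0.5)
print(frame["brow"])   # brow centroid halfway toward the raised position
```

Sampling t over a frame sequence yields the animation; a complete model would add an easing curve per region and compose several base expressions, as the paper's parameterization suggests.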

