International Journal of Imaging Systems and Technology

Synthetic Faces: Analysis and Applications

Abstract

Facial animation has been a topic of intensive research for more than three decades. Still, designing realistic facial animations remains a challenging task. Several models and tools have been developed so far to automate the design of faces and facial animations synchronized with speech, emotions, and gestures. In this article, we give a brief overview of existing parameterized facial animation systems. We then turn our attention to facial expression analysis, which we believe is the key to improving realism in animated faces. We report the results of our research on the analysis of facial motion capture data. We use an optical tracking system that extracts the 3D positions of markers attached at specific feature point locations. We capture the movements of these face markers for a talking person. We then form a vector space representation by applying principal component analysis to this data. We call this space "expression and viseme space." As a result, we propose a new parameter space for sculpting facial expressions for synthetic faces. Such a representation not only offers insight into improving the realism of animated faces, but also gives a new way of generating convincing speech animation and blending between several expressions. Expressive facial animation finds a variety of applications ranging from virtual environments to entertainment and games. With the advances in Internet technology, the development of online sales assistants, Web navigation aids, and Web-based interactive tutors is more promising than ever before. We review recent advances in the field of facial animation on the Web, with a detailed look at the requirements for Web-based facial animation systems and various applications.
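The abstract outlines a concrete pipeline: optically track the 3D positions of face markers, stack the coordinates per captured frame, apply principal component analysis, and treat the resulting low-dimensional coordinates as an "expression and viseme space" in which expressions can be sculpted and blended. Below is a minimal sketch of that idea in Python with NumPy; the array layout, function names, and the linear blending step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch (not the paper's code): build an "expression and viseme space" by
# applying PCA to facial motion-capture data. Assumes `frames` is a (T, 3*M)
# array of T captured frames, each holding the concatenated 3D positions of
# M face markers tracked by an optical system.

def build_expression_space(frames, n_components=10):
    """Return the mean marker configuration, principal components, and per-frame coefficients."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]          # basis of the expression/viseme space
    coefficients = centered @ components.T  # low-dimensional coordinates per frame
    return mean, components, coefficients

def blend_expressions(mean, components, coeffs_a, coeffs_b, alpha=0.5):
    """Linearly blend two expressions in the reduced space and map back to marker positions."""
    blended = (1.0 - alpha) * coeffs_a + alpha * coeffs_b
    return mean + blended @ components

# Example with synthetic data: 200 frames of 30 markers (90 coordinates per frame).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(200, 90))
    mean, components, coeffs = build_expression_space(frames, n_components=8)
    face = blend_expressions(mean, components, coeffs[0], coeffs[1], alpha=0.3)
    print(face.shape)  # (90,) -> reconstructed 3D marker positions
```

In this reduced space, each captured frame becomes a short coefficient vector, so sculpting an expression or blending between expressions reduces to interpolating coefficients rather than editing raw marker positions.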