Multimedia Tools and Applications

On creating multimodal virtual humans-real time speech driven facial gesturing



Abstract

Because of the extensive use of diverse computing devices, human-computer interaction design is moving toward user-centric interfaces. This entails incorporating the different modalities humans use in everyday communication. Virtual humans that look and behave believably fit perfectly into the concept of designing interfaces in a more natural, effective, and socially oriented way. In this paper we present a novel method for automatic, speech-driven facial gesturing for virtual humans, capable of real-time performance. The facial gestures covered are various nods and head movements, blinks, eyebrow gestures, and gaze. The mapping from speech to facial gestures is based on prosodic information obtained from the speech signal, and is realized with a hybrid approach combining Hidden Markov Models, rules, and global statistics. We further test the method with an application prototype: a system for speech-driven facial gesturing suitable for virtual presenters. Subjective evaluation of the system confirmed that the synthesized facial movements are consistent and time-aligned with the underlying speech, and thus provide natural behavior of the whole face.
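The abstract describes a hybrid mapping in which Hidden Markov Models turn prosodic cues into facial gestures. As a minimal illustration of the HMM component only, the sketch below decodes a sequence of discretized prosody symbols into gesture states with a log-space Viterbi pass. The state set, prosody symbols, and all probabilities here are invented for illustration; they are not the paper's actual model, categories, or parameters.

```python
import math

# Hypothetical gesture states and prosody symbols (illustrative only).
STATES = ["none", "nod", "eyebrow_raise"]

# Made-up initial, transition, and emission probabilities; in practice
# these would be trained on annotated speech/gesture data.
START = {"none": 0.6, "nod": 0.25, "eyebrow_raise": 0.15}
TRANS = {
    "none":          {"none": 0.70, "nod": 0.15, "eyebrow_raise": 0.15},
    "nod":           {"none": 0.50, "nod": 0.40, "eyebrow_raise": 0.10},
    "eyebrow_raise": {"none": 0.50, "nod": 0.10, "eyebrow_raise": 0.40},
}
EMIT = {
    "none":          {"pitch_rise": 0.2, "pitch_fall": 0.3, "stressed": 0.1, "pause": 0.4},
    "nod":           {"pitch_rise": 0.1, "pitch_fall": 0.3, "stressed": 0.5, "pause": 0.1},
    "eyebrow_raise": {"pitch_rise": 0.6, "pitch_fall": 0.1, "stressed": 0.2, "pause": 0.1},
}

def viterbi(prosody):
    """Decode the most likely gesture state sequence for discretized prosody symbols."""
    # Work in log space to avoid numerical underflow on long utterances.
    scores = [{s: math.log(START[s]) + math.log(EMIT[s][prosody[0]]) for s in STATES}]
    backptr = []
    for symbol in prosody[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: scores[-1][p] + math.log(TRANS[p][s]))
            ptr[s] = prev
            col[s] = scores[-1][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][symbol])
        scores.append(col)
        backptr.append(ptr)
    # Backtrace from the best final state.
    state = max(STATES, key=lambda s: scores[-1][s])
    path = [state]
    for ptr in reversed(backptr):
        path.append(ptr[path[-1]])
    return list(reversed(path))

gestures = viterbi(["pause", "stressed", "stressed", "pitch_rise", "pause"])
print(gestures)  # -> ['none', 'nod', 'nod', 'none', 'none']
```

With these toy parameters, stressed syllables tend to trigger nods while pauses map to a neutral face, which mirrors the kind of prosody-driven behavior the abstract describes; the paper additionally layers rules and global statistics on top of the HMM output.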
