ACM Transactions on Graphics

Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation


Abstract

Authoring dynamic garment shapes for character animation on body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing on dense frames with controllers) or lack keyframe-level control (i.e., physically based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. Instead, we present a deep-learning-based approach for semi-automatic authoring of garment animation, wherein the user provides the desired garment shape in a selection of keyframes, while our system infers a latent representation for its motion-independent intrinsic parameters (e.g., gravity, cloth materials, etc.). Given new character motions, the latent representation allows a plausible garment animation to be generated automatically at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transitions between keyframes on a new motion sequence. Technically, we learn an intrinsic garment space with a motion-driven autoencoder network, where the encoder maps the garment shapes to the intrinsic space under the condition of body motions, while the decoder acts as a differentiable simulator to generate garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate that our system can significantly improve current garment authoring workflows via an interactive user interface. Compared with the standard CG pipeline, our system significantly reduces the ratio of required keyframes from 20% to 1-2%.
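
The abstract describes a motion-driven autoencoder in which the encoder maps a garment shape to a motion-independent intrinsic code under the condition of body motion, and the decoder acts as a differentiable simulator. The sketch below illustrates that structure in PyTorch; it is not the paper's implementation, and every name and dimension in it (MotionConditionedAutoencoder, garment_dim, motion_dim, latent_dim, the random placeholder tensors) is an illustrative assumption.

    import torch
    import torch.nn as nn

    class MotionConditionedAutoencoder(nn.Module):
        """Illustrative motion-driven autoencoder: the encoder maps a garment
        shape to an intrinsic code conditioned on body motion; the decoder
        (a differentiable stand-in for a simulator) maps the code plus a
        motion back to a garment shape."""

        def __init__(self, garment_dim, motion_dim, latent_dim=32, hidden=256):
            super().__init__()
            # Encoder: (garment shape, body motion) -> motion-independent code.
            self.encoder = nn.Sequential(
                nn.Linear(garment_dim + motion_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, latent_dim),
            )
            # Decoder: (intrinsic code, body motion) -> garment shape.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim + motion_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, garment_dim),
            )

        def forward(self, garment, motion):
            z = self.encoder(torch.cat([garment, motion], dim=-1))
            recon = self.decoder(torch.cat([z, motion], dim=-1))
            return recon, z

    # Hypothetical authoring loop: encode two user keyframes, interpolate
    # their intrinsic codes, and decode under a NEW motion sequence. All
    # tensors below are random placeholders for real garment/motion data.
    model = MotionConditionedAutoencoder(garment_dim=3000, motion_dim=72)
    key_shape_0, key_motion_0 = torch.randn(1, 3000), torch.randn(1, 72)
    key_shape_1, key_motion_1 = torch.randn(1, 3000), torch.randn(1, 72)
    new_motion = torch.randn(24, 1, 72)  # 24 frames of a new body motion

    _, z0 = model(key_shape_0, key_motion_0)
    _, z1 = model(key_shape_1, key_motion_1)
    for t, m in zip(torch.linspace(0.0, 1.0, steps=24), new_motion):
        z = (1.0 - t) * z0 + t * z1                       # smooth transition
        frame = model.decoder(torch.cat([z, m], dim=-1))  # garment per frame

Under this sketch, training would presumably minimize reconstruction error over simulated shape/motion pairs; the paper's actual architecture, conditioning scheme, and losses differ in detail.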

