IEEE/CVF Conference on Computer Vision and Pattern Recognition

TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style



Abstract

In this paper, we present TailorNet, a neural model which predicts clothing deformation in 3D as a function of three factors: pose, shape and style (garment geometry), while retaining wrinkle detail. This goes beyond prior models, which are either specific to one style and shape, or generalize to different shapes but produce smooth results, despite being style specific. Our hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles, which makes learning the three factors jointly hard. At the heart of our technique is a decomposition of deformation into a high-frequency and a low-frequency component. While the low-frequency component is predicted from pose, shape and style parameters with an MLP, the high-frequency component is predicted with a mixture of shape-style-specific pose models. The weights of the mixture are computed with a narrow-bandwidth kernel to guarantee that only predictions with similar high-frequency patterns are combined. The style variation is obtained by computing, in a canonical pose, a subspace of deformation which satisfies physical constraints such as inter-penetration avoidance and draping on the body. TailorNet delivers 3D garments which retain the wrinkles of the physics-based simulation (PBS) it is learned from, while running more than 1000 times faster. In contrast to classical PBS, TailorNet is easy to use and fully differentiable, which is crucial for computer vision and learning algorithms. Several experiments demonstrate that TailorNet produces more realistic results than prior work, and even generates temporally coherent deformations on sequences of the AMASS dataset, despite being trained on static poses from a different dataset. To stimulate further research in this direction, we will make a dataset consisting of 55,800 frames, as well as our model, publicly available at https://virtualhumans.mpi-inf.mpg.de/tailornet/.
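The kernel-weighted mixture described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the Gaussian (RBF) form of the kernel, the `sigma` bandwidth, and the displacement array shapes are all assumptions introduced here. The key idea it demonstrates is that a narrow bandwidth makes the weights concentrate on the nearest shape-style prototype, so only high-frequency predictions with similar wrinkle patterns are blended.

```python
import numpy as np

def mixture_weights(query, prototypes, sigma=0.1):
    """Normalized RBF kernel weights over shape-style prototypes.

    A narrow sigma ensures that only prototypes close to the query
    (i.e. with similar high-frequency wrinkle patterns) contribute.
    query:      (d,) shape-style descriptor of the target
    prototypes: (K, d) descriptors of the K shape-style pose models
    """
    d2 = np.sum((prototypes - query) ** 2, axis=1)   # squared distances (K,)
    w = np.exp(-d2 / (2.0 * sigma ** 2))             # unnormalized kernel values
    return w / w.sum()                               # weights summing to 1

def predict_displacement(low_freq, high_freq_preds, weights):
    """Combine the two frequency bands into a per-vertex displacement.

    low_freq:        (V, 3) smooth component (e.g. from an MLP)
    high_freq_preds: (K, V, 3) outputs of the K specialized pose models
    weights:         (K,) mixture weights from mixture_weights()
    """
    high_freq = np.einsum('k,kvd->vd', weights, high_freq_preds)
    return low_freq + high_freq
```

With a narrow bandwidth, a query sitting on one prototype receives essentially all of that prototype's weight, so its fine-wrinkle detail is preserved rather than averaged away; a wide bandwidth would blend many prototypes and smooth the result, which is exactly the failure mode the decomposition avoids.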


