Hair is one of the most recognizable features of the human body: it is essential for digitizing compelling virtual avatars, yet also one of the most challenging parts to create. In this paper, we present a single-view hair modeling technique for generating visually plausible strand-based 3D hair models. This is made possible by an effective high-precision 2D strand tracing algorithm that explicitly models uncertainty and local layering during tracing. The depth of the traced strands is then solved through an optimization that combines a base shape with a 3D helical hair prior. We fit a parametric morphable face model to the input photo and construct the base shape over the face and hair regions using occlusion and silhouette constraints. We then introduce a 3D helical hair prior that captures the geometric structure of hair, and show that it can be recovered robustly and automatically from the 2D strands. We demonstrate that our method can reconstruct a wide variety of hairstyles, from short to long and from straight to messy.
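The helical prior mentioned above rests on a standard geometric fact: a circular helix has constant curvature and torsion, which makes helices a compact local model for curved hair strands. The sketch below is a hypothetical illustration of that property only (it does not reproduce the paper's actual prior or fitting procedure): it samples a helix p(t) = (r cos t, r sin t, c t) and verifies numerically that its discrete curvature matches the analytic value r / (r² + c²).

```python
import numpy as np

def helix(t, r=1.0, c=0.5):
    """Sample points on a circular helix p(t) = (r cos t, r sin t, c t)."""
    return np.stack([r * np.cos(t), r * np.sin(t), c * t], axis=-1)

def discrete_curvature(points):
    """Estimate curvature |p' x p''| / |p'|^3 from finite differences.

    The formula is invariant to the curve's parameterization, so
    differentiating with respect to the sample index is sufficient.
    """
    d1 = np.gradient(points, axis=0)   # first derivative (velocity)
    d2 = np.gradient(d1, axis=0)       # second derivative
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    return np.linalg.norm(cross, axis=1) / speed**3

r, c = 1.0, 0.5
t = np.linspace(0.0, 4.0 * np.pi, 2000)
pts = helix(t, r, c)

# Trim the ends, where one-sided differences are less accurate.
kappa_est = discrete_curvature(pts)[10:-10].mean()
kappa_true = r / (r**2 + c**2)   # analytic curvature of a circular helix
```

Because curvature (and likewise torsion) is constant along a helix, fitting such a model to traced 2D strands gives a low-dimensional, robustly estimable description of local strand geometry, which is the intuition behind using a helical structure as a depth prior.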