The ability to edit the materials of objects in images is desirable to many content creators. However, this is an extremely challenging task, as it requires disentangling the intrinsic physical properties of an image. We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task. Specifically, given a single image, the network first predicts intrinsic properties, i.e. shape, illumination, and material, which are then provided to a rendering layer. This layer performs in-network image synthesis, thereby enabling the network to understand the physics behind the image formation process. The proposed rendering layer is fully differentiable, supports both diffuse and specular materials, and is thus applicable in a variety of problem settings. We demonstrate a rich set of visually plausible material editing examples and provide an extensive comparative study.
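To make the role of the rendering layer concrete, the following is a minimal sketch of what a differentiable rendering layer with diffuse and specular (Phong-style) shading could look like in PyTorch. The class and parameter names (RenderingLayer, spec_albedo, shininess), the tensor layout, and the single directional light are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of a differentiable rendering layer with diffuse and
# specular (Phong-style) shading. Shapes, names, and the single directional
# light are assumptions for illustration only.
import torch
import torch.nn.functional as F


class RenderingLayer(torch.nn.Module):
    def forward(self, normals, albedo, spec_albedo, shininess, light_dir, view_dir):
        # normals:     (B, 3, H, W) per-pixel surface normals (predicted shape)
        # albedo:      (B, 3, H, W) diffuse reflectance (predicted material)
        # spec_albedo: (B, 3, H, W) specular reflectance (predicted material)
        # shininess:   (B, 1, H, W) specular exponent (predicted material)
        # light_dir:   (B, 3)       directional light (predicted illumination)
        # view_dir:    (B, 3)       viewing direction
        n = F.normalize(normals, dim=1)
        l = F.normalize(light_dir, dim=1)[:, :, None, None]
        v = F.normalize(view_dir, dim=1)[:, :, None, None]

        # Diffuse (Lambertian) term: albedo * max(n . l, 0)
        n_dot_l = torch.clamp((n * l).sum(dim=1, keepdim=True), min=0.0)
        diffuse = albedo * n_dot_l

        # Specular (Phong) term: spec_albedo * max(r . v, 0)^shininess,
        # with r the light direction reflected about the normal.
        r = 2.0 * n_dot_l * n - l
        r_dot_v = torch.clamp((r * v).sum(dim=1, keepdim=True), min=0.0)
        specular = spec_albedo * torch.pow(r_dot_v, shininess)

        # Every operation above is differentiable, so gradients flow back to
        # the predicted shape, illumination, and material parameters.
        return diffuse + specular
```

Because the layer is composed entirely of differentiable tensor operations, it can be dropped into the end-to-end pipeline so that an image reconstruction loss supervises the intrinsic predictions.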