The paradigm of image-to-image translation is leveraged for the benefit of sketch stylization via transfer of geometric textural details. Lacking the necessary volumes of data for standard training of translation systems, we advocate for operation at the patch level, where a handful of stylized sketches provide ample mining potential for patches featuring basic geometric primitives. Operating at the patch level necessitates special consideration of full sketch translation, as individual translation of patches with no regard to neighbors is likely to produce visible seams and artifacts at patch borders. Aligned pairs of styled and plain primitives are combined to form input hybrids containing styled elements around the border and plain elements within, and given as input to a seamless translation (ST) generator, whose output patches are expected to reconstruct the fully styled patch. An adversarial addition promotes generalization and robustness to diverse geometries at inference time, forming a simple and effective system for arbitrary sketch stylization, as demonstrated on a variety of styles and sketches.
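The hybrid construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, patch size, and border width are assumptions, and it simply stitches a styled patch's border around a plain patch's interior to form the generator input.

```python
import numpy as np

def make_hybrid_patch(styled: np.ndarray, plain: np.ndarray, border: int = 8) -> np.ndarray:
    """Compose a hybrid patch: border pixels from the styled patch,
    interior pixels from the aligned plain patch.

    `styled` and `plain` are aligned patches of the same primitive
    (shape HxW or HxWxC); `border` is the assumed border width in pixels.
    """
    assert styled.shape == plain.shape, "patches must be aligned and equal-sized"
    h, w = styled.shape[0], styled.shape[1]
    hybrid = styled.copy()
    # Overwrite the interior with the plain primitive, leaving a styled frame.
    hybrid[border:h - border, border:w - border] = plain[border:h - border, border:w - border]
    return hybrid
```

Under this sketch, the generator is trained so that its output on the hybrid reconstructs the fully styled patch, encouraging stylization of the interior that remains consistent with the already-styled border.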