International Conference on Machine Learning

Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

Abstract

Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
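To make the training stage described in the abstract concrete, below is a minimal sketch: a small feed-forward generator is optimized so that the Gram-matrix statistics of its outputs, computed on a fixed pretrained VGG as in Gatys et al., match those of the single example texture. The tiny generator, the chosen VGG layer indices, and the training loop are illustrative assumptions for a PyTorch/torchvision setup, not the paper's exact multi-scale architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

def gram_matrix(feats):
    # feats: (batch, channels, height, width) -> (batch, C, C) Gram matrices
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class TinyGenerator(nn.Module):
    # Hypothetical stand-in for the compact generator; the paper uses a
    # multi-scale convolutional architecture instead.
    def __init__(self, noise_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

# Fixed feature extractor for the texture (Gram) loss.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# VGG19 layer indices used for the loss (an assumption following the
# Gatys et al. choice of relu1_1, relu2_1, relu3_1, relu4_1).
texture_layers = {1, 6, 11, 20}

def texture_features(x):
    # ImageNet mean/std normalization is omitted here for brevity.
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in texture_layers:
            feats.append(gram_matrix(x))
        if i >= max(texture_layers):
            break
    return feats

texture = torch.rand(1, 3, 256, 256)        # placeholder for the example texture
target_grams = [g.detach() for g in texture_features(texture)]

gen = TinyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(1000):                    # learning stage: all heavy optimization happens here
    z = torch.rand(4, 8, 256, 256)          # noise input -> a batch of texture samples
    samples = gen(z)
    loss = sum(nn.functional.mse_loss(g, t.expand_as(g))
               for g, t in zip(texture_features(samples), target_grams))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, new texture samples come from a single forward pass of the generator over a noise input, which is what moves the cost away from the slow per-image optimization of Gatys et al.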
