IEEE Conference on Applications of Computer Vision

Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs



Abstract

We present a novel technique for texture synthesis and style transfer based on convolutional neural networks (CNNs). Our method learns feed-forward image generators that correspond to specification of styles and textures in terms of high-level describable attributes such as 'striped', 'dotted', or 'veined'. Two key conceptual advantages over template-based approaches are that attributes can be analyzed and activated individually, while a template image necessarily represents a simultaneous specification of many attributes, and that attributes can combine aspects of many texture templates allowing flexibility in the generation process. Once the attribute-wise networks are trained, applications to texture synthesis and style transfer are fast, allowing for real-time video processing.
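The abstract describes feed-forward generators conditioned on describable texture attributes, where attributes can be activated individually or blended. A minimal sketch of that conditioning idea, using NumPy and an invented toy linear "generator" (the attribute names come from the abstract; the architecture, dimensions, and weights here are illustrative stand-ins, not the paper's trained CNN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained feed-forward generator: one linear map
# from (noise, attribute code) to an "image" vector.
ATTRS = {"striped": 0, "dotted": 1, "veined": 2}
NOISE_DIM, IMG_DIM = 8, 16
W_noise = rng.normal(size=(IMG_DIM, NOISE_DIM))
W_attr = rng.normal(size=(IMG_DIM, len(ATTRS)))

def attr_code(weights):
    """Build an attribute vector; `weights` maps attribute name -> strength."""
    code = np.zeros(len(ATTRS))
    for name, w in weights.items():
        code[ATTRS[name]] = w
    return code

def generate(noise, code):
    # A single feed-forward pass: no per-image optimization at test time,
    # which is what makes real-time application possible.
    return np.tanh(W_noise @ noise + W_attr @ code)

noise = rng.normal(size=NOISE_DIM)
striped = generate(noise, attr_code({"striped": 1.0}))
blended = generate(noise, attr_code({"striped": 0.5, "dotted": 0.5}))
print(striped.shape, blended.shape)
```

Because the attribute enters as a vector rather than a fixed template image, interpolating codes (as in `blended`) mixes aspects of several textures in one forward pass, which is the flexibility the abstract contrasts with template-based specification.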

