Journal of Visual Communication & Image Representation

Generative image deblurring based on multi-scaled residual adversary network driven by composed prior-posterior loss



Abstract

Conditional Generative Adversarial Networks (CGANs) have been introduced to generate realistic images from extremely degraded inputs. However, without prior knowledge of spatial distributions, these generative models have limited performance on various complex scenes. In this paper, we propose an image deblurring network based on CGANs that generates ideal images without any blurring assumption. To overcome adversarial insufficiency, an extended classifier with different attribute domains is formulated to replace the original discriminator of CGANs. Inspired by residual learning, a set of skip connections is added to transfer multi-scale spatial features to the subsequent high-level operations. Furthermore, this adversarial architecture is driven by a composite loss that integrates the histogram of gradients (HoG) and geodesic distance. In experiments, a uniform adversarial iteration is applied cyclically to reduce image degradation. Extensive results show that the proposed deblurring approach significantly outperforms state-of-the-art methods on both qualitative and quantitative evaluations. (C) 2019 Elsevier Inc. All rights reserved.
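The abstract mentions a composite loss that integrates a histogram of gradients (HoG) term; the paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of one plausible HoG discrepancy term between a generated image and its sharp target. The function names `hog_descriptor` and `hog_loss` are hypothetical, not from the paper.

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Global histogram of oriented gradients of a 2-D grayscale image.

    Gradient magnitudes vote into unsigned-orientation bins over [0, pi);
    the histogram is L2-normalized so the descriptor is contrast-invariant.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def hog_loss(generated, target):
    """L1 distance between HoG descriptors: small when edge statistics match."""
    return float(np.abs(hog_descriptor(generated) - hog_descriptor(target)).sum())
```

In a full composite loss this term would be weighted and summed with the adversarial loss (and, per the abstract, a geodesic-distance term), penalizing generated images whose edge-orientation statistics deviate from the sharp target.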

