Neurocomputing

Dual residual attention module network for single image super resolution


Abstract

Recent studies show that single image super-resolution (SISR) has achieved great success by using deep convolutional neural networks (CNNs). Different types of features obtained in a deep CNN make different contributions. However, most previous models ignore the distinction between different features and treat them in the same way, which limits the representational capacity of the models. On the other hand, receptive fields of different sizes capture diverse features from the input. Based on these considerations, we propose a dual residual attention module (DRAM) network that concentrates on recovering high-frequency details and sharing information between two receptive fields of different sizes. We construct a local information integration (LFI) module as the basic building block to make full use of local information. The LFI module is a cascade of several dual residual attention fusion (DRAF) blocks with a dense connection structure. Feature modulation allows the network to focus on important features and suppress unimportant ones. Evaluation results on five benchmark datasets demonstrate the superiority of our DRAM network over state-of-the-art algorithms. (C) 2019 Elsevier B.V. All rights reserved.
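The abstract describes the architecture only at a high level (two parallel receptive fields of different sizes, fusion, and attention-based feature modulation inside a residual block). The following is a minimal PyTorch sketch of what such a dual-branch residual block with channel attention could look like; the 3x3/5x5 kernel sizes, channel counts, and squeeze-and-excitation style attention are illustrative assumptions, not the authors' exact DRAF block design.

# Minimal sketch of a dual-branch residual block with channel attention,
# loosely following the abstract: two receptive fields of different sizes
# whose outputs are fused, then modulated so important channels are
# emphasized. All concrete choices below are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels (squeeze-and-excitation style, assumed)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))


class DualResidualAttentionBlock(nn.Module):
    """Two parallel branches with different receptive fields, fused by a
    1x1 convolution, modulated by channel attention, plus a skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch_small = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch_large = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.attention = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        small = self.act(self.branch_small(x))   # smaller receptive field
        large = self.act(self.branch_large(x))   # larger receptive field
        fused = self.fuse(torch.cat([small, large], dim=1))
        return x + self.attention(fused)         # residual connection


if __name__ == "__main__":
    block = DualResidualAttentionBlock(channels=64)
    features = torch.randn(1, 64, 48, 48)        # features of a 48x48 LR patch
    print(block(features).shape)                 # torch.Size([1, 64, 48, 48])

In the paper, several such blocks are reportedly cascaded with dense connections to form the LFI module; that wiring is omitted here since the abstract gives no further detail.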
