Pattern Recognition Letters

MSAR-Net: Multi-scale attention based light-weight image super-resolution


Abstract

Recently, single image super-resolution (SISR), which aims to recover the structural and textural information lost in a low-resolution input image, has seen huge demand from the video and graphics industries. The exceptional success of convolutional neural networks (CNNs) has revolutionized the field of SISR. However, most CNN-based SISR methods consume excessive memory in terms of parameters and FLOPs, which hinders their application on low-computing-power devices. Moreover, different state-of-the-art SR methods collect different features by treating all pixels as contributing equally to the performance of the network. In this paper, we consider both performance and reconstruction efficiency, and propose a light-weight multi-scale attention residual network (MSAR-Net) for SISR. The proposed MSAR-Net consists of a stack of multi-scale attention residual (MSAR) blocks for feature refinement, and an up- and down-sampling projection (UDP) block for edge refinement of the extracted multi-scale features. These blocks effectively exploit multi-scale edge information without increasing the number of parameters. Specifically, we design our network in a progressive fashion, substituting large scale factors (x4) with combinations of small scale factors (x2), and thus gradually exploit the hierarchical information. In parallel, channel and spatial attention in the MSAR block modulates the multi-scale features in global and local manners. Visual results and quantitative PSNR and SSIM metrics demonstrate the accuracy of the proposed approach on synthetic benchmark super-resolution datasets. The experimental analysis shows that the proposed approach outperforms existing SISR methods in terms of memory footprint, inference time, and visual quality. (C) 2021 Elsevier B.V. All rights reserved.
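
To make the described design concrete, the following is a minimal PyTorch sketch of how an MSAR-style block might combine multi-scale convolutions with channel attention (global modulation) and spatial attention (local modulation), followed by two progressive x2 upsampling stages to reach x4. The abstract does not specify the architecture, so every module name, channel count, kernel size, and the use of PixelShuffle below are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only (not the paper's code): multi-scale convolutions
    # modulated by channel attention (global) and spatial attention (local),
    # plus progressive x2 upsampling to reach x4. All design choices assumed.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # one global descriptor per channel
            self.fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.fc(self.pool(x))  # re-weight channels globally

    class SpatialAttention(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            avg = x.mean(dim=1, keepdim=True)       # per-pixel channel statistics
            mx, _ = x.max(dim=1, keepdim=True)
            mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * mask                          # re-weight pixels locally

    class MSARBlock(nn.Module):
        """Multi-scale attention residual block (illustrative)."""
        def __init__(self, channels=64):
            super().__init__()
            self.branch3 = nn.Conv2d(channels, channels // 2, 3, padding=1)  # fine scale
            self.branch5 = nn.Conv2d(channels, channels // 2, 5, padding=2)  # coarse scale
            self.fuse = nn.Conv2d(channels, channels, 1)
            self.ca = ChannelAttention(channels)
            self.sa = SpatialAttention()
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            feats = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
            feats = self.act(self.fuse(feats))
            feats = self.sa(self.ca(feats))  # global, then local modulation
            return x + feats                  # residual connection

    class ProgressiveUpsampler(nn.Module):
        """Reaches x4 as two successive x2 PixelShuffle stages (assumed)."""
        def __init__(self, channels=64):
            super().__init__()
            self.stages = nn.Sequential(
                nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
                nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            )

        def forward(self, x):
            return self.stages(x)

    if __name__ == "__main__":
        x = torch.randn(1, 64, 32, 32)                 # dummy feature map
        y = ProgressiveUpsampler()(MSARBlock()(x))
        print(y.shape)                                  # torch.Size([1, 64, 128, 128])

The progressive design mirrors the abstract's point about replacing one large scale factor (x4) with two small ones (x2), letting each stage refine the hierarchical features before further enlargement.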
