The Visual Computer
Fine-grained scale space learning for single image super-resolution

Abstract

Recent deep convolutional neural networks have achieved great reconstruction accuracy for single image super-resolution (SISR). Most of them, however, need to train a specific set of parameters for a single scaling factor or a particular group of scaling factors. This means multiple sets of model parameters have to be used for different scaling factors, each of which can already be very large. In this paper, we study a new problem of fine-grained scale space learning for SISR, which uses one set of parameters while achieving varying scales. Specifically, we aim to use an arbitrary base SISR ×2 model to realize high-quality SISR for a continuous-integer spectrum of scaling factors, e.g., 2 to 8. To this end, for the base scaling factor 2, we first propose low-resolution reconstruction, blind kernel estimation and recursive error compensation, which generate three loss functions that help boost the training quality of the base model.
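As a rough illustration of the low-resolution reconstruction idea, the SR output can be re-degraded and compared against the LR input. The sketch below is only a guess at the shape of such a loss: the abstract does not specify the paper's degradation operator or norm, so 2×2 average pooling and the L1 distance are assumptions here.

```python
import numpy as np

def downsample2(img):
    """Crude 2x2 average pooling, standing in for the (unknown) degradation
    that maps a x2 SR output back to LR resolution."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def lr_reconstruction_loss(sr, lr):
    """Mean absolute error between the re-degraded SR output and the LR input."""
    return float(np.abs(downsample2(sr) - lr).mean())
```

A perfect reconstruction yields zero loss; any mismatch between the re-degraded SR image and the original LR input is penalized directly, without needing the HR ground truth.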
Then, we cascade the boosted SISR ×2 model and extend the low-resolution reconstruction to incorporate multiple LR loss functions covering the scales {3, 4, ..., 2^n}. In this way, the SISR ×2 model can be effectively tuned to work well for continuous-integer scaling factors with exactly the same set of parameters. Extensive experiments verify that our approach enables state-of-the-art methods to realize fine-grained scale space learning for SISR, with higher accuracy and far fewer parameters.
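One plausible way to reach an arbitrary integer scale with only a ×2 model is to apply the model repeatedly until the target is met or exceeded, then remove the overshoot with a resize. The sketch below assumes this cascade-then-downsample scheme; the function names and the nearest-neighbour stand-in for the trained network are hypothetical, not the paper's actual pipeline.

```python
import numpy as np

def toy_x2_model(img):
    """Stand-in for a trained x2 SISR network: nearest-neighbour upsampling.
    Only the output shape matters for this sketch."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def cascade_plan(target, base=2):
    """Run the x`base` model `passes` times until the cumulative scale meets
    or exceeds `target`; `residual` is the leftover factor to downsample by."""
    passes, reached = 0, 1
    while reached < target:
        reached *= base
        passes += 1
    return passes, reached / target

lr = np.zeros((32, 32))
passes, residual = cascade_plan(3)   # two x2 passes, then downsample by 4/3
sr = lr
for _ in range(passes):
    sr = toy_x2_model(sr)
# sr is now 128x128; a bicubic resize by 1/residual would give the 96x96 x3 result
```

For example, ×3 needs two ×2 passes (reaching ×4) followed by a 4/3 downsample, while ×8 is exactly three passes with no residual resize.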
