Journal of Visual Communication & Image Representation

SRLibrary: Comparing different loss functions for super-resolution over various convolutional architectures

Abstract

This study analyzes the effectiveness of various loss functions on performance improvement for Single Image Super-Resolution (SISR) using Convolutional Neural Network (CNN) models, by surrogating the reconstructive map between Low Resolution (LR) and High Resolution (HR) images with convolutional filters. In total, eight loss functions are separately incorporated with the Adam optimizer. Through experimental evaluations on different datasets, it is observed that some parametric and non-parametric robust loss functions promise impressive accuracies, whereas the remaining ones are sensitive to noise that misleads the learning process and consequently results in lower-quality HR outcomes. Eventually, it turns out that the use of either the Difference of Structural Similarity (DSSIM), Charbonnier, or L1 loss function within the optimization mechanism would be a proper choice, considering their excellent reconstruction results. Among them, the Charbonnier and L1 loss functions are the fastest when the computational time cost during the training stage is examined. (C) 2019 Elsevier Inc. All rights reserved.
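The paper's SRLibrary implementation is not reproduced here, but the three recommended losses can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: the SSIM used for DSSIM below is a simplified global-statistics version rather than the windowed variant, and the `eps`, `c1`, and `c2` constants are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between predicted and ground-truth HR images."""
    return np.mean(np.abs(pred - target))

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable approximation of L1.

    For each pixel computes sqrt(diff^2 + eps^2); behaves like L1 for
    large residuals but is smooth near zero, which makes it robust to
    outliers while remaining easy to optimize.
    """
    diff = pred - target
    return np.mean(np.sqrt(diff * diff + eps * eps))

def dssim_loss(pred, target, c1=1e-4, c2=9e-4):
    """Difference of Structural Similarity: (1 - SSIM) / 2.

    Simplified global-statistics SSIM (no sliding window); c1/c2 are
    small stabilizing constants. Returns 0 for identical images and
    grows as structural similarity decreases.
    """
    mx, my = pred.mean(), target.mean()
    vx, vy = pred.var(), target.var()
    cov = ((pred - mx) * (target - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return (1.0 - ssim) / 2.0
```

For residuals well above `eps`, the Charbonnier loss is numerically indistinguishable from L1, which is consistent with the abstract's observation that the two have comparable training-time cost; DSSIM instead scores structural agreement, vanishing when prediction and target are identical.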
