Journal of Visual Communication & Image Representation

SRLibrary: Comparing different loss functions for super-resolution over various convolutional architectures

Abstract

This study analyzes the effectiveness of various loss functions on performance improvement for Single Image Super-Resolution (SISR) using Convolutional Neural Network (CNN) models, by surrogating the reconstructive map between Low Resolution (LR) and High Resolution (HR) images with convolutional filters. In total, eight loss functions are separately incorporated with the Adam optimizer. Through experimental evaluations on different datasets, it is observed that some parametric and non-parametric robust loss functions promise impressive accuracies, whereas the remaining ones are sensitive to noise that misleads the learning process and consequently results in lower-quality HR outcomes. Eventually, it turns out that the use of either the Difference of Structural Similarity (DSSIM), Charbonnier, or L1 loss function within the optimization mechanism would be a proper choice, considering their excellent reconstruction results. Among them, Charbonnier and L1 are the fastest loss functions when the computational time cost during the training stage is examined. (C) 2019 Elsevier Inc. All rights reserved.
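To make the two fastest well-performing losses concrete, here is a minimal NumPy sketch of the L1 and Charbonnier losses as commonly defined in the SISR literature; this is an illustration, not the paper's own code, and the epsilon value is an assumption (small constants such as 1e-3 are typical).

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between the predicted and ground-truth HR images."""
    return np.mean(np.abs(pred - target))

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable relative of L1 that stays
    robust to outliers; eps (assumed value) controls behavior near zero."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

# Example on toy "images": Charbonnier upper-bounds L1 and converges to it
# as the per-pixel error grows relative to eps.
pred = np.array([[0.0, 1.0], [0.5, 0.25]])
target = np.array([[0.0, 0.5], [0.5, 0.75]])
print(l1_loss(pred, target), charbonnier_loss(pred, target))
```

Because sqrt(d^2 + eps^2) >= |d| for every pixel difference d, the Charbonnier value is always at least the L1 value, and for a perfect reconstruction it reduces to eps rather than zero.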
