IEEE International Conference on Acoustics, Speech and Signal Processing

On Divergence Approximations for Unsupervised Training of Deep Denoisers Based on Stein’s Unbiased Risk Estimator



Abstract

Recently, there have been several works on unsupervised learning for training deep-learning-based denoisers without clean images. Approaches based on Stein's unbiased risk estimator (SURE) have shown promising results for training deep Gaussian denoisers. However, their performance is sensitive to the choice of hyper-parameter used in approximating the divergence term in the SURE expression. In this work, we briefly study the computational efficiency of Monte-Carlo (MC) divergence approximation compared with recently available exact divergence computation using backpropagation. Then, we investigate the relationship between the smoothness of nonlinear activation functions in deep denoisers and the robustness of divergence-term approximations. Lastly, we propose a new divergence term that contains no hyper-parameters. Both unsupervised training methods yield performance comparable to supervised training with ground truth for denoising on various datasets. While the former method still requires a coarsely tuned hyper-parameter, the latter removes the need to choose one.
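The core idea can be sketched numerically. Below is a minimal, hedged illustration (not the paper's implementation) of the SURE loss with the standard one-probe Monte-Carlo divergence estimate; the toy "denoiser" `f`, the perturbation size `eps`, and the noise level `sigma` are all assumptions for the sketch, with `f = tanh` chosen only because its exact divergence is known in closed form for comparison:

```python
import numpy as np

# Sketch under assumptions: i.i.d. Gaussian noise with known variance sigma^2,
# and a smooth toy denoiser f (a stand-in for a deep network). SURE estimates
# the MSE of f at a noisy input y without the clean image:
#   SURE(y) = ||f(y) - y||^2 / N  -  sigma^2  +  (2 sigma^2 / N) * div_y f(y),
# and the divergence is approximated by the Monte-Carlo trick
#   div_y f(y) ~= n^T (f(y + eps*n) - f(y)) / eps,   n ~ N(0, I),
# whose quality depends on the hyper-parameter eps -- the sensitivity the
# abstract refers to.

rng = np.random.default_rng(0)

def f(y):
    # Toy smooth denoiser (hypothetical): elementwise tanh shrinkage.
    return np.tanh(y)

def mc_divergence(f, y, eps=1e-3, rng=rng):
    """One-probe Monte-Carlo estimate of div_y f(y)."""
    n = rng.standard_normal(y.shape)
    return n @ (f(y + eps * n) - f(y)) / eps

def sure_loss(f, y, sigma, eps=1e-3):
    """Unsupervised SURE estimate of the per-sample MSE of f at y."""
    N = y.size
    data_term = np.sum((f(y) - y) ** 2) / N
    return data_term - sigma**2 + (2 * sigma**2 / N) * mc_divergence(f, y, eps)

# For tanh the exact divergence is sum(1 - tanh(y)^2), so we can sanity-check
# the Monte-Carlo estimate against it on a random input.
y = rng.standard_normal(1000)
exact = np.sum(1.0 - np.tanh(y) ** 2)
approx = mc_divergence(f, y)
```

Because `tanh` is smooth, the finite-difference error is small for a wide range of `eps`; the abstract's observation is that for networks with non-smooth activations this estimate becomes much more sensitive to that choice.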
