IEEE International Conference on Acoustics, Speech and Signal Processing

On Divergence Approximations for Unsupervised Training of Deep Denoisers Based on Stein's Unbiased Risk Estimator



Abstract

Recently, there have been several works on unsupervised learning for training deep-learning-based denoisers without clean images. Approaches based on Stein's unbiased risk estimator (SURE) have shown promising results for training Gaussian deep denoisers. However, their performance is sensitive to the hyper-parameter used in approximating the divergence term of the SURE expression. In this work, we briefly study the computational efficiency of Monte-Carlo (MC) divergence approximation compared with recently available exact divergence computation via backpropagation. We then investigate the relationship between the smoothness of the nonlinear activation functions in deep denoisers and the robustness of divergence-term approximations. Lastly, we propose a new divergence term that contains no hyper-parameters. Both unsupervised training methods yield performance comparable to supervised training with ground-truth images for denoising on various datasets. While the former method still requires roughly tuned hyper-parameter selection, the latter removes the need to choose one.
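The MC divergence approximation discussed above can be sketched as follows. This is not the paper's implementation; it is a minimal NumPy illustration using a soft-thresholding denoiser (whose divergence has a closed form, so the estimate can be checked) in place of a deep denoiser, with an illustrative perturbation scale `eps` — the hyper-parameter the abstract notes the method is sensitive to.

```python
import numpy as np

def denoise(y, t=0.5):
    """Soft-thresholding denoiser; stands in for a deep denoiser f(y).
    Its exact divergence is the number of coordinates with |y_i| > t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def mc_divergence(f, y, eps=1e-3, rng=None):
    """Monte-Carlo divergence estimate (one probe vector):
    div f(y) ~= (1/eps) * b^T (f(y + eps*b) - f(y)), with b ~ N(0, I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    b = rng.standard_normal(y.shape)
    return float(b @ (f(y + eps * b) - f(y))) / eps

rng = np.random.default_rng(42)
y = rng.standard_normal(10_000)          # noisy signal (illustrative)
exact = float(np.sum(np.abs(y) > 0.5))   # closed-form divergence for checking
approx = mc_divergence(denoise, y)       # single-probe MC estimate
```

One MC probe needs only one extra forward pass of the denoiser, which is why it is cheaper than exact divergence computation via backpropagation; the trade-off is the variance of the estimate and its sensitivity to `eps`.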
