IEEE International Conference on Acoustics, Speech and Signal Processing

Learning Model-Blind Temporal Denoisers without Ground Truths


Abstract

Denoisers trained with synthetic noise often fail to cope with the diversity of real noise, giving way to methods that can adapt to unknown noise without noise modeling or ground truths. Directly applying the previous image-based method to temporal denoising leads to noise overfitting, and its handling of temporal information is inadequate, especially with respect to occlusion and lighting variation. In this paper, we propose a general framework for temporal denoising that successfully addresses these challenges. A novel twin sampler assembles training data by decoupling inputs from targets without altering semantics; this not only solves the noise-overfitting problem but also generates better occlusion masks by checking optical-flow consistency. Lighting variation is quantified by the local similarity of aligned frames. Our method consistently outperforms the prior art by 0.6-3.2 dB PSNR across multiple noise types, datasets, and network architectures, achieving state-of-the-art results in reducing model-blind video noise.
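The occlusion masks mentioned above rely on optical-flow consistency between frame pairs. A standard way to realize such a check is the forward-backward consistency test: a pixel is flagged as occluded when its forward flow and the backward flow sampled at the warped location fail to cancel. The sketch below illustrates this idea in NumPy; the nearest-neighbour warp and the thresholds `alpha` and `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def warp_flow(flow_bwd, flow_fwd):
    """Sample the backward flow at positions displaced by the forward flow.
    Uses a simple nearest-neighbour lookup with border clipping."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    return flow_bwd[yt, xt]

def occlusion_mask(flow_fwd, flow_bwd, alpha=0.01, beta=0.5):
    """Forward-backward consistency check: a pixel is marked occluded
    (True) when the forward flow and the warped backward flow do not
    approximately sum to zero."""
    bwd_warped = warp_flow(flow_bwd, flow_fwd)
    # Squared residual of the round trip, per pixel.
    diff = np.sum((flow_fwd + bwd_warped) ** 2, axis=-1)
    # Magnitude-dependent tolerance, as commonly used in flow literature.
    mag = np.sum(flow_fwd ** 2 + bwd_warped ** 2, axis=-1)
    return diff > alpha * mag + beta
```

For mutually consistent flows (e.g. a uniform shift of one pixel right and its exact inverse) the mask is all False; when the backward flow contradicts the forward flow, the affected pixels are flagged and can be excluded from the denoising loss.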
