IEEE International Conference on Visual Communications and Image Processing

Improving Robustness of DNNs against Common Corruptions via Gaussian Adversarial Training



Abstract

Deep neural networks have demonstrated tremendous success in image classification, but their performance degrades sharply when evaluated on slightly different test data (e.g., data with corruptions). To address this issue, we propose a minimax approach, Gaussian Adversarial Training (GAT), to improve the common-corruption robustness of deep neural networks. To be specific, we propose to train neural networks with adversarial examples in which the perturbations are Gaussian-distributed. Our experiments show that the proposed GAT improves neural networks' robustness to noise corruptions more than other baseline methods. It also outperforms the state-of-the-art method in improving overall robustness to common corruptions.
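The abstract describes GAT only at a high level. As a rough, hypothetical sketch (not the authors' released code), the PyTorch-style snippet below shows one way such a minimax loop could look: the inner step draws a Gaussian-distributed perturbation and ascends the loss, and the outer step updates the model on the perturbed inputs. The function names (gat_perturb, train_step) and the hyperparameters (sigma, n_steps, step_size) are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of Gaussian Adversarial Training (GAT); all hyperparameters
# below are illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def gat_perturb(model, x, y, sigma=0.1, n_steps=3, step_size=0.05):
    """Inner maximization: start from Gaussian noise, then ascend the loss."""
    delta = sigma * torch.randn_like(x)      # Gaussian-distributed initialization
    delta.requires_grad_(True)
    budget = sigma * (x[0].numel() ** 0.5)   # expected L2 norm of the Gaussian noise
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad        # gradient ascent on the classification loss
            # Rescale so the perturbation keeps a Gaussian-like magnitude (assumed L2 budget).
            norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            scale = (budget / norms).clamp(max=1.0)
            delta *= scale.view(-1, *([1] * (x.dim() - 1)))
    return delta.detach()

def train_step(model, optimizer, x, y):
    """Outer minimization: update the model on the Gaussian-adversarial examples."""
    delta = gat_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Rescaling to sigma * sqrt(d), the expected norm of Gaussian noise, is one way to keep the perturbation at a Gaussian scale during the ascent steps; the paper may use a different sampling or projection scheme.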
