Home > Foreign Conference Papers > International Workshop on Digital Forensics and Watermarking > Towards Robust Neural Networks with Lipschitz Continuity

Towards Robust Neural Networks with Lipschitz Continuity



Abstract

Deep neural networks have shown remarkable performance across a wide range of vision-based tasks, largely due to the availability of large-scale training datasets and better architectures. However, data seen in the real world are often affected by distortions that are not accounted for in the training datasets. In this paper, we address the challenge of robustness and stability of neural networks and propose a general training method that makes existing neural network architectures more robust and stable to input visual perturbations while using only the available datasets for training. The proposed training method is convenient to use, as it requires neither data augmentation nor changes to the network architecture. We provide theoretical proof as well as empirical evidence of the efficiency of the proposed training method through experiments with existing neural network architectures, and demonstrate that the same architecture, when trained with the proposed method, performs better in the presence of noisy datasets than when trained with the conventional approach.
