Journal of Machine Learning Research

Why do deep convolutional networks generalize so poorly to small image transformations?



Abstract

Convolutional Neural Networks (CNNs) are commonly assumed to be invariant to small image transformations: either because of the convolutional architecture or because they were trained using data augmentation. Recently, several authors have shown that this is not the case: small translations or rescalings of the input image can drastically change the network's prediction. In this paper, we quantify this phenomenon and ask why neither the convolutional architecture nor data augmentation is sufficient to achieve the desired invariance. Specifically, we show that the convolutional architecture does not give invariance because the architecture ignores the classical sampling theorem, and data augmentation does not give invariance because CNNs learn to be invariant to transformations only for images that are very similar to typical images from the training set. We discuss two possible solutions to this problem: (1) antialiasing the intermediate representations and (2) increasing data augmentation, and show that they provide at best a partial solution. Taken together, our results indicate that the problem of ensuring invariance to small image transformations in neural networks while preserving high accuracy remains unsolved.
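
To make the sampling-theorem argument concrete, the following is a minimal, self-contained NumPy sketch (illustrative only, not code from the paper). It measures how much a stride-2 downsampled representation of a 1-D "feature map" changes under a one-pixel input shift, with and without a low-pass filter before subsampling, i.e. the kind of antialiasing proposed in solution (1). The function names and the 3-tap binomial filter are our own illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # a toy 1-D "feature map"

def shift(v):
    # one-pixel circular translation of the input
    return np.roll(v, 1)

def subsample(v, stride=2):
    # naive strided downsampling, as in a strided conv or pooling layer
    return v[::stride]

def blur_subsample(v, stride=2):
    # 3-tap binomial low-pass filter (circular) before subsampling
    kernel = np.array([0.25, 0.5, 0.25])
    blurred = np.convolve(np.pad(v, 1, mode="wrap"), kernel, mode="valid")
    return blurred[::stride]

# How much does each downsampled representation change when the
# input is shifted by a single pixel?
naive_gap = np.linalg.norm(subsample(shift(x)) - subsample(x))
aa_gap = np.linalg.norm(blur_subsample(shift(x)) - blur_subsample(x))
print(f"stride-2 gap:      {naive_gap:.3f}")
print(f"blur + stride gap: {aa_gap:.3f}")

On random inputs the blurred pipeline's gap is several times smaller than the naive one, but it is not zero: a one-pixel shift still changes which samples survive subsampling. This mirrors the abstract's conclusion that antialiasing is at best a partial solution.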


