IEEE Conference on Computer Vision and Pattern Recognition

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images



Abstract

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call “fooling images” (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
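The gradient-ascent approach the abstract mentions can be sketched in miniature: starting from random noise, ascend the gradient of a classifier's confidence in a chosen class *with respect to the input pixels*, leaving the weights fixed. The sketch below is illustrative only, using a toy random linear-softmax classifier in place of a trained DNN; the function names and sizes are assumptions, not the paper's code.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fooling_image(W, target, steps=200, lr=0.5):
    """Gradient ascent on the input (not the weights) to maximize
    a linear-softmax classifier's confidence in class `target`,
    starting from random noise."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=W.shape[1])  # noise "image"
    for _ in range(steps):
        p = softmax(W @ x)
        # analytic gradient of log p[target] w.r.t. x for a linear model
        grad = W[target] - p @ W
        x += lr * grad
    return x, softmax(W @ x)[target]

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))  # stand-in 10-class "network"
x, conf = fooling_image(W, target=3)
print(f"confidence in target class: {conf:.4f}")
```

Even this toy model is driven to near-certain confidence on an input that is pure optimized noise; the paper's result is that the same effect holds for state-of-the-art convolutional networks on ImageNet and MNIST.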
