Journal of Information and Telecommunication

Evolutionary algorithms deceive humans and machines at image classification: an extended proof of concept on two scenarios


Abstract

The range of applications of Neural Networks encompasses image classification. However, Neural Networks are vulnerable to attacks and may misclassify adversarial images, leading to potentially disastrous consequences. Pursuing some of our previous work, we provide an extended proof of concept of a black-box, targeted, non-parametric attack that uses evolutionary algorithms to fool both Neural Networks and humans at the task of image classification. Our feasibility study is performed on VGG-16 trained on CIFAR-10. For any category of CIFAR-10, one chooses an image that VGG-16 classifies as belonging to that category. From there, two scenarios are addressed. In the first scenario, a target category, distinct from the original one, is fixed a priori. We construct an evolutionary algorithm that evolves the chosen image into a modified image that VGG-16 classifies as belonging to the target category. In the second scenario, we construct another evolutionary algorithm that evolves the chosen image into a modified image that VGG-16 is unable to classify. In both scenarios, the obtained adversarial images remain so close to the original that a human would likely classify them as still belonging to the original category.
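To make the idea concrete, here is a minimal Python/NumPy sketch of the kind of black-box evolutionary attack described for the first scenario. It assumes only query access to a `classify` function returning a probability vector; the (mu + lambda)-style selection, mutation step, and per-pixel budget are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of a black-box, targeted evolutionary attack (scenario 1).
# `classify` is an assumed black-box callable: uint8 image -> probability vector.
import numpy as np

def evolve_adversarial(ancestor, classify, target,
                       generations=500, pop_size=40, step=4, max_delta=8):
    """Evolve `ancestor` (uint8 H x W x 3) until `classify` picks `target`."""
    rng = np.random.default_rng(0)
    # Population of small pixel perturbations, capped so the adversarial
    # image stays visually close to the ancestor (the human-deception side).
    pop = rng.integers(-step, step + 1, (pop_size,) + ancestor.shape)
    best = ancestor
    for _ in range(generations):
        pop = np.clip(pop, -max_delta, max_delta)
        imgs = np.clip(ancestor.astype(int) + pop, 0, 255).astype(np.uint8)
        # Fitness: probability the black-box classifier assigns to `target`.
        fitness = np.array([classify(img)[target] for img in imgs])
        order = np.argsort(fitness)[::-1]
        best = imgs[order[0]]
        if int(np.argmax(classify(best))) == target:
            return best  # misclassified as the target category: attack done
        # Keep the fittest half, refill the population with mutated copies.
        survivors = pop[order[: pop_size // 2]]
        children = survivors + rng.integers(-step, step + 1, survivors.shape)
        pop = np.concatenate([survivors, children])
    return best  # best candidate found within the generation budget
```

Capping the per-pixel change (`max_delta`) is what keeps the evolved image close enough to the original that a human would still assign it to the original category. For the second scenario, one would swap the fitness for a measure of the classifier's confusion, for instance the entropy of its output probabilities.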