Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks

IEEE Visualization Conference – Short Papers

Abstract

Deep neural networks (DNNs) are now commonly used in many domains. However, they are vulnerable to adversarial attacks: carefully-crafted perturbations on data inputs that can fool a model into making incorrect predictions. Despite significant research on developing DNN attack and defense techniques, people still lack an understanding of how such attacks penetrate a model’s internals. We present Bluff, an interactive system for visualizing, characterizing, and deciphering adversarial attacks on vision-based neural networks. Bluff allows people to flexibly visualize and compare the activation pathways for benign and attacked images, revealing mechanisms that adversarial attacks employ to inflict harm on a model. Bluff is open-sourced and runs in modern web browsers.
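
The "carefully-crafted perturbations" mentioned above can be produced by standard gradient-based attacks. As an illustration only (the abstract does not name a specific attack, and this is not Bluff's own code), here is a minimal sketch of the Fast Gradient Sign Method in PyTorch, assuming a pretrained classifier model, an input batch x with pixel values in [0, 1], and ground-truth labels y:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # FGSM: step in the direction that increases the loss,
        # bounded by epsilon in the L-infinity norm.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range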
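
Comparing activation pathways for benign and attacked images presupposes recording per-layer activations for both inputs. The sketch below is again only a rough approximation under assumed names (collect_activations and the hook-based layer selection are illustrative, not Bluff's actual pipeline); it shows one way to gather such activations with PyTorch forward hooks and locate the channels an attack perturbs most:

    import torch

    def collect_activations(model, x, layer_names):
        # Record the per-channel mean activation at each named
        # convolutional layer (assumed 4D output) via forward hooks.
        acts, handles = {}, []
        modules = dict(model.named_modules())
        for name in layer_names:
            def hook(module, inp, out, name=name):
                acts[name] = out.detach().mean(dim=(0, 2, 3))
            handles.append(modules[name].register_forward_hook(hook))
        with torch.no_grad():
            model(x)
        for h in handles:
            h.remove()
        return acts

    # Channels whose activation shifts most under attack hint at where
    # the perturbation does its damage:
    # benign   = collect_activations(model, x, layers)
    # attacked = collect_activations(model, fgsm_attack(model, x, y), layers)
    # delta    = {k: (attacked[k] - benign[k]).abs() for k in benign}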
