ACM Transactions on Intelligent Systems and Technology

Adversarial Attacks on Deep-learning Models in Natural Language Processing: A Survey

Abstract

Driven by advances in high-performance computing hardware, deep neural networks (DNNs) have in recent years gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs are vulnerable to strategically modified samples, known as adversarial examples. These samples are crafted with imperceptible perturbations, yet they can fool DNNs into making false predictions. Inspired by the popularity of generating adversarial examples against DNNs in Computer Vision (CV), research on attacking DNNs in Natural Language Processing (NLP) applications has emerged in recent years. However, the intrinsic difference between images (CV) and text (NLP) makes it challenging to apply CV attack methods to NLP directly. Various methods have been proposed that address this difference and attack a wide range of NLP applications. In this article, we present a systematic survey of these works. We collect all related academic works since their first appearance in 2017, and then select, summarize, discuss, and analyze 40 representative works in a comprehensive way. To make the article self-contained, we cover preliminary knowledge of NLP and discuss related seminal works in computer vision. We conclude our survey with a discussion of open issues that remain to be addressed to bridge the gap between existing progress and more robust adversarial attacks on NLP DNNs.
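
As a concrete illustration of the attack setting the abstract describes, below is a minimal sketch of a black-box word-substitution attack on a text classifier. The classifier (`toy_classifier`), the synonym table (`SYNONYMS`), and the greedy search are illustrative stand-ins, not methods from the surveyed paper; real attacks draw substitution candidates from word embeddings or language models and query an actual NLP model.

```python
# Minimal sketch of a black-box word-substitution attack, in the spirit of
# the text-attack methods the survey covers. All components below are toy
# stand-ins for illustration only.
from typing import Callable, Dict, List, Tuple

# Hypothetical synonym table; real attacks use embeddings or WordNet.
SYNONYMS: Dict[str, List[str]] = {
    "great": ["fine", "decent"],
    "terrible": ["poor", "bad"],
    "love": ["like", "enjoy"],
}

def toy_classifier(text: str) -> Tuple[str, float]:
    """Stand-in sentiment model: returns (label, confidence)."""
    positive = {"great", "love", "fine", "enjoy"}
    words = text.lower().split()
    score = sum(w in positive for w in words) / max(len(words), 1)
    return ("pos", score) if score > 0.2 else ("neg", 1 - score)

def greedy_attack(text: str,
                  classify: Callable[[str], Tuple[str, float]]) -> str:
    """Greedily swap one word at a time for a synonym that most
    reduces the classifier's confidence in its original label."""
    orig_label, _ = classify(text)
    words = text.split()
    for i, w in enumerate(words):
        best_conf = classify(" ".join(words))[1]
        for syn in SYNONYMS.get(w.lower(), []):
            candidate = words[:i] + [syn] + words[i + 1:]
            label, conf = classify(" ".join(candidate))
            if label != orig_label:          # prediction flipped: done
                return " ".join(candidate)
            if conf < best_conf:             # keep the weakest variant
                best_conf, words = conf, candidate
    return " ".join(words)

if __name__ == "__main__":
    x = "I love this great movie"
    print(toy_classifier(x))             # original prediction
    x_adv = greedy_attack(x, toy_classifier)
    print(x_adv, toy_classifier(x_adv))  # perturbed input, new prediction
```

The sketch highlights the difference the abstract points to: unlike pixel-space perturbations in CV, text perturbations are discrete, so attacks search over word substitutions rather than following gradients directly.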
