International Conference on Pattern Recognition

Transferable Adversarial Attacks for Deep Scene Text Detection

Abstract

Scene text detection (STD) aims to locate text in images and plays an important role in many computer vision tasks, including autonomous driving and text recognition systems. Recently, deep neural networks (DNNs) have been widely and successfully applied to scene text detection, yielding numerous DNN-based STD methods, both regression-based and segmentation-based. However, recent studies have also shown that DNNs are vulnerable to adversarial attacks, which can significantly degrade model performance. In this paper, we investigate the robustness of DNN-based STD methods against adversarial attacks. To this end, we propose a generic and efficient attack method that generates adversarial examples by adding small but imperceptible adversarial perturbations to the input images. Experiments on attacking four different models and the real-world STD engine of Google's optical character recognition (OCR) service show that state-of-the-art DNN-based STD methods, both regression-based and segmentation-based, are vulnerable to adversarial attacks.
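The abstract does not spell out the attack itself, so the sketch below only illustrates the generic idea it describes (adding a small, bounded perturbation that raises a detector's loss), written as a one-step FGSM-style update in PyTorch rather than the authors' method; `model`, `targets`, and `loss_fn` are hypothetical placeholders for a text detector, its ground-truth annotations, and its training loss.

```python
import torch

def adversarial_example(model, images, targets, loss_fn, eps=8 / 255):
    """One-step, FGSM-style perturbation bounded by an L-infinity ball of radius eps.

    `model`, `targets`, and `loss_fn` are hypothetical stand-ins for a scene text
    detector, its ground-truth text regions, and its training loss.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)  # higher loss = worse detections
    loss.backward()
    # Move every pixel a small step in the direction that increases the loss.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()     # keep a valid image in [0, 1]
```

Because `adv` differs from `images` by at most `eps` per pixel, the change remains visually imperceptible for small `eps`; the transferability studied in the paper refers to such examples also degrading detectors other than the one they were computed on.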