IEEE Transactions on Information Forensics and Security

Characterizing and Evaluating Adversarial Examples for Offline Handwritten Signature Verification


Abstract

The phenomenon of adversarial examples is attracting increasing interest from the machine learning community, due to its significant impact on the security of machine learning systems. Adversarial examples are inputs that are perceptually similar to samples from the data distribution, yet "fool" a machine learning classifier. For computer vision applications, these are images with carefully crafted but almost imperceptible changes, which are misclassified. In this paper, we characterize this phenomenon under an existing taxonomy of threats to biometric systems, in particular identifying new attacks on offline handwritten signature verification systems. We conducted an extensive set of experiments on four widely used datasets: MCYT-75, CEDAR, GPDS-160, and the Brazilian PUC-PR, considering both a CNN-based system and a system using a handcrafted feature extractor. We found that attacks that aim to get a genuine signature rejected are easy to generate, even in a limited-knowledge scenario, where the attacker has access to neither the trained classifier nor the signatures used for training. Attacks that get a forgery accepted are harder to produce, and often require a higher level of noise; in most cases the perturbations are no longer "imperceptible", unlike previous findings in object recognition. We also evaluated the impact of two countermeasures on the success rate of the attacks and on the amount of noise required to generate successful attacks.
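The "carefully crafted but almost imperceptible changes" described above are typically generated by perturbing the input along the gradient of the classifier's loss. As a minimal illustration (a toy logistic classifier and made-up weights, not the paper's CNN or feature-based systems), the following sketch applies a one-step fast gradient sign perturbation and flips the model's decision:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step fast gradient sign attack on a logistic classifier.

    For cross-entropy loss, dLoss/dx = (p - y) * w, so the attack adds
    eps * sign(gradient) to push the input toward misclassification.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and sample for illustration only.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, -0.2])   # "genuine" sample, classified as class 1
y = 1.0                     # true label

x_adv = fgsm(x, y, w, b, eps=0.5)
orig_pred = int(sigmoid(w @ x + b) > 0.5)   # 1: accepted
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)  # 0: rejected after the attack
```

In this two-dimensional toy case a large `eps` is needed to flip the decision; the paper's observation is analogous: attacks that get forgeries accepted tend to require visibly large perturbations on signature images.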
