Neurocomputing

Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system

Abstract

Deep neural networks (DNNs) perform well in recognition and prediction domains, such as image recognition, speech recognition, video recognition, and pattern analysis. However, adversarial examples, created by inserting a small amount of noise into original samples, pose a serious threat because they can cause a DNN to misclassify. Adversarial examples have been studied primarily in the context of images, but their effect in the audio domain is now drawing considerable interest as well. For example, by adding a small distortion, imperceptible to humans, to an original audio sample, an audio adversarial example can be created that humans hear as error-free but that a machine misunderstands. A method of defense against audio adversarial examples is therefore needed. In this paper, we propose an acoustic-decoy method for detecting audio adversarial examples. Its key feature is that it applies well-formalized distortions, through audio modification, that are sufficient to change the classification result of an adversarial example but do not affect the classification result of an original sample. Experimental results show that the proposed scheme can detect adversarial examples by reducing the similarity rate for an adversarial example to 6.21%, 1.27%, and 0.66% using low-pass filtering (with 12 dB roll-off), 8-bit reduction, and audio silence removal, respectively. It can detect an audio adversarial example with a success rate of 97% by performing a comparison with the initial audio sample. (C) 2020 The Authors. Published by Elsevier B.V.
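
The detection pipeline the abstract describes lends itself to a short illustration. The Python sketch below applies the three modifications named above (a second-order low-pass filter, which rolls off at 12 dB per octave; 8-bit requantization; silence removal), transcribes each modified signal, and flags the input as adversarial when the transcription diverges sharply from that of the unmodified input. This is a minimal sketch of the idea, not the paper's implementation: the transcribe callback stands in for any speech recognition system, and the cutoff frequency, silence threshold, and similarity threshold are illustrative assumptions.

    import difflib
    import numpy as np
    from scipy.signal import butter, sosfilt

    def low_pass(audio, sample_rate, cutoff_hz=4000.0, order=2):
        # A second-order Butterworth filter rolls off at 12 dB per octave.
        sos = butter(order, cutoff_hz, btype="low", fs=sample_rate, output="sos")
        return sosfilt(sos, audio)

    def bit_reduce(audio, bits=8):
        # Requantize float audio in [-1, 1] to the given bit depth.
        levels = 2 ** (bits - 1)
        return np.clip(np.round(audio * levels) / levels, -1.0, 1.0)

    def remove_silence(audio, threshold=1e-3, frame=512):
        # Drop frames whose mean absolute amplitude falls below the threshold.
        frames = [audio[i:i + frame] for i in range(0, len(audio), frame)]
        kept = [f for f in frames if np.mean(np.abs(f)) >= threshold]
        return np.concatenate(kept) if kept else audio

    def similarity(a, b):
        # Character-level similarity rate between two transcriptions.
        return difflib.SequenceMatcher(None, a, b).ratio()

    def is_adversarial(audio, sample_rate, transcribe, threshold=0.5):
        # Benign audio should transcribe almost identically after each mild
        # modification; an adversarial example's transcription collapses.
        original_text = transcribe(audio, sample_rate)
        modifications = (lambda x: low_pass(x, sample_rate),
                         bit_reduce, remove_silence)
        for modify in modifications:
            modified_text = transcribe(modify(audio), sample_rate)
            if similarity(original_text, modified_text) < threshold:
                return True
        return False

The per-modification similarity threshold is a tunable assumption; the similarity rates reported in the abstract (6.21%, 1.27%, and 0.66% for adversarial inputs) suggest that adversarial examples fall far below any reasonable cutoff while benign inputs stay near 100%.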