Computer vision and image understanding

Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems

Abstract

Given their substantial success in addressing a wide range of computer vision challenges, Convolutional Neural Networks (CNNs) are increasingly being used in smart home applications, with many of these applications relying on the automatic recognition of human activities. In this context, low-power radar devices have recently gained popularity as recording sensors, given that their use mitigates a number of the privacy concerns that arise when making use of conventional video cameras. Another concern that is often cited when designing smart home applications is the resilience of these applications against cyberattacks. It is, for instance, well known that the combination of images and CNNs is vulnerable to adversarial examples: mischievous data points that force machine learning models to generate wrong classifications at test time. In this paper, we investigate the vulnerability to adversarial attacks of radar-based CNNs that have been designed to recognize human gestures. Through experiments with four distinct threat models, we show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks. We also expose an extreme adversarial attack case, in which it is possible to change the prediction made by the radar-based CNNs by perturbing only the padding of the inputs, without touching the frames in which the action itself occurs. Moreover, we observe that gradient-based attacks do not apply their perturbations randomly, but concentrate them on important features of the input data. We highlight these important features by making use of Grad-CAM, a popular neural network interpretability method, thereby showing the connection between adversarial perturbation and prediction interpretability.
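
For illustration, the sketch below outlines how a gradient-based white-box attack in the spirit of the fast gradient sign method could be mounted against such a classifier; the model architecture, input shape, epsilon value, and the optional mask that confines the perturbation to selected frames are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of a single-step gradient-sign (FGSM-style) attack on a
# radar-based activity classifier. Model, input shape, epsilon, and the mask
# mechanism are assumptions for illustration, not the paper's exact setup.
from typing import Optional

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03,
                mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Return an adversarial copy of x, perturbed along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    step = epsilon * x_adv.grad.sign()
    if mask is not None:
        # Restricting the perturbation to selected frames (e.g. only the padding)
        # mirrors the extreme attack case described in the abstract above.
        step = step * mask
    return (x_adv + step).detach()

# Toy usage on a dummy classifier over 1x64x64 "radar frame" inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 5))
x = torch.randn(8, 1, 64, 64)   # batch of radar frames (illustrative shape)
y = torch.randint(0, 5, (8,))   # ground-truth activity labels
x_adv = fgsm_attack(model, x, y)
print((model(x_adv).argmax(dim=1) != y).float().mean())  # fraction of misclassified inputs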