IEEE Applied Imagery Pattern Recognition Workshop

Adversarial Examples in Deep Learning for Multivariate Time Series Regression


Abstract

Multivariate time series (MTS) regression tasks are common in many real-world data mining applications, including finance, cybersecurity, energy, healthcare, and prognostics. Due to the tremendous success of deep learning (DL) algorithms in domains such as image recognition and computer vision, researchers have started adopting these techniques to solve MTS data mining problems, many of which target safety-critical and cost-critical applications. Unfortunately, DL algorithms are known for their susceptibility to adversarial examples, which makes DL regression models for MTS forecasting vulnerable to such attacks as well. To the best of our knowledge, no previous work has explored the vulnerability of DL MTS regression models to adversarial time series examples, an important step given that forecasts from such models are used in safety-critical and cost-critical applications. In this work, we leverage existing adversarial attack generation techniques from the image classification domain and craft adversarial multivariate time series examples for three state-of-the-art deep learning regression models: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). We evaluate our study on the Google stock and household power consumption datasets. The obtained results show that all the evaluated DL regression models are vulnerable to adversarial attacks, that these attacks transfer across models, and that they can therefore lead to catastrophic consequences in safety-critical and cost-critical domains such as energy and finance.
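The abstract describes transferring attack-generation techniques from image classification, such as the fast gradient sign method (FGSM), to MTS regression. As a rough illustration only (not the authors' code), the sketch below applies an FGSM-style perturbation to a toy LSTM forecaster; the architecture, window length, and epsilon value are illustrative assumptions.

```python
# Minimal sketch of an FGSM-style attack on a multivariate time series
# regression model. All model and data details here are assumptions for
# illustration, not the paper's actual setup.
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Toy LSTM regressor: maps a (batch, timesteps, features) window
    to one forecast value per sample."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, timesteps, hidden)
        return self.head(out[:, -1])   # forecast from the last time step

def fgsm_attack(model, x, y, epsilon=0.01):
    """Perturb each input value by epsilon in the direction that
    increases the regression (MSE) loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Illustrative usage with random data standing in for a stock or
# household power consumption window.
model = LSTMRegressor(n_features=5)
x = torch.randn(8, 30, 5)   # 8 windows, 30 time steps, 5 variables
y = torch.randn(8, 1)
x_adv = fgsm_attack(model, x, y, epsilon=0.01)
print((model(x) - model(x_adv)).abs().mean())  # forecast shift caused by the attack
```

The same crafted perturbation can then be fed to a CNN or GRU regressor to probe the transferability the abstract reports.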
