
Investigating strategies towards adversarially robust time series classification



Abstract

Deep neural networks have been shown to be vulnerable to specifically crafted perturbations designed to degrade their predictive performance. Such perturbations, formally termed 'adversarial attacks', have been designed for various domains in the literature, most prominently in computer vision and, more recently, in time series classification. There is therefore a need to derive robust strategies to defend deep networks against such attacks. In this work we propose to establish axioms of robustness against adversarial attacks in time series classification. We subsequently design a suitable experimental methodology and empirically validate the hypotheses put forth. The results of our investigations confirm the proposed hypotheses and provide a strong empirical baseline for mitigating the effects of adversarial attacks in deep time series classification. (c) 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
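The abstract's notion of a specifically crafted perturbation can be illustrated with a minimal FGSM-style attack (one gradient-sign step, in the spirit of Goodfellow et al.) on a toy linear classifier over a univariate time series. This is a sketch only: the classifier weights, the logistic model, and the epsilon budget below are stand-ins, not the models or attacks actually studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 64                           # length of the time series
w = rng.normal(size=T)           # weights of a "trained" linear classifier (stand-in)
b = 0.0

def predict_proba(x):
    """Probability that series x belongs to class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y_true, eps=0.2):
    """One-step FGSM: nudge x in the sign of the loss gradient.

    For the logistic loss, the gradient w.r.t. the input is (p - y) * w,
    so each time step is shifted by +/- eps to increase the loss.
    """
    p = predict_proba(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=T)           # a clean test series, true class 1
x_adv = fgsm_perturb(x, y_true=1.0)

# The perturbation is bounded by eps per time step, yet the predicted
# probability of the true class drops.
print(predict_proba(x), predict_proba(x_adv))
```

Because the perturbation is the sign of the gradient, its size is capped at eps at every time step, which is why such attacks can be hard to spot by eye while still flipping predictions.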

Bibliographic information

  • Source
    Pattern Recognition Letters | 2022, No. 4 | pp. 104-111 | 8 pages
  • Author affiliations

    Ahmadu Bello Univ, Dept Comp Engn, Zaria 810251, Nigeria;

    Egypt Japan Univ Sci & Technol, Comp Sci & Engn Dept, New Borg El Arab 21934, Egypt;

    Alexandria Univ, Fac Engn, Dept Comp & Syst Engn, Alexandria 21544, Egypt;

    Osaka Univ, Inst Adv Cocreat Studies, Osaka 5670047, Japan;

    Osaka Univ, Inst Sci & Ind Res, Osaka 5670047, Japan;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Time series; Adversarial; Shapelets;
