IEEE Transactions on Visualization and Computer Graphics

Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics


Abstract

Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping users understand their models' vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.
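For readers unfamiliar with the attack class the paper studies, the sketch below is a minimal, hypothetical illustration of data poisoning against a binary classifier: it flips a random fraction of training labels and measures the resulting drop in test accuracy. It uses scikit-learn on synthetic data and is not the authors' framework or the specific attack strategies evaluated in their case studies.

```python
# Illustrative sketch only (not the paper's method): a label-flipping
# data poisoning attack against a binary classifier. Poisoning a small
# fraction of training labels degrades test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the binary labels of a random fraction of training instances."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

for fraction in (0.0, 0.05, 0.10, 0.20):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned {fraction:.0%} of labels -> "
          f"test accuracy {clf.score(X_test, y_test):.3f}")
```

A visual analytics framework such as the one described here would go beyond this aggregate accuracy view, letting an analyst inspect which instances, features, and local neighborhoods are most affected by the poisoning.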


