IEEE International Conference on Artificial Intelligence Testing

An Analytical Framework for Security-Tuning of Artificial Intelligence Applications Under Attack


Abstract

Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or algorithms in AI applications can lead to misclassification or misprediction, which can cause the applications to fail. For each dataset, the parameters of ML algorithms should be tuned separately to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time-consuming and does not guarantee an optimal result. To this end, some research suggests an analytical approach to tune the ML parameters for maximum accuracy. However, none of these works considers ML performance under attack in the tuning process. This paper proposes an analytical framework for tuning the ML parameters to be secure against attacks while keeping accuracy high. The framework finds the optimal set of parameters by defining a novel objective function that takes into account the test results of both ML accuracy and its security against attacks. To validate the framework, an AI application is implemented that recognizes whether a subject's eyes are open or closed by applying the k-Nearest Neighbors (kNN) algorithm to the subject's Electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, the two main parameters of kNN, are chosen for tuning. The input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application. An exhaustive search is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal configuration of kNN for the EEG dataset, which leads to 83.75% classification accuracy and reduces the attack success rate to 5.21%.
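
The abstract does not spell out the objective function or the exact attack procedure, so the following Python sketch only illustrates the overall tuning pattern it describes: an exhaustive search over (k, distance metric) pairs that scores each configuration by combining test accuracy with the attack success rate. The weight alpha, the perturbation magnitude eps, and the Gaussian-noise perturbation are illustrative assumptions, not the paper's definitions.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def attack_success_rate(model, X_test, y_test, eps=0.1, rng=None):
    # Fraction of correctly classified samples whose prediction flips
    # under a random input perturbation of magnitude eps (a hypothetical
    # stand-in for the paper's input data perturbation attack).
    rng = rng or np.random.default_rng(0)
    pred_clean = model.predict(X_test)
    correct = pred_clean == y_test
    X_adv = X_test + eps * rng.standard_normal(X_test.shape)
    pred_adv = model.predict(X_adv)
    flipped = (pred_adv != pred_clean) & correct
    return flipped.sum() / max(correct.sum(), 1)

def tune_knn(X, y, ks, metrics, alpha=0.5):
    # Exhaustive search over (k, metric). The objective rewards accuracy
    # and penalizes attack success rate; this weighted form is an assumed
    # placeholder for the paper's (unstated) objective function.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    best, best_score = None, -np.inf
    for k in ks:
        for metric in metrics:
            model = KNeighborsClassifier(n_neighbors=k, metric=metric)
            model.fit(X_tr, y_tr)
            acc = model.score(X_te, y_te)
            asr = attack_success_rate(model, X_te, y_te)
            score = alpha * acc - (1 - alpha) * asr
            if score > best_score:
                best, best_score = (k, metric, acc, asr), score
    return best

For example, tune_knn(X, y, ks=range(1, 101, 2), metrics=["euclidean", "cosine", "manhattan"]) searches a grid that contains the reported optimum (k = 43, cosine); the grid actually used in the paper is not stated in the abstract.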
