International Journal of Information Security

Adversarial security mitigations of mmWave beamforming prediction models using defensive distillation and adversarial retraining


Abstract

The design of a security scheme for beamforming prediction is critical for next-generation wireless networks (5G, 6G, and beyond). However, there is no consensus on how to protect deep-learning-based beamforming prediction in these networks. This paper presents the security vulnerabilities of deep neural network (DNN) models for beamforming prediction in 6G wireless networks, treating beamforming prediction as a multi-output regression problem. It is shown that the initial DNN model is vulnerable to adversarial attacks, such as the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Momentum Iterative Method (MIM), because it is sensitive to perturbations of adversarial samples of the training data. This study offers two mitigation methods, adversarial training and defensive distillation, against adversarial attacks on artificial-intelligence-based models used in millimeter-wave (mmWave) beamforming prediction. Furthermore, the proposed scheme can be used in situations where the data are corrupted by adversarial examples in the training data. Experimental results show that the proposed methods defend the DNN models against adversarial attacks in next-generation wireless networks.
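To illustrate the attack family the abstract names, the following is a minimal sketch of FGSM against a toy linear regression model standing in for the paper's beamforming DNN. The model, weights, and loss here are hypothetical assumptions, not the paper's actual architecture; FGSM simply perturbs the input along the sign of the input gradient of the loss.

```python
import numpy as np

# Toy stand-in for a regression model: y_hat = w @ x, squared-error loss
# L = (y_hat - y)^2, whose input gradient is dL/dx = 2 * (y_hat - y) * w.
# (A real mmWave beamforming DNN would supply this gradient via autodiff.)

def fgsm_attack(w, x, y, eps):
    """Return an adversarial copy of x: x + eps * sign(dL/dx)."""
    y_hat = w @ x
    grad = 2.0 * (y_hat - y) * w      # closed-form input gradient
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)                # hypothetical model weights
x = rng.normal(size=8)                # a clean input sample
y = 0.0                               # its regression target

x_adv = fgsm_attack(w, x, y, eps=0.1)
clean_err = (w @ x - y) ** 2
adv_err = (w @ x_adv - y) ** 2
print(adv_err > clean_err)            # the perturbation increases the loss
```

Adversarial retraining, one of the paper's two mitigations, would fold such `x_adv` samples (with their clean labels) back into the training set; defensive distillation instead retrains the model on softened outputs of an initial model to flatten these gradients.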
