IEEE Computer Society Annual Symposium on VLSI

Countering PUF Modeling Attacks through Adversarial Machine Learning

Abstract

A Physically Unclonable Function (PUF) is an effective option for device authentication, especially for IoT frameworks with resource-constrained devices. However, PUFs are vulnerable to modeling attacks, which build a model of the PUF using a small subset of its Challenge-Response Pairs (CRPs). We propose an effective countermeasure against such attacks by employing adversarial machine learning techniques that introduce errors (poison) into the adversary's model. The approach intermittently provides wrong responses to the challenges fed to the PUF. Coordination between the communicating parties prevents the poisoned CRPs from causing device authentication to fail. Experimental results for a PUF implemented on an FPGA demonstrate the efficacy of the proposed approach in thwarting modeling attacks. We also discuss the resiliency of the proposed scheme against impersonation and Sybil attacks.
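
The abstract does not spell out the poisoning mechanism, so the following is a minimal illustrative sketch in Python, under the assumption that the device and the verifier share a secret that deterministically marks which challenges receive a flipped response. All names here (SHARED_KEY, POISON_MOD, true_puf_response, and so on) are hypothetical stand-ins, not the paper's implementation:

```python
# Minimal sketch (an assumption, not the paper's exact protocol): the device
# flips a small fraction of its responses, chosen by a keyed hash over the
# challenge, so the legitimate verifier (holding the same key) can undo the
# flips while an eavesdropper's CRP training set is silently poisoned.
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(16)   # hypothetical pre-shared secret
POISON_MOD = 32                        # ~3% of challenges poisoned (8/256)

def true_puf_response(challenge: bytes) -> int:
    """Stand-in for the physical PUF: a fixed, device-specific 1-bit response."""
    return hashlib.sha256(b"device-entropy" + challenge).digest()[0] & 1

def is_poisoned(challenge: bytes) -> bool:
    """Both parties derive the same poisoning decision from the shared key."""
    tag = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return tag[0] % POISON_MOD == 0

def device_respond(challenge: bytes) -> int:
    """Device output: the true response, flipped on poisoned challenges."""
    r = true_puf_response(challenge)
    return r ^ 1 if is_poisoned(challenge) else r

def verifier_check(challenge: bytes, response: int, enrolled: int) -> bool:
    """Verifier compensates for the agreed-upon flips before comparing."""
    expected = enrolled ^ 1 if is_poisoned(challenge) else enrolled
    return response == expected

# Authentication always succeeds, yet an eavesdropped CRP set is noisy.
flipped = 0
for _ in range(10_000):
    c = secrets.token_bytes(8)
    r = device_respond(c)
    assert verifier_check(c, r, true_puf_response(c))
    flipped += r != true_puf_response(c)
print(f"{flipped / 100:.1f}% of eavesdropped CRPs carry a wrong response")
```

Deriving the flip decision from a keyed hash of the challenge keeps the two parties in lockstep without any extra state to synchronize, while an adversary who trains on the eavesdropped CRP stream ingests the mislabeled pairs as ground truth.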
