International journal of circuit theory and applications

A novel hardware authentication primitive against modeling attacks



Abstract

Traditional hardware security primitives such as physical unclonable functions (PUFs) are quite vulnerable to machine learning (ML) attacks. The primary reason is that PUFs rely on process mismatches between two identically designed circuit blocks to generate deterministic mathematical functions as their secret information sources, and ML algorithms are efficient at modeling deterministic mathematical functions. To resist ML attacks, this letter proposes a novel hardware security primitive named the neural network (NN) chain, which uses noise data to generate chaotic NNs for authentication. In an NN chain, two independent batches of noise data are used as the input and output training data of the NNs, respectively, to maximize the uncertainty within the chain. Compared with a regular PUF, the proposed NN chain achieves over 20 times the ML attack resistance and 100% reliability with less than 39% power and area overhead.
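The abstract only outlines the idea, so the following is a rough conceptual sketch (not the authors' design): a tiny neural network is fitted so that two independent noise batches serve as its input and target training data, and the resulting device-specific mapping is then queried for challenge-response authentication. All names and parameters here (`respond`, layer sizes, learning rate) are hypothetical illustrations.

```python
# Conceptual sketch of one stage of an "NN chain": train a small network on
# two independent noise batches (inputs and targets), then use the learned
# mapping to answer authentication challenges. Hypothetical, for intuition only.
import numpy as np

rng = np.random.default_rng()           # stand-in for a hardware noise source

# Two independent batches of noise: one becomes the training inputs,
# the other becomes the training targets.
X = rng.normal(size=(256, 8))           # noise batch 1 -> inputs (challenges)
Y = rng.normal(size=(256, 4))           # noise batch 2 -> targets (responses)

# Tiny one-hidden-layer network trained by plain gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 4)); b2 = np.zeros(4)
lr = 0.05

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                     # predicted responses
    G = 2 * (P - Y) / len(X)            # gradient of MSE w.r.t. P
    GH = (G @ W2.T) * (1 - H**2)        # back-propagate through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

def respond(challenge):
    """Authentication response of this noise-trained network."""
    return np.tanh(challenge @ W1 + b1) @ W2 + b2

# A verifier would enroll (challenge, response) pairs from this device;
# because the mapping was fitted to independent noise, observed pairs give
# an attacker's ML model little structure to generalize from.
print(respond(X[:1]))
```

Because both the inputs and the targets are independent noise, the learned function carries no underlying deterministic structure for an adversary's model to exploit, which is the intuition behind the claimed gain in ML attack resistance.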
