IEEE International Conference on Embedded and Ubiquitous Computing

Exploring the Performance Envelope of the LLL Algorithm

Abstract

In this paper, we investigate two implementations of the LLL lattice basis reduction algorithm, found in the popular NTL and fplll libraries; LLL is a key tool for assessing the security of lattice-based cryptographic schemes. The work has two main contributions. First, we present a novel method for developing performance models, using LLL, whose runtime behavior depends unpredictably on the structure of the input lattice, as an illustrative example. The model-generation approach is based on profiled training measurements of the code, and the final runtime performance models are constructed by an extended version of the open-source tool Extra-P, which systematically considers a variety of hypothesis functions via shared-memory-parallelized simulated annealing. We employ three kinds of lattice bases in our tests: random lattice bases in Goldstein-Mayer form whose entry bit lengths grow linearly, the same bases with quadratically growing bit lengths, and NTRU-like matrices. The derived performance models fit the experimental data very well and span a wide range of complexities, which we compare against theoretical upper bounds and previous average-case estimates. The modeling principles demonstrated here for LLL apply directly to other cryptographic algorithms and to serial and parallel algorithms in general. Second, we evaluate the common approach of estimating runtime from the number of floating-point or bit operations an algorithm executes, combined with theoretical assumptions about the executing processor (clock rate, operations per tick). Our experiments show that this approach yields unreliable runtime estimates.
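To make the model-generation idea concrete, here is a minimal sketch in the spirit of Extra-P's performance-model normal form f(n) = c * n^i * log2(n)^j, fitted to runtime measurements by simulated annealing over candidate exponents. The measurement values, exponent sets, and function names are hypothetical, and this is not the Extra-P implementation: the actual extension fits multi-term models and parallelizes the search over shared memory.

    import math
    import random

    # Hypothetical measurements: lattice dimension n -> mean LLL runtime (s).
    # Illustrative numbers only, not data from the paper.
    measurements = {32: 0.004, 64: 0.061, 128: 0.93, 256: 14.2, 512: 221.0}

    # Candidate exponents for a single-term hypothesis c * n^i * log2(n)^j.
    I = [k / 4 for k in range(25)]  # polynomial exponents 0.0 .. 6.0
    J = [0, 1, 2]                   # logarithmic exponents

    def fit_coefficient(i, j):
        """Weighted least squares for c with fixed exponents; returns (c, cost),
        where cost is the sum of squared relative residuals."""
        num = den = 0.0
        for n, t in measurements.items():
            basis = (n ** i) * (math.log2(n) ** j)
            num += basis / t
            den += (basis / t) ** 2
        c = num / den
        cost = sum((c * (n ** i) * (math.log2(n) ** j) / t - 1.0) ** 2
                   for n, t in measurements.items())
        return c, cost

    def neighbour(state):
        """Randomly perturb one of the two exponents."""
        i, j = state
        if random.random() < 0.5:
            return random.choice(I), j
        return i, random.choice(J)

    def anneal(steps=20000, temp=1.0, cooling=0.9995):
        """Simulated annealing over the discrete hypothesis space."""
        state = (random.choice(I), random.choice(J))
        cost = fit_coefficient(*state)[1]
        best, best_cost = state, cost
        for _ in range(steps):
            cand = neighbour(state)
            cand_cost = fit_coefficient(*cand)[1]
            # Accept improvements always, worse moves with Boltzmann probability.
            if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
                state, cost = cand, cand_cost
                if cost < best_cost:
                    best, best_cost = state, cost
            temp *= cooling
        return best, best_cost

    if __name__ == "__main__":
        (i, j), _ = anneal()
        c, cost = fit_coefficient(i, j)
        print(f"fitted model: {c:.3e} * n^{i} * log2(n)^{j}  (cost = {cost:.3e})")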
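For contrast, the operation-counting approach evaluated in the second contribution amounts to dividing an operation count by an assumed processor throughput. The sketch below uses the commonly cited worst-case bound for classical LLL with all constants dropped; every number in it is a hypothetical placeholder.

    # Runtime estimation from operation counts, the approach the paper finds
    # unreliable: combine a theoretical operation count with assumptions
    # about the executing processor. All numbers are hypothetical.
    def estimated_runtime_seconds(op_count, clock_hz, ops_per_tick):
        # runtime ~ operations / (operations executed per second)
        return op_count / (clock_hz * ops_per_tick)

    # Commonly cited worst-case bound for classical LLL: O(d^5 n log^3 B)
    # bit operations for d basis vectors in Z^n with entry norms <= B
    # (constants dropped, so this is only an order-of-magnitude figure).
    d, n, log2_B = 128, 128, 1000
    op_count = d ** 5 * n * log2_B ** 3

    # The result vastly exceeds observed runtimes, one way such
    # count-based estimates go wrong in practice.
    print(f"estimated: {estimated_runtime_seconds(op_count, 3.0e9, 4):.3e} s")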
