
Solving subset sum with small space – Handling cryptanalytic Big Data


Abstract

Big Data applications are characterized by processing amounts of data too huge to be stored. Cryptographic protocols are by construction supposed to define huge data spaces that cannot be handled by any attacker. Nevertheless, the task of protocol cryptanalysis is to properly select cryptographic parameter lengths that guarantee both efficiency and security. This requires breaking cryptographic protocols and their underlying hardness assumptions for mid-sized parameters. But even for mid-sized parameters, cryptographic search spaces are far too huge to be stored. This calls for technical solutions that traverse the search space without storing elements. As an appealingly simple example, we address the subset sum problem, which lies at the heart of many modern cryptographic protocols designed to offer security even against quantum computers. In the subset sum problem, one obtains integers a_1, …, a_n and an integer target t, and has to find a subset of the a_i's that sums exactly to t. A trivial memory-less algorithm tests, for all 2^n subsets, whether their sum equals t. It may come as a surprise that there exist memory-less algorithms significantly faster than 2^n. We give a survey of recent memory-less techniques that apply to, but are not limited to, the subset sum problem. We start by describing a general collision finding technique that was introduced in 1994 in the seminal work of van Oorschot and Wiener. Applied to subset sum, the van Oorschot-Wiener technique leads to a 2^{0.75n} algorithm. This was improved in 2011 by Becker, Coron and Joux to 2^{0.72n} using the representation technique. Recently, Esser and May presented a memory-less algorithm achieving 2^{0.65n} using two-layered collision finding. These running times have to be compared to the optimal 2^{0.5n} lower bound for collision finding algorithms.
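
To make the baseline concrete, the following is a minimal sketch of the trivial memory-less search described in the abstract: a counter enumerates all 2^n subsets and each candidate sum is checked against t, so nothing beyond the counter is ever stored. The instance (a, t) is a hypothetical example, not taken from the surveyed work.

def subset_sum_bruteforce(a, t):
    """Return the indices of a subset of a that sums to t, or None."""
    n = len(a)
    for mask in range(1 << n):                      # enumerate all 2^n subsets via a counter
        if sum(a[i] for i in range(n) if mask >> i & 1) == t:
            return [i for i in range(n) if mask >> i & 1]
    return None                                     # constant memory: only the counter is kept


if __name__ == "__main__":
    a = [3, 9, 8, 4, 5, 7]                          # hypothetical instance
    t = 15
    print(subset_sum_bruteforce(a, t))              # prints [0, 2, 3], since 3 + 8 + 4 = 15

The loop counter plays the role of the subset's characteristic vector, so the running time is on the order of 2^n sum evaluations while the memory stays constant.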
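
The collision-finding route can be sketched in the same spirit. The code below is a hedged illustration, not the construction from the surveyed papers: it splits the weights into two halves, encodes a state as a side bit plus a subset of that half, and runs Floyd's tortoise-and-hare cycle finding, a simpler single-walk relative of the parallel distinguished-point search of van Oorschot and Wiener, on a salted walk over these states. A collision between states from different sides with equal mapped values yields a candidate solution; the state encoding, the salting via Python's hash, and the restart bound max_restarts are all assumptions made for illustration, and the sketch conveys the constant-memory principle rather than the exact 2^{0.75n} procedure analyzed in the survey.

import random


def subset_sum_collision(a, t, max_restarts=10_000, seed=1):
    """Memory-less meet-in-the-middle via Floyd cycle finding (illustrative sketch)."""
    n = len(a)
    assert n % 2 == 0, "this sketch assumes an even number of weights"
    m = n // 2
    left, right = a[:m], a[m:]
    rng = random.Random(seed)

    def side_sum(state):
        # Interpret an (m+1)-bit state: low m bits pick a subset, the top bit picks the half.
        bits, side = state & ((1 << m) - 1), state >> m
        s = sum((left if side == 0 else right)[i] for i in range(m) if bits >> i & 1)
        return s if side == 0 else t - s        # equal values across sides => solution

    for _ in range(max_restarts):
        salt = rng.getrandbits(64)              # re-randomize the walk on every restart

        def step(state):                        # pseudo-random walk on (m+1)-bit states
            return hash((side_sum(state), salt)) & ((1 << (m + 1)) - 1)

        x0 = rng.getrandbits(m + 1)
        tortoise, hare = step(x0), step(step(x0))
        while tortoise != hare:                 # phase 1: detect the cycle of the walk
            tortoise, hare = step(tortoise), step(step(hare))
        tortoise, prev_t, prev_h = x0, None, None
        while tortoise != hare:                 # phase 2: locate the cycle entrance
            prev_t, prev_h = tortoise, hare
            tortoise, hare = step(tortoise), step(hare)
        if prev_t is None or prev_t == prev_h:  # degenerate walk, retry with a new salt
            continue
        u, v = prev_t, prev_h                   # colliding pair: step(u) == step(v)
        if u >> m != v >> m and side_sum(u) == side_sum(v):
            lo, hi = (u, v) if u >> m == 0 else (v, u)
            sol = ([i for i in range(m) if lo >> i & 1] +
                   [m + i for i in range(m) if hi >> i & 1])
            if sum(a[i] for i in sol) == t:     # guard against spurious hash collisions
                return sol
    return None


if __name__ == "__main__":
    a = [3, 9, 8, 4, 5, 7]                      # same hypothetical instance as above
    print(subset_sum_collision(a, 15))          # prints some subset of indices summing to 15

Each restart keeps only a constant number of states (the tortoise, the hare and the salt); the repeated, re-randomized walks replace the huge sorted lists of a classical meet-in-the-middle attack.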
