IMA Conference on Cryptography and Coding

Shannon Entropy Versus Renyi Entropy from a Cryptographic Viewpoint


Abstract

We provide a new inequality that links two important entropy notions: Shannon entropy H_1 and collision entropy H_2. Our formula gives the worst possible amount of collision entropy in a probability distribution when its Shannon entropy is fixed. While in practice it is easier to evaluate Shannon entropy than other entropy notions, it is well known in folklore that, except in very special settings, it does not provide a good estimate of randomness quality from a cryptographic viewpoint. Our results and techniques put this folklore in quantitative form, allowing us to precisely answer the following questions: (a) How accurately does Shannon entropy estimate uniformity? Concretely, if the Shannon entropy of an n-bit source X is n - ε, where ε is a small number, can we conclude that X is close to uniform? This question is motivated by uniformity tests based on entropy estimators, such as Maurer's Universal Test. (b) How much randomness can we extract from a source with high Shannon entropy? That is, if the Shannon entropy of an n-bit source X is n - O(1), at least how many almost uniform bits can we retrieve? This question is motivated by the folklore upper bound O(log(n)). (c) Can we use high Shannon entropy for key derivation? More precisely, if we have an n-bit source X of Shannon entropy n - O(1), can we use it as a secure key for some applications, such as square-secure applications? This is motivated by recent improvements in key derivation obtained by Barak et al. (CRYPTO'11) and Dodis et al. (TCC'14), which consider keys with some entropy deficiency. Our approach involves convex optimization techniques, which yield the shape of the "worst" distribution, and the use of the Lambert W function, by which we solve equations arising from Shannon entropy constraints. We believe this approach may be useful and of independent interest elsewhere, particularly for studying Shannon entropy under constraints.
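The gap between the two entropy notions is easy to see numerically. Below is a minimal Python sketch, an illustration rather than the paper's bound: it assumes the "spike plus uniform tail" shape hinted at by the abstract's "worst" distribution (one heavy atom of mass p, the remaining mass spread evenly over the other 2^n - 1 points), and the parameters n = 16 and entropy deficiency ε = 1, as well as all function names, are illustrative choices. It tunes the atom mass so that H_1(X) = n - ε and then reports the resulting collision entropy, where H_1(X) = -Σ_x p(x) log2 p(x) and H_2(X) = -log2 Σ_x p(x)^2.

```python
# Minimal numerical sketch (an illustration, not the paper's exact bound).
# Assumption: the distribution has a "spike plus uniform tail" shape -- one heavy
# atom of mass p, the remaining mass uniform over the other 2**n - 1 points.
# The parameters n = 16 and deficiency eps = 1 are illustrative assumptions.
import math


def shannon_entropy(probs):
    """H_1(X) = -sum_x p(x) * log2 p(x)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


def collision_entropy(probs):
    """H_2(X) = -log2 sum_x p(x)**2."""
    return -math.log2(sum(p * p for p in probs))


def spike_plus_uniform(n, p):
    """One atom of mass p; the rest of the mass uniform over the other 2**n - 1 points."""
    tail = (1.0 - p) / (2 ** n - 1)
    return [p] + [tail] * (2 ** n - 1)


def atom_mass_for_shannon(n, target, iters=50):
    """Bisect on the atom mass p so that H_1 of the spike-plus-uniform
    distribution equals `target` (H_1 decreases as p grows beyond 2**-n)."""
    lo, hi = 2.0 ** -n, 1.0 - 1e-12
    for _ in range(iters):
        mid = (lo + hi) / 2
        if shannon_entropy(spike_plus_uniform(n, mid)) > target:
            lo = mid  # distribution still too flat: make the atom heavier
        else:
            hi = mid
    return (lo + hi) / 2


if __name__ == "__main__":
    n, eps = 16, 1.0                      # assumed: H_1(X) = n - eps = 15 bits
    p = atom_mass_for_shannon(n, n - eps)
    dist = spike_plus_uniform(n, p)
    print(f"atom mass p       ~ {p:.4f}")
    print(f"Shannon   H_1(X)  ~ {shannon_entropy(dist):.3f} bits (target {n - eps})")
    print(f"collision H_2(X)  ~ {collision_entropy(dist):.3f} bits (out of {n})")
```

With these illustrative parameters the sketch reports H_1(X) ≈ 15 bits while H_2(X) comes out at only about 7 bits, which gives a feel (though not a proof) for why a near-full Shannon entropy guarantee says little about collision entropy, and hence about extraction or key derivation.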

