IEEE Symposium on Computer Applications and Industrial Electronics

Multiplying very large integer in GPU with Pascal architecture



Abstract

Multiplication plays an important role in scientific computing and cryptography. When the multiplicands grow large (e.g. beyond 100K bits), the multiplication process becomes time-consuming. In this paper, we present implementation techniques for multiplying very large integers on a state-of-the-art GPU architecture. The implementation relies on the Number Theoretic Transform (NTT) with a 64-bit prime. The results show that multiplying 768K-bit integers takes 1.37 milliseconds on a GTX 1070 (a GPU with the Pascal architecture). The work presented in this paper can be used to implement various advanced cryptosystems, including homomorphic encryption and lattice-based cryptography.
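As a rough illustration of the NTT-based approach described in the abstract, the following is a minimal single-threaded CPU sketch in Python. The 64-bit prime 2^64 − 2^32 + 1, its generator 7, and the 16-bit limb size are common NTT-friendly choices used here for illustration, not necessarily the paper's actual parameters, and the GPU parallelization is omitted.

```python
# Minimal sketch of big-integer multiplication via the Number Theoretic
# Transform (NTT). The prime P = 2^64 - 2^32 + 1 and generator G = 7 are
# common NTT-friendly choices, not necessarily the paper's parameters.
P = 0xFFFFFFFF00000001
G = 7

def ntt(a, invert=False):
    """In-place iterative radix-2 NTT over Z_P; len(a) must be a power of 2."""
    n = len(a)
    j = 0
    for i in range(1, n):                      # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G, (P - 1) // length, P)       # primitive length-th root of unity
        if invert:
            w = pow(w, P - 2, P)               # inverse root for the inverse NTT
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * wn % P
                a[k] = (u + v) % P
                a[k + length // 2] = (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)               # scale by n^-1 mod P
        for i in range(n):
            a[i] = a[i] * n_inv % P

def multiply(x, y, limb_bits=16):
    """Multiply non-negative integers x, y via NTT-based limb convolution."""
    mask = (1 << limb_bits) - 1
    def to_limbs(v):
        limbs = [v & mask]
        v >>= limb_bits
        while v:
            limbs.append(v & mask)
            v >>= limb_bits
        return limbs
    ax, ay = to_limbs(x), to_limbs(y)
    n = 1
    while n < len(ax) + len(ay):               # room for the full product
        n <<= 1
    ax += [0] * (n - len(ax))
    ay += [0] * (n - len(ay))
    ntt(ax)
    ntt(ay)
    c = [ax[i] * ay[i] % P for i in range(n)]  # pointwise product
    ntt(c, invert=True)
    result = 0
    for coeff in reversed(c):                  # evaluate at 2^limb_bits (carries)
        result = (result << limb_bits) + coeff
    return result
```

A GPU implementation parallelizes the butterfly loops across threads; the modular arithmetic is the same. With 16-bit limbs, each convolution coefficient stays well below the 64-bit prime, so the product is recovered exactly.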
