Communications of the ACM

Compressed Linear Algebra for Declarative Large-Scale Machine Learning


Abstract

Large-scale machine learning (ML) algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications. Hence, it is crucial for performance to fit the data into single-node or distributed main memory to enable fast matrix-vector operations. General-purpose compression struggles to achieve both good compression ratios and fast decompression for block-wise uncompressed operations. Therefore, we introduce Compressed Linear Algebra (CLA) for lossless matrix compression. CLA encodes matrices with lightweight, value-based compression techniques and executes linear algebra operations directly on the compressed representations. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show good compression ratios and operation performance close to the uncompressed case, which enables fitting larger datasets into available memory. We thereby obtain significant end-to-end performance improvements.
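The abstract's central idea — executing linear algebra directly on a value-based column encoding, without decompressing — can be sketched in a few lines. The following is a simplified illustration only, not the paper's actual OLE/RLE column schemes or the SystemML implementation: each column is dictionary-encoded as (distinct value, row-offset list) pairs, and a matrix-vector product is computed on that compressed form, so each distinct value's contribution is applied to all of its rows at once.

```python
import numpy as np

def encode_column(col):
    """Dictionary-encode one column: map each distinct nonzero value
    to the array of row indices (an offset list) where it occurs."""
    groups = {}
    for i, v in enumerate(col):
        if v != 0:
            groups.setdefault(v, []).append(i)
    return [(d, np.array(rows)) for d, rows in groups.items()]

def encode_matrix(X):
    """Compress a matrix column-wise; return encoded columns and shape."""
    return [encode_column(X[:, j]) for j in range(X.shape[1])], X.shape

def mv_compressed(cols, shape, v):
    """Compute y = X @ v directly on the compressed columns:
    for column j, each distinct value d contributes d * v[j]
    to every row in its offset list in one vectorized update."""
    m, n = shape
    y = np.zeros(m)
    for j, groups in enumerate(cols):
        for d, rows in groups:
            y[rows] += d * v[j]
    return y

# Toy matrix with few distinct values per column (where such
# value-based encodings pay off).
X = np.array([[1., 0., 2.],
              [1., 3., 2.],
              [0., 3., 2.]])
cols, shape = encode_matrix(X)
v = np.array([1., 2., 3.])
print(mv_compressed(cols, shape, v))  # matches X @ v
```

The work per column scales with the number of distinct values and their occurrences rather than with a dense scan, which is why columns with small value dictionaries both compress well and multiply quickly.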
