International Workshop on Big Data and Information Security > Big data compression using spiht in Hadoop: A case study in multi-lead ECG signals

Big data compression using spiht in Hadoop: A case study in multi-lead ECG signals



Abstract

Compression remains a central concern in big data frameworks. Big data performance depends on the speed of data transfer: compressed data moves faster across the network and also occupies less storage space. Hadoop, the most common big data framework, provides several compression methods, but they are general-purpose and their performance still needs to be optimized for biomedical records such as ECG data. We propose Set Partitioning in Hierarchical Trees (SPIHT) for big data compression, with multi-lead ECG signal data as a case study. In this paper, compression runs in the Hadoop framework. The proposed method consists of the following stages: reading the input signal, mapping the input signal, SPIHT coding, and reducing the bit-stream. The compression produces compressed data both as the intermediate (map) output and as the final (reduce) output. The experiments use ECG data to measure compression performance. The proposed method achieves a Percentage Root-mean-square Difference (PRD) of about 1.0. Compared with existing methods, the proposed method achieves a better Compression Ratio (CR) at a competitive, though longer, compression time. The proposed method therefore performs better than the other methods, especially on ECG datasets.
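The map/SPIHT-coding/reduce staging described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: `spiht_encode` is a hypothetical placeholder (a real SPIHT coder performs wavelet decomposition followed by bit-plane set-partitioning coding), and the map/reduce functions only mimic the shape of a Hadoop job, with one map task per ECG lead and a reduce step that merges the per-lead bit-streams.

```python
from typing import List, Tuple


def spiht_encode(samples: List[int]) -> bytes:
    """Placeholder for SPIHT coding of one ECG lead (hypothetical).

    A real encoder would emit significance bits per bit-plane after a
    wavelet transform; here we simply pack the samples so the pipeline
    shape is visible.
    """
    return b"".join(s.to_bytes(2, "big", signed=True) for s in samples)


def map_phase(record: Tuple[str, List[int]]) -> Tuple[str, bytes]:
    """Map stage: encode each lead independently, emitting (lead_id, bit-stream)."""
    lead_id, samples = record
    return lead_id, spiht_encode(samples)


def reduce_phase(mapped: List[Tuple[str, bytes]]) -> bytes:
    """Reduce stage: merge per-lead bit-streams into one compressed output."""
    out = bytearray()
    for lead_id, stream in sorted(mapped):
        # Simple framing: lead id, 4-byte length, then the bit-stream.
        out += lead_id.encode() + len(stream).to_bytes(4, "big") + stream
    return bytes(out)


# Toy multi-lead ECG record: two leads of signed 16-bit samples.
ecg = [("I", [10, -3, 7]), ("II", [5, 5, -8])]
compressed = reduce_phase([map_phase(r) for r in ecg])
```

In a real Hadoop job the map output would already be compressed, matching the abstract's note that both the intermediate (map) and final (reduce) outputs are compressed data.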
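The two metrics quoted in the abstract have standard definitions: PRD is the root of the ratio of reconstruction-error energy to signal energy, expressed as a percentage (lower is better), and CR is the original size divided by the compressed size (higher is better). A minimal sketch, using illustrative toy values rather than the paper's data:

```python
import math
from typing import Sequence


def prd(original: Sequence[float], reconstructed: Sequence[float]) -> float:
    """Percentage Root-mean-square Difference between a signal and its
    reconstruction: 100 * sqrt(sum((x - x_hat)^2) / sum(x^2))."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)


def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR: size of the original data over size of the compressed data."""
    return original_bits / compressed_bits


# Toy ECG-like samples, not the paper's dataset.
x = [100, 102, 98, 101]
x_hat = [100, 101, 99, 101]
print(round(prd(x, x_hat), 3))          # small error => small PRD
print(compression_ratio(1024, 256))     # 4x smaller => CR of 4.0
```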
