Making Large Transfers Fast for in-Memory Databases in Modern Networks.

Abstract

Efficient movement of massive amounts of data over high-speed networks at high throughput is essential for a modern in-memory storage system. In response to growing throughput and latency demands at scale, a new class of database systems has been developed in recent years. The development of these systems was guided by increased access to high-throughput, low-latency network fabrics and the declining cost of Dynamic Random Access Memory (DRAM). These systems were designed with On-Line Transaction Processing (OLTP) workloads in mind and, as a result, are optimized for fast dispatch and perform well under small request-response scenarios. However, massive server responses, such as those for range queries and for data migration during load balancing, pose challenges for this design. This thesis analyzes the effects of large transfers on scale-out systems through the lens of a modern Network Interface Card (NIC). The present-day NIC offers new and exciting opportunities and challenges for large transfers, but exploiting it efficiently requires smart data layout and concurrency control. We evaluated the impact of modern NICs on data-layout design by measuring transmit performance, and their full-system impact by observing the effects of Direct Memory Access (DMA), Remote Direct Memory Access (RDMA), and caching improvements such as Intel® Data Direct I/O (DDIO). We found that zero-copy techniques, used with a client-assisted design in which records are not updated in place, yield around 25% savings in CPU cycles and a 50% reduction in memory bandwidth utilization on the server. We also set up experiments that exposed the bottlenecks in the current approach to data migration in RAMCloud and propose guidelines for a fast and efficient migration protocol for RAMCloud.
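
The zero-copy result above is easiest to picture as scatter-gather transmission of records that sit at non-contiguous addresses in memory. The sketch below is illustrative only and is not taken from the thesis: it uses POSIX writev() as a stand-in for NIC scatter-gather DMA, and the record layout, the send_records_zero_copy helper, and the sock_fd argument are all assumptions made for the example.

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Hypothetical in-memory log record; the payload pointer refers to data
 * that already resides in the server's log and is never copied. */
struct record {
    uint64_t    key;
    uint32_t    length;
    const char *payload;
};

/* Transmit up to 64 records without staging them in a contiguous buffer.
 * Each record contributes one iovec entry, so the kernel (or, on real
 * hardware, the NIC's DMA engine) gathers the payloads from wherever they
 * live, avoiding a per-record memcpy on the CPU. */
ssize_t send_records_zero_copy(int sock_fd, const struct record *recs, int n)
{
    struct iovec iov[64];
    int cnt = (n < 64) ? n : 64;

    for (int i = 0; i < cnt; i++) {
        iov[i].iov_base = (void *)recs[i].payload;
        iov[i].iov_len  = recs[i].length;
    }
    return writev(sock_fd, iov, cnt);   /* one gather operation, no staging copy */
}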

Bibliographic details

  • Author: Kesavan, Aniraj
  • Author affiliation: The University of Utah
  • Degree-granting institution: The University of Utah
  • Subject: Computer science
  • Degree: M.S.
  • Year: 2017
  • Pagination: 79 p.
  • Total pages: 79
  • Original format: PDF
  • Language: English (eng)
  • Date added: 2022-08-17 11:54:23
