Performance Evaluation Review

Analysis of a Stochastic Model of Replication in Large Distributed Storage Systems: A Mean-Field Approach


Abstract

Distributed storage systems such as the Hadoop File System (HDFS) or the Google File System (GFS) ensure data availability and durability using replication. Persistence is achieved by replicating the same data block on several nodes, and by ensuring that a minimum number of copies is available in the system at any time. Whenever the contents of a node are lost, for instance due to a hard-disk crash, the system regenerates the data blocks stored before the failure by transferring them from the remaining replicas. This paper analyzes the efficiency of the replication mechanism that determines the location of the copies of a given file on the servers. The variability of the loads of the nodes of the network is investigated for several policies. Three replication mechanisms are tested against simulations in the context of a real implementation of such a system: Random, Least Loaded, and Power of Choice. The simulations show that some of these policies may lead to quite unbalanced situations: if β is the average number of copies per node, it turns out that, at equilibrium, the load of the nodes may exhibit high variability. It is shown in this paper that a simple variant of a power-of-choice-type algorithm has a striking effect on the loads of the nodes: at equilibrium, the distribution of the load of a node has bounded support, and most nodes have a load less than 2β, which is an interesting property for the design of the storage space of these systems. Stochastic models are introduced and investigated to explain this phenomenon. The full paper is available on arXiv.
