Home > Conference Proceedings > International Conference on Parallel Processing Workshops > OSI Standards and the Top Fallacy of Distributed Computing

OSI Standards and the Top Fallacy of Distributed Computing



Abstract

In 1994, the OSI (Open Systems Interconnection) reference model, ISO/IEC 7498-1:1994, was published. The two-sided, point-to-point communication TCP/IP protocol became widely accepted as the standard building block for all distributed applications. The OSI standard states: "Transport Layer: Reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing." In the same year, "The network is reliable" was named the top fallacy in distributed computing by Peter Deutsch and his colleagues. Understanding these two conflicting descriptions of the seemingly same object has been a lasting myth for distributed and parallel processing. Investigating this myth under the lens of extreme-scale computing revealed that it is impossible to implement reliable communication in the face of crashes of either the sender or the receiver in every point-to-point communication channel. This impossibility result exposes two unacceptable risks: a) arbitrary data loss, and b) increasing probability of data losses as the infrastructure expands in size. Therefore, the direct use of point-to-point protocols in distributed and parallel applications is the cause of the notorious scalability dilemma. The top-fallacy allegation in distributed computing is supported by theory and practice. The scalability dilemma is responsible for the growing planned and unplanned downtimes in existing infrastructures. It is also responsible for the under-reported data and service losses in mission-critical applications. It makes the reproducibility of large-scale computations increasingly difficult. The growing instability is also linked to our inability to quantify a parallel application's scalability. This paper reports a statistic multiplexed computing (SMC) paradigm designed to solve the scalability and reproducibility problems. Preliminary computational results are reported in support of the proposed solution.
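The abstract's second risk, that data-loss probability grows with infrastructure size, follows from elementary probability. A minimal sketch (not from the paper; the failure probability `p` is an illustrative assumption): if each point-to-point channel independently loses data with probability p per run, the probability that at least one of n channels fails approaches certainty as n grows.

```python
def failure_probability(p: float, n: int) -> float:
    """P(at least one of n independent channels fails),
    each failing with probability p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# Even a tiny per-channel loss rate becomes significant at scale:
for n in (10, 1_000, 100_000):
    print(n, failure_probability(1e-5, n))
```

With p = 1e-5, ten channels are almost surely fine, but at 100,000 channels the chance of at least one loss per run exceeds 60 percent, which is the scaling risk the abstract describes for direct point-to-point protocols.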
