International Conference on Parallel and Distributed Computing

Scalable Linux Container Provisioning in Fog and Edge Computing Platforms



Abstract

The tremendous increase in the number of mobile devices and the proliferation of new types of sensors are creating new value opportunities by analyzing, developing insights from, and actuating upon large volumes of data streams generated at the edge of the network. While the general-purpose processing required to unleash this value is abundant in Cloud datacenters, bringing raw IoT data streams to the Cloud poses critical challenges, including: (i) regulatory constraints related to data sensitivity, (ii) significant bandwidth costs, and (iii) latency barriers inhibiting near-real-time applications. Edge Computing aspires to extend the traditional cloud model to the "edge of the network", to deliver low latency, bandwidth efficiency, and controlled privacy. For all the commonalities between the two models, transitioning the provisioning and orchestration of a distributed analytics platform from Cloud to Edge is not trivial. The two models present totally different cost structures, such as the price of bandwidth, data communication latency, power density, and availability. In this paper, we address the challenge associated with transitioning scalable provisioning from Cloud to distributed Edge platforms. We identify current scalability challenges in Linux container provisioning at the Edge; we propose a novel peer-to-peer model to address them; we present a prototype of this model designed for and tested on real Edge testbeds; and we report a scalability evaluation on a scale-out virtualized platform. Our results demonstrate significant savings in terms of provisioning latency and bandwidth utilization.
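The abstract's central idea, peer-to-peer container provisioning at the Edge, amounts to letting nearby edge nodes serve image layers to one another rather than having every node pull from a distant Cloud registry. The following is a minimal sketch of that peer-first retrieval pattern only; the endpoints, blob path, and the fetch_layer helper are hypothetical illustrations and do not reflect the paper's actual implementation or any specific registry API.

```python
# Hypothetical sketch: prefer nearby edge peers over the remote Cloud
# registry when fetching a container image layer. All names (PEERS,
# REGISTRY, fetch_layer) are illustrative, not from the paper.

import urllib.request
import urllib.error

# Illustrative endpoints: local edge peers are tried before the Cloud registry.
PEERS = ["http://edge-peer-1:5000", "http://edge-peer-2:5000"]
REGISTRY = "https://registry.example.com"

def fetch_layer(digest: str) -> bytes:
    """Fetch an image layer, trying peers first to save WAN bandwidth and latency."""
    for base in PEERS + [REGISTRY]:
        url = f"{base}/v2/app/blobs/{digest}"  # hypothetical blob path
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return resp.read()
        except (urllib.error.URLError, OSError):
            continue  # source unreachable; fall through to the next one
    raise RuntimeError(f"layer {digest} unavailable from all sources")

if __name__ == "__main__":
    data = fetch_layer("sha256:deadbeef")
    print(f"fetched {len(data)} bytes")
```

In this sketch, the bandwidth and latency savings reported in the paper correspond to the cases where a layer is satisfied by a peer on the local edge network, so the request never traverses the WAN link to the Cloud registry.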
