European Performance Engineering Workshop

Modeling the Effect of Parallel Execution on Multi-site Computation Offloading in Mobile Cloud Computing


Abstract

As smart mobile devices become an indispensable part of daily life, the demand for running complex applications on them is increasing. However, the limited resources of these devices (e.g. battery life, computation power, bandwidth) restrict the types of applications they can run. These restrictions can be overcome by letting such devices offload computation and run parts of an application on powerful cloud servers. The greatest benefit from computation offloading is obtained by allocating the parts of an application across devices (i.e. the mobile device and the cloud servers) so as to minimize the total cost, where the cost can be the response time of the application, the mobile battery usage, or both. Different devices typically have different numbers of processing cores. Unlike prior work on modeling computation offloading, this work models the effect that parallel execution of different parts of an application, on different devices (external parallelism) as well as on different cores of a single device (internal parallelism), has on the offloading allocation. It treats each device as a multi-server queueing station and proposes a novel algorithm to evaluate the response time and energy consumption of an allocation, taking into account both the application workflow and the parallel execution across the cores of different devices. To find the near-optimal allocation(s), it uses an existing genetic algorithm that invokes our proposed algorithm to determine the fitness of an allocation. This approach is most advantageous when a workflow has multiple tasks that can execute in parallel. The results show that modeling the effect of parallel execution yields better near-optimal solutions for the allocation problem than not modeling parallel execution at all.
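The abstract describes two cooperating pieces: an evaluation algorithm that estimates the response time of a candidate allocation on multi-core devices, and a genetic algorithm that uses that estimate as its fitness function. The Python sketch below illustrates the general idea only, under simplified assumptions (a fixed network transfer delay, deterministic task times, a toy genetic algorithm instead of the existing GA the paper relies on, and a list-scheduling heuristic standing in for the multi-server queueing model). The names DEVICES, TASK_WORK, DEPS, response_time, and genetic_search are hypothetical and are not the authors' model or code.

# Hypothetical sketch: allocating workflow tasks to multi-core devices and
# searching over allocations with a simple genetic algorithm. All names,
# parameters, and heuristics are illustrative, not the paper's algorithm.
import random

# Devices: (name, number_of_cores, relative_speed). The mobile device is
# slower but local; cloud sites are faster but reached over the network.
DEVICES = [("mobile", 2, 1.0), ("cloud_a", 8, 4.0), ("cloud_b", 8, 4.0)]
NETWORK_DELAY = 0.5                          # assumed fixed transfer cost between devices
TASK_WORK = [4, 3, 3, 2, 5, 1]               # work units per workflow task
DEPS = {2: [0], 3: [1], 4: [2, 3], 5: [4]}   # task -> prerequisite tasks (a DAG)

def response_time(allocation):
    """Estimate the makespan of an allocation (task index -> device index).

    Each task runs on the earliest-free core of its assigned device; it cannot
    start before its predecessors finish, plus a transfer delay whenever a
    predecessor ran on a different device. This is a crude stand-in for the
    paper's queueing-based evaluation.
    """
    core_free = [[0.0] * DEVICES[d][1] for d in range(len(DEVICES))]
    finish = [0.0] * len(TASK_WORK)
    for t in range(len(TASK_WORK)):          # tasks assumed topologically ordered
        d = allocation[t]
        ready = max((finish[p] + (NETWORK_DELAY if allocation[p] != d else 0.0)
                     for p in DEPS.get(t, [])), default=0.0)
        core = min(range(DEVICES[d][1]), key=lambda c: core_free[d][c])
        start = max(ready, core_free[d][core])
        finish[t] = start + TASK_WORK[t] / DEVICES[d][2]
        core_free[d][core] = finish[t]
    return max(finish)

def genetic_search(pop_size=30, generations=100, mutation_rate=0.2):
    """Toy GA: truncation selection, one-point crossover, per-gene mutation."""
    n_tasks, n_devs = len(TASK_WORK), len(DEVICES)
    pop = [[random.randrange(n_devs) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=response_time)          # fitness = estimated response time
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]
            child = [random.randrange(n_devs) if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=response_time)
    return best, response_time(best)

if __name__ == "__main__":
    allocation, rt = genetic_search()
    print("best allocation:", allocation, "estimated response time:", rt)

In this simplified setting the fitness already rewards both external parallelism (independent tasks placed on different devices) and internal parallelism (independent tasks sharing the free cores of one device); the paper's contribution is an evaluation that captures these effects with a multi-server queueing model and also accounts for energy consumption.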
