IEEE Transactions on Cloud Computing

Adaptive Avatar Handoff in the Cloudlet Network

Abstract

In a traditional big data network, data streams generated by User Equipments (UEs) are uploaded to the remote cloud (for further processing) via the Internet. However, moving a huge amount of data via the Internet may lead to a long End-to-End (E2E) delay between a UE and its computing resources (in the remote cloud) as well as severe traffic congestion in the Internet. To overcome this drawback, we propose a cloudlet network that brings computing and storage resources from the cloud to the mobile edge. Each base station is attached to one cloudlet, and each UE is associated with its Avatar in the cloudlet to process its data locally. Thus, the E2E delay between a UE and the computing resources in its Avatar is reduced as compared to that in the traditional big data network. However, in order to maintain the low E2E delay when UEs roam away, it is necessary to hand off Avatars accordingly; it is not practical to hand off the Avatars' virtual disks during roaming, as this would incur unbearable migration time and network congestion. We propose the LatEncy Aware Replica placemeNt (LEARN) algorithm to place a number of replicas of each Avatar's virtual disk into suitable cloudlets. Thus, the Avatar can be handed off among its cloudlets (each of which contains one of its replicas) without migrating its virtual disk. Simulations demonstrate that LEARN reduces the average E2E delay. Meanwhile, by considering the capacity limitation of each cloudlet, we propose the LatEncy aware Avatar hanDoff (LEAD) algorithm to place UEs' Avatars among the cloudlets such that the average E2E delay is minimized. Simulations demonstrate that LEAD maintains the low average E2E delay.
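The abstract does not spell out how LEARN and LEAD work internally; as an illustrative sketch only, the idea of latency-aware replica placement and capacity-aware handoff can be modeled as below. All names, signatures, and the brute-force placement over small instances are our assumptions for illustration, not the authors' actual algorithms.

```python
from itertools import combinations

def place_replicas(latency, visit_prob, k):
    """Illustrative (brute-force) replica placement: choose k cloudlets
    to hold an Avatar's virtual-disk replicas so that the expected delay
    from each base station the UE may visit to its nearest replica is
    minimized. latency[b][c] = E2E delay from base station b to cloudlet
    c; visit_prob[b] = probability that the UE attaches to b."""
    cloudlets = range(len(latency[0]))

    def expected_delay(subset):
        # Each base station uses its closest replica-holding cloudlet.
        return sum(p * min(latency[b][c] for c in subset)
                   for b, p in visit_prob.items())

    return min(combinations(cloudlets, k), key=expected_delay)

def handoff(latency, replicas, load, capacity, b):
    """Illustrative capacity-aware handoff: among replica-holding
    cloudlets with spare capacity, pick the one with the lowest delay
    from the UE's current base station b; return None if all are full."""
    feasible = [c for c in replicas if load[c] < capacity[c]]
    return min(feasible, key=lambda c: latency[b][c]) if feasible else None
```

With replicas pre-placed, a roaming UE's Avatar is handed off by restarting it on the selected cloudlet's local replica, so no virtual-disk migration happens on the handoff path; the exhaustive search above is only tractable for small instances and stands in for the paper's placement heuristic.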
