ACM Transactions on Embedded Computing Systems

Predictable Shared Cache Management for Multi-Core Real-Time Virtualization



Abstract

Real-time virtualization has gained much attention for consolidating multiple real-time systems onto a single hardware platform while ensuring timing predictability. However, a shared last-level cache (LLC) on modern multi-core platforms can easily hamper the timing predictability of real-time virtualization due to the resulting temporal interference among consolidated workloads. Since such LLC-induced interference is highly variable and may not even have existed in the legacy systems being consolidated, it poses a significant challenge for real-time virtualization. In this article, we propose a predictable shared cache management framework for multi-core real-time virtualization. Our framework introduces two hypervisor-level techniques, vLLC and vColoring, that enable cache allocation for individual tasks running in a virtual machine (VM), which is not achievable by the current state of the art. Our framework also provides a cache management scheme that determines cache allocation to tasks, designs VMs in a cache-aware manner, and minimizes the aggregated utilization of the VMs to be consolidated. As a proof of concept, we implemented vLLC and vColoring in the KVM hypervisor running on x86 and ARM multi-core platforms. Experimental results with three different guest OSs (i.e., Linux/RK, vanilla Linux, and MS Windows Embedded) show that our techniques can effectively control the cache allocation of tasks in VMs. Our cache management scheme yields a significant utilization benefit compared to other approaches while satisfying timing constraints.
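Cache-allocation techniques such as vColoring build on page coloring, which partitions the LLC by assigning physical pages to tasks so that pages of different "colors" map to disjoint cache sets. A minimal sketch of the underlying color arithmetic follows; the cache geometry used here (2 MiB, 16-way, 64-byte lines, 4 KiB pages) is illustrative and not drawn from the article's evaluation platforms:

```python
def num_colors(cache_bytes, ways, line_bytes, page_bytes=4096):
    """Number of page colors a set-associative LLC supports.

    A color corresponds to a distinct group of cache sets that one
    physical page frame can map to; pages of different colors never
    contend for the same sets.
    """
    sets = cache_bytes // (ways * line_bytes)     # total cache sets
    bytes_per_way = sets * line_bytes             # contiguous span covering all sets
    return bytes_per_way // page_bytes

def page_color(phys_frame_number, colors):
    """Color of a physical page frame: its low-order frame-number bits
    select which group of cache sets the page occupies."""
    return phys_frame_number % colors

# Example: a 2 MiB, 16-way LLC with 64-byte lines and 4 KiB pages
# yields 32 colors, so the hypervisor can partition the LLC into up
# to 32 disjoint regions by controlling which frames each VM's tasks
# receive.
colors = num_colors(2 * 1024 * 1024, 16, 64)
```

Restricting a task's page frames to a subset of colors bounds which LLC sets it can occupy, which is what makes the interference between consolidated workloads analyzable.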

