Cache-Affinity Scheduling for Fine Grain Multithreading


Abstract

Cache utilisation is often very poor in multithreaded applications, due to the loss of data-access locality incurred by frequent context switching. The problem is compounded on shared-memory multiprocessors, where dynamic load balancing introduces thread migration that disrupts cache content. In this paper we present a technique, which we refer to as 'batching', for reducing the negative impact of fine-grain multithreading on cache performance. Prototype schedulers running on uniprocessors and shared-memory multiprocessors are described, and experimental results illustrating the improvements observed after applying our techniques are presented.
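To make the scheduling idea concrete, the following is a minimal sketch of batch-style scheduling: instead of interleaving arbitrary ready threads, the scheduler groups them by the data region they touch and runs each group back-to-back, so threads sharing a working set reuse warm cache lines. The grouping key, round-robin order, and thread representation here are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import defaultdict, deque

def batch_schedule(threads, key):
    """Run ready threads in batches grouped by a cache-relevant key.

    'threads' are callables standing in for thread quanta; 'key' maps a
    thread to the data block it operates on (an assumed grouping
    criterion for this sketch).
    """
    # Group runnable threads by the data block they access, so threads
    # with overlapping working sets are dispatched consecutively.
    batches = defaultdict(deque)
    for t in threads:
        batches[key(t)].append(t)

    order = []
    # Drain one batch at a time rather than context-switching across
    # unrelated working sets after every quantum.
    for block, batch in batches.items():
        while batch:
            batch.popleft()()  # run this thread's quantum
            order.append(block)
    return order
```

With four threads alternating between two data blocks, a naive interleaved dispatch would touch a different block on every switch, while `batch_schedule` runs both threads of one block before moving to the other.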
