Conference: Parallel and Distributed Computing and Systems

PIPELINED MUTUAL EXCLUSION ON LARGE-SCALE CACHE-COHERENT MULTIPROCESSORS


Abstract

This paper proposes a pipelined mutual exclusion scheme that allows different processors to access different memory blocks of a set D of shared data in a pipelined manner. In the pipelining, processors requesting D are first serialized into a queue for each memory block of D. Second, permission to access each memory block, or a word within it, is acquired and released according to the order in the queue. The first phase exploits a tree data structure whose leaf nodes are associated with the memory blocks of D. With the tree, a total order of the processors requesting D is preserved in each of the per-block queues, so mutual exclusion on D is ensured. Pipelining is achieved because the access permission is passed on a block or word basis. The queues are distributed among the caches, which supports efficient pipelining. Results with a cycle-by-cycle simulator show that, for mutual exclusion on 256-block shared data with 256 processors, about 82 or 225 processors simultaneously access different blocks or words of the data, respectively, yielding a speedup of about 80 or 41 over pipelining without the tree. For a few small applications, the speedup ranges from about 1 to 21.
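The paper's queues are distributed in the hardware caches and the total order comes from a tree traversal, neither of which is reproducible in portable code. As a rough software analogue only, the sketch below (plain C11; all names such as NBLOCKS and serving are illustrative assumptions, not the paper's implementation) uses one global ticket counter in place of the tree to fix the total order, and a per-block serving counter in place of each cache-distributed queue; permission is then acquired and released block by block, which is what produces the pipelining.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define NBLOCKS  8   /* memory blocks of the shared data set D */
#define NTHREADS 4   /* requesting processors                  */

/* One global ticket fixes the total order over requesters
 * (the role the tree traversal plays in the paper).          */
static atomic_ulong next_ticket = 0;
/* serving[b] is the ticket currently admitted to block b,
 * standing in for the per-block queue.                        */
static atomic_ulong serving[NBLOCKS];
static long data[NBLOCKS];            /* the shared data set D */

static void *worker(void *arg)
{
    long id = (long)arg;

    /* Phase 1: serialize. One fetch-add gives this requester the
     * same position in every per-block queue at once.            */
    unsigned long t = atomic_fetch_add(&next_ticket, 1);

    /* Phase 2: pipeline. Visit the blocks of D in a fixed order,
     * acquiring and releasing permission block by block, so later
     * requesters can already work on blocks released here.       */
    for (int b = 0; b < NBLOCKS; b++) {
        while (atomic_load(&serving[b]) != t)
            ;                              /* spin until my turn  */
        data[b] += id;                     /* critical work on b  */
        atomic_fetch_add(&serving[b], 1);  /* pass block b on     */
    }
    return NULL;
}

int main(void)
{
    pthread_t th[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&th[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(th[i], NULL);
    for (int b = 0; b < NBLOCKS; b++)
        printf("block %d = %ld\n", b, data[b]);
    return 0;
}
```

With this structure, the holder of ticket t+1 may already operate on block b while the holder of ticket t has moved on to block b+1, mirroring the block-granularity pipelining the abstract describes.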