
Cache-Conscious Concurrent Data Structures.



Abstract

The power wall, the instruction-level parallelism wall, and the memory wall are driving a shift in microprocessor design from implicitly parallel architectures towards explicitly parallel architectures. A necessary condition for peak scalability and performance on modern hardware is application execution that is aware of the memory hierarchy. The thesis of this dissertation is that cache-conscious concurrent data structures for many-core systems will show significant performance improvements over the state of the art in concurrent data structure designs for those applications that must contend with the deleterious effects of the memory wall. Lock-free cache-conscious data structures that maintain the abstraction of a linearizable set have been studied previously in the context of unordered data structures. We explore novel alternatives, namely lock-free cache-conscious data structures that maintain the abstraction of a linearizable ordered set. The two primary design contributions of this dissertation are the lock-free skip tree and lock-free burst trie algorithms. In both algorithms, read-only operations are wait-free and modification operations are lock-free. The lock-free skip tree has relaxed structural properties that allow atomic operations to modify the tree without invalidating the consistency of the data structure. We define the dense skip tree as a variation of the skip tree data structure, and prove cache-conscious properties of the dense skip tree. The proof techniques represent a significant departure from the methods outlined in the original skip tree paper. We show that cache-conscious, linearizable concurrent data structures have advantageous performance that can be measured across multiple architecture platforms. The improved performance arises from better treatment of the memory wall phenomenon that is ubiquitous in current multi-core systems and will almost certainly continue to affect future many-core systems.
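To make the progress guarantees named in the abstract concrete, the Java sketch below illustrates the general pattern on a deliberately simpler structure than the dissertation's skip tree or burst trie: a sorted linked list maintaining a linearizable ordered set of integers, where contains() is a single traversal with no retries (wait-free under insert-only workloads) and add() is lock-free via one compare-and-set. The class name LockFreeOrderedSet, the integer keys, and the omission of deletion are illustrative assumptions, not the dissertation's algorithm.

import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch, NOT the dissertation's skip tree or burst trie: a sorted
// linked list acting as a linearizable ordered set of ints. contains() is a
// single forward traversal that never restarts; add() is lock-free, retrying
// only when a concurrent CAS wins. Deletion is omitted for brevity.
public final class LockFreeOrderedSet {

    private static final class Node {
        final int key;
        final AtomicReference<Node> next;
        Node(int key, Node next) {
            this.key = key;
            this.next = new AtomicReference<>(next);
        }
    }

    // Sentinel nodes with minimum/maximum keys remove null checks.
    private final Node head;

    public LockFreeOrderedSet() {
        head = new Node(Integer.MIN_VALUE, new Node(Integer.MAX_VALUE, null));
    }

    // Read-only operation: one forward pass, no retries.
    public boolean contains(int key) {
        Node curr = head.next.get();
        while (curr.key < key) {
            curr = curr.next.get();
        }
        return curr.key == key;
    }

    // Modification: lock-free insertion via compare-and-set on pred.next.
    public boolean add(int key) {
        while (true) {
            Node pred = head;
            Node curr = pred.next.get();
            while (curr.key < key) {
                pred = curr;
                curr = curr.next.get();
            }
            if (curr.key == key) {
                return false;                      // key already present
            }
            Node node = new Node(key, curr);
            if (pred.next.compareAndSet(curr, node)) {
                return true;                       // linearization point
            }
            // Another thread changed pred.next between the read and the CAS;
            // start the search over.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeOrderedSet set = new LockFreeOrderedSet();
        Thread evens = new Thread(() -> { for (int i = 0; i < 1000; i += 2) set.add(i); });
        Thread odds  = new Thread(() -> { for (int i = 1; i < 1000; i += 2) set.add(i); });
        evens.start(); odds.start();
        evens.join(); odds.join();
        System.out.println(set.contains(499));   // true
        System.out.println(set.contains(1500));  // false
    }
}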

Record details

  • Author

    Spiegel, Michael.

  • Affiliation

    University of Virginia.

  • Degree-granting institution: University of Virginia.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2011
  • Pages: 188 p.
  • Total pages: 188
  • Original format: PDF
  • Language: English (eng)
