
A GPU-Based Application Framework Supporting Fast Discrete-Event Simulation



Abstract

The graphics processing unit (GPU) has evolved into a flexible and powerful processor of relatively low cost compared to the processors used in other available parallel computing systems. The majority of studies using the GPU within the graphics and simulation communities have focused on models that are traditionally simulated using regular time increments, whether these increments are accomplished through the addition of a time delta (i.e., numerical integration) or through event scheduling using the delta (i.e., discrete-event approximations of continuous-time systems). These types of models have the property of being decomposable over a variable or parameter space. In prior studies, discrete-event simulation has been characterized as an inefficient application for the GPU, primarily due to the inherent synchronicity of the GPU organization and an apparent mismatch between the classic event-scheduling cycle and the GPU's basic functionality. However, we have found that irregular time advances of the sort common in discrete-event models can be successfully mapped to a GPU, making it possible to execute discrete-event systems on an inexpensive personal computer platform at speedups close to 10x. This speedup is achieved through a special-purpose code library we developed that uses an approximate time-based event-scheduling approach. We present the design and implementation of this library, which is based on the compute unified device architecture (CUDA), a general-purpose parallel applications programming interface for the NVIDIA class of GPUs.
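The abstract does not give implementation details, but the core idea of approximate time-based event scheduling is that events whose timestamps fall within a small window are batched and processed together, trading exact event ordering for data parallelism (on a GPU, each batch would be one kernel launch). A minimal CPU-side sketch of this idea, with all names hypothetical, might look like:

```python
def run_approximate(events, delta, handler):
    """Approximate time-window event scheduling (illustrative sketch).

    events: list of (timestamp, payload) pairs.
    delta:  width of the time window; events within [t_min, t_min + delta]
            are treated as concurrent and handled as one batch.
    handler: callable processing one batch; on a GPU this would be a
             single kernel launch over the batch.
    Returns the number of batches (i.e., hypothetical kernel launches).
    """
    pending = sorted(events)          # pending events ordered by timestamp
    batches = 0
    i = 0
    while i < len(pending):
        t_min = pending[i][0]         # earliest unprocessed timestamp
        batch = []
        # Collect every event inside the approximation window.
        while i < len(pending) and pending[i][0] <= t_min + delta:
            batch.append(pending[i])
            i += 1
        handler(batch)                # process the whole window in parallel
        batches += 1
    return batches

# Example: six events; a window of delta = 1.0 collapses them into
# three batches instead of six sequential event handlings.
events = [(0.1, "a"), (0.5, "b"), (0.9, "c"), (2.0, "d"), (2.4, "e"), (5.0, "f")]
processed = []
n = run_approximate(events, 1.0, lambda batch: processed.extend(p for _, p in batch))
# n == 3: windows [0.1, 1.1], [2.0, 3.0], [5.0, 6.0]
```

The window width `delta` controls the accuracy/parallelism trade-off: a smaller window approaches exact next-event ordering (less parallel work per launch), while a larger window exposes more parallelism at the cost of reordering error within each window.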

Bibliographic Details

  • Source
    Simulation | 2010, Issue 10 | pp. 613-628 | 16 pages
  • Author Affiliation

    Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL 32611, USA;


  • Indexed In: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    discrete event simulation; parallel event scheduling; GPU; CUDA; simulation libraries;

  • Date Added: 2022-08-18 02:50:37
