Published in: Proceedings of the International Conference on Computing in High Energy and Nuclear Physics (English edition)

A Dataflow Meta-Computing Framework for Event Processing in the H1 Experiment



Abstract

Linux-based networked PC clusters are replacing both the VME non-uniform direct-memory-access systems and the SMP shared-memory systems previously used for online event filtering and reconstruction. To make optimal use of the distributed resources of PC clusters, an open software framework based on a dataflow paradigm for event processing is presently being developed. This framework allows the data of physics events, together with associated calibration data, to be distributed from multiple input sources to multiple computers for processing, and the processed events to be subsequently collected at multiple outputs. The basis of the system is the event repository: essentially a first-in first-out event store that may be read and written in a manner similar to sequential file access. Events are stored in, and transferred between, repositories as suitably large sequences to enable high throughput. Multiple readers can read simultaneously from a single repository to receive event sequences, and multiple writers can insert event sequences into a repository; repositories are thus used for both event distribution and collection. To support synchronisation of the event flow, the repository implements barriers. A barrier must be written by all the writers of a repository before any reader can read it, and a reader must read the barrier before it may receive data from behind it. Only after all readers have read the barrier is it removed from the repository. A barrier may also carry attached data; in this way calibration data can be distributed to all processing units.

The repositories are implemented as multi-threaded CORBA objects in C++, and CORBA is used for all data transfers. Job setup scripts are written in Python, and interactive status and histogram display is provided by a Java program. Jobs run under the PBS batch system, providing shared use of resources for online triggering, offline mass reprocessing, and user analysis jobs.
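The repository and barrier semantics above can be sketched as follows. This is a minimal single-process illustration under stated assumptions: the class names `Repository` and `Barrier`, the tuple-style return values, and the explicit writer/reader identifiers are all hypothetical simplifications; the actual system is implemented as multi-threaded CORBA objects in C++.

```python
from collections import deque

class Barrier:
    """Synchronisation marker in the event flow; may carry attached data,
    e.g. calibration constants to be distributed to all processing units."""
    def __init__(self, data=None):
        self.data = data
        self.writers_done = set()   # writers that have written this barrier
        self.readers_done = set()   # readers that have read this barrier

class Repository:
    """First-in first-out event store: event sequences are distributed to
    readers, while barriers synchronise all writers and all readers."""
    def __init__(self, writers, readers):
        self.writers = set(writers)
        self.readers = set(readers)
        self.queue = deque()        # ordered ("events", seq) / ("barrier", b) items

    def write_events(self, seq):
        # Event sequences are kept suitably large to enable high throughput.
        self.queue.append(("events", seq))

    def write_barrier(self, writer, barrier):
        # The barrier enters the queue once; it becomes readable only after
        # every writer of the repository has written it.
        if not barrier.writers_done:
            self.queue.append(("barrier", barrier))
        barrier.writers_done.add(writer)

    def read(self, reader):
        """Return the next item available to `reader`, or None if it must wait."""
        for i, (kind, item) in enumerate(self.queue):
            if kind == "barrier":
                if reader in item.readers_done:
                    continue                # already passed; may read behind it
                if item.writers_done != self.writers:
                    return None             # barrier incomplete: reader waits
                item.readers_done.add(reader)
                if item.readers_done == self.readers:
                    del self.queue[i]       # last reader removes the barrier
                return ("barrier", item.data)
            del self.queue[i]               # each event sequence goes to one reader
            return ("events", item)
        return None
```

Note how the sketch reproduces the three rules from the abstract: a reader blocks at a barrier until all writers have written it, a reader must consume the barrier (receiving its attached data) before anything behind it, and the barrier is removed only once every reader has consumed it.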
