Linux-based networked PC clusters are replacing both the VME non-uniform direct memory access systems and the SMP shared-memory systems previously used for online event filtering and reconstruction. To allow optimal use of the distributed resources of PC clusters, an open software framework is presently being developed, based on a dataflow paradigm for event processing. This framework allows the data of physics events and associated calibration data to be distributed from multiple input sources to multiple computers for processing, and the processed events to be subsequently collected at multiple outputs.

The basis of the system is the event repository, essentially a first-in first-out event store which may be read and written in a manner similar to sequential file access. Events are stored in and transferred between repositories as suitably large sequences to enable high throughput. Multiple readers can read simultaneously from a single repository to receive event sequences, and multiple writers can insert event sequences into a repository; repositories are thus used for both event distribution and collection. To support synchronisation of the event flow, the repository implements barriers. A barrier must be written by all the writers of a repository before any reader can read it, and a reader must read a barrier before it may receive data from behind it. Only after all readers have read the barrier is it removed from the repository. A barrier may also have attached data; in this way calibration data can be distributed to all processing units.

The repositories are implemented as multi-threaded CORBA objects in C++, and CORBA is used for all data transfers. Job setup scripts are written in Python, and interactive status and histogram display is provided by a Java program. Jobs run under the PBS batch system, providing shared use of resources for online triggering, offline mass reprocessing, and user analysis jobs.
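The repository and barrier semantics described above can be illustrated with a minimal single-process sketch. This is not the framework's actual implementation (which uses multi-threaded CORBA objects in C++); the class and method names here (`EventRepository`, `Barrier`, `write_barrier`) are illustrative assumptions. The sketch captures the stated rules: event sequences are distributed FIFO, a barrier becomes visible only once all writers have written it, each reader must read it before receiving data from behind it, and it is removed only after all readers have read it.

```python
import threading
from collections import deque


class Barrier:
    """A synchronisation marker; may carry attached data (e.g. calibration)."""

    def __init__(self, data=None):
        self.data = data


class EventRepository:
    """Single-process sketch of a FIFO event repository with barriers."""

    def __init__(self, n_writers, n_readers):
        self.n_writers = n_writers
        self.n_readers = n_readers
        self.lock = threading.Condition()
        self.queue = deque()        # event sequences and barriers, FIFO order
        self.barrier_writes = 0     # writers that have written the pending barrier
        self.barrier_seen = set()   # reader ids that have read the head barrier

    def write(self, event_sequence):
        """A writer inserts an event sequence."""
        with self.lock:
            self.queue.append(("events", event_sequence))
            self.lock.notify_all()

    def write_barrier(self, barrier):
        """A barrier is readable only after ALL writers have written it."""
        with self.lock:
            self.barrier_writes += 1
            if self.barrier_writes == self.n_writers:
                self.queue.append(("barrier", barrier))
                self.barrier_writes = 0
                self.lock.notify_all()

    def read(self, reader_id):
        """Return ("events", seq) or ("barrier", b); blocks until available.

        Each event sequence goes to exactly one reader (distribution),
        while every reader must read each barrier exactly once.
        """
        with self.lock:
            while True:
                while not self.queue:
                    self.lock.wait()
                kind, item = self.queue[0]
                if kind == "events":
                    return self.queue.popleft()
                # Head of the queue is a barrier.
                if reader_id not in self.barrier_seen:
                    self.barrier_seen.add(reader_id)
                    if len(self.barrier_seen) == self.n_readers:
                        # All readers have read it: remove the barrier,
                        # unblocking access to the data behind it.
                        self.queue.popleft()
                        self.barrier_seen.clear()
                        self.lock.notify_all()
                    return kind, item
                # This reader already read the barrier: data behind it
                # stays inaccessible until the remaining readers catch up.
                self.lock.wait()
```

For example, with one writer and two readers, a barrier carrying calibration data written between two event sequences is delivered to both readers, and only then does the second event sequence become readable.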