The CMS (Compact Muon Solenoid) experiment is one of the main experiments that will collect data at the Large Hadron Collider (LHC) at CERN. The expected event rate is 100 Hz, resulting in a few petabytes of data per year to be stored and processed. CMS chose a distributed architecture based on Grid middleware to distribute the data and enable the thousands of physicists in collaborating institutions worldwide to access it. The Computing and Data Model is based on a combination of Grid tools provided by the WLCG (Worldwide LHC Computing Grid) and OSG (Open Science Grid, see http://opensciencegrid.org) projects, plus a number of CMS-specific services operating on top of them that facilitate high-level data and workload management operations. A description of CMS data and workload management, together with actual operational experience, is presented in this paper.