CMS computing and data handling

Abstract

The CMS (Compact Muon Solenoid) experiment is one of the main experiments that will collect data at the Large Hadron Collider (LHC) at CERN. The expected event rate is 100 Hz, resulting in a few PBytes of data per year to be stored and processed. CMS chose a distributed architecture based on Grid middleware to distribute the data and to enable the thousands of collaborating physicists, spread worldwide, to access them. The computing and data model is based on a combination of Grid tools provided by the WLCG (Worldwide LHC Computing Grid) and OSG (Open Science Grid; see http://opensciencegrid.org) projects, plus a number of CMS-specific services operating on top of them that facilitate high-level data and workload management operations. A description of CMS data and workload management, together with the operational experience gained so far, is presented in this paper.
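The abstract's "few PBytes per year" follows directly from the 100 Hz event rate once an event size and effective running time are assumed. A minimal sketch of that back-of-envelope check, where the ~1.5 MB raw event size and ~10^7 live seconds per year are illustrative assumptions not stated in the abstract:

```python
# Back-of-envelope check of the quoted yearly data volume.
# The event size and live time below are assumptions, not figures
# taken from the paper.
EVENT_RATE_HZ = 100          # events written to storage per second (from abstract)
EVENT_SIZE_MB = 1.5          # assumed raw event size, in MB
LIVE_SECONDS_PER_YEAR = 1e7  # assumed effective LHC running time per year

def yearly_volume_pb(rate_hz=EVENT_RATE_HZ,
                     size_mb=EVENT_SIZE_MB,
                     seconds=LIVE_SECONDS_PER_YEAR):
    """Return the estimated yearly data volume in petabytes."""
    return rate_hz * size_mb * seconds / 1e9  # MB -> PB

print(f"~{yearly_volume_pb():.1f} PB/year")  # on the order of a few PB
```

With these assumptions the estimate lands at the PByte scale, consistent with the abstract's claim; doubling the event size or live time keeps it in the same "few PBytes" regime.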
