21–25 May 2012
New York City, NY, USA
US/Eastern timezone

CRAB3: Establishing a new generation of services for distributed analysis at CMS

22 May 2012, 13:30
4h 45m
Rosenthal Pavilion (10th floor) (Kimmel Center)

Poster | Distributed Processing and Analysis on Grids and Clouds (Track 3) | Poster Session

Speaker

Daniele Spiga (CERN)

Description

In CMS Computing, the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results, as well as a transition to a sustainable development and operations model. To achieve these goals, CMS decided to incorporate analysis processing into the same framework as the data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core, which allows long-term maintainability as well as standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services, servicing the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves workflow automation and simplifies maintainability. We will highlight, in particular, the impact of the new design on daily operations.
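The sketch below illustrates the client/agent split described in the abstract: a user client injects a task through a central RESTful service, and a distributed agent later pulls pending work from the global queue. It is a minimal illustration only; the endpoint URL, JSON fields, and function names are assumptions for the example and do not reflect the actual CRAB3 API.

```python
import json
import urllib.request

# Hypothetical endpoint for the central CRAB3-style REST service (illustrative only).
CENTRAL_SERVICE = "https://cmsweb.example.cern.ch/crabserver/task"


def submit_task(dataset, pset_name):
    """Client side: inject an analysis task via the central RESTful interface."""
    payload = json.dumps({"dataset": dataset, "pset": pset_name}).encode()
    request = urllib.request.Request(
        CENTRAL_SERVICE,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # Assume the service returns an identifier for the queued task.
        return json.load(response)["task_id"]


def fetch_pending_tasks():
    """Agent side: consume work that the central service has queued."""
    with urllib.request.urlopen(CENTRAL_SERVICE + "?status=new") as response:
        return json.load(response)["tasks"]
```

In this picture the client never talks to a grid site directly: it only registers work centrally, while any number of geographically distributed agents poll the same queue and service the user tasks, which is the deployment change the abstract contrasts with CRAB2.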

Primary author

Co-authors

Dr Eric Wayne Vaandering (Fermi National Accelerator Lab. (US))
Hassen Riahi (Universita e INFN (IT))
Marco Mascheroni (Nat. Inst. of Chem.Phys. & Biophys. (EE))
Mattia Cinquilli (Univ. of California San Diego (US))
