13–17 Feb 2006
Tata Institute of Fundamental Research

Prototype of a Parallel Analysis System for CMS using PROOF

14 Feb 2006, 16:20
20m
D406 (Tata Institute of Fundamental Research)

Homi Bhabha Road, Mumbai 400005, India
Oral presentation, track: Distributed Data Analysis

Speaker

Dr Isidro Gonzalez Caballero (Instituto de Fisica de Cantabria (CSIC-UC))

Description

A typical HEP analysis in the LHC experiments involves processing data corresponding to several million events (terabytes of information) in the final analysis phases. Currently, processing one million events on a single modern workstation takes several hours, which slows the analysis cycle. The computing model desirable for a physicist is closer to a High Performance Computing one, in which a large number of CPUs is required for short periods (of the order of several minutes). Where CPU farms are available, parallel computing is an obvious solution to this problem. Here we present tests along these lines using a tool for parallel physics analysis in CMS based on the PROOF libraries. Special attention has been paid in developing this tool to modularity and ease of use, making it possible to share algorithms and simplifying software extensibility while hiding the details of the parallelisation. The first tests, performed on a medium-size (90-node) cluster of dual-processor machines with a typical CMS analysis dataset (ROOT files for one million top quark pairs producing fully leptonic final-state events, distributed uniformly among the computers), show quite promising scalability results.
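The data-parallel model that PROOF implements (a master splits the event list into packets, workers process packets independently, and partial results are merged) can be sketched as follows. This is a minimal illustration in Python with `multiprocessing`, not the authors' PROOF-based tool (which uses ROOT's C++ API); the selection cut, histogram, and event counts are hypothetical stand-ins.

```python
# Illustrative sketch of PROOF's process-and-merge model. NOT the CMS/PROOF
# code itself; names and numbers here are assumptions for demonstration.
from multiprocessing import Pool

def process_packet(events):
    """Worker-side step: apply a selection and fill a small histogram,
    analogous to a PROOF worker processing its packet of events."""
    selected = 0
    hist = [0] * 10            # 10-bin histogram, illustrative
    for e in events:
        if e % 2 == 0:         # stand-in for a physics selection cut
            selected += 1
            hist[e % 10] += 1  # stand-in for filling a kinematic variable
    return selected, hist

def merge(results):
    """Master-side step: add up the partial counts and histograms,
    analogous to PROOF merging worker outputs."""
    total = 0
    hist = [0] * 10
    for sel, h in results:
        total += sel
        hist = [a + b for a, b in zip(hist, h)]
    return total, hist

def run_parallel(events, n_workers=4):
    # Split the dataset into one packet per worker, as PROOF's packetizer
    # distributes event ranges among the cluster nodes.
    packets = [events[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(process_packet, packets)
    return merge(results)

if __name__ == "__main__":
    events = list(range(100000))  # stand-in for a million-event dataset
    total, hist = run_parallel(events)
    # The merged parallel result must equal the serial result.
    assert (total, hist) == merge([process_packet(events)])
```

The key property, as in PROOF, is that the per-packet processing is independent, so the merged parallel result is identical to a serial pass over the whole dataset while the wall-clock time scales down with the number of workers.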

Primary author

Dr Isidro Gonzalez Caballero (Instituto de Fisica de Cantabria (CSIC-UC))

Co-authors

Mr Daniel Cano (Instituto de Fisica de Cantabria (CSIC-UC)) Dr Javier Cuevas (Departamento de Física, Universidad de Oviedo) Mr Rafael Marco (Instituto de Fisica de Cantabria (CSIC-UC))
