1–5 Sept 2014
Faculty of Civil Engineering
Europe/Prague timezone

Multilevel Workflow System in the ATLAS Experiment

2 Sept 2014, 08:00
1h
Faculty of Civil Engineering

Faculty of Civil Engineering, Czech Technical University in Prague Thakurova 7/2077 Prague 166 29 Czech Republic
Board: 115
Poster | Computing Technology for Physics Research | Poster session

Speaker

Dr Alexandre Vaniachine (ANL)

Description

The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system composed of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprising many jobs) has become the unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. To manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize the electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager, ProdSys2, generates the actual workflow tasks, and their jobs are executed across more than a hundred distributed computing sites by PanDA, the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation.
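To make the bi-level structure concrete, the sketch below is a minimal toy model of the idea described above: a templated multi-step workflow is expanded into tasks (the DEfT role), and each task is split into jobs over chunks of its input dataset (the JEDI/PanDA role). This is not the ProdSys2 code; all class names, the step list, and the files-per-job knob are hypothetical simplifications.

```python
# Illustrative sketch only: a toy bi-level workflow model in the spirit of
# DEfT (task-level templates) and JEDI/PanDA (job-level execution).
# Names and parameters here are hypothetical, not the ATLAS implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class Job:
    """A single processing job over a slice of a dataset (hypothetical)."""
    task_name: str
    input_files: List[str]


@dataclass
class Task:
    """One workflow step applied to a whole dataset."""
    name: str
    input_dataset: List[str]
    files_per_job: int = 10  # assumed knob; in reality JEDI tailors job size to site capabilities

    def split_into_jobs(self) -> List[Job]:
        """Job-level splitting (the JEDI role): one job per chunk of input files."""
        chunks = [self.input_dataset[i:i + self.files_per_job]
                  for i in range(0, len(self.input_dataset), self.files_per_job)]
        return [Job(self.name, chunk) for chunk in chunks]


# Task-level template (the DEfT role): the Monte Carlo chain as an ordered
# list of step names, instantiated into concrete tasks per physics sample.
MC_TEMPLATE = ["generate", "hadronize+pileup", "simulate", "digitize",
               "trigger", "reconstruct", "ntuple"]


def instantiate_workflow(sample: str, dataset: List[str]) -> List[Task]:
    """Expand the template into one task per step for a given sample."""
    return [Task(name=f"{sample}.{step}", input_dataset=dataset) for step in MC_TEMPLATE]


if __name__ == "__main__":
    dataset = [f"file_{i:04d}.root" for i in range(25)]
    for task in instantiate_workflow("ttbar_sample", dataset):
        jobs = task.split_into_jobs()
        print(f"{task.name}: {len(jobs)} jobs")
```

Running the sketch prints one line per workflow step, each expanded into three jobs for the 25-file toy dataset, which mirrors the separation of concerns in the abstract: workflow templates at the outer level, dynamic job definition at the inner level.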

Summary

We report on scaling up the ATLAS production system to accommodate a growing number of requirements for the next LHC run.

Primary author

Dr Alexandre Vaniachine (ANL)

Co-authors

Dr Alexei Klimentov (Brookhaven National Laboratory (US))
Dmitry Golubkov (Institute for High Energy Physics (RU))
Jose Enrique Garcia Navarro (Unknown)
Kaushik De (University of Texas at Arlington (US))
Misha Borodin (Moscow State Engineering Physics Institute (RU))
Tadashi Maeno (Brookhaven National Laboratory (US))
