Scaling up ATLAS production system for the LHC Run 2 and beyond: project ProdSys2



1919-1 Tancha, Onna-son, Kunigami-gun Okinawa, Japan 904-0495
Poster presentation
Track 4: Middleware, software development and tools, experiment frameworks, tools for distributed computing


Dr Alexei Klimentov (Brookhaven National Laboratory (US))


The Big Data processing needs of the ATLAS experiment grow continuously as more data are collected and more use cases emerge. For Big Data processing, the ATLAS experiment has adopted the data transformation approach, in which software applications transform input data into outputs. In the ATLAS production system, each data transformation is represented by a task: a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the task submission rate has grown exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Recurring patterns in ATLAS data transformation workflows composed of many tasks provided the basis for a scalable production system framework with template definitions of many-task workflows. These workflows are implemented in the Database Engine for Tasks (DEfT), which generates individual tasks for processing by JEDI. We report on the ATLAS experience with many-task workflow patterns in preparation for LHC Run 2.
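The task/job model and template-driven workflow expansion described above can be sketched as follows. This is an illustrative Python sketch only, not the actual ProdSys2, DEfT, or JEDI API: all class, function, and dataset names here are hypothetical.

```python
from dataclasses import dataclass, field
from itertools import count

_task_ids = count(1)  # simple task-ID generator for this sketch

@dataclass
class Task:
    """A task: one data transformation over an input dataset (hypothetical model)."""
    name: str
    transformation: str      # software application applied to the input
    input_dataset: str
    output_dataset: str
    task_id: int = field(default_factory=lambda: next(_task_ids))

    def split_into_jobs(self, files_per_job: int, n_input_files: int):
        """Partition the task's input files into jobs, as JEDI does within PanDA."""
        return [
            (self.task_id, job_idx, slice(start, start + files_per_job))
            for job_idx, start in enumerate(range(0, n_input_files, files_per_job))
        ]

def expand_template(steps, initial_dataset):
    """Expand a many-task workflow template into a chain of concrete tasks,
    each consuming the output dataset of the previous step (DEfT-like role)."""
    tasks, current = [], initial_dataset
    for step in steps:
        out = f"{current}.{step}"
        tasks.append(Task(name=step, transformation=step,
                          input_dataset=current, output_dataset=out))
        current = out
    return tasks

# Example: a simplified three-step simulation chain defined as a template.
chain = expand_template(["evgen", "simul", "recon"], "mc.dataset")
jobs = chain[0].split_into_jobs(files_per_job=10, n_input_files=25)
```

The key design point mirrored here is the separation of concerns: the template layer (DEfT's role) defines tasks and their dataset dependencies, while job-level splitting and execution (JEDI's role) happens per task.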

Primary authors

Dr Alexei Klimentov (Brookhaven National Laboratory (US))
Jose Enrique Garcia Navarro (Instituto de Fisica Corpuscular (ES))
Kaushik De (University of Texas at Arlington (US))
Misha Borodin (National Research Nuclear University MEPhI (RU))
Tadashi Maeno (Brookhaven National Laboratory (US))
