3-10 August 2016
Chicago IL USA
US/Central timezone

Developments in Architectures and Services for using High Performance Computing in Energy Frontier Experiments (25' + 5')

4 Aug 2016, 15:50


Oral Presentation Computing and Data Handling Computing


Lisa Gerhardt (LBNL) Taylor Childers (Argonne National Laboratory (US))


The integration of HPC resources into the standard computing toolkit of HEP experiments is becoming important as traditional resources are outpaced by the needs of the experiments. We will describe solutions that address some of the difficulties of running data-intensive pipelines on HPC systems. Users of NERSC HPC systems benefit from a newly developed package called "Shifter", which provides Docker-like container images, and from the deployment of the new "Burst Buffer" NVRAM file system, designed for extreme I/O performance: terabyte-per-second bandwidth and tens of millions of I/O operations per second. These tools have enabled particle physicists from multiple experiments to routinely run their entire multi-TB CVMFS software stacks across tens of thousands of compute cores. In addition, an Edge Service has been developed to provide a uniform interface through which HEP job management systems can access supercomputer sites. It is based on the Python Django framework and is composed of two processes, one running inside the supercomputing environment and one running outside it. It has been used to run over 100 million core-hours of LHC experiment jobs on the Mira supercomputer at the Argonne Leadership Computing Facility and on the Edison supercomputer at NERSC.
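To make the Shifter and Burst Buffer combination concrete, the sketch below shows a minimal Slurm batch script of the kind used at NERSC: the `--image` directive causes Shifter to provision the job with a Docker-derived container image, and the `#DW` directive requests a Burst Buffer allocation. The image name, allocation size, and pipeline command are placeholders, not details from the talk.

```shell
#!/bin/bash
# Hypothetical Slurm batch script combining Shifter containers with a
# DataWarp Burst Buffer allocation; names and sizes below are illustrative.
#SBATCH --nodes=2
#SBATCH --time=00:30:00
#SBATCH --image=docker:myexperiment/analysis:latest   # Shifter container image
#DW jobdw capacity=1TiB access_mode=striped type=scratch  # Burst Buffer request

# Slurm exposes the Burst Buffer mount point via $DW_JOB_STRIPED; the
# application runs inside the Shifter container on every allocated node.
srun shifter /bin/bash -c "run_pipeline --output $DW_JOB_STRIPED/results"
```

Because the software stack ships inside the container image, the same script scales to tens of thousands of cores without each node re-resolving the CVMFS stack from shared storage.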

Primary authors

Lisa Gerhardt (LBNL) Taylor Childers (Argonne National Laboratory (US))


Co-authors
Deborah Bard Doug Benjamin (Duke University (US)) Jeff Porter (Lawrence Berkeley National Lab. (US)) Mr. Prabhat (Lawrence Berkeley National Laboratory) Thomas Uram (Argonne Leadership Computing Facility) Wahid Bhimji (Lawrence Berkeley National Lab. (US))
