Argonne & Fermilab host: Beyond Leading Order Calculations on HPCs

The West Wing, Wilson Hall 10th floor NW corner (WH10NW), LPC, Fermilab
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)) , Taylor Childers (Argonne National Laboratory (US))
This event is sponsored by the DOE HEP Center for Computing Excellence. The DOE will spend $0.5B over the next three years to replace current High Performance Computing (HPC) resources with new machines. Current machines provide about 15B core-hours per year to scientific research, with LHC experiments using around 100M core-hours per year. The new machines will provide more than an order of magnitude increase in the computing resources available at HPC facilities such as the National Energy Research Scientific Computing Center (NERSC), the Argonne Leadership Computing Facility, and the Oak Ridge Leadership Computing Facility. HEP must focus on using these machines effectively in order to achieve the science goals set out by the P5 report. With this in mind, Argonne and the LHC Physics Center (LPC) at Fermilab are hosting this workshop to bring together authors of parton event generators and NLO/NNLO calculations with software experts from HPC institutes. The goal is to work toward HPC-friendly codes that allow the most compute-intensive calculations to scale to millions of parallel processes.

Taylor Childers and Elizabeth Sexton-Kennedy, Co-Chairs of the Organizing Committee

Local Organizing Committee:
  • Taylor Childers (Argonne)
  • Bo Jayatilaka (Fermilab)
  • Elizabeth Sexton-Kennedy (Fermilab)
  • David Sheffield (Rutgers)
  • Gabriele Benelli and Nadja Strobbe (LPC Event Committee Chairs)
  • Boaz Klima and Meenakshi Narain (LPC Coordinators)
Registered Participants
  • Amitoj Singh
  • Anthony Tiradani
  • Bo Jayatilaka
  • Bronson Messer
  • Christopher Neu
  • Ciaran Williams
  • David Abdurachmanov
  • David Sheffield
  • Elizabeth Sexton-Kennedy
  • Evan Michael Wolfe
  • Frank Petriello
  • Gabriel Perdue
  • Gabriele Benelli
  • Joey Huston
  • Josh Bendavid
  • Kalyan Kumaran
  • Lindsey Gray
  • Manfred Paulini
  • Marek Zielinski
  • Oliver Gutsche
  • Olivier Mattelaer
  • Richard Gerber
  • Salvatore Rappoccio
  • Stefan Hoeche
  • Stefan Piperov
  • Steve Mrenna
  • Steven Gottlieb
  • Taylor Childers
  • Thomas Le Compte
  • Tom Uram
  • Walter Giele

The Beyond Leading Order Calculations on HPCs workshop brought experts from the Argonne and Oak Ridge Leadership Computing Facilities and NERSC together with theoretical physicists. These theorists write the (N)NLO parton interaction Monte Carlo generators and predictions that the LHC experiments depend on for comparison to measurements. (N)NLO calculations are becoming computationally intensive, requiring on the order of weeks to perform phase space integrals before events can be generated. As these calculations move to NNLO, the computational complexity increases further.
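    Why such integrals map well onto millions of parallel processes is easiest to see in a sketch. The toy Python example below (the integrand f and all names are hypothetical, not taken from any of the generators discussed) shows the embarrassingly parallel structure: each worker evaluates an independent batch of samples, and the per-worker means are combined with a single reduction.

```python
import random

# Minimal sketch (not any generator's actual code): a plain Monte Carlo
# estimate of a phase-space-style integral. Real (N)NLO integrands involve
# matrix elements and subtraction terms; here f() is a stand-in toy function.
def f(x, y):
    return x * x + y * y  # hypothetical integrand over the unit square

def mc_integrate(n_samples, seed):
    rng = random.Random(seed)          # independent random stream per worker
    total = 0.0
    for _ in range(n_samples):
        total += f(rng.random(), rng.random())
    return total / n_samples           # per-worker estimate of the integral

# Each parallel process runs its own batch with a distinct seed; combining
# the per-process means is a single reduction, which is why these integrals
# can in principle scale to millions of processes.
if __name__ == "__main__":
    estimates = [mc_integrate(100_000, seed) for seed in range(4)]
    print(sum(estimates) / len(estimates))  # ~2/3 for this toy integrand
```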

    Radja Boughezal, a theorist at Argonne, showed how she and her collaborators are using the Mira supercomputer at Argonne to perform NNLO calculations. They ran a V+jet calculation on the entire supercomputer for 3 hr 30 min and completed the results for one publication (Physics Letters B (2016), pp. 6-13), illustrating how supercomputers can enable particle physics results that are extremely difficult, if not impossible, to obtain on traditional computing infrastructure.
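    For scale: assuming Mira's published configuration (49,152 Blue Gene/Q nodes with 16 cores each, i.e. 786,432 cores), a 3.5-hour full-machine run corresponds to roughly 786,432 × 3.5 ≈ 2.75M core-hours, a few percent of the ~100M core-hours the LHC experiments consume in an entire year, delivered in a single afternoon.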

    We heard presentations from the three DOE supercomputer facilities (Argonne, NERSC, and Oak Ridge) and from three Monte Carlo event generator teams (Sherpa, MadGraph5_aMC@NLO, and Pythia). These presentations were followed by three discussion sessions: how to build codes that target both GPUs and many-core CPUs, common scalable theory and Monte Carlo tools, and experience from the LHC experiments in scaling existing codes.

    The common tools discussion concluded that a scalable Monte Carlo integrator would be the most valuable tool for community development. The discussion of multi-architecture codes focused on how to target both GPUs (like Titan) and many-core CPUs (like Cori and Theta). The consensus favored an approach similar to that used in the HACC cosmology simulation code: identify the core computational kernels of the (N)NLO calculations, write multiple versions of each kernel optimized for a particular architecture, and keep the top-level framework otherwise the same. ATLAS and CMS described their experiences scaling generators. Taylor Childers described the experience within ATLAS of scaling Alpgen to the entire Mira supercomputer (1.5M processes) and running Sherpa at the scale of one third of Mira (128k processes). Josh Bendavid described CMS plans to perform similar scaling of the Sherpa and MadGraph5_aMC@NLO generators in the near future.
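    A minimal Python sketch of that HACC-style pattern follows, assuming NumPy for the CPU path and, optionally, CuPy for the GPU path; the kernel is the same toy integrand as above, not an actual (N)NLO kernel, and all names are hypothetical.

```python
import numpy as np

# Sketch of the pattern discussed above: the compute-heavy kernel exists in
# per-architecture versions, while the top-level driver that walks the
# sample space stays architecture-agnostic.

def kernel_cpu(points):
    # CPU version: vectorized with NumPy (stand-in for a many-core kernel).
    return np.sum(points ** 2, axis=1)

try:
    import cupy as cp  # GPU version is selected only if a CUDA stack exists

    def kernel_gpu(points):
        return cp.asnumpy(cp.sum(cp.asarray(points) ** 2, axis=1))

    kernel = kernel_gpu
except ImportError:
    kernel = kernel_cpu

def integrate(n_points, seed=0):
    # Top-level framework: identical no matter which kernel was selected.
    rng = np.random.default_rng(seed)
    points = rng.random((n_points, 2))
    return kernel(points).mean()

if __name__ == "__main__":
    print(integrate(1_000_000))  # ~2/3, matching the toy integrand above
```

    The design choice mirrors the discussion: only the kernel bodies are architecture-specific, so adding a new target (or swapping in a tuned implementation) does not disturb the surrounding framework.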

Minutes are attached to this event.