Argonne & Fermilab host: Beyond Leading Order Calculations on HPCs

US/Central
The West Wing, Wilson Hall 10th floor NW corner (WH10NW) (LPC, Fermilab)

https://goo.gl/maps/GhLdhYkSFeo
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)), Taylor Childers (Argonne National Laboratory (US))
Description
This event is sponsored by the DOE HEP Center for Computing Excellence. The DOE will spend $0.5B over the next three years to replace current High Performance Computing (HPC) resources with new machines. Current machines provide about 15B core-hours per year to scientific research, with LHC experiments using around 100M core-hours per year. The new machines will provide more than an order of magnitude increase in the computing resources available at HPC institutes such as the National Energy Research Scientific Computing (NERSC) center, the Argonne Leadership Computing Facility, and the Oak Ridge Leadership Computing Facility. HEP must focus on using these machines effectively in order to achieve the science goals set out by the P5 report.

With this in mind, Argonne and the LHC Physics Center (LPC) at Fermilab are hosting this workshop to bring together the authors of parton event generators and NLO/NNLO calculations with software experts from the HPC institutes. The goal is to work toward HPC-friendly codes that allow the most compute-intensive calculations to scale to millions of parallel processes.

Taylor Childers and Elizabeth Sexton-Kennedy, Co-Chairs of the Organizing Committee

Local Organizing Committee:
  • Taylor Childers (Argonne)
  • Bo Jayatilaka (Fermilab)
  • Elizabeth Sexton-Kennedy (Fermilab)
  • David Sheffield (Rutgers)
  • Gabriele Benelli and Nadja Strobbe (LPC Event Committee Chairs)
  • Boaz Klima and Meenakshi Narain (LPC Coordinators)
Registration
Registration Form
Participants
  • Amitoj Singh
  • Anthony Tiradani
  • Bo Jayatilaka
  • Bronson Messer
  • Christopher Neu
  • Ciaran Williams
  • David Abdurachmanov
  • David Sheffield
  • Elizabeth Sexton-Kennedy
  • Evan Michael Wolfe
  • Frank Petriello
  • Gabriel Perdue
  • Gabriele Benelli
  • Joey Huston
  • Josh Bendavid
  • Kalyan Kumaran
  • Lindsey Gray
  • Manfred Paulini
  • Marek Zielinski
  • Oliver Gutsche
  • olivier mattelaer
  • Richard Gerber
  • Salvatore Rappoccio
  • Stefan Hoeche
  • Stefan Piperov
  • Steve Mrenna
  • Steven Gottlieb
  • Taylor Childers
  • Thomas Le Compte
  • Tom Uram
  • Walter Giele

The Beyond Leading Order Calculations on HPCs workshop brought experts from the Argonne and Oak Ridge Leadership Computing Facilities and NERSC together with theoretical physicists. These theorists write the (N)NLO parton interaction Monte Carlo generators and predictions that the LHC experiments depend on for comparison with measurements. (N)NLO calculations are becoming computationally intensive, requiring on the order of weeks to perform the phase space integrals before events can be generated. As these calculations move to NNLO, the computational complexity increases further.
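
As a rough illustration of why these phase space integrals map well onto HPC resources, the sketch below (hypothetical code, not taken from any of the generators discussed at the workshop) distributes independent samples of a toy one-dimensional integrand across MPI ranks and combines the partial sums with a single reduction; the function name weight and all numerical choices are illustrative assumptions.

    // Hypothetical sketch: embarrassingly parallel Monte Carlo phase space integration with MPI.
    #include <mpi.h>
    #include <cmath>
    #include <cstdio>
    #include <random>

    // Toy integrand standing in for an (N)NLO matrix-element weight
    // evaluated at a one-dimensional "phase-space point" x.
    static double weight(double x) { return std::exp(-x * x); }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n_per_rank = 1000000;           // independent samples on this rank
        std::mt19937_64 rng(1234 + rank);          // decorrelated random stream per rank
        std::uniform_real_distribution<double> u(0.0, 1.0);

        double local_sum = 0.0;
        for (long i = 0; i < n_per_rank; ++i) local_sum += weight(u(rng));

        // One reduction combines all partial sums into the integral estimate.
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("MC integral estimate: %f (%d ranks x %ld points)\n",
                        global_sum / (double)(n_per_rank * size), size, n_per_rank);

        MPI_Finalize();
        return 0;
    }

Because each rank's samples are independent, adding ranks adds statistics with essentially no communication, which is what allows such integrations to scale to very large process counts.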

    Radja Boughezal, a theorist at Argonne, showed how she and her collaborators are using the Mira supercomputer at Argonne to perform NNLO calculations. They performed V+jet calculations on the entire supercomputer for 3 hours 30 minutes and completed the results for one publication (Physics Letters B (2016), pp. 6-13), illustrating how supercomputers can enable excellence in particle physics by providing capabilities that are extremely difficult, if not impossible, to provide on traditional computing infrastructure.

    We had presentations from the three DOE supercomputer facilities (Argonne, NERSC, and Oak Ridge) and from three Monte Carlo event generators (Sherpa, MadGraph5_aMC@NLO, and Pythia). These presentations were followed by three discussion sessions: how to build codes that target both GPUs and many-core CPUs, common scalable theory and Monte Carlo tools, and experience from the LHC experiments in scaling existing codes.

    The common tools discussion concluded that a scalable Monte Carlo integrator would be the best tool for community development. The discussion of multi-architecture codes focused on how to target both GPUs (like Titan) and many-core CPUs (like Cori and Theta). The consensus was a method similar to that used in the HACC computational cosmology simulation software, in which the core computational kernels of (N)NLO calculations are identified and multiple versions are written, each optimized for a particular architecture, while the top-level framework remains the same. ATLAS and CMS described their experiences scaling generators. Taylor Childers described the ATLAS experience of scaling Alpgen to the entire Mira supercomputer (1.5M processes) and running Sherpa at the scale of one third of Mira (128k processes). Josh Bendavid described the CMS plans to perform similar scaling of the Sherpa and MadGraph5_aMC@NLO generators in the near future.
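
A minimal sketch of that kernel-splitting pattern is shown below, under the assumption of a toy integrand and hypothetical names (matrix_element, evaluate_weights); it is not code from any of the generators discussed. The top-level Monte Carlo driver stays the same, while the expensive per-point evaluation has a CUDA version when compiled with nvcc and a plain many-core CPU loop otherwise (compile the CPU path with -fopenmp to thread it).

    // Hypothetical sketch of the HACC-style split: one driver, per-architecture kernels.
    #include <math.h>
    #include <cstdio>
    #include <random>
    #include <vector>

    #ifdef __CUDACC__
    #define KERNEL_FN __host__ __device__
    #else
    #define KERNEL_FN
    #endif

    // Toy stand-in for the expensive per-phase-space-point evaluation;
    // in a real (N)NLO code this matrix-element weight dominates the runtime.
    KERNEL_FN double matrix_element(double x) { return exp(-x * x); }

    #ifdef __CUDACC__
    // GPU implementation of the kernel (Titan-style accelerators).
    __global__ void eval_kernel(const double* x, double* w, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) w[i] = matrix_element(x[i]);
    }

    void evaluate_weights(const std::vector<double>& x, std::vector<double>& w) {
        int n = static_cast<int>(x.size());
        double *dx = nullptr, *dw = nullptr;
        cudaMalloc(&dx, n * sizeof(double));
        cudaMalloc(&dw, n * sizeof(double));
        cudaMemcpy(dx, x.data(), n * sizeof(double), cudaMemcpyHostToDevice);
        eval_kernel<<<(n + 255) / 256, 256>>>(dx, dw, n);
        cudaMemcpy(w.data(), dw, n * sizeof(double), cudaMemcpyDeviceToHost);
        cudaFree(dx);
        cudaFree(dw);
    }
    #else
    // Many-core CPU implementation (Cori/Theta-style): same interface, threaded loop.
    void evaluate_weights(const std::vector<double>& x, std::vector<double>& w) {
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(x.size()); ++i)
            w[i] = matrix_element(x[i]);
    }
    #endif

    // Architecture-agnostic driver: sample points, call the kernel, accumulate.
    int main() {
        const int n = 1 << 20;
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<double> x(n), w(n);
        for (double& xi : x) xi = u(rng);

        evaluate_weights(x, w);

        double sum = 0.0;
        for (double wi : w) sum += wi;
        std::printf("MC estimate: %f\n", sum / n);
        return 0;
    }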

There are minutes attached to this event.
  • Thursday, 22 September
    • 08:30–10:00
      First Morning Session
    • 10:00–10:30
      Coffee Break and Photo in front of Wilson Hall 30m

      Meet downstairs in the Wilson Hall atrium just after 10am for the workshop photo.

    • 10:30–12:00
      Second Morning Session
      • 10:30
        Sherpa NLO Generator: Software Operation and Organization 30m
        Speaker: Stefan Hoeche (SLAC National Accelerator Laboratory (US))
      • 11:00
        Effectively targeting the Argonne Leadership Computing Facility 30m
        Speaker: Dr Kalyan Kumaran (Argonne National Laboratory)
      • 11:30
        Pythia LO Generator: Software Operation and Organization 30m
        Speaker: Steve Mrenna (Fermi National Accelerator Lab. (US))
    • 12:00–13:00
      Working Lunch 1h
    • 13:00–14:30
      First Afternoon Session
      • 13:00
        Effectively targeting the Oak Ridge Leadership Computing Facility 30m
        Speaker: Dr Bronson Messer (Oak Ridge National Laboratory)
      • 13:30
        MadGraph5_aMC@NLO: Software Operation and Organization 30m
        Speaker: Dr Olivier Pierre C Mattelaer (IPPP Durham)
      • 14:00
        Effectively targeting the NERSC Facility 30m
        Speaker: Dr Richard Gerber (NERSC/Berkeley Lab)
    • 14:30–15:00
      Coffee Break 30m
    • 15:00–17:30
      Second Afternoon Session: Discussion
      • 15:00
        How do we develop system-agnostic codes? 1h

        Writing code that can run on both a desktop and a supercomputer is obviously hard, or everyone would be doing it. What tips and tricks can we use to avoid coding ourselves into a corner in the future?

  • Friday, 23 September
    • 08:30–10:00
      First Morning Session
      • 08:30
        Common Theory and Monte Carlo Tools for Scalable Code 1h 30m

        Discuss possible common tools that both theorists and generator authors can use in their software. These tools should be scalable and portable for use on desktops, servers, clusters, or supercomputers.

    • 10:00–10:30
      Coffee Break 30m
    • 10:30–12:00
      Second Morning Session
      • 10:30
        Experience with current Generators 1h
        Speakers: Josh Bendavid (California Institute of Technology (US)), Taylor Childers (Argonne National Laboratory (US))