10–14 Oct 2016
San Francisco Marriott Marquis
America/Los_Angeles timezone

CMS use of allocation based HPC resources

13 Oct 2016, 15:00
15m
GG C2 (San Francisco Marriott)

Oral Track 7: Middleware, Monitoring and Accounting

Description

The higher energy and luminosity of the LHC in Run2 have put increased pressure on CMS computing resources. Extrapolating to the even higher luminosities (and thus higher event complexities and trigger rates) of Run3 and beyond, it becomes clear that the current CMS computing model alone will not scale accordingly. High Performance Computing (HPC) facilities, widely used in scientific computing outside of HEP, represent a largely untapped (at least so far) computing resource for CMS. Being able to use even a small fraction of the computing resources at HPC facilities could significantly increase the overall computing available to CMS. Here we describe the CMS strategy for integrating HPC resources into CMS computing, the unique challenges these facilities present, and how we plan to overcome those challenges. We also present the current status of ongoing CMS efforts at HPC sites such as NERSC (Cori cluster), SDSC (Comet cluster) and TACC (Stampede cluster).

Primary Keyword: High performance computing

Primary authors

Dr Bo Jayatilaka (Fermi National Accelerator Lab. (US))
Dirk Hufnagel (Fermi National Accelerator Lab. (US))
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
Stefan Piperov (Brown University (US))