Description
The High-Luminosity LHC will provide an unprecedented volume of complex collision events. The desire to keep as many "interesting" events as possible for investigation by analysts implies a major increase in the scale of the compute, storage and networking infrastructure required for the HL-LHC experiments. An updated computing model is required to facilitate the timely publication of accurate physics results from HL-LHC data samples. This talk presents a study of the computing requirements of CMS during the HL-LHC era. We will discuss how we have included requirements beyond the usual CPU, disk and tape estimates made by LHC experiments during Run 2, such as networking and tape read/write rate requirements. We will show how Run 2 monitoring data have been used to inform choices towards an HL-LHC computing model. We will illustrate how changes to the computing infrastructure or to the analysis approach can affect the total resource needs and cost. Finally, we will discuss the approach and status of the CMS process for evolving its HL-LHC computing model based on this modeling and other factors.
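
For illustration only, the sketch below shows the kind of bookkeeping such a resource projection involves: per-event CPU, storage and data-movement assumptions are turned into yearly totals, including the networking and tape read-rate figures mentioned above. The `Scenario` class, the parameter names and every numerical value here are hypothetical placeholders, not CMS estimates and not the model described in the talk.

```python
from dataclasses import dataclass

SECONDS_PER_YEAR = 365 * 24 * 3600

@dataclass
class Scenario:
    """Assumed per-year inputs for a toy HL-LHC resource projection (all hypothetical)."""
    events: float             # events recorded per year
    cpu_s_per_event: float    # CPU core-seconds to process one event
    raw_kb_per_event: float   # raw event size written to tape (kB)
    ana_kb_per_event: float   # analysis-format event size kept on disk (kB)
    reprocessings: int        # full (re)processing passes per year

def project(s: Scenario) -> dict:
    """Turn per-event assumptions into yearly totals, including the tape read
    rate and network figures that go beyond plain CPU/disk/tape counts."""
    cpu_core_years = s.events * s.cpu_s_per_event * s.reprocessings / SECONDS_PER_YEAR
    tape_pb = s.events * s.raw_kb_per_event / 1e12   # 1 PB = 1e12 kB
    disk_pb = s.events * s.ana_kb_per_event / 1e12
    # Each reprocessing pass re-reads the raw data from tape; average over the year.
    tape_read_gb_s = s.reprocessings * tape_pb * 1e6 / SECONDS_PER_YEAR
    # Assume the analysis-format data is exported once over the WAN to remote sites.
    network_gb_s = disk_pb * 1e6 / SECONDS_PER_YEAR
    return {
        "CPU [core-years]": round(cpu_core_years),
        "Tape [PB]": round(tape_pb, 1),
        "Disk [PB]": round(disk_pb, 1),
        "Tape read [GB/s]": round(tape_read_gb_s, 2),
        "WAN export [GB/s]": round(network_gb_s, 2),
    }

# Compare a baseline against a slimmer analysis format, mimicking how a change
# in analysis approach shifts the total resource needs (and hence cost).
baseline = Scenario(events=5e10, cpu_s_per_event=50,
                    raw_kb_per_event=2000, ana_kb_per_event=50, reprocessings=2)
slim = Scenario(events=5e10, cpu_s_per_event=50,
                raw_kb_per_event=2000, ana_kb_per_event=10, reprocessings=2)
for name, scen in [("baseline", baseline), ("slim analysis format", slim)]:
    print(name, project(scen))
```

Varying one input at a time in such a sketch is the simplest way to see which infrastructure or analysis changes dominate the totals; the real CMS modeling described in the talk is of course far more detailed.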
Consider for promotion: Yes