Present:
ATLAS: Simone Campana, Torre Wenaus
LHCb: Stefan Roiser
ALICE: Predrag Buncic
CMS: Daniele Bonacorsi, David Lange
WLCG: Ian Bird
Summary of discussion on starting to prepare planning for HL-LHC computing
There are two main areas where we should try to pin down some baseline assumptions on numbers:
- machine parameters: livetime, pile-up, luminosity, etc.
- experiment parameters: trigger rates, event sizes (for the various formats)
From these we can derive raw data estimates and baseline needs for storage, CPU, etc. based on today's models. This will clearly evolve, but it at least gives a starting point.
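As a minimal sketch of how such a derivation could look, the following assumes placeholder values for livetime fraction, trigger rates, and RAW event sizes; none of these numbers come from the discussion and the real inputs would be the agreed machine and experiment parameters.

# Hypothetical sketch: derive a yearly RAW data volume estimate from
# machine and experiment parameters. All values below are placeholder
# assumptions for illustration only, not numbers agreed in this meeting.

SECONDS_PER_YEAR = 3.15e7      # calendar seconds in a year
LIVETIME_FRACTION = 0.2        # assumed fraction of the year with physics delivery

# Assumed per-experiment parameters: (trigger rate in Hz, RAW event size in MB)
experiments = {
    "ATLAS": (10_000, 5.0),
    "CMS":   (10_000, 5.0),
    "ALICE": (50_000, 1.5),
    "LHCb":  (100_000, 0.1),
}

live_seconds = SECONDS_PER_YEAR * LIVETIME_FRACTION

for name, (rate_hz, event_mb) in experiments.items():
    events_per_year = rate_hz * live_seconds
    raw_pb = events_per_year * event_mb / 1e9   # MB -> PB
    print(f"{name}: {events_per_year:.2e} events/year, ~{raw_pb:.0f} PB RAW/year")

The same pattern would extend to derived formats (AOD, analysis formats) and to CPU estimates, once per-event processing times are pinned down.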
Way forward:
- Create a document on the baseline assumptions and resource needs - use the appendix of the Run 2 Computing Model Update document (attached to the agenda) as a starting outline.
- Work towards a whitepaper on the software and HL-LHC computing problem, covering all aspects of development, performance understanding, and maintenance in the short and long term - not just for core software but also for distributed computing and data tools. This could be used as a basis for requesting additional dedicated funding, e.g. via a project, in-kind contributions, or similar. The community whitepaper to be discussed at the May HSF workshop could also serve as input.
Comments:
- Should also think about appropriate metrics for optimisation and evaluation of models.
- Should also consider memory, I/O rate, and network needs in addition to CPU and storage.
Next meetings