Speaker
Description
The Einstein Telescope (ET) will be the next-generation European underground Gravitational Wave (GW) observatory, designed to open a new observational window on the Universe starting in the mid-to-late 2030s. Building upon the experience of current GW detectors such as LIGO and Virgo, ET will achieve a significant increase in sensitivity, enabling the detection of a much larger number of GW sources and the exploration of new astrophysical and cosmological regimes. This ambitious goal entails the generation, processing, and long-term preservation of data volumes far exceeding those produced by current GW facilities. More challenging still will be the computing power required to process the data, which scales with the detection rate: ET is expected to detect ~1000 times more GW events per year than LIGO and Virgo. Multi-messenger science requires that ET data, together with data from any other GW detector, be processed with low latency, i.e., within seconds of being recorded.
ET computing and data requirements, estimated primarily by scaling those of LIGO and Virgo, indicate that ET will need computing resources similar in scale to those of the LHC experiments. The latest understanding of these requirements, informed by Mock Data Challenges, will be presented. Based on these requirements, the ET Computing Model concept has been defined as an evolution of the computing model used by LIGO and Virgo, incorporating best practices and solutions from industry, the WLCG, and the HSF. Significant work lies ahead to realise the ET computing model, which also presents opportunities for collaboration. In addition, the case is made for professionalising the ET software and computing workforce, recognising the 50-year lifespan of ET and its reliance on continually evolving, heterogeneous computing and software solutions.