Conveners
Virtualisation: Thu AM
- Alessandra Forti (University of Manchester (GB))
- Daniele Spiga (Universita e INFN, Perugia (IT))
Virtualisation: Thu PM
- Niko Neufeld (CERN)
- Gordon Watts (University of Washington (US))
The ATLAS experiment at CERN successfully uses a worldwide distributed computing Grid infrastructure to support its physics programme at the Large Hadron Collider (LHC). The Grid workflow system PanDA routinely manages up to 700,000 concurrently running production and analysis jobs to process simulation and detector data. In total, more than 500 PB of data are distributed over more than 150 sites...
CloudVeneto is a private cloud created by merging two existing cloud infrastructures: the INFN Cloud Area Padovana and a private cloud owned by 10 departments of the University of Padova.
This infrastructure is a full production facility in continuous growth, both in the number of users and in computing and storage resources.
Even if the usage of CloudVeneto is not...
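To make the user-facing side of such a private cloud concrete, the sketch below shows how a virtual machine might be provisioned programmatically, assuming an OpenStack-based deployment and using the openstacksdk library. The cloud profile name, image, flavor and network names are placeholders, not CloudVeneto's actual configuration.

```python
# Illustrative sketch only: assumes an OpenStack-based private cloud, with
# credentials configured in clouds.yaml under a hypothetical "cloudveneto" entry.
import openstack

conn = openstack.connect(cloud="cloudveneto")  # hypothetical cloud profile name

# Placeholder image/flavor/network names; real values depend on the project.
image = conn.compute.find_image("CentOS-7-x86_64")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="analysis-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Provisioned {server.name} with status {server.status}")
```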
The vast amounts of data generated by scientific research pose enormous challenges for capturing, managing and processing these data. Several projects (such as HNSciCloud and OCRE) have run trials, but today commercial cloud services still do not play a major role in the production computing environments of the publicly funded research sector in Europe. Funded by...
The inclusion of opportunistic resources, for example from High Performance Computing (HPC) centers or cloud providers, is an important contribution to bridging the gap between existing resources and the future needs of the LHC collaborations, especially for the HL-LHC era. However, the integration of these resources poses new challenges and often needs to happen in a highly dynamic manner. To...
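The dynamic character of this integration can be pictured as a simple threshold-based provisioning loop: opportunistic nodes are requested while the backlog of idle jobs grows and released when demand disappears. This is a conceptual sketch only; every function name below is an invented placeholder, and real systems involve batch-system hooks, cloud APIs and site policies.

```python
# Conceptual sketch of threshold-based, dynamic provisioning of opportunistic
# resources. All functions are hypothetical placeholders, not a real API.
import time


def idle_jobs() -> int:
    """Placeholder: number of jobs waiting in the batch system."""
    return 0


def running_opportunistic_nodes() -> int:
    """Placeholder: opportunistic nodes currently joined to the pool."""
    return 0


def request_node() -> None:
    """Placeholder: ask the HPC centre or cloud provider for one more node."""


def release_idle_node() -> None:
    """Placeholder: drain and return an unused opportunistic node."""


def provisioning_loop(scale_up_threshold: int = 100, poll_seconds: int = 60) -> None:
    while True:
        backlog = idle_jobs()
        if backlog > scale_up_threshold:
            request_node()            # grow while demand exceeds capacity
        elif backlog == 0 and running_opportunistic_nodes() > 0:
            release_idle_node()       # shrink promptly when demand disappears
        time.sleep(poll_seconds)
```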
Computing resource needs are expected to increase drastically in the future. The HEP experiments ATLAS and CMS foresee an increase by a factor of 5-10 in the volume of recorded data in the upcoming years. The current infrastructure, namely the WLCG, is not sufficient to meet these demands in terms of computing and storage resources.
The usage of non-HEP-specific resources is one way to reduce...
The High Energy Photon Source (HEPS) is characterized by large data volumes, strict timeliness requirements, and diverse needs for scientific data analysis. Researchers generally need to spend a lot of time configuring the experimental environment. To address these problems, we introduce a remote data analysis system for HEPS. The platform provides users with a web-based...
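As a rough illustration of how environment-configuration effort can be removed from users, a pre-configured analysis environment could be started as a container behind a web front end, along the lines of the sketch below using the Docker SDK for Python. The image name, port and data path are purely illustrative and not the actual HEPS platform internals.

```python
# Illustrative only: launching a pre-configured, web-accessible analysis
# environment as a container. Image, port and paths are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "jupyter/scipy-notebook",          # placeholder pre-built analysis image
    detach=True,
    ports={"8888/tcp": 8888},          # expose the notebook's web interface
    volumes={"/data/heps": {"bind": "/home/jovyan/data", "mode": "ro"}},
    name="heps-analysis-session",
)
print(f"Started {container.name} ({container.short_id})")
```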
The ATLAS experiment’s software production and distribution on the grid benefits from a semi-automated infrastructure that provides up-to-date information about software usability and availability through the CVMFS distribution service for all relevant systems. The software development process uses a Continuous Integration pipeline involving testing, validation, packaging and installation...
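The overall shape of such a pipeline, running the stages named above in sequence and stopping at the first failure, can be sketched as follows. The stage functions are placeholders for illustration only and do not reproduce the actual ATLAS infrastructure.

```python
# Minimal sketch of a fail-fast pipeline with the stages named above
# (testing, validation, packaging, installation). Stage bodies are placeholders.
from typing import Callable, List


def run_tests() -> bool:
    return True          # placeholder for the test stage


def validate() -> bool:
    return True          # placeholder for release validation


def package() -> bool:
    return True          # placeholder for packaging the release


def install_to_cvmfs() -> bool:
    return True          # placeholder for publication via CVMFS


def run_pipeline(stages: List[Callable[[], bool]]) -> bool:
    for stage in stages:
        if not stage():
            print(f"Stage {stage.__name__} failed; aborting pipeline")
            return False
        print(f"Stage {stage.__name__} succeeded")
    return True


if __name__ == "__main__":
    run_pipeline([run_tests, validate, package, install_to_cvmfs])
```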
In High Energy Physics, facilities that provide High Performance Computing environments offer an opportunity to efficiently perform the statistical inference required for the analysis of data from the Large Hadron Collider, but they can pose problems with orchestration and efficient scheduling. The compute architectures at these facilities do not easily support the Python compute model, and the...
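One way to picture the orchestration problem is a fan-out of independent statistical fits over worker processes. In the sketch below, the standard-library ProcessPoolExecutor stands in for the facility's scheduler, and fit_one is a placeholder for the actual inference code (for example a pyhf-style fit); none of this is the system described in the abstract.

```python
# Sketch of fanning out independent statistical fits across local worker
# processes. ProcessPoolExecutor stands in for the facility scheduler;
# fit_one is a placeholder for the real inference step.
from concurrent.futures import ProcessPoolExecutor


def fit_one(mu: float) -> tuple[float, float]:
    """Placeholder fit: pretend to evaluate a test statistic for signal strength mu."""
    return mu, (mu - 1.0) ** 2


def scan(mu_values, max_workers: int = 8):
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fit_one, mu_values))


if __name__ == "__main__":
    for mu, tstat in scan([i / 10 for i in range(21)]):
        print(f"mu = {mu:.1f}  test statistic = {tstat:.3f}")
```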
The challenges posed by the HL-LHC era are not limited to the sheer amount of data to be processed: the ability to optimize the analyser's experience will also bring important benefits to the LHC communities in terms of total resource needs, user satisfaction and a reduced time to publication. At the Italian National Institute for Nuclear Physics (INFN), a portable...
The infrastructure behind home.cern and 1,000 other Drupal websites serves more than 15,000 unique visitors daily. To best serve the site owners, a small engineering team needs development speed to adapt to their evolving needs and operational velocity to troubleshoot emerging problems rapidly. We designed a new Web Frameworks platform by extending Kubernetes to replace the ageing physical...
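"Extending Kubernetes" typically means adding custom resources and an operator that reconciles them. The sketch below uses the kopf framework with a hypothetical DrupalSite custom resource to show the shape of such an operator; it is not CERN's actual operator, and the resource group, fields and logic are invented for illustration.

```python
# Illustrative operator sketch using the kopf framework. The "drupalsites"
# custom resource and its API group are hypothetical.
import kopf


@kopf.on.create("webservices.example.cern", "v1alpha1", "drupalsites")
def create_site(spec, name, namespace, logger, **kwargs):
    """React to a new DrupalSite object by (pretending to) provision a site."""
    site_url = spec.get("siteUrl", f"{name}.example.cern")  # hypothetical field
    logger.info(f"Provisioning Drupal site {site_url} in namespace {namespace}")
    # A real operator would create Deployments, Services, Ingresses, etc. here.
    return {"siteUrl": site_url, "phase": "Provisioning"}  # stored under status
```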
Consistent, efficient software builds and deployments are a common concern for all HEP experiments. These proceedings describe the evolution of the usage of the Spack package manager in HEP in the context of the LCG stacks and the current Spack-based management of Key4hep software. Whereas previously Key4hep software used Spack only for a thin layer of FCC experiment software on top of the LCG...
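In Spack, both individual packages and whole stacks are described by Python recipes; a stack-level meta-package might look roughly like the sketch below. The class name, homepage, version and dependency specs are illustrative, not the actual Key4hep or LCG recipes.

```python
# Hedged sketch of a Spack "bundle" recipe that pins a software stack through
# its dependencies. Class name, version and dependency specs are illustrative.
from spack.package import *


class MyHepStack(BundlePackage):
    """Meta-package that pulls in a coherent set of HEP packages."""

    homepage = "https://example.org/my-hep-stack"  # placeholder URL

    version("2024.01.01")

    # Illustrative dependencies; a real stack pins many more packages/variants.
    depends_on("root +x +opengl")
    depends_on("geant4")
    depends_on("dd4hep")
```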
The File Transfer Service (FTS3) is a data movement service developed at CERN which is used to distribute the majority of the Large Hadron Collider's data across the Worldwide LHC Computing Grid (WLCG) infrastructure. At Fermilab, we have deployed FTS3 instances for Intensity Frontier experiments (e.g. DUNE) to transfer data across America and Europe using a container-based strategy. In this...
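For context, submitting a transfer through the FTS3 Python "easy" bindings looks roughly like the sketch below; the endpoint and storage URLs are placeholders and do not reflect the Fermilab deployment.

```python
# Rough sketch of submitting a single transfer with the FTS3 "easy" Python
# bindings. Endpoint and storage URLs are placeholders.
import fts3.rest.client.easy as fts3

endpoint = "https://fts.example.org:8446"   # placeholder FTS3 REST endpoint
context = fts3.Context(endpoint)            # X.509 proxy taken from the environment

transfer = fts3.new_transfer(
    "gsiftp://source.example.org/path/file.root",       # placeholder source URL
    "root://destination.example.org//path/file.root",   # placeholder destination URL
)
job = fts3.new_job([transfer], verify_checksum=True, retry=3)

job_id = fts3.submit(context, job)
print(f"Submitted FTS3 job {job_id}")
```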