Session

Virtualisation

20 May 2021, 10:50

Conveners

Virtualisation: Thu AM

  • Alessandra Forti (University of Manchester (GB))
  • Daniele Spiga (Universita e INFN, Perugia (IT))

Virtualisation: Thu PM

  • Niko Neufeld (CERN)
  • Gordon Watts (University of Washington (US))

Presentation materials

  1. Johannes Elmsheuser (Brookhaven National Laboratory (US))
    20/05/2021, 10:50
    Distributed Computing, Data Management and Facilities
    Short Talk

    The ATLAS Experiment at CERN successfully uses a worldwide distributed computing Grid infrastructure to support its physics programme at the Large Hadron Collider (LHC). The Grid workflow system PanDA routinely manages up to 700,000 concurrently running production and analysis jobs to process simulation and detector data. In total, more than 500 PB of data are distributed over more than 150 sites...

  2. Massimo Sgaravatto (Universita e INFN, Padova (IT))
    20/05/2021, 11:03
    Distributed Computing, Data Management and Facilities
    Short Talk

    CloudVeneto is a private cloud created by merging two existing cloud infrastructures: the INFN Cloud Area Padovana and a private cloud owned by ten departments of the University of Padova.
    This infrastructure is a full production facility in continuous growth, both in the number of users and in computing and storage resources.
    Even if the usage of CloudVeneto is not...

  3. Apostolos Theodoridis (CERN)
    20/05/2021, 11:16
    Distributed Computing, Data Management and Facilities
    Short Talk

    The vast amounts of data generated by scientific research pose enormous challenges for capturing, managing and processing this data. Several attempts have been made in different projects (such as HNSciCloud and OCRE), but today commercial cloud services do not yet play a major role in the production computing environments of the publicly funded research sector in Europe. Funded by...

  4. Rene Caspart (KIT - Karlsruhe Institute of Technology (DE))
    20/05/2021, 11:29
    Distributed Computing, Data Management and Facilities
    Short Talk

    The inclusion of opportunistic resources, for example from High Performance Computing (HPC) centers or cloud providers, is an important contribution to bridging the gap between existing resources and the future needs of the LHC collaborations, especially for the HL-LHC era. However, the integration of these resources poses new challenges and often needs to happen in a highly dynamic manner. To...

  5. Ralf Florian Von Cube (KIT - Karlsruhe Institute of Technology (DE))
    20/05/2021, 11:42
    Distributed Computing, Data Management and Facilities
    Short Talk

    Computing resource needs are expected to increase drastically in the future. The HEP experiments ATLAS and CMS foresee an increase of a factor of 5-10 in the volume of recorded data in the upcoming years. The current infrastructure, namely the WLCG, is not sufficient to meet the demands in terms of computing and storage resources.

    The usage of non-HEP-specific resources is one way to reduce...

  6. Zhibin Liu (Institute of High Energy Physics, CAS; University of Chinese Academy of Sciences)
    20/05/2021, 11:55
    Distributed Computing, Data Management and Facilities
    Short Talk

    The High Energy Photon Source (HEPS) is characterized by large data volumes, demanding timeliness, and diverse requirements for scientific data analysis. Researchers typically need to spend a lot of time configuring the experimental environment. To address these problems, we introduce a remote data analysis system for HEPS. The platform provides users a web-based...

  7. Nurcan Ozturk (University of Texas at Arlington (US))
    20/05/2021, 15:00
    Distributed Computing, Data Management and Facilities
    Short Talk

    The ATLAS experiment’s software production and distribution on the grid benefits from a semi-automated infrastructure that provides up-to-date information about software usability and availability through the CVMFS distribution service for all relevant systems. The software development process uses a Continuous Integration pipeline involving testing, validation, packaging and installation...

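The pipeline this abstract outlines (testing, validation, packaging and installation through CVMFS) can be sketched as a generic CI configuration. This is a minimal sketch assuming a GitLab-style CI file; the job names, build commands and repository name are illustrative, not ATLAS's actual setup — only the `cvmfs_server transaction`/`publish` pair are standard CVMFS server commands.

```yaml
stages: [test, package, install]

unit-tests:
  stage: test
  script:
    - ctest --output-on-failure          # run the test/validation suite

package-rpm:
  stage: package
  script:
    - cpack -G RPM                       # bundle the validated build

publish-to-cvmfs:
  stage: install
  script:
    - cvmfs_server transaction sw.example.ch            # open a writable session
    - rpm -i build/*.rpm --prefix /cvmfs/sw.example.ch  # install into the repo
    - cvmfs_server publish sw.example.ch                # publish atomically
```

The transaction/publish pair is what makes the installation step appear atomic to all grid clients consuming the repository.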
  8. Matthew Feickert (Univ. Illinois at Urbana Champaign (US))
    20/05/2021, 15:13
    Distributed Computing, Data Management and Facilities
    Short Talk

    In High Energy Physics, facilities that provide High Performance Computing environments offer an opportunity to efficiently perform the statistical inference required for analysis of data from the Large Hadron Collider, but can pose problems with orchestration and efficient scheduling. The compute architectures at these facilities do not easily support the Python compute model, and the...

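The scheduling pattern at issue here — many small, independent likelihood evaluations fanned out over a facility's workers — can be illustrated with a minimal sketch. All function names and numbers below are illustrative assumptions, not the contribution's actual tooling; a local thread pool stands in for what would be many short batch jobs on an HPC system.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def negative_log_likelihood(mu, observed, signal=5.0, background=50.0):
    """Poisson NLL for a single counting experiment (constant terms dropped)."""
    rate = mu * signal + background
    return rate - observed * math.log(rate)

def scan_point(point):
    """One independent unit of work: the NLL at a given signal strength mu."""
    mu, observed = point
    return mu, negative_log_likelihood(mu, observed)

def profile_scan(observed, mus):
    # Each scan point is independent of the others, so the scan maps
    # cleanly onto many small jobs; here a local pool plays that role.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(scan_point, [(mu, observed) for mu in mus]))
```

For the toy inputs (signal 5, background 50, 55 observed events), `profile_scan(55, [0.0, 0.5, 1.0, 1.5, 2.0])` has its minimum at mu = 1.0, since the Poisson NLL is minimised where the predicted rate equals the observed count.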
  9. Diego Ciangottini (INFN, Perugia (IT))
    20/05/2021, 15:26
    Distributed Computing, Data Management and Facilities
    Short Talk

    The challenges posed by the HL-LHC era are not limited to the sheer amount of data to be processed: the capability of optimizing the analyser's experience will also bring important benefits for the LHC communities, in terms of total resource needs, user satisfaction and a reduction of the time to publication. At the Italian National Institute for Nuclear Physics (INFN) a portable...

  10. Konstantinos Samaras-Tsakiris (CERN)
    20/05/2021, 15:39
    Distributed Computing, Data Management and Facilities
    Short Talk

    The infrastructure behind home.cern and 1000 other Drupal websites serves more than 15,000 unique visitors daily. To best serve the site owners, a small engineering team needs development speed to adapt to their evolving needs and operational velocity to troubleshoot emerging problems rapidly. We designed a new Web Frameworks platform by extending Kubernetes to replace the ageing physical...

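"Extending Kubernetes" here typically means the operator pattern: each website becomes a declarative custom resource that a controller reconciles into Deployments, Services and routes. A minimal sketch of such a resource follows; the API group, kind and field names are illustrative assumptions, not CERN's actual CRD.

```yaml
# Hypothetical custom resource: one object per managed Drupal site.
apiVersion: webservices.example.cern/v1alpha1
kind: DrupalSite
metadata:
  name: home-cern
spec:
  siteUrl: home.cern      # public hostname the operator routes to
  drupalVersion: "9.2"    # site software version to deploy
  diskSize: 5Gi           # persistent storage for site files
  qosClass: critical      # drives replica count and resource requests
```

The appeal of this design is that site owners interact only with the small declarative object, while the operator encapsulates the engineering team's deployment knowledge.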
  11. Valentin Volkl (University of Innsbruck (AT))
    20/05/2021, 15:52
    Offline Computing
    Short Talk

    Consistent, efficient software builds and deployments are a common concern for all HEP experiments. These proceedings describe the evolution of the usage of the Spack package manager in HEP in the context of the LCG stacks and the current Spack-based management of Key4hep software. Whereas previously Key4hep software used Spack only for a thin layer of FCC experiment software on top of the LCG...

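For readers unfamiliar with Spack, a stack like this is typically driven by a declarative environment file (`spack.yaml`). The sketch below assumes a `key4hep-stack` umbrella package in line with public Spack conventions; the exact specs, targets and paths are illustrative.

```yaml
spack:
  specs:
    - key4hep-stack          # umbrella package pulling in the full stack
  concretization: together   # resolve all specs into one consistent DAG
  view: /opt/key4hep         # single merged prefix, easy to source
  packages:
    all:
      target: [x86_64]       # build a portable baseline architecture
```

Concretizing the whole environment together is what guarantees that every package in the stack agrees on compiler, dependency versions and variants.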
  12. Lorena Lobato Pardavila (Fermi National Accelerator Lab. (US))
    20/05/2021, 16:05
    Distributed Computing, Data Management and Facilities
    Short Talk

    The File Transfer Service (FTS3) is a data movement service developed at CERN which is used to distribute the majority of the Large Hadron Collider's data across the Worldwide LHC Computing Grid (WLCG) infrastructure. At Fermilab, we have deployed FTS3 instances for Intensity Frontier experiments (e.g. DUNE) to transfer data between America and Europe, using a container-based strategy. In this...

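A container-based FTS3 deployment of the kind described can be sketched as a compose file. The image tags, mounts and credentials below are illustrative assumptions rather than Fermilab's actual configuration; 8446 is FTS3's customary REST port.

```yaml
version: "3"
services:
  fts-server:
    image: ftsproject/fts-server:3.11     # hypothetical image name and tag
    ports:
      - "8446:8446"                       # FTS3 REST endpoint
    volumes:
      - /etc/grid-security:/etc/grid-security:ro   # host grid certificates
    depends_on:
      - fts-db
  fts-db:
    image: mysql:5.7                      # FTS3 state database
    environment:
      MYSQL_DATABASE: fts
      MYSQL_ROOT_PASSWORD: changeme       # placeholder, use a secret in practice
```

Packaging the service this way lets identical instances be stood up per experiment, with only the mounted credentials and database differing.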