19–25 Oct 2024
Europe/Zurich timezone

Contribution List

509 contributions
  1. Agnieszka Dziurda (Polish Academy of Sciences (PL)), Tomasz Szumlak (AGH University of Krakow (PL))
    21/10/2024, 09:00
  2. Alexander Held (University of Wisconsin Madison (US)), Brian Paul Bockelman (University of Wisconsin Madison (US)), Oksana Shadura (University of Nebraska Lincoln (US))
    21/10/2024, 09:30
    Plenary
    Talk

    The IRIS-HEP software institute, as a contributor to the broader HEP Python ecosystem, is developing scalable analysis infrastructure and software tools to address the upcoming HL-LHC computing challenges with new approaches and paradigms, driven by our vision of what HL-LHC analysis will require. The institute uses a “Grand Challenge” format, constructing a series of increasingly large,...

  3. Marianna Fontana (INFN Bologna (IT)), Santiago Folgueras (Universidad de Oviedo (ES))
    21/10/2024, 10:00
    Plenary
    Talk

    For the High-Luminosity Large Hadron Collider era, the trigger and data acquisition system of the Compact Muon Solenoid experiment will be entirely replaced. Novel design choices have been explored, including ATCA prototyping platforms with SoC controllers and newly available interconnect technologies with serial optical links with data rates up to 28 Gb/s. Trigger data analysis will be...

  4. Graeme A Stewart (CERN)
    21/10/2024, 11:00
    Plenary
    Talk

    Julia is a mature general-purpose programming language, with a large ecosystem of libraries and more than 10000 third-party packages, which specifically targets scientific computing. As a language, Julia is as dynamic, interactive, and accessible as Python with NumPy, but achieves run-time performance on par with C/C++. In this paper, we describe the state of adoption of Julia in HEP, where...

  5. Andrea Rizzi
    21/10/2024, 11:30
    Plenary
    Talk

    Detailed event simulation at the LHC consumes a large fraction of the computing budget. CMS has developed an end-to-end ML-based simulation that can speed up the production of analysis samples by several orders of magnitude with a limited loss of accuracy. As the CMS experiment is adopting a common analysis-level format, the NANOAOD, for a larger number of analyses, such an event...

  6. Zach Marshall (Lawrence Berkeley National Lab. (US))
    21/10/2024, 12:00
    Plenary
    Talk

    The ATLAS Collaboration has released an extensive volume of data for research use for the first time. The full datasets of proton collisions from 2015 and 2016, alongside a wide array of matching simulated data, are all offered in the PHYSLITE format. This lightweight format is chosen for its efficiency and is the preferred standard for ATLAS internal analyses. Additionally, the inclusion of...

  7. Chris Lee (Stony Brook University (US))
    21/10/2024, 13:30
    Track 6 - Collaborative software and maintainability
    Talk

    The ATLAS offline code management system serves as a collaborative framework for developing a code base totaling more than 5 million lines. Supporting up to 50 nightly release branches, the ATLAS Nightly System offers abundant opportunities for updating existing software and developing new tools for forthcoming experimental stages within a multi-platform environment. This paper describes the...

  8. Jonas Hahnfeld (CERN & Goethe University Frankfurt)
    21/10/2024, 13:30
    Track 3 - Offline Computing
    Talk

    RNTuple is the new columnar data format designed as the successor to ROOT's TTree format. It makes use of modern hardware capabilities and is expected to be used in production by the LHC experiments during the HL-LHC. In this contribution, we will discuss the use of Direct I/O to fully exploit modern SSDs, especially in the context of the recent addition of parallel RNTuple writing....

  9. Kati Lassila-Perini (Helsinki Institute of Physics (FI))
    21/10/2024, 13:30
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    The CMS experiment at the Large Hadron Collider (LHC) regularly releases open data and simulations, enabling a wide range of physics analyses and studies by the global scientific community. The recent introduction of the NanoAOD data format has provided a more streamlined and efficient approach to data processing, allowing for faster analysis turnaround. However, the larger MiniAOD format...

  10. Arsenii Gavrikov
    21/10/2024, 13:30
    Track 5 - Simulation and analysis tools
    Talk

    The Jiangmen Underground Neutrino Observatory (JUNO) is a neutrino experiment under construction in the Guangdong province of China. The experiment has a wide physics program with the most ambitious goal being the determination of the neutrino mass ordering and the high-precision measurement of neutrino oscillation properties using anti-neutrinos produced in the 50 km distant commercial...

  11. Alessandro Scarabotto (Technische Universitaet Dortmund (DE))
    21/10/2024, 13:30
    Track 2 - Online and real-time computing
    Talk

    Since 2022, the LHCb detector has been taking data with a full software trigger at the LHC proton-proton collision rate, implemented on GPUs in the first stage and CPUs in the second stage. This setup allows the alignment and calibration to be performed online and physics analyses to run directly on the output of the online reconstruction, following the real-time analysis paradigm. This talk will give...

  12. Laura Cappelli (INFN Ferrara)
    21/10/2024, 13:30
    Track 3 - Offline Computing
    Talk

    Tracking charged particles in high-energy physics experiments is a computationally intensive task. With the advent of the High Luminosity LHC era, which is expected to significantly increase the number of proton-proton interactions per beam collision, the amount of data to be analysed will increase dramatically. As a consequence, local pattern recognition algorithms suffer from scaling...

  13. Jerry 🦑 Ling (Harvard University (US))
    21/10/2024, 13:30
    Track 5 - Simulation and analysis tools
    Talk

    At the LHC experiments, RNTuple is emerging as the primary data storage solution, and will be ready for production next year. In this context, we introduce the latest development in UnROOT.jl, a high-performance and thread-safe Julia ROOT I/O package that facilitates both the reading and writing of RNTuple data.

    We briefly share insights gained from implementing RNTuple Reader twice: first...

  14. Yuan-Tang Chou (University of Washington (US))
    21/10/2024, 13:48
    Track 3 - Offline Computing
    Talk

    Machine Learning (ML)-based algorithms play increasingly important roles in almost all aspects of the data analyses in ATLAS. Diverse ML models are used in detector simulations, event reconstructions, and data analyses. They are being deployed in the ATLAS software framework, Athena. The primary approach to perform ML inference in Athena is to use the ONNXRuntime. However, some ML models could...

  15. Monika Wielers (RAL (UK))
    21/10/2024, 13:48
    Track 6 - Collaborative software and maintainability
    Talk

    The ATLAS experiment will undergo major upgrades for operation at the high-luminosity LHC. The high pile-up interaction environment (up to 200 interactions per bunch crossing at 40 MHz) requires a new radiation-hard tracking detector with a fast readout.

    The scale of the proposed Inner Tracker (ITk) upgrade is much larger than the current ATLAS tracker. The current tracker consists of ~4000...

  16. Yulei Zhang (University of Washington (US))
    21/10/2024, 13:48
    Track 5 - Simulation and analysis tools
    Talk

    The Fair Universe project is organising the HiggsML Uncertainty Challenge, which runs from June to October 2024.

    This HEP and Machine Learning competition is the first to strongly emphasise uncertainties: mastering uncertainties in the input training dataset and outputting credible confidence intervals.

    The context is the measurement of the Higgs to tau+ tau- cross section like...

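    The challenge above is built around reporting credible confidence intervals rather than point estimates. As a minimal illustration of what that means in a counting setting (a toy sketch, not the challenge's methodology; the event counts and the Gaussian approximation are invented for the example):

    ```python
    import math

    def mu_interval(n_obs, b, s, z=1.0):
        """Toy counting experiment: point estimate and approximate z-sigma
        interval for the signal strength mu = (n_obs - b) / s, using the
        Gaussian approximation sqrt(n_obs) for the Poisson uncertainty."""
        mu_hat = (n_obs - b) / s
        sigma = math.sqrt(n_obs) / s
        return mu_hat - z * sigma, mu_hat, mu_hat + z * sigma

    # 1225 observed events over an expected background of 1000,
    # with 225 expected signal events at the nominal cross section.
    lo, mu, hi = mu_interval(n_obs=1225, b=1000.0, s=225.0)
    print(f"mu = {mu:.2f}, 68% interval ~ [{lo:.2f}, {hi:.2f}]")
    ```

    The challenge additionally asks that such intervals stay honest under systematic shifts in the training data, which this toy does not model.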
  17. Piet Nogga (University of Bonn (DE))
    21/10/2024, 13:48
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    The Large Hadron Collider Beauty (LHCb) experiment offers an excellent environment to study a broad variety of modern physics topics. Its data from the major physics campaigns (Runs 1 and 2) at the Large Hadron Collider (LHC) has led to over 600 scientific publications. In accordance with the CERN Open Data Policy, LHCb announced the release of the full Run 1 dataset gathered from...

  18. Benjamin Morgan (University of Warwick (GB))
    21/10/2024, 13:48
    Track 5 - Simulation and analysis tools
    Talk

    The ATLAS experiment at the LHC heavily depends on simulated event samples produced by a full Geant4 detector simulation. This Monte Carlo (MC) simulation based on Geant4 is a major consumer of computing resources and is anticipated to remain one of the dominant resource users in the HL-LHC era. ATLAS has continuously been working to improve the computational performance of this simulation for...

  19. Claudia Merlassino (Universita degli Studi di Udine (IT))
    21/10/2024, 13:48
    Track 2 - Online and real-time computing
    Talk

    The ATLAS experiment in the LHC Run 3 uses a two-level trigger system to select
    events of interest to reduce the 40 MHz bunch crossing rate to a recorded rate
    of up to 3 kHz of fully-built physics events. The trigger system is composed of
    a hardware based Level-1 trigger and a software based High Level Trigger.
    The selection of events by the High Level Trigger is based on a wide variety...

  20. Xenofon Chiotopoulos (Nikhef National institute for subatomic physics (NL); Maastricht University)
    21/10/2024, 13:48
    Track 3 - Offline Computing
    Talk

    With the future high-luminosity LHC era fast approaching, high-energy physics faces large computational challenges for event reconstruction. Employing the LHCb vertex locator as our case study, we are investigating a new approach to charged-particle track reconstruction. This new algorithm hinges on minimizing an Ising-like Hamiltonian using matrix inversion. Performing this matrix inversion...

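    The pattern the abstract describes, minimising an Ising-like Hamiltonian by matrix inversion, can be sketched in a few lines of NumPy. The coupling matrix and bias below are made up for illustration and are not the LHCb formulation: the binary spins are relaxed to continuous values, the quadratic energy is minimised by one linear solve, and the result is thresholded back to on/off decisions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 6                                  # number of candidate track segments

    # Made-up symmetric positive-definite "coupling" matrix and bias vector;
    # in the abstract's setting these would encode segment compatibility.
    J = rng.normal(size=(n, n))
    A = J @ J.T + n * np.eye(n)
    b = rng.normal(size=n)

    # The quadratic energy E(s) = 0.5 * s^T A s - b^T s is minimised where
    # A s = b, so the relaxed minimiser comes from a single linear solve
    # (the "matrix inversion" of the abstract).
    s = np.linalg.solve(A, b)

    # Threshold the relaxed spins back to binary segment decisions.
    selected = s > 0.0
    print(selected)
    ```

    The computational interest is that the linear solve is highly parallelisable, which is presumably what makes the approach attractive for real-time reconstruction.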
  21. Viola Cavallini (Universita e INFN, Ferrara (IT))
    21/10/2024, 14:06
    Track 2 - Online and real-time computing
    Talk

    Timepix4 is an innovative multi-purpose ASIC developed by the Medipix4 Collaboration at CERN for fundamental and applied physics detection systems. It is composed of a ~7 cm$^2$ area matrix with about 230k independent pixels, each with a charge-integration circuit, a discriminator and a time-to-digital converter that measures Time-of-Arrival in bins of 195 ps width and...

  22. Anna Sinopoulou (INFN - Sezione di Catania)
    21/10/2024, 14:06
    Track 3 - Offline Computing
    Talk

    The KM3NeT collaboration is constructing two underwater neutrino detectors in the Mediterranean Sea sharing the same technology: the ARCA and ORCA detectors. ARCA is optimized for the observation of astrophysical neutrinos, while ORCA is designed to determine the neutrino mass hierarchy by detecting atmospheric neutrinos. Data from the first deployed detection units are being analyzed and...

  23. Nick Manganelli (University of Colorado Boulder (US))
    21/10/2024, 14:06
    Track 5 - Simulation and analysis tools
    Talk

    The high luminosity LHC (HL-LHC) era will deliver unprecedented luminosity and new detector capabilities for LHC experiments, leading to significant computing challenges with storing, processing, and analyzing the data. The development of small, analysis-ready storage formats like CMS NanoAOD (4kB/event), suitable for up to half of physics searches and measurements, helps achieve necessary...

  24. Giovanni Guerrieri (CERN)
    21/10/2024, 14:06
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    ATLAS Open Data for Education delivers proton-proton collision data from the ATLAS experiment at CERN to the public along with open-access resources for education and outreach. To date ATLAS has released a substantial amount of data from 8 TeV and 13 TeV collisions in an easily-accessible format and supported by dedicated documentation, software, and tutorials to ensure that everyone can...

  25. Zhipeng Yao
    21/10/2024, 14:06
    Track 3 - Offline Computing
    Talk

    The Super Tau Charm Facility (STCF) is a future electron-positron collider proposed with a center-of-mass energy ranging from 2 to 7 GeV and a peak luminosity of $0.5\times10^{35}\ {\rm cm}^{-2}{\rm s}^{-1}$. In STCF, the identification of high-momentum hadrons is critical for various physics studies, therefore two Cherenkov detectors (RICH and DTOF) are designed to boost the PID...

  26. Dr Phat Srimanobhas (Chulalongkorn University (TH))
    21/10/2024, 14:06
    Track 5 - Simulation and analysis tools
    Talk

    For the start of Run 3, the CMS Full Simulation was based on Geant4 10.7.2. In this work we report on the evolution of Geant4 usage within CMSSW and the adoption of the newest Geant4 11.2.1, which is expected to be used for CMS simulation production in 2025. Physics validation results and results on CPU performance are reported.
    For the Phase-2 simulation, several R&D efforts are being carried out. A significant...

  27. Wenlong Yuan (The University of Edinburgh (GB))
    21/10/2024, 14:06
    Track 6 - Collaborative software and maintainability
    Talk

    XRootD is a robust, scalable service that supports globally distributed data management for diverse scientific communities. Within GridPP in the UK, XRootD is used by the Astronomy, High-Energy Physics (HEP) and other communities to access >100PB of storage. The optimal configuration for XRootD varies significantly across different sites due to unique technological frameworks and site-specific...

  28. Mr Tigran Mkrtchyan (DESY)
    21/10/2024, 14:24
    Track 6 - Collaborative software and maintainability
    Talk

    For over two decades, the dCache project has provided open-source storage software to satisfy ever-more demanding storage requirements. More than 80 sites around the world rely on dCache to provide services for the LHC experiments, Belle-II, EuXFEL and many others. This can be achieved only with a well-established process running from a whiteboard, where ideas are created, through development, packaging and testing. The...

  29. Paolo Mastrandrea (Universita & INFN Pisa (IT))
    21/10/2024, 14:24
    Track 5 - Simulation and analysis tools
    Talk

    The software toolbox used for "big data" analysis in the last few years is rapidly changing. The adoption of software design approaches able to exploit the new hardware architectures and improve code expressiveness plays a pivotal role in boosting data processing speed, resources optimisation, analysis portability and analysis preservation.
    The scientific collaborations in the field of High...

  30. Maja Franz (Technical University of Applied Sciences, Regensburg)
    21/10/2024, 14:24
    Track 3 - Offline Computing
    Talk

    Noisy intermediate-scale quantum (NISQ) computers, while limited by imperfections and small scale, hold promise for near-term quantum advantages in nuclear and high-energy physics (NHEP) when coupled with co-designed quantum algorithms and special-purpose quantum processing units.
    Developing co-design approaches is essential for near-term usability, but inherent challenges exist due to the...

  31. Qianqian Shi
    21/10/2024, 14:24
    Track 3 - Offline Computing
    Talk

    The High Energy cosmic-Radiation Detection facility (HERD) is a scientific instrument planned for deployment on the Chinese Space Station, aimed at indirectly detecting dark matter and conducting gamma-ray astronomical research. HERD Offline Software (HERDOS) is developed for the HERD offline data processing, including Monte Carlo simulation, calibration, reconstruction and physics analysis...

  32. Mr Mehulkumar Shiroya (GSI Helmholtzzentrum für Schwerionenforschung GmbH, Goethe Universität, Frankfurt am Main, Germany, Helmholtz Forschungsakademie Hessen für FAIR, Frankfurt am Main, Germany)
    21/10/2024, 14:24
    Track 5 - Simulation and analysis tools
    Talk

    The Compressed Baryonic Matter (CBM) experiment is an under-construction heavy-ion physics experiment for exploring the QCD phase diagram at high $\mu_{B}$, which will use the new SIS-100 accelerator at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. The Silicon Tracking System (STS) is to be the main detector for tracking and momentum determination. A scaled-down prototype of...

  33. Axel Naumann (CERN)
    21/10/2024, 14:24
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    High Energy (Nuclear) Physics and Open Source are a perfect match with a long history. CERN has created an Open Source Program Office (CERN OSPO [1]) to help open-source hardware and software in the CERN community - for CERN staff and the experiments’ users. In the wider context, open source and CERN’s OSPO have key roles in CERN’s Open Science Policy [2]. With the OSPO, open-source projects...

  34. Ottorino Frezza (Sapienza Universita e INFN, Roma I (IT))
    21/10/2024, 14:24
    Track 2 - Online and real-time computing
    Talk

    The NA62 experiment is designed to study rare kaon decays using a decay-in-flight technique. Its Trigger and Data Acquisition (TDAQ) system is multi-level, making it critically dependent on the performance of the inter-level network.
    To manage the enormous amount of data produced by the detectors, three levels of triggers are used. The first level, L0TP, implemented using an FPGA device, has...

  35. Dr Michele Grossi (CERN)
    21/10/2024, 14:42
    Track 3 - Offline Computing
    Talk

    Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data. Quantum kernels are able to capture relationships in the data that are not efficiently computable on classical devices. However, there is no straightforward method to engineer the optimal quantum kernel for each specific use case. While...

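    As a rough classical illustration of what a quantum kernel computes (not the authors' method), the following toy "angle-encodes" scalar features into single-qubit states and takes the squared overlaps as kernel entries; real quantum kernels use multi-qubit feature maps whose overlaps are not classically tractable.

    ```python
    import numpy as np

    # Angle encoding: a scalar x becomes the single-qubit state (cos x, sin x).
    # The fidelity-style kernel is then |<phi(x_i)|phi(x_j)>|^2 = cos^2(x_i - x_j).
    def feature_map(x):
        return np.array([np.cos(x), np.sin(x)])

    def kernel_matrix(xs):
        states = np.array([feature_map(x) for x in xs])
        overlaps = states @ states.T        # all pairwise inner products
        return overlaps ** 2                # squared overlaps = fidelities

    xs = np.array([0.0, 0.3, 1.2])
    K = kernel_matrix(xs)
    print(K)
    ```

    A kernel machine (e.g. an SVM) would consume `K` exactly as it consumes a classical Gram matrix; the engineering question raised in the abstract is how to choose the feature map so that `K` separates the data well.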
  36. Matthew Feickert (University of Wisconsin Madison (US))
    21/10/2024, 14:42
    Track 5 - Simulation and analysis tools
    Talk

    The ATLAS experiment is in the process of developing a columnar analysis demonstrator, which takes advantage of the Python ecosystem of data science tools. This project is inspired by the analysis demonstrator from IRIS-HEP.
    The demonstrator employs PHYSLITE OpenData from the ATLAS collaboration, the new Run 3 compact ATLAS analysis data format. The tight integration of ROOT features within...

  37. Pablo Saiz (CERN)
    21/10/2024, 14:42
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    The CERN Open Data Portal holds over 5 petabytes of high-energy physics experiment data, serving as a hub for global scientific collaboration. Committed to Open Science principles, the portal aims to democratize access to these datasets for outreach, training, education, and independent research.
    Recognizing the limitations of current disk-based storage, we are starting a project to expand...

  38. Scott Snyder (Brookhaven National Laboratory (US))
    21/10/2024, 14:42
    Track 3 - Offline Computing
    Talk

    Run 4 of the LHC will yield an unprecedented volume of data. In order
    to process this data, the ATLAS collaboration is evolving its offline
    software to be able to use heterogenous resources such as GPUs and FPGAs.
    To reduce conversion overheads, the event data model (EDM) should be
    compatible with the requirements of these resources. While the
    ATLAS EDM has long allowed representing data...

  39. Dr Sohichiroh Aogaki (Extreme Light Infrastructure-Nuclear Physics (ELI-NP)/Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH), Str. Reactorului 30, Bucharest-Măgurele 077125, Romania)
    21/10/2024, 14:42
    Track 2 - Online and real-time computing
    Talk

    Digital ELI-NP List-mode Acquisition (DELILA) is a data acquisition (DAQ) system for the Variable Energy GAmma (VEGA) beamline system at Extreme Light Infrastructure – Nuclear Physics (ELI-NP), Magurele, Romania [1]. ELI-NP has been implementing the VEGA beamline and will fully operate it in 2026. Several different detectors/experiments (e.g. High Purity Ge (HPGe) detectors, Si...

  40. Kevin Meagher
    21/10/2024, 14:42
    Track 5 - Simulation and analysis tools
    Talk

    The IceCube Neutrino Observatory instruments one cubic kilometer of glacial ice at the geographic South Pole. Cherenkov light emitted by charged particles is detected by 5160 photomultiplier tubes embedded in the ice. Deep antarctic ice is extremely transparent, resulting in absorption lengths exceeding 100m. However, yearly variations in snow deposition rates on the glacier over the last 100...

  41. Danilo Piparo (CERN)
    21/10/2024, 14:42
    Track 6 - Collaborative software and maintainability
    Talk

    ROOT is an open source framework, freely available on GitHub, at the heart of data acquisition, processing and analysis of HE(N)P experiments, and beyond.

    It is developed collaboratively: contributions are not authored only by ROOT team members, but also by a veritable nebula of developers and scientists from universities, labs as well as the private sector. More than 1500 GitHub Pull...

  42. Eoin Clerkin (FAIR - Facility for Antiproton and Ion Research in Europe, Darmstadt)
    21/10/2024, 15:00
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    In recent years, there has been significant political and administrative interest in “Open Science”, which on the one hand has led to additional obligations but on the other to significant financial backing. For institutes and scientific collaborations, the funding opportunities may have brought some focus to these topics, but there is also the significant hope, through engagement in open science...

  43. Torri Jeske
    21/10/2024, 15:00
    Track 2 - Online and real-time computing
    Talk

    The ePIC collaboration adopted the JANA2 framework to manage its reconstruction algorithms. This framework has since evolved substantially in response to ePIC's needs. There have been three main design drivers: integrating cleanly with the PODIO-based data models and other layers of the key4hep stack, enabling external configuration of existing components, and supporting timeframe splitting...

  44. Dr Nicole Skidmore (University of Warwick)
    21/10/2024, 15:00
    Track 3 - Offline Computing
    Talk

    After two successful physics runs the LHCb experiment underwent a comprehensive upgrade to enable LHCb to run at five times the instantaneous luminosity for Run 3 of the LHC. With this upgrade, LHCb is now the largest producer of data at the LHC. A new offline dataflow was developed to facilitate fast time-to-insight whilst respecting constraints from disk and CPU resources. The Sprucing is an...

  45. Dr Purba Bhattacharya (Adamas University, Kolkata, India)
    21/10/2024, 15:00
    Track 5 - Simulation and analysis tools
    Talk

    Over the past few decades, there has been a noticeable surge in muon tomography research, also referred to as muography. This method, falling under the umbrella of Non-Destructive Evaluation (NDE), constructs a three-dimensional image of a target object by harnessing the interaction between cosmic ray muons and matter, akin to how radiography utilizes X-rays. Essentially, muography entails...

  46. Adriano Di Florio (CC-IN2P3)
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The future development projects for the Large Hadron Collider towards the HL-LHC will bring constant increases in nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \cdot 10^{34}\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for the ATLAS and CMS experiments. This rise in luminosity will directly result in an increased number of simultaneous proton collisions (pileup), up to 200, that will pose new...

  47. Saransh Chopra (Princeton University (US))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Vector is a Python library for 2D, 3D, and Lorentz vectors, especially arrays of vectors, to solve common physics problems in a NumPy-like way. Vector currently supports creating pure Python objects, NumPy arrays, and Awkward arrays of vectors. The object and Awkward backends are implemented in Numba to leverage JIT-compiled vector calculations. Furthermore, Vector also supports JAX and Dask...

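    As a hint of the kind of array-at-a-time computation Vector streamlines, here is the same calculation written directly against NumPy: the invariant mass of two-particle systems from Cartesian four-vector components (the numbers are invented for the example).

    ```python
    import numpy as np

    # Two collections of particles, given as (px, py, pz, E) component arrays.
    px1, py1, pz1, E1 = np.array([10.0]), np.array([0.0]), np.array([5.0]), np.array([20.0])
    px2, py2, pz2, E2 = np.array([-10.0]), np.array([0.0]), np.array([5.0]), np.array([20.0])

    # Component-wise sums give the combined system; the invariant mass follows
    # from m^2 = E^2 - |p|^2, evaluated for the whole array at once.
    E, px, py, pz = E1 + E2, px1 + px2, py1 + py2, pz1 + pz2
    mass = np.sqrt(E**2 - px**2 - py**2 - pz**2)
    print(mass)
    ```

    With Vector the same result comes from adding two vector arrays and reading a mass property, with the backend (object, NumPy, Awkward) chosen to fit the analysis; see the library's documentation for the exact API.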
  48. Michael Boehler (Albert Ludwigs Universitaet Freiburg (DE))
    21/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    New strategies for the provisioning of compute resources, e.g. in the form of dynamically integrated resources enabled by the COBalD/TARDIS software toolkit, require a new approach of collecting accounting data. AUDITOR (AccoUnting DatahandlIng Toolbox for Opportunistic Resources), a flexible and expandable accounting ecosystem that can cover a wide range of use cases and infrastructures, was...

  49. Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
    21/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The aim of this paper is to give an overview of the progress made in the EOS project, the large-scale data storage system developed at CERN, in preparation for and during LHC Run-3. Developments consist of further simplification of the service architecture, metadata performance improvements, new memory inventory and cost & value interfaces, a new scheduler implementation, a generated...

  50. Dario Barberis (Università e INFN Genova (IT))
    21/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The ATLAS detector produces a wealth of information for each recorded event. Standard calibration and reconstruction procedures reduce this information to physics objects that can be used as input to most analyses; nevertheless, there are very specific analyses that need full information from some of the ATLAS subdetectors, or enhanced calibration and/or reconstruction algorithms. For these...

  51. Leah-Louisa Sieder (Technische Universitaet Dresden (DE))
    21/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The CMS Experiment at the CERN Large Hadron Collider (LHC) relies on a Level-1 Trigger system (L1T) to process in real time all potential collisions, happening at a rate of 40 MHz, and select the most promising ones for data acquisition and further processing. The CMS upgrades for the upcoming high-luminosity LHC run will vastly improve the quality of the L1T event reconstruction, providing...

  52. Clemens Lange (Paul Scherrer Institute (CH))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The CMS experiment has recently established a new Common Analysis Tools (CAT) group. The CAT group implements a forum for the discussion, dissemination, organization and development of analysis tools, broadly bridging the gap between the CMS data and simulation datasets and the publication-grade plots and results. In this talk we discuss some of the recent developments carried out in the...

  53. Oliver Lantwin (INFN Napoli)
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The recently approved SHiP experiment aims to search for new physics at the intensity frontier, including feebly interacting particles and light dark matter, and perform precision measurements of tau neutrinos.

    To realize its full discovery potential, the SHiP software framework is crucial; it faces some unique challenges due to the broad range of models under study and the extreme...

  54. Matteo Bartolini (Universita e INFN, Firenze (IT))
    21/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    Data analysis in the field of High Energy Physics presents typical big data requirements, such as the vast amount of data to be processed efficiently and quickly. The Large Hadron Collider in its high luminosity phase will produce about 100 PB/year of data, ushering in the era of high precision physics. Currently, analysts are building and sharing their software on git-based platforms which...

  55. Wenxing Fang
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The BESIII experiment operates as an electron-positron collider in the tau-charm energy region, pursuing a range of physics goals related to charm, charmonium, light hadron decays, and so on. Among these objectives, achieving accurate particle identification (PID) plays a crucial role, ensuring both high efficiency and low systematic uncertainty. In the BESIII experiment, PID performance...

  56. Di Jiang (Institute of High Energy Physics, Chinese Academy of Sciences), Ye Yuan (Institute of High Energy Physics, Beijing)
    21/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    A modern version control system is capable of performing Continuous Integration (CI) and Continuous Deployment (CD) in a safe and reliable manner. Many High Energy Physics experiments and software projects now develop on such modern tools, GitHub for example. However, refactoring a large-scale running system can be challenging and difficult to execute. This is the...

  57. Jiri Chudoba (Czech Academy of Sciences (CZ))
    21/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    Users may have difficulty finding the information they need when documentation for a product is spread across many web pages or email forums. We have developed and tested an AI-based tool that helps users find answers to their questions. Docu-bot uses a Retrieval-Augmented Generation solution to generate answers to various questions. It...

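    The retrieval step of such a Retrieval-Augmented-Generation pipeline can be sketched in a few lines; this toy ranks documentation snippets by bag-of-words cosine similarity and packs the top hits into a prompt. A production system like Docu-bot would presumably use learned embeddings instead, and the snippets and query here are invented.

    ```python
    import math
    from collections import Counter

    # Toy documentation corpus standing in for real product pages.
    docs = [
        "set the storage quota with the admin command",
        "user authentication uses x509 certificates",
        "increase the quota for a user directory",
    ]

    def bow(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query, k=2):
        q = bow(query)
        return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

    # Assemble the retrieved context into a grounded prompt for the model.
    context = retrieve("how do I change a quota")
    prompt = "Answer using only this context:\n" + "\n".join(context) \
             + "\nQ: how do I change a quota"
    print(context)
    ```

    The generation step then sends `prompt` to a language model, so the answer is grounded in the retrieved documentation rather than the model's memory.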
  58. Pere Mato (CERN)
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    EDM4hep aims to establish a standard event data model for the storage and exchange of event data in HEP experiments, thereby fostering collaboration across various experiments and analysis frameworks. The Julia package EDM4hep.jl is capable of generating Julia-friendly structures for the EDM4hep data model and reading event data files in ROOT format (either TTree or RNTuple) that are written by ...

    Go to contribution page
  59. Rodrigo Sierra (CERN)
    21/10/2024, 15:18
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Poster

    2024 marks not just CERN’s 70th birthday but also the end of analogue telephony at the laboratory. Traditional phone exchanges and the associated copper cabling cannot deliver 21st-century communication services, and a decade-long project to modernize CERN’s telephony infrastructure was completed earlier this year.
    We report here on CERN’s modern fixed telephony infrastructure, firstly our...

    Go to contribution page
  60. John Wu (LAWRENCE BERKELEY NATIONAL LABORATORY)
    21/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The dCache storage management system at Brookhaven National Lab plays a vital role as a disk cache, storing extensive datasets from high-energy physics experiments, mainly the ATLAS experiment. Given that dCache’s storage is significantly smaller than the total ATLAS data, it’s crucial to have an efficient cache management policy. A common approach is to keep files that are accessed often,...
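    One common family of such policies keeps frequently accessed files and evicts the rest. A toy least-frequently-used cache (with recency as tie-breaker) is sketched below; it is purely illustrative and not the policy dCache actually implements:

```python
from collections import OrderedDict

class ToyLFUCache:
    """Evict the least-frequently-accessed file when full;
    ties go to the least recently used. Illustration only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = OrderedDict()  # file -> access count; order tracks recency

    def access(self, name):
        if name in self.counts:
            self.counts[name] += 1
            self.counts.move_to_end(name)  # most recently used goes last
            return
        if len(self.counts) >= self.capacity:
            # min() scans in recency order (oldest first), so ties are
            # broken in favour of evicting the least recently used file
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[name] = 1

cache = ToyLFUCache(capacity=2)
for name in ["a", "a", "b", "c"]:  # "b" is accessed least often
    cache.access(name)
```

    After the accesses above, "b" has been evicted to make room for "c", while the twice-accessed "a" is retained.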

    Go to contribution page
  61. Justin Spradley
    21/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    We describe how to effectively and efficiently stage a large number of requests from an IBM HPSS environment, using a MariaDB database to keep track of requests and Python for all business logic and for consuming the HPSS API. The goal is to scale to a large number of requests, to meet the different needs of different experiments, and to make the program adaptable enough to allow for...
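    A minimal sketch of this pattern, with Python's built-in sqlite3 standing in for MariaDB and a placeholder where the HPSS API would be called (table layout and names are illustrative, not the production schema):

```python
import sqlite3

# Toy request-tracking schema; sqlite3 stands in for MariaDB here.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE requests (
    id INTEGER PRIMARY KEY,
    path TEXT NOT NULL,
    experiment TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'queued')""")

def enqueue(path, experiment):
    """Record a new staging request."""
    conn.execute("INSERT INTO requests (path, experiment) VALUES (?, ?)",
                 (path, experiment))

def stage_next(batch=10):
    """Claim a batch of queued requests and mark them staged."""
    rows = conn.execute(
        "SELECT id, path FROM requests WHERE status = 'queued' ORDER BY id LIMIT ?",
        (batch,)).fetchall()
    for req_id, path in rows:
        # a real service would call the HPSS API here to stage `path`
        conn.execute("UPDATE requests SET status = 'staged' WHERE id = ?", (req_id,))
    return [path for _, path in rows]

enqueue("/hpss/expA/file1", "expA")
enqueue("/hpss/expB/file2", "expB")
staged = stage_next()
```

    Keeping state in the database rather than in process memory is what lets several worker processes scale out while sharing one view of the queue.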

    Go to contribution page
  62. Matteo Bunino (CERN)
    21/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    The interTwin project, funded by the European Commission, is at the forefront of leveraging 'Digital Twins' across various scientific domains, with a particular emphasis on physics and earth observation. One of the most advanced use-cases of interTwin is event generation for particle detector simulation at CERN. interTwin enables particle detector simulations to leverage AI methodologies on...

    Go to contribution page
  63. Tomasz Bold (AGH University of Krakow (PL))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The poster will present the FunRootAna library.
    This is a simple framework allowing ROOT analysis to be done in a more functional way. In comparison to RDataFrame it offers a more functional feel for the data analysis and can be used in any circumstances, not only with ROOT trees. Collections processing is inspired by Scala and Apache Spark, and histogram creation and filling are much simplified. As...
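    The flavour of such functional collection processing can be sketched in plain Python; the event records and selection cut below are invented, and FunRootAna itself is a C++ library:

```python
# Toy events: each event carries a variable-length list of transverse momenta.
events = [
    {"pt": [12.0, 45.0, 7.5]},
    {"pt": [30.0, 2.0]},
    {"pt": [60.0]},
]

# Functional-style chain: select high-pT objects, count them per event...
high_pt_counts = [
    len([pt for pt in ev["pt"] if pt > 10.0])
    for ev in events
]

# ...then fill a minimal "histogram": bin value -> number of entries.
histogram = {}
for n in high_pt_counts:
    histogram[n] = histogram.get(n, 0) + 1
```

    The appeal of the functional style is that the selection, transformation, and histogram-filling steps compose without mutable event-loop bookkeeping.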

    Go to contribution page
  64. Xiangyang Ju (Lawrence Berkeley National Lab. (US))
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    Machine Learning (ML)-based algorithms play increasingly important roles in almost all aspects of data processing in the ATLAS experiment at CERN. Diverse ML models are used in detector simulation, event reconstruction, and data analysis. They are being deployed in the ATLAS software framework, Athena. Our primary approach to perform ML inference in Athena is to use ONNXRuntime. ONNXRuntime is...

    Go to contribution page
  65. Muhammad Imran (National Centre for Physics (PK))
    21/10/2024, 15:18
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Poster

    CMS Analysis Database Interface (CADI) is a management tool for physics publications in the CMS experiment. It acts as a central database for the CMS collaboration, keeping track of the various analysis projects being conducted by researchers. Each analysis paper written by the authors goes through an extensive journey from early analysis to publication. There are various stakeholders involved...

    Go to contribution page
  66. Dr Christophe COLLARD (Laboratoire des 2 Infinis - Toulouse, CNRS / Univ. Paul Sabatier)
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    Graph neural networks (GNN) have emerged as a cornerstone of ML-based reconstruction and analysis algorithms in particle physics. Many of the proposed algorithms are intended to be deployed close to the beginning of the data processing chain, e.g. in event reconstruction software of running and future collider-based experiments. For GNN to operate, the input data are represented as graphs. The...
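    A toy version of turning hit data into a graph, connecting hits within a fixed radius; the coordinates and cut are invented, and production pipelines typically use detector-geometry-aware or learned edge construction:

```python
import math

# Toy detector hits as (x, y) points; edges link hits closer than `radius`.
hits = [(0.0, 0.0), (0.5, 0.1), (3.0, 3.0), (3.2, 2.9)]

def build_edges(points, radius):
    """Return index pairs (i, j) for all point pairs within `radius`."""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if math.hypot(dx, dy) < radius:
                edges.append((i, j))
    return edges

edges = build_edges(hits, radius=1.0)
```

    The resulting node and edge lists are exactly the representation a GNN consumes; the quadratic pair loop here is what geometric or learned edge filters replace at scale.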

    Go to contribution page
  67. Jan Gavranovic (Jozef Stefan Institute (SI))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Monte Carlo (MC) simulations are a crucial component when analysing the Standard Model and New physics processes at the Large Hadron Collider. The goal of this work is to explore the performance of generative models for complementing the statistics of classical MC simulations in the final stage of data analysis by generating additional synthetic data that follows the same kinematic...

    Go to contribution page
  68. Jonathan Samudio (Baylor University (US))
    21/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    In response to increasing data challenges, CMS has adopted GPU offloading at the High-Level Trigger (HLT). However, GPU acceleration is often hardware-specific and increases the maintenance burden on software development. The Alpaka (Abstraction Library for Parallel Kernel Acceleration) portability library offers a solution to this issue and has been implemented into the CMS...

    Go to contribution page
  69. Aleksandra Poreba (CERN / Ruprecht Karls Universitaet Heidelberg (DE))
    21/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    With the upcoming upgrade of High Luminosity LHC, the need for computation
    power will increase in the ATLAS trigger system by more than an order of
    magnitude. Therefore, new particle track reconstruction techniques are explored
    by the ATLAS collaboration, including the usage of Graph Neural Networks (GNN).
    The project focusing on that research, GNN4ITk, considers several...

    Go to contribution page
  70. Hector Gutierrez Arance (Univ. of Valencia and CSIC (ES))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The escalating demand for data processing in particle physics research has spurred the exploration of novel technologies to enhance the efficiency and speed of calculations. This study presents the development of a port of MADGRAPH, a widely used tool in particle collision simulations, to FPGAs using High-Level Synthesis (HLS).
    Experimental evaluation is ongoing, but preliminary assessments...

    Go to contribution page
  71. Claire Antel (Universite de Geneve (CH))
    21/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    Deep sets network architectures are useful for finding correlations in unordered, variable-length data input, and have the interesting feature of being permutation invariant. Their use on FPGAs would open up accelerated machine learning in areas where the input has no fixed length or order, such as inner detector hits for clustering or associated particle tracks for jet...
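    The permutation invariance comes from sum pooling over per-element embeddings, as in this minimal numeric sketch; the maps phi and rho here are toy stand-ins for the learned networks:

```python
# Minimal deep-sets-style model: per-element map (phi), permutation-invariant
# sum pooling, then an output map (rho). Toy fixed weights, not trained ones.
def phi(x):
    return (2.0 * x, x * x)             # per-element feature embedding

def rho(pooled):
    return pooled[0] + 0.5 * pooled[1]  # combine the pooled features

def deep_set(elements):
    pooled = [sum(f) for f in zip(*(phi(x) for x in elements))]
    return rho(pooled)

a = deep_set([1.0, 2.0, 3.0])
b = deep_set([3.0, 1.0, 2.0])  # the same set, in a different order
```

    Because the only interaction between elements is a sum, reordering the input cannot change the output, which is the property that makes the architecture attractive for unordered hit or track collections.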

    Go to contribution page
  72. Federico Andrea Corchia (Universita e INFN, Bologna (IT))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Simulation of the detector response is a major computational challenge in modern High-Energy Physics experiments, accounting for about 40% of the total computational resources used in ATLAS. The simulation of the calorimeter response is particularly demanding, consuming about 80% of the total simulation time.
    In order to make the best use of the available computational resources, fast...

    Go to contribution page
  73. Ismael Posada Trobo (CERN)
    21/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    GitLab Runners have been deployed at CERN since 2015. A GitLab runner is an application that works with GitLab Continuous Integration and Continuous Delivery (CI/CD) to run jobs in a pipeline. CERN provides runners that are available to the whole GitLab instance and can be used by all eligible users. Until 2023, CERN was providing a fixed amount of Docker runners executing in OpenStack virtual...

    Go to contribution page
  74. Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN), Luca Mascetti (CERN)
    21/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    Amazon S3 is a leading object storage service known for its scalability, data reliability, security and performance. It is used as a storage solution for data lakes, websites, mobile applications, backup, archiving and more. With its management features, users can optimise data access to meet specific requirements and compliance standards. Given its popularity, many tools utilise the S3...

    Go to contribution page
  75. Jack Henschel (CERN)
    21/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Since 2016, CERN has been using the OpenShift Kubernetes Distribution to host a platform-as-a-service (PaaS). This service is optimized for hosting web applications and has grown to tens of thousands of individual websites. By now, we have established a reliable framework that deals with varied use cases: thousands of websites per ingress controller (8K+ hostnames), handling long-lived...

    Go to contribution page
  76. Daniel Lupu (INFN-LNL)
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Reinforcement Learning is emerging as a viable technology to implement autonomous beam dynamics setup and optimization in particle accelerators. A Deep Learning agent can be trained to efficiently explore the parameter space of an accelerator control system and converge to the optimal beam setup much faster than traditional methods. Training these models requires programmatic execution of a...

    Go to contribution page
  77. Mustafa Andre Schmidt (Bergische Universitaet Wuppertal (DE))
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    In ATLAS and other high-energy physics experiments, the integrity of Monte-Carlo (MC) simulations is crucial for reliable physics analysis. The continuous evolution of MC generators necessitates regular validation to ensure the accuracy of simulations. We introduce an enhanced validation framework incorporating the Job Execution Monitor (JEM) resulting in the established Physics Modeling Group...

    Go to contribution page
  78. 张豪森 zhanghaosen
    21/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO), located in Southern China, is a multi-purpose neutrino experiment that consists of a central detector, a water Cherenkov detector and a top tracker. The primary goal of the experiment is to determine the neutrino mass ordering (NMO) and precisely measure neutrino oscillation parameters. The central detector contains 20,000 tons of liquid...

    Go to contribution page
  79. Igor Soloviev (University of California Irvine (US))
    21/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The ATLAS experiment at the LHC at CERN uses a large, distributed trigger and
    data acquisition system composed of many computing nodes, networks, and
    hardware modules. Its configuration service is used to provide descriptions of
    control, monitoring, diagnostic, recovery, dataflow and data quality
    configurations, connectivity, and parameters for modules, chips, and channels
    of various...

    Go to contribution page
  80. Xinnan Wang (Institute of High Energy Physics Chinese Academy of Scinences)
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The Beijing Spectrometer (BESIII) detector is used for high-precision studies of hadron physics and tau-charm physics. Accurate and reliable particle identification (PID) is crucial to improve the signal-to-noise ratio, especially for K/π separation. The time-of-flight (TOF) system, which is based on plastic scintillators, is a powerful tool for particle identification at the BESIII experiment. The...

    Go to contribution page
  81. Rowina Caspary (Heidelberg University (DE))
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The LHCb detector, a multi-purpose detector with a main focus on the study of hadrons containing b- and c-quarks, has been upgraded to enable precision measurements at an instantaneous luminosity of $2\times10^{33}cm^{-2}s^{-1}$ at $\sqrt{s}=14$ TeV, five times higher than the previous detector capacity. With the almost completely new detector, a software-only trigger system has been developed...

    Go to contribution page
  82. Giuseppe Lo Presti (CERN)
    21/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    CERNBox is an innovative scientific collaboration platform, built using solely open-source components to meet the unique requirements of scientific workflows. Used at CERN for the last decade, the service satisfies the 35K users at CERN and seamlessly integrates with batch farms and Jupyter-based services. Powered by Reva, an open-source HTTP and gRPC server written in Go, CERNBox has...

    Go to contribution page
  83. John Winnicki (Stanford University)
    21/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    LUX-ZEPLIN (LZ) is a dark matter direct detection experiment. Employing a dual-phase xenon time projection chamber, the LZ experiment set a world-leading limit for spin-independent scattering at 36 GeV/c$^2$ in 2022, rejecting cross sections above $9.2\times10^{-48}$ cm$^2$ at the 90% confidence level. Unsupervised machine learning methods are indispensable tools in working with big data, and have been...
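    As one concrete example of such unsupervised methods, a minimal one-dimensional k-means clustering can be sketched as follows; the data and starting centres are toy values, not LZ data:

```python
# Toy 1-D k-means: alternately assign points to the nearest centre,
# then move each centre to the mean of its assigned points.
def kmeans_1d(data, centers, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centers))}
        for x in data:
            nearest = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            clusters[nearest].append(x)
        centers = [sum(v) / len(v) if v else centers[c]
                   for c, v in clusters.items()]
    return centers

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]   # two well-separated populations
centers = kmeans_1d(data, centers=[0.0, 1.0])
```

    Real analyses work in many dimensions with far more sophisticated algorithms, but the assign-then-update loop above is the core of the clustering idea.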

    Go to contribution page
  84. Witold Przygoda (Jagiellonian University (PL))
    21/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The main reconstruction and simulation software framework of the ATLAS experiment, Athena, underwent a major change during LHC Run 3 in the way the configuration step of its applications is performed. The new configuration system, called ComponentAccumulator, emphasises modularity and provides a way for standalone execution of parts of a job, as long as the inputs are available, which...
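    The accumulator idea can be caricatured in a few lines of Python: independently built configuration fragments merge into one job configuration. The class and component names below are invented for illustration and are not the actual Athena API:

```python
class Accumulator:
    """Toy configuration accumulator: collects algorithms and services."""

    def __init__(self):
        self.algorithms = []
        self.services = {}

    def add_algorithm(self, name):
        self.algorithms.append(name)

    def add_service(self, name, settings):
        self.services.setdefault(name, {}).update(settings)

    def merge(self, other):
        """Fold another fragment's configuration into this one."""
        self.algorithms.extend(other.algorithms)
        for name, settings in other.services.items():
            self.services.setdefault(name, {}).update(settings)

def tracking_cfg():
    """A self-contained fragment configuring tracking (illustrative names)."""
    acc = Accumulator()
    acc.add_algorithm("TrackFinder")
    acc.add_service("MagFieldSvc", {"map": "full"})
    return acc

def calo_cfg():
    """An independent fragment configuring calorimetry."""
    acc = Accumulator()
    acc.add_algorithm("CaloClusterMaker")
    return acc

job = tracking_cfg()
job.merge(calo_cfg())
```

    Because each fragment is complete on its own, it can also be executed standalone for testing, which is the modularity benefit the abstract describes.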

    Go to contribution page
  85. Robin Hofsaess (KIT - Karlsruhe Institute of Technology (DE))
    21/10/2024, 16:15
    Track 7 - Computing Infrastructure
    Talk

    A robust computing infrastructure is essential for the success of scientific collaborations. However, smaller or newly founded collaborations often lack the resources to establish and maintain such an infrastructure, resulting in a fragmented analysis environment with varying solutions for different members. This fragmentation can lead to inefficiencies, hinder reproducibility, and create...

    Go to contribution page
  86. Fang-Ying Tsai (Stony Brook University (US))
    21/10/2024, 16:15
    Track 5 - Simulation and analysis tools
    Talk

    The ATLAS Fast Chain represents a significant advancement in streamlining Monte Carlo (MC) production efficiency, specifically for the High-Luminosity Large Hadron Collider (HL-LHC). This project aims to simplify the production of Analysis Object Data (AODs) and potentially Derived Analysis Object Data (DAODs) from generated events with a single transform, facilitating rapid reproduction of...

    Go to contribution page
  87. Alessandra Forti (University of Manchester (GB))
    21/10/2024, 16:15
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    ATLAS is participating in the WLCG Data Challenges, a bi-yearly program established in 2021 to prepare for the data rates of the High-Luminosity LHC (HL-LHC). In each challenge, transfer rates are increased to ensure preparedness for the full rates by 2029. The goal of the 2024 Data Challenge (DC24) was to reach 25% of the HL-LHC expected transfer rates, with each experiment deciding how to execute...

    Go to contribution page
  88. Mr Greg Corbett
    21/10/2024, 16:15
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    The Science and Technology Facilities Council (STFC), part of UK Research and Innovation (UKRI), has a rich tradition of fostering public engagement and outreach, as part of its strategic aim to showcase and celebrate STFC science, technology, and staff, both within its National Laboratories and throughout the broader community.

    As part of its wider programme, STFC organised two large scale...

    Go to contribution page
  89. Marco Clemencic (CERN)
    21/10/2024, 16:15
    Track 6 - Collaborative software and maintainability
    Talk

    The LHCb Software Framework Gaudi has been developed in C++ since 1998. Over the years it evolved following changes in established C++ best practices and the evolution of the C++ standard, even reaching the point of enabling the development of multi-threaded applications.
    In the past few years there have been several announcements of, and debates over, the so-called C++ successor languages...

    Go to contribution page
  90. Jan de Cuveland (Goethe University Frankfurt (DE))
    21/10/2024, 16:15
    Track 2 - Online and real-time computing
    Talk

    The CBM experiment, currently being constructed at GSI/FAIR, aims to investigate QCD at high baryon densities. The CBM First-level Event Selector (FLES) serves as the central event selection system of the experiment. It functions as a high-performance computer cluster tasked with the online analysis of physics data, including full event reconstruction, at an incoming data rate which exceeds 1...

    Go to contribution page
  91. Mateusz Jakub Fila (CERN)
    21/10/2024, 16:15
    Track 3 - Offline Computing
    Talk

    With the increasing amount of optimized and specialized hardware such as GPUs, ML cores, etc. HEP applications face the opportunity and the challenge of being enabled to take advantage of these resources, which are becoming more widely available on scientific computing sites. The Heterogenous Frameworks project aims at evaluating new methods and tools for the support of both heterogeneous...

    Go to contribution page
  92. Samuel Cadellin Skipsey
    21/10/2024, 16:33
    Track 6 - Collaborative software and maintainability
    Talk

    Recently, interest in measuring and improving the energy (and carbon) efficiency of computation in HEP, and elsewhere, has grown significantly. Measurements have been, and continue to be, made of the efficiency of various computational architectures in standardised benchmarks... but those benchmarks tend to compare only implementations in single programming languages. Similarly, comparisons of...

    Go to contribution page
  93. Christoph Wissing (Deutsches Elektronen-Synchrotron (DE))
    21/10/2024, 16:33
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    To verify the readiness of the data distribution infrastructure for the HL-LHC, which is planned to start in 2029, WLCG is organizing a series of data challenges with increasing throughput and complexity. This presentation addresses the contribution of CMS to Data Challenge 2024, which aims to reach 25% of the expected network throughput of the HL-LHC. During the challenge CMS tested various...

    Go to contribution page
  94. Beojan Stanislaus (Lawrence Berkeley National Lab. (US))
    21/10/2024, 16:33
    Track 3 - Offline Computing
    Talk

    The large increase in luminosity expected from Run 4 of the LHC presents the ATLAS experiment with a new scale of computing challenge, and we can no longer restrict our computing to CPUs in a High Throughput Computing paradigm. We must make full use of the High Performance Computing resources available to us, exploiting accelerators and making efficient use of large jobs over many nodes.
    Here...

    Go to contribution page
  95. Tadej Novak (Jozef Stefan Institute (SI))
    21/10/2024, 16:33
    Track 5 - Simulation and analysis tools
    Talk

    Simulation of physics processes and detector response is a vital part of high energy physics research, but it also represents a large fraction of computing cost. Generative machine learning is successfully complementing full (standard, Geant4-based) simulation as part of fast simulation setups, improving the performance compared to classical approaches.
    A lot of attention has been given to...

    Go to contribution page
  96. Serguei Kolos (University of California Irvine (US))
    21/10/2024, 16:33
    Track 2 - Online and real-time computing
    Talk

    The High-Luminosity Large Hadron Collider (HL-LHC), scheduled to start
    operating in 2029, aims to increase the instantaneous luminosity by a factor of
    10 compared to the LHC. To match this increase, the ATLAS experiment has been
    implementing a major upgrade program divided into two phases. The first phase
    (Phase-I), completed in 2022, introduced new trigger and detector systems that
    have...

    Go to contribution page
  97. Dr Marcus Ebert (University of Victoria)
    21/10/2024, 16:33
    Track 7 - Computing Infrastructure
    Talk

    BaBar stopped data taking in 2008 but its data is still analyzed by the collaboration. In 2021 a new computing system outside of the SLAC National Accelerator Laboratory was developed, and major changes were needed to preserve the collaboration’s ability to analyze the data while all user-facing front ends stayed the same. The new computing system was put in production in 2022 and...

    Go to contribution page
  98. Kyle Knoepfel (Fermi National Accelerator Laboratory)
    21/10/2024, 16:33
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    Since 1983 the Italian groups collaborating with Fermilab (US) have been running a 2-month summer training program for Master’s students. While in the first year the program involved only 4 physics students, in the following years it was extended to engineering students. Many students have continued their collaboration with Fermilab through their Master’s theses and PhDs.
    The program has involved...

    Go to contribution page
  99. Seth Johnson (Oak Ridge National Laboratory (US))
    21/10/2024, 16:51
    Track 5 - Simulation and analysis tools
    Talk

    Celeritas is a rapidly developing GPU-enabled detector simulation code aimed at accelerating the most computationally intensive problems in high energy physics. This presentation will highlight exciting new performance results for complex subdetectors from the CMS and ATLAS experiments using EM secondaries from hadronic interactions. The performance will be compared on both Nvidia and AMD GPUs...

    Go to contribution page
  100. Dainius Simelevicius (Vilnius University (LT)), Dainius Simelevicius (CERN, Vilnius University)
    21/10/2024, 16:51
    Track 2 - Online and real-time computing
    Talk

    The data acquisition (DAQ) system stands as an essential component within the CMS experiment at CERN. It relies on a large network system of computers with demanding requirements on control, monitoring, configuration and high throughput communication. Furthermore, the DAQ system must accommodate various application scenarios, such as interfacing with external systems, accessing custom...

    Go to contribution page
  101. Dr Andrea Bocci (CERN)
    21/10/2024, 16:51
    Track 3 - Offline Computing
    Talk

    To achieve better computational efficiency and exploit a wider range of computing resources, the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs. To support additional back-ends, as well to avoid the need to write, validate and maintain a separate implementation of the reconstruction algorithms for each back-end, CMS has adopted the...

    Go to contribution page
  102. Xoan Carlos Cosmed Peralejo (CERN)
    21/10/2024, 16:51
    Track 7 - Computing Infrastructure
    Talk

    Although wireless IoT devices are omnipresent in our homes and workplaces, their use in particle accelerators is still uncommon. While the advantages of movable sensors communicating over wireless networks are obvious, the harsh radiation environment of a particle accelerator has been an obstacle to the use of such sensitive devices. Recently, though, CERN has developed a radiation-hard...

    Go to contribution page
  103. Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
    21/10/2024, 16:51
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    ALICE introduced ground-breaking advances in data processing and storage requirements, presenting the CERN IT data centre with new challenges as the experiment with the highest data-recording requirement of all. For these reasons, the EOS O2 storage system was designed to be cost-efficient, highly redundant, and to maximise data resilience, keeping data accessible even in the event of unexpected...

    Go to contribution page
  104. Dr Vincenzo Eduardo Padulano (CERN)
    21/10/2024, 16:51
    Track 6 - Collaborative software and maintainability
    Talk

    ROOT is a software toolkit at the core of LHC experiments and HENP collaborations worldwide, widely used by the community and in continuous development with it. The package is available through many channels that cater to different types of users with different needs, ranging from software releases on the LCG stacks provided via CVMFS for the benefit of all HENP users, to pre-built binaries...

    Go to contribution page
  105. Lauren Mowberry (STFC UKRI)
    21/10/2024, 16:51
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    The Remote^3 (Remote Cubed) project is an STFC Public Engagement Leadership Fellowship funded activity, organised in collaboration between the University of Edinburgh (UoE), and STFC’s Public Engagement Team, Scientific Computing Department, and Boulby Underground Laboratory – part of STFC Particle Physics.
    Remote^3 works with school audiences to challenge teams of young people to design,...

    Go to contribution page
  106. Dirk Hutter (Goethe University Frankfurt (DE))
    21/10/2024, 17:09
    Track 2 - Online and real-time computing
    Talk

    The CBM First-level Event Selector (FLES) serves as the central data processing and event selection system for the upcoming CBM experiment at FAIR. Designed as a scalable high-performance computing cluster, it facilitates online analysis of unfiltered physics data at rates surpassing 1 TByte/s.

    As the input to the FLES, the CBM detector subsystems deliver free-streaming, self-triggered data...

    Go to contribution page
  107. Juan Manuel Guijarro (CERN)
    21/10/2024, 17:09
    Track 6 - Collaborative software and maintainability
    Talk

    In the vast landscape of CERN's internal documentation, finding and accessing relevant detailed information remains a complex and time-consuming task. To address this challenge, the AccGPT project proposes the development of an intelligent chatbot leveraging Natural Language Processing (NLP) technologies. The primary objective is to harness open-source Large Language Models (LLMs) to create a...

    Go to contribution page
  108. Juan Gonzalez Caminero
    21/10/2024, 17:09
    Track 5 - Simulation and analysis tools
    Talk

    An important alternative for boosting the throughput of simulation applications is to take advantage of accelerator hardware, by making general particle transport simulation for high-energy physics (HEP) single-instruction-multiple-thread (SIMT) friendly. This challenge is not yet resolved due to difficulties in mapping the complexity of Geant4 components and workflow to the massive...

    Go to contribution page
  109. Abhijit Mathad (CERN)
    21/10/2024, 17:09
    Track 3 - Offline Computing
    Talk

    As the Large Hadron Collider progresses through Run 3, the LHCb experiment has made significant strides in upgrading its offline analysis framework and associated tools to efficiently handle the increasing volumes of data generated. Numerous specialised algorithms have been developed for offline analysis, with a central innovation being FunTuple--a newly developed component designed to...

    Go to contribution page
  110. Ms Joni Pham (University of Melbourne (AU))
    21/10/2024, 17:09
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    Virtual Visits have been an integral component of the ATLAS Education and Outreach programme since their inception in 2010. Over the years, collaboration members have hosted visits for tens of thousands of visitors located all over the globe. In 2024 alone, there have already been 59 visits through the month of May. Visitors in classrooms, festivals, events or even at home have a unique...

    Go to contribution page
  111. Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)), Marian Babik (CERN), Tristan Sullivan (University of Victoria)
    21/10/2024, 17:09
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    High-Energy Physics (HEP) experiments rely on complex, global networks to interconnect collaborating sites, data centers, and scientific instruments. Managing these networks for data-intensive scientific projects presents significant challenges because of the ever-increasing volume of data transferred, diverse project requirements with varying quality of service needs, multi-domain...

    Go to contribution page
  112. Aksieniia Shtimmerman (INFN-CNAF), Giacomo Levrini (Universita e INFN, Bologna (IT))
    21/10/2024, 17:09
    Track 7 - Computing Infrastructure
    Talk

    Modern data centres provide the efficient Information Technology (IT) infrastructure needed to deliver resources,
    services, monitoring systems and collected data in a timely fashion. At the same time, data centres have been continuously
    evolving, foreseeing large increases of resources and adapting to cover multifaceted niches.

    The CNAF group at INFN (National Institute for Nuclear...

    Go to contribution page
  113. James William Walder (Science and Technology Facilities Council STFC (GB))
    21/10/2024, 17:27
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    To address the needs of forthcoming projects such as the Square Kilometre Array (SKA) and the HL-LHC, there is a critical demand for data transfer nodes (DTNs) to realise O(100) Gb/s of data movement. This high throughput can be attained through combinations of increased transfer concurrency and improvements in the speed of individual transfers. At the Rutherford Appleton Laboratory...
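    The concurrency side of that equation can be sketched with a thread pool aggregating several mock transfers; the transfer function below is a stand-in, not a real data-movement client:

```python
from concurrent.futures import ThreadPoolExecutor

def transfer(file_size_gb):
    """Mock transfer: a real client would move the bytes here;
    we simply report the size moved."""
    return file_size_gb

files = [10, 20, 30, 40]  # file sizes in GB

# Run the "transfers" concurrently and aggregate the bytes moved.
with ThreadPoolExecutor(max_workers=4) as pool:
    moved = list(pool.map(transfer, files))

total_gb = sum(moved)
```

    In a real DTN the workers would be I/O-bound network streams, so running several in parallel keeps the link full while any one stream waits on disk or TCP.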

    Go to contribution page
  114. Alina Corso Radu (University of California Irvine (US))
    21/10/2024, 17:27
    Track 2 - Online and real-time computing
    Talk

    The ATLAS experiment at the Large Hadron Collider (LHC) at CERN continuously
    evolves its Trigger and Data Acquisition (TDAQ) system to meet the challenges
    of new physics goals and technological advancements. As ATLAS prepares for the
    Phase-II Run 4 of the LHC, significant enhancements in the TDAQ Controls and
    Configuration tools have been designed to ensure efficient data...

    Go to contribution page
  115. Severin Diederichs (CERN)
    21/10/2024, 17:27
    Track 5 - Simulation and analysis tools
    Talk

    The demands for Monte-Carlo simulation are drastically increasing with the high-luminosity upgrade of the Large Hadron Collider, and are expected to exceed the currently available compute resources. At the same time, modern high-performance computing has adopted powerful hardware accelerators, particularly GPUs. AdePT is one of the projects aiming to address the demanding computational needs by...

    Go to contribution page
  116. Jim Pivarski (Princeton University)
    21/10/2024, 17:27
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    If a physicist needs to ask for help on some software, where should they go? For a specific software package, there may be a preferred website, such as the ROOT Forum or a GitHub/GitLab Issues page, but how would they find this out? What about problems that cross package boundaries? What if they haven't found a tool that would solve their problem yet?

    HEP-Help (hep-help.org) is intended as...

    Go to contribution page
  117. Dr Nathan Grieser (University of Cincinnati (US))
    21/10/2024, 17:27
    Track 6 - Collaborative software and maintainability
    Talk

    The LHCb collaboration continues to primarily utilize the Run 1 and Run 2 legacy datasets well into Run 3. As the operational focus shifts from the legacy data to the live Run 3 samples, it is vital that a sustainable and efficient system is in place to allow analysts to continue to profit from the legacy datasets. The LHCb Stripping project is the user-facing offline data-processing stage...

    Go to contribution page
  118. Christian Voss
    21/10/2024, 17:27
    Track 7 - Computing Infrastructure
    Talk

    DESY operates multiple dCache storage instances for multiple communities. As each community has different workflows and workloads, their dCache installations range from very large instances with more than 100 PB of data, to instances with up to billions of files or instances with significant LAN and WAN I/O.
    To successfully operate all instances and quickly identify issues and performance...

    Go to contribution page
  119. Dr Michael Hudson Kirby (Brookhaven National Laboratory (US))
    21/10/2024, 17:27
    Track 3 - Offline Computing
    Talk

    We summarize the status of the Deep Underground Neutrino Experiment (DUNE) software and computing development. The DUNE Collaboration has been successfully operating the DUNE prototype detectors at both Fermilab and CERN, and testing offline computing services, software, and infrastructure using the data collected. We give an overview of results from end-to-end testing of systems needed to...

    Go to contribution page
  120. Daniel Peter Traynor
    21/10/2024, 17:45
    Track 7 - Computing Infrastructure
    Talk

    Queen Mary University of London (QMUL) has recently finished refurbishing the data centre that houses our computing cluster supporting the WLCG project. After 20 years of operation the original data centre had significant cooling issues, and rising energy prices, together with growing awareness of climate change, drove the need for refurbishment. In addition there is a need to increase the...

    Go to contribution page
  121. Piotr Konopka (CERN)
    21/10/2024, 17:45
    Track 3 - Offline Computing
    Talk

    Since the mid-2010s, the ALICE experiment at CERN has seen significant changes in its software, especially with the introduction of the Online-Offline (O²) computing system during Long Shutdown 2. This evolution required continuous adaptation of the Quality Control (QC) framework responsible for online Data Quality Monitoring (DQM) and offline Quality Assurance (QA).

    After a general...

    Go to contribution page
  122. Thomas Byrne, Thomas Jyothish (STFC)
    21/10/2024, 17:45
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    To address the need for high transfer throughput for projects such as the LHC experiments, including the upcoming HL-LHC, it is important to make optimal and sustainable use of our available capacity. Load balancing algorithms play a crucial role in distributing incoming network traffic across multiple servers, ensuring optimal resource utilization, preventing server overload, and enhancing...
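    As a concrete illustration of one family of such algorithms (a generic weighted least-connections sketch of mine, not the scheme evaluated in the talk; the server names and weights are invented):

```python
import random

def pick_server(active: dict, weights: dict) -> str:
    """Weighted least-connections: choose the server with the smallest
    active-connection count relative to its capacity weight; break ties randomly."""
    best = min(active[s] / weights[s] for s in active)
    candidates = [s for s in active if active[s] / weights[s] == best]
    return random.choice(candidates)

# Three hypothetical gateway hosts; gw2 has twice the capacity of the others.
active = {"gw1": 10, "gw2": 12, "gw3": 9}
weights = {"gw1": 1.0, "gw2": 2.0, "gw3": 1.0}
print(pick_server(active, weights))   # gw2: 12/2 = 6 is the lowest relative load
```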

    Go to contribution page
  123. Bartosz Mindur (AGH University of Krakow (PL))
    21/10/2024, 17:45
    Track 6 - Collaborative software and maintainability
    Talk

    I will present the history of the design, implementation, testing, and release of the production version of the C++-based software for the Gas Gain Stabilization System (GGSS) used in the TRT detector at the ATLAS experiment. This system operates 24/7 in the CERN Point 1 environment under the control of the Detector Control System (DCS) and plays a crucial role in delivering reliable data...

    Go to contribution page
  124. Gordon Watts (University of Washington (US))
    21/10/2024, 17:45
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    Large Language Models (LLMs) have emerged as a transformative tool in society and are steadily working their way into scientific workflows. Despite their known tendency to hallucinate, rendering them perhaps unsuitable for direct scientific pipelines, LLMs excel in text-related tasks, offering a unique solution to manage the overwhelming volume of information presented at large conferences...

    Go to contribution page
  125. Simon Blyth (IHEP, CAS)
    21/10/2024, 17:45
    Track 5 - Simulation and analysis tools
    Talk

    Opticks is an open source project that accelerates optical photon simulation
    by integrating NVIDIA GPU ray tracing, accessed via the NVIDIA OptiX API, with
    Geant4 toolkit based simulations.
    Optical photon simulation times of 14 seconds per 100 million photons
    have been measured within a fully analytic JUNO GPU geometry
    auto-translated from the Geant4 geometry when using a single...

    Go to contribution page
  126. Maria Adriana Sabia
    21/10/2024, 17:45
    Track 2 - Online and real-time computing
    Talk

    The DarkSide-20k detector is now under construction in the Gran Sasso National Laboratory (LNGS) in Italy, the biggest underground physics facility. It is designed to directly detect dark matter by observing weakly interacting massive particles (WIMPs) scattering off the nuclei in 20 tonnes of underground-sourced liquid argon in the dual-phase time projection chamber (TPC). Additionally two...

    Go to contribution page
  127. Agnieszka Dziurda (Polish Academy of Sciences (PL)), Tomasz Szumlak (AGH University of Krakow (PL))
    21/10/2024, 18:30

    Place: AGH University main building A0, Mickiewicza 30 Av., Krakow
    The route from the main venue is here:
    https://www.google.com/maps/d/edit?mid=1lzudzN5SpFXrPZnD1y5GEpd18xuZY6s&usp=sharing

    Go to contribution page
  128. 22/10/2024, 08:55
  129. Voica Radescu
    22/10/2024, 09:00
    Plenary
    Talk

    Quantum computers have reached a stage where they can perform complex calculations on around 100 qubits, referred to as the Quantum Utility era.
    They are being utilized in industries such as materials science, condensed matter, and particle physics for problem exploration beyond the capabilities of classical computers. In this talk, we will highlight the progress in both IBM quantum hardware...

    Go to contribution page
  130. Dr Michele Grossi (CERN QTI)
    22/10/2024, 09:30
    Plenary
    Talk

    This year CERN celebrates its 70th Anniversary, and the 60th anniversary of Bell's theorem, a result that arguably had the single strongest impact on modern foundations of quantum physics, both at the conceptual and methodological level, as well as at the level of its applications in information theory and technology.
    CERN has started its second phase of the Quantum Technology Initiative with...

    Go to contribution page
  131. Wojtek Fedorko (TRIUMF)
    22/10/2024, 10:00
    Plenary
    Talk

    As CERN approaches the launch of the High-Luminosity Large Hadron Collider (HL-LHC) by the decade’s end, the computational demands of traditional simulations have become untenably high. Projections show millions of CPU-years required to create simulated datasets - with a substantial fraction of CPU time devoted to calorimetric simulations. This presents unique opportunities for...

    Go to contribution page
  132. Sarah Heim (Deutsches Elektronen-Synchrotron (DE))
    22/10/2024, 11:00
    Plenary
    Talk

    Recent Large Language Models like ChatGPT show impressive capabilities, e.g. in the automated generation of text and computer code. These new techniques will have long-term consequences, including for scientific research in fundamental physics. In this talk I present the highlights of the first Large Language Model Symposium (LIPS) which took place in Hamburg earlier this year. I will focus on...

    Go to contribution page
  133. Andrea Rizzi (Universita & INFN Pisa (IT)), Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)), Dr Michele Grossi (CERN), Olivier Mattelaer (UCLouvain), Sascha Caron (Nikhef National institute for subatomic physics (NL)), Dr Tommaso Boccali (INFN Sezione di Pisa), Voica Radescu
    22/10/2024, 11:30

    A diverse panel will discuss the potential impact of progress in the fields of Quantum Computing and the latest generation of Machine Learning, such as LLMs. On the panel are experts in QC, LLMs, ML in HEP, Theoretical Physics, and large-scale computing in HEP. The discussion will be moderated by Liz Sexton-Kennedy from the Fermi National Accelerator Laboratory.

    To submit questions...

    Go to contribution page
  134. Thomas Owen James (CERN)
    22/10/2024, 13:30
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    CERN openlab is a unique resource within CERN that works to establish strategic collaborations with industry, fuel technological innovation and expose novel technologies to the scientific community.
    ICT innovation is needed to deal with the unprecedented levels of data volume and complexity generated by the High Luminosity LHC. The current CERN openlab Phase VIII is designed to tackle these...

    Go to contribution page
  135. Marco Buonsante (Universita e INFN, Bari (IT))
    22/10/2024, 13:30
    Track 2 - Online and real-time computing
    Talk

    Ensuring the quality of data in large HEP experiments such as CMS at the LHC is crucial for producing reliable physics outcomes. The CMS protocols for Data Quality Monitoring (DQM) are based on the analysis of a standardized set of histograms offering a condensed snapshot of the detector's condition. Besides the required personpower, the method has a limited time granularity, potentially...

    Go to contribution page
  136. Wouter Deconinck
    22/10/2024, 13:30
    Track 7 - Computing Infrastructure
    Talk

    The ePIC collaboration is working towards the realization of the first detector at the upcoming Electron-Ion Collider. As part of our computing strategy, we have settled on containers for the distribution of our modular software stacks using spack as the package manager. Based on abstract definitions of multiple mutually consistent software environments, we build dedicated containers on each...

    Go to contribution page
  137. Dr Jaroslav Guenther (CERN)
    22/10/2024, 13:30
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The CERN Tape Archive (CTA) scheduling system implements the workflow and lifecycle of Archive, Retrieve and Repack requests. The transient metadata for queued requests is stored in the Scheduler backend store (Scheduler DB). In our previous work, we presented the CTA Scheduler together with an objectstore-based implementation of the Scheduler DB. Now with four years of experience in...

    Go to contribution page
  138. Corentin Santos (University of Strasbourg)
    22/10/2024, 13:30
    Track 5 - Simulation and analysis tools
    Talk

    In this work we present the Graph-based Full Event Interpretation (GraFEI), a machine learning model based on graph neural networks to inclusively reconstruct events in the Belle II experiment.
    Belle II is well suited to perform measurements of $B$ meson decays involving invisible particles (e.g. neutrinos) in the final state. The kinematical properties of such particles can be deduced from...

    Go to contribution page
  139. Wahid Redjeb (Rheinisch Westfaelische Tech. Hoch. (DE))
    22/10/2024, 13:30
    Track 3 - Offline Computing
    Talk

    The imminent high-luminosity era of the LHC will pose unprecedented challenges to the CMS detector. To meet these challenges, the CMS detector will undergo several upgrades, including replacing the current endcap calorimeters with a novel High-Granularity Calorimeter (HGCAL). A dedicated reconstruction framework, The Iterative Clustering (TICL), is being developed within the CMS Software...

    Go to contribution page
  140. Dr Jonathan Mark Woithe (University of Adelaide (AU))
    22/10/2024, 13:48
    Track 7 - Computing Infrastructure
    Talk

    The economies of scale realised by institutional and commercial cloud providers make such resources increasingly attractive for grid computing. We describe an implementation of this approach which has been deployed for
    Australia's ATLAS and Belle II grid sites.

    The sites are built entirely with Virtual Machines (VM) orchestrated by an OpenStack [1] instance. The Storage Element (SE)...

    Go to contribution page
  141. Joao Afonso (CERN)
    22/10/2024, 13:48
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The latest tape hardware technologies (LTO-9, IBM TS1170) impose new constraints on the management of data archived to tape. In the past, new drives could read the previous one or even two generations of media, but this is no longer the case. This means that repacking older media to new media must be carried out on a more aggressive schedule than in the past. An additional challenge is the...

    Go to contribution page
  142. Dolores Garcia (CERN)
    22/10/2024, 13:48
    Track 3 - Offline Computing
    Talk

    We present an ML-based end-to-end algorithm for adaptive reconstruction in different FCC detectors. The algorithm takes detector hits from different subdetectors as input and reconstructs higher-level objects. For this, it exploits a geometric graph neural network, trained with object condensation, a graph segmentation technique. We apply this approach to study the performance of pattern...

    Go to contribution page
  143. Maksymilian Graczyk (CERN)
    22/10/2024, 13:48
    Track 6 - Collaborative software and maintainability
    Talk

    Given the recent slowdown of Moore’s Law and increasing awareness of the need for sustainable and edge computing, physicists and software developers can no longer just rely on computer hardware becoming faster and faster, or on moving processing to the cloud, to meet the ever-increasing computing demands of their research (e.g. the data-rate increase at the HL-LHC). However, algorithmic...

    Go to contribution page
  144. Brad Sawatzky (Jefferson Lab)
    22/10/2024, 13:48
    Track 2 - Online and real-time computing
    Talk

    Hydra is an advanced framework designed for training and managing AI models for near real time data quality monitoring at Jefferson Lab. Deployed in all four experimental halls, Hydra has analyzed over 2 million images and has extended its capabilities to offline monitoring and validation. Hydra utilizes computer vision to continually analyze sets of images of monitoring plots generated 24/7...

    Go to contribution page
  145. Boyang Yu
    22/10/2024, 13:48
    Track 5 - Simulation and analysis tools
    Talk

    In analyses conducted at Belle II, it is often beneficial to reconstruct the entire decay chain of both B mesons produced in an electron-positron collision event using the information gathered from detectors. The currently used reconstruction algorithm, starting from the final state particles, consists of multiple stages that necessitate manual configurations and suffers from low efficiency...

    Go to contribution page
  146. Kevin Pedro (Fermi National Accelerator Lab. (US))
    22/10/2024, 13:48
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    GlideinWMS is a workload manager provisioning resources for many experiments, including CMS and DUNE. The software is distributed both as native packages and as specialized production containers. Following an approach used in other communities, such as web development, we built workspaces: system-like containers that ease development and testing. Developers can change the source tree or check out a...

    Go to contribution page
  147. Dr Guang Zhao (Institute of High Energy Physics (CAS))
    22/10/2024, 14:06
    Track 3 - Offline Computing
    Talk

    Particle identification (PID) is crucial in particle physics experiments. A promising breakthrough in PID involves cluster counting, which quantifies primary ionizations along a particle’s trajectory in a drift chamber (DC), rather than relying on traditional dE/dx measurements. However, a significant challenge in cluster counting lies in developing an efficient reconstruction algorithm to...
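    The core idea can be sketched in a few lines (a toy illustration of mine, not the reconstruction algorithm under development; the pulse shape and thresholds are invented): rather than integrating total charge as in dE/dx, count well-separated peaks in the digitised waveform.

```python
import math

def count_clusters(waveform, threshold=0.5, min_gap=5):
    """Count local maxima above threshold that are at least min_gap samples apart."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        is_peak = waveform[i] >= waveform[i - 1] and waveform[i] > waveform[i + 1]
        if is_peak and waveform[i] >= threshold:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return len(peaks)

def pulse(t, t0, width=3.0):
    """Toy Gaussian pulse from one primary ionisation cluster."""
    return math.exp(-((t - t0) / width) ** 2)

# Three ionisation clusters arriving at samples 20, 50 and 80.
wave = [pulse(t, 20) + pulse(t, 50) + pulse(t, 80) for t in range(100)]
print(count_clusters(wave))   # 3
```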

    Go to contribution page
  148. Clemens Lange (Paul Scherrer Institute (CH))
    22/10/2024, 14:06
    Track 7 - Computing Infrastructure
    Talk

    A large fraction of computing workloads in high-energy and nuclear physics is executed using software containers. For physics analysis use, such container images often have sizes of several gigabytes. Executing a large number of such jobs in parallel on different compute nodes efficiently demands the availability and use of caching mechanisms and image loading techniques to prevent network...

    Go to contribution page
  149. Uraz Odyurt (Nikhef National institute for subatomic physics (NL))
    22/10/2024, 14:06
    Track 5 - Simulation and analysis tools
    Talk

    Subatomic particle track reconstruction (tracking) is a vital task in High-Energy Physics experiments. Tracking, in its current form, is exceptionally computationally challenging. Fielded solutions, relying on traditional algorithms, do not scale linearly and pose a major limitation for the HL-LHC era. Machine Learning (ML) assisted solutions are a promising answer.

    Current ML model design...

    Go to contribution page
  150. Roger Jones (Lancaster University (GB))
    22/10/2024, 14:06
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    Virtual Reality (VR) applications play an important role in HEP Outreach & Education. They make it possible to organize virtual tours of the experimental infrastructure by virtually interacting with detector facilities, describing their purpose and functionality. However, VR applications nowadays require expensive hardware, like the Oculus headset or Microsoft HoloLens, and powerful computers. As...

    Go to contribution page
  151. Mr Dorin-Daniel Lobontu
    22/10/2024, 14:06
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Storing the ever-increasing amount of data generated by the LHC experiments is still inconceivable without making use of cost-effective, though inherently complex, tape technology. The GridKa tape storage system used to rely on IBM Spectrum Protect (SP). Due to a variety of limitations, and to meet the even higher requirements of the HL-LHC project, GridKa decided to switch from SP to High Performance...

    Go to contribution page
  152. Joshua Ethan Horswill (University of Manchester (GB))
    22/10/2024, 14:06
    Track 2 - Online and real-time computing
    Talk

    The first level of the trigger system of the LHCb experiment (HLT1) reconstructs and selects events in real-time at the LHC bunch crossing rate in software using GPUs. It must carefully balance a broad physics programme that extends from kaon physics up to the electroweak scale. An automated procedure to determine selection criteria is adopted that maximises the physics output of the entirety...
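    One way such an automated procedure can be framed, sketched here with invented inputs (the line names, thresholds, efficiencies and rates are all hypothetical, and this greedy scheme is my simplification, not LHCb's actual method): repeatedly loosen whichever line's threshold buys the most signal efficiency per unit of extra output rate, until the shared rate budget is spent.

```python
def tune(lines, budget_khz):
    """lines maps line name -> list of (threshold, efficiency, rate_khz),
    ordered from tightest to loosest setting. Returns chosen thresholds and total rate."""
    chosen = {name: 0 for name in lines}            # index of current setting per line
    rate = sum(opts[0][2] for opts in lines.values())
    while True:
        best = None                                 # (eff gain per kHz, name, extra rate)
        for name, opts in lines.items():
            i = chosen[name]
            if i + 1 < len(opts):
                d_eff = opts[i + 1][1] - opts[i][1]
                d_rate = opts[i + 1][2] - opts[i][2]
                if rate + d_rate <= budget_khz and (best is None or d_eff / d_rate > best[0]):
                    best = (d_eff / d_rate, name, d_rate)
        if best is None:
            break                                   # no affordable loosening step left
        _, name, d_rate = best
        chosen[name] += 1
        rate += d_rate
    return {name: lines[name][chosen[name]][0] for name in lines}, rate

# Hypothetical selections: (threshold, signal efficiency, output rate in kHz).
lines = {
    "dimuon": [(6.0, 0.70, 20.0), (5.0, 0.80, 35.0), (4.0, 0.90, 60.0)],
    "kaon":   [(3.0, 0.60, 15.0), (2.5, 0.75, 30.0), (2.0, 0.83, 55.0)],
}
thresholds, rate = tune(lines, budget_khz=100.0)
print(thresholds)   # {'dimuon': 4.0, 'kaon': 2.5}
print(rate)         # 90.0
```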

    Go to contribution page
  153. Silia Taider (CPE (FR))
    22/10/2024, 14:06
    Track 6 - Collaborative software and maintainability
    Talk

    The software framework of the Large Hadron Collider Beauty (LHCb) experiment, Gaudi, heavily relies on the ROOT framework and its I/O subsystems for data persistence mechanisms. Gaudi internally leverages the ROOT TTree data format, as it is currently used in production by LHC experiments. However, with the introduction and scaling of multi-threaded capabilities within Gaudi, the limitations...

    Go to contribution page
  154. Sergei Zharko (GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt, Germany)
    22/10/2024, 14:24
    Track 6 - Collaborative software and maintainability
    Talk

    A data quality assurance (QA) framework is being developed for the CBM experiment. It provides flexible tools for monitoring reference quantity distributions for different detector subsystems and data reconstruction algorithms. This helps to identify software malfunctions and calibration status, to prepare a setup for data taking, and to prepare data for production. A modular...

    Go to contribution page
  155. Julian Myrcha (Warsaw University of Technology (PL))
    22/10/2024, 14:24
    Track 2 - Online and real-time computing
    Talk

    The architecture of the existing ALICE Run 3 on-line real time visualization solution was designed for easy modification of the visualization method used. In addition to the existing visualization based on the desktop application, a version using browser-based visualization has been prepared. In this case, the visualization is computed and displayed on the user's computer. There is no need to...

    Go to contribution page
  156. Xin Zhao (Brookhaven National Laboratory (US))
    22/10/2024, 14:24
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The High Luminosity upgrade to the LHC (HL-LHC) is expected to generate scientific data on the scale of multiple exabytes. To tackle this unprecedented data storage challenge, the ATLAS experiment initiated the Data Carousel project in 2018. Data Carousel is a tape-driven workflow in which bulk production campaigns with input data resident on tape are executed by staging and promptly...

    Go to contribution page
  157. Gerardo Ganis (CERN)
    22/10/2024, 14:24
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    Data Preservation (DP) is a mandatory specification for any present and future experimental facility and it is a cost-effective way of doing fundamental research by exploiting unique data sets in the light of the ever increasing theoretical understanding. When properly taken into account, DP leads to a significant increase in the scientific output (10% typically) for a minimal investment...

    Go to contribution page
  158. Daniele Spiga (Universita e INFN, Perugia (IT))
    22/10/2024, 14:24
    Track 7 - Computing Infrastructure
    Talk

    In recent years, the CMS experiment has expanded the usage of HPC systems for data processing and simulation activities. These resources significantly extend the conventional pledged Grid compute capacity. Within the EuroHPC program, CMS applied for a "Benchmark Access" grant at VEGA in Slovenia, an HPC centre that is being used very successfully by the ATLAS experiment. For CMS, VEGA was...

    Go to contribution page
  159. Abhishek Nath (Heidelberg University (DE))
    22/10/2024, 14:24
    Track 5 - Simulation and analysis tools
    Talk

    Direct photons are unique probes to study and characterize the quark-gluon plasma (QGP) as they leave the collision medium mostly unscathed. Measurements at top Large Hadron Collider (LHC) energies at low pT reveal a very small thermal photon signal accompanied by considerable systematic uncertainties. Reduction of such uncertainties, which arise from the π0 and η measurements, as...

    Go to contribution page
  160. Dolores Garcia (CERN)
    22/10/2024, 14:24
    Track 3 - Offline Computing
    Talk

    We present an end-to-end reconstruction algorithm for highly granular calorimeters that includes track information to aid the reconstruction of charged particles. The algorithm starts from calorimeter hits and reconstructed tracks, and outputs a coordinate transformation in which all shower objects are well separated from each other, and in which clustering becomes trivial. Shower properties...
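    A minimal sketch of why clustering becomes trivial in such a space (my own simplification in the spirit of object-condensation inference, with invented coordinates and confidence scores): pick high-confidence condensation points, suppress duplicates within a radius, then attach every hit to its nearest survivor.

```python
def dist(a, b):
    """Euclidean distance in the learned coordinate space."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster(coords, beta, t_beta=0.5, t_dist=1.0):
    """Assign each hit to the nearest condensation point. beta is a per-hit
    confidence score; assumes at least one hit has beta >= t_beta."""
    centres = []
    for i in sorted(range(len(coords)), key=lambda i: -beta[i]):
        if beta[i] < t_beta:
            break                      # sorted by score, so none left above threshold
        if all(dist(coords[i], coords[c]) > t_dist for c in centres):
            centres.append(i)          # keep; not a duplicate of an earlier centre
    return [min(centres, key=lambda c: dist(coords[h], coords[c]))
            for h in range(len(coords))]

# Two well-separated showers in a 2D learned space (values invented).
coords = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9), (0.2, 0.1)]
beta   = [0.9, 0.3, 0.8, 0.4, 0.2]
print(cluster(coords, beta))   # [0, 0, 2, 2, 0]  (hit -> its condensation point)
```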

    Go to contribution page
  161. Ross John Hunter (University of Warwick (GB))
    22/10/2024, 14:42
    Track 2 - Online and real-time computing
    Talk

    The LHCb experiment at CERN has undergone a comprehensive upgrade. In particular, its trigger system has been completely redesigned into a hybrid-architecture, software-only system that delivers ten times more interesting signals per unit time than its predecessor. This increased efficiency - as well as the growing diversity of signals physicists want to analyse - makes conforming to crucial...

    Go to contribution page
  162. Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
    22/10/2024, 14:42
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The Vera Rubin Observatory is a very ambitious project. Using the world’s largest ground-based telescope, it will take two panoramic sweeps of the visible sky every three nights using a 3.2 Giga-pixel camera. The observation products will generate 15 PB of new data each year for 10 years. Accounting for reprocessing and related data products the total amount of critical data will reach several...

    Go to contribution page
  163. Luke Grazette (University of Warwick (GB))
    22/10/2024, 14:42
    Track 6 - Collaborative software and maintainability
    Talk

    The LHCb high-level trigger applications consist of components that run reconstruction algorithms and perform physics object selections, scaling from hundreds to tens of thousands of components depending on the selection stage. The configuration of the components, the data flow and the control flow are implemented in Python. The resulting...

    Go to contribution page
  164. Dr Mindaugas Sarpis (Vilnius University)
    22/10/2024, 14:42
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Talk

    With the onset of ever more data collected by the experiments at the LHC and the increasing complexity of the analysis workflows themselves, there is a need to ensure the scalability of a physics data analysis. Logical parts of an analysis should be well separated - the analysis should be modularized. Where possible, these different parts should be maintained and reused for other analyses or...

    Go to contribution page
  165. Matthieu Martin Melennec (Centre National de la Recherche Scientifique (FR))
    22/10/2024, 14:42
    Track 3 - Offline Computing
    Talk

    In recent years, high energy physics discoveries have been driven by increases in detector volume and/or granularity. This evolution gives access to larger statistics and data samples, but can make it hard to process results with current methods and algorithms. Graph neural networks, particularly graph convolutional networks, have been shown to be powerful tools to address these...

    Go to contribution page
  166. Claudio Grandi (INFN - Bologna)
    22/10/2024, 14:42
    Track 7 - Computing Infrastructure
    Talk

    The Italian National Institute for Nuclear Physics (INFN) has recently developed a national cloud platform to enhance access to distributed computing and storage resources for scientific researchers. A critical aspect of this initiative is the INFN Cloud Dashboard, a user-friendly web portal that allows users to request high-level services on demand, such as Jupyter Hub, Kubernetes, and Spark...

    Go to contribution page
  167. Luca Clissa (Universita e INFN, Bologna (IT))
    22/10/2024, 14:42
    Track 5 - Simulation and analysis tools
    Talk

    Particle flow reconstruction at colliders combines various detector subsystems (typically the calorimeter and tracker) to provide a combined event interpretation that utilizes the strength of each detector. The accurate association of redundant measurements of the same particle between detectors is the key challenge in this technique. This contribution describes recent progress in the ATLAS...
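    The association step at the heart of this technique can be illustrated with a simple nearest-match-in-a-cone sketch (mine, not the ATLAS algorithm; the cone size and the example coordinates are invented): each track is matched to the closest calorimeter cluster in eta-phi so that redundant measurements are counted once.

```python
import math

def delta_r(a, b):
    """Angular distance in (eta, phi), with phi wrapped into (-pi, pi]."""
    deta = a[0] - b[0]
    dphi = (a[1] - b[1] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def match(tracks, clusters, max_dr=0.2):
    """Return {track index: cluster index, or None if nothing within the cone}."""
    out = {}
    for ti, t in enumerate(tracks):
        ci = min(range(len(clusters)), key=lambda c: delta_r(t, clusters[c]))
        out[ti] = ci if delta_r(t, clusters[ci]) < max_dr else None
    return out

tracks   = [(0.50, 1.00), (-1.20, 2.80)]   # (eta, phi), invented
clusters = [(0.52, 1.03), (2.00, -0.50)]
print(match(tracks, clusters))   # {0: 0, 1: None}
```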

    Go to contribution page
  168. Julien Leduc (CERN)
    22/10/2024, 15:00
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Due to the increasing volume of physics data being produced, the LHC experiments are making more active use of archival storage. Constraints on available disk storage have motivated the evolution towards the "data carousel" and similar models. Datasets on tape are recalled multiple times for reprocessing and analysis, and this trend is expected to accelerate during the Hi-Lumi era (LHC Run-4...

    Go to contribution page
  169. Giacomo Tenaglia (CERN), Victoria Stephany Huisman Sigcha (Saxion University of Applied Scienc (NL))
    22/10/2024, 15:00
    Track 6 - Collaborative software and maintainability
    Talk

    At the core of CERN's mission lies a profound dedication to open science; a principle that has fueled decades of ground-breaking collaborations and discoveries. This presentation introduces an ambitious initiative: a comprehensive catalogue of CERN's open-source projects, purveyed by CERN’s own OSPO. The mission? To spotlight every flag-bearing and nascent project under the CERN umbrella,...

    Go to contribution page
  170. Matthias Richter (University of Bergen (NO))
    22/10/2024, 15:00
    Track 7 - Computing Infrastructure
    Talk

    Norwegian contributions to the WLCG consist of computing and storage resources in Bergen and Oslo for the ALICE and ATLAS experiments. The increasing scale and complexity of Grid site infrastructure and operation require integration of national WLCG resources into bigger shared installations. Traditional HPC resources often come with restrictions with respect to software, administration, and...

    Go to contribution page
  171. Nikolai Hartmann (Ludwig Maximilians Universitat (DE))
    22/10/2024, 15:00
    Track 5 - Simulation and analysis tools
    Talk

    Accurate modeling of backgrounds for the development of analyses requires large enough simulated samples of background data. When searching for rare processes, a large fraction of these expensively produced samples is discarded by the analysis criteria that try to isolate the rare events. At the Belle II experiment, the event generation stage takes only a small fraction of the computational...

    Go to contribution page
  172. Matthew Kenneth Maroun (University of Massachusetts (US))
    22/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    In the ATLAS analysis model, users must interact with specialized algorithms to perform a variety of tasks on their physics objects including calibration, identification, and obtaining systematic uncertainties for simulated events. These algorithms have a wide variety of configurations, and often must be applied in specific orders. A user-friendly configuration mechanism has been developed...

    Go to contribution page
  173. Jihwan Oh
    22/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    We explore applications of quantum graph neural networks (QGNNs) on physics and non-physics data sets. Based on a single quantum circuit architecture, we perform node-, edge-, and graph-level prediction tasks. Our main example is particle trajectory reconstruction starting from a set of detector data. Along with this, we extend our analysis to an artificial helical trajectory data set. Finally, we will...

    Go to contribution page
  174. Carolina Niklaus Moreira Da Rocha Rodrigues (Federal University of Rio de Janeiro (BR))
    22/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    The ATLAS experiment involves over 6000 active members, including students, physicists, engineers, and researchers, and more than 2500 members are authors. This dynamic CERN environment brings up some challenges, such as managing the qualification status of each author. The Qualification system, developed by the Glance team, aims to automate the processes required for monitoring the progress...

    Go to contribution page
  175. Grigori Rybkin (Université Paris-Saclay (FR))
    22/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The software of the ATLAS experiment at the CERN LHC accelerator contains a number of tools to analyze (validate, summarize, peek into etc.) all its official data formats recorded in ROOT files. These tools - mainly written in the Python programming language - handle the ROOT TTree which is currently the main storage object format of ROOT files. However, the ROOT project has developed an...

    Go to contribution page
  176. Daniel Suchy (Comenius University (SK))
    22/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The ATLAS Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS detector at the Large Hadron Collider at CERN. It plays an important role in the reconstruction of jets, hadronically decaying tau leptons and missing transverse energy, and also provides information to the dedicated calorimeter trigger. The TileCal readout is segmented into nearly 10000 channels that are...

    Go to contribution page
  177. Michal Svatos (Czech Academy of Sciences (CZ))
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    The distributed computing of the ATLAS experiment at the Large Hadron Collider (LHC) utilizes computing resources provided by the Czech national High Performance Computing (HPC) center, IT4Innovations. This is done through ARC-CEs deployed at the Czech Tier2 site, praguelcg2. Over the years, this system has undergone continuous evolution, marked by recent enhancements aimed at improving...

    Go to contribution page
  178. Ke LI, Mr Yipu Liao (IHEP, China), Yiyu Zhang (Institute of High Energy Physics), Zhengde Zhang (Institute of High Energy Physics, CAS)
    22/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    Data processing and analysis are among the main challenges at HEP experiments; a single physics result can normally take more than three years to produce. To accelerate physics analysis and drive new physics discovery, the rapidly developing Large Language Model (LLM) is a most promising approach: LLMs have demonstrated astonishing capabilities in the recognition and generation of text while...

    Go to contribution page
  179. Mwai Karimi
    22/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The huge volume of data generated by scientific facilities such as the EuXFEL or the LHC places immense strain on the data management infrastructure within laboratories. This includes poorly shareable archival storage resources, typically tape libraries. Maximising the efficiency of these tape resources necessitates a deep integration between hardware and software components.

    CERN's Tape...

    Go to contribution page
  180. Torri Jeske
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Monitoring the status of a high throughput computing cluster running computationally intensive production jobs is a crucial yet challenging system administration task due to the complexity of such systems. To this end, we train autoencoders using the Linux kernel CPU metrics of the cluster. Additionally, we explore assisting these models with graph neural networks to share information across...
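
    The reconstruction-error idea behind such models can be sketched with a linear stand-in: project the metrics onto a low-dimensional basis learned from normal data and flag samples that reconstruct poorly. This is a minimal PCA-based sketch on synthetic data, not the autoencoder or GNN described in the abstract.

```python
import numpy as np

# Minimal sketch of anomaly scoring on per-node CPU metrics. A linear PCA
# projection stands in for the trained autoencoder; all data is synthetic.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))   # 8 CPU metrics per sample
anomaly = rng.normal(5.0, 1.0, size=8)         # one clearly shifted node

mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:3]                                 # keep 3 principal components

def score(x):
    """Reconstruction error: distance between x and its low-dim projection."""
    z = (x - mean) @ basis.T
    recon = z @ basis + mean
    return float(np.linalg.norm(x - recon))

threshold = max(score(row) for row in normal)
print(score(anomaly) > threshold)  # the shifted sample stands out
```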

    Go to contribution page
  181. Kevin Pedro (Fermi National Accelerator Lab. (US))
    22/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    Coprocessors, especially GPUs, will be a vital ingredient of data production workflows at the HL-LHC. At CMS, the GPU-as-a-service approach for production workflows is implemented by the SONIC project (Services for Optimized Network Inference on Coprocessors). SONIC provides a mechanism for outsourcing computationally demanding algorithms, such as neural network inference, to remote servers,...

    Go to contribution page
  182. Andrea Petrucci (Univ. of California San Diego (US))
    22/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The event builder in the Data Acquisition System (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) is responsible for assembling events at a rate of 100 kHz during the current LHC Run 3, and 750 kHz for the upcoming High Luminosity LHC, scheduled to start in 2029. Both the current and future DAQ architectures leverage state-of-the-art network technologies, employing...

    Go to contribution page
  183. Flavio Pisani (CERN)
    22/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The LHCb Experiment employs GPU cards in its first level trigger system to enhance computing efficiency, achieving a data rate of 40 Tb/s from the detector. GPUs were selected for their computational power, parallel processing capabilities, and adaptability.

    However, trigger tasks necessitate extensive combinatorial and bitwise operations, ideally suited for FPGA implementation. Yet, FPGA...

    Go to contribution page
  184. Matthias Jochen Schnepf
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Computing centers always look for new server systems that can reduce operational costs, especially power consumption, and provide higher performance. ARM CPUs promise higher energy efficiency than x86 CPUs. Therefore, the WLCG Tier-1 center GridKa will partially use worker nodes with ARM CPUs and has already carried out various power consumption and performance tests based on the HEPScore23...

    Go to contribution page
  185. Natthan PIGOUX
    22/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    Dirac, a versatile grid middleware framework, is pivotal in managing computational tasks and workflows across a spectrum of scientific research domains including high energy physics and astrophysics. Historically, Dirac has employed specialized descriptive languages that, while effective, have introduced significant complexities and barriers to workflow interoperability and reproducibility....

    Go to contribution page
  186. Alberto Pimpo, Ismael Posada Trobo (CERN)
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    CERN has a huge demand for computing services. To accommodate these requests, a highly scalable and highly dense infrastructure is necessary.

    To accomplish this, CERN adopted Kubernetes, an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.

    This session will discuss the strategies and tooling used to simplify...

    Go to contribution page
  187. Brad Sawatzky (Jefferson Lab)
    22/10/2024, 15:18
    Track 9 - Analysis facilities and interactive computing
    Poster

    In this study, we introduce the JIRIAF (JLAB Integrated Research Infrastructure Across Facilities) system, an innovative prototype of an operational, flexible, and widely distributed computing cluster, leveraging readily available resources from Department of Energy (DOE) computing facilities. JIRIAF employs a customized Kubernetes orchestration system designed to integrate geographically...

    Go to contribution page
  188. Claire Antel (Universite de Geneve (CH))
    22/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    In the realm of high-energy physics research, the demand for computational power continues to increase, particularly in online applications such as the Event Filter. Innovations in performance enhancement are sought after, leading to the exploration of integrating FPGA accelerators within existing software frameworks such as Athena, extensively employed in the ATLAS experiment at CERN. This...

    Go to contribution page
  189. Ilija Vukotic (University of Chicago (US))
    22/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    This study explores possible enhancements in analysis speed, WAN bandwidth efficiency, and data storage management through an innovative data access strategy. The proposed model introduces specialized "delivery" services for data preprocessing, which include filtering and reformatting tasks executed on dedicated hardware located alongside the data repositories at the CERN Tier-0 or at Tier-1...

    Go to contribution page
  190. Alexey Rybalchenko (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    22/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    Collaborative software development for particle physics experiments demands rigorous code review processes to ensure maintainability, reliability, and efficiency. This work explores the integration of Large Language Models (LLMs) into the code review process, with a focus on utilizing both commercial and open models. We present a comprehensive code review workflow that incorporates LLMs,...

    Go to contribution page
  191. Dr Jerome Odier (LPSC/CNRS (Grenoble, FR))
    22/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The ATLAS Metadata Interface (AMI) ecosystem has been developed within the context of ATLAS, one of the largest scientific collaborations. AMI is a mature, generic, metadata-oriented ecosystem that has been maintained for over 23 years. This paper briefly describes the main applications of the ecosystem within the experiment, including metadata aggregation for millions of datasets and billions...

    Go to contribution page
  192. Diana Gaponcic (IT-PW-PI), Spyridon Trigazis (CERN)
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    CERN IT has offered a Kubernetes service since 2016, expanding to incorporate multiple other technologies from the cloud native ecosystem over time. Currently the service runs over 500 clusters and thousands of nodes serving use cases from different sectors in the organization.

    In 2021 the ATS sector showed interest in looking at a similar setup for their container orchestration effort. A...

    Go to contribution page
  193. Jonas Schmeing (Bergische Universitaet Wuppertal (DE))
    22/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    To operate ATLAS ITk system tests and later the final detector, a graphical operation and configuration system is needed. For this a flexible and scalable framework based on distributed microservices has been introduced. Different microservices are responsible for configuration or operation of all parts of the readout chain.

    The configuration database microservice provides the configuration...

    Go to contribution page
  194. Maria Teresa Camerlingo (Universita e INFN, Bari (IT))
    22/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The ALICE Collaboration aims to precisely measure heavy-flavour (HF) hadron production in high-energy proton-proton and heavy-ion collisions, since it can provide valuable tests of perturbative quantum chromodynamics models and insights into hadronization mechanisms. Measurements of Ξ$_c^+$ and Λ$_c^+$ production, with decays into a proton (p) and charged π and K mesons, are remarkable examples of...

    Go to contribution page
  195. Andrea Petrucci (Univ. of California San Diego (US))
    22/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The OMS data warehouse (DWH) constitutes the foundation of the Online Monitoring System (OMS) architecture within the CMS experiment at CERN, responsible for the storage and manipulation of non-event data within ORACLE databases. Leveraging PL/SQL code, the DWH orchestrates the aggregation and modification of data from several sources, inheriting and revamping code from the previous project...

    Go to contribution page
  196. Yuncong Zhai (Shandong University)
    22/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The Super Tau-Charm Facility (STCF) is a new-generation $e^+e^-$ collider aimed at studying tau-charm physics. Particle identification (PID), as one of the most fundamental tools for physics research at the STCF, is crucial for achieving its various physics goals. In recent decades, machine learning (ML) has emerged as a powerful alternative for particle...

    Go to contribution page
  197. Ding-Ze Hu
    22/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The PATOF project builds on work at the MAMI particle physics experiment A4. A4 produced a stream of valuable data for many years, which has already yielded scientific output of high quality and still provides a solid basis for future publications. The A4 data set consists of 100 TB and 300 million files of different types (vague context because of the hierarchical folder structure and a file format with...

    Go to contribution page
  198. Gianfranco Sciacca (Universitaet Bern (CH))
    22/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    Developments in microprocessor technology have confirmed the trend towards higher core counts and a decreased amount of memory per core, resulting in major improvements in power efficiency for a given level of performance. Core counts have increased significantly over the past five years for the x86_64 architecture, which dominates the LHC computing environment, and the higher core...

    Go to contribution page
  199. Ka Hei Martin Kwok (Fermi National Accelerator Lab. (US))
    22/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    CMS has deployed a number of different GPU algorithms at the High-Level Trigger (HLT) in Run 3. As the code base for GPU algorithms continues to grow, the burden for developing and maintaining separate implementations for GPU and CPU becomes increasingly challenging. To mitigate this, CMS has adopted the Alpaka (Abstraction Library for Parallel Kernel Acceleration) library as the performance...

    Go to contribution page
  200. Marta Vila Fernandes (CERN)
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Efficient, ideally fully automated, software package building is essential in the computing supply chain of the CERN experiments. With Koji, a very popular package software building system used in the upstream Enterprise Linux communities, CERN IT provides a service to build software and images for the Linux OSes we support. Due to the criticality of the service and the limitations in Koji's...

    Go to contribution page
  201. Larissa Schmid (KIT - Karlsruhe Institute of Technology (DE))
    22/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    The sheer volume of data generated by LHC experiments presents a computational challenge, necessitating robust infrastructure for storage, processing, and analysis. The Worldwide LHC Computing Grid (WLCG) addresses this challenge by integrating global computing resources into a cohesive entity. To cope with changes in the infrastructure and increased demands, the compute model needs to be...

    Go to contribution page
  202. Andreas Joachim Peters (CERN), Jakub Moscicki (CERN), Luca Mascetti (CERN), Michael Davis (CERN), Oliver Keeble (CERN)
    22/10/2024, 15:18
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Poster

    TechWeekStorage24 was introduced by the CERN IT Storage and Data Management group as a new “Center of Excellence” community networking format: a co-located series of events on Open Source Data Technologies, bringing together a wide range of communities, far beyond High Energy Physics, and highlighting the wider technology impact of IT solutions born in HEP.

    Combining the annual CS3 conference,...

    Go to contribution page
  203. Federica Fanzago (INFN Padova)
    22/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Over time, the idea of exploiting voluntary computing resources as additional capacity for experiments at the LHC has given rise to individual initiatives such as the CMS@Home project. With a starting point of R&D prototypes and projects such as "jobs in the Vacuum" and SETI@Home, the experiments have tried integrating these resources into their data production frameworks transparently to the...

    Go to contribution page
  204. Hang Zhou
    22/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    With an electron-positron collider operating at center-of-mass energies of 2∼7 GeV and a peak luminosity above 0.5 × 10^35 cm^−2 s^−1, the STCF physics program will provide a unique platform for in-depth studies of hadron structure and non-perturbative strong interaction, as well as probing physics beyond the Standard Model in the τ-charm sector, succeeding the present Beijing Electron-Positron...

    Go to contribution page
  205. Dr Marcus Ebert (University of Victoria)
    22/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The HEP-RC group at UVic used Dynafed intensively to create federated storage clusters for Belle-II and ATLAS, which were used by worker nodes deployed on clouds around the world. Since the end of DPM development also means the end of development for Dynafed, XRootD was tested with S3 as a backend to replace Dynafed. We will show similarities as well as major differences between the two...

    Go to contribution page
  206. Ke LI, Mr Siyang Chen (IHEP, China), Yiyu Zhang (Institute of High Energy Physics), Zhengde Zhang (Institute of High Energy Physics, CAS)
    22/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    Large Language Models (LLMs) are undergoing a period of rapid updates and changes, with state-of-the-art models frequently being replaced. When applying LLMs to a specific scientific field, it is challenging to acquire unique domain knowledge while keeping the model itself advanced. To address this challenge, a sophisticated large language model system named Xiwu has been developed, allowing...

    Go to contribution page
  207. Federico Andrea Corchia (Universita e INFN, Bologna (IT))
    22/10/2024, 16:15
    Track 5 - Simulation and analysis tools
    Talk

    As we are approaching the high-luminosity era of the LHC, the computational requirements of the ATLAS experiment are expected to increase significantly in the coming years. In particular, the simulation of MC events is immensely computationally demanding, and their limited availability is one of the major sources of systematic uncertainties in many physics analyses. The main bottleneck in the...

    Go to contribution page
  208. Juan Miguel Carceller (CERN)
    22/10/2024, 16:15
    Track 6 - Collaborative software and maintainability
    Talk

    The Key4hep software stack enables studies for future collider projects. It provides a full software suite for doing event generation, detector simulation as well as reconstruction and analysis. In the Key4hep stack, over 500 packages are built using the spack package manager and deployed via the cvmfs software distribution system. In this contribution, we explain the current setup for...

    Go to contribution page
  209. Dmitry Litvintsev (Fermi National Accelerator Lab. (US)), Mr Tigran Mkrtchyan (DESY)
    22/10/2024, 16:15
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The dCache project provides open-source software deployed internationally to satisfy ever more demanding storage requirements. Its multifaceted approach provides an integrated way of supporting different use cases with the same storage, ranging from high-throughput data ingest, data sharing over wide-area networks, and efficient access from HPC clusters to long-term data persistence on tertiary...

    Go to contribution page
  210. Brij Kishor Jashal (RAL, TIFR and IFIC), Jiahui Zhuo (Univ. of Valencia and CSIC (ES))
    22/10/2024, 16:15
    Track 2 - Online and real-time computing
    Talk

    A new algorithm, called "Downstream", has been developed and implemented at LHCb, which is able to reconstruct and select very displaced vertices in real time at the first level of the trigger (HLT1). It makes use of the Upstream Tracker (UT) and the Scintillating Fibre detector (SciFi) of LHCb, and it is executed on GPUs inside the Allen framework. In addition to an optimized strategy, it...

    Go to contribution page
  211. Uday Saidev Polisetty (Georg August Universitaet Goettingen (DE))
    22/10/2024, 16:15
    Track 7 - Computing Infrastructure
    Talk

    The German university-based Tier-2 centres successfully contributed a significant fraction of the computing power required for Runs 1-3 of the LHC. But for the upcoming Run 4, with its increased need for both storage and computing power for the various HEP computing tasks, a transition to a new model becomes a necessity. In this context, the German community under the FIDIUM project is making...

    Go to contribution page
  212. Benjamin Huber (Technische Universitaet Wien (AT))
    22/10/2024, 16:15
    Track 3 - Offline Computing
    Talk

    Developments of the new Level-1 Trigger at CMS for the High-Luminosity Operation of the LHC are in full swing. The Global Trigger, the final stage of this new Level-1 Trigger pipeline, is foreseen to evaluate a menu of over 1000 cut-based algorithms, each of which targeting a specific physics signature or acceptance region. Automating the task of tailoring individual algorithms to specific...

    Go to contribution page
  213. Mr Tom Dack
    22/10/2024, 16:15
    Track 4 - Distributed Computing
    Talk

    Since 2017, the Worldwide LHC Computing Grid (WLCG) has been working towards enabling token-based authentication and authorization throughout its entire middleware stack. Taking guidance from the WLCG Token Transition Timeline, published in 2022, substantial progress has been achieved not only in making middleware compatible with the use of tokens, but also in understanding the limitations...
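
    Such tokens are typically JSON Web Tokens: base64url-encoded JSON segments carrying claims such as the issuer and authorized scopes. The sketch below assembles and decodes a made-up token with the standard library only; the issuer, subject, and scope values are invented for illustration, and the signature is a dummy since no verification is shown.

```python
import base64, json

# Build a toy header.payload.signature token, then decode its claims.
# Purely illustrative: real services must verify the signature against
# the issuer's public key before trusting any claim.
def b64url(obj) -> str:
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"sub": "user123", "iss": "https://example.org",
            "scope": "storage.read:/ compute.modify"}),  # capability-style claims
    "dummy-signature",
])

def claims(jwt: str) -> dict:
    """Decode the payload segment of a JWT (no signature check)."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

print(claims(token)["scope"])  # storage.read:/ compute.modify
```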

    Go to contribution page
  214. Tyler Anderson (SLAC)
    22/10/2024, 16:33
    Track 3 - Offline Computing
    Talk

    Searching for anomalous data is especially important in rare event searches like that of the LUX-ZEPLIN (LZ) experiment's hunt for dark matter. While LZ's data processing provides analyzer-friendly features for all data, searching for anomalous data after minimal reconstruction allows one to find anomalies which may not have been captured by reconstructed features and allows us to avoid any...

    Go to contribution page
  215. Luca Bassi
    22/10/2024, 16:33
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    After the deprecation of the open-source Globus Toolkit used for GridFTP transfers, the WLCG community has shifted its focus to the HTTP protocol. The WebDAV protocol extends HTTP to create, move, copy and delete resources on web servers. StoRM WebDAV provides data storage access and management through the WebDAV protocol over a POSIX file system. Mainly designed to be used by the WLCG...

    Go to contribution page
  216. Dr Sergey Gorbunov (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    22/10/2024, 16:33
    Track 2 - Online and real-time computing
    Talk

    The event reconstruction in the CBM experiment is challenging. There will be no simple hardware trigger due to the novel concepts of free-streaming data and self-triggered front-end electronics; thus, there is no a priori association of signals to physical events. CBM will operate at interaction rates of 10 MHz, unprecedented for heavy-ion experiments. At this rate, collisions overlap...
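
    The core difficulty can be sketched in miniature: with free-streaming data, event candidates must be formed in software, e.g. by splitting the time-ordered signal stream at large gaps. The timestamps and the gap threshold below are invented toy values, not CBM parameters, and real reconstruction must additionally disentangle overlapping collisions.

```python
# Toy sketch: group free-streaming detector signals into event candidates
# by time gaps, in the absence of a hardware trigger. Values are invented.
def build_event_candidates(timestamps, max_gap=50.0):
    """Split a time-sorted hit stream wherever consecutive hits are far apart."""
    events, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > max_gap:
            events.append(current)
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events

hits = [10, 12, 15, 300, 302, 900]        # ns: three well-separated bursts
print(len(build_event_candidates(hits)))  # 3
```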

    Go to contribution page
  217. Kyle Knoepfel (Fermi National Accelerator Laboratory)
    22/10/2024, 16:33
    Track 6 - Collaborative software and maintainability
    Talk

    The Spack package manager has been widely adopted in the supercomputing community as a means of providing consistently built on-demand software for the platform of interest. Members of the high-energy and nuclear physics (HENP) community, in turn, have recognized Spack’s strengths, used it for their own projects, and even become active Spack developers to better support HENP needs. Code...

    Go to contribution page
  218. Matt Doidge (Lancaster University (GB))
    22/10/2024, 16:33
    Track 4 - Distributed Computing
    Talk

    Created in 2023, the Token Trust and Traceability Working Group (TTT) was formed in order to answer questions of policy and best practice with the ongoing move from X.509 and VOMS proxy certificates to token-based solutions as the primary authorisation and authentication method in grid environments. With a remit to act in an investigatory and advisory capacity alongside other working groups in...

    Go to contribution page
  219. Kevin Pedro (Fermi National Accelerator Lab. (US))
    22/10/2024, 16:33
    Track 5 - Simulation and analysis tools
    Talk

    Detector simulation is a key component of physics analysis and related activities in CMS. In the upcoming High Luminosity LHC era, simulation will be required to use a smaller fraction of computing in order to satisfy resource constraints. At the same time, CMS will be upgraded with the new High Granularity Calorimeter (HGCal), which requires significantly more resources to simulate than the...

    Go to contribution page
  220. Diego Ciangottini (INFN, Perugia (IT))
    22/10/2024, 16:33
    Track 7 - Computing Infrastructure
    Talk

    In a geo-distributed computing infrastructure with heterogeneous resources (HPC and HTC and possibly cloud), a key to unlocking efficient and user-friendly access to the resources is being able to offload each specific task to the best-suited location. One of the most critical problems involves the logistics of wide-area, multi-stage workflows moving back and forth between multiple resource providers....

    Go to contribution page
  221. Hugo Gonzalez Labrador (CERN)
    22/10/2024, 16:51
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Managing the data deluge generated by large-scale scientific collaborations is a challenge. The Rucio Data Management platform is an open-source framework engineered to orchestrate the storage, distribution, and management of massive data volumes across a globally distributed computing infrastructure. Rucio meets the requirements of high-energy physics, astrophysics, genomics, and beyond,...

    Go to contribution page
  222. Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES)), Valerii Kholoimov (Instituto de Física Corpuscular (Univ. of Valencia))
    22/10/2024, 16:51
    Track 2 - Online and real-time computing
    Talk

    In this presentation, we introduce BuSca, a prototype algorithm designed for real-time particle searches, leveraging the enhanced parallelization capabilities of the new LHCb trigger scheme implemented on GPUs. BuSca is focused on downstream reconstructed tracks, detected exclusively by the UT and SciFi detectors. By projecting physics candidates onto 2D histograms of flight distance and mass...
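
    The 2D-histogram projection idea can be sketched as follows: fill a (flight distance, mass) histogram and look for a localized excess. All distributions, values, and units below are synthetic toys, not LHCb data or the BuSca implementation.

```python
import numpy as np

# Toy sketch: a long-lived signal shows up as a hot bin in the
# (flight distance, mass) plane over a smooth background.
rng = np.random.default_rng(1)
fd_bkg = rng.exponential(5.0, 10_000)        # background flight distance [mm]
m_bkg = rng.uniform(200.0, 1200.0, 10_000)   # background mass [MeV]
fd_sig = rng.normal(40.0, 1.0, 500)          # displaced signal cluster
m_sig = rng.normal(710.0, 5.0, 500)

h, fd_edges, m_edges = np.histogram2d(
    np.concatenate([fd_bkg, fd_sig]),
    np.concatenate([m_bkg, m_sig]),
    bins=(50, 50), range=((0, 60), (200, 1200)),
)
i, j = np.unravel_index(np.argmax(h), h.shape)
print(fd_edges[i], m_edges[j])  # hottest bin sits near (40 mm, 710 MeV)
```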

    Go to contribution page
  223. Brian Paul Bockelman (University of Wisconsin Madison (US))
    22/10/2024, 16:51
    Track 4 - Distributed Computing
    Talk

    Within the LHC community, a momentous transition has been occurring in authorization. For nearly 20 years, services within the Worldwide LHC Computing Grid (WLCG) have authorized based on mapping an identity, derived from an X.509 credential, or a group/role derived from a VOMS extension issued by the experiment. A fundamental shift is occurring to capabilities: the credential, a bearer...

    Go to contribution page
  224. Dmitry Kalinkin, Wouter Deconinck
    22/10/2024, 16:51
    Track 6 - Collaborative software and maintainability
    Talk

    The ePIC collaboration is working towards realizing the primary detector for the upcoming Electron-Ion Collider (EIC). As ePIC approaches critical decision milestones and moves towards future operation, software plays a critical role in systematically evaluating detector performance and laying the groundwork for achieving the scientific goals of the EIC project. The scope and schedule of the...

    Go to contribution page
  225. Jose Hernandez (CIEMAT)
    22/10/2024, 16:51
    Track 7 - Computing Infrastructure
    Talk

    The MareNostrum 5 (MN5) is the new 750k-core general-purpose cluster recently deployed at the Barcelona Supercomputing Center (BSC). MN5 presents new opportunities for the execution of CMS data processing and simulation tasks but suffers from the same stringent network connectivity limitations as its predecessor, MN4. The innovative solutions implemented to navigate these constraints and...

    Go to contribution page
  226. Mr Pralay Kumar das (Saha Institute Of Nuclear Physics)
    22/10/2024, 16:51
    Track 5 - Simulation and analysis tools
    Talk

    In the realm of low-energy nuclear physics experiments, the Active Target Time Projection Chamber (AT-TPC) can be advantageous for studying nuclear reaction kinematics, such as the alpha cluster decay of $^{12}C$, by tracking the reaction products produced in the active gas medium of the TPC. The tracking capability of the TPC is strongly influenced by the homogeneity of the electric field...

    Go to contribution page
  227. Wojciech Gomulka (AGH University of Krakow (PL))
    22/10/2024, 16:51
    Track 3 - Offline Computing
    Talk

    The upcoming upgrades of LHC experiments and next-generation FCC (Future Circular Collider) machines will again change the definition of big data for the HEP environment. The ability to effectively analyse and interpret complex, interconnected data structures will be vital. This presentation will delve into the innovative realm of Graph Neural Networks (GNNs). This powerful tool extends...

    Go to contribution page
  228. Mingrun Li (IHEP, CAS)
    22/10/2024, 17:09
    Track 3 - Offline Computing
    Talk

    The BESIII experiment at the BEPCII electron-positron accelerator, located at IHEP, Beijing, China, studies hadron physics and $\tau$-charm physics with the highest accuracy achieved until now. It has collected several of the world's largest $e^+e^-$ samples in the $\tau$-charm region. Anomaly detection on the BESIII detectors is an important segment of improving data quality,...

    Go to contribution page
  229. Aashay Arora (Univ. of California San Diego (US))
    22/10/2024, 17:09
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The data movement manager (DMM) is a prototype interface between the CERN developed data management software Rucio and the software defined networking (SDN) service SENSE by ESNet. It allows for SDN enabled high energy physics data flows using the existing worldwide LHC computing grid infrastructure. In addition to the key feature of DMM, namely transfer-priority based bandwidth allocation for...
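
    A priority-weighted bandwidth policy of the kind the abstract attributes to DMM can be sketched as a proportional share. The function, transfer names, and numbers below are invented for illustration and are not the SENSE/Rucio API.

```python
# Toy sketch: split a link's capacity among active transfers in
# proportion to their priorities. Names and values are made up.
def allocate(total_gbps: float, priorities: dict) -> dict:
    """Assign each transfer a bandwidth share proportional to its priority."""
    weight = sum(priorities.values())
    return {name: total_gbps * p / weight for name, p in priorities.items()}

shares = allocate(100.0, {"prod-reco": 5, "user-analysis": 1, "tape-recall": 4})
print(shares["prod-reco"])  # 50.0
```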

    Go to contribution page
  230. Muhammad Imran (National Centre for Physics (PK))
    22/10/2024, 17:09
    Track 7 - Computing Infrastructure
    Talk

    The CMS experiment's operational infrastructure hinges significantly on the CMSWEB cluster, which serves as the cornerstone for hosting a multitude of services critical to the data taking and analysis. Operating on Kubernetes ("k8s") technology, this cluster powers over two dozen distinct web services, including but not limited to DBS, DAS, CRAB, WMarchive, and WMCore.

    In this talk, we...

    Go to contribution page
  231. Nick Smith (Fermi National Accelerator Lab. (US))
    22/10/2024, 17:09
    Track 4 - Distributed Computing
    Talk

    Fermilab is the first High Energy Physics institution to transition from X.509 user certificates to authentication tokens in production systems. All of the experiments that Fermilab hosts are now using JSON Web Token (JWT) access tokens in their grid jobs. Many software components have been either updated or created for this transition, and most of the software is available to others as open...

    Go to contribution page
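Access tokens like those described in this contribution are JSON Web Tokens: three base64url-encoded segments (header, claims, signature) joined by dots. The following is a minimal, self-contained sketch of inspecting such a token in Python; the claim names are illustrative WLCG-style examples, not Fermilab's actual token profile, and the sketch deliberately skips signature verification.

```python
import base64
import json

def b64url_decode(seg: str) -> bytes:
    # Restore the base64url padding that JWTs strip before decoding
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def decode_jwt(token: str) -> dict:
    """Split a JWT into header.payload.signature and decode the JSON parts.
    NOTE: this only inspects the token; it does NOT verify the signature."""
    header_b64, payload_b64, _signature = token.split(".")
    return {
        "header": json.loads(b64url_decode(header_b64)),
        "claims": json.loads(b64url_decode(payload_b64)),
    }

# Build a toy token with hypothetical claims for demonstration purposes
header = {"alg": "RS256", "typ": "at+jwt"}
claims = {"sub": "dunepro", "scope": "storage.read:/dune", "exp": 1735689600}
token = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).decode().rstrip("=")
    for part in (header, claims)
) + ".fakesig"

decoded = decode_jwt(token)
```

In a real grid job the token would arrive from the token issuer and the signature would be verified against the issuer's published keys before any claim is trusted.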
  232. Michał Mazurek (National Centre for Nuclear Research (PL))
    22/10/2024, 17:09
    Track 5 - Simulation and analysis tools
    Talk

    In high energy physics, fast simulation techniques based on machine learning could play a crucial role in generating sufficiently large simulated samples. Transitioning from a prototype to a fully deployed model usable in full-scale production is a very challenging task.

    In this talk, we introduce the most recent advances in the implementation of fast simulation for calorimeter showers in...

    Go to contribution page
  233. Gagik Gavalian (Jefferson National Lab)
    22/10/2024, 17:09
    Track 2 - Online and real-time computing
    Talk

    Online reconstruction is key for monitoring purposes and real time analysis in High Energy and Nuclear Physics (HEP) experiments. A necessary component of reconstruction algorithms is particle identification (PID) that combines information left by a particle passing through several detector components to identify the particle’s type. Of particular interest to electro-production Nuclear Physics...

    Go to contribution page
  234. Sakib Rahman
    22/10/2024, 17:09
    Track 6 - Collaborative software and maintainability
    Talk

    The ePIC collaboration is realizing the first experiment at the future Electron-Ion Collider (EIC) at Brookhaven National Laboratory, which will allow precision studies of the nucleon and the nucleus at the scale of sea quarks and gluons through electron-proton/ion collisions. This talk will discuss the current workflow in place for running centralized simulation campaigns...

    Go to contribution page
  235. Jamie Gooding (Technische Universitaet Dortmund (DE))
    22/10/2024, 17:27
    Track 2 - Online and real-time computing
    Talk

    Ahead of Run 3 of the LHC, the trigger of the LHCb experiment was redesigned. The L0 hardware stage present in Runs 1 and 2 was removed, with detector readout at 30 MHz passing directly into the first stage of the software-based High Level Trigger (HLT), run on GPUs. Additionally, the second stage of the upgraded HLT makes extensive use of the Turbo event model, wherein only those candidates...

    Go to contribution page
  236. Filippo Cattafesta (Scuola Normale Superiore & INFN Pisa (IT))
    22/10/2024, 17:27
    Track 5 - Simulation and analysis tools
    Talk

    Event simulation is a key element of data analysis at present and future particle accelerators. We show [1] that novel machine learning algorithms, specifically Normalizing Flows and Flow Matching, can be effectively used to perform accurate simulations with a speed-up of several orders of magnitude compared to traditional approaches when only analysis-level information is needed. In such a...

    Go to contribution page
  237. Enrico Vianello (INFN-CNAF)
    22/10/2024, 17:27
    Track 4 - Distributed Computing
    Talk

    INDIGO IAM (Identity and Access Management) is a comprehensive service that enables organizations to manage and control access to their resources and systems effectively. It implements a standard OAuth2 Authorization Service and OpenID Connect Provider, and has been chosen as the AAI solution by the WLCG community for the transition from VOMS proxy-based authorization to JSON web...

    Go to contribution page
  238. Robin Hofsaess (KIT - Karlsruhe Institute of Technology (DE))
    22/10/2024, 17:27
    Track 7 - Computing Infrastructure
    Talk

    The efficient utilization of multi-purpose HPC resources for High Energy Physics applications is increasingly important, in particular with regard to the upcoming changes in the German HEP computing infrastructure.
    In preparation for the future, we are developing and testing an XRootD-based caching and buffering approach for workflow and efficiency optimizations to exploit the full...

    Go to contribution page
  239. Lorenzo Malentacca (Cern, Milano-Bicocca)
    22/10/2024, 17:27
    Track 3 - Offline Computing
    Talk

    During the LHC High-Luminosity phase, the LHCb RICH detector will face challenges due to increased particle multiplicity and high occupancy. Introducing sub-100 ps time information becomes crucial for maintaining excellent particle identification (PID) performance. The LHCb RICH collaboration plans to bring the introduction of timing forward through an enhancement program during the third LHC Long...

    Go to contribution page
  240. Carolina Niklaus Moreira Da Rocha Rodrigues (Federal University of Rio de Janeiro (BR))
    22/10/2024, 17:27
    Track 6 - Collaborative software and maintainability
    Talk

    Given CERN's prosperous environment, which develops groundbreaking research in physics and pushes the barriers of technology, CERN members participate in many talks and conferences every year. However, since the ATLAS experiment has around 6000 members and more than one may be qualified to present the same talk, the experiment has developed metrics to prioritize them.

    Currently, ATLAS is...

    Go to contribution page
  241. Katy Ellis (Science and Technology Facilities Council STFC (GB))
    22/10/2024, 17:27
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The Large Hadron Collider (LHC) experiments rely heavily on the XRootD software suite for data transfer and streaming across the Worldwide LHC Computing Grid (WLCG) both within sites (LAN) and across sites (WAN). While XRootD offers extensive monitoring data, there's no single, unified monitoring tool for all experiments. This becomes increasingly critical as network usage grows, and with the...

    Go to contribution page
  242. Gabriela Lemos Lucidi Pinhao (LIP - Laboratorio de Instrumentação e Física Experimental de Partículas (PT))
    22/10/2024, 17:45
    Track 6 - Collaborative software and maintainability
    Talk

    CERN has a very dynamic environment and faces challenges such as information centralization, communication between the experiments’ working groups, and the continuity of workflows. The solution found for those challenges is automation and, therefore, the Glance project, an essential management software tool for all four large LHC experiments. Its main purpose is to develop and maintain...

    Go to contribution page
  243. Mihai Patrascoiu (CERN)
    22/10/2024, 17:45
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The WLCG community, with the main LHC experiments at the forefront, is moving away from X.509 certificates, replacing the authentication and authorization layer with OAuth2 tokens. FTS, as a middleware and core component of the WLCG, plays a crucial role in the transition from X.509 proxy certificates to tokens. The paper will present in detail the FTS token design and how this will serve the...

    Go to contribution page
  244. Milosz Zdybal (Polish Academy of Sciences (PL))
    22/10/2024, 17:45
    Track 2 - Online and real-time computing
    Talk

    The ever-growing amounts of data produced by high energy physics experiments create a need for fast and efficient track reconstruction algorithms. When storing all incoming information is not feasible, online algorithms need to provide reconstruction quality similar to their offline counterparts. To achieve this, novel techniques need to be introduced, utilizing the acceleration offered by the...

    Go to contribution page
  245. Carmelo Pellegrino
    22/10/2024, 17:45
    Track 4 - Distributed Computing
    Talk

    X.509 certificates and VOMS proxies are still widely used by various scientific communities for authentication and authorization (authN/Z) in Grid Storage and Computing Elements. Although this has contributed to improving scientific collaboration worldwide, X.509 authN/Z comes with some interoperability issues with modern Cloud-based tools and services.

    The Grid computing communities have...

    Go to contribution page
  246. Anatolii Korol (Deutsches Elektronen-Synchrotron (DESY))
    22/10/2024, 17:45
    Track 5 - Simulation and analysis tools
    Talk

    Fast simulation of the energy depositions in high-granular detectors is needed for future collider experiments with ever-increasing luminosities. Generative machine learning (ML) models have been shown to speed up and augment the traditional simulation chain. Many previous efforts were limited to models relying on fixed, regular grid-like geometries, leading to artifacts when applied to highly...

    Go to contribution page
  247. Federico Stagni (CERN)
    23/10/2024, 09:00
    Plenary
    Talk

    The Dirac interware has long served as a vital resource for user communities seeking access to distributed computing resources. Originating within the LHCb collaboration around 2000, Dirac has undergone significant evolution. A pivotal moment occurred in 2008 with a major refactoring, resulting in the development of the experiment-agnostic core Dirac, which paved the way for customizable...

    Go to contribution page
  248. David South (Deutsches Elektronen-Synchrotron (DE))
    23/10/2024, 09:30
    Plenary
    Talk

    The ATLAS Google Project was established as part of an ongoing evaluation of the use of commercial clouds by the ATLAS Collaboration, in anticipation of the potential future adoption of such resources by WLCG grid sites to fulfil or complement their computing pledges. Seamless integration of Google cloud resources into the worldwide ATLAS distributed computing infrastructure was achieved at...

    Go to contribution page
  249. Ivan Knezevic (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    23/10/2024, 10:00
    Plenary
    Talk

    The Metadata Schema for Experimental Nuclear Physics project aims to facilitate data management and data publication under the FAIR principles in the experimental nuclear physics communities, by developing a cross-domain metadata schema and generator tailored for diverse datasets, with the possibility of integration with other, similar fields of research (i.e. Astro and Particle...

    Go to contribution page
  250. Andreas Joachim Peters (CERN), Jakob Blomer (CERN)
    23/10/2024, 11:00
    Plenary
    Talk

    For several years, the ROOT team has been developing the new RNTuple I/O subsystem in preparation for the next generation of collider experiments. Both the HL-LHC and DUNE are expected to start data taking by the end of this decade. They pose unprecedented challenges to event data I/O in terms of data rates, event sizes and event complexity. At the same time, the I/O landscape is becoming more diverse....

    Go to contribution page
  251. Katy Ellis (Science and Technology Facilities Council STFC (GB))
    23/10/2024, 11:30
    Plenary
    Talk

    During Run-3 the Large Hadron Collider (LHC) experiments are transferring up to 10PB of data daily across the Worldwide LHC Computing Grid (WLCG) sites. However, following the transition from Run-3 to Run-4, data volumes are expected to increase tenfold. The WLCG Data Challenge aims to address this significant scaling challenge through a series of rigorous test events.

    The primary objective...

    Go to contribution page
  252. Tony Cass (CERN)
    23/10/2024, 12:00
    Plenary
    Talk

    Back in the late 1990s, when planning for LHC computing started in earnest, arranging network connections to transfer the huge LHC data volumes between participating sites was seen as a problem. Today, almost 30 years later, the LHC data volumes are even larger and WLCG traffic has switched from a hierarchical to a mesh model, and yet almost nobody worries about the network.

    Some people still do...

    Go to contribution page
  253. David Karres (Heidelberg University (DE))
    23/10/2024, 13:30
    Track 2 - Online and real-time computing
    Talk

    The Mu3e experiment at the Paul-Scherrer-Institute will be searching for the charged lepton flavor violating decay $\mu^+ \rightarrow e^+e^-e^+$. To reach its ultimate sensitivity to branching ratios in the order of $10^{-16}$, an excellent momentum resolution for the reconstructed electrons is required, which in turn necessitates precise detector alignment. To compensate for weak modes in the...

    Go to contribution page
  254. Shan Zeng (IHEP)
    23/10/2024, 13:30
    Track 7 - Computing Infrastructure
    Talk

    According to the estimated data rates, 800 TB of raw experimental data will be produced per day from 14 beamlines at the first stage of the High-Energy Photon Source (HEPS) in China, and the data volume will grow even greater once over 90 beamlines are completed at the second stage. Therefore, designing a high-performance, scalable network architecture plays a...

    Go to contribution page
  255. Brij Kishor Jashal (Rutherford Appleton Laboratory)
    23/10/2024, 13:30
    Track 4 - Distributed Computing
    Talk

    The CMS computing infrastructure, spread globally over 150 WLCG sites, forms an intricate ecosystem of computing resources, software and services. In 2024, the number of production computing cores passed the half-million mark, and storage capacity stands at 250 petabytes on disk and 1.20 exabytes on tape. To monitor these resources in real time, CMS, working closely with CERN IT, has developed a multifaceted...

    Go to contribution page
  256. Lorenzo Moneta (CERN)
    23/10/2024, 13:30
    Track 5 - Simulation and analysis tools
    Talk

    Within the ROOT/TMVA project, we have developed a tool called SOFIE, that takes externally trained deep learning models in ONNX format or Keras and PyTorch native formats and generates C++ code that can be easily included and invoked for fast inference of the model. The code has a minimal dependency and can be easily integrated into the data processing and analysis workflows of the HEP...

    Go to contribution page
  257. Graeme A Stewart (CERN)
    23/10/2024, 13:30
    Track 3 - Offline Computing
    Talk

    Jet reconstruction remains a critical task in the analysis of data from HEP colliders. We describe in this paper a new, highly performant Julia package for jet reconstruction, JetReconstruction.jl, which integrates into the growing ecosystem of Julia packages for HEP. With this package users can run sequential jet reconstruction algorithms. In particular, for LHC events, the...

    Go to contribution page
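Sequential jet reconstruction algorithms of the kind this package implements (kt, Cambridge/Aachen, anti-kt) repeatedly merge the pair of pseudojets with the smallest distance d_ij = min(1/kt_i^2, 1/kt_j^2) * dR_ij^2 / R^2, or promote a pseudojet to a jet when its beam distance d_iB = 1/kt_i^2 is smallest. A toy Python sketch of the anti-kt distance computation (not the package's actual Julia API; input tuples are assumed to be (pt, rapidity, phi)):

```python
import math

def antikt_distances(pseudojets, R=0.4):
    """Pairwise anti-kt distances d_ij = min(1/kt_i^2, 1/kt_j^2) * dR^2 / R^2
    and beam distances d_iB = 1/kt_i^2 for (pt, rapidity, phi) tuples."""
    def dphi(a, b):
        d = abs(a - b)
        return min(d, 2 * math.pi - d)  # wrap the azimuthal angle difference

    dij = {}
    for i, (pti, yi, phii) in enumerate(pseudojets):
        for j, (ptj, yj, phij) in enumerate(pseudojets):
            if j <= i:
                continue
            dr2 = (yi - yj) ** 2 + dphi(phii, phij) ** 2
            dij[(i, j)] = min(pti ** -2, ptj ** -2) * dr2 / R ** 2
    dib = {i: pt ** -2 for i, (pt, _, _) in enumerate(pseudojets)}
    return dij, dib

# Two nearby hard pseudojets and one soft, distant one (toy values)
jets = [(50.0, 0.0, 0.0), (45.0, 0.1, 0.05), (10.0, 2.0, 3.0)]
dij, dib = antikt_distances(jets)
nearest_pair = min(dij, key=dij.get)
```

A full clustering loop would merge `nearest_pair`, recompute distances, and repeat until every pseudojet has been promoted to a jet; production implementations accelerate the nearest-neighbour search with tiling or similar structures.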
  258. Dr Anthony Hartin (LMU)
    23/10/2024, 13:30
    Track 5 - Simulation and analysis tools
    Talk

    Non-perturbative QED is used to predict beam backgrounds at the interaction point of colliders, in calculations of Schwinger pair creation, and in precision QED tests with ultra-intense lasers. In order to predict these phenomena, custom-built Monte Carlo event generators based on a suitable non-perturbative theory have to be developed. One such suitable theory uses the Furry Interaction...

    Go to contribution page
  259. Hasan Ozturk (CERN)
    23/10/2024, 13:30
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The CMS experiment manages a large-scale data infrastructure, currently handling over 200 PB of disk and 500 PB of tape storage and transferring more than 1 PB of data per day on average between various WLCG sites. Utilizing Rucio for high-level data management, FTS for data transfers, and a variety of storage and network technologies at the sites, CMS confronts inevitable challenges due to...

    Go to contribution page
  260. Bernhard Meirose (Chalmers University of Technology + Lund University)
    23/10/2024, 13:48
    Track 5 - Simulation and analysis tools
    Talk

    The HIBEAM-NNBAR experiment at the European Spallation Source is a multidisciplinary two-stage program of experiments that includes high-sensitivity searches for neutron oscillations, searches for sterile neutrons, searches for axions, as well as the search for exotic decays of the neutron. The computing framework of the collaboration includes diverse software, from particle generators to...

    Go to contribution page
  261. Gagik Gavalian (Jefferson National Lab)
    23/10/2024, 13:48
    Track 2 - Online and real-time computing
    Talk

    The increasing complexity and data volume of Nuclear Physics experiments require significant computing resources to process data from experimental setups. The entire experimental data set has to be processed to extract sub-samples for physics analysis. The advancements in Artificial Intelligence and Machine Learning fields provide tools and procedures that can significantly enhance the...

    Go to contribution page
  262. Wenlong Yuan (The University of Edinburgh (GB))
    23/10/2024, 13:48
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The Deep Underground Neutrino Experiment (DUNE) is scheduled to start running in 2029 and is expected to record 30 PB/year of raw data. To handle this large-scale data, DUNE has adopted and deployed Rucio, the next-generation data replica service originally designed by the ATLAS collaboration, as an essential component of its Distributed Data Management system.

    DUNE's use of Rucio has demanded...

    Go to contribution page
  263. Gabriele Bortolato (Universita e INFN, Padova (IT))
    23/10/2024, 13:48
    Track 7 - Computing Infrastructure
    Talk

    In a DAQ system, a large fraction of CPU resources is engaged in networking rather than in data processing. The common network stacks that take care of network traffic usually manipulate data through several copies, performing expensive operations. Thus, when the CPU is asked to handle networking, the main drawbacks are throughput reduction and latency increase due to the overhead added to the...

    Go to contribution page
  264. Andrea Valassi (CERN)
    23/10/2024, 13:48
    Track 5 - Simulation and analysis tools
    Talk

    The effort to speed up the Madgraph5_aMC@NLO generator by exploiting CPU vectorization and GPUs, which started at the beginning of 2020, is expected to deliver the first production release of the code for QCD leading-order (LO) processes in 2024. To achieve this goal, many additional tests, fixes and improvements have been carried out by the development team in recent months, both to carry out...

    Go to contribution page
  265. Juan Miguel Carceller (CERN)
    23/10/2024, 13:48
    Track 3 - Offline Computing
    Talk

    Key4hep, a software framework and stack for future accelerators, integrates all the steps in the typical offline pipeline: generation, simulation, reconstruction and analysis. The different components of Key4hep use a common event data model, called EDM4hep. For reconstruction, Key4hep leverages Gaudi, a proven framework already in use by several experiments at the LHC, to orchestrate...

    Go to contribution page
  266. Marta Bertran Ferrer (CERN)
    23/10/2024, 13:48
    Track 4 - Distributed Computing
    Talk

    JAliEn, the ALICE experiment's Grid middleware, utilizes whole-node scheduling to maximize resource utilization from participating sites. This approach offers flexibility in resource allocation and partitioning, allowing for customized configurations that adapt to the evolving needs of the experiment. This scheduling model is gaining traction among Grid sites due to its initial performance...

    Go to contribution page
  267. Rose Cooper
    23/10/2024, 14:06
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The File Transfer Service (FTS) is a bulk data mover responsible for queuing, scheduling, dispatching and retrying file transfer requests, making it a critical infrastructure component for many experiments. FTS is primarily used by the LHC experiments, namely ATLAS, CMS and LHCb, but is also used by some non-LHC experiments, including AMS and DUNE. FTS is an essential part of the data...

    Go to contribution page
  268. Zenny Jovi Joestar Wettersten (CERN)
    23/10/2024, 14:06
    Track 5 - Simulation and analysis tools
    Talk

    As the quality of experimental measurements increases, so does the need for Monte Carlo-generated simulated events, both in total amount and in precision. In perturbative methods this involves the evaluation of higher order corrections to the leading order (LO) scattering amplitudes, including real emissions and loop corrections. Although experimental uncertainties today...

    Go to contribution page
  269. Aashay Arora (Univ. of California San Diego (US))
    23/10/2024, 14:06
    Track 7 - Computing Infrastructure
    Talk

    The data reduction stage is a major bottleneck in processing data from the Large Hadron Collider (LHC) at CERN, which generates hundreds of petabytes annually for fundamental particle physics research. Here, scientists must refine petabytes into only gigabytes of relevant information for analysis. This data filtering process is limited by slow network speeds when fetching data from globally...

    Go to contribution page
  270. Maris Arthurs
    23/10/2024, 14:06
    Track 3 - Offline Computing
    Talk

    LUX-ZEPLIN (LZ) is a dark matter direct detection experiment using a dual-phase xenon time projection chamber with a 7-ton active volume. In 2022, the LZ collaboration published a world-leading limit on WIMP dark matter interactions with nucleons. The success of the LZ experiment hinges on the resilient design of both its hardware and software infrastructures. This talk will give an overview of...

    Go to contribution page
  271. Mr Pavel Goncharov
    23/10/2024, 14:06
    Track 2 - Online and real-time computing
    Talk

    The reconstruction of charged particle trajectories in tracking detectors is crucial for analyzing experimental data in high-energy and nuclear physics. Processing of the vast amount of data generated by modern experiments requires computationally efficient solutions to save time and resources. In response, we introduce TrackNET, a recurrent neural network specifically designed for track...

    Go to contribution page
  272. Maksim Melnik Storetvedt (CERN)
    23/10/2024, 14:06
    Track 4 - Distributed Computing
    Talk

    Job pilots in the ALICE Grid are increasingly tasked with managing the resources given to each job slot. With the emergence of more complex and multicore-oriented workflows, this has become an increasingly challenging process, as users often request arbitrary resources, in particular CPU and memory. This is further exacerbated by often having several user payloads...

    Go to contribution page
  273. Dr Vincenzo Eduardo Padulano (CERN)
    23/10/2024, 14:06
    Track 5 - Simulation and analysis tools
    Talk

    The ROOT software framework is widely used in HENP for storage, processing, analysis and visualization of large datasets. With the large increase in the use of ML in experiment workflows, especially in the last steps of the analysis pipeline, the matter of exposing ROOT data ergonomically to ML models becomes ever more pressing. This contribution presents the advancements in an...

    Go to contribution page
  274. Lia Lavezzi (INFN Torino (IT))
    23/10/2024, 14:24
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Modern physics experiments are often led by large collaborations including scientists and institutions from different parts of the world. To cope with the ever-increasing computing and storage demands, computing resources are nowadays offered as part of a distributed infrastructure. Einstein Telescope (ET) is a future third-generation interferometer for gravitational wave (GW) detection, and...

    Go to contribution page
  275. Yutaro Iiyama (University of Tokyo (JP))
    23/10/2024, 14:24
    Track 5 - Simulation and analysis tools
    Talk

    Quantum computers may revolutionize event generation for collider physics by allowing calculation of scattering amplitudes from full quantum simulation of field theories. Although rapid progress is being made in understanding how best to encode quantum fields onto the states of quantum registers, most formulations are lattice-based and would require an impractically large number of qubits when...

    Go to contribution page
  276. Torri Jeske
    23/10/2024, 14:24
    Track 2 - Online and real-time computing
    Talk

    Tracking charged particles resulting from collisions in the presence of a strong magnetic field is an important and challenging problem. Reconstructing the tracks from the hits that the generated particles create on the detector layers via ionization energy deposits is traditionally achieved with Kalman filters, which scale worse than linearly as the number of hits grows. To improve efficiency...

    Go to contribution page
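The Kalman filter mentioned above processes hits one at a time, alternating a prediction step (propagate the state and inflate its covariance) with an update step (pull the state towards the new measurement, weighted by the Kalman gain). A deliberately simplified one-dimensional illustration of that cycle, with made-up noise parameters and hit values, not the contribution's actual tracking code:

```python
def kalman_1d(hits, q=0.01, r=0.25):
    """Minimal 1D Kalman filter: estimate a static track parameter from noisy
    hits, one hit at a time. q: process noise, r: measurement noise variance."""
    x, p = hits[0], 1.0  # initial state estimate and covariance
    for z in hits[1:]:
        p += q               # predict: propagating inflates the covariance
        k = p / (p + r)      # Kalman gain: trust in the new measurement
        x += k * (z - x)     # update: pull the state towards the residual
        p *= (1.0 - k)       # update: the covariance shrinks
    return x, p

# Noisy hits scattered around a true parameter value of 1.0 (toy data)
hits = [1.02, 0.98, 1.05, 0.99, 1.01]
x_est, p_est = kalman_1d(hits)
```

Each new hit costs a constant amount of work here; in real trackers the state is a multi-dimensional vector and the per-hit cost grows with the number of hit candidates to consider, which is the scaling behaviour the contribution aims to improve on.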
  277. Paul Gessinger (CERN)
    23/10/2024, 14:24
    Track 3 - Offline Computing
    Talk

    ACTS is an experiment-independent toolkit for track reconstruction, designed from the ground up for thread-safety and high performance. It is built to accommodate different experiment deployment scenarios, and also serves as a community platform for research and development of new approaches and algorithms.

    A fundamental component of ACTS is the geometry library. It models a...

    Go to contribution page
  278. Florine de Geus (CERN/University of Twente (NL))
    23/10/2024, 14:24
    Track 5 - Simulation and analysis tools
    Talk

    With the large data volume increase expected for HL-LHC and the even more complex computing challenges set by future colliders, the need for more elaborate data access patterns will become more pressing. ROOT’s next-generation data format and I/O subsystem, RNTuple, is designed to address those challenges, currently already showing a clear improvement in storage and I/O efficiency with respect...

    Go to contribution page
  279. Ewoud Ketele (CERN)
    23/10/2024, 14:24
    Track 4 - Distributed Computing
    Talk

    Unified Experiment Monitoring (UEM) is the WLCG project whose objective is to harmonise WLCG job accounting reports across the LHC experiments, in order to provide aggregated reports of the compute capacity used by WLCG over time. This accounting overview of all LHC experiments is vital for WLCG strategy planning and therefore has the strong support of the LHC Committee...

    Go to contribution page
  280. Johannes Elmsheuser (Brookhaven National Laboratory (US))
    23/10/2024, 14:24
    Track 7 - Computing Infrastructure
    Talk

    With the large dataset expected from 2029 onwards by the HL-LHC at CERN, the ATLAS experiment is reaching the limits of the current data processing model in terms of traditional CPU resources based on x86_64 architectures and an extensive program for software upgrades towards the HL-LHC has been set up. The ARM CPU architecture is becoming a competitive and energy efficient alternative....

    Go to contribution page
  281. Fabio Hernandez (IN2P3 / CNRS computing centre)
    23/10/2024, 14:42
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The set of sky images recorded nightly by the camera mounted on the telescope of the Vera C. Rubin Observatory will be processed in facilities located on three continents. Data acquisition will happen at Cerro Pachón in the Andes mountains in Chile, where the observatory is located. A first copy of the raw image data set is stored at the summit site of the observatory and immediately...

    Go to contribution page
  282. Dr David Crooks (UKRI STFC)
    23/10/2024, 14:42
    Track 4 - Distributed Computing
    Talk

    The risk of cyber attack against members of the research and education sector remains persistently high, with several recent high visibility incidents including a well-reported ransomware attack against the British Library. As reported previously, we must work collaboratively to defend our community against such attacks, notably through the active use of threat intelligence shared with trusted...

    Go to contribution page
  283. Jim Pivarski (Princeton University)
    23/10/2024, 14:42
    Track 5 - Simulation and analysis tools
    Talk

    Uproot is a Python library for ROOT I/O that uses NumPy and Awkward Array to represent and perform computations on bulk data. However, Uproot uses pure Python to navigate through ROOT's data structures to find the bulk data, which can be a performance issue in metadata-intensive I/O: (a) many small files, (b) many small TBaskets, and/or (c) low compression overhead. Worse, these performance...

    Go to contribution page
  284. Diana Gaponcic (IT-PW-PI)
    23/10/2024, 14:42
    Track 7 - Computing Infrastructure
    Talk

    GPUs and accelerators are changing traditional High Energy Physics (HEP) deployments while also being the key to enable efficient machine learning. The challenge remains to improve the overall efficiency and sharing of what are currently expensive and scarce resources.

    In this paper we describe the common patterns of GPU usage in HEP, including spiky requirements with low overall...

    Go to contribution page
  285. James Whitehead
    23/10/2024, 14:42
    Track 5 - Simulation and analysis tools
    Talk

    The generation of large event samples with Monte Carlo Event Generators is expected to be a computational bottleneck for precision phenomenology at the HL-LHC and beyond. This is due in part to the computational cost incurred by negative weights in 'matched' calculations combining NLO perturbative QCD with a parton shower: for the same target uncertainty, a larger sample must be...

    Go to contribution page
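The cost of negative weights described above can be quantified with the effective sample size: for unit-magnitude weights with a negative fraction f, the statistical power of N events shrinks to N(1-2f)^2, so the required sample grows by 1/(1-2f)^2. A short sketch of this standard bookkeeping (the Kish effective-sample-size formula; the 25% figure is an illustrative choice, not from this contribution):

```python
def neg_weight_penalty(frac_negative: float) -> float:
    """Sample-size inflation factor for unit-magnitude weights of which a
    fraction f are negative: N_eff/N = (1 - 2f)^2, so the cost is its inverse."""
    return 1.0 / (1.0 - 2.0 * frac_negative) ** 2

def effective_sample_size(weights) -> float:
    """Kish effective sample size: (sum w)^2 / sum(w^2)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

# 100 unit-magnitude events with 25% negative weights carry the statistical
# power of only 25 unweighted events: a factor-4 penalty in sample size.
weights = [1.0] * 75 + [-1.0] * 25
ess = effective_sample_size(weights)
penalty = neg_weight_penalty(0.25)
```

This is why even a modest negative-weight fraction in matched NLO samples translates into a large increase in generation cost for a fixed target uncertainty.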
  286. Christian Sonnabend (CERN, Heidelberg University (DE))
    23/10/2024, 14:42
    Track 2 - Online and real-time computing
    Talk

    The ALICE Time Projection Chamber (TPC) is the detector with the highest data rate in the ALICE experiment at CERN and is the central detector for tracking and particle identification. Efficient online computing steps such as clusterization and tracking are mainly performed on GPUs, with throughputs of approximately 900 GB/s. Clusterization itself has a well-known background with a variety of...

    Go to contribution page
  287. Ben Salisbury (HISKP Bonn)
    23/10/2024, 14:42
    Track 3 - Offline Computing
    Talk

    To increase the automation of converting Computer-Aided-Design detector components as well as entire detector systems into simulatable ROOT geometries, TGeoArbN, a ROOT-compatible geometry class, was implemented, allowing the use of triangle meshes in VMC-based simulation. To improve simulation speed, a partitioning structure in the form of an octree can be utilized. TGeoArbN in combination with a...

    Go to contribution page
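The octree acceleration mentioned above works by recursively splitting a bounding box into eight children, so that mesh queries only visit the triangles stored in the octants a ray or point actually touches. The addressing logic can be sketched in a few lines; this is a generic illustration of octant indexing and descent, not the TGeoArbN implementation:

```python
def octant_index(point, center):
    """3-bit child index of `point` relative to an octree node's `center`:
    bit 0 set if x >= cx, bit 1 if y >= cy, bit 2 if z >= cz."""
    return sum(1 << axis for axis in range(3) if point[axis] >= center[axis])

def locate(point, center, half, depth):
    """Descend `depth` levels of a conceptual octree rooted at a node with the
    given center and half-width, returning the path of child indices visited."""
    path = []
    for _ in range(depth):
        path.append(octant_index(point, center))
        half /= 2.0  # each child node is half the size of its parent
        center = tuple(
            c + (half if point[axis] >= c else -half)
            for axis, c in enumerate(center)
        )
    return path

# Locate a point two levels deep in a tree rooted at the unit cube's center
path = locate((0.6, -0.3, 0.9), (0.0, 0.0, 0.0), 1.0, 2)
```

In a mesh-navigation setting each visited node would hold the triangles overlapping its box, so an intersection test only examines a small local subset instead of the full mesh.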
  288. Kyle Knoepfel (Fermi National Accelerator Laboratory)
    23/10/2024, 15:00
    Track 4 - Distributed Computing
    Talk

    GlideinWMS was one of the first middleware systems in the WLCG community to transition from X.509 and also support tokens. The first step was to go from the 2019 prototype to using tokens in production in 2022. This paper will present the challenges introduced by the wider adoption of tokens and the evolution plans for securing the pilot infrastructure of GlideinWMS and supporting the new...

    Go to contribution page
  289. Torri Jeske
    23/10/2024, 15:00
    Track 2 - Online and real-time computing
    Talk

    Polarized cryo-targets and polarized photon beams are widely used in experiments at Jefferson Lab. Traditional methods for maintaining optimal polarization involve manual adjustments throughout data taking, an approach that is prone to inconsistency and human error. Implementing machine learning-based control systems can improve the stability of the polarization without relying on human...

    Go to contribution page
  290. Salvatore La Cagnina
    23/10/2024, 15:00
    Track 5 - Simulation and analysis tools
    Talk

    The generation of Monte Carlo events is a crucial step for all particle collider experiments. Accurately simulating the hard scattering processes is the foundation for subsequent steps, such as QCD parton showering, hadronization, and detector simulations. A major challenge in event generation is the efficient sampling of the phase spaces of hard scattering processes due to the potentially...

    Go to contribution page
  291. Tristan Bloomfield (KEK IPNS)
    23/10/2024, 15:00
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The Belle II raw data transfer system is responsible for transferring raw data from the Belle II detector to the local KEK computing centre, and from there to the GRID. The Belle II experiment recently completed its first Long Shutdown period, during which many upgrades were made to the detector and to the tools used to handle and analyse the data. The Belle II data acquisition (DAQ) systems...

    Go to contribution page
  292. Luis Guilherme Neri Ferreira (Federal University of Rio de Janeiro (BR))
    23/10/2024, 15:00
    Track 7 - Computing Infrastructure
    Talk

    The Glance project provides software solutions for managing high-energy physics collaborations' data and workflows. It was started in 2003 and operates in the ALICE, AMBER, ATLAS, CMS, and LHCb CERN experiments on top of CERN common infrastructure. The project develops web applications using PHP and Vue.js, running on CentOS virtual machines hosted on the CERN OpenStack private cloud. These...

    Go to contribution page
  293. Julius Hrivnac (Université Paris-Saclay (FR))
    23/10/2024, 15:00
    Track 5 - Simulation and analysis tools
    Talk

    Representing HEP and astrophysics data as graphs (i.e. networks of related entities) is becoming increasingly popular. These graphs are not only useful for structuring data storage but are also increasingly utilized within various machine learning frameworks.

    However, despite their rising popularity, numerous unused opportunities exist, particularly concerning the utilization of graph...

    Go to contribution page
  294. Jiahui Zhuo (Univ. of Valencia and CSIC (ES)), Volodymyr Svintozelskyi (Univ. of Valencia and CSIC (ES))
    23/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    One of the most significant challenges in track reconstruction is the reduction of "ghost tracks," which are composed of false hit combinations in the detectors. Performing track reconstruction in real time at 30 MHz introduces the difficulty of meeting high efficiency and throughput requirements simultaneously. A single-layer feed-forward neural network (NN) has been developed and trained...
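    A single-layer feed-forward classifier of the kind described can be sketched in a few lines of NumPy; the feature set, layer width, random weights, and 0.5 cut below are illustrative stand-ins for the trained network, not its actual configuration:

```python
import numpy as np

# Hypothetical single-layer feed-forward ghost-track classifier:
# track-quality features in, ghost probability out. The weights are
# random placeholders, not the trained network.
rng = np.random.default_rng(0)
n_features = 4            # e.g. fit chi2, number of hits, pT, eta (illustrative)
n_hidden = 8

W1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1))
b2 = np.zeros(1)

def ghost_probability(x):
    """One hidden layer with ReLU, then a sigmoid output in (0, 1)."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

tracks = rng.normal(size=(5, n_features))   # five candidate tracks
p = ghost_probability(tracks)
kept = tracks[p[:, 0] < 0.5]                # keep tracks below the ghost cut
```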

    Go to contribution page
  295. Enrico Vianello (INFN-CNAF)
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    The Italian National Institute for Nuclear Physics (INFN) has recently launched the INFN Cloud initiative, aimed at providing a federated Cloud infrastructure and a dynamic portfolio of services to scientific communities supported by the Institute. The federative middleware of INFN Cloud is based on the INDIGO PaaS orchestration system, consisting of interconnected open-source microservices....

    Go to contribution page
  296. Nora Bluhme (Goethe University Frankfurt (DE))
    23/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The future Compressed Baryonic Matter experiment (CBM), which is currently being planned and will be realised at the Facility for Antiproton and Ion Research (FAIR), is dedicated to the investigation of heavy-ion collisions at high interaction rates. For this purpose, a track-based software alignment is necessary to determine the precise detector component positions with sufficient accuracy....

    Go to contribution page
  297. Baosong Shan (Beihang University (CN))
    23/10/2024, 15:18
    Track 6 - Collaborative software and maintainability
    Poster

    The Alpha Magnetic Spectrometer (AMS) is a particle physics experiment installed aboard the International Space Station (ISS), operating since May 2011 and expected to continue through 2030 and beyond. Data reconstruction and Monte Carlo simulation are the two major production activities in AMS offline computing, and templates are defined as collections of data cards that describe different...

    Go to contribution page
  298. Jacopo Siniscalco
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    The LUX-ZEPLIN (LZ) experiment is a world-leading direct dark matter detection experiment, implementing a dual-phase Xe Time Projection Chamber (TPC) design. The success of the experiment necessitates an in-depth characterization of the pertinent backgrounds, which in turn implies a heavy simulation burden. In this talk, I will present the infrastructure that was developed to allocate and...

    Go to contribution page
  299. Silvio Pardi (University Federico II and INFN, Naples (IT))
    23/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The Belle II experiment relies on a distributed computing infrastructure spanning 19 countries and over 50 sites. It is expected to generate approximately 40 TB/day of raw data in 2027, necessitating distribution from the High Energy Accelerator Research Organization (KEK) in Japan to six Data Centers across the USA, Europe, and Canada. Establishing a high-quality network has been a priority...

    Go to contribution page
  300. Aashay Arora (Univ. of California San Diego (US))
    23/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    In anticipation of the High Luminosity LHC era, there is a critical need to ensure software readiness for the upcoming growth in network traffic for production and user data-analysis access. This paper examines the software and hardware improvements required at US-CMS Tier-2 sites to sustain and meet the projected 400 Gbps bandwidth demands, while tackling the challenge posed by varying...

    Go to contribution page
  301. Dr Vardan Gyurjyan
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    Managing and orchestrating complex data processing pipelines require advanced systems capable of handling diverse and collaborative components, such as data acquisition, streaming, aggregation, event identification, distribution, detector calibration, processing, analytics, and archiving. This paper introduces a data processing workflow description and orchestration system designed to...

    Go to contribution page
  302. Aaron Jomy (Princeton University (US))
    23/10/2024, 15:18
    Track 9 - Analysis facilities and interactive computing
    Poster

    The Cling C++ interpreter has transformed language bindings by enabling incremental compilation at runtime. This allows Python to interact with C++ on demand and lazily construct bindings between the two. The emergence of Clang-REPL as a potential alternative to Cling within the LLVM compiler framework highlights the need for a unified framework for interactive C++ technologies.

    We present...

    Go to contribution page
  303. Kyle Knoepfel (Fermi National Accelerator Laboratory)
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The processing tasks of an event-processing workflow in high-energy and nuclear physics (HENP) can typically be represented as a directed acyclic graph formed according to the data flow—i.e. the data dependencies among algorithms executed as part of the workflow. With this representation, an HENP framework can optimally execute a workflow, exploiting the parallelism inherent among independent...
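    The idea, independent of any particular framework, can be sketched with Python's standard-library topological sorter; the algorithm and product names below are invented for illustration:

```python
from graphlib import TopologicalSorter

# Each data product lists the products it depends on; the framework's
# scheduler derives a valid execution order from the resulting DAG.
# Names are illustrative, not from any real HENP workflow.
depends_on = {
    "raw": [],
    "hits": ["raw"],
    "clusters": ["hits"],
    "tracks": ["hits"],
    "vertices": ["tracks", "clusters"],
}

order = list(TopologicalSorter(depends_on).static_order())
# "clusters" and "tracks" only depend on "hits", so a parallel scheduler
# (via TopologicalSorter.prepare()/get_ready()) could run them concurrently.
```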

    Go to contribution page
  304. Qingbao Hu (IHEP), Yaosong Cheng (Institute of High Energy Physics Chinese Academy of Sciences, IHEP)
    23/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The High Energy Photon Source (HEPS) in China will be among the world's fourth-generation synchrotron light sources with the lowest emittance and highest brightness. The 14 beamlines of HEPS phase I will produce about 300 PB/year of raw data, posing significant challenges in data storage, data access, and data exchange. In order to balance the cost-effectiveness of storage devices...

    Go to contribution page
  305. 李亚康 liyk
    23/10/2024, 15:18
    Track 9 - Analysis facilities and interactive computing
    Poster

    In neutron scattering experiments, the complexity of data analysis and the demand for computational resources have significantly increased. To address these challenges, we have developed a remote desktop system for neutron scattering data analysis based on the OpenStack platform. This system leverages WebRTC technology to build a push-pull streaming service system, which includes the...

    Go to contribution page
  306. Mr Pawan Kumar Sharma (VECC Kolkata)
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The CBM experiment at FAIR-SIS100 will investigate strongly interacting matter at high baryon density and moderate temperature. One of the proposed key observables is the measurement of low-mass vector mesons (LMVMs), which can be detected via their di-lepton decay channel. As the decay leptons leave the dense and hot fireball without further interactions, they can provide unscathed...

    Go to contribution page
  307. Ric Evans (Wisconsin IceCube Particle Astrophysics Center)
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    How does one take a workload, consisting of millions or billions of tasks, and group it into tens of thousands of jobs? Partitioning the workload into a workflow of long-running jobs minimizes the use of scheduler resources; however, smaller, more fine-grained jobs allow more efficient use of computing resources. When the runtime of a task averages a minute or less, severe scaling challenges...
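    The trade-off can be made concrete with a greedy packer that fills each job up to a target wall time; the one-minute tasks and two-hour budget are made-up numbers, not IceCube's configuration:

```python
# Greedy sketch: pack many short tasks into jobs so each job runs for
# roughly a target wall time, trading scheduler overhead for utilization.
def pack_tasks(task_runtimes, target_job_seconds):
    jobs, current, total = [], [], 0.0
    for task_id, runtime in enumerate(task_runtimes):
        current.append(task_id)
        total += runtime
        if total >= target_job_seconds:   # job is "full": start a new one
            jobs.append(current)
            current, total = [], 0.0
    if current:                           # leftover tasks form a final job
        jobs.append(current)
    return jobs

# 1000 one-minute tasks with a ~2-hour budget: 120 tasks per job,
# so 8 full jobs plus one partial job of 40 tasks.
jobs = pack_tasks([60.0] * 1000, target_job_seconds=2 * 3600)
```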

    Go to contribution page
  308. Dr Fernando Abudinén (University of Warwick (GB))
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The EvtGen generator, an essential tool for the simulation of heavy-flavour hadron decays, has recently gone through a modernisation campaign aiming to implement thread safety. A first iteration of this concluded with an adaptation of the core software, where we identified possibilities for future developments to further exploit the capabilities of multi-threaded processing. However, the...

    Go to contribution page
  309. Marco Mascheroni (Univ. of California San Diego (US))
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    GlideinWMS, a widely utilized workload management system in high-energy physics (HEP) research, serves as the backbone for efficient job provisioning across distributed computing resources. It is utilized by various experiments and organizations, including CMS, OSG, DUNE, and FIFE, to create HTCondor pools as large as 600k cores. In particular, a shared factory service historically deployed at...

    Go to contribution page
  310. Kadir Murat Tastepe (Heidelberg University (DE))
    23/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    Online reconstruction of charged particle tracks is one of the most computationally intensive tasks within current and future filter farms of large HEP experiments, requiring clever algorithms and appropriate hardware choices for its acceleration. The General Triplet Track Fit is a novel track-fitting algorithm that offers great potential for speed-up by processing triplets of hits...

    Go to contribution page
  311. Jonathan Samudio (Baylor University (US))
    23/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    In response to increasing data challenges, CMS has adopted the use of GPU offloading at the High-Level Trigger (HLT). However, GPU acceleration is often hardware specific, and increases the maintenance burden on software development. The Alpaka (Abstraction Library for Parallel Kernel Acceleration) portability library offers a solution to this issue, and has been implemented into the CMS...

    Go to contribution page
  312. Benedikt Riedel
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    An Artificial Intelligence (AI) model will spend “90% of its lifetime in inference.” Fully utilizing coprocessors, such as FPGAs or GPUs, for AI inference requires O(10) CPU cores to feed work to the coprocessors. Traditional data analysis pipelines will not be able to effectively and efficiently use the coprocessors to their full potential. To allow for distributed access to coprocessors...
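    The feeding pattern behind the O(10)-cores-per-coprocessor figure can be sketched with a thread-safe queue: several CPU feeders prepare requests while a consumer drains them in batches, as a GPU or FPGA inference server would. All counts and names here are illustrative:

```python
import queue
import threading

requests = queue.Queue()
N_FEEDERS, N_EVENTS, BATCH = 4, 100, 16   # illustrative numbers

def feeder(worker_id):
    # Each CPU feeder thread pushes its share of preprocessed events.
    for i in range(N_EVENTS // N_FEEDERS):
        requests.put((worker_id, i))

threads = [threading.Thread(target=feeder, args=(w,)) for w in range(N_FEEDERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the queue in coprocessor-sized batches (the inference step itself
# is omitted); 100 requests in batches of up to 16 gives 7 batches.
batches = []
while not requests.empty():
    batch = []
    while len(batch) < BATCH and not requests.empty():
        batch.append(requests.get())
    batches.append(batch)
```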

    Go to contribution page
  313. Andrew McNab (University of Manchester)
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    We describe the justIN workflow management system developed by DUNE to address its unique requirements and constraints. The DUNE experiment will start running in 2029, recording 30 PB/year of raw data from the detectors, with typical readouts at the scale of gigabytes, but with regular supernova candidate readouts of several hundred terabytes. DUNE benefits from the rich heritage of neutrino...

    Go to contribution page
  314. Mark Nicholas Matthewman (HEPHY)
    23/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The High Luminosity phase of the LHC (HL-LHC) will offer a greatly increased number of events for more precise standard model measurements and BSM searches. To cope with the harsh environment created by numerous simultaneous proton-proton collisions, the CMS Collaboration has begun construction of a new endcap calorimeter, the High-Granularity Calorimeter (HGCAL). As part of this project, a...

    Go to contribution page
  315. 尹维卿 yinwq
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    On behalf of the JUNO collaboration.
    The Jiangmen Underground Neutrino Observatory (JUNO), located in Southern China, is a neutrino experiment aiming to determine the neutrino mass ordering (NMO) and precisely measure neutrino oscillation parameters. JUNO is expected to operate over 20-30 years, generating approximately 2 PB of raw data annually. The offline data processing workflow involves data...

    Go to contribution page
  316. Dr Binbin Qi (University of Science and Technology of China (USTC))
    23/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The Super Tau-Charm Facility (STCF) is a proposed electron-positron collider in China, designed to achieve a peak luminosity exceeding $\rm 0.5 \times 10^{35} \ cm^{-2} s^{-1}$ and a center-of-mass energy ranging from 2 to 7 GeV. To meet the particle identification (PID) requirements essential for the physics goals of the STCF experiment, a dedicated PID system is proposed to identify $\rm...

    Go to contribution page
  317. Nick Smith (Fermi National Accelerator Lab. (US))
    23/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    In CMS, data access and management is organized around the data-tier model: a static definition of what subset of event information is available in a particular dataset, realized as a collection of files. In previous works, we have proposed a novel data management model that obviates the need for data tiers by exploding files into individual event data product objects. We present here a study...

    Go to contribution page
  318. Mathis Frahm (Hamburg University (DE))
    23/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    To study and search for increasingly rare physics processes at the LHC, a staggering amount of data needs to be analyzed with progressively complex methods. Analyses involving tens of billions of recorded and simulated events, multiple machine learning algorithms for different purposes, and 100 or more systematic variations are no longer uncommon. These conditions impose a complex...

    Go to contribution page
  319. Dr Charles Leggett (Lawrence Berkeley National Lab (US))
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    High energy physics experiments are making increasing use of GPUs and GPU dominated High Performance Computer facilities. Both the software and hardware of these systems are rapidly evolving, creating challenges for experiments to make informed decisions as to where they wish to devote resources. In its first phase, the High Energy Physics Center for Computational Excellence (HEP-CCE) produced...

    Go to contribution page
  320. Aryan Roy
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Uproot can read ROOT files directly in pure Python but cannot (yet) compute expressions in ROOT’s TTreeFormula expression language. Despite its popularity, this language has only one implementation and no formal specification. In a package called “formulate,” we defined the language’s syntax in standard BNF and parse it with Lark, a fast and modern parsing toolkit in Python. With formulate,...
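    The dialect-translation idea can be illustrated with a tiny token-rewriting sketch in pure Python; formulate itself parses a full BNF grammar with Lark, so the replacement table below is an oversimplified stand-in, not its API:

```python
import re

# Map a tiny subset of ROOT TTreeFormula syntax to a numexpr-style
# expression. Illustrative only: the real "formulate" package builds a
# proper parse tree from a BNF grammar instead of rewriting tokens.
REPLACEMENTS = [
    (r"TMath::Sqrt", "sqrt"),
    (r"TMath::Abs", "abs"),
    (r"&&", " & "),
    (r"\|\|", " | "),
    (r"!(?!=)", "~"),          # logical not, but leave "!=" untouched
]

def ttreeformula_to_numexpr(expr: str) -> str:
    for pattern, replacement in REPLACEMENTS:
        expr = re.sub(pattern, replacement, expr)
    return re.sub(r"\s+", " ", expr).strip()

out = ttreeformula_to_numexpr("TMath::Sqrt(px*px + py*py) > 500 && !bad")
# -> "sqrt(px*px + py*py) > 500 & ~bad"
```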

    Go to contribution page
  321. Dr Michael Kirby (Brookhaven National Laboratory)
    23/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The DUNE experiment will produce vast amounts of metadata, which describe the data coming from the read-out of the primary DUNE detectors. Various databases will collect the metadata from different sources. The conditions data, which is the subset of all the metadata that is accessed during the offline reconstruction and analysis, will be stored in a dedicated database. ProtoDUNE at CERN is...

    Go to contribution page
  322. 李骋 chengli
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    As a WLCG prototype T1 site, IHEP's network performance directly impacts the site's reliability. The current primary method for measuring network performance is implemented through perfSONAR, which actively measures performance metrics such as bandwidth, connection status, one-way and two-way latency, packet loss rate, and jitter between IHEP and other sites. However, there is a lack of...

    Go to contribution page
  323. Martin Beyer (Justus-Liebig-Universität Gießen)
    23/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The Compressed Baryonic Matter experiment (CBM) at FAIR is designed to explore the QCD phase diagram at high baryon densities with interaction rates up to 10 MHz using triggerless free-streaming data acquisition. For the overall PID, the CBM Ring Imaging Cherenkov detector (RICH) contributes by identifying electrons from the lowest momenta up to 10 GeV/c, with a pion suppression of > 100. The RICH...

    Go to contribution page
  324. Emmanuel Moutoussamy (University of Bergen, Norway)
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Research groups at scientific institutions have an increasing demand for computing and storage resources. The national High-Performance Computing (HPC) systems usually have a high entry threshold, and cloud solutions can be challenging, demanding a steep learning curve.

    Here we introduce the Scientific NREC Cluster (SNC), which leverages the Norwegian Research and Education Cloud...

    Go to contribution page
  325. Raffaella Radogna (Universita e INFN, Bari (IT))
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The proposal to create a multi-TeV Muon Collider presents an unprecedented opportunity for advancing high-energy physics research, offering the possibility to accurately measure the Higgs couplings to other Standard Model particles and to search for new physics at the TeV scale.
    This demands accurate full-event reconstruction and particle identification. However, this is complicated by the...

    Go to contribution page
  326. Antonio Nappi (CERN)
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    The CERN Single Sign-On (SSO) hosting infrastructure underwent a major reconstruction in 2023 in an effort to increase service reliability and operational efficiency.
    This session will outline how Cloud Native Computing Foundation (CNCF) tools facilitated this, with particular attention to the key decisions, difficulties, and architectural concerns for this critical IT service.

    Go to contribution page
  327. Lisa Zangrando, Marco Verlato (Universita e INFN, Padova (IT))
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    CloudVeneto is a distributed private cloud, which harmonizes the resources of two INFN units and the University of Padua. Tailored to meet the specialized scientific computing needs of user communities within these organizations, it promotes collaboration and enhances innovation. CloudVeneto basically implements an OpenStack-based IaaS (Infrastructure-as-a-Service) cloud. However, users are...

    Go to contribution page
  328. Joel Murray Davies (CERN)
    23/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    For nearly five decades, Data Centre Operators provided critical support to the CERN Meyrin Data Centre, from its infancy until spring 2024. However, advancements in Data Centre technology and the resilience built into IT services have rendered the Console Service obsolete.

    In the early days of the Meyrin Data Centre, day-to-day operations relied heavily on the expertise and manual...

    Go to contribution page
  329. Pralay Kumar Das
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    In low-energy nuclear physics experiments, an Active Target Time Projection Chamber (AT-TPC) [1] can be advantageous for studying nuclear reaction kinematics. The α-cluster decay of $^{12}C$ is one such reaction requiring careful investigation due to its vital role in producing heavy elements through astrophysical processes [2]. The breakup mechanism of the Hoyle state, a highly α-clustered...

    Go to contribution page
  330. Marco Donadoni (CERN)
    23/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    We present the new user-sharing feature of the REANA reproducible analysis platform, which allows researchers to share selected workflow runs, job logs, and output files with colleagues. The analyst retains full read-write access to the workflow and may opt to grant individual read-only access to colleagues, possibly for a limited period of time. The workflow sharing feature...

    Go to contribution page
  331. Enric Tejedor Saavedra (CERN)
    23/10/2024, 16:15
    Track 9 - Analysis facilities and interactive computing
    Talk

    Experiment analysis frameworks, physics data formats and expectations of scientists at the LHC have been evolving towards interactive analysis with short turnaround times. Several sites in the community have reacted by setting up dedicated Analysis Facilities, providing tools and interfaces to computing and storage resources suitable for interactive analysis. It is expected that this demand...

    Go to contribution page
  332. Marcin Nowak (Brookhaven National Laboratory (US))
    23/10/2024, 16:15
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Since the start of the LHC in 2008, the ATLAS experiment has relied on ROOT to provide storage technology for all its processed event data. Internally, ROOT files are organized around TTree structures that are capable of storing complex C++ objects. The capabilities of TTrees have developed over the years and now offer support for advanced concepts like polymorphism, schema evolution, and user...

    Go to contribution page
  333. Zach Marshall (Lawrence Berkeley National Lab. (US))
    23/10/2024, 16:15
    Track 7 - Computing Infrastructure
    Talk

    The ATLAS Collaboration operates a large, distributed computing infrastructure: almost 1M cores of computing and almost 1 EB of data are distributed over about 100 computing sites worldwide. These resources contribute significantly to the total carbon footprint of the experiment, and they are expected to grow by a large factor as a part of the experimental upgrades for the HL-LHC at the end of...

    Go to contribution page
  334. David Rohr (CERN)
    23/10/2024, 16:15
    Track 2 - Online and real-time computing
    Talk

    ALICE is the dedicated heavy-ion experiment at the LHC at CERN and records lead-lead collisions at a rate of up to 50 kHz. The detector with the highest data rate, up to 3.4 TB/s, is the TPC. ALICE performs the full online TPC processing, corresponding to more than 95% of the total workload, on GPUs, and when there is no beam in the LHC, the online computing farm's GPUs are used to speed up...

    Go to contribution page
  335. Charis Kleio Koraka (University of Wisconsin Madison (US))
    23/10/2024, 16:15
    Track 3 - Offline Computing
    Talk

    Electrons are one of the key particles detected by the CMS experiment and are reconstructed using the CMS software (CMSSW). Reconstructing electrons in CMSSW is a computationally intensive task that is split into several steps, seeding being the most time-consuming one. During the electron seeding process, the collection of tracker hits (seeds) is significantly reduced by selecting only...

    Go to contribution page
  336. Panos Paparrigopoulos (CERN)
    23/10/2024, 16:15
    Track 4 - Distributed Computing
    Talk

    The WLCG infrastructure is quickly evolving thanks to technology evolution in all areas of LHC computing: storage, network, alternative processor architectures, new authentication & authorization mechanisms, etc. This evolution also has to address challenges like the seamless integration of HPC and cloud resources, the significant rise of energy costs, licensing issues and support changes....

    Go to contribution page
  337. Iason Krommydas (Rice University (US))
    23/10/2024, 16:15
    Track 5 - Simulation and analysis tools
    Talk

    Model fitting using likelihoods is a crucial part of many analyses in HEP.
    zfit started over five years ago with the goal of providing this capability within the Python analysis ecosystem by offering a variety of advanced features and high performance tailored to the needs of HEP.
    After numerous iterations with users and continuous development, zfit has reached a maturity stage with a stable...

    Go to contribution page
  338. Alex Owen (University of London (GB))
    23/10/2024, 16:33
    Track 7 - Computing Infrastructure
    Talk

    As UKRI moves towards a NetZero Digital Research Infrastructure [1] an understanding of how carbon costs of computing infrastructures can be allocated to individual scientific payloads will be required. The IRIS community [2] forms a multi-site heterogenous infrastructure so is a good testing ground to develop carbon allocation models with wide applicability.

    The IRISCAST Project [3,4]...

    Go to contribution page
  339. Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE))
    23/10/2024, 16:33
    Track 9 - Analysis facilities and interactive computing
    Talk

    The National Analysis Facility (NAF) at DESY has been in production for nearly 15 years. Over various stages of development, experience gained in continuous operations has been fed back and integrated into the evolving NAF. As a "living" infrastructure, one fundamental constituent of the NAF is the close contact between NAF users, NAF admins, and storage admins and developers. Since the...

    Go to contribution page
  340. Nuno Dos Santos Fernandes (Laboratory of Instrumentation and Experimental Particle Physics (PT))
    23/10/2024, 16:33
    Track 2 - Online and real-time computing
    Talk

    ATLAS is one of the two general-purpose experiments at the Large Hadron Collider (LHC), aiming to detect a wide variety of physics processes. Its trigger system plays a key role in selecting the events that are detected, filtering them down from the 40 MHz bunch crossing rate to the 1 kHz rate at which they are committed to storage. The ATLAS trigger works in two stages, Level-1 and the...

    Go to contribution page
  341. Haakon Andre Reme-Ness (Western Norway University of Applied Sciences (NO))
    23/10/2024, 16:33
    Track 4 - Distributed Computing
    Talk

    This paper presents a comprehensive analysis of the implementation and performance enhancements of the new job optimizer service within the JAliEn (Java ALICE environment) middleware framework developed for the ALICE grid. The job optimizer service aims to efficiently split large-scale computational tasks into smaller grid jobs, thereby optimizing resource utilization and throughput of the...

    Go to contribution page
  342. Lawrence Ng
    23/10/2024, 16:33
    Track 5 - Simulation and analysis tools
    Talk

    NIFTy [1], a probabilistic programming framework developed for astrophysics, has recently been adapted to be used in partial wave analyses (PWA) at the COMPASS [2] experiment located at CERN. A non-parametric model, described as a correlated field, is used to characterize kinematically smooth complex-binned amplitudes. Parametric models, like a Breit-Wigner distribution, can also be mixed...

    Go to contribution page
  343. Nick Smith (Fermi National Accelerator Lab. (US))
    23/10/2024, 16:33
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    ROOT is planning to move from TTree to RNTuple as the data storage format for HL-LHC in order to, for example, speed up the IO, make the files smaller, and have a modern C++ API. Initially, RNTuple was not planned to support the same set of C++ data structures as TTree supports. CMS has explored the necessary transformations in its standard persistent data types to switch to RNTuple. Many...

    Go to contribution page
  344. Beomki Yeo (University of California Berkeley (US))
    23/10/2024, 16:33
    Track 3 - Offline Computing
    Talk

    GPUs are expected to be a key solution to the data challenges posed by track reconstruction in future high energy physics experiments. traccc, an R&D project within the ACTS track reconstruction toolkit, aims to demonstrate tracking algorithms in GPU programming models, including CUDA and SYCL, without loss of physics performance such as tracking efficiency and fitted-parameter resolution. We...

    Go to contribution page
  345. Yizhou Zhang (Institute of High Energy Physics)
    23/10/2024, 16:51
    Track 3 - Offline Computing
    Talk

    The Circular Electron Positron Collider (CEPC) is a future experiment mainly designed to precisely measure the Higgs boson’s properties and search for new physics beyond the Standard Model. In the design of the CEPC detector, the VerTeX detector (VTX) is the innermost tracker, playing a dominant role in determining the vertices of a collision event. The VTX detector is also responsible for...

    Go to contribution page
  346. Alexander Lory (Ludwig Maximilians Universitat (DE))
    23/10/2024, 16:51
    Track 4 - Distributed Computing
    Talk

    HammerCloud (HC) is a framework for testing and benchmarking resources of the world wide LHC computing grid (WLCG). It tests the computing resources and the various components of distributed systems with workloads that can range from very simple functional tests to full-chain experiment workflows. This contribution concentrates on the ATLAS implementation, which makes extensive use of HC for...

    Go to contribution page
  347. Emanuele Simili (University of Glasgow (GB))
    23/10/2024, 16:51
    Track 7 - Computing Infrastructure
    Talk

    The Glasgow ScotGrid facility is now a truly heterogeneous site, with over 4k ARM cores representing 20% of our compute nodes, which has enabled large-scale testing by the experiments and more detailed investigations of performance in a production environment. We present here a number of updates and new results related to our efforts to optimise power efficiency for High Energy Physics (HEP)...

    Go to contribution page
  348. Piero Viscone (CERN & University of Zurich (CH))
    23/10/2024, 16:51
    Track 2 - Online and real-time computing
    Talk

    In preparation for the High Luminosity LHC (HL-LHC) run, the CMS collaboration is working on an ambitious upgrade project for the first stage of its online selection system: the Level-1 Trigger. The upgraded system will use powerful field-programmable gate arrays (FPGA) processors connected by a high-bandwidth network of optical fibers. The new system will access highly granular calorimeter...

    Go to contribution page
  349. Dr Byrav Ramamurthy (University of Nebraska-Lincoln)
    23/10/2024, 16:51
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Although caching-based efforts [1] have been in place in the LHC infrastructure in the US, we show that integrating intelligent prefetching and targeted dataset placement into the underlying caching strategy can improve job efficiency further. Newer experiments and experiment upgrades such as HL-LHC and DUNE are expected to produce 10x the amount of data currently being produced. This...

    Go to contribution page
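    As a rough illustration of why prefetching helps, the sketch below (all names and numbers are hypothetical; this is not the caching software referred to in the talk) adds a prefetch hint to a toy LRU cache, so that files expected by upcoming jobs are resident before the first access:

```python
from collections import OrderedDict

class PrefetchingCache:
    """Toy LRU cache with a prefetch hint (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order doubles as recency order
        self.hits = 0
        self.misses = 0

    def _insert(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
        self.store[key] = True

    def prefetch(self, keys):
        # Dataset-placement hint: pull expected files in before jobs request them.
        for k in keys:
            self._insert(k)

    def access(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)
        else:
            self.misses += 1
            self._insert(key)
```

    With a cold cache, the first access to every file is a miss; prefetching the same set of keys beforehand turns those first accesses into hits.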
  350. Jonas Rembser (CERN)
    23/10/2024, 16:51
    Track 5 - Simulation and analysis tools
    Talk

    With the growing datasets of HE(N)P experiments, statistical analysis becomes more computationally demanding, requiring improvements in existing statistical analysis algorithms and software. One way forward is to use Machine Learning (ML) techniques to approximate the otherwise intractable likelihood ratios. Likelihood fits in HEP are often done with RooFit, a C++ framework for statistical...

    Go to contribution page
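    The core idea of approximating likelihood ratios with ML can be illustrated with the standard "likelihood-ratio trick": a classifier trained to separate samples drawn from the two hypotheses outputs a score s(x) ≈ p1/(p0 + p1), from which the ratio is recovered as s/(1 − s). A self-contained toy with analytic Gaussian densities standing in for a trained classifier (illustrative only; this is not RooFit code):

```python
import math

def gauss(x, mu):
    # Unit-width Gaussian density (toy hypothesis densities)
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def optimal_score(x):
    # What an ideal classifier trained on p1 = N(1,1) vs p0 = N(0,1) would output
    p1, p0 = gauss(x, 1.0), gauss(x, 0.0)
    return p1 / (p0 + p1)

def likelihood_ratio(score):
    # The likelihood-ratio trick: r(x) = s(x) / (1 - s(x)) = p1(x) / p0(x)
    return score / (1.0 - score)
```

    For these two Gaussians the exact ratio is exp(x − 0.5), so the reconstruction can be checked in closed form; with a real neural-network classifier the same algebra applies to its learned score.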
  351. Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)
    23/10/2024, 16:51
    Track 9 - Analysis facilities and interactive computing
    Talk

    The anticipated surge in data volumes generated by the LHC in the coming years, especially during the High-Luminosity LHC phase, will reshape how physicists conduct their analysis. This necessitates a shift in programming paradigms and techniques for the final stages of analysis. As a result, there's a growing recognition within the community of the need for new computing infrastructures...

    Go to contribution page
  352. Maciej Pawel Szymanski (Argonne National Laboratory (US))
    23/10/2024, 17:09
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) will increase luminosity and the number of events by an order of magnitude, demanding more concurrent processing. Event processing is trivially parallel, but metadata handling is more complex and breaks that parallelism. However, correct and reliable in-file metadata is crucial for all workflows of the experiment, enabling tasks...

    Go to contribution page
  353. Oliver Schulz (Max-Planck-Institut für Physik)
    23/10/2024, 17:09
    Track 5 - Simulation and analysis tools
    Talk

    The Bayesian Analysis Toolkit in Julia (BAT.jl) is an open source software package that provides user-friendly tooling to tackle statistical problems encountered in Bayesian (and not just Bayesian) inference.

    BAT.jl succeeds the very successful BAT-C++ (over 500 citations) using the modern Julia language. We chose Julia because of its high performance, native automatic differentiation, support...

    Go to contribution page
  354. Lincoln Bryant (University of Chicago (US))
    23/10/2024, 17:09
    Track 9 - Analysis facilities and interactive computing
    Talk

    We explore the adoption of cloud-native tools and principles to forge flexible and scalable infrastructures, aimed at supporting analysis frameworks being developed for the ATLAS experiment in the High Luminosity Large Hadron Collider (HL-LHC) era. The project culminated in the creation of a federated platform, integrating Kubernetes clusters from various providers such as Tier-2 centers,...

    Go to contribution page
  355. Matteo Concas (CERN)
    23/10/2024, 17:09
    Track 3 - Offline Computing
    Talk

    During Run 3, ALICE has enhanced its data processing and reconstruction chain by integrating GPUs, a leap forward in utilising high-performance computing at the LHC.

    The initial 'synchronous' phase engages GPUs to reconstruct and compress data from the TPC detector. Subsequently, the 'asynchronous' phase partially frees GPU resources, allowing further offloading of additional reconstruction...

    Go to contribution page
  356. Natalia Diana Szczepanek (CERN)
    23/10/2024, 17:09
    Track 4 - Distributed Computing
    Talk

    In April 2023 HEPScore23, the new benchmark based on HEP specific applications, was adopted by WLCG, replacing HEP-SPEC06. As part of the transition to the new benchmark, the CPU core power published by the sites needed to be compared with the effective power observed while running ATLAS workloads. One aim was to verify the conversion rate between the scores of the old and the new benchmark....

    Go to contribution page
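    The conversion mentioned above boils down to simple per-site arithmetic; a minimal sketch with invented scores (the real comparison uses accounting data from ATLAS workloads, and the site names and numbers below are hypothetical):

```python
# Hypothetical per-site benchmark scores (per physical core); values are invented
scores = {
    "site_A": {"hs06": 17.2, "hepscore23": 18.1},
    "site_B": {"hs06": 15.0, "hepscore23": 15.6},
    "site_C": {"hs06": 21.4, "hepscore23": 22.9},
}

def conversion_factors(scores):
    # HEPScore23 / HEP-SPEC06 ratio per site; the transition targeted a ratio close to 1
    return {site: s["hepscore23"] / s["hs06"] for site, s in scores.items()}

def mean_factor(scores):
    # Average conversion factor across sites
    f = conversion_factors(scores)
    return sum(f.values()) / len(f)
```

    A mean factor near 1 indicates that published CPU core power carries over from the old benchmark to the new one; per-site outliers flag machines whose published scores warrant re-benchmarking.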
  357. Abhirikshma Nandi (Heidelberg University (DE))
    23/10/2024, 17:09
    Track 2 - Online and real-time computing
    Talk

    The General Triplet Track Fit (GTTF) is a generalization of the Multiple Scattering Triplet Fit [NIMA 844 (2017) 135] to additionally take hit uncertainties into account. This makes it suitable for use in collider experiments, where the position uncertainties of hits dominate for high momentum tracks. Since the GTTF is based on triplets of hits that can be processed independently, the fit is...

    Go to contribution page
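    To illustrate why triplet-based fits parallelize so well, the toy below estimates a track radius from each overlapping hit triplet independently, using the circumradius formula R = abc/(4A). This is a deliberate simplification, not the actual GTTF estimator, which also propagates hit uncertainties and multiple scattering:

```python
import math

def triplet_radius(p1, p2, p3):
    # Circumradius of three 2D hits: R = abc / (4 * triangle area)
    ax, ay = p1
    bx, by = p2
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    cx, cy = p3
    area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
    return a * b * c / (4.0 * area)

def fit_track(hits):
    # Each overlapping triplet is independent, so this loop maps directly onto
    # parallel hardware (one triplet per GPU thread, for instance).
    radii = [triplet_radius(*hits[i:i + 3]) for i in range(len(hits) - 2)]
    return sum(radii) / len(radii)  # naive average; GTTF uses a proper weighted fit
```

    For hits lying exactly on a circle, every triplet returns the same radius, so the average reproduces it; with noisy hits the per-triplet estimates scatter and a weighted combination becomes necessary.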
  358. Emanuele Simili (University of Glasgow (GB))
    23/10/2024, 17:09
    Track 7 - Computing Infrastructure
    Talk

    In pursuit of energy-efficient solutions for computing in High Energy Physics (HEP) we have extended our investigations of non-x86 architectures beyond the ARM platforms that we have previously studied. In this work, we have taken a first look at the RISC-V architecture for HEP workloads, leveraging advancements in both hardware and software maturity.

    We introduce the Pioneer Milk-V, a...

    Go to contribution page
  359. Felix Weiglhofer (Goethe University Frankfurt (DE))
    23/10/2024, 17:27
    Track 2 - Online and real-time computing
    Talk

    The CBM experiment is expected to run with a data rate exceeding 500 GB/s even after averaging. At this rate, storing raw detector data is not feasible, and efficient online reconstruction is required instead. GPUs have become essential for HPC workloads. The higher memory bandwidth and parallelism of GPUs can provide significant speedups over traditional CPU applications. These properties...

    Go to contribution page
  360. Pablo Collado Soto (Universidad Autonoma de Madrid (ES))
    23/10/2024, 17:27
    Track 9 - Analysis facilities and interactive computing
    Talk

    This work presents the contribution of the Spanish Tier-1 and Tier-2 sites to the computing of the ATLAS experiment at the LHC during the Run 3 period. The Tier-1 and Tier-2 Grid infrastructures, encompassing data storage, processing, and involvement in software development and computing tasks for the experiment, will undergo updates to enhance efficiency and visibility within the experiment.
    The...

    Go to contribution page
  361. Mr Fabian Lambert (LPSC Grenoble IN2P3/CNRS (FR))
    23/10/2024, 17:27
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The ATLAS Metadata Interface (AMI) is a comprehensive ecosystem designed for metadata aggregation, transformation, and cataloging. With over 20 years of feedback in the LHC context, it is particularly well-suited for scientific experiments that generate large volumes of data.

    This presentation explains, in a general manner, why managing metadata is essential regardless of the experiment's...

    Go to contribution page
  362. Daniele Lattanzio
    23/10/2024, 17:27
    Track 7 - Computing Infrastructure
    Talk

    At INFN-T1 we recently acquired some ARM nodes: initially they were given to the LHC experiments to test workflows and submission pipelines. After some time, they were made available as standard CPU resources, since the stability of both the nodes and the code was at production quality.
    In this presentation we will describe all the activities that were necessary to enable users to run on ARM and will...

    Go to contribution page
  363. Tatiana Korchuganova (University of Pittsburgh (US))
    23/10/2024, 17:27
    Track 4 - Distributed Computing
    Talk

    In early 2024, ATLAS undertook an architectural review to evaluate the functionalities of its current components within the workflow and workload management ecosystem. Pivotal to the review was the assessment of the Production and Distributed Analysis (PanDA) system, which plays a vital role in the overall infrastructure.
    The review findings indicated that while the current system shows no...

    Go to contribution page
  364. Aishik Ghosh (University of California Irvine (US))
    23/10/2024, 17:27
    Track 5 - Simulation and analysis tools
    Talk

    Neural Simulation-Based Inference (NSBI) is a powerful class of machine learning (ML)-based methods for statistical inference that naturally handle high dimensional parameter estimation without the need to bin data into low-dimensional summary histograms. Such methods are promising for a range of measurements at the Large Hadron Collider, where no single observable may be optimal to scan over...

    Go to contribution page
  365. Petya Vasileva (University of Michigan (US))
    23/10/2024, 17:45
    Track 7 - Computing Infrastructure
    Talk

    The research and education community relies on a robust network to access the vast amounts of data generated by scientific experiments. The underlying infrastructure connects a few hundred sites worldwide, requiring reliable and efficient transfers of increasingly large datasets. These activities demand proactive methods in network management, where potentially severe issues could be predicted...

    Go to contribution page
  366. Federica Maria Simone (Universita e INFN, Bari (IT))
    23/10/2024, 17:45
    Track 9 - Analysis facilities and interactive computing
    Talk

    The analysis of data collected by the ATLAS and CMS experiments at CERN, ahead of the next phase of high luminosity at the LHC, requires flexible and dynamic access to large amounts of data, as well as an environment capable of dynamically accessing distributed resources. An interactive high throughput platform, based on a parallel and geographically distributed back-end, has been developed in...

    Go to contribution page
  367. Dr Yury Malyshkin (GSI / Forschungszentrum Jülich)
    23/10/2024, 17:45
    Track 5 - Simulation and analysis tools
    Talk

    JUNO (Jiangmen Underground Neutrino Observatory) is a neutrino experiment being built in South China. Its primary goals are to resolve the order of the neutrino mass eigenstates and to precisely measure the oscillation parameters $\sin^2\theta_{12}$, $\Delta m^2_{21}$, and $\Delta m^2_{31 (32)}$ by observing the oscillation pattern of electron antineutrinos produced in eight reactor cores of...

    Go to contribution page
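    For reference, the standard three-flavour survival probability whose oscillation pattern JUNO will measure is, for baseline $L$ and antineutrino energy $E$:

```latex
P(\bar{\nu}_e \to \bar{\nu}_e) = 1
  - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21}
  - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31}
  + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right),
\qquad \Delta_{ij} = \frac{\Delta m^2_{ij}\,L}{4E}
```

    The fast $\Delta_{31}$/$\Delta_{32}$ wiggles superimposed on the slow $\Delta_{21}$ envelope are what give JUNO its sensitivity to both the mass ordering and the oscillation parameters listed above.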
  368. Marco Mascheroni (Univ. of California San Diego (US))
    23/10/2024, 17:45
    Track 4 - Distributed Computing
    Talk

    Efficient utilization of vast amounts of distributed compute resources is a key element in the success of the scientific programs of the LHC experiments. The CMS Submission Infrastructure is the main computing resource provisioning system for CMS workflows, including data processing, simulation and analysis. Resources geographically distributed across numerous institutions, including Grid, HPC...

    Go to contribution page
  369. Lorenzo Rinaldi (Universita e INFN, Bologna (IT)), Luciano Gaido
    23/10/2024, 17:45
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Large international collaborations in the field of Nuclear and Subnuclear Physics have been leading the implementation of FAIR principles for managing research data. These principles are essential when dealing with large volumes of data over extended periods and involving scientists from multiple countries. Recently, smaller communities and individual experiments have also started adopting...

    Go to contribution page
  370. Bartosz Sobol (Jagiellonian University)
    23/10/2024, 17:45
    Track 2 - Online and real-time computing
    Talk

    The PANDA experiment has been designed to incorporate software triggers and online data processing. Although PANDA may not surpass the largest experiments in terms of raw data rates, designing and developing the processing pipeline and software platform for this purpose is still a challenge. Given the uncertain timeline for PANDA and the constantly evolving landscape of computing hardware, our...

    Go to contribution page
  371. Giuseppe Andronico (Universita e INFN, Catania (IT)), Joao Pedro Athayde Marcondes De Andre (Centre National de la Recherche Scientifique (FR)), Xiaomei Zhang (Chinese Academy of Sciences (CN))
    24/10/2024, 09:00
    Plenary
    Talk

    The Jiangmen Underground Neutrino Observatory (JUNO) in southern China has set its primary goals as determining the neutrino mass ordering and precisely measuring oscillation parameters. JUNO plans to start data-taking in late 2024, with an expected event rate of approximately 1 kHz at full operation. This translates to around 60 MB of byte-stream raw data being produced every second,...

    Go to contribution page
  372. Olivier Mattelaer (UCLouvain)
    24/10/2024, 09:30
    Plenary
    Talk

    High-Luminosity LHC will provide an unprecedented amount of experimental data. The improvement in experimental precision needs to be matched with an increase of accuracy in the theoretical predictions, stressing our compute capability.

    In this talk, I will focus on the current and future precision needed by LHC experiments and how those needs are supplied by Event Generators. I will focus...

    Go to contribution page
  373. Danilo Piparo (CERN)
    24/10/2024, 10:00
    Plenary
    Talk

    In this contribution, we’ll review the status of the ROOT project towards the end of LHC Run 3.
    We'll review its structure, available effort and management strategy, which allow us to push innovation while guaranteeing long-term support.
    In particular, we'll describe how ROOT became a veritable community effort attracting contributions not only from the ROOT team, but from collaborators at labs,...

    Go to contribution page
  374. Christian Voss
    24/10/2024, 11:00
    Plenary
    Talk

    Historically, DESY has been a HEP site with its on-site accelerators DESY, PETRA, DORIS, and HERA. Since the end of the HERA data taking, a strategic shift has taken place at DESY towards supporting Research with Photons with user facilities at the Hamburg site in addition to the continuing support for Particle Physics. Since then some of the existing HEP accelerators have been redesigned to...

    Go to contribution page
  375. David Britton (University of Glasgow (GB))
    24/10/2024, 11:30
    Plenary
    Talk

    We present first results from a new simulation of the WLCG Glasgow Tier-2 site, designed to investigate the potential for reducing our carbon footprint by reducing the CPU clock frequency across the site in response to a higher-than-normal fossil-fuel component in the local power supply. The simulation uses real (but historical) data for the UK power-mix, together with measurements of power...

    Go to contribution page
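    The trade-off being simulated can be sketched with a toy power model (all constants invented, not taken from the talk): dynamic power scales roughly as f³ under combined voltage and frequency scaling, static power is constant, and a fixed job takes time proportional to 1/f:

```python
def energy_per_job(freq, base_freq=1.0, p_dyn=100.0, p_static=50.0):
    """Toy energy model for downclocking (all numbers are invented).

    Dynamic power ~ f^3 (since P ~ f * V^2 and V scales roughly with f),
    static power is constant, and a fixed amount of work takes time ~ 1/f.
    """
    scale = freq / base_freq
    power = p_dyn * scale ** 3 + p_static   # watts at this clock frequency
    time = 1.0 / scale                      # runtime for a fixed job
    return power * time                     # energy = power * time
```

    Under this model, moderate downclocking saves energy (the dynamic term shrinks as f²), but clocking too low backfires because the constant static power is drawn for a much longer runtime; the interesting question the simulation addresses is where the optimum sits given the real power mix.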
  376. Dr Andrea Sciabà (CERN)
    24/10/2024, 12:00
    Plenary
    Talk

    Decades of advancements in computing hardware technologies have enabled HEP experiments to achieve their scientific objectives, facilitated by meticulous planning and collaboration among all stakeholders. However, the path to HL-LHC demands a continuously improving alignment between our ever-increasing needs and the available computing and storage resources, not matched by any increase in...

    Go to contribution page
  377. Simone Rossi Tisbeni (Universita e INFN, Bologna (IT))
    24/10/2024, 13:30
    Track 3 - Offline Computing
    Talk

    Efficient and precise track reconstruction is critical for the results of the Compact Muon Solenoid (CMS) experiment. The current CMS track reconstruction algorithm is a multi-step procedure based on the combinatorial Kalman filter as well as a Cellular Automaton technique to create track seeds. Multiple parameters regulate the reconstruction steps, populating a large phase space of possible...

    Go to contribution page
  378. Fabio Andrijauskas (Univ. of California San Diego (US))
    24/10/2024, 13:30
    Track 7 - Computing Infrastructure
    Talk

    Research has become dependent on processing power and storage, with one crucial aspect being data sharing. The Open Science Data Federation (OSDF) project aims to create a scientific global data distribution network, expanding on the StashCache project to add new data origins and caches, access methods, monitoring, and accounting mechanisms. OSDF does not develop any new software, relying on ...

    Go to contribution page
  379. John Wu (LAWRENCE BERKELEY NATIONAL LABORATORY)
    24/10/2024, 13:30
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The surge in data volumes from large scientific collaborations, like the Large Hadron Collider (LHC), poses challenges and opportunities for High Energy Physics (HEP). With annual data projected to grow thirty-fold by 2028, efficient data management is paramount. The HEP community heavily relies on wide-area networks for global data distribution, often resulting in redundant long-distance...

    Go to contribution page
  380. Ianna Osborne (Princeton University)
    24/10/2024, 13:30
    Track 9 - Analysis facilities and interactive computing
    Talk

    Scientific computing relies heavily on powerful tools like Julia and Python. While Python has long been the preferred choice in High Energy Physics (HEP) data analysis, there’s a growing interest in migrating legacy software to Julia. We explore language interoperability, focusing on how Awkward Array data structures can connect Julia and Python. We discuss memory management, data buffer...

    Go to contribution page
  381. Sebastian Dittmeier (Ruprecht-Karls-Universitaet Heidelberg (DE))
    24/10/2024, 13:30
    Track 2 - Online and real-time computing
    Talk

    For the HL-LHC upgrade of the ATLAS TDAQ system, a heterogeneous computing farm
    deploying GPUs and/or FPGAs is under study, together with the use of modern
    machine learning algorithms such as Graph Neural Networks (GNNs). We present a
    study on the reconstruction of tracks in the ATLAS Inner Tracker using GNNs on
    FPGAs for the Event Filter system. We explore each of the steps in a...

    Go to contribution page
  382. Ian Collier (Science and Technology Facilities Council STFC (GB))
    24/10/2024, 13:30
    Track 4 - Distributed Computing
    Talk

    The Square Kilometre Array (SKA) is set to be the largest and most sensitive radio telescope in the world. As construction advances, the managing and processing of data on an exabyte scale becomes a paramount challenge to enable the SKA science community to process and analyse their data. To address this, the SKA Regional Centre Network (SRCNet) has been established to provide the necessary...

    Go to contribution page
  383. Joshua Falco Beirer (CERN)
    24/10/2024, 13:30
    Track 5 - Simulation and analysis tools
    Talk

    For high-energy physics experiments, the generation of Monte Carlo events, and in particular the simulation of the detector response, is a very computationally intensive process. In many cases, the primary bottleneck in detector simulation is the detailed simulation of the electromagnetic and hadronic showers in the calorimeter system. For the ATLAS experiment, about 80% of the total CPU usage...

    Go to contribution page
  384. Enrique Garcia Garcia (CERN)
    24/10/2024, 13:48
    Track 9 - Analysis facilities and interactive computing
    Talk

    During the ESCAPE project, the pillars of a pilot analysis facility were built following a bottom-up approach, in collaboration with all the partners of the project. As a result, the CERN Virtual Research Environment (VRE) initiative proposed a workspace that facilitates the access to the data in the ESCAPE Data Lake, a large scale data management system defined by Rucio, along with the...

    Go to contribution page
  385. Nadezhda Dobreva
    24/10/2024, 13:48
    Track 3 - Offline Computing
    Talk

    Track reconstruction, also known as tracking, is a crucial part of High Energy Physics experiments. Traditional methods for the task, relying on Kalman Filters, scale poorly with detector occupancy. In the context of the upcoming High-Luminosity LHC, solutions based on Machine Learning (ML) and deep learning are very appealing. We investigate the feasibility of training multiple ML architectures to...

    Go to contribution page
  386. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    24/10/2024, 13:48
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The Large Hadron Collider (LHC) at CERN in Geneva is preparing for a major upgrade that will improve both its accelerator and particle detectors. This strategic move comes in anticipation of a tenfold increase in proton-proton collisions, expected to kick off by 2029 in the upcoming high-luminosity phase. The backbone of this evolution is the World-Wide LHC Computing Grid, crucial for handling...

    Go to contribution page
  387. Max Dupuis (CERN)
    24/10/2024, 13:48
    Track 7 - Computing Infrastructure
    Talk

    CERN's state-of-the-art Prévessin Data Centre (PDC) is now operational, complementing CERN's Meyrin Data Centre Tier-0 facility to provide additional and sustainable computing power to meet the needs of High-Luminosity LHC in 2029 (expected to be ten times greater than today). In 2019, it was decided to tender the design and construction of a new, modern, energy-efficient (PUE of ≤ 1.15) Data...

    Go to contribution page
  388. Federico Lazzari (Universita di Pisa & INFN Pisa (IT))
    24/10/2024, 13:48
    Track 2 - Online and real-time computing
    Talk

    The LHCb collaboration is planning an upgrade (LHCb "Upgrade-II") to collect data at an increased instantaneous luminosity (a factor of 7.5 larger than the current one). LHCb relies on a complete real-time reconstruction of all collision events at LHC-Point 8, which will have to cope with both the luminosity increase and the introduction of correspondingly more granular and complex...

    Go to contribution page
  389. Adam Morris (CERN)
    24/10/2024, 13:48
    Track 5 - Simulation and analysis tools
    Talk

    Gaussino is an experiment-independent simulation package built upon the Gaudi software framework. It provides generic core components and interfaces for a complete HEP simulation application: event generation, detector simulation, geometry, monitoring and output of the simulated data. This makes it suitable for use as a standalone application for early prototyping, test-beam setups, etc., as well...

    Go to contribution page
  390. Natthan PIGOUX
    24/10/2024, 13:48
    Track 4 - Distributed Computing
    Talk

    The Cherenkov Telescope Array Observatory (CTAO) is the next-generation instrument in the very-high-energy gamma-ray astronomy domain. It will consist of tens of Cherenkov telescopes deployed at two CTAO array sites at La Palma (Spain) and Paranal (ESO, Chile) respectively. Currently under construction, CTAO will start operations in the coming years for a duration of about 30 years. During...

    Go to contribution page
  391. Dr Marcus Ebert (University of Victoria)
    24/10/2024, 14:06
    Track 7 - Computing Infrastructure
    Talk

    We present our unique approach to hosting the Canadian share of the Belle-II raw data and the computing infrastructure needed to process it. We will describe the details of the storage system, which is a disk-only storage solution based on xrootd and ZFS, with TSM for backup purposes. We will also detail the compute infrastructure, which involves starting specialized Virtual Machines (VMs) to process...

    Go to contribution page
  392. Carlos Fernando Gamboa (Brookhaven National Laboratory (US))
    24/10/2024, 14:06
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Scientific experiments and computations, especially in High Energy Physics, are generating and accumulating data at an unprecedented rate. Effectively managing this vast volume of data while ensuring efficient data analysis poses a significant challenge for data centers, which must integrate various storage technologies. This paper proposes addressing this challenge by designing a multi-tiered...

    Go to contribution page
  393. Jakub Hajduga (AGH University of Krakow (PL))
    24/10/2024, 14:06
    Track 5 - Simulation and analysis tools
    Talk

    Reconfigurable detector for the measurement of spatial radiation dose distribution for applications in the preparation of individual patient treatment plans [1] was a research and development project aimed at improving radiation dose distribution measurement techniques for therapeutic applications. The main idea behind the initiative was to change the current radiation dose distribution...

    Go to contribution page
  394. Carlo Varni (University of California Berkeley (US))
    24/10/2024, 14:06
    Track 3 - Offline Computing
    Talk

    In view of the High-Luminosity LHC era the ATLAS experiment is carrying out an upgrade campaign which foresees the installation of a new all-silicon Inner Tracker (ITk) and the modernization of the reconstruction software.

    Track reconstruction will be pushed to its limits by the increased number of proton-proton collisions per bunch-crossing and the granularity of the ITk detector. In order...

    Go to contribution page
  395. Melissa Quinnan (Univ. of California San Diego (US))
    24/10/2024, 14:06
    Track 2 - Online and real-time computing
    Talk

    We present the preparation, deployment, and testing of an autoencoder trained for unbiased detection of new physics signatures in the CMS experiment Global Trigger (GT) test crate FPGAs during LHC Run 3. The GT makes the final decision whether to readout or discard the data from each LHC collision, which occur at a rate of 40 MHz, within a 50 ns latency. The Neural Network makes a prediction...

    Go to contribution page
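    Conceptually, the trigger score is a reconstruction error: events resembling the training data compress and decompress well, while unusual events do not. A deliberately tiny stand-in for that idea (the real system is a trained neural network deployed as FPGA firmware, e.g. via tools such as hls4ml; nothing below is the actual CMS model):

```python
def anomaly_score(features):
    """Conceptual stand-in for an autoencoder anomaly score.

    'Encode' the event to a one-value bottleneck (its mean), 'decode' by
    broadcasting that value back, and use the mean squared reconstruction
    error as the trigger score. Typical events reconstruct well; events
    unlike anything seen in training do not.
    """
    mean = sum(features) / len(features)       # encode: 1-value latent code
    reconstruction = [mean] * len(features)    # decode: broadcast it back
    return sum((x - r) ** 2
               for x, r in zip(features, reconstruction)) / len(features)
```

    An event whose features match the compressed representation scores near zero, while a "spiky" event far from the training distribution yields a large score, which is the readout decision the GT would act on within its latency budget.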
  396. Dr Stefano Bagnasco (Istituto Nazionale di Fisica Nucleare, Torino)
    24/10/2024, 14:06
    Track 4 - Distributed Computing
    Talk

    The Einstein Telescope is the proposed European next-generation ground-based gravitational-wave observatory, that is planned to have a vastly increased sensitivity with respect to current observatories, particularly in the lower frequencies. This will result in the detection of far more transient events, which will stay in-band for much longer, such that there will nearly always be at least...

    Go to contribution page
  397. Serguei Linev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    24/10/2024, 14:06
    Track 9 - Analysis facilities and interactive computing
    Talk

    The ROOT framework provides various implementations of graphics engines tailored for different platforms, along with specialized support of batch mode. Over time, as technology evolves and new versions of X11 or Cocoa are released, maintaining the functionality of the corresponding ROOT components becomes increasingly challenging. The TWebCanvas class in ROOT represents an attempt to unify all...

    Go to contribution page
  398. Michelle Ann Solis (University of Arizona (US))
    24/10/2024, 14:24
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    This paper presents a novel approach to enhance the analysis of ATLAS Detector Control System (DCS) data at CERN. Traditional storage in Oracle databases, optimized for WinCC archiver operations, is challenged by the need for extensive analysis across long timeframes and multiple devices, alongside correlating conditions data. We introduce techniques to improve troubleshooting and analysis of...

    Go to contribution page
  399. Haider Abidi (Brookhaven National Laboratory (US))
    24/10/2024, 14:24
    Track 2 - Online and real-time computing
    Talk

    For the upcoming HL-LHC upgrade of the ATLAS experiment, the deployment of GPU
    or FPGA co-processors within the online Event Filter system is being studied as
    a measure to increase throughput and save power. End-to-end track
    reconstruction pipelines are currently being developed using commercially
    available FPGA accelerator cards. These utilize FPGA base partitions, drivers
    and runtime...

    Go to contribution page
  400. Jacob Michael Calcutt (Brookhaven National Lab)
    24/10/2024, 14:24
    Track 4 - Distributed Computing
    Talk

    The DUNE experiment will start running in 2029 and record 30 PB/year of raw waveforms from Liquid Argon TPCs and photon detectors. The size of individual readouts can range from 100 MB to a typical 8 GB full readout of the detector, up to extended readouts of several hundred TB from supernova candidates. These data then need to be cataloged, stored and then distributed for processing worldwide....

    Go to contribution page
  401. Albert Gyorgy Borbely (University of Glasgow (GB))
    24/10/2024, 14:24
    Track 9 - Analysis facilities and interactive computing
    Talk

    Over the last few years, an increasing number of sites have started to offer access to GPU accelerator cards, but in many places they remain underutilised. The experiment collaborations are gradually increasing the fraction of their code that can exploit GPUs, driven in many cases by the development of specific reconstruction algorithms to exploit the HLT farms when data is not being taken....

    Go to contribution page
  402. Peidong Yu (IHEP)
    24/10/2024, 14:24
    Track 5 - Simulation and analysis tools
    Talk

    The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose experiment under construction in southern China. JUNO is designed to determine the mass ordering of neutrinos and precisely measure neutrino oscillation parameters by detecting reactor neutrinos from the Yangjiang and Taishan Nuclear Power Plants. Atmospheric neutrinos, solar neutrinos, geo-neutrinos, supernova burst...

    Go to contribution page
  403. Dr David Park (Brookhaven National Laboratory)
    24/10/2024, 14:24
    Track 7 - Computing Infrastructure
    Talk

    Large-scale scientific collaborations like ATLAS, Belle II, CMS, DUNE, and others involve hundreds of research institutes and thousands of researchers spread across the globe. These experiments generate petabytes of data, with volumes soon expected to reach exabytes. Consequently, there is a growing need for computation, including structured data processing from raw data to consumer-ready...

    Go to contribution page
  404. Hang Zhou
    24/10/2024, 14:24
    Track 3 - Offline Computing
    Talk

    The Super Tau-Charm Facility (STCF) proposed in China is an electron-positron collider designed to operate in a center-of-mass energy range from 2 to 7 GeV with peak luminosity above $0.5 \times 10^{35}$ cm$^{-2}$s$^{-1}$. The STCF will provide a unique platform for studies of hadron physics, strong interactions and searches for new physics beyond the Standard Model in the tau-charm region. To fulfill...

    Go to contribution page
  405. Thomas Madlener (Deutsches Elektronen-Synchrotron (DESY))
    24/10/2024, 14:42
    Track 5 - Simulation and analysis tools
    Talk

    The common and shared event data model EDM4hep is a core part of the Key4hep project. It is used not only to exchange data between the different software pieces, but also serves as a common language for all the components that belong to Key4hep. Since it is such a central piece, EDM4hep has to offer an efficient implementation. On the other hand, EDM4hep has to be...

    Go to contribution page
  406. Andrew Malone Melo (Vanderbilt University (US))
    24/10/2024, 14:42
    Track 9 - Analysis facilities and interactive computing
    Talk

    The success and adoption of machine learning (ML) approaches to solving HEP problems has been widespread and fast. As useful a tool as ML has been to the field, the growing number of applications, larger datasets, and increasing complexity of models creates a demand for both more capable hardware infrastructure and cleaner methods of reproducibility and deployment. We have developed a prototype...

    Go to contribution page
  407. Pierpaolo Perticaroli (INFN, Roma I (IT))
    24/10/2024, 14:42
    Track 2 - Online and real-time computing
    Talk

    This work presents FPGA-RICH, an FPGA-based online partial particle identification system for the NA62 experiment utilizing AI techniques. Integrated between the readout of the Ring Imaging Cherenkov detector (RICH) and the low-level trigger processor (L0TP+), FPGA-RICH implements a fast pipeline to process in real-time the RICH raw hit data stream, producing trigger-primitives containing...

    Go to contribution page
  408. Tatiana Ovsiannikova (University of Washington (US))
    24/10/2024, 14:42
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Over the past years, the ROOT team has been developing a new I/O format called RNTuple to store data from experiments at CERN's Large Hadron Collider. RNTuple is designed to succeed ROOT's existing TTree I/O subsystem, improving I/O speed and introducing a more efficient binary data format. It can be stored in both ROOT files and object stores, and it is optimized for modern storage hardware...

    Go to contribution page
  409. Benoit Roland (KIT - Karlsruhe Institute of Technology (DE))
    24/10/2024, 14:42
    Track 7 - Computing Infrastructure
    Talk

    The PUNCH4NFDI consortium, funded by the German Research Foundation for an initial period of five years, gathers various physics communities - particle, astro-, astroparticle, hadron and nuclear physics - from different institutions embedded in the National Research Data Infrastructure initiative. The overall goal of PUNCH4NFDI is the establishment and support of FAIR data management solutions...

    Go to contribution page
  410. Ksenia de Leo (INFN Trieste (IT))
    24/10/2024, 14:42
    Track 3 - Offline Computing
    Talk

    The upgrade of the CMS apparatus for the HL-LHC will provide unprecedented timing measurement capabilities, in particular for charged particles through the MIP Timing Detector (MTD). One of the main goals of this upgrade is to compensate for the deterioration of primary vertex reconstruction induced by the increased pileup of proton-proton collisions by separating clusters of tracks not only in...

    Go to contribution page
  411. Fabio Hernandez (IN2P3 / CNRS computing centre)
    24/10/2024, 14:42
    Track 4 - Distributed Computing
    Talk

    After several years of focused work, preparation for Data Release Production (DRP) of the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST) at multiple data facilities is taking shape. Rubin Observatory DRP features both complex, long workflows with many short jobs, and fewer long jobs with sometimes unpredictably large memory usage. Both of them create scaling issues that...

    Go to contribution page
  412. Qingbao Hu (IHEP)
    24/10/2024, 15:00
    Track 4 - Distributed Computing
    Talk

    The High Energy cosmic-Radiation Detection (HERD) facility is a space astronomy and particle astrophysics experiment under construction as a collaboration between China and Italy; it will run on the China Space Station for more than 10 years, starting in 2027. HERD is designed to search for dark matter with unprecedented sensitivity, investigate the century-old mystery of the origin of cosmic rays,...

    Go to contribution page
  413. Alessandro Maria Ricci (INFN Pisa)
    24/10/2024, 15:00
    Track 3 - Offline Computing
    Talk

    Mu2e will search for the neutrinoless coherent $\mu^-\rightarrow e^-$ conversion in the field of an Al nucleus, a Charged Lepton Flavor Violation (CLFV) process. The experiment is expected to start in 2026 and will improve the current limit by 4 orders of magnitude.

    Mu2e consists of a straw-tube tracker and a crystal calorimeter in a 1 T magnetic field, complemented by a plastic scintillation counter...

    Go to contribution page
  414. Daniele Lattanzio
    24/10/2024, 15:00
    Track 7 - Computing Infrastructure
    Talk

    We are moving the INFN-T1 data center to a new location. In this presentation we will describe all the steps taken to complete the task without reducing the overall availability of the site and of all the services provided.
    We will also briefly describe the new features of the new data center compared to the current one.

    Go to contribution page
  415. Matteo Barbetti (INFN CNAF)
    24/10/2024, 15:00
    Track 9 - Analysis facilities and interactive computing
    Talk

    Machine Learning (ML) is driving a revolution in the way scientists design, develop, and deploy data-intensive software. However, the adoption of ML presents new challenges for the computing infrastructure, particularly in terms of provisioning and orchestrating access to hardware accelerators for development, testing, and production.
    The INFN-funded project AI_INFN ("Artificial Intelligence...

    Go to contribution page
  416. Riccardo Maria Bianchi (University of Pittsburgh (US))
    24/10/2024, 15:00
    Track 5 - Simulation and analysis tools
    Talk

    The software description of the ATLAS detector is based on the GeoModel toolkit, developed in-house for the ATLAS experiment but released and maintained as a separate package with few dependencies. A compact SQLite-based exchange format permits the sharing of geometrical information between applications including visualization, clash detection, material inventory, database browsing, and...

    Go to contribution page
  417. Francisco Hervas Alvarez (Univ. of Valencia and CSIC (ES))
    24/10/2024, 15:00
    Track 2 - Online and real-time computing
    Talk

    Particle detectors at accelerators generate large amounts of data, requiring analysis to derive insights. Collisions lead to signal pile-up, where multiple particles produce signals in the same detector sensors, complicating the identification of individual signals. This contribution describes the implementation of a deep learning algorithm on a Versal ACAP device for improved processing via...

    Go to contribution page
  418. Enrico Vianello (INFN-CNAF)
    24/10/2024, 15:18
    Track 7 - Computing Infrastructure
    Poster

    Cloud computing technologies are becoming increasingly important to provide a variety of services able to serve different communities' needs. This is the case of the DARE project (Digital Lifelong Prevention), a four-year initiative, co-financed by the Italian Ministry of University and Research as part of the National Plan of Complementary Investments to the PNRR. The project aims to develop...

    Go to contribution page
  419. Nick Smith (Fermi National Accelerator Lab. (US))
    24/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    Fermilab is transitioning authentication and authorization for grid operations to using bearer tokens based on the WLCG Common JWT (JSON Web Token) Profile. One of the functionalities that Fermilab experimenters rely on is the ability to automate batch job submission, which in turn depends on the ability to securely refresh and distribute the necessary credentials to experiment job submit...

    Go to contribution page
  420. Matteo Rama (INFN Pisa (IT)), Sergey Kholodenko (Universita & INFN Pisa (IT))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The simulation of physics events in the LHCb experiment uses the majority of the distributed computing resources available to the experiment. Notably, around 50% of the overall CPU time in the Geant4-based detailed simulation of physics events is spent in the calorimeter system. This talk presents a solution implemented in the LHCb simulation software framework to accelerate the calorimeter...

    Go to contribution page
  421. Stefan Michal Horodenski (AGH University of Krakow (PL))
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The Adaptive Hough Transform (AHT) is a variant of the Hough transform for particle tracking. Compared to other solutions using Hough transforms, the benefit of the described algorithm is a shifted balance between memory usage and computation, which could make it more suitable for computational devices with smaller amounts of fast-access memory. In addition, the AHT algorithm's...

    Go to contribution page
  422. Andrea Bellora (Universita e INFN Torino (IT))
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The Precision Proton Spectrometer (PPS) is a near-beam spectrometer that utilizes timing and
    tracking detectors to measure scattered protons surviving collisions at the CMS interaction
    point (IP). It is installed on both sides of CMS, approximately 200 meters from the IP, within
    mechanical structures called Roman Pots. These special beam pockets enable the detectors to
    approach the LHC...

    Go to contribution page
  423. Jindrich Lidrych (Universite Catholique de Louvain (UCL) (BE)), Oğuz Güzel (Université Catholique de Louvain (BE))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    This poster presents an overview and features of the bamboo framework, designed for HEP data analysis. Bamboo defines a domain-specific language, embedded in Python, that allows the analysis logic to be expressed concisely in a functional style. The implementation, based on ROOT's RDataFrame and the cling C++ JIT compiler, approaches the performance of dedicated native code. Bamboo is...

    Go to contribution page
  424. Maxim Potekhin (Brookhaven National Laboratory (US))
    24/10/2024, 15:18
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Poster

    The ePIC Collaboration is actively working on the Technical Design Report (TDR) for its future detector at the Electron Ion Collider to be built at Brookhaven National Laboratory within the next decade. The development of the TDR by an international Collaboration with over 850 members requires a plethora of physics and detector studies that need to be coordinated. An effective set of...

    Go to contribution page
  425. Gordon Watts (University of Washington (US))
    24/10/2024, 15:18
    Track 8 - Collaboration, Reinterpretation, Outreach and Education
    Poster

    The ATLAS Collaboration consists of around 6000 members from over 100 different countries. Regional, age and gender demographics of the collaboration are presented, including the time evolution over the lifetime of the experiment. In particular, the relative fraction of women is discussed, including their share of contributions, recognition and positions of responsibility, including showing...

    Go to contribution page
  426. Simon Nicklas Neuhaus, Eoin Clerkin (FAIR - Facility for Antiproton and Ion Research in Europe, Darmstadt)
    24/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    Fully automated conversion from CAD geometries directly into their ROOT geometry equivalents has been a hot topic of conversation at CHEP conferences. Today multiple approaches for CAD to ROOT conversion exist. Many appear not to work well. In this paper, we report on three separate and distinct successful efforts from within the CBM collaboration, namely from our Silicon Tracking System team,...

    Go to contribution page
  427. Barnali Chowdhury (Argonne National Laboratory)
    24/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    DUNE’s current processing framework (art) was branched from the event processing framework of CMS, a collider-physics experiment. Therefore art is built on event-based concepts as its fundamental processing unit. The “event” concept is not always helpful for neutrino experiments, such as DUNE. DUNE uses trigger records that are much larger than collider events (several GB vs. MB). Therefore,...

    Go to contribution page
  428. Severin Diederichs (CERN)
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    The Geant4 hadronic physics sub-library includes a wide variety of models for high- and low-energy hadronic interactions. We report on recent progress in the development of the Geant4 nuclear de-excitation module. This module is used by many Geant4 models to sample the de-excitation of nuclear recoils produced in nuclear reactions. Hadronic shower shape and energy deposition are sensitive to these...

    Go to contribution page
  429. Marco Mambelli (Fermilab (US))
    24/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    Choosing the right resource can speed up job completion, better utilize the available hardware, and visibly reduce costs, especially when renting computers in the cloud. This was demonstrated in earlier studies on HEPCloud. However, benchmarking the resources proved to be a laborious and time-consuming process. This paper presents GlideinBenchmark, a new web application leveraging the pilot...

    Go to contribution page
  430. Pratixan Sarmah
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Monte Carlo Event Generators contain several free parameters that cannot be inferred from first principles and need to be tuned to better model the data. With increasing precision of perturbative calculations to higher orders and hence decreasing theoretical uncertainties, it becomes crucial to study the systematics of non-perturbative phenomenological models. A recent attempt was made at...

    Go to contribution page
  431. Rocky Bala Garg (Stanford University (US))
    24/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    Extensive research has been conducted on deep neural networks (DNNs) for the identification and localization of primary vertices (PVs) in proton-proton collision data from ATLAS/ACTS. Previous studies focused on locating primary vertices in simulated ATLAS data using a hybrid methodology. This approach began with the derivation of kernel density estimators (KDEs) from the ensemble of charged...

    Go to contribution page
  432. Emma Hess (Universita & INFN Pisa (IT))
    24/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    Precision measurements of fundamental properties of particles serve as stringent tests of the Standard Model and search for new physics. These experiments require robust particle identification and event classification capabilities, often achievable through machine learning techniques. This presentation introduces a Graph Neural Network (GNN) approach tailored for identifying outgoing...

    Go to contribution page
  433. Rouven Spreckels, Soren Lars Gerald Fleischer (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    24/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    The implementation of a federated access system for GSI's local Lustre storage using the XRootD and HTTP(S) protocols will be presented. It aims to ensure secure and efficient data access for the diverse scientific communities at GSI. This prototype system is a key step towards integrating GSI/FAIR into a federated data analysis model. We use Keycloak for authentication, which issues SciTokens...

    Go to contribution page
  434. Mr Krystian Roslon (Warsaw University of Technology (PL))
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The official data collection for Run 3 of the Large Hadron Collider (LHC) at CERN in Geneva commenced on July 5, 2022, following approximately three and a half years of maintenance, upgrades, and commissioning. Among the many enhancements to ALICE (A Large Ion Collider Experiment) is the new Fast Interaction Trigger (FIT) detector. Constant improvements to FIT's hardware, firmware, and...

    Go to contribution page
  435. Andrzej Konrad Siodmok (Jagiellonian University (PL))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    We present an overview of the Monte Carlo event generator for lepton and quark pair production for the high-energy electron-positron annihilation process. We note that it is still the most sophisticated event generator for such processes. Its entire source code is rewritten in the modern C++ language. We checked that it reproduces all features of the older code in Fortran 77. We discuss a...

    Go to contribution page
  436. Marina Sahakyan
    24/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    Traditional filesystems organize data in directories. The directories are typically a collection of files whose grouping is based on one criterion, e.g., the starting date of the experiment, experiment name, beamline ID, measurement device, or instrument. However, each file in a directory can belong to different logical groups, such as a special event type, experiment condition, or a part of a...

    Go to contribution page
  437. Emir Muhammad (University of Warwick (GB))
    24/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    The LHCb experiment requires a wide variety of Monte Carlo simulated samples to support its physics programme. LHCb’s centralised production system operates on the DIRAC backend of the WLCG; users interact with it via the DIRAC web application to request and produce samples.

    To simplify this procedure, LbMCSubmit was introduced, automating the generation of request configurations from a...

    Go to contribution page
  438. Oleksandr Savchenko (AGH University of Krakow (PL))
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The poster presents the first experiments with a time-to-digital converter (TDC) for the Fast Interaction Trigger detector in the ALICE experiment at CERN. It is implemented in Field-Programmable Gate Array (FPGA) technology and uses Serializers and Deserializers (ISERDES) with multiple-phase clocks.
    The input pulse is a standard differential input signal. The signal is sampled with eight...

    Go to contribution page
  439. Pierre-Alain Loizeau (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The Compressed Baryonic Matter (CBM) experiment at FAIR will explore the QCD phase diagram at high net-baryon densities through heavy-ion collisions, using the beams provided by the SIS100 synchrotron in the energy range of 4.5-11 AGeV/c (fully stripped gold ions). This physics program strongly relies on rare probes with complex signatures, for which high interaction rates and a strong...

    Go to contribution page
  440. Christoph Beyer, Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE)), Yves Kemp
    24/10/2024, 15:18
    Track 9 - Analysis facilities and interactive computing
    Poster

    The National Analysis Facility (NAF) at DESY is a multi-purpose compute cluster available to a broad set of communities in high-energy particle physics, astroparticle physics, and other fields. Being continuously in production for about 15 years now, the NAF has evolved through a number of hardware and software revisions. A constant factor however has been the human factor, as the broad set of...

    Go to contribution page
  441. Barnali Chowdhury (Argonne National Laboratory)
    24/10/2024, 15:18
    Track 1 - Data and Metadata Organization, Management and Access
    Poster

    The Deep Underground Neutrino Experiment (DUNE), hosted by the U.S. Department of Energy’s Fermilab, is expected to begin operations in the late 2020s. The validation of one far detector module design for DUNE will come from operational experience gained from deploying offline computing infrastructure for the ProtoDUNE (PD) Horizontal Drift (HD) detector. The computing infrastructure of PD HD...

    Go to contribution page
  442. Jin Choi (Seoul National University (KR))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    We will present the first analysis of the computational speedup achieved through the use of the GPU version of Madgraph, known as MG4GPU. Madgraph is the most widely used event generator in CMS. Our work is the first step toward benchmarking the improvement obtained through the use of its GPU implementation. In this presentation, we will show the timing improvement achieved without affecting...

    Go to contribution page
  443. Giulio Cordova (Universita & INFN Pisa (IT))
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    The increasing computing power and bandwidth of FPGAs opens new possibilities in the field of real-time processing of HEP data. LHCb now uses a cluster-finder FPGA architecture to reconstruct hits in the VELO pixel detector on-the-fly during readout. In addition to its usefulness in accelerating HLT1 reconstruction by providing it with pre-reconstructed data, this system enables further...

    Go to contribution page
  444. Acelya Deniz Gungordu (Istanbul Technical University (TR)), Dorukhan Boncukcu (Istanbul Technical University (TR))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    A growing reliance on the fast Monte Carlo (FastSim) will accompany the high luminosity and detector granularity expected in Phase 2. FastSim is roughly 10 times faster than equivalent GEANT4-based full simulation (FullSim). However, reduced accuracy of the FastSim affects some analysis variables and collections. To improve its accuracy, FastSim is refined using regression-based neural...

    Go to contribution page
  445. Giovanna Lazzari Miotto (CERN)
    24/10/2024, 15:18
    Track 2 - Online and real-time computing
    Poster

    Level-1 Data Scouting (L1DS) is a novel data acquisition subsystem at the CMS Level-1 Trigger (L1T) that exposes the L1T event selection data primitives for online processing at the LHC’s 40 MHz bunch-crossing rate, enabling unbiased and unconventional analyses. An L1DS demonstrator has been operating since Run 3, relying on a ramdisk for ephemeral storage of incoming and intermediate data,...

    Go to contribution page
  446. Tomasz Marcin Lelek (AGH University of Krakow (PL))
    24/10/2024, 15:18
    Track 4 - Distributed Computing
    Poster

    The ALICE Grid processes up to one million computational jobs daily, leveraging approximately 200,000 CPU cores distributed across about 60 computing centers. Enhancing the prediction accuracy for job execution times could significantly optimize job scheduling, leading to better resource allocation and increased throughput of job execution. We present results of applying machine learning...

    Go to contribution page
  447. Kamila Kalecińska (AGH University of Krakow (PL))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Super-resolution (SR) techniques are often used in the up-scaling process to add details that are not present in the original low-resolution image. In radiation therapy, SR can be applied to enhance the quality of medical images used in treatment planning. For the Dose3D detector, which measures spatial dose distribution [1][2], a dedicated set of ML algorithms for SR has been proposed to...

    Go to contribution page
  448. Menglin Xu (University of Warwick (GB))
    24/10/2024, 15:18
    Track 3 - Offline Computing
    Poster

    The LHCb Detector project is home to the detector description of the LHCb experiment. It is used in all data processing applications, from simulation to reconstruction. It is based on the DD4hep package, relying on a combination of XML files and C++ code. The need to support different versions of the detector layout in different data-taking periods, on top of the DD4hep detector...

    Go to contribution page
  449. Emilia Majerz (AGH University of Krakow (PL))
    24/10/2024, 15:18
    Track 5 - Simulation and analysis tools
    Poster

    Simulating the Large Hadron Collider detectors, particularly the Zero Degree Calorimeter (ZDC) of the ALICE experiment, is computationally expensive. This process uses the Monte Carlo approach, which demands significant computational resources, and involves many steps. However, recent advances in generative deep learning architectures present promising methods for speeding up these...

    Go to contribution page
  450. Marta Czurylo (CERN)
    24/10/2024, 16:15
    Track 9 - Analysis facilities and interactive computing
    Talk

    The ROOT software package provides the data format used in High Energy Physics by the LHC experiments. It offers a data analysis interface called RDataFrame, which has proven to adapt well to the requirements of modern physics analyses. However, with the increasing data collected by the LHC experiments, the challenge of performing an efficient analysis grows. One of the solutions to ease this...

    Go to contribution page
  451. Federica Legger (Universita e INFN Torino (IT))
    24/10/2024, 16:15
    Track 5 - Simulation and analysis tools
    Talk

    Gravitational Waves (GW) were first predicted by Einstein in 1918, as a consequence of his theory of General Relativity published in 1915. The first direct GW detection was announced in 2016 by the LIGO and Virgo collaborations. Both experiments consist of a modified Michelson-Morley interferometer that can measure deformations of the interferometer arms of about 1/1,000 the width of a proton....

    Go to contribution page
  452. Daniel Thomas Murnane (Niels Bohr Institute, University of Copenhagen)
    24/10/2024, 16:15
    Track 3 - Offline Computing
    Talk

    Graph neural networks and deep geometric learning have been successfully proven in the task of track reconstruction in recent years. The GNN4ITk project employs these techniques in the context of the ATLAS upgrade ITk detector to produce similar physics performance as traditional techniques, while scaling sub-quadratically. However, one current bottleneck in the throughput and physics...

    Go to contribution page
  453. Qingbao Hu (IHEP)
    24/10/2024, 16:15
    Track 4 - Distributed Computing
    Talk

    The LHAASO experiment is a new-generation multi-component experiment designed to study cosmic rays and gamma-ray astronomy. The data volume from LHAASO currently reaches ~40 PB, and ~11 PB of new data will be generated every year in the future. Processing data at this scale requires a large amount of computing resources. For the LHAASO experiment, there are several types of computing sites to join the...

    Go to contribution page
  454. James William Walder (Science and Technology Facilities Council STFC (GB))
    24/10/2024, 16:15
    Track 7 - Computing Infrastructure
    Talk

    The Square Kilometre Array (SKA) is set to revolutionise radio astronomy and will utilise a distributed network of compute and storage resources, known as SRCNet, to store, process and analyse the data at the exascale. The United Kingdom plays a pivotal role in this initiative, contributing a significant portion of the SRCNet infrastructure. SRCNet v0.1, scheduled for early 2025, will...

    Go to contribution page
  455. Riccardo Vari (Sapienza Universita e INFN, Roma I (IT))
    24/10/2024, 16:15
    Track 2 - Online and real-time computing
    Talk

    The ATLAS experiment at CERN is constructing an upgraded system for the "High Luminosity LHC", with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to...

    Go to contribution page
  456. Andrzej Nowicki (CERN)
    24/10/2024, 16:15
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    In this presentation, I will outline the upcoming transformations set to take place within CERN's database infrastructure. Among the challenges facing our database team during the Long Shutdown 3 (LS3) will be the upgrade of Oracle databases.

    The forthcoming version of Oracle database is introducing a significant internal change as the databases will be converted to a container...

    Go to contribution page
  457. Matteo Migliorini (Universita e INFN, Padova (IT))
    24/10/2024, 16:33
    Track 2 - Online and real-time computing
    Talk

    The CMS Level-1 Trigger Data Scouting (L1DS) introduces a novel approach within the CMS Level-1 Trigger (L1T), enabling the acquisition and processing of L1T primitives at the 40 MHz LHC bunch-crossing (BX) rate. The target for this system is the CMS Phase-2 Upgrade for the High Luminosity phase of LHC, harnessing the improved Phase-2 L1T design, where tracker and high-granularity calorimeter...

    Go to contribution page
  458. Shah Rukh Qasim (University of Zurich (CH))
    24/10/2024, 16:33
    Track 3 - Offline Computing
    Talk

    High quality particle reconstruction is crucial to data acquisition at large CERN experiments. While the classical algorithms have been successful so far, in recent years the use of pattern recognition has become more and more necessary due to the increasing complexity of modern detectors. Graph Neural Network based approaches have recently been proposed to tackle challenges such as...

    Go to contribution page
  459. Carlotta Chiarini (Sapienza Universita e INFN, Roma I (IT))
    24/10/2024, 16:33
    Track 7 - Computing Infrastructure
    Talk

    In the High-Performance Computing (HPC) field, fast and reliable interconnects remain pivotal in delivering efficient data access and analytics.

    In recent years, several interconnect implementations have been proposed, targeting optimization, reprogrammability and other critical aspects. Custom Network Interface Cards (NIC) have emerged as viable alternatives to commercially available...

    Go to contribution page
  460. Dr I. Can Dikmen (Temsa R\&D Center), Mr Murat Isik (Drexel University)
    24/10/2024, 16:33
    Track 5 - Simulation and analysis tools
    Talk

    This paper presents the innovative HPCNeuroNet model, a pioneering fusion of Spiking Neural Networks (SNNs), Transformers, and high-performance computing tailored for particle physics, particularly in particle identification from detector responses. Drawing from the intrinsic temporal dynamics of SNNs and the robust attention mechanisms of Transformers, our approach capitalizes on these...

    Go to contribution page
  461. Sergiu Weisz (National University of Science and Technology POLITEHNICA Bucharest (RO)), Sergiu Weisz (Lawrance Berkeley National Lab)
    24/10/2024, 16:33
    Track 4 - Distributed Computing
    Talk

    The Perlmutter HPC system is the 9th generation supercomputer deployed at the National Energy Research Scientific Computing Center (NERSC). It provides both CPU and GPU resources, offering 393,216 AMD EPYC Milan cores with 4 GB of memory per core for CPU-oriented jobs, and 7,168 NVIDIA A100 GPUs. The machine allows connections from the worker nodes to the outside and already mounts CVMFS for...

    Go to contribution page
  462. Robert William Gardner Jr (University of Chicago (US))
    24/10/2024, 16:33
    Track 9 - Analysis facilities and interactive computing
    Talk

    The ATLAS experiment is currently developing columnar analysis frameworks which leverage the Python data science ecosystem. We describe the construction and operation of the infrastructure necessary to support demonstrations of these frameworks, with a focus on those from IRIS-HEP. One such demonstrator aims to process the compact ATLAS data format PHYSLITE at rates exceeding 200 Gbps. Various...
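    To put the 200 Gbps target in perspective, a rough streaming-time estimate; the 25 TB read volume below is a purely hypothetical figure for the subset of branches an analysis actually touches, not a number from the abstract:

```python
# Streaming-time estimate at a sustained 200 Gbps aggregate rate.
# The 25 TB read volume is a purely hypothetical illustration of the
# subset of branches an analysis actually reads.
rate_gbps = 200                  # aggregate target rate, gigabits per second
rate_gb_per_s = rate_gbps / 8    # 25 gigabytes per second
read_tb = 25                     # hypothetical volume actually read, in TB

seconds = read_tb * 1000 / rate_gb_per_s
print(f"{read_tb} TB at {rate_gbps} Gbps takes {seconds / 60:.0f} minutes")
```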

    Go to contribution page
  463. Guilherme Amadio (CERN)
    24/10/2024, 16:33
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Remote file access is critical in High Energy Physics (HEP) and is currently facilitated by XRootD and HTTP(S) protocols. With a tenfold increase in data volume expected for Run-4, higher throughput is critical. We compare some client-server implementations on 100GE LANs connected to high-throughput storage devices. A joint project between IT and EP departments aims to evaluate RNTuple as a...

    Go to contribution page
  464. Zachary Goggin
    24/10/2024, 16:51
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The recent commissioning of CERN’s Prevessin Data Centre (PDC) opens the opportunity for multi-datacentre Ceph deployments, bringing advantages for business continuity and disaster recovery. However, simply extending a single cluster across data centres is impractical due to the impact of latency on Ceph’s strong consistency requirements. This paper reports on our research towards...

    Go to contribution page
  465. Maksim Melnik Storetvedt (CERN)
    24/10/2024, 16:51
    Track 4 - Distributed Computing
    Talk

    The ALICE Collaboration has begun exploring the use of ARM resources for the execution of Grid payloads. This was prompted both by their recent availability in the WLCG and by their increased competitiveness with traditional x86-based hosts in terms of cost and performance. With a growing number of OEMs providing ARM offerings aimed at servers and HPC, the presence of these...

    Go to contribution page
  466. Alina Lazar (Youngstown State University (US))
    24/10/2024, 16:51
    Track 3 - Offline Computing
    Talk

    Track reconstruction is an essential element of modern and future collider experiments, including the ATLAS detector. The HL-LHC upgrade of the ATLAS detector brings an unprecedented tracking reconstruction challenge, both in terms of the large number of silicon hit cluster readouts and the throughput required for budget-constrained track reconstruction. Traditional track reconstruction...

    Go to contribution page
  467. Oriel Orphee Moira Kiss (Universite de Geneve (CH))
    24/10/2024, 16:51
    Track 5 - Simulation and analysis tools
    Talk

    Hamiltonian moments in Fourier space—expectation values of the unitary evolution operator under a Hamiltonian at various times—provide a robust framework for understanding quantum systems. They offer valuable insights into energy distribution, higher-order dynamics, response functions, correlation information, and physical properties. Additionally, Fourier moments enable the computation of...
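    The quantity in question can be computed exactly for a small classical test case; a minimal sketch, where the two-level Hamiltonian and initial state are illustrative choices, not taken from the contribution:

```python
import numpy as np

# Fourier-space Hamiltonian moments <psi| U(t) |psi> with U(t) = exp(-i H t),
# evaluated exactly via the spectral decomposition of a toy two-level
# Hamiltonian. H and |psi> are illustrative, not from the contribution.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])           # Hermitian toy Hamiltonian
psi = np.array([1.0, 0.0])            # initial state |0>

evals, evecs = np.linalg.eigh(H)      # H = V diag(evals) V^T

def fourier_moment(t):
    # U(t) = V diag(exp(-i * evals * t)) V^T
    U = (evecs * np.exp(-1j * evals * t)) @ evecs.conj().T
    return psi.conj() @ U @ psi

print(fourier_moment(0.0))            # U(0) is the identity, so the moment is 1
print(abs(fourier_moment(1.3)))       # unitarity bounds the magnitude by 1
```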

    Go to contribution page
  468. Axel Naumann (CERN)
    24/10/2024, 16:51
    Track 2 - Online and real-time computing
    Talk

    The Next Generation Triggers project (NextGen in short) is a five-year collaboration across ATLAS and CMS (with contributions from LHCb and ALICE) and the Experimental Physics, Theoretical Physics, and Information Technology Departments of CERN to research and develop new ideas and technologies for the experiment trigger systems for HL-LHC and beyond. After more than a year of preparation in...

    Go to contribution page
  469. David Kelsey (Science and Technology Facilities Council STFC (GB))
    24/10/2024, 16:51
    Track 7 - Computing Infrastructure
    Talk

    The Worldwide LHC Computing Grid (WLCG) community’s deployment of dual-stack IPv6/IPv4 on its worldwide storage infrastructure is very successful and has been presented by us at earlier CHEP conferences. Dual-stack is not, however, a viable long-term solution; the HEPiX IPv6 Working Group has focused on studying where and why IPv4 is still being used, and how to flip such...

    Go to contribution page
  470. Oksana Shadura (University of Nebraska Lincoln (US))
    24/10/2024, 16:51
    Track 9 - Analysis facilities and interactive computing
    Talk

    As a part of the IRIS-HEP “Analysis Grand Challenge” activities, the Coffea-casa AF team executed a “200 Gbps Challenge”. One of the goals of this challenge was to provide a setup for execution of a test notebook-style analysis on the facility that could process a 200 TB CMS NanoAOD dataset in 20 minutes.

    We describe the solutions we deployed at the facility to execute the challenge tasks....

    Go to contribution page
  471. Laura Promberger (CERN)
    24/10/2024, 17:09
    Track 4 - Distributed Computing
    Talk

    The CernVM File System (CVMFS) is an efficient distributed, read-only file system that streams software and data on demand. Its main focus is to distribute experiment software and conditions data to the world-wide LHC computing infrastructure. In WLCG, more than 5 billion files are distributed via CVMFS and its read-only file system client is installed on more than 100,000 worker nodes. Recent...

    Go to contribution page
  472. Hideki Okawa (Chinese Academy of Sciences (CN))
    24/10/2024, 17:09
    Track 5 - Simulation and analysis tools
    Talk

    Jets are key observables to measure the hadronic activities at high energy colliders such as the Large Hadron Collider (LHC) and future colliders such as the High Luminosity LHC (HL-LHC) and the Circular Electron Positron Collider (CEPC). Yet jet reconstruction is a computationally expensive task especially when the number of final-state particles is large. Such a clustering task can be...
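    The combinatorial cost of clustering is visible even in a deliberately naive O(N³) sketch of generalized-kt sequential recombination (p = -1 corresponds to anti-kt); the pt-weighted recombination scheme is a simplification, and production analyses would use FastJet:

```python
import math

# Naive O(N^3) generalized-kt sequential recombination (p = -1: anti-kt).
# Particle values, R, and the pt-weighted recombination scheme are
# illustrative simplifications; real workflows use FastJet.
def cluster(particles, R=0.4, p=-1):
    parts = list(particles)          # each particle: (pt, rapidity, phi)
    jets = []
    while parts:
        # smallest beam distance d_iB = pt^(2p)
        best = min((pt ** (2 * p), i) for i, (pt, _, _) in enumerate(parts))
        best_pair = None
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                pti, yi, phii = parts[i]
                ptj, yj, phij = parts[j]
                dphi = abs(phii - phij)
                dphi = min(dphi, 2 * math.pi - dphi)
                dR2 = (yi - yj) ** 2 + dphi ** 2
                dij = min(pti ** (2 * p), ptj ** (2 * p)) * dR2 / R ** 2
                if dij < best[0]:
                    best = (dij, i)
                    best_pair = (i, j)
        if best_pair is None:
            jets.append(parts.pop(best[1]))   # beam distance wins: a jet
        else:
            i, j = best_pair
            pti, yi, phii = parts[i]
            ptj, yj, phij = parts[j]
            pt = pti + ptj                    # simple pt-weighted merge
            y = (pti * yi + ptj * yj) / pt
            phi = (pti * phii + ptj * phij) / pt
            parts = [q for k, q in enumerate(parts) if k not in (i, j)]
            parts.append((pt, y, phi))
    return jets

# two nearly collinear particles merge into a single jet
print(cluster([(10.0, 0.0, 0.0), (20.0, 0.01, 0.01)]))
```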

    Go to contribution page
  473. Corentin Allaire (IJCLab, Université Paris-Saclay, CNRS/IN2P3)
    24/10/2024, 17:09
    Track 3 - Offline Computing
    Talk

    The reconstruction of particle trajectories is a key challenge of particle physics experiments, as it directly impacts particle identification and physics performance while also representing one of the primary CPU consumers in many high-energy physics experiments. As the luminosity of particle colliders increases, this reconstruction will become more challenging and resource-intensive. New...

    Go to contribution page
  474. Matt Doidge (Lancaster University (GB))
    24/10/2024, 17:09
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    Erasure-coded storage systems based on Ceph have become a mainstay within UK Grid sites as a means of providing bulk data storage whilst maintaining a good balance between data safety and space efficiency. A favoured deployment, as used at the Lancaster Tier-2 WLCG site, is to use CephFS mounted on frontend XRootD gateways as a means of presenting this storage to grid users.

    These storage...
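    The space-efficiency argument for erasure coding over replication is simple arithmetic; a sketch comparing a common k+m layout with triple replication (the 8+3 profile is an illustrative assumption, not necessarily the Lancaster configuration):

```python
# Raw-to-usable storage overhead: k data chunks plus m coding chunks
# cost (k + m) / k raw bytes per usable byte. The 8+3 profile is an
# illustrative Ceph choice; 3x replication is equivalent to k=1, m=2.
def ec_overhead(k, m):
    return (k + m) / k

print(f"EC 8+3 overhead: {ec_overhead(8, 3):.3f}x raw per usable byte")
print(f"3x replication:  {ec_overhead(1, 2):.3f}x raw per usable byte")
```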

    Go to contribution page
  475. Fabio Rossi
    24/10/2024, 17:09
    Track 2 - Online and real-time computing
    Talk

    The new generation of high-energy physics experiments plans to acquire data in streaming mode. With this approach, it is possible to access the information of the whole detector (organized in time slices) for optimal and lossless triggering of data acquisition. Each front-end channel sends data to the processing node via TCP/IP when an event is detected. The data rate in large detectors is...

    Go to contribution page
  476. Kevin Patrick Lannon (University of Notre Dame (US))
    24/10/2024, 17:09
    Track 9 - Analysis facilities and interactive computing
    Talk

    In the data analysis pipeline for LHC experiments, a key aspect is the step in which small groups of researchers—typically graduate students and postdocs—reduce the smallest, common-denominator data format down to a small set of specific histograms suitable for statistical interpretation. Here, we will refer to this step as “analysis” with the recognition that in other contexts, “analysis”...

    Go to contribution page
  477. Justas Balcas (ESnet)
    24/10/2024, 17:09
    Track 7 - Computing Infrastructure
    Talk

    The Large Hadron Collider (LHC) experiments rely on a diverse network of National Research and Education Networks (NRENs) to distribute their data efficiently. These networks are treated as "best-effort" resources by the experiment data management systems. Following the High Luminosity upgrade, the Compact Muon Solenoid (CMS) experiment is projected to generate approximately 0.5 exabytes of...

    Go to contribution page
  478. Dr Michał Orzechowski (AGH University of Krakow, Faculty of Computer Science, Poland)
    24/10/2024, 17:27
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    The Onedata [1] platform is a high-performance data management system with a distributed, global infrastructure that enables users to access heterogeneous storage resources worldwide. It supports various use cases ranging from personal data management to data-intensive scientific computations. Onedata has a fully distributed architecture that facilitates the creation of a hybrid cloud...

    Go to contribution page
  479. Maria Del Carmen Misa Moreira (CERN)
    24/10/2024, 17:27
    Track 7 - Computing Infrastructure
    Talk

    The Network Optimised Experimental Data Transfer (NOTED) service has undergone successful testing at several international conferences, including the International Conference for High Performance Computing, Networking, Storage and Analysis (also known as SuperComputing). It has also been tested at scale during the WLCG Data Challenge 2024, in which NRENs and WLCG sites conducted testing at 25% of the...

    Go to contribution page
  480. Jay Chan (Lawrence Berkeley National Lab. (US))
    24/10/2024, 17:27
    Track 3 - Offline Computing
    Talk

    Track reconstruction is a crucial task in particle experiments and is traditionally very computationally expensive due to its combinatorial nature. Recently, graph neural networks (GNNs) have emerged as a promising approach that can improve scalability. Most of these GNN-based methods, including the edge classification (EC) and the object condensation (OC) approach, require an input graph that...
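    A common baseline for the input-graph construction step is a k-nearest-neighbour graph over hit positions; a minimal sketch with synthetic hits, where the coordinates, hit count, and k are illustrative assumptions:

```python
import numpy as np

# k-nearest-neighbour input graph over hit positions, a simple baseline
# for GNN tracking pipelines. Hits are synthetic; k and N are illustrative.
rng = np.random.default_rng(0)
hits = rng.normal(size=(100, 3))          # 100 hits with (x, y, z)
k = 4

# brute-force pairwise squared distances (fine for small N; real pipelines
# use KD-trees or learned metric embeddings)
d2 = ((hits[:, None, :] - hits[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)              # forbid self-edges
nbrs = np.argsort(d2, axis=1)[:, :k]      # k nearest neighbours per hit

# edge list in the (source, target) shape GNN libraries typically expect
edges = np.array([(i, j) for i in range(len(hits)) for j in nbrs[i]])
print(edges.shape)                        # 100 hits x 4 neighbours
```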

    Go to contribution page
  481. Nick Smith (Fermi National Accelerator Lab. (US))
    24/10/2024, 17:27
    Track 4 - Distributed Computing
    Talk

    The HEPCloud Facility at Fermilab has now been in operation for six years. This facility is used to give a unified provisioning gateway to high performance computing centers, including NERSC, OLCF, and ALCF, other large supercomputers run by the NSF, and commercial clouds. HEPCloud delivers hundreds of millions of core-hours yearly for CMS. HEPCloud also serves other Fermilab experiments...

    Go to contribution page
  482. Qingbao Hu (IHEP)
    24/10/2024, 17:27
    Track 9 - Analysis facilities and interactive computing
    Talk

    China’s High-Energy Photon Source (HEPS), the first national high-energy synchrotron radiation light source, is under design and construction. The HEPS computing center is the principal provider of high-performance computing, data resources, and services for HEPS science experiments. The mission of the HEPS scientific computing platform is to accelerate scientific discovery for the...

    Go to contribution page
  483. Yi-An Chen (National Taiwan University (TW))
    24/10/2024, 17:27
    Track 5 - Simulation and analysis tools
    Talk

    Machine learning, particularly deep neural networks, has been widely used in high-energy physics, demonstrating remarkable results in various applications. Furthermore, the extension of machine learning to quantum computers has given rise to the emerging field of quantum machine learning. In this paper, we propose the Quantum Complete Graph Neural Network (QCGNN), which is a variational...

    Go to contribution page
  484. Gabriele Bortolato (Universita e INFN, Padova (IT))
    24/10/2024, 17:27
    Track 2 - Online and real-time computing
    Talk

    The High-Luminosity LHC upgrade will have a new trigger system that utilizes detailed information from the calorimeter, muon and track finder subsystems at the bunch crossing rate, which enables the final stage of the Level-1 Trigger, the Global Trigger (GT), to use high-precision trigger objects. In addition to cut-based algorithms, novel machine-learning-based algorithms will be employed in...

    Go to contribution page
  485. Marco Donadoni (CERN)
    24/10/2024, 17:45
    Track 9 - Analysis facilities and interactive computing
    Talk

    We have created a Snakemake computational analysis workflow corresponding to the IRIS-HEP Analysis Grand Challenge (AGC) example studying ttbar production channels in the CMS open data. We describe the extensions to the AGC pipeline that allowed porting of the notebook-based analysis to Snakemake. We discuss the applicability of the Snakemake multi-cascading paradigm for running...

    Go to contribution page
  486. Samuel Cadellin Skipsey
    24/10/2024, 17:45
    Track 1 - Data and Metadata Organization, Management and Access
    Talk

    In order to achieve the year-on-year performance increases required by the 2030s for future LHC upgrades at a sustainable carbon cost to the environment, it is essential to start with accurate measurements of the state of play. While a number of studies of the carbon cost of compute for WLCG workloads have been published, rather less has been said on the topic of storage, both nearline...
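    A first-order model of storage carbon cost separates operational emissions (drive power times grid intensity) from amortized embodied emissions; every number in this sketch is an assumed placeholder, not a measured value:

```python
# First-order carbon model for nearline disk storage: operational emissions
# from drive power plus amortized manufacturing (embodied) emissions.
# ALL figures below are illustrative assumptions, not measurements.
capacity_tb = 1000           # usable capacity, TB
w_per_tb = 0.8               # average power draw of spinning disk, W/TB
grid_gco2_per_kwh = 250      # grid carbon intensity, gCO2e/kWh
embodied_kgco2_per_tb = 10   # manufacturing footprint amortized over life
lifetime_years = 5

hours = lifetime_years * 365 * 24
kwh = capacity_tb * w_per_tb * hours / 1000
operational_kg = kwh * grid_gco2_per_kwh / 1000
embodied_kg = capacity_tb * embodied_kgco2_per_tb

print(f"operational: {operational_kg:.0f} kgCO2e over {lifetime_years} years")
print(f"embodied:    {embodied_kg:.0f} kgCO2e")
```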

    Go to contribution page
  487. Heberth Torres (L2I Toulouse, CNRS/IN2P3, UT3)
    24/10/2024, 17:45
    Track 3 - Offline Computing
    Talk

    Graph neural networks represent a potential solution for the computing challenge posed by the reconstruction of tracks at the High Luminosity LHC [1, 2, 3]. The graph concept is convenient to organize the data and to split up the tracking task itself into the subtasks of identifying the correct hypothetical connections (edges) between the hits, subtasks that are easy to parallelize and process...

    Go to contribution page
  488. Volodymyr Svintozelskyi (Univ. of Valencia and CSIC (ES))
    24/10/2024, 17:45
    Track 2 - Online and real-time computing
    Talk

    In this talk we present the HIGH-LOW project, which addresses the need to achieve sustainable computational systems and to develop new Artificial Intelligence (AI) applications that cannot be implemented with current hardware solutions due to the requirements of high-speed response and power constraints. In particular, we focus on several computing solutions at the Large Hadron...

    Go to contribution page
  489. Sergio Andreozzi (EGI Foundation)
    24/10/2024, 17:45
    Track 4 - Distributed Computing
    Talk

    The amount of data gathered, shared and processed in frontier research is set to increase steeply in the coming decade, leading to unprecedented data processing, simulation and analysis needs.

    In particular, the research communities in High Energy Physics and Radio Astronomy are preparing to launch new instruments that require data and compute infrastructures several orders of magnitude...

    Go to contribution page
  490. David Lange, David Lange (Princeton University (US))
    24/10/2024, 17:45
    Track 5 - Simulation and analysis tools
    Talk

    Built on algorithmic differentiation (AD) techniques, differentiable programming makes it possible to evaluate derivatives of computer programs. Such derivatives are useful across domains for gradient-based design optimization and parameter fitting, among other applications. In high-energy physics, AD is frequently used in machine learning model training and in statistical inference tasks such as maximum...
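    The core idea of forward-mode AD fits in a few lines with dual numbers, where each value carries its derivative and arithmetic propagates the chain rule; this is an illustrative toy, not the tooling actually used in HEP workflows:

```python
import math

# Toy forward-mode AD via dual numbers: a value carries (val, dval) and
# every operation applies the chain rule. Illustrative only.
class Dual:
    def __init__(self, val, dval=0.0):
        self.val, self.dval = val, dval
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dval + o.dval)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (fg)' = f g' + f' g
        return Dual(self.val * o.val, self.val * o.dval + self.dval * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: (sin f)' = cos(f) f'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dval)

def deriv(f, x):
    return f(Dual(x, 1.0)).dval      # seed dx/dx = 1

# d/dx [x * sin(x)] = sin(x) + x cos(x)
x = 1.2
print(abs(deriv(lambda t: t * sin(t), x)
          - (math.sin(x) + x * math.cos(x))) < 1e-12)
```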

    Go to contribution page
  491. Lucia Morganti, Ruslan Mashinistov (Brookhaven National Laboratory (US)), Samuel Cadellin Skipsey, Mr Tigran Mkrtchyan (DESY)
    25/10/2024, 09:00
  492. Christina Agapopoulou (Université Paris-Saclay (FR)), David Rohr (CERN), Kunihiro Nagano (KEK High Energy Accelerator Research Organization (JP)), Marco Battaglieri (INFN)
    25/10/2024, 09:15
  493. Charis Kleio Koraka (University of Wisconsin Madison (US)), Davide Valsecchi (ETH Zurich (CH)), Laura Cappelli (INFN Ferrara), Rosen Matev (CERN)
    25/10/2024, 09:30
  494. Daniela Bauer (Imperial College (GB)), Fabio Hernandez (IN2P3 / CNRS computing centre), Gianfranco Sciacca (Universitaet Bern (CH)), Panos Paparrigopoulos (CERN)
    25/10/2024, 09:45
  495. Giacomo De Pietro (Karlsruhe Institute of Technology), Jonas Rembser (CERN), Marilena Bandieramonte (University of Pittsburgh (US)), Tobias Stockmanns
    25/10/2024, 10:00
  496. Matthew Feickert (University of Wisconsin Madison (US)), Nathan Grieser (University of Cincinnati (US)), Tobias Fitschen (The University of Manchester (GB)), Wouter Deconinck
    25/10/2024, 10:15
  497. Bruno Heinrich Hoeft (KIT - Karlsruhe Institute of Technology (DE)), Christoph Wissing (Deutsches Elektronen-Synchrotron (DE)), Flavio Pisani (CERN), Henryk Giemza (National Centre for Nuclear Research (PL))
    25/10/2024, 11:00
  498. Giovanni Guerrieri (CERN), Jake Vernon Bennett (University of Mississippi (US)), James Catmore (University of Oslo (NO)), Lene Kristian Bryngemark (Lund University (SE))
    25/10/2024, 11:15
  499. Enric Tejedor Saavedra (CERN), Marta Czurylo (CERN), Nick Smith (Fermi National Accelerator Lab. (US)), Dr Nicole Skidmore (University of Warwick)
    25/10/2024, 11:30
  500. Agnieszka Dziurda (Polish Academy of Sciences (PL)), Dorothea Vom Bruch (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France), Katy Ellis (Science and Technology Facilities Council STFC (GB)), Stephan Hageboeck (CERN), Tomasz Szumlak (AGH University of Krakow (PL))
    25/10/2024, 11:45
  501. Agnieszka Dziurda (Polish Academy of Sciences (PL)), Dorothea Vom Bruch (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France), Katy Ellis (Science and Technology Facilities Council STFC (GB)), Phat Srimanobhas (Chulalongkorn University (TH)), Stephan Hageboeck (CERN), Tomasz Szumlak (AGH University of Krakow (PL))
    25/10/2024, 12:00
  502. 25/10/2024, 17:00
    Talk

    Panel discussion with scientists from Kraków research centres:

    1. Prof. Agnieszka Obłąkowska-Mucha
    2. Dr Eng. Agnieszka Dziurda
    3. Prof. Tomasz Bołd
    4. Dr Eng. Paweł Janowski
    5. Dr Eng. Bartłomiej Rachwał
    6. Prof. Tomasz Szumlak
    Go to contribution page
  503. Dorothea Vom Bruch (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France), Katy Ellis (Science and Technology Facilities Council STFC (GB)), Stephan Hageboeck (CERN)
    Talk
  504. Dr Madan Timalsina (NERSC/LBNL)
    Track 7 - Computing Infrastructure
    Talk

    This presentation delves into the implementation and optimization of checkpoint-restart mechanisms in High-Performance Computing (HPC) environments, with a particular focus on Distributed MultiThreaded CheckPointing (DMTCP). We explore the use of DMTCP both within and outside of containerized environments, emphasizing its application on NERSC Perlmutter, a cutting-edge supercomputing system....

    Go to contribution page
  505. Antonio Gioiosa (University of Molise & INFN Roma Tor Vergata)
    Track 2 - Online and real-time computing
    Talk

    The Mu2e experiment at Fermilab aims to observe coherent neutrinoless conversion of a muon to an electron in the field of an aluminum nucleus, with a sensitivity improvement of 10,000 times over current limits.
    The Mu2e Trigger and Data Acquisition System (TDAQ) uses the otsdaq framework as its online Data Acquisition System (DAQ) solution.
    Developed at Fermilab, otsdaq integrates...

    Go to contribution page
  506. Track 3 - Offline Computing
    Talk
  507. Dr Andrea Sciabà (CERN), Jakob Blomer (CERN), Dr Vincenzo Eduardo Padulano (CERN)
    Plenary
    Talk

    The IT and EP departments have jointly launched a formal project within the Research and Computing sector to evaluate a novel format for the physics analysis data utilized in LHC experiments and other fields. The objective of this initiative is to substitute the current TTree data format of ROOT with a more efficient format known as RNTuple, which provides superior support for...

    Go to contribution page
  508. Jakob Blomer (CERN)
    Plenary

    For several years, the ROOT team has been developing the new RNTuple I/O subsystem in preparation for the next generation of collider experiments. Both the HL-LHC and DUNE are expected to start data taking by the end of this decade. They pose unprecedented challenges to event data I/O in terms of data rates, event sizes and event complexity. At the same time, the I/O landscape is getting more diverse....

    Go to contribution page
  509. Bartlomiej Rachwal (AGH University of Krakow (PL))
    Track 2 - Online and real-time computing
    Talk