23–28 Oct 2022
Villa Romanazzi Carducci, Bari, Italy
Europe/Rome timezone

Session

Poster session with coffee break

24 Oct 2022, 11:00
Area Poster (Floor -1) (Villa Romanazzi)


  1. Michael Boehler (Albert Ludwigs Universitaet Freiburg (DE))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The goal of this study is to understand the observed differences in ATLAS software performance, when comparing results measured under ideal laboratory conditions with those from ATLAS computing resources on the Worldwide LHC Computing Grid (WLCG). The laboratory results are based on the full simulation of a single ttbar event and use dedicated, local hardware. In order to have a common and...

  2. Dr Guang Zhao (Institute of High Energy Physics)
    24/10/2022, 11:00
    Poster

    Ionization of matter by charged particles is the main mechanism for particle identification in gaseous detectors. Traditionally, the ionization is measured by the total energy loss (dE/dx). The concept of cluster counting, which measures the number of clusters per track length (dN/dx), was proposed in the 1970s. The dN/dx measurement can avoid many sources of fluctuations from the dE/dx...

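The contrast between the two observables named in this abstract can be sketched with a toy track model. This is illustrative only: the cluster list, energies, and track length below are invented, not taken from the contribution.

```python
# Toy comparison of dE/dx and dN/dx for a single track.
# Each ionization cluster is (number of electrons, deposited energy in keV);
# the last cluster mimics a rare large deposit that inflates dE/dx.
clusters = [(2, 0.9), (1, 0.4), (3, 1.6), (1, 0.5), (2, 35.0)]
track_length_cm = 4.0

def de_dx(clusters, length):
    """Total energy loss per unit length (keV/cm) -- sensitive to large deposits."""
    return sum(e for _, e in clusters) / length

def dn_dx(clusters, length):
    """Cluster count per unit length (1/cm) -- insensitive to deposit size."""
    return len(clusters) / length

print(de_dx(clusters, track_length_cm))  # ~9.6, dominated by the single 35 keV deposit
print(dn_dx(clusters, track_length_cm))  # 1.25, unaffected by it
```

The point of the dN/dx technique is visible even in this toy: the count-based observable ignores the size of individual deposits, which is the dominant fluctuation source in dE/dx.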
  3. Diego Ciangottini (INFN, Perugia (IT))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The challenges expected for the HL-LHC era, both in terms of storage and computing resources, provide LHC experiments with a strong motivation for evaluating ways of re-thinking their computing models at many levels. In fact, a big chunk of the R&D effort of the CMS experiment has been focused on optimizing computing and storage resource utilization for data analysis, and Run 3 could...

  4. Fabrizio Alfonsi (Universita e INFN, Bologna (IT))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The High Energy Physics world will face challenging trigger requirements in the next decade. In particular, the luminosity increase to 5–7.5 × 10³⁴ cm⁻² s⁻¹ at the LHC will push major experiments such as ATLAS to exploit online tracking for their inner detectors, to reach 10 kHz of events from the 1 MHz Calorimeter and Muon Spectrometer trigger. The project described here is a proposal for a tuned...

  5. Thomas Britton
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Hydra is an AI system employing off-the-shelf computer vision technologies aimed at autonomously monitoring data quality. Data quality monitoring is an essential step in modern experimentation and Nuclear Physics is no exception. Certain failures can be identified through alarms (e.g. electrical heartbeats) while others are more subtle and often require expert knowledge to identify and...

  6. Biying Hu (Sun Yat-sen University)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    High energy physics experiments are pushing forward precision measurements and searching for new physics beyond the Standard Model. Simulating and generating massive amounts of data to meet physics requirements is an urgent need, and making good use of the existing power of supercomputers is one of the most active areas in high energy physics computing. Taking the BESIII experiment as an illustration, we...

  7. Rui Zhang (University of Wisconsin Madison (US))
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    AtlFast3 is the next generation of high precision fast simulation in ATLAS that is being deployed by the collaboration and was successfully used for the simulation of 7 billion events in Run 2 data taking conditions. AtlFast3 combines a parametrization-based approach known as FastCaloSimV2 and a machine-learning based tool that exploits Generative Adversarial Networks (FastCaloGAN) for the...

  8. Antonio Vagnerini (Università di Torino)
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The inner tracking system of the CMS experiment, consisting of the silicon pixel and strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to perform the primary and secondary vertex reconstruction. The movements of the individual substructures of the tracker detectors are driven by the change in the operating conditions during data taking....

  9. Brunella D'Anzi (Universita e INFN, Bari (IT)), CMS Collaboration
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Accurate reconstruction of charged particle trajectories and measurement of their parameters (tracking) is one of the major challenges of the CMS experiment. Precise and efficient tracking is a critical component of the CMS physics program, as it impacts the ability to reconstruct the physics objects needed to understand proton-proton collisions at the LHC. In this work, we present...

  10. CMS collaboration, Marc Huwiler (University of Zurich (CH))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Building on top of the multithreading functionality that was introduced in Run-2, the CMS software framework (CMSSW) has been extended in Run-3 to offload part of the physics reconstruction to NVIDIA GPUs. The first application of this new feature is the High Level Trigger (HLT): the new computing farm installed at the beginning of Run-3 is composed of 200 nodes, and for the first time each...

  11. Lukas Alexander Heinrich (CERN)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    High Energy Physics (HEP) has been using column-wise data stored in synchronized containers, such as most prominently ROOT’s TTree, for decades. These containers have proven to be very powerful as they combine row-wise association capabilities needed by most HEP event processing frameworks (e.g. Athena) with column-wise storage, which typically results in better compression and more efficient...

  12. Stefano Lacaprara (INFN sezione di Padova)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II experiment has been collecting data since 2019 at the second generation e+/e- B-factory SuperKEKB in Tsukuba, Japan. The goal of the experiment is to explore new physics via high precision measurement in flavor physics. This is achieved by collecting a large amount of data that needs to be calibrated promptly for fast reconstruction and recalibrated thoroughly for the final...

  13. Xiaoyu Liu (Central China Normal University CCNU (CN)), Xiaoyu Liu (Institute of High Energy Physics, CAS)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Computing in high energy physics is a typical data-intensive application; data analysis in particular requires access to large amounts of data. Traditional computing systems adopt a "computing-storage" separation model, which leads to large data movements during processing and also increases transmission delay and network load. Therefore, it can...

  14. Claudio Caputo (Universite Catholique de Louvain (UCL) (BE))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The outstanding performance obtained by the CMS experiment during Run 1 and Run 2 represents a great achievement of seamless hardware and software integration. Among the different software components, the CMS offline reconstruction software is essential for translating the data acquired by the detectors into concrete objects that can be easily handled by analyzers. The CMS offline reconstruction...

  15. Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The landscape of computing power available for the CMS experiment is rapidly evolving, from a scenario dominated by x86 processors deployed at WLCG sites, towards a more diverse mixture of Grid, HPC, and Cloud facilities incorporating a higher fraction of non-CPU components, such as GPUs. Using these facilities’ heterogeneous resources efficiently to process the vast amounts of data to be...

  16. Andrius Vaitkus (University of London (GB))
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    During ATLAS Run 2, a large proportion of the CPU time in the online track reconstruction algorithm of the Inner Detector (ID) was dedicated to fast track finding. With the proposed HL-LHC upgrade, where the event pile-up is predicted to reach ⟨μ⟩ = 200, track finding will see a further large increase in CPU usage. Moreover, only a small subset of Pixel-only seeds is accepted after the...

  17. William Axel Leight (University of Massachusetts Amherst)
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The production of simulated datasets for use by physics analyses consumes a large fraction of ATLAS computing resources, a problem that will only get worse as increases in the instantaneous luminosity provided by the LHC lead to more collisions per bunch crossing (pile-up). One of the more resource-intensive steps in the Monte Carlo production is reconstructing the tracks in the ATLAS Inner...

  18. Xiaoyu Liu (IHEP)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    As the scale and complexity of High Energy Physics (HEP) experiments increase, researchers face the challenge of large-scale data processing. In terms of storage, HDFS, a distributed file system that supports the "data-centric" processing model, has been widely used in academia and industry. This file system can support Spark and other computing frameworks that exploit data locality,...

  19. Boyang Yu
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Measuring rare processes at Belle II requires a huge luminosity, which means a large number of simulations is necessary to determine signal efficiencies and background contributions. However, this process carries high computational costs, while most of the simulated data, in particular in the case of background, are discarded by the event selection. Thus filters using graph neural networks...

  20. Meinrad Moritz Schefer (Universitaet Bern (CH))
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The ATLAS detector at CERN measures proton-proton collisions at the Large Hadron Collider (LHC), which allows us to test the limits of the Standard Model (SM) of particle physics. Forward-moving electrons produced in these collisions are promising candidates for finding physics beyond the SM. However, the ATLAS detector is not designed to measure forward leptons with pseudorapidity $\eta$ of...

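For orientation, "forward" in this abstract means large pseudorapidity, which is computed from the polar angle θ relative to the beam axis as η = −ln tan(θ/2). A minimal stdlib check (the angles chosen are illustrative, not from the contribution):

```python
import math

def pseudorapidity(theta_rad):
    """Pseudorapidity from the polar angle theta, measured from the beam axis."""
    return -math.log(math.tan(theta_rad / 2.0))

print(pseudorapidity(math.pi / 2))                 # ~0: perpendicular to the beam
print(round(pseudorapidity(math.radians(10)), 2))  # ~2.44: increasingly forward
```

Small polar angles (tracks close to the beam pipe) map to large η, which is exactly the region the abstract says the detector is not designed to cover.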
  21. Ceyhun Uzunoglu (CERN)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    As CMS starts Run 3 data taking, the experiment's data management software tools, along with the monitoring infrastructure, have undergone significant upgrades to cope with the conditions expected in the coming years. The challenges of efficient, real-time monitoring of the performance of the computing infrastructure and of data distribution are being met using state-of-the-art...

  22. Lia Lavezzi (Universita e INFN Torino (IT))
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    PARSIFAL (PARametrized SImulation) is a software tool originally implemented to reproduce the complete response of a triple-GEM detector to the passage of a charged particle, accounting for the physical processes involved through simple parametrizations and thus running very fast.
    Robust and reliable software, such as GARFIELD++, is widely used to simulate the transport of electrons...

  23. Farouk Mokhtar (Univ. of California San Diego (US))
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The particle-flow (PF) algorithm is of central importance to event reconstruction at the CMS detector, and has been a focus of developments in light of planned Phase-2 running conditions with an increased pileup and detector granularity. Current rule-based implementations rely on extrapolating tracks to the calorimeters, correlating them with calorimeter clusters, subtracting charged energy...

  24. Muhammad Imran (National Centre for Physics (PK))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Secrets management is the process of handling secrets, such as certificates, database credentials, tokens, and API keys, in a secure and centralized way. In the present CMSWEB (the portfolio of CMS internal IT services) infrastructure, only the operators maintain all service and cluster secrets in a secure place. However, if all relevant persons with secrets are away, then we are left with no...

  25. Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The CMS Submission Infrastructure is the main computing resource provisioning system for CMS workflows, including data processing, simulation and analysis. It currently aggregates nearly 400k CPU cores distributed worldwide from Grid, HPC and cloud providers. CMS Tier-0 tasks, such as data repacking and prompt reconstruction, critical for data-taking operations, are executed on a collection of...

  26. Elliott Kauffman (Duke University (US))
    24/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Over the past several years, a deep learning model based on convolutional neural networks has been developed to find proton-proton collision points (also known as primary vertices, or PVs) in Run 3 LHCb data. By converting the three-dimensional space of particle hits and tracks into a one-dimensional kernel density estimator (KDE) along the direction of the beamline and using the KDE as an...

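The KDE preprocessing this abstract describes can be illustrated with a small stdlib-only sketch: project track positions onto the beam axis, build a one-dimensional Gaussian kernel density estimate, and treat local maxima as vertex candidates. The z-values, bandwidth, and grid below are invented for illustration; the actual PV-finder works on detector-level inputs and feeds the KDE to a convolutional network.

```python
import math

# Hypothetical z-positions (mm) where reconstructed tracks cross the beamline:
# two clumps of tracks suggest two primary-vertex candidates.
track_z = [-4.8, -5.1, -5.0, -4.9, 12.0, 11.8, 12.3, 12.1, 11.9]
BANDWIDTH = 0.5  # Gaussian kernel width in mm (tuning assumption)

def kde(z, points, h=BANDWIDTH):
    """One-dimensional Gaussian kernel density estimate along the beam axis."""
    norm = 1.0 / (len(points) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((z - p) / h) ** 2) for p in points)

# Scan a coarse grid and keep local maxima as vertex candidates.
grid = [z / 10.0 for z in range(-200, 201)]  # -20 mm .. +20 mm in 0.1 mm steps
density = [kde(z, track_z) for z in grid]
peaks = [grid[i] for i in range(1, len(grid) - 1)
         if density[i] > density[i - 1] and density[i] > density[i + 1]]
print(peaks)  # two candidates, near z = -5.0 and z = +12.0
```

Collapsing the 3D hit information into a 1D density along the beamline is what makes a CNN tractable here: the network only has to locate peaks in a 1D signal.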
  27. Ralf Florian Von Cube (KIT - Karlsruhe Institute of Technology (DE))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    With the LHC restarting after more than three years of shutdown, unprecedented amounts of data are expected to be recorded. Even with the WLCG providing a tremendous amount of compute resources to process these data, local resources will have to be used for additional compute power. This, however, makes the landscape in which computing takes place more heterogeneous.

    In this contribution, we...

  28. Stefano Dal Pra (Universita e INFN, Bologna (IT))
    24/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The INFN-CNAF Tier-1 has been engaged for years in a continuous effort to integrate its computing centre with more types of computing resources. In particular, the challenge of providing opportunistic access to non-standard CPU architectures, such as PowerPC, or to hardware accelerators (GPUs), has been actively pursued. In this work, we describe a solution to transparently integrate access to...

  48. Steffen Stärz (McGill University, (CA))
    24/10/2022, 16:10
    Poster

    The Phase-II upgrade of the LHC will increase its instantaneous luminosity by a factor of 7, leading to the High Luminosity LHC (HL-LHC). At the HL-LHC, the number of proton-proton collisions in one bunch crossing (called pileup) increases significantly, putting more stringent requirements on the LHC detectors' electronics and real-time data processing capabilities.

    The ATLAS Liquid Argon...

  58. Jacopo Cerasoli (CNRS - IPHC)
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Over the past few years, intriguing deviations from the Standard Model predictions have been reported in measurements of angular observables and branching fractions of $B$ meson decays, suggesting the existence of a new interaction that acts differently on the three lepton families. The Belle II experiment has unique features that allow the study of $B$ meson decays with invisible particles in the...

  59. Zhijun Li (Sun Yat-Sen University (CN))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Detector modeling and visualization are essential in the life cycle of a High Energy Physics (HEP) experiment. Unity is professional multimedia creation software with the advantages of rich visualization effects and easy deployment on various platforms. In this work, we applied the method of detector transformation to convert the BESIII detector description from the offline software...

    Go to contribution page
  60. Ianna Osborne (Princeton University)
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Awkward Arrays and RDataFrame provide two very different ways of performing calculations at scale. By adding the ability to zero-copy convert between them, users get the best of both. This gives users greater flexibility in mixing different packages and languages in their analysis.

    In Awkward Array version 2, the ak.to_rdataframe function presents a view of an Awkward Array as an RDataFrame...

    Go to contribution page
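
    A zero-copy conversion such as ak.to_rdataframe hands the consumer the producer's existing buffers instead of duplicating the data. A stdlib-only sketch of the idea, with memoryview standing in for the Awkward/RDataFrame machinery:

```python
import array

# The "producer's" buffer: a contiguous array of doubles.
buf = array.array('d', [1.0, 2.0, 3.0])

# A zero-copy "view": no bytes are duplicated, only a new interface
# onto the same memory is created.
view = memoryview(buf)

# A mutation through the view is visible through the original object,
# proving both sides share one buffer.
view[0] = 99.0
```

The same principle lets RDataFrame iterate directly over Awkward Array buffers without a conversion cost proportional to the data size.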
  61. Andrii Verbytskyi (Max Planck Society (DE))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    We present a revived version of the CERNLIB, the basis for the software ecosystems of most of the pre-LHC HEP experiments. The efforts to consolidate the CERNLIB are part of the activities of the Data Preservation for High Energy Physics collaboration to preserve data and software of the past HEP experiments.

    The presented version is based on the CERNLIB version 2006 with numerous...

    Go to contribution page
  62. Simon Akar (University of Cincinnati (US))
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Identifying and locating proton-proton collisions in LHC experiments (known as primary vertices or PVs) has been the topic of numerous conference talks in the past few years (2019-2021). Searches over a variety of network architectures have yielded promising candidates for PV-finder. The UNet model, for example, has achieved an efficiency of 98% with a low false-positive rate. These...

    Go to contribution page
  63. Dennis Klein (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)), Dr Christian Tacke (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The FairRoot software stack is a toolset for the simulation, reconstruction, and analysis of high energy particle physics experiments (currently used, e.g., at FAIR/GSI and CERN). In this work we give insight into recent improvements of Continuous Integration (CI) for this software stack. CI is a modern software engineering method to efficiently assure software quality. We discuss relevant...

    Go to contribution page
  64. Rahul Chauhan (CERN)
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    After the successful adoption of Rucio, introduced in 2018 as the new data management system, a subsequent step is to promote it to users and other stakeholders. In this perspective, one of the objectives is to keep improving the tooling around Rucio. As Rucio introduces a new data management paradigm with respect to the previous model, we begin by tackling the challenges arising...

    Go to contribution page
  65. Kaixuan Huang
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In a High Energy Physics (HEP) experiment, the Data Quality Monitoring (DQM) system is crucial for ensuring the correct and smooth operation of the experimental apparatus during data taking. DQM at the Jiangmen Underground Neutrino Observatory (JUNO) will reconstruct raw data directly from the JUNO Data Acquisition (DAQ) system and use event visualization tools to show the detector performance for high...

    Go to contribution page
  66. Alexey Rybalchenko (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The common ALICE-FAIR software framework ALFA offers a platform for simulation, reconstruction and analysis of particle physics experiments. FairMQ is a module of ALFA that provides building blocks for distributed data processing pipelines composed of components communicating via message passing. FairMQ integrates and efficiently utilizes standard industry data transport technologies,...

    Go to contribution page
  67. Irene Andreou, Noam Mouelle (Imperial College London)
    25/10/2022, 11:00
    Poster

    We evaluate two Generative Adversarial Network (GAN) models developed by the COherent Muon to Electron Transition (COMET) collaboration to generate sequences of particle hits in a Cylindrical Drift Chamber (CDC). The models are first evaluated by measuring the similarity between distributions of particle-level physical features. We then measure the Effectively Unbiased Fréchet Inception...

    Go to contribution page
  68. Namitha Chithirasreemadam (University of Pisa)
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The [Mu2e][1] experiment will search for the CLFV neutrinoless coherent conversion of a muon to an electron in the field of an Aluminium nucleus. A custom offline event display has been developed for Mu2e using [TEve][2], a ROOT-based 3D event visualisation framework. Event displays are crucial for monitoring and debugging during live data taking, as well as for public outreach. A custom GUI...

    Go to contribution page
  69. Aurora Perego (Universita & INFN, Milano-Bicocca (IT))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The CMS software framework (CMSSW) has recently been extended to perform part of the physics reconstruction with NVIDIA GPUs. To avoid writing a different implementation of the code for each back-end, the decision was made to use a performance portability library, and Alpaka has been chosen as the solution for Run 3.
    In the meantime, different studies have been performed to test the track...

    Go to contribution page
  70. Stefan Rua (Aalto University)
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The CMS collaboration has a growing interest in the use of heterogeneous computing and accelerators to reduce the costs and improve the efficiency of the online and offline data processing: online, the High Level Trigger is fully equipped with NVIDIA GPUs; offline, a growing fraction of the computing power is coming from GPU-equipped HPC centres. One of the topics where accelerators could be...

    Go to contribution page
  71. Dalila Salamani (CERN)
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Describing the development of particle cascades in a calorimeter of a high energy physics experiment relies on precise simulation of particle interactions with matter. This simulation is inherently slow and constitutes a challenge for HEP experiments. Furthermore, with the upcoming high luminosity upgrade of the Large Hadron Collider and a much increased data production rate, the amount of required...

    Go to contribution page
  72. Eric Cano (CERN)
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    GPU applications require a structure of array (SoA) layout for the data to achieve good memory access performance. During the development of the CMS Pixel reconstruction for GPUs, the Patatrack developers crafted various techniques to optimise the data placement in memory and its access inside GPU kernels. The work presented here gathers, automates and extends those patterns, and offers a...

    Go to contribution page
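
    The structure-of-arrays layout mentioned above stores each field in its own contiguous array, so that neighbouring GPU threads read neighbouring addresses. A Python sketch of the AoS-to-SoA transposition (illustrative only; the work described generates such layouts in C++):

```python
# Array-of-structures (AoS): the fields of each object are interleaved,
# so threads reading the same field across objects access strided memory.
aos = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]

# Structure-of-arrays (SoA): one contiguous array per field, so reads of
# a single field across objects are contiguous (coalesced on a GPU).
soa = {"x": [1.0, 3.0], "y": [2.0, 4.0]}

def aos_to_soa(objects):
    """Transpose an AoS container into SoA form."""
    fields = objects[0].keys()
    return {f: [o[f] for o in objects] for f in fields}
```

The design goal in the CMS work is to automate this transposition so that kernel authors write natural per-object code while the data stays in SoA form.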
  73. Annika Stein (Rheinisch Westfaelische Tech. Hoch. (DE)), Spandan Mondal (RWTH Aachen (DE))
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In the field of high-energy physics, deep learning algorithms continue to gain in relevance and provide performance improvements over traditional methods, for example when identifying rare signals or finding complex patterns. From an analyst's perspective, obtaining the highest possible performance is desirable, but recently some focus has been laid on studying the robustness of models to investigate...

    Go to contribution page
  74. Benno Kach (Deutsches Elektronen-Synchrotron (DE))
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this study, jets with up to 30 particles are modelled using Normalizing Flows with Rational Quadratic Spline coupling layers. The invariant mass of the jet is a powerful global feature to control whether the flow-generated data contains the same high-level correlations as the training data. The use of normalizing flows without conditioning shows that they lack the expressive power to do...

    Go to contribution page
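
    The jet invariant mass used above as a control variable is computed from the summed four-momentum of the constituents, m = sqrt((ΣE)² − |Σp|²). A minimal sketch with toy particles (not flow-generated data):

```python
import math

def jet_invariant_mass(particles):
    """Invariant mass of a jet from its constituents' four-momenta
    (E, px, py, pz): m = sqrt((sum E)^2 - |sum p|^2)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    # Clamp to zero so floating-point noise cannot produce sqrt of a negative.
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two massless back-to-back particles of energy 5: total E = 10, total p = 0,
# so the invariant mass is 10.
jet = [(5.0, 5.0, 0.0, 0.0), (5.0, -5.0, 0.0, 0.0)]
mass = jet_invariant_mass(jet)
```

Because the mass mixes all constituents nonlinearly, reproducing its distribution is a sensitive check that a generative model has learned the inter-particle correlations.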
  75. Rosamaria Venditti (Universita e INFN, Bari (IT))
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The CMS experiment employs an extensive data quality monitoring (DQM) and data certification (DC) procedure. Currently, this approach consists mainly of the visual inspection of reference histograms which summarize the status and performance of the detector. Recent developments in several of the CMS subsystems have shown the potential of computer-assisted DQM and DC using autoencoders,...

    Go to contribution page
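
    The computer-assisted DQM idea above flags a histogram whose reconstruction error exceeds a threshold. The sketch below substitutes a fixed reference histogram for the autoencoder's reconstruction; the shapes and threshold are invented for illustration:

```python
def reconstruction_error(histogram, reference):
    """Mean squared difference between a monitored histogram and its
    reference. An autoencoder-based DQM replaces `reference` with the
    model's own reconstruction of the input."""
    n = len(histogram)
    return sum((h - r) ** 2 for h, r in zip(histogram, reference)) / n

reference = [10.0, 50.0, 100.0, 50.0, 10.0]   # shape of a certified "good" run
good_run  = [11.0, 49.0, 101.0, 50.0,  9.0]   # statistical fluctuations only
bad_run   = [10.0, 50.0,  10.0, 50.0, 10.0]   # a dead region in one bin

threshold = 5.0  # made-up value; in practice tuned on certified runs
good_flagged = reconstruction_error(good_run, reference) > threshold
bad_flagged  = reconstruction_error(bad_run, reference) > threshold
```

An autoencoder trained only on good runs reconstructs them well and reconstructs anomalies poorly, which is what makes the reconstruction error usable as an anomaly score.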
  76. Wuming Luo (Institute of High Energy Physics, Chinese Academy of Science)
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Jiangmen Underground Neutrino Observatory (JUNO), located in southern China, will be the world's largest liquid scintillator (LS) detector. Equipped with 20 kton of LS, 17623 20-inch PMTs and 25600 3-inch PMTs in the central detector, JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. One of the challenges for JUNO...

    Go to contribution page
  77. Felice Pantaleo (CERN)
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Particle Flow (PF) algorithm, used for the majority of CMS data analyses for event reconstruction, provides a comprehensive list of final-state particle candidates and enables efficient identification and mitigation methods for simultaneous proton-proton collisions (pileup). The higher instantaneous luminosity expected during the upcoming LHC Run 3 will impose challenges for CMS event...

    Go to contribution page
  78. Oscar Roberto Chaparro Amaro (Instituto Politécnico Nacional. Centro de Investigación en Computación)
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Density Functional Theory (DFT) is an extensively used ab initio method for calculating the electronic properties of molecules. Compared with Hartree-Fock methods, DFT offers appropriate approximations at a reduced computational cost. Recently, the DFT method has been used for discovering and analyzing protein interactions by means of calculating the free energies of these macro-molecules from...

    Go to contribution page
  79. Daniele Spiga (Universita e INFN, Perugia (IT))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Computing resources in the Worldwide LHC Computing Grid (WLCG) have been based entirely on the x86 architecture for more than two decades. In the near future, however, heterogeneous non-x86 resources, such as ARM, POWER and RISC-V, will become a substantial fraction of the resources that will be provided to the LHC experiments, due to their presence in existing and planned world-class HPC...

    Go to contribution page
  80. Danilo Piparo (CERN)
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Phase-2 upgrade of CMS, coupled with the projected performance of the HL-LHC, shows great promise in terms of discovery potential. However, the increased granularity of the CMS detector and the higher complexity of the collision events generated by the accelerator pose challenges in the areas of data acquisition, processing, simulation, and analysis. These challenges cannot be solved...

    Go to contribution page
  81. Elias Leutgeb (Technische Universitaet Wien (AT))
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The CMS Level-1 Trigger, for its operation during Phase-2 of the LHC, will undergo a significant upgrade and redesign. The new trigger system, based on multiple families of custom boards equipped with Xilinx UltraScale+ FPGAs and interconnected with high-speed optical links at 25 Gb/s, will exploit more detailed information from the detector subsystems (calorimeter, muon systems, tracker). In...

    Go to contribution page
  82. John Lawrence (University of Notre Dame (US))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    With the start of Run 3 in 2022, the LHC has entered a new period, now delivering higher energy and luminosity proton beams to the Compact Muon Solenoid (CMS) experiment. These increases make it critical to maintain and upgrade the tools and methods used to monitor the rate at which data is collected (the trigger rate). Software tools have been developed to allow for automated rate monitoring,...

    Go to contribution page
  83. Bernhard Manfred Gruber (Technische Universitaet Dresden (DE))
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Choosing the best memory layout for each hardware architecture is increasingly important as more and more programs become memory bound. For portable codes that run across heterogeneous hardware architectures, the choice of the memory layout for data structures is ideally decoupled from the rest of a program.
    The low-level abstraction of memory access (LLAMA) is a C++ library that provides a...

    Go to contribution page
  84. Giulia Lavizzari
    25/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We present a machine-learning based method to detect deviations from a reference model in a way that is almost independent of the theory assumed to describe the new physics responsible for the discrepancies.

    The analysis is based on an Effective Field Theory (EFT) approach: under this hypothesis the Lagrangian of the system can be written as an infinite expansion of terms, where the...

    Go to contribution page
  85. Moritz David Bauer
    25/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II experiment at the second generation e+/e- B-factory SuperKEKB has been collecting data since 2019 and aims to accumulate 50 times more data than the first generation experiment, Belle. To efficiently process these steadily growing datasets of recorded and simulated data, which end up on the order of 100 PB, and to support Grid-based analysis workflows using the DIRAC Workload...

    Go to contribution page
  88. Garima Singh (Princeton University (US))
    25/10/2022, 16:10
    Poster

    RooFit is a toolkit for statistical modeling and fitting used by most experiments in particle physics. As data sets from next-generation experiments grow, the processing requirements for physics analysis become more computationally demanding, necessitating performance optimizations for RooFit. One possibility to speed up minimization and add stability is the use of automatic differentiation...

    Go to contribution page
  114. Yu Hu
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    High-performance fourth-generation synchrotron radiation light sources, such as the High Energy Photon Source (HEPS), have been proposed and are being built. The advent of beamlines at fourth-generation synchrotron sources and of advanced detectors pushes the demand for computing resources to the edge of current workstation capabilities. On the other hand, the...

    Go to contribution page
  115. Yu Gao
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ROOT TTree is widely used for the analysis and storage of data from various high-energy physics experiments. The event data generated by an experiment are stored in the TTree's branches, then compressed and archived in a standard ROOT-format file. At present, ROOT supports compressed storage of the TBasket, the buffer of a TBranch, using compression algorithms such as zlib, lzma, lz4, zstd,...

    Go to contribution page
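
    The trade-off between the codecs listed above can be probed with the Python standard library (zstd and lz4 are not in the stdlib, so the sketch below compares zlib, lzma and bz2 on synthetic, repetitive data; it is not ROOT's TBasket machinery):

```python
import zlib, lzma, bz2

# Synthetic "event data": highly repetitive records compress well,
# which is typical of structured physics data in TBranch buffers.
payload = b"hit:0042;" * 1000

sizes = {
    "raw":  len(payload),
    "zlib": len(zlib.compress(payload, level=6)),  # ROOT's historical default family
    "lzma": len(lzma.compress(payload)),           # slower, usually smaller output
    "bz2":  len(bz2.compress(payload)),
}
```

Comparing compressed sizes (and timing the calls) on representative data is essentially the experiment such a study performs at scale with ROOT's configurable compression settings.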
  116. Max Fischer (Karlsruhe Institute of Technology)
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Modern high energy physics experiments and similar compute-intensive fields are pushing the limits of dedicated grid and cloud infrastructure. In the past years, research into augmenting this dedicated infrastructure by integrating opportunistic resources, i.e. compute resources temporarily acquired from third-party resource providers, has yielded various strategies to approach this challenge....

    Go to contribution page
  117. Andrew Schick
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    A precise measurement of the polarizability of the charged pion provides an important experimental test of our understanding of low-energy QCD. The goal of the Charged Pion Polarizability (CPP) experiment in Hall D at JLab, currently underway, is to make a precision measurement of this quantity through a high statistics study of the γγ → π+π− reaction near 2π threshold. The production of...

    Go to contribution page
  118. Corentin Allaire (Université Paris-Saclay (FR)), Rocky Bala Garg (Stanford University (US))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The reconstruction of particle trajectories is a key challenge of particle physics experiments, as it directly impacts particle reconstruction and physics performance. To reconstruct these trajectories, different reconstruction algorithms are used sequentially. Each of these algorithms uses many configuration parameters that need to be fine-tuned to properly account for the...

    Go to contribution page
  119. Yuyi Wang (Tsinghua University)
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    One way to improve the position and energy resolution in neutrino experiments is to provide the reconstruction method with high-resolution parameters. These parameters, the photoelectron (PE) hit times and the expected PE count, can be extracted from the waveforms. We developed a new waveform analysis method called Fast Stochastic Matching Pursuit (FSMP). It is based on Bayesian...
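    The core matching-pursuit idea can be sketched in a few lines. This is a minimal greedy variant on a hypothetical single-PE template, not the Bayesian FSMP method described above: repeatedly find the template shift that best explains the residual waveform and subtract it.

    ```python
    # Hypothetical single-PE pulse shape (illustrative values only).
    TEMPLATE = [0.2, 1.0, 0.6, 0.3, 0.1]

    def add_pulse(wave, t, scale=1.0):
        # Deposit one scaled template starting at sample t.
        for i, v in enumerate(TEMPLATE):
            if t + i < len(wave):
                wave[t + i] += scale * v

    def matching_pursuit(wave, n_pe):
        residual = list(wave)
        hits = []
        norm = sum(v * v for v in TEMPLATE)
        for _ in range(n_pe):
            # Find the shift whose template projection best explains the residual.
            best_t, best_score = 0, float("-inf")
            for t in range(len(wave) - len(TEMPLATE)):
                score = sum(residual[t + i] * v for i, v in enumerate(TEMPLATE))
                if score > best_score:
                    best_t, best_score = t, score
            hits.append(best_t)
            amp = best_score / norm
            for i, v in enumerate(TEMPLATE):
                residual[best_t + i] -= amp * v
        return sorted(hits)

    wave = [0.0] * 40
    for t in [5, 20]:                    # two PE hits at known times
        add_pulse(wave, t)
    print(matching_pursuit(wave, 2))     # -> [5, 20]
    ```

    FSMP replaces this deterministic greedy loop with stochastic sampling over the number, times, and charges of PEs.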

    Go to contribution page
  120. Ms Xiaoqian Jia (Shandong University)
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Track reconstruction (or tracking) plays an essential role in the offline data processing of collider experiments. For the BESIII detector working in the tau-charm energy region, plenty of effort was made previously to improve the tracking performance with traditional methods such as pattern recognition and the Hough transform. However, for challenging tasks, such as the tracking of low...

    Go to contribution page
  121. Simon Schnake (DESY / RWTH Aachen University)
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In particle physics, precise simulations are necessary to enable scientific progress. However, accurate simulations of the interaction processes in calorimeters are complex and computationally very expensive, demanding a large fraction of the available computing resources in particle physics at present. Various generative models have been proposed to reduce this computational cost. Usually,...

    Go to contribution page
  122. Carlos Perez Dengra (PIC-CIEMAT)
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The XRootD protocol is used by the CMS experiment at the LHC to access, transfer, and store data within Worldwide LHC Computing Grid (WLCG) sites running different kinds of jobs on their compute nodes. Its redirector system allows execution tasks to run by accessing input data stored on any WLCG site. In 2029 the Large Hadron Collider (LHC) will start the High-Luminosity LHC (HL-LHC)...

    Go to contribution page
  123. Tao Lin (Chinese Academy of Sciences (CN))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO) has a very rich physics program which primarily aims at determining the neutrino mass ordering and precisely measuring the oscillation parameters. It is under construction in South China at a depth of about 700 m underground. As data taking will start in 2023, a complete data processing chain is being developed before the data...

    Go to contribution page
  124. Anish Biswas (Princeton University (US))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Awkward Array is a library for nested, variable-sized data, including arbitrary-length lists, records, mixed types, and missing data, using NumPy-like idioms. Auto-differentiation (also known as “autograd” and “autodiff”) is a technique for computing the derivative of a function defined by an algorithm, which requires the derivative of all operations used in that algorithm to be known.
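    The forward-mode flavour of autodiff can be illustrated generically with dual numbers — this is a minimal sketch of the concept, not Awkward Array's actual implementation: every operation propagates a value together with its derivative.

    ```python
    # Forward-mode automatic differentiation with dual numbers: each Dual
    # carries a value and the derivative of that value w.r.t. the input.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def f(x):
        return x * x + 3 * x        # f'(x) = 2x + 3

    x = Dual(2.0, 1.0)              # seed derivative dx/dx = 1
    y = f(x)
    print(y.val, y.der)             # -> 10.0 7.0
    ```

    Extending this idea to nested, variable-sized arrays is precisely what requires derivative rules for every Awkward Array operation.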

    The...

    Go to contribution page
  125. Ameya Thete (Birla Institute of Technology and Science, Pilani - KK Birla Goa Campus (IN))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    A broad range of particle physics data can be naturally represented as graphs. As a result, Graph Neural Networks (GNNs) have gained prominence in HEP and have increasingly been adopted for a wide array of particle physics tasks, including particle track reconstruction. Most problems in physics involve data that have some underlying compatibility with symmetries. These problems may either...

    Go to contribution page
  126. Charles Leggett (Lawrence Berkeley National Lab (US))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of floating point computing power in leadership-class computing facilities and traditional data centers coming from new accelerator architectures, such as GPUs. HEP experiments are now faced...

    Go to contribution page
  127. Matteo Barbetti (Universita e INFN, Firenze (IT))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The simplest and often most effective way of parallelizing the training of complex Machine Learning models is to execute several training instances on multiple machines, possibly scanning the hyperparameter space to optimize the underlying statistical model and the learning procedure.
    Often, such a meta-learning procedure is limited by the ability to securely access a common database...
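    The pattern of several independent training instances reporting into one shared trial store can be sketched as follows. The schema, objective, and in-memory database are hypothetical illustrations, not the authors' actual service; a real deployment would point at a shared, access-controlled server.

    ```python
    import sqlite3

    # Shared trial database for a distributed hyperparameter scan (sketch).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE trials (lr REAL, batch_size INTEGER, score REAL)")

    def toy_objective(lr, batch_size):
        # Stand-in for a full training run returning a validation score.
        return -(lr - 0.01) ** 2 - 0.0001 * batch_size

    # Each grid point could run on its own machine; all report to the same table.
    for lr in (0.001, 0.01, 0.1):
        for bs in (32, 64, 128):
            db.execute("INSERT INTO trials VALUES (?, ?, ?)",
                       (lr, bs, toy_objective(lr, bs)))

    # Any worker can query the best configuration found so far.
    best = db.execute(
        "SELECT lr, batch_size FROM trials ORDER BY score DESC LIMIT 1").fetchone()
    print(best)   # -> (0.01, 32)
    ```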

    Go to contribution page
  128. Eric Wulff (CERN)
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE), researchers from science and industry develop novel, scalable Artificial Intelligence technologies towards Exascale. In this work, we leverage European High-Performance Computing (HPC) resources to perform large-scale hyperparameter optimization (HPO),...

    Go to contribution page
  129. Tao Lin (Chinese Academy of Sciences (CN))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO) is under construction in South China and will start data taking in 2023. It has a central detector with a 20-kt liquid scintillator, equipped with 17,612 20-inch PMTs (photo-multiplier tubes) and 25,600 3-inch PMTs. The required energy resolution of 3% at 1 MeV makes the offline data processing challenging, so several machine learning...

    Go to contribution page
  130. Erica Brondolin (CERN)
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    CLUE is a fast and innovative density-based clustering algorithm to group digitized energy deposits (hits) left by a particle traversing the active sensors of a high-granularity calorimeter in clusters with a well-defined seed hit. Outliers, i.e. hits which do not belong to any clusters, are also identified. Its outstanding performance has been proven in the context of the CMS Phase-2 upgrade...

    Go to contribution page
  131. Matteo Barbetti (Universita e INFN, Firenze (IT))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    About 90% of the computing resources available to the LHCb experiment have been spent producing simulated data samples for Run 2 of the Large Hadron Collider. The upgraded LHCb detector will operate at much-increased luminosity, requiring many more simulated events for Run 3. Simulation is a key necessity for analyses to interpret data in terms of signal and background and to estimate relevant...

    Go to contribution page
  132. Alessandra Carlotta Re (Universita' degli Studi & INFN of Milano (Italy))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO) is under construction in South China at a depth of about 700 m underground; data taking is expected to start in late 2023. JUNO has a very rich physics program which primarily aims at determining the neutrino mass ordering and precisely measuring the oscillation parameters.
    The JUNO average raw data volume is expected...

    Go to contribution page
  133. Thomas Madlener (Deutsches Elektronen-Synchrotron (DESY))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The podio event data model (EDM) toolkit provides an easy way to generate a performant implementation of an EDM from a high level description in yaml format. We present the most recent developments in podio, most importantly the inclusion of a schema evolution mechanism for generated EDMs as well as the "Frame", a thread safe, generalized event data container. For the former we discuss some of...

    Go to contribution page
  134. Tim Voigtlaender (KIT - Karlsruhe Institute of Technology (DE))
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Machine Learning (ML) applications, which have become quite common tools for many High Energy Physics (HEP) analyses, benefit significantly from GPU resources. GPU clusters are important to fulfill the rapidly increasing demand for GPU resources in HEP. Therefore, the Karlsruhe Institute of Technology (KIT) provides a GPU cluster for HEP accessible from the physics institute via its batch...

    Go to contribution page
  135. Davide Valsecchi (ETH Zurich (CH))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker and...

    Go to contribution page
  136. Adriano Di Florio (Politecnico e INFN, Bari), Giorgio Pizzati (Universita & INFN, Milano-Bicocca (IT))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The future development projects for the Large Hadron Collider will steadily increase the nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. This would result in up to 200 simultaneous proton collisions (pileup), posing significant challenges for the CMS detector reconstruction.

    The CMS primary vertex (PV) reconstruction is a...

    Go to contribution page
  137. Federico Scutti (Swinburne University of Technology)
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The pyrate framework provides a dynamic, versatile, and memory-efficient approach to data format transformations, object reconstruction and data analysis in particle physics. Developed within the context of the SABRE experiment for dark matter direct detection, pyrate relies on a blackboard design pattern where algorithms are dynamically evaluated throughout a run and scheduled by a central...

    Go to contribution page
  138. Florian Reiss (University of Manchester (GB))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on studying decays of c- and b-hadrons. For Run 3 of the LHC (data taking from 2022), LHCb will take data at an instantaneous luminosity of 2 × 10³³ cm⁻² s⁻¹, five times higher than in Run 2 (2015-2018). To cope with the harsher data taking conditions, LHCb will deploy a purely software based...

    Go to contribution page
  139. Melvin Strobl
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Quantum Computing and Machine Learning are both significant and appealing research fields. In particular, their combination has led to the emergence of quantum machine learning, a research field which has recently gained enormous popularity. We investigate the potential advantages of this synergy for applications in high energy physics, more precisely in the reconstruction of...

    Go to contribution page
  140. Manos Vourliotis (Univ. of California San Diego (US))
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    One of the most challenging computational problems in Run 3 of the Large Hadron Collider (LHC), and more so in the High-Luminosity LHC (HL-LHC), is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods used so far at the LHC, and in particular at the CMS experiment, are based on the Kalman filter technique. Such methods have been shown to be robust and...

    Go to contribution page
  141. Valentin Volkl (CERN)
    26/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Key4hep project aims to provide a turnkey software solution for the full experiment life-cycle, based on established community tools. Several future collider communities (CEPC, CLIC, EIC, FCC, and ILC) have joined to develop and adapt their workflows to use the common data model EDM4hep and common framework. Besides sharing of existing experiment workflows, one focus of the Key4hep project...

    Go to contribution page
  142. Annabel Kropf (DESY Hamburg)
    26/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    LUXE (Laser Und XFEL Experiment) is a proposed experiment at DESY using the electron beam of the European XFEL and a high-intensity laser. LUXE will study Quantum Electrodynamics (QED) in the strong-field regime, where QED becomes non-perturbative. One of the key measurements is the positron rate from electron-positron pair creation, which is enabled by the use of a silicon tracking detector....

    Go to contribution page
  143. Marcel Hohmann
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Belle II experiment has been taking data at the SuperKEKB collider since 2018. Particle identification is a key component of the reconstruction, and several detector upgrades from Belle to Belle II were designed to maintain performance with the higher background rates.
    We present a method for a data-driven calibration that improves the overall particle identification performance and is...

    Go to contribution page
  144. Maggie Voetberg, Sophia Zhou
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The size, complexity, and duration of telescope surveys are growing beyond the capacity of traditional methods for scheduling observations. Scheduling algorithms must have the capacity to balance multiple (often competing) observational and scientific goals, address both short-term and long-term considerations, and adapt to rapidly changing stochastic elements (e.g., weather). Reinforcement...

    Go to contribution page
  145. Marco Barbone
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this work we present the adaptation of the popular clustering algorithm DBSCAN to reconstruct the primary vertex (PV) at the hardware trigger level in collisions at the High-Luminosity LHC. Nominally, PV reconstruction is performed by a simple histogram-based algorithm. The main challenge in PV reconstruction is that the particle tracks need to be processed in a low-latency environment...
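    The clustering step can be sketched in one dimension — this is a generic toy DBSCAN over track z-positions, not the low-latency hardware implementation described above, and the eps/min_pts values and z coordinates are illustrative only:

    ```python
    # Toy one-dimensional DBSCAN: cluster track z-positions along the beamline
    # into primary-vertex candidates, flagging isolated tracks as noise.
    def dbscan_1d(zs, eps, min_pts):
        n = len(zs)
        labels = [None] * n          # None=unvisited, -1=noise, >=0=cluster id

        def region(i):
            return [j for j in range(n) if abs(zs[j] - zs[i]) <= eps]

        cid = 0
        for i in range(n):
            if labels[i] is not None:
                continue
            nbrs = region(i)
            if len(nbrs) < min_pts:
                labels[i] = -1       # isolated track: noise
                continue
            labels[i] = cid
            queue = [j for j in nbrs if j != i]
            while queue:
                j = queue.pop()
                if labels[j] == -1:
                    labels[j] = cid  # absorb former noise as a border point
                if labels[j] is not None:
                    continue
                labels[j] = cid
                jn = region(j)
                if len(jn) >= min_pts:   # core point: keep expanding
                    queue.extend(k for k in jn if labels[k] is None)
            cid += 1
        return labels

    track_z = [0.01, -0.02, 0.03, 0.00, 5.10, 5.08, 5.12, 9.70]  # toy values
    print(dbscan_1d(track_z, eps=0.1, min_pts=3))  # -> [0, 0, 0, 0, 1, 1, 1, -1]
    ```

    The hardware challenge lies in making the neighbourhood queries and cluster expansion fit a fixed, low-latency budget.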

    Go to contribution page
  146. Jerry 🦑 Ling (Harvard University (US))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Template Bayesian inference via Automatic Differentiation in JuliaLang

    Binned template-fitting is one of the most important tools in the High-Energy physics (HEP) statistics toolbox. Statistical models based on combinations of histograms are often the last step in a HEP physics analysis. Both model and data can be represented in a standardized format - HistFactory (C++/XML) and more...
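    A minimal version of binned template fitting can be written down directly. This is a generic sketch of the idea behind HistFactory-style models (with hypothetical templates and a single free signal strength), not the Julia/autodiff machinery of the contribution:

    ```python
    # Fit the signal strength mu in the model data_i = mu * signal_i + background_i
    # by least squares over the bins.
    signal     = [2.0, 10.0, 4.0]    # hypothetical signal template
    background = [20.0, 15.0, 12.0]  # hypothetical background template
    data       = [23.0, 30.0, 18.0]  # observed counts

    # Minimizing sum_i (d_i - mu*s_i - b_i)^2 gives
    #   mu = sum_i s_i*(d_i - b_i) / sum_i s_i^2
    num = sum(s * (d - b) for s, d, b in zip(signal, data, background))
    den = sum(s * s for s in signal)
    mu = num / den
    print(round(mu, 3))   # -> 1.5
    ```

    Real HistFactory models replace the quadratic loss with a Poisson likelihood and add nuisance parameters, which is where automatic differentiation of the likelihood becomes valuable.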

    Go to contribution page
  147. Svenja Diekmann (Rheinisch Westfaelische Tech. Hoch. (DE))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The usage of Deep Neural Networks (DNNs) as multi-classifiers is widespread in modern HEP analyses. In standard categorisation methods, the high-dimensional output of the DNN is often reduced to a one-dimensional distribution by exclusively passing the information about the highest class score to the statistical inference method. Correlations to other classes are thereby omitted.
    Moreover, in...
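    The reduction being criticised can be made concrete with a small sketch (generic softmax classifier with hypothetical logits, not the analysis's actual network):

    ```python
    import math

    # A multi-class network emits a full score vector, but standard
    # categorisation keeps only the winning class and its score.
    def softmax(logits):
        m = max(logits)                       # subtract max for stability
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]

    logits = [2.0, 1.0, 0.1]                  # hypothetical 3-class DNN output
    scores = softmax(logits)

    # Full information: the whole score vector, including class correlations.
    print([round(p, 3) for p in scores])

    # Standard reduction: only the highest class score survives.
    winner = max(range(len(scores)), key=scores.__getitem__)
    print(winner, round(scores[winner], 3))
    ```

    Keeping the full vector (or functions of it) in the statistical model is what recovers the discarded inter-class information.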

    Go to contribution page
  148. Nick Smith (Fermi National Accelerator Lab. (US))
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    To support the needs of novel collider analyses such as long-lived particle searches, considerable computing resources are spent forward-copying data products from low-level data tiers like CMS AOD and MiniAOD to reduced data formats for end-user analysis tasks. In the HL-LHC era, it will be increasingly difficult to ensure online access to low-level data formats. In this talk, we present a...

    Go to contribution page
  149. Brunella D'Anzi (Universita e INFN, Bari (IT))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The large statistical fluctuations in the ionization energy loss of charged particles in gaseous detectors imply that many measurements are needed along the particle track to obtain a precise mean, and this represents a limit to the particle separation capabilities that should be overcome in the design of future colliders. The cluster counting technique (dN/dx)...
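    The motivation for cluster counting can be illustrated with a crude toy model — the distributions below are stand-ins (a truncated heavy tail for per-sample energy loss, a binomial for the cluster count), not a physical simulation:

    ```python
    import random

    # Compare relative fluctuations of total energy loss (dE/dx-like, heavy
    # Landau-ish tail) against a cluster count (dN/dx-like, near-Poissonian).
    random.seed(42)

    def rel_std(xs):
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        return var ** 0.5 / m

    n_tracks, n_samples = 2000, 50

    # dE/dx: sum of heavy-tailed per-sample losses (crude Landau stand-in).
    de_dx = [sum(min(1.0 / random.random(), 50.0) for _ in range(n_samples))
             for _ in range(n_tracks)]

    # dN/dx: number of primary ionization clusters along the track.
    dn_dx = [sum(random.random() < 0.25 for _ in range(4 * n_samples))
             for _ in range(n_tracks)]

    # The counting observable fluctuates far less, relatively.
    print(rel_std(de_dx) > rel_std(dn_dx))
    ```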

    Go to contribution page
  150. Saransh Chopra (Cluster Innovation Centre, University of Delhi)
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Due to the massive nature of HEP data, performance has always been a factor in its analysis and processing. Languages like C++ would be fast enough but are often challenging for beginners to grasp, and can be difficult to iterate on quickly in an interactive environment. On the other hand, the ease of writing code and extensive library ecosystem make Python an enticing choice for data analysis....

    Go to contribution page
  151. Ali Marafi (Kuwait University (KW)), Andrea Bocci (CERN)
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    In the past years the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs. This can achieve a higher computational efficiency, but it adds extra complexity to the design of dedicated data centres and the use of opportunistic resources, like HPC centres. A possible solution to increase the flexibility of heterogeneous clusters is to...

    Go to contribution page
  152. Andrea Di Luca (Universita degli Studi di Trento and INFN (IT))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    HEPD-02 is a new, upgraded version of the High Energy Particle Detector as part of a suite of instruments for the second mission of the China Seismo-Electromagnetic Satellite (CSES-02) to be launched in 2023. Designed and realized by the Italian Collaboration LIMADOU of the CSES program, it is optimized to identify fluxes of charged particles (mostly electrons and protons) and determine their...

    Go to contribution page
  153. Michael Poat
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    In real-time computing facilities, system, network, and security monitoring are core components of running efficiently and effectively. As many diverse things can go awry, such as load, network, process, and power issues, having a well-functioning monitoring system is imperative. In many facilities you will see the standard set of tools such as Ganglia, Grafana, Nagios, etc....

    Go to contribution page
  154. Alexander Bogatskiy (Flatiron Institute, Simons Foundation)
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We hold these truths to be self-evident: that all physics problems are created unequal, that they are endowed with their unique data structures and symmetries, that among these are tensor transformation laws, Lorentz symmetry, and permutation equivariance. A lot of attention has been paid to the applications of common machine learning methods in physics experiments and theory. However, much...

    Go to contribution page
  155. Gabor Biro (Wigner Research Centre for Physics (Wigner RCP) (HU))
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ever growing increase of computing power necessary for the storage and data analysis of the high-energy physics experiments at CERN requires performance optimization of the existing and planned IT resources.

    One of the main computing capacity consumers in the HEP software workflow is the data analysis. To optimize the resource usage, the concept of Analysis Facility (AF) for Run 3 has...

    Go to contribution page
  156. Mohamed Hemdan
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Particle physics experiments spend large amounts of computational effort on Monte Carlo simulations. Due to the computational expense of simulations, they are often executed and stored in large distributed computing clusters. To lessen the computational cost, physicists have introduced alternatives to speed up the simulation. Generative Adversarial Networks (GANs) are an excellent...

    Go to contribution page
  157. Josh Bendavid (CERN), Kenneth Long (Massachusetts Inst. of Technology (US))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". Precision measurements with present LHC data already face many of these challenges today. We will discuss performance scaling and optimization of RDataFrame for complex physics analyses,...

    Go to contribution page
  158. Sophie Berkman
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Neutrino experiments that use liquid argon time projection chamber (LArTPC) detectors are growing bigger and expect to see more neutrinos with next generation beams, and therefore will require more computing resources to reach their physics goals of measuring CP violation in the neutrino sector and exploring anomalies. These resources can be used to their full capacity by incorporating...

    Go to contribution page
  159. Nicola De Fillipis
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Ultra-low mass and high granularity Drift Chambers fulfill the requirements for tracking systems of modern High Energy Physics experiments at the future high luminosity facilities (FCC-ee or CEPC).
    We present how, in helium-based gas mixtures, by measuring the arrival times of each individual ionization cluster and by using proper statistical tools, it is possible to perform a bias...

    Go to contribution page
  160. Zef Wolffs (Nikhef National institute for subatomic physics (NL))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    RooFit is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analyses become more computationally demanding. Therefore, recent RooFit developments were focused on performance optimization, in particular to...

    Go to contribution page
  161. Hosein Karimi Khozani (IHEP)
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    There are established classical methods to reconstruct particle tracks from hits recorded on particle detectors. Current algorithms do this either by cuts on some features, like the recorded times of the hits, or by a fitting process. This is potentially error-prone and resource-consuming. For high-noise events these issues are more critical, and such methods might even fail. We have been...

    Go to contribution page
  162. Mr Jan Stephan
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    The alpaka library is a header-only C++17 abstraction library for development across hardware accelerators (CPUs, GPUs, FPGAs). Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism. In this talk we will show the concepts behind alpaka, how it is mapped to the various underlying hardware models, and show...

    Go to contribution page
  163. Umit Sozbilir (Universita e INFN, Bari (IT))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In recent years, new technologies and new approaches have been developed in academia and industry to face the necessity to both handle and easily visualize huge amounts of data, the so-called “big data”. The increasing volume and complexity of HEP data challenge the HEP community to develop simpler and yet powerful interfaces based on parallel computing on heterogeneous platforms. Good...

    Go to contribution page
  164. Dmitry Popov (University of Chinese Academy of Sciences (CN))
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Monte Carlo simulations are a vital tool for all physics programmes of particle physics experiments. Their accuracy and reliability in reproducing detector response is of the utmost importance. For the LHCb experiment, which is embarking on a new data-taking era with an upgraded detector, a full suite of verifications has been put in place for its simulation software to ensure the quality of the...

    Go to contribution page
  165. Vasilis Belis (ETH Zurich (CH))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We developed supervised and unsupervised quantum machine learning models for anomaly detection tasks at the Large Hadron Collider at CERN. Current Noisy Intermediate Scale Quantum (NISQ) devices have a limited number of qubits and qubit coherence. We designed dimensionality reduction models based on Autoencoders to accommodate the constraints dictated by the quantum hardware. Different designs...

    Go to contribution page
  166. Giovanna Lazzari Miotto (Universidade Federál Do Rio Grande Do Sul (BR))
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Compared to LHC Run 1 and Run 2, future HEP experiments, e.g. at the HL-LHC, will increase the volume of generated data by an order of magnitude. In order to sustain the expected analysis throughput, ROOT's RNTuple I/O subsystem has been engineered to overcome the bottlenecks of the TTree I/O subsystem, focusing also on a compact data format, asynchronous and parallel requests, and a layered...

    Go to contribution page
  167. Lorenzo Moneta (CERN)
    27/10/2022, 11:00
    Poster

    Through its TMVA package, ROOT provides and connects to machine learning tools for data analysis at HEP experiments and beyond. In addition, through its powerful I/O system and RDataFrame analysis tools, ROOT provides the capability to efficiently select and query input data from large data sets as typically used in HEP analysis. At the same time, several Machine Learning tools exist...

    Go to contribution page
  168. Irina Espejo Morales (New York University (US))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    MadMiner is a python module that implements a powerful family of multivariate inference techniques that leverage both matrix element information and machine learning.

    This multivariate approach neither requires the reduction of high-dimensional data to summary statistics nor any simplifications to the underlying physics or detector response.

    In this paper, we address some of the...

    Go to contribution page
  169. Nathalie Soybelman (Weizmann Institute of Science (IL)), Mr Nilotpal Kakati (Weizmann Institute of Science)
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The feature complexity of data recorded by particle detectors combined with the availability of large simulated datasets presents a unique environment for applying state-of-the-art machine learning (ML) architectures to physics problems. We present the Simplified Cylindrical Detector (SCD): a fully configurable GEANT4 calorimeter simulation which mimics the granularity and response...

    Go to contribution page
  170. Felice Pantaleo (CERN), Wahid Redjeb (Rheinisch Westfaelische Tech. Hoch. (DE))
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    To sustain the harsher conditions of the high-luminosity LHC, the CMS Collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly granular information in the readout to help mitigate the effects of the pile up. In regions characterized by lower radiation levels, small...

    Go to contribution page
  171. Elham E Khoda (University of Washington (US))
    27/10/2022, 11:00
    Track 1: Computing Technology for Physics Research
    Poster

    Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent...

    Go to contribution page
  172. Raquel Pezoa Rivera (Universidad de Valparaíso)
    27/10/2022, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The classification of HEP events, or separating signal events from the background, is one of the most important analysis tasks in High Energy Physics (HEP), and a foundational task in the search for new phenomena. Complex deep learning-based models have been fundamental for achieving accurate and outstanding performance in this classification task. However, the quantification of the...

    Go to contribution page
  173. Marcel Hohmann
    27/10/2022, 16:10
    Poster

    The Belle II experiment has been taking data at the SuperKEKB collider since 2018. Particle identification is a key component of the reconstruction, and several detector upgrades from Belle to Belle II were designed to maintain performance with the higher background rates.
    We present a method for a data-driven calibration that improves the overall particle identification performance and is...

    Go to contribution page
  174. Maggie Voetberg, Sophia Zhou
    27/10/2022, 16:10
    Poster

    The size, complexity, and duration of telescope surveys are growing beyond the capacity of traditional methods for scheduling observations. Scheduling algorithms must have the capacity to balance multiple (often competing) observational and scientific goals, address both short-term and long-term considerations, and adapt to rapidly changing stochastic elements (e.g., weather). Reinforcement...

    Go to contribution page
  175. Marco Barbone
    27/10/2022, 16:10
    Poster

    In this work we present the adaptation of the popular clustering algorithm DBSCAN to reconstruct the primary vertex (PV) at the hardware trigger level in collisions at the High-Luminosity LHC. Nominally, PV reconstruction is performed by a simple histogram-based algorithm. The main challenge in PV reconstruction is that the particle tracks need to be processed in a low-latency environment...
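To make the idea concrete, density-based clustering of track z positions can be sketched with a minimal pure-Python 1D DBSCAN; this is an illustrative toy, not the authors' hardware implementation, and the track positions and parameters below are invented.

```python
from collections import Counter

def dbscan_1d(zs, eps, min_pts):
    """Minimal DBSCAN for 1D points; returns per-point labels (-1 = noise)."""
    n = len(zs)
    labels = [-1] * n
    visited = [False] * n

    def neighbors(i):
        return [j for j in range(n) if abs(zs[j] - zs[i]) <= eps]

    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            continue  # provisional noise; may still join a cluster later
        labels[i] = cluster
        queue = list(nbrs)
        while queue:               # expand the cluster through core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors(j)) >= min_pts:
                    queue.extend(neighbors(j))
        cluster += 1
    return labels

# Hypothetical track z positions (cm): a dense bunch near z = 0 (the PV),
# a too-small pile-up-like pair, and one stray track.
zs = [0.01, 0.02, 0.00, -0.01, 0.015, 2.50, 2.52, -5.0]
labels = dbscan_1d(zs, eps=0.05, min_pts=3)
pv_label, n_trk = Counter(l for l in labels if l >= 0).most_common(1)[0]
pv_z = sum(z for z, l in zip(zs, labels) if l == pv_label) / n_trk  # ~0.007
```

The low-latency challenge mentioned above comes from doing this kind of neighborhood search within a fixed hardware budget, rather than from the algorithm's logic itself.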

    Go to contribution page
  176. Jerry 🦑 Ling (Harvard University (US))
    27/10/2022, 16:10
    Poster

    Template Bayesian inference via Automatic Differentiation in JuliaLang

    Binned template-fitting is one of the most important tools in the High-Energy physics (HEP) statistics toolbox. Statistical models based on combinations of histograms are often the last step in a HEP physics analysis. Both model and data can be represented in a standardized format - HistFactory (C++/XML) and more...
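A HistFactory-style model ultimately reduces to a product of Poisson terms over histogram bins. As a hedged illustration of the statistical core (deliberately in plain Python rather than the JuliaLang/automatic-differentiation implementation presented here), a minimal binned maximum-likelihood fit of a signal-strength parameter mu can be sketched with invented templates:

```python
import math

# Hypothetical 4-bin templates: expected signal and background yields,
# and observed data counts.
sig = [2.0, 8.0, 12.0, 3.0]
bkg = [50.0, 40.0, 30.0, 20.0]
data = [53, 49, 44, 24]

def nll(mu):
    """Negative log-likelihood for the model nu_i = mu*s_i + b_i,
    with Poisson-distributed bin counts (constant log n! dropped)."""
    total = 0.0
    for n, s, b in zip(data, sig, bkg):
        lam = mu * s + b
        total -= n * math.log(lam) - lam
    return total

# Crude grid scan for the best-fit signal strength; a real tool would
# minimize with gradients, which is where automatic differentiation helps.
grid = [i / 1000 for i in range(0, 3001)]
mu_hat = min(grid, key=nll)
```

Automatic differentiation replaces the grid scan with exact gradients of `nll`, which is what makes gradient-based minimizers and Hamiltonian Monte Carlo samplers practical for much larger models.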

    Go to contribution page
  177. Svenja Diekmann (Rheinisch Westfaelische Tech. Hoch. (DE))
    27/10/2022, 16:10
    Poster

    The usage of Deep Neural Networks (DNNs) as multi-classifiers is widespread in modern HEP analyses. In standard categorisation methods, the high-dimensional output of the DNN is often reduced to a one-dimensional distribution by exclusively passing the information about the highest class score to the statistical inference method. Correlations to other classes are hereby omitted.
    Moreover, in...
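The information loss described above can be seen in a toy example (invented numbers, not the authors' model): reducing a softmax output to its argmax hides the fact that an event may be almost evenly split between two classes.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical 3-class DNN outputs for two events.
events = [[2.0, 1.9, -1.0],   # classes 0 and 1 nearly tied
          [3.0, -2.0, -2.0]]  # unambiguous class-0 event

cats, probs = [], []
for logits in events:
    p = softmax(logits)
    probs.append(p)
    # Standard categorisation keeps only the winning class index,
    # discarding the full probability vector p.
    cats.append(max(range(len(p)), key=lambda i: p[i]))
# Both events land in category 0, although p for the first event is
# roughly (0.51, 0.46, 0.03) -- information a statistical inference on
# the full output vector would retain.
```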

    Go to contribution page
  178. Nick Smith (Fermi National Accelerator Lab. (US))
    27/10/2022, 16:10
    Poster

    To support the needs of novel collider analyses such as long-lived particle searches, considerable computing resources are spent forward-copying data products from low-level data tiers like CMS AOD and MiniAOD to reduced data formats for end-user analysis tasks. In the HL-LHC era, it will be increasingly difficult to ensure online access to low-level data formats. In this talk, we present a...

    Go to contribution page
  179. Brunella D'Anzi (Universita e INFN, Bari (IT))
    27/10/2022, 16:10
    Poster

    The large statistical fluctuations of the ionization energy loss of charged particles in gaseous detectors imply that many measurements are needed along the particle track to obtain a precise mean, and this represents a limit to the particle separation capabilities that should be overcome in the design of future colliders. The cluster counting technique (dN/dx)...
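The statistical advantage of counting clusters rather than summing energy can be seen in a toy Monte Carlo (all distributions invented for illustration; not the contribution's simulation): the cluster count is Poisson-distributed, while the total energy loss inherits the heavy-tailed fluctuations of the per-cluster ionization.

```python
import random
import statistics

def poisson(rng, lam):
    """Poisson sample via exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > 1.0:
            return n
        n += 1

def cluster_size(rng):
    """Invented heavy-tailed cluster size, mimicking secondary ionization."""
    s = 1
    while rng.random() < 0.3 and s < 64:
        s *= 2
    return s

rng = random.Random(42)
mean_clusters = 100  # assumed primary clusters per track
dE, dN = [], []
for _ in range(5000):
    n = poisson(rng, mean_clusters)
    dN.append(n)                                       # dN/dx-like observable
    dE.append(sum(cluster_size(rng) for _ in range(n)))  # dE/dx-like observable

rel_dN = statistics.pstdev(dN) / statistics.mean(dN)  # ~ 1/sqrt(100) = 0.10
rel_dE = statistics.pstdev(dE) / statistics.mean(dE)  # inflated by size tails
```

In this toy the relative resolution of the cluster count is close to the pure Poisson limit, while the energy-loss observable is noticeably worse, which is the qualitative motivation for dN/dx.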

    Go to contribution page
  180. Saransh Chopra (Cluster Innovation Centre, University of Delhi)
    27/10/2022, 16:10
    Poster

    Due to the massive nature of HEP data, performance has always been a factor in its analysis and processing. Languages like C++ are fast enough but are often challenging for beginners to grasp, and can be difficult to iterate on quickly in an interactive environment. On the other hand, the ease of writing code and an extensive library ecosystem make Python an enticing choice for data analysis....
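The performance trade-off above is usually resolved by vectorizing with libraries from that ecosystem, e.g. NumPy, so the inner loop runs in compiled code. A minimal sketch with invented kinematics (not this contribution's code):

```python
import numpy as np

rng = np.random.default_rng(0)
px = rng.normal(0.0, 10.0, 100_000)  # hypothetical momentum components (GeV)
py = rng.normal(0.0, 10.0, 100_000)

def pt_loop(px, py):
    """C++-style explicit loop: correct, but slow when interpreted
    element by element in pure Python."""
    out = [0.0] * len(px)
    for i in range(len(px)):
        out[i] = (px[i] ** 2 + py[i] ** 2) ** 0.5
    return out

# Idiomatic NumPy: one vectorized call, the loop runs in compiled code.
pt_vec = np.hypot(px, py)
```

Both give the same transverse momenta; the vectorized form is both faster and closer to the mathematical notation, which is the combination that makes Python attractive for HEP analysis.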

    Go to contribution page
  181. Felix Wagner (HEPHY Vienna)
    27/10/2022, 16:10
    Poster

    Cryogenic phonon detectors are used by direct detection dark matter experiments to achieve sensitivity to light dark matter particle interactions. Such detectors consist of a target crystal equipped with a superconducting thermometer. The temperature of the thermometer and the bias current in its readout circuit need careful optimization to achieve optimal sensitivity of the detector. This...

    Go to contribution page
  182. Ali Marafi (Kuwait University (KW)), Andrea Bocci (CERN)
    27/10/2022, 16:10
    Poster

    In the past years the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs. This can achieve a higher computational efficiency, but it adds extra complexity to the design of dedicated data centres and the use of opportunistic resources, like HPC centres. A possible solution to increase the flexibility of heterogeneous clusters is to...

    Go to contribution page
  183. Andrea Di Luca (Universita degli Studi di Trento and INFN (IT))
    27/10/2022, 16:10
    Poster

    HEPD-02 is a new, upgraded version of the High Energy Particle Detector, part of a suite of instruments for the second mission of the China Seismo-Electromagnetic Satellite (CSES-02) to be launched in 2023. Designed and realized by the Italian Collaboration LIMADOU of the CSES program, it is optimized to identify fluxes of charged particles (mostly electrons and protons) and determine their...

    Go to contribution page
  184. Michael Poat
    27/10/2022, 16:10
    Poster

    In real-time computing facilities, system, network, and security monitoring are core components of running efficiently and effectively. As there are many diverse functions that can go awry, such as load, network, process, and power issues, having a well-functioning monitoring system is imperative. In many facilities you will see the standard set of tools such as Ganglia, Grafana, Nagios, etc....

    Go to contribution page
  185. Alexander Bogatskiy (Flatiron Institute, Simons Foundation)
    27/10/2022, 16:10
    Poster

    We hold these truths to be self-evident: that all physics problems are created unequal, that they are endowed with their unique data structures and symmetries, that among these are tensor transformation laws, Lorentz symmetry, and permutation equivariance. A lot of attention has been paid to the applications of common machine learning methods in physics experiments and theory. However, much...

    Go to contribution page
  186. Gabor Biro (Wigner Research Centre for Physics (Wigner RCP) (HU))
    27/10/2022, 16:10
    Poster

    The ever-growing computing power needed for the storage and analysis of data from the high-energy physics experiments at CERN requires performance optimization of the existing and planned IT resources.

    One of the main computing capacity consumers in the HEP software workflow is the data analysis. To optimize the resource usage, the concept of Analysis Facility (AF) for Run 3 has...

    Go to contribution page
  187. Mohamed Hemdan
    27/10/2022, 16:10
    Poster

    Particle physics experiments spend large amounts of computational effort on Monte Carlo simulations. Due to the computational expense of simulations, they are often executed and stored in large distributed computing clusters. To lessen the computational cost, physicists have introduced alternatives to speed up the simulation. Generative Adversarial Networks (GANs) are an excellent...

    Go to contribution page
  188. Josh Bendavid (CERN), Kenneth Long (Massachusetts Inst. of Technology (US))
    27/10/2022, 16:10
    Poster

    The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". Precision measurements with present LHC data already face many of these challenges today. We will discuss performance scaling and optimization of RDataFrame for complex physics analyses,...

    Go to contribution page
  189. Sophie Berkman
    27/10/2022, 16:10
    Poster

    Neutrino experiments that use liquid argon time projection chamber (LArTPC) detectors are growing bigger and expect to see more neutrinos with next generation beams, and therefore will require more computing resources to reach their physics goals of measuring CP violation in the neutrino sector and exploring anomalies. These resources can be used to their full capacity by incorporating...

    Go to contribution page
  190. Gianluigi Chiarello
    27/10/2022, 16:10
    Poster

    Ultra-low mass and high granularity Drift Chambers fulfill the requirements for tracking systems of modern High Energy Physics experiments at the future high luminosity facilities (FCC-ee or CEPC).
    We present how, in Helium-based gas mixtures, by measuring the arrival times of each individual ionization cluster and by using proper statistical tools, it is possible to perform a bias...

    Go to contribution page
  191. Marcel Rieger (Hamburg University (DE))
    27/10/2022, 16:10
    Poster

    In particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual, complex workflows manually, frequently involving job submission in several stages and interaction with distributed storage systems by hand. This process is not...

    Go to contribution page
  192. Zef Wolffs (Nikhef National institute for subatomic physics (NL))
    27/10/2022, 16:10
    Poster

    RooFit is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analyses become more computationally demanding. Therefore, recent RooFit developments were focused on performance optimization, in particular to...

    Go to contribution page
  193. Hosein Karimi Khozani (IHEP)
    27/10/2022, 16:10
    Poster

    There are established classical methods to reconstruct particle tracks from hits recorded on particle detectors. Current algorithms do this either by cuts on certain features, such as the recorded time of the hits, or by a fitting process. This is potentially error-prone and resource-consuming. For high-noise events, these issues are more critical and such methods might even fail. We have been...

    Go to contribution page
  194. Mr Jan Stephan
    27/10/2022, 16:10
    Poster

    The alpaka library is a header-only C++17 abstraction library for development across hardware accelerators (CPUs, GPUs, FPGAs). Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism. In this talk we will show the concepts behind alpaka, how it is mapped to the various underlying hardware models, and show...

    Go to contribution page
  195. Umit Sozbilir (Universita e INFN, Bari (IT))
    27/10/2022, 16:10
    Poster

    In recent years, new technologies and new approaches have been developed in academia and industry to face the necessity to both handle and easily visualize huge amounts of data, the so-called “big data”. The increasing volume and complexity of HEP data challenge the HEP community to develop simpler and yet powerful interfaces based on parallel computing on heterogeneous platforms. Good...

    Go to contribution page
  196. Dmitry Popov (University of Chinese Academy of Sciences (CN))
    27/10/2022, 16:10
    Poster

    Monte Carlo simulation is a vital tool for all physics programmes of particle physics experiments. Their accuracy and reliability in reproducing detector response is of the utmost importance. For the LHCb experiment, which is embarking on a new data-take era with an upgraded detector, a full suite of verifications has been put in place for its simulation software to ensure the quality of the...

    Go to contribution page
  197. Vasilis Belis (ETH Zurich (CH))
    27/10/2022, 16:10
    Poster

    We developed supervised and unsupervised quantum machine learning models for anomaly detection tasks at the Large Hadron Collider at CERN. Current Noisy Intermediate Scale Quantum (NISQ) devices have a limited number of qubits and qubit coherence. We designed dimensionality reduction models based on Autoencoders to accommodate the constraints dictated by the quantum hardware. Different designs...

    Go to contribution page
  198. Giovanna Lazzari Miotto (Universidade Federal do Rio Grande do Sul (BR))
    27/10/2022, 16:10
    Poster

    Compared to LHC Run 1 and Run 2, future HEP experiments, e.g. at the HL-LHC, will increase the volume of generated data by an order of magnitude. In order to sustain the expected analysis throughput, ROOT's RNTuple I/O subsystem has been engineered to overcome the bottlenecks of the TTree I/O subsystem, focusing also on a compact data format, asynchronous and parallel requests, and a layered...

    Go to contribution page
  199. Lorenzo Moneta (CERN)
    27/10/2022, 16:10
    Poster

    Through its TMVA package, ROOT provides and connects to machine learning tools for data analysis at HEP experiments and beyond. In addition, ROOT provides through its powerful I/O system and RDataFrame analysis tools the capability to efficiently select and query input data from large data sets as typically used in HEP analysis. At the same time, several existing Machine Learning tools exist...

    Go to contribution page
  200. Irina Espejo Morales (New York University (US))
    27/10/2022, 16:10
    Poster

    MadMiner is a python module that implements a powerful family of multivariate inference techniques that leverage both matrix element information and machine learning.

    This multivariate approach neither requires the reduction of high-dimensional data to summary statistics nor any simplifications to the under-lying physics or detector response.

    In this paper, we address some of the...

    Go to contribution page
  201. Nathalie Soybelman (Weizmann Institute of Science (IL)), Mr Nilotpal Kakati (Weizmann Institute of Science)
    27/10/2022, 16:10
    Poster

    The feature complexity of data recorded by particle detectors combined with the availability of large simulated datasets presents a unique environment for applying state-of-the-art machine learning (ML) architectures to physics problems. We present the Simplified Cylindrical Detector (SCD): a fully configurable GEANT4 calorimeter simulation which mimics the granularity and response...

    Go to contribution page
  202. Felice Pantaleo (CERN)
    27/10/2022, 16:10
    Poster

    To sustain the harsher conditions of the high-luminosity LHC, the CMS Collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly granular information in the readout to help mitigate the effects of the pile up. In regions characterized by lower radiation levels, small...

    Go to contribution page
  203. Elham E Khoda (University of Washington (US))
    27/10/2022, 16:10
    Poster

    Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent...

    Go to contribution page
  204. Raquel Pezoa Rivera (Universidad de Valparaíso)
    27/10/2022, 16:10
    Poster

    The classification of High Energy Physics (HEP) events, i.e. separating signal events from background, is one of the most important analysis tasks in HEP, and a foundational task in the search for new phenomena. Complex deep learning-based models have been fundamental for achieving accurate and outstanding performance in this classification task. However, the quantification of the...

    Go to contribution page