29 November 2021 to 3 December 2021
Virtual and IBS Science Culture Center, Daejeon, South Korea
Asia/Seoul timezone

Contribution List

227 contributions
  1. David Britton (University of Glasgow (GB))
    29/11/2021, 15:00
  2. Doris Yangsoo Kim (Soongsil University), Soonwook Hwang (KISTI Korea Institute of Science & Technology Information (KR))
    29/11/2021, 15:05
  3. Prof. Do Young Noh (IBS)
    29/11/2021, 15:15
  4. Julia Fitzner
    29/11/2021, 15:20

    The World Health Organization has been and is monitoring the development of the pandemic through the regular collection of disease and laboratory data from all member states. Data is collected on the number of cases and deaths, the age distribution, and infections in health-care workers, but also on which public health measures are taken and on where and how many people are vaccinated. This data allows...

  5. Anja Butter (Universität Heidelberg, ITP)
    29/11/2021, 15:50

    Over the next years, measurements at the LHC and the HL-LHC will provide us with a wealth of data. The best hope of answering fundamental questions, like the nature of dark matter, is to adopt big data techniques in simulations and analyses to extract all relevant information.

    On the theory side, LHC physics crucially relies on our ability to simulate events efficiently from first...

  6. Kevin Buzzard (Imperial College London)
    29/11/2021, 16:20
  7. Simon Platzer (University of Vienna (AT))
    29/11/2021, 17:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Amplitude level evolution has become a new theoretical paradigm to analyze parton shower algorithms which are at the heart of multi-purpose event generator simulations used for particle collider experiments. It can also be implemented as a numerical algorithm in its own right to perform resummation of non-global observables beyond the leading colour approximation, leading to a new kind of...

  8. Lu Wang (Computing Center,Institute of High Energy Physics, CAS)
    29/11/2021, 17:20
    Track 1: Computing Technology for Physics Research
    Oral

    Problematic I/O patterns are the major cause of low-efficiency HEP jobs. When a computing cluster is partially occupied by jobs with problematic I/O patterns, the overall CPU efficiency drops dramatically. In a cluster with thousands of users, locating the source of an anomalous workload is not an easy task. Automatic anomaly detection of I/O behavior can largely alleviate the...

  9. Katya Govorkova (CERN)
    29/11/2021, 17:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We show how to adapt and deploy anomaly detection algorithms based on deep autoencoders, for the unsupervised detection of new physics signatures in the extremely challenging environment of a real-time event selection system at the Large Hadron Collider (LHC). We demonstrate that new physics signatures can be enhanced by three orders of magnitude, while staying within the strict latency and...

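The reconstruction-error principle behind autoencoder-based anomaly detection can be sketched in a few lines. The toy below is an invented illustration, not the deployed model (which is a deep autoencoder engineered for strict FPGA latency budgets): a one-weight linear autoencoder is trained on "background" events along a diagonal, and events far from that structure get a large reconstruction error.

```python
import math, random

random.seed(0)

# Toy "background": points along the diagonal, with small noise.
background = [(t + random.gauss(0, 0.05), t + random.gauss(0, 0.05))
              for t in [random.uniform(-1, 1) for _ in range(200)]]

# One-weight linear autoencoder: encode z = w.x, decode xhat = w*z.
w = [1.0, 0.0]  # initial guess, renormalised to unit length below

def error(w, x):
    # Squared reconstruction error |x - w (w.x)|^2.
    z = w[0]*x[0] + w[1]*x[1]
    return (x[0] - w[0]*z)**2 + (x[1] - w[1]*z)**2

# Plain gradient descent on the mean reconstruction error,
# projecting w back to unit length after each step.
lr = 0.1
for _ in range(200):
    g = [0.0, 0.0]
    for x in background:
        z = w[0]*x[0] + w[1]*x[1]
        r = (x[0] - w[0]*z, x[1] - w[1]*z)
        # d(error)/d(w_i) = -2 (r_i z + (r.w) x_i)
        rw = r[0]*w[0] + r[1]*w[1]
        g[0] += -2*(r[0]*z + rw*x[0])
        g[1] += -2*(r[1]*z + rw*x[1])
    n = len(background)
    w = [w[0] - lr*g[0]/n, w[1] - lr*g[1]/n]
    norm = math.hypot(w[0], w[1])
    w = [w[0]/norm, w[1]/norm]

print(error(w, (1.0, 1.0)))   # small: consistent with background
print(error(w, (1.0, -1.0)))  # large: flagged as anomalous
```

The anomaly score is simply the reconstruction error; in the real trigger setting the same score is computed within a microsecond-scale latency budget.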
  10. Henry Truong
    29/11/2021, 17:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In this talk we present a neural-network-based model to emulate matrix elements. This model improves on existing methods by taking advantage of the known factorisation properties of matrix elements to separate out the divergent regions. In doing so the neural network learns about the factorisation property in singular limits, meaning we can control the behaviour of simulated matrix elements...

  11. Gaia Grosso (Universita e INFN, Padova (IT))
    29/11/2021, 17:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We present a machine-learning based strategy to detect data departures from a given reference model, with no prior bias on the nature of the new physics responsible for the discrepancy. The main idea behind this method is to build the likelihood-ratio hypothesis test by directly translating the problem of maximizing a likelihood-ratio into the minimization of a loss function. A neural network...

  12. David Rousseau (IJCLab-Orsay)
    29/11/2021, 17:40
    Track 1: Computing Technology for Physics Research
    Oral

    Future HEP experiments will have ever higher read-out rates. It is therefore essential to explore new hardware paradigms for large-scale computations. In this work we consider the Optical Processing Unit (OPU) from [LightOn][1], an optical device that computes, in a fast analog way, the multiplication of an input vector of size 1 million by a fixed 1 million x 1 million random matrix,...

  13. Bruno Alves (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part)
    29/11/2021, 18:00
    Track 1: Computing Technology for Physics Research
    Oral

    We present a decisive milestone in the challenging event reconstruction of the CMS High Granularity Calorimeter (HGCAL): the deployment to the official CMS software of the GPU version of the clustering algorithm (CLUE). The direct GPU linkage of CLUE to the preceding energy deposits calibration step is thus made possible, avoiding data transfers between host and device, further extending the...

  14. Mariia Demianenko (HSE University, Moscow Institute of Physics and Technology (National Research University))
    29/11/2021, 18:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Photometric data-driven classification of supernovae is one of the fundamental problems in astronomy. Recent studies have demonstrated the superior quality of solutions based on various machine learning models. These models learn to classify supernova types using their light curves as inputs. Preprocessing of these curves is a crucial step that significantly affects the final quality. In this...

  15. Marvin Gerlach (Karlsruhe Institute of Technology)
    29/11/2021, 18:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The demand for precision predictions in the field of high energy physics has increased tremendously over the recent years. Its importance is visible in the light of current experimental efforts to test the predictive power of the Standard Model of particle physics (SM) to a never before seen accuracy. Thus, advanced computer software is a key technology to enable phenomenological computations...

  16. Mr Stefano Vergani (University of Cambridge)
    29/11/2021, 18:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Over the last ten years, the popularity of Machine Learning (ML) has grown exponentially in all scientific fields, including particle physics. Industry has also developed new powerful tools that, imported into academia, could revolutionise research. One recent industry development that has not yet come to the attention of the particle physics community is Collaborative Learning (CL), a...

  17. Dr Giuseppe De Laurentis (Freiburg University)
    29/11/2021, 18:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Scattering amplitudes in perturbative quantum field theory exhibit a rich structure of zeros, poles and branch cuts which are best understood in complexified momentum space. It has been recently shown that leveraging this information can significantly simplify both analytical reconstruction and final expressions for the rational coefficients of transcendental functions appearing in...

  18. Dr Sofia Vallecorsa (CERN)
    29/11/2021, 18:20
    Track 1: Computing Technology for Physics Research
    Oral

    The Worldwide LHC Computing Grid (WLCG) is the infrastructure enabling the storage and processing of the large amount of data generated by the LHC experiments, and in particular the ALICE experiment among them. With the foreseen increase in the computing requirements of the future High-Luminosity LHC experiments, a data placement strategy which increases the efficiency of the WLCG computing...

  19. Stephen Nicholas Swatman (University of Amsterdam (NL))
    29/11/2021, 18:40
    Track 1: Computing Technology for Physics Research
    Oral

    Programmers using the C++ programming language are increasingly taught to manage memory implicitly through containers provided by the C++ standard library. However, many heterogeneous programming platforms require explicit allocation and deallocation of memory, which is often discouraged in “best practice” C++ programming, and this discrepancy in memory management strategies can be daunting...

  20. Dr Vicent Mateu Barreda (University of Salamanca)
    29/11/2021, 18:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In this talk I will present REvolver, a C++ library for renormalization group evolution and automatic flavor matching of the QCD coupling and quark masses, as well as precise conversion between various quark mass renormalization schemes. The library systematically accounts for the renormalization group evolution of low-scale short-distance masses which depend linearly on the renormalization...

  21. Kai Habermann (University of Bonn)
    29/11/2021, 18:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The Self-Organizing Map (SOM) is a widely used neural net for data analysis, dimension reduction and clustering. It has yet to find use in high energy particle physics. This paper discusses two applications of SOM in particle physics. First, we were able to obtain high separation of rare processes in regions of the dimensionally reduced representation. Second, we obtained Monte Carlo...

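A minimal SOM fits in a few dozen lines. The sketch below is a generic textbook implementation (not the paper's code): it trains a one-dimensional chain of nodes on two toy clusters and shows that the dimensionally reduced representation, the best-matching-unit index along the chain, separates them.

```python
import math, random

random.seed(1)

# Toy data: two well-separated 2D clusters.
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(100)] +
        [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(100)])

# A 1D SOM: a chain of nodes, each carrying a 2D weight vector.
n_nodes = 10
nodes = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(n_nodes)]

def bmu(x):
    # Best-matching unit: index of the node closest to the sample.
    return min(range(n_nodes),
               key=lambda i: (nodes[i][0]-x[0])**2 + (nodes[i][1]-x[1])**2)

n_steps = 2000
for step in range(n_steps):
    x = random.choice(data)
    b = bmu(x)
    # Learning rate and neighbourhood radius both decay over time.
    lr = 0.5 * (1.0 - step / n_steps)
    sigma = 1.0 + 3.0 * (1.0 - step / n_steps)
    for i in range(n_nodes):
        h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))
        nodes[i][0] += lr * h * (x[0] - nodes[i][0])
        nodes[i][1] += lr * h * (x[1] - nodes[i][1])

# Points from the two clusters map to different parts of the chain.
print(bmu((0, 0)), bmu((5, 5)))
```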
  22. Ingo Müller (ETH Zurich)
    29/11/2021, 19:00
    Track 1: Computing Technology for Physics Research
    Oral

    In the domain of high-energy physics (HEP), query languages in general and SQL in particular have found limited acceptance. This is surprising since HEP data analysis matches the SQL model well: the data is fully structured and queried using mostly standard operators. To gain insights into why this is the case, we perform a comprehensive analysis of six diverse, general-purpose data processing...

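The claim that HEP analysis matches the SQL model can be illustrated with a toy cut-and-count analysis. The table and column names below are invented for the example, and Python's built-in sqlite3 stands in for the six systems actually studied in the talk.

```python
import sqlite3

# A HEP-style selection in plain SQL: store muon candidates,
# apply kinematic cuts, and count the survivors per event.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE muons (event_id INTEGER, pt REAL, eta REAL)")
con.executemany("INSERT INTO muons VALUES (?, ?, ?)", [
    (1, 25.0,  0.5),
    (1, 12.0, -1.2),   # fails the pt cut
    (2, 40.0,  2.7),   # fails the |eta| cut
    (3, 31.0, -0.1),
])

# Select muons with pt > 20 and |eta| < 2.4, count per event.
rows = con.execute("""
    SELECT event_id, COUNT(*) AS n_good
    FROM muons
    WHERE pt > 20 AND ABS(eta) < 2.4
    GROUP BY event_id
    ORDER BY event_id
""").fetchall()
print(rows)  # [(1, 1), (3, 1)]
```

The cut is a WHERE clause and the per-event count a GROUP BY, i.e. entirely standard operators, which is exactly the mismatch-free mapping the abstract points out.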
  23. Vitalii Maheria
    29/11/2021, 19:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    pySecDec is a tool for Monte Carlo integration of multiloop Feynman integrals (or parametric integrals in general), using the sector decomposition strategy. Its latest release contains two major features: the ability to expand integrals in kinematic limits using the expansion by regions approach, and the ability to optimize the integration of weighted sums of integrals maximizing the obtained...

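The core trick of sector decomposition, making an endpoint singularity explicit so the pole can be subtracted and the remainder integrated numerically, can be shown in a one-dimensional caricature. This illustrates only the principle, not pySecDec usage.

```python
import math

# Toy version of the subtraction idea behind sector decomposition:
#   I(eps) = integral_0^1 x^(eps-1) f(x) dx
# has a 1/eps endpoint pole. Subtracting f(0) makes it explicit,
#   I(eps) = f(0)/eps + integral_0^1 x^(eps-1) (f(x) - f(0)) dx,
# and the remaining integral is finite even at eps = 0, so it can be
# evaluated numerically.

f = math.exp  # any smooth test function

def finite_part(n=100000):
    # Midpoint rule for the subtracted integrand at eps = 0.
    h = 1.0 / n
    return sum((f((i + 0.5) * h) - f(0.0)) / ((i + 0.5) * h)
               for i in range(n)) * h

print(finite_part())  # ~ 1.31790, the finite part of integral (e^x - 1)/x
```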
  24. Kinga Anna Wozniak (University of Vienna (AT))
    29/11/2021, 19:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We investigate supervised and unsupervised quantum machine learning algorithms in the context of typical data analyses at the LHC. To deal with constraints on the problem size, dictated by limitations on the quantum hardware, we concatenate the quantum algorithms to the encoder of a classical autoencoder, used for dimensionality reduction. We show results for a quantum classifier and a quantum...

  25. Dr Anthony Hartin (UCL)
    29/11/2021, 19:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Non-perturbative QED is used to predict beam backgrounds at the interaction point of colliders, in calculations of Schwinger pair creation, and in precision QED tests with ultra-intense lasers. In order to predict these phenomena, custom-built Monte Carlo event generators based on a suitable non-perturbative theory have to be developed. One such suitable theory uses the Furry Interaction...

  26. Roman N. Lee
    30/11/2021, 15:00

    Multiloop calculations are vital for obtaining high-precision predictions in the Standard Model. In particular, such predictions are important for the possibility of discovering New Physics, which is expected to reveal itself in tiny deviations. The methods of multiloop calculations have been evolving rapidly for a few decades already. New algorithms as well as their specific software implementations appear...

  27. Alberto Broggi
    30/11/2021, 15:30

    Autonomous driving is an extremely hot topic, and the whole automotive industry is now working hard to transition from research to products. Deep learning and the progress of silicon technology are the main enabling factors that boosted the industry interest and are currently pushing the automotive sector towards futuristic self-driving cars. Computer vision is one of the most important...

  28. Josh Bendavid (CERN)
    30/11/2021, 16:00

    The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". I will discuss the evolution and current state of analysis data formats, software, infrastructure and workflows at the LHC, and the directions being taken towards fast, efficient, and...

  29. Christina Agapopoulou (Centre National de la Recherche Scientifique (FR))
    30/11/2021, 17:00
    Track 1: Computing Technology for Physics Research
    Oral

    From 2022 onward, the upgraded LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), a subset of the full offline track reconstruction for charged particles is run to select particles of interest based on single or...

  30. Lukas Alexander Heinrich (CERN), Michael Aaron Kagan (SLAC National Accelerator Laboratory (US))
    30/11/2021, 17:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We introduce the differentiable simulator MadJax, an implementation of the general purpose matrix element generator Madgraph integrated within the Jax differentiable programming framework in Python. Integration is performed during automated matrix element code generation and subsequently enables automatic differentiation through leading order matrix element calculations. MadJax thus...

  31. Dalila Salamani (CERN)
    30/11/2021, 17:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    High energy physics experiments rely on Monte Carlo simulation to accurately model their detector response. Dominated most of the time by shower simulation in the calorimeter, detector response modelling is time consuming and CPU intensive, especially with the upcoming High-Luminosity LHC upgrade. Several research directions investigated the use of Machine Learning based models to...

  32. Z.D. Kassabov-Zaharieva
    30/11/2021, 17:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We present the software framework underlying the NNPDF4.0 global determination of parton distribution functions (PDFs). The code is released under an open source licence and is accompanied by extensive documentation and examples. The code base is composed of a PDF fitting package, tools to handle experimental data and to efficiently compare it to theoretical predictions, and a versatile...

  33. Andrea Bocci (CERN), CMS Collaboration
    30/11/2021, 17:20
    Track 1: Computing Technology for Physics Research
    Oral

    At the start of the upcoming LHC Run-3, CMS will deploy a heterogeneous High Level Trigger farm composed of x86 CPUs and NVIDIA GPUs. In order to guarantee that the HLT can run on machines without any GPU accelerators - for example as part of the large scale Monte Carlo production running on the grid, or when individual developers need to optimise specific triggers - the HLT reconstruction has...

  34. Sergei Mokhnenko (National Research University Higher School of Economics (RU))
    30/11/2021, 17:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The increasing luminosities of future data taking at the Large Hadron Collider and next-generation collider experiments require an unprecedented amount of simulated events to be produced. Such large scale productions demand a significant amount of valuable computing resources. This creates a demand for new approaches to event generation and simulation of detector responses. In this talk, we...

  35. Oriel Orphee Moira Kiss (Universite de Geneve (CH))
    30/11/2021, 17:40

    Generative models (GM) are powerful tools to help validate theories by reducing the computation time of Monte Carlo (MC) simulations. GMs can learn expensive MC calculations and generalize to similar situations. In this work, we propose comparing a classical generative adversarial network (GAN) approach with a Born machine, both in its discrete (QCBM) and continuous (CVBM) forms, while...

  36. Nuno Dos Santos Fernandes (LIP Laboratorio de Instrumentacao e Fisica Experimental de Particulas (PT))
    30/11/2021, 17:40
    Track 1: Computing Technology for Physics Research
    Oral

    After the Phase II Upgrade of the LHC, expected for the period between 2025-26, the average number of collisions per bunch crossing at the LHC will increase from the Run-2 average value of 36 to a maximum of 200 pile-up proton-proton interactions per bunch crossing. The ATLAS detector will also undergo a major upgrade programme to be able to operate in such harsh conditions with the...

  37. Stefano Carrazza (CERN)
    30/11/2021, 17:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We present Qibo, a new open-source framework for fast evaluation of quantum circuits and adiabatic evolution which takes full advantage of hardware accelerators, quantum hardware calibration and control, and a large codebase of algorithms for applications in HEP and beyond. The growing interest in quantum computing and the recent development of quantum hardware devices motivate the development...

  38. Joshua Falco Beirer (CERN, Georg-August-Universitaet Goettingen (DE))
    30/11/2021, 18:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    AtlFast3 is the next generation of high precision fast simulation in ATLAS. It is being deployed by the collaboration and will replace AtlFastII, the fast simulation tool used successfully until now. AtlFast3 combines two Fast Calorimeter Simulation tools: a parameterization-based approach and a machine-learning-based tool exploiting Generative Adversarial Networks (GANs). AtlFast3...

  39. Antonio Pineda
    30/11/2021, 18:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We compute the coefficients of the perturbative expansions of the plaquette, and of the self-energy of static sources in the triplet and octet representations, up to very high orders in perturbation theory. We use numerical stochastic perturbation theory and lattice regularization. We explore whether the results obtained comply with expectations from renormalon dominance, and what they may say...

  40. Kai Lukas Unger (Karlsruhe Institute of Technology (KIT))
    30/11/2021, 18:00
    Track 1: Computing Technology for Physics Research
    Oral

    The z-vertex track trigger estimates the collision origin in the Belle II experiment using neural networks to reduce the background. The main part is a pre-trained multilayer perceptron. The task of this perceptron is to estimate the z-vertex of the collision to suppress background from outside the interaction point. For this, a low latency real-time FPGA implementation is needed. We present...

  41. Paul de Bryas (EPFL - Ecole Polytechnique Federale Lausanne (CH))
    30/11/2021, 18:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    SND@LHC is a newly approved detector under construction at the LHC, aimed at studying the interactions of neutrinos of all flavours produced by proton-proton collisions at the LHC. The energy range under study, from a few hundred MeV up to about 5 TeV, is currently unexplored. In particular, electron neutrino and tau neutrino cross sections are unknown in that energy range, whereas muon neutrino...

  42. Timo Janßen (Georg-August-Universität Göttingen)
    30/11/2021, 18:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Modern machine learning methods offer great potential for increasing the efficiency of Monte Carlo event generators. We present the latest developments in the context of the event generation framework SHERPA. These include phase space sampling using normalizing flows and a new unweighting procedure based on neural network surrogates for the full matrix elements. We discuss corresponding...

  43. Andrei Gheata (CERN)
    30/11/2021, 18:20
    Track 1: Computing Technology for Physics Research
    Oral

    Several online and offline applications in high-energy physics have benefitted from running on graphics processing units (GPUs), taking advantage of their processing model. To date, however, general HEP particle transport simulation is not one of them, due to difficulties in mapping the complexity of its components and workflow to the GPU’s massive parallelism features. Deep code stacks, with...

  44. Ioana Ifrim (Princeton University (US))
    30/11/2021, 18:40
    Track 1: Computing Technology for Physics Research
    Oral

    Automatic Differentiation (AD) is instrumental for science and industry. It is a tool to evaluate the derivative of a function specified through a computer program. Its application domains span from Machine Learning to Robotics to High Energy Physics. Computing gradients with the help of AD is guaranteed to be more precise than the numerical alternative and to have at most a constant...

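The precision guarantee of AD, exact derivatives rather than finite-difference approximations, is easy to demonstrate with forward-mode dual numbers. This is a minimal sketch of the technique in general, unrelated to any specific AD tool discussed in the talk: each value carries its derivative along, and the chain and product rules propagate it exactly.

```python
import math

class Dual:
    """A value together with its derivative (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    # Chain rule: d/dt sin(x(t)) = cos(x) * x'.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    return f(Dual(x, 1.0)).dot  # seed dx/dx = 1

# d/dx [x*x + 3*x] at x = 2 is 2x + 3 = 7, exact to machine precision.
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

Unlike a finite-difference stencil, no step-size tuning is involved, and the cost is a constant factor over the original program, which is the guarantee the abstract refers to.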
  45. Humberto Reyes-González (University of Genoa)
    30/11/2021, 18:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Normalizing Flows (NFs) are emerging as a powerful class of generative models, as they not only allow for efficient sampling, but also deliver density estimations by construction. They are of great potential use in High Energy Physics (HEP), where we unavoidably deal with complex high-dimensional data and probability distributions are everyday fare. However, in order to fully leverage the...

  46. Yee Chinn Yap (Deutsches Elektronen-Synchrotron (DE))
    30/11/2021, 18:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The LUXE experiment (LASER Und XFEL Experiment) is a new experiment in planning at DESY Hamburg that will study Quantum Electrodynamics (QED) at the strong-field frontier. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum. LUXE intends to measure the positron production rate in this unprecedented regime by...

  47. Marco Barbone (Imperial College London)
    30/11/2021, 19:00
    Track 1: Computing Technology for Physics Research
    Oral

    We present results from a stand-alone simulation of electron single coulomb scattering as implemented completely on an FPGA architecture and compared with an identical simulation on a standard CPU. FPGA architectures offer unprecedented speed-up capability for Monte Carlo simulations, however with the caveats of lengthy development cycles and resource limitation particularly in terms of...

  48. Kang-Hun Ahn
    01/12/2021, 15:00

    Human hearing has an amazing ability that even advanced technology cannot imitate. The energy of the loudest audible sound is about a trillion times that of the quietest. The frequency resolution is also excellent: the ear can distinguish a frequency difference of about 4 Hz. What is more surprising is that a sound can be heard even in the presence of noise louder than the sound of...

  49. Ruth Mueller
    01/12/2021, 15:30

    In this talk, I will discuss the impacts of what has been termed a growing culture of speed and hypercompetition in the academic sciences. Drawing on qualitative social sciences research in the life sciences, I will discuss how acceleration and hypercompetition impact epistemic diversity in science, i.e. the range of research topics researchers consider they can address, as well as human...

  50. Lenka Zdeborova
    01/12/2021, 16:30

    The affinity between statistical physics and machine learning has a long history, I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models, I will describe the related line of work on solvable models of simple feed-forward...

  51. Wenhao Huang (Shandong University)
    01/12/2021, 17:00
    Track 1: Computing Technology for Physics Research
    Oral

    The Super Tau Charm Facility (STCF) is a high-luminosity electron–positron collider proposed in China for the study of charm and tau physics. The Offline Software of the Super Tau Charm Facility (OSCAR) is designed and developed based on SNiPER, a lightweight common framework for HEP experiments. Several state-of-the-art software packages and tools from the HEP community are adopted, such as the Detector...

  52. Ryan Moodie (IPPP, Durham University)
    01/12/2021, 17:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Phenomenological studies of high-multiplicity scattering processes at collider experiments present a substantial theoretical challenge and are increasingly important ingredients in experimental measurements. We investigate the use of neural networks to approximate matrix elements for these processes, studying the case of loop-induced diphoton production through gluon fusion. We train neural...

  53. CMS Collaboration, Felice Pantaleo (CERN)
    01/12/2021, 17:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    To sustain the harsher conditions of the high-luminosity LHC, the CMS collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly-granular information in the readout to help mitigate the effects of pileup. In regions characterized by lower radiation levels, small...

  54. Eric Wulff (CERN)
    01/12/2021, 17:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    In the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE), researchers from science and industry develop novel, scalable Artificial Intelligence technologies towards Exascale. In this work, we leverage HPC resources to perform large scale hyperparameter optimization using distributed training on multiple compute nodes,...

  55. Yixiang Yang (Institute of High Energy Physics)
    01/12/2021, 17:20
    Track 1: Computing Technology for Physics Research
    Oral

    The JUNO experiment is being built mainly to determine the neutrino mass hierarchy by detecting neutrinos generated in the Yangjiang and Taishan nuclear plants in southern China. The detector will record 2 PB of raw data every year, but each day it can only collect about 60 neutrino events scattered among huge numbers of background events. Selection of extremely sparse neutrino events brings a big challenge...

  56. Jakub Marcin Krys (University of Turin)
    01/12/2021, 17:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In this talk, I present the computation of the two-loop helicity amplitudes for Higgs boson production in association with a bottom quark pair. I give an overview of the method and describe how computational bottlenecks can be overcome by using finite field reconstruction to obtain analytic expressions from numerical evaluations. I also show how the method of differential equations allows us...

  57. Matteo Concas (INFN Torino (IT))
    01/12/2021, 17:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    During the LHC Run 3 the ALICE online computing farm will process up to 50 times more Pb-Pb events per second than in Run 2. The implied computing resource scaling requires a shift in the approach that comprises the extensive usage of Graphics Processing Units (GPU) for the processing. We will give an overview of the state of the art for the data reconstruction on GPUs in ALICE, with...

  58. Dr Andreas Maier (DESY)
    01/12/2021, 17:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We propose a novel method for the elimination of negative Monte Carlo event weights. The method is process-agnostic, independent of any analysis, and preserves all physical observables. We demonstrate the overall performance and systematic improvement with increasing event sample size, based on predictions for the production of a W boson with two jets calculated at next-to-leading order...

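The excerpt does not spell out the method, but the flavour of a cell-based approach to negative-weight elimination can be sketched as follows. This is a schematic toy, not the authors' algorithm: events close in phase space share their summed weight, so local sign cancellations disappear while cell totals, and hence observables coarser than the cell size, are preserved.

```python
from collections import defaultdict

def resample_weights(events, cell_size=0.5):
    # events: list of (x, weight) with x a 1D phase-space coordinate.
    # Group events into cells, then give every event in a cell the
    # cell's average weight. Cell totals are exactly preserved.
    cells = defaultdict(list)
    for i, (x, _) in enumerate(events):
        cells[int(x // cell_size)].append(i)
    new_w = [0.0] * len(events)
    for idx in cells.values():
        avg = sum(events[i][1] for i in idx) / len(idx)
        for i in idx:
            new_w[i] = avg
    return new_w

events = [(0.1, 1.0), (0.2, -0.4), (0.3, 0.8),   # one cell, net weight +1.4
          (0.9, 0.5), (1.1, 0.7)]
w = resample_weights(events)
print(w)                                      # no negative weights remain
print(sum(w), sum(wt for _, wt in events))    # total weight is unchanged
```

The physical requirement is that cells are small compared to the resolution of any observable, so that redistributing weight inside a cell is invisible to the analysis; the sample-size scaling the talk demonstrates comes from being able to shrink the cells as statistics grow.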
  59. Riccardo Maria Bianchi (University of Pittsburgh (US))
    01/12/2021, 17:40
    Track 1: Computing Technology for Physics Research
    Oral

    The GeoModel toolkit is an open-source suite of standalone tools that empowers the user with lightweight tools to describe, visualize, test, and debug detector descriptions and geometries for HEP standalone studies and experiments. GeoModel has been designed with independence and responsiveness in mind and offers a development environment free of other large HEP tools and frameworks, and with...

  60. Jonas Rembser (CERN)
    01/12/2021, 18:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    RooFit is a toolkit for statistical modelling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analysis becomes more computationally demanding. Therefore, the focus of RooFit developments in recent years was performance optimization....

  61. Joana Niermann (Georg August Universitaet Goettingen (DE))
    01/12/2021, 18:00
    Track 1: Computing Technology for Physics Research
    Oral

    A detailed geometry description is essential to any high quality track reconstruction application. In current C++ based track reconstruction software libraries this is often achieved by an object oriented, polymorphic geometry description that implements different shapes and objects by extending a common base class. Such a design, however, has been shown to be problematic when attempting to...

    Go to contribution page
  62. Jannis Lang (Karlsruhe Institute of Technology (KIT))
    01/12/2021, 18:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We present results for Higgs boson pair production in gluon fusion, including both NLO (2-loop) QCD corrections with full top-quark mass dependence and anomalous couplings related to operators describing effects of physics beyond the Standard Model.
    The latter can be realized in non-linear (HEFT) or linear (SMEFT) Effective Field Theory frameworks.
    We show results for both and discuss...

    Go to contribution page
  63. Wolfgang Waltenberger (Austrian Academy of Sciences (AT))
    01/12/2021, 18:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In view of the null results (so far) in the numerous channel-by-channel searches for new particles at the LHC, it becomes increasingly relevant to change perspective and attempt a more global approach to finding out where BSM physics may hide. To this end, we developed a novel statistical learning algorithm that is capable of identifying potential dispersed signals in the slew of published LHC...

    Go to contribution page
  64. Florian Till Groetschla (KIT - Karlsruhe Institute of Technology (DE))
    01/12/2021, 18:20
    Track 1: Computing Technology for Physics Research
    Oral

    The performance of I/O intensive applications is largely determined by the organization of data and the associated insertion/extraction techniques. In this paper we present the design and implementation of an application targeted at managing data received into host DRAM (up to ~150 Gb/s payload throughput), buffering it for several seconds, matched to the DRAM size, before being...

    Go to contribution page
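
    The second-scale DRAM buffering described above can be sketched as a bounded ring buffer that evicts the oldest payloads when full. The capacity, payload sizes, and eviction policy here are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

class RingBuffer:
    """Bounded in-memory buffer: oldest payloads are dropped when full,
    mimicking a DRAM-sized window over the most recent data."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.chunks = deque()

    def push(self, payload: bytes):
        self.chunks.append(payload)
        self.used += len(payload)
        while self.used > self.capacity:              # evict oldest chunks
            self.used -= len(self.chunks.popleft())

    def pop(self):
        chunk = self.chunks.popleft()
        self.used -= len(chunk)
        return chunk

buf = RingBuffer(capacity_bytes=1024)
for _ in range(100):
    buf.push(bytes(64))   # 100 x 64 B pushed into a 1 KiB buffer
# only the most recent 16 chunks (1024 B) remain buffered
```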
  65. Philippe Debie (Wageningen University, Wageningen Economic Research)
    01/12/2021, 18:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The analysis of high-frequency financial trading data faces similar problems as High Energy Physics (HEP) analysis. The data is noisy, irregular in shape, and large in size. Recent research on the intra-day behaviour of financial markets shows a lack of tools specialized for finance data and describes this problem as a computational burden. In contrast to HEP data, finance data consists of...

    Go to contribution page
  66. Burak Sen (Middle East Technical University), Changgi Huh (Kyungpook National University (KR)), Gokhan Unel (University of California Irvine (US)), Gordon Watts (University of Washington (US)), Harry Prosper (Florida State University (US)), Mason Proffitt (University of Washington (US)), Sezen Sekmen (Kyungpook National University (KR))
    01/12/2021, 18:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We present two applications of declarative interfaces for HEP data analysis that allow users to avoid writing event loops, which simplifies code and enables performance improvements to be decoupled from analysis development. One example is FuncADL, an analysis description language inspired by functional programming and developed using Python as a host language. In addition to providing a declarative,...

    Go to contribution page
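
    The "no event loop" style can be sketched with a few functional combinators: the analyst declares the selection, and a backend is free to optimize how it runs. These combinators are invented for illustration — they are not FuncADL's actual API.

```python
# Toy events: per-event lists of jet transverse momenta in GeV.
events = [[52.1, 31.7], [18.3], [95.0, 44.2, 27.5], []]

# Minimal declarative combinators (illustration only).
def Select(f):
    return lambda evs: [f(e) for e in evs]

def Where(pred):
    return lambda evs: [e for e in evs if pred(e)]

def pipeline(*stages):
    """Compose stages; a real backend could fuse or parallelize them."""
    def run(evs):
        for stage in stages:
            evs = stage(evs)
        return evs
    return run

leading_pt = pipeline(
    Select(lambda jets: [pt for pt in jets if pt > 30.0]),  # jet selection
    Where(lambda jets: len(jets) >= 2),                     # event-level cut
    Select(max),                                            # leading-jet pT
)

result = leading_pt(events)  # the analyst never writes the event loop
```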
  67. Vladyslav Shtabovenko (KIT)
    01/12/2021, 18:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    FeynCalc is esteemed by many particle theorists as a very useful tool for tackling symbolic Feynman diagram calculations with a great amount of transparency and flexibility. While the program enjoys an excellent reputation when it comes to tree-level and 1-loop calculations, the usefulness of FeynCalc in multi-loop projects is often doubted by practitioners.

    In this talk I will...

    Go to contribution page
  68. Alexei Klimentov (Brookhaven National Laboratory (US))
    01/12/2021, 18:40
    Track 1: Computing Technology for Physics Research
    Oral

    The High Luminosity upgrade to the LHC, which aims for a ten-fold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2028/29, and will deliver an unprecedented volume of scientific data at the multi-exabyte scale. This amount of data has to be stored and the corresponding storage system must ensure fast and reliable data delivery...

    Go to contribution page
  69. Piotr Konopka (CERN)
    01/12/2021, 19:00
    Track 1: Computing Technology for Physics Research
    Oral

    The ALICE experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during the LHC Long Shutdown 2 in 2019-2021, which includes a new computing system called O2 (Online-Offline). The raw data input from the ALICE detectors will increase a hundredfold, up to 3.5 TB/s. By reconstructing the data online, it will be possible to compress the data stream down to 100 GB/s...

    Go to contribution page
  70. Joseph Lykken
    02/12/2021, 09:00

    The technology of quantum computers and related systems is advancing rapidly, and powerful programmable quantum processors are already being made available by various companies. Long before we reach the promised land of fully fault tolerant large scale quantum computers, it is possible that unambiguous “quantum advantage” will be demonstrated for certain kinds of problems, including problems...

    Go to contribution page
  71. Barry Sanders
    02/12/2021, 09:30
    Invited plenary

    I provide a perspective on the development of quantum computing for data science, including a dive into state-of-the-art for both hardware and algorithms and the potential for quantum machine learning.

    Go to contribution page
  72. Joshua Isaacson
    02/12/2021, 10:00

    With the High Luminosity LHC coming online in the near future, event generators will need to produce a correspondingly larger number of events. The current estimated cost to generate these events exceeds the computing budget of the LHC experiments. To address this issue, the event generators need to improve their speed. Many different approaches are being taken to achieve this goal. I...

    Go to contribution page
  73. Nick Smith (Fermi National Accelerator Lab. (US))
    02/12/2021, 11:00
    Track 1: Computing Technology for Physics Research
    Oral

    Query languages for High Energy Physics (HEP) are an ever present topic within the field. A query language that can efficiently represent the nested data structures that encode the statistical and physical meaning of HEP data will help analysts by ensuring their code is more clear and pertinent. As the result of a multi-year effort to develop an in-memory columnar representation of high energy...

    Go to contribution page
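
    The nested data structures such a columnar representation must encode can be sketched with the usual offsets-plus-content layout, where jagged per-event lists become one flat array plus an index. This is a generic illustration of the layout, not the package's actual internals.

```python
# Jagged event data: per-event particle pTs, stored row-wise.
rows = [[10.0, 20.0], [], [5.0, 7.0, 9.0]]

# Columnar form: one flat content array plus per-event offsets.
content, offsets = [], [0]
for row in rows:
    content.extend(row)
    offsets.append(len(content))

# Per-event multiplicities come from the offsets alone, no object traversal.
counts = [offsets[i + 1] - offsets[i] for i in range(len(rows))]

# Event i is recovered as a slice of the flat array.
event2 = content[offsets[2]:offsets[3]]
```

    Queries over such columns can often be expressed as whole-array operations, which is what makes the representation attractive for clear, efficient analysis code.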
  74. Dr Narayan Rana (INFN Milan)
    02/12/2021, 11:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We present the mixed QCD-EW two-loop virtual amplitudes for the neutral current Drell-Yan production. The evaluation of the two-loop amplitudes is one of the bottlenecks for the complete calculation of the NNLO mixed QCD-EW corrections. We present the computational details, especially the evaluation of all the relevant two-loop Feynman integrals using analytical and semi-analytical methods. We...

    Go to contribution page
  75. Dr Wenxing Fang
    02/12/2021, 11:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The Circular Electron Positron Collider (CEPC) [1] is one of the future experiments aiming to study the properties of the Higgs boson and to perform searches for new physics beyond the Standard Model. The drift chamber is a design option for the outer tracking detector. With the development of new electronics technology, employing the primary ionization counting method [2-3] to identify charged...

    Go to contribution page
  76. Jim Pivarski (Princeton University)
    02/12/2021, 11:20
    Track 1: Computing Technology for Physics Research
    Oral

    Awkward Array 0.x was written entirely in Python, and Awkward Array 1.x was a fresh rewrite with a C++ core and a Python interface. Ironically, the Awkward Array 2.x project is translating most of that core back into Python (leaving the interface untouched). This is because we discovered surprising and subtle issues in Python-C++ integration that can be avoided with a more minimal coupling: we...

    Go to contribution page
  77. Torri Jeske (Thomas Jefferson National Accelerator Facility)
    02/12/2021, 11:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The AI for Experimental Controls project at Jefferson Lab is developing an AI system to control and calibrate a large drift chamber system in near-real time. The AI system will monitor environmental variables and beam conditions to recommend new high voltage settings that maintain consistent dE/dx gain and optimal resolution throughout the experiment. At present, calibrations are performed...

    Go to contribution page
  78. Chaitanya Paranjape (IIT Dhanbad)
    02/12/2021, 11:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We present an application of major new features of the program pySecDec, which is a program to calculate parametric integrals, in particular multi-loop integrals, numerically.
    One important new feature is the ability to integrate weighted sums of integrals in a way which is optimised to reach a given accuracy goal on the sums rather than on the individual integrals, another one is the option...

    Go to contribution page
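
    One way to see why targeting the accuracy of the *sum* helps: for a Monte-Carlo estimate of a weighted sum with per-integral variances sigma_i^2 / n_i, a fixed total sample budget minimizes the sum's variance when n_i is proportional to |w_i| * sigma_i (a standard Lagrange-multiplier result). The sketch below illustrates that variance argument only — it is not pySecDec's algorithm.

```python
def allocate(weights, sigmas, total_samples):
    """Split a Monte-Carlo budget over integrals so the variance of the
    weighted sum is minimized: n_i proportional to |w_i| * sigma_i."""
    scores = [abs(w) * s for w, s in zip(weights, sigmas)]
    norm = sum(scores)
    return [total_samples * sc / norm for sc in scores]

def sum_variance(weights, sigmas, ns):
    """Variance of sum_i w_i * I_i when I_i is estimated with n_i samples."""
    return sum((w * s) ** 2 / n for w, s, n in zip(weights, sigmas, ns) if n > 0)

w = [1.0, 0.01]      # the second integral barely matters to the sum
s = [1.0, 1.0]
opt = allocate(w, s, 10000)
flat = [5000, 5000]  # naive equal split over the two integrals
# the optimized split yields a smaller variance on the sum
```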
  79. Wuming Luo (Institute of High Energy Physics, Chinese Academy of Science)
    02/12/2021, 11:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The Jiangmen Underground Neutrino Observatory (JUNO), located in southern China, will be the world’s largest liquid scintillator (LS) detector. Equipped with 20 kton of LS, 17623 20-inch PMTs and 25600 3-inch PMTs, JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. One of the challenges for JUNO is the high-precision...

    Go to contribution page
  80. Gene Van Buren (Brookhaven National Laboratory), Jerome LAURET (Brookhaven National Laboratory), Ivan Amos Cali (Massachusetts Inst. of Technology (US)), Dr Juan Gonzalez (Accelogic), Philippe Canal (Fermi National Accelerator Lab. (US)), Mr Rafael Nunez, Yueyang Ying (Massachusetts Inst. of Technology (US))
    02/12/2021, 11:40
    Track 1: Computing Technology for Physics Research
    Oral

    For the last 7 years, Accelogic has pioneered and perfected a radically new theory of numerical computing codenamed "Compressive Computing", which has an extremely profound impact on real-world computer science [1]. At the core of this new theory is the discovery of one of its fundamental theorems, which states that, under very general conditions, the vast majority (typically between 70% and 80%) of...

    Go to contribution page
  81. Alina Lazar (Youngstown State University)
    02/12/2021, 12:00
    Track 1: Computing Technology for Physics Research
    Oral

    Recently, graph neural networks (GNNs) have been successfully used for a variety of reconstruction problems in HEP. In this work, we develop and evaluate an end-to-end C++ implementation for inferencing a charged particle tracking pipeline based on GNNs. The pipeline steps include data encoding, graph building, edge filtering, GNN inference, and track labeling, and it runs on both GPUs and CPUs. The ONNX...

    Go to contribution page
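
    Two of the pipeline stages named above — graph building and edge filtering — can be sketched in a few lines: connect nearby hits, then keep edges the classifier scores highly. The hit positions, distance cut, and score function are invented stand-ins; in the pipeline itself the edge scores come from the trained GNN.

```python
import math

# Toy 2D hit positions from a tracking detector.
hits = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (0.1, 3.0)]

def build_graph(hits, max_dist=1.5):
    """Graph building: connect pairs of hits closer than max_dist."""
    edges = []
    for i in range(len(hits)):
        for j in range(i + 1, len(hits)):
            d = math.dist(hits[i], hits[j])
            if d < max_dist:
                edges.append((i, j, d))
    return edges

def filter_edges(edges, score, cut=0.3):
    """Edge filtering: keep edges whose classifier score passes the cut.
    Here `score` is a stand-in for the trained GNN edge classifier."""
    return [(i, j) for i, j, d in edges if score(d) > cut]

edges = build_graph(hits)
kept = filter_edges(edges, score=lambda d: 1.0 / (1.0 + d))
# only the two consecutive track-like hit pairs survive
```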
  82. Dr Teng LI (Shandong University, CN)
    02/12/2021, 12:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Particle identification is one of the most fundamental tools in various particle physics experiments. For the BESIII experiment at BEPCII, the realization of numerous physics goals relies heavily on advanced particle identification algorithms. In recent years, the emergence of quantum machine learning could potentially arm particle physics experiments with a powerful new toolbox. In this work,...

    Go to contribution page
  83. Sergey Volkov
    02/12/2021, 12:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    A high-precision calculation of lepton magnetic moments requires an evaluation of QED Feynman diagrams up to five independent loops.
    These calculations are still important:
    1) the 5-loop contributions with lepton loops to the electron g-2 are still not double-checked (and can potentially be sensitive in experiments);
    2) there is a discrepancy in different calculations of the 5-loop...

    Go to contribution page
  84. Mr Yahor Dydyshka (Joint Institute for Nuclear Research, Dubna)
    02/12/2021, 12:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    An algorithm for spinor amplitudes with massive particles is implemented in the SANC computer system framework.
    The procedure for simplifying expressions with spinor products is based on the little-group technique in six-dimensional space-time.
    Amplitudes for bremsstrahlung processes e+e- \to (e+e-/mu+mu-/HZ/Zgamma/gamma gamma) + gamma are obtained in gauge-covariant form...

    Go to contribution page
  85. Joosep Pata (National Institute of Chemical Physics and Biophysics (EE))
    02/12/2021, 12:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The particle-flow (PF) algorithm at CMS combines information across different detector subsystems to reconstruct a global particle-level picture of the event. At a fundamental level, tracks are extrapolated to the calorimeters and the muon system, and combined with energy deposits to reconstruct charged and neutral hadron candidates, as well as electron, photon and muon candidates.

    In...

    Go to contribution page
  86. Sophie Berkman (Fermi National Accelerator Laboratory)
    02/12/2021, 12:20
    Track 1: Computing Technology for Physics Research
    Oral

    Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. Liquid argon time projection chamber (LArTPC) neutrino experiments are expected to grow in the next decade to have 100 times more...

    Go to contribution page
  87. Dr Elise de Doncker (Western Michigan University)
    02/12/2021, 12:40
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In recent work we computed 4-loop integrals for self-energy diagrams with 11 massive internal lines. Presently we perform numerical integration and regularization for diagrams with 8 to 11 lines, while considering massive and massless cases. For dimensional regularization, a sequence of integrals is computed depending on a parameter ($\varepsilon$) that is incorporated via the space-time...

    Go to contribution page
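
    The idea of computing a sequence of integrals in the parameter epsilon can be illustrated with a toy: evaluate a function with a simple pole at two epsilon values and extract the pole and finite part by linear extrapolation. This is a sketch of the general fitting idea, not the authors' regularization procedure.

```python
def extract_laurent(f, eps_values):
    """Fit I(eps) ~ a/eps + b through two epsilon evaluations,
    recovering the pole coefficient a and the finite part b."""
    e1, e2 = eps_values
    f1, f2 = f(e1), f(e2)
    a = (f1 - f2) / (1.0 / e1 - 1.0 / e2)
    b = f1 - a / e1
    return a, b

# Toy "integral" with a known 1/eps pole: a = 2, finite part b = -3.
toy = lambda eps: 2.0 / eps - 3.0
a, b = extract_laurent(toy, (0.1, 0.05))
```

    Real applications fit more terms (and higher poles) from a longer epsilon sequence, but the extrapolation principle is the same.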
  88. Eric Anton Moreno (California Institute of Technology (US))
    02/12/2021, 12:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We present an application of anomaly detection techniques based on deep recurrent autoencoders to the problem of detecting gravitational wave signals in laser interferometers. Trained on noise data, this class of algorithms could detect signals using an unsupervised strategy, i.e., without targeting a specific kind of source. We develop a custom architecture to analyze the data from two...

    Go to contribution page
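
    The unsupervised strategy — train on noise only, then flag windows with large reconstruction error — can be sketched with a trivial stand-in for the autoencoder. Here the "reconstruction" is just the window mean, which reproduces noise-like inputs reasonably but fails on a coherent transient; the actual work uses deep recurrent autoencoders on interferometer data.

```python
import random
import statistics

random.seed(0)

def reconstruct(window):
    """Stand-in for a trained autoencoder (illustration only)."""
    mean = statistics.fmean(window)
    return [mean] * len(window)

def error(window):
    """Mean squared reconstruction error of one data window."""
    recon = reconstruct(window)
    return statistics.fmean((x - r) ** 2 for x, r in zip(window, recon))

# "Training"/calibration: set the anomaly threshold on noise-only data.
noise = [[random.gauss(0, 1) for _ in range(32)] for _ in range(200)]
threshold = max(error(w) for w in noise) * 1.1

# A window containing a loud coherent transient on top of the noise.
signal = [random.gauss(0, 1) + 10 * ((-1) ** i) for i in range(32)]
is_anomalous = error(signal) > threshold   # flagged without a signal model
```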
  89. Adrien MATTA (IN2P3/CNRS, LPC Caen)
    03/12/2021, 15:20

    Over the past decades, nuclear physics experiments have seen a drastic increase in complexity. With the arrival of second-generation radioactive ion beam facilities all over the world, the race to explore more and more exotic nuclei is raging. The low intensity of RI beams requires more complex setups, covering a larger solid angle and detecting a wider variety of charged and neutral particles....

    Go to contribution page
  90. Ms Brunella D'Anzi (Universita e INFN, Bari (IT))
    03/12/2021, 15:50
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Artificial Neural Networks in High Energy Physics: introduction and goals

    Nowadays, High Energy Physics (HEP) analyses generally take advantage of Machine Learning techniques to optimize the discrimination between signal and background, preserving as much signal as possible. Running a classical cut-based selection would imply a severe reduction of both signal and...

    Go to contribution page
  91. Anna Kawecka (Warsaw University of Technology (PL))
    03/12/2021, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    NA61/SHINE is a high-energy physics experiment operating at the SPS accelerator at CERN. The physics programme of the experiment was recently extended, requiring a major upgrade of the detector setup. The main goal of the upgrade is to increase the event flow rate from 80 Hz to 1 kHz by exchanging the read-out electronics of the NA61/SHINE main tracking detectors (Time Projection Chambers -...

    Go to contribution page
  92. Atul Prajapati (Gran Sasso Science Institute)
    03/12/2021, 16:10
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    CYGNO is developing a gaseous Time Projection Chamber (TPC) for directional dark matter searches, to be hosted at Laboratori Nazionali del Gran Sasso (LNGS), Italy. CYGNO uses a He:CF4 gas mixture at atmospheric pressure and relies on a stack of Gas Electron Multipliers (GEMs) for charge amplification. Light is produced by the electron avalanche thanks to the CF4 scintillation properties and is...

    Go to contribution page
  93. Michael Spannowsky (University of Durham (GB))
    03/12/2021, 16:50

    In the absence of new physics signals and in the presence of a plethora of new physics scenarios that could hide in the copiously produced LHC collision events, unbiased event reconstruction and classification methods have become a major research focus of the high-energy physics community. Unsupervised machine learning methods, often used as anomaly-detection methods, are trained on Standard...

    Go to contribution page
  94. Axel Naumann (CERN)
    03/12/2021, 17:20
  95. Doris Yangsoo Kim (Soongsil University), Soonwook Hwang (Korea Institute of Science & Technology Information (KR))
    03/12/2021, 17:45
  96. David Britton (University of Glasgow (GB))
    03/12/2021, 17:50
  97. Michael Spannowsky (University of Durham (GB))

  98. Daniel Thomas Murnane (Lawrence Berkeley National Lab. (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    There has been significant interest and development in the use of graph neural networks (GNNs) for jet tagging applications. These generally provide better accuracy than CNN-based and energy-flow algorithms by exploiting a range of GNN mechanisms, such as dynamic graph construction, equivariance, attention, and large parameterizations. In this work, we present the first apples-to-apples exploration...

    Go to contribution page
  99. Xiaocong Ai (DESY)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Computing centres, including those used to process High-Energy Physics data and simulations, are increasingly providing significant fractions of their computing resources using hardware architectures other than x86 CPUs, with GPUs being a commonly available alternative. GPUs can provide excellent computational performance at a good price point for tasks that can be suitably parallelized....

    Go to contribution page
  100. Jingshu Li (Sun Yat-Sen University (CN))
    Track 1: Computing Technology for Physics Research
    Poster

    It is usually difficult to describe the non-uniformity of the liquid in a detector because the geometry is constructed in a fixed way in detector simulations such as Geant4. We propose a method, based on the Geometry Description Markup Language and a tessellated detector description, to share detector geometry information between computational fluid dynamics simulation software and...

    Go to contribution page
  101. Saverio Mariani (Universita e INFN, Firenze (IT))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    An innovative approach to particle identification (PID) analyses employing machine learning techniques and its application to a physics case from the fixed-target programme at the LHCb experiment at CERN are presented. In general, a PID classifier is built by combining the response of specialized subdetectors, exploiting different techniques to guarantee redundancy and a wide kinematic...

    Go to contribution page
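
    Combining the responses of specialized subdetectors into one PID decision is commonly done as a sum of log-likelihood differences between particle hypotheses, assuming approximate independence of the subdetector responses. The numbers and subdetector list below are invented for illustration — this is the generic combination idea, not the ML classifier the contribution presents.

```python
import math

# Toy per-subdetector likelihoods for two hypotheses, for one track.
subdetector_likelihoods = [
    {"kaon": 0.60, "pion": 0.40},   # e.g. a ring-imaging response
    {"kaon": 0.70, "pion": 0.30},   # e.g. a dE/dx response
    {"kaon": 0.45, "pion": 0.55},   # a subdetector with little separation
]

def combined_dll(likelihoods, hyp, ref="pion"):
    """Combine subdetectors as a sum of log-likelihood differences,
    assuming (approximate) independence between their responses."""
    return sum(math.log(l[hyp]) - math.log(l[ref]) for l in likelihoods)

dll_k = combined_dll(subdetector_likelihoods, "kaon")
is_kaon = dll_k > 0.0   # positive DLL favours the kaon hypothesis
```

    ML-based PID classifiers replace this hand-built combination with a learned one, which can exploit correlations between subdetectors that the independence assumption ignores.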
  102. Kilian Lieret
    Track 1: Computing Technology for Physics Research
    Poster

    The physics output of modern experimental HEP collaborations hinges not only on the quality of their software but also on the ability of the collaborators to make the best possible use of it.

    With the COVID-19 pandemic making in-person training impossible, the training paradigm at Belle II was shifted towards one of guided self-study.

    To that end, the study material was rebuilt from...

    Go to contribution page
  103. Kihong Park (Korea Institute of Science and Technology Information (KISTI)), Kihyeon Cho
    Track 1: Computing Technology for Physics Research
    Poster

    Because the cross section of dark matter is very small compared to that of the Standard Model (SM), a huge amount of simulation is required [1]. Hence, optimizing Central Processing Unit (CPU) time is crucial to increase the efficiency of dark matter research in HEP. In this work, CPU time was studied using MadGraph5 as a simulation toolkit for dark matter studies at e+e- colliders. The...

    Go to contribution page
  104. Dr Shane Jackson (PNNL)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Surrogate modeling and data-model convergence are important in any field utilizing probabilistic modeling, including High Energy Physics and Nuclear Physics. However, demonstrating that the model produces samples from the same underlying distribution as the true source can be problematic if the data is many-dimensional. The 1-D and multi-dimensional Kolmogorov-Smirnov test (ddKS) is a...

    Go to contribution page
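
    The classical 1-D two-sample Kolmogorov-Smirnov statistic that ddKS generalizes — the maximum distance between two empirical CDFs — fits in a few lines of standard-library Python. The multi-dimensional ddKS test itself is more involved; this only shows the base quantity.

```python
def ks_statistic(a, b):
    """Two-sample 1-D Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    d = 0.0
    for x in points:
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])       # identical samples -> 0
shift = ks_statistic([1, 2, 3, 4], [11, 12, 13, 14])  # disjoint supports -> 1
```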
  105. Ke Li (University of Washington (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    ATLAS is one of the largest experiments at the Large Hadron Collider. Its broad physics program relies on very large samples of simulated events, but producing these samples is very CPU intensive when using the full Geant4 detector simulation. A parameterization-based fast calorimeter simulation, AtlFast3, has been developed to replace the Geant4 simulation and meet the computing challenges....

    Go to contribution page
  106. Mr Sergiu Weisz (University Politehnica of Bucharest (RO))
    Track 1: Computing Technology for Physics Research
    Poster

    The Large Hadron Collider’s third run poses new and interesting problems that all experiments have to tackle in order to fully exploit the benefits provided by the new architecture, such as the increase in the amount of data to be recorded.

    As part of the new developments that are taking place in the ALICE experiment, payloads that use more than a single processing...

    Go to contribution page
  107. Erwin Rudi (Rheinisch Westfaelische Tech. Hoch. (DE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Scale factors are commonly used in HEP to improve shape agreement between distributions of data and simulation. We present a generalized deep-learning based architecture for producing shape changing scale factors, investigated in the context of bottom-quark jet-tagging algorithms within the CMS experiment.

    The method utilizes an adversarial approach with three networks forming the central...

    Go to contribution page
  108. Bogdan Kutsenko (Budker Institute of Nuclear Physics (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The study of the conversion decay of the omega meson into the $\pi^{0}e^{+}e^{-}$ state was performed with the CMD-3 detector at the VEPP-2000 electron-positron collider in Novosibirsk. The main physical background to the process under study is the radiative decay $\omega \to \pi^{0} \gamma$, where the monochromatic photon converts in the material in front of the detector. A deep neural network was...

    Go to contribution page
  109. Aryan Roy
    Track 1: Computing Technology for Physics Research
    Poster

    Analysis on HEP data is an iterative process in which the results of one step often inform the next. In an exploratory analysis, it is common to perform one computation on a collection of events, then view the results (often with histograms) to decide what to try next. Awkward Array is a Scikit-HEP Python package that enables data analysis with array-at-a-time operations to implement cuts as...

    Go to contribution page
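
    Array-at-a-time cuts — the style Awkward Array enables — replace per-event branching with boolean mask arrays, so a selection is itself data that can be combined and applied in bulk. The stdlib sketch below illustrates the idea only; Awkward's real API operates on its own (jagged, columnar) array types.

```python
# Columns for a batch of events (toy data): one pT and eta per event.
pt  = [12.0, 45.0, 33.0, 8.0, 60.0]
eta = [0.5, 2.8, 1.1, 0.2, -1.9]

# A cut is itself an array: one boolean per event, no explicit branching.
mask = [p > 20.0 and abs(e) < 2.5 for p, e in zip(pt, eta)]

# Applying the mask selects events; composing cuts is just element-wise
# boolean logic on these arrays.
selected_pt = [p for p, m in zip(pt, mask) if m]
```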
  110. Sameshan Perumal (UCT)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Particle collider experiments generate huge volumes of complex data, and a mix of experience and tenacity is usually required to understand it at the detector and reconstruction level. Event displays provide a useful visual representation of both raw and reconstructed data that can be used to accelerate this learning process towards physics results. They are also used to verify expected...

    Go to contribution page
  111. Xiaomei Zhang (Chinese Academy of Sciences (CN)), Dr Yang Yifan (Institute of High Enery Physics)
    Track 1: Computing Technology for Physics Research
    Poster

    In the near future, many new high energy physics (HEP) experiments with challenging data volumes will come into operation or are planned at IHEP, China. A DIRAC-based distributed computing system has been set up to support these experiments. To make better use of the available distributed computing resources, it is important to provide experimental users with handy tools for the...

    Go to contribution page
  112. Oleg Kalashev (INR RAS)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The problem of identifying the sources of ultra-high-energy cosmic rays is greatly complicated by the fact that even the highest-energy cosmic rays may be deflected by tens of degrees in the galactic magnetic fields. We show that the arrival directions of deflected cosmic rays from several nearby active galaxies form specific patterns in the sky, which can be effectively recognized by the...

    Go to contribution page
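
    Pattern searches of this kind rest on angular distances between arrival directions and candidate sources. The great-circle separation below is the standard spherical-astronomy formula, not the authors' specific pattern-recognition method.

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle angle (degrees) between two sky directions
    given in right ascension / declination (degrees)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_d = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # clamp against floating-point overshoot before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

# A source and a cosmic ray deflected by some tens of degrees.
sep = angular_separation(10.0, 0.0, 40.0, 0.0)   # 30 degrees on the equator
```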
  113. Artem Uskov (Budker Institute of Nuclear Physics)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Analysis of the CMD-3 detector data: searching for low-energy electron-positron annihilation into $KK\pi$ and $KK\pi\pi^0$

    We explored the process $e^+e^- \to KK\pi$ with the CMD-3 detector at the electron-positron collider VEPP-2000. The data amassed by the CMD-3 detector in the...

    Go to contribution page
  114. Stanislav Polyakov (Lomonosov Moscow State University, Skobeltsyn Institute of NUclear Physics (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We use convolutional neural networks (CNNs) to analyze monoscopic and stereoscopic images of extensive air showers registered by Cherenkov telescopes of the TAIGA experiment. The networks are trained and evaluated on Monte-Carlo simulated images to identify the type of the primary particle and to estimate the energy of the gamma rays. We compare the performance of the networks trained on...

    Go to contribution page
  115. Katharina Hafner (RWTH Aachen University)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    When measuring cosmic-ray-induced air showers through radio waves, recovering the full three-dimensional electromagnetic field from the recorded two-dimensional voltage of an antenna is a major challenge. Antennas project the electromagnetic field into a lower-dimensional space while applying a frequency-dependent response, and are subject to noise contamination during measurement. We use...

    Go to contribution page
  116. Libin Xia (IHEP)
    Track 1: Computing Technology for Physics Research
    Poster

    High energy physics (HEP) is moving towards extremely high-statistics experiments and super-large-scale simulations of theories such as the Standard Model. To handle the challenge of rapidly increasing data volumes, distributed computing and storage frameworks from the Big Data area, such as Hadoop and Spark, make computations easy to scale out. While the in-memory RDD-based programming model assumes...

    Go to contribution page
  117. CMS Collaboration, Erica Brondolin (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    CLUE (CLUstering of Energy) is a fast parallel clustering algorithm for High Granularity Calorimeters in High Energy Physics. In these types of detectors, such as that to be built to cover the endcap region in the CMS Phase-2 Upgrade for HL-LHC, the standard clusterisation algorithms using combinatorics are expected to fail due to the large number of digitised energy deposits (hits) in the...

    Go to contribution page
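
    The density-based idea behind this family of algorithms can be sketched in a simplified form: compute local densities, let each hit follow its nearest denser neighbour, and let isolated density maxima seed new clusters. This is an illustration of the general approach, not the actual (parallel, calorimeter-specific) CLUE implementation.

```python
import math

def cluster(points, dc=1.0, delta_seed=2.0):
    """Simplified density-based clustering in the spirit of CLUE:
    1) local density = number of neighbours within dc,
    2) each hit follows its nearest denser neighbour,
    3) hits far (> delta_seed) from any denser hit seed new clusters."""
    n = len(points)
    rho = [sum(1 for q in points if math.dist(p, q) <= dc) for p in points]
    # visit hits from densest to least dense (index breaks density ties)
    order = sorted(range(n), key=lambda i: (-rho[i], i))
    pos = {i: k for k, i in enumerate(order)}
    labels, next_label = [-1] * n, 0
    for i in order:
        denser = [j for j in order if pos[j] < pos[i]]
        if denser:
            parent = min(denser, key=lambda j: math.dist(points[i], points[j]))
            if math.dist(points[i], points[parent]) <= delta_seed:
                labels[i] = labels[parent]   # follow the denser neighbour
                continue
        labels[i] = next_label               # isolated maximum: new cluster
        next_label += 1
    return labels

blobs = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5)]
labels = cluster(blobs)   # two well-separated blobs -> two clusters
```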
  118. CMS Collaboration, Lakshmi Pramod (Deutsches Elektronen-Synchrotron (DE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The inner tracking system of the CMS experiment, which comprises Silicon Pixel and Silicon Strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to reconstruct the primary and secondary vertices. The movements of the different substructures of the tracker detectors, driven by the operating conditions during data taking, make it necessary to regularly...

    Go to contribution page
  119. Sylvain Joube (IJCLab - Télécom SudParis)
    Track 1: Computing Technology for Physics Research
    Poster

    The increased use of accelerators for scientific computing, together with the increased variety of hardware involved, induces a need for performance portability between at least CPUs (which largely dominate WLCG infrastructure) and GPUs (which are quickly emerging as an architecture of choice for online data processing and HPC centers). In the C/C++ community, OpenCL was a first low-level...

    Go to contribution page
  120. Maxim Potekhin (Brookhaven National Laboratory (US))
    Track 1: Computing Technology for Physics Research
    Poster

    In the past decade, Data and Analysis Preservation (DAP) has gained increased prominence in the scope of effort of major High Energy and Nuclear Physics (HEP/NP) experiments, driven by the policies of the funding agencies as well as the realization of the benefits brought by DAP to the science output of many projects in the field. It is a complex domain which, in addition to archival of...

    Go to contribution page
  121. Aleksandr Alekseev (Universidad Andres Bello (CL))
    Track 1: Computing Technology for Physics Research
    Poster

    HENP experiments are preparing for HL-LHC era, which will bring an unprecedented volume of scientific data. This data will need to be stored and processed by collaborations, but expected resource growth is nowhere near extrapolated requirements of existing models both in storage volume and compute power. In this report, we will focus on building a prototype of a distributed data processing and...

    Go to contribution page
  122. Irakli Chakaberia (Lawrence Berkeley National Lab. (US))
    Track 1: Computing Technology for Physics Research
    Poster

    The Solenoidal Tracker at RHIC (STAR) is a multipurpose experiment at the Relativistic Heavy Ion Collider (RHIC) with the primary goal of studying the formation and properties of the quark-gluon plasma. STAR is an international collaboration of member institutions and laboratories from around the world. Each yearly data-taking period produces petabytes of raw data collected by the experiment. STAR primarily uses...

    Go to contribution page
  123. Michele Piero Blago (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The use of Ring Imaging Cherenkov detectors (RICH) offers a powerful technique for identifying the particle species in particle physics. These detectors produce 2D images formed by rings of individual photons superimposed on a background of photon rings from other particles.

    The RICH particle identification (PID) is essential to the LHCb experiment at CERN. While the current PID algorithm...

    Go to contribution page
  124. Davide Valsecchi (Università degli Studi e INFN Milano-Bicocca (IT))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL).

    These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker...

    Go to contribution page
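    The topological aggregation this abstract describes can be sketched with a toy seed-and-grow pass over a crystal grid. This is an illustrative sketch only: the `seed_thr`/`cell_thr` thresholds, the energy units, and the 8-neighbour topology are assumptions, not the actual CMS ECAL clustering algorithm.

```python
# Toy seed-and-grow topological clustering on a 2D crystal grid.
# NOT the real CMS ECAL algorithm: thresholds and topology are invented.

def neighbours(cell):
    """8-connected neighbourhood of a (ieta, iphi) crystal index."""
    i, j = cell
    return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

def cluster(energies, seed_thr=1.0, cell_thr=0.1):
    """energies: dict mapping (ieta, iphi) -> deposited energy (arbitrary units)."""
    # Seeds are local maxima above the seed threshold.
    seeds = [c for c, e in energies.items()
             if e > seed_thr and all(e >= energies.get(n, 0.0) for n in neighbours(c))]
    clusters = []
    for seed in sorted(seeds, key=lambda c: -energies[c]):
        members, frontier = {seed}, [seed]
        while frontier:  # grow outwards, aggregating neighbouring crystals
            cell = frontier.pop()
            for n in neighbours(cell):
                if n in energies and n not in members and energies[n] > cell_thr:
                    members.add(n)
                    frontier.append(n)
        clusters.append(sorted(members))
    return clusters

hits = {(0, 0): 5.0, (0, 1): 0.8, (1, 0): 0.5, (5, 5): 2.0, (5, 6): 0.3}
print(cluster(hits))  # two clusters, one around each local maximum
```

    In the real detector, effects such as upstream material make the shower topology less regular than this toy suggests, which is what motivates the ML treatment in the contribution.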
  125. Dr Tao Lin (Chinese Academy of Sciences (CN))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO) is designed to determine the neutrino mass ordering and precisely measure oscillation parameters. It is under construction at a depth of 700 m underground and comprises a central detector, a water Cherenkov detector and a top tracker. The central detector is designed to detect anti-neutrinos with an energy resolution of 3% at 1 MeV, using a 20...

    Go to contribution page
  126. CMS Collaboration, Kevin Pedro (Fermi National Accelerator Lab. (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The high accuracy of detector simulation is crucial for modern particle physics experiments. However, this accuracy comes with a high computational cost, which will be exacerbated by the large datasets and complex detector upgrades associated with next-generation facilities such as the High Luminosity LHC. We explore the viability of regression-based machine learning (ML) approaches using...

    Go to contribution page
  127. Andrii Verbytskyi (Max-Planck-Institut fur Physik (DE))
    Track 1: Computing Technology for Physics Research
    Poster

    The installation and maintenance of scientific software for research in
    experimental, phenomenological, and theoretical High Energy Physics (HEP)
    requires a considerable amount of time and expertise. While many tools are
    available to make the task of installation and maintenance much easier,
    many of these tools require maintenance of their own, have little
    documentation and very few...

    Go to contribution page
  128. He Li
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    A geometry management system (GMS) is designed for the Offline Software
    of Super Tau Charm Facility (STCF) in China. Based on the eXtensible Markup Language
    (XML) and the Detector Description Toolkit for High Energy Physics Experiments (DD4hep),
    the system provides a consistent detector-geometry description for different offline applications,
    such as simulation, reconstruction and...

    Go to contribution page
  129. Thomas Reis (Science and Technology Facilities Council STFC (GB))
    Track 1: Computing Technology for Physics Research
    Poster

    The higher LHC luminosity expected in Run 3 (2022+) and the consequently larger number of simultaneous proton-proton collisions (pileup) per event pose significant challenges for CMS event reconstruction. This is particularly important for event filtering at the CMS High Level Trigger (HLT), where complex reconstruction algorithms must be executed within a strict time budget.

    This problem...

    Go to contribution page
  130. Dr Igor Alexandrov (Joint Institute for Nuclear Research (RU))
    Track 1: Computing Technology for Physics Research
    Poster

    The collection, storage and processing of experimental data are an integral part of modern high-energy physics experiments. Various experiment databases, and the corresponding information systems related to their use and support, play an important role and in many ways bridge online and offline data processing. One of them, the Configuration Database, is an essential part of a complex of information...

    Go to contribution page
  131. Igor Pelevanyuk (Joint Institute for Nuclear Research (RU))
    Track 1: Computing Technology for Physics Research
    Poster

    The Joint Institute for Nuclear Research has several large computing facilities: Tier1 and Tier2 grid clusters, the Govorun supercomputer, a cloud, and the LHEP computing cluster. Each of them has different access protocols, authentication and authorization procedures, and data access methods. With the help of the DIRAC Interware, we were able to integrate all these resources to provide uniform access to all...

    Go to contribution page
  132. Federico Fornari
    Track 1: Computing Technology for Physics Research
    Poster

    Modern datacenters need distributed filesystems to provide user applications with access to data stored on a large number of nodes. The ability to mount a distributed filesystem and leverage its native application programming interfaces in a Docker container, combined with the advanced orchestration features provided by Kubernetes, can improve flexibility in installing, monitoring and...

    Go to contribution page
  133. Vincenzo Eduardo Padulano (Valencia Polytechnic University (ES))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The declarative approach to data analysis provides high-level abstractions for users to operate on their datasets in a much more ergonomic fashion compared to imperative interfaces. ROOT offers such a tool with RDataFrame, which creates a computation graph with the operations issued by the user and executes it lazily only when the final results are queried. It has always been oriented towards...

    Go to contribution page
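    The declare-now, run-on-demand pattern that RDataFrame implements can be illustrated with a minimal pure-Python toy. This sketch only mimics the lazy computation-graph idea with RDataFrame-style method names; it is not ROOT's actual API or implementation.

```python
# Toy lazy "data frame": operations are recorded, not executed, until a
# result handle is actually queried. Illustrative only, not ROOT.

class LazyFrame:
    def __init__(self, rows):
        self.rows = rows   # the dataset (a list of dicts in this toy)
        self.ops = []      # declared-but-not-yet-run operations

    def Filter(self, predicate):
        self.ops.append(predicate)  # just record the cut; nothing runs yet
        return self

    def Count(self):
        # Return a lazy handle; the event loop runs only when asked.
        return LazyResult(self)

class LazyResult:
    def __init__(self, frame):
        self.frame = frame

    def GetValue(self):
        # Trigger the single event loop: apply every declared cut per row.
        return sum(1 for row in self.frame.rows
                   if all(op(row) for op in self.frame.ops))

df = LazyFrame([{"pt": 10.0}, {"pt": 35.0}, {"pt": 50.0}])
count = df.Filter(lambda r: r["pt"] > 20).Count()  # nothing computed yet
print(count.GetValue())  # event loop runs here -> 2
```

    Deferring execution like this lets all queried results be filled in one pass over the data, which is the key efficiency property the abstract alludes to.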
  134. Patrick Reichherzer (Ruhr-University Bochum)
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Poster

    In astrophysics, the search for the sources of the highest-energy cosmic rays continues. Further progress requires not only ever better observatories but also ever more realistic numerical simulations. We present here a novel approach to charged-particle propagation that finds its application in simulations of particle propagation in jets of active galactic nuclei, possible sources of...

    Go to contribution page
  135. Dr Marco Letizia (MaLGa, University of Genoa and INFN - National Institute for Nuclear Physics)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Kernel methods represent an elegant and mathematically sound approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size. Recent improvements have shown the benefits of a number of algorithmic ideas, combining optimization, numerical linear algebra and random projections. These, combined with (multi-)GPU...

    Go to contribution page
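    The "random projections" ingredient mentioned here can be illustrated with random Fourier features, which approximate a kernel evaluation by an explicit finite-dimensional inner product. The RBF kernel, bandwidth, and dimensions below are arbitrary choices for illustration, not the specific large-scale solver of this contribution.

```python
import math
import random

# Random Fourier features: k(x, y) ~ phi(x) . phi(y) for an RBF kernel,
# turning an implicit kernel into an explicit, cheap feature map.

def rbf(x, y, sigma=1.0):
    """Exact RBF kernel exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def make_features(dim, n_features, sigma=1.0, seed=0):
    """Build a random feature map phi: R^dim -> R^n_features."""
    rng = random.Random(seed)
    w = [[rng.gauss(0, 1.0 / sigma) for _ in range(dim)] for _ in range(n_features)]
    b = [rng.uniform(0, 2 * math.pi) for _ in range(n_features)]
    scale = math.sqrt(2.0 / n_features)
    def phi(x):
        return [scale * math.cos(sum(wi * xi for wi, xi in zip(row, x)) + bi)
                for row, bi in zip(w, b)]
    return phi

phi = make_features(dim=2, n_features=4000)
x, y = [0.3, -0.5], [0.1, 0.2]
approx = sum(a * b for a, b in zip(phi(x), phi(y)))
print(abs(approx - rbf(x, y)))  # small: the inner product approximates the kernel
```

    The error shrinks roughly like 1/sqrt(n_features), so the implicit kernel can be traded for linear algebra on explicit features, which is what makes large-scale (and GPU-friendly) kernel learning tractable.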
  136. Alan Malta Rodrigues (University of Nebraska Lincoln (US)), Daniele Spiga (Universita e INFN, Perugia (IT)), Tommaso Boccali (INFN Sezione di Pisa)
    Track 1: Computing Technology for Physics Research
    Poster

    The CMS software stack (CMSSW) is built on a nightly basis for multiple hardware architectures and compilers, in order to benefit from these diverse platforms. In practice, however, only x86_64 is used in production, and it is supported by design by the workload management tools in charge of delivering production and analysis jobs to the distributed computing infrastructure.
    Profiting from an INFN...

    Go to contribution page
  137. Kristina Jaruskova (Czech Technical University in Prague)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The foreseen increase in demand for simulations of particle transport through detectors in High Energy Physics has motivated the search for faster alternatives to Monte Carlo based simulations. Deep learning approaches provide promising results in terms of speed-up and accuracy, among which generative adversarial networks (GANs) appear to be the most successful at reproducing realistic detector data....

    Go to contribution page
  138. Federico Fornari
    Track 1: Computing Technology for Physics Research
    Poster

    In the present work, the possibility to exploit EOS, an open-source storage software solution for multi-PB storage management at the CERN Large Hadron Collider, has been investigated in order to deploy a distributed filesystem over a storage backend provided by Ceph, an open-source software platform capable of exposing data through interfaces for object, block and POSIX-compliant storage.
    The work...

    Go to contribution page
  139. Muhammad Imran (National Centre for Physics (PK))
    Track 1: Computing Technology for Physics Research
    Poster

    This talk summarizes the various storage options that we implemented for the CMSWEB cluster in Kubernetes infrastructure. All CMSWEB services require storage for logs, while some services also require storage for data. We also provide a feasibility analysis of various storage options and describe the pros/cons of each technique from the perspective of the CMSWEB cluster and its users. In the...

    Go to contribution page
  140. Meifeng Lin (Brookhaven National Laboratory (US))
    Track 1: Computing Technology for Physics Research
    Poster

    The Liquid Argon Time Projection Chamber (LArTPC) technology is widely used in high energy physics experiments, including the upcoming Deep Underground Neutrino Experiment (DUNE). Accurately simulating LArTPC detector responses is essential for analysis algorithm development and physics model interpretations. But because of the highly diverse event topologies that can occur in LArTPCs,...

    Go to contribution page
  141. Peter Klimai (Moscow Institute of Physics and Technology (MIPT))
    Track 1: Computing Technology for Physics Research
    Poster

    NICA (Nuclotron-based Ion Collider fAcility) is a new accelerator complex under construction at the Joint Institute for Nuclear Research in Dubna to study the properties of dense baryonic matter. The experiments of the NICA project have already generated substantial volumes of event data, and it is expected that the overall number of stored events will increase from the...

    Go to contribution page
  142. Enrico Fattibene (INFN - National Institute for Nuclear Physics)
    Track 1: Computing Technology for Physics Research
    Poster

    The main computing and storage facility of INFN (Italian Institute for Nuclear Physics) running at CNAF hosts and manages tens of Petabytes of data produced by the LHC (Large Hadron Collider) experiments at CERN and other scientific collaborations in which INFN is involved. The majority of these data are stored on tape resources of different technologies.
    All the tape drives can be used for...

    Go to contribution page
  143. Raquel Pezoa Rivera (Federico Santa Maria Technical University (CL))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Understanding the predictions of a machine learning model can be as important as achieving high performance, especially in critical application domains such as health care, cybersecurity, or financial services. In scientific domains, model interpretation can enhance a model's performance and helps researchers to trust it for use on real data and for knowledge discovery....

    Go to contribution page
  144. Christoph Wissing (Deutsches Elektronen-Synchrotron (DE)), Daniele Spiga (Universita e INFN, Perugia (IT))
    Track 1: Computing Technology for Physics Research
    Poster

    Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, among them the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the...

    Go to contribution page
  145. Sascha Daniel Diefenbacher (Hamburg University (DE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    One of the largest strains on computational resources in the field of high energy physics are Monte Carlo simulations. Given that this already high computational cost is expected to increase in the high-precision era of the LHC and at future colliders, fast surrogate simulators are urgently needed. Generative machine learning models offer a promising way to provide such a fast simulation by...

    Go to contribution page
  146. Ricardo Luz (Argonne National Laboratory (US))
    Track 1: Computing Technology for Physics Research
    Poster

    Over the next decade, the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during major shutdowns. A key goal of these upgrades is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system was...

    Go to contribution page
  147. Michael Poat (Brookhaven National Laboratory)
    Track 1: Computing Technology for Physics Research
    Poster

    A difficult aspect of cyber security is achieving automated, real-time intrusion prevention across varied sets of systems. To this end, several companies offer comprehensive solutions that leverage an “accuracy of scale”, moving much of the intelligence and detection to the Cloud and relying on an ever-growing set of data and analytics to increase decision accuracy....

    Go to contribution page
  148. William Kalderon (Brookhaven National Laboratory (US))
    Track 1: Computing Technology for Physics Research
    Poster

    This talk introduces and shows the simulated performance of two FPGA-based techniques to improve fast track finding in the ATLAS trigger. A fast hardware based track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), the goal of which is to provide the high-level trigger with full-scan tracking at 100 kHz in the high pile-up conditions of...

    Go to contribution page
  149. George Raduta (CERN)
    Track 1: Computing Technology for Physics Research
    Poster

    The ALICE Experiment at CERN’s Large Hadron Collider is undertaking a major upgrade during Long Shutdown 2 in 2019-2021, which includes a new Online-Offline computing system. To ensure the efficient operation of the upgraded experiment, and of its newly designed computing system, a new set of reliable and performant graphical interfaces is needed. These are to be used 24h/365d in...

    Go to contribution page
  150. Mikhail Kirsanov (Russian Academy of Sciences (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We present a package for the simulation of DM (dark matter) particles in fixed-target experiments. The most convenient way to perform this simulation (and the only possible way in the case of a beam dump) is within the framework of a Monte Carlo program that performs the particle tracing in the experimental setup.
    The Geant4 toolkit framework was chosen as the most popular and versatile...

    Go to contribution page
  151. Alexander Rogachev (National Research University Higher School of Economics (RU), Yandex School of Data Analysis (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    High energy physics experiments rely essentially on the simulation data used for physics analyses. However, running detailed simulation models requires a tremendous amount of computation resources. New approaches to speed up detector simulation are therefore needed.
    Generation of calorimeter responses is often the most expensive component of the simulation chain for HEP experiments.
    It has...

    Go to contribution page
  152. Mr Oriel Kiss (CERN, UNIGE)
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Generative models (GM) are powerful tools to help validate theories by reducing the computation time of Monte Carlo (MC) simulations. GMs can learn expensive MC calculations and generalize to similar situations. In this work, we propose comparing a classical generative adversarial network (GAN) approach with a Born machine, both in its discrete (QCBM) and continuous (CVBM) forms, while...

    Go to contribution page
  153. Artem Maevskiy (National Research University Higher School of Economics (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Detailed detector simulation models are vital for the successful operation of modern high-energy physics experiments. In most cases, such detailed models require a significant amount of computing resources to run. Often this cannot be afforded, and less resource-intensive approaches are desired. In this work, we demonstrate the applicability of Generative Adversarial Networks (GAN) as the...

    Go to contribution page
  154. Dr Nikita Kazeev (Yandex School of Data Analysis (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In recent years fully-parametric fast simulation methods based on generative models have been proposed for a variety of high-energy physics detectors. By their nature, the quality of data-driven models degrades in the regions of the phase space where the data are sparse. Since machine-learning models are hard to analyze from the physical principles, the commonly used testing procedures are...

    Go to contribution page
  155. CMS Collaboration, Thomas Klijnsma (Fermi National Accelerator Lab. (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Modern calorimeters for High Energy Physics (HEP) have very fine transverse and longitudinal segmentation to manage high incoming flux and improve particle identification capabilities. Compared to older calorimeter designs, this change alone alters the extraction of the number and energy of incident particles on the device from a simple Gaussian-template clustering problem to a highly...

    Go to contribution page
  156. Manfred Peter Fackeldey (Rheinisch Westfaelische Tech. Hoch. (DE))
    Track 1: Computing Technology for Physics Research
    Poster

    Fast turnaround times for LHC physics analyses are essential for scientific success. The ability to quickly perform optimizations and consolidation studies is critical. At the same time, computing demands and complexities are rising with the upcoming data taking periods and new technologies, such as deep learning.
    We present a show-case of the HH->bbWW analysis at the CMS experiment, where we...

    Go to contribution page
  157. Xiangyang Ju (Lawrence Berkeley National Lab. (US)), Daniel Thomas Murnane (Lawrence Berkeley National Lab. (US)), Chun-Yi Wang (National Tsing Hua University (TW))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Particle tracking is a challenging pattern recognition task in experimental particle physics. Traditional algorithms based on Kalman filters show desirable performance in finding tracks originating from collision points. However, for displaced tracks, dedicated tunings are often required in order to reach sensible performance as the quality of the seed for the Kalman filter has a direct impact...

    Go to contribution page
  158. Kaushal Gumpula (Fermi National Accelerator Lab. (US)), Mr Nikita Koloskov (University of Chicago), Jeremy Edmund Hewes (University of Cincinnati (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Exa.TrkX project presents a graph neural network (GNN) technique for low-level reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber (LArTPC). GNNs are still a relatively novel technique, and have shown great promise for similar reconstruction tasks at the LHC. Graphs describing particle interactions are formed by treating each detector hit as a node, with edges...

    Go to contribution page
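    The hits-as-nodes graph construction can be sketched in a few lines: connect pairs of hits that are close in detector coordinates. The (wire, time) coordinates and the distance cut below are invented for illustration and do not reflect the Exa.TrkX edge-building scheme in detail.

```python
# Toy graph construction for GNN input: each hit is a node, and an edge
# joins two hits whose separation is below a cut. Coordinates and the
# max_dist value are hypothetical.

def build_graph(hits, max_dist=1.5):
    """hits: list of (wire, time) tuples -> (node indices, edge list of index pairs)."""
    edges = []
    for i in range(len(hits)):
        for j in range(i + 1, len(hits)):
            dw = hits[i][0] - hits[j][0]
            dt = hits[i][1] - hits[j][1]
            if (dw * dw + dt * dt) ** 0.5 <= max_dist:
                edges.append((i, j))
    return list(range(len(hits))), edges

hits = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (10.0, 10.0)]
nodes, edges = build_graph(hits)
print(edges)  # the distant hit at (10, 10) stays unconnected
```

    A GNN then learns to classify these edges (or nodes) to separate true particle trajectories from spurious connections; in practice the quadratic pair loop is replaced by a spatial index.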
  159. Gustavo Uribe (Universidad Antonio Narino (CO))
    Track 1: Computing Technology for Physics Research
    Poster

    The ATLAS Technical Coordination Expert System is a knowledge-based application describing and simulating the ATLAS infrastructure, its components, and their relationships, in order to facilitate the sharing of knowledge, improve the communication among experts, and foresee potential consequences of interventions and failures. The developed software is key for planning ahead of the future...

    Go to contribution page
  160. Simon Metayer
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Poster

    In this talk, we shall discuss recent results for the elastic degrees of freedom of fluctuating surfaces obtained by multi-loop approaches. These surfaces are ubiquitous in physics, and are used to describe objects in various fields; from brane theory to membranes in biophysics and more recently, applied to graphene and graphene-like materials. We derive the three-loop order renormalization...

    Go to contribution page
  161. David Southwick (CERN)
    Track 1: Computing Technology for Physics Research
    Poster

    As part of the CERN-GEANT-PRACE-SKA collaboration, and in the context of EGI-ACE (Advanced Computing for the European Open Science Cloud), collaborators are working towards enabling
    efficient HPC use for big-data sciences. Approaching an HPC site with High Throughput
    Computing (HTC) workloads presents unique challenges in areas concerning data
    ingress/egress, use of shared storage systems, and...

    Go to contribution page
  162. Dr John J. Oh (NIMS (South Korea))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The gravitational-wave detector is a very complicated and sensitive collection of advanced instruments, which is influenced not only by the mutual interaction between mechanical/electronics systems but also by the surrounding environment. Thus, it is necessary to categorize and reduce noises from many channels interconnected by such instruments and environment for achieving the detection...

    Go to contribution page
  163. Ivan Kharuk (INR RAS)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We introduce a novel method for identifying fractions of primary air shower particles in an ensemble of events using deep learning. The suggested approach is developed for the Monte-Carlo simulated data for the Telescope Array experiment. For a given hadronic model, the error of identifying individual fractions of primary particles in an ensemble is less than 7%. We show that the developed...

    Go to contribution page
  164. Mason Proffitt (University of Washington (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The ABCD method is a common background estimation method used by many physics searches in particle collider experiments and involves defining four regions based on two uncorrelated observables. The regions are defined such that there is a search region, where most signal events are expected to be, and three control regions. A likelihood-based version of the ABCD method, also referred to as the...

    Go to contribution page
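    The four-region construction behind the classic ABCD method reduces to a one-line estimate: with two uncorrelated observables defining regions A (search region), B, C, and D, the expected background in A is N_A = N_B * N_C / N_D. A numerical sketch, with made-up control-region yields:

```python
# Classic (counting) ABCD background estimate. The yields are invented
# for illustration; the contribution itself discusses the likelihood-based
# variant of this method.

def abcd_estimate(n_b, n_c, n_d):
    """Predicted background in the search region A from control regions B, C, D."""
    if n_d == 0:
        raise ValueError("control region D must be populated")
    return n_b * n_c / n_d

print(abcd_estimate(n_b=200, n_c=150, n_d=600))  # -> 50.0 expected background events in A
```

    The likelihood-based version mentioned in the abstract encodes the same N_A = N_B N_C / N_D relation as a constraint in a fit, which propagates statistical uncertainties consistently.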
  165. Josina Schulte (RWTH Aachen University)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Conditional Invertible Neural Networks (cINNs) provide a new technique for the inference of free model parameters by enabling the creation of posterior distributions. With these distributions, the parameter mean values, their uncertainties and the correlations between the parameters can be estimated. In this contribution we summarize the functionality of cINNs, which are based on normalizing...

    Go to contribution page
  166. Andrey Demichev (Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In recent years, a correspondence has been established between the appropriate asymptotics of deep neural networks (DNNs), including convolutional ones (CNNs), and the machine learning methods based on Gaussian processes (GPs). The ultimate goal of establishing such interrelations is to achieve a better theoretical understanding of various methods of machine learning (ML) and their...

    Go to contribution page
  167. Andre Sznajder (Universidade do Estado do Rio de Janeiro (BR))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We investigate the possibility of using Deep Learning algorithms for jet identification in the L1 trigger at HL-LHC. We perform a survey of architectures (MLP, CNN, Graph Networks) and benchmark their performance and resource consumption on FPGAs using a QKeras+hls4ml compression-aware training procedure. We use the HLS4ML jet dataset to compare the results obtained in this study to previous...

    Go to contribution page
  168. Placido Fernandez Declara (CERN)
    Track 1: Computing Technology for Physics Research
    Poster

    Detector optimisation and physics performance studies are an integral part of the development of future collider experiments. The Key4hep project aims to design a common set of software tools for future, or even present, High Energy Physics projects. Based on the iLCSoft and FCCSW frameworks, an integrated solution for detector simulation, reconstruction and analyses is being developed. This...

    Go to contribution page
  169. Adrian Alan Pol (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this contribution, we apply deep learning object detection techniques based on convolutional blocks to the jet identification and reconstruction problem encountered at the CERN Large Hadron Collider. Particles reconstructed through the Particle Flow algorithm can be represented as an image composed of calorimeter and tracker cells, serving as input to a Single Shot Detection network. The algorithm,...

    Go to contribution page
  170. CMS Collaboration, Vichayanun Wachirapusitanand (Chulalongkorn University (TH))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    As the CMS detector is getting ready for data-taking in 2021 and beyond, the detector is expected to deliver an ever-increasing amount of data. To ensure that the data recorded from the detector has the best quality possible for physics analyses, the CMS Collaboration has dedicated Data Quality Monitoring (DQM) and Data Certification (DC) working groups. These working groups are made up of human...

    Go to contribution page
  171. Vladimir Loncar (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The hls4ml project started to bring Neural Network inference to the L1 trigger system of the LHC experiments. Since its initial proposal, the library has grown, integrating support for multiple backends, multiple network architectures (convolutional, recurrent, graph), extreme quantization (binary and ternary networks), and multiple applications (classification, regression, anomaly detection)....

    Go to contribution page
  172. Grigory Rubtsov (INR RAS)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Baikal-GVD is a large scale underwater neutrino telescope currently under construction in Lake Baikal. The experiment is aimed at the study of the high-energy cosmic neutrinos and the search for their sources. The principal component of the telescope is the three-dimensional array of optical modules (OMs) which register Cherenkov light associated with the neutrino-induced particles. The OMs...

    Go to contribution page
  173. Nisha Lad (UCL)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The baseline track finding algorithms adopted in the LHC experiments are based on combinatorial track-following techniques, where the number of seeds scales non-linearly with the number of hits. The corresponding CPU time increase, close to cubic, creates a huge and ever-increasing demand for computing power. This is particularly problematic for the silicon tracking detectors, where the hit...

    Go to contribution page
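    The "close to cubic" scaling can be made concrete by counting hit triplets: if a seed is built from every combination of three hits, the seed count grows as n-choose-3. This is a simplified model of seed formation, not the experiments' actual seeding logic.

```python
# Combinatorial triplet-seed count as a toy model of the cubic scaling of
# track seeding with hit multiplicity.

from math import comb

for n_hits in (100, 200, 400):
    print(n_hits, comb(n_hits, 3))  # n*(n-1)*(n-2)/6 candidate triplets

# Doubling the number of hits multiplies the triplet count by roughly 2**3 = 8.
ratio = comb(200, 3) / comb(100, 3)
print(round(ratio, 2))  # -> 8.12
```

    This near-cubic growth in candidate seeds is what drives the CPU-time increase the abstract describes, and why geometric pruning of the combinatorics pays off so strongly.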
  174. Andrey Baginyan (Joint Institute for Nuclear Research (RU))
    Track 1: Computing Technology for Physics Research
    Poster

    Modeling network data traffic is the most important task in the design and construction of new network centers and campus networks. The results of the analysis of models can be applied in the reorganization of existing centers and in the configuration of data routing protocols based on the use of links. The paper shows how constant monitoring of the main directions of data transfer allows...

    Go to contribution page
  175. Ouail Kitouni (Massachusetts Inst. of Technology (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric by which the robustness of the model can be measured. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training...

    Go to contribution page
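    A generic version of Lipschitz control through weight normalization can be sketched as follows: for a dense network with 1-Lipschitz activations, the Lipschitz constant (here with respect to the infinity norm) is bounded by the product of the layers' operator norms, so rescaling each weight matrix to norm at most 1 enforces a global bound of 1. The norm choice and rescaling rule are illustrative assumptions, not necessarily the exact scheme of this contribution.

```python
# Weight normalization bounding a dense network's Lipschitz constant.
# Uses the operator norm induced by the vector infinity norm (max absolute
# row sum), which is cheap to compute exactly.

def inf_norm(W):
    """Operator infinity-norm of a matrix given as a list of rows."""
    return max(sum(abs(w) for w in row) for row in W)

def normalize(W, target=1.0):
    """Rescale W so that its operator norm does not exceed `target`."""
    n = inf_norm(W)
    if n <= target:
        return W
    scale = target / n
    return [[w * scale for w in row] for row in W]

W1 = [[2.0, -1.0], [0.5, 0.5]]   # norm 3.0 -> gets rescaled
W2 = [[0.3, 0.2], [-0.1, 0.4]]   # norm 0.5 -> left untouched
layers = [normalize(W) for W in (W1, W2)]

bound = 1.0
for W in layers:
    bound *= inf_norm(W)   # product of layer norms bounds the network
print(bound <= 1.0 + 1e-12)  # -> True
```

    In training, such a projection (or an equivalent reparameterization) is applied after each update, so robustness is guaranteed by construction rather than checked after the fact.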
  176. Jiwoong Kim (Kyungpook National University (KR))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We present the first application of scalable deep learning with a high-performance computer (HPC) to a physics analysis using CMS simulation data with 13 TeV LHC proton-proton collisions. We build a convolutional neural network (CNN) model which takes low-level information as images considering the geometry of the CMS detector. The CNN model is implemented to discriminate R-parity violating...

    Go to contribution page
  177. Edson Carquin Lopez (Federico Santa Maria Technical University (CL))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Tau leptons are used in a range of important ATLAS physics analyses, including the measurement of the SM Higgs boson coupling to fermions, searches for Higgs boson partners, and heavy resonances decaying into pairs of tau leptons. Events for these analyses are provided by a number of single and di-tau triggers including event topological requirements or the requirement of additional objects at...

    Go to contribution page
  178. Mr Nathan Daniel Simpson (Lund University (SE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because “training a neural network” equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can...

    Go to contribution page
  179. CMS Collaboration, Christopher Edward Brown (Imperial College (GB))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    A major challenge of the high-luminosity upgrade of the CERN LHC is to single out the primary interaction vertex of the hard scattering process from the expected 200 pileup interactions that will occur each bunch crossing. To meet this challenge, the upgrade of the CMS experiment comprises a complete replacement of the silicon tracker that will allow for the first time to perform the...

    Go to contribution page
  180. Gloria Corti (CERN), Michal Mazurek (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The LHCb experiment at the Large Hadron Collider (LHC) at CERN has successfully performed a large number of physics measurements during Runs 1 and 2 of the LHC. It will resume operation in Run 3 with an upgraded detector to process events at up to five times higher luminosity. Monte Carlo simulations are key to the commissioning of the new detector and the interpretation of past and future...

    Go to contribution page
  181. Carlos Perez Dengra (PIC-CIEMAT)
    Track 1: Computing Technology for Physics Research
    Poster

    The Large Hadron Collider (LHC) will enter a new era of data acquisition by 2026 within the High-Luminosity Large Hadron Collider (HL-LHC) program, in which the LHC will increase proton-proton collisions to unprecedented levels. This increase implies a factor of 10 in luminosity compared to current values, impacting the way experimental data are stored and...

    Go to contribution page
  182. Mr Andreas Pappas (National and Kapodistrian University of Athens (GR))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The LHCb detector is undergoing a comprehensive upgrade for data taking in the LHC’s Run 3, which is scheduled to begin in 2022. The new Run 3 detector has a different, upgraded geometry and uses new tools for its description, namely DD4hep and ROOT. In addition, visualization technologies have evolved considerably since Run 1, with the introduction of ubiquitous web-based solutions or...

    Go to contribution page
  183. Antonio Gioiosa (INFN - National Institute for Nuclear Physics)
    Track 1: Computing Technology for Physics Research
    Poster

    The Mu2e experiment at Fermilab searches for the charged-lepton flavor violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. If no events are observed, in three years of running Mu2e will improve the previous upper limit by four orders of magnitude in search sensitivity.
    Mu2e’s Trigger and Data Acquisition System (TDAQ) uses otsdaq...

    Go to contribution page
  184. Prof. Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Within the FAIR Phase-0 program the algorithms of the FLES (First-Level Event Selection) package developed for the CBM experiment (FAIR/GSI, Germany) are adapted for online and offline processing in the STAR experiment (BNL, USA).

    Long-lived charged particles are reconstructed in the TPC detector using the CA track finder algorithm based on the Cellular Automaton. The search for...

    Go to contribution page
  185. Ludwig Albert Jaffe (Goethe University Frankfurt (DE)), Alexander Adler (Goethe University Frankfurt (DE))
    Track 1: Computing Technology for Physics Research
    Poster

    Containerisation is an elementary tool for sharing IT resources: It is more light-weight than full virtualisation, but offers comparable isolation. We argue that for many use-cases which are typically approached with standard containerisation tools, less than full isolation is sufficient: Sometimes, only networking or only storage or both need to be different from their native, unisolated...

    Go to contribution page
  186. Raghav Kansal (Univ. of California San Diego (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    There has been significant development recently in generative models for accelerating LHC simulations. Work on simulating jets has primarily used image-based representations, which tend to be sparse and of limited resolution. We advocate for the more natural 'particle cloud' representation of jets, i.e. as a set of particles in momentum space, and discuss four physics- and...

    Go to contribution page
  187. Lea Reuter (Institut für Experimentelle Teilchenphysik (ETP), Karlsruher Institut für Technologie (KIT), Germany)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Learning the hierarchy of graphs is relevant in a variety of domains, as they are commonly used to express the chronological interactions in data structures. One application is in Flavor Physics, as the natural representation of a particle decay process is a rooted tree graph. 
    Analyzing collision events involving missing particles or neutrinos requires knowledge of the full decay tree....

    Go to contribution page
  188. Ahmet Ilker Topuz (Catholic University of Louvain)
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Poster

    The wide angular distribution of incoming cosmic-ray muons, in either incident angle or azimuthal angle, is a challenging trait that leads to drastic particle loss in the course of parametric computations with GEANT4 simulations, since the tomographic configuration as well as the target geometry also influence the number of detected particles that can be processed, apart from the...

    Go to contribution page
  189. Abtin Narimani Charan (Deutsches Elektronen-Synchrotron (DESY))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Belle II experiment is located at the asymmetric SuperKEKB $e^+ e^-$ collider in Tsukuba, Japan. The Belle II electromagnetic calorimeter (ECL) is designed to measure the energy deposited by charged and neutral particles. It also provides important contributions to the particle identification system. Identification of low-momenta muons and pions in the ECL is crucial if they do not reach...

    Go to contribution page
  190. Mary Touranakou (National and Kapodistrian University of Athens (GR)), Breno Orzari (UNESP - Universidade Estadual Paulista (BR))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    HEP experiments heavily rely on the production and the storage of large datasets of simulated events. At the LHC, simulation workflows require about half of the available computing resources of a typical experiment. With the foreseen High Luminosity LHC upgrade, data volume and complexity are going to increase faster than the expected improvements in computing infrastructure. Speeding up the...

    Go to contribution page
  191. Huilin Qu (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Identification of hadronic decays of highly Lorentz-boosted W/Z/Higgs bosons and top quarks provides powerful handles to a wide range of new physics searches and Standard Model measurements at the LHC. In this talk, we present ParticleNeXt, a new graph neural network (GNN) architecture tailored for jet tagging. With the introduction of novel components such as pairwise features, attentive...
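As an example of the kind of pairwise feature such an architecture can consume (a standard jet quantity; whether ParticleNeXt uses exactly this input is an assumption), the angular distance ΔR between two jet constituents:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two constituents, wrapping phi into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)
```

In a graph network, such a quantity would be computed for every pair of nodes and fed to the edge or attention layers.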

    Go to contribution page
  192. CMS Collaboration, Wahid Redjeb (Rheinisch Westfaelische Tech. Hoch. (DE))
    Track 1: Computing Technology for Physics Research
    Poster

    Heterogeneous computing will play a fundamental role in CMS reconstruction to face the challenges posed by the HL-LHC phase. Several computing architectures and vendors are currently available to build a heterogeneous computing farm for the CMS experiment. However, specialized implementations for each of these architectures are not sustainable in terms of development,...

    Go to contribution page
  193. Xiaocong Ai (DESY)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Exploring anomalous objects from beyond-Standard-Model (BSM) signatures is an important mission of the LHC experiments. Recently, new particles at the sub-GeV scale have received more and more attention. Light pseudo-scalars such as axion-like particles (ALPs) and light scalars such as the dark Higgs are proposed by many BSM models and can serve as mediators of some sub-GeV dark matter...

    Go to contribution page
  194. Andrea Valenzuela Ramirez (Universitat Oberta de Catalunya (ES))
    Track 1: Computing Technology for Physics Research
    Poster

    The CernVM File System (CernVM-FS) is a global read-only POSIX file system that provides scalable and reliable software distribution to numerous scientific collaborations. It gives access to more than a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. CernVM-FS is asymmetric by construction....

    Go to contribution page
  195. Huw Haigh (Austrian Academy of Sciences (AT))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this talk, we present the novel implementation of a non-differentiable metric approximation with a corresponding loss-scheduling based on the minimization of a figure-of-merit related function typical of particle physics (the so-called Punzi figure of merit). We call this new loss-scheduling a "Punzi-loss function" and the neural network that minimizes it a "Punzi-net". We tested the...
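For context, the Punzi figure of merit is commonly written as ε(t) / (a/2 + √B(t)), with ε the signal efficiency, B the expected background, and a the target significance in standard deviations; a direct transcription (illustrative only, not the Punzi-net itself):

```python
import math

def punzi_fom(signal_eff, n_background, a=3.0):
    """Punzi figure of merit: eff / (a/2 + sqrt(B))."""
    return signal_eff / (a / 2.0 + math.sqrt(n_background))
```

Maximising this quantity over a selection balances efficiency against background without requiring an assumed signal yield, which is what makes it attractive as a training target.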

    Go to contribution page
  196. Dr Federico SCUTTI (The University of Melbourne)
    Track 1: Computing Technology for Physics Research
    Poster

    The pyrate framework provides a dynamic, versatile, and memory-efficient approach to data format transformations, object reconstruction, and data analysis in particle physics. The framework is implemented in the Python programming language, allowing easy access to the scientific Python package ecosystem and commodity big data technologies. Developed within the context of the SABRE experiment...

    Go to contribution page
  197. Henry Fredrick Schreiner (Princeton University)
    Track 1: Computing Technology for Physics Research
    Poster

    Histogramming for Python has been transformed by the Scikit-HEP family of libraries, starting with boost-histogram, a core library for high performance Pythonic histogram creation and manipulation based on the Boost C++ libraries. This was extended by Hist with plotting, analysis friendly shortcuts, and much more. And UHI is a specification that allows histogramming and plotting libraries,...
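Conceptually, the core operation these libraries make fast is filling counts along a regular axis; a pure-Python sketch of that operation (boost-histogram itself does this in vectorised C++, with many more axis types):

```python
def fill_regular(data, nbins, lo, hi):
    """Count entries of data in nbins equal-width bins over [lo, hi)."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for x in data:
        if lo <= x < hi:                      # under/overflow dropped here
            counts[int((x - lo) / width)] += 1
    return counts
```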

    Go to contribution page
  198. Vasileios Belis (ETH Zurich (CH))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The advantage of quantum computers over classical devices lies in the possibility of using quantum superposition effects of n qubits to perform exponential computations in parallel. This effect makes it possible to reduce the computational complexity of certain classes of problems, such as optimisation, sampling or combinatorial problems in large scale fault-tolerant quantum...

    Go to contribution page
  199. Enrico Guiraud (EP-SFT, CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In recent years, RDataFrame, ROOT's high-level interface for data analysis and processing, has seen widespread adoption on the part of HEP physicists. Much of this success is due to RDataFrame's ergonomic programming model that enables the implementation of common analysis tasks more easily than previous APIs, without compromising on application performance. Nonetheless, RDataFrame's...

    Go to contribution page
  200. Ben Nachman (Lawrence Berkeley National Lab. (US)), Daniel Britzger (Max-Planck-Institut für Physik München), Miguel Ignacio Arratia Munoz (Lawrence Berkeley National Lab. (US)), Owen Long (University of California Riverside (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this talk we present a novel method to reconstruct the kinematics of neutral-current deep inelastic scattering (DIS) using a deep neural network (DNN). Unlike traditional methods, it exploits the full kinematic information of both the scattered electron and the hadronic-final state, and it accounts for QED radiation by identifying events with radiated photons and event-level momentum...
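For comparison, one traditional approach, the electron method, reconstructs x, y and Q² from the scattered electron alone (textbook formulae in the HERA angle convention, shown here only as context for what the DNN improves upon):

```python
import math

def dis_electron_method(E_beam, E_scat, theta, s):
    """Reconstruct DIS kinematics from the scattered electron only.

    E_beam: electron beam energy, E_scat: scattered electron energy,
    theta: electron polar angle w.r.t. the proton beam (HERA convention),
    s: squared centre-of-mass energy.  Returns (x, y, Q2).
    """
    y = 1.0 - (E_scat / (2.0 * E_beam)) * (1.0 - math.cos(theta))
    Q2 = 2.0 * E_beam * E_scat * (1.0 + math.cos(theta))
    x = Q2 / (s * y)
    return x, y, Q2
```

The method degrades at low y and ignores QED radiation entirely, which is exactly where combining the hadronic final state helps.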

    Go to contribution page
  201. Dr Renat Sadykov (Joint Institute for Nuclear Research (RU))
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Poster

    We present a new version of the Monte Carlo event generator ReneSANCe. The generator takes into account complete one-loop electroweak (EW) corrections, QED corrections in the leading-log approximation (LLA), and some higher-order QED and EW corrections to processes at e^+e^- colliders with finite particle masses and arbitrary polarizations of initial particles. ReneSANCe effectively operates in...

    Go to contribution page
  202. Javier Lopez Gomez (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Upcoming HEP experiments, e.g. at the HL-LHC, are expected to increase the volume of generated data by at least one order of magnitude. In order to retain the ability to analyze the influx of data, full exploitation of modern storage hardware and systems, such as low-latency high-bandwidth NVMe devices and distributed object stores, becomes critical.

    To this end, the ROOT RNTuple I/O...

    Go to contribution page
  203. Ajay Rawat (University of Washington (US))
    Track 1: Computing Technology for Physics Research
    Poster

    The Reproducible Open Benchmarks for Data Analysis Platform (ROB)[1][2] is a platform developed to help evaluate data analysis workflows in a controlled competition-style environment. ROB was inspired by the Top Tagger Comparison analysis (2019)[3] that compared multiple different top tagger neural networks. ROB has two main goals: (1) reduce the amount of time required to organize and...

    Go to contribution page
  204. Aziz Temirkhanov (National Research University Higher School of Economics (RU))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The volume of data processed by the Large Hadron Collider experiments demands sophisticated selection rules typically based on machine learning algorithms. One of the shortcomings of these approaches is their profound sensitivity to the biases in training samples. In the case of particle identification (PID), this might lead to degradation of the efficiency for some decays on validation due to...

    Go to contribution page
  205. Kyungeon Choi (University of Texas at Austin (US))
    Track 1: Computing Technology for Physics Research
    Poster

    Recent developments in software to address challenges in the High-Luminosity LHC (HL-LHC) era allow novel approaches when interacting with the data and performing physics analysis. We employed software components primarily from IRIS-HEP to construct an analysis workflow of an ongoing ATLAS Run-2 physics analysis in the python ecosystem. The software components in the analysis workflow include...

    Go to contribution page
  206. Su Yeon Chang (CERN / EPFL - Ecole Polytechnique Federale Lausanne (CH))
    Track 1: Computing Technology for Physics Research
    Poster

    In an earlier work [1], we introduced dual-Parameterized Quantum Circuit (PQC) Generative Adversarial Networks (GAN), an advanced prototype of quantum GAN, which consists of a classical discriminator and two quantum generators that take the form of PQCs. We have shown the model can imitate calorimeter outputs in High-Energy Physics (HEP), interpreted as reduced size pixelated images. But the...

    Go to contribution page
  207. Gordon Watts (University of Washington (US))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    ServiceX is a cloud-native distributed application that transforms data into columnar formats in the Python ecosystem and the ROOT framework. Along with the transformation, it applies filtering and thinning operations to reduce the data load sent to the client. ServiceX, designed for easy deployment to a Kubernetes cluster, runs near the data, scanning TBs of data to send GBs to a client or...
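Filtering (dropping events) and thinning (dropping columns) on columnar data can be illustrated with a plain dict-of-lists table (a toy sketch, not ServiceX's implementation):

```python
def filter_and_thin(table, predicate, keep):
    """table: dict mapping column name -> list of per-event values.

    Keep only the events passing predicate (filtering) and only the
    columns listed in keep (thinning).
    """
    n = len(next(iter(table.values())))
    rows = [i for i in range(n)
            if predicate({c: v[i] for c, v in table.items()})]
    return {c: [table[c][i] for i in rows] for c in keep}
```

Performing both steps server-side, near the data, is what shrinks the TB-scale input to the GB-scale payload mentioned above.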

    Go to contribution page
  208. Ms Giulia Sorrentino (Universita e INFN Trieste (IT))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) is undertaking a Phase II upgrade program to face the harsh conditions imposed by the High Luminosity LHC (HL-LHC). This program comprises the installation of a new timing layer to measure the time of minimum ionizing particles (MIPs) with a time resolution of 30-40 ps. The time information of the tracks from this new...

    Go to contribution page
  209. Marco Rossi (CERN), Sofia Vallecorsa (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    DUNE is a cutting-edge experiment aiming to study neutrinos in detail, with a special focus on the flavor oscillation mechanism. ProtoDUNE-SP (the prototype of the DUNE Far Detector single-phase TPC) has been built and operated at CERN, and a full suite of reconstruction tools has been developed. Pandora is a multi-algorithm framework that implements reconstruction tools: a large number...

    Go to contribution page
  210. Sitong An (CERN, Carnegie Mellon University (US))
    Track 1: Computing Technology for Physics Research
    Poster

    Deep neural networks are rapidly gaining popularity in physics research. While Python-based deep learning frameworks for training models in GPU environments develop and mature, a good solution for easily integrating inference of trained models into conventional C++- and CPU-based scientific computing workflows seems to be lacking.

    We report the latest development in ROOT/TMVA that aims to...

    Go to contribution page
  211. Michel Hernandez Villanueva (DESY)
    Track 1: Computing Technology for Physics Research
    Poster

    Among the upgrades in current high energy physics (HEP) experiments and the new facilities coming online, solving software challenges has become integral to the success of the collaborations, and the demand for human resources highly skilled in both HEP and software domains is increasing. With human resources spread across a highly distributed environment, the sustainability of the HEP ecosystem...

    Go to contribution page
  212. Adam Abed Abud (University of Liverpool (GB) and CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Deep Learning (DL) methods and Computer Vision are becoming important tools for event reconstruction in particle physics detectors. In this work, we report on the use of Submanifold Sparse Convolutional Neural Networks (SparseNet) for the classification of track and shower hits from a DUNE prototype liquid-argon detector at CERN (ProtoDUNE). By taking advantage of the three-dimensional nature...

    Go to contribution page
  213. Dr Wei Sun (Institute of High Energy Physics, Chinese Academy of Sciences)
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Poster

    Lattice quantum chromodynamics (lattice QCD) is the non-perturbative, first-principles formulation of QCD and can be systematically improved; it is also one of the most important high-performance computing applications in high energy physics. Physics research in lattice QCD has benefited enormously from developments in computer hardware and algorithms, and particle...

    Go to contribution page
  214. Niklas Nolte (Massachusetts Institute of Technology (US))
    Track 1: Computing Technology for Physics Research
    Poster

    The triggerless readout of data corresponding to a 30 MHz event rate at the upgraded LHCb experiment together with a software-only High Level Trigger will enable the highest possible flexibility for trigger selections. During the first stage (HLT1), track reconstruction and vertex fitting for charged particles enable a broad and efficient selection process to reduce the event rate to 1 MHz....

    Go to contribution page
  215. Kaixuan Huang (SUN YAT-SEN UNIVERSITY)
    Track 1: Computing Technology for Physics Research
    Poster

    In High Energy Physics (HEP) experiments, it is useful for physics analysis and outreach if the event display software can provide rich visualization effects. Unity is professional software that provides 3D modeling and animation production. GDML-format files are commonly used for detector description in HEP experiments. In this work, we present a method for automating the import of GDML...

    Go to contribution page
  216. Su Yeon Chang (CERN / EPFL - Ecole Polytechnique Federale Lausanne (CH))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In classical deep learning, a number of studies have proven that noise plays a crucial role in the training of neural networks. Artificial noises are often injected in order to make the model more robust, faster converging, and stable. Meanwhile, quantum computing, a completely new paradigm of computation, is characterized by statistical uncertainty from its probabilistic nature. Furthermore,...

    Go to contribution page
  217. Niclas Steve Eich (Rheinisch Westfaelische Tech. Hoch. (DE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We present a specialised layer for generative modeling of LHC events with generative adversarial networks. We use Lorentz boosts, rotations, momentum and energy conservation to build a network cell generating a 2-body particle decay. This cell is stacked consecutively in order to model two staged decays, respecting the symmetries across the decay chain. We allow for modifications of the...
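The decay cell described above can be sketched without the network machinery: generate the 2-body decay in the parent rest frame and Lorentz-boost it to the lab (standard kinematics only; the trainable layer wrapping this is not reproduced here):

```python
import math
import random

def two_body_decay(M, m1, m2, beta_z=0.0):
    """Generate a 2-body decay of a parent of mass M moving along z with
    velocity beta_z; returns two (E, px, py, pz) four-vectors."""
    # Daughter momentum in the parent rest frame (standard kinematics).
    p = math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    cos_t = random.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t**2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    px = p * sin_t * math.cos(phi)
    py = p * sin_t * math.sin(phi)
    pz = p * cos_t

    def boost(e, x, y, z):
        # Lorentz boost along z with velocity beta_z.
        gamma = 1.0 / math.sqrt(1.0 - beta_z**2)
        return (gamma * (e + beta_z * z), x, y, gamma * (z + beta_z * e))

    return (boost(math.hypot(m1, p), px, py, pz),
            boost(math.hypot(m2, p), -px, -py, -pz))
```

Stacking such cells, with the parent of one cell taken from a daughter of the previous one, models the two-staged decays the abstract refers to; momentum and energy conservation hold by construction.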

    Go to contribution page
  218. Brahim Aitbenchikh (Universite Hassan II, Ain Chock (MA))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The ATLAS experiment at the Large Hadron Collider (LHC) relies heavily on simulated data, requiring the production of billions of Monte Carlo (MC)-based proton-proton collisions for every run period. As such, the simulation of collisions (events) is the single biggest CPU resource consumer for the experiment. ATLAS's finite computing resources are at odds with the expected conditions during...

    Go to contribution page
  219. Paul Gessinger (CERN)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The great success of the Tracking Machine Learning Challenge (TrackML), conducted in two phases (an accuracy phase from April to August and a throughput phase from September to November 2018), has proven the need for an easily accessible and yet challenging dataset for algorithm design and further R&D. The released TrackML dataset is to date heavily used by several research groups at the forefront of...

    Go to contribution page
  220. Gene Van Buren (Brookhaven National Laboratory)
    Track 1: Computing Technology for Physics Research
    Poster

    A unique experiment was conducted by the STAR Collaboration in 2018 to investigate differences between collisions of nuclear isobars, a potential key to unraveling one of the physics mysteries in our field: why the universe is made predominantly of matter. Enhancing the credibility of findings was deemed to hinge on blinding analyzers from knowing which dataset they were examining,...

    Go to contribution page
  221. Ahmet Ilker Topuz (Catholic University of Louvain)
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Poster

    The emerging applications of cosmic-ray muon tomography lead to a significant rise in the utilization of cosmic particle generators, e.g. CRY, CORSIKA, or CMSCGEN, where fundamental parameters such as the energy spectrum and the angular distribution of the generated muons are represented in continuous form, routinely governed implicitly by probability density functions over...

    Go to contribution page
  222. Mary Touranakou (National and Kapodistrian University of Athens (GR)), Shah Rukh Qasim (Manchester Metropolitan University (GB))
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We investigate the application of object condensation to particle tracking at the LHC. Designed with calorimeter clustering in mind and successfully employed in high-granularity calorimeter reconstruction for the HL-LHC, object condensation is a generic clustering method that could be applied to many problems within and outside HEP. Using the TrackML challenge dataset, we train a tracking...

    Go to contribution page
  223. Stefano Piacentini (Università La Sapienza)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this contribution we will show an innovative approach based on Bayesian networks and linear algebra providing a solid and complete solution to the problem of the detector response and the related systematic effects. As a case study, we will consider the Dark Matter (DM) direct detection searches. In fact, in the past decades, a huge experimental effort has been developed to ...

    Go to contribution page
  224. Mr Dennis Noll (RWTH Aachen University (DE))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Many HEP analyses are adopting the concept of vectorised computing, often making them increasingly performant and resource-efficient.
    While a variety of computing steps can be vectorised directly, some calculations are challenging to implement.
    One of these is the analytical neutrino reconstruction, which involves a fit that naturally varies between events.

    We show a vectorised...
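The analytical neutrino reconstruction mentioned above solves, per event, a quadratic obtained from the W-mass constraint m_W² = (p_l + p_ν)²; a scalar (non-vectorised) sketch of the standard solution, for illustration:

```python
import math

def neutrino_pz(lep, met, m_w=80.4):
    """Solve m_W^2 = (p_lep + p_nu)^2 for the neutrino pz.

    lep = (px, py, pz, E) of the charged lepton (treated as massless),
    met = (px, py) of the missing transverse momentum.  Returns the two
    real roots, or the degenerate real part if the discriminant is negative.
    """
    lx, ly, lz, le = lep
    nx, ny = met
    mu = m_w**2 / 2.0 + lx * nx + ly * ny
    a = le**2 - lz**2
    disc = mu**2 - a * (nx**2 + ny**2)
    if disc < 0.0:                     # complex roots: keep the real part
        return (mu * lz / a, mu * lz / a)
    r = le * math.sqrt(disc)
    return ((mu * lz + r) / a, (mu * lz - r) / a)
```

The event-by-event branching between the real and complex cases is precisely what makes a naive vectorisation of this calculation challenging.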

    Go to contribution page
  225. Ziyuan Li (Sun Yat-Sen University (CN)), Zhen Qian (Sun Yat-sen University)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO), currently under construction in the south of China, is the largest Liquid Scintillator (LS) detector in the world. JUNO is a multipurpose neutrino experiment designed to determine neutrino mass ordering, precisely measure oscillation parameters, and study solar neutrinos, supernova neutrinos, geo-neutrinos and atmospheric neutrinos. The...

    Go to contribution page
  226. Jonas Eschle (Universitaet Zuerich (CH))
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Statistical modelling and likelihood inference are key elements in many sciences, especially in High-Energy Physics (HEP) analyses. These require advanced features such as handling large amounts of data; supporting binned, unbinned, and mixed inference; using complicated and often custom-made model functions; and being highly performant.
    In HEP, these features were covered in C++ frameworks...
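In its simplest form, the unbinned inference described reduces to minimising a negative log-likelihood; a stdlib-only Gaussian example (illustrative, not any particular framework's API):

```python
import math

def gaussian_nll(mu, sigma, data):
    """Negative log-likelihood of i.i.d. Gaussian data."""
    norm = math.log(sigma * math.sqrt(2.0 * math.pi))
    return sum(0.5 * ((x - mu) / sigma) ** 2 + norm for x in data)
```

A fit then amounts to minimising this function over the parameters; here a coarse grid scan recovers the sample mean, while real frameworks use gradient-based minimisers and handle binned, unbinned and mixed models alike.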

    Go to contribution page
  227. Felix Wagner (HEPHY Vienna)
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Novel cryogenic scintillating calorimeters, used in rare-event search experiments, achieve sub-keV recoil energy thresholds. Such low thresholds require a careful raw-data analysis of triggered events. This includes the identification of particle recoils among artifacts and the reconstruction of the corresponding recoil energies, despite a low signal-to-noise ratio. For this purpose we...

    Go to contribution page