-
David Britton (University of Glasgow (GB)) - 29/11/2021, 15:00
-
Doris Yangsoo Kim (Soongsil University), Soonwook Hwang (KISTI Korea Institute of Science & Technology Information (KR)) - 29/11/2021, 15:05
-
Prof. Do Young Noh (IBS) - 29/11/2021, 15:15
-
Julia Fitzner - 29/11/2021, 15:20
The World Health Organization has been and is monitoring the development of the pandemic through the regular collection of disease and laboratory data from all member states. Data is collected on the numbers of cases and deaths, the age distribution, and infections in health care workers, but also on which public health measures are taken and on where, and how many, people are vaccinated. This data allows...
-
Anja Butter (Universität Heidelberg, ITP) - 29/11/2021, 15:50
Over the next years, measurements at the LHC and the HL-LHC will provide us with a wealth of data. The best hope of answering fundamental questions, such as the nature of dark matter, is to adopt big data techniques in simulations and analyses to extract all relevant information.
On the theory side, LHC physics crucially relies on our ability to simulate events efficiently from first...
-
Kevin Buzzard (Imperial College London) - 29/11/2021, 16:20
-
Simon Platzer (University of Vienna (AT)) - 29/11/2021, 17:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
Amplitude level evolution has become a new theoretical paradigm to analyze parton shower algorithms which are at the heart of multi-purpose event generator simulations used for particle collider experiments. It can also be implemented as a numerical algorithm in its own right to perform resummation of non-global observables beyond the leading colour approximation, leading to a new kind of...
-
Lu Wang (Computing Center, Institute of High Energy Physics, CAS) - 29/11/2021, 17:20 - Track 1: Computing Technology for Physics Research (Oral)
Problematic I/O patterns are the major cause of low-efficiency HEP jobs. When the computing cluster is partially occupied by jobs with problematic I/O patterns, the overall CPU efficiency drops dramatically. In a cluster with thousands of users, locating the source of an anomalous workload is not an easy task. Automatic anomaly detection of I/O behavior can largely alleviate the...
-
Katya Govorkova (CERN) - 29/11/2021, 17:20
We show how to adapt and deploy anomaly detection algorithms based on deep autoencoders, for the unsupervised detection of new physics signatures in the extremely challenging environment of a real-time event selection system at the Large Hadron Collider (LHC). We demonstrate that new physics signatures can be enhanced by three orders of magnitude, while staying within the strict latency and...
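The autoencoder-based selection described above can be caricatured in a few lines: a linear autoencoder (equivalent to PCA) is fit to "background" events, and a reconstruction-error threshold flags outliers. All features, sizes and thresholds below are invented for illustration and bear no relation to the actual trigger-level model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "background": 4 features lying near a 2-dimensional plane, a stand-in
# for correlated trigger-level quantities. Everything here is synthetic.
z = rng.normal(size=(5000, 2))
mix = np.array([[1.0, 0.5], [0.5, 1.0]])
background = np.hstack([z, z @ mix]) + 0.1 * rng.normal(size=(5000, 4))

# A linear autoencoder with a 2-dim bottleneck is equivalent to PCA:
# encode = project on the top-2 principal components, decode = project back.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
encoder = vt[:2]

def anomaly_score(x):
    """Reconstruction error: large when x does not fit the background manifold."""
    code = (x - mean) @ encoder.T
    recon = code @ encoder + mean
    return np.sum((x - recon) ** 2, axis=-1)

# Threshold chosen so that ~0.1% of background is (wrongly) flagged.
threshold = np.quantile(anomaly_score(background), 0.999)

# "Signal" events breaking the background correlations score far higher.
signal = rng.normal(loc=3.0, size=(100, 4))
efficiency = float(np.mean(anomaly_score(signal) > threshold))
```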
-
Henry Truong - 29/11/2021, 17:40 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
In this talk we present a neural network based model to emulate matrix elements. This model improves on existing methods by taking advantage of the known factorisation properties of matrix elements to separate out the divergent regions. In doing so the neural network learns about the factorisation property in singular limits, meaning we can control the behaviour of simulated matrix elements... -
Gaia Grosso (Universita e INFN, Padova (IT)) - 29/11/2021, 17:40
We present a machine-learning based strategy to detect data departures from a given reference model, with no prior bias on the nature of the new physics responsible for the discrepancy. The main idea behind this method is to build the likelihood-ratio hypothesis test by directly translating the problem of maximizing a likelihood-ratio into the minimization of a loss function. A neural network...
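The core trick, translating a likelihood-ratio test into loss minimisation, can be sketched with a two-parameter linear model in place of the neural network; the loss below follows the standard extended-likelihood-ratio construction, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference sample (the expectation) and an observed sample whose mean is
# slightly shifted -- a stand-in for a new-physics effect. A 2-parameter
# linear model f(x) = theta0 + theta1*x replaces the talk's neural network.
ref = rng.normal(0.0, 1.0, 2000)
data = rng.normal(0.3, 1.0, 2000)

def loss(theta):
    """Minimising this loss maximises the extended likelihood ratio."""
    f_ref = theta[0] + theta[1] * ref
    f_data = theta[0] + theta[1] * data
    return np.sum(np.exp(f_ref) - 1.0) - np.sum(f_data)

def grad(theta):
    e = np.exp(theta[0] + theta[1] * ref)
    return np.array([np.sum(e) - len(data),
                     np.sum(e * ref) - np.sum(data)])

theta = np.zeros(2)
for _ in range(2000):              # plain gradient descent
    theta -= 2e-4 * grad(theta)

# Test statistic: large values signal a departure from the reference model.
t_obs = -2.0 * loss(theta)
```

With no shift between the two samples, t_obs would fluctuate around small values; the injected 0.3 shift makes it large.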
-
David Rousseau (IJCLab-Orsay) - 29/11/2021, 17:40 - Track 1: Computing Technology for Physics Research (Oral)
Future HEP experiments will have ever higher read-out rates. It is therefore essential to explore new hardware paradigms for large-scale computation. In this work we consider the Optical Processing Unit (OPU) from [LightOn][1], an optical device that computes, in a fast analog way, the multiplication of an input vector of size 1 million by a fixed 1 million x 1 million random matrix,...
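The operation such a device performs can be emulated digitally at toy scale. The intensity-detected random projection y = |Rx|² below, and all sizes, are assumptions for illustration, not the device specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Digital emulation of an optical random projection: a fixed random complex
# matrix R, with the detector measuring intensities |R x|**2. Sizes here are
# tiny stand-ins for the 10^6 x 10^6 transform mentioned in the abstract.
n_in, n_out = 64, 256
R = (rng.normal(size=(n_out, n_in)) +
     1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

def opu_features(x):
    """Nonlinear random features: elementwise squared modulus of R @ x."""
    return np.abs(R @ x) ** 2

x = rng.normal(size=n_in)
y = opu_features(x)
```

Because R is fixed and random, such features can feed downstream learning (e.g. kernel approximations) without ever storing a huge trained matrix.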
-
Bruno Alves (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part) - 29/11/2021, 18:00 - Track 1: Computing Technology for Physics Research (Oral)
We present a decisive milestone in the challenging event reconstruction of the CMS High Granularity Calorimeter (HGCAL): the deployment to the official CMS software of the GPU version of the clustering algorithm (CLUE). The direct GPU linkage of CLUE to the preceding energy deposits calibration step is thus made possible, avoiding data transfers between host and device, further extending the...
-
Mariia Demianenko (HSE University, Moscow Institute of Physics and Technology (National Research University)) - 29/11/2021, 18:00
Photometric data-driven classification of supernovae is one of the fundamental problems in astronomy. Recent studies have demonstrated the superior quality of solutions based on various machine learning models. These models learn to classify supernova types using their light curves as inputs. Preprocessing of these curves is a crucial step that significantly affects the final quality. In this...
-
Marvin Gerlach (Karlsruhe Institute of Technology) - 29/11/2021, 18:00 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
The demand for precision predictions in high energy physics has increased tremendously over recent years. Its importance is visible in the light of current experimental efforts to test the predictive power of the Standard Model of particle physics (SM) to a never-before-seen accuracy. Thus, advanced computer software is a key technology for enabling phenomenological computations...
-
Mr Stefano Vergani (University of Cambridge) - 29/11/2021, 18:20
Over the last ten years, the popularity of Machine Learning (ML) has grown exponentially in all scientific fields, including particle physics. Industry has also developed powerful new tools that, imported into academia, could revolutionise research. One recent industry development that has not yet come to the attention of the particle physics community is Collaborative Learning (CL), a...
-
Dr Giuseppe De Laurentis (Freiburg University) - 29/11/2021, 18:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
Scattering amplitudes in perturbative quantum field theory exhibit a rich structure of zeros, poles and branch cuts which are best understood in complexified momentum space. It has been recently shown that leveraging this information can significantly simplify both analytical reconstruction and final expressions for the rational coefficients of transcendental functions appearing in...
-
Dr Sofia Vallecorsa (CERN) - 29/11/2021, 18:20 - Track 1: Computing Technology for Physics Research (Oral)
The Worldwide LHC Computing Grid (WLCG) is the infrastructure enabling the storage and processing of the large amount of data generated by the LHC experiments, in particular the ALICE experiment. With the foreseen increase in the computing requirements of the future High-Luminosity LHC experiments, a data placement strategy which increases the efficiency of the WLCG computing...
-
Stephen Nicholas Swatman (University of Amsterdam (NL)) - 29/11/2021, 18:40 - Track 1: Computing Technology for Physics Research (Oral)
Programmers using the C++ programming language are increasingly taught to manage memory implicitly through containers provided by the C++ standard library. However, many heterogeneous programming platforms require explicit allocation and deallocation of memory, which is often discouraged in “best practice” C++ programming, and this discrepancy in memory management strategies can be daunting...
-
Dr Vicent Mateu Barreda (University of Salamanca) - 29/11/2021, 18:40 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
In this talk I will present REvolver, a C++ library for renormalization group evolution and automatic flavor matching of the QCD coupling and quark masses, as well as precise conversion between various quark mass renormalization schemes. The library systematically accounts for the renormalization group evolution of low-scale short-distance masses which depend linearly on the renormalization...
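As a point of reference for what such a library computes, here is the textbook one-loop running of the QCD coupling; REvolver itself works to far higher orders and handles flavour thresholds automatically, and the default numbers below are only illustrative:

```python
import math

def alpha_s_one_loop(mu, mu0=91.1876, alpha0=0.118, nf=5):
    """One-loop running: alpha_s(mu) obtained by integrating
    d alpha / d ln(mu) = -b0/(2 pi) * alpha^2 with b0 = 11 - 2*nf/3.
    Defaults anchor the coupling at the Z mass (illustrative values)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha0 / (1.0 + alpha0 * b0 / (2.0 * math.pi) * math.log(mu / mu0))
```

The coupling decreases towards higher scales (asymptotic freedom): for example `alpha_s_one_loop(1000.0)` comes out below the input value 0.118.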
-
Kai Habermann (University of Bonn) - 29/11/2021, 18:40
The Self-Organizing Map (SOM) is a widely used neural net for data analysis, dimension reduction and clustering. It has yet to find use in high energy particle physics. This paper discusses two applications of SOM in particle physics. First, we were able to obtain high separation of rare processes in regions of the dimensionally reduced representation. Second, we obtained Monte Carlo... -
Ingo Müller (ETH Zurich) - 29/11/2021, 19:00 - Track 1: Computing Technology for Physics Research (Oral)
In the domain of high-energy physics (HEP), query languages in general and SQL in particular have found limited acceptance. This is surprising since HEP data analysis matches the SQL model well: the data is fully structured and queried using mostly standard operators. To gain insights on why this is the case, we perform a comprehensive analysis of six diverse, general-purpose data processing...
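A sense of why HEP analysis maps well onto SQL: a typical "event loop" cut becomes a short declarative query. The schema and cuts below are invented, and Python's built-in sqlite3 stands in for the engines studied:

```python
import sqlite3

# A toy "muons" table: HEP data is fully structured, so per-event selections
# map naturally onto standard SQL operators. All values are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE muons (event_id INTEGER, pt REAL, eta REAL)")
con.executemany("INSERT INTO muons VALUES (?, ?, ?)", [
    (1, 45.0, 0.3), (1, 30.0, -1.2),
    (2, 12.0, 2.1),
    (3, 60.0, 0.1), (3, 25.0, 1.9),
])

# "Select events with at least two muons above 20 GeV": an explicit event
# loop in many HEP frameworks, a single declarative query here.
rows = con.execute("""
    SELECT event_id, COUNT(*) AS n_mu
    FROM muons
    WHERE pt > 20
    GROUP BY event_id
    HAVING COUNT(*) >= 2
""").fetchall()
```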
-
Vitalii Maheria - 29/11/2021, 19:00 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
pySecDec is a tool for Monte Carlo integration of multiloop Feynman integrals (or parametric integrals in general) using the sector decomposition strategy. Its latest release contains two major features: the ability to expand integrals in kinematic limits using the expansion-by-regions approach, and the ability to optimize the integration of weighted sums of integrals, maximizing the obtained...
-
Kinga Anna Wozniak (University of Vienna (AT)) - 29/11/2021, 19:00
We investigate supervised and unsupervised quantum machine learning algorithms in the context of typical data analyses at the LHC. To deal with constraints on the problem size, dictated by limitations of the quantum hardware, we concatenate the quantum algorithms to the encoder of a classical autoencoder, used for dimensionality reduction. We show results for a quantum classifier and a quantum...
-
Dr Anthony Hartin (UCL) - 29/11/2021, 19:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
Non-perturbative QED is used to predict beam backgrounds at the interaction point of colliders, in calculations of Schwinger pair creation, and in precision QED tests with ultra-intense lasers. In order to predict these phenomena, custom-built Monte Carlo event generators based on a suitable non-perturbative theory have to be developed. One such suitable theory uses the Furry Interaction... -
Roman N. Lee - 30/11/2021, 15:00
Multiloop calculations are vital for obtaining high-precision predictions in the Standard Model. In particular, such predictions are important for the possibility of discovering New Physics, which is expected to reveal itself in tiny deviations. The methods of multiloop calculations have been evolving rapidly for a few decades now. New algorithms as well as their specific software implementations appear...
-
Alberto Broggi - 30/11/2021, 15:30
Autonomous driving is an extremely hot topic, and the whole automotive industry is now working hard to transition from research to products. Deep learning and the progress of silicon technology are the main enabling factors that boosted the industry interest and are currently pushing the automotive sector towards futuristic self-driving cars. Computer vision is one of the most important...
-
Josh Bendavid (CERN) - 30/11/2021, 16:00
The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". I will discuss the evolution and current state of analysis data formats, software, infrastructure and workflows at the LHC, and the directions being taken towards fast, efficient, and...
-
Christina Agapopoulou (Centre National de la Recherche Scientifique (FR)) - 30/11/2021, 17:00 - Track 1: Computing Technology for Physics Research (Oral)
From 2022 onward, the upgraded LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), a subset of the full offline track reconstruction for charged particles is run to select particles of interest based on single or...
-
Lukas Alexander Heinrich (CERN), Michael Aaron Kagan (SLAC National Accelerator Laboratory (US)) - 30/11/2021, 17:00 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
We introduce the differentiable simulator MadJax, an implementation of the general-purpose matrix element generator MadGraph integrated within the Jax differentiable programming framework in Python. Integration is performed during automated matrix element code generation and subsequently enables automatic differentiation through leading order matrix element calculations. MadJax thus...
-
Dalila Salamani (CERN) - 30/11/2021, 17:00
High energy physics experiments rely on Monte Carlo simulation to accurately model their detector response. Dominated most of the time by shower simulation in the calorimeter, detector response modelling is time-consuming and CPU-intensive, especially with the upcoming High-Luminosity LHC upgrade. Several research directions have investigated the use of Machine Learning based models to...
-
Z.D. Kassabov-Zaharieva - 30/11/2021, 17:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
We present the software framework underlying the NNPDF4.0 global determination of parton distribution functions (PDFs). The code is released under an open source licence and is accompanied by extensive documentation and examples. The code base is composed of a PDF fitting package, tools to handle experimental data and to efficiently compare it to theoretical predictions, and a versatile...
-
Andrea Bocci (CERN), CMS Collaboration - 30/11/2021, 17:20 - Track 1: Computing Technology for Physics Research (Oral)
At the start of the upcoming LHC Run-3, CMS will deploy a heterogeneous High Level Trigger farm composed of x86 CPUs and NVIDIA GPUs. In order to guarantee that the HLT can run on machines without any GPU accelerators - for example as part of the large scale Monte Carlo production running on the grid, or when individual developers need to optimise specific triggers - the HLT reconstruction has...
-
Sergei Mokhnenko (National Research University Higher School of Economics (RU)) - 30/11/2021, 17:20
The increasing luminosities of future data taking at the Large Hadron Collider and next-generation collider experiments require an unprecedented number of simulated events to be produced. Such large-scale productions demand a significant amount of valuable computing resources. This brings a demand for new approaches to event generation and simulation of detector responses. In this talk, we...
-
Oriel Orphee Moira Kiss (Universite de Geneve (CH)) - 30/11/2021, 17:40
Generative models (GM) are powerful tools to help validate theories by reducing the computation time of Monte Carlo (MC) simulations. GMs can learn expensive MC calculations and generalize to similar situations. In this work, we compare a classical generative adversarial network (GAN) approach with a Born machine, in both its discrete (QCBM) and continuous (CVBM) forms, while...
-
Nuno Dos Santos Fernandes (LIP Laboratorio de Instrumentacao e Fisica Experimental de Particulas (PT)) - 30/11/2021, 17:40 - Track 1: Computing Technology for Physics Research (Oral)
After the Phase II Upgrade of the LHC, expected for the period between 2025-26, the average number of collisions per bunch crossing at the LHC will increase from the Run-2 average value of 36 to a maximum of 200 pile-up proton-proton interactions per bunch crossing. The ATLAS detector will also undergo a major upgrade programme to be able to operate under such harsh conditions with the... -
Stefano Carrazza (CERN) - 30/11/2021, 17:40 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
We present Qibo, a new open-source framework for fast evaluation of quantum circuits and adiabatic evolution which takes full advantage of hardware accelerators, quantum hardware calibration and control, and a large codebase of algorithms for applications in HEP and beyond. The growing interest in quantum computing and the recent developments of quantum hardware devices motivate the development...
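The computational core of any state-vector backend can be sketched in a few lines; this is not Qibo's API, just the underlying tensor contraction a NumPy-like backend performs:

```python
import numpy as np

# Minimal state-vector simulation: n qubits live in a complex vector of
# length 2**n, and a single-qubit gate acts on one tensor index.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_gate(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    # Contract the gate with the chosen axis, then restore the axis order.
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |000>
for q in range(n):                              # Hadamard on every qubit
    state = apply_gate(state, H, q, n)
```

After the loop the state is the uniform superposition over all 8 basis states, which is what hardware-accelerated backends compute at vastly larger n.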
-
Joshua Falco Beirer (CERN, Georg-August-Universitaet Goettingen (DE)) - 30/11/2021, 18:00
AtlFast3 is the next generation of high-precision fast simulation in ATLAS. It is being deployed by the collaboration and will replace AtlFastII, the fast simulation tool successfully used until now. AtlFast3 combines two Fast Calorimeter Simulation tools: a parameterization-based approach and a machine-learning based tool exploiting Generative Adversarial Networks (GANs). AtlFast3...
-
Antonio Pineda - 30/11/2021, 18:00 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
We compute the coefficients of the perturbative expansions of the plaquette, and of the self-energy of static sources in the triplet and octet representation, up to very high orders in perturbation theory. We use numerical stochastic perturbation theory and lattice regularization. We explore whether the results obtained comply with expectations from renormalon dominance, and what they may say... -
Kai Lukas Unger (Karlsruhe Institute of Technology (KIT)) - 30/11/2021, 18:00 - Track 1: Computing Technology for Physics Research (Oral)
The z-vertex track trigger estimates the collision origin in the Belle II experiment using neural networks to reduce the background. The main part is a pre-trained multilayer perceptron. The task of this perceptron is to estimate the z-vertex of the collision to suppress background from outside the interaction point. For this, a low latency real-time FPGA implementation is needed. We present...
-
Paul de Bryas (EPFL - Ecole Polytechnique Federale Lausanne (CH)) - 30/11/2021, 18:20
SND@LHC is a newly approved detector under construction at the LHC, aimed at studying the interactions of neutrinos of all flavours produced by proton-proton collisions at the LHC. The energy range under study, from a few hundred MeV up to about 5 TeV, is currently unexplored. In particular, electron neutrino and tau neutrino cross sections are unknown in that energy range, whereas muon neutrino...
-
Timo Janßen (Georg-August-Universität Göttingen) - 30/11/2021, 18:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
Modern machine learning methods offer great potential for increasing the efficiency of Monte Carlo event generators. We present the latest developments in the context of the event generation framework SHERPA. These include phase space sampling using normalizing flows and a new unweighting procedure based on neural network surrogates for the full matrix elements. We discuss corresponding...
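The surrogate unweighting idea can be sketched as a two-stage hit-or-miss: unweight cheaply against the surrogate, then correct the survivors with the exact-to-surrogate ratio so the final sample stays unbiased. Both weight functions below are invented stand-ins, not SHERPA's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins: the "exact" weight plays the expensive full matrix element,
# the surrogate plays the cheap neural-network approximation of it.
def exact_weight(x):
    return np.exp(-x) * (1.0 + 0.1 * np.sin(5 * x))

def surrogate(x):
    return np.exp(-x)

x = rng.uniform(0.0, 1.0, 100_000)

# Stage 1: hit-or-miss unweighting against the cheap surrogate only.
s = surrogate(x)
keep = rng.uniform(size=x.size) < s / s.max()
x1 = x[keep]

# Stage 2: evaluate the expensive weight only for survivors and accept with
# probability proportional to the ratio exact/surrogate, removing the bias.
ratio = exact_weight(x1) / surrogate(x1)
keep2 = rng.uniform(size=x1.size) < ratio / ratio.max()
events = x1[keep2]
```

The expensive function is called only on stage-1 survivors, which is where the speed-up comes from when the surrogate is accurate (the ratio stays near 1).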
-
Andrei Gheata (CERN) - 30/11/2021, 18:20 - Track 1: Computing Technology for Physics Research (Oral)
Several online and offline applications in high-energy physics have benefitted from running on graphics processing units (GPUs), taking advantage of their processing model. To date, however, general HEP particle transport simulation is not one of them, due to difficulties in mapping the complexity of its components and workflow to the GPU’s massive parallelism features. Deep code stacks, with...
-
Ioana Ifrim (Princeton University (US)) - 30/11/2021, 18:40 - Track 1: Computing Technology for Physics Research (Oral)
Automatic Differentiation (AD) is instrumental for science and industry. It is a tool to evaluate the derivative of a function specified through a computer program. The range of AD application domains spans from Machine Learning to Robotics to High Energy Physics. Computing gradients with the help of AD is guaranteed to be more precise than the numerical alternative and to have at most a constant...
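Why AD is exact where numerical differentiation is not can be seen from the textbook dual-number construction; this is a minimal sketch of forward-mode AD, not how any production tool is implemented:

```python
class Dual:
    """Forward-mode AD: carry (value, derivative) through arithmetic,
    applying the sum and product rules at every operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x

# d/dx (x^3 + 2x) at x = 1.5 is 3*1.5**2 + 2 = 8.75.
ad = f(Dual(1.5, 1.0)).dot                                  # exact
fd = (f(Dual(1.5 + 1e-6)).val - f(Dual(1.5)).val) / 1e-6    # finite difference
```

The dual-number result is exact to machine precision, while the finite difference carries truncation and cancellation error that worsens for less benign functions.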
-
Humberto Reyes-González (University of Genoa) - 30/11/2021, 18:40 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
Normalizing Flows (NFs) are emerging as a powerful class of generative models, as they not only allow for efficient sampling but also deliver density estimates by construction. They are of great potential use in High Energy Physics (HEP), where we unavoidably deal with complex, high-dimensional data and probability distributions are everyday fare. However, in order to fully leverage the...
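The two defining properties, sampling and exact density evaluation from one construction, already appear in a one-layer affine "flow" over a Gaussian base; real NFs stack many learned layers, while the parameters here are fixed by hand:

```python
import numpy as np

rng = np.random.default_rng(4)

# An invertible affine map x = a*z + b applied to a standard normal base.
a, b = 2.0, 1.0

def sample(n):
    """Sampling: push base samples z ~ N(0,1) through the transform."""
    return a * rng.normal(size=n) + b

def log_prob(x):
    """Density by change of variables:
    log p(x) = log N(z) - log|dx/dz|, with z = (x - b) / a."""
    z = (x - b) / a
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(abs(a))

xs = sample(50_000)
```

Both directions use the same two numbers (a, b): that coupling of sampler and density is exactly what makes flows attractive for HEP phase-space problems.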
-
Yee Chinn Yap (Deutsches Elektronen-Synchrotron (DE)) - 30/11/2021, 18:40
The LUXE experiment (LASER Und XFEL Experiment) is a new experiment in planning at DESY Hamburg that will study Quantum Electrodynamics (QED) at the strong-field frontier. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum. LUXE intends to measure the positron production rate in this unprecedented regime by...
-
Marco Barbone (Imperial College London) - 30/11/2021, 19:00 - Track 1: Computing Technology for Physics Research (Oral)
We present results from a stand-alone simulation of electron single Coulomb scattering implemented completely on an FPGA architecture and compared with an identical simulation on a standard CPU. FPGA architectures offer unprecedented speed-up capability for Monte Carlo simulations, however with the caveats of lengthy development cycles and resource limitations, particularly in terms of...
-
Kang-Hun Ahn - 01/12/2021, 15:00
Human hearing has an astonishing ability that even advanced technology cannot imitate. The difference in energy between the quietest and loudest audible sounds is about a trillion-fold. The frequency resolution is also excellent: the ear can distinguish a frequency difference of about 4 Hz. What is more surprising is that a sound can be heard even in the presence of noise louder than the sound of...
-
Ruth Mueller - 01/12/2021, 15:30
In this talk, I will discuss the impacts of what has been termed a growing culture of speed and hypercompetition in the academic sciences. Drawing on qualitative social sciences research in the life sciences, I will discuss how acceleration and hypercompetition impact epistemic diversity in science, i.e. the range of research topics researchers consider they can address, as well as human...
-
Lenka Zdeborova - 01/12/2021, 16:30
The affinity between statistical physics and machine learning has a long history; I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward...
-
Wenhao Huang (Shandong University) - 01/12/2021, 17:00 - Track 1: Computing Technology for Physics Research (Oral)
The Super Tau Charm Facility (STCF) is a high-luminosity electron–positron collider proposed in China, for the study of charm and tau physics. The Offline Software of the Super Tau Charm Facility (OSCAR) is designed and developed based on SNiPER, a lightweight common framework for HEP experiments. Several state-of-the-art software packages and tools from the HEP community are adopted, such as the Detector... -
Ryan Moodie (IPPP, Durham University) - 01/12/2021, 17:00 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
Phenomenological studies of high-multiplicity scattering processes at collider experiments present a substantial theoretical challenge and are increasingly important ingredients in experimental measurements. We investigate the use of neural networks to approximate matrix elements for these processes, studying the case of loop-induced diphoton production through gluon fusion. We train neural...
-
CMS Collaboration, Felice Pantaleo (CERN) - 01/12/2021, 17:00
To sustain the harsher conditions of the high-luminosity LHC, the CMS collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly-granular information in the readout to help mitigate the effects of pileup. In regions characterized by lower radiation levels, small...
-
Eric Wulff (CERN) - 01/12/2021, 17:20
In the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE), researchers from science and industry develop novel, scalable Artificial Intelligence technologies towards Exascale. In this work, we leverage HPC resources to perform large scale hyperparameter optimization using distributed training on multiple compute nodes,...
-
Yixiang Yang (Institute of High Energy Physics) - 01/12/2021, 17:20 - Track 1: Computing Technology for Physics Research (Oral)
The JUNO experiment is being built mainly to determine the neutrino mass hierarchy by detecting neutrinos generated in the Yangjiang and Taishan nuclear plants in southern China. The detector will record 2 PB of raw data every year, but each day it can collect only about 60 neutrino events scattered among huge numbers of background events. Selection of extremely sparse neutrino events poses a big challenge...
-
Jakub Marcin Krys (University of Turin) - 01/12/2021, 17:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
In this talk, I present the computation of the two-loop helicity amplitudes for Higgs boson production in association with a bottom quark pair. I give an overview of the method and describe how computational bottlenecks can be overcome by using finite field reconstruction to obtain analytic expressions from numerical evaluations. I also show how the method of differential equations allows us...
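Finite-field reconstruction in miniature: evaluate a "black box" numerically modulo a large prime, then recover the analytic form by interpolation over GF(p). Production tools handle multivariate rational functions; a univariate polynomial with small integer coefficients stands in here:

```python
# All functions and coefficients below are invented for illustration.
P = 2147483647                       # a Mersenne prime

def black_box(x):
    """Pretend this is an expensive numerical amplitude evaluation mod P."""
    return (3 * x**2 + 7 * x + 42) % P

def interpolate(points):
    """Lagrange interpolation over GF(P); returns coefficients, low degree first."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        # Build the i-th Lagrange basis polynomial as a coefficient list.
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            # Multiply the basis polynomial by (x - xj).
            basis = [(b - xj * a) % P for a, b in
                     zip(basis + [0], [0] + basis)]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P       # modular inverse of denom
        coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
    return coeffs

# Three evaluations suffice to pin down a degree-2 polynomial exactly.
pts = [(x, black_box(x)) for x in (5, 17, 91)]
recovered = interpolate(pts)
```

Because arithmetic mod P is exact, the coefficients come back without any floating-point noise, which is the whole point of the finite-field approach.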
-
Matteo Concas (INFN Torino (IT)) - 01/12/2021, 17:40
During LHC Run 3 the ALICE online computing farm will process up to 50 times more Pb-Pb events per second than in Run 2. The implied scaling of computing resources requires a shift in approach, comprising the extensive use of Graphics Processing Units (GPUs) for the processing. We will give an overview of the state of the art of data reconstruction on GPUs in ALICE, with...
-
Dr Andreas Maier (DESY) - 01/12/2021, 17:40 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
We propose a novel method for the elimination of negative Monte Carlo event weights. The method is process-agnostic, independent of any analysis, and preserves all physical observables. We demonstrate the overall performance and systematic improvement with increasing event sample size, based on predictions for the production of a W boson with two jets calculated at next-to-leading order... -
Riccardo Maria Bianchi (University of Pittsburgh (US)) - 01/12/2021, 17:40 - Track 1: Computing Technology for Physics Research (Oral)
The GeoModel toolkit is an open-source suite of standalone tools that empowers the user with lightweight tools to describe, visualize, test, and debug detector descriptions and geometries for HEP standalone studies and experiments. GeoModel has been designed with independence and responsiveness in mind and offers a development environment free of other large HEP tools and frameworks, and with...
-
Jonas Rembser (CERN) - 01/12/2021, 18:00
RooFit is a toolkit for statistical modelling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analysis becomes more computationally demanding. Therefore, the focus of RooFit developments in recent years was performance optimization....
-
Joana Niermann (Georg August Universitaet Goettingen (DE)) - 01/12/2021, 18:00 - Track 1: Computing Technology for Physics Research (Oral)
A detailed geometry description is essential to any high quality track reconstruction application. In current C++ based track reconstruction software libraries this is often achieved by an object oriented, polymorphic geometry description that implements different shapes and objects by extending a common base class. Such a design, however, has been shown to be problematic when attempting to...
-
Jannis Lang (Karlsruhe Institute of Technology (KIT)) - 01/12/2021, 18:00 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
We present results for Higgs boson pair production in gluon fusion, including both NLO (2-loop) QCD corrections with full top quark mass dependence and anomalous couplings related to operators describing effects of physics beyond the Standard Model. The latter can be realized in non-linear (HEFT) or linear (SMEFT) Effective Field Theory frameworks. We show results for both and discuss... -
Wolfgang Waltenberger (Austrian Academy of Sciences (AT)) - 01/12/2021, 18:20 - Track 3: Computations in Theoretical Physics: Techniques and Methods (Oral)
In view of the null results (so far) in the numerous channel-by-channel searches for new particles at the LHC, it becomes increasingly relevant to change perspective and attempt a more global approach to finding out where BSM physics may hide. To this end, we developed a novel statistical learning algorithm that is capable of identifying potential dispersed signals in the slew of published LHC...
-
Florian Till Groetschla (KIT - Karlsruhe Institute of Technology (DE)) - 01/12/2021, 18:20 - Track 1: Computing Technology for Physics Research (Oral)
The performance of I/O intensive applications is largely determined by the organization of data and the associated insertion/extraction techniques. In this paper we present the design and implementation of an application that is targeted at managing data received (up to ~150 Gb/s payload throughput) into host DRAM, buffering data for several seconds, matched with the DRAM size, before being...
-
Philippe Debie (Wageningen University, Wageningen Economic Research) - 01/12/2021, 18:20
The analysis of high-frequency financial trading data faces problems similar to those of High Energy Physics (HEP) analysis. The data is noisy, irregular in shape, and large in size. Recent research on the intra-day behaviour of financial markets shows a lack of tools specialized for finance data, and describes this problem as a computational burden. In contrast to HEP data, finance data consists of...
-
Burak Sen (Middle East Technical University), Changgi Huh (Kyungpook National University (KR)), Gokhan Unel (University of California Irvine (US)), Gordon Watts (University of Washington (US)), Harry Prosper (Florida State University (US)), Mason Proffitt (University of Washington (US)), Sezen Sekmen (Kyungpook National University (KR))01/12/2021, 18:40
We present two applications of declarative interfaces for HEP data analysis that allow users to avoid writing event loops, which simplifies code and enables performance improvements to be decoupled from analysis development. One example is FuncADL, an analysis description language inspired by functional programming and developed using Python as a host language. In addition to providing a declarative,...
Go to contribution page -
Vladyslav Shtabovenko (KIT)01/12/2021, 18:40Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
FeynCalc is esteemed by many particle theorists as a very useful tool for tackling symbolic Feynman diagram calculations with a great amount of transparency and flexibility. While the program enjoys an excellent reputation when it comes to tree-level and 1-loop calculations, the usefulness of FeynCalc in multi-loop projects is often doubted by practitioners. In this talk I will...
Go to contribution page -
Alexei Klimentov (Brookhaven National Laboratory (US))01/12/2021, 18:40Track 1: Computing Technology for Physics ResearchOral
The High Luminosity upgrade to the LHC, which aims for a ten-fold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2028/29, and will deliver an unprecedented volume of scientific data at the multi-exabyte scale. This amount of data has to be stored and the corresponding storage system must ensure fast and reliable data delivery...
Go to contribution page -
Piotr Konopka (CERN)01/12/2021, 19:00Track 1: Computing Technology for Physics ResearchOral
The ALICE experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during the LHC Long Shutdown 2 in 2019-2021, which includes a new computing system called O2 (Online-Offline). The raw data input from the ALICE detectors will increase a hundredfold, up to 3.5 TB/s. By reconstructing the data online, it will be possible to compress the data stream down to 100 GB/s...
Go to contribution page -
Joseph Lykken02/12/2021, 09:00
The technology of quantum computers and related systems is advancing rapidly, and powerful programmable quantum processors are already being made available by various companies. Long before we reach the promised land of fully fault tolerant large scale quantum computers, it is possible that unambiguous “quantum advantage” will be demonstrated for certain kinds of problems, including problems...
Go to contribution page -
Barry Sanders02/12/2021, 09:30Invited plenary
I provide a perspective on the development of quantum computing for data science, including a dive into state-of-the-art for both hardware and algorithms and the potential for quantum machine learning.
Go to contribution page -
Joshua Isaacson02/12/2021, 10:00
With the High Luminosity LHC coming online in the near future, event generators will need to generate a similarly large number of events. The current estimated cost of generating these events exceeds the computing budget of the LHC experiments. To address this issue, the event generators need to improve their speed. Many different approaches are being taken to achieve this goal. I...
Go to contribution page -
Nick Smith (Fermi National Accelerator Lab. (US))02/12/2021, 11:00Track 1: Computing Technology for Physics ResearchOral
Query languages for High Energy Physics (HEP) are an ever present topic within the field. A query language that can efficiently represent the nested data structures that encode the statistical and physical meaning of HEP data will help analysts by ensuring their code is more clear and pertinent. As the result of a multi-year effort to develop an in-memory columnar representation of high energy...
Go to contribution page -
Mixed QCD-EW two-loop amplitudes for neutral current Drell-Yan production. Dr Narayan Rana (INFN Milan)02/12/2021, 11:00Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
We present the mixed QCD-EW two-loop virtual amplitudes for the neutral current Drell-Yan production. The evaluation of the two-loop amplitudes is one of the bottlenecks for the complete calculation of the NNLO mixed QCD-EW corrections. We present the computational details, especially the evaluation of all the relevant two-loop Feynman integrals using analytical and semi-analytical methods. We...
Go to contribution page -
Dr Wenxing Fang02/12/2021, 11:00
The Circular Electron Positron Collider (CEPC) [1] is one of the future experiments which aim to study the properties of the Higgs boson and to perform searches for new physics beyond the Standard Model. The drift chamber is a design option for the outer tracking detector. With the development of new technology in electronics, employing the primary ionization counting method [2-3] to identify charged...
Go to contribution page -
Jim Pivarski (Princeton University)02/12/2021, 11:20Track 1: Computing Technology for Physics ResearchOral
Awkward Array 0.x was written entirely in Python, and Awkward Array 1.x was a fresh rewrite with a C++ core and a Python interface. Ironically, the Awkward Array 2.x project is translating most of that core back into Python (leaving the interface untouched). This is because we discovered surprising and subtle issues in Python-C++ integration that can be avoided with a more minimal coupling: we...
Go to contribution page -
Torri Jeske (Thomas Jefferson National Accelerator Facility)02/12/2021, 11:20
The AI for Experimental Controls project at Jefferson Lab is developing an AI system to control and calibrate a large drift chamber system in near-real time. The AI system will monitor environmental variables and beam conditions to recommend new high voltage settings that maintain consistent dE/dx gain and optimal resolution throughout the experiment. At present, calibrations are performed...
Go to contribution page -
Chaitanya Paranjape (IIT Dhanbad)02/12/2021, 11:20Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
We present an application of major new features of the program pySecDec, which is a program to calculate parametric integrals, in particular multi-loop integrals, numerically. One important new feature is the ability to integrate weighted sums of integrals in a way which is optimised to reach a given accuracy goal on the sums rather than on the individual integrals; another one is the option...
Go to contribution page -
Wuming Luo (Institute of High Energy Physics, Chinese Academy of Science)02/12/2021, 11:40
The Jiangmen Underground Neutrino Observatory (JUNO), located in the southern part of China, will be the world's largest liquid scintillator (LS) detector. Equipped with 20 kton of LS, 17623 20-inch PMTs and 25600 3-inch PMTs, JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. One of the challenges for JUNO is the high precision...
Go to contribution page -
Gene Van Buren (Brookhaven National Laboratory), Jerome LAURET (Brookhaven National Laboratory), Ivan Amos Cali (Massachusetts Inst. of Technology (US)), Dr Juan Gonzalez (Accelogic), Philippe Canal (Fermi National Accelerator Lab. (US)), Mr Rafael Nunez, Yueyang Ying (Massachusetts Inst. of Technology (US))02/12/2021, 11:40Track 1: Computing Technology for Physics ResearchOral
For the last 7 years, Accelogic has pioneered and perfected a radically new theory of numerical computing codenamed "Compressive Computing", which has an extremely profound impact on real-world computer science [1]. At the core of this new theory is the discovery of one of its fundamental theorems, which states that, under very general conditions, the vast majority (typically between 70% and 80%) of...
Go to contribution page -
Alina Lazar (Youngstown State University)02/12/2021, 12:00Track 1: Computing Technology for Physics ResearchOral
Recently, graph neural networks (GNNs) have been successfully used for a variety of reconstruction problems in HEP. In this work, we develop and evaluate an end-to-end C++ implementation for inferencing a charged-particle tracking pipeline based on GNNs. The pipeline steps include data encoding, graph building, edge filtering, GNN inference and track labeling, and it runs on both GPUs and CPUs. The ONNX...
Go to contribution page -
Dr Teng LI (Shandong University, CN)02/12/2021, 12:00
Particle identification is one of the most fundamental tools in various particle physics experiments. For the BESIII experiment at BEPCII, the realization of numerous physics goals relies heavily on advanced particle identification algorithms. In recent years, the emergence of quantum machine learning could potentially arm particle physics experiments with a powerful new toolbox. In this work,...
Go to contribution page -
Sergey Volkov02/12/2021, 12:00Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
A high-precision calculation of lepton magnetic moments requires an evaluation of QED Feynman diagrams up to five independent loops.
Go to contribution page
These calculations are still important:
1) the 5-loop contributions with lepton loops to the electron g-2 are still not double-checked (and can potentially be sensitive in experiments);
2) there is a discrepancy in different calculations of the 5-loop... -
Mr Yahor Dydyshka (Joint Institute for Nuclear Research, Dubna)02/12/2021, 12:20Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
An algorithm for the spinor amplitudes with massive particles is implemented in the SANC computer system framework.
The procedure for simplifying expressions with spinor products is based on the little group technique in six-dimensional space-time.
Amplitudes for bremsstrahlung processes $e^+e^- \to (e^+e^-/\mu^+\mu^-/HZ/Z\gamma/\gamma\gamma) + \gamma$ are obtained in gauge-covariant form...
Go to contribution page -
Joosep Pata (National Institute of Chemical Physics and Biophysics (EE))02/12/2021, 12:20
The particle-flow (PF) algorithm at CMS combines information across different detector subsystems to reconstruct a global particle-level picture of the event. At a fundamental level, tracks are extrapolated to the calorimeters and the muon system, and combined with energy deposits to reconstruct charged and neutral hadron candidates, as well as electron, photon and muon candidates.
In...
Go to contribution page -
Sophie Berkman (Fermi National Accelerator Laboratory)02/12/2021, 12:20Track 1: Computing Technology for Physics ResearchOral
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. Liquid argon time projection chamber (LArTPC) neutrino experiments are expected to grow in the next decade to have 100 times more...
Go to contribution page -
Dr Elise de Doncker (Western Michigan University)02/12/2021, 12:40Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In recent work we computed 4-loop integrals for self-energy diagrams with 11 massive internal lines. Presently we perform numerical integration and regularization for diagrams with 8 to 11 lines, while considering massive and massless cases. For dimensional regularization, a sequence of integrals is computed depending on a parameter ($\varepsilon$) that is incorporated via the space-time...
Go to contribution page -
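As an illustrative aside (not the authors' code): the abstract above mentions computing a sequence of integrals depending on the regularization parameter $\varepsilon$. One generic way to take the $\varepsilon \to 0$ limit numerically, assuming the quantity is finite there (poles already subtracted), is a low-order polynomial fit:

```python
import numpy as np

def extrapolate_to_zero(eps, values, degree=2):
    """Fit values(eps) with a low-order polynomial in eps and return the
    extrapolated eps -> 0 limit (the fit's constant term). Assumes the
    quantity is finite at eps = 0, i.e. any poles have been subtracted."""
    coeffs = np.polyfit(eps, values, degree)
    return float(np.polyval(coeffs, 0.0))

# Toy data: I(eps) = 3.0 + 1.5*eps + 0.2*eps^2, so the limit is 3.0.
eps = np.array([0.4, 0.2, 0.1, 0.05])
vals = 3.0 + 1.5 * eps + 0.2 * eps**2
print(extrapolate_to_zero(eps, vals))  # ~3.0
```

In practice the sequence of $\varepsilon$ values and the fit order must be chosen to balance truncation against numerical-integration error.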
Eric Anton Moreno (California Institute of Technology (US))02/12/2021, 12:40
We present an application of anomaly detection techniques based on deep recurrent autoencoders to the problem of detecting gravitational wave signals in laser interferometers. Trained on noise data, this class of algorithms could detect signals using an unsupervised strategy, i.e., without targeting a specific kind of source. We develop a custom architecture to analyze the data from two...
Go to contribution page -
Adrien MATTA (IN2P3/CNRS, LPC Caen)03/12/2021, 15:20
Over the past decades, nuclear physics experiments have seen a drastic increase in complexity. With the arrival of second-generation radioactive ion beam facilities all over the world, the race to explore more and more exotic nuclei is raging. The low intensity of RI beams requires more complex setups, covering larger solid angles and detecting a wider variety of charged and neutral particles....
Go to contribution page -
Ms Brunella D'Anzi (Universita e INFN, Bari (IT))03/12/2021, 15:50
Artificial Neural Networks in High Energy Physics: introduction and goals
Nowadays, High Energy Physics (HEP) analyses generally take advantage of Machine Learning techniques to optimize the discrimination between signal and background, preserving as much signal as possible. Running a classical cut-based selection would imply a severe reduction of both signal and...
Go to contribution page -
Anna Kawecka (Warsaw University of Technology (PL))03/12/2021, 16:00
NA61/SHINE is a high-energy physics experiment operating at the SPS accelerator at CERN. The physics programme of the experiment was recently extended, requiring a major upgrade of the detector setup. The main goal of the upgrade is to increase the event flow rate from 80 Hz to 1 kHz by exchanging the read-out electronics of the NA61/SHINE main tracking detectors (Time Projection Chambers -...
Go to contribution page -
Atul Prajapati (Gran Sasso Science Institute)03/12/2021, 16:10
CYGNO is developing a gaseous Time Projection Chamber (TPC) for directional dark matter searches, to be hosted at Laboratori Nazionali del Gran Sasso (LNGS), Italy. CYGNO uses a He:CF4 gas mixture at atmospheric pressure and relies on a stack of Gas Electron Multipliers (GEMs) for the charge amplification. Light is produced by the electron avalanche thanks to the CF4 scintillation properties and is...
Go to contribution page -
Michael Spannowsky (University of Durham (GB))03/12/2021, 16:50
In the absence of new physics signals and in the presence of a plethora of new physics scenarios that could hide in the copiously produced LHC collision events, unbiased event reconstruction and classification methods have become a major research focus of the high-energy physics community. Unsupervised machine learning methods, often used as anomaly-detection methods, are trained on Standard...
Go to contribution page -
Axel Naumann (CERN)03/12/2021, 17:20
-
Doris Yangsoo Kim (Soongsil University), Soonwook Hwang (Korea Institute of Science & Technology Information (KR))03/12/2021, 17:45
-
David Britton (University of Glasgow (GB))03/12/2021, 17:50
-
Daniel Thomas Murnane (Lawrence Berkeley National Lab. (US))
There has been significant interest and development in the use of graph neural networks (GNNs) for jet tagging applications. These generally provide better accuracy than CNN and energy flow algorithms by exploiting a range of GNN mechanisms, such as dynamic graph construction, equivariance, attention, and large parameterizations. In this work, we present the first apples-to-apples exploration...
Go to contribution page -
Xiaocong Ai (DESY)
Computing centres, including those used to process High-Energy Physics data and simulations, are increasingly providing significant fractions of their computing resources using hardware architectures other than x86 CPUs, with GPUs being a commonly available alternative. GPUs can provide excellent computational performance at a good price point for tasks that can be suitably parallelized....
Go to contribution page -
Jingshu Li (Sun Yat-Sen University (CN))
It is usually difficult to describe the non-uniformity of the liquid in a detector, because detector simulations such as Geant4 construct the geometry in a fixed way. We propose a method based on the Geometry Description Markup Language and a tessellated detector description to share the detector geometry information between computational fluid dynamics simulation software and...
Go to contribution page -
Saverio Mariani (Universita e INFN, Firenze (IT))
An innovative approach to particle identification (PID) analyses employing machine learning techniques and its application to a physics case from the fixed-target programme at the LHCb experiment at CERN are presented. In general, a PID classifier is built by combining the response of specialized subdetectors, exploiting different techniques to guarantee redundancy and a wide kinematic...
Go to contribution page -
Kilian Lieret
The physics output of modern experimental HEP collaborations hinges not only on the quality of their software but also on the ability of the collaborators to make the best possible use of it.
With the COVID-19 pandemic making in-person training impossible, the training paradigm at Belle II was shifted towards one of guided self-study.
To that end, the study material was rebuilt from...
Go to contribution page -
Kihong Park (Korea Institute of Science and Technology Information (KISTI)), Kihyeon Cho
Because the cross section of dark matter is very small compared to that of the Standard Model (SM), a huge amount of simulation is required [1]. Hence, optimizing Central Processing Unit (CPU) time is crucial to increase the efficiency of dark matter research in HEP. In this work, the CPU time was studied using MadGraph5 as a simulation toolkit for dark matter studies at e+e- colliders. The...
Go to contribution page -
Accelerated Computation of a High Dimensional Kolmogorov-Smirnov Distance. Dr Shane Jackson (PNNL)
Surrogate modeling and data-model convergence are important in any field utilizing probabilistic modeling, including High Energy Physics and Nuclear Physics. However, demonstrating that the model produces samples from the same underlying distribution as the true source can be problematic if the data is many-dimensional. The 1-D and multi-dimensional Kolmogorov-Smirnov test (ddKS) is a...
Go to contribution page -
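For context on the abstract above (an illustrative sketch, not the ddKS implementation): in one dimension, the classical two-sample Kolmogorov-Smirnov distance is the largest gap between the two empirical CDFs; the contribution generalizes and accelerates this for many-dimensional data.

```python
import numpy as np

def ks_distance_1d(a, b):
    """Two-sample 1-D Kolmogorov-Smirnov distance: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.concatenate([a, b])  # CDF steps occur only at sample points
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

# Identical samples have distance 0; fully separated samples have distance 1.
print(ks_distance_1d([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_distance_1d([1, 2, 3], [10, 11, 12]))  # 1.0
```

The multi-dimensional case is harder precisely because there is no single natural ordering of the points, which is what the ddKS approach addresses.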
Ke Li (University of Washington (US))
ATLAS is one of the largest experiments at the Large Hadron Collider. Its broad physics program relies on very large samples of simulated events, but producing these samples is very CPU intensive when using the full Geant4 detector simulation. A parameterization-based Fast Calorimeter Simulation, AtlFast3, has been developed to replace the Geant4 simulation and meet the computing challenges....
Go to contribution page -
Mr Sergiu Weisz (University Politehnica of Bucharest (RO))
The Large Hadron Collider's third run poses new and interesting problems that all experiments have to tackle in order to fully exploit the benefits provided by the new architecture, such as the increase in the amount of data to be recorded. As part of the new developments that are taking place in the ALICE experiment, payloads that use more than a single processing...
Go to contribution page -
Erwin Rudi (Rheinisch Westfaelische Tech. Hoch. (DE))
Scale factors are commonly used in HEP to improve shape agreement between distributions of data and simulation. We present a generalized deep-learning-based architecture for producing shape-changing scale factors, investigated in the context of bottom-quark jet-tagging algorithms within the CMS experiment.
The method utilizes an adversarial approach with three networks forming the central...
Go to contribution page -
Bogdan Kutsenko (Budker Institute of Nuclear Physics (RU))
The study of the conversion decay of the omega meson into the $\pi^{0}e^{+}e^{-}$ state was performed with the CMD-3 detector at the VEPP-2000 electron-positron collider in Novosibirsk. The main physical background to the process under study is the radiative decay $\omega \to \pi^{0}\gamma$, where the monochromatic photon converts in the material in front of the detector. The deep neural network was...
Go to contribution page -
Aryan Roy
Analysis on HEP data is an iterative process in which the results of one step often inform the next. In an exploratory analysis, it is common to perform one computation on a collection of events, then view the results (often with histograms) to decide what to try next. Awkward Array is a Scikit-HEP Python package that enables data analysis with array-at-a-time operations to implement cuts as...
Go to contribution page -
Sameshan Perumal (UCT)
Particle collider experiments generate huge volumes of complex data, and a mix of experience and tenacity is usually required to understand it at the detector and reconstruction level. Event displays provide a useful visual representation of both raw and reconstructed data that can be used to accelerate this learning process towards physics results. They are also used to verify expected...
Go to contribution page -
Xiaomei Zhang (Chinese Academy of Sciences (CN)), Dr Yang Yifan (Institute of High Enery Physics)
In the near future, many new high energy physics (HEP) experiments with challenging data volume are coming into operations or are planned in IHEP, China. The DIRAC-based distributed computing system has been set up to support these experiments. To get a better utilization of available distributed computing resources, it's important to provide experimental users with handy tools for the...
Go to contribution page -
Oleg Kalashev (INR RAS)
The problem of ultra-high energy cosmic ray sources identification is greatly complicated by the fact that even highest energy cosmic rays may be deflected by tens of degrees in the galactic magnetic fields. We show that arrival directions for the deflected cosmic rays from several nearest active galaxies form specific patterns in the sky, which can be effectively recognized by the...
Go to contribution page -
Artem Uskov (Budker Institute of Nuclear Physics)
Analysis of the CMD-3 detector data: searching for low-energy electron-positron annihilation into $KK\pi$ and $KK\pi\pi^0$
A. A. Uskov, Budker Institute of Nuclear Physics, Siberian Branch of the Russian Academy of Sciences.
We explored the process $e^+e^- \to KK\pi$ with the CMD-3 detector at the electron-positron collider VEPP-2000. The data amassed by the CMD-3 detector in the...
Go to contribution page -
Stanislav Polyakov (Lomonosov Moscow State University, Skobeltsyn Institute of NUclear Physics (RU))
We use convolutional neural networks (CNNs) to analyze monoscopic and stereoscopic images of extensive air showers registered by Cherenkov telescopes of the TAIGA experiment. The networks are trained and evaluated on Monte-Carlo simulated images to identify the type of the primary particle and to estimate the energy of the gamma rays. We compare the performance of the networks trained on...
Go to contribution page -
Katharina Hafner (RWTH Aachen University)
When measuring cosmic ray induced air showers through radio waves, recovering the full three-dimensional electromagnetic field from the recorded two-dimensional voltage of an antenna is a major challenge. Antennas project the electromagnetic field into a lower dimensional space while applying a frequency dependent response and are subjected to noise contamination during measurement. We use...
Go to contribution page -
Blaze: High performance Big Data Computing System for High Energy Physics. Libin Xia (IHEP)
High energy physics (HEP) is moving towards extremely high-statistics experiments and super-large-scale simulations of theories such as the Standard Model. In order to handle the challenge of rapidly increasing data volumes, distributed computing and storage frameworks in the Big Data area, like Hadoop and Spark, make it easy to scale computations out. While the in-memory RDD-based programming model assumes...
Go to contribution page -
CMS Collaboration, Erica Brondolin (CERN)
CLUE (CLUstering of Energy) is a fast parallel clustering algorithm for High Granularity Calorimeters in High Energy Physics. In these types of detectors, such as the one to be built to cover the endcap region in the CMS Phase-2 Upgrade for the HL-LHC, the standard clusterisation algorithms using combinatorics are expected to fail due to the large number of digitised energy deposits (hits) in the...
Go to contribution page -
CMS Collaboration, Lakshmi Pramod (Deutsches Elektronen-Synchrotron (DE))
The inner tracking system of the CMS experiment, which comprises the Silicon Pixel and Silicon Strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to reconstruct the primary and secondary vertices. The movements of the different substructures of the tracker detectors, driven by the operating conditions during data taking, make it necessary to regularly...
Go to contribution page -
Sylvain Joube (IJCLab - Télécom SudParis)
The increased use of accelerators for scientific computing, together with the increased variety of hardware involved, induces a need for performance portability between at least CPUs (which largely dominate WLCG infrastructure) and GPUs (which are quickly emerging as an architecture of choice for online data processing and HPC centers). In the C/C++ community, OpenCL was a low level first...
Go to contribution page -
Maxim Potekhin (Brookhaven National Laboratory (US))
In the past decade, Data and Analysis Preservation (DAP) has gained an increased prominence in the scope of effort of major High Energy and Nuclear Physics (HEP/NP) experiments, driven by the policies of the funding agencies as well as realization of the benefits brought by DAP to the science output of many projects in the field. It is a complex domain which, in addition to archival of...
Go to contribution page -
Aleksandr Alekseev (Universidad Andres Bello (CL))
HENP experiments are preparing for the HL-LHC era, which will bring an unprecedented volume of scientific data. This data will need to be stored and processed by the collaborations, but the expected resource growth is nowhere near the extrapolated requirements of existing models, in both storage volume and compute power. In this report, we will focus on building a prototype of a distributed data processing and...
Go to contribution page -
Irakli Chakaberia (Lawrence Berkeley National Lab. (US))
Solenoidal Tracker at RHIC (STAR) is a multipurpose experiment at the Relativistic Heavy Ion Collider (RHIC) with the primary goal of studying the formation and properties of the quark-gluon plasma. STAR is an international collaboration of member institutions and laboratories from around the world. Each yearly data-taking period produces PBytes of raw data collected by the experiment. STAR primarily uses...
Go to contribution page -
Michele Piero Blago (CERN)
The use of Ring Imaging Cherenkov detectors (RICH) offers a powerful technique for identifying the particle species in particle physics. These detectors produce 2D images formed by rings of individual photons superimposed on a background of photon rings from other particles.
The RICH particle identification (PID) is essential to the LHCb experiment at CERN. While the current PID algorithm...
Go to contribution page -
Davide Valsecchi (Università degli Studi e INFN Milano-Bicocca (IT))
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL).
These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker...
Go to contribution page -
Dr Tao Lin (Chinese Academy of Sciences (CN))
The Jiangmen Underground Neutrino Observatory (JUNO) is designed to determine the neutrino mass ordering and precisely measure oscillation parameters. It is under construction at a depth of 700 m underground and comprises a central detector, a water Cherenkov detector and a top tracker. The central detector is designed to detect anti-neutrinos with an energy resolution of 3% at 1 MeV, using a 20...
Go to contribution page -
CMS Collaboration, Kevin Pedro (Fermi National Accelerator Lab. (US))
The high accuracy of detector simulation is crucial for modern particle physics experiments. However, this accuracy comes with a high computational cost, which will be exacerbated by the large datasets and complex detector upgrades associated with next-generation facilities such as the High Luminosity LHC. We explore the viability of regression-based machine learning (ML) approaches using...
Go to contribution page -
Andrii Verbytskyi (Max-Planck-Institut fur Physik (DE))
The installation and maintenance of scientific software for research in experimental, phenomenological, and theoretical High Energy Physics (HEP) requires a considerable amount of time and expertise. While many tools are available to make the task of installation and maintenance much easier, many of these tools require maintenance on their own, have little documentation and very few...
Go to contribution page -
He Li
A geometry management system (GMS) is designed for the Offline Software of the Super Tau Charm Facility (STCF) in China. Based on the eXtensible Markup Language (XML) and the Detector Description Toolkit for High Energy Physics Experiments (DD4hep), the system provides a consistent detector-geometry description for different offline applications, such as simulation, reconstruction and...
Go to contribution page -
Thomas Reis (Science and Technology Facilities Council STFC (GB))
The higher LHC luminosity expected in Run 3 (2022+) and the consequently larger number of simultaneous proton-proton collisions (pileup) per event pose significant challenges for CMS event reconstruction. This is particularly important for event filtering at the CMS High Level Trigger (HLT), where complex reconstruction algorithms must be executed within a strict time budget.
This problem...
Go to contribution page -
Dr Igor Alexandrov (Joint Institute for Nuclear Research (RU))
Collecting, storing and processing of experimental data are an integral part of modern high-energy physics experiments. Various experiment databases and corresponding information systems related to their use and support play an important role and, in many ways, combine online and offline data processing. One of them, the Configuration Database is an essential part of a complex of information...
Go to contribution page -
Igor Pelevanyuk (Joint Institute for Nuclear Research (RU))
Joint Institute for Nuclear Research has several large computing facilities: Tier1 and Tier2 grid clusters, Govorun supercomputer, cloud, and LHEP computing cluster. Each of them has different access protocols, authentication and authorization procedures, data access methods. With the help of the DIRAC Interware, we were able to integrate all these resources to provide a uniform access to all...
Go to contribution page -
Federico Fornari
Modern datacenters need distributed filesystems to provide user applications with access to data stored on a large number of nodes. The ability to mount a distributed filesystem and leverage its native application programming interfaces in a Docker container, combined with the advanced orchestration features provided by Kubernetes, can improve flexibility in installing, monitoring and...
Go to contribution page -
Vincenzo Eduardo Padulano (Valencia Polytechnic University (ES))
The declarative approach to data analysis provides high-level abstractions for users to operate on their datasets in a much more ergonomic fashion compared to imperative interfaces. ROOT offers such a tool with RDataFrame, which creates a computation graph with the operations issued by the user and executes it lazily only when the final results are queried. It has always been oriented towards...
Go to contribution page -
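The lazy model described above, in which operations build a computation graph and the event loop runs only when a result is requested, can be sketched with a toy class (a plain-Python illustration, not ROOT's actual RDataFrame):

```python
# Toy illustration of the lazy, declarative pattern RDataFrame uses:
# Filter/Define only record pending operations; the loop over events
# happens once, when a result (here Count) is actually queried.
class ToyFrame:
    def __init__(self, rows):
        self._rows = rows
        self._ops = []          # the "computation graph": pending operations

    def Filter(self, pred):
        new = ToyFrame(self._rows)
        new._ops = self._ops + [("filter", pred)]
        return new

    def Define(self, name, func):
        new = ToyFrame(self._rows)
        new._ops = self._ops + [("define", name, func)]
        return new

    def Count(self):
        # Only here does the event loop actually run.
        n = 0
        for row in self._rows:
            row = dict(row)
            keep = True
            for op in self._ops:
                if op[0] == "filter":
                    if not op[1](row):
                        keep = False
                        break
                else:  # define a new column from existing ones
                    row[op[1]] = op[2](row)
            if keep:
                n += 1
        return n

events = [{"pt": p} for p in (5.0, 12.0, 30.0, 45.0)]
df = ToyFrame(events).Define("pt2", lambda r: r["pt"] ** 2).Filter(lambda r: r["pt2"] > 200)
print(df.Count())  # 2: only pt = 30 and 45 pass the cut
```

Chaining several results on one frame is where laziness pays off in the real library: all of them are computed in a single pass over the data.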
Patrick Reichherzer (Ruhr-University Bochum)
In astrophysics, the search for sources of the highest-energy cosmic rays continues. For further progress, not only ever better observatories but also ever more realistic numerical simulations are needed. We present here a novel approach to charged particle propagation that finds application in simulations of particle propagation in jets of active galactic nuclei, possible sources of...
Go to contribution page -
Dr Marco Letizia (MaLGa, University of Genoa and INFN - National Institute for Nuclear Physics)
Kernel methods represent an elegant and mathematically sound approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size. Recent improvements have shown the benefits of a number of algorithmic ideas, combining optimization, numerical linear algebra and random projections. These, combined with (multi-)GPU...
Go to contribution page -
Alan Malta Rodrigues (University of Nebraska Lincoln (US)), Daniele Spiga (Universita e INFN, Perugia (IT)), Tommaso Boccali (INFN Sezione di Pisa)
The CMS software stack (CMSSW) is built on a nightly basis for multiple hardware architectures and compilers, in order to benefit from these diverse platforms. In practice, however, only x86_64 is used in production, and it is the only architecture supported by design by the workload management tools in charge of delivering production and analysis jobs to the distributed computing infrastructure.
Go to contribution page
Profiting from an INFN... -
Kristina Jaruskova (Czech Technical University in Prague)
The foreseen increase in demand for simulations of particle transport through detectors in High Energy Physics has motivated the search for faster alternatives to Monte Carlo based simulations. Deep learning approaches provide promising results in terms of speed-up and accuracy, among which generative adversarial networks (GANs) appear to be the most successful in reproducing realistic detector data....
Go to contribution page -
Federico Fornari
In the present work we investigate the possibility of exploiting EOS, an open-source storage software solution for multi-PB storage management at the CERN Large Hadron Collider, to deploy a distributed filesystem over a storage backend provided by CEPH, an open-source software platform that exposes data through object, block and POSIX-compliant interfaces.
Go to contribution page
The work... -
Muhammad Imran (National Centre for Physics (PK))
This talk summarizes the various storage options that we implemented for the CMSWEB cluster in Kubernetes infrastructure. All CMSWEB services require storage for logs, while some services also require storage for data. We also provide a feasibility analysis of various storage options and describe the pros/cons of each technique from the perspective of the CMSWEB cluster and its users. In the...
Go to contribution page -
Meifeng Lin (Brookhaven National Laboratory (US))
The Liquid Argon Time Projection Chamber (LArTPC) technology is widely used in high energy physics experiments, including the upcoming Deep Underground Neutrino Experiment (DUNE). Accurately simulating LArTPC detector responses is essential for analysis algorithm development and physics model interpretations. But because of the highly diverse event topologies that can occur in LArTPCs,...
Go to contribution page -
Peter Klimai (Moscow Institute of Physics and Technology (MIPT))
NICA (Nuclotron-based Ion Collider fAcility) is a new accelerator complex under construction at the Joint Institute for Nuclear Research in Dubna to study the properties of dense baryonic matter. The experiments of the NICA project have already generated substantial volumes of event data, and it is expected that the overall number of stored events will increase from the...
Go to contribution page -
Enrico Fattibene (INFN - National Institute for Nuclear Physics)
The main computing and storage facility of INFN (Italian Institute for Nuclear Physics), running at CNAF, hosts and manages tens of petabytes of data produced by the LHC (Large Hadron Collider) experiments at CERN and by other scientific collaborations in which INFN is involved. The majority of these data are stored on tape resources of different technologies.
Go to contribution page
All the tape drives can be used for... -
Raquel Pezoa Rivera (Federico Santa Maria Technical University (CL))
Understanding the predictions of a machine learning model can be as important as achieving high performance, especially in critical application domains such as health care, cybersecurity, or financial services, among others. In scientific domains, model interpretation can enhance the model's performance and helps build trust in its use on real data and for knowledge discovery....
Go to contribution page -
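As one concrete instance of model interpretation (an illustrative choice on our part; the contribution may use different techniques), permutation importance measures how much a model's accuracy drops when a single feature is shuffled:

```python
import numpy as np

# Toy permutation-importance sketch: a stand-in "trained" model that depends
# only on feature 0, so shuffling feature 0 should hurt accuracy while
# shuffling feature 1 should not. All data here are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)          # only feature 0 carries signal

def model(X):
    # Stand-in for a trained classifier.
    return (X[:, 0] > 0).astype(int)

base = (model(X) == y).mean()
drops = []
for j in range(2):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature-target link
    drops.append(base - (model(Xp) == y).mean())
print(drops[0] > drops[1])  # True: feature 0 is the important one
```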
Christoph Wissing (Deutsches Elektronen-Synchrotron (DE)), Daniele Spiga (Universita e INFN, Perugia (IT))
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of Petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the...
Go to contribution page -
Sascha Daniel Diefenbacher (Hamburg University (DE))
One of the largest strains on computational resources in the field of high energy physics is Monte Carlo simulation. Given that this already high computational cost is expected to increase in the high-precision era of the LHC and at future colliders, fast surrogate simulators are urgently needed. Generative machine learning models offer a promising way to provide such a fast simulation by...
Go to contribution page -
Ricardo Luz (Argonne National Laboratory (US))
Over the next decade, the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during major shutdowns. A key goal of these upgrades is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system was...
Go to contribution page -
Michael Poat (Brookhaven National Laboratory)
A difficult aspect of cyber security is the ability to achieve automated real-time intrusion prevention across various sets of systems. To this end, several companies offer comprehensive solutions that leverage an “accuracy of scale”, moving much of the intelligence and detection to the Cloud and relying on an ever-growing set of data and analytics to increase decision accuracy....
Go to contribution page -
William Kalderon (Brookhaven National Laboratory (US))
This talk introduces and shows the simulated performance of two FPGA-based techniques to improve fast track finding in the ATLAS trigger. A fast hardware-based track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), the goal of which is to provide the high-level trigger with full-scan tracking at 100 kHz in the high pile-up conditions of...
Go to contribution page -
George Raduta (CERN)
The ALICE Experiment at CERN’s Large Hadron Collider is undertaking a major upgrade during Long Shutdown 2 in 2019-2021, which includes a new Online-Offline computing system. To ensure the efficient operation of the upgraded experiment, and of its newly designed computing system, a new set of reliable and performant graphical interfaces is needed. These are to be used 24h/365d in...
Go to contribution page -
Mikhail Kirsanov (Russian Academy of Sciences (RU))
We present the package for the simulation of DM (Dark Matter) particles in fixed-target experiments. The most convenient way of performing this simulation (and the only possible way in the case of a beam dump) is to do it in the framework of a Monte-Carlo program performing the particle tracing in the experimental setup.
Go to contribution page
The Geant4 toolkit framework was chosen as the most popular and versatile... -
Alexander Rogachev (National Research University Higher School of Economics (RU), Yandex School of Data Analysis (RU))
High energy physics experiments essentially rely on the simulation data used for physics analyses. However, running detailed simulation models requires a tremendous amount of computation resources. New approaches to speed up detector simulation are therefore needed.
Go to contribution page
Generation of calorimeter responses is often the most expensive component of the simulation chain for HEP experiments.
It has... -
Mr Oriel Kiss (CERN, UNIGE)
Generative models (GM) are powerful tools to help validate theories by reducing the computation time of Monte Carlo (MC) simulations. GMs can learn expensive MC calculations and generalize to similar situations. In this work, we propose comparing a classical generative adversarial network (GAN) approach with a Born machine, both in its discrete (QCBM) and continuous (CVBM) forms, while...
Go to contribution page -
Artem Maevskiy (National Research University Higher School of Economics (RU))
Detailed detector simulation models are vital for the successful operation of modern high-energy physics experiments. In most cases, such detailed models require a significant amount of computing resources to run. Often this cannot be afforded and less resource-intensive approaches are desired. In this work, we demonstrate the applicability of Generative Adversarial Networks (GAN) as the...
Go to contribution page -
Dr Nikita Kazeev (Yandex School of Data Analysis (RU))
In recent years fully-parametric fast simulation methods based on generative models have been proposed for a variety of high-energy physics detectors. By their nature, the quality of data-driven models degrades in the regions of the phase space where the data are sparse. Since machine-learning models are hard to analyze from the physical principles, the commonly used testing procedures are...
Go to contribution page -
CMS Collaboration, Thomas Klijnsma (Fermi National Accelerator Lab. (US))
Modern calorimeters for High Energy Physics (HEP) have very fine transverse and longitudinal segmentation to manage high incoming flux and improve particle identification capabilities. Compared to older calorimeter designs, this change alone alters the extraction of the number and energy of particles incident on the device from a simple Gaussian-template clustering problem to a highly...
Go to contribution page -
Manfred Peter Fackeldey (Rheinisch Westfaelische Tech. Hoch. (DE))
Fast turnaround times for LHC physics analyses are essential for scientific success. The ability to quickly perform optimizations and consolidation studies is critical. At the same time, computing demands and complexities are rising with the upcoming data taking periods and new technologies, such as deep learning.
Go to contribution page
We present a show-case of the HH->bbWW analysis at the CMS experiment, where we... -
Xiangyang Ju (Lawrence Berkeley National Lab. (US)), Daniel Thomas Murnane (Lawrence Berkeley National Lab. (US)), Chun-Yi Wang (National Tsing Hua University (TW))
Particle tracking is a challenging pattern recognition task in experimental particle physics. Traditional algorithms based on Kalman filters show desirable performance in finding tracks originating from collision points. However, for displaced tracks, dedicated tunings are often required in order to reach sensible performance as the quality of the seed for the Kalman filter has a direct impact...
Go to contribution page -
Kaushal Gumpula (Fermi National Accelerator Lab. (US)), Mr Nikita Koloskov (University of Chicago), Jeremy Edmund Hewes (University of Cincinnati (US))
The Exa.TrkX project presents a graph neural network (GNN) technique for low-level reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber (LArTPC). GNNs are still a relatively novel technique, and have shown great promise for similar reconstruction tasks at the LHC. Graphs describing particle interactions are formed by treating each detector hit as a node, with edges...
Go to contribution page -
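The graph construction described above, with detector hits as nodes and edges between nearby hits, can be sketched on toy 2D coordinates (the hit positions and the radius below are made up for illustration; real pipelines typically use kNN or learned edge selection in full detector coordinates):

```python
import numpy as np

# Each row is a (toy) detector hit in 2D; hits within `radius` of each
# other get connected by an edge, giving the node/edge structure a GNN
# would then operate on.
hits = np.array([[0.0, 0.0],
                 [0.1, 0.0],
                 [0.0, 0.1],
                 [1.0, 1.0]])
radius = 0.2

# All pairwise distances, then edges (i, j) with i < j below the cut.
diff = hits[:, None, :] - hits[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
edges = [(i, j)
         for i in range(len(hits))
         for j in range(i + 1, len(hits))
         if dist[i, j] < radius]
print(edges)  # [(0, 1), (0, 2), (1, 2)]: the fourth hit stays isolated
```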
Gustavo Uribe (Universidad Antonio Narino (CO))
The ATLAS Technical Coordination Expert System is a knowledge-based application describing and simulating the ATLAS infrastructure, its components, and their relationships, in order to facilitate the sharing of knowledge, improve the communication among experts, and foresee potential consequences of interventions and failures. The developed software is key for planning ahead of the future...
Go to contribution page -
Simon Metayer
In this talk, we shall discuss recent results for the elastic degrees of freedom of fluctuating surfaces obtained by multi-loop approaches. These surfaces are ubiquitous in physics, and are used to describe objects in various fields; from brane theory to membranes in biophysics and more recently, applied to graphene and graphene-like materials. We derive the three-loop order renormalization...
Go to contribution page -
David Southwick (CERN)
As part of the CERN-GEANT-PRACE-SKA collaboration and in the context of EGI-ACE (Advanced Computing for the European Open Science Cloud), collaborators are working towards enabling efficient HPC use for Big Data sciences.
Go to contribution page
Approaching an HPC site with High Throughput Computing (HTC) workloads presents unique challenges in areas concerning data ingress/egress, the use of shared storage systems, and... -
Dr John J. Oh (NIMS (South Korea))
The gravitational-wave detector is a very complicated and sensitive collection of advanced instruments, which is influenced not only by the mutual interaction between mechanical/electronics systems but also by the surrounding environment. Thus, it is necessary to categorize and reduce noises from many channels interconnected by such instruments and the environment for achieving the detection...
Go to contribution page -
Ivan Kharuk (INR RAS)
We introduce a novel method for identifying fractions of primary air shower particles in an ensemble of events using deep learning. The suggested approach is developed for the Monte-Carlo simulated data for the Telescope Array experiment. For a given hadronic model, the error of identifying individual fractions of primary particles in an ensemble is less than 7%. We show that the developed...
Go to contribution page -
Mason Proffitt (University of Washington (US))
The ABCD method is a common background estimation method used by many physics searches in particle collider experiments and involves defining four regions based on two uncorrelated observables. The regions are defined such that there is a search region, where most signal events are expected to be, and three control regions. A likelihood-based version of the ABCD method, also referred to as the...
Go to contribution page -
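In its simplest counting form, the ABCD relation estimates the background in the signal region A from the three control regions as N_A = N_B x N_C / N_D (the counts below are made-up numbers, not from any analysis):

```python
# Counting-experiment ABCD estimate: two uncorrelated observables define
# four regions; independence implies N_A / N_B = N_C / N_D, so the
# background expected in the signal region A follows from B, C, D alone.
def abcd_background(n_b, n_c, n_d):
    return n_b * n_c / n_d

print(abcd_background(n_b=200, n_c=150, n_d=600))  # 50.0 expected background events in A
```

The likelihood-based version mentioned in the abstract replaces this closed-form ratio with a fit, which also yields proper uncertainties on the estimate.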
Josina Schulte (RWTH Aachen University)
Conditional Invertible Neural Networks (cINNs) provide a new technique for the inference of free model parameters by enabling the creation of posterior distributions. With these distributions, the parameter mean values, their uncertainties and the correlations between the parameters can be estimated. In this contribution we summarize the functionality of cINNs, which are based on normalizing...
Go to contribution page -
Andrey Demichev (Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University)
In recent years, a correspondence has been established between the appropriate asymptotics of deep neural networks (DNNs), including convolutional ones (CNNs), and the machine learning methods based on Gaussian processes (GPs). The ultimate goal of establishing such interrelations is to achieve a better theoretical understanding of various methods of machine learning (ML) and their...
Go to contribution page -
Andre Sznajder (Universidade do Estado do Rio de Janeiro (BR))
We investigate the possibility of using Deep Learning algorithms for jet identification in the L1 trigger at HL-LHC. We perform a survey of architectures (MLP, CNN, Graph Networks) and benchmark their performance and resource consumption on FPGAs using a QKeras+hls4ml compression-aware training procedure. We use the HLS4ML jet dataset to compare the results obtained in this study to previous...
Go to contribution page -
Placido Fernandez Declara (CERN)
Detector optimisation and physics performance studies are an integral part of the development of future collider experiments. The Key4hep project aims to design a common set of software tools for future, or even present, High Energy Physics projects. Based on the iLCSoft and FCCSW frameworks an integrated solution for detector simulation, reconstruction and analyses is being developed. This...
Go to contribution page -
Adrian Alan Pol (CERN)
In this contribution, we apply deep learning object detection techniques based on convolutional blocks to the jet identification and reconstruction problem encountered at the CERN Large Hadron Collider. Particles reconstructed through the Particle Flow algorithm can be represented as an image composed of calorimeter and tracker cells and used as input to a Single Shot Detection network. The algorithm,...
Go to contribution page -
CMS Collaboration, Vichayanun Wachirapusitanand (Chulalongkorn University (TH))
As the CMS detector is getting ready for data-taking in 2021 and beyond, it is expected to deliver an ever-increasing amount of data. To ensure that the data recorded from the detector have the best quality possible for physics analyses, the CMS Collaboration has dedicated Data Quality Monitoring (DQM) and Data Certification (DC) working groups. These working groups are made of human...
Go to contribution page -
Vladimir Loncar (CERN)
The hls4ml project started to bring Neural Network inference to the L1 trigger system of the LHC experiments. Since its initial proposal, the library has grown, integrating support for multiple backends, multiple network architectures (convolutional, recurrent, graph), extreme quantization (binary and ternary networks), and multiple applications (classification, regression, anomaly detection)....
Go to contribution page -
Grigory Rubtsov (INR RAS)
Baikal-GVD is a large scale underwater neutrino telescope currently under construction in Lake Baikal. The experiment is aimed at the study of the high-energy cosmic neutrinos and the search for their sources. The principal component of the telescope is the three-dimensional array of optical modules (OMs) which register Cherenkov light associated with the neutrino-induced particles. The OMs...
Go to contribution page -
Nisha Lad (UCL)
The baseline track finding algorithms adopted in the LHC experiments are based on combinatorial track following techniques, where the seed number scales non-linearly with the number of hits. The corresponding CPU time increase, close to cubic, creates a huge and ever-increasing demand for computing power. This is particularly problematic for the silicon tracking detectors, where the hit...
Go to contribution page -
Andrey Baginyan (Joint Institute for Nuclear Research (RU))
Modeling network data traffic is the most important task in the design and construction of new network centers and campus networks. The results of the analysis of models can be applied in the reorganization of existing centers and in the configuration of data routing protocols based on the use of links. The paper shows how constant monitoring of the main directions of data transfer allows...
Go to contribution page -
Ouail Kitouni (Massachusetts Inst. of Technology (US))
The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric by which the robustness of the model can be measured. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training...
Go to contribution page -
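A minimal sketch of the weight-normalization idea (our toy version, not necessarily the paper's exact scheme): with 1-Lipschitz activations such as ReLU, the network's Lipschitz constant is bounded by the product of the layers' spectral norms, so rescaling each weight matrix caps that bound:

```python
import numpy as np

# Toy Lipschitz control for a dense network: rescale every weight matrix
# whose spectral norm (largest singular value) exceeds the target, so the
# product of per-layer norms, and hence the network's Lipschitz constant
# under 1-Lipschitz activations, stays bounded by 1.
rng = np.random.default_rng(0)

def normalize_lipschitz(weights, target=1.0):
    out = []
    for W in weights:
        s = np.linalg.norm(W, 2)      # ord=2 on a matrix: spectral norm
        out.append(W if s <= target else W * (target / s))
    return out

layers = [rng.normal(size=(8, 4)), rng.normal(size=(4, 1))]
layers = normalize_lipschitz(layers)
bound = np.prod([np.linalg.norm(W, 2) for W in layers])
print(bound <= 1.0)  # True: the product of spectral norms caps the Lipschitz constant
```

In training, this rescaling (or an equivalent reparametrization) is applied after each optimizer step so the constraint holds throughout.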
Jiwoong Kim (Kyungpook National University (KR))
We present the first application of scalable deep learning with a high-performance computer (HPC) to physics analysis using CMS simulation data from 13 TeV LHC proton-proton collisions. We build a convolutional neural network (CNN) model which takes low-level information as images considering the geometry of the CMS detector. The CNN model is implemented to discriminate R-parity violating...
Go to contribution page -
Edson Carquin Lopez (Federico Santa Maria Technical University (CL))
Tau leptons are used in a range of important ATLAS physics analyses, including the measurement of the SM Higgs boson coupling to fermions, searches for Higgs boson partners, and heavy resonances decaying into pairs of tau leptons. Events for these analyses are provided by a number of single and di-tau triggers including event topological requirements or the requirement of additional objects at...
Go to contribution page -
Mr Nathan Daniel Simpson (Lund University (SE))
The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because “training a neural network” equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can...
Go to contribution page -
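The point that training reduces to iterative gradient-based parameter updates can be shown in miniature on a single free parameter (a toy loss, not a neural network):

```python
# Gradient descent in its simplest form: one parameter w, analytic
# gradient of the loss L(w) = (w - 3)^2, repeated small steps downhill.
def grad(w):
    return 2.0 * (w - 3.0)   # dL/dw for L(w) = (w - 3)^2

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)
print(round(w, 6))  # 3.0: converged to the minimum of the loss
```

The "broader paradigm" in the abstract swaps this toy loss for any differentiable figure of merit of a physics workflow, with automatic differentiation supplying `grad`.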
CMS Collaboration, Christopher Edward Brown (Imperial College (GB))
A major challenge of the high-luminosity upgrade of the CERN LHC is to single out the primary interaction vertex of the hard scattering process from the expected 200 pileup interactions that will occur each bunch crossing. To meet this challenge, the upgrade of the CMS experiment comprises a complete replacement of the silicon tracker that will allow for the first time to perform the...
Go to contribution page -
Gloria Corti (CERN), Michal Mazurek (CERN)
The LHCb Experiment at the Large Hadron Collider (LHC) at CERN has successfully performed a large number of physics measurements during Runs 1 and 2 of the LHC. It will resume operation in Run 3 with an upgraded detector to process events with up to five times higher luminosity. Monte Carlo simulations are key to the commissioning of the new detector and the interpretation of past and future...
Go to contribution page -
Carlos Perez Dengra (PIC-CIEMAT)
The Large Hadron Collider (LHC) will enter a new era for data acquisition by 2026 within the High-Luminosity Large Hadron Collider (HL-LHC) program, where the LHC will increase the proton-proton collisions up to unprecedented levels. This increase will imply a factor of 10 in luminosity as compared to the current values, having an impact on the way the experimental data are stored and...
Go to contribution page -
Mr Andreas Pappas (National and Kapodistrian University of Athens (GR))
The LHCb detector is undergoing a comprehensive upgrade for data taking in the LHC’s Run 3, which is scheduled to begin in 2022. The new Run 3 detector has a different, upgraded geometry and uses new tools for its description, namely DD4hep and ROOT. Besides, the visualization technologies have evolved quite a lot since Run 1, with the introduction of ubiquitous web based solutions or...
Go to contribution page -
Antonio Gioiosa (INFN - National Institute for Nuclear Physics)
The Mu2e experiment at Fermilab searches for the charged-lepton flavor violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. If no events are observed, in three years of running Mu2e will improve the previous upper limit by four orders of magnitude in search sensitivity.
Go to contribution page
Mu2e’s Trigger and Data Acquisition System (TDAQ) uses {\it otsdaq}... -
Prof. Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))
Within the FAIR Phase-0 program the algorithms of the FLES (First-Level Event Selection) package developed for the CBM experiment (FAIR/GSI, Germany) are adapted for online and offline processing in the STAR experiment (BNL, USA).
Long-lived charged particles are reconstructed in the TPC detector using the CA track finder algorithm based on the Cellular Automaton. The search for...
Go to contribution page -
Ludwig Albert Jaffe (Goethe University Frankfurt (DE)), Alexander Adler (Goethe University Frankfurt (DE))
Containerisation is an elementary tool for sharing IT resources: It is more light-weight than full virtualisation, but offers comparable isolation. We argue that for many use-cases which are typically approached with standard containerisation tools, less than full isolation is sufficient: Sometimes, only networking or only storage or both need to be different from their native, unisolated...
Go to contribution page -
Raghav Kansal (Univ. of California San Diego (US))
There has been significant development recently in generative models for accelerating LHC simulations. Work on simulating jets has primarily used image-based representations, which tend to be sparse and of limited resolution. We advocate for the more natural 'particle cloud' representation of jets, i.e. as a set of particles in momentum space, and discuss four physics- and...
Go to contribution page -
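The contrast drawn above between sparse, limited-resolution images and the set-like 'particle cloud' representation can be made concrete with made-up numbers:

```python
import numpy as np

# A jet as a particle cloud: a small unordered set of particles, here as
# (pt, eta, phi) rows with illustrative values. Any row permutation is the
# same jet, so models on this representation must be permutation invariant.
jet = np.array([[45.0,  0.10, -0.20],
                [22.5, -0.05,  0.15],
                [ 8.1,  0.30,  0.05]])

# The same 3 particles rendered as a 64x64 calorimeter-style image would
# occupy 3 of 4096 cells: almost entirely empty, and binned to fixed resolution.
image = np.zeros((64, 64))
print(jet.shape, image.size)  # (3, 3) 4096
```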
Lea Reuter (Institut für Experimentelle Teilchenphysik (ETP), Karlsruher Institut für Technologie (KIT), Germany)
Learning the hierarchy of graphs is relevant in a variety of domains, as they are commonly used to express the chronological interactions in data structures. One application is in Flavor Physics, as the natural representation of a particle decay process is a rooted tree graph.
Go to contribution page
Analyzing collision events involving missing particles or neutrinos requires knowledge of the full decay tree.... -
Ahmet Ilker Topuz (Catholic University of Louvain)
The wide angular distribution of the incoming cosmic-ray muons, in connection with either the incident angle or the azimuthal angle, is a challenging trait that leads to a drastic particle loss in the course of parametric computations from the GEANT4 simulations, since the tomographic configurations as well as the target geometries also influence the processable number of detected particles, apart from the...
Go to contribution page -
Abtin Narimani Charan (Deutsches Elektronen-Synchrotron (DESY))
The Belle II experiment is located at the asymmetric SuperKEKB $e^+ e^-$ collider in Tsukuba, Japan. The Belle II electromagnetic calorimeter (ECL) is designed to measure the energy deposited by charged and neutral particles. It also provides important contributions to the particle identification system. Identification of low-momenta muons and pions in the ECL is crucial if they do not reach...
Go to contribution page -
Mary Touranakou (National and Kapodistrian University of Athens (GR)), Breno Orzari (UNESP - Universidade Estadual Paulista (BR))
HEP experiments heavily rely on the production and the storage of large datasets of simulated events. At the LHC, simulation workflows require about half of the available computing resources of a typical experiment. With the foreseen High Luminosity LHC upgrade, data volume and complexity are going to increase faster than the expected improvements in computing infrastructure. Speeding up the...
Go to contribution page -
ParticleNeXt: Pushing the Limit of Jet Tagging With Graph Neural Networks (contribution ID 713), Huilin Qu (CERN)
Identification of hadronic decays of highly Lorentz-boosted W/Z/Higgs bosons and top quarks provides powerful handles to a wide range of new physics searches and Standard Model measurements at the LHC. In this talk, we present ParticleNeXt, a new graph neural network (GNN) architecture tailored for jet tagging. With the introduction of novel components such as pairwise features, attentive...
Go to contribution page -
CMS Collaboration, Wahid Redjeb (Rheinisch Westfaelische Tech. Hoch. (DE))
Heterogeneous Computing will play a fundamental role in the CMS reconstruction to face the challenges that will be posed by the HL-LHC phase. Several computing architectures and vendors are currently available to build a Heterogeneous Computing Farm for the CMS experiment. However, specialized implementations for each of these architectures are not sustainable in terms of development,...
Go to contribution page -
Xiaocong Ai (DESY)
Exploring anomalous objects from beyond standard model (BSM) signatures is one important mission of the LHC experiments. Recently, new particles in the sub-GeV scale have received more and more attention. Light pseudo-scalars such as axion-like particles (ALPs) and light scalars such as the dark Higgs are proposed by many BSM models and can be taken as mediators of some sub-GeV dark matter...
Go to contribution page -
Andrea Valenzuela Ramirez (Universitat Oberta de Catalunya (ES))
The CernVM File System (CernVM-FS) is a global read-only POSIX file system that provides scalable and reliable software distribution to numerous scientific collaborations. It gives access to more than a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. CernVM-FS is asymmetric by construction....
Go to contribution page -
Huw Haigh (Austrian Academy of Sciences (AT))
In this talk, we present the novel implementation of a non-differentiable metric approximation with a corresponding loss-scheduling based on the minimization of a figure-of-merit related function typical of particle physics (the so-called Punzi figure of merit). We call this new loss-scheduling a "Punzi-loss function" and the neural network that minimizes it a "Punzi-net". We tested the...
Go to contribution page -
Dr Federico SCUTTI (The University of Melbourne)
The pyrate framework provides a dynamic, versatile, and memory-efficient approach to data format transformations, object reconstruction and data analysis in particle physics. The framework is implemented in the Python programming language, allowing easy access to the scientific Python package ecosystem and commodity big data technologies. Developed within the context of the SABRE experiment...
Go to contribution page -
Henry Fredrick Schreiner (Princeton University)
Histogramming for Python has been transformed by the Scikit-HEP family of libraries, starting with boost-histogram, a core library for high performance Pythonic histogram creation and manipulation based on the Boost C++ libraries. This was extended by Hist with plotting, analysis friendly shortcuts, and much more. And UHI is a specification that allows histogramming and plotting libraries,...
Go to contribution page -
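The axes-plus-fill histogram model shared by these libraries can be sketched in plain NumPy (a toy stand-in, not boost-histogram's actual API): an axis defines the binning, `fill` accumulates counts, and the plain arrays of counts and edges are the kind of data a UHI-style protocol exposes to plotting libraries.

```python
import numpy as np

# Toy 1D histogram in the axes-plus-fill style: construction fixes the
# binning, fill() accumulates entries, and counts/edges are plain arrays.
class Hist1D:
    def __init__(self, bins, start, stop):
        self.edges = np.linspace(start, stop, bins + 1)
        self.counts = np.zeros(bins)

    def fill(self, values):
        self.counts += np.histogram(values, bins=self.edges)[0]

h = Hist1D(bins=4, start=0.0, stop=4.0)
h.fill([0.5, 1.5, 1.7, 3.2])
print(h.counts.tolist())  # [1.0, 2.0, 0.0, 1.0]
```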
Vasileios Belis (ETH Zurich (CH))
The advantage of quantum computers over classical devices lies in the possibility of using quantum superposition effects of n qubits to perform exponential computations in parallel. This effect makes it possible to reduce the computational complexity of certain classes of problems, such as optimisation, sampling or combinatorial problems in large scale fault-tolerant quantum...
Go to contribution page -
Enrico Guiraud (EP-SFT, CERN)
In recent years, RDataFrame, ROOT's high-level interface for data analysis and processing, has seen widespread adoption on the part of HEP physicists. Much of this success is due to RDataFrame's ergonomic programming model that enables the implementation of common analysis tasks more easily than previous APIs, without compromising on application performance. Nonetheless, RDataFrame's...
Go to contribution page -
Ben Nachman (Lawrence Berkeley National Lab. (US)), Daniel Britzger (Max-Planck-Institut für Physik München), Miguel Ignacio Arratia Munoz (Lawrence Berkeley National Lab. (US)), Owen Long (University of California Riverside (US))
In this talk we present a novel method to reconstruct the kinematics of neutral-current deep inelastic scattering (DIS) using a deep neural network (DNN). Unlike traditional methods, it exploits the full kinematic information of both the scattered electron and the hadronic-final state, and it accounts for QED radiation by identifying events with radiated photons and event-level momentum...
Go to contribution page -
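For context on the "traditional methods" referred to above: the electron method reconstructs the DIS kinematics from the scattered lepton alone. A hypothetical NumPy sketch, assuming the HERA convention in which the polar angle theta is measured with respect to the incoming proton beam and particle masses are neglected:

```python
import numpy as np

def electron_method(E_e, E_p, E_prime, theta):
    """Reconstruct DIS kinematics (Q^2, y, x) from the scattered electron's
    energy E_prime and polar angle theta (proton-beam angle convention)."""
    Q2 = 2.0 * E_e * E_prime * (1.0 + np.cos(theta))      # momentum transfer
    y = 1.0 - (E_prime / (2.0 * E_e)) * (1.0 - np.cos(theta))  # inelasticity
    s = 4.0 * E_e * E_p                                   # cms energy squared
    x = Q2 / (s * y)                                      # Bjorken x
    return Q2, y, x

# HERA-like beams: 27.6 GeV electrons on 920 GeV protons
Q2, y, x = electron_method(27.6, 920.0, 20.0, 2.0)
```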
Dr Renat Sadykov (Joint Institute for Nuclear Research (RU))
We present a new version of the Monte Carlo event generator ReneSANCe. The generator takes into account complete one-loop electroweak (EW) corrections, QED corrections in leading log approximation (LLA) and some higher-order QED and EW corrections to processes at e^+e^- colliders with finite particle masses and arbitrary polarizations of initial particles. ReneSANCe effectively operates in...
Go to contribution page -
Javier Lopez Gomez (CERN)
Upcoming HEP experiments, e.g. at the HL-LHC, are expected to increase the volume of generated data by at least one order of magnitude. In order to retain the ability to analyze the influx of data, full exploitation of modern storage hardware and systems, such as low-latency high-bandwidth NVMe devices and distributed object stores, becomes critical.
To this end, the ROOT RNTuple I/O...
Go to contribution page -
Ajay Rawat (University of Washington (US))
The Reproducible Open Benchmarks for Data Analysis Platform (ROB)[1][2] is a platform developed to help evaluate data analysis workflows in a controlled competition-style environment. ROB was inspired by the Top Tagger Comparison analysis (2019)[3] that compared multiple different top tagger neural networks. ROB has two main goals: (1) reduce the amount of time required to organize and...
Go to contribution page -
Aziz Temirkhanov (National Research University Higher School of Economics (RU))
The volume of data processed by the Large Hadron Collider experiments demands sophisticated selection rules typically based on machine learning algorithms. One of the shortcomings of these approaches is their profound sensitivity to the biases in training samples. In the case of particle identification (PID), this might lead to degradation of the efficiency for some decays on validation due to...
Go to contribution page -
Kyungeon Choi (University of Texas at Austin (US))
Recent developments in software to address challenges in the High-Luminosity LHC (HL-LHC) era allow novel approaches when interacting with the data and performing physics analysis. We employed software components primarily from IRIS-HEP to construct an analysis workflow of an ongoing ATLAS Run-2 physics analysis in the python ecosystem. The software components in the analysis workflow include...
Go to contribution page -
Su Yeon Chang (CERN / EPFL - Ecole Polytechnique Federale Lausanne (CH))
In an earlier work [1], we introduced dual-Parameterized Quantum Circuit (PQC) Generative Adversarial Networks (GAN), an advanced prototype of quantum GAN, which consists of a classical discriminator and two quantum generators that take the form of PQCs. We have shown the model can imitate calorimeter outputs in High-Energy Physics (HEP), interpreted as reduced size pixelated images. But the...
Go to contribution page -
Gordon Watts (University of Washington (US))
ServiceX is a cloud-native distributed application that transforms data into columnar formats in the python ecosystem and ROOT framework. Along with the transformation, it applies filtering and thinning operations to reduce the data load sent to the client. ServiceX, designed for easy deployment to a Kubernetes cluster, runs near the data, scanning TB’s of data to send GB’s to a client or...
Go to contribution page -
Ms Giulia Sorrentino (Universita e INFN Trieste (IT))
The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) is undertaking a Phase II upgrade program to face the harsh conditions imposed by the High Luminosity LHC (HL-LHC). This program comprises the installation of a new timing layer to measure the time of minimum ionizing particles (MIPs) with a time resolution of 30-40 ps. The time information of the tracks from this new...
Go to contribution page -
Marco Rossi (CERN), Sofia Vallecorsa (CERN)
DUNE is a cutting edge experiment aiming to study neutrinos in detail, with a special focus on the flavor oscillation mechanism. ProtoDUNE-SP (the prototype of the DUNE Far detector Single Phase TPC) has been built and operated at CERN, and a full suite of reconstruction tools has been developed. Pandora is a multi-algorithm framework that implements reconstruction tools: a large number...
Go to contribution page -
Sitong An (CERN, Carnegie Mellon University (US))
Deep neural networks are rapidly gaining popularity in physics research. While python-based deep learning frameworks for training models in GPU environments develop and mature, a good solution that allows easy integration of inference of trained models into conventional C++ and CPU-based scientific computing workflows seems to be lacking.
We report the latest development in ROOT/TMVA that aims to...
Go to contribution page -
Michel Hernandez Villanueva (DESY)
Among the upgrades in current high energy physics (HEP) experiments and the new facilities coming online, solving software challenges has become integral for the success of the collaborations, and the demand for human resources highly-skilled in both HEP and software domains is increasing. With a highly distributed environment in human resources, the sustainability of the HEP ecosystem...
Go to contribution page -
Adam Abed Abud (University of Liverpool (GB) and CERN)
Deep Learning (DL) methods and Computer Vision are becoming important tools for event reconstruction in particle physics detectors. In this work, we report on the use of Submanifold Sparse Convolutional Neural Networks (SparseNet) for the classification of track and shower hits from a DUNE prototype liquid-argon detector at CERN (ProtoDUNE). By taking advantage of the three-dimensional nature...
Go to contribution page -
Dr Wei Sun (Institute of High Energy Physics, Chinese Academy of Sciences)
Lattice quantum chromodynamics (lattice QCD) is the non-perturbative definition of the QCD theory from first principles and can be systematically improved; it is also one of the most important high-performance computing applications in high energy physics. Physics research in lattice QCD has benefited enormously from developments in computer hardware and algorithms, and particle...
Go to contribution page -
Niklas Nolte (Massachusetts Institute of Technology (US))
The triggerless readout of data corresponding to a 30 MHz event rate at the upgraded LHCb experiment together with a software-only High Level Trigger will enable the highest possible flexibility for trigger selections. During the first stage (HLT1), track reconstruction and vertex fitting for charged particles enable a broad and efficient selection process to reduce the event rate to 1 MHz....
Go to contribution page -
Kaixuan Huang (SUN YAT-SEN UNIVERSITY)
In High Energy Physics (HEP) experiments, it is useful for physics analysis and outreach if the event display software can provide rich visualization effects. Unity is professional software that provides 3D modeling and animation production. GDML format files are commonly used for detector description in HEP experiments. In this work, we present a method for automating the import of GDML...
Go to contribution page -
Su Yeon Chang (CERN / EPFL - Ecole Polytechnique Federale Lausanne (CH))
In classical deep learning, a number of studies have proven that noise plays a crucial role in the training of neural networks. Artificial noises are often injected in order to make the model more robust, faster converging, and stable. Meanwhile, quantum computing, a completely new paradigm of computation, is characterized by statistical uncertainty from its probabilistic nature. Furthermore,...
Go to contribution page -
Niclas Steve Eich (Rheinisch Westfaelische Tech. Hoch. (DE))
We present a specialised layer for generative modeling of LHC events with generative adversarial networks. We use Lorentz boosts, rotations, momentum and energy conservation to build a network cell generating a 2-body particle decay. This cell is stacked consecutively in order to model two staged decays, respecting the symmetries across the decay chain. We allow for modifications of the...
Go to contribution page -
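The building block described above can be grounded in standard kinematics: in the parent rest frame the daughter momentum is fixed entirely by the three masses, and a Lorentz boost carries the pair to the lab frame. A minimal NumPy sketch of that physics (illustrative only, not the network layer itself):

```python
import numpy as np

def two_body_decay(M, m1, m2):
    """Daughter momentum and energies for M -> m1 + m2 in the parent rest frame."""
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    return p, np.sqrt(m1**2 + p**2), np.sqrt(m2**2 + p**2)

def boost_z(E, pz, beta):
    """Lorentz boost of (E, pz) along z with velocity beta."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return gamma * (E + beta * pz), gamma * (pz + beta * E)

# example: Z -> mu+ mu-, daughters back-to-back along z, then boosted
M, m_mu = 91.19, 0.105
p, E1, E2 = two_body_decay(M, m_mu, m_mu)
E1b, pz1b = boost_z(E1, p, 0.5)
E2b, pz2b = boost_z(E2, -p, 0.5)
```

Energy conservation in the rest frame and invariance of the pair mass under the boost are the symmetries such a generator cell preserves by construction.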
Brahim Aitbenchikh (Universite Hassan II, Ain Chock (MA))
The ATLAS experiment at the Large Hadron Collider (LHC) relies heavily on simulated data, requiring the production of billions of Monte Carlo (MC)-based proton-proton collisions for every run period. As such, the simulation of collisions (events) is the single biggest CPU resource consumer for the experiment. ATLAS's finite computing resources are at odds with the expected conditions during...
Go to contribution page -
Paul Gessinger (CERN)
The great success of the Tracking Machine Learning Challenge (TrackML), conducted in two phases (accuracy phase from April to August, throughput phase from September to November 2018), has demonstrated the need for an easily accessible and yet challenging dataset for algorithm design and further R&D. The released TrackML dataset is to date heavily used by several research groups at the forefront of...
Go to contribution page -
Gene Van Buren (Brookhaven National Laboratory)
A unique experiment was conducted by the STAR Collaboration in 2018 to investigate differences between collisions of nuclear isobars, a potential key to unraveling one of the physics mysteries in our field: why the universe is made predominantly of matter. Enhancing the credibility of findings was deemed to hinge on blinding analyzers from knowing which dataset they were examining,...
Go to contribution page -
Ahmet Ilker Topuz (Catholic University of Louvain)
The emerging applications of cosmic ray muon tomography have led to a significant rise in the use of cosmic particle generators, e.g. CRY, CORSIKA, or CMSCGEN, where fundamental parameters such as the energy spectrum and the angular distribution of the generated muons are represented in continuous form, routinely governed implicitly by probability density functions over...
Go to contribution page -
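Continuous distributions of the kind mentioned above are typically drawn from via inverse-transform sampling. A hedged sketch for a generic power-law energy spectrum dN/dE ~ E^(-gamma) (an illustrative form, not the actual CRY/CORSIKA parametrization):

```python
import numpy as np

def sample_power_law(n, gamma, e_min, e_max, rng):
    """Inverse-transform sampling of dN/dE ~ E^-gamma on [e_min, e_max]:
    invert the analytic CDF at uniform random quantiles u."""
    u = rng.random(n)
    a = 1.0 - gamma
    return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

rng = np.random.default_rng(42)
# a muon-like spectral index of 2.7 between 1 and 100 GeV (made-up bounds)
energies = sample_power_law(100_000, 2.7, 1.0, 100.0, rng)
```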
Mary Touranakou (National and Kapodistrian University of Athens (GR)), Shah Rukh Qasim (Manchester Metropolitan University (GB))
We investigate the application of object condensation to particle tracking at the LHC. Designed with calorimeter clustering in mind and successfully employed for high-granularity calorimeter reconstruction at the HL-LHC, object condensation is a generic clustering method that could be applied to many problems within and outside HEP. Using the TrackML challenge dataset, we train a tracking...
Go to contribution page -
Stefano Piacentini (Università La Sapienza)
In this contribution we will show an innovative approach based on Bayesian networks and linear algebra providing a solid and complete solution to the problem of the detector response and the related systematic effects. As a case study, we will consider the Dark Matter (DM) direct detection searches. In fact, in the past decades, a huge experimental effort has been developed to ...
Go to contribution page -
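The core of the approach described above is Bayes' theorem applied through the detector response. A hypothetical two-bin example (made-up response matrix and prior) of recovering the probability of the true bin given an observed bin:

```python
import numpy as np

# hypothetical response matrix: rows = observed bin, columns = true bin,
# each column of P(obs | true) sums to 1
R = np.array([[0.9, 0.2],
              [0.1, 0.8]])
prior = np.array([0.5, 0.5])  # flat prior over the true bins

def posterior_true(obs_bin):
    """P(true | observed) via Bayes' theorem."""
    unnorm = R[obs_bin] * prior
    return unnorm / unnorm.sum()

post = posterior_true(0)  # which true bin likely produced observed bin 0?
```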
Mr Dennis Noll (RWTH Aachen University (DE))
Many HEP analyses are adopting the concept of vectorised computing, often making them increasingly performant and resource-efficient.
While a variety of computing steps can be vectorised directly, some calculations are challenging to implement.
One of these is the analytical neutrino reconstruction, which involves fitting that naturally varies between events. We show a vectorised...
Go to contribution page -
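As a simple illustration of the directly-vectorisable calculations mentioned above (not the neutrino reconstruction itself), here is a per-event transverse mass computed with NumPy array operations instead of a Python loop over events:

```python
import numpy as np

def transverse_mass(pt_lep, pt_miss, dphi):
    """m_T = sqrt(2 pT_lep pT_miss (1 - cos dphi)), vectorised over events."""
    return np.sqrt(2.0 * pt_lep * pt_miss * (1.0 - np.cos(dphi)))

# one call handles any number of events at once
mt = transverse_mass(np.array([50.0, 30.0]),
                     np.array([50.0, 40.0]),
                     np.array([np.pi, np.pi / 2.0]))
```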
Ziyuan Li (Sun Yat-Sen University (CN)), Zhen Qian (Sun Yat-sen University)
The Jiangmen Underground Neutrino Observatory (JUNO), currently under construction in the south of China, is the largest Liquid Scintillator (LS) detector in the world. JUNO is a multipurpose neutrino experiment designed to determine neutrino mass ordering, precisely measure oscillation parameters, and study solar neutrinos, supernova neutrinos, geo-neutrinos and atmospheric neutrinos. The...
Go to contribution page -
Jonas Eschle (Universitaet Zuerich (CH))
Statistical modelling and likelihood inference is a key element in many sciences, especially in High-Energy Physics (HEP) analyses. These require advanced features such as handling large amounts of data, supporting binned, unbinned and mixed inference, using complicated and often custom made model functions, and being highly performant. In HEP, these features were covered in C++ frameworks...
Go to contribution page -
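To make "unbinned inference" concrete (a plain-NumPy sketch, not the framework presented in the talk): the model's negative log-likelihood is summed over individual events and minimised over the parameters, here via a simple grid scan of a Gaussian mean.

```python
import numpy as np

def gaussian_nll(mu, sigma, data):
    """Unbinned negative log-likelihood of a Gaussian model: one term per event."""
    return (0.5 * np.sum(((data - mu) / sigma) ** 2)
            + len(data) * np.log(sigma * np.sqrt(2.0 * np.pi)))

rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, 5000)  # toy dataset with true mean 2.0

# scan mu at fixed sigma; the minimum sits at the sample mean
mus = np.linspace(1.5, 2.5, 201)
nlls = [gaussian_nll(m, 0.5, data) for m in mus]
best = mus[int(np.argmin(nlls))]
```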
Felix Wagner (HEPHY Vienna)
Novel cryogenic scintillating calorimeters, used in rare event search experiments, achieve sub-keV recoil energy thresholds. Such low thresholds require a sensible raw data analysis of triggered events. This includes the identification of particle recoils among artifacts, and the reconstruction of the corresponding recoil energies, despite a low signal-to-noise ratio. For this purpose we...
Go to contribution page
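Pulse identification at low signal-to-noise ratio, as described above, is commonly done with a matched (optimum) filter. A hypothetical NumPy sketch with a made-up exponential pulse template and an injected pulse in white noise:

```python
import numpy as np

rng = np.random.default_rng(2)
template = np.exp(-np.arange(50) / 10.0)   # hypothetical pulse shape
trace = rng.normal(0.0, 0.2, 500)          # noisy baseline
trace[200:250] += 3.0 * template           # injected pulse, amplitude 3

# cross-correlate the trace with the normalised template; the peak of the
# filter output locates the pulse and estimates its amplitude
matched = np.correlate(trace, template / np.dot(template, template), mode="valid")
trigger_index = int(np.argmax(matched))
amplitude = matched[trigger_index]
```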