Conveners
Algorithms: Tue AM
- David Rohr (CERN)
- John Derek Chapman (University of Cambridge (GB))
Algorithms: Tue PM
- Dorothea Vom Bruch (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
- Gordon Watts (University of Washington (US))
Algorithms: Wed AM
- David Rohr (CERN)
- Felice Pantaleo (CERN)
Mr Matthieu Carrère (CNRS) | 18/05/2021, 10:50 | Offline Computing | Short Talk
CORSIKA is a standard software for simulations of air showers induced by cosmic rays. Developed continuously in Fortran 77 over the last thirty years, it has become very difficult to add new physics features to CORSIKA 7. CORSIKA 8 aims to be the future of the CORSIKA project: a framework in C++17 that uses modern concepts in object-oriented programming for an efficient...
Mrs Caterina Marcon (Lund University (SE)) | 18/05/2021, 11:03 | Offline Computing | Short Talk
Full detector simulation is known to consume a large proportion of computing resources available to the LHC experiments, and reducing time consumed by simulation will allow for more profound physics studies. There are many avenues to exploit, and in this work we investigate those that do not require changes in the GEANT4 simulation suite. In this study, several factors affecting the full...
Prof. Vladimir Ivantchenko (CERN) | 18/05/2021, 11:16 | Offline Computing | Short Talk
We report on the status of the CMS full simulation for Run 3. During the long shutdown of the LHC, a significant update was introduced to the CMS simulation code. The CMS geometry description has been reviewed, and several important modifications were needed; the CMS detector description software has been migrated to the community-developed DD4hep tool. We will report on our experience obtained during the process of...
Olivier Rousselle (Laboratoire Kastler Brossel (FR)) | 18/05/2021, 11:29 | Offline Computing | Short Talk
The modelling of Cherenkov-based detectors is traditionally done using the Geant4 toolkit. In this work, we present an alternative method based on the Python programming language and the Numba high-performance compiler to speed up the simulation. As an example we take one of the Forward Proton Detectors at the CERN LHC, the ATLAS Forward Proton (AFP) Time-of-Flight detector, which is used to reduce the background from...
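The Numba approach described in this abstract can be illustrated with a minimal, self-contained sketch. This is not the actual AFP Time-of-Flight code: the toy geometry, constants, and function names below are invented for illustration. The `@njit` decorator compiles the photon loop to machine code when Numba is installed; a plain-Python fallback keeps the sketch runnable without it.

```python
import numpy as np

try:
    from numba import njit  # JIT-compiles the hot loop when Numba is installed
except ImportError:
    def njit(func):          # fallback: run the same code as plain Python
        return func

C_M_PER_S = 2.998e8  # speed of light in vacuum

@njit
def arrival_times(n_photons, path_m, n_refractive):
    """Toy time-of-flight: photons traverse a bar of length path_m at
    speed c/n; the path is smeared by up to 1% to mimic emission-point
    spread. Illustrative model only, not the AFP detector geometry."""
    v = C_M_PER_S / n_refractive
    out = np.empty(n_photons)
    for i in range(n_photons):
        smeared_path = path_m * (1.0 + 0.01 * np.random.random())
        out[i] = smeared_path / v
    return out

times = arrival_times(100_000, 0.05, 1.48)  # 5 cm quartz-like bar
```

With Numba present, the first call pays a one-off compilation cost and subsequent calls run the loop at compiled speed; the physics content is unchanged either way.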
Yo Sato (Tohoku University) | 18/05/2021, 11:42 | Offline Computing | Short Talk
The Belle II experiment is an upgrade of the Belle experiment, located at the SuperKEKB facility at KEK in Tsukuba, Japan. The Belle II software is completely new and is used for everything from triggering and Monte Carlo event generation to tracking, clustering, and high-level analysis. One important feature is the matching between the combinations of reconstructed objects which form...
Swagato Banerjee (University of Louisville (US)) | 18/05/2021, 15:00 | Offline Computing | Short Talk
The SuperKEKB/Belle II experiment expects to collect 50 $\mathrm{ab}^{-1}$ of collision data during the next decade. Study of this data requires monumental computing resources to process and to generate the required simulation events necessary for physics analysis. At the core of the Belle II simulation library is the Geant4 toolkit. To use the available computing resources more efficiently,...
Sunanda Banerjee (Fermi National Accelerator Lab. (US)) | 18/05/2021, 15:13 | Offline Computing | Short Talk
CMS tuned its simulation program and chose a specific physics model of Geant4 by comparing the simulation results with dedicated test beam experiments. Test beam data provide measurements of energy response of the calorimeter as well as resolution for well identified charged hadrons over a large energy region. CMS continues to validate the physics models using the test beam data as well as...
Martina Javurkova (University of Massachusetts (US)) | 18/05/2021, 15:26 | Offline Computing | Short Talk
The ATLAS experiment relies heavily on simulated data, requiring the production of billions of Monte Carlo proton-proton collisions every run period. As such, the simulation of collisions (events) is the single biggest consumer of CPU resources. ATLAS's finite computing resources are at odds with the expected conditions of the High-Luminosity LHC era, where the increase in...
Ross John Hunter (University of Warwick (GB)) | 18/05/2021, 15:39 | Online Computing | Short Talk
Upon its restart in 2022, the LHCb experiment at the LHC will run at higher instantaneous luminosity and utilize an unprecedented full-software trigger, promising greater physics reach and efficiency. On the flip side, conforming to offline data storage constraints becomes far more challenging. Both of these considerations necessitate a set of highly optimised trigger selections. We therefore...
Dr Andreas Ralph Redelbach (Goethe University Frankfurt (DE)) | 18/05/2021, 15:52 | Online Computing | Short Talk
Future operation of the CBM detector requires ultra-fast analysis of the continuous stream of data from all subdetector systems. Determining the inter-system time shifts among individual detector systems in the existing prototype experiment Mini-CBM is an essential step for data processing and in particular for stable data taking. Based on the input of raw measurements from all detector...
Mohan Krishnamoorthy (Argonne National Laboratory) | 18/05/2021, 16:05 | Offline Computing | Short Talk
Apprentice is a tool developed for event-generator tuning. It contains a range of conceptual improvements and extensions over the tuning tool Professor. Its core functionality remains the construction of a multivariate analytic surrogate model of computationally expensive Monte Carlo event-generator predictions. The surrogate model is used for numerical optimization in chi-square...
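The surrogate-tuning workflow that tools like Apprentice automate can be sketched in a few lines. This is a toy stand-in, not Apprentice's actual API: the generator, bins, anchor points, and parameter range below are all invented. The pattern is: run the expensive generator at a few anchor points, fit a cheap per-bin polynomial surrogate, then minimize chi-square on the surrogate instead of the generator.

```python
import numpy as np

# Stand-in for an expensive MC generator: predicts 3 histogram bins
# as a function of one tune parameter p (purely illustrative).
def expensive_generator(p):
    return np.array([2.0 + 0.5 * p, 1.0 + p ** 2, 3.0 - 0.3 * p])

# "Data" to tune against, with per-bin uncertainties.
data = expensive_generator(1.2)          # pretend the truth is p = 1.2
sigma = np.array([0.1, 0.1, 0.1])

# 1) Run the generator at a handful of anchor points.
anchors = np.linspace(0.0, 2.0, 5)
samples = np.array([expensive_generator(p) for p in anchors])

# 2) Fit a quadratic polynomial surrogate per bin.
coeffs = [np.polyfit(anchors, samples[:, b], deg=2) for b in range(3)]

def surrogate(p):
    return np.array([np.polyval(c, p) for c in coeffs])

# 3) Minimize chi-square on the cheap surrogate (a fine grid scan here;
#    a real tool would hand this to a numerical optimizer).
grid = np.linspace(0.0, 2.0, 2001)
chi2 = [np.sum(((surrogate(p) - data) / sigma) ** 2) for p in grid]
best_p = grid[int(np.argmin(chi2))]
```

The payoff is that step 3 evaluates only cheap polynomials: thousands of chi-square evaluations cost no further generator runs beyond the five anchors.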
Mr Pavel Kisel (Uni-Frankfurt, JINR) | 19/05/2021, 10:50 | Online Computing | Short Talk
As part of the FAIR Phase-0 program, the fast FLES (First-Level Event Selection) package algorithms developed for the CBM experiment (FAIR/GSI, Germany) have been adapted for online and offline processing in the STAR experiment (BNL, USA). Using the same algorithms creates a bridge between online and offline modes, which allows online and offline resources to be combined for data processing. Thus,...
Anna Alicke (Forschungszentrum Jülich) | 19/05/2021, 11:03 | Online Computing | Short Talk
The PANDA experiment at FAIR (Facility for Antiproton and Ion Research) in Darmstadt is currently under construction. In order to reduce the amount of data collected during operation, it is essential to find all true tracks and to be able to distinguish them from false tracks. Part of the preparation for the experiment is therefore the development of a fast online track finder. This work...
Dr Leonardo Cristella (CERN) | 19/05/2021, 11:16 | Offline Computing | Short Talk
To sustain the harsher conditions of the high-luminosity LHC, the CMS collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly-granular information in the readout to help mitigate the effects of pileup. In regions characterised by lower radiation levels, small...
Tadeas Bilka (Charles University) | 19/05/2021, 11:29 | Offline Computing | Short Talk
The alignment of the Belle II tracking system, composed of pixel and strip vertex detectors and a central drift chamber, is described by approximately 60,000 parameters. These include internal local alignment (positions, orientations, and surface deformations of the silicon sensors, and positions of the drift-chamber wires) as well as global alignment: relative positions of the sub-detectors and larger...
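As a much-simplified illustration of track-based alignment (nothing like the full 60,000-parameter Belle II fit: the geometry and numbers are invented, and the track parameters are assumed known from a reference rather than fitted simultaneously with the alignment), sensor offsets can be estimated from the mean track residual on each plane:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setup: 3 sensor planes with unknown transverse offsets (the
# alignment parameters). Straight tracks y(z) = a + b*z cross all
# planes; here the track parameters a, b are taken as known.
true_offsets = np.array([0.020, -0.010, 0.030])    # mm, hypothetical
z = np.array([1.0, 2.0, 3.0])

n_tracks = 500
a = rng.normal(0.0, 1.0, n_tracks)                 # track intercepts
b = rng.normal(0.0, 0.5, n_tracks)                 # track slopes
hit_noise = rng.normal(0.0, 0.005, (n_tracks, 3))  # 5 um resolution

# Measured hit = track prediction + sensor offset + noise.
hits = a[:, None] + b[:, None] * z[None, :] + true_offsets + hit_noise

# Least-squares estimate of each offset = mean residual on that plane.
residuals = hits - (a[:, None] + b[:, None] * z[None, :])
fitted_offsets = residuals.mean(axis=0)
```

In a real detector the track and alignment parameters must be fitted simultaneously, which is what drives the problem to tens of thousands of coupled parameters.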
Zachary Michael Schillaci (Brandeis University (US)) | 19/05/2021, 11:42 | Offline Computing | Short Talk
This talk summarises the main changes to the ATLAS experiment’s Inner Detector Track reconstruction software chain in preparation of LHC Run 3 (2022-2024). The work was carried out to ensure that the expected high-activity collisions with on average 50 simultaneous proton-proton interactions per bunch crossing (pile-up) can be reconstructed promptly using the available computing resources....
Mr Anton Philippov (HSE) | 19/05/2021, 11:55 | Offline Computing | Short Talk
The common approach to constructing a classifier for particle selection assumes reasonable consistency between the training data samples and the target data sample used for a particular analysis. However, training and target data may have very different properties, such as the energy spectra of the signal and background contributions. We suggest using an ensemble of pre-trained classifiers, each of which is...
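The ensemble idea can be sketched as follows. This is a hypothetical toy, not the authors' method: the classifiers, the weighting scheme, and the validation set are invented. Each pre-trained classifier is weighted by how well it performs on a small labeled subset drawn from the target sample, so classifiers trained under conditions closer to the target dominate the combination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "pre-trained classifiers", each a simple threshold on a feature x,
# trained on samples with different energy spectra (invented stand-ins).
def clf_low_energy(x):
    return (x > 0.3).astype(float)   # score in {0, 1}

def clf_high_energy(x):
    return (x > 0.7).astype(float)

# Small labeled validation set drawn from the *target* sample.
x_val = rng.uniform(0.0, 1.0, 1000)
y_val = (x_val > 0.7).astype(float)  # target truth matches the high-energy regime

# Weight each classifier by its validation accuracy on the target sample.
clfs = [clf_low_energy, clf_high_energy]
acc = np.array([np.mean(c(x_val) == y_val) for c in clfs])
weights = acc / acc.sum()

def ensemble_score(x):
    # Weighted average of the pre-trained classifiers' scores.
    return sum(w * c(x) for w, c in zip(weights, clfs))
```

A real implementation would use proper discriminants and a more careful weighting (e.g. per-event or per-spectrum weights), but the structure, reusing fixed pre-trained models and adapting only the combination to the target sample, is the same.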