Conveners
T2 - Offline computing: S1
- Victor Daniel Elvira (Fermi National Accelerator Lab. (US))
T2 - Offline computing: S2
- Lucia Grillo (University of Manchester (GB))
T2 - Offline computing: S3
- Victor Daniel Elvira (Fermi National Accelerator Lab. (US))
T2 - Offline computing: S4
- Gene Van Buren (Brookhaven National Laboratory)
T2 - Offline computing: S5
- Lucia Grillo (University of Manchester (GB))
T2 - Offline computing: S6
- Gene Van Buren (Brookhaven National Laboratory)
T2 - Offline computing: S7
- Lucia Grillo (University of Manchester (GB))
Faster alternatives to a full, Geant4-based simulation are being pursued within the LHCb experiment. In this context, the integration of the Delphes toolkit into the LHCb simulation framework is intended to provide a fully parameterized option.
Delphes is a modular software package designed for general-purpose experiments such as ATLAS and CMS to quickly propagate stable particles using a parametric...
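As a minimal illustration of the parametric approach (a generic sketch with made-up resolution values, not Delphes or LHCb code), a fully parameterized simulation replaces detailed particle transport by smearing the true kinematics with a parameterized detector resolution:

```cpp
// Minimal sketch of parametric fast simulation: instead of transporting a
// particle through a detailed detector model, smear its true transverse
// momentum with a parameterized resolution. The resolution function and its
// values are illustrative assumptions, not LHCb or Delphes parameters.
#include <cmath>
#include <iostream>
#include <random>

// Hypothetical resolution: sigma(pT)/pT = a and b*pT added in quadrature.
double relativeResolution(double pt, double a = 0.005, double b = 0.0002) {
    return std::sqrt(a * a + b * b * pt * pt);
}

int main() {
    std::mt19937 rng(42);
    const double truePt = 25.0;  // GeV, "generator-level" value
    std::normal_distribution<double> smear(1.0, relativeResolution(truePt));

    // The "reconstructed" pT is the true value multiplied by a Gaussian factor.
    for (int i = 0; i < 3; ++i)
        std::cout << "smeared pT = " << truePt * smear(rng) << " GeV\n";
}
```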
ATLAS relies on very large samples of simulated events to deliver high-quality
and competitive physics results, but producing these samples is very time consuming
and CPU intensive when using the full Geant4 detector simulation.
Fast simulation tools are a useful way of reducing CPU requirements when detailed
detector simulations are not needed. During the LHC Runs 1 and 2, a...
In HEP experiments, the CPU resources required by MC simulations are constantly growing and now account for a very large fraction of the total computing power (greater than 75%). At the same time, the pace of performance improvements delivered by technology is slowing down, so the only solution is a more efficient use of resources. Efforts are ongoing in the LHC experiment collaborations to provide multiple...
The goal of obtaining more precise physics results in current collider experiments drives the plans to significantly increase the instantaneous luminosity at which the experiments collect data. The increasing complexity of the events due to the resulting increased pileup requires new approaches to triggering, reconstruction, analysis,
and event simulation. The last task leads to a critical problem:...
Machine Learning techniques have been used in different applications by the HEP community: in this talk, we discuss the case of detector simulation. The number of simulated events expected in the future for the LHC experiments and their High-Luminosity upgrades is increasing dramatically and requires new fast simulation solutions. We will describe an R&D activity aimed at providing a...
In the context of the common online-offline computing infrastructure for Run 3 (ALICE-O2), ALICE is reorganizing its detector simulation software to be based on FairRoot, which offers a common toolkit to implement simulation based on the Virtual Monte Carlo (VMC) scheme. Recently, FairRoot has been augmented by ALFA, a software framework developed in collaboration between ALICE and FAIR, offering...
Detector simulation has become fundamental to the success of modern high-energy physics (HEP) experiments. For example, the Geant4-based simulation applications developed by the ATLAS and CMS experiments played a major role in enabling them to produce physics measurements of unprecedented quality and precision with faster turnaround, from data taking to journal submission, than any previous hadron...
The CMS full simulation using Geant4 has delivered billions of simulated events for analysis during Runs 1 and 2 of the LHC. However, the HL-LHC dataset will be an order of magnitude larger, with a similar increase in occupancy per event. In addition, the upgraded CMS detector will be considerably more complex, with an extended silicon tracker and a high granularity calorimeter in the endcap...
The high-luminosity data produced by the LHC leads to many proton-proton interactions per beam
crossing in ATLAS, known as pile-up. In order to understand the ATLAS data and extract the physics
results, it is important to model these effects accurately in the simulation. As the pile-up level continues
to grow towards an eventual average of 200 interactions per crossing for the HL-LHC, this puts increasing demands on...
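As a schematic of how such pile-up is emulated in simulation (a generic sketch under simplifying assumptions, not the ATLAS digitization code: the Poisson model, the toy hit generator and all numbers are illustrative), minimum-bias interactions are overlaid on the hard-scatter event, with their number drawn per bunch crossing from a Poisson distribution of mean mu:

```cpp
// Generic sketch of pile-up overlay: draw the number of additional minimum-bias
// interactions from a Poisson distribution with mean mu and merge their hits
// with those of the hard-scatter event. Illustrative only, not ATLAS code.
#include <iostream>
#include <random>
#include <vector>

struct Hit { int channel; double energy; };

// Hypothetical stand-in for the hits produced by one minimum-bias interaction.
std::vector<Hit> minimumBiasHits(std::mt19937& rng) {
    std::uniform_int_distribution<int> channel(0, 999);
    std::exponential_distribution<double> energy(1.0);
    std::vector<Hit> hits(10);
    for (auto& h : hits) h = {channel(rng), energy(rng)};
    return hits;
}

int main() {
    std::mt19937 rng(7);
    const double mu = 200.0;  // HL-LHC-like mean number of interactions per crossing
    std::poisson_distribution<int> nPileup(mu);

    std::vector<Hit> event;  // in a real workflow this starts from the hard-scatter hits
    const int n = nPileup(rng);
    for (int i = 0; i < n; ++i) {
        const auto hits = minimumBiasHits(rng);
        event.insert(event.end(), hits.begin(), hits.end());
    }
    std::cout << "overlaid " << n << " pile-up interactions, "
              << event.size() << " extra hits\n";
}
```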
To address the challenges of the major upgrade of the experiment, the ALICE simulations must be able to make efficient use of computing and opportunistic supercomputing resources available on the GRID. The Geant4 transport package, the performance of which has been demonstrated in a hybrid multithreading (MT) and multiprocessing (MPI) environment with up to ¼ million threads, is therefore of a...
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment. It consists of a central detector, a water pool and a top tracker. The central detector, which is used for neutrino detection, consists of 20 kt of liquid scintillator (LS) and about 18,000 20-inch photomultiplier tubes (PMTs) to collect light from the LS.
Simulation software is one of the important parts...
The NOvA experiment is a two-detector, long-baseline neutrino experiment operating since 2014 in the NuMI muon neutrino beam (FNAL, USA). NOvA has already collected about 25% of its expected statistics in both neutrino and antineutrino modes for electron-neutrino appearance and muon-neutrino disappearance analyses. Careful simulation of neutrino events and backgrounds is required for precise...
The increase in luminosity foreseen in the future years of operation of the Large Hadron Collider (LHC) creates new challenges in computing efficiency for all participating experiments. These new challenges extend beyond data taking alone, because data analyses require more and more simulated events, whose creation already takes a large fraction of the overall computing resources. For Run 3...
The design of readout electronics for the LAr calorimeters of the ATLAS detector to be operated at the future High-Luminosity LHC (HL-LHC) requires a detailed simulation of the full readout chain in order to find optimal solutions for the analog and digital processing of the detector signals. Due to the long duration of the LAr calorimeter pulses relative to the LHC bunch crossing time,...
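A minimal sketch of the effect being simulated (generic and illustrative: the pulse shape, time constant and energies below are assumptions, not the actual LAr electronics model): because the pulse extends over many 25 ns bunch crossings, each digitized sample is the superposition of pulses from several preceding crossings.

```cpp
// Schematic of out-of-time pile-up in a slow calorimeter readout: each digitized
// sample is a sum of pulses from earlier bunch crossings, because the pulse is
// much longer than the 25 ns bunch spacing. The bipolar pulse shape and the
// energies used here are purely illustrative.
#include <cmath>
#include <iostream>
#include <vector>

// Hypothetical normalized pulse shape g(t), t in ns, nonzero only for t >= 0.
double pulseShape(double t) {
    if (t < 0.0) return 0.0;
    const double tau = 100.0;                             // illustrative time constant
    return (1.0 - t / (4.5 * tau)) * std::exp(-t / tau);  // crude bipolar shape
}

int main() {
    const double bunchSpacing = 25.0;  // ns
    // Energy deposited in one cell at consecutive bunch crossings (illustrative).
    const std::vector<double> energyPerBC = {0.0, 5.0, 0.3, 0.0, 12.0, 0.1, 0.0, 0.4};

    // Digitized samples every 25 ns: superposition of all earlier pulses.
    for (int s = 0; s < 8; ++s) {
        double sample = 0.0;
        for (std::size_t bc = 0; bc < energyPerBC.size(); ++bc)
            sample += energyPerBC[bc] * pulseShape((s - static_cast<int>(bc)) * bunchSpacing);
        std::cout << "sample " << s << " = " << sample << "\n";
    }
}
```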
The LHCb experiment is a fully instrumented forward spectrometer designed for
precision studies in the flavour sector of the Standard Model with proton-proton
collisions at the LHC. As part of its expanding physics programme, LHCb also collected data during LHC proton-nucleus collisions in 2013 and 2016 and
during nucleus-nucleus collisions in 2015. All the collected datasets are...
Opticks is an open source project that integrates the NVIDIA OptiX
GPU ray tracing engine with Geant4 toolkit-based simulations.
Massive parallelism brings drastic performance improvements, with optical photon simulation speedups expected to exceed a factor of 1000 over Geant4 on workstation GPUs.
Optical physics processes, namely scattering, absorption, reemission and
boundary processes, are implemented...
We report developments for the Geant4 electromagnetic (EM) physics sub-packages for Geant4 release 10.4 and beyond. Modifications are introduced to the models of photo-electric effect, bremsstrahlung, gamma conversion, and multiple scattering. Important developments for calorimetry applications were carried out for the modeling of single and multiple scattering of charged particles....
The development of the GeantV Electromagnetic (EM) physics package has evolved following two necessary paths towards code modernization. A first phase required the revision of the main electromagnetic physics models and their implementation. The main objectives were to improve their accuracy, extend them to the new high-energy frontiers posed by the Future Circular Collider (FCC) programme and...
SIMD acceleration can potentially boost application throughput by large factors. However, achieving efficient SIMD vectorization for scalar code with complex data flow and branching logic goes well beyond breaking loop dependencies and relying on the compiler. Since the re-factoring effort scales with the number of lines of code, it is important to understand what kind of performance gains can...
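As a minimal illustration of the kind of re-factoring involved (a generic example using SSE intrinsics directly; the work described above targets higher-level abstractions, and this snippet is not taken from it), a per-element branch is replaced by computing both outcomes and blending them with a lane-wise mask:

```cpp
// Illustration of removing a data-dependent branch so four floats can be
// processed per instruction with SSE. Compile with -msse4.1.
// Generic example, not code from the work described above.
#include <immintrin.h>
#include <cstdio>

// Scalar version: the per-element branch hinders automatic vectorization.
float scalarStep(float x) { return (x < 1.0f) ? x * x : 2.0f * x - 1.0f; }

// Vector version: evaluate both branches and select per lane with a mask.
void simdStep(const float* in, float* out, int n) {
    const __m128 one = _mm_set1_ps(1.0f);
    const __m128 two = _mm_set1_ps(2.0f);
    for (int i = 0; i + 4 <= n; i += 4) {
        __m128 x    = _mm_loadu_ps(in + i);
        __m128 a    = _mm_mul_ps(x, x);                    // result if x < 1
        __m128 b    = _mm_sub_ps(_mm_mul_ps(two, x), one); // result otherwise
        __m128 mask = _mm_cmplt_ps(x, one);                // lane-wise x < 1
        _mm_storeu_ps(out + i, _mm_blendv_ps(b, a, mask)); // pick a where mask is set
    }
}

int main() {
    float in[4] = {0.5f, 1.5f, 0.9f, 3.0f}, out[4];
    simdStep(in, out, 4);
    for (int i = 0; i < 4; ++i)
        std::printf("%g -> %g (scalar %g)\n", in[i], out[i], scalarStep(in[i]));
}
```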
The majority of currently planned or considered hadron colliders are expected to deliver data in collisions with hundreds of simultaneous interactions per beam bunch crossing on average, including the high-luminosity LHC upgrade currently in preparation and the possible high-energy LHC upgrade or a future circular collider FCC-hh. Running charged-particle track reconstruction for the general...
The multi-purpose R$^{3}$B (Reactions with Relativistic Radioactive Beams) detector at the future FAIR facility in Darmstadt will be used for various experiments with exotic beams in inverse kinematics. The two-fold setup will serve for particle identification and momentum measurement upstream and downstream of the secondary reaction target. In order to perform a high-precision charge identification...
The High-Luminosity Large Hadron Collider (HL-LHC) at CERN will be characterized by higher event rate, greater pileup of events, and higher occupancy. Event reconstruction will therefore become far more computationally demanding, and given recent technology trends, the extra processing capacity will need to come from expanding the parallel capabilities in the tracking software. Existing...
One of the tasks of track reconstruction for the COMET Phase-I drift chamber is to fit multi-turn curling tracks. A method based on the Deterministic Annealing Filter (DAF), which implements a global competition between hits from different turns, is introduced. This method assigns the detector measurements to the track hypothesis based on the weighted mean of the fitting quality on different turns. This method is...
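A common formulation of the annealed assignment weights in a Deterministic Annealing Filter (quoted here as a generic illustration with temperature $T$ and cut parameter $\chi^2_{\mathrm{cut}}$; the exact expression used in this work may differ) is

$$p_{ik} = \frac{\exp\left(-\chi^2_{ik}/2T\right)}{\sum_{j}\exp\left(-\chi^2_{ij}/2T\right) + \exp\left(-\chi^2_{\mathrm{cut}}/2T\right)},$$

where $\chi^2_{ik}$ is the standardized residual of measurement $i$ with respect to the track hypothesis on turn $k$. The temperature is lowered iteratively so that the soft competition between turns gradually hardens into a definite hit-to-turn assignment.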
In early 2018, e+e- collisions at the SuperKEKB B-Factory will be recorded by the Belle II detector in Tsukuba (Japan) for the first time. The new accelerator and detector represent a major upgrade from the previous Belle experiment and will achieve a 40-times higher instantaneous luminosity. Special considerations and challenges arise for track reconstruction at Belle II due to multiple...
The Belle II experiment is ready to take data in 2018, studying e+e- collisions at the KEK facility in Tsukuba (Japan), in a center-of-mass energy range covering the bottomonium states. The tracking system includes a combination of hit measurements coming from the vertex detector, made of pixel detectors and double-sided silicon strip detectors, and a central drift chamber, inside a solenoid of 1.5...
CMS offline event reconstruction algorithms cover simulated and acquired data processing starting from the detector raw data on input and providing high level reconstructed objects suitable for analysis. The landscape of supported data types and detector configuration scenarios has been expanding and covers the past and expected future configurations including proton-proton collisions and...
Development of the JANA multi-threaded event processing framework began in 2005. Its primary application has been GlueX, a major Nuclear Physics experiment at Jefferson Lab. Production data taking began in 2016 and JANA has been highly successful in analyzing that data on the JLab computing farm. Work has now begun on JANA2, a near complete rewrite emphasizing features targeted for large...
The upcoming PANDA at FAIR experiment in Darmstadt, Germany will belong to a new generation of accelerator-based experiments relying exclusively on software filters for data selection. Due to the similarity of signal and background as well as the multitude of investigated physics channels, this paradigm shift is driven by the need to have full and precise information from all detectors in...
CMS has worked aggressively to make use of multi-core architectures, routinely running 4 to 8 core production jobs in 2017. The primary impediment to efficiently scaling beyond 8 cores has been our ROOT-based output module, which has been necessarily single threaded. In this presentation we explore the changes made to the CMS framework and our ROOT output module to overcome the previous...
The Cherenkov Telescope Array (CTA) is the next generation of ground-based gamma-ray telescopes for gamma-ray astronomy. Two arrays will be deployed, composed of 19 telescopes in the Northern hemisphere and 99 telescopes in the Southern hemisphere. Observatory operations are planned to start in 2021, but first data from prototypes should be available already in 2019. Due to its very high...
In 2017, NA62 recorded over a petabyte of raw data, collecting around a billion events per day of running. Data are collected in bursts of 3-5 seconds, producing output files of a few gigabytes. A typical run, a sequence of bursts with the same detector configuration and similar experimental conditions, contains 1500 bursts and constitutes the basic unit for offline data processing. A...
We present a range of conceptual improvements and extensions to the popular
tuning tool "Professor".
Its core functionality remains the construction of multivariate analytic
approximations to an otherwise computationally expensive function. A typical
example would be histograms obtained from Monte-Carlo (MC) event generators for
standard model and new physics processes.
The fast Professor...
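The core idea can be sketched as follows (a deliberately simplified, single-parameter example in plain C++; Professor itself handles many parameters, many histogram bins and higher polynomial orders, and the toy function below is an assumption for illustration): sample the expensive function at a few parameter points, fit a low-order polynomial, and use the cheap polynomial in its place during tuning.

```cpp
// Minimal sketch of the surrogate idea behind tools like Professor: sample an
// expensive function (a stand-in for an MC-generator prediction of one histogram
// bin) at a few parameter points, fit a quadratic to the samples, and evaluate
// the cheap polynomial instead of the generator. Illustrative only.
#include <array>
#include <cmath>
#include <iostream>
#include <vector>

// Stand-in for an expensive MC prediction of one bin versus a tune parameter p.
double expensiveBinValue(double p) { return 3.0 + 0.7 * p - 0.2 * p * p; }

// Fit y = c0 + c1*p + c2*p^2 by solving the 3x3 normal equations.
std::array<double, 3> fitQuadratic(const std::vector<double>& p, const std::vector<double>& y) {
    double A[3][4] = {};  // augmented matrix [N | b]
    for (std::size_t n = 0; n < p.size(); ++n)
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) A[r][c] += std::pow(p[n], r + c);
            A[r][3] += y[n] * std::pow(p[n], r);
        }
    for (int r = 0; r < 3; ++r)                    // forward elimination
        for (int rr = r + 1; rr < 3; ++rr) {
            const double f = A[rr][r] / A[r][r];
            for (int c = r; c < 4; ++c) A[rr][c] -= f * A[r][c];
        }
    std::array<double, 3> coef{};
    for (int r = 2; r >= 0; --r) {                 // back substitution
        double s = A[r][3];
        for (int c = r + 1; c < 3; ++c) s -= A[r][c] * coef[c];
        coef[r] = s / A[r][r];
    }
    return coef;
}

int main() {
    const std::vector<double> params = {-2, -1, 0, 1, 2, 3};  // sampled anchor points
    std::vector<double> values;
    for (double p : params) values.push_back(expensiveBinValue(p));

    const auto c = fitQuadratic(params, values);
    const double p = 1.7;  // the surrogate is now cheap to evaluate anywhere in range
    std::cout << "surrogate(" << p << ") = " << c[0] + c[1] * p + c[2] * p * p
              << "  vs expensive = " << expensiveBinValue(p) << "\n";
}
```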
We describe the CMS computing model for MC event generation, and technical integration and workflows for generator tools in CMS. We discuss the most commonly used generators, standard configurations, their event tunes, and the technical performance of these configurations for Run II as well as the needs for Run III.
The detector description is an essential component for analysing data resulting from particle collisions in high-energy physics experiments.
The interpretation of these data typically requires additional long-lived data which describe in detail the state of the experiment itself. Such accompanying data include alignment parameters, the electronics calibration and their...
VecGeom is a multi-purpose geometry library targeting the optimisation of the 3D-solid algorithms used extensively in particle transport and tracking applications. As a particular feature, the implementations of these algorithms are templated on the input data type and are explicitly vectorised using the VecCore library in the case of SIMD vector inputs. This provides additional performance for...
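The templating idea can be sketched as follows (a generic illustration, not VecGeom code: here the template is instantiated with plain scalar types, whereas in VecGeom the parameter would be a VecCore backend type whose operators, and whose math functions such as square root, act on whole SIMD lanes):

```cpp
// Sketch of writing a geometry primitive once, templated on the input data type.
// With T = double it runs as ordinary scalar code; with a SIMD vector type that
// provides the same operators and math functions, the identical source would
// process several tracks per call. Generic illustration, not VecGeom code.
#include <cmath>
#include <iostream>

template <typename T>
struct Vec3 { T x, y, z; };

// Distance from a point, along a unit direction, to a sphere of radius R centred
// at the origin (no miss handling, kept minimal for illustration).
template <typename T>
T distanceToSphere(const Vec3<T>& pos, const Vec3<T>& dir, T radius) {
    const T b = pos.x * dir.x + pos.y * dir.y + pos.z * dir.z;               // p . d
    const T c = pos.x * pos.x + pos.y * pos.y + pos.z * pos.z - radius * radius;
    return -b - std::sqrt(b * b - c);  // smaller root of the quadratic
}

int main() {
    const Vec3<double> p{0.0, 0.0, -5.0}, d{0.0, 0.0, 1.0};
    std::cout << "distance (double) = " << distanceToSphere(p, d, 1.0) << "\n";  // expect 4

    const Vec3<float> pf{0.f, 0.f, -5.f}, df{0.f, 0.f, 1.f};
    std::cout << "distance (float)  = " << distanceToSphere(pf, df, 1.f) << "\n";
}
```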
This paper is dedicated to the current state of the Geometry Database (Geometry DB) for the CBM experiment. The Geometry DB is an information system that supports the CBM geometry. The main aims of the Geometry DB are to provide storage of the CBM geometry, convenient tools for managing the geometry modules, and means of assembling various versions of the CBM setup as a combination of geometry modules and...
ATLAS is embarking on a project to multithread its reconstruction software in time for use in Run 3 of the LHC. One component that must be migrated is the histogramming infrastructure used for data quality monitoring of the reconstructed data. This poses unique challenges due to its large memory footprint which forms a bottleneck for parallelization and the need to accommodate relatively...
The Data Quality Monitoring Software is a central tool in the CMS experiment. It is used in the following key environments: 1) Online, for real-time detector monitoring; 2) Offline, for the prompt-offline-feedback and final fine-grained data quality analysis and certification; 3) Validation of all the reconstruction software production releases; 4) Validation in Monte Carlo productions. Though...
Monte-Carlo simulation is a fundamental tool for high-energy physics experiments, from the design phase to data analysis. In recent years its relevance has increased due to the ever-growing precision of the measurements. Accuracy and reliability are essential features in simulation and are particularly important in the current phase of the LHCb experiment, where physics analysis and preparation for data...
Good quality track visualization is an important aspect of every High-Energy Physics experiment, where it can be used for quick assessment of recorded collisions. The event display, operated in the Control Room, is also important for visitors and increases public recognition of the experiment. Especially in the case of the ALICE detector at the Large Hadron Collider (LHC), which reconstructs...
Until recently, the direct visualization of the complete ATLAS experiment geometry and final analysis data was confined within the software framework of the experiment.
To provide a detailed interactive data visualization capability to users, as well as easy access to geometry data, and to ensure platform independence and portability, great effort has recently been put into the modernization...
The Belle II experiment, based in Japan, is designed for precise measurements of B and C meson decays as well as $\tau$ decays and is intended to play an important role in the search for physics beyond the Standard Model. To visualize the collected data, amongst other things, virtual reality (VR) applications are used within the collaboration. In addition to the already existing VR application...
Interactive 3D data visualization plays a key role in HEP experiments, as it is used in many tasks at different levels of the data chain. Outside HEP, for interactive 3D graphics, the game industry makes heavy use of so-called “game engines”, modern software frameworks offering an extensive set of powerful graphics tools and cross-platform deployment. Recently, very strong support for...
One of the big challenges in High Energy Physics development is the fact that many potential, and very valuable, students and young researchers live in countries where internet access and computational infrastructure are poor compared to those of the institutions already participating.
In order to accelerate the process, the ATLAS Open Data project releases useful and meaningful data and tools using...