Conveners
Track 2: Offline Computing: 2.1
- Deborah Bard (LBL)
- John Marshall (University of Cambridge)
- David Lange (Princeton University (US))
Track 2: Offline Computing: 2.2
- Deborah Bard (LBL)
- David Lange (Princeton University (US))
- John Marshall (University of Cambridge)
Track 2: Offline Computing: 2.3
- Sunanda Banerjee (Fermi National Accelerator Lab. (US))
Track 2: Offline Computing: 2.4
- Frank-Dieter Gaede (Deutsches Elektronen-Synchrotron (DE))
Track 2: Offline Computing: 2.5
- David Lange (Princeton University (US))
- John Marshall (University of Cambridge)
- Deborah Bard (LBL)
Track 2: Offline Computing: 2.6
- Deborah Bard (LBL)
- David Lange (Princeton University (US))
- John Marshall (University of Cambridge)
Track 2: Offline Computing: 2.7
- John Marshall (University of Cambridge)
- David Lange (Princeton University (US))
- Deborah Bard (LBL)
ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has long been recognised as increasingly problematic as CPU core counts have grown while the available memory per core has decreased. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we...
In 2015, CMS was the first LHC experiment to begin using a multi-threaded framework for event processing. This new framework uses Intel's Threading Building Blocks (TBB) library to manage concurrency via a task-based processing model. During the 2015 LHC run period, CMS ran only its reconstruction jobs with multiple threads, because only those jobs were sufficiently thread-efficient. Recent work...
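As a rough illustration of the task-based model described above (a minimal sketch, not CMSSW code; Event and processEvent are hypothetical stand-ins), a TBB parallel loop over a batch of events might look like this:

    // Illustrative sketch only, assuming Intel TBB is available.
    #include <cstddef>
    #include <vector>
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>

    struct Event { /* event data */ };

    void processEvent(Event& ev) { /* hypothetical: run the module chain on one event */ }

    void processAll(std::vector<Event>& events) {
      // TBB splits the range into tasks and schedules them on its worker threads.
      tbb::parallel_for(tbb::blocked_range<std::size_t>(0, events.size()),
                        [&](const tbb::blocked_range<std::size_t>& r) {
                          for (std::size_t i = r.begin(); i != r.end(); ++i)
                            processEvent(events[i]);
                        });
    }

In the actual framework the scheduled tasks are finer-grained than whole events (individual module invocations), which is what allows concurrency both across and within events.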
The Future Circular Collider (FCC) software effort supports the design studies of the different experiments for the three future collider options: hadron-hadron, electron-positron and electron-hadron. The software framework used by data processing applications has to be independent of the detector layout and the collider configuration. The project starts from the premise of using existing software...
LArSoft is an integrated, experiment-agnostic set of software tools for liquid argon (LAr) neutrino experiments
to perform simulation, reconstruction and analysis within the Fermilab art framework.
Along with common algorithms, the toolkit provides generic interfaces and extensibility
that accommodate the needs of detectors of very different size and configuration.
To date, LArSoft has been...
In 2012, CMS evaluated which underlying concurrency technology would be best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems that dominated the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building...
In preparation for the XENON1T Dark Matter data acquisition, we have
prototyped and implemented a new computing model. The XENON signal and data processing
software is developed fully in Python 3, and makes extensive use of generic scientific data
analysis libraries, such as the SciPy stack. A certain tension between modern “Big Data”
solutions and existing HEP frameworks is typically...
The Muon Ionization Cooling Experiment (MICE) is a proof-of-principle experiment designed to demonstrate muon ionization cooling for the first time. MICE is currently on Step IV of its data taking programme, where transverse emittance reduction will be demonstrated. The MICE Analysis User Software (MAUS) is the reconstruction, simulation and analysis framework for the MICE experiment. MAUS is...
The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer.
The Belle II conditions database was designed with a straightforward goal: make it as easily...
Since 2014, the ATLAS and CMS experiments have shared a common vision for the Conditions Database infrastructure required to handle the non-event data for the forthcoming LHC runs. The large overlap in use cases makes it possible to agree on a common overall design solution meeting the requirements of both experiments. A first prototype implementing these solutions was completed in 2015 and was...
Conditions data (for example: alignment, calibration, data quality) are used extensively in the processing of real and simulated data in ATLAS. The volume and variety of the conditions data needed by different types of processing are quite diverse, so optimizing its access requires a careful understanding of conditions usage patterns. These patterns can be quantified by mining representative...
The ATLAS EventIndex System has amassed a set of key quantities for a large number of ATLAS events into a Hadoop based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies and assess which best serve the various use cases as well as...
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB3) that manages users’ transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient...
This work reports on the activities of integrating Oracle and Hadoop technologies for CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. This is of interest for increasing scalability and reducing costs for some of our largest Oracle databases. These concepts have been applied, among others, to...
The Geant4 Collaboration released a new generation of the Geant4 simulation toolkit (version 10) in December 2013 and reported its new features at CHEP 2015. Since then, the Collaboration continues to improve its physics and computing performance and usability. This presentation will survey the major improvements made since version 10.0. On the physics side, it includes fully revised multiple...
The status of recent developments of the DELPHES C++ fast detector simulation framework will be given. New detector cards for the LHCb detector and prototypes for future e+ e- (ILC, FCC-ee) and p-p colliders at 100 TeV (FCC-hh) have been designed. The particle-flow algorithm has been optimised for high-multiplicity environments such as high-luminosity and boosted regimes. In addition, several...
Detector design studies, test beam analyses, or other small particle physics experiments require the simulation of more and more detector geometries and event types, while lacking the resources to build full-scale Geant4 applications from
scratch. Therefore, an easy-to-use yet flexible and powerful simulation program
that solves this common problem but can also be adapted to specific...
The GeantV simulation is a complex system based on the interaction of different modules needed for detector simulation, which include transportation (a heuristically managed mechanism of sets of predefined navigators), scheduling policies, physics models (cross-sections and reaction final states) and a geometrical modeler library with geometry algorithms. The GeantV project is recasting the...
Particle physics experiments make heavy use of the Geant4 simulation package to model interactions between subatomic particles and bulk matter. Geant4 itself employs a set of carefully validated physics models that span a wide range of interaction energies.
They rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many...
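As a generic illustration of how a measured cross-section enters a transport code (a textbook-level sketch, not Geant4 internals), the mean free path in a simple material and an exponentially sampled step length can be computed as follows:

    // Generic sketch: from a per-atom cross-section to a sampled interaction length.
    #include <cmath>
    #include <random>

    // sigma: cross-section per atom [cm^2], rho: density [g/cm^3], A: molar mass [g/mol]
    double meanFreePath(double sigma, double rho, double A) {
      const double NA = 6.022e23;        // Avogadro's number [1/mol]
      const double n  = NA * rho / A;    // atom number density [1/cm^3]
      return 1.0 / (n * sigma);          // lambda = 1 / (n * sigma), in cm
    }

    // The distance to the next interaction follows an exponential distribution.
    double sampleStep(double lambda, std::mt19937& rng) {
      std::uniform_real_distribution<double> u(0.0, 1.0);
      return -lambda * std::log(1.0 - u(rng));
    }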
Opticks is an open source project that integrates the NVIDIA OptiX
GPU ray tracing engine with Geant4 toolkit based simulations.
Massive parallelism brings drastic performance improvements, with
optical photon simulation speedups expected to exceed a factor of 1000
over Geant4 when using workstation GPUs. Optical photon simulation time becomes
effectively zero compared to the rest of the...
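To see why such a speedup makes the optical photon stage effectively free, consider the usual scaling estimate (the fraction p below is purely hypothetical): if optical photon propagation takes a fraction p of the total simulation time T and is accelerated by a factor s, the total becomes

    T' = (1 - p) * T + (p / s) * T

For example, with p = 0.99 and s = 1000, T' is about 0.011 T, so the remaining runtime is dominated entirely by the non-optical parts of the simulation.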
We report the current status of the CMS full simulation. For Run II, CMS is using Geant4 10.0p02 built in sequential mode; about 8 billion events were produced in 2015. In 2016, any additional production will be done using the same version. For development, Geant4 10.0p03 with CMS private patches, built in multi-threaded mode, has been established. We plan to use the newest Geant4 10.2 for 2017...
The ATLAS Simulation infrastructure has been used to produce upwards of 50 billion proton-proton collision events for analyses
ranging from detailed Standard Model measurements to searches for exotic new phenomena. In the last several years, the
infrastructure has been heavily revised to allow intuitive multithreading and significantly improved maintainability. Such a
massive update of a...
Software for the next generation of experiments at the Future Circular Collider (FCC) should by design efficiently exploit the available computing resources, and support for parallel execution is therefore a particular requirement. The simulation package of the FCC Common Software Framework (FCCSW) makes use of the Gaudi parallel data processing framework and external packages commonly used in...
For some physics processes studied with the ATLAS detector, a more
accurate simulation in some respects can be achieved by including real
data into simulated events, with substantial potential improvements in the CPU,
disk space, and memory usage of the standard simulation configuration,
at the cost of significant database and networking challenges.
Real proton-proton background events can be...
The long-standing problem of reconciling the cosmological evidence for the existence of dark matter with the lack of any clear experimental observation of it has recently revived the idea that the new particles are not directly connected with the Standard Model gauge fields, but only through mediator fields or "portals" connecting our world with new "secluded" or "hidden" sectors. One...
Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive.
Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used
to reduce the calorimeter simulation time by a few orders...
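As a schematic of the general idea behind such fast-simulation tools (not the ATLAS implementation; the resolution parameters below are hypothetical), a parameterised calorimeter response replaces the full shower development with a single sampling step:

    // Generic parameterised-response sketch, not ATLAS code.
    // Draw the reconstructed energy from a parameterisation instead of
    // simulating every shower particle with Geant4.
    #include <cmath>
    #include <random>

    // Hypothetical parameters: stochastic term a, constant term c, with sigma/E = a/sqrt(E) (+) c.
    double fastCaloEnergy(double Etrue, std::mt19937& rng,
                          double a = 0.10, double c = 0.01) {
      const double sigma = Etrue * std::sqrt(a * a / Etrue + c * c);
      std::normal_distribution<double> gauss(Etrue, sigma);
      return gauss(rng);
    }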
Limits on power dissipation have pushed CPUs to grow in parallel processing capabilities rather than clock rate, leading to the rise of "manycore" or GPU-like processors. In order to achieve the best performance, applications must be able to take full advantage of vector units across multiple cores, or some analogous arrangement on an accelerator card. Such parallel performance is becoming a...
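A minimal sketch of the kind of code this requires (the structure-of-arrays track container here is hypothetical, not any experiment's data model): contiguous per-quantity arrays plus a loop the compiler can map onto vector lanes and distribute across cores, for example with OpenMP:

    // Illustrative only: structure-of-arrays layout + "parallel for simd".
    #include <cstddef>
    #include <vector>

    struct TracksSoA {                // hypothetical container
      std::vector<float> x, y, z;     // positions
      std::vector<float> px, py, pz;  // direction components
    };

    void propagate(TracksSoA& t, float step) {
      const std::size_t n = t.x.size();
      #pragma omp parallel for simd
      for (std::size_t i = 0; i < n; ++i) {
        t.x[i] += t.px[i] * step;  // identical arithmetic on contiguous data
        t.y[i] += t.py[i] * step;  // maps directly onto SIMD lanes
        t.z[i] += t.pz[i] * step;
      }
    }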
The reconstruction of charged particles trajectories is a crucial task for most particle physics
experiments. The high instantaneous luminosity achieved at the LHC leads to a high number
of proton-proton collisions per bunch crossing, which has put the track reconstruction
software of the LHC experiments through a thorough test. Preserving track reconstruction
performance under...
The reconstruction and identification of charmed hadron decays provides an important tool for the study of heavy quark behavior in the Quark Gluon Plasma. Such measurements require high resolution to topologically identify decay daughters at vertices displaced <100 microns from the primary collision vertex, placing stringent demands on track reconstruction software. To enable these...
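The scale involved follows directly from the charm lifetimes: the decay length is L = beta * gamma * c * tau, with c*tau roughly 123 microns for the D0 and roughly 312 microns for the D+/- (approximate PDG values), so at the modest boosts typical of heavy-ion collisions many decay vertices lie only tens to a few hundred microns from the primary vertex, and the pointing resolution must be comparable to or better than that.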
With the advent of the post-Moore's-law era of computing, novel architectures continue to emerge. HEP experiments, with their ever-increasing computing requirements, are exploring new methods of computation and data handling. With composite multi-million-connection neuromorphic chips like IBM's TrueNorth, neural engineering has now become a feasible technology in this novel computing...
ATLAS track reconstruction code is continuously evolving to match the demands of the increasing instantaneous luminosity of the LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged-particle separations on the order...
The Muon g-2 experiment will measure the precession rate of positively charged muons subjected to an external magnetic field in a storage ring. To avoid perturbing the magnetic field, both the calorimeter and tracker detectors are situated along the ring and measure the muon's properties via the decay positrons. The influence of the magnetic field and oscillation motions of the muon beam...
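For reference, the measured quantity is the anomalous precession frequency, which in a pure magnetic field (neglecting electric-field and beam-dynamics corrections) follows the standard relation

    omega_a = a_mu * e * B / m_mu,   with  a_mu = (g - 2) / 2

so a precise determination of omega_a together with the field B yields the muon anomaly directly; the field and beam-oscillation effects mentioned above enter as corrections to this relation.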
The all-silicon design of the tracking system of the CMS experiment provides excellent resolution for charged tracks and an efficient tagging of jets. As the CMS tracker, and in particular its pixel detector, underwent repairs and experienced changed conditions with the start of the LHC Run-II in 2015, the position and orientation of each of the 15148 silicon strip and 1440 silicon pixel...
DAMPE is a powerful space telescope launched in December 2015, able to detect electrons and photons over a wide energy range (5 GeV to 10 TeV) with unprecedented energy resolution. The silicon tracker is a crucial component of the detector, able to determine the direction of detected particles and trace the origin of incoming gamma rays. This contribution covers the reconstruction software of...
In this contribution, the data preparation workflows for Run 2 are
presented. Online data quality uses a new hybrid software release
that incorporates the latest offline data quality monitoring software
for the online environment. This is used to provide fast feedback in
the control room during a data acquisition (DAQ) run, via a
histogram-based monitoring framework as well as the online...
Since 2014, the STAR experiment has been exploiting data collected by the Heavy Flavor Tracker (HFT), a group of high precision silicon-based detectors installed to enhance track reconstruction and pointing resolution of the existing Time Projection Chamber (TPC). The significant improvement in the primary vertex resolution resulting from this upgrade prompted us to revisit the variety of...
Efficient and precise reconstruction of the primary vertex in
an LHC collision is essential in both the reconstruction of the full
kinematic properties of a hard-scatter event and of soft interactions as a
measure of the amount of pile-up. The reconstruction of primary vertices in
the busy, high pile-up environment of Run-2 of the LHC is a challenging
task. New methods have been developed by...
The DD4hep detector description tool-kit offers a flexible and easy to use solution for the consistent and complete description of particle physics detectors in one single system. The sub-component DDRec provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDRec there is no need to define an additional, separate reconstruction geometry as is often...
The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was developed and employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever...
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and wide vector registers, the KNL processors promise significant performance benefits for highly parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC),...
Precise modelling of detectors in simulations is the key to the understanding of their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report,
we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS...
High-energy particle physics (HEP) has advanced greatly over recent years, and current plans foresee even more ambitious targets and challenges that must be met. Amongst the many computer technology R&D areas, simulation of particle detectors stands out as the most time-consuming part of HEP computing. An intensive R&D and programming effort is required to exploit the...
CMS has tuned its simulation program and chosen a specific physics model of Geant4 by comparing the simulation results with dedicated test beam experiments. CMS continues to validate the physics models inside Geant4 using test beam data as well as collision data. Several physics lists (collections of physics models) inside the most recent version of Geant4 provide good agreement of the...
Purpose
The aim of this work is the full simulation and measurement of a GEMPix (Gas Electron Multiplier) detector for a possible application as a monitor for beam verification at the CNAO Center (National Center for Oncological Hadrontherapy).
A triple GEMPix detector read out by four Timepix chips could provide beam monitoring, dose verification and quality checks with good resolution...
JUNO (the Jiangmen Underground Neutrino Observatory) is a multipurpose neutrino experiment mainly designed to determine the neutrino mass hierarchy and to precisely measure oscillation parameters. As one of its most important systems, the JUNO offline software is being developed using the SNiPER framework. In this presentation, we focus on the requirements of JUNO simulation and present the...