Conveners
Software: Introduction
- Michel Jouvin (Université Paris-Saclay (FR))
- Graeme A Stewart (CERN)
Software: Other Communities
- Graeme A Stewart (CERN)
- Michel Jouvin (Université Paris-Saclay (FR))
Software: R&D Activities
- Michel Jouvin (Université Paris-Saclay (FR))
- Graeme A Stewart (CERN)
Software: R&D Activities
- Teng Jian Khoo (Humboldt University of Berlin (DE))
- David Lange (Princeton University (US))
Software: Training
- David Lange (Princeton University (US))
- Teng Jian Khoo (Humboldt University of Berlin (DE))
Software: Event Generation
- Efe Yazgan (National Taiwan University (TW))
- Josh McFayden (University of Sussex)
- Andrea Valassi (CERN)
Software: Detector Simulation I
- Jonathan Madsen
- Philippe Canal (Fermi National Accelerator Lab. (US))
- Gloria Corti (CERN)
- Witold Pokorski (CERN)
Software: Detector Simulation II
- Gloria Corti (CERN)
- Philippe Canal (Fermi National Accelerator Lab. (US))
- Witold Pokorski (CERN)
- Jonathan Madsen
Software: Diverse R&D
- David Lange (Princeton University (US))
- Teng Jian Khoo (Humboldt University of Berlin (DE))
- Michel Jouvin (Université Paris-Saclay (FR))
- Graeme A Stewart (CERN)
Full detector simulations using Geant4 are highly accurate but computationally intensive, while existing fast simulation techniques may not provide sufficient accuracy for all purposes. Machine learning offers potential paths to achieve both high speed and high accuracy. This may be especially important to address the computational challenges posed by the HL-LHC. Ongoing efforts from both...
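To make the idea concrete, here is a toy sketch of an ML shower surrogate, in which a single forward pass of a generator network stands in for a full Geant4 shower. The architecture, the 100 flattened calorimeter cells, and all sizes are hypothetical, and the network is untrained; real efforts use e.g. GANs or normalizing flows trained on Geant4 showers.

```python
# Toy ML fast-simulation surrogate: one forward pass instead of a Geant4 run.
# All shapes and names are hypothetical illustrations.
import tensorflow as tf

LATENT_DIM = 16   # random noise input driving shower-to-shower fluctuations
N_CELLS = 100     # flattened calorimeter cells in the toy detector

generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LATENT_DIM + 1,)),  # noise + incident energy
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_CELLS, activation="relu"),  # non-negative deposits
])

noise = tf.random.normal((1, LATENT_DIM))
energy = tf.constant([[50.0]])  # incident energy, toy units
shower = generator(tf.concat([noise, energy], axis=1))  # shape (1, N_CELLS)
```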
Visualising HEP experiment data is vital for physicists trying to debug their reconstruction software, to examine detector geometry or to understand physics analyses, and also for outreach and publicity purposes. Traditionally, experiments used in-house applications which required installation (often as part of a much larger experiment-specific framework). In recent years, web-based...
With the upcoming start of LHC Run III and beyond, HEP data analysis is facing a large increase in average input dataset sizes. At the same time, balancing analysis software complexity with the need to extract as much performance as possible from the latest HPC hardware is still often difficult.
Recent developments in ROOT significantly lower the energy barrier for the development of...
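As a minimal illustration of the declarative style these developments enable, consider the sketch below; the file, tree, and branch names ("events.root", "tree", "pt", "eta") are placeholders, not taken from the abstract.

```python
# Minimal declarative analysis with ROOT's RDataFrame (PyROOT).
# File, tree, and branch names are placeholders.
import ROOT

ROOT.EnableImplicitMT()  # let RDataFrame parallelize the event loop
df = ROOT.RDataFrame("tree", "events.root")
hist = (df.Filter("abs(eta) < 2.5", "central tracks")
          .Define("pt_GeV", "pt / 1000.0")
          .Histo1D(("h_pt", "Track p_{T};p_{T} [GeV];entries",
                    100, 0.0, 100.0), "pt_GeV"))
hist.Draw()  # the event loop only runs here, when a result is requested
```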
The bamboo analysis framework [1] allows users to write simple declarative analysis code (it effectively implements a domain-specific language embedded in Python) and runs it efficiently using RDataFrame (RDF). Viewed differently, it introduces a set of tools to efficiently generate large RDF computation graphs from a minimal amount of user code (in Python), e.g. a simple way to specify...
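A fragment in the spirit of bamboo's declarative style might look like the following sketch; the module hook and helper names (definePlots, op.select, op.rng_len, op.invariant_mass, Plot.make1D, EquidistantBinning) are assumptions based on the framework's documented interface rather than verified code.

```python
# Hypothetical bamboo-style plot definition; names are assumptions based on
# the framework's documented declarative interface.
from bamboo import treefunctions as op
from bamboo.plots import Plot, EquidistantBinning

def definePlots(self, t, noSel, sample=None, sampleCfg=None):
    # Build a filtered muon collection declaratively; there is no explicit
    # event loop, bamboo turns this into an RDataFrame computation graph.
    muons = op.select(t.Muon, lambda mu: op.AND(mu.pt > 20.0,
                                                op.abs(mu.eta) < 2.4))
    hasTwoMu = noSel.refine("hasTwoMu", cut=[op.rng_len(muons) > 1])
    return [Plot.make1D("dimu_M",
                        op.invariant_mass(muons[0].p4, muons[1].p4),
                        hasTwoMu, EquidistantBinning(100, 20.0, 120.0),
                        title="Dimuon invariant mass")]
```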
Physicists aiming to perform an LHC-type analysis today face a number of challenges: substantial computing knowledge is needed at the programming level, to implement the relevant algorithms, and at the system level, to interact with the ever-evolving set of analysis frameworks for interfacing with the analysis object information. Moreover, the ambiguity concerning the configuration of the overall...
Creating efficient event data models (EDMs) for high energy physics (HEP) experiments is a non-trivial task. Past approaches, employing virtual inheritance and possibly featuring deep object hierarchies, have been shown to exhibit severe performance limitations. Additionally, the advent of multi-threading and heterogeneous computing poses further constraints on how to efficiently implement EDMs and...
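As a toy contrast with the approaches criticized above (an illustration only, not any experiment's actual EDM), a flat, structure-of-arrays record with no virtual inheritance is cheap to iterate over, and being immutable it can be read safely from many threads:

```python
# Illustrative flat EDM sketch: plain arrays instead of deep object graphs.
import numpy as np
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable -> safe to share across threads
class TrackCollection:
    px: np.ndarray
    py: np.ndarray
    pz: np.ndarray
    charge: np.ndarray

tracks = TrackCollection(
    px=np.array([1.2, -0.4]),
    py=np.array([0.3, 2.1]),
    pz=np.array([5.0, -1.7]),
    charge=np.array([1, -1]),
)
pt = np.hypot(tracks.px, tracks.py)  # columnar access, no virtual dispatch
```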
The use of first- and higher-order differentiation is essential for many parts of track reconstruction: as part of the transport of track parameters through the detector, in several linearization applications, and for establishing the detector alignment. While in general these derivatives are well known, they can be complex to derive and even more difficult to validate. The latter is...
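A minimal sketch of the validation problem is to check a hand-derived derivative against central finite differences. The straight-line "transport" function below is a made-up stand-in for a real track-parameter propagation, not the abstract's actual model.

```python
# Validate an analytic derivative numerically via central finite differences.
import numpy as np

def transport(x, slope, dz):
    """Toy 1D track propagation: move position x by dz with a given slope."""
    return x + slope * dz

def d_transport_d_slope(x, slope, dz):
    """Hand-derived derivative of the transported position w.r.t. the slope."""
    return dz

x, slope, dz, eps = 0.1, 0.3, 10.0, 1e-6
numeric = (transport(x, slope + eps, dz) - transport(x, slope - eps, dz)) / (2 * eps)
analytic = d_transport_d_slope(x, slope, dz)
assert abs(numeric - analytic) < 1e-6  # derivative validated numerically
```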
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce large volumes of data. Processing this data with the available computing power is becoming more challenging as the detectors increase in size to reach their physics goals. Liquid argon time projection chamber (TPC) neutrino experiments are planned to grow by 100 times in the next decade relative to...
Future neutrino experiments like DUNE represent big-data experiments that will acquire petabytes of data per year. Processing this amount of data is itself a significant challenge. In recent years, however, the use of deep learning applications in the reconstruction and analysis of data acquired by LArTPC-based experiments has grown substantially. This will impose an even bigger amount of...
At future hadron colliders such as the High-Luminosity LHC (HL-LHC), tens of thousands of particles can be produced in a single event, which results in a very challenging tracking environment. The estimated CPU resources required for event processing at the HL-LHC could well exceed the available resources. To mitigate this problem, modern tracking software tends to gain performance by taking...
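The following numpy sketch illustrates the data-parallel idea in miniature: propagating many track candidates in one vectorized step rather than looping per track. Real tracking code would use SIMD or GPU kernels; all numbers here are arbitrary.

```python
# Data-parallel track propagation sketch: one array operation for all tracks.
import numpy as np

rng = np.random.default_rng(seed=0)
n_tracks = 10_000
positions = rng.normal(0.0, 1.0, n_tracks)  # transverse position per track
slopes = rng.normal(0.0, 0.1, n_tracks)     # local direction per track

dz = 50.0                 # distance to the next detector layer
positions += slopes * dz  # one vectorized step propagates all 10k tracks
```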
Physicists at the Large Hadron Collider (LHC), near Geneva, Switzerland, are preparing their experiments for the high luminosity (HL) era of proton-proton collision data-taking. In addition to detector hardware research and development for upgrades necessary to cope with the more than two-fold increase in instantaneous luminosity, physicists are investigating potential heterogeneous...
We present VegasFlow, a new software package for the fast evaluation of high-dimensional integrals based on Monte Carlo integration, using Google's TensorFlow library.
VegasFlow enables developers to delegate all complicated aspects of hardware and platform implementation to the underlying library so they can focus on the problem at hand.
VegasFlow automatically offloads the integration algorithm and...
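To show the offloading idea in miniature, here is plain Monte Carlo integration written directly in TensorFlow with a made-up test integrand; this is not VegasFlow's own API, just a sketch of the underlying technique.

```python
# Plain Monte Carlo integration on the unit hypercube in TensorFlow.
# tf.function lets TensorFlow place the computation on CPU or GPU.
import tensorflow as tf

DIM = 4
N_SAMPLES = 1_000_000

@tf.function
def integrand(x):
    # Simple separable test integrand: prod_i sin(x_i).
    return tf.reduce_prod(tf.sin(x), axis=1)

@tf.function
def mc_integrate():
    x = tf.random.uniform((N_SAMPLES, DIM))  # uniform samples in [0, 1)^DIM
    return tf.reduce_mean(integrand(x))      # hypercube volume is 1

print(float(mc_integrate()))  # ~ (1 - cos(1))^4 ≈ 0.0447
```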