Conveners
Computing, Analysis Tools, & Data Handling: IRIS-HEP Tutorial
- Mike Hildreth (University of Notre Dame (US))
- Michael Kirby (Fermi National Accelerator Laboratory)
- Jim Pivarski (Princeton University)
- Peter Onyisi (University of Texas at Austin (US))
- Nick Smith (Fermi National Accelerator Lab. (US))
- Bo Jayatilaka (Fermi National Accelerator Lab. (US))
Computing, Analysis Tools, & Data Handling
- Mike Hildreth (University of Notre Dame (US))
- Bo Jayatilaka (Fermi National Accelerator Lab. (US))
- Peter Onyisi (University of Texas at Austin (US))
- Michael Kirby (Fermi National Accelerator Laboratory)
Description
parallel sessions
In this tutorial session, we introduce the scientific Python ecosystem and extensions thereof that have been developed as part of the IRIS-HEP initiative to better fit the needs of particle physicists. This hands-on tutorial will introduce:
- Scientific programming with NumPy and various tools in its ecosystem: SciPy, Pandas, scikit-learn, etc.
- Tools to accelerate Python when NumPy is not...
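To give a flavor of the material, here is a minimal sketch of the vectorized (columnar) style the tutorial builds on; it is not taken from the tutorial itself, and the array names, cut values, and toy data are purely illustrative.

```python
import numpy as np

# Toy columns: transverse momentum and pseudorapidity, one entry per muon
# (values are randomly generated for this sketch, not real data).
rng = np.random.default_rng(42)
muon_pt = rng.exponential(scale=20.0, size=100_000)   # GeV
muon_eta = rng.uniform(-2.5, 2.5, size=100_000)

# Columnar (vectorized) selection: no explicit Python loop over muons.
mask = (muon_pt > 25.0) & (np.abs(muon_eta) < 2.1)

# Histogram the selected pT spectrum.
counts, edges = np.histogram(muon_pt[mask], bins=50, range=(0.0, 200.0))
print(f"selected {mask.sum()} of {muon_pt.size} muons")
```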
The COFFEA Framework provides a new approach to HEP analysis, via columnar operations, that improves time-to-insight, scalability, portability, and reproducibility of analysis. It is implemented with the Python programming language and commodity big data technologies such as Apache Spark and NoSQL databases. To achieve this suite of improvements across many use cases, COFFEA takes a factorized...
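The columnar idiom that COFFEA builds on (it sits on top of uproot and Awkward Array) can be illustrated without the framework itself; the sketch below is not the coffea API, just the style of operating on whole jagged columns at once, with invented toy events.

```python
import awkward as ak

# Toy jagged events: each event carries a variable-length list of jets.
events = ak.Array([
    {"jets": [{"pt": 45.0, "eta": 0.3}, {"pt": 30.0, "eta": -1.2}]},
    {"jets": []},
    {"jets": [{"pt": 80.0, "eta": 2.0}]},
])

# Columnar selection over all events at once, with no per-event Python loop.
good_jets = events.jets[events.jets.pt > 40.0]   # per-event jet filtering
n_good = ak.num(good_jets)                       # selected-jet multiplicity per event
selected = good_jets[n_good > 0]                 # drop events with no selected jets
print(ak.to_list(selected.pt))                   # [[45.0], [80.0]]
```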
We present a new method to approximate the widely-used Poisson-likelihood chi-square using a linear combination of Neyman's and Pearson's chi-squares, namely ``combined Neyman-Pearson chi-square'' (CNP). Through analytical derivation and toy model simulations, we show that CNP leads to a significantly smaller bias on the best-fit normalization parameter compared to that using either Neyman's...
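For context, and with the caveat that the exact weighting should be checked against the talk and the CNP paper rather than this summary, the two classical statistics being combined and the combined form can be written, for observed counts $M_i$ and predictions $\mu_i$, as

$$
\chi^2_{\mathrm{Neyman}} = \sum_i \frac{(M_i-\mu_i)^2}{M_i},\qquad
\chi^2_{\mathrm{Pearson}} = \sum_i \frac{(M_i-\mu_i)^2}{\mu_i},\qquad
\chi^2_{\mathrm{CNP}} = \sum_i \frac{(M_i-\mu_i)^2}{3\left(\frac{1}{M_i}+\frac{2}{\mu_i}\right)^{-1}},
$$

i.e. the per-bin CNP variance combines the Neyman and Pearson variances as $3/\sigma^2_{\mathrm{CNP},i} = 1/M_i + 2/\mu_i$.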
The HistFactory p.d.f. template $\href{https://cds.cern.ch/record/1456844}{\text{[CERN-OPEN-2012-016]}}$ is per se independent of its implementation in ROOT, and it is useful to be able to run statistical analysis outside of the ROOT/RooFit/RooStats framework. pyhf is a pure-Python implementation of that statistical model for multi-bin histogram-based analysis, and its interval estimation is...
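As a hedged illustration of what running such a model outside of ROOT can look like, the following sketch assumes a recent pyhf release (0.6 or later, where `pyhf.simplemodels.uncorrelated_background` and the `test_stat` keyword are available); the bin contents are invented and not tied to any analysis.

```python
import pyhf

# Two-bin signal-plus-background model with uncorrelated per-bin
# background uncertainties (all numbers are illustrative only).
model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0, 10.0],
    bkg=[50.0, 60.0],
    bkg_uncertainty=[7.0, 8.0],
)

# Observed main-channel counts followed by the constraint-term auxiliary data.
data = [52.0, 63.0] + model.config.auxdata

# Observed CLs for the signal-strength hypothesis mu = 1.
cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
print(f"observed CLs = {float(cls_obs):.3f}")
```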
RECAST is an analysis reinterpretation framework; since analyses are often sensitive to a range of models, RECAST can be used to constrain the plethora of theoretical models without the significant investment required for a new analysis. However, experiment-specific full simulation is still computationally expensive. Thus, to facilitate rapid exploration, RECAST has been extended to...
The reconstruction of charged particles’ trajectories plays a crucial role in achieving the research goals of high energy physics experiments. While track reconstruction is one of the most complex and CPU-consuming parts of the full data processing chain, the performance of ATLAS' track reconstruction software has gone through stringent tests in the dense environment at the LHC. To greatly...
DUNE is the next-generation neutrino experiment that will play a decisive role in measuring neutrino CP violation and the mass hierarchy. The DUNE far detectors will use liquid argon time projection chamber (LArTPC) technology, which provides excellent spatial resolution, high neutrino detection efficiency, and superb background rejection. To accomplish DUNE's physics goals, the reconstruction of...
From particle identification to the discovery of the Higgs boson, deep learning algorithms have become an increasingly important tool for data analysis at the Large Hadron Collider.
We present an innovative end-to-end deep learning approach for jet identification at the LHC. The method combines deep neural networks with low-level detector information, such as calorimeter energy deposits and...
To address the unprecedented scale of HL-LHC data, the HEP.TrkX project has been investigating a variety of machine learning approaches to particle track reconstruction. The most promising of these solutions, a graph neural network, processes the event as a graph that connects track measurements (detector hits corresponding to nodes) with candidate line segments between the hits (corresponding...
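The graph construction step described above (hits as nodes, candidate segments between hits on neighbouring layers as edges) can be sketched in plain NumPy; the toy layer structure and the phi-window cut below are invented for illustration and are not the HEP.TrkX selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hits: a layer index and an azimuthal coordinate per hit
# (a real detector graph would use full 3D hit positions).
n_hits, n_layers = 200, 4
layer = rng.integers(0, n_layers, size=n_hits)
phi = rng.uniform(-np.pi, np.pi, size=n_hits)
nodes = np.stack([layer, phi], axis=1)        # node feature matrix

# Edges: candidate segments between hits on adjacent layers whose
# azimuthal separation is small (a crude geometric compatibility cut).
edges = []
for l in range(n_layers - 1):
    inner = np.where(layer == l)[0]
    outer = np.where(layer == l + 1)[0]
    for i in inner:
        dphi = np.abs(np.angle(np.exp(1j * (phi[outer] - phi[i]))))
        edges.extend((i, j) for j in outer[dphi < 0.1])
edge_index = np.array(edges).T                # shape (2, n_edges)
print(nodes.shape, edge_index.shape)
```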
The single-phase liquid argon time projection chamber (LArTPC) provides a large amount of detailed information in the form of fine-grained drifted ionization charge from particle traces. MicroBooNE is an 85 metric tonne single-phase LArTPC located at Fermilab and the first detector taking data in the Short Baseline Neutrino (SBN) program, which will examine a rich assortment of physics topics,...
In gigaton-scale neutrino detectors, such as the IceCube experiment, interaction products are detected via the Cherenkov radiation emitted during their passage through the detector medium. Simulating this propagation of light is traditionally approached through ray tracing. This is complicated by the sparsity of the detector: the vast majority of light rays are scattered and absorbed by the...
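The ray-tracing picture sketched in this abstract amounts to repeatedly sampling scattering and absorption lengths for each photon; a toy two-dimensional version (isotropic re-scattering, made-up optical lengths, nothing to do with IceCube's actual ice model) illustrates why tracking every photon individually is expensive.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate_photon(scatter_len=25.0, absorb_len=100.0, max_steps=10_000):
    """Toy 2D photon random walk; returns the point where the photon is absorbed."""
    pos = np.zeros(2)
    direction = np.array([1.0, 0.0])
    travelled = 0.0
    path_budget = rng.exponential(absorb_len)   # total path length before absorption
    for _ in range(max_steps):
        step = rng.exponential(scatter_len)
        if travelled + step >= path_budget:     # absorbed before the next scatter
            return pos + direction * (path_budget - travelled)
        pos = pos + direction * step
        travelled += step
        theta = rng.uniform(0.0, 2.0 * np.pi)   # isotropic re-scatter
        direction = np.array([np.cos(theta), np.sin(theta)])
    return pos

endpoints = np.array([propagate_photon() for _ in range(1000)])
print("mean distance from emission point:", np.linalg.norm(endpoints, axis=1).mean())
```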
Many scintillator based detectors employ a set of photomultiplier tubes (PMT) to observe the scintillation light from potential signal and background events. It is important to be able to count the number of photo-electrons (PE) in the pulses observed in the PMTs, because the position and energy reconstruction of the events is directly related to how well the spatial distribution of the PEs in...
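A common, purely charge-based baseline for PE counting (not the method presented in this talk) is to integrate the baseline-subtracted pulse and divide by the mean single-PE charge; the toy waveform and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
SPE_CHARGE = 1.6   # mean single-photo-electron charge, arbitrary units (made up)

def estimate_npe(waveform, baseline, spe_charge=SPE_CHARGE):
    """Naive PE estimate: baseline-subtracted pulse integral / mean SPE charge."""
    return max(0.0, float(np.sum(waveform - baseline)) / spe_charge)

# Toy waveform: flat baseline plus noise plus a Gaussian-shaped pulse
# whose total charge corresponds to 7 photo-electrons.
true_npe, baseline = 7, 100.0
t = np.arange(200)
template = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
template /= template.sum()                       # unit-charge pulse template
waveform = baseline + rng.normal(0.0, 0.05, size=t.size)
waveform += true_npe * SPE_CHARGE * template

print("estimated PE:", round(estimate_npe(waveform, baseline), 2), "true:", true_npe)
```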
We investigate modern machine learning techniques to derive calibration for the combined CMS electromagnetic and hadronic calorimeter system. We use the dataset from a 2006 CMS test beam to measure the calorimeter responses to pion beams of various energies. The performance of the network is evaluated by studying the linearity of calibrated responses. A convolutional neural network approach...
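The abstract does not spell out the network, so the sketch below is only a generic example of a small convolutional regression on calorimeter "images" in PyTorch; the architecture, image size, and synthetic training data are all placeholders.

```python
import torch
from torch import nn

# Synthetic calorimeter "images": one 16x16 grid of energy deposits per shower,
# with a scalar regression target standing in for the true beam energy.
images = torch.rand(256, 1, 16, 16) * 10.0
true_energy = images.sum(dim=(1, 2, 3)).unsqueeze(1) * 1.1

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1),                # regressed (calibrated) energy
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):               # a handful of epochs, for illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(images), true_energy)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: MSE loss {loss.item():.1f}")
```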
Accurately and rapidly modeling the stochastic detector response of complex LHC experiments, which involve many particles from multiple interaction points (up to 200 interactions per proton-proton crossing at the HL-LHC), requires the development of novel techniques. A study aimed at finding a fast transformation from truth-level physics objects to reconstructed detector-level physics...
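A standard baseline for such truth-to-reconstructed transformations, against which learned approaches are typically compared, is parameterized Gaussian smearing; the resolution function below is invented for illustration and is not the study's model.

```python
import numpy as np

rng = np.random.default_rng(3)

def smear_pt(truth_pt, stochastic=0.9, constant=0.05):
    """Toy parameterized detector response: Gaussian smearing with a
    calorimeter-like relative resolution stochastic/sqrt(pt) (+) constant."""
    rel_sigma = np.hypot(stochastic / np.sqrt(truth_pt), constant)
    return truth_pt * rng.normal(1.0, rel_sigma)

truth_pt = rng.exponential(50.0, size=10_000) + 20.0   # GeV, toy truth spectrum
reco_pt = smear_pt(truth_pt)
print("mean reco/truth response:", (reco_pt / truth_pt).mean())
```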
NOvA is a long-baseline neutrino oscillation experiment. It is optimized to measure νe appearance and νμ disappearance at the Far Detector in the νμ beam produced by the NuMI facility at Fermilab. NOvA uses a convolutional neural network (CVN) to identify neutrino events in two functionally identical liquid scintillator detectors. A different network, called “Prong-CVN”, has been used to...
Designed to push forward our understanding of fundamental physics at the energy frontier, the Compact Muon Solenoid (CMS) detector is one of the two general-purpose particle detectors at the LHC. Collisions take place within CMS at approximately 40 MHz, producing much more data than can be recorded or stored for future analysis. However, only a small fraction of the collisions contain events...
As we are moving towards LHC Run 3, the data acquisition in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector will be performed in a multi-threaded environment of the Athena framework (AthenaMT). This will allow the concurrent processing of High Level Trigger (HLT) algorithms on single and multiple events. For trigger electron/photon reconstruction, the Run 2 legacy system had two...
Using IBM Quantum Computer Simulators and Quantum Computer Hardware, we have successfully employed the Quantum Support Vector Machine method (QSVM) for a ttH (H to two photons) analysis at the LHC, probing the Higgs coupling to top quarks.
We will present our experiences and results of a study on LHC high energy physics data analysis with IBM Quantum Computer Simulators and IBM Quantum Computer Hardware...
ProtoDUNE-SP, the prototype of the single-phase DUNE far detector, was constructed and operated at the CERN Neutrino Platform with a total liquid argon (LAr) mass of 0.77 kt and using full-scale components of the design for DUNE. The physics program of ProtoDUNE-SP aims to understand and control the systematic uncertainties for future oscillation measurements at DUNE, the charged-particle beam...
In this talk, I will summarize and highlight the work of the Computing and Machine Learning session from the New Technologies for Discovery IV: CPAD Instrumentation Frontier workshop at Brown University in December 2018. This talk will cover on-going research and development efforts in this area described in the forthcoming New Technologies for Discovery Report.
During 2016-2017, the worldwide HEP community met in a series of workshops to prepare a roadmap for the software R&D needed to address the data and computational challenges of the High Luminosity LHC and other HEP experiments in the 2020s. This process was organized by the HEP Software Foundation, and the outcome was a community white paper titled “A Roadmap for HEP Software and...
The offline software and computing systems of the LHC experiments continue to evolve to meet the challenges of delivering data effectively to LHC analysts. Looking to Run 3 and high-luminosity LHC, the data rates required by the HL-LHC physics program will far outstrip what can be provided by the current analysis and production computing approaches. In this presentation, we will discuss how...
The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a...
The latest release of the Noble Element Simulation Technique (NEST) is presented here. Noble element target media have become common in rare event searches, and an accurate comparison model is critical for understanding and predicting signals and unwanted backgrounds. Like its predecessors, NEST v2.0 is a simulation tool written in C++ and is based heavily on experimental data, taking into...