Conveners
Computing and Data Handling: Session I - Premiere
- Dagmar Adamova (Czech Academy of Sciences (CZ))
- Elisabetta Maria Pennacchio (Centre National de la Recherche Scientifique (FR))
Computing and Data Handling: Session II - Premiere
- Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
- Dagmar Adamova (Czech Academy of Sciences (CZ))
Computing and Data Handling: Session III - Premiere
- Graeme A Stewart (CERN)
- Concezio Bozzi (INFN Ferrara)
Computing and Data Handling: Session IV - Premiere
- Graeme A Stewart (CERN)
- Concezio Bozzi (INFN Ferrara)
Computing and Data Handling: Session I - Replay
- There are no conveners in this block
Computing and Data Handling: Session III - Replay
- There are no conveners in this block
Computing and Data Handling: Session IV - Replay
- There are no conveners in this block
Computing and Data Handling: Session II - Replay
- There are no conveners in this block
- Chen Zhou (University of Wisconsin Madison (US)) | 28/07/2020, 15:30 | 14. Computing and Data Handling | Talk
Using IBM quantum computer simulators and quantum computer hardware, we have successfully employed the Quantum Support Vector Machine (QSVM) method in a ttH (H to two photons) analysis, i.e. Higgs production in association with a top-quark pair, at the LHC.
We will present our experiences and results of a study on LHC high energy physics data analysis with IBM Quantum Computer Simulators and IBM...
- Giles Chatham Strong (Universita e INFN, Padova (IT)) | 28/07/2020, 15:50 | 14. Computing and Data Handling | Talk
Beginning from a basic neural-network architecture, we test the potential benefits offered by a range of advanced techniques for machine learning and deep learning in the context of a typical classification problem encountered in the domain of high-energy physics, using a well-studied dataset: the 2014 Higgs ML Kaggle dataset. The advantages are evaluated in terms of both performance metrics...
- Axel Naumann (CERN) | 28/07/2020, 16:10 | 14. Computing and Data Handling | Talk
ROOT is one of HEP's most senior active software projects; virtually every physicist uses it, and its TTree is the backbone of HEP data. But ROOT can do even better - and it's getting there, step by step. It now features RDataFrame, a new, simple and super-fast way to write a data analysis. Soon TTree will have a successor, RNTuple, allowing for even faster data processing. Graphics will...
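The declarative, lazy style that RDataFrame brought to ROOT analyses can be sketched with a toy stdlib class. This is illustrative only, not ROOT's actual implementation: transformations are recorded and the event loop runs once, when a result is requested.

```python
# Toy sketch of a declarative, lazily evaluated data frame.
# Not ROOT's RDataFrame -- just the pattern it popularised.

class ToyDataFrame:
    def __init__(self, rows):
        self._rows = rows          # list of dicts, one per "event"
        self._filters = []         # predicates applied lazily

    def Filter(self, predicate):
        new = ToyDataFrame(self._rows)
        new._filters = self._filters + [predicate]
        return new

    def Count(self):
        # The event loop runs only here, applying all recorded filters once.
        return sum(1 for row in self._rows
                   if all(f(row) for f in self._filters))

events = [{"pt": 12.0}, {"pt": 35.5}, {"pt": 50.1}, {"pt": 8.2}]
df = ToyDataFrame(events)
n_high_pt = df.Filter(lambda e: e["pt"] > 30).Count()
print(n_high_pt)  # 2
```

The real RDataFrame applies the same idea to TTree data and can parallelise the single event loop over all booked results.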
- Stephan Hageboeck (CERN) | 28/07/2020, 16:50 | 14. Computing and Data Handling | Talk
RooFit is a toolkit for statistical modelling and fitting; together with RooStats it is used for measurements and statistical tests by most experiments in particle physics.
For the past year, RooFit has been undergoing modernisation. In this talk, improvements already released with ROOT will be discussed, such as faster data loading, vectorised computations and more standard-like interfaces. These allow...
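The batched-evaluation idea behind the faster computations can be illustrated with a stdlib sketch: a Gaussian negative log-likelihood evaluated over the whole dataset in one pass, then minimised by a crude scan. This is only an illustration of the principle; RooFit's actual interfaces and backends differ.

```python
import math

def gaussian_nll(data, mu, sigma):
    """Negative log-likelihood of a Gaussian over a whole dataset,
    computed in a single batched pass (illustrative sketch only)."""
    norm = math.log(sigma * math.sqrt(2.0 * math.pi))
    return sum(0.5 * ((x - mu) / sigma) ** 2 + norm for x in data)

data = [0.9, 1.1, 1.0, 0.95, 1.05]
# A crude 1-D scan over mu: the NLL is minimised at the sample mean.
best_mu = min((m / 100.0 for m in range(50, 151)),
              key=lambda m: gaussian_nll(data, m, 0.1))
print(best_mu)  # 1.0
```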
- Mr Andrea Di Luca (Universita degli Studi di Trento and INFN (IT)) | 28/07/2020, 17:10 | 14. Computing and Data Handling | Talk
In high-energy physics experiments, the sensitivity of selection-based analyses critically depends on which observable quantities are taken into consideration and which are discarded as least important. In this process, scientists are usually guided by their cultural background and by the literature.
Though simple and powerful, this approach may be sub-optimal when machine learning...
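A deliberately simple data-driven baseline for ranking observables is to order them by the absolute Pearson correlation of each one with the target label. This is an illustrative stand-in, not the authors' actual procedure; the toy data and field names are invented.

```python
# Rank observables by |Pearson correlation| with a binary label.
# Illustrative feature-selection baseline with invented toy data.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def rank_features(rows, labels):
    names = rows[0].keys()
    scores = {n: abs(pearson([r[n] for r in rows], labels)) for n in names}
    return sorted(scores, key=scores.get, reverse=True)

# "mass" tracks the label, "noise" does not, so "mass" ranks first.
rows = [{"mass": 1.0, "noise": 5.0}, {"mass": 2.0, "noise": 3.0},
        {"mass": 3.0, "noise": 4.0}, {"mass": 4.0, "noise": 1.0}]
labels = [0, 0, 1, 1]
print(rank_features(rows, labels))  # ['mass', 'noise']
```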
- Irene Dutta (California Institute of Technology (US)) | 28/07/2020, 17:30 | 14. Computing and Data Handling | Talk
At HEP experiments, processing billions of records of structured numerical data can be a bottleneck in the analysis pipeline. This step is typically more complex than current query languages allow, such that numerical codes are used. As highly parallel computing architectures are increasingly important in the computing ecosystem, it may be useful to consider how accelerators such as GPUs can...
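The columnar access pattern that makes such record-oriented queries amenable to accelerators can be sketched in plain Python: each field is stored as its own array, so a selection touches only the columns it needs. Field names and cuts are invented for illustration.

```python
# Columnar layout sketch: one array per field instead of one object per
# event. A selection scans only the columns it uses -- the access pattern
# that maps well onto GPUs and vectorised CPUs.

events = {
    "pt":  [12.0, 35.5, 50.1, 8.2, 41.0],
    "eta": [0.3, -1.2, 2.1, 0.0, -0.4],
}

# "Query": count central, high-pT candidates without materialising rows.
mask = [pt > 30.0 and abs(eta) < 1.5
        for pt, eta in zip(events["pt"], events["eta"])]
n_selected = sum(mask)
print(n_selected)  # 2
```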
- Giuseppe Cerati (Fermi National Accelerator Lab. (US)), Allison Reinsvold Hall (Fermilab) | 28/07/2020, 18:10 | 14. Computing and Data Handling | Talk
We report on developments targeting a boost in the utilization of parallel computing architectures in HEP reconstruction, particularly for LHC experiments and for neutrino experiments using Liquid Argon Time-Projection Chamber (LArTPC) detectors. Key algorithms in the reconstruction workflows of HEP experiments were identified and redesigned: charged particle track reconstruction for CMS, and...
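At its coarsest level, exploiting parallel architectures in reconstruction means running an independent per-event kernel concurrently. A minimal stdlib sketch (the "reconstruction" here is a hypothetical placeholder, not the CMS or LArTPC algorithms):

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    """Stand-in for a per-event reconstruction kernel
    (hypothetical: it just aggregates the event's hits)."""
    return sum(event)

events = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))
print(results)  # [6, 9, 6, 34]
```

The talk's actual work goes further, vectorising within events as well as across them.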
- Laurent Basara (LAL/LRI, Université Paris Saclay) | 28/07/2020, 18:30 | 14. Computing and Data Handling | Talk
The High Luminosity Large Hadron Collider is expected to have a readout rate ten times higher than at present, significantly increasing the required computational load. It is therefore essential to explore new hardware paradigms. In this work we consider the Optical Processing Units (OPUs) from LightOn, which compute random matrix multiplications on large datasets in an analog, fast and...
- Adam Benjamin Morris | 29/07/2020, 15:30 | 14. Computing and Data Handling | Talk
The LHCb detector at the LHC is a single-arm forward spectrometer designed for the study of b- and c-hadron states. During Runs 1 and 2, the LHCb experiment collected a total of 9/fb of data, corresponding to the largest charmed-hadron dataset in the world and providing unparalleled datasets for studies of CP violation in the B system, hadron spectroscopy and rare decays, not to mention...
- Matteo Rama (Universita & INFN Pisa (IT)) | 29/07/2020, 15:50 | 14. Computing and Data Handling | Talk
During Run 2, the simulation of physics events at LHCb has taken about 80% of the distributed computing resources available to the experiment. The large increase in luminosity and trigger rates with the upgraded detector in Run 3 will require much larger simulated samples to match the increase of collected data. About 50% of the overall CPU time in the simulation of physics events is spent in...
- Mikael Berggren (Deutsches Elektronen-Synchrotron (DE)) | 29/07/2020, 16:10 | 14. Computing and Data Handling | Talk
Future linear e+e- colliders aim for extremely high precision measurements. To achieve this, not only excellent detectors and well-controlled machine conditions are needed, but also the best possible estimate of backgrounds. To prevent missing channels and insufficient statistics from becoming a major source of systematic errors in data-MC comparisons, all SM channels with the potential to yield...
- Nick Prouse (TRIUMF) | 29/07/2020, 16:50 | 14. Computing and Data Handling | Talk
The next generation of neutrino experiments will require improvements to detector simulation and event reconstruction software to match the reduced statistical errors and increased precision of the new detectors.
This talk will present progress on the software of the Hyper-Kamiokande experiment, being developed to enable the reduction of systematic errors to below the 1% level.
The current...
- Graeme A Stewart (CERN) | 29/07/2020, 17:10 | 14. Computing and Data Handling | Talk
The upgrade of the LHC accelerator for high luminosity will allow CERN's general-purpose detectors, ATLAS and CMS, to take far more data than they do currently, with instantaneous luminosity of up to $7.5\times10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ and pile-up of 200 events. In total, the HL-LHC targets $3\,\mathrm{ab}^{-1}$ of data. To best exploit this physics potential, trigger rates will rise by up...
- Michael Kirby (Fermi National Accelerator Laboratory) | 29/07/2020, 17:50 | 14. Computing and Data Handling | Talk
The DUNE long-baseline neutrino oscillation collaboration consists of over 180 institutions from 33 countries. The experiment is in preparation now with the commissioning of the first 10kT fiducial volume Liquid Argon TPC expected over the period 2025-2028 and a long data-taking run with 4 modules expected from 2029 and beyond.
An active prototyping program is already in place with a...
- Steve Timm | 29/07/2020, 18:10 | 14. Computing and Data Handling | Talk
The DUNE collaboration has been using Rucio since 2018 to transport data to our many European remote storage elements. We currently have 13.8 PB of data under Rucio management at 13 remote storage elements.
We present our experience thus far, as well as our future plans to make Rucio our sole file location catalog. We will present our planned data discovery system and the role of Rucio in...
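The file-location bookkeeping role described above can be sketched as a toy replica catalogue that maps each logical file name to the storage elements holding a copy. The names are invented and this is not Rucio's API; it only illustrates the concept of a single authoritative catalogue.

```python
# Toy replica catalogue: logical file name -> set of storage elements.
# Invented names; illustrates the concept, not Rucio itself.

catalog = {}

def add_replica(lfn, rse):
    """Record that storage element `rse` holds a replica of file `lfn`."""
    catalog.setdefault(lfn, set()).add(rse)

def locate(lfn):
    """Return all storage elements holding the file, sorted for stability."""
    return sorted(catalog.get(lfn, set()))

add_replica("raw/run001.root", "FNAL_DCACHE")
add_replica("raw/run001.root", "CERN_EOS")
add_replica("raw/run002.root", "FNAL_DCACHE")
print(locate("raw/run001.root"))  # ['CERN_EOS', 'FNAL_DCACHE']
```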
- Sharad Agarwal (Univ. of Wisconsin) | 29/07/2020, 18:30 | 14. Computing and Data Handling | Talk
The computational, storage, and network requirements of the Compact Muon Solenoid (CMS) Experiment, from Run 1 at LHC to the future Run 4 at High Luminosity Large Hadron Collider (HL-LHC), have scaled by at least an order of magnitude. Computing in CMS plays a significant role, from the first steps of data processing to the last stage of delivering analyzed data to physicists. In this talk, we...
- Chiara Zampolli (CERN) | 30/07/2020, 08:00 | 14. Computing and Data Handling | Talk
During the upcoming Runs 3 and 4 of the LHC, ALICE will take data at a peak Pb-Pb collision rate of 50 kHz. This will be made possible thanks to the upgrade of the main tracking detectors of the experiment, and with a new data processing strategy. In order to collect the statistics needed for the precise measurements that ALICE aims at, a continuous readout will be adopted. This brings about...
- Matteo Concas (INFN e Politecnico di Torino (IT)) | 30/07/2020, 08:20 | 14. Computing and Data Handling | Talk
In LHC Run 3, ALICE will increase the data taking rate significantly to read out 50 kHz minimum bias Pb-Pb collisions. Such a large increase poses challenges for online and offline reconstruction as well as for data compression. Compared to Run 2, the online farm will process 50 times more events per second and achieve a higher data compression factor. To address this challenge ALICE will...
- Mr Grégoire Uhlrich (IP2I Lyon) | 30/07/2020, 08:40 | 14. Computing and Data Handling | Talk
Studies Beyond the Standard Model (BSM) will become more and more important in the near future with a rapidly increasing amount of data from different experiments around the world. The full study of BSM models is in general an extremely time-consuming task involving long and difficult computations. It is in practice not possible to do exhaustive predictions in these models by hand, in...
- Michael Lettrich (Technische Universitat Munchen (DE)) | 30/07/2020, 09:20 | 14. Computing and Data Handling | Talk
In LHC Run 3, the upgraded ALICE detector will record 50 kHz Pb-Pb collisions using continuous readout. The resulting stream of raw data at ~3.5 TB/s, a fiftyfold increase over Run 2, must be processed with a set of lossy and lossless compression and data reduction techniques to decrease the data rate to storage to ~100 GB/s without affecting the physics. This contribution focuses on lossless...
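The lossless part of such a pipeline can be demonstrated with the standard library's zlib (DEFLATE). ALICE's actual entropy coders and data model differ; this only shows that structured, repetitive raw data compresses well without any loss of information.

```python
import zlib

# Synthetic, highly regular "detector" payload: a repeating 16-value
# pattern, standing in for structured raw data.
raw = bytes([i % 16 for i in range(100_000)])

packed = zlib.compress(raw, level=9)

# Lossless: the original byte stream is recovered exactly.
assert zlib.decompress(packed) == raw
print(len(raw) > 10 * len(packed))  # True: large compression factor
```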
- Matthew Barrett (KEK) | 30/07/2020, 09:40 | 14. Computing and Data Handling | Talk
Data collection at the Belle II experiment started in the spring of 2019. During the early stages of the experiment it is important that the raw data are both copied to permanent storage and made available soon after being recorded to allow for the timely commissioning and calibration of the detector. Automated procedures have been developed to transfer the data from the detector in a timely...
- Mr Chieh Lin (National Taiwan University) | 30/07/2020, 10:20 | 14. Computing and Data Handling | Talk
The KOTO experiment searches for the rare kaon decay $K_L^0 \rightarrow \pi^0 \nu \bar{\nu}$. Because of the small theoretical uncertainty in the Standard Model prediction, it is sensitive to new physics. In order to collect the signal events, a pipeline readout was developed to enable two-level trigger decisions. The first level requires an energy sum in the calorimeter and the absence of signal in other...
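The logic of such a first-level decision, an energy sum over threshold combined with a veto on other detectors, can be sketched as follows. The threshold and detector granularity are invented for illustration; this is not KOTO's actual trigger logic.

```python
def level1_accept(calo_energies, veto_hits, e_threshold=650.0):
    """Toy first-level trigger: accept when the total calorimeter energy
    exceeds a threshold AND no veto detector fired.
    (Threshold value is invented, not KOTO's.)"""
    return sum(calo_energies) > e_threshold and not any(veto_hits)

print(level1_accept([300.0, 400.0], [False, False]))  # True
print(level1_accept([300.0, 400.0], [False, True]))   # False: veto fired
print(level1_accept([100.0, 200.0], [False, False]))  # False: below threshold
```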
- Muhammad Imran (National Centre for Physics, Quaid-I-Azam Univ.) | 30/07/2020, 10:40 | 14. Computing and Data Handling | Talk
The CMS experiment relies heavily on the CMSWEB cluster to host critical services for its operational needs. The cluster is deployed on virtual machines (VMs) from the CERN OpenStack cloud and is maintained manually by operators and developers. The release cycle is composed of several steps, from building RPMs to their deployment, validation and coordination tests. To enhance the sustainability of...
- Dr Juan Manuel Cruz Martínez (University of Milan) | 30/07/2020, 11:00 | 14. Computing and Data Handling | Talk
We present VegasFlow, a new software package for fast evaluation of high-dimensional integrals based on Monte Carlo integration using dataflow graphs.
The growing complexity of calculations and simulations in many areas of science has been accompanied by advances in the computational tools that have helped their development.
VegasFlow enables developers to delegate all complicated aspects...
- Marco Rossi | 31/07/2020, 08:00 | 14. Computing and Data Handling | Talk
We present the PDFflow library for parton density function (PDF) access, which takes advantage of multi-threaded CPUs and graphics processing units (GPUs). PDFflow is built in Python and implements the PDF interpolation algorithm with TensorFlow. The resulting optimized computation graph accelerates and parallelizes the algorithm when a large grid of interpolated PDF points is requested. Thus...
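The basic operation of a PDF-grid lookup is interpolation on a precomputed grid. A 1-D linear version can be sketched with the standard library; actual PDF interpolators (LHAPDF, PDFflow) use higher-order schemes in both x and Q², so this is only a sketch of the idea.

```python
import bisect

def interp1d(xs, ys, x):
    """Linear interpolation on a precomputed, sorted grid.
    Sketch of a grid lookup; real PDF interpolation is higher-order."""
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))        # clamp to valid segments
    t = (x - xs[i]) / (xs[i + 1] - xs[i])  # position within the segment
    return ys[i] + t * (ys[i + 1] - ys[i])

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]   # samples of x**2 on the grid
print(interp1d(xs, ys, 1.5))  # 2.5
```

Evaluating many such lookups at once over a large batch of points is exactly the workload PDFflow parallelizes on the GPU.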
- Remi Ete (DESY) | 31/07/2020, 08:20 | 14. Computing and Data Handling | Talk
The ILD detector is a detector concept designed for high precision physics at the ILC. It is optimized for particle flow event reconstruction with extremely precise tracking capabilities and highly granular calorimeters. Over the last decade ILD has developed a suite of sophisticated software components for simulation and reconstruction in the context of the iLCSoft ecosystem in collaboration...
- Tom Neep (University of Birmingham (GB)) | 31/07/2020, 08:40 | 14. Computing and Data Handling | Talk
The spherical proportional counter is a novel gaseous detector with many applications, including dark matter searches and neutron spectroscopy.
A simulation framework has been developed which combines the strengths of the Geant4 and Garfield++ toolkits. The framework allows the properties of spherical proportional counters to be studied in detail, providing insights for detector R&D,...
- Davide Zuliani (Universita e INFN, Padova (IT)) | 31/07/2020, 09:20 | 14. Computing and Data Handling | Talk
Tensor Networks are mathematical representations invented to investigate quantum many-body systems on classical computers.
Recently it has been shown that quantum-inspired Tensor Networks can be applied to solve machine learning tasks.
Due to their quantum nature, Tensor Networks allow one to easily compute quantities like correlations and entropy in order to gain insight into the...
- Andreas Salzburger (CERN) | 31/07/2020, 09:40 | 14. Computing and Data Handling | Talk
The HL-LHC will see ATLAS and CMS record proton bunch collisions with track multiplicities of up to 10,000 charged tracks per event. To engage the computer science community in contributing new algorithmic ideas, we have organized a Tracking Machine Learning challenge (TrackML). Participants are provided events with 100k 3D points and are asked to group the points into tracks; they are also given a...
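The point-grouping task itself can be sketched with a deliberately naive 2-D toy: assume straight tracks from the origin and cluster hits by polar angle. Real TrackML solutions handle curved 3-D tracks with far more sophisticated methods; the tolerance and data here are invented.

```python
import math

def group_hits(hits, tol=0.05):
    """Toy pattern recognition: group 2-D hits into 'tracks' by polar
    angle, assuming straight tracks from the origin. Each new hit joins
    the first track whose seed angle is within `tol` radians."""
    tracks = []   # list of [seed_angle, hits]
    for x, y in hits:
        phi = math.atan2(y, x)
        for track in tracks:
            if abs(track[0] - phi) < tol:
                track[1].append((x, y))
                break
        else:
            tracks.append([phi, [(x, y)]])
    return [hits for _, hits in tracks]

# Two straight "tracks": one near 45 degrees, one near the x-axis.
hits = [(1, 1), (2, 2.02), (3, 0.1), (6, 0.15), (3, 3.05)]
print(len(group_hits(hits)))  # 2
```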
- Martin Adam (Czech Academy of Sciences (CZ)) | 31/07/2020, 10:00 | 14. Computing and Data Handling | Talk
With the explosion of the number of distributed applications, a new dynamic server environment emerged grouping servers into clusters, utilization of which depends on the current demand for the application. To provide reliable and smooth services it is crucial to detect and fix possible erratic behavior of individual servers in these clusters. Use of standard techniques for this purpose...
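A simple baseline for spotting an erratic server in a cluster is a z-score over a per-host metric: flag hosts far from the cluster mean in units of the standard deviation. This is only an illustrative baseline, not the talk's method; host names, loads and the cut are invented.

```python
import statistics

def flag_anomalies(metrics, z_cut=2.0):
    """Flag hosts whose metric deviates from the cluster mean by more
    than z_cut standard deviations (simple z-score baseline)."""
    values = list(metrics.values())
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [host for host, v in metrics.items()
            if sigma > 0 and abs(v - mu) / sigma > z_cut]

loads = {"node01": 0.51, "node02": 0.49, "node03": 0.50,
         "node04": 0.52, "node05": 0.48, "node06": 0.97}
print(flag_anomalies(loads))  # ['node06']
```

Standard techniques like this break down at scale and under changing workloads, which is what motivates the more robust approaches the talk describes.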
- Michal Svatos (Czech Academy of Sciences (CZ)) | 31/07/2020, 10:40 | 14. Computing and Data Handling | Talk
The ATLAS experiment at CERN uses more than 150 sites in the WLCG to process and analyze data recorded by the LHC. The grid workflow system PanDA routinely utilizes more than 400 thousand CPU cores of those sites. The data management system Rucio manages about half an exabyte of detector and simulation data distributed among these sites. With the ever-improving performance of the LHC, more...
- Nicola Anne Skidmore (Ruprecht Karls Universitaet Heidelberg (DE)) | 31/07/2020, 11:00 | 14. Computing and Data Handling | Talk
The LHCb experiment is being upgraded for data taking in 2021 and subsequent years. The offline computing model is undergoing several changes that are needed in order to cope with the much higher data volumes originating from the detector and the associated demands of simulated samples of ever-increasing size. This contribution presents the evolution of the data processing model, followed by a...
- Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas) | 31/07/2020, 11:20 | 14. Computing and Data Handling | Talk
The CMS experiment requires vast amounts of computational power in order to generate, process and analyze the data coming from proton-proton collisions at the Large Hadron Collider, as well as Monte Carlo simulations. CMS computing needs have been mostly satisfied up to now by the supporting Worldwide LHC Computing Grid (WLCG), a joint collaboration of more than a hundred computing centers...