04/11/2019, 09:00 – Plenary
Heidi Marie Schellman (Oregon State University (US)) – 04/11/2019, 09:30 – Plenary
David Dossett (University of Melbourne) – 04/11/2019, 10:00 – Plenary
Federico Carminati (CERN) – 04/11/2019, 11:00
The Deep Underground Neutrino Experiment (DUNE) is an international effort to build the next-generation neutrino observatory to answer fundamental questions about the nature of elementary particles and their role in the universe. Integral to DUNE is the process of reconstruction, where the raw data from Liquid Argon Time Projection Chambers (LArTPC) are transformed into products that can be...
Oliver Sander (KIT - Karlsruhe Institute of Technology (DE)) – 04/11/2019, 11:00
Data acquisition systems (DAQ) for high energy physics experiments utilize complex FPGAs to handle unprecedented high data rates. This is especially true in the first stages of the processing chain. Developing and commissioning these systems becomes more complex as additional processing intelligence is placed closer to the detector, in a distributed way directly on the ATCA blades, in the...
Gerardo Ganis (CERN) – 04/11/2019, 11:00
The Future Circular Collider (FCC) is designed to provide unprecedented luminosity and unprecedented centre-of-mass energies. The physics reach and potential of the different FCC options - $e^+e^-$, $pp$, $e^-p$ - have been studied and published in dedicated Conceptual Design Reports (CDRs) released at the end of 2018. Conceptual detector designs have been developed for such studies and...
Pawel Grzywaczewski (CERN) – 04/11/2019, 11:00 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
E-mail is considered a critical collaboration service. We will share our experience of the technical and organizational challenges of migrating 40 000 mailboxes from Microsoft Exchange to a free and open-source software solution: Kopano.
Tommaso Boccali (Universita & INFN Pisa (IT)) – 04/11/2019, 11:00
The INFN Tier-1 located at CNAF in Bologna (Italy) is a major center of the WLCG e-Infrastructure, supporting the 4 major LHC collaborations and more than 30 other INFN-related experiments. After multiple tests towards elastic expansion of CNAF compute power via Cloud resources (provided by Azure, Aruba and in the framework of the HNSciCloud project), but also building on the experience...
Ms Juan Chen (IHEP) – 04/11/2019, 11:00
The IHEP local cluster is a middle-sized HEP data center which consists of 20,000 CPU slots, hundreds of data servers, 20 PB of disk storage and 10 PB of tape storage. After data taking of the JUNO and LHAASO experiments, the data volume processed at this center will approach 10 PB per year. At the current cluster scale, anomaly detection is a non-trivial task in daily maintenance. Traditional...
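The anomaly-detection theme above can be illustrated with a classic baseline that more sophisticated methods are compared against: flag metric samples that deviate strongly from the mean. This is a hedged sketch, not the IHEP system's actual method; the function name and threshold are illustrative:

```python
import statistics

def zscore_anomalies(samples, threshold=3.0):
    """Flag indices whose value deviates from the sample mean by more
    than `threshold` standard deviations (a classic baseline detector)."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# A flat load curve with one spike: only the spike is flagged.
load = [0.50, 0.52, 0.48, 0.51, 0.49, 9.0, 0.50, 0.47]
print(zscore_anomalies(load, threshold=2.0))  # → [5]
```

Real deployments replace the global mean/stdev with rolling windows or learned models, but the thresholding structure is the same.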
Benjamin Galewsky – 04/11/2019, 11:00 – Track 4 – Data Organisation, Management and Access (Oral)
We will describe a component of the Intelligent Data Delivery Service being developed in collaboration with IRIS-HEP and the LHC experiments. ServiceX is an experiment-agnostic service to enable on-demand data delivery specifically tailored for nearly-interactive vectorized analysis. This work is motivated by the data engineering challenges posed by HL-LHC data volumes and the increasing...
Federico Stagni (CERN) – 04/11/2019, 11:00
Efficient access to distributed computing and storage resources is mandatory for the success of current and future High Energy and Nuclear Physics Experiments. DIRAC is an interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for the Workload, Data and Production Management tasks of large scientific communities. A single...
Jonas Eschle (Universitaet Zuerich (CH)) – 04/11/2019, 11:00
Statistical modelling is a key element for High-Energy Physics (HEP) analysis. Currently, most of this modelling is performed with the ROOT/RooFit toolkit which is written in C++ and provides Python bindings which are only loosely integrated into the scientific Python ecosystem. We present zfit, a new alternative to RooFit, written in pure Python. Built on top of TensorFlow (a modern, high...
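At its core, the statistical modelling that toolkits like zfit and RooFit perform amounts to building a likelihood for the data and minimising its negative logarithm. A minimal pure-Python sketch of that idea for a Gaussian model, where the maximum-likelihood estimates happen to have a closed form (an illustration of the concept only, not zfit's API; function names are hypothetical):

```python
import math
import statistics

def gauss_nll(data, mu, sigma):
    """Unbinned negative log-likelihood of a Gaussian model."""
    norm = math.log(sigma * math.sqrt(2.0 * math.pi))
    return sum(norm + 0.5 * ((x - mu) / sigma) ** 2 for x in data)

def fit_gauss(data):
    """Closed-form maximum-likelihood estimates for a Gaussian."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE uses the population estimator
    return mu, sigma

data = [4.9, 5.1, 5.0, 4.8, 5.2]
mu, sigma = fit_gauss(data)
# The MLE minimises the NLL, so any shifted point scores worse.
assert gauss_nll(data, mu, sigma) < gauss_nll(data, mu + 0.5, sigma)
```

For models without closed-form estimates, the NLL is handed to a numerical minimiser; that is where TensorFlow-backed gradients become valuable.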
Tomoe Kishimoto (University of Tokyo (JP)) – 04/11/2019, 11:15
A Grid computing site consists of various services including Grid middleware, such as the Computing Element, Storage Element and so on. Ensuring safe and stable operation of the services is a key role of site administrators. Logs produced by the services provide useful information for understanding the status of the site. However, it is a time-consuming task for site administrators to monitor...
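A toy version of log-based monitoring: mask the variable fields so similar lines collapse into templates, then flag the rare templates, which are usually what an administrator needs to look at. A hedged sketch of the general idea, not the system described in the talk:

```python
import re
from collections import Counter

def rare_messages(log_lines, max_count=1):
    """Group log lines into templates by masking numbers, then report
    templates seen at most `max_count` times."""
    def template(line):
        return re.sub(r"\d+", "<N>", line)
    counts = Counter(template(line) for line in log_lines)
    return [t for t, c in counts.items() if c <= max_count]

logs = [
    "job 1001 finished in 34s",
    "job 1002 finished in 35s",
    "job 1003 finished in 33s",
    "disk /dev/sd3 read error at sector 99182",
]
print(rare_messages(logs))  # → ['disk /dev/sd<N> read error at sector <N>']
```

Production systems use richer templating (e.g. masking hostnames and paths) and time-windowed counts, but the grouping-then-outlier structure carries over.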
Thomas Britton (JLab) – 04/11/2019, 11:15
MCwrapper is a set of systems that manages the entire Monte Carlo production workflow for GlueX and provides standards for how that Monte Carlo is produced. MCwrapper was designed to utilize a variety of batch systems in a way that is relatively transparent to the user, enabling users to quickly and easily produce valid simulated data at home institutions worldwide. ...
Enrico Gamberini (CERN) – 04/11/2019, 11:15
The data acquisition (DAQ) software for most applications in high energy physics is composed of common building blocks, such as a networking layer, plug-in loading, configuration, and process management. These are often re-invented and developed from scratch for each project or experiment around specific needs. In some cases, time and available resources can be limited and make development...
Maria Alandes Pradillo (CERN), Sebastian Bukowiec (CERN) – 04/11/2019, 11:15 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
As of March 2019, CERN is no longer eligible for academic licences of Microsoft products. For this reason, CERN IT started a series of task forces to respond to the evolving requirements of the user community with the goal of reducing as much as possible the need for Microsoft licensed software. This exercise was an opportunity to understand better the user requirements for all office...
Mike Hildreth (University of Notre Dame (US)) – 04/11/2019, 11:15
The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a...
Christopher Jones (Fermi National Accelerator Lab. (US)) – 04/11/2019, 11:15
The diversity of the scientific goals across HEP experiments necessitates unique bodies of software tailored for achieving particular physics results. The challenge, however, is to identify the software that must be unique, and the code that is unnecessarily duplicated, which results in wasted effort and inhibits code maintainability.
Fermilab has a history of supporting and developing...
Stefan Wunsch (KIT - Karlsruhe Institute of Technology (DE)) – 04/11/2019, 11:15
ROOT provides, through TMVA, machine learning tools for data analysis at HEP experiments and beyond. In this talk, we present recently included features in TMVA and the strategy for future developments in the diversified machine learning landscape. Focus is put on fast machine learning inference, which enables analysts to deploy their machine learning models rapidly on large scale datasets....
Shawn Mc Kee (University of Michigan (US)) – 04/11/2019, 11:15 – Track 4 – Data Organisation, Management and Access (Oral)
We will report on the status of the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) after its fourth year. OSiRIS is delivering a distributed Ceph storage infrastructure coupled together with software-defined networking to support multiple science domains across Michigan’s three largest research universities. The project’s goal is to provide a single scalable, distributed storage...
Maurizio Pierini (CERN) – 04/11/2019, 11:15
We use Graph Networks to learn representations of irregular detector geometries and to perform typical tasks on them, such as cluster segmentation or pattern recognition. Thanks to the flexibility and generality of the graph architecture, this kind of network can be applied to detectors of arbitrary geometry, representing the detector elements through a unique detector identification (e.g., physical...
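The core operation of such graph networks, stripped of the learned components, is neighbourhood aggregation over an arbitrary adjacency structure: a node's geometry is irrelevant, only its links matter. A minimal sketch under the simplifying assumptions of scalar node features and sum aggregation (illustrative only; real networks apply learned functions to the messages):

```python
def message_pass(features, edges):
    """One round of sum-aggregation message passing: each node's new
    feature is its own value plus the sum of its neighbours' values."""
    updated = dict(features)          # start from the old features
    for src, dst in edges:
        updated[dst] += features[src]  # messages use the OLD values
    return updated

# Irregular detector-like graph: node ids carry no geometric meaning,
# adjacency alone defines the structure.
features = {"a": 1.0, "b": 2.0, "c": 3.0}
edges = [("a", "b"), ("b", "c"), ("c", "a")]  # directed neighbour links
print(message_pass(features, edges))  # → {'a': 4.0, 'b': 3.0, 'c': 5.0}
```

Stacking several such rounds lets information propagate across the whole detector graph, which is what enables cluster segmentation on arbitrary geometries.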
Mohammad Al-Turany (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) – 04/11/2019, 11:30
The ALFA framework is a joint development between ALICE Online-Offline and FairRoot teams. ALFA has a distributed architecture, i.e. a collection of highly maintainable, testable, loosely coupled, independently deployable processes.
ALFA allows the developer to focus on building single-function modules with well-defined interfaces and operations. The communication between the independent...
Siarhei Padolski (BNL) – 04/11/2019, 11:30 – Track 4 – Data Organisation, Management and Access (Oral)
The Belle II experiment started taking physics data in March 2019, with an estimated dataset of order 60 petabytes expected by the end of operations in the mid-2020s. Originally designed as a fully integrated component of the BelleDIRAC production system, the Belle II distributed data management (DDM) software needs to manage data across 70 storage elements worldwide for a collaboration of...
Roland Sipos (CERN) – 04/11/2019, 11:30
The DAQ system of ProtoDUNE-SP successfully proved its design principles and met the requirements of the beam run of 2018. The technical design of the DAQ system for the DUNE experiment has major differences compared to the prototype due to different requirements and the environment. The single-phase prototype at CERN is the major integration facility for R&D aspects of the DUNE DAQ system....
Piero Vicini (Sapienza Universita e INFN, Roma I (IT)) – 04/11/2019, 11:30
Nowadays, a number of technology R&D activities have been launched in Europe, trying to close the gap with traditional HPC providers like the USA and Japan and, more recently, emerging ones like China. The EU HPC strategy, funded through the EuroHPC initiative, leverages two different pillars: the first one targets the procurement and hosting of two/three commercial pre-Exascale systems, in order...
Jakub Moscicki (CERN) – 04/11/2019, 11:30
SWAN (Service for Web-based ANalysis) is a CERN service that allows users to perform interactive data analysis in the cloud, in a "software as a service" model. The service is a result of the collaboration between IT Storage and Databases groups and EP-SFT group at CERN. SWAN is built upon the widely-used Jupyter notebooks, allowing users to write - and run - their data analysis using only a...
Ruben Domingo Gaspar Aparicio (CERN) – 04/11/2019, 11:30 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
This talk presents the approach chosen to monitor, firstly, a world-wide video conference server infrastructure and, secondly, the wide diversity of audio-visual devices that make up the audio-visual conference room ecosystem at CERN.
The CERN video conference system is a complex ecosystem which is being used by most HEP institutes, together with Swiss universities through SWITCH. As a...
Jean-Roch Vlimant (California Institute of Technology (US)) – 04/11/2019, 11:30
We study the use of interaction networks to perform tasks related to jet reconstruction. In particular, we consider jet tagging for generic boosted-jet topologies, tagging of large-momentum H$\to$bb decays, and anomalous-jet detection. The achieved performance is compared to state-of-the-art deep learning approaches, based on Convolutional or Recurrent architectures. Unlike these approaches,...
Dr Kenneth Richard Herner (Fermi National Accelerator Laboratory (US)) – 04/11/2019, 11:30
The Deep Underground Neutrino Experiment (DUNE) will be the world’s foremost neutrino detector when it begins taking data in the mid-2020s. Two prototype detectors, collectively known as ProtoDUNE, have begun taking data at CERN and have accumulated over 3 PB of raw and reconstructed data since September 2018. Particle interactions within liquid argon time projection chambers are challenging to...
Andrea Valassi (CERN) – 04/11/2019, 11:30
The benchmarking and accounting of CPU resources in WLCG has been based on the HEP-SPEC06 (HS06) suite for over a decade. HS06 is stable, accurate and reproducible, but it is an old benchmark and it is becoming clear that its performance and that of typical HEP applications have started to diverge. After evaluating several alternatives for the replacement of HS06, the HEPIX benchmarking WG has...
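Any CPU benchmark ultimately reduces to timing a fixed workload reproducibly, typically reporting the minimum over repetitions since it is least affected by system noise. A toy sketch of that pattern (this is not HS06 or the HEPiX suite; `spin` is a stand-in workload):

```python
import time

def toy_benchmark(workload, repetitions=5):
    """Time a CPU-bound workload several times and report the minimum,
    the repetition least perturbed by other activity on the machine."""
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

def spin():
    """Trivial CPU-bound stand-in for a real HEP workload."""
    total = 0
    for i in range(100_000):
        total += i * i
    return total

print(f"best of 5: {toy_benchmark(spin):.6f} s")
```

The hard part, which the talk addresses, is choosing workloads whose timing actually tracks real HEP application performance across hardware generations.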
Gilles Grasseau (Centre National de la Recherche Scientifique (FR)) – 04/11/2019, 11:45
For the High Luminosity LHC, the CMS collaboration made the ambitious choice of a high granularity design to replace the existing endcap calorimeters. The thousands of particles coming from the multiple interactions create showers in the calorimeters, depositing energy simultaneously in adjacent cells. The data are analogous to a 3D gray-scale image that should be properly reconstructed.
In this...
Penelope Constanta (Fermilab) – 04/11/2019, 11:45 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
Indico, CERN’s popular open-source tool for event management, is in widespread use among facilities that make up the HEP community. It is extensible through a robust plugin architecture that provides features such as search and video conferencing integration. In 2018, Indico version 2 was released with many notable improvements, but without a full-featured search functionality that could be...
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas) – 04/11/2019, 11:45
Efforts in distributed computing of the CMS experiment at the LHC at CERN are now focusing on the functionality required to fulfill the projected needs for the HL-LHC era. Cloud and HPC resources are expected to be dominant relative to resources provided by traditional Grid sites, being also much more diverse and heterogeneous. Handling their special capabilities or limitations and maintaining...
William Panduro Vazquez (Royal Holloway, University of London) – 04/11/2019, 11:45
After the current LHC shutdown (2019-2021), the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during the shutdown. A key goal of this upgrade is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange...
Marten Teitsma (Amsterdam University of Applied Sciences (NL)) – 04/11/2019, 11:45 – Track 4 – Data Organisation, Management and Access (Oral)
A new bookkeeping system called Jiskefet is being developed for A Large Ion Collider Experiment (ALICE) during Long Shutdown 2, to be in production until the end of LHC Run 4 (2029).
Jiskefet unifies two functionalities. The first is gathering, storing and presenting metadata associated with the operations of the ALICE experiment. The second is tracking the asynchronous processing of the...
Benjamin Krikler (University of Bristol (GB)) – 04/11/2019, 11:45
The Faster Analysis Software Taskforce (FAST) is a small, European group of HEP researchers that have been investigating and developing modern software approaches to improve HEP analyses. We present here an overview of the key product of this effort: a set of packages that allows a complete implementation of an analysis using almost exclusively YAML files. Serving as an analysis description...
Roel Aaij (Nikhef National institute for subatomic physics (NL)) – 04/11/2019, 11:45
The Dutch science funding organization NWO is in the process of drafting requirements for the procurement of a future high-performance compute facility. To investigate the requirements for this facility to potentially support high-throughput workloads in addition to traditional high-performance workloads, a broad range of HEP workloads are being functionally tested on the current facility. The...
Christopher Jones (Fermi National Accelerator Lab. (US)) – 04/11/2019, 11:45
The OpenMP standard is the primary mechanism used at high performance computing facilities to allow intra-process parallelization. In contrast, many HEP-specific software packages (such as CMSSW, GaudiHive, and ROOT) make use of Intel's Threading Building Blocks (TBB) library to accomplish the same goal. In this talk we will discuss our work to compare TBB and OpenMP when used for scheduling algorithms...
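The scheduling model shared by TBB flow graphs and OpenMP task constructs, running each task as soon as its dependencies have completed, can be mimicked in Python for illustration. This is a sketch of the scheduling idea only, not TBB or OpenMP (which are C++ APIs); the function and parameter names are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task_graph(tasks, deps, workers=4):
    """Schedule a DAG of tasks: a task runs once all its dependencies
    have finished. `tasks` maps name -> callable, `deps` maps
    name -> list of prerequisite names."""
    done, order = set(), []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = dict(tasks)
        while pending:
            ready = [n for n in pending
                     if all(d in done for d in deps.get(n, []))]
            if not ready:
                raise ValueError("dependency cycle detected")
            # Submit the whole ready wave, then wait for it to finish.
            futures = {n: pool.submit(pending.pop(n)) for n in ready}
            for name, fut in futures.items():
                fut.result()
                done.add(name)
                order.append(name)
    return order

tasks = {n: (lambda n=n: n) for n in "ABCD"}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(run_task_graph(tasks, deps))  # → ['A', 'B', 'C', 'D']
```

Real schedulers release successors the moment each individual task finishes rather than in whole waves; that finer granularity is exactly where TBB and OpenMP implementations differ in overhead.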
Pedro Andrade (CERN) – 04/11/2019, 11:45
WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area, in partnership with WLCG, is focused on being the primary source of networking information for its partners and...
Niklas Nolte (CERN / Technische Universitaet Dortmund (DE)) – 04/11/2019, 12:00
The high-level trigger (HLT) of LHCb in Run 3 will have to process 5 TB/s of data, which is about two orders of magnitude larger compared to Run 2. The second stage of the HLT runs asynchronously to the LHC, aiming for a throughput of about 1 MHz. It selects analysis-ready physics signals by O(1000) dedicated selections totaling O(10000) algorithms to achieve maximum efficiency. This poses two...
Steven Farrell (Lawrence Berkeley National Lab (US)) – 04/11/2019, 12:00
We present recent work in supporting deep learning for particle physics and cosmology at NERSC, the US Dept. of Energy mission HPC center. We describe infrastructure and software to support both large-scale distributed training across (CPU and GPU) HPC resources and for productive interfaces via Jupyter notebooks. We also detail plans for accelerated hardware for deep learning in the future...
Prof. Xingtao Huang (Shandong University) – 04/11/2019, 12:00 – Track 4 – Data Organisation, Management and Access (Oral)
(On behalf of the JUNO collaboration) The JUNO (Jiangmen Underground Neutrino Observatory) experiment is designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters with an unprecedented energy resolution of 3% at 1 MeV. It is composed of a 20 kton liquid scintillator central detector equipped with 18000 20” PMTs and 25000 3” PMTs, a water pool...
Andre Sailer (CERN) – 04/11/2019, 12:00
Software tools for detector optimization studies for future experiments need to be efficient and reliable. One important ingredient of the detector design optimization concerns the calorimeter system. Every change of the calorimeter configuration requires a new set of overall calibration parameters which in its turn requires a new calorimeter calibration to be done. An efficient way to perform...
Hugo Gonzalez Labrador (CERN) – 04/11/2019, 12:00 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
CERNBox is the CERN cloud storage hub for more than 16000 users at CERN. It allows synchronising and sharing files on all major desktop and mobile platforms (Linux, Windows, MacOSX, Android, iOS) providing universal, ubiquitous, online- and offline access to any data stored in the CERN EOS infrastructure. CERNBox also provides integration with other CERN services for big science: visualisation...
Riccardo Farinelli (Universita e INFN, Ferrara (IT)) – 04/11/2019, 12:00
Micro-Pattern Gas Detectors (MPGDs) are the new frontier in gaseous tracking systems. Among them, triple Gas Electron Multiplier (triple-GEM) detectors are widely used. In particular, cylindrical triple-GEM (CGEM) detectors can be used as inner tracking devices in high energy physics experiments. In this contribution, a new offline software called GRAAL (Gem Reconstruction And...
Ursula Laa (Monash University) – 04/11/2019, 12:00
In physics we often encounter high-dimensional data, in the form of multivariate measurements or of models with multiple free parameters. The information encoded is increasingly explored using machine learning, but is not typically explored visually. The barrier tends to be visualising beyond 3D, but systematic approaches for this exist in the statistics literature. I will use examples from...
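A systematic way to look beyond 3D, underlying tour-style visualisation methods, is to project the data onto a sequence of low-dimensional planes. A minimal sketch of a single random 2D projection (illustrative only; real tour methods smoothly interpolate between many such projections, and all names here are made up):

```python
import math
import random

def random_projection_2d(points, seed=0):
    """Project d-dimensional points onto a random 2D plane."""
    rng = random.Random(seed)
    d = len(points[0])
    # Two random direction vectors, normalised to unit length.
    axes = []
    for _ in range(2):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        axes.append([c / norm for c in v])
    # Each point's 2D coordinates are its dot products with the axes.
    return [tuple(sum(p[i] * a[i] for i in range(d)) for a in axes)
            for p in points]

points = [(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0)]
projected = random_projection_2d(points)
print(len(projected), len(projected[0]))  # → 2 2
```

Watching the projected cloud as the plane rotates reveals structure (clusters, outliers, low-dimensional manifolds) that no single static 3D view shows.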
Luis Granado Cardoso (CERN) – 04/11/2019, 12:00
LHCb is one of the 4 experiments at the LHC accelerator at CERN. During the upgrade phase of the experiment, several new electronic boards and Front End chips that perform the data acquisition for the experiment will be added by the different sub-detectors. These new devices will be controlled and monitored via a system composed of GigaBit Transceiver (GBT) chips that manage the bi-directional...
Pedro Andrade (CERN) – 04/11/2019, 12:00
Monitoring of the CERN Data Centres and the WLCG infrastructure is now largely based on the MONIT infrastructure provided by CERN IT. This is the result of the migration from several old in-house developed monitoring tools into a common monitoring infrastructure based on open source technologies such as Collectd, Flume, Kafka, Spark, InfluxDB, Grafana and others. The MONIT infrastructure...
Daniel Hugo Campora Perez (Universidad de Sevilla (ES)) – 04/11/2019, 12:15
As part of the LHCb detector upgrade in 2021, the hardware-level trigger will be removed, coinciding with an increase in luminosity. As a consequence, about 40 Tbit/s of data will be processed in a full-software trigger, a challenge that has prompted the exploration of alternative hardware technologies. Allen is a framework that permits concurrent many-event execution targeting many-core...
Elizabeth Gallas (University of Oxford (GB)) – 04/11/2019, 12:15 – Track 4 – Data Organisation, Management and Access (Oral)
The ATLAS model for remote access to database resident information relies upon a limited set of dedicated and distributed Oracle database repositories complemented with the deployment of Frontier system infrastructure on the WLCG. ATLAS clients with network access can get the database information they need dynamically by submitting requests to a squid server in the Frontier network which...
Dr Carsten Daniel Burgard (Nikhef National institute for subatomic physics (NL)) – 04/11/2019, 12:15
RooFit is the statistical modeling and fitting package used in many experiments to extract physical parameters from reduced particle collision data. RooFit aims to separate particle physics model building and fitting (the users' goals) from their technical implementation and optimization in the back-end. In this talk, we outline our efforts to further optimize the back-end by automatically...
Pablo Saiz (CERN) – 04/11/2019, 12:15
The Centralised Elasticsearch Service at CERN runs the infrastructure to provide Elasticsearch clusters for more than 100 different use cases. This contribution presents how the infrastructure is managed, covering the resource distribution, instance creation, cluster monitoring and user support. The contribution will present the components that have been identified as critical in order...
Andrea Sciabà (CERN) – 04/11/2019, 12:15
The increase in the scale of LHC computing during Run 3 and Run 4 (HL-LHC) will certainly require radical changes to the computing models and the data processing of the LHC experiments. The working group established by WLCG and the HEP Software Foundation to investigate all aspects of the cost of computing and how to optimise them has continued producing results and improving our understanding...
Thomas Kuhr (Ludwig Maximilians Universitat (DE)) – 04/11/2019, 12:15 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
Collaborative services are essential for any experiment. They help to integrate global virtual communities by allowing members to share and exchange relevant information. Typical examples are public and internal web pages, wikis, mailing list services, issue tracking systems, and services for meeting organization and documents. After reviewing their collaborative services with...
Dr Roman Dzhygadlo (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) – 04/11/2019, 12:15
The innovative Barrel DIRC (Detection of Internally Reflected Cherenkov light) counter will provide hadronic particle identification (PID) in the central region of the PANDA experiment at the new Facility for Antiproton and Ion Research (FAIR), Darmstadt, Germany. This detector is designed to separate charged pions and kaons with at least 3 standard deviations for momenta up to 3.5 GeV/c...
Benjamin LaRoque – 04/11/2019, 12:15
The Project 8 collaboration seeks to measure, or set a tighter bound on, the mass of the electron antineutrino by applying a novel spectroscopy technique to precision measurement of the tritium beta-decay spectrum. For the current, lab-bench-scale phase of the project, a single digitizer produces 3.2 GB/s of raw data. An onboard FPGA uses digital down-conversion to extract three 100 MHz wide...
Marco Zanetti (Universita e INFN, Padova (IT)) – 04/11/2019, 14:00 – Track 8 – Collaboration, Education, Training and Outreach (Oral)
Most of the challenges set by modern physics endeavours are related to the management, processing and analysis of massive amounts of data. As stated in a recent Nature editorial (The thing about data, Nature Physics volume 13, page 717, 2017), "the rise of big data represents an opportunity for physicists. To take full advantage, however, they need a subtle but important shift in mindset"....
Teo Mrnjavac (CERN) – 04/11/2019, 14:00
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020, which includes a new computing system called O² (Online-Offline). To ensure the efficient operation of the upgraded experiment and of its newly designed computing system, a reliable, high performance, and automated experiment control system is being developed. The...
Andrea Formica (Université Paris-Saclay (FR)) – 04/11/2019, 14:00 – Track 4 – Data Organisation, Management and Access (Oral)
ATLAS event processing requires access to centralized database systems where information about calibrations, detector status and data-taking conditions is stored. This processing is done on more than 150 computing sites on a world-wide computing grid which are able to access the database using the squid-Frontier system. Some processing workflows have been found which overload the Frontier...
Scott Snyder (Brookhaven National Laboratory (US)) – 04/11/2019, 14:00
In preparation for Run 3 of the LHC, the ATLAS experiment is modifying its offline software to be fully multithreaded. An important part of this is data structures that can be efficiently and safely concurrently accessed from many threads. A standard way of achieving this is through mutual exclusion; however, the overhead from this can sometimes be excessive. Fully lockless implementations are...
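A common alternative to both mutual exclusion and fully lockless structures is to avoid sharing on the hot path altogether: each thread accumulates into private storage, and the partial results are merged once after the threads join. A hedged Python sketch of that pattern (Python's GIL hides the contention costs that motivate this in C++, so treat it as structure only; all names are illustrative):

```python
import threading

def parallel_count(items, predicate, workers=4):
    """Count matching items without a shared lock: each thread fills its
    own slot in `partial`, so no mutual exclusion is needed on the hot
    path; results are merged after join."""
    chunks = [items[i::workers] for i in range(workers)]
    partial = [0] * workers

    def work(slot, chunk):
        local = 0                 # thread-local accumulator
        for x in chunk:
            if predicate(x):
                local += 1
        partial[slot] = local     # single write to a private slot

    threads = [threading.Thread(target=work, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial)

print(parallel_count(list(range(1000)), lambda x: x % 2 == 0))  # → 500
```

The same accumulate-then-merge shape is what reduction clauses and per-thread arena allocators implement in C++ threading libraries.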
Adam Davis (University of Manchester (GB)) – 04/11/2019, 14:00
The LHCb detector at the LHC is a single forward arm spectrometer dedicated to the study of $b$- and $c$-hadron states. During Run 1 and 2, the LHCb experiment has collected a total of 9 fb$^{-1}$ of data, corresponding to the largest charmed hadron dataset in the world and providing unparalleled datasets for studies of CP violation in the $B$ system, hadron spectroscopy and rare decays, not...
Maria Girone (CERN) – 04/11/2019, 14:00
High Performance Computing (HPC) centers are the largest facilities available for science. They are centers of expertise for computing scale and local connectivity and represent unique resources. The efficient usage of HPC facilities is critical to the future success of production processing campaigns of all Large Hadron Collider (LHC) experiments. A substantial amount of R&D investigations...
Antonio Delgado Peris (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas) – 04/11/2019, 14:00
There is a general trend in WLCG towards the federation of resources, aiming for increased simplicity, efficiency, flexibility, and availability. Although general, VO-agnostic federation of resources between two independent and autonomous resource centres may prove arduous, a considerable amount of flexibility in resource sharing can be achieved, in the context of a single WLCG VO, with a...
Riley Patrick (The University of Adelaide) – 04/11/2019, 14:00
In this talk I will present an investigation into sizeable interference effects between a {heavy} charged Higgs boson signal produced via $gg\to t\bar b H^-$ (+ c.c.) followed by the decay $H^-\to b\bar t$ (+ c.c.) and the irreducible background given by $gg\to t\bar t b \bar b$ topologies at the Large Hadron Collider (LHC). I will show how such effects could spoil current $H^\pm$...
Daniele Spiga (Universita e INFN, Perugia (IT)) – 04/11/2019, 14:00
The EGI Cloud Compute service offers a multi-cloud IaaS federation that brings together research clouds as a scalable computing platform for research, accessible with OpenID Connect Federated Identity. The federation is not limited to single sign-on; it also introduces features to facilitate the portability of applications across providers: i) a common VM image catalogue with VM image replication to...
Randall Sobie (University of Victoria (CA)) – 04/11/2019, 14:15
The cloudscheduler VM provisioning service has been running production jobs for ATLAS and Belle II for many years using commercial and private clouds in Europe, North America and Australia. Initially released in 2009, version 1 is a single Python 2 module implementing multiple threads to poll resources and jobs, and to create and destroy virtual machines. The code is difficult to scale,...
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas) – 04/11/2019, 14:15
High Energy Physics (HEP) experiments will enter a new era with the start of the HL-LHC program, when the required computing capacity will surpass current capacities by large factors. Looking forward to this scenario, funding agencies from participating countries are encouraging the HEP collaborations to consider the rapidly developing High Performance Computing (HPC) international...
Maciej Szymon Gladki (University of Warsaw (PL)) – 04/11/2019, 14:15
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and recover promptly in the event of hardware or software failure of...
Go to contribution page -
Johnny Raine (Universite de Geneve (CH))04/11/2019, 14:15
The ATLAS physics program relies on very large samples of GEANT4 simulated events, which provide a highly detailed and accurate simulation of the ATLAS detector. However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future. Therefore, sophisticated fast...
Go to contribution page -
Pat Scott (The University of Queensland)04/11/2019, 14:15
GAMBIT is a modular and flexible framework for performing global fits to a wide range of theories for new physics. It includes theory and analysis calculations for direct production of new particles at the LHC, flavour physics, dark matter experiments, cosmology and precision tests, as well as an extensive library of advanced parameter-sampling algorithms. I will present the GAMBIT software...
Go to contribution page -
Dr Gagik Gavalian (Jefferson Lab)04/11/2019, 14:15Track 4 – Data Organisation, Management and AccessOral
With increasing data volume from Nuclear Physics experiments, requirements for data storage and access are changing. To keep up with large data sets, new data formats are needed for efficient processing and analysis of the data. Frequently, in the experiments, data goes through stages from data acquisition to reconstruction and data analysis, and is converted from one format to another...
Go to contribution page -
Remi Ete (DESY)04/11/2019, 14:15
Marlin is the event processing framework of the iLCSoft ecosystem. Originally developed for the ILC more than 15 years ago, it is now widely used, e.g. by CLICdp, CEPC and many test beam projects such as Calice, LCTPC and EU-Telescope. While Marlin is lightweight and flexible, it was originally designed for sequential processing only. With MarlinMT we have now evolved Marlin for parallel processing...
Go to contribution page -
Dr Daniel Peter Traynor (Queen Mary University of London (GB))04/11/2019, 14:15
The Queen Mary University of London WLCG Tier-2 Grid site has been providing GPU resources on the Grid since 2016. GPUs are an important modern tool to assist in data analysis. They have historically been used to accelerate computationally expensive but parallelisable workloads using frameworks such as OpenCL and CUDA. However, more recently their power in accelerating machine learning,...
Go to contribution page -
Hannah Short (CERN)04/11/2019, 14:15Track 8 – Collaboration, Education, Training and OutreachOral
The number of women in technical computing roles in the HEP community hovers at around 15%. At the same time there is a growing body of research to suggest that diversity, in all its forms, brings positive impact on productivity and wellbeing. These aspects are directly in line with many organisations’ values and missions, including CERN. Although proactive efforts to recruit more women in our...
Go to contribution page -
Xiaowei Jiang (IHEP, Institute of High Energy Physics, Chinese Academy of Sciences)04/11/2019, 14:30
In a HEP computing center, at least one batch system is used. As an example, at IHEP we have used three batch systems: PBS, HTCondor and Slurm. After running PBS as the local batch system for 10 years, we replaced it with HTCondor (for HTC) and Slurm (for HPC). During that period, problems came up on both the user and admin sides.
On the user side, the new batch systems bring a set of new commands, which...
Go to contribution page -
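A unified front-end of the kind this abstract alludes to can be pictured as a thin wrapper that translates one generic submission interface into the native command line of each batch system. The sketch below is purely illustrative (the command templates are simplified and this is not IHEP's actual tool):

```python
# Illustrative sketch: one generic "submit" interface mapped onto the
# native submission commands of several batch systems. Not IHEP's real tool.
BACKENDS = {
    "pbs":      ["qsub"],
    "htcondor": ["condor_submit"],
    "slurm":    ["sbatch"],
}

def build_submit_command(backend, script):
    """Translate a generic 'submit this script' request into the
    native command line of the chosen batch system."""
    if backend not in BACKENDS:
        raise ValueError(f"unknown batch system: {backend}")
    return BACKENDS[backend] + [script]

print(build_submit_command("slurm", "job.sh"))   # ['sbatch', 'job.sh']
```

A real migration tool would also have to translate job description options (queues, resource requests, environment), which is where most of the user-side friction described in the abstract comes from.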
Igor Soloviev (University of California Irvine (US))04/11/2019, 14:30
The Information Service (IS) is an integral part of the Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The IS allows online publication of operational monitoring data, and it is used by all sub-systems and sub-detectors of the experiment to constantly monitor their hardware and software components including more than 25000...
Go to contribution page -
Oliver Gutsche (Fermi National Accelerator Lab. (US))04/11/2019, 14:30
The advent of computing resources with co-processors, for example Graphics Processing Units (GPU) or Field-Programmable Gate Arrays (FPGA), for use cases like the CMS High-Level Trigger (HLT) or data processing at leadership-class supercomputers imposes challenges for the current data processing frameworks. These challenges include developing a model for algorithms to offload their...
Go to contribution page -
Federica Legger (Universita e INFN Torino (IT))04/11/2019, 14:30Track 8 – Collaboration, Education, Training and OutreachOral
In recent years proficiency in data science and machine learning (ML) has become one of the most requested skills for jobs in both industry and academia. Machine learning algorithms typically require large sets of data to train the models and extensive usage of computing resources, both for training and inference. Especially for deep learning algorithms, training performance can be dramatically...
Go to contribution page -
Hugo Gonzalez Labrador (CERN)04/11/2019, 14:30
Cloud Services for Synchronization and Sharing (CS3) have become increasingly popular in the European Education and Research landscape in recent
years. Services such as CERNBox, SWITCHdrive, CloudStor and many more have become indispensable in the everyday work of scientists, engineers and administration. CS3 services represent an important part of the EFSS market segment (Enterprise File...
Go to contribution page -
Ioana Ifrim (CERN)04/11/2019, 14:30
In High Energy Physics, simulation is a key element for the evaluation of theoretical models and for detector design choices. The increase in the luminosity of particle accelerators leads to a higher computational cost when dealing with the orders-of-magnitude increase in collected data. Thus, novel methods for speeding up simulation procedures (FastSimulation tools) are being developed with the...
Go to contribution page -
David Lange (Princeton University (US))04/11/2019, 14:30
The High-Luminosity LHC will provide an unprecedented data volume of complex collision events. The desire to keep as many of the "interesting" events for investigation by analysts implies a major increase in the scale of compute, storage and networking infrastructure required for HL-LHC experiments. An updated computing model is required to facilitate the timely publication of accurate physics...
Go to contribution page -
Benjamin Roberts (The University of Queensland)04/11/2019, 14:30
Despite the overwhelming cosmological evidence for the existence of dark matter, and the considerable effort of the scientific community over decades, there is no evidence for dark matter in terrestrial experiments.
The GPS.DM observatory uses the existing GPS constellation as a 50,000 km-aperture sensor array, analysing the satellite and terrestrial atomic clock data for exotic physics...
Go to contribution page -
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))04/11/2019, 14:30Track 4 – Data Organisation, Management and AccessOral
For almost 10 years now XRootD has been very successful at facilitating data management of LHC experiments. Being the foundation and main component of numerous solutions employed within the WLCG collaboration (like EOS and DPM), XRootD grew into one of the most important storage technologies in the High Energy Physics (HEP) community. With the latest major release (5.0.0) XRootD framework...
Go to contribution page -
Peter Kodys (Charles University), Peter Kodys (Charles University (CZ))04/11/2019, 14:45
The Belle II experiment features a substantial upgrade of the Belle detector and will operate at the SuperKEKB energy-asymmetric $e^+ e^-$ collider at KEK in Tsukuba, Japan. The accelerator successfully completed the first phase of commissioning in 2016 and the Belle II detector saw its first electron-positron collisions in April 2018. Belle II features a newly designed silicon vertex detector...
Go to contribution page -
Xiaomei Zhang (Chinese Academy of Sciences (CN))04/11/2019, 14:45
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment which plans to take about 2 PB of raw data each year starting from 2021. The experiment data are planned to be stored at IHEP, with another copy in Europe (at the CNAF, IN2P3 and JINR data centers). MC simulation tasks are expected to be arranged and operated through a distributed computing system to share efforts among...
Go to contribution page -
Dr Wei Su (University of Adelaide)04/11/2019, 14:45
In this talk, we discuss the new physics implications of the Two Higgs Doublet Model (2HDM) under various experimental constraints. As part of the GAMBIT group's work, we use the global fit method to constrain the parameter space, find hints of new physics and make predictions for further studies.
In our global fit, we include the constraints from LEP, LHC (SM-like...
Go to contribution page -
Edward Karavakis (CERN)04/11/2019, 14:45Track 4 – Data Organisation, Management and AccessOral
The File Transfer Service, developed at CERN and in production since 2014, has become a fundamental component of LHC experiment workflows.
Starting from the beginning of 2018, with the participation in the EU project Extreme Data Cloud (XDC) [1] and the activities carried out in the context of the DOMA TPC [2] and QoS [3] working groups, a series of new developments and improvements has been...
Go to contribution page -
Andrea Bocci (CERN)04/11/2019, 14:45
As the mobile ecosystem has demonstrated, ARM processors and GPUs promise to deliver higher compute efficiency at lower power consumption. One interesting platform for experimenting with architectures different from a traditional x86 machine is the NVIDIA AGX Xavier SoC, which pairs an 8-core 64-bit ARM processor with a Volta-class GPU with 512 CUDA cores. The CMS reconstruction software was...
Go to contribution page -
Federico Stagni (CERN)04/11/2019, 14:45
High Performance Computing (HPC) supercomputers are expected to play an increasingly important role in HEP computing in the coming years. While HPC resources are not necessarily the optimal fit for HEP workflows, computing time at HPC centers on an opportunistic basis has already been available to the LHC experiments for some time, and it is also possible that part of the pledged computing...
Go to contribution page -
Ricardo Brito Da Rocha (CERN)04/11/2019, 14:45
The future need of simulated events for the LHC experiments and their High Luminosity upgrades is expected to increase dramatically. As a consequence, research on new fast simulation solutions, based on Deep Generative Models, is very active and initial results look promising.
We have previously reported on a prototype that we have developed, based on a 3-dimensional convolutional Generative...
Go to contribution page -
Marion Devouassoux (CERN)04/11/2019, 14:45
The use of commercial cloud services has gained popularity in research environments. Not only is it a flexible solution for adapting computing capacity to researchers' needs, it also provides access to the newest functionalities on the market. In addition, most service providers offer cloud credits, enabling researchers to explore innovative architectures before procuring them at scale....
Go to contribution page -
Gianluca Peco (Universita e INFN, Bologna (IT))04/11/2019, 14:45Track 8 – Collaboration, Education, Training and OutreachOral
iTHEPHY is an ERASMUS+ project which aims at developing innovative student-centered Deeper Learning Approaches (DPA) and Project-Based teaching and learning methodologies for HE students, contributing to increase the internationalization of physics master courses. In this talk we'll introduce the iTHEPHY project status and main goals attained, with a focus on the web-based virtual environment...
Go to contribution page -
Stefano Dal Pra (Universita e INFN, Bologna (IT))04/11/2019, 15:00
In the last couple of years, we have been actively developing the Dynamic On-Demand Analysis Service (DODAS) as an enabling technology to deploy container-based clusters over any cloud infrastructure with almost zero effort. The DODAS engine is driven by high-level templates written in the TOSCA language, which abstract away the complexity of many configuration details. DODAS is...
Go to contribution page -
Francesco Giovanni Sciacca (Universitaet Bern (CH))04/11/2019, 15:00
Predictions of the requirements for LHC computing for Run 3 and Run 4 (HL-LHC) over the course of the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. Concentrating computational resources in fewer...
Go to contribution page -
Fedor Ratnikov (Yandex School of Data Analysis (RU))04/11/2019, 15:00
LHCb is one of the major experiments operating at the Large Hadron Collider at CERN. The richness of the physics program and the increasing precision of the measurements in LHCb lead to the need of ever larger simulated samples. This need will increase further when the upgraded LHCb detector will start collecting data in the LHC Run 3. Given the computing resources pledged for the production...
Go to contribution page -
Attila Krasznahorkay (CERN)04/11/2019, 15:00
With GPUs and other kinds of accelerators becoming ever more accessible, and High Performance Computing Centres all around the world using them ever more, ATLAS has to find the best way of making use of such accelerators in much of its computing.
Tests with GPUs -- mainly with CUDA -- have been performed in the past in the experiment. At that time the conclusion was that it was not advantageous...
Go to contribution page -
Farid Ould-Saada (University of Oslo (NO))04/11/2019, 15:00Track 8 – Collaboration, Education, Training and OutreachOral
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and outreach for particle physics. IPPOG’s flagship activity is the International Particle Physics Masterclass programme, which provides secondary students with access to particle physics data using...
Go to contribution page -
Koji Hara (KEK)04/11/2019, 15:00
At high luminosity flavor factory experiments such as the Belle II experiment, it is expected that new physics effects will be found and new physics models constrained thanks to the high statistics and many observables. In such analyses, a global analysis of the many observables with a model-independent approach is important. One difficulty in such a global analysis is that the new physics...
Go to contribution page -
Stefano Petrucci (University of Edinburgh, CERN)04/11/2019, 15:00
The LHCb high level trigger (HLT) is split in two stages. HLT1 is synchronous with collisions delivered by the LHC and writes its output to a local disk buffer, which is asynchronously processed by HLT2. Efficient monitoring of the data being processed by the application is crucial to promptly diagnose detector or software problems. HLT2 consists of approximately 50000 processes and 4000...
Go to contribution page -
Michael Schuh (Deutsches Elektronen-Synchrotron DESY)04/11/2019, 15:00
Low latency, high throughput data processing in distributed environments is a key requirement of today's experiments. Storage events facilitate synchronisation with external services in cases where the widely adopted request-response pattern does not scale, because polling becomes a long-running activity. We discuss the use of an event broker and stream processing platform (Apache Kafka) for storage...
Go to contribution page -
Katy Ellis (Science and Technology Facilities Council STFC (GB))04/11/2019, 15:00Track 4 – Data Organisation, Management and AccessOral
The XRootD software framework is essential for data access at WLCG sites. The WLCG community is exploring and expanding XRootD functionality. This presents a particular challenge at the RAL Tier-1 as the Echo storage service is a Ceph based Erasure Coded object store. External access to Echo uses gateway machines which run GridFTP and XRootD servers. This paper will describe how third party...
Go to contribution page -
Igor Sfiligoi (UCSD)04/11/2019, 15:15
Cloud computing is becoming mainstream, with funding agencies moving beyond prototyping and starting to fund production campaigns, too. An important aspect of any production computing campaign is data movement, both incoming and outgoing. And while the performance and cost of VMs is relatively well understood, the network performance and cost is not.
We thus embarked on a network...
Go to contribution page -
Thomas Kuhr (Ludwig Maximilians Universitat (DE))04/11/2019, 15:15
Belle II uses a Geant4-based simulation to determine the detector response to the generated decays of interest. A realistic detector simulation requires the inclusion of noise from beam-induced backgrounds. This is accomplished by overlaying random trigger data to the simulated signal. To have statistically independent Monte-Carlo events a high number of random trigger events are desirable....
Go to contribution page -
Mr Ziheng Chen (Northwestern University)04/11/2019, 15:15
The future upgraded High Luminosity LHC (HL-LHC) is expected to deliver about 5 times higher instantaneous luminosity than the present LHC, producing pile-up up to 200 interactions per bunch crossing. As a part of its phase-II upgrade program, the CMS collaboration is developing a new end-cap calorimeter system, the High Granularity Calorimeter (HGCAL), featuring highly-segmented hexagonal...
Go to contribution page -
Piotr Konopka (CERN, AGH University of Science and Technology (PL))04/11/2019, 15:15
The ALICE Experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020, which includes a new computing system called O² (Online-Offline). The raw data input from the ALICE detectors will then increase a hundredfold, up to 3.4 TB/s. In order to cope with such a large amount of data, the O² system will...
Go to contribution page -
Ian Collier (Science and Technology Facilities Council STFC (GB))04/11/2019, 15:15Track 4 – Data Organisation, Management and AccessOral
When the LHC started data taking in 2009, the data rates were unprecedented for the time and forced the WLCG community to develop a range of tools for managing data across many different sites. A decade later, other science communities are finding that their data requirements have grown far beyond what they can easily manage and are looking for help. The RAL Tier-1's primary mission has always...
Go to contribution page -
Martin John White (University of Adelaide (AU))04/11/2019, 15:15
Searches for beyond-Standard Model physics at the LHC have thus far not uncovered any evidence of new particles, and this is often used to state that new particles with low mass are now excluded. Using the example of the supersymmetric partners of the electroweak sector of the Standard Model, I will present recent results from the GAMBIT collaboration that show that there is plenty of room for...
Go to contribution page -
Minh Huynh04/11/2019, 16:00Plenary
-
Mario Lassnig (CERN)04/11/2019, 16:30
For many scientific projects, data management is an increasingly complicated challenge. The number of data-intensive instruments generating unprecedented volumes of data is growing and their accompanying workflows are becoming more complex. Their storage and computing resources are heterogeneous and are distributed at numerous geographical locations belonging to different administrative...
Go to contribution page -
Mansi Kasliwal (California Institute of Technology)04/11/2019, 17:00Plenary
-
Lyn Beazley04/11/2019, 17:30Plenary
-
Waseem Kamleh (University of Adelaide)04/11/2019, 18:00
-
Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES))05/11/2019, 09:00Plenary
-
Ruben Shahoyan (CERN)05/11/2019, 09:30Plenary
The ALICE experiment was originally designed as a relatively low-rate experiment, in particular given the limitations of the Time Projection Chamber (TPC) readout system using MWPCs. This will no longer be the case for LHC Run 3, scheduled to start in 2021.
After the LS2 upgrades, including a new silicon tracker and a GEM-based readout for the TPC, ALICE will operate at a peak Pb-Pb...
Go to contribution page -
Xinchou Lou (Chinese Academy of Sciences (CN))05/11/2019, 10:00Plenary
-
Filippo Costa (CERN)05/11/2019, 11:00
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020. The raw data input from the detector will then increase a hundredfold, up to 3.4 TB/s. In order to cope with such a large throughput, a new Online-Offline computing system, called O2, will be deployed.
The FLP servers (First Layer Processor) are the readout nodes...
Go to contribution page -
Ivana Hrivnacova (Centre National de la Recherche Scientifique (FR))05/11/2019, 11:00
The Geant4 electromagnetic (EM) physics sub-packages are an important component of LHC experiment simulations. During Long Shutdown 2 of the LHC these packages are under intensive development, and in this work we report progress for the new Geant4 version 10.6. These developments include modifications to speed up EM physics computations, improved EM models, an extended set of models, and...
Go to contribution page -
Dr Maxim Gonchar (Joint Institute for Nuclear Research)05/11/2019, 11:00
GNA is a high performance fitter, designed to handle large scale models with a large number of parameters. Following the dataflow paradigm, a model in GNA is built as a directed acyclic graph. Each node (transformation) of the graph represents a function that operates on vectorized data. A library of transformations implementing various functions is precompiled. The graph itself is assembled...
Go to contribution page -
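The dataflow idea described above — a directed acyclic graph whose nodes apply functions to vectorized data, with results cached until invalidated — can be sketched in a few lines. This is a toy illustration of the paradigm, not GNA's actual API; the node names (`energies`, `flux`, `xsec`, `rate`) are invented for the example:

```python
import numpy as np

class Transformation:
    """A node in a directed acyclic graph: lazily computes a function
    of its input nodes on vectorized (NumPy) data and caches the result."""
    def __init__(self, func, *inputs):
        self.func = func
        self.inputs = inputs
        self._cache = None

    def invalidate(self):
        # Called when an upstream parameter changes, e.g. during a fit.
        self._cache = None

    def value(self):
        if self._cache is None:
            args = [t.value() for t in self.inputs]
            self._cache = self.func(*args)
        return self._cache

# Leaf node holds a vector; downstream nodes combine vectors element-wise.
energies = Transformation(lambda: np.linspace(1.0, 8.0, 5))
flux     = Transformation(lambda e: e ** -2, energies)        # toy spectrum
xsec     = Transformation(lambda e: 0.1 * e, energies)        # toy cross-section
rate     = Transformation(lambda f, s: f * s, flux, xsec)     # element-wise product

print(rate.value())   # equals 0.1 / energies for this toy model
```

In a real fitter the caching and invalidation machinery is what makes repeated evaluation during minimization cheap: only the nodes downstream of a changed parameter are recomputed.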
Paul Nilsson (Brookhaven National Laboratory (US))05/11/2019, 11:00
The unprecedented computing resource needs of the ATLAS experiment have motivated the Collaboration to become a leader in exploiting High Performance Computers (HPCs). To meet the requirements of HPCs, the PanDA system has been equipped with two new components, Pilot 2 and Harvester, that were designed with HPCs in mind. While Harvester is a resource-facing service which provides resource...
Go to contribution page -
Alessandra Forti (University of Manchester (GB))05/11/2019, 11:00Track 4 – Data Organisation, Management and AccessOral
The “Third Party Copy” (TPC) Working Group in the WLCG’s “Data Organization, Management, and Access” (DOMA) activity was proposed during a CHEP 2018 Birds of a Feather session in order to help organize the work toward developing alternatives to the GridFTP protocol. Alternate protocols enable the community to diversify; explore new approaches such as alternate authorization mechanisms; and...
Go to contribution page -
Vladimir Loncar (University of Belgrade (RS))05/11/2019, 11:00
MPI-learn and MPI-opt are libraries for performing large-scale training and hyper-parameter optimization of deep neural networks. The two libraries, based on the Message Passing Interface, allow these tasks to be performed on GPU clusters through different kinds of parallelism. The main characteristic of these libraries is their flexibility: the user has complete freedom in building her own model,...
Go to contribution page -
Dr Anna Elizabeth Woodard (University of Chicago)05/11/2019, 11:00
The traditional HEP analysis model uses successive processing steps to reduce the initial dataset to a size that permits real-time analysis. This iterative approach requires significant CPU time and storage of large intermediate datasets and may take weeks or months to complete. Low-latency, query-based analysis strategies are being developed to enable real-time analysis of primary datasets by...
Go to contribution page -
Steven Goldfarb (University of Melbourne (AU))05/11/2019, 11:00Track 8 – Collaboration, Education, Training and OutreachOral
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and outreach for particle physics. The primary methodology adopted by IPPOG requires the direct involvement of scientists active in current research with education and communication specialists, in order to...
Go to contribution page -
Gordon Watts (University of Washington (US))05/11/2019, 11:00
The increase in luminosity by a factor of 100 for the HL-LHC with respect to Run 1 poses a big challenge from the data analysis point of view. It demands a comparable improvement in software and processing infrastructure. The use of GPU enhanced supercomputers will increase the amount of computer power and analysis languages will have to be adapted to integrate them. The particle physics...
Go to contribution page -
Leo Piilonen (Virginia Tech)05/11/2019, 11:15Track 8 – Collaboration, Education, Training and OutreachOral
I describe a novel interactive virtual reality visualization of the Belle II detector at KEK and the animation therein of GEANT4-simulated event histories. Belle2VR runs on Oculus and Vive headsets (as well as in a web browser and on 2D computer screens, in the absence of a headset). A user with some particle-physics knowledge manipulates a gamepad or hand controller(s) to interact with and...
Go to contribution page -
Remi Mommsen (Fermi National Accelerator Lab. (US))05/11/2019, 11:15
We report on performance measurements and optimizations of the event-builder software for the CMS experiment at the CERN Large Hadron Collider (LHC). The CMS event builder collects event fragments from several hundred sources. It assembles them into complete events that are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. We use a test system with 16...
Go to contribution page -
Jason Webb (Brookhaven National Lab)05/11/2019, 11:15
The STAR Heavy Flavor Tracker (HFT) has enabled a rich physics program, providing important insights into heavy quark behavior in heavy ion collisions. Acquiring data during the 2014 through 2016 runs at the Relativistic Heavy Ion Collider (RHIC), the HFT consisted of four layers of precision silicon sensors. Used in concert with the Time Projection Chamber (TPC), the HFT enables the...
Go to contribution page -
Johannes Elmsheuser (Brookhaven National Laboratory (US))05/11/2019, 11:15
With an increased dataset obtained during CERN LHC Run-2, the even larger forthcoming Run-3 data and more than an order of magnitude expected increase for HL-LHC, the ATLAS experiment is reaching the limits of the current data production model in terms of disk storage resources. The anticipated availability of an improved fast simulation will enable ATLAS to produce significantly larger Monte...
Go to contribution page -
David Schultz (University of Wisconsin-Madison)05/11/2019, 11:15
For the past several years, IceCube has embraced a central, global overlay grid of HTCondor glideins to run jobs. With guaranteed network connectivity, the jobs themselves transferred data files, software, logs, and status messages. Then we were given access to a supercomputer, with no worker node internet access. As the push towards HPC increased, we had access to several of these...
Go to contribution page -
Maria Alandes Pradillo (CERN)05/11/2019, 11:15
The CERN IT department has been maintaining different HPC facilities over the past five years, one on Windows and the other on Linux, as the bulk of the computing facilities at CERN run under Linux. The Windows cluster has been dedicated to engineering simulations and analysis problems. This cluster is a High Performance Computing (HPC) cluster thanks to powerful hardware and low-latency...
Go to contribution page -
Vardan Gyurjyan (Jefferson Lab)05/11/2019, 11:15
Modern hardware is trending towards increasingly parallel and heterogeneous architectures. Contemporary machine processors are spread across multiple sockets, where each socket can access some system memory faster than the rest, creating non-uniform memory access (NUMA). Efficiently utilizing these NUMA machines is becoming increasingly important. This paper examines the latest Intel Skylake and...
Go to contribution page -
Brian Paul Bockelman (University of Nebraska Lincoln (US))05/11/2019, 11:15Track 4 – Data Organisation, Management and AccessOral
Since its earliest days, the Worldwide LHC Computing Grid (WLCG) has relied on GridFTP to transfer data between sites. The announcement that Globus is dropping support of its open source Globus Toolkit (GT), which forms the basis for several FTP clients and servers, has created an opportunity to reevaluate the use of FTP. HTTP-TPC, an extension to HTTP compatible with WebDAV, has arisen...
Go to contribution page -
Shigeki Misawa (BNL)05/11/2019, 11:15
Driven by the need to carefully plan and optimise the resources for the next data taking periods of Big Science projects, such as CERN’s Large Hadron Collider and others, sites started a common activity, the HEPiX Technology Watch Working Group, tasked with tracking the evolution of technologies and markets of concern to the data centres. The talk will give an overview of general and...
Go to contribution page -
Sandro Christian Wenzel (CERN)05/11/2019, 11:30
VecGeom is a geometry modeller library with the hit-detection features needed by particle detector simulation at the LHC and beyond. It was incubated by a Geant R&D initiative, with the motivation of combining the code of Geant4 and ROOT/TGeo into a single, more maintainable piece of software within the EU-AIDA program.
So far, VecGeom is mainly used by LHC experiments as a geometry primitive...
Go to contribution page -
Gene Van Buren (Brookhaven National Laboratory)05/11/2019, 11:30
For the last 5 years, Accelogic has pioneered and perfected a radically new theory of numerical computing, codenamed “Compressive Computing”, which has an extremely profound impact on real-world computer science. At the core of this new theory is the discovery of one of its fundamental theorems, which states that, under very general conditions, the vast majority (typically between 70% and 80%) of the...
Go to contribution page -
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))05/11/2019, 11:30
The upcoming generation of exascale HPC machines will all have most of their computing power provided by GPGPU accelerators. In order to be able to take advantage of this class of machines for HEP Monte Carlo simulations, we started to develop a Geant pilot application as a collaboration between HEP and the Exascale Computing Project. We will use this pilot to study and characterize how the...
Go to contribution page -
Vardan Gyurjyan (Jefferson Lab)05/11/2019, 11:30
The hardware landscape used in HEP and NP is changing from homogeneous multi-core systems towards heterogeneous systems with many different computing units, each with their own characteristics. To achieve maximum data processing performance, the main challenge is to place the right computation on the right hardware.
In this paper we discuss CLAS12 charged particle tracking workload partitioning...
Go to contribution page -
Ofer Rind05/11/2019, 11:30
At the SDCC we are deploying a JupyterHub infrastructure to enable scientists from multiple disciplines to access our diverse compute and storage resources. One major design goal was to avoid rolling out yet another compute backend and to leverage our pre-existing resources via our batch systems (HTCondor and Slurm). Challenges faced include creating a frontend that allows users to choose...
Go to contribution page -
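Setups of this kind — JupyterHub spawning each user's notebook server as a batch job instead of on a dedicated backend — are commonly built on the batchspawner package. A minimal, illustrative Slurm configuration fragment might look like the following (the partition name and resource values are assumptions, not SDCC's actual settings):

```python
# jupyterhub_config.py -- illustrative fragment only, not SDCC's configuration.
# Spawn each user's single-user notebook server as a Slurm batch job.
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'

c.SlurmSpawner.req_partition = 'jupyter'   # assumed partition name
c.SlurmSpawner.req_memory = '4G'           # assumed per-job memory request
c.SlurmSpawner.req_runtime = '8:00:00'     # assumed job wall-time limit
```

The analogous HTCondor case uses a different spawner class; the design point is that resource limits are enforced by the batch system the site already operates, not by the hub.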
Jan de Cuveland (Johann-Wolfgang-Goethe Univ. (DE))05/11/2019, 11:30
The Compressed Baryonic Matter (CBM) experiment is currently under construction at the GSI/FAIR accelerator facility in Darmstadt, Germany. In CBM, all event selection is performed in a large online processing system, the “First-level Event Selector” (FLES). The data are received from the self-triggered detectors at an input-stage computer farm designed for a data rate of 1 TByte/s. The...
Go to contribution page -
Prof. Andreas Wicenec (International Centre of Radio Astronomy Research)05/11/2019, 11:30
The SKA will enable the production of full polarisation spectral line cubes at very high spatial and spectral resolution. A back-of-the-envelope estimate yields the incredible number of around 75-100 million tasks to run in parallel to perform a state-of-the-art faceting algorithm (assuming that it would spawn off just one task per facet, which is not the case). This simple...
Go to contribution page -
Hannes Sakulin (CERN)05/11/2019, 11:30Track 8 – Collaboration, Education, Training and OutreachOral
We present an interactive game for up to seven players that demonstrates the challenges of on-line event selection at the Compact Muon Solenoid (CMS) experiment to the public. The game - in the shape of a popular classic pinball machine - was conceived and prototyped by an interdisciplinary team of graphic designers, physicists and engineers at the CMS Create hackathon in 2016. Having won the...
Go to contribution page -
Wei Yang (SLAC National Accelerator Laboratory (US))05/11/2019, 11:30Track 4 – Data Organisation, Management and AccessOral
Third Party Copy (TPC) has existed in the pure XRootD storage environment for many years. However, using XRootD TPC in the WLCG environment presents additional challenges due to the diversity of the storage systems involved, such as EOS, dCache, DPM and ECHO, requiring that we carefully navigate the unique constraints imposed by these storage systems and their site-specific environments...
Go to contribution page -
Lukas On Arnold (Columbia University)05/11/2019, 11:45
Covariance matrices are used for a wide range of applications in particle physics, including the Kalman filter for tracking purposes, as well as for Principal Component Analysis and other dimensionality reduction techniques. The covariance matrix contains covariance and variance measures between all permutations of data dimensions, leading to high computational cost.
By using a novel decomposition...
Go to contribution page -
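The decomposition itself is not detailed in the abstract, but the baseline cost it targets is easy to see: a straightforward sample-covariance computation over n points of dimension d costs O(n·d²). A minimal stdlib-only sketch (illustrative data, not from the talk):

```python
# Illustrative only: plain sample-covariance computation for a small
# dataset. The novel decomposition mentioned in the abstract is not
# reproduced here.

def covariance_matrix(data):
    """Compute the d x d sample covariance matrix of n points of dimension d."""
    n = len(data)
    d = len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(d):
            # O(n) sum per entry, d*d entries -> O(n * d^2) overall
            cov[i][j] = sum(
                (row[i] - means[i]) * (row[j] - means[j]) for row in data
            ) / (n - 1)
    return cov

points = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
C = covariance_matrix(points)  # symmetric 2x2 matrix
```

The symmetry of the result (C[i][j] == C[j][i]) is exactly the kind of redundancy a decomposition can exploit.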
Giulio Eulisse (CERN)05/11/2019, 11:45
ALICE Experiment is currently undergoing a major upgrade program, both in
terms of hardware and software, to prepare for the LHC Run 3. A new Software
Framework is being developed in collaboration with the FAIR experiments at GSI
to cope with the 100 fold increase in collected collisions.
We present our progress to adapt such a framework for the end user physics data
analysis. In...
Go to contribution page -
Marcel Rieger (CERN)05/11/2019, 11:45
In particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual workflows manually, which is time-consuming and often leads to undocumented relations between particular workloads.
We present the luigi analysis workflow (law)...
Go to contribution page -
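law builds on the luigi package, whose core idea is tasks that declare their requirements and run exactly once. A hypothetical stdlib-only sketch of that dependency-driven execution model (the task names and registry structure are invented for illustration, not law's API):

```python
# Hypothetical sketch of dependency-driven workflow execution in the
# spirit of luigi/law: each task names its requirements and runs once.

def run(task, registry, done=None):
    """Depth-first execution: run all requirements before the task itself."""
    if done is None:
        done = []
    for req in registry[task]["requires"]:
        run(req, registry, done)
    if task not in done:  # each task executes exactly once
        registry[task]["action"]()
        done.append(task)
    return done

log = []
workflow = {
    "select": {"requires": [],                   "action": lambda: log.append("select")},
    "histos": {"requires": ["select"],           "action": lambda: log.append("histos")},
    "fit":    {"requires": ["histos", "select"], "action": lambda: log.append("fit")},
}
order = run("fit", workflow)
```

Requesting only the final "fit" task pulls in "select" and "histos" automatically, and "select" is not re-run even though two tasks require it — the documented-dependency property the abstract contrasts with manual steering.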
Marzena Lapka (CERN)05/11/2019, 11:45Track 8 – Collaboration, Education, Training and OutreachOral
Rapid economic growth is creating new trends in careers. Almost every domain, including high-energy physics, needs people with strong programming capabilities. In this evolving environment, it is highly desirable that young people be equipped with computational thinking (CT) skills, such as problem-solving and logical thinking, as well as the ability to develop software applications...
Go to contribution page -
Kevin Pedro (Fermi National Accelerator Lab. (US))05/11/2019, 11:45
The HL-LHC and the corresponding detector upgrades for the CMS experiment will present extreme challenges for the full simulation. In particular, increased precision in models of physics processes may be required for accurate reproduction of particle shower measurements from the upcoming High Granularity Calorimeter. The CPU performance impacts of several proposed physics models will be...
Go to contribution page -
Fernando Harald Barreiro Megino (University of Texas at Arlington)05/11/2019, 11:45
ATLAS Computing Management has identified the migration of all resources to Harvester, PanDA’s new workload submission engine, as a critical milestone for Run 3 and 4. This contribution will focus on the Grid migration to Harvester.
We have built a redundant architecture based on CERN IT’s common offerings (e.g. Openstack Virtual Machines and Database on Demand) to run the necessary Harvester...
Go to contribution page -
Prof. Ryosuke Itoh (KEK)05/11/2019, 11:45
The Belle II experiment is a new-generation B-factory experiment at KEK in Japan aiming at the search for New Physics in a huge sample of B-meson decays. The commissioning of the accelerator and detector for the first physics run started in March this year. The Belle II High Level Trigger (HLT) is fully working in the beam run. The HLT is now operated with 1600 cores clusterized in 5...
Go to contribution page -
Mario Lassnig (CERN)05/11/2019, 11:45Track 4 – Data Organisation, Management and AccessOral
The anticipated increase in storage requirements for the forthcoming HL-LHC data rates is not matched by a corresponding increase in budget. This results in a shortfall in available resources if the computing models remain unchanged. Therefore, effort is being invested in looking for new and innovative ways to optimise the current infrastructure, thus minimising the impact of this...
Go to contribution page -
Julia Andreeva (CERN)05/11/2019, 11:45
The WLCG has over 170 sites and the number is expected to grow in the coming years. In order to support WLCG workloads, each site has to deploy and maintain several middleware packages and grid services. Setting up, maintaining and supporting the grid infrastructure at a site can be a demanding activity and often requires significant assistance from WLCG experts. Modern configuration...
Go to contribution page -
Federico Carminati (CERN)05/11/2019, 12:00
Detailed simulation is one of the most expensive tasks, in terms of time and computing resources for High Energy Physics experiments. The need for simulated events will dramatically increase for the next generation experiments, like the ones that will run at the High Luminosity LHC. The computing model must evolve and in this context, alternative fast simulation solutions are being studied....
Go to contribution page -
Patrick Fuhrmann05/11/2019, 12:00Track 4 – Data Organisation, Management and AccessOral
Optimization of computing resources, in particular storage, the costliest one, is a tremendous challenge for the High Luminosity LHC (HL-LHC) program. Several avenues are being investigated to address the storage issues foreseen for HL-LHC. Our expectation is that savings can be achieved in two primary areas: optimization of the use of various storage types and reduction of the required...
Go to contribution page -
Gvozden Neskovic (Johann-Wolfgang-Goethe Univ. (DE))05/11/2019, 12:00
ALICE (A Large Ion Collider Experiment), one of the large LHC experiments, is currently undergoing a significant upgrade. Increase in data rates planned for LHC Run3, together with triggerless continuous readout operation, requires a new type of networking and data processing infrastructure.
The new ALICE O2 (online-offline) computing facility consists of two types of nodes: First Level...
Go to contribution page -
Julien Leduc (CERN)05/11/2019, 12:00Track 8 – Collaboration, Education, Training and OutreachOral
Fluidic Data is a floor-to-ceiling installation spanning the four levels of the CERN Data Centre stairwell. It utilizes the interplay of water and light to visualize the magnitude and flow of information coming from the four major LHC experiments. The installation consists of an array of transparent hoses that house colored fluid, symbolizing the data of each experiment, surrounded by a...
Go to contribution page -
Maiken Pedersen (University of Oslo (NO))05/11/2019, 12:00
The WLCG is today comprised of a range of different types of resources such as cloud centers, large and small HPC centers, volunteer computing as well as the traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the Nordugrid ARC CE, which is non-intrusive and lightweight, originally developed to cater for...
Go to contribution page -
Miha Muskinja (Lawrence Berkeley National Lab. (US))05/11/2019, 12:00
The ATLAS experiment has successfully integrated High-Performance Computing (HPC) resources in its production system. Unlike the current generation of HPC systems, and the LHC computing grid, the next generation of supercomputers is expected to be extremely heterogeneous in nature: different systems will have radically different architectures, and most of them will provide partitions optimized...
Go to contribution page -
Dr Ziyan Deng (Institute of High Energy Physics)05/11/2019, 12:00
The JUNO (Jiangmen Underground Neutrino Observatory) experiment is a multi-purpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. It is composed of a 20kton liquid scintillator central detector equipped with 18000 20’’ PMTs and 25000 3’’ PMTs, a water pool with 2000 20’’ PMTs, and a top tracker. Monte-Carlo simulation is a...
Go to contribution page -
Eduardo Rodrigues (University of Cincinnati (US))05/11/2019, 12:00
Scikit-HEP is a community-driven and community-oriented project with the goal of providing an ecosystem for particle physics data analysis in Python. Scikit-HEP is a toolset of approximately twenty packages and a few “affiliated” packages. It expands the typical Python data analysis tools for particle physicists. Each package focuses on a particular topic, and interacts with other packages in...
Go to contribution page -
Robert William Gardner Jr (University of Chicago (US))05/11/2019, 12:00
One of the most costly factors in providing a global computing infrastructure such as the WLCG is the human effort in deployment, integration, and operation of the distributed services supporting collaborative computing, data sharing and delivery, and analysis of extreme scale datasets. Furthermore, the time required to roll out global software updates, introduce new service components, or...
Go to contribution page -
Nick Smith (Fermi National Accelerator Lab. (US))05/11/2019, 12:15
The COFFEA Framework provides a new approach to HEP analysis, via columnar operations, that improves time-to-insight, scalability, portability, and reproducibility of analysis. It is implemented with the Python programming language and commodity big data technologies such as Apache Spark and NoSQL databases. To achieve this suite of improvements across many use cases, COFFEA takes a factorized...
Go to contribution page -
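A key ingredient of columnar analysis is storing variable-length per-event data (e.g. jets) as one flat array plus offsets, rather than as nested Python objects. A toy stdlib sketch of this jagged layout (not COFFEA's actual implementation; the numbers are invented):

```python
# Toy jagged ("awkward") layout: all jet pTs live in one flat array,
# and event i owns the slice pts[offsets[i]:offsets[i+1]].
from array import array

pts     = array("d", [45.0, 30.2, 12.5, 80.1, 22.0])  # all jets, flattened
offsets = array("i", [0, 3, 3, 5])  # event 0: 3 jets, event 1: none, event 2: 2

def leading_jet_pt(pts, offsets, default=0.0):
    """Columnar-style per-event reduction: max pT, or default if no jets."""
    out = []
    for i in range(len(offsets) - 1):
        chunk = pts[offsets[i]:offsets[i + 1]]
        out.append(max(chunk) if len(chunk) else default)
    return out

leading = leading_jet_pt(pts, offsets)
```

Operating on the flat array keeps the data contiguous in memory, which is what makes vectorized backends (NumPy, Spark) effective on the same layout.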
Dr Venkitesh Ayyar (Lawrence Berkeley National Lab)05/11/2019, 12:15
High Energy Physics experiments face unique challenges when running their computation on High Performance Computing (HPC) resources. The LZ dark matter detection experiment has two data centers, one each in the US and UK, to perform computations. Its US data center uses the HPC resources at NERSC.
In this talk, I will describe the current computational workflow of the LZ experiment, detailing...
Go to contribution page -
Flavio Pisani (Universita e INFN, Bologna (IT))05/11/2019, 12:15
The LHCb experiment will be upgraded in 2021 and a new trigger-less readout system will be implemented. In the upgraded system, both event building (EB) and event selection will be performed in software for every collision produced in every bunch-crossing of the LHC. In order to transport the full data rate of 32 Tb/s we will use state of the art off-the-shelf network technologies, e.g....
Go to contribution page -
Mr Greg Corbett (STFC)05/11/2019, 12:15Track 8 – Collaboration, Education, Training and OutreachOral
Public Engagement (PE) with science should be more than “fun” for the staff involved. PE should be a strategic aim of any publicly funded science organisation, to ensure the public develops an understanding and appreciation of their work and its benefits to everyday life, and to ensure the next generation is enthused to take up STEM careers. Most scientific organisations do have aims to do this,...
Go to contribution page -
Benedikt Riedel (University of Wisconsin-Madison), Benedikt Riedel (University of Wisconsin-Madison)05/11/2019, 12:15
Many of the challenges faced by the LHC experiments (aggregation of distributed computing resources, management of data across multiple storage facilities, integration of experiment-specific workflow management tools across multiple grid services) are similarly experienced by "midscale" high energy physics and astrophysics experiments, particularly as their data set volumes are increasing at...
Go to contribution page -
Simon Voigt Nesbo (Western Norway University of Applied Sciences (NO))05/11/2019, 12:15
The ALICE experiment at the CERN LHC will feature several upgrades for run 3, one of which is a new inner tracking system (ITS). The ITS upgrade is currently under development and commissioning. The new ITS will be installed during the ongoing long shutdown 2.
The specification for the ITS upgrade calls for event rates of up to 100 kHz for Pb-Pb, and 400 kHz pp, which is two orders of...
Go to contribution page -
Tony Cass (CERN)05/11/2019, 12:15
We describe the software tool-set being implemented in the context of the NOTED [1] project to better exploit WAN bandwidth for Rucio and FTS data transfers, how it has been developed, and the results obtained.
The first component is a generic data-transfer broker that interfaces with Rucio and FTS. It identifies data transfers for which network reconfiguration is both possible and...
Go to contribution page -
Xavier Espinal (CERN)05/11/2019, 12:15Track 4 – Data Organisation, Management and AccessOral
HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economical. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our...
Go to contribution page -
Hannes Sakulin (CERN)05/11/2019, 14:00
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its optimal physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track trigger processor capable of providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end...
Go to contribution page -
Kevin Pedro (Fermi National Accelerator Lab. (US))05/11/2019, 14:00
Large-scale particle physics experiments face challenging demands for high-throughput computing resources both now and in the future. New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle...
Go to contribution page -
Dominik Muller (CERN)05/11/2019, 14:00
The increase in luminosity foreseen in the future years of operation of the Large Hadron Collider (LHC) creates new challenges in computing efficiency for all participating experiments. To cope with these challenges and in preparation for the third running period of the LHC, the LHCb collaboration is currently overhauling its software framework to better utilise modern computing architectures. This...
Go to contribution page -
Sanjay Bloor (Imperial College London)05/11/2019, 14:00
GUM is a new feature of the GAMBIT global fitting software framework, which provides a direct interface between Lagrangian level tools and GAMBIT. GUM automatically writes GAMBIT routines to compute observables and likelihoods for physics beyond the Standard Model. I will describe the structure of GUM, the tools (within GAMBIT) it is able to create interfaces to, and the observables it is able...
Go to contribution page -
Graeme A Stewart (CERN)05/11/2019, 14:00
High-Energy Physics has evolved a rich set of software packages that need to work harmoniously to carry out the key software tasks needed by experiments. The problem of consistently building and deploying these software packages as a coherent software stack is one that is shared across the HEP community. To that end the HEP Software Foundation Packaging Working Group has worked to identify...
Go to contribution page -
Tibor Simko (CERN)05/11/2019, 14:00Track 8 – Collaboration, Education, Training and OutreachOral
In this paper we present the latest CMS open data release published on the CERN Open Data portal. Samples of raw data, collision and simulated datasets were released together with detailed information about the data provenance. The data production chain covers the necessary compute environments, the configuration files and the computational procedures used in each data production...
Go to contribution page -
Alessandro Di Girolamo (CERN)05/11/2019, 14:00
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets require a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial amount of interventions...
Go to contribution page -
Daniele Spiga (Universita e INFN, Perugia (IT))05/11/2019, 14:00Track 4 – Data Organisation, Management and AccessOral
The envisaged Storage and Compute needs for the HL-LHC will be a factor up to 10 above what can be achieved by the evolution of current technology within a flat budget. The WLCG community is studying possible technical solutions to evolve the current computing in order to cope with the requirements; one of the main focuses is resource optimization, with the ultimate objective of improving...
Go to contribution page -
Oliver Gutsche (Fermi National Accelerator Lab. (US))05/11/2019, 14:00
The WLCG Web Proxy Auto Discovery (WPAD) service provides a convenient mechanism for jobs running anywhere on the WLCG to dynamically discover web proxy cache servers that are nearby. The web proxy caches are general purpose for a number of different http applications, but different applications have different usage characteristics and not all proxy caches are engineered to work with the...
Go to contribution page -
Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))05/11/2019, 14:15
Within the FAIR Phase-0 program the fast algorithms of the FLES (First-Level Event Selection) package developed for the CBM experiment (FAIR/GSI, Germany) are adapted for online and offline processing in the STAR experiment (BNL, USA). Using the same algorithms creates a bridge between online and offline. This makes it possible to combine online and offline resources for data...
Go to contribution page -
Federica Legger (Universita e INFN Torino (IT))05/11/2019, 14:15
The CMS computing infrastructure is composed of several subsystems that accomplish complex tasks such as workload and data management, transfers, and submission of user and centrally managed production requests. Until recently, most subsystems were monitored through custom tools and web applications, and logging information was scattered across several sources and typically accessible only by experts....
Go to contribution page -
Jose Flix Molina (Centro de Investigaciones Energéti cas Medioambientales y Tecno)05/11/2019, 14:15Track 4 – Data Organisation, Management and AccessOral
Computing needs projections for the HL-LHC era (2026+), following the current computing models, indicate that much larger resource increases would be required than those that technology evolution at a constant budget could bring. Since worldwide budget for computing is not expected to increase, many research activities have emerged to improve the performance of the LHC processing software...
Go to contribution page -
Dr William Detmold (MIT)05/11/2019, 14:15
I will discuss recent advances in lattice QCD from the physics and computational points of view that have enabled a number of basic properties and interactions of light nuclei to be determined directly from QCD. These calculations offer the prospect of providing nuclear matrix inputs necessary for a range of intensity frontier experiments (DUNE, mu2e) and dark matter direct-detection experiments...
Go to contribution page -
Diana Scannicchio (University of California Irvine (US))05/11/2019, 14:15
Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~4000 servers processing the data read out from ~100 million detector channels through multiple trigger levels. The capability to monitor the ongoing data...
Go to contribution page -
Miha Muskinja (Lawrence Berkeley National Lab. (US))05/11/2019, 14:15
Software improvements in the ATLAS Geant4-based simulation are critical to keep up with the evolving hardware and increasing luminosity. Geant4 simulation currently accounts for about 50% of CPU consumption in ATLAS and it is expected to remain the leading CPU load during Run 4 (HL-LHC upgrade) with an approximately 25% share in the most optimistic computing model. The ATLAS experiment...
Go to contribution page -
Prof. Benda Xu (Tsinghua University)05/11/2019, 14:15
In big physics experiments, as simulation, reconstruction and analysis become more sophisticated, scientific reproducibility is not a trivial task. Software is one of the biggest challenges. Modularity is common sense in software engineering, facilitating the quality and reusability of code. However, it often introduces nested dependencies that are not obvious for physicists to work with. Package...
Go to contribution page -
David Rohr (CERN)05/11/2019, 14:15
In LHC Run 3, ALICE will increase the data taking rate significantly to 50 kHz continuous read out of minimum bias Pb-Pb collisions. The reconstruction strategy of the online offline computing upgrade foresees a first synchronous online reconstruction stage during data taking enabling detector calibration, and a posterior calibrated asynchronous reconstruction stage. The significant increase...
Go to contribution page -
Stefan Wunsch (KIT - Karlsruhe Institute of Technology (DE))05/11/2019, 14:15Track 8 – Collaboration, Education, Training and OutreachOral
The CMS collaboration at the CERN LHC has made more than one petabyte of open data available to the public, including large parts of the data which formed the basis for the discovery of the Higgs boson in 2012. Apart from their scientific value, these data can be used not only for education and outreach, but also for open benchmarks of analysis software. However, in their original format, the...
Go to contribution page -
Ryan Bignell (University of Adelaide)05/11/2019, 14:30
Background field methods offer an approach through which fundamental non-perturbative hadronic properties can be studied. Lattice QCD is the only ab initio method with which Quantum Chromodynamics can be studied at low energies; it involves numerically calculating expectation values in the path integral formalism. This requires substantial investment in high-performance supercomputing...
Go to contribution page -
Shigeki Misawa (BNL)05/11/2019, 14:30
Computational science, data management and analysis have been key factors in the success of Brookhaven Lab's scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS-II), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, Lattice Quantum Chromodynamics (LQCD) and Materials Science as...
Go to contribution page -
Marilena Bandieramonte (University of Pittsburgh (US))05/11/2019, 14:30
HEP experiments simulate the detector response by accessing all needed data and services within their own software frameworks. However, decoupling the simulation process from the experiment infrastructure can be useful for a number of tasks, amongst them the debugging of new features, or the validation of multithreaded vs sequential simulation code and the optimization of algorithms for HPCs....
Go to contribution page -
Thomas Beermann (University of Innsbruck (AT))05/11/2019, 14:30
For the last 10 years, the ATLAS Distributed Computing project has based its monitoring infrastructure on a set of custom designed dashboards provided by CERN-IT. This system functioned very well for LHC Runs 1 and 2, but its maintenance has progressively become more difficult and the conditions for Run 3, starting in 2021, will be even more demanding; hence a more standard code base and more...
Go to contribution page -
Matevz Tadel (Univ. of California San Diego (US))05/11/2019, 14:30Track 4 – Data Organisation, Management and AccessOral
The University of California system has excellent networking between all of its campuses as well as a number of other Universities in CA, including Caltech, most of them being connected at 100 Gbps. UCSD and Caltech have thus joined their disk systems into a single logical xcache system, with worker nodes from both sites accessing data from disks at either site. This setup has been in place...
Go to contribution page -
Jakub Moscicki (CERN)05/11/2019, 14:30Track 8 – Collaboration, Education, Training and OutreachOral
Open Data Science Mesh (CS3MESH4EOSC) is a newly funded project to create a new generation, interoperable federation of data and higher-level services to enable friction-free collaboration between European researchers.
This new EU-funded project brings together 12 partners from the CS3 community (Cloud Synchronization and Sharing Services). The consortium partners include CERN, Danish...
Go to contribution page -
Giulia Tuci (Universita & INFN Pisa (IT))05/11/2019, 14:30
In 2021 the LHCb experiment will be upgraded, and the DAQ system will be based on full reconstruction of events, at the full LHC crossing rate. This requires an entirely new system, capable of reading out, building and reconstructing events at an average rate of 30 MHz. In facing this challenge, the system could take advantage of a fast pre-processing of data on dedicated FPGAs. We present the...
Go to contribution page -
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))05/11/2019, 14:30
Development of scientific software has always presented challenges to its practitioners, among other things due to its inherently collaborative nature. Software systems often consist of up to several dozen closely-related packages developed within a particular experiment or related ecosystem, with up to a couple of hundred externally-sourced dependencies. Making improvements to one such...
Go to contribution page -
Antonio Boveia (Ohio State University)05/11/2019, 14:30
With the unprecedented high luminosity delivered by the LHC, detector readout and data storage limitations severely limit searches for processes with high-rate backgrounds. An example of such searches is those for mediators of the interactions between the Standard Model and dark matter, decaying to hadronic jets. Traditional signatures and data taking techniques limit these searches to masses...
Go to contribution page -
Farid Ould-Saada (University of Oslo (NO))05/11/2019, 14:45Track 8 – Collaboration, Education, Training and OutreachOral
The ATLAS Collaboration is releasing a new set of recorded and simulated data samples at a centre-of-mass energy of 13 TeV. This new dataset was designed after an in-depth review of the usage of the previous release of samples at 8 TeV. That review showed that capacity-building is one of the most important and abundant uses of public ATLAS samples. To fulfil the requirements of the community...
Go to contribution page -
Lukas Layer (Universita e sezione INFN di Napoli (IT))05/11/2019, 14:45
The central Monte-Carlo production of the CMS experiment utilizes the WLCG infrastructure and manages thousands of tasks daily, each comprising up to thousands of jobs. The distributed computing system is bound to sustain a certain rate of failures of various types, which are currently handled by computing operators a posteriori. Within the context of computing operations and operational intelligence, we...
Go to contribution page -
Igor Sfiligoi (UCSD)05/11/2019, 14:45Track 4 – Data Organisation, Management and AccessOral
A general problem faced by opportunistic users of grid computing is that delivering opportunistic cycles is simpler than delivering opportunistic storage. In this project we show how we integrated Xrootd caches placed on the internet backbone to simulate a content delivery network for general science workflows. We will show that for some workflows on LIGO, DUNE, and...
Go to contribution page -
Dr Maxim Voronkov (CSIRO)05/11/2019, 14:45
The Australian Square Kilometre Array Pathfinder (ASKAP) is a
new generation 36-antenna 36-beam interferometer capable of producing
about 2.5 Gb/s of raw data. The data are streamed from the observatory
directly to the dedicated small cluster at the Pawsey HPC centre. The ingest
pipeline is distributed real-time software which runs on this cluster
and prepares the data for further...
Go to contribution page -
Alex Westin (The University of Adelaide)05/11/2019, 14:45
There exists a long standing discrepancy of around 3.5 sigma between experimental measurements and standard model calculations of the magnetic moment of the muon. Current experiments aim to reduce the experimental uncertainty by a factor of 4, and Standard Model calculations must also be improved by a similar order. The largest uncertainty in the Standard Model calculation comes from the QCD...
Go to contribution page -
Olof Barring (CERN)05/11/2019, 14:45
Since 2013 CERN’s local data centre combined with a colocation infrastructure at the Wigner data centre in Budapest have been hosting the compute and storage capacity for WLCG Tier-0. In this paper we will describe how we try to optimize and improve the operation of our local data centre to meet the anticipated increment of the physics compute and storage requirements for Run3, taking into...
Go to contribution page -
Masahiko Saito (University of Tokyo (JP))05/11/2019, 14:45
The pattern recognition of the trajectories of charged particles is at the core of the computing challenge for the HL-LHC, which is currently the center of a very active area of research. There has also been rapid progress in the development of quantum computers, including the D-Wave quantum annealer. In this talk we will discuss results from our project investigating the use of annealing...
Go to contribution page -
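Annealers such as the D-Wave machine minimize a QUBO (quadratic unconstrained binary optimization) objective E(x) = Σᵢⱼ Qᵢⱼ xᵢ xⱼ over binary variables; how track segments map onto Q is the subject of the talk and is not reproduced here. A brute-force toy instance, with an invented Q, shows the objective being minimized:

```python
# Toy QUBO instance solved by exhaustive search, for illustration only.
# An annealer minimizes E(x) = sum_ij Q[i][j] * x_i * x_j over binary x.
from itertools import product

def qubo_energy(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_minimum(Q):
    """Enumerate all 2^n binary assignments and return the lowest-energy one."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Invented coefficients: reward selecting variable 0 (diagonal -1.0),
# penalize selecting 0 and 1 together (off-diagonal +2.0).
Q = [[-1.0, 2.0, 0.0],
     [ 0.0, -0.5, 0.0],
     [ 0.0, 0.0, 1.0]]
best = brute_force_minimum(Q)
```

Real tracking instances are far too large for enumeration, which is precisely why annealing hardware is being investigated.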
Chris Burr (CERN)05/11/2019, 14:45
The conda package manager is widely used in both commercial and academic high-performance computing across a wide range of fields. In 2016 conda-forge was founded as a community-driven package repository which allows packaging efforts to be shared across communities. This is especially important with the challenges faced when packaging modern software with complex dependency chains or...
Go to contribution page -
Norman Anthony Graf (SLAC National Accelerator Laboratory (US))05/11/2019, 14:45
The Heavy Photon Search (HPS) is an experiment at the Thomas Jefferson National Accelerator Facility designed to search for a hidden sector photon (A’) in fixed-target electro-production. It uses a silicon micro-strip tracking and vertexing detector inside a dipole magnet to measure charged particle trajectories and a fast lead-tungstate crystal calorimeter just downstream of the magnet to...
Go to contribution page -
Leonid Serkin (INFN Gruppo Collegato di Udine and ICTP Trieste (IT))05/11/2019, 15:00Track 8 – Collaboration, Education, Training and OutreachOral
Perform data analysis and visualisation on your own computer? Yes, you can! Commodity computers are now very powerful in comparison to only a few years ago. On top of that, the performance of today's software and data development techniques facilitates complex computation with fewer resources. Cloud computing is not always the solution, and reliability or even privacy is regularly a concern....
Go to contribution page -
Tamas Bato (CERN)05/11/2019, 15:00
The number of BYOD devices continuously grows at CERN. Additionally, it is desirable to move from a centrally managed model to a distributed model where users are responsible for their own devices. Following this strategy, new tools have to be provided to distribute and - in the case of licensed software - also track applications used by CERN users. The available open source and commercial solutions...
Go to contribution page -
Santiago Gonzalez De La Hoz (Univ. of Valencia and CSIC (ES))05/11/2019, 15:00
The ATLAS Spanish Tier-1 and Tier-2s have more than 15 years of experience in the deployment and development of LHC computing components and their successful operation. The sites are already actively participating in, and even coordinating, emerging R&D computing activities developing the new computing models needed in the LHC Run3 and HL-LHC periods.
In this contribution, we present details...
Go to contribution page -
Tomas Howson (University of Adelaide)05/11/2019, 15:00
Computing the gluon component of momentum in the nucleon is a difficult and computationally expensive problem, as the matrix element involves a quark-line-disconnected gluon operator which suffers from ultra-violet fluctuations. But also necessary for a successful determination is the non-perturbative renormalisation of this operator. We investigate this renormalisation here by direct...
Go to contribution page -
Tadashi Murakami (KEK)05/11/2019, 15:00
Relational databases (RDB) and their management systems (RDBMS) offer many advantages, such as a rich query language, maintainability gained from a concrete schema, and robust and reasonable backup solutions such as differential backup. Recently, some RDBMSs have added column-store features that offer data compression with good performance in terms of both data size and query speed....
Go to contribution page -
Stephane Jezequel (LAPP-Annecy CNRS/USMB (FR))05/11/2019, 15:00Track 4 – Data Organisation, Management and AccessOral
With the increase of storage needs at the HL-LHC horizon, data management and access will be very challenging for this critical service. The evaluation of possible solutions within the DOMA, DOMA-FR (IN2P3 project contribution to DOMA) and ESCAPE initiatives is a major activity to select the most optimal ones from the experiment and site points of view. The LAPP and LPSC teams have put...
Go to contribution page -
Martin Soderen (CERN)05/11/2019, 15:00
The transverse feedback system in the LHC provides turn-by-turn, bunch-by-bunch measurements of the beam transverse position with sub-micrometre resolution from 16 pickups. This results in 16 high-bandwidth data streams (1 Gbit/s each), which are sent through a digital signal processing chain to calculate the correction kicks that are then applied to the beam. These data streams contain...
Go to contribution page -
James Kahn (Karlsruhe Institute of Technology (KIT))05/11/2019, 15:00
The large volume of data expected to be produced by the Belle II experiment presents the opportunity for studies of rare, previously inaccessible processes. Investigating such rare processes in a high data volume environment necessitates a correspondingly high volume of Monte Carlo simulations to prepare analyses and gain a deep understanding of the physics processes contributing to each...
Go to contribution page -
Jean-Roch Vlimant (California Institute of Technology (US))05/11/2019, 15:00
At the HL-LHC, ATLAS and CMS will see proton bunch collisions with track multiplicities of up to 10,000 charged tracks per event. Algorithms need to be developed to harness the increased combinatorial complexity. To engage the Computer Science community to contribute new ideas, we have organized a Tracking Machine Learning challenge (TrackML). Participants are provided events with 100k 3D...
Go to contribution page -
Markus Schulz (CERN)05/11/2019, 15:15Track 4 – Data Organisation, Management and AccessOral
Data movement between sites, replication and storage are very expensive operations, in terms of time and resources, for the LHC collaborations, and are expected to be even more so in the future. In this work we derived usage patterns based on traces and logs from the data and workflow management systems of CMS and ATLAS, and simulated the impact of different caching and data lifecycle...
Go to contribution page -
Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE))05/11/2019, 15:15
DESY is one of the largest accelerator laboratories in Europe, developing and operating state-of-the-art accelerators used to perform fundamental science in the areas of high-energy physics, photon science and accelerator development.
While for decades high energy physics has been the most prominent user of the DESY compute, storage and network infrastructure, various scientific...
Go to contribution page -
Sebastian Bukowiec (CERN)05/11/2019, 15:15
CERN Windows server infrastructure consists of about 900 servers. The management and maintenance is often a challenging task as the data to be monitored is disparate and has to be collected from various sources. Currently, alarms are collected from the Microsoft System Center Operation Manager (SCOM) and many administrative actions are triggered through e-mails sent by various systems or...
Go to contribution page -
Tibor Simko (CERN)05/11/2019, 15:15Track 8 – Collaboration, Education, Training and OutreachOral
We describe the dataset of very rare events recorded by the OPERA experiment. Those events represent tracks of particles associated with tau neutrinos emerged from a pure muon neutrino beam, due to neutrino oscillations. The OPERA detector, located in the underground Gran Sasso Laboratory, consisted of an emulsion/lead target with an average mass of about 1.2 kt, complemented by the electronic...
Go to contribution page -
Andrei Gheata (CERN)05/11/2019, 15:15
The future High Energy Physics experiments, based on upgraded or next-generation particle accelerators with higher luminosity and energy, will put more stringent demands on the simulation as far as precision and speed are concerned. In particular, matching the statistical uncertainties of the collected experimental data will require the simulation toolkits to be more CPU-efficient, while...
Go to contribution page -
David Lawrence (Jefferson Lab)05/11/2019, 15:15
Development of the second generation JANA2 multi-threaded event processing framework is ongoing through an LDRD initiative grant at Jefferson Lab. The framework is designed to take full advantage of all cores on modern many-core compute nodes. JANA2 efficiently handles both traditional hardware triggered event data and streaming data in online triggerless environments. Development is being...
Go to contribution page -
Peter Love (Lancaster University (GB))05/11/2019, 15:15
In this work we review existing monitoring outputs and recommend some novel alternative approaches to improve the comprehension of large volumes of operations data that are produced in distributed computing. Current monitoring output is dominated by the pervasive use of time-series histograms showing the evolution of various metrics. These can quickly overwhelm or confuse the viewer due to the...
Go to contribution page -
Marilena Bandieramonte (University of Pittsburgh (US))05/11/2019, 15:15
Estimations of the CPU resources that will be needed to produce simulated data for the future runs of the ATLAS experiment at the LHC indicate a compelling need to speed-up the process to reduce the computational time required. While different fast simulation projects are ongoing (FastCaloSim, FastChain, etc.), full Geant4 based simulation will still be heavily used and is expected to consume...
Go to contribution page -
Adam Virgili05/11/2019, 15:15
The origin of the low-lying nature of the Roper resonance has been the subject of significant interest for many years, including several investigations using lattice QCD. It has been claimed that chiral symmetry plays an important role in our understanding of this resonance. We present results from our systematic examination of the potential role of chiral symmetry in the low-lying nucleon...
Go to contribution page -
Sebastian Bukowiec (CERN)05/11/2019, 15:30
To accomplish its mission, the European Organization for Nuclear Research (CERN, Switzerland) is committed to the continuous development of its personnel through a systematic and sustained learning culture that aims at keeping the knowledge and competences of the personnel in line with the evolving needs of the Organisation.
With this goal in mind, CERN supports learning in its broadest sense and...
Go to contribution page -
Stephan Hageboeck (CERN)05/11/2019, 15:30
RooFit and RooStats, the toolkits for statistical modelling in ROOT, are used in most searches and measurements at the Large Hadron Collider, as well as B factories. The large datasets to be collected in Run 3 will enable measurements with higher precision, but will require faster data processing to keep fitting times stable.
In this talk, a redesign of RooFit's internal dataflow will be...
Go to contribution page -
Emma Torro Pastor (University of Washington (US))05/11/2019, 15:30
Based on work in the ROOTLINQ project, we’ve re-written a functional declarative analysis language in Python. With a declarative language, the physicist specifies what they want to do with the data, rather than how they want to do it. Then the system translates the intent into actions. Using declarative languages would have numerous benefits for the LHC community, ranging from analysis...
Go to contribution page -
Ben Couturier (CERN)05/11/2019, 15:30
The Gitlab continuous integration system (http://gitlab.com) is an invaluable tool for software developers to test and validate their software. LHCb analysts have also been using it to validate physics software tools and data analysis scripts, but this usage faced issues differing from standard software testing, as it requires a significant amount of CPU resources and credentials to access...
Go to contribution page -
Dr Linghui Wu (Institute of High Energy Physics)05/11/2019, 15:30
A present-day detection system for charged tracks in particle physics experiments is typically composed of two or more types of detectors, making global track finding with these sub-detectors an important topic. This contribution describes a global track-finding algorithm based on the Hough Transform for a detection system consisting of a Cylindrical Gas Electron Multiplier (CGEM) and a Drift...
Go to contribution page -
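The Hough Transform used above maps each hit onto a sinusoid in (theta, r) parameter space, so that hits lying on a common straight line vote for the same accumulator bin. A minimal pure-Python sketch with a hypothetical point set and binning (illustrative only, not the CGEM + drift chamber implementation described in the contribution):

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, r_step=1.0):
    """Each point (x, y) votes for every discretised line
    r = x*cos(theta) + y*sin(theta) passing through it."""
    acc = Counter()
    for i in range(n_theta):
        theta = math.pi * i / n_theta
        c, s = math.cos(theta), math.sin(theta)
        for x, y in points:
            # accumulate a vote in the (theta index, r bin) cell
            acc[(i, round((x * c + y * s) / r_step))] += 1
    return acc

# Ten collinear hits on y = 2x + 1 plus two noise hits.
hits = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 17.0), (8.0, 2.0)]
best_bin, votes = max(hough_lines(hits).items(), key=lambda kv: kv[1])
```

The winning bin collects the votes of all ten collinear hits, while the noise hits scatter across parameter space; in a tracking context the accumulator maxima seed the track candidates.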
Igor Sfiligoi (UCSD)05/11/2019, 15:30
The Open Science Grid (OSG) provides a common service for resource providers and scientific institutions, and supports sciences such as High Energy Physics, Structural Biology, and other community sciences. As scientific frontiers expand, so does the need for resources to analyze new data. For example, high energy physics (LHC) sciences foresee an exponential growth in the amount of data...
Go to contribution page -
Tomas Lindén (Helsinki Institute of Physics (FI))05/11/2019, 15:30
The ARM platform extends from the mobile phone area to development board computers and servers. The importance of the ARM platform may increase in the future if new, more powerful (server) boards are released. For this reason CMSSW has previously been ported to ARM in earlier work.
The CMS software is deployed using CVMFS and the jobs are run inside Singularity containers....
Go to contribution page -
Shawn Mc Kee (University of Michigan (US))05/11/2019, 15:30
We will present techniques developed in collaboration with the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) and SLATE (NSF Award #1724821) for orchestrating software defined network slices with a goal of building reproducible and reliable computer networks for large data collaborations. With this project we have explored methods of utilizing passive and active measurements to...
Go to contribution page -
Cornelius Grunwald (Technische Universitaet Dortmund (DE))05/11/2019, 15:30
BAT.jl, the Julia version of the Bayesian Analysis Toolkit, is a software package which is designed to help solve statistical problems encountered in Bayesian inference. Typical examples are the extraction of the values of the free parameters of a model, the comparison of different models in the light of a given data set, and the test of the validity of a model to represent the data set at...
Go to contribution page -
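Bayesian parameter extraction of the kind BAT.jl automates typically rests on Markov Chain Monte Carlo sampling of the posterior. As a language-agnostic illustration, here is a toy Metropolis sampler in Python (a sketch of the general technique, not BAT.jl's actual algorithms or API):

```python
import math
import random

def metropolis(logpdf, x0, n, step=1.0, seed=42):
    """Minimal Metropolis sampler: propose Gaussian steps and accept
    with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        # accept the proposal with the Metropolis probability
        if rng.random() < math.exp(min(0.0, logpdf(xp) - logpdf(x))):
            x = xp
        samples.append(x)
    return samples

# Toy posterior ~ N(3, 1): after burn-in, the chain mean approaches 3.
draws = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0, n=20000)
burned = draws[5000:]
mean = sum(burned) / len(burned)
```

Real toolkits add adaptive proposals, multiple chains and convergence diagnostics on top of this basic accept/reject loop.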
Chiara Ilaria Rovelli (Sapienza Universita e INFN, Roma I (IT))05/11/2019, 15:30
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high resolution electron and photon energy measurements. Excellent energy resolution is crucial for studies of Higgs boson decays with electromagnetic particles in the final state, as well as searches for very high mass resonances decaying to energetic photons or electrons. The CMS electromagnetic...
Go to contribution page -
Andreas Joachim Peters (CERN)05/11/2019, 15:30
During the last few years, the EOS distributed storage system at CERN has seen a steady increase in use, both in terms of traffic volume and in the sheer amount of stored data.
This has brought the unwelcome side effect of stretching the EOS software stack to its design constraints, resulting in frequent user-facing issues and occasional downtime of critical services.
In this paper, we...
Go to contribution page -
Placido Fernandez Declara (University Carlos III (ES))05/11/2019, 15:30
The LHCb detector will be upgraded in 2021, when the hardware-level trigger will be replaced by a High Level Trigger 1 software trigger that needs to process the full 30 MHz data-collision rate. As part of the efforts to create a GPU High Level Trigger 1, tracking algorithms need to be optimized for SIMD architectures in order to achieve high throughput. We present a SPMD (Single Program,...
Go to contribution page -
Hannah Short (CERN)05/11/2019, 15:30
CERN is launching the Science Gateway, a new scientific education and outreach centre targeting the general public of all ages. Construction is planned to start in 2020 and to be completed in 2022. In addition to Physics exhibits, the Science Gateway will include immersive, hands-on activities that explore Computer Science and Technology. This poster will present the methodology used to...
Go to contribution page -
Eric Vaandering (Fermi National Accelerator Lab. (US))05/11/2019, 15:30
Conditions databases are an important class of database applications in which the database is used to record the state of a set of quantities as a function of observation time. Conditions databases are used in High Energy Physics to record the state of the detector apparatus during data taking, and then to use the data during the event reconstruction and analysis phases. At FNAL, we...
Go to contribution page -
Mr Ming Tang (Institute of High Energy Physics, Chinese Academy of Sciences)05/11/2019, 15:30
The China Spallation Neutron Source (CSNS) is a large science facility, publicly available to researchers from all over the world. The data platform of CSNS aims to provide diverse data and computing support; the design philosophy behind it is data safety, big-data sharing, and user convenience.
In order to manage scientific data, a metadata catalogue based on ICAT is built to manage full...
Go to contribution page -
Jerome Odier (LPSC/CNRS (Grenoble, FR))05/11/2019, 15:30
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging, benefiting from about 20 years of feedback in the LHC context. This poster describes the design principles of the Metadata Querying Language (MQL) implemented in AMI, a metadata-oriented domain-specific language that allows querying databases without knowing the relations between tables....
Go to contribution page -
Valdas Rapsevicius (Vilnius University (LT))05/11/2019, 15:30
During the third long shutdown of the CERN Large Hadron Collider, the CMS Detector will undergo a major upgrade to prepare for Phase-2 of the CMS physics program, starting around 2026. Upgrade projects will replace or improve detector systems to provide the necessary physics performance under the challenging conditions of high luminosity at the HL-LHC. Among other upgrades, the new CMS...
Go to contribution page -
Max Fischer (Karlsruhe Institute of Technology)05/11/2019, 15:30
With the evolution of the WLCG towards opportunistic resource usage and cross-site data access, new challenges for data analysis have emerged in recent years. To enable performant data access without relying on static data locality, distributed caching aims at providing data locality dynamically. Recent work successfully employs various approaches for effective and coherent caching, from...
Go to contribution page -
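A common building block in such caching layers is least-recently-used (LRU) eviction: keep the working set local, drop what has not been touched for the longest time. A toy sketch (illustrative only, not the cache policy of the cited projects):

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache: evicts the least recently
    accessed entry once capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()
    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)     # mark as most recently used
        return self._store[key]
    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the LRU entry

cache = LRUCache(2)
cache.put("/data/a.root", b"...")
cache.put("/data/b.root", b"...")
cache.get("/data/a.root")            # touch a: b becomes LRU
cache.put("/data/c.root", b"...")    # exceeds capacity, evicts b
```

Coherent distributed caching additionally has to coordinate such per-node decisions across sites, which is where the approaches discussed in the contribution come in.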
Ioana Ifrim (CERN)05/11/2019, 15:30
To address the increase in computational costs and speed requirements for simulation related to the higher luminosity and energy of future accelerators, a number of Fast Simulation tools based on Deep Learning (DL) procedures have been developed. We discuss the features and implementation of an end-to-end framework which integrates DL simulation methods with an existing Full Simulations...
Go to contribution page -
Siarhei Padolski (BNL)05/11/2019, 15:30
The development of the Interactive Visual Explorer (InVEx), a visual analytics tool for ATLAS computing metadata, includes research of various approaches for data handling on both the server and client sides. InVEx is implemented as a web-based application which aims at enhancing the analytical and visualization capabilities of the existing monitoring tools and facilitating the process of...
Go to contribution page -
Martin Adam (Acad. of Sciences of the Czech Rep. (CZ))05/11/2019, 15:30
With the explosion of the number of distributed applications, a new dynamic server environment emerged grouping servers into clusters, utilization of which depends on the current demand for the application. To provide reliable and smooth services it is crucial to detect and fix possible erratic behavior of individual servers in these clusters. Use of standard techniques for this purpose...
Go to contribution page -
Laura Sargsyan (A.Alikhanyan National Science Laboratory (AM))05/11/2019, 15:30
Large experiments in high energy physics require efficient and scalable monitoring solutions to digest data of the detector control system. Plotting multiple graphs in the slow control system and extracting historical data for long time periods are resource intensive tasks. The proposed solution leverages the new virtualization, data analytics and visualization technologies such as InfluxDB...
Go to contribution page -
Marc Dobson (CERN)05/11/2019, 15:30
System on Chip (SoC) devices have become popular for custom electronics HEP boards. Advantages include the tight integration of FPGA logic with CPU, and the option for having relatively powerful CPUs, with the potential of running a fully fledged operating system.
In the CMS trigger and data acquisition system, there are already a small number of back-end electronics boards with Xilinx Zynq...
Go to contribution page -
Maxim Potekhin (Brookhaven National Laboratory (US))05/11/2019, 15:30
The DUNE Collaboration has successfully implemented and currently operates an experimental program based at CERN which includes a beam test and an extended cosmic ray run of two large-scale prototypes of the DUNE Far Detector. The volume of data already collected by the protoDUNE-SP (the single-phase Liquid Argon TPC prototype) amounts to approximately 3 PB and the sustained rate of data sent...
Go to contribution page -
Pablo Saiz (CERN)05/11/2019, 15:30
The Load Balance Service at CERN handles more than 400 aliases, distributed over more than 2000 nodes. After being in production for more than thirteen years, it has been going through a major redesign over the last two years. Last year, the server part was reimplemented in golang, taking advantage of the concurrency features offered by the language to improve the scaling of the system. This...
Go to contribution page -
Mr YI WANG05/11/2019, 15:30
Apache Spark is a splendid framework for big data analysis nowadays. A Spark application is divided into jobs, each triggered by an RDD action; the DAGScheduler then divides each job into stages, and each stage into tasks, a task being the unit of work within a stage corresponding to one RDD partition.
The task is the smallest unit of work when Spark...
Go to contribution page -
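The job/stage/task decomposition described above can be sketched schematically: the DAGScheduler cuts the RDD lineage into stages at wide (shuffle) dependencies, and each stage then fans out into one task per partition. A simplified toy model (a conceptual sketch, not Spark's actual scheduler code):

```python
def split_into_stages(lineage):
    """lineage: list of (op_name, is_wide) pairs from source to action.
    Returns stages: lists of ops, cut before each wide (shuffle)
    dependency, mirroring in simplified form what the DAGScheduler does."""
    stages, current = [], []
    for op, is_wide in lineage:
        if is_wide and current:
            stages.append(current)   # shuffle boundary: close the stage
            current = []
        current.append(op)
    if current:
        stages.append(current)
    return stages

# A hypothetical job: reduceByKey is the only wide dependency.
job = [("textFile", False), ("map", False), ("reduceByKey", True),
       ("filter", False), ("collect", False)]
stages = split_into_stages(job)
# stages == [['textFile', 'map'], ['reduceByKey', 'filter', 'collect']]
```

Within each resulting stage, the narrow operations can be pipelined over a single partition without moving data between executors.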
Shigeki Misawa (BNL)05/11/2019, 15:30
The BNL SDCC (Scientific Data and Computing Center) recently enabled a centralized identity management solution, with SSO authentication across multiple IT systems and organizations, including federated login access via CILogon InCommon. This is combined with MFA/DUO to meet security standards for the various applications & services, such as JupyterHub/Invenio, provided to the...
Go to contribution page -
Fedor Ratnikov (Yandex School of Data Analysis (RU))05/11/2019, 15:30
The goal of obtaining more precise physics results in current collider experiments drives the plans to significantly increase the instantaneous luminosity collected by the experiments. The increasing complexity of the events due to the resulting increased pileup requires new approaches to triggering, reconstruction, analysis, and event simulation. The last task leads to a critical problem:...
Go to contribution page -
Leo Piilonen (Virginia Tech)05/11/2019, 15:30
The second-generation Belle II experiment at the SuperKEKB colliding-beam accelerator in Japan searches for new-physics signatures and studies the behaviour of heavy quarks and leptons produced in electron-positron collisions. The KLM (K-long and Muon) subsystem of Belle II identifies long-lived neutral kaons via hadronic-shower byproducts and muons via their undeflected penetration through...
Go to contribution page -
Riccardo Farinelli (Universita e INFN, Ferrara (IT))05/11/2019, 15:30
Triple-GEM detectors are gaseous devices used in high energy physics to measure the path of the particles which cross them. The characterisation of triple-GEM detectors and the estimation of their performance in real data-taking require a complete comprehension of the mechanisms which transform the passage of a particle through the detector into electric signals, and dedicated Monte Carlo...
Go to contribution page -
Oleg Samoylov (Joint Institute for Nuclear Research (RU))05/11/2019, 15:30
NOvA is a long-baseline neutrino experiment aiming to study the neutrino oscillation phenomenon in the muon neutrino beam from the NuMI complex at Fermilab (USA). Two identical detectors have been built to measure the initial neutrino flux spectra at the near site and the oscillated spectra at an 810 km distance, which significantly reduces many systematic uncertainties. To improve electron neutrino and...
Go to contribution page -
Andrey Baginyan (Joint Institute for Nuclear Research (RU))05/11/2019, 15:30
This paper presents the network architecture of the Tier-1 data center at JINR, which uses the modern multichannel data transfer protocol TRILL. The experimental data obtained guide our further study of the nature of traffic distribution in redundant topologies. Several questions arise: how are data packets distributed over four (or more) equivalent routes? What happens when the...
Go to contribution page -
Dr Jerome Odier (LPSC/CNRS (Grenoble, FR))05/11/2019, 15:30
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging. Benefiting from about 20 years of feedback in the LHC context, the second major version was released in 2018. This poster describes how to install and administrate AMI version 2. A particular focus is given to the registration of existing databases in AMI, the adding of additional...
Go to contribution page -
Andrew Lahiff05/11/2019, 15:30
J. Hollocombe [1], Eurofusion WPISA CPT, Eurofusion WPCD
[1] UKAEA, Culham Science Centre, OX14 3DB
The ITER Data Model has been created to allow a common data representation to be used by codes simulating ITER-relevant physics. A suite of tools called the Integrated Modelling & Analysis Suite (IMAS) has been created to leverage this data structure. As part of an exercise to...
Go to contribution page -
Prof. Qingmin Zhang (Xi'an Jiaotong University)05/11/2019, 15:30
The Jiangmen Underground Neutrino Observatory (JUNO) is designed primarily to measure the neutrino mass hierarchy. The JUNO central detector (CD) would be the world's largest liquid scintillator (LS) detector, with an unprecedented energy resolution of $3\%/\sqrt{E(\mathrm{MeV})}$ and a superior energy nonlinearity better than 1%. A calibration complex, including the Cable Loop System (CLS), Guide Tube...
Go to contribution page -
Weidong Li (IHEP, Beijing)05/11/2019, 15:30
The JUNO (Jiangmen Underground Neutrino Observatory) is designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. The JUNO central detector is a 20 kt spherical volume of liquid scintillator (LS) with 35m diameter instrumented with 18,000 20-inch photomultiplier tubes (PMTs). Neutrinos are captured by protons of the target via the inverse beta decay...
Go to contribution page -
Mr Aiqiang ZHANG05/11/2019, 15:30
In modern physics experiments, data analysis needs considerable computing capacity. The computing resources of a single site are often limited, while distributed computing is often inexpensive and flexible. While several large-scale grid solutions exist, for example DiRAC (Distributed Infrastructure with Remote Agent Control), there are few schemes devoted to solving the problem at small scale. For the...
Go to contribution page -
Marco Zanetti (Universita e INFN, Padova (IT))05/11/2019, 15:30
This work addresses key technological challenges in the preparation of data pipelines for machine learning and deep learning at scales of interest for HEP. A novel prototype to improve the event filtering system at the LHC experiments, based on a classifier trained using deep neural networks, has recently been proposed by T. Nguyen et al. (https://arxiv.org/abs/1807.00083). This presentation covers...
Go to contribution page -
Marco Clemencic (CERN)05/11/2019, 15:30
The LHCb software stack has to run in very different computing environments: the trigger farm at CERN, the grid, shared clusters, software developers' desktops... The old model assumes the availability of CVMFS and relies on custom scripts (a.k.a. LbScripts) to configure the environment to build and run the software. It lacks flexibility and does not allow, for example, running in...
Go to contribution page -
Karl Ehataht (National Institute of Chemical Physics and Biophysics (EE))05/11/2019, 15:30
The CMS Collaboration has recently commissioned a new compact data format, named NANOAOD, reducing the per-event compressed size to about 1-2 kB. This is achieved by retaining only high level information on physics objects, and aims at supporting a considerable fraction of CMS physics analyses with a ~20x reduction in disk storage needs. NANOAOD also facilitates the dissemination of analysis...
Go to contribution page -
Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))05/11/2019, 15:30
The Czech Tier-2 center hosted and operated by the Institute of Physics of the Czech Academy of Sciences significantly upgraded its external network connection in 2019. The older edge router, a Cisco 6509, provided several 10 Gbps connections via a 10 Gigabit Ethernet fiber module, of which 2 ports were used for the external LHCONE connection, 1 port for generic internet traffic and 1 port to reach other...
Go to contribution page -
Ivana HRIVNACOVA (Institut de Physique Nucléaire (IPNO), Université Paris-Sud, CNRS-IN2P3, Orsay, France )05/11/2019, 15:30
Virtual Monte Carlo (VMC) provides a unified interface to different detector simulation transport engines such as GEANT3 and Geant4. Recently, all VMC packages (the VMC core library, also included in ROOT, Geant3 VMC and Geant4 VMC) have been distributed via the VMC Project GitHub organization. In addition to these VMC-related packages, the VMC project also includes the Virtual Geometry Model...
Go to contribution page -
Andre Scaffidi05/11/2019, 15:30
The Weakly Interacting Massive Particle or "WIMP" has been a widely studied solution to the dark matter problem. A plausible scenario is that DM is not made up of a single WIMP species, but that it has a multi-component nature. In this talk I give an overview of recently published work in which we studied direct detection signals in the presence of multi-component WIMP-like DM. I will give an...
Go to contribution page -
Gerardo Ganis (CERN)05/11/2019, 15:30
The building, testing and deployment of coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC experiments, the CERN Beams department and data analysis services such as SWAN. These software stacks comprise a large number of packages (Monte Carlo generators, machine learning tools, Python modules, HEP specific...
Go to contribution page -
Yanjia Xiao05/11/2019, 15:30
Partial wave analysis is an important tool in hadron physics. Large data sets from experiments at the high-precision frontier require high computational power. To utilize GPU clusters and the resources of supercomputers with various types of accelerators, we implemented a software framework for partial wave analysis using OpenACC, OpenAccPWA. OpenAccPWA provides convenient approaches for...
Go to contribution page -
Kenneth Richard Herner (Fermi National Accelerator Laboratory (US))05/11/2019, 15:30
The Production Operations Management System (POMS) is a set of software tools which allows production teams and analysis groups across multiple Fermilab experiments to launch, modify and monitor large-scale campaigns of related Monte Carlo or data processing jobs.
POMS provides a web service interface that enables automated job submission on distributed resources according to customers'...
Go to contribution page -
Matthew Feickert (Southern Methodist University (US))05/11/2019, 15:30
The HistFactory p.d.f. template [CERN-OPEN-2012-016] is per se independent of its implementation in ROOT, and it is useful to be able to run statistical analysis outside of the ROOT, RooFit, RooStats framework. pyhf is a pure-Python implementation of that statistical model for multi-bin histogram-based analysis, and its interval estimation is...
Go to contribution page -
luca dell'Agnello (INFN)05/11/2019, 15:30
For the last few years, the INFN-CNAF team has been working on the Long Term Data Preservation (LTDP) project for the CDF experiment, active at Fermilab from 1990 to 2011.
The main aims of the project are to protect the data of CDF Run-2 (4 PB), collected between 2001 and 2011 and already stored on CNAF tapes, and to ensure the availability of and access to the analysis facility for those data...
Go to contribution page -
Dr Yan Huang (Tsinghua University)05/11/2019, 15:30
As an important detector at the Nuclotron-based Ion Collider fAcility (NICA) accelerator complex at JINR, the MultiPurpose Detector (MPD) is proposed to investigate hot and dense baryonic matter in heavy-ion collisions over a wide range of atomic masses, from Au+Au collisions at a centre-of-mass energy of $\sqrt{s_{NN}}=11\,\mathrm{GeV}$ (for $\mathrm{Au}^{79+}$) to proton-proton collisions with...
-
Andrew Davis (United Kingdom Atomic Energy Authority)05/11/2019, 15:30
The SAGE2 project is a collaboration between industry, data centres and research institutes demonstrating an exascale-ready system based on layered hierarchical storage and a novel object storage technology. The development of this system is based on a significant co-design exercise between all partners, with the research institutes having well established needs for exascale computing...
-
Jean-Roch Vlimant (California Institute of Technology (US))05/11/2019, 15:30
The Caltech team, in collaboration with network, computer science, and HEP partners at DOE laboratories and universities, is building smart network services ("The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project") to accelerate scientific discovery.
The overarching goal of SENSE is to enable National Labs and universities to request and...
-
Rene Caspart (KIT - Karlsruhe Institute of Technology (DE))05/11/2019, 15:30
Current and future end-user analyses and workflows in High Energy Physics demand the processing of growing amounts of data. This plays a major role in the context of the High-Luminosity LHC. To keep processing times and turn-around cycles as low as possible, analysis clusters optimized with respect to these demands can be used. Since hyperconverged...
-
Mr Josh Charvetto (University of Adelaide)05/11/2019, 15:30
Lattice quantum chromodynamics (QCD) has provided great insight into the nature of empty space, but quantum chromodynamics alone does not describe the vacuum in its entirety. Recent developments have introduced Quantum Electrodynamic (QED) effects directly into the generation of lattice gauge field configurations. Using lattice ensembles incorporating fully dynamical QCD and QED effects we are...
-
Alessandra Forti (University of Manchester (GB))05/11/2019, 15:30
This talk describes the deployment of ATLAS offline software in containers for use in production workflows such as simulation and reconstruction. For this purpose we are using Docker and Singularity, which are both lightweight virtualization technologies that can encapsulate software packages inside complete file systems. The deployment of offline releases via containers removes the...
-
Wassef Karimeh (Université Saint-Joseph de Beyrouth (LB))05/11/2019, 15:30
Detector Control Systems (DCS) for modern High-Energy Physics (HEP) experiments are based on complex distributed (and often redundant) hardware and software implementing real-time operational procedures meant to ensure that the detector is always in a "safe" state, while at the same time maximizing the live time of the detector during beam collisions. Display, archival and often analysis of...
-
Julien Leduc (CERN)05/11/2019, 15:30
The CERN storage architecture is evolving to address the challenges of Run 3 and Run 4. CTA and EOS integration requires parallel development of features in both software products, which need to be synchronized and systematically tested on a specific distributed development infrastructure for each commit in the code base.
CTA Continuous Integration development initially started as a place to run functional system...
-
Antonio Boveia (Ohio State University)05/11/2019, 15:30
In the High Luminosity LHC, planned to start with Run4 in 2026, the ATLAS experiment will be equipped with the Hardware Track Trigger (HTT) system, a dedicated hardware system able to reconstruct tracks in the silicon detectors with short latency. This HTT will be composed of about 700 ATCA boards, based on new technologies available on the market, like high speed links and powerful FPGAs, as...
-
Adam Wegrzynek (CERN)05/11/2019, 15:30
ALICE (A Large Ion Collider Experiment) is currently undergoing a major upgrade of the detector, read-out and computing system for LHC Run 3. A new facility called O2 (Online-Offline) will perform data acquisition and event processing.
To efficiently operate the experiment and the O2 facility a new observability system has been developed. It will provide a complete overview of the overall... -
Patrick Fuhrmann05/11/2019, 15:30
The eXtreme DataCloud (XDC) project aims at developing data management services capable of coping with very large data resources, allowing future e-infrastructures to address the needs of next-generation extreme-scale scientific experiments. Started in November 2017, XDC combines the expertise of 8 large European research organisations; the project aims at developing scalable...
Go to contribution page -
Haykuhi Musheghyan (Georg August Universitaet Goettingen (DE))05/11/2019, 15:30
Years of data growth within HEP experiments requires a wider use of storage systems at WLCG tiered centres. It also increases the complexity of those storage systems, expanding their hardware components and thereby further complicating existing software products. Coping with such systems is a non-trivial task that requires highly qualified specialists.
Storing petabytes of...
-
Mr Raul Jimenez Estupinan (ETH Zurich (CH))05/11/2019, 15:30
The Electromagnetic Calorimeter (ECAL) is one of the sub-detectors of the Compact Muon Solenoid (CMS), a general-purpose particle detector at the CERN Large Hadron Collider (LHC). The CMS ECAL Detector Control System (DCS) and the CMS ECAL Safety System (ESS) have supported the detector operations and ensured the detector's integrity since the CMS commissioning phase, more than 10 years ago....
-
Ivana Hrivnacova (Institut de Physique Nucléaire (IPNO), Université Paris-Sud, CNRS-IN2P3, Orsay, France)05/11/2019, 15:30
The Virtual Geometry Model (VGM) is a geometry conversion tool, currently providing conversion between Geant4 and ROOT TGeo geometry models. Its design allows the inclusion of another geometry model by implementing a single sub-module instead of writing bilateral converters for all already supported models.
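The single-sub-module design described above can be pictured as hub-and-spoke conversion through a common intermediate representation: N geometry models then need N adapters rather than N(N-1) bilateral converters. The sketch below illustrates the idea with invented names (CommonSolid, Geant4Adapter, TGeoAdapter); it is not the actual VGM API.

```python
# Hub-and-spoke conversion sketch: each geometry model supplies one
# importer/exporter pair to a shared intermediate representation.
# All class and parameter names here are hypothetical, not the VGM API.

class CommonSolid:
    """Intermediate representation shared by all converters."""
    def __init__(self, name, shape, parameters):
        self.name, self.shape, self.parameters = name, shape, parameters

class Geant4Adapter:
    def to_common(self, g4_box):          # toy input: (name, dx, dy, dz)
        name, dx, dy, dz = g4_box
        return CommonSolid(name, "box", {"dx": dx, "dy": dy, "dz": dz})
    def from_common(self, solid):
        p = solid.parameters
        return (solid.name, p["dx"], p["dy"], p["dz"])

class TGeoAdapter:
    def to_common(self, tgeo_box):        # toy input: dict of half-lengths
        return CommonSolid(tgeo_box["name"], "box",
                           {k: tgeo_box[k] for k in ("dx", "dy", "dz")})
    def from_common(self, solid):
        return {"name": solid.name, **solid.parameters}

def convert(source_adapter, target_adapter, obj):
    """Any-to-any conversion goes through the intermediate model."""
    return target_adapter.from_common(source_adapter.to_common(obj))
```

Adding a further geometry model only requires one new adapter class implementing `to_common`/`from_common`, which is the point of the design.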
The VGM was last presented at CHEP in 2008 and since then it has been under continuous... -
Iouri Smirnov (Northern Illinois University (US))05/11/2019, 15:30
The Tile Calorimeter (TileCal) is a crucial part of the ATLAS detector which, jointly with other calorimeters, reconstructs hadrons, jets, tau-particles and missing transverse energy, and assists in muon identification. It is constructed of alternating iron absorber layers and active scintillating tiles and covers the region $|\eta| < 1.7$. The TileCal is regularly monitored by several different systems,...
-
Andrea Ceccanti (Universita e INFN, Bologna (IT))05/11/2019, 15:30
Support for token-based authentication and authorization has emerged in recent years as a key requirement for storage elements powering WLCG data centers. Authorization tokens represent a flexible and viable alternative to other credential delegation schemes (e.g. proxy certificates) and authorization mechanisms (VOMS) historically used in WLCG, as documented in more detail in other submitted...
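The general mechanics of such authorization tokens can be sketched with a minimal HMAC-signed bearer token carrying subject, scope and expiry claims. The layout and names below are illustrative assumptions only, not the WLCG token profile or a SciTokens library API.

```python
# Minimal sketch of an HMAC-signed bearer token with an expiry claim,
# in the spirit of JWT-style authorization tokens. Illustration only.
import base64, hashlib, hmac, json, time

SECRET = b"issuer-signing-key"  # hypothetical issuer secret

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue(subject: str, scope: str, lifetime_s: int = 600) -> str:
    claims = {"sub": subject, "scope": scope,
              "exp": int(time.time()) + lifetime_s}
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify(token: str) -> dict:
    payload, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, payload.encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)          # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

Unlike a proxy certificate, such a token delegates only the capabilities named in its scope claim, and short lifetimes bound the damage of a leaked credential.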
-
Fedor Ratnikov (Yandex School of Data Analysis (RU))05/11/2019, 15:30
Designing new experiments, as well as upgrading ongoing ones, is a continuous process in experimental high energy physics. Frontier R&D is used to squeeze out the maximum physics performance using cutting-edge detector technologies.
Evaluating the physics performance of a particular configuration includes sketching this configuration in Geant, simulating typical signals and...
-
Derek Leinweber (CSSM, University of Adelaide)05/11/2019, 15:30
The gluon field configurations that form the foundation of every lattice QCD calculation contain a rich diversity of emergent nonperturbative phenomena. Visualisation of these phenomena creates an intuitive understanding of their structure and dynamics. This presentation will illustrate recent advances in observing the chromo-electromagnetic vector fields, their energy and topological charge...
-
Frank-Dieter Gaede (Deutsches Elektronen-Synchrotron (DE))05/11/2019, 16:30
Detector description is an essential component in simulation, reconstruction and analysis of data resulting from particle collisions in high energy physics experiments, and for detector development studies for future experiments. Current detector descriptions of running experiments are mostly experiment-specific implementations. DD4hep is an open source toolkit created in 2012 to serve...
-
Matevz Tadel (Univ. of California San Diego (US))05/11/2019, 16:30Track 8 – Collaboration, Education, Training and OutreachOral
The CMS experiment supports and contributes to the development of the next-generation Event Visualization Environment (EVE) of the ROOT framework, with the intention of superseding Fireworks, the physics-analysis-oriented event display of CMS that was developed ten years ago and used for Run 1 and Run 2, with a new server-web client implementation. This paper presents progress in development...
-
Matthias Richter (University of Oslo (NO))05/11/2019, 16:30
The ALICE experiment at the Large Hadron Collider (LHC) at CERN will deploy a combined online-offline facility for detector readout and reconstruction, as well as data compression. This system is designed to allow the inspection of all collisions at rates of 50 kHz in the case of Pb-Pb and 400 kHz for pp collisions in order to give access to rare physics signals. The input data rate of up to...
-
Mr Brij Kishor Jashal (Tata Inst. of Fundamental Research (IN))05/11/2019, 16:30
The detection of long-lived particles (LLPs) in high energy experiments is key both for the study of the Standard Model (SM) of particle physics and for searches for new physics beyond it.
Many interesting decay modes involve strange particles with large lifetimes, such as $K_S$ or $\Lambda^0$. Exotic LLPs are also predicted in many new theoretical models. The selection and reconstruction of LLPs produced... -
Xin Zhao (Brookhaven National Laboratory (US))05/11/2019, 16:30Track 4 – Data Organisation, Management and AccessOral
The ATLAS Experiment is storing detector and simulation data in raw and derived data formats across more than 150 Grid sites world-wide: currently, in total about 200 PB of disk storage and 250 PB of tape storage is used.
Data have different access characteristics due to various computational workflows. Raw data is only processed about once per year, whereas derived data are accessed... -
Robert William Gardner Jr (University of Chicago (US))05/11/2019, 16:30
The Scalable Systems Laboratory (SSL), part of the IRIS-HEP Software Institute, provides Institute participants and HEP software developers generally with a means to transition their R&D from conceptual toys to testbeds to production-scale prototypes. The SSL enables tooling, infrastructure, and services supporting innovation of novel analysis and data architectures, development of software...
-
Fernando Harald Barreiro Megino (University of Texas at Arlington)05/11/2019, 16:30
In recent years containerization has revolutionized cloud environments, providing a secure, lightweight, standardized way to package and execute software. Solutions such as Kubernetes enable orchestration of containers in a cluster, including for the purpose of job scheduling. Kubernetes is becoming a de facto standard, available at all major cloud computing providers, and is gaining increased...
-
Adam Leinweber (University of Adelaide)05/11/2019, 16:30
Recent searches for supersymmetric particles at the Large Hadron Collider have been unsuccessful in detecting any BSM physics. This is partially because the exact masses of supersymmetric particles are not known, and as such, searching for them is very difficult. The method broadly used in searching for new physics requires one to optimise on the signal being searched for, potentially...
-
Andrea Ceccanti (Universita e INFN, Bologna (IT))05/11/2019, 16:30
The WLCG Authorisation Working Group formed in July 2017 with the objective to understand and meet the needs of a future-looking Authentication and Authorisation Infrastructure (AAI) for WLCG experiments. Much has changed since the early 2000s when X.509 certificates presented the most suitable choice for authorisation within the grid; progress in token based authorisation and identity...
-
Dr Carl Vuosalo (University of Wisconsin Madison (US))05/11/2019, 16:45
DD4hep is an open-source software toolkit that provides comprehensive and complete generic detector descriptions for high energy physics (HEP) detectors. The Compact Muon Solenoid collaboration (CMS) has recently evaluated and adopted DD4hep to replace its custom detector description software. CMS has demanding software requirements as a very large, long-running experiment that must support...
-
Alessandra Forti (University of Manchester (GB))05/11/2019, 16:45
We will describe the deployment of containers on the ATLAS infrastructure. There are several ways to run containers: as part of the batch system infrastructure, as part of the pilot, or called directly. ATLAS is exploiting them depending on which facility its jobs are sent to. Containers have been a vital part of the HPC infrastructure for the past year, and using fat images - images...
-
Rosen Matev (CERN)05/11/2019, 16:45
High energy physics experiments traditionally have large software codebases primarily written in C++ and the LHCb physics software stack is no exception. Compiling from scratch can easily take 5 hours or more for the full stack even on an 8-core VM. In a development workflow, incremental builds are often not sufficient for quick compilation on a typical PC (e.g. due to changes to headers or...
-
David Crooks (Science and Technology Facilities Council STFC (GB))05/11/2019, 16:45
The information security threats currently faced by WLCG sites are both sophisticated and highly profitable for the actors involved. Evidence suggests that targeted organisations take on average more than six months to detect a cyber attack, with more sophisticated attacks being more likely to pass undetected.
An important way to mount an appropriate response is through the use of a...
-
Mr Grzegorz Jereczek (Intel Corporation)05/11/2019, 16:45
Data acquisition (DAQ) systems are a key component for successful data taking in any experiment. The DAQ is a complex distributed computing system that coordinates all operations, from the selection of interesting events to their delivery to storage elements.
For the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), the experiments at CERN need to meet challenging requirements to record... -
Thomas Britton (JLab)05/11/2019, 16:45
Charged particle tracking represents the largest consumer of CPU resources in high data volume Nuclear Physics experiments. An effort is underway to develop ML networks that will reduce the resources required for charged particle tracking. Tracking in NP experiments presents some unique challenges compared to HEP. In particular, track finding typically represents only a small fraction of the...
-
Robert John Bainbridge (Imperial College (GB))05/11/2019, 16:45
Measurements involving rare B meson decays by the LHCb and Belle Collaborations have revealed a number of anomalous results. Collectively, these anomalies are generating significant interest in the community, as they may be interpreted as a first sign of new physics in the lepton flavour sector. In 2018, the CMS experiment recorded an unprecedented data set containing the unbiased decays of 10...
-
Jiahui Wei (Universite de Geneve (CH))05/11/2019, 16:45Track 8 – Collaboration, Education, Training and OutreachOral
The Alpha Magnetic Spectrometer (AMS) is a particle physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. Aiming to explore a new frontier in particle physics, the AMS collaboration seeks to store, manage and present its research results as well as the details of the detector and the...
-
Eric Vaandering (Fermi National Accelerator Lab. (US))05/11/2019, 16:45Track 4 – Data Organisation, Management and AccessOral
Following a thorough review in 2018, the CMS experiment at the CERN LHC decided to adopt Rucio as its new data management system. Rucio is emerging as a community software project and will replace an aging CMS-only system before the start-up of LHC Run 3 in 2021. Rucio was chosen after an evaluation determined that Rucio could meet the technical and scale needs of CMS. The data management...
-
David Schultz (University of Wisconsin-Madison)05/11/2019, 17:00
As part of a modernization effort at IceCube, a new unified authorization system has been developed to allow access to multiple applications with a single credential. Based on SciTokens and JWT, it allows for the delegation of specific accesses to cluster jobs or third party applications on behalf of the user. Designed with security in mind, it includes short expiration times on access...
-
Barbara Martelli (INFN CNAF)05/11/2019, 17:00
Software defect prediction aims at detecting the parts of software likely to contain faulty modules - e.g. in terms of complexity, maintainability, and other software characteristics - and that therefore require particular attention. Machine Learning (ML) has proven to be of great value in a variety of Software Engineering tasks, such as software defect prediction, also in the presence of...
-
Steven Goldfarb (University of Melbourne (AU))05/11/2019, 17:00Track 8 – Collaboration, Education, Training and OutreachOral
Four years after deployment of our public web site using the Drupal 7 content management system, the ATLAS Education and Outreach group is in the process of migrating to the new CERN Drupal 8 infrastructure. We present lessons learned from the development, usage and evolution of the original web site, and how the choice of technology helped to shape and reinforce our communication strategy. We...
-
Marco Zanetti (Universita e INFN, Padova (IT))05/11/2019, 17:00
The need for unbiased analysis of large complex datasets, especially those collected by the LHC experiments, is pushing for data acquisition systems where predefined online trigger selections are limited, if not suppressed altogether. Not only does this pose tremendous challenges for the hardware components, it also calls for new strategies for the online software infrastructures. Open source...
-
Jakob Blomer (CERN)05/11/2019, 17:00
The ROOT TTree data format encodes hundreds of petabytes of High Energy and Nuclear Physics events. Its columnar layout drives rapid analyses, as only those parts (branches) that are really used in a given analysis need to be read from storage. Its unique feature is the seamless C++ integration, which allows users to directly store their event classes without explicitly defining data schemas....
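The benefit of the columnar layout described above can be sketched with a toy dict-of-lists event store, where reading one branch leaves every other column untouched. This mimics only the access pattern, not TTree's actual on-disk format.

```python
# Toy columnar event store: one array per branch, so an analysis that
# touches only some branches reads only those columns. Names are
# illustrative; this is not the TTree implementation.

class ColumnarEvents:
    def __init__(self):
        self.branches = {}           # branch name -> list of values
        self.reads = 0               # count column reads, for illustration

    def fill(self, **event):
        for name, value in event.items():
            self.branches.setdefault(name, []).append(value)

    def array(self, branch):
        """Read a single branch; other columns stay untouched."""
        self.reads += 1
        return self.branches[branch]

events = ColumnarEvents()
events.fill(pt=41.2, eta=0.3, phi=1.1, mass=0.106)
events.fill(pt=27.8, eta=-1.2, phi=2.9, mass=0.106)

# An analysis using only 'pt' performs a single column read,
# regardless of how many branches each event carries.
high_pt = [v for v in events.array("pt") if v > 30.0]
```

In a row-wise layout the same selection would deserialize every field of every event; with four branches per event, the columnar read here touches a quarter of the data.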
-
Mr Dennis Noll (RWTH Aachen University (DE))05/11/2019, 17:00
For physics analyses with identical final state objects, e.g. jets, the correct sorting of the objects at the input of the analysis can lead to a considerable performance increase.
We present a new approach in which a sorting network is placed upstream of a classification network. The sorting network combines the whole event information and explicitly pre-sorts the inputs of the analysis....
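The idea of a sorting stage upstream of the classifier can be sketched as a fixed sequence of pairwise compare-exchange operations (an odd-even transposition sorting network) that orders identical final-state objects, e.g. jets by pT, into canonical input slots. In the approach described the exchanges would be made differentiable; this illustrative sketch uses plain min/max.

```python
# Odd-even transposition sorting network: a fixed, data-independent
# sequence of compare-exchange stages, sketched here with hard min/max
# (a differentiable variant would replace these with soft exchanges).

def sorting_network(values):
    """Sort a fixed-length list in descending order."""
    v = list(values)
    n = len(v)
    for stage in range(n):                 # n stages suffice for n inputs
        start = stage % 2                  # alternate odd/even pairs
        for i in range(start, n - 1, 2):
            hi, lo = max(v[i], v[i + 1]), min(v[i], v[i + 1])
            v[i], v[i + 1] = hi, lo        # keep the larger value first
    return v

# Jets arrive in arbitrary order; the network fixes a canonical ordering
# so the downstream classifier sees consistent input slots.
jet_pts = [35.1, 92.4, 18.7, 54.0]
sorted_pts = sorting_network(jet_pts)      # descending pT order
```

Because the network's structure is fixed and independent of the data, it can sit directly in front of a classification network as a preprocessing layer.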
-
Enrico Bocchi (CERN)05/11/2019, 17:00
Container technologies are rapidly becoming the preferred way by developers and system administrators to package applications, distribute software and run services. A crucial role is played by container orchestration software such as Kubernetes, which is also the natural fit for microservice-based architectures. Complex services are re-thought as a collection of fundamental applications (each...
-
Chiara Ilaria Rovelli (Sapienza Universita e INFN, Roma I (IT))05/11/2019, 17:00
The CMS experiment at the LHC features the largest crystal electromagnetic calorimeter (ECAL) ever built. It consists of about 75000 scintillating lead tungstate crystals. The ECAL crystal energy response is fundamental for both triggering purposes and offline analysis. Due to the challenging LHC radiation environment, the response of both crystals and photodetectors to particles evolves with...
-
Frank Berghaus (University of Victoria (CA))05/11/2019, 17:00Track 4 – Data Organisation, Management and AccessOral
The Dynafed data federator is designed to present a dynamic and unified view of a distributed file repository. We describe our use of Dynafed to construct a production-ready WLCG storage element (SE) using existing Grid storage endpoints as well as object storage. In particular, Dynafed is used as the primary SE for the Canadian distributed computing cloud systems. Specifically, we have been...
-
Walter Lampl (University of Arizona (US))05/11/2019, 17:15
The ART system is designed to run test jobs on the Grid after an ATLAS nightly release has been built. The choice was taken to exploit the Grid as a backend as it offers a huge resource pool, suitable for a deep set of integration tests, and running the tests could be delegated to the highly scalable ATLAS production system (PanDA). The challenge of enabling the Grid as a test environment is...
-
Oksana Shadura (University of Nebraska Lincoln (US))05/11/2019, 17:15
In mathematics and computer algebra, automatic differentiation (AD) is a set of techniques to evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin,...
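The decomposition into elementary operations that AD exploits can be illustrated with a minimal forward-mode implementation based on dual numbers, where each value carries its derivative alongside:

```python
# Minimal forward-mode automatic differentiation via dual numbers:
# every elementary operation propagates (value, derivative) pairs.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,            # product rule
                    self.deriv * other.value + self.value * other.deriv)

    __radd__, __rmul__ = __add__, __mul__

def sin(x):
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# d/dx [x*sin(x) + 3x] at x = 2, exact to machine precision:
x = Dual(2.0, 1.0)            # seed the input derivative with 1
y = x * sin(x) + 3 * x        # y.deriv now holds sin(2) + 2*cos(2) + 3
```

Unlike numerical differentiation there is no truncation error, and unlike symbolic differentiation the expression never blows up: each elementary step applies its local rule once.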
-
Sebastian Lopienski (CERN)05/11/2019, 17:15
In this talk, the speaker will present the computer security risk landscape as faced by academia and research organisations; look into various motivations behind attacks; and explore how these threats can be addressed. This will be followed by details of several types of vulnerabilities and incidents recently affecting the HEP community, and lessons learnt. The talk will conclude with an outlook...
-
Shiyuan Fu05/11/2019, 17:15Track 4 – Data Organisation, Management and AccessOral
As a data-intensive computing application, high-energy physics requires storage and computing for large amounts of data at the PB level. Performance demands and data access imbalances in mass storage systems are increasing. Specifically, on one hand, traditional cheap disk storage systems have been unable to handle services with high IOPS demands. On the other hand, a survey found that only a very...
-
Matthias Komm (Imperial College (GB))05/11/2019, 17:15
We present preliminary studies of a deep neural network (DNN) "tagger" that is trained to identify the presence of displaced jets arising from the decays of new long-lived particle (LLP) states in data recorded by the CMS detector at the CERN LHC. Particle-level candidates, as well as secondary vertex information, are refined through the use of convolutional neural networks (CNNs) before being...
-
Teng Jian Khoo (Universite de Geneve (CH))05/11/2019, 17:15
Athena is the software framework used in the ATLAS experiment throughout the data processing path, from the software trigger system through offline event reconstruction to physics analysis. The shift from high-power single-core CPUs to multi-core systems in the computing market means that the throughput capabilities of the framework have become limited by the available memory per process. For...
-
Luis Fernandez Alvarez (CERN)05/11/2019, 17:15
The CERN Batch Service faces many challenges in getting ready for the computing demands of future LHC runs. These challenges require that we look at all potential resources, assess how efficiently we use them, and explore alternatives to exploit opportunistic resources both in our infrastructure and outside the CERN computing centre.
Several projects, like...
-
Othmane Bouhali (Texas A & M University (US))05/11/2019, 17:15Track 8 – Collaboration, Education, Training and OutreachOral
Various studies have shown the crucial and strong impact that undergraduate research has on the learning outcome of students and its role in clarifying their career path. It was proven that promoting research at the undergraduate level is essential to build an enriched learning environment for students [1,2]. Students get exposed to the research world at an early stage, acquire new... -
Natalya Melnikova (Budker Institute of Nuclear Physics (RU))05/11/2019, 17:15
The SND is a non-magnetic detector deployed at the VEPP-2000 $e^+e^-$ collider (BINP, Novosibirsk) for hadronic cross-section measurements in the centre-of-mass energy region below 2 GeV. An important part of the detector is a three-layer hodoscopic electromagnetic calorimeter (EMC) based on NaI(Tl) counters. Until the recent EMC spectrometric channel upgrade, only the energy deposition...
-
Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))05/11/2019, 17:30
High Performance Computing (HPC) facilities provide vast computational power and storage, but generally work on fixed environments designed to address the most common software needs locally, making it challenging for users to bring their own software. To overcome this issue, most HPC facilities have added support for HPC friendly container technologies such as Shifter, Singularity, or...
-
David Crooks (Science and Technology Facilities Council STFC (GB))05/11/2019, 17:30
IRIS is the co-ordinating body of a UK science eInfrastructure and is a collaboration between UKRI-STFC, its resource providers and representatives from the science activities themselves. We document the progress of an ongoing project to build a security policy trust framework suitable for use across the IRIS community.
The EU H2020-funded AARC projects addressed the challenges involved in...
-
Simone Campana (CERN)05/11/2019, 17:30Track 4 – Data Organisation, Management and AccessOral
The European-funded ESCAPE project will prototype a shared solution to computing challenges in the context of the European Open Science Cloud. It targets Astronomy and Particle Physics facilities and research infrastructures and focuses on developing solutions for handling Exabyte scale datasets.
The DIOS work package aims at delivering a Data Infrastructure for Open Science. Such an...
-
Dr Robert Andrew Currie (The University of Edinburgh (GB))05/11/2019, 17:30
The physics software stack of LHCb is based on Gaudi and is comprised of about 20 interdependent projects, managed across multiple Gitlab repositories. At present, the continuous integration (CI) system used for regular building and testing of this software is implemented using Jenkins and runs on a cluster of about 300 cores.
LHCb CI pipelines are python-based and relatively modern with some...
-
Mr Kevin Greif (University of Notre Dame)05/11/2019, 17:30
Deep neural networks (DNNs) have been applied to the fields of computer vision and natural language processing with great success in recent years. The success of these applications has hinged on the development of specialized DNN architectures that take advantage of specific characteristics of the problem to be solved, namely convolutional neural networks for computer vision and recurrent...
-
Stella Christodoulaki (CERN)05/11/2019, 17:30Track 8 – Collaboration, Education, Training and OutreachOral
The INSPIRE digital library has served the scientific community for almost 50 years. Previously known as SPIRES, it was the first web site outside Europe and the first database on the web. Today, INSPIRE connects 100,000 scientists in High Energy Physics worldwide, with over 1 million scientific articles, thousands of scientific profiles of authors, data, conferences and jobs in High Energy...
-
Lorenzo Moneta (CERN)05/11/2019, 17:30
Pseudo-random number generators (PRNGs) play an important role in many areas of computational science. The highest-quality randomness properties, exact reproducibility and CPU efficiency are important requirements for their use in the most demanding Monte Carlo calculations.
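The exact-reproducibility requirement can be illustrated with a toy generator whose state is explicit, so equal seeds replay bit-identical streams. The LCG below is a deliberately simple stand-in, not one of the high-quality mixing-based generators under review.

```python
# Toy 64-bit linear congruential generator, illustrating only the
# exact-reproducibility requirement: same seed -> bit-identical stream.
# NOT a recommendation; demanding Monte Carlo work needs far better PRNGs.

class LCG:
    M = 2**64
    A, C = 6364136223846793005, 1442695040888963407  # Knuth's MMIX constants

    def __init__(self, seed):
        self.state = seed % self.M

    def next_u64(self):
        self.state = (self.A * self.state + self.C) % self.M
        return self.state

    def uniform(self):
        """Uniform double in [0, 1)."""
        return self.next_u64() / self.M

# Two generators with the same seed must produce identical streams,
# so a Monte Carlo run can be reproduced exactly.
g1, g2 = LCG(12345), LCG(12345)
same = all(g1.next_u64() == g2.next_u64() for _ in range(1000))
```

Because the entire state is a single integer, it can also be saved and restored, which is what makes checkpointing and exact replay of long simulations possible.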
We review here the highest-quality PRNGs available, such as those based on the Kolmogorov-Anosov theory of mixing in classical... -
Paul Gessinger-Befurt (CERN / JGU Mainz)05/11/2019, 17:30
The reconstruction of the trajectories of charged particles in the tracking detectors of high energy physics experiments is one of the most difficult and complex tasks of event reconstruction at particle colliders. As pattern recognition algorithms exhibit combinatorial scaling with track multiplicity, they become the largest contributor to CPU consumption within event reconstruction,...
-
Michael Papenbrock (Uppsala University)05/11/2019, 17:30
The upcoming PANDA experiment is one of the major pillars of the future FAIR accelerator facility in Darmstadt, Germany. With its multipurpose detector and an antiproton beam with a momentum of up to 15 GeV/c, PANDA will be able to test QCD in the intermediate energy regime and shed light on important questions such as: Why is there a matter-antimatter asymmetry in the Universe?
Achieving its...
-
Giuseppe Andronico (Universita e INFN, Catania (IT))05/11/2019, 17:45
The Jiangmen Underground Neutrino Observatory (JUNO) is an underground 20 kton liquid scintillator detector being built in the south of China and expected to start data taking in late 2021. The JUNO physics program is focused on exploring neutrino properties, by means of electron anti-neutrinos emitted from two nuclear power complexes at a baseline of about 53 km. Targeting an unprecedented...
-
Dr Nobuo Sato (Jefferson Lab, Florida State University)05/11/2019, 17:45
We describe a multi-disciplinary project to use machine learning techniques based on neural networks (NNs) to construct a Monte Carlo event generator for lepton-hadron collisions that is agnostic of theoretical assumptions about the microscopic nature of particle reactions. The generator, referred to as ETHER (Empirically Trained Hadronic Event Regenerator), is trained on experimental data...
-
Mircho Nikolaev Rodozov (Bulgarian Academy of Sciences (BG))05/11/2019, 17:45
The CMS experiment relies on a substantial C++ and Python-based software release for its day-to-day production, operations and analysis needs. While very much under active development, this codebase continues to age. At the same time, CMSSW codes are likely to be used for the next two decades, in one form or another. Thus, the "cost" of bugs entering CMSSW continues to increase, both due to...
Go to contribution page -
Maksim Melnik Storetvedt (Western Norway University of Applied Sciences (NO))05/11/2019, 17:45
The new jAliEn (Java ALICE Environment) middleware is a Grid framework designed to satisfy the needs of the ALICE experiment for the LHC Run 3, such as providing a high-performance and high-scalability service to cope with the increased volumes of collected data. This new framework also introduces a split, two-layered job pilot, creating a new approach to how jobs are handled and executed...
Go to contribution page -
Jeff LeFevre (University of California, Santa Cruz)05/11/2019, 17:45Track 4 – Data Organisation, Management and AccessOral
Access libraries such as ROOT and HDF5 allow users to interact with datasets using high level abstractions, like coordinate systems and associated slicing operations. Unfortunately, the implementations of access libraries are based on outdated assumptions about storage systems interfaces and are generally unable to fully benefit from modern fast storage devices. For example, access libraries...
Go to contribution page -
Batool Safarzadeh Samani (University of Sussex (GB))05/11/2019, 17:45
Events containing muons, electrons or photons in the final state are an important signature for many analyses being carried out at the Large Hadron Collider (LHC), including both standard model measurements and searches for new physics. To be able to study such events, it is required to have an efficient and well-understood trigger system. The ATLAS trigger consists of a hardware based system...
Go to contribution page -
Marzena Lapka (CERN)05/11/2019, 17:45Track 8 – Collaboration, Education, Training and OutreachOral
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and outreach for particle physics. Members initiate, develop and participate in a variety of activities in classrooms, public events, festivals, exhibitions, museums, institute open days, etc. The IPPOG...
Go to contribution page -
Andre Sailer (CERN)05/11/2019, 17:45
Future HEP experiments require detailed simulation and advanced reconstruction algorithms to explore the physics reach of their proposed machines and to design, optimise, and study the detector geometry and performance. To synergise the development of the CLIC and FCC software efforts, the CERN EP R&D road map proposes the creation of a "Turnkey Software Stack", which is foreseen to provide...
Go to contribution page -
Benedikt Volkel (Ruprecht Karls Universitaet Heidelberg (DE), CERN)05/11/2019, 17:45
The Virtual Monte Carlo (VMC) package, together with its concrete implementations, provides a unified interface to different detector simulation transport engines such as GEANT3 or GEANT4. So far, however, the simulation of one event was restricted to the usage of a single chosen engine.
We introduce here the possibility to mix multiple engines within the simulation of one event. Depending on user...
Go to contribution page -
Simon Blyth06/11/2019, 09:00Plenary
-
Dorothea Vom Bruch (LPNHE Paris, CNRS)06/11/2019, 09:30
Beginning in 2021, the upgraded LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), a subset of the full offline track reconstruction for charged particles is run to select particles of interest based on single or...
Go to contribution page -
Jeff Adie (NVIDIA)06/11/2019, 10:00Plenary
-
Dirk Pleiter (Forschungszentrum Jülich GmbH)06/11/2019, 10:50Plenary
-
Lukas Alexander Heinrich (CERN), Ricardo Brito Da Rocha (CERN)06/11/2019, 11:20Plenary
-
Lloyd Hollenberg (University of Melbourne)07/11/2019, 09:00Plenary
-
Romain Wartel (CERN)07/11/2019, 09:30Plenary
-
Paul Lasky (Monash University)07/11/2019, 10:00Plenary
-
Alexander Held (University of British Columbia (CA))07/11/2019, 11:00
An important part of the LHC legacy will be precise limits on indirect effects of new physics, framed for instance in terms of an effective field theory. These measurements often involve many theory parameters and observables, which makes them challenging for traditional analysis methods. We discuss the underlying problem of “likelihood-free” inference and present powerful new analysis...
Go to contribution page -
Mr Tigran Mkrtchyan (DESY)07/11/2019, 11:00Track 4 – Data Organisation, Management and AccessOral
The dCache project provides open-source software deployed internationally
to satisfy ever more demanding storage requirements of various scientific
communities. Its multifaceted approach provides an integrated way of supporting different use-cases with the same storage, from high throughput data ingest, through wide access and easy integration with existing systems, including
event driven...
Go to contribution page -
Christophe Haen (CERN)07/11/2019, 11:00
DIRACOS is a project aimed at providing a stable base layer of dependencies on top of which the DIRAC middleware runs. The goal is to produce a coherent environment for grid interaction and to streamline the operational overhead. Historically, the DIRAC dependencies were grouped in two bundles: Externals, containing Python and standard binary libraries, and the LCGBundle, which contained all...
Go to contribution page -
Dr Qiulan Huang (Institute of High Energy Physics, CAS)07/11/2019, 11:00
The LHAASO (Large High Altitude Air Shower Observatory) experiment of IHEP is located in Daocheng, Sichuan province (at an altitude of 4410 m). The main scientific goals of LHAASO are searching for galactic cosmic ray origins by extensive spectroscopy investigations of gamma ray sources above 30 TeV. To accomplish these goals, LHAASO contains four detector arrays, which generate huge amounts...
Go to contribution page -
Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)07/11/2019, 11:00
In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate and use non-Grid compute resources. Accessing and making efficient use of Cloud and supercomputer (HPC) resources present a diversity of challenges. In particular, network limitations from the compute nodes in HPC centers impede CMS experiment pilot jobs to connect to...
Go to contribution page -
Dr Wuming Luo (Institute of High Energy Physics, CAS)07/11/2019, 11:00
The Jiangmen Underground Neutrino Observatory (JUNO) in China is a 20 kton liquid scintillator detector, designed primarily to determine the neutrino mass hierarchy, as well as to study various neutrino physics topics. Its core part consists of O(10^4) Photomultiplier Tubes (PMTs). Computations looping through this large number of PMTs on CPU will be very time consuming. GPU parallel computing...
Go to contribution page -
Dr Andrea Bocci (CERN)07/11/2019, 11:00
The CMS experiment has been designed with a two-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms running on the available computing resources,...
Go to contribution page -
Serguei Linev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))07/11/2019, 11:00
The RWebWindow class provides the core functionality for web-based widgets in ROOT. It combines all necessary server-side components and provides communication channels with multiple JavaScript clients.
The following new ROOT widgets are built on RWebWindow functionality:
- RCanvas – ROOT7 canvas for drawing all kinds of primitives, including histograms and graphs
- RBrowser – hierarchical...
-
Ilya Komarov07/11/2019, 11:00Track 8 – Collaboration, Education, Training and OutreachOral
Belle II is a rapidly growing collaboration with members from
113 institutes spread around the globe. The software development team of
the experiment, as well as the software users, are very much
decentralised. Together with the active development of the software,
such decentralisation makes the adoption of the latest software
releases by users an essential, but quite challenging...
Go to contribution page -
Stefan Roiser (CERN)07/11/2019, 11:15Track 8 – Collaboration, Education, Training and OutreachOral
With the ever increasing size of scientific collaborations and complexity of scientific instruments, the software needed to acquire, process and analyze the gathered data is gaining in complexity and size too. Unfortunately, the role and career path of scientists and engineers working on software R&D and developing scientific software are neither clearly established nor defined in many fields of...
Go to contribution page -
Hannah Short (CERN)07/11/2019, 11:15
Until recently, CERN had been considered eligible for academic pricing of Microsoft products. Now, along with many other research institutes, CERN has been disqualified from this educational programme and faces a 20-fold increase in license costs. CERN’s current Authentication and Authorisation Infrastructure comprises Microsoft services all the way down from the web Single-Sign-On to the...
Go to contribution page -
Francesco Giovanni Sciacca (Universitaet Bern (CH))07/11/2019, 11:15Track 4 – Data Organisation, Management and AccessOral
The DOMA activities gave the opportunity for DPM to contribute to
the WLCG plans for Run-3 and beyond. Here we identify the themes
that are relevant to site storage systems and explain how the
approaches chosen in DPM are relevant for features like
scalability, third party copy, bearer tokens, multi-site deployments and
volatile caching pools. We will also discuss the status of the...
Go to contribution page -
Michal Svatos (Acad. of Sciences of the Czech Rep. (CZ))07/11/2019, 11:15
ATLAS distributed computing is allowed to opportunistically use resources of the Czech national HPC center IT4Innovations in Ostrava. The jobs are submitted via an ARC Compute Element (ARC-CE) installed at the grid site in Prague. Scripts and input files are shared between the ARC-CE and the shared file system located at the HPC, via sshfs. This basic submission system has worked there since...
Go to contribution page -
Thomas Owen James (CERN)07/11/2019, 11:15
The CMS experiment at the LHC is designed to study a wide range of high energy physics phenomena. It employs a large all-silicon tracker within a 3.8 T magnetic solenoid, which allows precise measurements of transverse momentum (pT) and vertex position.
This tracking detector will be upgraded to coincide with the installation of the High-Luminosity LHC, which will provide up to about 10^35...
Go to contribution page -
Dr Wally Melnitchouk (Jefferson Lab)07/11/2019, 11:15
Extracting information about the quark and gluon (or parton) structure of the nucleon from high-energy scattering data is a classic example of the inverse problem: the experimental cross sections are given by convolutions of the parton probability distributions with process-dependent hard coefficients that are perturbatively calculable from QCD. While most analyses in the past have been based...
Go to contribution page -
Henry Fredrick Schreiner (University of Cincinnati (US))07/11/2019, 11:15
Boost.Histogram, a header-only C++14 library that provides multi-dimensional histograms and profiles, is now available in Boost-1.70. It is extensible, fast, and uses modern C++ features. Using template meta-programming, the most efficient code path for any given configuration is automatically selected. The library includes key features designed for the particle physics community, such as...
Go to contribution page -
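The multi-dimensional histogram concept described in the abstract above can be sketched in a few lines. The following is a toy Python illustration with regular axes only, not the Boost.Histogram API (which is C++14 and selects optimal code paths via template meta-programming); all class names here are hypothetical.

```python
# Toy 2D histogram with regular axes, sketching the concept that
# Boost.Histogram generalises with many axis, storage, and accumulator types.

class RegularAxis:
    def __init__(self, bins, lo, hi):
        self.bins, self.lo, self.hi = bins, lo, hi

    def index(self, x):
        if not (self.lo <= x < self.hi):
            return None                      # toy model: drop over/underflow
        return int(self.bins * (x - self.lo) / (self.hi - self.lo))

class Histogram2D:
    def __init__(self, ax0, ax1):
        self.ax0, self.ax1 = ax0, ax1
        self.counts = [[0] * ax1.bins for _ in range(ax0.bins)]

    def fill(self, x, y):
        i, j = self.ax0.index(x), self.ax1.index(y)
        if i is not None and j is not None:
            self.counts[i][j] += 1

h = Histogram2D(RegularAxis(4, 0.0, 4.0), RegularAxis(2, 0.0, 2.0))
h.fill(0.5, 0.5)
h.fill(0.5, 1.5)
h.fill(9.0, 0.5)  # outside the axis range, ignored in this toy
```

The real library additionally offers growing axes, weighted fills, and profile accumulators, which this sketch deliberately omits.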
Giuseppe Cerati (Fermi National Accelerator Lab. (US))07/11/2019, 11:15
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. In liquid argon time projection chambers (TPCs) the charged particles from neutrino interactions produce ionization electrons...
Go to contribution page -
Dr Stefano Bagnasco (Istituto Nazionale di Fisica Nucleare, Torino)07/11/2019, 11:15
VIRGO is an interferometer for the detection of Gravitational Waves at the European Gravitational Observatory in Italy. Along with the two LIGO interferometers in the US, VIRGO is being used to collect data from astrophysical sources such as compact binary coalescences, and is currently running its third observational period, collecting gravitational wave events at a rate of more than one per...
Go to contribution page -
Andrea Ceccanti (Universita e INFN, Bologna (IT))07/11/2019, 11:30
One of the key challenges identified by the HEP R&D roadmap for software and computing is the ability to integrate heterogeneous resources in support of the computing needs of HL-LHC. In order to meet this objective, a flexible Authentication and Authorization Infrastructure (AAI) has to be in place, to allow the secure composition of computing and storage resources provisioned across...
Go to contribution page -
Oksana Shadura (University of Nebraska Lincoln (US))07/11/2019, 11:30
C++ Modules come in C++20 to fix the long-standing build scalability problems in the language. They provide an I/O-efficient, on-disk representation capable of reducing build times and peak memory usage. ROOT employs the C++ modules technology further in the ROOT dictionary system to improve its performance and reduce the memory footprint.
ROOT with C++ Modules was released as a technology...
Go to contribution page -
Shiyuan Fu (Institute of High Energy Physics, Chinese Academy of Sciences)07/11/2019, 11:30Track 4 – Data Organisation, Management and AccessOral
High energy physics (HEP) experiments produce a large amount of data, which is usually stored and processed on distributed sites. Nowadays, distributed data management systems face challenges such as global file namespace and efficient data access. Focusing on these problems, the paper proposes a cross-domain data access file system (CDFS), a data cache and access system across...
Go to contribution page -
Peter Elmer (Princeton University (US))07/11/2019, 11:30Track 8 – Collaboration, Education, Training and OutreachOral
Developing, maintaining, and evolving the algorithms and
software implementations for HEP experiments will continue for many
decades. In particular, the HL-LHC will start collecting data 8 or
9 years from now, and then acquire data for at least another decade.
Building the necessary software requires a workforce with a mix of
HEP domain knowledge, advanced software skills, and strong...
Go to contribution page -
Stefano Giagu (Sapienza Universita e INFN, Roma I (IT))07/11/2019, 11:30
The Level-0 Muon Trigger system of the ATLAS experiment will undergo a full upgrade for HL-LHC to stand the challenging performances requested with the increasing instantaneous luminosity. The upgraded trigger system foresees to send RPC raw hit data to the off-detector trigger processors, where the trigger algorithms run on new generation of Field-Programmable Gate Arrays (FPGAs). The FPGA...
Go to contribution page -
Doug Benjamin (Argonne National Laboratory (US))07/11/2019, 11:30
The ATLAS experiment is using large High Performance Computers (HPCs) and fine grained simulation workflows (Event Service) to produce fully simulated events in an efficient manner. ATLAS has developed a new software component (Harvester) which provides resource provisioning and workload shaping. In order to run effectively on the largest HPC machines, ATLAS developed the Yoda-Droid software to...
Go to contribution page -
Mr Martin Gasthuber (DESY)07/11/2019, 11:30
Experiments in Photon Science at DESY will, in future, undergo significant changes in terms of data volumes, data rates and, most importantly, the need to fully enable online (synchronous to the experiment) data analysis. The primary goal is to support new types of experimental setups requiring significant computing effort to perform controlling and data quality monitoring, allow effective data reductions and,...
Go to contribution page -
Yang Zhang07/11/2019, 11:30
Phase transitions played an important role in the very early evolution of the Universe. We present a C++ software package (PhaseTracer) for finding cosmological phases and calculating transition properties involving single or multiple scalar fields. The package first maps the phase structure by tracing the vacuum expectation value (VEV) of the potential at different temperatures, then finds...
Go to contribution page -
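The VEV-tracing step described in the abstract above can be illustrated with a toy one-field potential whose global minimum moves with temperature. This is a minimal Python sketch under an assumed potential V(φ, T) = (T² − 1)φ² + φ⁴ in arbitrary units; it is not the PhaseTracer package, which is C++ and handles multiple fields.

```python
# Toy illustration of tracing a vacuum expectation value with temperature.
# V(phi, T) = (T^2 - 1) * phi^2 + phi^4  (assumed toy potential, arbitrary units)

def vev(T, phi_max=2.0, steps=4000):
    """Locate the global minimum of V on a grid of phi >= 0."""
    best_phi, best_v = 0.0, 0.0
    for i in range(steps + 1):
        phi = phi_max * i / steps
        v = (T * T - 1.0) * phi * phi + phi ** 4
        if v < best_v:
            best_phi, best_v = phi, v
    return best_phi

# Below T = 1 the symmetric point phi = 0 is no longer the vacuum:
# analytically vev(T) = sqrt((1 - T^2) / 2) for T < 1, and 0 for T >= 1.
phi_c = vev(0.5)  # numerically near sqrt(0.375) ≈ 0.612
```

Scanning `vev(T)` over a range of temperatures reveals where the minimum jumps, which is the kind of phase-structure map the package builds before computing transition properties.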
Sabrina Amrouche (Université de Geneve (CH))07/11/2019, 11:30
At the High Luminosity Large Hadron Collider (HL-LHC), many
proton-proton collisions happen during a single bunch crossing. This
leads on average to tens of thousands of particles emerging from the
interaction region. Two major factors impede finding charged particle
trajectories from measured hits in the tracking detectors. First,
deciding whether a given set of hits was produced by a...
Go to contribution page -
Eric Cano (CERN)07/11/2019, 11:45Track 4 – Data Organisation, Management and AccessOral
During 2019 and 2020, the CERN tape archive (CTA) will receive new data from LHC experiments and import existing data from CASTOR, which will be phased out for LHC experiments before Run 3.
This contribution will present the statuses of CTA as a service and of its integration with EOS and FTS and the data flow chains of LHC experiments.
The latest enhancements and additions to the...
Go to contribution page -
Kenneth Richard Herner (Fermi National Accelerator Laboratory (US))07/11/2019, 11:45
The CernVM FileSystem (CVMFS) is widely used in High Throughput Computing to efficiently distribute experiment code. However, the standard CVMFS publishing tools are designed for a small group of people from each experiment to maintain common software, and the tools don't work well for the majority of users that submit jobs related to each experiment. As a result, most user code, such as code...
Go to contribution page -
Sebastian Lopienski (CERN)07/11/2019, 11:45Track 8 – Collaboration, Education, Training and OutreachOral
Since years, e-mail is one of the main attack vectors that organisations and individuals face. Malicious actors use e-mail messages to run phishing attacks, to distribute malware, and to send around various types of scams. While technical solutions exist to filter out most of such messages, no mechanism can guarantee 100% efficiency. Recipients themselves are the next, crucial layer of...
Go to contribution page -
Manuel Jesus Rodriguez Alonso (CERN)07/11/2019, 11:45
The Deep Underground Neutrino Experiment (DUNE) will be a world-class neutrino observatory and nucleon decay detector aiming to address some of the most fundamental questions in particle physics. With a modular liquid argon time-projection chamber (LArTPC) of 40 kt fiducial mass, the DUNE far detector will be able to reconstruct neutrino interactions with an unprecedented resolution. With no...
Go to contribution page -
Benjamin LaRoque07/11/2019, 11:45
Project 8 is applying a novel spectroscopy technique to make a precision measurement of the tritium beta-decay spectrum, resulting in either a measurement of or further constraint on the effective mass of the electron antineutrino. ADMX is operating an axion haloscope to scan the mass-coupling parameter space in search of dark matter axions. Both collaborations are executing medium-scale...
Go to contribution page -
Sergey Gorbunov (Johann-Wolfgang-Goethe Univ. (DE))07/11/2019, 11:45
The Mikado approach is the winner algorithm of the final phase of the TrackML particle reconstruction challenge [1].
The algorithm is combinatorial. Its strategy is to reconstruct the data in small portions, each time trying not to damage the rest of the data. The idea is reminiscent of the Mikado game, where players carefully remove wooden sticks one by one from a heap.
The algorithm does 60...
Go to contribution page -
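The "remove one stick without disturbing the rest" strategy described above can be sketched as a greedy selection over scored track candidates. This is a minimal illustration of the general idea, assuming each candidate carries a quality score and a set of claimed hits; it is not the actual Mikado algorithm.

```python
# Toy sketch of a Mikado-style greedy strategy (illustrative only):
# accept track candidates in order of decreasing quality, skipping any
# candidate that would disturb hits already claimed by accepted tracks.

def mikado_select(candidates):
    """candidates: list of (quality, set_of_hit_ids). Returns accepted list."""
    used_hits = set()
    accepted = []
    for quality, hits in sorted(candidates, key=lambda c: -c[0]):
        if hits & used_hits:          # would damage an already-claimed hit
            continue
        accepted.append((quality, hits))
        used_hits |= hits
    return accepted

cands = [(0.9, {1, 2, 3}), (0.8, {3, 4, 5}), (0.7, {4, 5, 6})]
picked = mikado_select(cands)
# the 0.8 candidate shares hit 3 with the accepted 0.9 track and is skipped
```

Running such a pass repeatedly with progressively looser quality cuts mirrors the "small portions" idea the abstract describes.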
Michael Poat (Brookhaven National Laboratory)07/11/2019, 11:45
The Solenoidal Tracker at RHIC (STAR) is a multi-national supported experiment located at Brookhaven National Lab and is currently the only remaining running experiment at RHIC. The raw physics data captured from the detector is on the order of tens of PBytes per data acquisition campaign, which makes STAR fit well within the definition of a big data science experiment. The production of the...
Go to contribution page -
Frank-Dieter Gaede (Deutsches Elektronen-Synchrotron (DE))07/11/2019, 11:45
PODIO is a C++ toolkit for the creation of event data models (EDMs) with a fast and efficient I/O layer, developed in the AIDA2020 project. It employs plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. A lightweight layer of handle classes provides the necessary high-level interface for the physicist, such as support for...
Go to contribution page -
Lara Lloret Iglesias (CSIC - Consejo Sup. de Investig. Cientif. (ES))07/11/2019, 11:45
The CERN analysis preservation portal (CAP) comprises a set of tools and services aiming to assist researchers in describing and preserving all the components of a physics analysis such as data, software and computing environment. Together with the associated documentation, all these assets are kept in one place so that the analysis can be fully or partially reused even several years after the...
Go to contribution page -
Jan Balewski (Lawrence Berkeley National Lab. (US))07/11/2019, 12:00
Over the last few years, many physics experiments migrated their computations from customized locally managed computing clusters to orders of magnitude larger multi-tenant HPC systems often optimized for highly parallelizable long-runtime computations. Historically, physics simulations and analysis workflows were designed for single-core CPUs with abundant RAM, plenty of local...
Go to contribution page -
Greg Corbett (STFC)07/11/2019, 12:00
GOCDB is the official repository for storing and presenting EGI and WLCG topology and resource information. It is a definitive information source, with the emphasis on user communities to maintain their own data. It is intentionally designed to have no dependencies on other operational tools for information.
In recent years, funding sources and user communities have evolved and GOCDB is...
Go to contribution page -
Christian Schmitt (Johannes Gutenberg Universitaet Mainz (DE))07/11/2019, 12:00
Artificial neural networks are becoming a standard tool for data analysis, but their potential for hardware-level trigger applications remains largely untapped. Nowadays, high-end FPGAs, as they are also often used in low-level hardware triggers, offer theoretically enough performance to allow for the inclusion of networks of considerable size into these systems for the first time. This...
Go to contribution page -
Benedikt Riedel (University of Wisconsin-Madison)07/11/2019, 12:00
IceCube sends out real-time alerts for neutrino events to other multi-messenger observatories around the world, including LIGO/VIRGO and electromagnetic observatories. The typical case is to send out an initial alert within one minute, then run more expensive processing to refine the direction and energy estimates and send a follow-on message. This second message has averaged 40 to 60...
Go to contribution page -
Matthew Feickert (Southern Methodist University (US))07/11/2019, 12:00
Likelihoods associated with statistical fits in searches for new physics are beginning to be published by LHC experiments on HEPData [arXiv:1704.05473]. The first of these is the search for bottom-squark pair production by ATLAS [ATLAS-CONF-2019-011]. These likelihoods adhere to a specification first defined by the...
Go to contribution page -
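As a hedged illustration of what a published likelihood encodes, a single-bin Poisson counting model can be written down and maximized directly. This toy sketch is not the full HistFactory specification used in the published ATLAS likelihoods; the function names and numbers are illustrative.

```python
# Toy single-bin counting likelihood: n observed events, signal strength mu,
# expected s signal and b background events. Illustrative only; published
# likelihoods use the full HistFactory specification with many channels.
import math

def nll(mu, n, s, b):
    """Negative log of the Poisson likelihood P(n | mu*s + b)."""
    lam = mu * s + b
    return lam - n * math.log(lam) + math.lgamma(n + 1)

def best_mu(n, s, b, lo=0.0, hi=5.0, steps=500):
    """Grid-scan the signal strength for the maximum-likelihood value."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda mu: nll(mu, n, s, b))

mu_hat = best_mu(n=12, s=5, b=7)  # analytically (n - b) / s = 1.0
```

Publishing the model (s, b, and the constraint terms a real analysis adds) alongside the data is what lets the likelihood be re-evaluated and reinterpreted later.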
Lukasz Kamil Graczykowski (Warsaw University of Technology (PL))07/11/2019, 12:00Track 8 – Collaboration, Education, Training and OutreachOral
Heavy-ion physics has been present within the "International MasterClasses" for almost ten years. New developments aiming at expanding their scope and reach are presented in this talk.
First, in line with the physics research of typical heavy-ion experiments, three measurements were developed based on actual data analysis in the ALICE experiment at CERN/LHC. They correspond to the most important...
Go to contribution page -
Giuseppe Cerati (Fermi National Accelerator Lab. (US))07/11/2019, 12:00
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster...
Go to contribution page -
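The Kalman filtering mentioned above incrementally updates a state estimate and its uncertainty with each new measurement. A minimal one-dimensional sketch of that update step follows; this is a toy for a constant parameter, not the experiments' production track-fitting code.

```python
# Minimal 1D Kalman filter: estimate a constant track parameter from
# noisy measurements. Toy illustration of the update step only.

def kalman_1d(measurements, meas_var, x0=0.0, p0=1e6):
    """Sequentially update the state estimate x and its variance p."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)         # state update toward the measurement
        p = (1.0 - k) * p           # variance shrinks with each hit
    return x, p

est, var = kalman_1d([1.2, 0.9, 1.1, 1.0], meas_var=0.04)
```

A real track fit propagates a multi-dimensional state between detector layers and folds in material effects at each step, which is exactly the part the abstract says is being optimized for parallel architectures.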
Sang Un Ahn (Korea Institute of Science & Technology Information (KR))07/11/2019, 12:00Track 4 – Data Organisation, Management and AccessOral
In November 2018, the KISTI Tier-1 centre started a project to design, develop and deploy a disk-based custodial storage with an error rate and reliability compatible with tape-based storage. This project has been conducted in collaboration between KISTI and CERN; in particular, the initial system design was laid out through intensive discussions with CERN IT and ALICE. The initial system...
Go to contribution page -
Serguei Linev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))07/11/2019, 12:00
The REve library of ROOT provides a web-based event display and includes all necessary components for geometry visualization. These components are reused in the new web-based geometry viewer, where ROOT geometries of arbitrary complexity can be displayed.
With the new geometry viewer one can browse the hierarchy of geometry nodes, change individual node/volume/shape attributes, search volumes by name...
Go to contribution page -
420. CERN analysis preservation and reuse framework: FAIR research data services for LHC experimentsPamfilos Fokianos (CERN)07/11/2019, 12:15
In this paper we present the CERN Analysis Preservation service as a FAIR (Findable, Accessible, Interoperable and Reusable) research data preservation repository platform for LHC experiments. The CERN Analysis Preservation repository allows LHC collaborations to deposit and share the structured information about analyses as well as to capture the individual data assets associated to the...
Go to contribution page -
Luca Mascetti (CERN)07/11/2019, 12:15Track 4 – Data Organisation, Management and AccessOral
The CERN IT Storage group operates multiple distributed storage systems to support all CERN data storage requirements: the physics data generated by LHC and non-LHC experiments; object and file storage for infrastructure services; block storage for the CERN cloud system; filesystems for general use and specialized HPC clusters; content distribution filesystem for software distribution and...
Go to contribution page -
Dr Richard Hughes-Jones (GEANT Association)07/11/2019, 12:15
This paper describes the work done by the AENEAS project to develop a concept and design for a distributed, federated, European SKA Regional Centre (ESRC) to support the compute, storage, and networking that will be required to achieve the scientific goals of the Square Kilometre Array (SKA).
The AENEAS (Advanced European Network of E-infrastructures for Astronomy with the SKA) project is a 3...
Go to contribution page -
Dr Sally Robertson (University of California Berkeley, Lawrence Berkeley National Lab)07/11/2019, 12:15
Large scale neutrino detectors are relying on accurate muon energy estimates to infer neutrino energy. Reconstruction methods which incorporate physics knowledge will produce a better result. The muon energy reconstruction algorithm Edepillim takes into account the entire pattern of energy loss along the muon track and uses probability distribution functions describing muon energy losses to...
Go to contribution page -
Vladimir Loncar (University of Belgrade (RS))07/11/2019, 12:15
Machine learning is becoming ubiquitous across HEP. There is great potential to improve trigger and DAQ performance with it. However, the exploration of such techniques within the field in low latency/power FPGAs has just begun. We present hls4ml, a user-friendly software package based on High-Level Synthesis (HLS), designed to deploy network architectures on FPGAs. As a case study, we use hls4ml for...
Go to contribution page -
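One ingredient of deploying networks on low latency/power FPGAs is reducing weights to fixed-point precision. The sketch below shows a toy fixed-point quantisation in Python; hls4ml itself generates HLS C++ from trained models, and this helper (including the chosen bit widths) is purely illustrative.

```python
# Toy signed fixed-point quantisation, the kind of precision reduction
# explored when mapping network weights onto FPGA resources.

def quantize(x, int_bits=2, frac_bits=6):
    """Round x to a signed fixed-point grid with the given bit widths."""
    scale = 1 << frac_bits                        # resolution: 1/scale
    lo = -(1 << (int_bits - 1))                   # most negative value
    hi = (1 << (int_bits - 1)) - 1.0 / scale      # most positive value
    q = round(x * scale) / scale
    return min(max(q, lo), hi)                    # saturate out-of-range values

weights = [0.733, -1.208, 3.9]
quantized = [quantize(w) for w in weights]        # 3.9 saturates at 1.984375
```

Studying how accuracy degrades as `int_bits` and `frac_bits` shrink is how a latency/resource/precision trade-off is usually chosen before synthesis.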
Norman Anthony Graf (SLAC National Accelerator Laboratory (US))07/11/2019, 12:15
Geant4 is the de facto HEP standard for simulating the interaction of particles with materials and fields. The software toolkit provides a very rich library of basic geometrical shapes, often referred to as “primitives”, plus the ability to define compound geometries, making it capable of supporting extremely complex physical structures. The ability to directly import CAD geometries into...
Go to contribution page -
Stefano Dal Pra (Universita e INFN, Bologna (IT))07/11/2019, 12:15
The INFN Tier-1 datacentre provides computing resources to several HEP and Astrophysics experiments. These are organized in Virtual Organizations submitting jobs to our computing facilities through Computing Elements, acting as Grid interfaces to the Local Resource Manager. We are phasing out our current LRMS (IBM/Platform LSF 9.1.3) and CEs (CREAM), and are set to adopt HTCondor as a replacement for...
Go to contribution page -
Jorn Schumacher (CERN)07/11/2019, 14:00
NetIO is a network communication library that enables distributed applications to exchange messages using high-level communication patterns such as publish/subscribe. NetIO is based on libfabric and supports various types of RDMA networks, for example, Infiniband, RoCE, or OmniPath. NetIO is currently being used in the data acquisition chain of the ATLAS experiment.
Major parts of NetIO...
Go to contribution page -
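The publish/subscribe pattern that NetIO provides can be sketched with an in-process toy. NetIO itself is a C++ library built on libfabric over RDMA fabrics; the Python class and the tag string below are hypothetical illustrations of the pattern only.

```python
# In-process sketch of the publish/subscribe pattern NetIO offers over
# RDMA networks. Class and tag names are hypothetical, not the NetIO API.
from collections import defaultdict

class PubSub:
    def __init__(self):
        self._subscribers = defaultdict(list)    # tag -> list of callbacks

    def subscribe(self, tag, callback):
        self._subscribers[tag].append(callback)

    def publish(self, tag, message):
        # Deliver to every subscriber of this tag; unknown tags are dropped.
        for cb in self._subscribers.get(tag, []):
            cb(message)

bus = PubSub()
received = []
bus.subscribe("detector/elink42", received.append)
bus.publish("detector/elink42", b"\x01\x02")
bus.publish("other", b"\x03")   # no subscriber registered, silently dropped
```

In the real library the interesting work is in how such deliveries map onto zero-copy RDMA transfers, which this toy deliberately ignores.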
Enrico Bocchi (CERN)07/11/2019, 14:00Track 4 – Data Organisation, Management and AccessOral
The S3 service at CERN (S3.CERN.CH) is a horizontally scalable object storage system built with a flexible number of virtual RADOS Gateways on top of a conventional Ceph cluster. A Traefik load balancing frontend (operated via Nomad and Consul) redirects HTTP traffic to the RGW backends, and LogStash publishes to ElasticSearch for monitoring the user traffic. User and quota management is...
Go to contribution page -
Markus Schulz (CERN)07/11/2019, 14:00
Grid information systems enable the discovery of resources in a Grid computing infrastructure and provide further information about their structure and state.
The original concepts for a grid information system were defined over 20 years ago and the GLUE 2.0 information model specification was published 10 years ago.
This contribution describes the current status and highlights the changes...
Go to contribution page -
Dr Marco Milesi (The University of Melbourne, Belle II Experiment)07/11/2019, 14:00
We present a major overhaul to lepton identification for the Belle II experiment, based on a novel multi-variate classification algorithm.
A key topic in the Belle II physics programme is the study of semi-tauonic B decays to test lepton flavour universality violation, such as $B\rightarrow D^{*}\tau\nu$. The analysis of this decay relies on the capability of correctly separating low...
Go to contribution page -
Max Fischer (Karlsruhe Institute of Technology)07/11/2019, 14:00
Dynamic resource provisioning in the WLCG is commonly based on meta-scheduling and the pilot model. For a given set of workflows, a meta-scheduler computes the ideal set of resources; so-called pilot jobs integrate these resources into an overlay batch system, which then processes the initial workflows. While offering a high level of control and precision, the strong coupling between...
Go to contribution page -
Dr Joern Adamczewski-Musch (GSI Helmholtzzentrum f. Schwerionenforschung GmbH)07/11/2019, 14:00
Since 2018 several FAIR Phase 0 beamtimes have been operated at GSI, Darmstadt. Here the new challenging technologies for the upcoming FAIR facility shall be tested while various physics experiments are performed with the existing GSI accelerators. One of these challenges concerns the performance, reliability, and scalability of the experiment data storage. A new system for archiving the data...
Go to contribution page -
Martin Sevior (University of Melbourne (AU))07/11/2019, 14:00
In March 2019 the Belle II detector began collecting data from $e^{+}e^{-}$ collisions at the SuperKEKB electron-positron collider. Belle II aims to collect a data sample 50 times larger than the previous generation of B-Factories. For Belle II analyses to be competitive it is crucial that calibration constants for this data are calculated promptly prior to the main data reconstruction.
To...
Go to contribution page -
Rodrigo Sierra (CERN)07/11/2019, 14:00
Whether you consider “IoT” as a real thing or a buzzword, there’s no doubt that connected devices, data analysis and automation are transforming industry. CERN is no exception: a network of LoRa-based radiation monitors has recently been deployed and there is a growing interest in the advantages connected devices could bring—to accelerator operations just as much as to building management.
...
Go to contribution page -
The SKA Science Data Processor (SDP): final design and getting ready for the construction phase
Dr Stephen Ord (CSIRO Astronomy and Space Science)07/11/2019, 14:00
The Square Kilometre Array (SKA) project is an international effort to build the world’s largest radio telescope, led by the SKA Organisation based at the Jodrell Bank Observatory near Manchester, UK. The SKA will conduct transformational science to improve our understanding of the Universe and the laws of fundamental physics, monitoring the sky in unprecedented detail and mapping it hundreds...
Go to contribution page -
Tadeas Bilka (Charles University, Prague), Tadeas Bilka (Charles University (CZ))07/11/2019, 14:15
On March 25th 2019, the Belle II detector recorded the first collisions delivered by the SuperKEKB accelerator. This marked the beginning of the physics run with the vertex detector. The vertex detector was aligned initially with cosmic ray tracks without magnetic field, simultaneously with the drift chamber. The alignment method is based on Millepede II and the General Broken Lines track...
Go to contribution page -
Jason Oliver (University of Adelaide (AU))07/11/2019, 14:15
The Recursive Jigsaw Reconstruction method is a technique to analyze reconstructed particles in the presence of kinematic and combinatoric unknowns which are associated with unmeasured or indistinguishable particles. By factorizing the unknowns according to an assumed topology and applying fixed algorithmic choices - Jigsaw Rules, we are able to approximately reconstruct rest frames throughout...
Go to contribution page -
Revital Kopeliansky (Indiana University (US))07/11/2019, 14:15
The ATLAS experiment at CERN has started the construction of upgrades for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5×10³⁴ cm⁻²s⁻¹, resulting in much higher pileup and data rates than the current experiment was...
Go to contribution page -
Mr Tigran Mkrtchyan (DESY)07/11/2019, 14:15
As a well-established, large-scale distributed storage system, dCache is required to manage and serve huge amounts of data for WLCG experiments and beyond. Based on a microservices-like architecture, dCache is built as a distributed, modular system where each component provides a different core functionality. These services communicate by passing serialized messages of dynamic types to each...
Go to contribution page -
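The message-passing pattern described above can be sketched in miniature. The following pure-Python toy (all names hypothetical; dCache itself is a Java system with a far richer cells/messaging layer) shows services registering handlers per message type and exchanging serialized payloads:

```python
import json

class MessageBus:
    """Toy version of services exchanging serialized messages of dynamic
    types: handlers register per message type, payloads travel as JSON.
    Illustrative only -- not dCache's actual messaging implementation."""
    def __init__(self):
        self.handlers = {}

    def register(self, msg_type, handler):
        self.handlers[msg_type] = handler

    def send(self, msg_type, payload):
        wire = json.dumps({"type": msg_type, "payload": payload})  # serialize
        msg = json.loads(wire)                                     # deliver
        handler = self.handlers.get(msg["type"])
        if handler is None:
            raise LookupError(f"no service handles {msg['type']!r}")
        return handler(msg["payload"])

# A hypothetical "stat" service answering file-metadata requests:
bus = MessageBus()
bus.register("stat", lambda p: {"path": p["path"], "size": 4096})
reply = bus.send("stat", {"path": "/data/file1"})
```

Because dispatch keys on the message type rather than a fixed schema, new services and message types can be added without touching existing components.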
Andreas Joachim Peters (CERN)07/11/2019, 14:15Track 4 – Data Organisation, Management and AccessOral
EOS is the main storage system at CERN providing hundreds of PB of capacity to both physics experiments and also regular users of the CERN infrastructure. Since its first deployment in 2010, EOS has evolved and adapted to the challenges posed by ever increasing requirements for storage capacity, user friendly POSIX-like interactive experience and new paradigms like collaborative applications...
Go to contribution page -
Julia Andreeva (CERN)07/11/2019, 14:15
The WLCG project aims to develop, build, and maintain a global computing facility for storage and analysis of the LHC data. While currently most of the LHC computing resources are provided by classical grid sites, over the last years the LHC experiments have been using more and more public clouds and HPCs, and this trend will certainly increase. The heterogeneity of the LHC computing...
Go to contribution page -
Michiru Kaneda (ICEPP, the University of Tokyo)07/11/2019, 14:15
The Tokyo regional analysis center at the International Center for Elementary Particle Physics, the University of Tokyo, is one of the Tier 2 sites for the ATLAS experiment in the Worldwide LHC Computing Grid (WLCG). The current system provides 7,680 CPU cores and 10.56 PB of disk storage for WLCG. CERN plans to start the High-Luminosity LHC in 2026, which increases the peak luminosity to 5...
Go to contribution page -
Dr Stephen Ord (CSIRO Astronomy and Space Science)07/11/2019, 14:15
The software pipeline for ASKAP has been developed to run on the Galaxy supercomputer as a succession of MPI enabled coarsely parallelised applications. We have been using OpenACC to develop more finely grained parallel applications within the current code base that can utilise GPU accelerators if they are present. Thereby eliminating the overhead of maintaining two versions of the software...
Go to contribution page -
Mr Bruno Hoeft (Karlsruhe Institute of Technology (KIT))07/11/2019, 14:15
This talk explores the methods and results confirming the baseline assumption that LHCONE traffic is science traffic. The LHCONE (LHC Open Network Environment) is a network conceived to support globally distributed collaborative science. The LHCONE connects thousands of researchers to LHC data sets at hundreds of universities and labs performing analysis within the global collaboration. It is...
Go to contribution page -
Alexey Anisenkov (Budker Institute of Nuclear Physics (RU))07/11/2019, 14:30
CRIC is a high-level information system which provides flexible, reliable and complete topology and configuration description for a large scale distributed heterogeneous computing infrastructure. CRIC aims to facilitate distributed computing operations for the LHC experiments and consolidate WLCG topology information. It aggregates information coming from various low-level information sources...
Go to contribution page -
Manuel Giffels (KIT - Karlsruhe Institute of Technology (DE))07/11/2019, 14:30
Increased operational effectiveness and the dynamic integration of only temporarily available compute resources (opportunistic resources) become more and more important in the next decade, due to the scarcity of resources for future high energy physics experiments as well as the desired integration of cloud and high performance computing resources. This results in a more heterogeneous compute...
Go to contribution page -
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))07/11/2019, 14:30
One of the key components of the XRootD software framework is the C++ implementation of the XRootD client. As the foundation of client binaries, XRootD Posix API and the Python API, it is widely used in LHC experiments’ frameworks as well as on server side in the XCache and EOS. In order to facilitate new developments the XRootD client API has been extended to be in line with modern C++...
Go to contribution page -
Andreas Joachim Peters (CERN)07/11/2019, 14:30Track 4 – Data Organisation, Management and AccessOral
The storage group of CERN IT operates more than 20 individual EOS storage services with a raw data storage volume of more than 280 PB. Storage space is a major cost factor in HEP computing, and the planned future LHC Runs 3 and 4 will increase storage space demands by at least an order of magnitude.
A cost effective storage model providing durability is Erasure Coding (EC). The decommissioning of...
Go to contribution page -
David Kelsey (Science and Technology Facilities Council STFC (GB))07/11/2019, 14:30
The use of IPv6 on the general internet continues to grow. Several Broadband/Mobile-phone companies, such as T-Mobile in the USA and BT/EE in the UK, now use IPv6-only networking with connectivity to the IPv4 legacy world enabled by the use of NAT64/DNS64/464XLAT. Large companies, such as Facebook, use IPv6-only networking within their internal networks, there being good management and...
Go to contribution page -
Piero Vicini (Sapienza Universita e INFN, Roma I (IT))07/11/2019, 14:30
The L0TP+ initiative is aimed at the upgrade of the FPGA-based Level-0 Trigger Processor (L0TP) of the NA62 experiment at CERN for the post-LS2 data taking, which is expected to happen at 100% of nominal beam intensity. Although tests performed at the end of 2018 showed a substantial robustness of the L0TP system also at full beam intensity, just hinting at a firmware fix, there are several...
Go to contribution page -
Kinga Anna Wozniak (University of Vienna (AT))07/11/2019, 14:30
We propose a new search strategy, based on deep-learning (DL) anomaly detection, to search for new physics in all-jet final states without specific assumptions. The DL model identifies events with an anomalous radiation pattern in the jets. This is done by applying a threshold to the reconstruction loss. The threshold is tuned so that the rejected events provide an estimate of the QCD-background...
Go to contribution page -
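The thresholding step described above can be illustrated with a self-contained sketch. Everything here (function name, toy loss values, the 80% working point) is invented for illustration and is not taken from the contribution:

```python
def select_anomalies(losses, background_fraction=0.9):
    """Split events by a reconstruction-loss threshold chosen so that
    roughly `background_fraction` of events fall below it; the rejected
    (low-loss) sample then serves as a background-like control region."""
    ordered = sorted(losses)
    cut_index = int(background_fraction * len(ordered))
    threshold = ordered[cut_index]
    accepted = [x for x in losses if x >= threshold]   # anomaly candidates
    rejected = [x for x in losses if x < threshold]    # background estimate
    return threshold, accepted, rejected

# Toy losses: mostly small (QCD-like), a few large (anomalous-looking).
losses = [0.1, 0.2, 0.15, 0.12, 0.18, 0.11, 0.14, 0.16, 0.9, 1.2]
threshold, accepted, rejected = select_anomalies(losses, background_fraction=0.8)
```

Tuning `background_fraction` trades signal efficiency against the size of the rejected sample used for the background estimate.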
Petar Kevin Rados, Petar Kevin Rados (DESY)07/11/2019, 14:30
The tracking system of Belle II consists of a silicon vertex detector (VXD) and a cylindrical drift chamber (CDC), both operating in a magnetic field created by the 1.5 T main solenoid and the final focusing magnets. The Belle II VXD is a combined tracking system composed of two layers of pixel detectors and four layers of double-sided silicon strip sensors (SVD). The drift chamber...
Go to contribution page -
Prof. Kihyeon Cho (KISTI)07/11/2019, 14:30
In November 2018, the KISTI-5 supercomputer was launched. It is a heterogeneous 25.3 PF Cray 3112-AA000T machine with Intel Xeon Phi KNL (Knights Landing) 7250 processors, each providing 68 cores. The goal of this presentation is to discuss the application and usage of the Intel KNL-based KISTI-5 supercomputer for physics beyond the Standard Model.
The world is made of dark...
Go to contribution page -
Joshua Heneage Dawes (University of Manchester (GB))07/11/2019, 14:45
VyPR (http://cern.ch/vypr) is a framework being developed with the aim of automating as much as possible the performance analysis of Python programs. To achieve this, it uses an analysis-by-specification approach; developers specify the performance requirements of their programs (without any modifications of the source code) and such requirements are checked at runtime. VyPR then provides...
Go to contribution page -
Prof. Andrew Davis (United Kingdom Atomic Energy Authority)07/11/2019, 14:45
Within the fusion radiation transport community, the de facto standard code for simulation for many years was, and still is, MCNP. MCNP suffers from few community-perceived drawbacks, having widely validated and verified physics, a large user base and a simple interface; the main issue in the age of democratised computing access, however, is its prohibitive licence conditions. Thus, if we need to be able...
Go to contribution page -
James Letts (Univ. of California San Diego (US))07/11/2019, 14:45
GlideinWMS is a workload management and provisioning system that allows sharing computing resources distributed over independent sites. Based on the requests made by glideinWMS Frontends, a dynamically sized pool of resources is created by glideinWMS pilot Factories via pilot job submission to resource sites' computing elements. More than 400 computing elements (CE) are currently serving more...
Go to contribution page -
Dave Casper (University of California Irvine (US))07/11/2019, 14:45
The increasing track multiplicity in ATLAS poses new challenges for primary vertex reconstruction software, with over 70 inelastic proton-proton collisions per beam crossing expected during Run 2 of the LHC and even more extreme vertex densities in the upcoming runs.
In order to address these challenges, two new tools were adapted.
The first is the Gaussian track density...
Go to contribution page -
Hristo Umaru Mohamed (CERN)07/11/2019, 14:45
DHCP is an often overlooked but incredibly important component of the operation of every data center. In constantly scaling and dynamic environments, managing DHCP servers that rely on configuration files, which must be kept in sync, becomes both slow and expensive in engineering effort. The LHCb Online infrastructure currently consists of over 2500 DHCP-enabled devices - physical and virtual...
Go to contribution page -
Christian Kahra (Johannes Gutenberg Universitaet Mainz (DE))07/11/2019, 14:45
To cope with the enhanced luminosity at the Large Hadron Collider (LHC) in 2021, the ATLAS collaboration is planning a major detector upgrade to be installed during the Long Shutdown 2 (LS2). As a part of this, the Level 1 trigger, based on calorimeter data, will be upgraded to exploit the fine granularity readout using a new system of Feature EXtractors (FEX) and a new Topological Processor...
Go to contribution page -
David Lawrence (Jefferson Lab)07/11/2019, 14:45
The Jefferson Lab 12 GeV accelerator upgrade, completed in 2015, is now producing data at volumes unprecedented for the lab. The resources required to process these data now exceed the capacity of the onsite farm, necessitating the use of offsite computing resources for the first time in the history of JLab. GlueX is now utilizing NERSC for raw data production using the new SWIF2 workflow tool...
Go to contribution page -
Andrea Valassi (CERN)07/11/2019, 14:45
HEP event selection is traditionally considered a binary classification problem, involving the dichotomous categories of signal and background. In distribution fits for particle masses or couplings, however, signal events are not all equivalent, as the signal differential cross section has different sensitivities to the measured parameter in different regions of phase space. In this talk, I...
Go to contribution page -
Dario Barberis (Università e INFN Genova (IT))07/11/2019, 14:45Track 4 – Data Organisation, Management and AccessOral
The ATLAS Event Index was designed in 2012-2013 to provide a global event catalogue and limited event-level metadata for ATLAS analysis groups and users during LHC Run 2 (2015-2018). It provides a good and reliable service for the initial use cases (mainly event picking) and several additional ones, such as production consistency checks, duplicate event detection and measurements of the...
Go to contribution page -
Enric Tejedor Saavedra (CERN)07/11/2019, 15:00
Widespread distributed processing of big datasets has been around for more than a decade now thanks to Hadoop, but only recently have higher-level abstractions been proposed that let programmers easily operate on those datasets, e.g. Spark. ROOT has joined that trend with its RDataFrame tool for declarative analysis, which currently supports local multi-threaded parallelisation. However,...
Go to contribution page -
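The declarative style that RDataFrame offers can be illustrated with a tiny pure-Python mock-up. This is not ROOT code; the class and method names merely mimic the RDataFrame programming model to show how booked operations run lazily only when a result is requested:

```python
class DataFrame:
    """Tiny declarative-analysis sketch: Filter/Define calls only book
    operations; nothing runs until a result (here Count) is requested."""
    def __init__(self, rows, ops=()):
        self.rows, self.ops = rows, ops

    def Filter(self, predicate):
        # Book a row-selection step; return a new node in the graph.
        return DataFrame(self.rows, self.ops + (("filter", predicate),))

    def Define(self, name, expr):
        # Book a new derived column computed per row.
        return DataFrame(self.rows, self.ops + (("define", name, expr),))

    def Count(self):
        # Trigger the event loop: replay the booked operations per row.
        n = 0
        for row in self.rows:
            row, keep = dict(row), True
            for op in self.ops:
                if op[0] == "define":
                    row[op[1]] = op[2](row)
                elif not op[1](row):
                    keep = False
                    break
            n += keep
        return n

df = DataFrame([{"pt": 10}, {"pt": 35}, {"pt": 60}])
n = df.Define("high", lambda r: r["pt"] > 30).Filter(lambda r: r["high"]).Count()
```

Because the computation graph is data, a scheduler could in principle partition the rows and run the same booked operations in parallel, which is the hook that distributed backends exploit.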
Carlos Fernando Gamboa (Brookhaven National Laboratory (US))07/11/2019, 15:00
The Belle II experiment is a leading world class B-physics experiment. In 2017 BNL became a member of the Belle II collaboration taking responsibility to maintain and develop the Conditions Database (CDB)—an archive of the detector’s conditions at the time of each recorded collision. This database tracks millions of variables—for example, the detector’s level of electronic noise,...
Go to contribution page -
Prof. Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))07/11/2019, 15:00
The main purpose of modern experiments with heavy ions is a comprehensive study of the QCD phase diagram in the field of quark-gluon plasma (QGP) and the possible phase transition to the QGP phase.
One of the possible signals of QGP formation is an increase in the production of strange particles. Reconstruction of $\Sigma$ particles together with other strange particles completes the picture...
Go to contribution page -
Shawn Mc Kee (University of Michigan (US))07/11/2019, 15:00
High Energy Physics (HEP) experiments have greatly benefited from a strong relationship with Research and Education (REN) network providers and thanks to the projects such as LHCOPN/LHCONE and REN contributions, have enjoyed significant capacities and high performance networks for some time. Network providers have been able to continually expand their capacities to over-provision the networks...
Go to contribution page -
Dr Andrew Lahiff07/11/2019, 15:00
Access to both High Throughput Computing (HTC) and High Performance Computing (HPC) facilities is vitally important to the fusion community, not only for plasma modelling but also for advanced engineering and design, materials research, rendering, uncertainty quantification and advanced data analytics for engineering operations. The computing requirements are expected to increase as...
Go to contribution page -
Yujiang Bi (Institute of High Energy Physics, Chinese Academy of Sciences)07/11/2019, 15:00
The open source ROCm platform for GPU computing provides a uniform framework to support both NVIDIA and AMD GPUs, as well as the possibility of porting CUDA code to a ROCm-compatible form. We will present the porting progress on the overlap fermion inverter (GWU-code) based on Thrust, and also on a general inverter package, QUDA.
Go to contribution page -
Dr Richard Hughes-Jones (GEANT Association)07/11/2019, 15:00Track 4 – Data Organisation, Management and AccessOral
This paper describes the work done to test the performance of several current data transfer protocols. The work was carried out as part of the AENEAS Horizon 2020 project in collaboration with the DOMA project and investigated the interactions between the application, the transfer protocol, TCP/IP and the network elements. When operational, the two telescopes in Australia and South Africa that...
Go to contribution page -
Thiago Tomei Fernandez (UNESP - Universidade Estadual Paulista (BR))07/11/2019, 15:00
The CMS experiment has been designed with a two-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger, a streamlined version of the CMS offline reconstruction software running on a computer farm. During its “Phase 2” the LHC will reach a luminosity of 7×10³⁴ cm⁻²s⁻¹ with a pileup of 200 collisions, integrating over 3000 fb⁻¹ over the...
Go to contribution page -
Jim Pivarski (Princeton University)07/11/2019, 15:00
Over the past two years, the uproot library has become widely adopted among particle physicists doing analysis in Python. Rather than presenting an event model, uproot gives the user an array for each particle attribute. In the case of multiple particles per event, this array is jagged: an array of unequal-length subarrays. Data structures and operations for manipulating jagged arrays are provided...
Go to contribution page -
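A jagged array of this kind is typically stored as a flat content buffer plus per-event offsets. The sketch below is a minimal pure-Python illustration of that layout idea, not uproot's actual implementation:

```python
class JaggedArray:
    """Minimal jagged ("array of unequal-length subarrays") container,
    stored as flat content plus per-event offsets -- the layout idea
    used by jagged-array libraries, reduced to a teaching sketch."""
    def __init__(self, offsets, content):
        self.offsets = offsets   # len(events) + 1 boundaries into content
        self.content = content   # all values, flattened

    @classmethod
    def from_lists(cls, lists):
        offsets, content = [0], []
        for sub in lists:
            content.extend(sub)
            offsets.append(len(content))
        return cls(offsets, content)

    def __len__(self):
        return len(self.offsets) - 1

    def __getitem__(self, i):
        return self.content[self.offsets[i]:self.offsets[i + 1]]

    def counts(self):
        """Number of particles in each event."""
        return [self.offsets[i + 1] - self.offsets[i] for i in range(len(self))]

# Hypothetical muon pT values for three events with 2, 0 and 3 muons:
pt = JaggedArray.from_lists([[31.2, 12.7], [], [45.0, 22.1, 8.9]])
```

Keeping the content flat is what makes whole-array (vectorised) operations possible even when events have different particle counts.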
David Cameron (University of Oslo (NO))07/11/2019, 15:15
ATLAS@Home is a volunteer computing project which enables members of the public to contribute computing power to run simulations of the ATLAS experiment at CERN. The computing resources provided to ATLAS@Home increasingly come not only from traditional volunteers, but from data centres or office computers at institutes associated to ATLAS. The design of ATLAS@Home was built around not giving...
Go to contribution page -
Gordon Watts (University of Washington (US))07/11/2019, 15:15
MATHUSLA has been proposed as a detector that sits over 100 m from an LHC interaction point, on the surface, to look for ultra long-lived particles. A test stand was constructed with two layers of scintillator paddles and six layers of RPCs, on loan from the DZERO and the Argo-YBJ experiments. Downward and upward going tracks from cosmic ray data and muons from the interaction point have been...
Go to contribution page -
Frank Berghaus (University of Victoria (CA))07/11/2019, 15:15
The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides around 100,000 cores, which are critical to ATLAS during data taking. When ATLAS is not recording data, this large compute resource is used to generate and process simulation data for the experiment. At the beginning of the...
Go to contribution page -
Caterina Marcon (Lund University (SE))07/11/2019, 15:15
Experimental observations and advanced computer simulations in High Energy Physics (HEP) paved the way for the recent discoveries at the Large Hadron Collider (LHC) at CERN. Currently, Monte Carlo simulations account for a very significant amount of the computational resources of the Worldwide LHC Computing Grid (WLCG).
In looking at the recent trends in modern computer architectures, we see a...
Go to contribution page -
Silvio Pardi (INFN)07/11/2019, 15:15
Belle II has started the Phase 3 data taking with a fully equipped detector. The data flow at the maximum luminosity is expected to be 12 PB of data per year, and will be analysed by a cutting-edge computing infrastructure spread over 26 countries. Some of the major computing centres for HEP in Europe, the USA and Canada will store and handle the second copy of the RAW data.
In this scenario, the...
Go to contribution page -
Marc Dobson (CERN)07/11/2019, 15:15
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5×10³⁴ cm⁻²s⁻¹ (levelled), with a pileup of up to 200 interactions per bunch crossing. During LS3, the CMS detector will undergo a major upgrade to prepare for Phase 2 of the LHC physics program, starting around 2026. The upgraded CMS detector will be read out at an...
Go to contribution page -
Fabio Alberto Espinosa Burbano (Massachusetts Inst. of Technology (US))07/11/2019, 15:15
The Run Registry of the Compact Muon Solenoid (CMS) experiment at the LHC is the central tool to keep track of the results from the data quality monitoring scrutiny carried out by the collaboration. Recently it has been upgraded for the upcoming Run 3 of the LHC to a new web application, which will replace the current version successfully used during Run 1 and Run 2. It consists of a JavaScript web...
Go to contribution page -
Julius Hrivnac (Centre National de la Recherche Scientifique (FR))07/11/2019, 15:15Track 4 – Data Organisation, Management and AccessOral
Data in HEP are usually stored in tuples (tables), trees, nested tuples (trees of tuples) or relational (SQL-like) databases, with or without a defined schema. But many of our data have a graph structure without a schema, or with a weakly imposed schema. They consist of entities with relations, some of which are known in advance, but many are created later, as needs evolve. Such structures are...
Go to contribution page -
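The kind of schema-less, evolving structure described above can be sketched as entities with arbitrary properties plus labelled relations added over time. The class, node and relation names below are invented for illustration and are not the contribution's software:

```python
class Graph:
    """Schema-less graph: entities carry arbitrary key/value properties,
    and labelled relations between them can be added at any time --
    including relations nobody anticipated at design time."""
    def __init__(self):
        self.nodes = {}   # id -> properties dict
        self.edges = []   # (source id, label, target id)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def relate(self, src, label, dst):
        self.edges.append((src, label, dst))

    def neighbours(self, src, label=None):
        """Targets reachable from src, optionally filtered by edge label."""
        return [d for s, l, d in self.edges
                if s == src and (label is None or l == label)]

g = Graph()
g.add_node("run42", type="run", year=2018)
g.add_node("dataset7", type="dataset")
g.relate("run42", "produced", "dataset7")
# A relation created later, as needs evolve:
g.add_node("paper1", type="publication")
g.relate("dataset7", "cited_by", "paper1")
```

Contrast with a relational table: no schema migration is needed when the "cited_by" relation appears, which is the flexibility the abstract argues for.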
Jakob Blomer (CERN)07/11/2019, 15:30
The CernVM File System provides the software and container distribution backbone for most High Energy and Nuclear Physics experiments. It is implemented as a file system in user space (FUSE) module, which permits its execution without any elevated privileges. Yet, mounting the file system in the first place is handled by a privileged suid helper program that is installed by the fuse package on...
Go to contribution page -
Mr Antonin Kveton (Charles University (CZ))07/11/2019, 15:30
In HEP experiments, remote access to control systems is one of the fundamental pillars of efficient operations. At the same time, development of user interfaces with emphasis on usability can be one of the most labor-intensive software tasks to be undertaken in the life cycle of an experiment. While desirable, the development and maintenance of a large variety of interfaces (e.g., desktop...
Go to contribution page -
Enric Tejedor Saavedra (CERN)07/11/2019, 15:30
PyROOT is the name of ROOT’s automatic Python bindings, which give access from Python to all the ROOT functionality implemented in C++. Thanks to the ROOT type system and the Cling C++ interpreter, PyROOT creates Python proxies for C++ entities on the fly, thus avoiding the generation of static bindings beforehand.
PyROOT has been enhanced and modernised to meet the demands of the HEP Python...
Go to contribution page -
Othmane Bouhali (Texas A & M University (US))07/11/2019, 15:30
Simulation is an important tool in the R&D process of detectors and their optimization. Fine tuning of detector parameters and running conditions can be achieved by means of advanced simulation tools, thus reducing the costs associated with prototyping.
In complex detector geometries, large volumes and at high gas gain, however, this simulation becomes computationally expensive and can run for several...
Go to contribution page -
Shengsen Sun (Institute of High Energy Physics)07/11/2019, 15:30
The end cap time-of-flight (ETOF) at the Beijing Spectrometer (BESIII) was upgraded with multi-gap resistive plate chamber technology in order to improve the particle identification capability. Accurate knowledge of the real detector misalignment is important for getting close to the designed time resolution and the expected reconstruction efficiency of the end cap time-of-flight system. The...
Go to contribution page -
Mr Xiaowei Jiang (IHEP)07/11/2019, 15:30
HTCondor, with its high scheduling performance, has been widely adopted for HEP clusters. Unlike other schedulers, HTCondor provides only loose management functions for the worker nodes. We developed a Maintenance Automation Tool, "HTCondor MAT", focusing on dynamic resource management and automatic error handling.
A central database is used to record various attributes of all computing...
Go to contribution page -
Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE))07/11/2019, 15:30
Running a data center is never a trivial job. In addition to daily routine tasks, service operation teams have to provide meaningful information for monitoring, reporting and access-pattern analytics. The dCache production instances at DESY produce gigabytes of billing files per day. However, with the help of modern big-data analysis tools like Apache Spark and Jupyter notebooks, such a task...
Go to contribution page -
Andrea Sciabà (CERN)07/11/2019, 15:30
Two recent software development projects are described: The first is a framework for generating load for an xrootd based disk caching proxy (known as xcache) and verifying the generated data as delivered by the cache. The second is a service to reduce the effect of network latency on application execution time due to writing files to remote storage via the xrootd protocol. For both projects...
Go to contribution page -
Marcin Nowak (Brookhaven National Laboratory (US))07/11/2019, 15:30
During the long shutdown, ATLAS is preparing several fundamental changes to its offline event processing framework and analysis model. These include moving to multi-threaded reconstruction and simulation and reducing data duplication during derivation analysis by producing a combined mini-xAOD stream. These changes will allow ATLAS to take advantage of the higher luminosity at Run 3 without...
Go to contribution page -
Atsushi Mizukami (High Energy Accelerator Research Organization (JP))07/11/2019, 15:30
The LHC is expected to increase its center-of-mass energy to 14 TeV and its instantaneous luminosity to 2.4×10³⁴ cm⁻²s⁻¹ for Run 3, scheduled from 2021 to 2023. In order to cope with the high event rate, an upgrade of the ATLAS trigger system is required.
The Level-1 Endcap Muon trigger system identifies muons with high transverse momentum by combining data from a fast muon trigger detector,...
Go to contribution page -
Iouri Smirnov (Northern Illinois University (US))07/11/2019, 15:30
An overview of the Conditions Database (DB) structure for the hadronic Tile Calorimeter (TileCal), one of the ATLAS detector sub-systems, is presented. The ATLAS Conditions DB stores the data on an ORACLE backend, and the design and implementation have been developed using the COOL (Conditions Objects for LCG) software package as a common persistency solution for the storage and management of the...
Go to contribution page -
Yao Zhang07/11/2019, 15:30
The drift chamber is the main tracking detector for high energy physics experiments like BESIII. Deep learning developments in the last few years have shown tremendous improvements in the analysis of data, especially for object classification and parameter regression. Here we present a first study of deep learning architectures applied to BESIII Monte Carlo data to estimate the track...
Go to contribution page -
Qiumei Ma (IHEP China)07/11/2019, 15:30
Supercomputers and other high performance computing resources can be useful supplements to the BESIII computing resources for simulation production and data analysis. The supercomputer Tianhe-2 ranked No. 1 on the TOP500 list for six consecutive editions from 2013 to 2015. This paper will describe the deployment of Singularity containers as well as the integration...
Go to contribution page -
Ofer Rind07/11/2019, 15:30
Large scientific data centers have recently begun providing a number of different types of data storage, to satisfy the various needs of their users. Users with interactive accounts, for example, might want a POSIX interface for easy access to the data from their interactive machines. Grid computing sites, on the other hand, likely need to provide an X509-based storage protocol, like SRM and...
Go to contribution page -
Xianghu Zhao (Chinese Academy of Sciences (CN))07/11/2019, 15:30
The Circular Electron Positron Collider (CEPC) is designed as a future Higgs factory. Like other high energy physics experiments, its offline software consists of many packages. BSM (Bundled Software Manager) was thus created to simplify the deployment and usage of software with many packages and dependencies.
BSM utilizes git as the software repository. Different software versions...
Go to contribution page -
Christopher Jones (Fermi National Accelerator Lab. (US))07/11/2019, 15:30
The CMS software system, known as CMSSW, has a generalized conditions, calibration, and geometry data products system called the EventSetup. The EventSetup caches the results of reading or calculating data products based on the 'interval of validity' (IOV), the time period for which that data product is appropriate. With the original single-threaded CMSSW framework, updating only...
Go to contribution page -
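The IOV-based caching idea can be sketched in a few lines: a lookup maps a timestamp to its interval and recomputes the payload only when a new interval is first touched. This is a hypothetical illustration of the concept, not CMSSW's EventSetup code:

```python
import bisect

class IOVCache:
    """Cache one conditions payload per interval of validity (IOV):
    a lookup recomputes only when the requested time falls into an
    interval not seen before."""
    def __init__(self, boundaries, compute):
        self.boundaries = boundaries   # sorted IOV start times
        self.compute = compute         # payload factory, called per IOV index
        self.cache = {}
        self.calls = 0                 # how many payloads were actually built

    def get(self, time):
        # Find the interval containing `time`: the last boundary <= time.
        iov = bisect.bisect_right(self.boundaries, time) - 1
        if iov < 0:
            raise ValueError(f"no IOV covers time {time}")
        if iov not in self.cache:
            self.calls += 1
            self.cache[iov] = self.compute(iov)
        return self.cache[iov]

# IOVs starting at times 0, 100 and 500; the payload is just a label here.
conditions = IOVCache([0, 100, 500], lambda i: f"calibration-v{i}")
```

Repeated lookups within one interval are cheap; the expensive read or calculation happens once per IOV, which is the point of the EventSetup design.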
Luca Mascetti (CERN)07/11/2019, 15:30
CERN IT is reviewing its portfolio of applications, with the aim to incorporate open-source solutions wherever possible. In particular, the Windows-centric DFS file system is replaced by CERNBox for certain use-cases.
Access to storage from Windows managed devices for end-users is largely covered by synchronization clients. However, online access using standard CIFS/SMB protocol is required...
Go to contribution page -
Sébastien Gadrat (CC-IN2P3)07/11/2019, 15:30
The High Performance Computing (HPC) domain aims to optimize code in order to use the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. A way to handle these requirements is to use Linux containers. These "light virtual machines" allow one to encapsulate an application within its...
Go to contribution page -
Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE))07/11/2019, 15:30
DESY manages not only one of the largest Tier-2 sites, with about 18 500 CPU cores for Grid workloads, but also about 8000 CPU cores for interactive user analyses. In this presentation, we recapitulate the consolidation of the batch systems in a common HTCondor based setup and the lessons learned, as both use cases differ in their goals. We will then give an outlook on the future...
Go to contribution page -
Xiaowei Jiang (IHEP, Institute of High Energy Physics, Chinese Academy of Sciences)07/11/2019, 15:30
HTCondor has been used to manage the High Throughput Computing (HTC) cluster at IHEP since 2017. Two months later in the same year, a Slurm cluster was set up to run High Performance Computing (HPC) jobs. To provide an accounting service for both the HTCondor and Slurm clusters, a unified accounting system named Cosmos needed to be developed. However, different job workloads bring different accounting...
Go to contribution page -
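The core of such a unified accounting system can be sketched as a normalization of scheduler-specific job records into one common schema (an illustrative sketch, not Cosmos code; the field names only approximate real HTCondor ClassAd and Slurm sacct attributes):

```python
# Per-system field mapping onto the common accounting schema.
# The raw attribute names here are illustrative assumptions.
FIELD_MAPS = {
    "htcondor": {"Owner": "user", "RemoteWallClockTime": "walltime", "RequestCpus": "cpus"},
    "slurm": {"User": "user", "Elapsed": "walltime", "NCPUS": "cpus"},
}

def normalize(system, record):
    """Translate a raw scheduler record into the common schema."""
    mapping = FIELD_MAPS[system]
    out = {common: record[raw] for raw, common in mapping.items()}
    # Walltime is assumed to be in seconds; derive CPU-hours.
    out["cpu_hours"] = out["cpus"] * out["walltime"] / 3600.0
    return out

def account(jobs):
    """Sum CPU-hours per user across both clusters."""
    totals = {}
    for system, record in jobs:
        row = normalize(system, record)
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["cpu_hours"]
    return totals
```

Once every record is in the common schema, a single aggregation pass serves both clusters.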
Qi Fazhi (IHEP)07/11/2019, 15:30
In recent years, along with the rapid development of large scientific facilities and e-science worldwide, various cyber security threats have become a noticeable challenge in many data centers for scientific research, such as DDoS attacks, ransomware, crypto-currency mining, data leaks, etc.
Intrusion and abnormality detection by collecting and analyzing security data is an important...
Go to contribution page -
Julius Hrivnac (Centre National de la Recherche Scientifique (FR))07/11/2019, 15:30
The ATLAS EventIndex Service keeps references to all real and simulated ATLAS events. Hadoop MapFiles and HBase tables are used to store the EventIndex data; a subset of the data is also stored in an Oracle database. Several user interfaces are currently used to access and search the data, from a simple command-line interface, through a programmatic API, to sophisticated graphical web services. The...
Go to contribution page -
Kim Smith (University of Melbourne)07/11/2019, 15:30
Belle II is a global collaboration with over 700 physicists from 113 institutes. In order to fuel the physics analyses, a distributed grid of computing clusters consisting of tens of thousands of CPU-cores will house the multiple petabytes of data that will come out of the detector in years to come. However, the task of easily finding the particular datasets of interest to physicists with...
Go to contribution page -
Mr Alexander Adler (Johann-Wolfgang-Goethe Univ. (DE))07/11/2019, 15:30
Monitoring is an indispensable tool for the operation of any large installation of grid or cluster computing, be it high energy physics or elsewhere. Usually, monitoring is configured to collect a small amount of data, just enough to enable detection of abnormal conditions. Once detected, the abnormal condition is handled by gathering all information from the affected components....
Go to contribution page -
Benjamin LaRoque07/11/2019, 15:30
The Project 8 collaboration aims to measure the absolute neutrino mass or improve on the current limit by measuring the tritium beta decay electron spectrum. We present the current distributed computing model for the Project 8 experiment and requirements for future phases. Project 8 is in its second phase of data taking with a near continuous data rate of 1Gbps. The current computing model...
Go to contribution page -
Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))07/11/2019, 15:30
The Computing Center of the Institute of Physics (CC FZU) of the Czech Academy of Sciences provides compute and storage capacity to several physics experiments. Most resources are used by two LHC experiments, ALICE and ATLAS. In the WLCG, which coordinates computing activities for the LHC experiments, the computing center is a Tier-2. The rest of computing resources is used by astroparticle...
Go to contribution page -
Brian Paul Bockelman (University of Nebraska Lincoln (US))07/11/2019, 15:30
The ATLAS Event Streaming Service (ESS) is an approach to preprocess and deliver data for Event Service (ES) that has implemented a fine-grained approach for ATLAS event processing. The ESS allows one to asynchronously deliver only the input events required by ES processing, with the aim to decrease data traffic over WAN and improve overall data processing throughput. A prototype of ESS is...
Go to contribution page -
Marco Zanetti (Universita e INFN, Padova (IT))07/11/2019, 15:30
CloudVeneto.it was initially funded and deployed by INFN in 2014 for serving the computational and storage demands of INFN research projects mainly related to HEP and Nuclear Physics. It is an OpenStack-based scientific cloud with resources spread across two different sites connected with a high speed optical link: the INFN Padova Unit and the INFN Legnaro National Laboratories. The...
Go to contribution page -
Andreas Joachim Peters (CERN)07/11/2019, 15:30
With the ongoing decommissioning of the AFS filesystem at CERN, many use cases have been migrated to the EOS storage system at CERN.
To cope with additional requirements, the filesystem interface implemented using FUSE has been rewritten since 2017. The new implementation supports strong security in conventional, VM and container environments. It is in production for the CERNBox EOS service...
Go to contribution page -
Katy Ellis (Science and Technology Facilities Council STFC (GB))07/11/2019, 15:30
The STFC CASTOR tape service is responsible for the management of over 80PB of data including 45PB generated by the LHC experiments for the RAL Tier-1. In the last few years there have been several disruptive changes that have or are necessitating significant changes to the service. At the end of 2016, Oracle, which provided the tape libraries, drives and media announced they were leaving the...
Go to contribution page -
Michael Lettrich (Technische Universität Muenchen (DE))07/11/2019, 15:30
With the beginning of LHC Run 3, the upgraded ALICE detector will record Pb-Pb collisions at an interaction rate of 50 kHz using continuous readout, resulting in raw data rates of over 3.5 TB/s, marking a hundredfold increase over Run 2. Since permanent storage at this rate is unfeasible and exceeds available capacities, a sequence of highly effective compression and data reduction steps is...
Go to contribution page -
Sitong An (CERN, Carnegie Mellon University (US))07/11/2019, 15:30
ROOT provides, through TMVA, machine learning tools for data analysis at HEP experiments and beyond. However, with the rapidly evolving ecosystem for machine learning, the focus of TMVA is shifting.
In this poster, we present the new developments and strategy of TMVA, which will allow the analyst to integrate seamlessly, and effectively, different workflows in the diversified...
Go to contribution page -
Heather Gray (LBNL)07/11/2019, 15:30
The ATLAS physics program relies on very large samples of simulated events. Most of these samples are produced with GEANT4, which provides a highly detailed and accurate simulation of the ATLAS detector. However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the...
Go to contribution page -
Rafal Dominik Krawczyk (CERN)07/11/2019, 15:30
This paper evaluates the utilization of RDMA over Converged Ethernet (RoCE) for the Run3 LHCb event building at CERN. The acquisition system of the detector will collect partial data from approximately 1000 separate detector streams. Total estimated throughput equals 40 terabits per second. Full events will be assembled for subsequent processing and data selection in the filtering farm of the...
Go to contribution page -
Johnny Raine (Universite de Geneve (CH))07/11/2019, 15:30
Modeling the physics of a detector's response to particle collisions is one of the most CPU intensive and time consuming aspects of LHC computing. With the upcoming high-luminosity upgrade and the need to have even larger simulated datasets to support physics analysis, the development of new faster simulation techniques but with sufficiently accurate physics performance is required. The...
Go to contribution page -
Ethan Carragher (University of Adelaide)07/11/2019, 15:30
Composite Higgs models (CHMs), in which the Higgs boson is a bound state of an as-yet undetected strongly interacting sector, offer an attractive solution to the hierarchy problem while featuring rich particle phenomenology at the few-TeV scale. Of particular interest is the minimal CHM (MCHM), based on the $SO(5) \to SO(4)$ symmetry breaking pattern. However, the complexity of its parameter...
Go to contribution page -
Mason Proffitt (University of Washington (US))07/11/2019, 15:30
Analysis languages must, first and foremost, carefully describe how to extract and aggregate data. All analysis languages must be able to make a plot of an event’s Missing Energy, for example. Of course, much more complex queries must also be supported, like making the plot of Missing Energy only for events with at least two jets that satisfy certain requirements. A project was started to try...
Go to contribution page -
Diego Rodriguez Rodriguez (CERN)07/11/2019, 15:30
In this paper we introduce and study the feasibility of running hybrid analysis pipelines using the REANA reproducible analysis platform. The REANA platform allows researchers to specify declarative computational workflow steps describing the analysis process and to execute the workflow pipelines on remote containerised Kubernetes-orchestrated compute clouds. We have designed an abstract job...
Go to contribution page -
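The declarative workflow idea can be illustrated with a toy step list and a pluggable executor (a sketch of the concept only, not the REANA specification or API):

```python
# A declarative description: what to run, not how to schedule it.
workflow = {
    "steps": [
        {"name": "gendata", "image": "python:3.11", "command": "generate events"},
        {"name": "fit", "image": "python:3.11", "command": "fit distribution"},
    ]
}

def run(workflow, executor):
    """Hand each step to an executor; locally here, a Kubernetes
    backend in a real platform."""
    results = []
    for step in workflow["steps"]:
        results.append(executor(step["image"], step["command"]))
    return results

# A trivial local executor that just records what it would run.
log = run(workflow, lambda image, cmd: "[%s] %s" % (image, cmd))
```

Because the specification is pure data, the same workflow can be replayed by any executor, which is what makes such pipelines reproducible and hybrid.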
Andrew Lahiff07/11/2019, 15:30
In many countries around the world, the development of national infrastructures for science either has been implemented or are under serious consideration by governments and funding bodies. Current examples include ARDC in Australia, CANARIE in Canada and MTA Cloud in Hungary. These infrastructures provide access to compute and storage to a wide swathe of user communities and represent a...
Go to contribution page -
Manfred Peter Fackeldey (Rheinisch Westfaelische Tech. Hoch. (DE))07/11/2019, 15:30
The VISPA (VISual Physics Analysis) project provides a streamlined work environment for physics analyses and hands-on teaching experiences with a focus on deep learning.
VISPA has already been successfully used in HEP analyses and teaching and is now being further developed into an interactive deep learning platform.
One specific example is to meet knowledge sharing needs in deep learning by...
Go to contribution page -
Tigran Mkrtchyan (DESY)07/11/2019, 15:30
Within the DOMA working group, the QoS activity is looking at how best to describe innovative technologies and deployments. One scenario that has emerged is providing storage that uses end-of-warranty disks: the cheap (almost free) nature of this storage is offset by a much larger likelihood of data loss. In some situations, this trade-off is acceptable, provided the operational overhead of...
Go to contribution page -
Luca Mascetti (CERN)07/11/2019, 15:30
EOS is the key component of the CERN Storage strategy and is behind the success of CERNBox, the CERN cloud synchronisation service which allows syncing and sharing files on all major mobile and desktop platforms aiming to provide offline availability to any data stored in the infrastructure.
CERNBox has seen enormous success within the CERN user community thanks to its ever-increasing...
Go to contribution page -
Barbara Martelli (INFN CNAF)07/11/2019, 15:30
In modern data centers an effective and efficient monitoring system is a critical asset, yet a continuous concern for administrators. Since its birth, the INFN Tier-1 data center, hosted at CNAF, has used various monitoring tools, all replaced a few years ago by a system common to all CNAF departments (based on Sensu, InfluxDB, Grafana). Given the complexity of the inter-dependencies of the...
Go to contribution page -
Jean-Roch Vlimant (California Institute of Technology (US))07/11/2019, 15:30
We present an NDN-based XRootD plugin and associated methods built for data access in CMS and other experiments at the LHC, along with its status and plans for ongoing development.
Named Data Networking (NDN) is a leading Future Internet Architecture where data in the network is accessed directly by its name rather than the location of the host where it resides. NDN enables the...
Go to contribution page -
Jiahui Wei (Universite de Geneve (CH))07/11/2019, 15:30
The Alpha Magnetic Spectrometer (AMS) is a particle physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. The AMS offline software is used for data reconstruction, Monte-Carlo simulation and physics analysis. This paper presents how we manage the offline software, including the version control,...
Go to contribution page -
Federico Carminati (CERN)07/11/2019, 15:30
Accurate particle track reconstruction will be a major challenge for the High Luminosity LHC experiments. Increase in the expected number of simultaneous collisions and the high detector occupancy will make the algorithms extremely demanding in terms of time and computing resources.
The sheer increase in the number of hits would increase the complexity exponentially, however the finite...
Go to contribution page -
Max Fischer (Karlsruhe Institute of Technology)07/11/2019, 15:30
Job schedulers in high energy physics require accurate information about the predicted resource consumption of a job to assign jobs to the most reasonable available resources. For example, job schedulers evaluate information about the runtime, number of requested cores, or size of memory and disk space. Users therefore specify this information when submitting their jobs and workflows. Yet,...
Go to contribution page -
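A history-based alternative to user-supplied estimates, as the abstract above suggests, can be sketched as follows (illustrative only; `predict_walltime` is a hypothetical helper, not part of any real scheduler):

```python
from statistics import median

def predict_walltime(history, task, default=3600):
    """Predict a job's walltime (seconds) as the median of finished
    jobs with the same task label; fall back to a default when the
    task has no history yet."""
    runs = [walltime for t, walltime in history if t == task]
    return median(runs) if runs else default
```

The median is deliberately robust against the occasional stuck or failed job that would skew a mean.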
Federico Carminati (CERN)07/11/2019, 15:30
The Worldwide LHC Computing Grid (WLCG) processes all LHC data and it has been the computing platform that has allowed the discovery of the Higgs Boson. Optimal usage of its resources represents a major challenge. Attempts at simulating this complex and highly non-linear environment did not yield practically usable results. For job submission and management, a satisfactory solution was...
Go to contribution page -
Oksana Shadura (University of Nebraska Lincoln (US))07/11/2019, 15:30
We overview recent changes in the ROOT I/O system, increasing its performance, enhancing its functionality and improving its interaction with other data analysis ecosystems. The newly introduced compression algorithms, the much faster Bulk I/O data path, and a few additional techniques have the potential to significantly improve experiments' software performance. The need for efficient lossless data...
Go to contribution page -
Li Wang (IHEP)07/11/2019, 15:30
Wireless local area network (WLAN) technology is widely used in various enterprises and institutions. To simplify access for users, they often provide a single-SSID access point, with the result that users with different authenticated and authorized identities can connect to the wireless network anytime, anywhere as needed and obtain the same accessible network resources, such as bandwidth,...
Go to contribution page -
Marten Ole Schmidt (Ruprecht Karls Universitaet Heidelberg (DE))07/11/2019, 15:30
In the upcoming LHC Run 3, starting in 2021, the upgraded Time Projection Chamber (TPC) of the ALICE experiment will record minimum bias Pb--Pb collisions in a continuous readout mode at 50 kHz interaction rate. This corresponds to typically 4-5 overlapping collisions in the detector. Despite careful tuning of the new quadruple GEM-based readout chambers, which fulfill the design requirement...
Go to contribution page -
Doris Yangsoo Kim (Soongsil University)07/11/2019, 15:30
The SuperKEKB collider and the Belle II experiment started Phase III at the beginning of 2019. The run is designed to collect a data sample of up to 50/ab at the collision energy of the Upsilon(4S) resonance over the next decade. The Belle II software library was created to ensure the accuracy and efficiency needed to
accommodate this next-generation B factory experiment. The central...
Go to contribution page -
Oliver Gutsche (Fermi National Accelerator Lab. (US))07/11/2019, 15:30
Traditionally, High Energy Physics data analysis is based on the model where data are stored in files and analyzed by running multiple analysis processes, each reading one or more of the data files. This process involves a repeated data reduction step that produces smaller files, which is time consuming and leads to data duplication. We propose an alternative approach to data storage and analysis,...
Go to contribution page -
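The contrast between the file/event-loop model and a columnar alternative can be sketched with plain Python lists standing in for a real columnar store (illustrative only; not the proposed system's API):

```python
# Toy event records, as they would appear in a row-wise file.
events = [{"met": 42.0, "njets": 3}, {"met": 11.0, "njets": 1}, {"met": 90.0, "njets": 2}]

# Row-wise model: iterate full event records one at a time.
selected_rows = [e["met"] for e in events if e["njets"] >= 2]

# Columnar model: keep each attribute as its own column and apply the
# selection to whole columns (vectorized in a real columnar engine),
# reading only the attributes the query touches.
met = [e["met"] for e in events]
njets = [e["njets"] for e in events]
selected_cols = [m for m, n in zip(met, njets) if n >= 2]
```

Both forms give the same answer; the columnar one avoids re-reading (and re-writing) full event records for each reduction step.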
Dr Marcin Slodkowski (Warsaw University of Technology (PL))07/11/2019, 15:30
In this work, we focus on assessing the contribution of the initial-state fluctuations of heavy ion collisions in hydrodynamic simulations. We try to answer the question of whether the hydrodynamic simulation retains the same level of fluctuation in the final state as in the initial stage, or whether, in another scenario, the fluctuations drown in the final distribution of...
Go to contribution page -
Brian Paul Bockelman (University of Nebraska Lincoln (US))07/11/2019, 15:30
LHC data is constantly being moved between computing and storage sites to support analysis, processing, and simulation; this is done at a scale that is currently unique within the science community. For example, the CMS experiment on the LHC manages approximately 200PB of data and, on a daily basis, moves 1PB between sites. This talk shows the performance results we have produced of exploring...
Go to contribution page -
Dr Evgeny Lavrik (Facility for Antiproton and Ion Research)07/11/2019, 15:30
TGenBase is a virtual database engine which allows one to communicate with and store data in different underlying database management systems, such as PostgreSQL, MySQL and SQLite, based on the configuration. It is universally applicable to any data storage task, such as parameter handling, detector component description, logistics, etc. In addition to the usual CRUD operations (create, read, update, delete), it...
Go to contribution page -
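A backend-agnostic CRUD layer of the kind the abstract describes can be sketched over Python's DB-API (an illustrative sketch, not TGenBase code; the schema and class are hypothetical). Swapping the connection object would target PostgreSQL or MySQL instead of SQLite:

```python
import sqlite3

class CrudStore:
    """Minimal CRUD wrapper over a DB-API connection."""

    def __init__(self, conn, table):
        self.conn, self.table = conn, table
        conn.execute(
            "CREATE TABLE IF NOT EXISTS %s "
            "(id INTEGER PRIMARY KEY, name TEXT, value TEXT)" % table
        )

    def create(self, name, value):
        cur = self.conn.execute(
            "INSERT INTO %s (name, value) VALUES (?, ?)" % self.table, (name, value)
        )
        return cur.lastrowid

    def read(self, rowid):
        row = self.conn.execute(
            "SELECT name, value FROM %s WHERE id = ?" % self.table, (rowid,)
        ).fetchone()
        return None if row is None else {"name": row[0], "value": row[1]}

    def update(self, rowid, value):
        self.conn.execute(
            "UPDATE %s SET value = ? WHERE id = ?" % self.table, (value, rowid)
        )

    def delete(self, rowid):
        self.conn.execute("DELETE FROM %s WHERE id = ?" % self.table, (rowid,))
```

Only the connection construction is backend-specific; the four CRUD calls stay identical across database engines.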
Michel Hernandez Villanueva (University of Mississippi)07/11/2019, 15:30
The Belle II experiment is a major upgrade of the e+e- asymmetric collider Belle, expected to produce tens of petabytes of data per year due to the luminosity increase with the SuperKEKB accelerator. The distributed computing system of the Belle II experiment plays a key role, storing and distributing data in a reliable way, to be easily accessed and analyzed by the more than 800...
Go to contribution page -
Kenneth Richard Herner (Fermi National Accelerator Laboratory (US))07/11/2019, 15:30
The DESGW group seeks to identify electromagnetic counterparts of gravitational wave events seen by the LIGO-VIRGO network, such as those expected from binary neutron star mergers or neutron star- black hole mergers. DESGW was active throughout the first two LIGO observing seasons, following up several binary black hole mergers and the first binary neutron star merger, GW170817. We describe...
Go to contribution page -
Dr Venkitesh Ayyar (Lawrence Berkeley National Lab)07/11/2019, 15:30
The success of Convolutional Neural Networks (CNNs) in image classification has prompted efforts to study their use for classifying image data obtained in Particle Physics experiments.
In this poster, I will discuss our efforts to apply CNNs to 3D image data from particle physics experiments to classify signal and background. In this work, we present an extensive 3D convolutional neural...
Go to contribution page -
Steven Farrell (Lawrence Berkeley National Lab (US))07/11/2019, 15:30
Communication among processes is generating considerable interest in the scientific computing community due to the increasing use of distributed memory systems. In the field of high energy physics (HEP), however, little research has been addressed on this topic. More precisely in ROOT I/O, the de facto standard for data persistence in HEP applications, no such feature is provided. In order to...
Go to contribution page -
Remi Mommsen (Fermi National Accelerator Lab. (US))07/11/2019, 15:30
The CMS experiment at CERN is working to improve the selection capability of the High Level Trigger (HLT) system, in view of the re-start of the collisions for Run 3. One key factor on this scope is to enhance the ability of the Trigger to track the detector evolution during the data taking, along with the LHC Fill cycles. In particular, the HLT performance is sensitive to two areas of...
Go to contribution page -
Mr Dmitriy Maximov (Budker Institute of Nuclear Physics)07/11/2019, 15:30
The KEDR experiment is ongoing at the VEPP-4M e+e- collider at Budker INP in Novosibirsk. The collider center-of-mass energy range covers a wide area from 2 to 11 GeV. Most of the up-to-date statistics were taken at the lower end of the energy range, around the charmonia region. Planned activities at greater energies up to bottomonia would lead to a significant rise of event recording rates and...
Go to contribution page -
Andreas Joachim Peters (CERN)07/11/2019, 15:30
The EOS storage system in use at CERN and several other HEP sites was developed with an access control system, driven by known use cases, that is still in its infancy.
Here we motivate the decision to strive supporting the RichACL standard as far as the EOS design allows. We highlight a characteristic that fits particularly well with access control for other applications at CERN, and show...
Go to contribution page -
Mr James Biddle (University of Adelaide)07/11/2019, 15:30
Despite the success of quantum chromodynamics (QCD) in describing the strong nuclear force, a clear picture of how this theory gives rise to the distinctive properties of confinement and dynamical chiral symmetry breaking at low energy is yet to be found. One of the more promising models used to explain these phenomena in recent times is known as the centre vortex model. In this work we...
Go to contribution page -
Sebastian Bukowiec (CERN)07/11/2019, 15:30
In the CERN laboratory, users have access to a large number of different licensed software assets. The landscape of such assets is very heterogeneous, including Windows operating systems, office tools and specialized technical and engineering software. In order to improve management of the licensed software and to better understand the needs of the users, it was decided to develop a Winventory...
Go to contribution page -
Daniel Crawford (Virginia Tech/Molecular Sciences Software Institute)07/11/2019, 16:30Plenary
-
Andrea Rizzi (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, P)07/11/2019, 17:00Plenary
-
Gordon Watts (University of Washington (US)), Maria Girone (CERN)07/11/2019, 17:30Plenary
-
Graeme A Stewart (CERN)08/11/2019, 08:55Plenary
-
Steven Schramm (Universite de Geneve (CH))08/11/2019, 09:00Plenary
-
Paul James Laycock (Brookhaven National Laboratory (US))08/11/2019, 09:15Plenary
-
Tomoe Kishimoto (University of Tokyo (JP))08/11/2019, 09:30Plenary
-
Tigran Mkrtchyan (DESY), Tigran Mkrtchyan (Ruprecht Karls Universitaet Heidelberg (DE))08/11/2019, 09:45Plenary
-
Martin Ritter (LMU / Cluster Universe)08/11/2019, 10:00Plenary
-
Phiala Shanahan (Massachusetts Institute of Technology)08/11/2019, 10:15Plenary
-
Christoph Wissing (Deutsches Elektronen-Synchrotron (DE))08/11/2019, 11:00Plenary
-
Marzena Lapka (CERN)08/11/2019, 11:15Plenary
-
Wei Yang (SLAC National Accelerator Laboratory (US))08/11/2019, 11:30Plenary
-
Teng Jian Khoo (Universite de Geneve (CH))08/11/2019, 11:45Plenary
-
Amber Boehnlein (Jefferson Lab)08/11/2019, 11:55Plenary
-
Dr Waseem Kamleh (University of Adelaide)08/11/2019, 12:15Plenary