Emanouil Atanassov (Unknown), 09/07/2018, 09:15 (presentation)
The region of South-East Europe has a long history of successful collaboration in sharing resources and managing distributed electronic infrastructures for the needs of research communities. HPC resources such as supercomputers and large clusters with low-latency interconnects are especially valuable and scarce in the region. Building upon the successfully tested operational and...
-
João Fernandes (CERN), 09/07/2018, 09:45
Helix Nebula Science Cloud (HNSciCloud) has developed a hybrid cloud platform that links together commercial cloud service providers and research organisations’ in-house IT resources via the GEANT network. The platform offers data management capabilities with transparent data access, where applications can be deployed without modification on both sides of the hybrid cloud, and compute services...
-
Jurry de la Mar (T-Systems International GmbH), 09/07/2018, 10:00 (presentation)
As the result of joint R&D work with 10 of Europe’s leading public research organisations, led by CERN and funded by the EU, T-Systems provides a hybrid cloud solution, enabling science users to seamlessly extend their existing e-Infrastructures with one of the leading European public cloud services based on OpenStack – the Open Telekom Cloud. With this new approach large-scale data-intensive...
-
Mr Alastair Pidgeon (RHEA System S.A.), 09/07/2018, 10:15 (presentation)
Ten of Europe’s leading public research organisations led by CERN launched the Helix Nebula Science Cloud (HNSciCloud) Pre-Commercial Procurement to establish a European hybrid cloud platform that will support the high-performance, data-intensive scientific use-cases of this “Buyers Group” and of the research sector at large. It calls for the design and implementation of innovative...
-
Benedetto Gianluca Siddi (Universita di Ferrara & INFN (IT)), 09/07/2018, 11:00
Faster alternatives to a full, GEANT4-based simulation are being pursued within the LHCb experiment. In this context the integration of the Delphes toolkit in the LHCb simulation framework is intended to provide a fully parameterized option. Delphes is a modular software designed for general-purpose experiments such as ATLAS and CMS to quickly propagate stable particles using a parametric...
-
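The parametric approach behind fast-simulation tools such as Delphes can be illustrated with a toy sketch (this is not Delphes or LHCb code; the resolution value is an invented placeholder): instead of tracking particles through a detailed detector model, each particle's momentum is simply smeared with a parameterized resolution function.

```python
import random

def smear_momentum(p_true, resolution=0.01):
    """Toy parametric detector response: Gaussian momentum smearing.

    `resolution` is the fractional momentum resolution sigma(p)/p,
    a made-up placeholder, not an actual Delphes parameter.
    """
    return random.gauss(p_true, resolution * p_true)

def fast_simulate(event, resolution=0.01):
    """Replace each true momentum in the event by its smeared value."""
    return [smear_momentum(p, resolution) for p in event]

random.seed(42)
event = [10.0, 25.0, 100.0]  # GeV, toy stable particles
reco = fast_simulate(event)
# Smeared momenta stay within a few sigma of the true values.
assert all(abs(r - t) / t < 0.1 for r, t in zip(reco, event))
```

The speed gain of the real tools comes from replacing per-step particle transport with a single draw from such a response function per particle.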
Mohammad Al-Turany (CERN), 09/07/2018, 11:00
ALFA is a message-queuing-based framework for online and offline data processing. It is a flexible framework that supports an actor-based computational model and allows an experiment-defined data model to be implemented on top. The design of ALFA is modular, with separate layers for data transport, process management and process deployment. Although still under ongoing development, ALFA is already...
-
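The actor-based, message-queue-driven model that ALFA supports can be sketched in plain Python with `queue` and `threading` (this is an illustration of the pattern, not the actual ALFA/FairMQ API): independent processing stages communicate only via message queues, with a sentinel marking end-of-stream.

```python
import queue
import threading

def producer(out_q, n):
    """Source actor: emits raw 'events' onto its output queue."""
    for i in range(n):
        out_q.put(i)
    out_q.put(None)  # end-of-stream marker

def processor(in_q, out_q):
    """Worker actor: transforms each message, forwards the result."""
    while (msg := in_q.get()) is not None:
        out_q.put(msg * msg)
    out_q.put(None)

raw, processed = queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=producer, args=(raw, 5)),
    threading.Thread(target=processor, args=(raw, processed)),
]
for t in threads:
    t.start()
results = []
while (msg := processed.get()) is not None:
    results.append(msg)
for t in threads:
    t.join()
# results == [0, 1, 4, 9, 16]
```

Because the stages share no state, the same topology can be redeployed across processes or machines by swapping the transport layer, which is the point of separating transport from process management.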
David Schultz (University of Wisconsin-Madison), 09/07/2018, 11:00
IceCube Neutrino Observatory is a neutrino detector located at the South Pole. Here we present experiences acquired when using HTCondor to run IceCube’s GPU simulation worksets on the Titan supercomputer. Titan is a large supercomputer geared for High Performance Computing (HPC). Several factors make it challenging to use Titan for IceCube’s High Throughput Computing (HTC) workloads: (1) Titan...
-
Patrick Meade (University of Wisconsin-Madison), 09/07/2018, 11:00
IceCube is a cubic kilometer neutrino detector located at the South Pole. Every year, 29 TB of data are transmitted via satellite, and 365 TB of data are shipped on archival media, to the data warehouse in Madison, WI, USA. The JADE Long Term Archive (JADE-LTA) software indexes and bundles IceCube files and transfers the archive bundles for long term storage and preservation into tape silos...
-
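The index-and-bundle step described above can be sketched with the standard library (file names and the JSON index layout are invented for illustration and are not JADE-LTA's actual format): files are packed into one tar bundle while a checksum index is written alongside it for later verification.

```python
import hashlib
import json
import tarfile
import tempfile
from pathlib import Path

def bundle_files(files, bundle_path, index_path):
    """Pack `files` into one tar bundle and write a JSON index of
    per-file SHA-256 checksums, mimicking an index-and-bundle step."""
    index = {}
    with tarfile.open(bundle_path, "w") as tar:
        for f in files:
            tar.add(f, arcname=f.name)
            index[f.name] = hashlib.sha256(f.read_bytes()).hexdigest()
    Path(index_path).write_text(json.dumps(index, indent=2))
    return index

# Toy demonstration with temporary files standing in for detector data.
tmp = Path(tempfile.mkdtemp())
for i in range(3):
    (tmp / f"run_{i}.dat").write_bytes(bytes([i]) * 16)
files = sorted(tmp.glob("*.dat"))
index = bundle_files(files, tmp / "bundle.tar", tmp / "bundle.json")
with tarfile.open(tmp / "bundle.tar") as tar:
    assert sorted(tar.getnames()) == [f.name for f in files]
```

Keeping the checksums outside the bundle lets an archive system verify tape copies without unpacking them.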
Mayank Sharma (CERN), 09/07/2018, 11:00, Track 7 – Clouds, virtualization and containers (presentation)
The WLCG unites resources from over 169 sites spread across the world, and the number is expected to grow in the coming years. However, setting up and configuring new sites to support WLCG workloads is still not a straightforward task and often requires significant assistance from WLCG experts. A survey presented at CHEP 2016 revealed a strong wish among site admins for a reduction of overheads...
-
Maria Girone (CERN), 09/07/2018, 11:00, Track 6 – Machine learning and physics analysis (presentation)
The High Luminosity LHC (HL-LHC) represents an unprecedented computing challenge. For the program to succeed, the current estimates from the LHC experiments for the amount of processing and storage required are roughly 50 times more than is currently deployed. Although some of the increased capacity will be provided by technology improvements over time, the computing budget is expected to...
-
Dr Benjamin Richards (Queen Mary University London), 09/07/2018, 11:00
Data Acquisition (DAQ) systems are a vital component of every experiment. The purpose of the underlying software of these systems is to coordinate all the hardware components and detector states, providing the means of data readout, triggering, online processing, persistence, user control and the routing of data. These tasks are made more challenging when also considering fault tolerance,...
-
Michael Davis (CERN), 09/07/2018, 11:15
The first production version of the CERN Tape Archive (CTA) software is planned to be released by the end of 2018. CTA is designed to replace CASTOR as the CERN tape archive solution, in order to face the scalability and performance challenges arriving with LHC Run 3.
This contribution will describe the main commonalities and differences of CTA with CASTOR. We outline the functional enhancements...
-
Giuseppe Avolio (CERN), 09/07/2018, 11:15, Track 7 – Clouds, virtualization and containers (presentation)
The ATLAS experiment at the LHC relies on a complex and distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data. The High Level Trigger (HLT) component of the TDAQ system is responsible for executing advanced selection algorithms, reducing the data rate to a level suitable for recording to permanent storage. The HLT functionality is provided by a...
-
Jana Schaarschmidt (University of Washington (US)), 09/07/2018, 11:15
ATLAS relies on very large samples of simulated events for delivering high-quality and competitive physics results, but producing these samples takes much time and is very CPU intensive when using the full GEANT4 detector simulation. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Runs 1 and 2, a...
-
Kim Albertsson (Lulea University of Technology (SE)), 09/07/2018, 11:15, Track 6 – Machine learning and physics analysis (presentation)
In this talk, we will describe the latest additions to the Toolkit for Multivariate Analysis (TMVA), the machine learning package integrated into the ROOT framework. In particular, we will focus on the new deep learning module that contains robust fully-connected, convolutional and recurrent deep neural networks implemented on CPU and GPU architectures. We will present performance of these new...
-
Gianfranco Sciacca, 09/07/2018, 11:15
Predictions of the requirements for LHC computing for Run 3 and for Run 4 (HL-LHC) over the course of the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. The use of large scale computational...
-
Dr Dario Berzano (CERN), 09/07/2018, 11:15
The ALICE experiment at the LHC (CERN) is currently developing a new software framework designed for Run 3: detector and software will have to cope with Pb-Pb collision rates 100 times higher than today, leading to the combination of core Online-Offline operations into a single framework called O². The analysis code is expected to run on a few large Analysis Facilities counting 20k cores and...
-
Ondrej Subrt (Czech Technical University (CZ)), 09/07/2018, 11:15
Recently, the stability of the Data Acquisition System (DAQ) has become a vital precondition for successful data taking in high energy physics experiments. The intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN is designed to be able to read out data at the maximum rate of the experiment and to run in a mode without any stops. DAQ systems fulfilling such...
-
Tigran Mkrtchyan (DESY), 09/07/2018, 11:30
The dCache project provides open-source storage software deployed internationally to satisfy ever more demanding scientific storage requirements. Its multifaceted approach provides an integrated way of supporting different use-cases with the same storage, from high throughput data ingest, through wide access and easy integration with existing systems. In supporting new communities, such as...
-
Wahid Bhimji (Lawrence Berkeley National Lab. (US)), 09/07/2018, 11:30
Many HEP experiments are moving beyond experimental studies to making large-scale production use of HPC resources at NERSC, including the Knights Landing architecture on the Cori supercomputer. These include ATLAS, ALICE, Belle II, CMS, LSST-DESC, and STAR among others. Achieving this has involved several different approaches and has required innovations both on NERSC’s and the experiments’ sides....
-
Matteo Rama (Universita & INFN Pisa (IT)), 09/07/2018, 11:30
In HEP experiments CPU resources required by MC simulations are constantly growing and becoming a very large fraction of the total computing power (greater than 75%). At the same time the pace of performance improvements given by technology is slowing down, so the only solution is a more efficient use of resources. Efforts are ongoing in the LHC experiment collaborations to provide multiple...
-
Eric Vaandering (Fermi National Accelerator Lab. (US)), 09/07/2018, 11:30
Weak gravitational lensing is an extremely powerful probe for gaining insight into the nature of two of the greatest mysteries of the universe: dark energy and dark matter. To help prepare for the massive amounts of data coming from next generation surveys like LSST that hope to advance our understanding of these mysteries, we have developed an automated and seamless weak lensing cosmic...
-
Maiken Pedersen (University of Oslo (NO)), 09/07/2018, 11:30, Track 7 – Clouds, virtualization and containers (presentation)
The cloud computing paradigm allows scientists to elastically grow or shrink computing resources as requirements demand, so that resources only need to be paid for when necessary. The challenge of integrating cloud computing into distributed computing frameworks used by HEP experiments has led to many different solutions in recent years; however, none of these solutions offer a complete,...
-
Mr Marco Boretto (CERN), 09/07/2018, 11:30
The NA62 experiment looks for the extremely rare kaon decay K+->pinunu and aims at measuring its branching ratio with a 10% accuracy. In order to do so, a very high intensity secondary beam from the CERN SPS is used to produce charged kaons whose decay products are detected by many detectors installed along a 150 m decay region. The NA62 Data Acquisition system exploits a multilevel trigger...
-
Eduardo Rodrigues (University of Cincinnati (US)), 09/07/2018, 11:30, Track 6 – Machine learning and physics analysis (presentation)
The Scikit-HEP project is a community-driven and community-oriented effort with the aim of providing Particle Physics at large with a Python scientific toolset containing core and common tools. The project builds on five pillars that embrace the major topics involved in a physicist’s analysis work: datasets, data aggregations, modelling, simulation and visualisation. The vision is to build a...
-
Alexei Klimentov (Brookhaven National Laboratory (US)), 09/07/2018, 11:45
The Titan supercomputer at Oak Ridge National Laboratory prioritizes the scheduling of large leadership-class jobs, but even when the supercomputer is fully loaded and large jobs are standing in the queue to run, 10 percent of the machine remains available for a mix of smaller jobs, essentially ‘filling in the cracks’ between the very large jobs. Such utilisation of the computer resources is...
-
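The 'filling in the cracks' idea is essentially backfill scheduling. A minimal greedy sketch (core counts are invented; this is not Titan's actual scheduler policy): after the large leadership jobs are placed, small jobs are fitted into whatever cores remain idle.

```python
def backfill(total_cores, large_jobs, small_jobs):
    """Greedy backfill: after placing the large jobs, fit as many
    small jobs as possible into the remaining idle cores."""
    used = sum(large_jobs)
    if used > total_cores:
        raise ValueError("large jobs alone exceed the machine")
    placed = []
    for cores in sorted(small_jobs):  # smallest first maximises count
        if used + cores <= total_cores:
            placed.append(cores)
            used += cores
    return placed, total_cores - used

# Toy machine: 100 cores, where the large jobs leave a 10-core 'crack'.
placed, idle = backfill(100, large_jobs=[60, 30], small_jobs=[8, 4, 3, 2])
# placed == [2, 3, 4]; idle == 1
```

Real backfill schedulers also account for job runtimes so that small jobs never delay the start of a queued large job; that dimension is omitted here for brevity.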
David Schultz (University of Wisconsin-Madison), 09/07/2018, 11:45, Track 7 – Clouds, virtualization and containers (presentation)
IceCube is a cubic kilometer neutrino detector located at the south pole. CVMFS is a key component to IceCube’s Distributed High Throughput Computing analytics workflow for sharing 500GB of software across datacenters worldwide. Building the IceCube software suite across multiple platforms and deploying it into CVMFS has until recently been a manual, time consuming task that doesn’t fit well...
-
Andrei Kazarov (Petersburg Nuclear Physics Institut (RU)), 09/07/2018, 11:45
The Trigger and DAQ (TDAQ) system of the ATLAS experiment is a complex distributed computing system, composed of O(30000) applications running on a farm of computers. The system is operated by a crew of operators on shift. An important aspect of operations is to minimize the downtime of the system caused by runtime failures, such as human errors, unawareness, miscommunication, etc. The...
-
Viktoriia Chekalina (Yandex School of Data Analysis (RU)), 09/07/2018, 11:45
The goal to obtain more precise physics results in current collider experiments drives the plans to significantly increase the instantaneous luminosity collected by the experiments. The increasing complexity of the events due to the resulting increased pileup requires new approaches to triggering, reconstruction, analysis, and event simulation. The last task leads to a critical problem:...
-
Thomas Paul Charman (University of London (GB)), 09/07/2018, 11:45, Track 6 – Machine learning and physics analysis (presentation)
HIPSTER (Heavily Ionising Particle Standard Toolkit for Event Recognition) is an open source Python package designed to facilitate the use of TensorFlow in a high energy physics analysis context. The core functionality of the software is presented, with images from the MoEDAL experiment Nuclear Track Detectors (NTDs) serving as an example dataset. Convolutional neural networks are selected as...
-
Konstantin Gertsenberger (Joint Institute for Nuclear Research (RU)), 09/07/2018, 11:45
The software for detector simulation, reconstruction and analysis of physics data is an essential part of each high-energy physics experiment. A new generation of experiments for relativistic nuclear physics is expected to start up in the coming years at the Nuclotron-based Ion Collider facility (NICA) under construction at the Joint Institute for Nuclear Research in Dubna:...
-
Dr Doris Ressmann (KIT), 09/07/2018, 11:45
Tape storage is still a cost effective way to keep large amounts of data over a long period of time. It is expected that this will continue in the future. The GridKa tape environment is a complex system of many hardware components and software layers. Configuring this system for optimal performance for all use cases is a non-trivial task and requires a lot of experience. We present the current...
-
Sofia Vallecorsa (Gangneung-Wonju National University (KR)), 09/07/2018, 12:00
Machine Learning techniques have been used in different applications by the HEP community: in this talk, we discuss the case of detector simulation. The amount of simulated events, expected in the future for LHC experiments and their High Luminosity upgrades, is increasing dramatically and requires new fast simulation solutions. We will describe an R&D activity, aimed at providing a...
-
Ignacio Asensi Tortajada (Univ. of Valencia and CSIC (ES)), 09/07/2018, 12:00
Technical details of the directly manipulated systems, and the impact on non-obviously connected systems, are required knowledge when preparing an intervention in a complex experiment like ATLAS. In order to improve the understanding of the parties involved in an intervention, a rule-based expert system has been developed. On the one hand this helps to recognize dependencies that are not always...
-
Daniele Spiga (Universita e INFN, Perugia (IT)), 09/07/2018, 12:00, Track 7 – Clouds, virtualization and containers (presentation)
Reducing time and cost by increasing setup and operational efficiency is key nowadays when exploiting private or commercial clouds. In turn this means that reducing the learning curve, as well as the operational cost of managing community-specific services running on distributed environments, has become a key to success and sustainability, even more so for communities seeking to exploit...
-
Ms Anna Elizabeth Woodard (Computation Institute, University of Chicago), 09/07/2018, 12:00, Track 6 – Machine learning and physics analysis (presentation)
In the traditional HEP analysis paradigm, code, documentation, and results are separate entities that require significant effort to keep synchronized, which hinders reproducibility. Jupyter notebooks allow these elements to be combined into a single, repeatable narrative. HEP analyses, however, commonly rely on complex software stacks and the use of distributed computing resources,...
-
Dr Hannes Sakulin (CERN), 09/07/2018, 12:00
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of 26000 cores and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances...
-
Pavlo Svirin, 09/07/2018, 12:00
PanDA executes millions of ATLAS jobs a month on Grid systems with more than 300k cores. Currently, PanDA is compatible with only a few HPC resources due to different edge services and operational policies, does not implement the pilot paradigm on HPC, and does not dynamically optimize resource allocation among queues. We integrated the PanDA Harvester service and the RADICAL-Pilot (RP) system...
-
Valentin Y Kuznetsov (Cornell University (US)), 09/07/2018, 12:00
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this talk we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system...
-
Sandro Christian Wenzel (CERN), 09/07/2018, 12:15
In the context of the common online-offline computing infrastructure for Run3 (ALICE-O2), ALICE is reorganizing its detector simulation software to be based on FairRoot, offering a common toolkit to implement simulation based on the Virtual-Monte-Carlo (VMC) scheme. Recently, FairRoot has been augmented by ALFA, a software framework developed in collaboration between ALICE and FAIR, offering...
-
Marica Antonacci, 09/07/2018, 12:15, Track 7 – Clouds, virtualization and containers (presentation)
In the framework of the H2020 INDIGO-DataCloud project we have implemented an advanced solution for the automatic deployment of digital data repositories based on Invenio, the digital library framework developed by CERN. Exploiting cutting-edge technologies, like Docker and Apache Mesos, and standard interfaces like TOSCA, we are able to provide a service that simplifies the process of creating...
-
Kosuke Takeda (Kobe University (JP)), 09/07/2018, 12:15
In 2019, the ATLAS experiment at CERN is planning an upgrade in order to cope with the higher luminosity requirements. In this upgrade, the installation of the new muon chambers for the end-cap muon system will be carried out. Muon track reconstruction performance can be improved, and fake triggers can be reduced. It is also necessary to develop the readout system of trigger data for the...
-
Jerome Odier (IN2P3/CNRS (FR)), 09/07/2018, 12:15
AMI (ATLAS Metadata Interface) is a generic ecosystem for metadata aggregation, transformation and cataloguing. Benefitting from more than 15 years of feedback in the LHC context, the second major version was recently released. We describe the design choices and their benefits for providing high-level metadata-dedicated features. In particular, we focus on the implementation of the Metadata...
-
Julia Andreeva (CERN), 09/07/2018, 12:15
The WLCG computing infrastructure provides distributed storage capacity hosted at the geographically dispersed computing sites. In order to effectively organize storage and processing of the LHC data, the LHC experiments require a reliable and complete overview of the storage capacity in terms of the occupied and free space, the storage shares allocated to different computing activities, and...
-
Federico Carminati (CERN), 09/07/2018, 14:00
In spite of the fact that HEP computing has evolved considerably over the years, the understanding of the evolution process still seems incomplete. There is no clear procedure to replace an established product with a new one, and most of the successful major transitions (e.g. PAW to ROOT or Geant3 to Geant4) have involved a large dose of serendipity and have caused splits in the...
-
Vito Di Benedetto (Fermi National Accelerator Lab. (US)), 09/07/2018, 14:00
The FabrIc for Frontier Experiments (FIFE) project within the Fermilab Scientific Computing Division is charged with integrating offline computing components into a common computing stack for the non-LHC Fermilab experiments, supporting experiment offline computing, and consulting on new, novel workflows. We will discuss the general FIFE onboarding strategy, the upgrades and enhancements in...
-
Samuel Cadellin Skipsey, 09/07/2018, 14:00
Pressures from both WLCG VOs and externalities have led to a desire to "simplify" data access and handling for Tier-2 resources across the Grid. This has mostly been imagined in terms of reducing book-keeping for VOs, and the total number of replicas needed across sites. One common direction of motion is to increase the amount of remote access to data for jobs, which is also seen as enabling the...
-
Victor Daniel Elvira (Fermi National Accelerator Lab. (US)), 09/07/2018, 14:00
Detector simulation has become fundamental to the success of modern high-energy physics (HEP) experiments. For example, the Geant4-based simulation applications developed by the ATLAS and CMS experiments played a major role for them to produce physics measurements of unprecedented quality and precision with faster turnaround, from data taking to journal submission, than any previous hadron...
-
Edoardo Martelli (CERN), 09/07/2018, 14:00
WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area, in partnership with WLCG, has focused on collecting, storing and making available all the network-related metrics for further...
-
Luis Granado Cardoso (CERN), 09/07/2018, 14:00
LHCb is one of the 4 experiments at the LHC accelerator at CERN, specialized in b-physics. During the next long shutdown period, the LHCb experiment will be upgraded to a trigger-less readout system with a full software trigger, in order to be able to record data at a much higher instantaneous luminosity. To achieve this goal, the upgraded systems for trigger, timing and fast control (TFC)...
-
Jessica Stietzel (College of the Holy Cross), 09/07/2018, 14:00, Track 6 – Machine learning and physics analysis (presentation)
Neural networks, and recently, specifically deep neural networks, are attractive candidates for machine learning problems in high energy physics because they can act as universal approximators. With a properly defined objective function and sufficient training data, neural networks are capable of approximating functions for which physicists lack sufficient insight to derive an analytic,...
-
Dr Teng Li (University of Edinburgh), 09/07/2018, 14:15
The XCache (XRootD Proxy Cache) provides a disk-based caching proxy for data access via the XRootD protocol. This can be deployed at WLCG Tier-2 computing sites to provide a transparent cache service for the optimisation of data access, placement and replication. We will describe the steps to enable full read/write operations to storage endpoints consistent with the distributed data...
-
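Not XCache's actual implementation, but the read-through caching pattern it embodies can be sketched in a few lines: a proxy serves reads from local storage when possible and fetches from the remote origin on a miss (here a plain dict stands in for both the remote XRootD endpoint and the local disk; the path is invented).

```python
class ReadThroughCache:
    """Minimal read-through proxy: local hits are served from `cache`,
    misses are fetched from `origin` and cached for later reads."""

    def __init__(self, origin):
        self.origin = origin      # stands in for remote XRootD storage
        self.cache = {}           # stands in for the local disk cache
        self.hits = self.misses = 0

    def read(self, path):
        if path in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[path] = self.origin[path]  # remote fetch
        return self.cache[path]

origin = {"/store/data/run1.root": b"event data"}
proxy = ReadThroughCache(origin)
proxy.read("/store/data/run1.root")   # miss: fetched from origin
proxy.read("/store/data/run1.root")   # hit: served from local cache
assert (proxy.hits, proxy.misses) == (1, 1)
```

The transparency mentioned in the abstract comes from the fact that clients call the same `read` interface whether or not the data is already cached.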
Alexey Anisenkov (Budker Institute of Nuclear Physics (RU)), 09/07/2018, 14:15
Data acquisition and control play an important role in science applications, especially in modern high energy physics (HEP) experiments. A comprehensive and efficient monitoring system is a vital part of any HEP experiment. In this paper we describe the web-based software framework which is currently used by the CMD-3 Collaboration during data taking with the CMD-3 Detector at the VEPP-2000...
-
Mr Maxim Borisyak (National Research University Higher School of Economics), 09/07/2018, 14:15, Track 6 – Machine learning and physics analysis (presentation)
High Energy Physics experiments often rely on Monte-Carlo event generators. Such generators often contain a large number of parameters and need fine-tuning to closely match experimentally observed data. This task traditionally requires expert knowledge of the generator and the experimental setup, as well as vast computing power. Generative Adversarial Networks (GANs) are a powerful method to match...
-
Kevin Pedro (Fermi National Accelerator Lab. (US)), 09/07/2018, 14:15
The CMS full simulation using Geant4 has delivered billions of simulated events for analysis during Runs 1 and 2 of the LHC. However, the HL-LHC dataset will be an order of magnitude larger, with a similar increase in occupancy per event. In addition, the upgraded CMS detector will be considerably more complex, with an extended silicon tracker and a high granularity calorimeter in the endcap...
-
Eric Vaandering (Fermi National Accelerator Lab. (US)), 09/07/2018, 14:15
HEPCloud is rapidly becoming the primary system for provisioning compute resources for all Fermilab-affiliated experiments. In order to reliably meet peak demands of the next generation of High Energy Physics experiments, Fermilab must either plan to locally provision enough resources to cover the forecasted need, or find ways to elastically expand its computational capabilities. Commercial...
-
David Kelsey (STFC-Rutherford Appleton Laboratory (GB)), 09/07/2018, 14:15
The fraction of general internet traffic carried over IPv6 continues to grow rapidly. The transition of WLCG central and storage services to dual-stack IPv4/IPv6 is progressing well, thus enabling the use of IPv6-only CPU resources as agreed by the WLCG Management Board and presented by us at CHEP2016. By April 2018, all WLCG Tier 1 data centres will provide access to their services over IPv6....
-
Dario Berzano (CERN), Chris Burr (University of Manchester (GB)), 09/07/2018, 14:15
Good software training is essential in the HEP community. Unfortunately, current training is non-homogeneous and the definition of a common baseline is unclear, making it difficult for newcomers to proficiently join large collaborations such as ALICE or LHCb. In the last years, both collaborations have started separate efforts to tackle this issue through training workshops, via...
-
Christoph Heidecker (KIT - Karlsruhe Institute of Technology (DE)), 09/07/2018, 14:30
High throughput and short turnaround cycles are core requirements for the efficient processing of I/O-intense end-user analyses. Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks and the data distribution to end-users. This situation is further compounded when taking into account opportunistic resources...
-
Adrian Alan Pol (Université Paris-Saclay (FR)), 09/07/2018, 14:30, Track 6 – Machine learning and physics analysis (presentation)
The certification of the CMS data as usable for physics analysis is a crucial task to ensure the quality of all physics results published by the collaboration. Currently, the certification conducted by human experts is labor intensive and can only be segmented on a run by run basis. This contribution focuses on the design and prototype of an automated certification system assessing data...
-
Dr Sebastien Binet (IN2P3/LPC), 09/07/2018, 14:30
In order to meet the challenges of the Run-3 data rates and volumes, the ALICE collaboration is merging the online and offline infrastructures into a common framework: ALICE-O2. O2 is based on FairRoot and FairMQ, a message-based, multi-threaded and multi-process control framework. In FairMQ, processes (possibly on different machines) exchange data via message queues, either through 0MQ or...
-
Manuel Giffels (KIT - Karlsruhe Institute of Technology (DE)), 09/07/2018, 14:30
The amount of data to be processed by experiments in high energy physics is tremendously increasing in the coming years. For the first time in history the expected technology advance itself will not be sufficient to cover the arising gap between required and available resources based on the assumption of maintaining the current flat budget hardware procurement strategy. This leads to...
-
Tadej Novak (Jozef Stefan Institute (SI)), 09/07/2018, 14:30
The high-luminosity data produced by the LHC leads to many proton-proton interactions per beam crossing in ATLAS, known as pile-up. In order to understand the ATLAS data and extract the physics results it is important to model these effects accurately in the simulation. As the pile-up rate continues to grow towards an eventual rate of 200 for the HL-LHC, this puts increasing demands on...
-
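Pile-up modelling of this kind is commonly done by overlaying minimum-bias interactions on each hard-scatter event, with the number of overlaid interactions drawn from a Poisson distribution around the mean rate mu. A toy stdlib sketch (the string 'events' and pool size are invented stand-ins, not the ATLAS event model):

```python
import math
import random

def poisson(mu, rng):
    """Knuth's algorithm for sampling a Poisson-distributed count."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def overlay_pileup(hard_scatter, minbias_pool, mu, rng):
    """Return the hard-scatter event plus a Poisson(mu) number of
    randomly chosen minimum-bias interactions."""
    n_pu = poisson(mu, rng)
    return [hard_scatter] + [rng.choice(minbias_pool) for _ in range(n_pu)]

rng = random.Random(7)
pool = [f"minbias_{i}" for i in range(1000)]
crossings = [overlay_pileup("hard_scatter", pool, mu=200, rng=rng)
             for _ in range(50)]
mean_pu = sum(len(c) - 1 for c in crossings) / len(crossings)
# With mu = 200 the average pile-up per crossing is close to 200.
assert 150 < mean_pu < 250
```

The computational pressure mentioned above comes directly from this structure: at mu = 200, every simulated crossing carries roughly two hundred extra interactions that must be digitized and reconstructed alongside the signal.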
Manuel Jesus Rodriguez Alonso (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas), 09/07/2018, 14:30
This paper presents the Detector Control System (DCS) that is being designed and implemented for the NP04 experiment at CERN. NP04, also known as protoDUNE Single Phase (SP), aims at validating the engineering processes and detector performance of a large LAr Time Projection Chamber in view of the DUNE experiment. The detector is under construction and will be operated on a tertiary beam of...
-
Beraldo Costa Leal (UNESP - Universidade Estadual Paulista (BR))09/07/2018, 14:30
Data-intensive science collaborations still face challenges when transferring large data sets between globally distributed endpoints. Many issues need to be addressed to orchestrate the network resources in order to better explore the available infrastructure. In multi-domain scenarios, the complexity increases because network operators rarely export the network topology to researchers and...
-
Zbigniew Baranowski (CERN)09/07/2018, 14:45
The interest in using Big Data solutions based on the Hadoop ecosystem is constantly growing in the HEP community. This drives the need for increased reliability and availability of the central Hadoop service and underlying infrastructure provided to the community by the CERN IT department.
This contribution will report on the overall status of the Hadoop platform and the recent enhancements and... -
Ivana Hrivnacova (IPNO, Université Paris-Saclay, CNRS/IN2P3)09/07/2018, 14:45
To address the challenges of the major upgrade of the experiment, the ALICE simulations must be able to make efficient use of computing and opportunistic supercomputing resources available on the GRID. The Geant4 transport package, the performance of which has been demonstrated in a hybrid multithreading (MT) and multiprocessing (MPI) environment with up to ¼ million threads, is therefore of a...
-
Mr Martin Vasilev (University of Plovdiv)09/07/2018, 14:45
Collaboration in research is essential, as it saves time and money, and the field of high-energy physics (HEP) is no different: the higher the level of collaboration, the stronger the community. The HEP field encourages organizing events of various formats and sizes, such as meetings, workshops and conferences. Making it easier to attend a HEP event leverages cooperation and dialogue, and this is what makes...
-
Raul Cardoso Lopes (Brunel University (GB))09/07/2018, 14:45
Recent years have seen the mass adoption of streaming in mobile computing, an increase in size and frequency of bulk long-haul data transfers in science in general, and the usage of big data sets in job processing demanding real-time long-haul accesses that can be greatly affected by variations in latency. It has been shown in the Physics and climate research communities that the need to... -
Adrian Alan Pol (Université Paris-Saclay (FR))09/07/2018, 14:45Track 6 – Machine learning and physics analysispresentation
Online Data Quality Monitoring (DQM) in High Energy Physics experiments is a key task which, nowadays, is extremely expensive in terms of human resources and required expertise.
We investigate machine learning as a solution for automated DQM. The contribution focuses on the peculiar challenges posed by the requirement of setting up and evaluating the AI algorithms in the online environment;...
-
Simone Sottocornola (Universita and INFN (IT))09/07/2018, 14:45
During Run 2 of the Large Hadron Collider (LHC) the instantaneous luminosity exceeds the nominal value of 10^{34} cm^{−2} s^{−1} with a 25 ns bunch crossing period, and the number of overlapping proton-proton interactions per bunch crossing increases up to about 80. These conditions pose a challenge to the trigger systems of the experiments, which have to control rates while keeping a good...
-
Daniela Bauer (Imperial College (GB))09/07/2018, 14:45
LZ is a Dark Matter experiment based at the Sanford Underground Research Facility. It is currently under construction and aims to start data taking in 2020. Its computing model is based on two data centres, one in the USA (USDC) and one in the UK (UKDC), both holding a complete copy of its data. During stable periods of running both data centres plan to concentrate on different aspects of...
-
Pedro Ferreira (CERN)09/07/2018, 15:00
Indico is a general-purpose event management system currently in use by more than 150 institutions worldwide. Despite having been born at CERN and primarily adopted by the High Energy Physics community, it has recently gained adoption in other communities (for example, the United Nations and its agencies) and received the attention of commercial vendors worldwide. This growth in adoption...
-
James Catmore (University of Oslo (NO))09/07/2018, 15:00Track 6 – Machine learning and physics analysispresentation
In 2015 ATLAS Distributed Computing started to migrate its monitoring systems away from Oracle DB and decided to adopt new big data platforms that are open source, horizontally scalable, and offer the flexibility of NoSQL systems. Three years later, the full software stack is in place, the system is considered in production and operating at near maximum capacity (in terms of storage capacity...
-
Dirk Duellmann (CERN)09/07/2018, 15:00
The EOS deployment at CERN is a core service used both for scientific data processing and analysis, and as a back-end for general end-user storage (e.g. home directories/CERNBox).
The disk failure metrics collected over a period of 1 year from a deployment size of some 70k disks allow a first systematic analysis of the behaviour of different hard disk types for the large CERN use-cases. In this...
-
Joao Vitor Viana Barbosa (CERN)09/07/2018, 15:00
The LHCb experiment, one of the four operating in the LHC, will be enduring a major upgrade of its electronics during the third long shutdown period of the particle accelerator. One of the main objectives of the upgrade effort is to implement a 40MHz readout of collision data. For this purpose, the Front-End electronics will make extensive use of a radiation resistant chipset, the Gigabit...
-
Shawn McKee (University of Michigan (US))09/07/2018, 15:00
Networking is foundational to the ATLAS distributed infrastructure and there are many ongoing activities related to networking both within and outside of ATLAS. We will report on the progress in a number of areas exploring ATLAS's use of networking and our ability to monitor the network, analyze metrics from the network, and tune and optimize application and end-host parameters to make the...
-
Dr Tao Lin (Institute of High Energy Physics, CAS)09/07/2018, 15:00
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment. It consists of a central detector, a water pool and a top tracker. The central detector, which is used for neutrino detection, consists of 20 kt of liquid scintillator (LS) and about 18,000 20-inch photomultiplier tubes (PMTs) to collect light from the LS.
Simulation software is one of the important parts... -
Vladimir Korenkov (Joint Institute for Nuclear Research (RU))09/07/2018, 15:00
Computing in the field of high energy physics requires usage of heterogeneous computing resources and IT, such as grid, high performance computing, cloud computing and big data analytics for data processing and analysis. The core of the distributed computing environment at the Joint Institute for Nuclear Research is the Multifunctional Information and Computing Complex (MICC). It includes...
-
Oleg Samoylov (Joint Institute for Nuclear Research)09/07/2018, 15:15
The NOvA experiment is a two-detector, long-baseline neutrino experiment operating since 2014 in the NuMI muon-neutrino beam (FNAL, USA). NOvA has already collected about 25% of its expected statistics in both neutrino and antineutrino modes for electron-neutrino appearance and muon-neutrino disappearance analyses. Careful simulation of neutrino events and backgrounds is required for precise...
-
Dirk Hutter (Johann-Wolfgang-Goethe Univ. (DE))09/07/2018, 15:15
The First-level Event Selector (FLES) is the main event selection system of the upcoming CBM experiment at the future FAIR facility in Germany. As the central element, a high-performance compute cluster analyses free-streaming, time-stamped data delivered from the detector systems at rates exceeding 1 TByte/s and selects data for permanent storage.
While the detector systems are located in a... -
David Cameron (University of Oslo (NO))09/07/2018, 15:15
LHC@home has provided computing capacity for simulations under BOINC since 2005. Following the introduction of virtualisation with BOINC to run HEP Linux software in a virtual machine on volunteer desktops, initially started on the test BOINC projects, like Test4Theory and ATLAS@home, all CERN applications distributed to volunteers have been consolidated under a single LHC@home BOINC project....
-
Oksana Shadura (University of Nebraska Lincoln (US))09/07/2018, 15:15
The ROOT software framework is foundational for the HEP ecosystem, providing capabilities such as IO, a C++ interpreter, GUI, and math libraries. It uses object-oriented concepts and build-time modules to layer between components. We believe additional layering formalisms will benefit ROOT and its users.
We present the modularization strategy for ROOT which aims to formalize the description...
-
Tibor Simko (CERN)09/07/2018, 15:15Track 6 – Machine learning and physics analysispresentation
The revalidation, reinterpretation and reuse of research data analyses requires having access to the original computing environment, the experimental datasets, the analysis software, and the computational workflow steps which were used by the researcher to produce the original scientific results in the first place.
REANA (=Reusable Analyses) is a nascent platform enabling researchers to...
-
Mr Diogo Di Calafiori (Eidgenoessische Technische Hochschule Zuerich (ETHZ) (CH))09/07/2018, 15:15
The Electromagnetic Calorimeter (ECAL) is one of the sub-detectors of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN. For more than 10 years, the ECAL Detector Control System (DCS) and the ECAL Safety System (ESS) have supported the experiment's operation, contributing to its high availability and safety. The evolution of both systems to fulfill new...
-
Elizabeth Gallas (University of Oxford (GB))09/07/2018, 15:15
Processing ATLAS event data requires a wide variety of auxiliary information from geometry, trigger, and conditions database systems. This information is used to dictate the course of processing and refine the measurement of particle trajectories and energies to construct a complete and accurate picture of the remnants of particle collisions. Such processing occurs on a worldwide computing...
-
Henry Fredrick Schreiner (University of Cincinnati (US))09/07/2018, 15:30
The GooFit highly parallel fitting package for GPUs and CPUs has been substantially upgraded in the past year. Python bindings have been added to allow simple access to the fitting configuration, setup, and execution. A Python tool to write custom GooFit code given a (compact and elegant) MINT3/AmpGen amplitude description allows the corresponding C++ code to be written quickly and correctly. ...
-
Dominik Muller (CERN)09/07/2018, 15:30
The increase in luminosity foreseen in the future years of operation of the Large Hadron Collider (LHC) creates new challenges in computing efficiency for all participating experiments. These new challenges extend beyond data-taking alone, because data analyses require more and more simulated events, whose creation already takes a large fraction of the overall computing resources. For Run 3...
-
Lukas Alexander Heinrich (New York University (US))09/07/2018, 15:30Track 6 – Machine learning and physics analysispresentation
We present recent work within the ATLAS collaboration to centrally provide tools to facilitate analysis management and highly automated container-based analysis execution in order to both enable non-experts to benefit from these best practices as well as the collaboration to track and re-execute analyses independently, e.g. during their review phase.
Through integration with the ATLAS GLANCE...
-
Andrew John Washbrook (The University of Edinburgh (GB))09/07/2018, 15:30
The Edinburgh (UK) Tier-2 computing site has provided CPU and storage resources to the Worldwide LHC Computing Grid (WLCG) for close to 10 years. Unlike other sites, resources are shared amongst members of the hosting institute rather than being exclusively provisioned for Grid computing. Although this unconventional approach has posed challenges for troubleshooting and service delivery there...
-
Stefan Nicolae Stancu (CERN)09/07/2018, 15:30
Network performance is key to the correct operation of any modern datacentre infrastructure or data acquisition (DAQ) system. Hence, it is crucial to ensure the devices employed in the network are carefully selected to meet the required needs.
The established benchmarking methodology [1,2] consists of various tests that create perfectly reproducible traffic patterns. This has the advantage of...
-
Holger Schulz (Fermilab)09/07/2018, 15:30
In their measurement of the neutrino oscillation parameters (PRL 118, 231801 (2017)), NOvA uses a sample of approximately 27 million reconstructed spills to search for electron-neutrino appearance events. These events are stored in an n-tuple format, in 180 thousand ROOT files. File sizes range from a few hundred KiB to a few MiB; the full dataset is approximately 3 TiB. These millions of... -
Teo Mrnjavac (CERN)09/07/2018, 15:30
The ALICE Experiment at the CERN LHC (Large Hadron Collider) is under preparation for a major upgrade that is scheduled to be deployed during Long Shutdown 2 in 2019-2020 and that includes new computing systems, called O2 (Online-Offline).
To ensure the efficient operation of the upgraded experiment along with its newly designed computing system, a reliable, high-performance and automated control... -
09/07/2018, 15:45
-
David Cameron (University of Oslo (NO))09/07/2018, 15:45
The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres....
-
Ralf Vamosi (CERN)09/07/2018, 15:45Track 6 – Machine learning and physics analysispresentation
The distributed data management system Rucio manages all data of the ATLAS collaboration across the grid. Automation such as replication and rebalancing are an important part to ensure the minimum workflow execution times. In this paper, a new rebalancing algorithm based on machine learning is proposed. First, it can run independently of the existing rebalancing mechanism and can be...
-
Tadashi Murakami (KEK)09/07/2018, 15:45
We provide the KEK general-purpose network to support various kinds of research activities in the fields of high-energy physics, material physics, and accelerator physics. Since the end of the 20th century, cyber attacks on the network have occurred on an almost daily basis, and attack techniques change rapidly and drastically. In such circumstances, we are constantly facing difficult tradeoffs and are required...
-
Mario Lassnig (CERN)09/07/2018, 15:45
With the LHC High Luminosity upgrade the workload and data management systems are facing new major challenges. To address those challenges ATLAS and Google agreed to cooperate on a project to connect Google Cloud Storage and Compute Engine to the ATLAS computing environment. The idea is to allow ATLAS to explore the use of different computing models, to allow ATLAS user analysis to benefit...
-
Xiaobin Ji (IHEP, Beijing, China)09/07/2018, 15:45
The BESIII detector is a magnetic spectrometer operating at BEPCII, a double-ring e+e- collider with center-of-mass energies between 2.0 and 4.6 GeV and a peak luminosity of $10^{33}$ cm$^{-2}$ s$^{-1}$. The event rate is about 4 kHz after the online event filter (L3 trigger) at the J/$\psi$ peak. The BESIII online data quality monitoring (DQM) system is used to monitor the data and the detector in... -
Lorenzo Moneta (CERN)09/07/2018, 15:45
In order to take full advantage of new computer architectures and to satisfy the requirement of minimising CPU usage with an increasing amount of data to analyse, parallelisation and SIMD vectorisation have been introduced in the ROOT mathematical libraries. The VecCore library provides a very convenient solution to abstract SIMD vectorization and has been found extremely useful for...
-
Lee Bitsoi09/07/2018, 16:30
-
Philippe Charpentier (CERN)09/07/2018, 17:00
-
Daniel S. Katz (University of Illinois)09/07/2018, 17:30
-
David Rousseau (LAL-Orsay, FR)10/07/2018, 09:00presentation
Machine Learning (long known in HEP as Multivariate Analysis) has been used to some extent in HEP since the nineties. While Boosted Decision Trees are now commonplace, there is now an explosion of novel algorithms following the "deep learning revolution" in industry, applicable to data taking, triggering and handling, reconstruction, simulation and analysis. This talk will review some of these algorithms and...
-
Steven Andrew Farrell (Lawrence Berkeley National Lab. (US))10/07/2018, 09:30
Initial studies have suggested that generative adversarial networks (GANs) hold promise as fast simulations within HEP. These studies, while promising, have been insufficiently precise and also, like GANs in general, suffer from stability issues. We apply GANs to generate full particle physics events (not individual physics objects), and to large weak-lensing cosmology convergence maps. We...
-
Jennifer Ngadiuba (INFN, Milano)10/07/2018, 09:50
Machine learning methods are becoming ubiquitous across particle physics. However, the exploration of such techniques in low-latency environments like L1 trigger systems has only just begun. We present here a new software, based on High Level Synthesis (HLS), to generically port several kinds of network models (BDTs, DNNs, CNNs) into FPGA firmware. As a benchmark physics use case, we consider...
-
Jean-Yves Le Meur (CERN)10/07/2018, 10:10
The CERN Digital Memory project was started in 2016 with the main goal of preventing loss of historical content produced by the organisation. The first step of the project was targeted at addressing the risk of deterioration of the most vulnerable materials, mostly the multimedia assets created in analogue formats from 1954 to the late 1990s, like still and moving images on films or magnetic...
-
Nico Madysa (Technische Universitaet Dresden (DE))10/07/2018, 11:00
The design of readout electronics for the LAr calorimeters of the ATLAS detector to be operated at the future High-Luminosity LHC (HL-LHC) requires a detailed simulation of the full readout chain in order to find optimal solutions for the analog and digital processing of the detector signals. Due to the long duration of the LAr calorimeter pulses relative to the LHC bunch crossing time,...
-
Jim Pivarski (Princeton University)10/07/2018, 11:00Track 6 – Machine learning and physics analysispresentation
In the last stages of data analysis, only order-of-magnitude computing speedups translate into increased human productivity, and only if they're not difficult to set up. Producing a plot in a second instead of an hour is life-changing, but not if it takes two hours to write the analysis code. Fortunately, HPC-inspired techniques can result in such large speedups, but unfortunately, they can be...
-
Lorenzo Rinaldi (Universita e INFN, Bologna (IT))10/07/2018, 11:00
The ATLAS experiment is approaching mid-life: the long shutdown period (LS2) between LHC Run 2 (ending in 2018) and the future collision data-taking of Runs 3 and 4 (starting in 2021). In advance of LS2, we have been assessing the future viability of existing computing infrastructure systems. This will permit changes to be implemented in time for Run 3. In systems with broad impact...
-
Mr Adrian Coveney (STFC)10/07/2018, 11:00
While the WLCG and EGI have both made significant progress towards solutions for storage space accounting, one area that is still quite exploratory is that of dataset accounting. This type of accounting would enable resource centre and research community administrators to report on dataset usage to the data owners, data providers, and funding agencies. Eventually decisions could be made about...
-
Jack Cranshaw (Argonne National Laboratory (US))10/07/2018, 11:00
Scalable multithreading poses challenges to I/O, and the performance of a thread-safe I/O strategy may depend upon many factors, including I/O latencies, whether tasks are CPU- or I/O-intensive, and thread count.
In a multithreaded framework, an I/O infrastructure must efficiently supply event data to and collect it from many threads processing multiple events in flight.
In particular,... -
Belmiro Moreira (CERN)10/07/2018, 11:00Track 7 – Clouds, virtualization and containerspresentation
The CERN OpenStack Cloud provides over 200,000 CPU cores to run data processing analyses for the Large Hadron Collider (LHC) experiments. Delivering these services with high performance and reliable service levels, while at the same time ensuring continuously high resource utilization, has been one of the major challenges for the CERN Cloud engineering team.
Several optimizations like...
-
Joel Closier (CERN)10/07/2018, 11:00
LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that online data are immediately available offline for physics analysis (Turbo...
-
Jose Castro Leon (CERN)10/07/2018, 11:15Track 7 – Clouds, virtualization and containerspresentation
The CERN OpenStack cloud has been delivering a wide variety of services to its 3000 customers since it entered production in 2013. Initially, standard resources such as Virtual Machines and Block Storage were offered. Today, the cloud offering includes advanced features such as Container Orchestration (for Kubernetes, Docker Swarm mode, Mesos/DCOS clusters), File Shares and Bare Metal, and...
-
Tadeas Bilka (Charles University, Prague)10/07/2018, 11:15
In spring 2018 the SuperKEKB electron-positron collider at High Energy Accelerator Research Organization (KEK, Tsukuba, Japan) will deliver its first collisions to the Belle II experiment. The aim of Belle II is to collect a data sample 50 times larger than the previous generation of B-Factories taking advantage of the unprecedented SuperKEKB design luminosity of 8x10^35 cm^-2 s^-1. The Belle...
-
Patrick Robbe (Université Paris-Saclay (FR))10/07/2018, 11:15
The LHCb experiment is a fully instrumented forward spectrometer designed for precision studies in the flavour sector of the standard model with proton-proton collisions at the LHC. As part of its expanding physics programme, LHCb also collected data during the LHC proton-nucleus collisions in 2013 and 2016 and during nucleus-nucleus collisions in 2015. All the collected datasets are... -
Brian Paul Bockelman (University of Nebraska Lincoln (US))10/07/2018, 11:15
The OSG has long maintained a central accounting system called Gratia. It uses small probes on each computing and storage resource in order to collect usage. The probes report to a central collector which stores the usage in a database. The database is then queried to generate reports. As the OSG aged, the size of the database grew very large. It became too large for the database technology to...
-
Antonio Augusto Alves Junior (University of Cincinnati (US))10/07/2018, 11:15
Hydra is a templatized, header-only, C++11-compliant library for data analysis on massively parallel platforms, targeting, but not limited to, the field of High Energy Physics research.
Hydra supports the description of particle decays via phase-space Monte Carlo generation, generic function evaluation, data fitting, multidimensional adaptive numerical integration and histogramming.
Hydra is... -
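As a flavour of the phase-space computations such a library performs, the momentum of each daughter in a two-body decay follows directly from the Källén function. A minimal sketch in pure Python (this is not Hydra's actual API, just the underlying kinematics):

```python
import math

def kallen(a, b, c):
    # Källén (triangle) function lambda(a, b, c).
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def two_body_momentum(M, m1, m2):
    # Momentum magnitude of each daughter in the rest frame of a
    # parent of mass M decaying to masses m1 and m2 (same units).
    lam = kallen(M * M, m1 * m1, m2 * m2)
    if lam < 0.0:
        raise ValueError("decay kinematically forbidden: M < m1 + m2")
    return math.sqrt(lam) / (2.0 * M)
```

For two massless daughters the momentum is M/2, a handy sanity check on the formula.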
Lynn Wood (Pacific Northwest National Laboratory, USA)10/07/2018, 11:15
The Belle II experiment at KEK is preparing for first collisions in early 2018. Processing the large amounts of data that will be produced requires conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. This was accomplished by relying on industry-standard tools and methods: the conditions database...
-
Matteo Cremonesi (Fermi National Accelerator Lab. (US))10/07/2018, 11:15Track 6 – Machine learning and physics analysispresentation
The HEP community is approaching an era where the excellent performance of the particle accelerators in delivering collisions at a high rate will force the experiments to record a large amount of information. The growing size of the datasets could potentially become a limiting factor in the capability to produce scientific results timely and efficiently. Recently, new technologies and new...
-
Jaroslava Schovancova (CERN)10/07/2018, 11:30
HammerCloud is a testing service and framework to commission, run continuous tests or on-demand large-scale stress tests, and benchmark computing resources and components of various distributed systems with realistic full-chain experiment workflows.
HammerCloud, used by the ATLAS and CMS experiments in production, has been a useful service to commission both compute resources and various...
-
Dr Simon Blyth (National Taiwan University)10/07/2018, 11:30
Opticks is an open source project that integrates the NVIDIA OptiX GPU ray tracing engine with Geant4 toolkit based simulations. Massive parallelism brings drastic performance improvements, with optical photon simulation speedups expected to exceed 1000 times Geant4 with workstation GPUs. Optical physics processes of scattering, absorption, reemission and boundary processes are implemented... -
Dr Benjamin Krikler (University of Bristol (GB))10/07/2018, 11:30Track 6 – Machine learning and physics analysispresentation
Many analyses on CMS are based on the histogram, used throughout the workflow from data validation studies to fits for physics results. Binned data frames are a generalisation of multidimensional histograms, in a tabular representation where histogram bins are denoted by category labels. Pandas is an industry-standard tool, providing a data frame implementation that allows easy access to "big...
-
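The "binned data frame" idea above (histogram bins as table rows keyed by category labels) can be sketched without pandas. This toy, with invented field names, flattens a category-by-bin histogram into a tabular mapping:

```python
from collections import Counter

def binned_frame(records, edges, value_key, category_key):
    # Flatten a (category, bin) histogram into a table: each key is a
    # row label (category, (lo, hi)), each value the bin count.
    counts = Counter()
    for rec in records:
        v = rec[value_key]
        for lo, hi in zip(edges, edges[1:]):
            if lo <= v < hi:
                counts[(rec[category_key], (lo, hi))] += 1
                break
    return counts
```

The same tabular layout is what a pandas DataFrame with a (category, bin) MultiIndex would hold, which is what makes standard data-frame tooling applicable.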
Frank Berghaus (University of Victoria (CA))10/07/2018, 11:30Track 7 – Clouds, virtualization and containerspresentation
The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides more than 2,000 compute nodes, which are critical to ATLAS during data taking. When ATLAS is not recording data, this large compute resource is used to generate and process simulation data for the experiment. The Sim@P1...
-
Arun Kumar (National Taiwan University (TW))10/07/2018, 11:30
The calibration of the detector in almost real time is key to the exploitation of the large data volumes at the LHC experiments. For this purpose the CMS collaboration deployed a complex machinery involving several components of the processing infrastructure and of the condition DB system. Accurate reconstruction of data starts only once all the calibrations become available for consumption...
-
Danilo Piparo (CERN)10/07/2018, 11:30
In the coming years, HEP data processing will need to exploit parallelism on present and future hardware resources to sustain the bandwidth requirements.
As one of the cornerstones of the HEP software ecosystem, ROOT embraced an ambitious parallelisation plan which delivered compelling results.
In this contribution the strategy is characterised, as well as its evolution in the medium term.
The... -
Dave Dykstra (Fermi National Accelerator Lab. (US))10/07/2018, 11:30
LHC experiments make extensive use of Web proxy caches, especially for software distribution via the CernVM File System and for conditions data via the Frontier Distributed Database Caching system. Since many jobs read the same data, cache hit rates are high and hence most of the traffic flows efficiently over Local Area Networks. However, it is not always possible to have local Web caches,...
-
A new mechanism to use the Conditions Database REST API to serve the ATLAS detector description. Alessandro De Salvo (Sapienza Universita e INFN, Roma I (IT))10/07/2018, 11:45
An efficient and fast access to the detector description of the ATLAS experiment is needed for many tasks, at different steps of the data chain: from detector development to reconstruction, from simulation to data visualization. Until now, the detector description was only accessible through dedicated services integrated into the experiment's software framework, or by the usage of external...
-
Diego Da Silva Gomes (CERN)10/07/2018, 11:45Track 7 – Clouds, virtualization and containerspresentation
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1100 nodes and a capacity of about 600 kHEPSpec06, the HLT machines represent up to 40% of the combined Tier0/Tier-1...
-
Christopher Jones (Fermi National Accelerator Lab. (US))10/07/2018, 11:45
Since the beginning of the LHC Run 2 in 2016 the CMS data processing framework, CMSSW, has been running with multiple threads during production of data and simulation via the use of Intel's Thread Building Blocks (TBB) library. The TBB library utilizes tasks as concurrent units of work. CMS used these tasks to allow both concurrent processing of events as well as concurrent running of modules...
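The task-based concurrency described here is specific to TBB and C++. Purely to illustrate the idea of treating independent events as concurrent units of work, a Python sketch (all names invented, not CMSSW API):

```python
from concurrent.futures import ThreadPoolExecutor

def process_event(event):
    # Stand-in for a framework module acting on one event's data.
    return sum(event)

def run_concurrently(events, max_workers=4):
    # Each event is an independent task; pool.map returns results in
    # input order even though the tasks run concurrently.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_event, events))
```

The "multiple events in flight" model corresponds to the pool holding several events being processed at once while results are reassembled in order.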
-
Chris Burr (University of Manchester (GB))10/07/2018, 11:45
A key ingredient of the data taking strategy used by the LHCb experiment in Run-II is the novel real-time detector alignment and calibration. Data collected at the start of the fill are processed within minutes and used to update the alignment, while the calibration constants are evaluated hourly. This is one of the key elements which allow the reconstruction quality of the software trigger in...
-
Yuji Kato10/07/2018, 11:45
Belle II is an asymmetric-energy e+e- collider experiment at KEK, Japan. It aims to reveal physics beyond the standard model with a data set of about 5×10^10 BB^bar pairs and starts its physics run in 2018. In order to store such a huge amount of data, including simulation events, and analyze it in a timely manner, Belle II adopts a distributed computing model with DIRAC...
-
Vladimir Ivantchenko (CERN)10/07/2018, 11:45
We report developments for the Geant4 electromagnetic (EM) physics sub-packages for Geant4 release 10.4 and beyond. Modifications are introduced to the models of photo-electric effect, bremsstrahlung, gamma conversion, and multiple scattering. Important developments for calorimetry applications were carried out for the modeling of single and multiple scattering of charged particles....
-
Enrico Guiraud (CERN, University of Oldenburg (DE))10/07/2018, 11:45Track 6 – Machine learning and physics analysis presentation
The physics programmes of LHC Run III and HL-LHC challenge the HEP community. The volume of data to be handled is unprecedented at every step of the data processing chain: analysis is no exception.
First-class analysis tools need to be provided to physicists that are easy to use, exploit bleeding-edge hardware technologies and allow parallelism to be expressed seamlessly.
This contribution... -
Andrea Rizzi (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, P)10/07/2018, 12:00Track 6 – Machine learning and physics analysis presentation
A new event data format has been designed and prototyped by the CMS collaboration to satisfy the needs of a large fraction of physics analyses (at least 50%) with a per-event size of order 1 kB. This new format is more than a factor of 20 smaller than the MINIAOD format and contains only the top-level information typically used in the last steps of an analysis. The talk will review the current...
-
Marco Clemencic (CERN)10/07/2018, 12:00
LHCb has been using the CERN/IT developed Conditions Database library COOL for several years, during LHC Run 1 and Run 2. With the opportunity window of the second long shutdown of LHC, in preparation for Run 3 and the upgraded LHCb detector, we decided to investigate alternatives to COOL as Conditions Database backend. In particular, given our conditions and detector description data model,...
-
Matthias Richter (University of Oslo (NO))10/07/2018, 12:00
The ALICE experiment at the Large Hadron Collider (LHC) at CERN is planned to be operated in a continuous data-taking mode in Run 3. This will make it possible to inspect data from all collisions at a rate of 50 kHz for Pb-Pb, giving access to rare physics signals embedded in a large background.
Based on experience with real-time reconstruction of particle trajectories and event properties in the ALICE...
-
Haibo Li (Institute of High Energy Physics Chinese Academy of Science)10/07/2018, 12:00Track 7 – Clouds, virtualization and containers presentation
With the development of cloud computing, more and more clouds are being applied in the high-energy physics field. OpenStack is generally considered the future of cloud computing. However, in OpenStack the resource allocation model assigns a fixed number of resources to each group. This is not very suitable for scientific computing such as high-energy physics applications, whose demands of...
-
Dr Marilena Bandieramonte (CERN)10/07/2018, 12:00
The development of the GeantV Electromagnetic (EM) physics package has evolved following two necessary paths towards code modernization. A first phase required the revision of the main electromagnetic physics models and their implementation. The main objectives were to improve their accuracy, extend them to the new high-energy frontiers posed by the Future Circular Collider (FCC) programme and...
-
Adam Wegrzynek (CERN)10/07/2018, 12:00
ALICE (A Large Ion Collider Experiment) is preparing for a major upgrade of the detector, readout system and computing for LHC Run 3. A new facility called O2 (Online-Offline) will play a major role in data compression and event processing. To efficiently operate the experiment, we are designing a monitoring subsystem, which will provide a complete overview of the O2 overall health, detect...
-
Guilherme Amadio (CERN)10/07/2018, 12:00
The LHC experiments produce tens of petabytes of new data in ROOT format per year that need to be processed and analysed. In the next decade, following the planned upgrades of the LHC and the detectors, this rate is expected to increase at least ten-fold.
Therefore, optimizing the ROOT I/O subsystem is of critical importance to the success of the LHC physics programme. This contribution... -
Bartlomiej Rachwal (AGH University of Science and Technology (PL))10/07/2018, 12:15
In high-energy physics experiments, silicon detectors are often subjected to a harsh radiation environment, especially at hadron colliders. Understanding the impact of radiation damage on detector performance is an indispensable prerequisite for successful operation throughout the lifetime of the planned experiment.
A dedicated irradiation programme followed by detailed studies with...
-
Andrei Gheata (CERN)10/07/2018, 12:15
SIMD acceleration can potentially boost application throughput by large factors. However, achieving efficient SIMD vectorization for scalar code with complex data flow and branching logic goes far beyond breaking loop dependencies and relying on the compiler. Since the re-factoring effort scales with the number of lines of code, it is important to understand what kind of performance gains can...
-
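The kind of re-factoring this abstract discusses can be illustrated with a toy example: replacing per-element branching with a branch-free masked select, the form that SIMD hardware handles efficiently. NumPy arrays stand in for vector registers here; the functions and data are illustrative, not from the contribution.

```python
# Toy illustration (not from the contribution): scalar code with branching
# vs. a branch-free masked form that maps naturally onto SIMD lanes.
# NumPy's whole-array operations emulate what vector registers would do.
import numpy as np

def step_scalar(xs):
    # Branchy per-element logic: hard for a compiler to auto-vectorize.
    out = []
    for x in xs:
        out.append(x * 0.5 if x > 0 else -x)
    return out

def step_vector(xs):
    # The same computation as one masked select over whole arrays.
    xs = np.asarray(xs, dtype=float)
    return np.where(xs > 0, xs * 0.5, -xs)

data = [-2.0, 4.0, -1.0]
```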
Alexey Anisenkov (Budker Institute of Nuclear Physics (RU))10/07/2018, 12:15
The WLCG Information System (IS) is an important component of this huge, heterogeneous distributed infrastructure. Considering the evolution of LHC computing towards the high-luminosity era, and analyzing the experience accumulated by the computing operations teams and the limitations of the current information system, the WLCG IS evolution task force came up with the proposal to develop Computing Resource...
-
Tian Yan (Institution of High Energy Physics, Chinese Academy of Science)10/07/2018, 12:15Track 7 – Clouds, virtualization and containers presentation
To improve hardware utilization and save manpower in system management, we have migrated most of the web services in our institute (Institute of High Energy Physics, IHEP) to a private cloud built upon OpenStack over the last few years. However, cyber security attacks have progressively become a serious threat to the cloud. Therefore, a detection and monitoring system for cyber security threats is...
-
Gordon Watts (University of Washington (US))10/07/2018, 12:15Track 6 – Machine learning and physics analysis presentation
The HEP community is preparing for the LHC's Run 3 and Run 4. One of the big challenges for physics analysis will be developing tools that efficiently express an analysis and can efficiently process the tenfold increase in data expected. Recently, interest has focused on declarative analysis languages: a way of specifying a physicist's intent, and leaving everything else to the underlying system. The...
-
Yaodong Cheng (Chinese Academy of Sciences (CN))10/07/2018, 12:15
The Beijing Spectrometer (BESIII) experiment has produced hundreds of billions of events. It has collected the world's largest data samples of J/ψ, ψ(3686), ψ(3770) and ψ(4040) decays. The typical branching fractions for interesting physics channels are of the order of O(10^-3). The traditional event-wise accessing of BOSS (BES Offline Software System) is not effective for the selective accessing...
-
Virginia Azzolini (Massachusetts Inst. of Technology (US))10/07/2018, 12:15
The CMS experiment dedicates a significant effort to supervising the quality of its data, online and offline. Real-time data quality (DQ) monitoring is in place to spot and diagnose problems as promptly as possible and avoid data loss. The a posteriori evaluation of processed data is designed to categorize the data in terms of their usability for physics analysis. These activities produce DQ...
-
Rob Appleyard (STFC)10/07/2018, 14:00
Since February 2017, the RAL Tier-1 has been storing production data from the LHC experiments on its new Ceph-backed object store called Echo. Echo has been designed to meet the data demands of LHC Run 3 and should scale to meet the challenges of HL-LHC. Echo is already providing better overall throughput than the service it will replace (CASTOR), even with significantly less hardware...
-
Dr Christopher Tunnell (University of Chicago)10/07/2018, 14:00Track 6 – Machine learning and physics analysis presentation
Within the field of dark matter direct detection, there has been very little penetration of machine learning. This is primarily due to the difficulty of modeling such low-energy detectors for training sets (the keV energies involved are a factor $10^{-10}$ smaller than at the LHC). Xenon detectors have been leading the field of dark matter direct detection for the last decade. The current front runner is XENON1T,...
-
Andrew McNab (University of Manchester)10/07/2018, 14:00
During 2017 LHCb developed the ability to interrupt Monte Carlo simulation jobs and cause them to finish cleanly, with the events simulated so far correctly uploaded to grid storage. We explain how this functionality is supported in the Gaudi framework and handled by the LHCb simulation framework Gauss. By extending DIRAC, we have been able to trigger these interruptions when running... -
Domenico Giordano (CERN)10/07/2018, 14:00
Benchmarking is a consolidated activity in High Energy Physics (HEP) computing, where large computing power is needed to support scientific workloads. In HEP, great attention is paid to the speed of the CPU in accomplishing high-throughput tasks characterised by a mixture of integer and floating-point operations and a memory footprint of a few gigabytes.
As of 2009, HEP-SPEC06 (HS06) is the...
-
Slava Krutelyov (Univ. of California San Diego (US))10/07/2018, 14:00
The majority of currently planned or considered hadron colliders are expected to deliver data from collisions with hundreds of simultaneous interactions per beam bunch crossing on average, including the high-luminosity LHC upgrade currently in preparation and the possible high-energy LHC upgrade or a future circular collider, FCC-hh. Running charged-particle track reconstruction for the general...
-
Sebastien Ponce (CERN)10/07/2018, 14:00
The LHCb detector will be upgraded for LHC Run 3. The new, full software trigger must be able to sustain the 30 MHz proton-proton inelastic collision rate. The Gaudi framework currently used in LHCb has been re-engineered in order to enable the efficient usage of vector registers and of multi- and many-core architectures. This contribution presents the critical points that had to be...
-
Julie Hart Kirk (STFC-Rutherford Appleton Laboratory (GB))10/07/2018, 14:00
The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the High Level Trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS trigger for the precise selection of the rare or interesting events necessary for physics analysis... -
Dmytro Kresan (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))10/07/2018, 14:15
The multi-purpose R$^{3}$B (Reactions with Relativistic Radioactive Beams) detector at the future FAIR facility in Darmstadt will be used for various experiments with exotic beams in inverse kinematics. The two-fold setup will serve for particle identification and momentum measurement upstream and downstream of the secondary reaction target. In order to perform a high-precision charge identification...
-
Johannes Elmsheuser (Brookhaven National Laboratory (US))10/07/2018, 14:15
The CERN ATLAS experiment grid workflow system routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG. At this scale, small improvements in the software and computing performance and workflows can lead to significant resource usage... -
Mr Tigran Mkrtchyan (DESY)10/07/2018, 14:15
The life cycle of scientific data is well defined: data is collected, then processed, archived and finally deleted. Data is never modified. The original data is used, or new, derived data is produced: Write Once Read Many times (WORM). With this model in mind, dCache was designed to handle immutable files as efficiently as possible. Currently, data replication, HSM connectivity and... -
Dr Thomas Vuillaume (LAPP, CNRS, Univ. Savoie Mont-Blanc)10/07/2018, 14:15Track 6 – Machine learning and physics analysis presentation
The Cherenkov Telescope Array (CTA) is the next generation of ground-based gamma-ray telescopes for gamma-ray astronomy. Two arrays will be deployed composed of 19 telescopes in the Northern hemisphere and 99 telescopes in the Southern hemisphere. Observatory operations are planned to start in 2021 but first data from prototypes should be available already in 2019. Due to its very high...
-
Giulio Eulisse (CERN)10/07/2018, 14:15
ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects almost 100 times more Pb-Pb central collisions than now, resulting in a large increase of data throughput. In order to cope with this new challenge, the collaboration had to extensively rethink the whole data processing chain, with a tighter...
-
Sioni Paris Summers (Imperial College Sci., Tech. & Med. (GB))10/07/2018, 14:15
Track reconstruction at the CMS experiment uses the Combinatorial Kalman Filter. The algorithm computation time scales exponentially with pile-up, which will pose a problem for the High Level Trigger at the High Luminosity LHC. FPGAs, which are already used extensively in hardware triggers, are becoming more widely used for compute acceleration. With a combination of high performance, energy...
-
David Smith (CERN)10/07/2018, 14:15
Based on the observation of low average CPU utilisation of several hundred disk servers in the EOS storage system at CERN, the Batch on EOS Extra Resources (BEER) project developed an approach to utilise these resources for batch processing. After initial proof of concept tests, showing almost no interference between the batch and storage services, a model for production has been developed and...
-
Fabrizio Furano (CERN)10/07/2018, 14:30
The DPM (Disk Pool Manager) system is a multiprotocol scalable technology for Grid storage that supports about 130 sites for a total of about 90 Petabytes online.
The system has recently completed the development phase that had been announced in the past years, which consolidates its core component (DOME: Disk Operations Management Engine) as a full-featured high performance engine that can...
-
Placido Fernandez Declara (University Carlos III (ES))10/07/2018, 14:30
In order to profit from the largely increased instantaneous luminosity provided by the accelerator in Run III (2021-2023), the upgraded LHCb detector will make use of a fully software-based trigger, with real-time event reconstruction and selection performed at the bunch crossing rate of the LHC (~30 MHz). This assumption implies much tighter timing constraints for the event reconstruction...
-
Todor Trendafilov Ivanov (University of Sofia (BG)), Jose Hernandez (CIEMAT)10/07/2018, 14:30
Hundreds of physicists analyse data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) using the CMS Remote Analysis Builder (CRAB) and the CMS GlideinWMS global pool to exploit the resources of the Worldwide LHC Computing Grid. Efficient use of such an extensive and expensive resource is crucial. At the same time, the CMS collaboration is committed to...
-
Dimitri Bourilkov (University of Florida (US))10/07/2018, 14:30Track 6 – Machine learning and physics analysis presentation
With the accumulation of large datasets at energy of 13 TeV, the LHC experiments can search for rare processes, where the extraction of the signal from the copious and varying Standard Model backgrounds poses increasing challenges. Techniques based on machine learning promise to achieve optimal search sensitivity and signal-to-background ratios for such searches. Taking the search for the...
-
Alberto Aimar (CERN)10/07/2018, 14:30
The new unified monitoring (MONIT) for the CERN Data Centres and for the WLCG Infrastructure is now based on established open-source technologies for collection, streaming and storage of monitoring data. The previous solutions, based on in-house development and commercial software, have been replaced with widely recognized technologies such as Collectd, Flume, Kafka, ElasticSearch, InfluxDB,...
-
Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events - Matevz Tadel (Univ. of California San Diego (US))10/07/2018, 14:30
The High-Luminosity Large Hadron Collider (HL-LHC) at CERN will be characterized by higher event rate, greater pileup of events, and higher occupancy. Event reconstruction will therefore become far more computationally demanding, and given recent technology trends, the extra processing capacity will need to come from expanding the parallel capabilities in the tracking software. Existing...
-
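The Kalman filter at the heart of such tracking codes reduces, in one dimension, to a few lines; the sketch below shows a single measurement update. This is a generic illustration with invented numbers: real track fits propagate multi-dimensional state vectors with matrix algebra.

```python
# Minimal one-dimensional Kalman measurement update (illustrative only;
# real trackers use 5-6 dimensional states and covariance matrices).
def kalman_update(x, P, z, R):
    """Combine a state estimate (x, variance P) with a measurement (z, variance R)."""
    K = P / (P + R)           # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)   # updated state estimate
    P_new = (1.0 - K) * P     # uncertainty shrinks after the update
    return x_new, P_new

# Prior at 0 with variance 4, measurement at 2 with the same variance:
x, P = kalman_update(0.0, 4.0, z=2.0, R=4.0)  # -> x = 1.0, P = 2.0
```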
Scott Snyder (Brookhaven National Laboratory (US))10/07/2018, 14:30
In preparation for Run 3 of the LHC, scheduled to start in 2021, the ATLAS experiment is revising its offline software so as to better take advantage of machines with many cores. A major part of this effort is migrating the software to run as a fully multithreaded application, as this has been shown to significantly improve the memory scaling behavior. This talk will outline changes made to... -
David Martin Clavo (CERN)10/07/2018, 14:45
CERN has been using ITIL Service Management methodologies and ServiceNow since early 2011. Initially a joint project between just the Information Technology and the General Services Departments, now most of CERN is using this common methodology and tool, and all departments are represented totally or partially in the CERN Service Catalogue.
We will present a summary of the current situation...
-
Ms Yao Zhang10/07/2018, 14:45
One of the tasks of track reconstruction for the COMET Phase-I drift chamber is fitting multi-turn curling tracks. A method based on the Deterministic Annealing Filter, which implements a global competition between hits from different turn tracks, is introduced. This method assigns the detector measurements to the track hypothesis based on the weighted mean of the fitting quality on different turns. This method is...
-
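The competition between turn hypotheses described above can be sketched with annealing weights of the Boltzmann type; the formula and names below are a generic Deterministic Annealing Filter illustration, not the COMET implementation.

```python
# Generic Deterministic-Annealing-style weights (illustrative, not the
# COMET code): a hit's chi2 against competing turn hypotheses becomes a
# soft assignment, with a temperature T controlling the competition.
import math

def annealing_weights(chi2s, T):
    # Boltzmann-like factors: a small chi2 (good match) gives a weight
    # near 1 after normalisation; a large T flattens the competition.
    ws = [math.exp(-c / (2.0 * T)) for c in chi2s]
    total = sum(ws)
    return [w / total for w in ws]

# A hit that fits turn 1 (chi2 = 1) much better than turn 2 (chi2 = 9):
w = annealing_weights([1.0, 9.0], T=1.0)
```

As the temperature is lowered during the annealing schedule, the weights harden towards a unique hit-to-turn assignment.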
Sioni Paris Summers (Imperial College Sci., Tech. & Med. (GB))10/07/2018, 14:45
Boosted Decision Trees are used extensively in offline analysis and reconstruction in high energy physics. The computation time of ensemble inference has previously prohibited their use in online reconstruction, whether at the software or hardware level. An implementation of BDT inference for FPGAs, targeting low latency by leveraging the platform’s enormous parallelism, is presented. Full...
-
Ms Anna Fatkina (JINR)10/07/2018, 14:45
Measurement of physical parameters is usually done by fitting a numerical model of the experiment to the data. High-precision experiments require detailed models with a large number of uncertain parameters. The models should be computationally efficient; at the same time they should be flexible enough, since the analysis preparation requires a lot of testing.
We are solving these problems by... -
Herve Rousseau (CERN)10/07/2018, 14:45
The CERN IT Storage group operates multiple distributed storage systems and is responsible for the support of the infrastructure to accommodate all CERN storage requirements, from the physics data generated by LHC and non-LHC experiments to personnel users’ files. EOS is now the key component of the CERN storage strategy. It allows operating at high incoming throughput for experiment... -
Peter Love (Lancaster University (GB))10/07/2018, 14:45
ATLAS Distributed Computing (ADC) uses the pilot model to submit jobs to Grid computing resources. This model isolates the resource from the workload management system (WMS) and helps to avoid running jobs on faulty resources. A minor side-effect of this isolation is that the faulty resources are neglected and not brought back into production because the problems are not visible to the WMS. In...
-
Dr Tomasz Piotr Trzcinski (Warsaw University of Technology (PL))10/07/2018, 14:45Track 6 – Machine learning and physics analysis presentation
Data Quality Assurance (QA) is an important aspect of every High-Energy Physics experiment, especially in the case of the ALICE Experiment at the Large Hadron Collider (LHC), whose detectors are extremely sophisticated and complex devices. To avoid processing low-quality or redundant data, human experts are currently involved in assessing the detectors’ health while collisions are being recorded....
-
Thomas Hauth (KIT - Karlsruhe Institute of Technology (DE))10/07/2018, 15:00
In early 2018, e+e- collisions of the SuperKEKB B-Factory will be recorded by the Belle II detector in Tsukuba (Japan) for the first time. The new accelerator and detector represent a major upgrade from the previous Belle experiment and will achieve a 40-times higher instantaneous luminosity. Special considerations and challenges arise for track reconstruction at Belle II due to multiple...
-
Tobias Stockmanns (Forschungszentrum Jülich GmbH)10/07/2018, 15:00
PANDA is one of the main experiments of the future FAIR accelerator facility at Darmstadt. It utilizes an anti-proton beam with a momentum up to 15 GeV/c on a fixed proton or nuclear target to investigate the features of strong QCD.
The reconstruction of charged particle tracks is one of the most challenging aspects in the online and offline reconstruction of the data taken by PANDA. Several...
-
Hannah Short (CERN)10/07/2018, 15:00
Federated identity management (FIM) is an arrangement that can be made among multiple organisations that lets subscribers use the same identification data to obtain access to the secured resources of all organisations in the group. In many research communities there is an increasing interest in a common approach to FIM as there is obviously a large potential for synergies. FIM4R [1] provides a...
-
Zhechka Toteva (CERN)10/07/2018, 15:00
In the CERN IT agile infrastructure, Puppet, the CERN IT central messaging infrastructure and the roger application are the key constituents handling the configuration of the machines of the computer centre. The machine configuration at any given moment depends on its declared state in roger, and Puppet ensures the actual implementation of the desired configuration by running the puppet agent on...
-
Vardan Gyurjyan (Jefferson Lab)10/07/2018, 15:00
In this paper, we present a micro-services framework for developing data processing applications.
We discuss functional decomposition strategies that help transition existing data processing applications into a micro-services environment. We will also demonstrate the advantages and disadvantages of this framework in terms of operational elasticity, vertical and horizontal scalability,... -
Andrea Manzi (CERN)10/07/2018, 15:00
The EOS namespace has outgrown its legacy in-memory implementation, presenting the need for an alternative solution. In response to this need we developed QuarkDB, a highly-available datastore capable of serving as the metadata backend for EOS. Even though the datastore was tailored to the needs of the namespace, its capabilities are generic.
We will present the overall system design, and our...
-
Prof. Martin Sevior (University of Melbourne)10/07/2018, 15:00Track 6 – Machine learning and physics analysis presentation
Data from B-physics experiments at the KEKB collider have a substantial background from $e^{+}e^{-}\to q \bar{q}$ events. To suppress this we employ deep neural network algorithms. These provide improved discrimination of signal from background. However, the neural network develops a substantial correlation with the $\Delta E$ kinematic variable used to distinguish signal from background in the...
-
Remi Ete (DESY)10/07/2018, 15:15
Data quality monitoring is the first step to the certification of the recorded data for off-line physics analysis. Dedicated monitoring frameworks have been developed by many experiments in the past, and they usually rely on the event data model (EDM) of the experiment, leading to a strong dependency on the data format and storage. We present here a generic data quality monitoring system, DQM4HEP,...
-
Todd Michael Seiss (University of Chicago (US))10/07/2018, 15:15
The ATLAS Fast TracKer (FTK) is a hardware based track finder for the ATLAS trigger infrastructure currently under installation and commissioning. FTK sits between the two layers of the current ATLAS trigger system, the hardware-based Level 1 Trigger and the CPU-based High-Level Trigger (HLT). It will provide full-event tracking to the HLT with a design latency of 100 µs at a 100 kHz event...
-
Hugo Gonzalez Labrador (CERN)10/07/2018, 15:15
CERNBox is the CERN cloud storage hub. It allows synchronising and sharing files on all major desktop and mobile platforms (Linux, Windows, MacOSX, Android, iOS) aiming to provide universal access and offline availability to any data stored in the CERN EOS infrastructure.
With more than 12000 users registered in the system, CERNBox has responded to the high demand in our diverse community to...
-
Hristo Umaru Mohamed (CERN)10/07/2018, 15:15
Prometheus is a leading open-source monitoring and alerting tool. Prometheus utilizes a pull model, in the sense that it pulls metrics from monitored entities rather than receiving them as a push. Sometimes this can be a major headache, even before security is considered, when performing network gymnastics to reach your monitored entities. Not only that, but sometimes system metrics might be...
-
David Crooks (University of Glasgow (GB))10/07/2018, 15:15
The modern security landscape for distributed computing in High Energy Physics (HEP) includes a wide range of threats employing different attack vectors. The nature of these threats is such that the most effective method for dealing with them is to work collaboratively, both within the HEP community and with partners further afield - these can, and should, include institutional and campus...
-
Mr Victor Estrade (LRI, UPSud, Université Paris-Saclay)10/07/2018, 15:15Track 6 – Machine learning and physics analysis presentation
Experimental science often has to cope with systematic errors that coherently bias data. We analyze this issue on the analysis of data produced by experiments of the Large Hadron Collider at CERN as a case of supervised domain adaptation. The dataset used is a representative Higgs to tau tau analysis from ATLAS and released as part of the Kaggle Higgs ML challenge. Perturbations have been...
-
Stefano Spataro (University of Turin)10/07/2018, 15:15
The Belle II experiment is ready to take data in 2018, studying e+e- collisions at the KEK facility in Tsukuba (Japan), in a center of mass energy range of the Bottomonium states. The tracking system includes a combination of hit measurements coming from the vertex detector, made of pixel detectors and double-sided silicon strip detectors, and a central drift chamber, inside a solenoid of 1.5...
-
Andrea Valassi (CERN)10/07/2018, 15:30Track 6 – Machine learning and physics analysis presentation
This presentation discusses some of the metrics used in HEP and other scientific domains for evaluating the relative quality of binary classifiers that are built using modern machine learning techniques. The use of the area under the ROC curve, which is common practice in the evaluation of diagnostic accuracy in the medical field and has now become widespread in many HEP applications, is...
-
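The area under the ROC curve discussed in this abstract can be computed directly as the Mann-Whitney statistic, i.e. the probability that a randomly chosen signal event outscores a randomly chosen background event; a minimal, self-contained illustration (not tied to any specific analysis):

```python
# Illustrative AUC computation: the area under the ROC curve equals the
# probability that a random signal event scores above a random background
# event, with ties counting half.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]  # signal scores
    neg = [s for s, l in zip(scores, labels) if l == 0]  # background scores
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect classifier ranks every signal event above every background one:
perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])  # -> 1.0
```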
Hugo Gonzalez Labrador (CERN)10/07/2018, 15:30
In the last few years we have been seeing constant interest for technologies providing effective cloud storage for scientific use, matching the requirements of price, privacy and scientific usability. This interest is not limited to HEP and extends out to other scientific fields due to the fast data increase: for example, "big data" is a characteristic of modern genomics, energy and financial...
-
Martin Adam (Acad. of Sciences of the Czech Rep. (CZ))10/07/2018, 15:30
With the explosion of the number of distributed applications, a new dynamic server environment has emerged, grouping servers into clusters whose utilization depends on the current demand for the application.
To provide reliable and smooth services it is crucial to detect and fix possible erratic behavior of individual servers in these clusters. Use of standard techniques for this purpose delivers...
-
David Rohr (CERN)10/07/2018, 15:30
In LHC Run 3, ALICE will increase the data taking rate significantly, to 50 kHz continuous read-out of minimum bias Pb-Pb collisions.
The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, and a posterior calibrated asynchronous reconstruction stage.
Many new challenges arise,... -
Paul Millar (DESY)10/07/2018, 15:30
X.509 is the dominant security infrastructure used in WLCG. Although this technology has worked well, it has some issues. One is that, currently, a delegated proxy can do everything the parent credential can do. A stolen "production" proxy could be used from any machine in the world to delete all data owned by that VO on all storage systems in the grid. Generating a delegated X.509...
-
Slava Krutelyov (Univ. of California San Diego (US))10/07/2018, 15:30
CMS offline event reconstruction algorithms cover simulated and acquired data processing starting from the detector raw data on input and providing high level reconstructed objects suitable for analysis. The landscape of supported data types and detector configuration scenarios has been expanding and covers the past and expected future configurations including proton-proton collisions and...
-
Johan Bregeon (Laboratoire Univers et Particules, Université de Montpellier Place Eugène Bataillon - CC 72, CNRS/IN2P3, F-34095 Montpellier, France )10/07/2018, 15:30
The Cherenkov Telescope Array (CTA), currently under construction, is the next-generation instrument in the field of very high energy gamma-ray astronomy. The first data are expected by the end of 2018, while the scientific operations will start in 2022 for a duration of about 30 years. In order to characterise the instrument response to the Cherenkov light emitted by atmospheric cosmic ray...
-
Mr Ignacio Heredia Cacha (Instituto de Física de Cantabria)10/07/2018, 15:45Track 6 – Machine learning and physics analysis presentation
The application of deep learning techniques using convolutional neural networks to the classification of particle collisions in High Energy Physics is explored. An intuitive approach to transform physical variables, like momenta of particles and jets, into a single image that captures the relevant information, is proposed. The idea is tested using a well known deep learning framework on a... -
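As a toy illustration of the kind of transformation described (an assumption about the general approach, not the authors' actual method), one can histogram each particle's transverse momentum into an eta-phi grid, producing a single "image" per event:

```python
import math

def event_to_image(particles, n_eta=8, n_phi=8, eta_max=2.5):
    """Histogram particle pT into an (n_eta x n_phi) grid: a crude detector image.
    particles: list of (pt, eta, phi) tuples with phi in [0, 2*pi)."""
    image = [[0.0] * n_phi for _ in range(n_eta)]
    for pt, eta, phi in particles:
        if abs(eta) >= eta_max:
            continue  # outside the detector acceptance
        i = int((eta + eta_max) / (2 * eta_max) * n_eta)
        j = int(phi / (2 * math.pi) * n_phi)
        image[i][j] += pt  # pixel intensity = summed transverse momentum
    return image
```

The resulting grid can then be fed to a convolutional network like any other single-channel image.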
Herve Rousseau (CERN)10/07/2018, 15:45
The Ceph File System (CephFS) is a software-defined network filesystem built upon the RADOS object store. In the Jewel and Luminous releases, CephFS was labeled as production ready with horizontally scalable metadata performance. This paper seeks to evaluate that statement in relation to both the HPC and general IT infrastructure needs at CERN. We highlight the key metrics required by four...
-
Mr Nicolas Liampotis (Greek Research and Technology Network - GRNET)10/07/2018, 15:45
The European Open Science Cloud (EOSC) aims to enable trusted access to services and the re-use of shared scientific data across disciplinary, social and geographical borders. The EOSC-hub will realise the EOSC infrastructure as an ecosystem of research e-Infrastructures leveraging existing national and European investments in digital research infrastructures. EGI Check-in and EUDAT B2ACCESS...
-
Hadrien Benjamin Grasland (Université Paris-Saclay (FR)), Bruno Lathuilière10/07/2018, 15:45
Numerical stability is not only critical to the correctness of scientific computations, but also has a direct impact on their software efficiency as it affects the convergence of iterative methods and the available choices of floating-point precision.
Verrou is a Valgrind-based tool which challenges the stability of floating-point code by injecting random rounding errors in computations (a...
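To illustrate the principle (Verrou itself instruments a binary's floating-point instructions via Valgrind; the snippet below merely mimics the effect in pure Python), one can perturb every addition with a random relative error of the order of the rounding error and look at the spread of results over repeated runs:

```python
import random

def noisy_add(x, y, rel_eps=2**-23, rng=random):
    """Addition with a random relative perturbation, mimicking a different
    rounding choice at single precision (illustration only)."""
    s = x + y
    return s * (1.0 + rng.uniform(-rel_eps, rel_eps))

def noisy_sum(values, trials=50, seed=0):
    """Repeat a naive summation under perturbed rounding; the spread of the
    results indicates how numerically stable the summation is."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        acc = 0.0
        for v in values:
            acc = noisy_add(acc, v, rng=rng)
        results.append(acc)
    return min(results), max(results)
```

A large spread across trials flags a summation whose result is dominated by rounding, which is the kind of instability Verrou's random-rounding backends expose.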
-
David Lawrence (Jefferson Lab)10/07/2018, 15:45
Development of the JANA multi-threaded event processing framework began in 2005. Its primary application has been for GlueX, a major Nuclear Physics experiment at Jefferson Lab. Production data taking began in 2016 and JANA has been highly successful in analyzing that data on the JLab computing farm. Work has now begun on JANA2, a near complete rewrite emphasizing features targeted for large...
-
Baosong Shan (Beihang University (CN))10/07/2018, 15:45
The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. The Science Operation Centre is in charge of the offline computing for the AMS experiment, including flight data production, Monte-Carlo simulation, data management, data backup, etc....
-
Illya Shapoval (Lawrence Berkeley National Laboratory)10/07/2018, 15:45
We have entered the Noisy Intermediate-Scale Quantum era. A plethora of quantum processor prototypes allow evaluation of the potential of the Quantum Computing paradigm in applications to pressing computational problems of the future. Growing data input rates and detector resolution foreseen in High-Energy LHC (2030s) experiments expose the often high time and/or space complexity of classical...
-
Volker Friese (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))10/07/2018, 16:00
In position-sensitive detectors with segmented readout (pixels or strips), charged particles in general activate several adjacent read-out channels. The first step in the reconstruction of the hit position is thus to identify clusters of active channels associated with one particle crossing the detector. In conventionally triggered systems, where the association of raw data to events is given by...
-
Philipp Sitzmann10/07/2018, 16:00
To study the performance of the Micro Vertex Detector (MVD), a fully modularized framework has been developed. The main goals of this framework have been easy adaptability to new sensor specifications or changes in the geometry, under tight constraints on performance and memory usage.
To achieve these goals a framework has been built which... -
Dimitrios Loukas (Nat. Cent. for Sci. Res. Demokritos (GR))10/07/2018, 16:00
The Historic Data Quality Monitor (HDQM) of the CMS experiment is a framework developed by the Tracker group of the CMS collaboration that permits web-based monitoring of the time evolution of measurements (S/N ratio, cluster size etc.) in the Tracker silicon micro-strip and pixel detectors. In addition, it provides a flexible way to extend HDQM to the other detector systems...
-
Walter Lampl (University of Arizona (US))10/07/2018, 16:00
The offline software framework of the ATLAS experiment (Athena) consists of many small components of various types like Algorithm, Tool or Service. To assemble these components into an executable application for event processing, a dedicated configuration step is necessary. The configuration of a particular job depends on the workflow (simulation, reconstruction, high-level trigger, overlay,...
-
Mr Sean Murray (University of Cape Town (ZA))10/07/2018, 16:00
Muon reconstruction for ALICE is currently done entirely offline. In Run 3 it is supposed to move online, with ALICE running in continuous readout at a minimum bias Pb-Pb interaction rate of 50 kHz.
There are numerous obstacles to getting the muon software to achieve the required performance, with the muon cluster finder being replaced and moved to run on a GPU inside the new O2 computing...
-
David Nonso Ojika (University of Florida (US))10/07/2018, 16:00
We introduce SWiF - Simplified Workload-intuitive Framework - a workload-centric, application programming framework designed to simplify the large-scale deployment of FPGAs in end-to-end applications. SWiF intelligently mediates access to shared resources by orchestrating the distribution and scheduling of tasks across a heterogeneous mix of FPGA and CPU resources in order to improve...
-
Armenuhi Abramyan (A.Alikhanyan National Science Laboratory (AM)), Narine Manukyan (A.Alikhanyan National Laboratory (AM))10/07/2018, 16:00
The LHC experiments produce petabytes of data each year, which must be stored, processed and analyzed. This requires a significant amount of storage and computing resources. In addition, the requirements on these resources increase with each LHC running period.
In order to predict the resource usage requirements of the ALICE Experiment for a particular LHC Run... -
Wojciech Jan Krzemien (National Centre for Nuclear Research (PL))10/07/2018, 16:00
The Message Queue architecture is an asynchronous communication scheme that provides an attractive solution for certain scenarios in the distributed computing model. The introduction of an intermediate component (the queue) between the interacting processes allows the end-points to be decoupled, making the system more flexible and providing high scalability and redundancy. The message queue...
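The decoupling can be illustrated with Python's standard library queue, standing in for a real message broker; the "processing" step here is invented:

```python
import queue
import threading

def producer(q, n):
    for i in range(n):
        q.put(i)      # the producer never waits for the consumer directly
    q.put(None)       # sentinel marking the end of the stream

def consumer(q, out):
    while (item := q.get()) is not None:
        out.append(item * 2)   # stand-in for real processing

q = queue.Queue(maxsize=8)     # a bounded queue applies back-pressure
out = []
t1 = threading.Thread(target=producer, args=(q, 5))
t2 = threading.Thread(target=consumer, args=(q, out))
t1.start(); t2.start(); t1.join(); t2.join()
```

Because the two sides only share the queue, either can be restarted, scaled out, or replaced without the other noticing.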
-
Matthias Jochen Schnepf (KIT - Karlsruhe Institute of Technology (DE))10/07/2018, 16:00
The GridKa Tier 1 data and computing center hosts a significant share of WLCG processing resources. Providing these resources to all major LHC and other VOs requires an efficient, scalable and reliable cluster management. To satisfy this, GridKa has recently migrated its batch resources from CREAM-CE and PBS to ARC-CE and HTCondor. This contribution discusses the key highlights of the adoption...
-
Mikhail Titov (National Research Centre Kurchatov Institute (RU))10/07/2018, 16:00
Modern workload management systems that are responsible for central data production and processing in High Energy and Nuclear Physics experiments have highly complicated architectures and require a specialized control service for resource and processing components balancing. Such a service represents a comprehensive set of analytical tools, management utilities and monitoring views aimed at...
-
Nikita Balashov (JINR)10/07/2018, 16:00
IaaS clouds brought us greater flexibility in managing computing infrastructures, enabling us to mix different computing environments (e.g. Grid systems, web-servers and even personal desktop-like systems) in the form of virtual machines (VMs) within the same hardware. The new paradigm automatically brought an efficiency increase by switching from using single-task dedicated...
-
Johannes Lehrbach (Johann-Wolfgang-Goethe Univ. (DE))10/07/2018, 16:00
ALICE (A Large Ion Collider Experiment) is one of the four big experiments at the Large Hadron Collider (LHC). For ALICE Run 3 there will be a major upgrade for several detectors as well as the compute infrastructure with a combined Online-Offline computing system (O2) to support continuous readout at much higher data rates than before (3TB/s). The ALICE Time Projection Chamber...
-
AlphaTwirl: a python library for summarizing event data into multi-dimensional categorical data - Dr Tai Sakuma (University of Bristol (GB))10/07/2018, 16:00
AlphaTwirl is a python library that loops over event data and summarizes them into multi-dimensional categorical (binned) data as data frames. Event data, input to AlphaTwirl, are data with one entry (or row) for one event: for example, data in ROOT TTree with one entry per collision event of an LHC experiment. Event data are often large -- too large to be loaded in memory -- because they have...
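The summarization pattern can be sketched in plain Python (this is not AlphaTwirl's actual API; the variable names and binning below are invented): stream over events and keep only the small table of per-bin counts in memory, never the events themselves:

```python
from collections import Counter

def summarize(events, binnings):
    """Stream over events and build multi-dimensional categorical counts.
    events: iterable of dicts; binnings: {variable: bin_function}."""
    counts = Counter()
    for event in events:
        key = tuple(fn(event[var]) for var, fn in binnings.items())
        counts[key] += 1
    return counts

# toy example: bin jet multiplicity as-is and MET into 50 GeV bins
events = [{"njet": 2, "met": 120.0},
          {"njet": 2, "met": 130.0},
          {"njet": 3, "met": 40.0}]
table = summarize(events, {"njet": lambda n: n,
                           "met": lambda m: 50 * int(m // 50)})
```

The resulting table has one row per occupied bin, small enough to load into a data frame regardless of how many events were streamed.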
-
Juraj Smiesko (Comenius University (SK))10/07/2018, 16:00
The ATLAS experiment records data from the proton-proton collisions produced by the Large Hadron Collider (LHC). The Tile Calorimeter is the hadronic sampling calorimeter of ATLAS in the region |eta| < 1.7. It uses iron absorbers and scintillators as active material. Jointly with the other calorimeters it is designed for reconstruction of hadrons, jets, tau-particles and missing transverse...
-
David Colling (Imperial College (GB))10/07/2018, 16:00
Many areas of academic research are increasingly catching up with the LHC experiments when it comes to data volumes, and just as in particle physics they require large data sets to be moved between analysis locations. The LHC experiments have built a global e-Infrastructure in order to handle hundreds of petabytes of data and massive compute requirements. Yet, there is nothing particle physics... -
Dr Malachi Schram, Malachi Schram (Pacific Northwest National Laboratory)10/07/2018, 16:00
We investigate novel approaches using Deep Learning (DL) for efficient execution of workflows on distributed resources. Specifically, we studied the use of DL for job performance prediction, performance classification, and anomaly detection to improve the utilization of the computing resources.
- Performance prediction:
- capture performance of workflows on multiple resources
-...
-
Chris Lee (University of Cape Town (ZA))10/07/2018, 16:00
The ATLAS Distributed Computing (ADC) Project is responsible for the off-line processing of data produced by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. It facilitates data and workload management for ATLAS computing on the Worldwide LHC Computing Grid (WLCG).
ADC Central Services operations (CSops) is a vital part of ADC, responsible for the deployment and configuration...
-
Fernando Harald Barreiro Megino (University of Texas at Arlington)10/07/2018, 16:00
PanDA (Production and Distributed Analysis) is the workload management system for ATLAS across the Worldwide LHC Computing Grid. While analysis tasks are submitted to PanDA by over a thousand users following personal schedules (e.g. PhD or conference deadlines), production campaigns are scheduled by a central Physics Coordination group based on the organization’s calendar. The Physics...
-
Alexander Undrus (Brookhaven National Laboratory (US))10/07/2018, 16:00
PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However their...
-
Michal Svatos (Acad. of Sciences of the Czech Rep. (CZ))10/07/2018, 16:00
The Czech national HPC center IT4Innovations located in Ostrava provides two HPC systems, Anselm and Salomon. The Salomon HPC is amongst the hundred most powerful supercomputers on Earth since its commissioning in 2015. Both clusters were tested for usage by the ATLAS experiment for running simulation jobs. Several thousand core hours were allocated to the project for tests, but the main aim...
-
David Dossett (University of Melbourne)10/07/2018, 16:00
In 2018 the Belle II detector will begin collecting data from $e^+e^-$ collisions at the SuperKEKB electron-positron collider at the High Energy Accelerator Research Organization (KEK, Tsukuba, Japan). Belle II aims to collect a data sample 50 times larger than the previous generation of B-Factories, taking advantage of the SuperKEKB design luminosity of $8\times10^{35} cm^{-2} s^{-1}$.
It is...
-
Andre Sailer (CERN)10/07/2018, 16:00
Creating software releases is one of the more tedious occupations in the life of a software developer. For this purpose we have tried to automate as many of the repetitive tasks involved, from getting the commits to running the software, as possible. For this simplification we rely in large parts on free collaborative services built around GitHub: issue tracking, code review (GitHub),... -
Ilaria Vai (Universita and INFN (IT))10/07/2018, 16:00
One of the main challenges the CMS collaboration must overcome during the phase-2 upgrade is the radiation damage to the detectors from the high integrated luminosity of the LHC and the very high pileup. The LHC will produce collisions at a rate of about 5x10^9/s. The particles emerging from these collisions and the radioactivity they induce will cause significant damage to the detectors and...
-
Alex Iribarren (CERN), Julien Leduc (CERN)10/07/2018, 16:00
CERN's current Backup and Archive Service hosts 11 PB of data in more than 2.1 billion files. We have over 500 clients which back up or restore an average of 80 TB of data each day. At the current growth rate, we expect to have about 13 PB by the end of 2018.
In this contribution we present CERN's Backup and Archive Service based on IBM Spectrum Protect (previously known as Tivoli Storage...
-
Oliver Lantwin (Imperial College (GB))10/07/2018, 16:00
The SHiP experiment is a new general-purpose fixed-target experiment designed to complement collider experiments in the search for new physics. A 400 GeV/c proton beam from the CERN SPS will be dumped on a dense target to accumulate $2\times10^{20}$ protons on target in five years.
A crucial part of the experiment is the active muon shield, which allows the detector to operate at a very high...
-
Dr Martin Ritter (LMU / Cluster Universe)10/07/2018, 16:00
Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. This software is used by many physicists for their analysis, many of whom are students with no prior experience in HEP software. A beginner-friendly and up-to-date...
-
Rok Pestotnik (Jozef Stefan Institute (SI))10/07/2018, 16:00
Several data samples from the Belle II experiment will be available to the general public as part of the experiment's outreach activities. Belle2Lab is designed as an interactive graphical user interface to reconstructed particles, offering users basic particle selection tools. The tool is based on a Blockly JavaScript graphical code generator and can be run in an HTML5-capable browser. It allows...
-
Leo Piilonen (Virginia Tech)10/07/2018, 16:00
I describe a novel interactive virtual reality visualization of subatomic particle physics, designed as an educational tool for learning about and exploring the subatomic particle collision events of the Belle II experiment. The visualization is designed for untethered, locomotive virtual reality, allowing multiple simultaneous users to walk naturally through a virtual model of the Belle II...
-
David Yu (Brookhaven National Laboratory (US))10/07/2018, 16:00
Tape is an excellent choice for archival storage because of the capacity, cost per GB and long retention intervals, but its main drawback is the slow access time due to the sequential nature of the medium. Modern enterprise tape drives now support Recommended Access Ordering (RAO), which is designed to improve recall/retrieval times.
BNL's mass storage system currently holds more than 100 PB of...
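The benefit of ordered recalls can be seen with a toy cost model (a deliberate simplification: real RAO is computed by the drive itself and also accounts for wraps and bands on the tape):

```python
def recall_cost(positions, seek_per_unit=1.0):
    """Total seek 'cost' of recalling files at the given tape positions in order,
    starting from the beginning of the tape."""
    head, cost = 0, 0.0
    for p in positions:
        cost += abs(p - head) * seek_per_unit
        head = p
    return cost

def reorder_recalls(requests):
    """Naive stand-in for RAO: serve recalls in tape-position order."""
    return sorted(requests)

requests = [900, 10, 500, 20]
# submission order seeks back and forth; position order sweeps the tape once
```

Even this one-dimensional model shows why batching and reordering recalls is the main lever for tape retrieval performance.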
-
Gianluca Cerminara (CERN)10/07/2018, 16:00
Many of the workflows in the CMS offline operation are designed around the concept of acquisition of a run: a period of data-taking with stable detector and accelerator conditions. The capability of integrating statistics across several runs is an asset for statistically limited monitoring and calibration workflows. Crossing run boundaries requires careful evaluation of the conditions of the...
-
Mr Fabian Lambert (LPSC Grenoble IN2P3/CNRS (FR))10/07/2018, 16:00
AMI (ATLAS Metadata Interface) is a generic ecosystem for metadata aggregation, transformation and cataloguing. Often, it is interesting to share up-to-date metadata with other content services such as wikis. Here, we describe the cross-domain solution implemented in the AMI Web Framework: a system of embeddable controls, communicating with the central AMI service and based on the AJAX and... -
Fabrizio Chiarello (INFN - National Institute for Nuclear Physics), Dr Sergio Traldi (INFN - Sezione di Padova), Paolo Andreetto (Universita e INFN, Padova (IT))10/07/2018, 16:00
The analysis and understanding of resources utilization in shared infrastructures, such as cloud environments, is crucial in order to provide better performance, administration and capacity planning.
The management of resource usage of the OpenStack-based cloud infrastructures hosted at INFN-Padova, the Cloud Area Padovana and the INFN-PADOVA-STACK instance of the EGI Federated Cloud, started...
-
Mr Flavio Costa (CERN), Mr Esteban Gabancho (CERN), Mr Jose Benito Gonzalez Lopez (CERN), Mrs Ludmila Marian (CERN), Mr Nicola Tarocco (CERN), Mr Sebastian Witowski (CERN)10/07/2018, 16:00
CERN Document Server (CDS, cds.cern.ch) is the CERN Institutional Repository based on the Invenio open source digital repository framework. It is a heterogeneous repository, containing more than 2 million records, including research publications, audiovisual material, images, and the CERN archives. Its mission is to store and preserve all the content produced at CERN as well as to make it...
-
Adam Aurisano (University of Cincinnati)10/07/2018, 16:00
The observation of neutrino oscillations provides evidence of physics beyond the Standard Model, and the precise measurement of those oscillations remains an essential goal for the field of particle physics. The NOvA experiment is a long-baseline neutrino experiment composed of two finely-segmented liquid-scintillator detectors located off-axis from the NuMI muon-neutrino beam having as its...
-
Giacomo Cucciati (CERN)10/07/2018, 16:00
The Large Hadron Collider (LHC) at CERN Geneva has entered the Run 2 era, colliding protons at a center of mass energy of 13 TeV at high instantaneous luminosity. The Compact Muon Solenoid (CMS) is a general-purpose particle detector experiment at the LHC. The CMS Electromagnetic Calorimeter (ECAL) has been designed to achieve excellent energy and position resolution for electrons and photons....
-
Giacomo Cucciati (CERN)10/07/2018, 16:00
In 2017 the Large Hadron Collider (LHC) at CERN provided an astonishing 50 fb-1 of proton-proton collisions at a center of mass energy of 13 TeV. The Compact Muon Solenoid (CMS) detector was able to record 90.3% of this data. During this period, the CMS Electromagnetic Calorimeter (ECAL), based on 75000 scintillating PbWO4 crystals and a silicon and lead preshower, has continued...
-
Andrea Perrotta (Universita e INFN, Bologna (IT))10/07/2018, 16:00
LHC Run 2 began in April 2015 with the restart of collisions in the CERN Large Hadron Collider. From the perspective of offline event reconstruction, the most relevant detector updates appeared in 2017: the restructuring of the pixel detector, with an additional layer closer to the beams, and the improved photodetectors and readout chips for the hadron calorimeter, which will...
-
Jean-Roch Vlimant (California Institute of Technology (US))10/07/2018, 16:00
The central production system of CMS is utilizing the LHC grid and effectively about 200 thousand cores, over about a hundred computing centers worldwide. Such a wide and unique distributed computing system is bound to sustain a certain rate of failures of various types. These are appropriately addressed with site administrators a posteriori. With up to 50 different campaigns ongoing...
-
Nikos Kasioumis (CERN)10/07/2018, 16:00
For over a year and a half we ran a CERN-wide trial of collaborative authoring platforms, understanding how the CERN community authors and co-authors, gathering the user needs and requirements and evaluating the available options. As a result, the Overleaf and ShareLaTeX cloud platforms are now fully available to the CERN Community. First, we will explain our user-centered approach...
-
William Kalderon (Lund University (SE))10/07/2018, 16:00
The LHC delivers an unprecedented number of proton-proton collisions to its experiments. In kinematic regimes first studied by earlier generations of collider experiments, the limiting factor to more deeply probing for new physics can be the online and offline computing, and offline storage, requirements for the recording and analysis of this data. In this contribution, we describe a... -
Zhechka Toteva (CERN)10/07/2018, 16:00
CERN is using an increasing number of DNS-based load balanced aliases (currently over 600). We explain the Go-based concurrent implementation of the Load Balancing Daemon (LBD), how it is being progressively deployed using Puppet and how concurrency greatly improves scalability, ultimately allowing a single master-slave pair of OpenStack VMs to serve all LB aliases. We explain the Lbclient...
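The LBD itself is written in Go; as a language-neutral sketch of the concurrency idea (the hostnames, the canned load metrics and the probing function below are all invented), each alias member is probed in parallel and the least-loaded healthy members become the alias's DNS records:

```python
from concurrent.futures import ThreadPoolExecutor

def check(host):
    """Probe one alias member and return (host, load metric). A real daemon
    would query a metric endpoint; here the loads are canned for illustration."""
    canned = {"node1": 42, "node2": 7, "node3": None}  # None = unhealthy
    return host, canned[host]

def best_members(hosts, n=2):
    """Probe all members concurrently; return the n least-loaded healthy ones."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        results = dict(pool.map(check, hosts))
    healthy = [h for h, load in results.items() if load is not None]
    return sorted(healthy, key=results.get)[:n]
```

Probing members concurrently rather than sequentially is what lets one daemon keep hundreds of aliases fresh within a short update period.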
-
Charles Leggett (Lawrence Berkeley National Lab. (US))10/07/2018, 16:00
In preparation for Run 3 of the LHC, the ATLAS experiment is migrating its offline software to use a multithreaded framework, which will allow multiple events to be processed simultaneously. This implies that the handling of non-event, time-dependent (conditions) data, such as calibrations and geometry, must also be extended to allow for multiple versions of such data to exist... -
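A minimal sketch of the underlying idea, assuming nothing about Athena's actual interfaces: a conditions container keyed by interval of validity, so concurrently processed events from different runs each resolve the version valid for them:

```python
import bisect

class ConditionsStore:
    """Map a run number to the conditions payload valid for it. Several
    validity intervals coexist, so threads processing events from different
    runs each look up their own version instead of sharing one global state."""
    def __init__(self):
        self._starts = []    # sorted interval start points (run numbers)
        self._payloads = []
    def add(self, start_run, payload):
        i = bisect.bisect_left(self._starts, start_run)
        self._starts.insert(i, start_run)
        self._payloads.insert(i, payload)
    def get(self, run):
        i = bisect.bisect_right(self._starts, run) - 1
        if i < 0:
            raise KeyError(f"no conditions for run {run}")
        return self._payloads[i]

store = ConditionsStore()
store.add(1, {"alignment": "v1"})
store.add(100, {"alignment": "v2"})
```

Because lookups never mutate the store, many event-processing threads can read from it at once; only the (rare) insertion of a new interval needs synchronization.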
Alessandra Forti (University of Manchester (GB))10/07/2018, 16:00
Containerization is a lightweight form of virtualization that allows reproducibility and isolation, responding to a number of long-standing use cases in running the ATLAS software on the grid. The development of Singularity, in particular its capability to run as a standalone executable, allows containers to be integrated in the ATLAS (and other experiments') submission framework....
-
Oksana Shadura (University of Nebraska Lincoln)10/07/2018, 16:00
Foundational software libraries such as ROOT are under intense pressure to avoid software regression, including performance regressions. Continuous performance benchmarking, as a part of continuous integration and other code quality testing, is an industry best-practice to understand how the performance of a software product evolves over time. We present a framework, built from industry best...
-
Xiaoguang Yue (Ruprecht Karls Universitaet Heidelberg (DE))10/07/2018, 16:00
The LHC has planned a series of upgrades culminating in the High Luminosity LHC (HL-LHC) which will have an average luminosity 5-7 times larger than the design LHC value. The Tile Calorimeter (TileCal) is the hadronic sampling calorimeter installed in the central region of the ATLAS detector. It uses iron absorbers and scintillators as active material. TileCal will undergo a substantial...
-
Maxim Potekhin (Brookhaven National Laboratory (US))10/07/2018, 16:00
The DUNE Collaboration is pursuing an experimental program (named protoDUNE) which involves a beam test of two large-scale prototypes of the DUNE Far Detector at CERN in 2018. The volume of data to be collected by the protoDUNE-SP (the single-phase detector) will amount to a few petabytes and the sustained rate of data sent to mass storage will be in the range of a few hundred MB per second.... -
Enrico Fattibene (INFN - National Institute for Nuclear Physics)10/07/2018, 16:00
Since the current data infrastructure of the HEP experiments is based on GridFTP, most computing centres have adapted and based their access to the data on X.509. This is an issue for smaller experiments which do not have the resources to train their researchers in the complexities of X.509 certificates and which would clearly prefer an approach based on username/password.
On the...
-
Christian Voss (Rheinisch-Westfaelische Tech. Hoch. (DE))10/07/2018, 16:00
Various sites providing storage for experiments in high energy particle physics and photon science deploy dCache as a flexible and modern large scale storage system. As such, dCache is a complex and elaborate software framework, which needs test-driven development in order to ensure a smooth and bug-free release cycle. So far, tests for dCache are performed on dedicated hosts emulating the...
-
Andrey Lebedev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))10/07/2018, 16:00
The Dynamic Deployment System (DDS) is a tool-set that automates and significantly simplifies the deployment of user-defined processes and their dependencies on any resource management system (RMS) using a given topology. DDS is a part of the ALFA framework.
A number of basic concepts are taken into account in DDS. DDS implements a single responsibility principle command line tool-set and API....
-
Dr Francesco Tenchini (University of Melbourne)10/07/2018, 16:00
The Belle II detector will begin its data taking phase in 2018. Featuring a state of the art vertex detector with innovative pixel sensors, it will record collisions of e+e- beams from the SuperKEKB accelerator, which is slated to provide luminosities 40x higher than KEKB.
This large amount of data will come at the price of an increased beam background, as well as an operating point providing... -
Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))10/07/2018, 16:00
CMS Tier 3 centers, frequently located at universities, play an important role in the physics analysis of CMS data. Although different computing resources are often available at universities, meeting all requirements to deploy a valid Tier 3 able to run CMS workflows can be challenging in certain scenarios. For instance, providing the right operating system (OS) with access to the CERNVM File...
-
Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE))10/07/2018, 16:00
We investigate the automatic deployment and scaling of grid infrastructure components as virtual machines in OpenStack. To optimize the CVMFS usage per hypervisor, we study different approaches to share CVMFS caches and cache VMs between multiple client VMs.
For monitoring, we study container solutions and extend these to monitor non-containerized applications within cgroups resource... -
Tadashi Murakami (KEK)10/07/2018, 16:00
It is difficult to promote cyber security measures in research institutes, especially in a DMZ network that allows connections from outside networks. This difficulty mainly comes from two kinds of variety. One is the varied requirements of the servers operated by each research group. The other is the divergent skill levels among server administrators. Unified manners rarely fit managing those...
-
Gioacchino Vino (Universita e INFN, Bari (IT))10/07/2018, 16:00
Besides their increasing complexity and variety of provided resources and services, large data-centers nowadays often belong to a distributed network and need non-conventional monitoring tools. This contribution describes the implementation of a monitoring system able to provide active support for problem solving to the system administrators.
The key components are information collection and... -
Andrew John Washbrook (The University of Edinburgh (GB))10/07/2018, 16:00
There is a growing need to incorporate sustainable software practices into High Energy Physics. Widely supported tools offering source code management, continuous integration, unit testing and software quality assurance can greatly help improve standards. However, for resource-limited projects there is an understandable inertia in deviating effort to cover systems maintenance and application...
-
Sang Un Ahn (Korea Institute of Science & Technology Information (KR))10/07/2018, 16:00
The Standard Model in particle physics is refined. However, new physics beyond the Standard Model, such as dark matter, requires thousands to millions of times more simulation events than the Standard Model. This demands software development, especially of simulation toolkits. In addition, computing is evolving. It requires the development of the...
-
Stefano Stalio (INFN)10/07/2018, 16:00
INFN Corporate Cloud (INFN-CC) is a geographically distributed private cloud infrastructure, based on OpenStack, that has recently been deployed in three of the major INFN data-centres in Italy. INFN-CC has a twofold purpose: on one hand, its fully redundant architecture and its resiliency characteristics make it the perfect environment for providing critical network services for the...
-
Alessandra Doria (INFN, Napoli (IT))10/07/2018, 16:00
The experience gained in several years of storage system administration has shown that the WLCG distributed grid infrastructure performs very well for the needs of the LHC experiments. However, an excessive number of storage sites leads to inefficiencies in the system administration because of the need for experienced manpower at each site and the increased burden on the central...
-
Fabrizio Furano (CERN)10/07/2018, 16:00
Dynafed is a system that allows the creation of flexible and seamless storage federations out of participating sites that expose WebDAV, HTTP, S3 or Azure interfaces. The core components have been considered stable for a few years, and the recent focus has been on supporting various important initiatives willing to exploit the potential of Cloud storage in the context of Grid computing for various...
-
Valerio Formato (Universita e INFN, Perugia (IT))10/07/2018, 16:00
Replicability and efficiency of data processing on the same data samples are a major challenge for the analysis of data produced by HEP experiments. High-level data analyzed by end-users are typically produced as a subset of the whole experiment data sample to study interesting selections of data (streams). For standard applications, streams may eventually be copied from servers and analyzed on...
-
Andrey Baginyan (Joint Institute for Nuclear Research (RU))10/07/2018, 16:00
This work is devoted to the creation of the first module of the data processing center at the Joint Institute for Nuclear Research for modeling and processing experiments. The issues related to handling the enormous data flow from the LHC experimental installations and the problems of distributed storage are considered. The article presents a hierarchical diagram of the network farm and a...
-
Dr Geonmo Ryu (Korea Institute of Science & Technology Information (KR))10/07/2018, 16:00
WLCG, a Grid computing technology used by CERN researchers, is based on two kinds of middleware. One of them, UMD middleware, is widely used in many European research groups to build a grid computing environment. The most widely used system in the UMD middleware environment was the combination of CREAM-CE and the batch job manager "torque". In recent years, however, there have been many...
-
Joaquin Ignacio Bogado Garcia (Universidad Nacional de La Plata (AR))10/07/2018, 16:00
Transfer Time To Complete (T³C) is a new extension for the data management system Rucio that allows predictions of the duration of a file transfer. The extension has a modular architecture which supports predictions based on models ranging from simple to sophisticated, depending on available data and computation power. The ability to predict file transfer times with reasonable...
-
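As an illustration of the simplest end of the model spectrum described above, a per-link transfer time can be modelled as a fixed per-transfer overhead plus size divided by an effective throughput, fitted to past transfers by least squares. Everything below (function names, the synthetic history) is a hypothetical sketch, not T³C's actual code.

```python
# Illustrative two-parameter model t = overhead + size / throughput, fitted to
# past transfers of one source-destination link by ordinary least squares.
def fit_link_model(transfers):
    """transfers: list of (bytes, seconds) pairs observed on one link."""
    n = len(transfers)
    sx = sum(b for b, _ in transfers)
    sy = sum(t for _, t in transfers)
    sxx = sum(b * b for b, _ in transfers)
    sxy = sum(b * t for b, t in transfers)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # seconds per byte
    intercept = (sy - slope * sx) / n                   # per-transfer overhead
    return intercept, slope

def predict_seconds(model, size_bytes):
    intercept, slope = model
    return intercept + slope * size_bytes

# Synthetic history: ~5 s overhead on a ~100 MB/s link
history = [(1e8, 6.0), (5e8, 10.0), (1e9, 15.0)]
model = fit_link_model(history)
print(round(predict_seconds(model, 2e9)))  # 25 for this synthetic history
```

A production predictor would of course also condition on queue depth, concurrent transfers and time of day; this only shows the shape of the baseline model.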
Mr Enrico Fattibene (INFN - CNAF)10/07/2018, 16:00
CNAF is the national center of INFN for IT services. The Tier-1 data center operated at CNAF provides computing and storage resources mainly to scientific communities such as those working on the four LHC experiments and 30 more experiments in which INFN is involved.
In past years, every CNAF department used to choose its preferred tools for monitoring, accounting and alerting. In... -
Benjamin Fischer (RWTH Aachen University)10/07/2018, 16:00
VISPA (Visual Physics Analysis) is a web platform that enables users to work on any SSH-reachable resource using just their web browser. It is used successfully in research and education for HEP data analysis.
The emerging JupyterLab is an ideal choice for a comprehensive, browser-based, and extensible work environment, and we seek to unify it with the efforts of the VISPA project. The primary... -
Stefan Nicolae Stancu (CERN)10/07/2018, 16:00
The CERN IT Communication Systems group is in charge of providing various wired and wireless communication services across the laboratory. Among them, the group designs, installs and manages a large complex of networks: external connectivity, the data-centre network (serving central services and the WLCG), the campus network (providing connectivity to users on site), and last but not least...
-
Silvio Pardi (INFN)10/07/2018, 16:00
The current level of flexibility reached by Cloud providers enables physicists to take advantage of extra resources to extend the distributed computing infrastructure supporting High Energy Physics experiments. However, the discussion about the optimal usage of such resources is still ongoing. Moreover, because each Cloud provider offers its own interfaces, API set and different...
-
Andrew McNab (University of Manchester)10/07/2018, 16:00
We describe how the Blackett facility at the University of Manchester High Energy Physics group has been extended to provide Docker container and cloud platforms as part of the UKT0 initiative. We show how these new technologies can be managed using the facility's existing fabric management based on Puppet and Foreman. We explain how use of the facility has evolved beyond its origins as a WLCG... -
Edgar Fajardo Hernandez (Univ. of California San Diego (US))10/07/2018, 16:00
A key aspect of pilot-based grid operations is the pilot (glidein) factories. Proper and efficient use of any central building block of the grid infrastructure is essential for operations, and glideinWMS factories are no exception. The monitoring package for glideinWMS factory monitoring was originally developed when the factories were serving a couple of VOs and tens of sites. Nowadays with...
-
Dr Stefano Bagnasco (Istituto Nazionale di Fisica Nucleare, Torino), Sara Vallero (Universita e INFN Torino (IT)), Valentina Zaccolo (Universita e INFN Torino (IT))10/07/2018, 16:00
A small Cloud infrastructure for scientific computing likely operates in a saturated regime, which imposes constraints on the free auto-scaling of applications. Tenants typically pay a priori for a fraction of the overall resources. Within this business model, an advanced scheduling strategy is needed in order to optimize the data centre occupancy.
FaSS, a Fair Share Scheduler service for... -
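The general fair-share idea behind such a scheduler can be sketched by ordering tenants according to how much of their paid-for share they have already consumed. This is an illustrative toy, not FaSS's actual algorithm, and all tenant names and numbers are made up.

```python
# Toy fair-share ordering (illustrative only, not FaSS): each tenant pays for a
# fraction of the resources ("share"); a tenant that has consumed less of its
# share than another is scheduled first.
def fair_share_order(tenants):
    """tenants: dict name -> (share, usage), both as fractions of the total.
    Returns tenant names, most under-served first."""
    def consumed_ratio(name):
        share, usage = tenants[name]
        return usage / share  # lower ratio = further below its entitlement
    return sorted(tenants, key=consumed_ratio)

demo = {"alice": (0.5, 0.10), "bob": (0.3, 0.25), "carol": (0.2, 0.05)}
print(fair_share_order(demo))  # ['alice', 'carol', 'bob']
```

Real fair-share schedulers additionally decay historical usage over time so that past consumption is gradually forgiven; the ratio above is only the core ordering criterion.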
Maksym Zyzak (GSI)10/07/2018, 16:00
The future heavy-ion experiment CBM at the FAIR facility will study the QCD phase diagram in the region of high baryon chemical potential at relatively moderate temperatures, where a complex structure is predicted by modern theories. In order to detect possible signatures of these structures, the physics program of the experiment includes a comprehensive study of extremely rare probes like...
-
David Schultz (University of Wisconsin-Madison)10/07/2018, 16:00
IceCube is a cubic kilometer neutrino detector located at the south pole. IceCube’s simulation and production processing requirements far exceed the number of available CPUs and GPUs in house. Collaboration members commit resources in the form of cluster time at institutions around the world. IceCube also signs up for allocations from large clusters in the United States like XSEDE. All of...
-
Serguei Kolos (University of California Irvine (US))10/07/2018, 16:00
During the next major shutdown from 2019-2021, the ATLAS experiment at the LHC at CERN will adopt the Front-End Link eXchange (FELIX) system as the interface between the data acquisition, detector control and TTC (Timing, Trigger and Control) systems and new or updated trigger and detector front-end electronics. FELIX will function as a router between custom serial links from front end ASICs...
-
Vito Di Benedetto (Fermi National Accelerator Lab. (US))10/07/2018, 16:00
Fermilab is developing the Frontier Experiments RegistRY (FERRY) service that provides a centralized repository for the access control and job management attributes such as batch and storage access policies, quotas, batch priorities and NIS attributes for cluster configuration. This paper describes FERRY architecture, deployment and integration with services that consume the stored...
-
Patrick Meade (University of Wisconsin-Madison)10/07/2018, 16:00
IceCube is a cubic kilometer neutrino detector located at the south pole. Data are processed and filtered in a data center at the south pole. After transfer to a data warehouse in the north, data are further refined through multiple levels of selection and reconstruction to reach analysis samples. So far, the production and curation of these analysis samples has been handled in an ad-hoc way...
-
Dr Guy Barrand (CNRS/IN2P3/LAL)10/07/2018, 16:00
g4tools is a collection of pure header classes intended to be a technical low-level layer of the analysis category introduced in Geant4 release 9.5 to help Geant4 users manage their histograms and ntuples in various file formats. In g4tools bundled with the latest Geant4 release (10.4, December 2017), we introduced a new HDF5 IO driver for histograms and column-wise paged ntuples as well as...
-
Valentin Y Kuznetsov (Cornell University (US))10/07/2018, 16:00
Efficient handling of large data volumes becomes a necessity in today's world. It is driven by the desire to get more insight from the data and to gain a better understanding of user trends, which can be transformed into economic incentives (profits, cost reduction and various optimizations of data workflows and pipelines). In this talk we discuss how modern technologies are transforming a well...
-
Ivan Razumov (Institute for High Energy Physics (RU)), Witold Pokorski (CERN)10/07/2018, 16:00
One of the key factors for the successful development of a physics Monte-Carlo is the ability to properly organize regression testing and validation. Geant4, a world-standard toolkit for HEP detector simulation, is one such example that requires thorough validation. The CERN/SFT group, which contributes to the development, testing, deployment and support of the toolkit, is also responsible for...
-
Mateusz Jacek Goncerz (AGH University of Science and Technology (PL))10/07/2018, 16:00
Central Exclusive Production (CEP) is a class of diffractive processes studied at the Large Hadron Collider that offers a very clean experimental environment for probing the low-energy regime of Quantum Chromodynamics.
As with any other analysis in High Energy Physics, it requires a large amount of simulated Monte Carlo data, usually created by means of so-called MC event generators.... -
Dr Guy Barrand (Laboratoire de l'Accélérateur Linéaire, Université Paris-Sud, CNRS-IN2P3, Orsay, France.)10/07/2018, 16:00
A user: with PAW I had the impression of doing physics; with ROOT I have the impression of typing C++. Then why not return to doing physics?! We will present how gopaw is built, with particular emphasis on its portability, its way of handling multiple file formats (including ROOT/IO and HDF5), its unified graphics based on the inlib/sg scene graph manager (see CHEP 2013 for softinex) and its...
-
Esteban Fullana Torregrosa (Univ. of Valencia and CSIC (ES))10/07/2018, 16:00
ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and otherwise unused cycles on... -
Alessandro De Salvo (Sapienza Universita e INFN, Roma I (IT))10/07/2018, 16:00
The long-standing problem of reconciling the cosmological evidence for the existence of dark matter with the lack of any clear experimental observation of it has recently revived the idea that the new particles are not directly connected with the Standard Model gauge fields, but only through mediator fields or "portals", connecting our world with new "secluded" or "hidden" sectors. One of the...
-
Bruno Heinrich Hoeft (KIT - Karlsruhe Institute of Technology (DE))10/07/2018, 16:00
The LAN and WAN development of DE-KIT will be shown from the very beginning to the current status. DE-KIT is the German Tier-1 center collaborating with the Large Hadron Collider (LHC) at CERN. This includes the local area network capacity ramp-up from 10 Gbps over 40 Gbps to 100 Gbps as well as the wide area connections. It will be demonstrated how the deployed setup serves the current...
-
Dirk Hufnagel (Fermi National Accelerator Lab. (US))10/07/2018, 16:00
The higher energy and luminosity from the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) beyond Run 3, it becomes clear that simply scaling up the current model of CMS computing alone will become economically unfeasible. High Performance Computing (HPC) facilities, widely...
-
Maciej Pawel Szymanski (University of Chinese Academy of Sciences (CN))10/07/2018, 16:00
Software is an essential component of the experiments in High Energy Physics. Because it is upgraded on relatively short timescales, software provides flexibility, but at the same time is susceptible to issues introduced during the development process, which calls for systematic testing. We present recent improvements to LHCbPR, the framework implemented at LHCb to measure physics and...
-
Jaroslava Schovancova (CERN)10/07/2018, 16:00
HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) operations and automation efforts, providing automated resource exclusion and recovery tools that help re-focus operational manpower to areas which...
-
James Letts (Univ. of California San Diego (US))10/07/2018, 16:00
Scheduling multi-core workflows in a global HTCondor pool is a multi-dimensional problem whose solution depends on the requirements of the job payloads, the characteristics of available resources, and the boundary conditions such as fair share and prioritization imposed on the job matching to resources. Within the context of a dedicated task force, CMS has increased significantly the...
-
Pablo Martin Zamora (CERN)10/07/2018, 16:00
Over 8000 Windows PCs are actively used on the CERN site for tasks ranging from controlling the accelerator facilities to processing invoices. PCs are managed through CERN's Computer Management Framework and Group Policies, with configurations deployed based on machine sets and a lot of autonomy left to the end-users. While the generic central configuration works well for the majority of the...
-
Arturo Sanchez Pineda (Abdus Salam Int. Cent. Theor. Phys. (IT))10/07/2018, 16:00
The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection, and conveyance of event data from the front-end electronics to mass storage. Different aspects of the farm management are already accessible via several tools. The status and... -
Frank Berghaus (University of Victoria (CA))10/07/2018, 16:00
Input data for applications that run in cloud computing centres can be stored at remote repositories, typically with multiple copies of the most popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. In this approach, the closest copy of the data is used based on geographical or other...
-
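The closest-copy selection described above can be illustrated with a toy geographical ranking of replicas. The site list and coordinates below are assumptions for the example only; a real federation would also weigh network topology, cost and load, not just great-circle distance.

```python
import math

# Toy replica selection: pick the geographically closest copy of a file
# using the great-circle (haversine) distance. Sites/coordinates are made up.
def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

def closest_replica(client_pos, replicas):
    """replicas: dict of site name -> (latitude, longitude) in degrees."""
    return min(replicas, key=lambda site: haversine_km(client_pos, replicas[site]))

sites = {"CERN": (46.23, 6.05), "TRIUMF": (49.25, -123.23), "BNL": (40.87, -72.87)}
print(closest_replica((48.85, 2.35), sites))  # client near Paris -> CERN
```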
Felix Buhrer (Albert Ludwigs Universitaet Freiburg (DE))10/07/2018, 16:00
High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines...
-
David Martin Clavo (CERN)10/07/2018, 16:00
The Information Technology department at CERN has been using ITIL Service Management methodologies and ServiceNow since early 2011. In recent years, several developments have been accomplished regarding the data centre and service monitoring, as well as status management.
ServiceNow has been integrated with the data centre monitoring infrastructure, via GNI (General Notification...
-
Patrick Meade (University of Wisconsin-Madison)10/07/2018, 16:00
IceCube is a cubic kilometer neutrino detector located at the south pole. Data handling has been managed by three separate applications: JADE, JADE North, and JADE Long Term Archive (JADE-LTA). JADE3 is the new version of JADE that merges these diverse data handling applications into a configurable data handling pipeline ("LEGO® Block JADE"). The reconfigurability of JADE3 has enabled...
-
Serguei Linev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))10/07/2018, 16:00
The new version of JSROOT provides a full implementation of the ROOT binary I/O, now including TTree. The powerful JSROOT.TreeDraw functionality provides a simple way to inspect complex data directly in web browsers, without the need to involve ROOT-based code.
JSROOT is now fully integrated into the Node.js environment. Without binding to any C++ code, one gets direct access to all kinds of ROOT data....
-
Hristo Umaru Mohamed (CERN)10/07/2018, 16:00
Prometheus is a leading open source monitoring and alerting tool. Prometheus's local storage is limited in its scalability and durability, but it integrates very well with other solutions which provide us with robust long term storage. This talk will cover two solutions which interface excellently and do not require us to deal with HBase - KairosDB and Chronix. Intended audience are people who...
-
Zbigniew Baranowski (CERN)10/07/2018, 16:00
The ATLAS EventIndex has been in operation since the beginning of LHC Run 2 in 2015. Like all software projects, its components have been constantly evolving and improving in performance. The main data store in Hadoop, based on MapFiles and HBase, can work for the rest of Run 2 but new solutions are explored for the future. Kudu offers an interesting environment, with a mixture of BigData and...
-
Edgar Fajardo Hernandez (Univ. of California San Diego (US))10/07/2018, 16:00
In the past, several scaling tests have been performed on the HTCondor batch system regarding its job scheduling capabilities. In this talk we report on a first set of scalability measurements of the file transfer capabilities of the HTCondor batch system. Motivated by the GLUEX experiment needs we evaluate the limits and possible use of HTCondor as a solution to transport the output of jobs...
-
Mirena Paneva (University of California Riverside (US))10/07/2018, 16:00
The design of the CMS detector is specially optimized for muon measurements and includes gas-ionization detector technologies to make up the muon system. Cathode strip chambers (CSC) with both tracking and triggering capabilities are installed in the forward region. The first stage of muon reconstruction deals with information from within individual muon chambers and is thus called local...
-
Alessandro Lonardo (Sapienza Universita e INFN, Roma I (IT))10/07/2018, 16:00
In the last few years the European Union has launched several initiatives aiming to support the development of a European-based HPC industrial/academic ecosystem made of scientific and data analysis application experts, software developers and computer technology providers. In this framework the ExaNeSt and EuroExa projects, respectively funded in H2020 research framework program calls...
-
Mikhail Hushchyn (Yandex School of Data Analysis (RU))10/07/2018, 16:00
One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging Cherenkov detectors, the hadronic and electromagnetic calorimeters, and the muon chambers. Charged PID based on the sub-detectors' response is treated as a machine learning problem...
-
Stefano Bagnasco (Istituto Nazionale di Fisica Nucleare, Torino)10/07/2018, 16:00
Current computing paradigms often involve concepts like microservices, containerisation and, of course, Cloud Computing.
Scientific computing facilities, however, are usually conservatively managed through plain batch systems and as such can cater to a limited range of use cases. On the other hand, scientific computing needs are in general orthogonal to each other in several dimensions.
We... -
Dr Stefano Dal Pra (INFN)10/07/2018, 16:00
In recent years, CNAF has worked on a project of Long Term Data Preservation (LTDP) for the CDF experiment, which ran at Fermilab from 1985. Part of this project has the goal of archiving data produced during Run I onto recent and reliable storage devices, in order to preserve their availability for further access through non-obsolete technologies. In this paper, we report and explain the...
-
Cesare Calabria (Universita e INFN, Bari (IT))10/07/2018, 16:00
The CMS muon system presently consists of three detector technologies equipping different regions of the spectrometer. Drift Tube chambers (DT) are installed in the muon system barrel, while Cathode Strip Chambers (CSC) cover the end-caps; both serve as tracking and triggering detectors. Moreover, Resistive Plate Chambers (RPC) complement DT and CSC in barrel and end-caps respectively and are...
-
Dr Marco Verlato (INFN - Sezione di Padova)10/07/2018, 16:00
The Cloud Area Padovana (CAP) has been, since 2014, a scientific IaaS cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. It provides about 1100 logical cores and 50 TB of storage. The entire computing facility, owned by INFN, satisfies the computational and storage demands of more than 100 users working on about 30 research projects, mainly related to...
-
Alexey Poyda (National Research Centre Kurchatov Institute (RU)), Mikhail Titov (National Research Centre Kurchatov Institute (RU))10/07/2018, 16:00
Most supercomputers provide computing resources that are shared between users and projects, with utilization determined by predefined policies, load and quotas. How efficiently a given user or project utilizes these resources depends on factors such as the particular supercomputer policy and the dynamic workload generated by users' activities. The load on a resource is...
-
Cesare Calabria (Universita e INFN, Bari (IT))10/07/2018, 16:00
The CMS muon system presently consists of three detector technologies equipping different regions of the spectrometer. Drift Tube chambers (DT) are installed in the muon system barrel, while Cathode Strip Chambers (CSC) cover the end-caps; both serve as tracking and triggering detectors. Moreover, Resistive Plate Chambers (RPC) complement DT and CSC in barrel and end-caps respectively and are...
-
Dr Andrew McNab (University of Manchester)10/07/2018, 16:00
At the start of 2017, GridPP deployed VacMon, a new monitoring system suitable for recording and visualising the usage of virtual machines and containers at multiple sites. The system uses short JSON messages transmitted by logical machine lifecycle managers such as Vac and Vcycle. These are directed to a VacMon logging service which records the messages in an ElasticSearch database. The... -
Dr Guy Barrand (Laboratoire de l'Accélérateur Linéaire, Université Paris-Sud, CNRS-IN2P3, Orsay, France.)10/07/2018, 16:00
We want to propose here a smooth migration plan for ROOT in order to have, by 2040, at least and at last an acceptable histogram class (a goal clearly not stated in the HSF common white paper for HL-LHC for 2020), but also to have by then a rock-solid foundation for a good part of this toolkit (IO, plotting, graphics, UI, math, etc...). The proposal is going to be technical because centred on a...
-
Christoph Hasse (CERN / Technische Universitaet Dortmund (DE))10/07/2018, 16:00
Starting with Upgrade 1 in 2021, LHCb will move to a purely software-based trigger system. Therefore, the new trigger strategy is to process events at the full rate of 30MHz. Given that the increase of CPU performance has slowed down in recent years, the predicted performance of the software trigger currently falls short of the necessary 30MHz throughput. To cope with this shortfall, LHCb's...
-
Marko Petric (CERN)10/07/2018, 16:00
For a successful experiment, it is of utmost importance to provide a consistent detector description originating from a single source of information. This is also the main motivation behind DD4hep, which addresses detector description in a broad sense including the geometry and the materials used in the device, and additionally parameters describing, e.g., the detection techniques, constants...
-
Zachary Louis Marshall (University of California Berkeley (US))10/07/2018, 16:00
Muons with high momentum -- above 500 GeV/c -- are an important constituent of new physics signatures in many models. Run-2 of the LHC is greatly increasing ATLAS's sensitivity to such signatures thanks to an ever-larger dataset of such particles. The ATLAS Muon Spectrometer chamber alignment contributes significantly to the uncertainty of the reconstruction of these high-momentum objects. The...
-
Othmane Bouhali (Texas A & M University (US))10/07/2018, 16:00
Gas Electron Multiplier (GEM) based detectors have been used in many applications since their introduction in 1997. Large areas of GEM are foreseen in several experiments such as the future upgrade of the CMS muon detection system, where triple GEM based detectors will be installed and operated. During the assembly and operation, GEM foils are stretched in order to keep the vertical distance...
-
Peter Love (Lancaster University (GB))10/07/2018, 16:00
Various workflows used by ATLAS Distributed Computing (ADC) are now using object stores as a convenient storage resource via boto S3 libraries. The load and performance requirement varies widely across the different workflows and for heavier cases it has been useful to understand the limits of the underlying object store implementation. This work describes the performance of various object...
-
Dr Valentina Akishina (Johann-Wolfgang-Goethe Univ. (DE))10/07/2018, 16:00
The CBM experiment is a future fixed-target experiment at FAIR/GSI (Darmstadt, Germany). It is being designed to study heavy-ion collisions at extremely high interaction rates of up to 10 MHz. The experiment will therefore use a novel concept of data processing based on free-streaming, trigger-less front-end electronics. In CBM, time-stamped data will be collected into a readout buffer in a...
-
Matteo Concas (INFN e Politecnico di Torino (IT))10/07/2018, 16:00
In view of the LHC Run3 starting in 2021, the ALICE experiment is preparing a major upgrade including the construction of an entirely new inner silicon tracker (the Inner Tracking System) and a complete renewal of its Online and Offline systems (O²).
In this context, one of the requirements for a prompt calibration of external detectors and a fast offline data processing is to run online the...
-
Sergey Gorbunov (Johann-Wolfgang-Goethe Univ. (DE))10/07/2018, 16:00
The upcoming LHC Run 3 brings new challenges for the ALICE online reconstruction which will be used also for the offline data processing in the O2 (combined Online-Offline) framework. To improve the accuracy of the existing online algorithms they need to be enhanced with all the necessary offline features, while still satisfying speed requirements of the synchronous data processing.
Here we...
-
Ivan Glushkov (University of Texas at Arlington (US))10/07/2018, 16:00
We describe the central operation of the ATLAS distributed computing system. The majority of compute intensive activities within ATLAS are carried out on some 350,000 CPU cores on the Grid, augmented by opportunistic usage of significant HPC and volunteer resources. The increasing scale, and challenging new payloads, demand fine-tuning of operational procedures together with timely...
-
Dr Waseem Kamleh (University of Adelaide)10/07/2018, 16:00
The University of Adelaide has invested several million dollars in the Phoenix HPC facility. Phoenix features a large number of GPUs, which were critical to its entry in the June 2016 Top500 supercomputing list. The status of high performance computing in Australia relative to other nations poses a unique challenge to researchers, in particular those involved in computationally intensive... -
Alastair Dewhurst (STFC-Rutherford Appleton Laboratory (GB)), Rob Appleyard (STFC)10/07/2018, 16:00
Since the start of 2017, the RAL Tier-1’s Echo object store has been providing disk storage to the LHC experiments. Echo provides access via both the GridFTP and XRootD protocols. GridFTP is primarily used for WAN transfers between sites while XRootD is used for data analysis.
Object stores and those using erasure coding in particular are designed to efficiently serve entire objects which...
-
Lorenzo Rinaldi (Universita e INFN, Bologna (IT))10/07/2018, 16:00
The processing of ATLAS event data requires access to conditions data which is stored in database systems. This data includes, for example, alignment, calibration, and configuration information which may be characterized by large volumes, diverse content, and/or information which evolves over time as refinements are made to those conditions. Additional layers of complexity are added by the...
-
Yuka Takahashi (University of Cincinnati (US))10/07/2018, 16:00
The LLVM community advances its C++ Modules technology, providing an I/O-efficient, on-disk code representation capable of reducing build times and peak memory usage. A significant amount of effort was invested in teaching ROOT and its toolchain to operate with clang's implementation of C++ Modules. Currently, C++ Modules files are used by: cling to avoid header re-parsing; rootcling to...
-
Andreas Wagner (CERN)10/07/2018, 16:00
Following the deployment of OpenShift Origin by the CERN Web Frameworks team in 2016, this Platform-as-a-Service ("PaaS") solution oriented towards web applications has rapidly become a key component of the CERN Web Services infrastructure. We will present the evolution of the PaaS service since its introduction, detailed usage trends and statistics, its integration with other CERN services and the...
-
Pavlo Svirin (Brookhaven National Laboratory (US))10/07/2018, 16:00
Lattice QCD (LQCD) is a well-established non-perturbative approach to solving the quantum chromodynamics (QCD) theory of quarks and gluons. It is understood that future LQCD calculations will require exascale computing capacities and a workload management system (WMS) in order to manage them efficiently.
In this talk we will discuss the use of the PanDA WMS for LQCD simulations. The PanDA WMS... -
Jennifer Ngadiuba (INFN, Milano)10/07/2018, 16:00
With the planned addition of the tracking information in the Level 1 trigger in CMS for the HL-LHC, the algorithms for Level 1 trigger can be completely reconceptualized. Following the example for offline reconstruction in CMS to use complementary subsystem information and mitigate pileup, we explore the feasibility of using Particle Flow-like and pileup per particle identification techniques...
Go to contribution page -
Steven Andrew Farrell (Lawrence Berkeley National Lab. (US))10/07/2018, 16:00
A core component of particle tracking algorithms in LHC experiments is the Kalman Filter. Its capability to iteratively model dynamics (linear or non-linear) in noisy data makes it powerful for state estimation and extrapolation in a combinatorial track builder (the CKF). In practice, the CKF computational cost scales quadratically with the detector occupancy and will become a heavy burden on...
Go to contribution page -
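As background to the abstract above: a Kalman filter alternates a predict step (propagate the state and grow its uncertainty) with an update step (blend in a new measurement weighted by the Kalman gain). A minimal one-dimensional sketch with made-up numbers, carrying none of the real CKF's multi-parameter track-state complexity:

```python
# Minimal 1-D Kalman filter, illustrating the predict/update cycle used
# (in far more elaborate form) by combinatorial track builders.  A real
# CKF propagates a 5-parameter track state with full covariance through
# a magnetic field; numbers here are purely illustrative.

def kalman_step(x, p, z, q, r):
    """One predict+update cycle for a scalar state.

    x, p : prior state estimate and its variance
    z    : new measurement (e.g. a hit position)
    q, r : process and measurement noise variances
    """
    # Predict: state is assumed static, uncertainty grows by q
    x_pred, p_pred = x, p + q
    # Update: blend prediction and measurement via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Filter a noisy sequence of "hit" positions around 1.0
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
print(x, p)  # estimate converges toward ~1.0, variance shrinks
```

Note how each measurement both pulls the estimate toward the data and reduces the state variance; this shrinking variance is what makes the filter effective for extrapolation to the next detector layer.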
Felice Pantaleo (CERN)10/07/2018, 16:00
Starting from 2017, during CMS Phase-I, the increased accelerator luminosity, and the consequent increase in the number of simultaneous proton-proton collisions (pile-up), will pose significant new challenges for the CMS experiment. The main goal of the HLT is to apply a specific set of physics selection algorithms and to accept the events with the most interesting physics content. To cope with the...
Go to contribution page -
Baosong Shan (Beihang University (CN))10/07/2018, 16:00
The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. More than 50 million CPU hours have been delivered for AMS Monte Carlo simulations using NERSC and ALCF facilities in 2017. The details of porting the AMS software to the 2nd...
Go to contribution page -
Milena Veneva (Joint Institute for Nuclear Research)10/07/2018, 16:00
Systems of linear algebraic equations (SLEs) with heptadiagonal (HD), pentadiagonal (PD) and tridiagonal (TD) coefficient matrices arise in many scientific problems. Three symbolic algorithms for solving SLEs with HD, PD and TD coefficient matrices are considered. The only assumption on the coefficient matrix is nonsingularity. These algorithms are implemented using the GiNaC library of C++...
Go to contribution page -
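For context, the classic numeric counterpart of such tridiagonal solvers is the Thomas algorithm: a forward sweep that eliminates the sub-diagonal, followed by back substitution. The sketch below is a plain floating-point version, not the symbolic GiNaC-based implementation the abstract describes:

```python
# Thomas algorithm for a tridiagonal system A x = d.
# a = sub-diagonal (a[0] unused), b = diagonal, c = super-diagonal
# (c[-1] unused).  Assumes the system is well conditioned.

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Solve the 3x3 system with diagonal 2 and off-diagonals -1:
# [ 2 -1  0][x0]   [1]
# [-1  2 -1][x1] = [0]
# [ 0 -1  2][x2]   [1]
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))
```

The solution of the example system is x = (1, 1, 1), which can be checked by substituting back into each row. The symbolic algorithms in the abstract trade this O(n) numeric sweep for exact arithmetic, at the cost of expression growth.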
Ben Couturier (CERN)10/07/2018, 16:00
The LHCb experiment uses a custom-made C++ detector and geometry description toolkit, integrated with the Gaudi framework, designed in the early 2000s when the LHCb software was first implemented. With the LHCb upgrade scheduled for 2021, it is necessary for the experiment to review this choice to adapt to the evolution of software and computing (the need to support multi-threading, the importance of...
Go to contribution page -
Pier Paolo Ricci (INFN CNAF)10/07/2018, 16:00
The accurate calculation of the power usage effectiveness (PUE) is the most important factor when analysing the overall efficiency of power consumption in a big data center. At the INFN CNAF Tier-1, a new monitoring infrastructure, a Building Management System (BMS), was implemented over the last years using the Schneider StruxureWare Building Operation (SBO) software. During this...
Go to contribution page -
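The PUE mentioned above has a simple definition: total facility power divided by the power actually delivered to IT equipment, so values closer to 1.0 mean less overhead from cooling, UPS losses and lighting. A toy calculation with invented numbers (not CNAF measurements):

```python
# Power usage effectiveness: total facility power / IT equipment power.
# The figures below are illustrative only.

def pue(total_facility_kw, it_equipment_kw):
    """PUE >= 1.0; the excess over 1.0 is non-IT overhead."""
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1500.0, it_equipment_kw=1000.0))  # 1.5
```

A PUE of 1.5 means that for every watt reaching the servers, another half watt is spent on facility overhead; continuous BMS monitoring is what makes tracking this ratio over time practical.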
Carlo Mancini-Terracciano (INFN - Roma1)10/07/2018, 16:00
Despite their frequent use, the hadronic models implemented in Geant4 have shown severe limitations in reproducing the measured yield of secondaries in ion interactions below 100 MeV/A, in terms of production rates, angular and energy distributions [1,2,3]. We will present a benchmark of the Geant4 models with double-differential cross sections and angular distributions of the secondary...
Go to contribution page -
Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))10/07/2018, 16:00
The CMS experiment has an HTCondor Global Pool, composed of more than 200K CPU cores available for Monte Carlo production and the analysis of data. The submission of user jobs to this pool is handled by either CRAB3, the standard workflow management tool used by CMS users to submit analysis jobs requiring event processing of large amounts of data, or by CMS Connect, a service focused on final...
Go to contribution page -
Dennis Klein (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))10/07/2018, 16:00
ALFA is a modern software framework for simulation, reconstruction and analysis of particle physics experiments. ALFA provides building blocks for highly parallelized processing pipelines required by the next generation of experiments, e.g. the upgraded ALICE detector or the FAIR experiments. The FairMQ library in ALFA provides the means to easily create actors (so-called devices) that...
Go to contribution page -
Andrea Manzi (CERN)10/07/2018, 16:00
The "File Transfer Service" (FTS) has been proven capable of satisfying the requirements – in terms of functionality, reliability and volume – of three major LHC experiments: ATLAS, CMS and LHCb.
We believe small experiments, or individual scientists, can also benefit from the advantages of FTS and integrate it into their frameworks, allowing them to effectively outsource the complexities of data...
Go to contribution page -
Marco Mascheroni (Univ. of California San Diego (US))10/07/2018, 16:00
GlideinWMS is a workload management system that allows different scientific communities, or Virtual Organizations (VOs), to share computing resources distributed over independent sites. A dynamically sized pool of resources is created by different VO-independent glideinWMS pilot factories, based on the requests made by the several VO-dependent glideinWMS frontends. For example, the CMS VO...
Go to contribution page -
Volodimir Begy (University of Vienna (AT))10/07/2018, 16:00
This work describes the technique of remote data access from computational jobs on the ATLAS data grid. In comparison to traditional data movement and stage-in approaches it is well suited for data transfers which are asynchronous with respect to the job execution. Hence, it can be used for optimization of data access patterns based on various policies. In this study, remote data access is...
Go to contribution page -
Xiaowei Jiang (IHEP, Institute of High Energy Physics, Chinese Academy of Sciences), Jingyan Shi (IHEP), Jiaheng Zou (IHEP)10/07/2018, 16:00
At IHEP, computing resources are contributed by different experiments including BES, JUNO, DYW, HXMT, etc. The resources were divided into different partitions to satisfy the dedicated experiment data processing requirements. IHEP had a local Torque/Maui cluster with 50 queues serving more than 10 experiments. The separate resource partitions led to an imbalanced resource load. Sometimes, BES...
Go to contribution page -
Ralf Spiwoks (CERN)10/07/2018, 16:00
The Muon to Central Trigger Processor Interface (MUCTPI) of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN is being upgraded for the next run of the LHC in order to use optical inputs and to provide full-precision information for muon candidates to the topological trigger processor (L1TOPO) of the Level-1 trigger system. The new MUCTPI is implemented as a single ATCA blade...
Go to contribution page -
Borja Aparicio Cotarelo (CERN)10/07/2018, 16:00
The CERN IT department provides production services to run container technologies. Given that, the IT-DB team, responsible for running the Java-based platforms, has started a new project to move WebLogic deployments from virtual or bare-metal servers to containers: Docker together with Kubernetes allows us to improve the overall productivity of the team, reducing operations time and speeding up...
Go to contribution page -
Zhechka Toteva (CERN)10/07/2018, 16:00
In early 2016, CERN IT created a new project to consolidate and centralise Elasticsearch instances across the site, with the aim of offering a new production-quality IT service to experiments and departments. We'll present the solutions we adopted for securing the system using open source tools only, which allowed us to consolidate up to 20 different use cases on a single Elasticsearch cluster.
Go to contribution page -
Igor Pelevanyuk (Joint Institute for Nuclear Research (RU))10/07/2018, 16:00
The Tier-1 for CMS was created at JINR in 2015. It is important to keep an eye on the Tier-1 center at all times in order to maintain its performance. One monitoring system is based on Nagios: it monitors the center on several levels: engineering infrastructure, network and hardware. It collects many metrics, creates plots and determines statuses such as HDD state, temperatures, loads...
Go to contribution page -
Enrico Fattibene (INFN - National Institute for Nuclear Physics)10/07/2018, 16:00
The Italian Tier-1 center is mainly focused on LHC and physics experiments in general. Recently we tried to widen our area of activity and established a collaboration with the University of Bologna to set up an area inside our computing center for hosting experiments with high security and privacy requirements on stored data. The first experiment we are going to host is Harmony, a...
Go to contribution page -
Alexey Rybalchenko (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))10/07/2018, 16:00
The high data rates expected for the next generation of particle physics experiments (e.g. new experiments at FAIR/GSI and the upgrade of CERN experiments) call for dedicated attention with respect to the design of the needed computing infrastructure. The common ALICE-FAIR framework ALFA is a modern software layer that serves as a platform for simulation, reconstruction and analysis of particle...
Go to contribution page -
Mikhail Hushchyn (Yandex School of Data Analysis (RU))10/07/2018, 16:00
SHiP is a new proposed fixed-target experiment at the CERN SPS accelerator. The goal of the experiment is to search for hidden particles predicted by models of Hidden Sectors. The purpose of the SHiP Spectrometer Tracker is to reconstruct tracks of charged particles from the decay of neutral New Physics objects with high efficiency. Efficiency of the track reconstruction depends on the...
Go to contribution page -
Andrey Nechaevskiy (JINR)10/07/2018, 16:00
The goal of the project is to improve the computing network topology and performance of the China IHEP Data Center, taking into account the growing numbers of hosts, experiments and computing resources. The analysis of the computing performance of the IHEP Data Center in order to optimize its distributed data processing system is a challenging problem due to the great scale and complexity of shared...
Go to contribution page -
Raffaella Radogna (Universita e INFN, Bari (IT))10/07/2018, 16:00
Full MC simulation is a powerful tool for designing new detectors and guiding the construction of new prototypes.
Improved micro-structure technology has led to the rise of Micro-Pattern Gas Detectors (MPGDs), whose main features are: flexible geometry; high rate capability; excellent spatial resolution; and reduced radiation length. A new detector layout, the Fast Timing MPGD (FTM), could combine...
Go to contribution page -
Chris Burr (University of Manchester (GB))10/07/2018, 16:00
Software is an essential and rapidly evolving component of modern high energy physics research. The ability to be agile and take advantage of new and updated packages from the wider data science community is allowing physicists to efficiently utilise the data available to them. However, these packages often introduce complex dependency chains and evolve rapidly, introducing specific, and...
Go to contribution page -
Santiago Gonzalez De La Hoz (Univ. of Valencia and CSIC (ES))10/07/2018, 16:00
Since the beginning of the WLCG Project, the Spanish ATLAS computing centres have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration.
Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPUs) in the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction as well as the number...
Go to contribution page -
Grigory Kozlov (FIAS, JINR)10/07/2018, 16:00
The track finding procedure is one of the key steps of event reconstruction in high energy physics experiments. Track finding algorithms combine hits into tracks and reconstruct the trajectories of particles flying through the detector. The tracking procedure is considered an extremely time-consuming task because of the large combinatorics. Thus, calculation speed is crucial in heavy-ion experiments,...
Go to contribution page -
Benedikt Riedel (University of Chicago)10/07/2018, 16:00
SPT-3G, the third generation camera on the South Pole Telescope (SPT), was deployed in the 2016-2017 Austral summer season. The SPT is a 10-meter telescope located at the geographic South Pole and designed for observations in the millimeter-wave and submillimeter-wave regions of the electromagnetic spectrum. The SPT is primarily used to study the Cosmic Microwave Background (CMB). The upgraded...
Go to contribution page -
Igor Soloviev (University of California Irvine (US))10/07/2018, 16:00
The ATLAS experiment is operated daily by many users and experts working concurrently on several aspects of the detector.
The safe and optimal access to the various software and hardware resources of the experiment is guaranteed by a role-based access control system (RBAC) provided by the ATLAS Trigger and Data Acquisition (TDAQ) system. The roles are defined by an inheritance hierarchy....
Go to contribution page -
Shota Hayashida (Nagoya University (JP))10/07/2018, 16:00
Events containing muons in the final state are an important signature for many analyses being carried out at the Large Hadron Collider (LHC), including both standard model measurements and searches for new physics. To be able to study such events, it is required to have an efficient and well-understood muon trigger. The ATLAS muon trigger consists of a hardware based system (Level 1), as well...
Go to contribution page -
Alex Kastanas (KTH Royal Institute of Technology (SE))10/07/2018, 16:00
The Online Luminosity software of the ATLAS experiment has been upgraded in the last two years to improve scalability, robustness, and redundancy and to increase automation, keeping Run-3 requirements in mind.
The software package is responsible for computing the instantaneous and integrated luminosity for particle collisions at the ATLAS interaction point at the Large Hadron Collider (LHC)....
Go to contribution page -
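Conceptually, the integrated luminosity is the time integral of the instantaneous luminosity; with discrete per-block readings it reduces to a weighted sum. A simplified sketch with invented readings, assuming each reading is constant over its interval:

```python
# Integrated luminosity from per-interval instantaneous readings.
# Units: instantaneous luminosity in cm^-2 s^-1, interval in seconds,
# result in cm^-2.  Values below are invented for illustration.

def integrated_luminosity(inst_lumi, dt_seconds):
    """Sum of readings, each assumed constant over its interval."""
    return sum(l * dt_seconds for l in inst_lumi)

readings = [1.9e34, 1.8e34, 1.7e34]  # three one-minute luminosity blocks
total = integrated_luminosity(readings, 60.0)
print(total)  # 3.24e36 cm^-2 over the three minutes
```

The real ATLAS software additionally calibrates each reading against multiple luminometers and handles per-bunch measurements; the arithmetic above is only the final accumulation step.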
Javier Montejo Berlingen (CERN)10/07/2018, 16:00
The ATLAS experiment records about 1 kHz of physics collisions, starting from an LHC design bunch crossing rate of 40 MHz. To reduce the large background rate while maintaining a high selection efficiency for rare and Beyond-the-Standard-Model physics events, a two-level trigger system is used. Events are selected based on physics signatures, such as the presence of energetic leptons,...
Go to contribution page -
Catrin Bernius (SLAC National Accelerator Laboratory (US))10/07/2018, 16:00
Physics analyses at the LHC require accurate simulations of the detector response and the event selection processes. The accurate simulation of the trigger response is crucial for determining the overall selection efficiencies and signal sensitivities. For the generation and reconstruction of simulated event data, the most recent software releases are used to ensure the best agreement between...
Go to contribution page -
Petya Tsvetanova Vasileva (CERN)10/07/2018, 16:00
In HEP experiments at the LHC, database applications often become complex, reflecting the ever-demanding requirements of the researchers. The ATLAS experiment has several Oracle DB clusters with over 216 database schemas, each with its own set of database objects. To effectively monitor them, we designed a modern and portable application with exceptionally good characteristics. Some of them...
Go to contribution page -
Mr Petr Fedchenkov (ITMO University)10/07/2018, 16:00
ITMO University (ifmo.ru) is developing a cloud of geographically distributed data centers. "Geographically distributed" means data centers (DC) located in different places, far from each other by hundreds or thousands of kilometers. Usage of geographically distributed data centers promises a number of advantages for end users, such as the opportunity to add an additional DC and service...
Go to contribution page -
Cecile Cavet (APC)10/07/2018, 16:00
The High Performance Computing (HPC) domain aims to optimize code in order to use the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. A way to handle these requirements is to use Linux containers. These "light virtual machines" allow one to encapsulate applications with their...
Go to contribution page -
Enric Tejedor Saavedra (CERN)10/07/2018, 16:00
In January 2017, a consortium of European companies, research labs, universities, and education networks started the “Up to University” project (Up2U). Up2U is a 3-year EU-funded project that aims at creating a bridge between high schools and higher education. Up2U addresses both the technological and methodological gaps between secondary school and higher education by (a.) provisioning the...
Go to contribution page -
Mr Dorin Lobontu (Karlsruhe Institut of Technology)10/07/2018, 16:00
A tape system usually comprises many tape drives, several thousand or even tens of thousands of cartridges, robots, software applications and the machines running these applications. All involved components are able to log failures and statistical data. However, correlation is a laborious and ambiguous process, and a wrong interpretation can easily result in a wrong decision. A single...
Go to contribution page -
Andreas Petzold (KIT - Karlsruhe Institute of Technology (DE))10/07/2018, 16:00
The GridKa center serves the ALICE, ATLAS, CMS, LHCb and Belle II experiments as one of the biggest WLCG Tier-1 centers worldwide, with compute and storage resources. It is operated by the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology in Germany. In this presentation, we will describe the current status of the compute, online and offline storage resources, and we will...
Go to contribution page -
Khalil Bouaouda (Universite Hassan II, Ain Chock (MA))10/07/2018, 16:00
Online selection is an essential step to collect the most interesting collisions among a very large number of events delivered by the ATLAS detector at the Large Hadron Collider (LHC). The Fast TracKer (FTK) is a hardware based track finder, for the ATLAS trigger system, that rapidly identifies important physics processes through their track-based signatures, in the Inner Detector pixel and...
Go to contribution page -
Tommaso Boccali (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, P), Gianpaolo Carlino (INFN Napoli), Luca dell'Agnello (INFN), Donatella Lucchesi (INFN Padova)10/07/2018, 16:00
The INFN scientific computing infrastructure is composed of more than 30 sites, ranging from CNAF (Tier-1 for LHC and main data center for nearly 30 other experiments) and 9 LHC Tier-2s to ~20 smaller sites, including LHC Tier-3s and non-LHC experiment farms.
A comprehensive review of the installed resources, together with plans for the near future, has been collected during the second half of...
Go to contribution page -
Paul Nilsson (Brookhaven National Laboratory (US))10/07/2018, 16:00
The Production and Distributed Analysis system (PanDA) is a pilot-based workload management system that was originally designed for the ATLAS Experiment at the LHC to operate on grid sites. Since the coming LHC data taking runs will require more resources than grid computing alone can provide, the various LHC experiments are engaged in an ambitious program to extend the computing model to...
Go to contribution page -
Ondrej Subrt (Czech Technical University (CZ))10/07/2018, 16:00
Modern experiments demand a powerful and efficient Data Acquisition System (DAQ). The intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN is composed of many processes communicating with each other. The DIALOG library covers the communication mechanism between processes and establishes a communication layer for each of them. It has been introduced to the...
Go to contribution page -
Volodymyr Yurchenko (National Academy of Sciences of Ukraine (UA))10/07/2018, 16:00
JAliEn (Java-AliEn) is ALICE's next-generation Grid framework, which will be used for top-level distributed computing resources management during LHC Run 3 and onward. While preserving an interface familiar to ALICE users, its performance and scalability are an order of magnitude better than those of the currently used system.
To enhance JAliEn security, we have developed the...
Go to contribution page -
Prof. G. Ososkov (Joint Institute for Nuclear Research)10/07/2018, 16:00
Cloud computing has become a routine tool for scientists in many domains. The JINR cloud infrastructure provides JINR users with computational resources for performing various scientific calculations. In order to speed up the achievement of scientific results, the JINR cloud service for parallel applications was developed. It consists of several components and implements a flexible and modular architecture...
Go to contribution page -
Dr Ka Vang Tsang (SLAC National Accelerator Laboratory)10/07/2018, 16:00
The ProtoDUNE-SP is a single-phase liquid argon time projection chamber (LArTPC) prototype for the Deep Underground Neutrino Experiment (DUNE). Signals from 15,360 electronic channels are received by 60 Reconfigurable Cluster Elements (RCEs), which are processing elements designed at SLAC for a wide range of applications and are based upon the "system-on-chip” Xilinx Zynq family of FPGAs....
Go to contribution page -
Gordon Watts (University of Washington (US))10/07/2018, 16:00
The HEP community has voted strongly with its feet to adopt ROOT as the current de facto analysis toolkit. It is used to write out and store our RAW data and our reconstructed data, and to drive our analysis. Almost all modern data models in particle physics are written in ROOT. New tools from industry are making an appearance in particle physics analysis, however, driven by the massive interest...
Go to contribution page -
Hristo Umaru Mohamed (CERN)10/07/2018, 16:00
Up until September 2017, LHCb Online was running on a Puppet 3.5 Master/Server non-redundant architecture. As a result, we had problems with outages, both planned and unplanned, as well as with scalability issues (how do you run 3000 nodes at the same time? How do you even run 100 without bringing down the Puppet Master?). On top of that, Puppet 5.0 was released, so we were by then running 2 versions...
Go to contribution page -
Radu Popescu (CERN)10/07/2018, 16:00
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution service implemented as a POSIX read-only filesystem in user space (FUSE). It was originally developed at CERN to assist High Energy Physics (HEP) collaborations in deploying software on the worldwide distributed computing infrastructure for data processing applications. Files are stored remotely as...
Go to contribution page -
Leo Piilonen (Virginia Tech)10/07/2018, 16:00
I describe the charged-track extrapolation and muon-identification modules in the Belle II data-analysis code framework (basf2). These modules use GEANT4E to extrapolate reconstructed charged tracks outward from the Belle II Central Drift Chamber into the outer particle-identification detectors, the electromagnetic calorimeter, and the K-long and muon detector (KLM). These modules propagate...
Go to contribution page -
Nikolay Voytishin (Joint Institute for Nuclear Research (RU))10/07/2018, 16:00
The Baryonic Matter at Nuclotron (BM@N) experiment represents the 1st phase of the Nuclotron-based Ion Collider fAcility (NICA) Mega science project at the Joint Institute for Nuclear Research. It is a fixed-target experiment built for studying nuclear matter in conditions of extreme density and temperature.
The tracking system of the BM@N experiment consists of three main detector systems:...
Go to contribution page -
Servesh Muralidharan (CERN)10/07/2018, 16:00
We describe the development of a tool (Trident) using a three pronged approach to analysing node utilisation while aiming to be user friendly. The three areas of focus are data IO, CPU core and memory.
Compute applications running in a batch system node will stress different parts of the node over time. It is usual to look at metrics such as CPU load average and memory consumed. However,...
Go to contribution page -
Andrew Wightman (University of Notre Dame (US))10/07/2018, 16:00
One of the major challenges for the Compact Muon Solenoid (CMS) experiment is the task of reducing the event rate from roughly 40 MHz down to a more manageable 1 kHz while keeping as many interesting physics events as possible. This is accomplished through the use of a Level-1 (L1) hardware-based trigger as well as a software-based High-Level Trigger (HLT). Monitoring and understanding the output...
Go to contribution page -
Emanuel Gouveia (LIP Laboratorio de Instrumacao e Fisica Experimental de Particu)10/07/2018, 16:00
Hadronic signatures are critical to the ATLAS physics program, and are used extensively for both Standard Model measurements and searches for new physics. These signatures include generic quark and gluon jets, as well as jets originating from b-quarks or the decay of massive particles (such as electroweak bosons or top quarks). Additionally, missing transverse momentum from non-interacting...
Go to contribution page -
Understanding the evolution of conditions data access through Frontier for the ATLAS Experiment: Michal Svatos (Acad. of Sciences of the Czech Rep. (CZ))10/07/2018, 16:00
The ATLAS Distributed Computing system uses the Frontier system to access the Conditions, Trigger, and Geometry database data stored in the Oracle Offline Database at CERN by means of the http protocol. All ATLAS computing sites use squid web proxies to cache the data, greatly reducing the load on the Frontier servers and the databases. One feature of the Frontier client is that in the event...
Go to contribution page -
Prof. Vladimir Ivantchenko (CERN)10/07/2018, 16:00
We report the status of the CMS full simulation for Run 2. Initially, Geant4 10.0p02 was used in sequential mode; about 16 billion events were produced for analysis of 2015-2016 data. In 2017, the CMS detector was updated: a new pixel tracking detector was installed, the hadronic calorimeter electronics were modified, and extra muon detectors were added. Corresponding modifications were introduced in the...
Go to contribution page -
Wei Yang (SLAC National Accelerator Laboratory (US))10/07/2018, 16:00
CVMFS helps ATLAS in distributing software to the Grid, and isolating software lookup to batch nodes’ local filesystems. But CVMFS is rarely available in HPC environments. ATLAS computing has experimented with "fat" containers, and later developed an environment to produce such containers for both Shifter and Singularity. The fat containers include most of the recent ATLAS software releases,...
Go to contribution page -
Daniel Peter Traynor (University of London (GB))10/07/2018, 16:00
The Queen Mary University of London Grid site has investigated the use of its Lustre file system to support Hadoop workflows using the newly open-sourced Hadoop adaptor for Lustre. Lustre is an open source, POSIX compatible, clustered file system often used in high performance computing clusters and is often paired with the SLURM batch system, as it is at Queen Mary. Hadoop is an open-source...
Go to contribution page -
Dr Dimitri Bourilkov (University of Florida)10/07/2018, 16:00
The use of machine learning techniques for classification is well established. They are applied widely to improve the signal-to-noise ratio and the sensitivity of searches for new physics at colliders. In this study I explore the use of machine learning for optimizing the output of high precision experiments by selecting the variables most sensitive to the quantity being measured. The precise...
Go to contribution page -
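One very simple stand-in for the sensitivity-driven variable selection described above is to rank input variables by their absolute correlation with the measured quantity and keep the top ones. The variable names and data below are invented; real analyses would use regression- or ML-based importance measures rather than plain Pearson correlation:

```python
# Rank candidate input variables by |Pearson correlation| with a target
# quantity.  A toy sketch of sensitivity-based variable selection; the
# columns and target values are made up.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def rank_variables(columns, target):
    """Return variable names sorted by |correlation| with the target."""
    scores = {name: abs(pearson(vals, target)) for name, vals in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)

cols = {"pt": [1, 2, 3, 4], "noise": [5, 1, 4, 2], "eta": [2, 4, 6, 8]}
target = [10, 20, 30, 40]
print(rank_variables(cols, target))  # "pt" and "eta" outrank "noise"
```

Correlation only captures linear, single-variable sensitivity; the study in the abstract targets exactly the cases where smarter, multivariate selection beats a ranking like this one.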
Gareth Douglas Roy (University of Glasgow (GB))10/07/2018, 16:00
Containers are becoming ubiquitous within the WLCG with CMS announcing a requirement for Singularity at supporting sites in 2018. The ubiquity of containers means it is now possible to reify configuration along with applications as a single easy to deploy unit rather than via a myriad of configuration management tools such as Puppet, Ansible or Salt. This allows more use of industry devops...
Go to contribution page -
Robert Andrew Currie (The University of Edinburgh (GB)), Teng LI (Shandong University, China)10/07/2018, 16:00
ZFS is a powerful storage management technology combining filesystem, volume management and software raid technology into a single solution. The WLCG Tier2 computing at Edinburgh was an early adopter of ZFS on Linux, with this technology being used to manage all of our storage systems including servers with aging components. Our experiences of ZFS deployment have been shared with the Grid...
Go to contribution page -
Hannah Short (CERN)10/07/2018, 16:00
As most are fully aware, cybersecurity attacks are an ever-growing problem as larger parts of our lives take place on-line. Distributed digital infrastructures are no exception and action must be taken to both reduce the security risk and to handle security incidents when they inevitably happen. These activities are carried out by the various e-Infrastructures and it has become very clear in...
Go to contribution page -
Wei Yang (SLAC National Accelerator Laboratory (US))10/07/2018, 16:00
Built upon the XRootD Proxy Cache (Xcache), we developed additional features to adapt it to the ATLAS distributed computing and data environment, especially its data management system Rucio, to help improve the cache hit rate, as well as features that make Xcache easy to use, similar to the way the Squid cache is used by the HTTP protocol. We packaged the software in CVMFS and in Singularity...
Go to contribution page -
Michal Kamil Simon (CERN), Andreas Joachim Peters (CERN)10/07/2018, 16:00
XRootD is a distributed low-latency file access system with its own communication protocol and a scalable, plugin-based architecture. It is the primary data access framework for the high-energy physics community, and the backbone of the EOS service at CERN.
In order to bring the potential of Erasure Coding (EC) to the XRootD / EOS ecosystem, an effort has been undertaken to implement a native EC...
Go to contribution page -
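To illustrate the erasure-coding idea (though XRootD/EOS uses Reed-Solomon-style codes that tolerate multiple losses, not this single-parity scheme): one XOR parity block computed over k data blocks lets any single lost block be rebuilt from the survivors:

```python
# Simplest erasure code: one XOR parity block over k data blocks.
# Losing any one block (data or parity) is recoverable, because
# XOR-ing the remaining k blocks reproduces the missing one.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose block 1, then rebuild it from the survivors plus the parity
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True
```

Real EC implementations generalise this with m parity blocks tolerating m losses, and stripe the blocks across servers so that node failures map to recoverable block losses.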
Jan Knedlik (GSI Helmholtzzentrum für Schwerionenforschung)10/07/2018, 16:00
XRootD has been established as a standard for WAN data access in HEP and HENP. Site specific features, like those existing at GSI, have historically been hard to implement with native methods. XRootD allows a custom replacement of basic functionality for native XRootD functions through the use of plug-ins. XRootD clients allow this since version 4.0. In this contribution, our XRootD based...
Go to contribution page -
Imma Riu (IFAE Barcelona (ES))10/07/2018, 17:00
The ATLAS and CMS experiments at CERN are planning a second phase of upgrades to prepare for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous runs, protons at 14 TeV center-of-mass energy will collide with an instantaneous luminosity of 7.5 x 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than...
Go to contribution page -
Gerhard Raven (Natuurkundig Laboratorium-Vrije Universiteit (VU)-Unknown)10/07/2018, 17:30presentation
-
Andreas Salzburger (CERN)10/07/2018, 18:00
The reconstruction of particle trajectories is one of the most complex and CPU-intensive tasks of event reconstruction at current LHC experiments. The growing particle multiplicity stemming from an increasing number of instantaneous collisions, as foreseen for the upcoming high-luminosity upgrade of the LHC (HL-LHC) and future hadron collider studies, will intensify this problem significantly. In...
Go to contribution page -
Michel Jouvin (Université Paris-Saclay (FR))11/07/2018, 09:00presentation
Most HEP experiments coming in the next decade (HL-LHC, FAIR, DUNE...) will have computing requirements that cannot be met by adding more hardware. A major software re-engineering effort and more collaboration between experiments around software development are needed. This was the reason for setting up the HEP Software Foundation (HSF) in 2015. In 2017, the HSF published "A Roadmap for ...
Go to contribution page -
Thomas Kuhr11/07/2018, 09:30
The Belle II experiment is taking first collision data in 2018. This is an exciting time for the collaboration and makes it possible to assess the performance not only of the accelerator and detector, but also of the computing system and the software. Is Belle II ready to quickly process the data and produce physics results? Which parts are well prepared, and where do we have to invest more effort? The...
Go to contribution page -
Rosie Bolton11/07/2018, 10:00presentation
-
Karol Hennessy (University of Liverpool (GB))11/07/2018, 10:30
DUNE will be the world's largest neutrino experiment, due to take data in 2025. Described here are the data acquisition (DAQ) systems for both of its prototypes, ProtoDUNE single-phase (SP) and ProtoDUNE dual-phase (DP), due to take data later this year. ProtoDUNE also breaks records as the largest beam-test experiment yet constructed, and the prototypes are the fundamental elements of CERN's Neutrino...
Go to contribution page -
Serguei Linev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))11/07/2018, 11:30
For two decades, ROOT brought its own window system abstraction (for X11, GL, Cocoa, and Windows) together with its own GUI library. X11 is nearing the end of its lifetime; new windowing systems shine with performance and features. To make best use of them, the ROOT team has decided to re-implement its graphics and GUI subsystem using web technology.
This presentation introduces the model,...
Go to contribution page -
Artem Petrosyan (Joint Institute for Nuclear Research (RU))11/07/2018, 11:30
The LHC Computing Grid was a pioneering integration effort that managed to unite computing and storage resources all over the world, thus making them available to experiments at the Large Hadron Collider. During a decade of LHC computing, Grid software has learned to effectively utilise different types of computing resources, such as classic computing clusters, clouds and high-performance computers. While the...
Go to contribution page -
Michael Papenbrock (Uppsala University)11/07/2018, 11:30
The upcoming PANDA at FAIR experiment in Darmstadt, Germany will belong to a new generation of accelerator-based experiments relying exclusively on software filters for data selection. Due to the likeness of signal and background as well as the multitude of investigated physics channels, this paradigm shift is driven by the need for having full and precise information from all detectors in...
Go to contribution page -
Michael Andrews (Carnegie-Mellon University (US))11/07/2018, 11:30Track 6 – Machine learning and physics analysispresentation
An essential part of new physics searches at the Large Hadron Collider (LHC) at CERN involves event classification, or distinguishing signal events from the background. Current machine learning techniques accomplish this using traditional hand-engineered features like particle 4-momenta, motivated by our understanding of particle decay phenomenology. While such techniques have proven useful...
Go to contribution page -
Randy Sobie (University of Victoria (CA))11/07/2018, 11:30Track 7 – Clouds, virtualization and containerspresentation
The HEP group at the University of Victoria operates a distributed cloud computing system for the ATLAS and Belle II experiments. The system uses private and commercial clouds in North America and Europe that run OpenStack, OpenNebula or commercial cloud software. It is critical that we record accounting information to give credit to cloud owners and to verify our use of commercial resources....
Go to contribution page -
Thomas Hauth (KIT - Karlsruhe Institute of Technology (DE))11/07/2018, 11:30
The Belle II detector is currently being commissioned for operation in early 2018. It is designed to record collision events at an instantaneous luminosity of up to 8 x 10^35 cm^-2 s^-1, delivered by the SuperKEKB collider in Tsukuba, Japan. Such a large luminosity is required to significantly improve the precision of measurements of B and D meson and Tau lepton decays, to probe for signs of...
Go to contribution page -
Nicolo Magini (INFN e Universita Genova (IT))11/07/2018, 11:30
The ATLAS experiment is gradually transitioning from the traditional file-based processing model to dynamic workflow management at the event level with the ATLAS Event Service (AES). The AES assigns fine-grained processing jobs to workers and streams out the data in quasi-real time, ensuring fully efficient utilization of all resources, including the most volatile. The next major step in this...
Go to contribution page -
Wenjing Wu (Computer Center, IHEP, CAS)11/07/2018, 11:45Track 7 – Clouds, virtualization and containerspresentation
Virtualization is a commonly used solution for exploiting opportunistic computing resources in the HEP field, as it provides the unified software and OS layer that HEP computing tasks require over heterogeneous opportunistic computing resources. However, there is always a performance penalty with virtualization, especially for short jobs, which are the norm in volunteer computing...
Go to contribution page -
Hasib Md (University of Delhi (IN))11/07/2018, 11:45
Alignment and calibration workflows in CMS require a significant operational effort, due to the complexity of the systems involved. To serve the variety of condition data management needs of the experiment, the alignment and calibration team has developed and deployed a set of web-based applications. The Condition DB Browser is the main portal to search, navigate and prepare a consistent set...
Go to contribution page -
Alja Mrak Tadel (Univ. of California San Diego (US))11/07/2018, 11:45
The divergence of windowing systems among modern Linux distributions and OSX is making the current mode of operations difficult to maintain. In order to continue supporting the CMS experiment event display, aka Fireworks, we need to explore options beyond the current distribution model of centrally built tarballs.
We think that a C++-server, web-client event display is a promising direction...
Go to contribution page -
Mauro Verzetti (CERN)11/07/2018, 11:45Track 6 – Machine learning and physics analysispresentation
Jet flavour identification is a fundamental component for the physics program of the LHC-based experiments. The presence of multiple flavours to be identified leads to a multiclass classification problem. We present results from a realistic simulation of the CMS detector, one of two multi-purpose detectors at the LHC, and the respective performance measured on data. Our tagger, named DeepJet,...
Go to contribution page -
Christopher Jones (Fermi National Accelerator Lab. (US))11/07/2018, 11:45
CMS has worked aggressively to make use of multi-core architectures, routinely running 4 to 8 core production jobs in 2017. The primary impediment to efficiently scaling beyond 8 cores has been our ROOT-based output module, which has been necessarily single threaded. In this presentation we explore the changes made to the CMS framework and our ROOT output module to overcome the previous...
Go to contribution page -
Johannes Elmsheuser (Brookhaven National Laboratory (US))11/07/2018, 11:45
The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the...
Go to contribution page -
Alex Christopher Martyniuk (University College London)11/07/2018, 11:45
The ATLAS Trigger system operated successfully during 2017; its excellent performance has been vital for the ATLAS physics program.
The trigger selection capabilities of the ATLAS detector have been significantly enhanced for Run-2 compared to Run-1, in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up). The improvements at...
Go to contribution page -
Prasanth Kothuri (CERN)11/07/2018, 12:00Track 7 – Clouds, virtualization and containerspresentation
This talk is about sharing our recent experiences in providing a data analytics platform based on Apache Spark for High Energy Physics, the CERN accelerator logging system and infrastructure monitoring. The Hadoop Service has started to expand its user base to researchers who want to perform analysis with big data technologies. Among many frameworks, Apache Spark is currently getting the most...
Go to contribution page -
Dr Semen Lebedev (Justus Liebig University Giessen)11/07/2018, 12:00
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility requires fast and efficient event reconstruction algorithms. CBM will be one of the first HEP experiments to work in a triggerless mode: data received in the DAQ from the detectors will no longer be associated with events by a hardware trigger. All raw data within a given period of time will be collected...
Go to contribution page -
Dr Thomas Vuillaume (LAPP, CNRS, Univ. Savoie Mont-Blanc)11/07/2018, 12:00
The Cherenkov Telescope Array (CTA) is the next generation of ground-based gamma-ray telescopes for gamma-ray astronomy. Two arrays will be deployed composed of 19 telescopes in the Northern hemisphere and 99 telescopes in the Southern hemisphere. Observatory operations are planned to start in 2021 but first data from prototypes should be available already in 2019. Due to its very high...
Go to contribution page -
David Schultz (University of Wisconsin-Madison)11/07/2018, 12:00
IceCube is a cubic kilometer neutrino detector located at the South Pole. IceProd is IceCube’s internal dataset management system, keeping track of where, when, and how jobs run. It schedules jobs from submitted datasets to HTCondor, tracking them at every stage of the lifecycle. Many updates have been made in recent years to improve stability and scalability, as well as to increase...
Go to contribution page -
Thomas Maier (Ludwig Maximilians Universitat (DE))11/07/2018, 12:00
For high-throughput computing the efficient use of distributed computing resources relies on an evenly distributed workload, which in turn requires wide availability of input data that is used in physics analysis. In ATLAS, the dynamic data placement agent C3PO was implemented in the ATLAS distributed data management system Rucio which identifies popular data and creates additional, transient...
Go to contribution page -
Benjamin Morgan (University of Warwick (GB))11/07/2018, 12:00
The process of building software for High Energy Physics is a problem that all experiments must face. It is also an aspect of the technical management of HEP software that is highly suited to sharing knowledge and tools. For this reason the HEP Software Foundation established a working group in 2015 to look at packaging and deployment solutions in the HEP community. The group has examined in...
Go to contribution page -
Mr Fernando Abudinen (Max-Planck-institut für Physik)11/07/2018, 12:00Track 6 – Machine learning and physics analysispresentation
Measurements of time-dependent CP violation and of $B$-meson mixing at B-factories require a determination of the flavor of one of the two exclusively produced $B^0$ mesons. The predecessors of Belle II, the Belle and BaBar experiments, developed so-called flavor tagging algorithms for this task. However, due to the novel high-luminosity conditions and the increased beam-backgrounds at Belle...
Go to contribution page -
Viktoriia Chekalina (Yandex School of Data Analysis (RU))11/07/2018, 12:15Track 6 – Machine learning and physics analysispresentation
Reconstruction and identification in calorimeters of modern High Energy Physics experiments is a complicated task. Solutions are usually driven by a priori knowledge about expected properties of reconstructed objects. Such an approach is also used to distinguish single photons in the electromagnetic calorimeter of the LHCb detector on LHC from overlapping photons produced from high momentum...
Go to contribution page -
Paul James Laycock (CERN)11/07/2018, 12:15
In 2017, NA62 recorded over a petabyte of raw data, collecting around a billion events per day of running. Data are collected in bursts of 3-5 seconds, producing output files of a few gigabytes. A typical run, a sequence of bursts with the same detector configuration and similar experimental conditions, contains 1500 bursts and constitutes the basic unit for offline data processing. A...
Go to contribution page -
Alvaro Fernandez Casani (Univ. of Valencia and CSIC (ES))11/07/2018, 12:15
The ATLAS EventIndex currently runs in production in order to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at CERN Tier0 and at hundreds of grid sites, with a distributed data collection architecture using Object Stores to temporarily maintain the conveyed information, with references to them...
Go to contribution page -
Enric Tejedor Saavedra (CERN)11/07/2018, 12:15Track 7 – Clouds, virtualization and containerspresentation
SWAN (Service for Web-based ANalysis) is a CERN service that allows users to perform interactive data analysis in the cloud, in a "software as a service" model. It is built upon the widely-used Jupyter notebooks, allowing users to write - and run - their data analysis using only a web browser. By connecting to SWAN, users have immediate access to storage, software and computing resources that...
Go to contribution page -
Federico Stagni (CERN)11/07/2018, 12:15
The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. DIRAC is adopted by a growing number of collaborations, including LHCb, Belle2, the Linear Collider, and CTA.
The LHCb experiment will be upgraded during the...
Go to contribution page -
Rick Cavanaugh (University of Illinois at Chicago (US))11/07/2018, 12:15
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high luminosity provided by the Phase-2 upgrade of the...
Go to contribution page -
Kyle Knoepfel (Fermi National Accelerator Laboratory)11/07/2018, 12:15
Since its inception in 2010, the art event-based analysis framework and associated software have been delivered to client experiments using a Fermilab-originated system called UPS. Salient features valued by the community include installation without administration privileges, trivially-relocatable binary packages and the ability to use coherent sets of packages together (such as those...
Go to contribution page -
Dr Beijiang Liu (Institue of High Energy Physics, Chinese Academy of Sciences)11/07/2018, 12:30Track 6 – Machine learning and physics analysispresentation
The BESIII detector is a general-purpose spectrometer located at BEPCII, a double-ring $e^+e^-$ collider running at center-of-mass energies between 2.0 and 4.6 GeV, which reached a peak luminosity of $1\times 10^{33}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ at $\sqrt{s} = 3770$ MeV.
As an experiment in the high precision frontier of hadron physics, since 2009, BESIII has collected the world's largest data samples...
Go to contribution page -
Brian Paul Bockelman (University of Nebraska Lincoln (US))11/07/2018, 12:30
GridFTP transfers and the corresponding Grid Security Infrastructure (GSI)-based authentication and authorization system have been data transfer pillars of the Worldwide LHC Computing Grid (WLCG) for more than a decade. However, in 2017, the end of support for the Globus Toolkit - the reference platform for these technologies - was announced. This has reinvigorated and expanded efforts to...
Go to contribution page -
Matteo Cremonesi (Fermi National Accelerator Lab. (US))11/07/2018, 12:30
In recent years the LHC delivered a record-breaking luminosity to the CMS experiment making it a challenge to successfully handle all the demands for the efficient Data and Monte Carlo processing. In the presentation we will review major issues managing such requests and how we were able to address them. Our main strategy relies on the increased automation and dynamic workload and data...
Go to contribution page -
David Schultz (University of Wisconsin-Madison)11/07/2018, 12:30
IceCube is a cubic kilometer neutrino detector located at the South Pole. CVMFS is a key component of IceCube’s Distributed High Throughput Computing analytics workflow for sharing 500 GB of software across datacenters worldwide. Building the IceCube software suite on CVMFS has historically been accomplished first with a long bash script, then with a more complex set of Python scripts. We...
Go to contribution page -
Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT: Stewart Martin-Haugh (Science and Technology Facilities Council STFC (GB))11/07/2018, 12:30
We present an implementation of the ATLAS High Level Trigger (HLT) that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the HLT to meet future challenges from the evolution of computing hardware and upgrades of the Large Hadron Collider (LHC) and ATLAS Detector. During the LHC data-taking period...
Go to contribution page -
Rafal Grzymkowski (IFJ PAN)11/07/2018, 12:30Track 7 – Clouds, virtualization and containerspresentation
In recent years, public clouds have undergone a large transformation. Nowadays, cloud providers compete in delivering specialized, scalable and fault-tolerant services in which resource management is entirely on their side. This computing model, called serverless computing, is very attractive for users who do not want to worry about OS-level management, security patches and scaling resources. Our...
Go to contribution page -
Dr Holger Schulz (Fermi National Accelerator Laboratory)11/07/2018, 12:30
We present a range of conceptual improvements and extensions to the popular tuning tool "Professor". Its core functionality remains the construction of multivariate analytic approximations to an otherwise computationally expensive function. A typical example would be histograms obtained from Monte-Carlo (MC) event generators for standard model and new physics processes. The fast Professor...
Go to contribution page -
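Professor's core functionality, as the abstract above states, is building analytic approximations to a computationally expensive function. The idea can be sketched in miniature by fitting a quadratic surrogate to a few sampled points; `expensive_mc` below is a hypothetical stand-in for an MC-generator observable, and none of this is Professor's actual code:

```python
# Miniature version of the surrogate-modelling idea behind "Professor":
# sample an expensive function at a few parameter points, fit an
# analytic (here quadratic) approximation, then evaluate the cheap
# surrogate instead of the expensive function during tuning.

def expensive_mc(p: float) -> float:
    # hypothetical stand-in: pretend each call costs hours of CPU
    return 2.0 * p * p - 3.0 * p + 1.0

def fit_quadratic(xs, ys):
    """Exact quadratic through three points (Newton divided
    differences), returned as coefficients (a, b, c) of a + b*x + c*x^2."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    d01 = (y1 - y0) / (x1 - x0)
    d12 = (y2 - y1) / (x2 - x1)
    c = (d12 - d01) / (x2 - x0)
    b = d01 - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

xs = [0.0, 1.0, 2.0]
ys = [expensive_mc(x) for x in xs]   # the only expensive evaluations
a, b, c = fit_quadratic(xs, ys)

def surrogate(p: float) -> float:
    return a + b * p + c * p * p

# for a quadratic target the surrogate reproduces the true function
assert abs(surrogate(1.5) - expensive_mc(1.5)) < 1e-9
```

The real tool fits multivariate polynomials to many generator runs at once; the principle, trading a handful of expensive evaluations for a cheap analytic model, is the same.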
11/07/2018, 12:45
-
11/07/2018, 12:45Track 7 – Clouds, virtualization and containerspresentation
-
Rosen Matev (CERN)11/07/2018, 12:45
The first LHCb upgrade will take data at an instantaneous luminosity of $2\times10^{33}\mathrm{cm}^{-2}s^{-1}$ starting in 2021. Due to the high rate of beauty and charm signals LHCb will read out the entire detector into a software trigger running on commodity hardware at the LHC collision frequency of 30 MHz. In this talk we present the challenges of triggering in the MHz signal era. We pay...
Go to contribution page -
Brian Paul Bockelman (University of Nebraska Lincoln (US))11/07/2018, 12:45
Outside the HEP computing ecosystem, it is vanishingly rare to encounter user X509 certificate authentication (and proxy certificates are even more rare). The web never widely adopted the user certificate model, but increasingly sees the need for federated identity services and distributed authorization. For example, Dropbox, Google and Box instead use bearer tokens issued via the OAuth2...
Go to contribution page -
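The contribution above contrasts X.509 user certificates with the bearer tokens used by services such as Dropbox, Google and Box. The essence of a bearer token, a self-contained signed credential that a service verifies without a certificate handshake, can be sketched with the Python standard library; this is illustrative only (real deployments use JWT/OAuth2 libraries, and all names here are hypothetical):

```python
# Minimal sketch of a signed bearer token: the bearer presents the
# token, and the service checks the signature instead of walking an
# X.509 certificate chain.  NOT the real OAuth2/JWT machinery.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-issuer-key"  # hypothetical shared issuer secret

def issue(claims):
    payload = base64.urlsafe_b64encode(
        json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify(token):
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or foreign token
    return json.loads(base64.urlsafe_b64decode(payload))

tok = issue({"sub": "user42", "scope": "storage.read"})
assert verify(tok) == {"scope": "storage.read", "sub": "user42"}
# flipping one character of the signature invalidates the token
assert verify(tok[:-1] + ("0" if tok[-1] != "0" else "1")) is None
```

Note the trade-off the abstract alludes to: possession of the token alone grants access, so scoping and short lifetimes replace the delegation semantics of proxy certificates.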
Mr Tigran Mkrtchyan (DESY)11/07/2018, 12:45
For over a decade, dCache.org has provided software which is used at more than 80 sites around the world, providing reliable services for WLCG experiments and others. This can be achieved only with a well established process starting from the whiteboard, where ideas are created, all the way through to packages, installed on the production systems. Since early 2013 we have moved to git as our...
Go to contribution page -
Luca Perrozzi (Eidgenoessische Technische Hochschule Zuerich (ETHZ) (CH))11/07/2018, 12:45
We describe the CMS computing model for MC event generation, and technical integration and workflows for generator tools in CMS. We discuss the most commonly used generators, standard configurations, their event tunes, and the technical performance of these configurations for Run II as well as the needs for Run III.
Go to contribution page -
Michael Russell (Heidelberg University)11/07/2018, 12:45Track 6 – Machine learning and physics analysispresentation
We show how a novel network architecture based on Lorentz Invariance (and not much else) can be used to identify hadronically decaying top quarks. We compare its performance to alternative approaches, including convolutional neural networks, and find it to be very competitive. We also demonstrate how this architecture can be extended to include tracking information and show its application to...
Go to contribution page -
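The architecture above is built from Lorentz invariants; the prototypical invariant in top tagging is the mass of a four-momentum combination, $m^2 = E^2 - |\vec{p}|^2$ in natural units. A small illustration (the four-vectors below are made up for the example and are not from the contribution):

```python
# Lorentz-invariant mass of a set of four-momenta (E, px, py, pz),
# the kind of quantity an invariance-based tagger is built from.
import math

def invariant_mass(vectors):
    """m = sqrt(E^2 - px^2 - py^2 - pz^2) of the summed four-vector."""
    E = sum(v[0] for v in vectors)
    px = sum(v[1] for v in vectors)
    py = sum(v[2] for v in vectors)
    pz = sum(v[3] for v in vectors)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# a single particle at rest: mass equals its energy
assert invariant_mass([(172.5, 0.0, 0.0, 0.0)]) == 172.5

# two massless back-to-back particles of energy E each -> mass 2E
m = invariant_mass([(50.0, 50.0, 0.0, 0.0), (50.0, -50.0, 0.0, 0.0)])
assert abs(m - 100.0) < 1e-9
```

Because such quantities are unchanged by boosts and rotations, a network built from them does not have to learn frame dependence from data.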
Lindsey Gray (Fermi National Accelerator Lab. (US))12/07/2018, 09:00
The HL-LHC will present enormous storage and computational demands, creating a total dataset of up to 200 Exabytes and requiring commensurate computing power to record, reconstruct, calibrate, and analyze these data. Addressing these needs for the HL-LHC will require innovative approaches to deliver the necessary processing and storage resources. The "blockchain" is a recent technology for...
Go to contribution page -
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))12/07/2018, 09:30presentation
-
Andrea Ceccanti12/07/2018, 10:00
X.509 certificates and VOMS have proved to be a secure and reliable solution for authentication and authorization on the Grid, but have also shown usability issues and required the development of ad-hoc services and libraries to support VO-based authorization schemes in Grid middleware and experiment computing frameworks. The need to move beyond X.509 certificates is recognized as an...
Go to contribution page -
Patricia Mendez Lorenzo (CERN)12/07/2018, 11:00
Building, testing and deploying coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC experiments, the CERN Beams department and data analysis services such as SWAN. These software stacks include several packages (Grid middleware, Monte Carlo generators, Machine Learning tools, Python modules), all required for...
Go to contribution page -
Dr Markus Frank (CERN)12/07/2018, 11:00
The detector description is an essential component in the analysis of data resulting from particle collisions in high energy physics experiments. The interpretation of these data typically requires additional long-lived data which describe in detail the state of the experiment itself. Such accompanying data include alignment parameters, the electronics calibration and their...
Go to contribution page -
Martin Barisits (CERN)12/07/2018, 11:00
Rucio, the distributed data management system of the ATLAS collaboration already manages more than 330 Petabytes of physics data on the grid. Rucio has seen incremental improvements throughout LHC Run-2 and is currently being prepared for the HL-LHC era of the experiment. Next to these improvements the system is currently evolving into a full-scale generic data management system for...
Go to contribution page -
Luca dell'Agnello (INFN)12/07/2018, 11:00
The INFN Tier-1 center at CNAF was extended in 2016 and 2017 to include a small amount of resources (~24 kHS06, corresponding to ~10% of the CNAF pledges for LHC in 2017) physically located at the Bari-ReCas site (~600 km from CNAF). In 2018, a significant percentage of the CPU power (~170 kHS06, equivalent to ~50% of the total CNAF pledges) is going to be provided via a...
Go to contribution page -
Felice Pantaleo (CERN)12/07/2018, 11:00Track 6 – Machine learning and physics analysispresentation
In the recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high energy physics data taking and analysis. Building on these proofs of principle, many HEP experiments are now working on integrating Deep Learning into their workflows. The computation need for inference of a model once trained is rather modest and does not usually...
Go to contribution page -
Filippo Costa (CERN)12/07/2018, 11:00
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). During the second long shutdown of the LHC, the ALICE detector will be upgraded to cope with an interaction rate of 50 kHz in Pb-Pb collisions, producing in the online computing system (O2) a sustained input...
Go to contribution page -
Boris Bauermeister (Stockholm University)12/07/2018, 11:00
The Xenon Dark Matter experiment is looking for non-baryonic particle dark matter in the universe. The demonstrator is a dual-phase time projection chamber (TPC), filled with a target mass of ~2000 kg of ultra-pure liquid xenon. The experimental setup is operated at the Laboratori Nazionali del Gran Sasso (LNGS). We present here a full overview of the computing scheme for data distribution...
Go to contribution page -
Mihaela Gheata (Institute of Space Science (RO))12/07/2018, 11:15
VecGeom is a multi-purpose geometry library targeting the optimisation of the 3D-solid algorithms used extensively in particle transport and tracking applications. As a particular feature, the implementations of these algorithms are templated on the input data type and are explicitly vectorised using the VecCore library in the case of SIMD vector inputs. This provides additional performance for...
Go to contribution page -
Janusz Martyniak12/07/2018, 11:15
The SoLid experiment is a short-baseline neutrino project located at the BR2 research reactor in Mol, Belgium. It started data taking in November 2017. Data management, including long-term storage, will be handled in close collaboration by VUB Brussels, Imperial College London and Rutherford Appleton Laboratory (RAL). The data management system makes the data available for analysis on the...
Go to contribution page -
Matthias Jochen Schnepf (KIT - Karlsruhe Institute of Technology (DE))12/07/2018, 11:15
Experience to date indicates that the demand for computing resources in high energy physics shows highly dynamic behaviour, while the resources provided by the WLCG remain static over the year. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are very well suited to cover peak loads. However, the utilization of this...
Go to contribution page -
Enrico Gamberini (CERN)12/07/2018, 11:15
The liquid argon Time Projection Chamber technique has matured and is now in use by several short-baseline neutrino experiments. This technology will be used in the long-baseline DUNE experiment; however, this experiment represents a large increase in scale, which needs to be validated explicitly. To this end, both the single-phase and dual-phase technology are being tested at CERN, in two...
Go to contribution page -
Tadashi Maeno (Brookhaven National Laboratory (US))12/07/2018, 11:15
The Production and Distributed Analysis (PanDA) system has been successfully used in the ATLAS experiment as a data-driven workload management system. The PanDA system has proven to be capable of operating at the Large Hadron Collider data processing scale over the last decade including the Run 1 and Run 2 data taking periods. PanDA was originally designed to be weakly coupled with the WLCG...
Go to contribution page -
Andrea Manzi (CERN)12/07/2018, 11:15
Complex, large-scale distributed systems are more frequently used to solve extraordinary computing, storage and other problems. However, the development of these systems usually requires working with several software components, maintaining and improving large codebases, and coordinating a relatively large number of developers. It is therefore inevitable that faults are introduced into the...
Go to contribution page -
Jean-Roch Vlimant (California Institute of Technology (US))12/07/2018, 11:15Track 6 – Machine learning and physics analysispresentation
In the field of High Energy Physics, the simulation of the interaction of particles in the material of calorimeters is a computing-intensive task, even more so with complex and fine-grained detectors. The complete and most accurate simulation of particle/matter interaction is essential when calibrating and understanding the detector at the very low level, but is seldom required at physics...
Go to contribution page -
Ran Du12/07/2018, 11:30
There are two co-existing production clusters at the Institute of High Energy Physics (IHEP). One is a High Throughput Computing (HTC) cluster with HTCondor as the workload manager; the other is a High Performance Computing (HPC) cluster with SLURM as the workload manager. The resources of the HTCondor cluster are provided by multiple experiments, and the resource utilization has reached more...
Go to contribution page -
Simone Campana (CERN)12/07/2018, 11:30
The computing strategy document for HL-LHC identifies storage as one of the main WLCG challenges in one decade from now. In the naive assumption of applying today’s computing model, the ATLAS and CMS experiments will need one order of magnitude more storage resources than what could be realistically provided by the funding agencies at the same cost of today. The evolution of the computing...
Go to contribution page -
Irina Filozova (Joint Institute for Nuclear Research (RU))12/07/2018, 11:30
This paper is dedicated to the current state of the Geometry Database (Geometry DB) for the CBM experiment. The Geometry DB is an information system that supports the CBM geometry. Its main aims are to provide storage of the CBM geometry, together with convenient tools for managing the geometry modules and for assembling various versions of the CBM setup as a combination of geometry modules and...
Go to contribution page -
Guilherme Amadio (CERN)12/07/2018, 11:30
The processing of HEP data relies on rich software distributions, made of experiment-specific software and hundreds of other software products, developed by our community and outside it. Such software stacks are traditionally distributed on shared file systems as a set of coherently built packages. This has the benefit of reducing as much as possible any coupling with the...
Go to contribution page -
Jan Fridolf Strube12/07/2018, 11:30Track 6 – Machine learning and physics analysispresentation
Measurements in LArTPC neutrino detectors feature high fidelity and result in large event images. Deep learning techniques have been extremely successful in classification tasks of photographs, but their application to LArTPC event images is challenging, due to the large size of the events; two orders of magnitude larger than images found in classical challenges like MNIST or ImageNet. This...
Go to contribution page -
Remi Mommsen (Fermi National Accelerator Lab. (US))12/07/2018, 11:30
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 740 sources and assembles them into complete events which are then handed to the high-level trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced...
-
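The event-building step described in this abstract — collecting fragments from many sources and assembling complete events — can be illustrated with a toy sketch. This is an assumption-laden illustration, not CMS DAQ code; the class name and sizes are invented:

```python
# Toy event builder: collect fragments from n_sources per event ID and
# assemble a complete event once all fragments have arrived.
from collections import defaultdict

class ToyEventBuilder:
    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)   # event_id -> {source_id: payload}
        self.complete = []                 # finished (event_id, event) pairs

    def add_fragment(self, event_id, source_id, payload):
        self.pending[event_id][source_id] = payload
        if len(self.pending[event_id]) == self.n_sources:
            fragments = self.pending.pop(event_id)
            # concatenate fragments in source order to form the event
            event = b"".join(fragments[s] for s in sorted(fragments))
            self.complete.append((event_id, event))

builder = ToyEventBuilder(n_sources=3)
builder.add_fragment(42, 0, b"aa")
builder.add_fragment(42, 1, b"bb")
builder.add_fragment(42, 2, b"cc")
```

The real event builder does this bookkeeping for about 740 sources at 100 kHz over a network, but the assembly idea is the same.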
Pavlo Svirin (Brookhaven National Laboratory (US))12/07/2018, 11:30
A goal of the LSST (Large Synoptic Survey Telescope) project is to conduct a 10-year survey of the sky that is expected to deliver 200 petabytes of data after it begins full science operations in 2022. The project will address some of the most pressing questions about the structure and evolution of the universe and the objects in it. It will require a large number of simulations to understand the...
-
Ms Qiumei Ma (IHEP)12/07/2018, 11:45
The BESIII experiment has taken data for more than ten years, accumulating about fifty thousand runs, so managing this large dataset is a major challenge for us. Over the years, we have created an efficient and complete data management system, including a MySQL database, a C++ API, a bookkeeping system, monitoring applications, etc. I will focus on introducing the BESIII central database management system’s...
-
Lorenzo Moneta (CERN)12/07/2018, 11:45Track 6 – Machine learning and physics analysispresentation
The ROOT Mathematical and Statistical libraries have been recently improved to facilitate the modelling of parametric functions that can be used for performing maximum likelihood fits to data sets to estimate parameters and their uncertainties.
We report here on the new functionality of the ROOT TFormula and TF1 classes to build these models in a convenient way for the users. We show how... -
Stefan-Gabriel Chitic (CERN)12/07/2018, 11:45
The LHCb physics software has to support the analysis of data taken up to now and at the same time is under active development in preparation for the detector upgrade coming into operation in 2021. A continuous integration system is therefore crucial to maintain the quality of the ~6 million lines of C++ and Python, to ensure consistent builds of the software, and to run the unit and...
-
Edoardo Martelli (CERN)12/07/2018, 11:45
While the LHCb experiment will be using a local data-centre at the experiment site for its computing infrastructure in Run3, LHCb is also evaluating the possibility to move its High Level Trigger server farm into an IT data-centre located a few kilometres away from the LHCb detector. If proven feasible, and if it could be replicated by other LHC experiments, the solution would allow the...
-
Luisa Arrabito12/07/2018, 11:45
The Cherenkov Telescope Array (CTA) is the next-generation instrument in the field of very high energy gamma-ray astronomy. It will be composed of two arrays of Imaging Atmospheric Cherenkov Telescopes, located at La Palma (Spain) and Paranal (Chile). The construction of CTA has just started with the installation of the first telescope on site at La Palma and the first data expected by the end...
-
Dr Grzegorz Jereczek (Intel Corporation)12/07/2018, 11:45
Data acquisition (DAQ) systems for high energy physics experiments read out data from a large number of electronic components, typically over thousands of point-to-point links. They are thus inherently distributed systems. Traditionally, an important stage in the data acquisition chain has always been the so-called event building: data fragments coming from different sensors are identified as...
-
Peter Onyisi (University of Texas at Austin (US))12/07/2018, 11:45
ATLAS is embarking on a project to multithread its reconstruction software in time for use in Run 3 of the LHC. One component that must be migrated is the histogramming infrastructure used for data quality monitoring of the reconstructed data. This poses unique challenges due to its large memory footprint which forms a bottleneck for parallelization and the need to accommodate relatively...
-
Kilian Schwarz (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))12/07/2018, 12:00
The ALICE computing model for Run3 foresees a few big centres, called Analysis Facilities, optimised for fast processing of large local sets of Analysis Object Data (AODs). Contrary to the current running of analysis trains on the Grid, this will allow for more efficient execution of inherently I/O-bound jobs. GSI will host one of these centres and has therefore finalised a first Analysis...
-
Gvozden Neskovic (Johann-Wolfgang-Goethe Univ. (DE))12/07/2018, 12:00
ALICE (A Large Ion Collider Experiment), one of the large LHC experiments, is undergoing a major upgrade during the next long shutdown. The increase in data rates planned for LHC Run3 (3 TiB/s for Pb-Pb collisions), with triggerless continuous readout operation, requires a paradigm shift in computing and networking infrastructure.
The new ALICE O2 (online-offline) computing facility consists of two... -
Chris Burr (University of Manchester (GB))12/07/2018, 12:00Track 6 – Machine learning and physics analysispresentation
Analyses of multi-million event datasets are natural candidates to exploit the massive parallelisation available on GPUs. This contribution presents two such approaches to measure CP violation and the corresponding user experience.
The first is the energy test, which is used to search for CP violation in the phase-space distribution of multi-body hadron decays. The method relies on a...
-
Dr Robert Andrew Currie (The University of Edinburgh (GB))12/07/2018, 12:00
The LHCb Performance Regression (LHCbPR) framework allows periodic software testing to be performed in a reproducible manner.
LHCbPR provides a JavaScript-based web front-end service, built atop industry-standard tools such as AngularJS, Bootstrap and Django (https://lblhcbpr.cern.ch).
This framework records the evolution of tests over time, allowing this data to be extracted for... -
Xiaomei Zhang (Chinese Academy of Sciences (CN))12/07/2018, 12:00
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment which will start in 2020. To speed up JUNO data processing on multicore hardware, the JUNO software framework is introducing parallelization based on TBB. To support JUNO multicore simulation and reconstruction jobs in the near future, a new workload scheduling model has to be explored and implemented in...
-
Dr Malachi Schram (Pacific Northwest National Laboratory)12/07/2018, 12:00
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start taking physics data in early 2018 and aims to accumulate 50/ab, or approximately 50 times more data than the Belle experiment. The collaboration expects it will manage and process approximately 200 PB of data.
Computing at this scale requires efficient and coordinated use of the compute grids in North America,...
-
Marcel Andre Schneider (CERN)12/07/2018, 12:00
The Data Quality Monitoring Software is a central tool in the CMS experiment. It is used in the following key environments: 1) Online, for real-time detector monitoring; 2) Offline, for the prompt-offline-feedback and final fine-grained data quality analysis and certification; 3) Validation of all the reconstruction software production releases; 4) Validation in Monte Carlo productions. Though...
-
Carl Lundstedt (University of Nebraska Lincoln (US))12/07/2018, 12:15
Even as grid middleware and analysis software have matured over the course of the LHC's lifetime, it is still challenging for non-specialized computing centers to contribute resources. Many U.S. CMS collaborators would like to set up Tier-3 sites to contribute campus resources for the use of their local CMS group as well as the collaboration at large, but find the administrative burden of...
-
Dr Vito Di Benedetto (Fermi National Accelerator Lab. (US))12/07/2018, 12:15
This paper describes the current architecture of the Continuous Integration (CI) service developed at Fermilab, the successes and difficulties encountered, and future development plans. Current experiment code has hundreds of contributors who provide new features, bug fixes, and other improvements. Version control systems help developers collaborate in contributing software for their...
-
Gilles Grasseau (Centre National de la Recherche Scientifique (FR))12/07/2018, 12:15Track 6 – Machine learning and physics analysispresentation
In the proton-proton collisions at the LHC, the associate production of the Higgs boson with two top quarks has not been observed yet. This ttH channel allows directly probing the coupling of the Higgs boson to the top quark. The observation of this process could be a highlight of the ongoing Run 2 data taking.
Unlike supervised methods (neural networks, decision trees, support vector...
-
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecno)12/07/2018, 12:15
The CMS Submission Infrastructure Global Pool, built on GlideinWMS and HTCondor, is a worldwide distributed dynamic pool responsible for the allocation of resources for all CMS computing workloads. Matching the continuously increasing demand for computing resources by CMS requires the anticipated assessment of its scalability limitations. Extrapolating historical usage trends, by LHC Run III...
-
Patrick Meade (University of Wisconsin-Madison)12/07/2018, 12:15
IceCube is a cubic kilometer neutrino detector located at the south pole. Metadata for files in IceCube has traditionally been handled on an application by application basis, with no user-facing access. There has been no unified view of data files, and users often just ls the filesystem to locate files. Recently effort has been put into creating such a unified view. Going for a simple...
-
Dmitry Popov (Max-Planck-Gesellschaft (DE))12/07/2018, 12:15
Monte-Carlo simulation is a fundamental tool for high-energy physics experiments, from the design phase to data analysis. In recent years its relevance has increased due to the ever-growing precision of measurements. Accuracy and reliability are essential features in simulation and particularly important in the current phase of the LHCb experiment, where physics analysis and preparation for data...
-
Alessandro Lonardo (Sapienza Universita e INFN, Roma I (IT))12/07/2018, 12:15
The NA62 experiment at the CERN SPS is aimed at measuring the branching ratio of the ultra-rare K+→π+νν decay.
This imposes very tight requirements on the particle identification capabilities of the apparatus in order to reject the considerable background.
To this purpose, a centralized level 0 hardware trigger system (L0TP) processes in real time the streams of data primitives coming from the... -
Raymond Ehlers (Yale University (US))12/07/2018, 14:00
ALICE Overwatch is a project started in late 2015 to provide augmented online monitoring and data quality assurance utilizing time-stamped QA histograms produced by the ALICE High Level Trigger (HLT). The system receives the data via ZeroMQ, storing it for later review, enriching it with detector specific functionality, and visualizing it via a web application. These provided capabilities are...
-
Mr Julian Myrcha (Warsaw University of Technology)12/07/2018, 14:00
Good quality track visualization is an important aspect of every High-Energy Physics experiment, where it can be used for quick assessment of recorded collisions. The event display, operated in the Control Room, is also important for visitors and increases public recognition of the experiment. Especially in the case of the ALICE detector at the Large Hadron Collider (LHC), which reconstructs...
-
Alastair Dewhurst (STFC-Rutherford Appleton Laboratory (GB))12/07/2018, 14:00
CVMFS has proved an extremely effective mechanism for providing scalable, POSIX-like access to experiment software across the Grid. The normal method of file access is HTTP download via squid caches from a small number of Stratum 1 servers. In the last couple of years this mechanism has been extended to allow access to files from any storage offering HTTP access. This has been named...
-
David Lange (Princeton University (US))12/07/2018, 14:00
The HL-LHC program has seen numerous extrapolations of its needed computing resources that each indicate the need for substantial changes if the desired HL-LHC physics program is to be supported within the current level of computing resource budgets. Drivers include large increases in event complexity (leading to increased processing time and analysis data size) and trigger rates needed (5-10...
-
Katarzyna Maria Dziedziniewicz-Wojcik (CERN)12/07/2018, 14:00Track 7 – Clouds, virtualization and containerspresentation
This contribution reports on the experience acquired from using the Oracle Cloud Infrastructure (OCI) as an Infrastructure as a Service (IaaS) within the distributed computing environments of the LHC experiments. The bare metal resources provided in the cloud were integrated using existing deployment and computer management tools. The model used in earlier cloud exercises was adapted to the... -
Dr Scott Rowan (University of Huddersfield)12/07/2018, 14:00
MERLIN is a C++ particle tracking software package, originally developed at DESY for use in International Linear Collider (ILC) simulations. MERLIN has more recently been adapted for High-Luminosity Large Hadron Collider (HL-LHC) collimation studies, utilising more advanced scattering physics. However, as is all too common in existing high-energy physics software, recent developments have not...
-
Dr Jean-Roch Vlimant (California Institute of Technology (US))12/07/2018, 14:00Track 6 – Machine learning and physics analysispresentation
With the High Luminosity Large Hadron Collider phase (HL-LHC) on the horizon, each proton bunch crossing will bring up to 200 simultaneous proton collisions. Performing the charged particle trajectory reconstruction in such a dense environment will be computationally challenging because of the nature of the traditional algorithms used. The common combinatorial Kalman Filter state-of-the-art...
-
Silvio Pardi (INFN)12/07/2018, 14:15
The implementation of cache systems in the computing model of HEP experiments makes it possible to accelerate scientists' access to hot data sets, opening new scenarios of data distribution and enabling exploitation of the storage-less site paradigm.
In this work, we present a study for the creation of an http data-federation eco-system with caching functionality. By exploiting the volatile-pool concept... -
Marco Canaparo (INFN)12/07/2018, 14:15
Software quality monitoring and analysis is one of the most productive topics of software engineering research. Its results may be employed effectively by engineers during the software development life cycle. Software metrics, together with data mining techniques, can provide the basis for developing prediction models.
Open source software constitutes a valid test case for the assessment of...
-
Mr Roland Kunz (DELL)12/07/2018, 14:15Track 7 – Clouds, virtualization and containerspresentation
Field-programmable gate arrays (FPGAs) have largely been used in communication and high-performance computing, and given the recent advances in big data and emerging trends in cloud computing (e.g., serverless [18]), FPGAs are increasingly being introduced into these domains (e.g., Microsoft’s datacenters [6] and Amazon Web Services [10]). To address these domains’ processing needs, recent...
-
Sebastian Andreas Merkt (University of Pittsburgh (US))12/07/2018, 14:15
Until recently, the direct visualization of the complete ATLAS experiment geometry and final analysis data was confined within the software framework of the experiment.
To provide a detailed interactive data visualization capability to users, as well as easy access to geometry data, and to ensure platform independence and portability, great effort has recently been put into the modernization... -
Serguei Kolos (University of California Irvine (US))12/07/2018, 14:15
The unprecedented size and complexity of the ATLAS experiment required the adoption of a new approach for online monitoring system development, as many requirements for this system were not known in advance due to the innovative nature of the project. The ATLAS online monitoring facility has been designed as a modular system consisting of a number of independent components, which can interact with one... -
Prof. Gennady Ososkov (Joint Institute for Nuclear Research (JINR), Russia)12/07/2018, 14:15Track 6 – Machine learning and physics analysispresentation
Charged particle tracks registered in high energy and nuclear physics (HENP) experiments must be reconstructed in the crucial stage of physics analysis known as tracking. It consists of joining into clusters a great number of so-called hits produced on sequential coordinate planes of tracking detectors. Each of these clusters joins all hits belonging to the same track, one of many...
-
Stefan Roiser (CERN)12/07/2018, 14:15
The LHCb experiment will be upgraded for data taking in the LHC Run 3. The foreseen trigger output bandwidth of a few GB/s will result in datasets of tens of PB per year, which need to be efficiently streamed and stored offline for low-latency data analysis. In addition, simulation samples up to two orders of magnitude larger than those currently simulated are envisaged, with big...
-
Michael Bender (University of Munich (LMU))12/07/2018, 14:30
The Belle II experiment, based in Japan, is designed for the precise measurement of B and D meson as well as $\tau$ decays and is intended to play an important role in the search for physics beyond the Standard Model. To visualize the collected data, amongst other things, virtual reality (VR) applications are used within the collaboration. In addition to the already existing VR application...
-
Pablo Llopis Sanmillan (CERN)12/07/2018, 14:30Track 7 – Clouds, virtualization and containerspresentation
CERN's batch and grid services are mainly focused on High Throughput Computing (HTC) for LHC data processing. However, part of the user community requires High Performance Computing (HPC) for massively parallel applications across many cores on MPI-enabled infrastructure. This contribution addresses the implementation of HPC infrastructure at CERN for Lattice QCD application development, as...
-
Mikhail Hushchyn (Yandex School of Data Analysis (RU))12/07/2018, 14:30Track 6 – Machine learning and physics analysispresentation
SHiP is a new proposed fixed-target experiment at the CERN SPS accelerator. The goal of the experiment is to search for hidden particles predicted by models of Hidden Sectors. Track pattern recognition is an early step of data processing at SHiP. It is used to reconstruct tracks of charged particles from the decay of neutral New Physics objects. Several artificial neural networks and boosting...
-
Laura Promberger (University of Applied Sciences (DE))12/07/2018, 14:30
LHCb is undergoing major changes in its data selection and processing chain for the upcoming LHC Run 3 starting in 2021. With this in view, several initiatives have been launched to optimise the software stack. This contribution discusses porting the LHCb Stack from the x86 architecture to the aarch64 architecture, with the goal of evaluating the performance and the cost of the computing infrastructure...
-
Mantas Stankevicius (Fermi National Accelerator Lab. (US))12/07/2018, 14:30
The Compact Muon Solenoid (CMS) is one of the experiments at the CERN Large Hadron Collider (LHC). The CMS Online Monitoring system (OMS) is an upgrade and successor to the CMS Web-Based Monitoring (WBM) system, which is an essential tool for shift crew members, detector subsystem experts, operations coordinators, and those performing physics analyses. CMS OMS is divided into aggregation and...
-
Fernando Harald Barreiro Megino (University of Texas at Arlington)12/07/2018, 14:30
The Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes over the past couple of years to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for fairly homogeneous resources available through the Worldwide LHC...
-
Jan Erik Sundermann (Karlsruhe Institute of Technology (KIT))12/07/2018, 14:30
The computing center GridKa serves the ALICE, ATLAS, CMS and LHCb experiments as one of the biggest WLCG Tier-1 centers worldwide, providing compute and storage resources. It is operated by the Steinbuch Centre for Computing at Karlsruhe Institute of Technology in Germany. In April 2017 a new online storage system was put into operation. In its current stage of expansion it offers the HEP...
-
Daniele Cesini (Universita e INFN, Bologna (IT))12/07/2018, 14:45
The development of data management services capable of coping with very large data resources is a key challenge in allowing future e-infrastructures to address the needs of the next generation of extreme-scale scientific experiments.
To face this challenge, the H2020 “eXtreme DataCloud - XDC” project was launched in November 2017. Lasting 27 months and combining the expertise of 8 large... -
Dr Andrew McNab (University of Manchester)12/07/2018, 14:45Track 7 – Clouds, virtualization and containerspresentation
During 2017, support for Docker and Singularity containers was added to the Vac system, in addition to its long-standing support for virtual machines. All three types of "logical machine" can now be run in parallel on the same pool of hypervisors, using container or virtual machine definitions published by experiments. We explain how CernVM-FS is provided to containers by the hypervisors, to... -
Alexandre Sousa (University of Cincinnati)12/07/2018, 14:45
Analysis of neutrino oscillation data involves a combination of complex fitting procedures and statistical corrections techniques that are used to determine the full three-flavor PMNS parameters and constraint contours. These techniques rely on computationally intensive “multi-universe” stochastic modeling. The process of calculating these contours and corrections can dominate final stages...
-
Miguel Martinez Pedreira (Johann-Wolfgang-Goethe Univ. (DE))12/07/2018, 14:45
The ALICE experiment will undergo an extensive detector and readout upgrade for the LHC Run3 and will collect a 10 times larger data volume than today. This will translate into an increase in the required CPU resources worldwide, as well as higher data access and transfer rates. JAliEn (Java ALICE Environment) is the new Grid middleware designed to scale out horizontally and satisfy the ALICE...
-
Francesco Di Capua (Università di Napoli Federico II and INFN)12/07/2018, 14:45
Control and monitoring of experimental facilities as well as laboratory equipment requires handling a blend of different tasks. Often in industrial or scientific fields there are standards or form factor to comply with and electronic interfaces or custom busses to adopt. With such tight boundary conditions, the integration of an off-the-shelf Single Board Computer (SBC) is not always a...
-
Mr Adriano Di Florio (Universita e INFN, Bari (IT))12/07/2018, 14:45Track 6 – Machine learning and physics analysispresentation
Since Run II, future development projects for the Large Hadron Collider will constantly bring nominal luminosity increase, with the ultimate goal of reaching a peak luminosity of $5 · 10^{34} cm^{−2} s^{−1}$ for ATLAS and CMS experiments planned for the High Luminosity LHC (HL-LHC) upgrade. This rise in luminosity will directly result in an increased number of simultaneous proton collisions...
-
Claire Adam Bourdarios (Centre National de la Recherche Scientifique (FR))12/07/2018, 14:45
Interactive 3D data visualization plays a key role in HEP experiments, as it is used in many tasks at different levels of the data chain. Outside HEP, for interactive 3D graphics, the game industry makes heavy use of so-called “game engines”, modern software frameworks offering an extensive set of powerful graphics tools and cross-platform deployment. Recently, a very strong support for...
-
Arturo Sanchez Pineda (Abdus Salam Int. Cent. Theor. Phys. (IT))12/07/2018, 15:00
One of the big challenges in High Energy Physics development is the fact that many potential -and very valuable- students and young researchers live in countries where internet access and computational infrastructure are poor compared to institutions already participating.
In order to accelerate the process, the ATLAS Open Data project releases useful and meaningful data and tools using...
-
Antonio Dias (Universidade de Lisboa (PT))12/07/2018, 15:00
The current scientific environment has experimentalists and system administrators allocating large amounts of time to data access, parsing and gathering, as well as instrument management. This is a growing challenge, with more large collaborations with significant amounts of instrument resources, remote instrumentation sites, and continuously improved and upgraded scientific... -
Maksim Melnik Storetvedt (Western Norway University of Applied Sciences (NO))12/07/2018, 15:00Track 7 – Clouds, virtualization and containerspresentation
Virtualization and containers have become the go-to solutions for simplified deployment, elasticity and workflow isolation. These benefits are especially advantageous for containers, which dispense with the resource overhead associated with VMs, applicable in all cases where virtualization of the full hardware stack is not considered necessary. Containers are also simpler to set up and maintain...
-
Andrea Sciaba (CERN)12/07/2018, 15:00
The increase in the scale of LHC computing expected for Run 3 and even more so for Run 4 (HL-LHC) over the course of the next ten years will most certainly require radical changes to the computing models and the data processing of the LHC experiments. Translating the requirements of the physics programmes into computing resource needs is an extremely complicated process and subject to...
-
Jiaheng Zou (IHEP)12/07/2018, 15:00
SNiPER is a general-purpose offline software framework for high energy physics experiments. It provides some features that are attractive to neutrino experiments, such as the event buffer. More than one event is available in the buffer according to a customizable time window, so that it is easy for users to apply event correlation analysis.
We also implemented MT-SNiPER to support... -
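The time-window event buffer mentioned in this abstract can be illustrated with a small sketch. The class below is an invented toy, not the SNiPER API; it simply keeps all events within a configurable window behind the newest one:

```python
# Toy time-window event buffer (illustrative, not SNiPER code):
# newly pushed events evict any buffered event older than `window`
# time units relative to the newest event, so correlation analysis
# always sees a recent neighbourhood of events.
from collections import deque

class TimeWindowBuffer:
    def __init__(self, window):
        self.window = window
        self.buf = deque()  # time-ordered (timestamp, event) pairs

    def push(self, timestamp, event):
        self.buf.append((timestamp, event))
        # drop events that have fallen out of the window
        while self.buf and timestamp - self.buf[0][0] > self.window:
            self.buf.popleft()

    def in_window(self):
        return [event for _, event in self.buf]

buf = TimeWindowBuffer(window=10)
for t in (0, 3, 8, 15):
    buf.push(t, f"evt@{t}")
```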
Moritz Kiehn (Universite de Geneve (CH))12/07/2018, 15:00Track 6 – Machine learning and physics analysispresentation
The High-Luminosity LHC will see pileup levels reaching 200, which will greatly increase the complexity of the tracking component of the event reconstruction.
To reach out to Computer Science specialists, a Tracking Machine Learning challenge (trackML) is being set up on Kaggle for the first semester of 2018 by a team of ATLAS, CMS and LHCb physicist tracking experts and Computer Scientists,... -
Dr Marcus Ebert (University of Victoria)12/07/2018, 15:00
The dynamic data federation software (Dynafed), developed by CERN IT, provides a federated storage cluster on demand using the HTTP protocol with WebDAV extensions. Traditional storage sites which support an experiment can be added to Dynafed without requiring any changes to the site. Dynafed also supports direct access to cloud storage such as S3 and Azure. We report on the usage of Dynafed...
-
Valerio Formato (Universita e INFN, Perugia (IT))12/07/2018, 15:15
In many HEP experiments, a typical data analysis workflow requires each user to read the experiment data in order to extract meaningful information and produce relevant plots for the considered analysis. Multiple users accessing the same data results in redundant access to the data itself, which could be factorised, effectively improving the CPU efficiency of the analysis jobs and relieving... -
Dr Dainius Simelevicius (Vilnius University (LT))12/07/2018, 15:15
The part of the CMS data acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed programs. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate the timeliness of necessary corrections in case of any deviation from specified behaviour. A large number of diverse monitoring data...
-
Dr Kurt Rinnert (University of Liverpool (GB))12/07/2018, 15:15Track 6 – Machine learning and physics analysispresentation
The LHCb experiment will undergo a major upgrade for LHC Run-III, scheduled to start taking data in 2021. The upgrade of the LHCb detector introduces a radically new data-taking strategy: the current multi-level event filter will be replaced by a trigger-less readout system, feeding data into a software event filter at a rate of 40 MHz. In particular, a new Vertex Locator (VELO) will be...
-
Edgar Fajardo Hernandez (Univ. of California San Diego (US))12/07/2018, 15:15
With the increase in power and reduction in cost of GPU-accelerated processors, a corresponding interest in their use in the scientific domain has grown. OSG users are no different, and they have shown an interest in accessing GPU resources via their usual workload infrastructures. Grid sites that have these kinds of resources also want to make them available on the grid. In this talk, we discuss...
-
Paul Millar (DESY)12/07/2018, 15:15
Whatever the use case, for federated storage to work well some knowledge from each storage system must exist outside that system. This is needed to allow coordinated activity; e.g., executing analysis jobs on worker nodes with good accessibility to the data.
Currently, this is achieved by clients notifying central services of activity; e.g., a client notifies a replica catalogue after an...
-
Andrew McNab (University of Manchester)12/07/2018, 15:15Track 7 – Clouds, virtualization and containerspresentation
During 2017, LHCb created Docker and Singularity container definitions which allow sites to run all LHCb DIRAC workloads in containers as "black boxes". This parallels LHCb's previous work to encapsulate the execution of DIRAC payload jobs in virtual machines, and we explain how these three types of "logical machine" are related in LHCb's case and how they differ, in terms of architecture,...
-
Jim Pivarski (Princeton University)12/07/2018, 16:00presentation
High energy physics is no longer the main user or developer of data analysis tools. Open source tools developed primarily for data science, business intelligence, and finance are available for use in HEP, and adopting them would the reduce in-house maintenance burden and provide users with a wider set of training examples and career options. However, physicists have been analyzing data with...
-
Axel Naumann (CERN)12/07/2018, 16:30
After 20 years of evolution, ROOT is currently undergoing a change of gears, bringing our vision of simplicity, robustness and speed closer to physicists' reality. ROOT is now offering a game-changing, fundamentally superior approach to writing analysis code. It is working on a rejuvenation of the graphics system and user interaction. It automatically leverages modern CPU vector and multi-core...
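The new approach to analysis code is a declarative, chainable style, modelled loosely on the Filter/Define/Count pattern of ROOT's RDataFrame. A stdlib-only sketch of that style (the `Frame` class here is illustrative, not ROOT's actual implementation):

```python
class Frame:
    """Tiny stand-in for a declarative analysis chain: each call
    returns a new Frame, and nothing runs until a result is asked for."""

    def __init__(self, rows, ops=()):
        self.rows, self.ops = rows, ops

    def Filter(self, pred):
        return Frame(self.rows, self.ops + (("filter", pred),))

    def Define(self, name, func):
        return Frame(self.rows, self.ops + (("define", name, func),))

    def Count(self):
        # The whole chain is evaluated lazily, in one pass over the rows.
        n = 0
        for row in self.rows:
            row = dict(row)
            keep = True
            for op in self.ops:
                if op[0] == "define":
                    row[op[1]] = op[2](row)
                elif not op[1](row):
                    keep = False
                    break
            n += keep
        return n


events = [{"pt": 12.0}, {"pt": 35.5}, {"pt": 60.1}]
count = (Frame(events)
         .Define("high", lambda r: r["pt"] > 30)
         .Filter(lambda r: r["high"])
         .Count())
print(count)  # prints 2
```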
-
Andreas Joachim Peters (CERN)12/07/2018, 17:00
The EOS project started as a specialized disk-only storage software solution for physics analysis use-cases at CERN in 2010.
Over the years EOS has evolved into an open storage platform, leveraging several open source building blocks from the community. The service at CERN manages around 250 PB, distributed across two data centers and provides user- and project-spaces to all CERN experiments.... -
Jakob Blomer (CERN)12/07/2018, 17:20
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution and, to some extent, a data distribution service. It gives POSIX access to more than half a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. Increasingly, CernVM-FS also provides access to certain...
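Part of how CernVM-FS scales to half a billion files is content-addressed storage with deduplication: identical file content is stored once, under its hash. A toy Python sketch of that general idea (not the actual CernVM-FS on-disk format):

```python
import hashlib


class ObjectStore:
    """Toy content-addressed store: identical file content is stored once,
    and path names map to content hashes."""

    def __init__(self):
        self.objects = {}   # content hash -> bytes
        self.catalog = {}   # path -> content hash

    def add(self, path, content):
        digest = hashlib.sha1(content).hexdigest()
        self.objects[digest] = content  # stored once per unique content
        self.catalog[path] = digest
        return digest

    def read(self, path):
        return self.objects[self.catalog[path]]


store = ObjectStore()
store.add("/sw/v1/lib.so", b"binary-blob")
store.add("/sw/v2/lib.so", b"binary-blob")  # same content, deduplicated
print(len(store.objects))  # prints 1
print(store.read("/sw/v2/lib.so"))  # prints b'binary-blob'
```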
-
Luca dell'Agnello (INFN)12/07/2018, 17:40
The year 2017 was most likely a turning point for the INFN Tier-1. In fact, on November 9th 2017, early in the morning, a large pipe of the city aqueduct, located under the road next to CNAF, broke. As a consequence, a river of water and mud flowed towards the Tier-1 data center. The level of the water did not exceed the safety threshold of the waterproof doors but, due to the porosity of the...
-
Latchezar Betev (CERN)13/07/2018, 08:55presentation
-
Catrin Bernius (SLAC National Accelerator Laboratory (US))13/07/2018, 09:00presentation
-
Patricia Mendez Lorenzo (CERN)13/07/2018, 09:20presentation
-
Hannah Short (CERN)13/07/2018, 09:40presentation
-
Costin Grigoras (CERN)13/07/2018, 10:00presentation
-
Gene Van Buren (Brookhaven National Laboratory)13/07/2018, 10:20presentation
-
Sergei Gleyzer (University of Florida (US))13/07/2018, 10:40presentation
-
Dave Dykstra (Fermi National Accelerator Lab. (US))13/07/2018, 11:30presentation
-
Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)13/07/2018, 11:50presentation
-
13/07/2018, 12:10presentation
-
Dr Waseem Kamleh (University of Adelaide)13/07/2018, 12:25presentation
-
Enrico Fattibene (INFN - CNAF)
IBM Spectrum Protect (ISP) software, one of the leading solutions in data protection, contributes to the data management infrastructure operated at CNAF, the central computing and storage facility of INFN (Istituto Nazionale di Fisica Nucleare – Italian National Institute for Nuclear Physics). It is used to manage about 44 Petabytes of scientific data produced by LHC (Large Hadron Collider at...
-
Maarten Litmaath (CERN)Track 8 – Networks and facilitiespresentation
-