-
10/10/2016, 08:45
-
Jim Siegrist (DOE), 10/10/2016, 09:00
-
Mark Seager (Intel), 10/10/2016, 09:30
-
Ian Fisk (Simons Foundation), 10/10/2016, 10:00
-
Francesco Giovanni Sciacca (Universitaet Bern (CH)), 10/10/2016, 11:00
Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort moving the Swiss WLCG T2 computing, serving ATLAS, CMS... -
Oliver Keeble (CERN), 10/10/2016, 11:00
The DPM (Disk Pool Manager) project is the most widely deployed solution for storage of large data repositories on Grid sites, and is completing the most important upgrade in its history, with the aim of bringing important new features, improved performance and easier long-term maintainability.
Work has been done to make the so-called "legacy stack" optional, and substitute it with an advanced... -
Liang Sun (Wuhan University (CN)), 10/10/2016, 11:00
Based on GooFit, a GPU-friendly framework for doing maximum-likelihood fits, we have developed a tool for extracting model-independent S-wave amplitudes from three-body decays such as D+ --> h(')-,h+,h+. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes (or alternatively, the real and imaginary components), are anchored at a finite number of...
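The contribution above rests on unbinned maximum-likelihood fitting. As a hedged, language-agnostic illustration of that idea only (GooFit itself is a C++/CUDA framework whose API is not reproduced here; the toy model and numbers below are invented), a negative log-likelihood for a signal-plus-background sample can be minimised with SciPy:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy sample: a Gaussian "signal" peak on a flat "background" over [0, 10]
rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(5.0, 0.5, 600), rng.uniform(0.0, 10.0, 400)])

def nll(params):
    # Negative log-likelihood of a signal-fraction + mean + width model
    frac, mean, sigma = params
    pdf = frac * norm.pdf(data, mean, sigma) + (1.0 - frac) / 10.0
    return -np.sum(np.log(pdf))

result = minimize(nll, x0=[0.5, 4.0, 1.0],
                  bounds=[(0.01, 0.99), (0.0, 10.0), (0.05, 5.0)])
print(result.x)  # fitted signal fraction, mean and width

GooFit evaluates likelihoods of this kind on a GPU; the minimisation principle sketched here is the same.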
-
10/10/2016, 11:00
The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the... -
Alexander Bogdanchikov (Budker Institute of Nuclear Physics (RU)), 10/10/2016, 11:00
The SND detector takes data at the e+e- collider VEPP-2000 in Novosibirsk. We present here recent upgrades of the SND DAQ system, mainly aimed at handling the increased event rate after the collider modernization. To maintain acceptable event selection quality, the electronics throughput and computational power must be increased. These goals are achieved with the new fast... -
Steven Goldfarb (University of Melbourne (AU)), 10/10/2016, 11:00
The installation of Virtual Visit services by the LHC collaborations began shortly after the first high energy collisions were provided by the CERN accelerator in 2010. The experiments: ATLAS, CMS, LHCb, and ALICE have all joined in this popular and effective method to bring the excitement of scientific exploration and discovery into classrooms and other public venues around the world. Their...
-
10/10/2016, 11:15
The Cherenkov Telescope Array (CTA) will be the next generation ground-based gamma-ray observatory. It will be made up of approximately 100 telescopes of three different sizes, from 4 to 23 meters in diameter. The previously presented prototype of a high speed data acquisition (DAQ) system for CTA (CHEP 2012) has become concrete within the NectarCAM project, one of the most challenging camera...
-
10/10/2016, 11:15
ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we...
-
Fons Rademakers (CERN), 10/10/2016, 11:15
CERN openlab is a unique public-private partnership between CERN and leading IT companies and research institutes. Having learned a lot from the close collaboration with industry in many different projects we now are using this experience to transfer some of our knowledge to other scientific fields, specifically in the areas of code optimization for the simulations of biological dynamics and...
-
Elvin Alin Sindrilaru (CERN), 10/10/2016, 11:15
CERN has been developing and operating EOS as a disk storage solution successfully for 5 years. The CERN deployment provides 135 PB and stores 1.2 billion replicas distributed over two computer centres. Deployment includes four LHC instances, a shared instance for smaller experiments and since last year an instance for individual user data as well. The user instance represents the backbone of...
-
Andrej Filipcic (Jozef Stefan Institute (SI)), 10/10/2016, 11:15
Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the...
-
10/10/2016, 11:15
The ATLAS workload management system is a pilot system based on a late-binding philosophy that for many years avoided passing fine-grained job requirements to the batch system. In particular, for memory most of the requirements were set to request 4GB vmem as defined in the EGI portal VO card, i.e. 2GB RAM + 2GB swap. However, in the past few years several changes have happened in the... -
10/10/2016, 11:15
PODIO is a C++ library that supports the automatic creation and efficient handling of HEP event data, developed as a new EDM toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Event data models (EDMs) are at the core of every HEP experiment’s software framework, essential for providing a communication channel between different algorithms in the data... -
10/10/2016, 11:30
In 2015, CMS was the first LHC experiment to begin using a multi-threaded framework for doing event processing. This new framework utilizes Intel's Threading Building Blocks (TBB) library to manage concurrency via a task-based processing model. During the 2015 LHC run period, CMS only ran reconstruction jobs using multiple threads because only those jobs were sufficiently thread efficient. Recent work...
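As a hedged, framework-agnostic illustration of the task-based processing model mentioned above (this is not CMSSW or TBB code; the event structure and names are invented), independent events can be handed to a pool of workers as concurrent tasks:

from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    # Stand-in for a per-event reconstruction module
    return {"id": event["id"], "ntracks": len(event["hits"]) // 3}

events = [{"id": i, "hits": list(range(3 * i))} for i in range(1, 9)]

# Each event becomes an independent task executed by the worker pool
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))

print(results)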
-
Andrew Haas (New York University), 10/10/2016, 11:30
Since the launch of HiggsHunters.org in November 2014, citizen science volunteers have classified more than a million points of interest in images from the ATLAS experiment at the LHC. Volunteers have been looking for displaced vertices and unusual features in images recorded during LHC Run-1. We discuss the design of the project, its impact on the public, and the surprising results of how... -
Stefano Bagnasco (I.N.F.N. TORINO), 10/10/2016, 11:30
Obtaining CPU cycles on an HPC cluster is nowadays relatively simple and sometimes even cheap for academic institutions. However, in most of the cases providers of HPC services would not allow changes on the configuration, implementation of special features or a lower-level control on the computing infrastructure and networks, for example for testing new computing patterns or conducting...
-
Imma Riu (IFAE Barcelona (ES)), 10/10/2016, 11:30
The LHC will collide protons in the ATLAS detector with increasing luminosity through 2016, placing stringent operational and physical requirements on the ATLAS trigger system in order to reduce the 40 MHz collision rate to a manageable event storage rate of about 1 kHz, while not rejecting interesting physics events. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger...
-
Paolo Calafiura (Lawrence Berkeley National Lab. (US)), 10/10/2016, 11:30
The instantaneous luminosity of the LHC is expected to increase at the HL-LHC so that the amount of pile-up can reach a level of 200 interactions per bunch crossing, almost a factor of 10 w.r.t. the luminosity reached at the end of Run 1. In addition, the experiments plan a 10-fold increase of the readout rate. This will be a challenge for the ATLAS and CMS experiments, in particular for the...
-
Dave Dykstra (Fermi National Accelerator Lab. (US)), 10/10/2016, 11:30
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site in order to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxy to use for...
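As a hedged sketch of what happens once a web proxy has been discovered (the proxy host below is hypothetical, and CVMFS and Frontier use their own clients; this only illustrates routing HTTP traffic through a site squid):

import urllib.request

# Hypothetical site squid; in production the address is discovered by the job
proxy = "http://squid.example.site:3128"
opener = urllib.request.build_opener(urllib.request.ProxyHandler({"http": proxy}))

with opener.open("http://example.org/") as response:
    print(response.status, len(response.read()))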
-
Andrew Hanushevsky (STANFORD LINEAR ACCELERATOR CENTER), 10/10/2016, 11:30
XRootD is a distributed, scalable system for low-latency file access. It is the primary data access framework for the high-energy physics community. One of the latest developments in the project has been to incorporate metalink and segmented file transfer technologies.
We report on the implementation of the metalink metadata format support within the XRootD client. This includes both the CLI and... -
Mikolaj Krzewicki (Johann-Wolfgang-Goethe Univ. (DE)), 10/10/2016, 11:45
ALICE HLT Run2 performance overview
M.Krzewicki for the ALICE collaboration
The ALICE High Level Trigger (HLT) is an online reconstruction and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it extensively uses modern coprocessor technologies like general purpose graphic processing units (GPGPU) and field programmable gate arrays (FPGA) in the...
-
Fernando Harald Barreiro Megino (University of Texas at Arlington), 10/10/2016, 11:45
The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centers, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased by O(1000) and the difference in functionalities between Tier 1s and Tier 2s has reduced. After years of manual, intermediate... -
Patrick Fuhrmann (DESY), Patrick Fuhrmann (Deutsches Elektronen-Synchrotron (DE)), 10/10/2016, 11:45
Over the past decade, high performance, high capacity Open Source storage systems have been designed and implemented, accommodating the demanding needs of the LHC experiments. However, with the general move away from the concept of local computer centers, supporting their associated communities, towards large infrastructures, providing Cloud-like solutions to a large variety of different...
-
10/10/2016, 11:45
The Future Circular Collider (FCC) software effort is supporting the different experiment design studies for the three future collider options, hadron-hadron, electron-electron or electron-hadron. The software framework used by data processing applications has to be independent of the detector layout and the collider configuration. The project starts from the premise of using existing software...
-
Marco Clemencic (CERN), 10/10/2016, 11:45
The vast majority of high-energy physicists use and produce software every day. Software skills are usually acquired “on the go” and dedicated training courses are rare. The LHCb Starterkit is a new training format for getting LHCb collaborators started in effectively using software to perform their research. The course focuses on teaching basic skills for research computing. Unlike...
-
Dr Bo Jayatilaka (Fermi National Accelerator Lab. (US)), 10/10/2016, 11:45
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed...
-
Karl Harrison (University of Cambridge), 10/10/2016, 11:45
Radiotherapy is planned with the aim of delivering a lethal dose of radiation to a tumour, while keeping doses to nearby healthy organs at an acceptable level. Organ movements and shape changes, over a course of treatment typically lasting four to eight weeks, can result in actual doses being different from planned. The UK-based VoxTox project aims to compute actual doses, at the level of...
-
Johannes Lehrbach (Johann-Wolfgang-Goethe Univ. (DE)), 10/10/2016, 12:00
ALICE HLT Cluster operation during ALICE Run 2
(Johannes Lehrbach) for the ALICE collaboration
ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real-time. The data compression...
-
554. Deep-Learning Analysis Pipelines on Raw HEP Data from the Daya Bay Neutrino Experiment at NERSC, Samuel Kohn (Lawrence Berkeley National Lab. (US)), 10/10/2016, 12:00
The use of up-to-date machine learning methods, including deep neural networks, running directly on raw data has significant potential in High Energy Physics for revealing patterns in detector signals and as a result improving reconstruction and the sensitivity of the final physics analyses. In this work, we describe a machine-learning analysis pipeline developed and operating at the National...
-
Marcus Ebert (University of Edinburgh (GB)), 10/10/2016, 12:00
ZFS is a combination of file system, logical volume manager, and software RAID system developed by Sun Microsystems for the Solaris OS. ZFS simplifies the administration of disk storage and on Solaris it has been well regarded for its high performance, reliability, and stability for many years. It is used successfully for enterprise storage administration around the globe, but so far on such...
-
10/10/2016, 12:00
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from the resources that comprise the Grid computing of most experiments; exploiting them therefore requires a change in strategy for the experiment. The... -
David Rohr (Johann-Wolfgang-Goethe Univ. (DE)), 10/10/2016, 12:00
The ALICE HLT uses a data transport framework based on the publisher-subscriber message principle, which transparently handles the communication between processing components over the network and between processing components on the same node via shared memory with a zero copy approach. We present an analysis of the performance in terms of maximum achievable data rates and event rates as well...
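A hedged Python illustration of the zero-copy shared-memory idea described above (unrelated to the actual ALICE transport framework): the producer fills a named shared block and hands only its name to the consumer, so the payload itself is never copied.

from multiprocessing import Process, shared_memory
import numpy as np

def consumer(name, n):
    shm = shared_memory.SharedMemory(name=name)           # attach by name, no copy
    view = np.ndarray((n,), dtype=np.float32, buffer=shm.buf)
    print("consumer sees mean:", float(view.mean()))
    shm.close()

if __name__ == "__main__":
    n = 1_000_000
    shm = shared_memory.SharedMemory(create=True, size=4 * n)
    buf = np.ndarray((n,), dtype=np.float32, buffer=shm.buf)
    buf[:] = 1.5                                           # "event data" written in place
    p = Process(target=consumer, args=(shm.name, n))       # only the name is sent
    p.start(); p.join()
    shm.close(); shm.unlink()
-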
Lars Holm Nielsen (CERN), 10/10/2016, 12:00
We present the new Invenio 3 digital library framework and demonstrate its application in the field of open research data repositories. We notably look at how the Invenio technology has been applied in two research data services: (1) the CERN Open Data portal that provides access to the approved open datasets and software of the ALICE, ATLAS, CMS and LHCb collaborations; (2) the Zenodo... -
Christopher Jones (Fermi National Accelerator Lab. (US)), 10/10/2016, 12:00
LArSoft is an integrated, experiment-agnostic set of software tools for liquid argon (LAr) neutrino experiments to perform simulation, reconstruction and analysis within the Fermilab art framework. Along with common algorithms, the toolkit provides generic interfaces and extensibility that accommodate the needs of detectors of very different size and configuration. To date, LArSoft has been... -
10/10/2016, 12:15
In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building...
-
Marc Dobson (CERN), 10/10/2016, 12:15
During the past years an increasing number of CMS computing resources are offered as clouds, bringing the flexibility of having virtualised compute resources and centralised management of the Virtual Machines (VMs). CMS has adapted its job submission infrastructure from a traditional Grid site to operation using a cloud service and meanwhile can run all types of offline workflows. The cloud...
-
10/10/2016, 12:15
The ATLAS production system has provided the infrastructure to process tens of thousands of events during LHC Run 1 and the first year of LHC Run 2 using grid, clouds and high performance computing. We address in this contribution all the strategies and improvements added to the production system to optimize its performance to get the maximum efficiency of available resources from...
-
Fernanda Psihas (Indiana University), 10/10/2016, 12:15
The observation of neutrino oscillation provides evidence of physics beyond the standard model, and the precise measurement of those oscillations remains an important goal for the field of particle physics. Using two finely segmented liquid scintillator detectors located 14 mrad off-axis from the NuMI muon-neutrino beam, NOvA is in a prime position to contribute to precision measurements of...
-
Andrey Ustyuzhanin (Yandex School of Data Analysis (RU)), 10/10/2016, 12:15
A framework for performing a simplified particle physics data analysis has been created. The project analyses a pre-selected sample from the full 2011 LHCb data. The analysis aims to measure matter antimatter asymmetries. It broadly follows the steps in a significant LHCb publication where large CP violation effects are observed in charged B meson three-body decays to charged pions and kaons....
-
Gerhard Raven (Nikhef National institute for subatomic physics (NL)), 10/10/2016, 12:15
The LHCb software trigger underwent a paradigm shift before the start of Run-II. From being a system to select events for later offline reconstruction, it can now perform the event analysis in real-time, and subsequently decide which part of the event information is stored for later analysis.
The new strategy is only possible due to a major upgrade during the LHC long shutdown I (2012-2015)....
-
John Freeman (Fermi National Accelerator Lab. (US)), 10/10/2016, 14:00
For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in... -
Tigran Mkrtchyan, 10/10/2016, 14:00
For over a decade, dCache.ORG has provided robust software that is used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments and many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from all-in-one...
-
Daniela Remenska (eScience engineer), 10/10/2016, 14:00
In preparation for the XENON1T Dark Matter data acquisition, we have prototyped and implemented a new computing model. The XENON signal and data processing software is developed fully in Python 3, and makes extensive use of generic scientific data analysis libraries, such as the SciPy stack. A certain tension between modern “Big Data” solutions and existing HEP frameworks is typically... -
Anna Elizabeth Woodard (University of Notre Dame (US)), Matthias Wolf (University of Notre Dame (US)), 10/10/2016, 14:00
We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges.
Categories: Workflows can now be divided into categories...
-
Axel Naumann (CERN), 10/10/2016, 14:00
With ROOT 6 in production in most experiments, ROOT has changed gear during the past year: the development focus on the interpreter has been redirected into other areas.
This presentation will summarize the developments that have happened in all areas of ROOT, for instance concurrency mechanisms, the serialization of C++11 types, new graphics palettes, new "glue" packages for multivariate...
-
Tony Wong (Brookhaven National Laboratory), 10/10/2016, 14:00
Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently re-organized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC)... -
Dr Juan Antonio Lopez Perez (Fermi National Accelerator Lab. (US)), 10/10/2016, 14:00
We present the Web-Based Monitoring project of the CMS experiment at the LHC at CERN. With the growth in size and complexity of High Energy Physics experiments and the accompanying increase in the number of collaborators spread across the globe, the importance of broadly accessible monitoring has grown. The same can be said about the increasing relevance of operation and reporting web tools...
-
Mr Bing Suo (Shandong University), Xiaomei Zhang (Chinese Academy of Sciences (CN)), 10/10/2016, 14:15
In the near future, many new experiments (JUNO, LHAASO, CEPC, etc.) with challenging data volumes are coming into operation or are planned at IHEP, China. The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment to be operational in 2019. The Large High Altitude Air Shower Observatory (LHAASO) is oriented to the study and observation of cosmic rays, which is...
-
Maarten Litmaath (CERN), 10/10/2016, 14:15
The Worldwide LHC Computing Grid (WLCG) infrastructure allows the use of resources from more than 150 sites. Until recently the setup of the resources and the middleware at a site were typically dictated by the partner grid project (EGI, OSG, NorduGrid) to which the site is affiliated. In recent years, however, changes in hardware, software, funding and experiment computing requirements have... -
Eric Vaandering (Fermi National Accelerator Lab. (US)), 10/10/2016, 14:15
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts into all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS did successful...
-
Oliver Keeble (CERN), 10/10/2016, 14:15
Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG.
We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store...
-
Remi Mommsen (Fermi National Accelerator Lab. (US)), 10/10/2016, 14:15
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s to the high-level trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at a rate of around 1 kHz.
The DAQ system has been redesigned during the... -
Philippe Canal (Fermi National Accelerator Lab. (US)), 10/10/2016, 14:15
ROOT is one of the core software tools for physicists. For more than a decade it has had a central position in physicists' analysis code and the experiments' frameworks, thanks in part to its stability and simplicity of use. This allowed software development for analysis and frameworks to use ROOT as a "common language" for HEP, across virtually all experiments.
Software development in... -
10/10/2016, 14:15
The Muon Ionization Cooling Experiment (MICE) is a proof-of-principle experiment designed to demonstrate muon ionization cooling for the first time. MICE is currently on Step IV of its data taking programme, where transverse emittance reduction will be demonstrated. The MICE Analysis User Software (MAUS) is the reconstruction, simulation and analysis framework for the MICE experiment. MAUS is...
-
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energ. Medioambientales y Tecn. - (ES)), 10/10/2016, 14:30
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run-2 events requires parallelization of the code in order to reduce the memory-per-core footprint constraining serial-execution programs, thus optimizing the exploitation of present multi-core...
-
Lynn Wood (Pacific Northwest National Laboratory, USA), 10/10/2016, 14:30
The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer.
The Belle II conditions database was designed with a straightforward goal: make it as easily...
-
Baosong Shan (Beihang University (CN)), 10/10/2016, 14:30
This paper introduces the evolution of the monitoring system of the Alpha Magnetic Spectrometer (AMS) Science Operation Center (SOC) at CERN.
The AMS SOC monitoring system includes several independent tools: Network Monitor to poll the health metrics of AMS local computing farm, Production Monitor to show the production status, Frame Monitor to record the flight data arriving status, and...
-
Pier Paolo Ricci (INFN CNAF), 10/10/2016, 14:30
The INFN CNAF Tier-1 computing center is composed of 2 main rooms containing IT resources and 4 additional locations that host the necessary technology infrastructure providing electrical power and refrigeration to the facility. The power supply and continuity are ensured by a dedicated room with three 15,000 to 400 V transformers in a separate part of the principal building...
-
Philippe Canal (Fermi National Accelerator Lab. (US)), Vasil Georgiev Vasilev (Fermi National Accelerator Lab. (US)), 10/10/2016, 14:30
ROOT version 6 comes with a C++ compliant interpreter, cling. Cling needs to know everything about the code in libraries to be able to interact with them. This translates into increased memory usage with respect to previous versions of ROOT. During the runtime automatic library loading process, ROOT 6 re-parses a set of header files, which describe the library, and enters "recursive"... -
Mikolaj Krzewicki (Johann-Wolfgang-Goethe Univ. (DE)), 10/10/2016, 14:30
Support for Online Calibration in the ALICE HLT Framework
Mikolaj Krzewicki, for the ALICE collaboration
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online compute farm, which reconstructs events measured by the ALICE detector in real-time. The HLT uses a custom online...
-
Alastair Dewhurst (STFC - Rutherford Appleton Lab. (GB)), 10/10/2016, 14:30
Since 2014, the RAL Tier 1 has been working on deploying a Ceph backed object store. The aim is to replace Castor for disk storage. This new service must be scalable to meet the data demands of the LHC to 2020 and beyond. As well as offering access protocols the LHC experiments currently use, it must also provide industry standard access protocols. In order to keep costs down the service...
-
Xavier Espinal Curull (CERN), 10/10/2016, 14:45
Dependability, resilience, adaptability, and efficiency. Growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the detectors need to be quickly available in a highly scalable way for large-scale processing and data distribution while in parallel they are routed to tape for long-term archival. These activities are critical for the...
-
Enric Tejedor Saavedra (CERN), 10/10/2016, 14:45
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments.
The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In...
-
Roland Sipos (Eotvos Lorand University (HU)), 10/10/2016, 14:45
Since 2014 the ATLAS and CMS experiments have shared a common vision for the Condition Database infrastructure required to handle the non-event data for the forthcoming LHC runs. The large commonality in the use cases makes it possible to agree on a common overall design solution meeting the requirements of both experiments. A first prototype implementing these solutions was completed in 2015 and was...
-
Daniela Bauer (Imperial College Sci., Tech. & Med. (GB)), Simon Fayer (Imperial College Sci., Tech. & Med. (GB)), 10/10/2016, 14:45
The GridPP project in the UK has a long-standing policy of supporting non-LHC VOs with 10% of the provided resources. Until recently this had only been taken up by a very limited set of VOs, mainly due to a combination of the (perceived) large overhead of getting started, the limited computing support within non-LHC VOs and the ability to fulfill their computing requirements on local batch...
-
Jakub Moscicki (CERN), 10/10/2016, 14:45
1. Statement
OpenCloudMesh has a very simple goal: to be an open and vendor agnostic standard for private cloud interoperability.
To address the YetAnotherDataSilo problem, a working group under the umbrella of the GÉANT Association has been created with the goal of ensuring neutrality and a clear context for this project. All leading partners of the OpenCloudMesh project - GÉANT,...
-
Maurizio Martinelli (Ecole Polytechnique Federale de Lausanne (CH)), 10/10/2016, 14:45
LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and...
-
Edward Karavakis (CERN), 10/10/2016, 14:45
For over a decade, LHC experiments have been relying on advanced and specialized WLCG dashboards for monitoring, visualizing and reporting the status and progress of the job execution, data management transfers and sites availability across the WLCG distributed grid resources.
In the recent years, in order to cope with the increase of volume and variety of the grid resources, the WLCG...
-
Justin Lewis Salmon (CERN), 10/10/2016, 15:00
The CERN Control and Monitoring Platform (C2MON) is a modular, clusterable framework designed to meet a wide range of monitoring, control, acquisition, scalability and availability requirements. It is based on modern Java technologies and has support for several industry-standard communication protocols. C2MON has been reliably utilised for several years as the basis of multiple monitoring...
-
Xavier Espinal Curull (CERN), 10/10/2016, 15:00
This work will present the status of Ceph-related operations and development within the CERN IT Storage Group: we summarise significant production experience at the petabyte scale as well as strategic developments to integrate with our core storage services. As our primary back-end for OpenStack Cinder and Glance, Ceph has provided reliable storage to thousands of VMs for more than 3 years;...
-
Lorenzo Rinaldi (Universita e INFN, Bologna (IT)), 10/10/2016, 15:00
Conditions data (for example: alignment, calibration, data quality) are used extensively in the processing of real and simulated data in ATLAS. The volume and variety of the conditions data needed by different types of processing are quite diverse, so optimizing its access requires a careful understanding of conditions usage patterns. These patterns can be quantified by mining representative...
-
Piotr Karol Oramus (AGH University of Science and Technology (PL)), 10/10/2016, 15:00
The exploitation of the full physics potential of the LHC experiments requires fast and efficient processing of the largest possible dataset with the most refined understanding of the detector conditions. To face this challenge, the CMS collaboration has set up an infrastructure for the continuous unattended computation of the alignment and calibration constants, allowing for a refined...
-
Luca dell'Agnello (INFN-CNAF), 10/10/2016, 15:00
The Tier-1 at CNAF is the main INFN computing facility offering computing and storage resources to more than 30 different scientific collaborations, including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the following years, mainly driven by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments...
-
Danilo Piparo (CERN), Enric Tejedor Saavedra (CERN), 10/10/2016, 15:00
Notebooks represent an exciting new approach that will considerably facilitate collaborative physics analysis.
They are a modern and widely-adopted tool to express computational narratives comprising, among other elements, rich text, code and data visualisations. Several notebook flavours exist, although one of them has been particularly successful: the Jupyter open source project. In this...
-
Anna Elizabeth Woodard (University of Notre Dame (US)), 10/10/2016, 15:00
CRAB3 is a workload management tool used by more than 500 CMS physicists every month to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). CRAB3 allows users to analyze a large collection of input files (datasets), splitting the input into multiple Grid jobs depending on parameters provided by users.
The process of manually specifying...
-
Elizabeth Gallas (University of Oxford (GB)), 10/10/2016, 15:15
The ATLAS EventIndex System has amassed a set of key quantities for a large number of ATLAS events into a Hadoop based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies and assess which best serve the various use cases as well as...
-
Goncalo Borges (University of Sydney (AU)), 10/10/2016, 15:15
CEPH is a cutting edge, open source, self-healing distributed data storage technology which is exciting both the enterprise and academic worlds. CEPH delivers an object storage layer (RADOS), block storage layer, and file system storage in a single unified system. CEPH object and block storage implementations are widely used in a broad spectrum of enterprise contexts, from dynamic provision of...
-
Andreas Heiss (KIT - Karlsruhe Institute of Technology (DE)), 10/10/2016, 15:15
The WLCG Tier-1 center GridKa is developed and operated by the Steinbuch Centre for Computing (SCC) at the Karlsruhe Institute of Technology (KIT). It was the origin of further Big Data research activities and infrastructures at SCC, e.g. the Large Scale Data Facility (LSDF), providing petabyte scale data storage for various non-HEP research communities. Several ideas and plans... -
10/10/2016, 15:15
The CMS Computing and Offline group has put in a number of enhancements into the main software packages and tools used for centrally managed processing and data transfers in order to cope with the challenges expected during the LHC Run 2. In the presentation we will highlight these improvements that allow CMS to deal with the increased trigger output rate and the increased collision pileup in...
-
Christophe Haen (CERN), 10/10/2016, 15:15
In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time...
-
Dr Sergei Gleyzer (University of Florida (US)), 10/10/2016, 15:15
ROOT provides advanced statistical methods needed by the LHC experiments to analyze their data. These include machine learning tools for classification, regression and clustering. TMVA, a toolkit for multi-variate analysis in ROOT, provides these machine learning methods.
We will present new developments in TMVA, including parallelisation, deep-learning neural networks, new features and...
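As a hedged illustration of the kind of signal/background classification task TMVA addresses (this uses scikit-learn on invented toy data, not TMVA itself, which lives inside ROOT/C++):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy "signal" and "background" samples in two discriminating variables
rng = np.random.default_rng(0)
signal = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(2000, 2))
background = rng.normal(loc=[-1.0, -1.0], scale=1.2, size=(2000, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(2000), np.zeros(2000)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))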
-
10/10/2016, 15:15
The SuperKEKB e+e- collider has now completed its first turns. The planned running luminosity is 40 times higher than its previous record during the KEKB operation. The Belle II detector placed at the interaction point will acquire a data sample 50 times larger than its predecessor. The monetary and time costs associated with storing and processing... -
Eric Vaandering (Fermi National Accelerator Lab. (US)), 10/10/2016, 15:30
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB3) that manages users’ transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient...
-
Stefano Dal Pra (INFN), 10/10/2016, 15:30
On a typical WLCG site providing batch access to computing resources according to a fairshare policy, the idle timelapse after a job ends and before a new one begins on a given slot is negligible if compared to the duration of typical jobs. The overall amount of these intervals over a time window increases with the size of the cluster and the inverse of job duration and can be considered...
-
Zhang Zhe (University of Nebraska-Lincoln), 10/10/2016, 15:30
ROOT provides an extremely flexible format used throughout the HEP community. The number of use cases – from an archival data format to end-stage analysis – has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU...
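The size-versus-speed trade-off of DEFLATE compression levels mentioned above can be shown generically; a hedged Python sketch with zlib on synthetic data (ROOT's actual I/O layer is C++ and is not reproduced here):

import time
import zlib
import numpy as np

# Synthetic payload with limited entropy so that it compresses
payload = np.random.default_rng(1).integers(0, 50, 5_000_000, dtype=np.uint8).tobytes()

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed) / len(payload):.2%} of original, {elapsed:.2f} s")

Higher levels yield smaller output at the cost of more CPU time, which is the tradeoff the abstract discusses for ROOT files.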
-
Tim Martin (University of Warwick (GB)), 10/10/2016, 15:30
The ATLAS High Level Trigger Farm consists of around 30,000 CPU cores which filter events at up to 100 kHz input rate. A costing framework is built into the high level trigger; this enables detailed monitoring of the system and allows for data-driven predictions to be made utilising specialist datasets. This talk will present an overview of how ATLAS collects in-situ monitoring data on both... -
Shawn Mc Kee (University of Michigan (US)), 10/10/2016, 15:30
We will report on the first year of the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) which is targeting the creation of a distributed Ceph storage infrastructure coupled together with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project’s goal is to provide a single scalable, distributed storage...
-
Koichi Murakami, 10/10/2016, 15:30
The KEK central computer system (KEKCC) supports various activities at KEK, such as the Belle / Belle II and J-PARC experiments. The system is now under replacement and will be put into production in September 2016. The computing resources, CPU and storage, in the next system are much enhanced, following the recent increase in computing resource demand. We will have 10,000 CPU cores, 13 PB disk storage,...
-
10/10/2016, 15:45
One of the principal goals of the Dept. of Energy funded SciDAC-Data project is to analyze the more than 410,000 high energy physics “datasets” that have been collected, generated and defined over the past two decades by experiments using the Fermilab storage facilities. These datasets have been used as the input to over 5.6 million recorded analysis projects, for which detailed analytics...
-
Luca Canali (CERN), 10/10/2016, 15:45
This work reports on the activities of integrating Oracle and Hadoop technologies for CERN database services, in particular the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. This is of interest to increase the scalability and reduce cost for some of our largest Oracle databases. These concepts have been applied, among others, to...
-
Hannes Sakulin (CERN), 10/10/2016, 15:45
The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures have been automated. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters....
-
Dr Marek Szuba (KIT - Karlsruhe Institute of Technology (DE)), 10/10/2016, 15:45
We present rootJS, an interface making it possible to seamlessly integrate ROOT 6 into applications written for Node.js, the JavaScript runtime platform increasingly commonly used to create high-performance Web applications. ROOT features can be called both directly from Node.js code and by JIT-compiling C++ macros. All rootJS methods are invoked asynchronously and support callback functions,...
-
Andrew David Lahiff (STFC - Rutherford Appleton Lab. (GB)), 10/10/2016, 15:45
At the RAL Tier-1 we have been deploying production services on both bare metal and a variety of virtualisation platforms for many years. Despite the significant simplification of configuration and deployment of services due to the use of a configuration management system, maintaining services still requires a lot of effort. Also, the current approach of running services on static machines...
-
Paul Messina (ANL), 10/10/2016, 16:45
-
Caroline Simard (Stanford University), 10/10/2016, 17:15
-
10/10/2016, 18:00
-
Inder Monga (ESnet), 11/10/2016, 08:45
-
Shawn Mc Kee (University of Michigan (US)), 11/10/2016, 09:10
-
Per Brashers (Yttibrium LLC), 11/10/2016, 09:35
-
11/10/2016, 10:15
-
Sami Kama (Southern Methodist University (US)), 11/10/2016, 11:00
HEP applications perform an excessive number of allocations/deallocations within short time intervals, which results in memory churn, poor locality and performance degradation. These issues have been known for a decade, but due to the complexity of software frameworks and the large number of allocations (on the order of billions for a single job), up until recently no efficient...
-
Makoto Asai (SLAC National Accelerator Laboratory (US)), 11/10/2016, 11:00
The Geant4 Collaboration released a new generation of the Geant4 simulation toolkit (version 10) in December 2013 and reported its new features at CHEP 2015. Since then, the Collaboration continues to improve its physics and computing performance and usability. This presentation will survey the major improvements made since version 10.0. On the physics side, it includes fully revised multiple...
-
Martin Gasthuber (DESY), 11/10/2016, 11:00
For the upcoming experiments at the European XFEL light source facility, a new online and offline data processing and storage infrastructure is currently being built and verified. Based on the experience of the system being developed for the Petra III light source at DESY, presented at the last CHEP conference, we further develop the system to cope with the much higher volumes and rates...
-
Dr Jean-Roch Vlimant (California Institute of Technology (US)), 11/10/2016, 11:00
We present a system deployed in the summer of 2015 for the automatic assignment of production and reprocessing workflows for simulation and detector data in the frame of the Computing Operation of the CMS experiment at the CERN LHC. Processing requests involves a number of steps in the daily operation, including transferring input datasets where relevant and monitoring them, assigning work to...
-
Emilio Meschi (CERN), 11/10/2016, 11:00
In Long Shutdown 3 the CMS Detector will undergo a major upgrade to prepare for the second phase of the LHC physics program, starting around 2026. The HL-LHC upgrade will bring instantaneous luminosity up to 5x10^34 cm-2 s-1 (levelled), at the price of extreme pileup of 200 interactions per crossing. A new silicon tracker with trigger capabilities and extended coverage, and new high...
-
Simon George (Royal Holloway, University of London), 11/10/2016, 11:15
The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 x 10^34 cm-2 s-1, resulting in much higher pileup and data rates than the current experiment was designed to...
-
Soon Yung Jun (Fermi National Accelerator Lab. (US)), 11/10/2016, 11:15
The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation... -
Michele Selvaggi (Universite Catholique de Louvain (UCL) (BE)), 11/10/2016, 11:15
A status of recent developments of the DELPHES C++ fast detector simulation framework will be given. New detector cards for the LHCb detector and prototypes for future e+ e- (ILC, FCC-ee) and p-p colliders at 100 TeV (FCC-hh) have been designed. The particle-flow algorithm has been optimised for high multiplicity environments such as high luminosity and boosted regimes. In addition, several...
-
Azher Mughal (California Institute of Technology (US)), 11/10/2016, 11:15
The HEP prototypical systems at the Supercomputing conferences each year have served to illustrate the ongoing state of the art developments in high throughput, software-defined networked systems important for future data operations at the LHC and for other data intensive programs. The Supercomputing 2015 SDN demonstration revolved around an OpenFlow ring connecting 7 different booths and the...
-
403. Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits, Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energ. Medioambientales y Tecn. - (ES)), 11/10/2016, 11:15
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. Total resources at Tier-1 and Tier-2 sites pledged to CMS exceed 100,000 CPU cores, and another 50,000-100,000 CPU cores are available opportunistically, pushing the needs of the...
-
Paul Millar, 11/10/2016, 11:15
When preparing the Data Management Plan for larger scientific endeavours, PIs have to balance the most appropriate qualities of storage space along the planned data lifecycle, its price and the available funding. Storage properties can be the media type, implicitly determining access latency and durability of stored data, the number and locality of replicas, as well as...
-
Simone Campana (CERN), 11/10/2016, 11:15
The ATLAS experiment successfully commissioned a software and computing infrastructure to support the physics program during LHC Run 2. The next phases of the accelerator upgrade will present new challenges in the offline area. In particular, at High Luminosity LHC (also known as Run 4) the data taking conditions will be very demanding in terms of computing resources: between 5 and 10 kHz... -
11/10/2016, 11:30
Detector design studies, test beam analyses, or other small particle physics experiments require the simulation of more and more detector geometries and event types, while lacking the resources to build full scale Geant4 applications from scratch. Therefore an easy-to-use yet flexible and powerful simulation program that solves this common problem but can also be adapted to specific... -
Soo Ryu (Argonne National Laboratory (US)), 11/10/2016, 11:30
After the Phase-I upgrade and onward, the Front-End Link eXchange (FELIX) system will be the interface between the data handling system and the detector front-end electronics and trigger electronics at the ATLAS experiment. FELIX will function as a router between custom serial links and a commodity switch network which will use standard technologies (Ethernet or Infiniband) to communicate with...
-
Graeme Stewart (University of Glasgow (GB)), 11/10/2016, 11:30
As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run 3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of... -
Shawn Mc Kee (University of Michigan (US)), 11/10/2016, 11:30
In today's world of distributed scientific collaborations, there are many challenges to providing reliable inter-domain network infrastructure. Network operators use a combination of active monitoring and trouble tickets to detect problems, but these are often ineffective at identifying issues that impact wide-area network users. Additionally, these approaches do not scale to wide area... -
Gabriele Garzoglio, 11/10/2016, 11:30
The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdown, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to...
-
Concezio Bozzi (CERN and INFN Ferrara), 11/10/2016, 11:30
The LHCb detector will be upgraded for the LHC Run 3 and will be readout at 40 MHz, with major implications on the software-only trigger and offline computing. If the current computing model is kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by a...
-
Lukasz Dutka (Cyfronet), 11/10/2016, 11:30
Nowadays users have a variety of options to get access to storage space, including private resources, commercial Cloud storage services as well as storage provided by e-Infrastructures. Unfortunately, all these services provide completely different interfaces for data management (REST, CDMI, command line) and different protocols for data transfer (FTP, GridFTP, HTTP). The goal of the...
-
Dr Kenneth Richard Herner (Fermi National Accelerator Laboratory (US)), 11/10/2016, 11:45
The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE...
-
Maria Grazia Pia (Universita e INFN Genova (IT)), 11/10/2016, 11:45
Some data analysis methods typically used in econometric studies and in ecology have been evaluated and applied in physics software environments. They concern the evolution of observables through objective identification of change points and trends, and measurements of inequality, diversity and evenness across a data set. Within each one of these analysis areas, several statistical tests and...
-
Takanori Hara, 11/10/2016, 11:45
The Belle II is the next-generation flavor factory experiment at the SuperKEKB accelerator in Tsukuba, Japan. The first physics run will take place in 2017, then we plan to increase the luminosity gradually. We will reach the world’s highest luminosity L=8x10^35 cm-2s-1 after roughly five years operation and finally collect ~25 Petabyte of raw data per year. Such a huge amount of data allows...
-
Robert Quick (Indiana University), 11/10/2016, 11:45
The Open Science Grid (OSG) relies upon the network as a critical part of the distributed infrastructures it enables. In 2012 OSG added a new focus area in networking with a goal of becoming the primary source of network information for its members and collaborators. This includes gathering, organizing and providing network metrics to guarantee effective network usage and prompt detection and...
-
Leonidas Aliaga Soplin (College of William and Mary (US)), 11/10/2016, 11:45
The SciDAC-Data project is a DOE funded initiative to analyze and exploit two decades of information and analytics that have been collected, by the Fermilab Data Center, on the organization, movement, and consumption of High Energy Physics data. The project is designed to analyze the analysis patterns and data organization that have been used by the CDF, DØ, NO𝜈A, Minos, Minerva and other...
-
11/10/2016, 11:45
GeantV simulation is a complex system based on the interaction of different modules needed for detector simulation, which include transportation (heuristically managed mechanism of sets of predefined navigators), scheduling policies, physics models (cross-sections and reaction final states) and a geometrical modeler library with geometry algorithms. The GeantV project is recasting the...
Go to contribution page -
Filippo Costa (CERN)11/10/2016, 11:45
ALICE, the general purpose, heavy ion collision detector at the CERN LHC is designed to study the physics of strongly interacting matter using proton-proton, nucleus-nucleus and proton-nucleus collisions at high energies. The ALICE experiment will be upgraded during the Long Shutdown 2 in order to exploit the full scientific potential of the future LHC. The requirements will then be...
Go to contribution page -
11/10/2016, 12:00
Particle physics experiments make heavy use of the Geant4 simulation package to model interactions between subatomic particles and bulk matter. Geant4 itself employs a set of carefully validated physics models that span a wide range of interaction energies. They rely on measured cross-sections and phenomenological models with the physically motivated parameters that are tuned to cover many...
Go to contribution page -
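As a rough illustration of how a measured total cross-section enters such models (standard textbook relations, not taken from this contribution), the mean free path and the interaction probability over a path length x in a material of number density n are:

\lambda = \frac{1}{n\,\sigma}, \qquad
P_{\mathrm{int}}(x) = 1 - e^{-x/\lambda} = 1 - e^{-n\,\sigma x}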
Alastair Dewhurst (STFC - Rutherford Appleton Lab. (GB))11/10/2016, 12:00
The fraction of internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6.
There is a significant overhead when setting up and maintaining dual stack machines, so where possible...
Go to contribution page -
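As a minimal sketch of the dual-stack configuration discussed above (illustrative Python only; the port number is arbitrary), a single IPv6 listening socket can also serve IPv4 clients via mapped addresses when IPV6_V6ONLY is disabled:

import socket

# One AF_INET6 socket serving both protocol families: IPv4 clients arrive
# as ::ffff:a.b.c.d mapped addresses once IPV6_V6ONLY is switched off.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(("::", 8080))        # "::" binds all IPv6 (and mapped IPv4) addresses
s.listen(5)
print("listening dual-stack on port 8080")
s.close()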
Oliver Keeble (CERN)11/10/2016, 12:00
The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long term data (1 month to 1 year) correlating box level metrics, job level metrics from LSF and HTCondor, I/O...
Go to contribution page -
Matthias Richter (University of Oslo (NO))11/10/2016, 12:00
The ALICE Collaboration and the ALICE O$^2$ project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects of the data handling concept are partial reconstruction of raw data organized in so-called time frames, and, based on that information, reduction of the data rate without...
Go to contribution page -
Bo Jayatilaka (Fermi National Accelerator Lab. (US))11/10/2016, 12:00
High Energy Physics experiments have long had to deal with huge amounts of data. Other fields of study are now being faced with comparable volumes of experimental data and have similar requirements to organize access by a distributed community of researchers. Fermilab is partnering with the Simons Foundation Autism Research Initiative (SFARI) to adapt Fermilab’s custom HEP data management...
Go to contribution page -
Misha Borodin (University of Iowa (US))11/10/2016, 12:00
The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager that runs daily hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and...
Go to contribution page -
Volker Friese (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))11/10/2016, 12:00
The Compressed Baryonic Matter experiment (CBM) is a next-generation heavy-ion experiment to be operated at the FAIR facility, currently under construction in Darmstadt, Germany. A key feature of CBM is its very high interaction rates, exceeding those of contemporary nuclear collision experiments by several orders of magnitude. Such interaction rates forbid a conventional, hardware-triggered...
Go to contribution page -
markus diefenthaler (Thomas Jefferson National Laboratory)11/10/2016, 12:15
The Electron-Ion Collider (EIC) is envisioned as the next-generation U.S. facility to study quarks and gluons in strongly interacting matter. Developing the physics program for the EIC, and designing the detectors needed to realize it, requires a plethora of software tools and multifaceted analysis efforts. Many of these tools have yet to be developed or need to...
Go to contribution page -
11/10/2016, 12:15
Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network...
Go to contribution page -
Simon Blyth11/10/2016, 12:15
Opticks is an open source project that integrates the NVIDIA OptiX GPU ray tracing engine with Geant4 toolkit based simulations. Massive parallelism brings drastic performance improvements with optical photon simulation speedup expected to exceed 1000 times Geant4 when using workstation GPUs. Optical photon simulation time becomes effectively zero compared to the rest of the...
Go to contribution page -
Matteo Manzali (Universita di Ferrara & INFN (IT))11/10/2016, 12:15
The LHCb experiment will undergo a major upgrade during the second long shutdown (2018 - 2019). The upgrade will concern both the detector and the Data Acquisition (DAQ) system, to be rebuilt in order to optimally exploit the foreseen higher event rate. The Event Builder (EB) is the key component of the DAQ system which gathers data from the sub-detectors and builds up the whole event. The EB...
Go to contribution page -
Vincent Ducret (CERN)11/10/2016, 12:15
Over the last few years, the number of mobile devices connected to the CERN internal network has increased from a handful in 2006 to more than 10,000 in 2015. Wireless access is no longer a “nice to have” or just for conference and meeting rooms, now support for mobility is expected by most, if not all, of the CERN community. In this context, a full renewal of the CERN Wi-Fi network has been...
Go to contribution page -
Dr Jianlin Zhu (South-central University For Nationalities (CN)), Dr Jin Huang (Wuhan Textile University (CN))11/10/2016, 14:00
The goal of this comparison is to summarize state-of-the-art deep learning techniques as boosted by modern GPUs. Deep learning, which is also known as deep structured learning or hierarchical learning, is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers composed of multiple...
Go to contribution page -
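As a schematic of the "multiple processing layers" mentioned above (a toy numpy forward pass with made-up shapes; real work would use a GPU framework, as the contribution discusses):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))                     # 4 toy events, 16 features

def layer(inp, n_out):
    # One processing layer: random linear map followed by a ReLU nonlinearity.
    w = rng.normal(scale=0.1, size=(inp.shape[1], n_out))
    return np.maximum(inp @ w, 0.0)

h1 = layer(x, 32)                                # first hidden layer
h2 = layer(h1, 8)                                # second hidden layer
out = h2 @ rng.normal(scale=0.1, size=(8, 1))    # final linear read-out
print(out.ravel())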
David Lange (Princeton University (US))11/10/2016, 14:00
We report on the current status of the CMS full simulation. For Run 2, CMS is using Geant4 10.0p02 built in sequential mode; about 8 billion events were produced in 2015. In 2016 any extra production will be done using the same production version. For development, Geant4 10.0p03 with CMS private patches, built in multi-threaded mode, has been established. We plan to use the newest Geant4 10.2 for 2017...
Go to contribution page -
Implementation of the ATLAS trigger within the ATLAS MultiThreaded Software Framework AthenaMT: Benjamin Michael Wynne (University of Edinburgh (GB))11/10/2016, 14:00
We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC...
Go to contribution page -
Antonio Limosani (University of Sydney (AU))11/10/2016, 14:00
The LHC is the world's most powerful particle accelerator, colliding protons at a centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at...
Go to contribution page -
Josh Bendavid (California Institute of Technology (US))11/10/2016, 14:00
With the increased load and pressure on required computing power brought by the higher luminosity in LHC during Run2, there is a need to utilize opportunistic resources not currently dedicated to the Compact Muon Solenoid (CMS) collaboration. Furthermore, these additional resources might be needed on demand. The Caltech group together with the Argonne Leadership Computing Facility (ALCF) are...
Go to contribution page -
Simaolhoda Baymani (CERN)11/10/2016, 14:00
RapidIO (http://rapidio.org/) technology is a packet-switched high-performance fabric, which has been under active development since 1997. Originally meant to be a front side bus, it developed into a system level interconnect which is today used in all 4G/LTE base stations world wide. RapidIO is often used in embedded systems that require high reliability, low latency and scalability in a...
Go to contribution page -
Alvaro Fernandez Casani (Instituto de Fisica Corpuscular (ES))11/10/2016, 14:00
The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing them in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast access. The system design and its optimization is serving event picking from requests of a few events up to scales of...
Go to contribution page -
Jorn Schumacher (University of Paderborn (DE))11/10/2016, 14:15
HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, data acquisition networks are local and include a well specified number of systems. Unfortunately traditional...
Go to contribution page -
Nikita Kazeev (Yandex School of Data Analysis (RU))11/10/2016, 14:15
The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. They are run centrally and check whether an event is useful for a particular physics analysis. The lines are grouped into streams. An event is copied to all the streams its lines belong,...
Go to contribution page -
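A toy sketch of the line/stream routing described above (illustrative Python; the line predicates and stream names are invented, not LHCb code): an event is copied to every stream in which at least one of its lines fires.

from collections import defaultdict

# Hypothetical "lines" (selection predicates) grouped into streams.
streams = {
    "charm":  [lambda ev: ev["n_kaons"] >= 2, lambda ev: ev["pt_max"] > 2.0],
    "dimuon": [lambda ev: ev["n_muons"] >= 2],
}

def route(events):
    out = defaultdict(list)
    for ev in events:
        for stream, lines in streams.items():
            if any(line(ev) for line in lines):   # event duplicated per stream
                out[stream].append(ev)
    return out

events = [{"n_kaons": 2, "pt_max": 3.1, "n_muons": 2},
          {"n_kaons": 0, "pt_max": 0.8, "n_muons": 1}]
print({name: len(evts) for name, evts in route(events).items()})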
Andrea Di Simone (Albert-Ludwigs-Universitaet Freiburg (DE))11/10/2016, 14:15
The ATLAS Simulation infrastructure has been used to produce upwards of 50 billion proton-proton collision events for analyses ranging from detailed Standard Model measurements to searches for exotic new phenomena. In the last several years, the infrastructure has been heavily revised to allow intuitive multithreading and significantly improved maintainability. Such a massive update of a...
Go to contribution page -
Andrea Dotti (SLAC National Accelerator Laboratory (US))11/10/2016, 14:15
In the midst of the multi- and many-core era, the computing models employed by HEP experiments are evolving to embrace the trends of new hardware technologies. As the computing needs of present and future HEP experiments, particularly those at the Large Hadron Collider, grow, adoption of many-core architectures and highly-parallel programming models is essential to prevent degradation...
Go to contribution page -
Benedict Allbrooke (University of Sussex (GB))11/10/2016, 14:15
The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise...
Go to contribution page -
Xanthe Hoad (University of Edinburgh (GB))11/10/2016, 14:15
Changes in the trigger menu, the online algorithmic event selection of the ATLAS experiment at the LHC, made in response to luminosity and detector changes, are followed by adjustments to its monitoring system. This is done to ensure that the collected data are useful and can be properly reconstructed at Tier-0, the first level of the computing grid. During Run 1, ATLAS deployed monitoring updates...
Go to contribution page -
Alexei Klimentov (Brookhaven National Laboratory (US)), Ruslan Mashinistov (National Research Centre Kurchatov Institute (RU))11/10/2016, 14:15
PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects to use PanDA beyond HEP and Grid has drawn attention from other compute intensive...
Go to contribution page -
Dr Sebastien Fabbro (NRC Herzberg)11/10/2016, 14:30
The Canadian Advanced Network For Astronomical Research (CANFAR) is a digital infrastructure that has been operational for the last six years. The platform allows astronomers to store, collaborate, distribute and analyze large astronomical datasets. We have implemented multi-site storage and, in collaboration with an HEP group at the University of Victoria, multi-cloud processing. CANFAR is deeply...
Go to contribution page -
11/10/2016, 14:30
ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The...
Go to contribution page -
Dr Wahid Bhimji (Lawrence Berkeley National Lab. (US))11/10/2016, 14:30
In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy ‘analysis’ workloads for the ATLAS and ALICE collaborations on HPC facilities at NERSC, as well as astronomical image analysis for DESI.
To do...
Go to contribution page -
Anna Zaborowska (Warsaw University of Technology (PL))11/10/2016, 14:30
Software for the next generation of experiments at the Future Circular Collider (FCC) should by design efficiently exploit the available computing resources, and therefore support for parallel execution is a particular requirement. The simulation package of the FCC Common Software Framework (FCCSW) makes use of the Gaudi parallel data processing framework and external packages commonly used in...
Go to contribution page -
11/10/2016, 14:30
Around the year 2000, the convergence on Linux and commodity x86_64 processors provided a homogeneous scientific computing platform which enabled the construction of the Worldwide LHC Computing Grid (WLCG) for LHC data processing. In the last decade the size and density of computing infrastructure has grown significantly. Consequently, power availability and dissipation have become important...
Go to contribution page -
Dorian Kcira (California Institute of Technology (US))11/10/2016, 14:30
MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fourteen years by Caltech and its partners with the support of the CMS software and computing program. The framework is based on Dynamic Distributed Service Architecture and is able to provide complete monitoring, control and global optimization services for complex...
Go to contribution page -
Kristian Hahn (Northwestern University (US)), Marco Trovato (Northwestern University (US))11/10/2016, 14:30
The High Luminosity LHC (HL-LHC) will deliver luminosities of up to 5x10^34 cm^-2 s^-1, with an average of about 140-200 overlapping proton-proton collisions per bunch crossing. These extreme pileup conditions can significantly degrade the ability of trigger systems to cope with the resulting event rates. A key component of the HL-LHC upgrade of the CMS experiment is a Level-1 (L1) track...
Go to contribution page -
Jinghui Zhang (Southeast University (CN))11/10/2016, 14:45
Abstract: Southeast University Science Operation Center (SEUSOC) is one of the computing centers of the Alpha Magnetic Spectrometer (AMS-02) experiment. It provides 2000 CPU cores for AMS scientific computing and a dedicated 1Gbps Long Fat Network (LFN) for AMS data transmission between SEU and CERN. In this paper, the workflows of SEUSOC Monte Carlo (MC) production are discussed in...
Go to contribution page -
Andrew Haas (New York University)11/10/2016, 14:45
For some physics processes studied with the ATLAS detector, a more accurate simulation in some respects can be achieved by including real data into simulated events, with substantial potential improvements in the CPU, disk space, and memory usage of the standard simulation configuration, at the cost of significant database and networking challenges. Real proton-proton background events can be...
Go to contribution page -
11/10/2016, 14:45
ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results.
The difficulties of using such opportunistic resources come from architectural differences such as...
Go to contribution page -
Taylor Childers (Argonne National Laboratory (US))11/10/2016, 14:45
Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was...
Go to contribution page -
Vincent Garonne (University of Oslo (NO))11/10/2016, 14:45
The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 200 petabytes spread over 130 storage sites and can handle file transfer rates of up to 30 Hz. In this talk, we discuss our experience acquired in...
Go to contribution page -
Antanas Norkus (Vilnius University (LT))11/10/2016, 14:45
Physics analysis at the Compact Muon Solenoid (CMS) requires both a vast production of simulated events and an extensive processing of the data collected by the experiment.
Since the end of LHC Run 1 in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns, which emulate different configurations of...
Go to contribution page -
Mrs Lucie Flekova (Technical University of Darmstadt)11/10/2016, 14:45
Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly suitable for precision tracking and triggering in high rate environments. Given their relatively low production costs, MPGDs are an exemplary candidate for the next generation of particle detectors. Having acknowledged these advantages, both the ATLAS and CMS collaborations at the LHC are exploiting...
Go to contribution page -
Sylvain Chapeland (CERN)11/10/2016, 15:00
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shut-down of the LHC, the ALICE detector will be upgraded to cope with an interaction rate of 50 kHz in Pb-Pb collisions, producing in the online computing system (O2) a sustained throughput...
Go to contribution page -
Emanuele Leonardi (INFN Roma)11/10/2016, 15:00
The long standing problem of reconciling the cosmological evidence of the existence of dark matter with the lack of any clear experimental observation of it, has recently revived the idea that the new particles are not directly connected with the Standard Model gauge fields, but only through mediator fields or ''portals'', connecting our world with new ''secluded'' or ''hidden'' sectors. One...
Go to contribution page -
Piero Vicini (Universita e INFN, Roma I (IT))11/10/2016, 15:00
With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended HPC’s reach from its roots in modeling and simulation of complex physical systems to a broad range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near...
Go to contribution page -
11/10/2016, 15:00
The Compressed Baryonic Matter (CBM) experiment is currently under construction at the upcoming FAIR accelerator facility in Darmstadt, Germany. Searching for rare probes, the experiment requires complex online event selection criteria at a high event rate.
To achieve this, all event selection is performed in a large online processing farm of several hundred nodes, the "First-level Event...
Go to contribution page -
Vakho Tsulaia (Lawrence Berkeley National Lab. (US))11/10/2016, 15:00
The ATLAS Event Service (ES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained...
Go to contribution page -
Dzmitry Makatun (Acad. of Sciences of the Czech Rep. (CZ))11/10/2016, 15:00
Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. Having petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address well the problem complexity or are dedicated to one specific aspect of the problem only (CPU, network or storage). As a result, ...
Go to contribution page -
DIEGO MICHELOTTO (INFN - National Institute for Nuclear Physics), Stefano Bovina (INFN - National Institute for Nuclear Physics)11/10/2016, 15:00
Over the past two years, the operations at INFN-CNAF have undergone significant changes. The adoption of configuration management tools such as Puppet, and the constant increase of dynamic and cloud infrastructures, have led us to investigate a new monitoring approach. Our aim is the centralization of the monitoring service at CNAF through a scalable and highly configurable monitoring...
Go to contribution page -
Tomoaki Nakamura (High Energy Accelerator Research Organization (JP))11/10/2016, 15:15
Many experiments in the field of accelerator-based science are actively running at the High Energy Accelerator Research Organization (KEK) in Japan, using the SuperKEKB and J-PARC accelerators. At KEK these days, the computing demand from the various experiments for data processing, analysis and MC simulation is monotonically increasing. It is not only for the case with high-energy...
Go to contribution page -
Alexandr Zaytsev (Brookhaven National Laboratory (US))11/10/2016, 15:15
This contribution gives a report on the remote evaluation of the pre-production Intel Omni-Path (OPA) interconnect hardware and software performed by the RHIC & ATLAS Computing Facility (RACF) at BNL in the Dec 2015 - Feb 2016 time period, using a 32 node “Diamond” cluster with a single Omni-Path Host Fabric Interface (HFI) installed on each and a single 48-port Omni-Path switch with the non-blocking...
Go to contribution page -
David Delventhal (University of Wisconsin-Madison)11/10/2016, 15:15
IceProd is a data processing and management framework developed by the IceCube Neutrino Observatory for processing of Monte Carlo simulations, detector data, and analysis levels. It runs as a separate layer on top of grid and batch systems. This is accomplished by a set of daemons which process job workflow, maintaining configuration and status information on the job before, during, and after...
Go to contribution page -
Dr Tobias Winchen (Vrije Universiteit Brussel)11/10/2016, 15:15
The low flux of the ultra-high energy cosmic rays (UHECR) at the highest energies provides a challenge to answer the long standing question about their origin and nature. Even lower fluxes of neutrinos with energies above 10^22 eV are predicted in certain Grand-Unifying-Theories (GUTs) and e.g. models for super-heavy dark matter (SHDM). The significant increase in detector volume required to...
Go to contribution page -
Jana Schaarschmidt (Weizmann Institute of Science (IL))11/10/2016, 15:15
Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders...
Go to contribution page -
11/10/2016, 15:30
The software suite required to support a modern high energy physics experiment is typically made up of many experiment-specific packages in addition to a large set of external packages. The developer-level build system has to deal with external package discovery, versioning, build variants, user environments, etc. We find that various systems for handling these requirements divide the problem...
Go to contribution page -
Gioacchino Vino (Universita e INFN, Bari (IT))11/10/2016, 15:30
The ALICE experiment at CERN was designed to study the properties of the strongly-interacting hot and dense matter created in heavy-ion collisions at the LHC energies. The computing model of the experiment currently relies on the hierarchical Tier-based structure, with a top-level Grid site at CERN (Tier-0, also extended to Wigner) and several globally distributed datacenters at national and...
Go to contribution page -
Dr Sara Vallero (INFN Torino)11/10/2016, 15:30
In the ideal limit of infinite resources, multi-tenant applications are able to scale in/out on a Cloud driven only by their functional requirements. A large Public Cloud may be a reasonable approximation of this condition, where tenants are normally charged a posteriori for their resource consumption. On the other hand, small scientific computing centres usually work in a saturated regime...
Go to contribution page -
Omar Awile (CERN)11/10/2016, 15:30
Application performance is often assessed using the Performance Monitoring Unit (PMU) capabilities present in modern processors. One popular tool that can read the PMU's performance counters is the Linux-perf tool. pmu-tools is a toolkit built around Linux-perf that provides a more powerful interface to the different PMU events and gives a more abstracted view of the events. Unfortunately...
Go to contribution page -
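For orientation, reading PMU counters with plain Linux-perf (which pmu-tools wraps) can look like the following sketch; it assumes a Linux host with the perf tool installed and sufficient perf_event permissions, and the profiled command is just a placeholder.

import subprocess

# Count cycles and instructions for a child command; "-x ," requests
# machine-readable CSV output.
cmd = ["perf", "stat", "-x", ",", "-e", "cycles,instructions", "--", "sleep", "1"]
result = subprocess.run(cmd, capture_output=True, text=True)

# perf stat writes its counter report to stderr; each CSV line starts with
# the counter value followed by the event name.
for line in result.stderr.strip().splitlines():
    print(line)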
庆宝 胡 (IHEP)11/10/2016, 15:30
OpenStack is an open source cloud computing project that is enjoying wide popularity. More and more organizations and enterprises deploy it to provide their private cloud services. However, most organizations and enterprises cannot achieve unified user management and access control for the cloud service, since the authentication and authorization systems of Cloud providers are generic and they...
Go to contribution page -
11/10/2016, 15:30
The Belle II experiment can take advantage of data federation technologies to simplify access to distributed datasets and file replicas. The increasing adoption of the HTTP and WebDAV protocols by sites makes it possible to create lightweight solutions that give an aggregate view of the distributed storage. In this work, we make a study on the possible usage of the software Dynafed developed by CERN for the...
Go to contribution page -
11/10/2016, 15:30
Virtual machines have many attractive features: flexibility, easy control and customized system environments. More and more organizations and enterprises begin to deploy virtualization technology and cloud computing to construct their distributed systems. Cloud computing is widely used in the high energy physics field. In this presentation, we introduce an integration of virtual machines with HTCondor,...
Go to contribution page -
Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))11/10/2016, 15:30
The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs WLCG Tier-2 center for the ALICE and the ATLAS experiments; the same group of services is used by astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). OSG stack is installed for...
Go to contribution page -
11/10/2016, 15:30
With the era of big data emerging, Hadoop has become the de facto standard of big data processing. However, it is still difficult to get High Energy Physics (HEP) applications to run efficiently on the HDFS platform. There are two reasons for this. Firstly, random access to event data is not supported by the HDFS platform. Secondly, it is difficult to adapt HEP applications to the Hadoop data...
Go to contribution page -
Joseph Boudreau (University of Pittsburgh)11/10/2016, 15:30
The complex geometry of the whole detector of the ATLAS experiment at LHC is currently stored only in custom online databases, from which it is built on-the-fly on request. Accessing the online geometry guarantees accessing the latest version of the detector description, but requires the setup of the full ATLAS software framework "Athena", which provides the online services and the tools to...
Go to contribution page -
Prof. Marco Maggiora (University of Turin and INFN Turin)11/10/2016, 15:30
The INFN Section of Turin hosts a middle-size multi-tenant cloud infrastructure optimized for scientific computing.
A new approach exploiting the features of VMDIRAC and aiming to allow for dynamic automatic instantiation and destruction of Virtual Machines from different tenants, in order to maximize the global computing efficiency of the infrastructure, has been designed, implemented and...
Go to contribution page -
11/10/2016, 15:30
The use of the WebDAV protocol to access large storage areas is becoming popular in the High Energy Physics community. All the main Grid and Cloud storage solutions provide such an interface; in this scenario, tuning the storage systems and evaluating their performance become crucial aspects for promoting the adoption of these protocols within the Belle II community. In this work, we present the...
Go to contribution page -
Attila Krasznahorkay (CERN)11/10/2016, 15:30
The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million C++ and 1.4 million Python lines. The ATLAS offline code management system is the powerful, flexible framework for processing requests for new package versions, probing code changes in the Nightly Build System, migration to new platforms and compilers,...
Go to contribution page -
Jorn Schumacher (University of Paderborn (DE))11/10/2016, 15:30
ATLAS is a high energy physics experiment at the Large Hadron Collider located at CERN. During the so-called Long Shutdown 2 period scheduled for late 2018, ATLAS will undergo several modifications and upgrades to its data acquisition system in order to cope with the higher luminosity requirements. As part of these activities, a new read-out chain will be built for the New Small Wheel muon...
Go to contribution page -
Andres Gomez Ramirez (Johann-Wolfgang-Goethe Univ. (DE))11/10/2016, 15:30
Distributed computing infrastructures require automatic tools to strengthen, monitor and analyze the security behavior of computing devices. These tools should inspect monitoring data such as resource usage, log entries, traces and even processes' system calls. They also should detect anomalies that could indicate the presence of a cyber-attack. Besides, they should react to attacks without...
Go to contribution page -
Luca Canali (CERN)11/10/2016, 15:30
This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each consisting of ~100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of...
Go to contribution page -
Carl Vuosalo (University of Wisconsin-Madison (US))11/10/2016, 15:30
The engineering design of a particle detector is usually performed in a Computer Aided Design (CAD) program, and simulation of the detector's performance can be done with a Geant4-based program. However, transferring the detector design from the CAD program to Geant4 can be laborious and error-prone. SW2GDML is a tool that reads a design in the popular SolidWorks CAD program and...
Go to contribution page -
Audrius Mecionis (Vilnius University (LT))11/10/2016, 15:30
The Compact Muon Solenoid (CMS) experiment makes a vast use of alignment and calibration measurements in several data processing workflows: in the High Level Trigger, in the processing of the recorded collisions and in the production of simulated events for data analysis and studies of detector upgrades. A complete alignment and calibration scenario is factored in approximately three-hundred...
Go to contribution page -
Jorn Schumacher (University of Paderborn (DE))11/10/2016, 15:30
The Trigger and Data Acquisition system of the ATLAS detector at the Large Hadron Collider at CERN is composed of a large number of distributed hardware and software components (about 3000 machines and more than 25000 applications) which, in a coordinated manner, provide the data-taking functionality of the overall system. During data taking runs, a huge flow of operational data is produced...
Go to contribution page -
Fabrizio Furano (CERN), Laurence Field (CERN)11/10/2016, 15:30
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, HEP applications require more data input and output than the CPU intensive applications that are typically used by other...
Go to contribution page -
11/10/2016, 15:30
Deploying a complex application on a Cloud-based infrastructure can be a challenging task. Among other things, the complexity can derive from software components the application relies on, from requirements coming from the use cases (i.e. high availability of the components, autoscaling, disaster recovery), from the skills of the users that have to run the application. Using an orchestration...
Go to contribution page -
Michael Poat (Brookhaven National Laboratory)11/10/2016, 15:30
As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives as the cost per GB is quite low. On the other hand, HDD based solutions for storage are not known to scale well with process concurrency and, soon enough, a high rate of IOPS creates a...
Go to contribution page -
11/10/2016, 15:30
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status...
Go to contribution page -
Michael David Sokoloff (University of Cincinnati (US))11/10/2016, 15:30
GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended in functionality to do a full amplitude analysis of scalar mesons decaying into four final states via various combinations of intermediate resonances. Recurring resonances in different amplitudes are recognized and only calculated once, to save memory and execution time. As an example, this tool can be used...
Go to contribution page -
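As a framework-agnostic illustration of the maximum-likelihood fitting that GooFit accelerates on GPUs (a toy Gaussian fit in Python; the sample and starting values are invented, and this does not use the GooFit API):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=1.86, scale=0.01, size=10_000)   # toy "mass" sample

def nll(params):
    # Negative log-likelihood of the Gaussian model given the data.
    mu, sigma = params
    if sigma <= 0:            # keep the width physical
        return np.inf
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

fit = minimize(nll, x0=[1.8, 0.02], method="Nelder-Mead")
print(fit.x)                  # fitted (mu, sigma)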
Alexander Egorov (Massachusetts Inst. of Technology (US))11/10/2016, 15:30
The AMS data production uses different programming modules for job submission, execution and management, as well as for validation of produced data. The modules communicate with each other using a CORBA interface. The main module is the AMS production server, a scalable distributed service which links all modules together starting from job submission request and ending with writing data to disk...
Go to contribution page -
Gerhard Ferdinand Rzehorz (Georg-August-Universitaet Goettingen (DE))11/10/2016, 15:30
Efficient administration of computing centres requires advanced tools for the monitoring and front-end interface of their infrastructure. The large-scale distributed grid systems, like the Worldwide LHC Computing Grid (WLCG) and ATLAS computing, offer many existing web pages and information sources indicating the status of the services, systems, requests and user jobs at grid sites. These...
Go to contribution page -
Dr Steven Murray (CERN)11/10/2016, 15:30
The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. Firstly the software will need to make more efficient use of tape drives in order to sustain the predicted data...
Go to contribution page -
Jingyan Shi (IHEP)11/10/2016, 15:30
A Job Accounting Tool for IHEP Computing
The computing services running at the computing center of IHEP support several HEP experiments and biomedical studies. The center provides 120,000 CPU cores, including 3 local clusters and a Tier-2 grid site. A private cloud with 1000 CPU cores has been established to meet peak experiment requirements. Besides, the computing center has several remote clusters as...
Go to contribution page -
Alessandra Forti (University of Manchester (GB))11/10/2016, 15:30
The pilot model employed by the ATLAS production system has been in use for many years. The model has proven to be a success, with many advantages over push models. However one of the negative side-effects of using a pilot model is the presence of 'empty pilots' running on sites, consuming a small amount of walltime and not running a useful payload job. The impact on a site can be significant,...
Go to contribution page -
Ivana Hrivnacova (Institut de Physique Nucléaire (IPNO), Université Paris-Sud, CNRS-IN2P3, France)11/10/2016, 15:30
A new analysis category based on g4tools was added in Geant4 release 9.5 with the aim of providing users with a lightweight analysis tool available as part of the Geant4 installation without the need to link to an external analysis package. It has progressively replaced the usage of external tools based on AIDA (Abstract Interfaces for Data Analysis) in all Geant4 examples. Frequent questions...
Go to contribution page -
Lucio Santi (Universidad de Buenos Aires), Soon Yung Jun (Fermi National Accelerator Laboratory (US))11/10/2016, 15:30
Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. Geant4 is the most commonly used tool to accomplish it. An essential aspect of the task is an accurate and efficient handling of particle transport and crossing volume boundaries within a predefined (3D) geometry. At the core of the Geant4 simulation toolkit,...
Go to contribution page -
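A toy sketch of the boundary-limited stepping at the heart of such transport (a 1D "geometry" of slabs with invented numbers, not Geant4 code): the step proposed by the physics processes is truncated at the next volume boundary.

def distance_to_boundary(x, direction, boundaries):
    # Distance to the closest boundary lying ahead of the particle.
    ahead = [b for b in boundaries if (b - x) * direction > 0]
    return min(abs(b - x) for b in ahead) if ahead else float("inf")

boundaries = [0.0, 1.0, 2.5, 4.0]        # slab edges in cm
x, direction = 0.3, +1                   # start inside the first slab
proposed_step = 0.9                      # step suggested by physics processes

while x < boundaries[-1]:
    # Limit the physics step to the geometric distance to the next boundary.
    step = min(proposed_step, distance_to_boundary(x, direction, boundaries))
    x += step * direction
    print(f"moved to x = {x:.2f} cm")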
Dr Tian Yan (Institution of High Energy Physics, Chinese Academy of Science)11/10/2016, 15:30
The distributed computing system at the Institute of High Energy Physics (IHEP), China, is based on DIRAC middleware. It integrates about 2000 CPU cores and 500 TB of storage contributed by 16 distributed sites. These sites are of various types, such as cluster, grid, cloud and volunteer computing. This system went into production status in 2012. Now it supports multiple VOs and serves three HEP...
Go to contribution page -
Marcus Ebert (University of Edinburgh (GB))11/10/2016, 15:30
Previous research has shown that it is relatively easy to apply a simple shim to conventional WLCG storage interfaces, in order to add Erasure coded distributed resilience to data. One issue with simple EC models is that, while they can recover from losses without needing additional full copies of data, recovery often involves reading all of the distributed chunks of the file (and their...
Go to contribution page -
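A toy illustration of the recovery cost mentioned above (a single XOR parity chunk in Python, far simpler than a real erasure code): rebuilding one lost chunk requires reading all surviving chunks.

def xor_combine(chunks):
    # XOR all chunks together byte by byte (chunks assumed equal length).
    acc = bytes(len(chunks[0]))
    for c in chunks:
        acc = bytes(a ^ b for a, b in zip(acc, c))
    return acc

chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_combine(chunks)              # redundancy chunk

lost = chunks[1]                          # pretend chunk 1 is lost
survivors = [chunks[0], chunks[2], parity]
recovered = xor_combine(survivors)        # XOR of all survivors restores it
assert recovered == lost
print(recovered)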
11/10/2016, 15:30
Likelihood ratio tests are a well established technique for statistical inference in HEP. Because of the complicated detector response, we usually cannot evaluate the likelihood function directly. Instead, we usually build templates based on (Monte Carlo) samples from a simulator (or generative model). However, this approach doesn't scale well to high dimensional observations.
We describe...
Go to contribution page -
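For context, the likelihood-ratio statistic referred to here has the standard form (generic notation, not specific to this contribution):

\lambda(\mu) = \frac{L(\mu,\hat{\hat{\theta}})}{L(\hat{\mu},\hat{\theta})},
\qquad q_\mu = -2\ln\lambda(\mu)

where \mu is the parameter of interest and \theta denotes the nuisance parameters, profiled in the numerator and fitted freely in the denominator.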
11/10/2016, 15:30
Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with...
Go to contribution page -
Dr Elisabetta Ronchieri (INFN), Maria Grazia Pia (Universita e INFN Genova (IT))11/10/2016, 15:30
Maintainability is a critical issue for large scale, widely used software systems, characterized by a long life cycle. It is of paramount importance for a software toolkit, such as Geant4, which is a key instrument for research and industrial applications in many fields, not limited to high energy physics.
Maintainability is related to a number of objective metrics associated with...
Go to contribution page -
Francesco Giovanni Sciacca (Universitaet Bern (CH))11/10/2016, 15:30
Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running...
Go to contribution page -
Gabriele Sabato (Nikhef National institute for subatomic physics (NL))11/10/2016, 15:30
The ATLAS Experiment at the LHC has been recording data from proton-proton collisions with 13 TeV center-of-mass energy since spring 2015. The ATLAS collaboration has set up, updated and optimized a fast physics monitoring framework (TADA) to automatically perform a broad range of validation and to scan for signatures of new physics in the rapidly growing data. TADA is designed to provide fast...
Go to contribution page -
Jerome Henri Fulachier (Centre National de la Recherche Scientifique (FR))11/10/2016, 15:30
The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the...
Go to contribution page -
Graeme Stewart (University of Glasgow (GB))11/10/2016, 15:30
The ATLAS experiment explores new hardware and software platforms that, in the future, may be more suited to its data intensive workloads. One such alternative hardware platform is the ARM architecture, which is designed to be extremely power efficient and is found in most smartphones and tablets. CERN openlab recently installed a small cluster of ARM 64-bit evaluation prototype servers...
Go to contribution page -
11/10/2016, 15:30
The ATLAS Distributed Data Management system stores more than 180PB of physics data across more than 130 sites globally. Rucio, the new data management system of the ATLAS collaboration, has now been successfully operated for over a year. However, with the forthcoming resumption of data taking for Run 2 and its expected workload and utilization, more automated and advanced methods...
Go to contribution page -
Tomasz Szumlak (AGH University of Science and Technology (PL))11/10/2016, 15:30
The LHCb Vertex Locator (VELO) is a silicon strip semiconductor detector operating at just 8mm distance to the LHC beams. Its 172,000 strips are read at a frequency of 1 MHz and processed by off-detector FPGAs followed by a PC cluster that reduces the event rate to about 10 kHz. During the second run of the LHC, which lasts from 2015 until 2018, the detector performance will undergo continued...
Go to contribution page -
Prof. Wenjing Wu (Computer Center, IHEP, CAS)11/10/2016, 15:30
The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, the grid middleware has been used to organize the services and the resources; however it relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer...
Go to contribution page -
Domenico Giordano (CERN)11/10/2016, 15:30
Performance measurements and monitoring are essential for the efficient use of computing resources. In a commercial cloud environment an exhaustive resource profiling has additional benefits due to the intrinsic variability of the virtualised environment. In this context resource profiling via synthetic benchmarking quickly allows to identify issues and mitigate them. Ultimately it provides...
Go to contribution page -
Graeme Stewart (University of Glasgow (GB))11/10/2016, 15:30
In this paper we explain how the C++ code quality is managed in ATLAS using a range of tools from compile-time through to run time testing and reflect on the substantial progress made in the last two years largely through the use of static analysis tools such as Coverity®, an industry-standard tool which enables quality comparison with general open source C++ code. Other available code...
Go to contribution page -
Thomas Beermann (CERN)11/10/2016, 15:30
This contribution introduces a new dynamic data placement agent for the ATLAS distributed data management system. This agent is designed to pre-place potentially popular data to make it more widely available. It uses data from a variety of sources. Those include input datasets and sites workload information from the ATLAS workload management system, network metrics from different sources like...
Go to contribution page -
Dirk Hutter (Johann-Wolfgang-Goethe Univ. (DE))11/10/2016, 15:30
CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. Featuring self-triggered front-end electronics and free-streaming read-out, event selection will exclusively be done by the First Level Event Selector (FLES). Designed as an HPC cluster, its task is an online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow...
Go to contribution page -
Ludmila Marian (CERN)11/10/2016, 15:30
CERN Document Server (CDS) is the CERN Institutional Repository, playing a key role in the storage, dissemination and archival for all research material published at CERN, as well as multimedia and some administrative documents. As the CERN’s document hub, it joins together submission and publication workflows dedicated to the CERN experiments, but also to the video and photo teams, to the...
Go to contribution page -
11/10/2016, 15:30
OpenAFS is the legacy solution for a variety of use cases at CERN, most notably home-directory services. OpenAFS has been used as the primary shared file-system for Linux (and other) clients for more than 20 years, but despite an excellent track record the project's age and architectural limitations are becoming more evident. We are now working to offer an alternative solution based on...
Go to contribution page -
Jakub Moscicki (CERN)11/10/2016, 15:30
A new approach to providing scientific computing services is currently investigated at CERN. It combines solid existing components and services (EOS Storage, CERNBox Cloud Sync&Share layer, ROOT Analysis Framework) with rising new technologies (Jupyter Notebooks) to create a unique environment for Interactive Data Science, Scientific Computing and Education Applications. EOS is the main disk...
Go to contribution page -
Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))11/10/2016, 15:30
The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from...
Go to contribution page -
Frank Wuerthwein (Univ. of California San Diego (US))11/10/2016, 15:30
CMS deployed a prototype infrastructure based on Elastic Search that stores all classAds from the global pool. This includes detailed information on IO, CPU, datasets, etc. for all analysis as well as production jobs. We will present initial results from analyzing this wealth of data, describe lessons learned, and plans for the future to derive operational benefits from analyzing this...
Go to contribution page -
Othmane Bouhali (Texas A & M University (US))11/10/2016, 15:30
One of the primary objectives of the research on GEMs at CERN is the testing and simulation of prototypes, manufacturing of large-scale GEM detectors and installation into CMS detector sections at the outer layer, where only highly energetic muon particles are detected. When a muon particle traverses a GEM detector, it ionizes the gas molecules generating a freely moving electron that starts...
Go to contribution page -
Kyle Knoepfel (Fermi National Accelerator Laboratory)11/10/2016, 15:30
One of the difficulties experimenters encounter when using a modular event-processing framework is determining the appropriate configuration for the workflow they intend to execute. A typical solution is to provide documentation external to the C++ code source that explains how a given component of the workflow is to be configured. This solution is fragile, because the documentation and the...
Go to contribution page -
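One possible way to keep parameter documentation next to the parameter definitions themselves, so that the two cannot drift apart, is sketched below (illustrative Python; the class and parameter names are hypothetical and not taken from the framework discussed):

from dataclasses import dataclass, field, fields

@dataclass
class TrackFitConfig:
    # Each parameter carries its own documentation in the metadata.
    max_chi2: float = field(default=30.0, metadata={"doc": "Reject fits above this chi2"})
    n_iterations: int = field(default=3, metadata={"doc": "Number of Kalman filter passes"})

def describe(cfg_cls):
    # Generate the user-facing documentation directly from the definitions.
    for f in fields(cfg_cls):
        print(f"{f.name} (default {f.default}): {f.metadata['doc']}")

describe(TrackFitConfig)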
Ryan Taylor (University of Victoria (CA))11/10/2016, 15:30
Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization...
Go to contribution page -
Anze Zupanc (Jozef Stefan Institute)11/10/2016, 15:30
The Belle II experiment is the upgrade of the highly successful Belle experiment located at the KEKB asymmetric-energy e+e- collider at KEK in Tsukuba, Japan. The Belle experiment collected e+e- collision data at or near the centre-of-mass energies corresponding to $\Upsilon(nS)$ ($n\leq 5$) resonances between 1999 and 2010 with the total integrated luminosity of 1 ab$^{-1}$. The data...
Go to contribution page -
Tobias Stockmanns11/10/2016, 15:30
PANDA is a planned experiment at FAIR (Darmstadt, Germany) with a cooled antiproton beam in a range [1.5; 15] GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide, which combines a solenoid field (B=2T) and a dipole field (B=2Tm) in an experiment with a fixed target topology, in that energy regime. The tracking system of PANDA involves the...
Go to contribution page -
Kihyeon Cho11/10/2016, 15:30
We introduce the convergence research cluster for dark matter, which is supported by the National Research Council of Science and Technology in Korea. The goal is to build a research cluster of nationwide institutes, from accelerator-based physics to astrophysics based on computational science, using infrastructures at KISTI (Korea Institute of Science Technology Information) and KASI (Korea...
Go to contribution page -
Alberto Valero Biot (Instituto de Fisica Corpuscular (ES))11/10/2016, 15:30
The LHC has planned a series of upgrades culminating in the High Luminosity LHC (HL-LHC), which will have an average luminosity 5-7 times larger than the nominal Run-2 value. The ATLAS Tile Calorimeter (TileCal) will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal read-out electronics will be redesigned, introducing a new read-out strategy. The photomultiplier signals...
Go to contribution page -
German Cancio Melia (CERN)11/10/2016, 15:30
CERN has been archiving data on tapes in its Computer Center for decades and its archive system is now holding more than 135 PB of HEP data in its premises on high density tapes.
For the last 20 years, tape areal bit density has been doubling every 30 months, closely following HEP data growth trends. During this period, bits on the tape magnetic substrate have been shrinking exponentially;...
Go to contribution page -
Rifki Sadikin (Indonesian Institute of Sciences (ID))11/10/2016, 15:30
Data Flow Simulation of the ALICE Computing System with OMNET++
Rifki Sadikin, Furqon Hensan Muttaqien, Iosif Legrand, Pierre Vande Vyvre for the ALICE Collaboration
The ALICE computing system will be entirely upgraded for Run 3 to address the major challenge of sampling the full 50 kHz Pb-Pb interaction rate, a factor of 100 above the present limit. We present, in this...
Go to contribution page -
Gerhard Ferdinand Rzehorz (Georg-August-Universitaet Goettingen (DE))11/10/2016, 15:30
This contribution reports on the feasibility of executing data intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests ATLAS reconstruction jobs are run, examining the effects of overcommitting (more parallel...
Go to contribution page -
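As a worked example of the metric (purely illustrative numbers, not results from the contribution):

\mathrm{ETC} = \frac{N_{\mathrm{events}}}{T \times C},
\qquad \text{e.g.}\ \frac{10^{6}\ \mathrm{events}}{10\ \mathrm{h}\times 50\ \mathrm{USD}}
= 2\times10^{3}\ \mathrm{events\ h^{-1}\ USD^{-1}}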
11/10/2016, 15:30
dCache is a distributed multi-tiered data storage system widely used by High Energy Physics and other scientific communities. It natively supports a variety of storage media including spinning disk, SSD and tape devices. Data migration between different media tiers is handled manually or automatically based on policies. In order to provide different levels of quality of...
Go to contribution page -
Azher Mughal (California Institute of Technology (US))11/10/2016, 15:30
We review and demonstrate the design of efficient data transfer nodes (DTNs), from the perspectives of the highest throughput over both local and wide area networks, as well as the highest performance per unit cost. A careful system-level design is required for the hardware, firmware, OS and software components. Furthermore, additional tuning of these components, and the identification and...
Go to contribution page -
11/10/2016, 15:30
The deployment of OpenStack Magnum at CERN has given the possibility to manage container orchestration engines such as Docker and Kubernetes as first class resources in OpenStack. In this poster we will show the work done to exploit a Docker Swarm cluster deployed via Magnum to set up a Docker infrastructure running FTS (the WLCG file transfer service). FTS has been chosen as one of the...
Go to contribution page -
Fabian Lambert (Centre National de la Recherche Scientifique (FR))11/10/2016, 15:30
The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring system and the...
Go to contribution page -
Mr Terry Froy (Queen Mary, University of London)11/10/2016, 15:30
With many parts of the world having run out of IPv4 address space and the Internet Engineering Task Force (IETF) deprecating IPv4, the use of and migration to IPv6 is becoming a pressing issue. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group (http://hepix-ipv6.web.cern.ch/) on testing dual-stacked hosts and IPv6-only CPU resources. The Queen Mary grid...
Go to contribution page -
Mr Qi Xu (Institute of High Energy Physics,Chinese Academy of Sciences)11/10/2016, 15:30
Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in massive storage systems, which need to balance cost, performance and manageability. HEP is a typical data-intensive application, processing a lot of data to achieve scientific discoveries. A hybrid storage system including SSD (Solid-state Drive) and HDD (Hard Disk Drive) layers...
Go to contribution page -
Runqun Xiong (Southeast University (CN))11/10/2016, 15:30
Monte Carlo (MC) simulation production plays an important part in the physics analysis of the Alpha Magnetic Spectrometer (AMS-02) experiment. To facilitate the metadata retrieval needed for data analysis among the millions of database records, we developed a monitoring tool to analyze and visualize the production status and progress. In this paper, we discuss the workflow of the...
Go to contribution page -
Mr Barthelemy Von Haller (CERN)11/10/2016, 15:30
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A major upgrade of the experiment is planned for 2020. In order to cope with a data rate 100 times higher and with the continuous readout of the Time Projection Chamber (TPC), it is necessary to...
Go to contribution page -
Ian Collier (STFC - Rutherford Appleton Lab. (GB))11/10/2016, 15:30
The growing use of private and public clouds, and volunteer computing are driving significant changes in the way large parts of the distributed computing for our communities are carried out. Traditionally HEP workloads within WLCG were almost exclusively run via grid computing at sites where site administrators are responsible for and have full sight of the execution environment. The...
Go to contribution page -
Emanuele Leonardi (INFN Roma)11/10/2016, 15:30
The long standing problem of reconciling the cosmological evidence of the existence of dark matter with the lack of any clear experimental observation of it, has recently revived the idea that the new particles are not directly connected with the Standard Model gauge fields, but only through mediator fields or ''portals'', connecting our world with new ''secluded'' or ''hidden'' sectors. One...
Go to contribution page -
Simon George (Royal Holloway, University of London)11/10/2016, 15:30
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware and software, associated to various sub-detectors that must seamlessly cooperate in order to select 1 collision of interest out of every 40,000 delivered by the LHC every millisecond. This talk will discuss the challenges, workflow and organization of the ongoing trigger software development, validation...
Go to contribution page -
Qiulan Huang (Chinese Academy of Sciences (CN))11/10/2016, 15:30
The new generation of high energy physics (HEP) experiments has been producing gigantic amounts of data. How to store and access those data with high performance has been challenging the availability, scalability, and I/O performance of the underlying massive storage system. At the same time, research focusing on big data has become more and more active, and the research about metadata...
Go to contribution page -
11/10/2016, 15:30
Binary decision trees are a widely used tool for supervised classification of high-dimensional data, for example among particle physicists. We present our proposal of the supervised binary divergence decision tree with nested separation method based on kernel density estimation. A key insight we provide is the clustering driven only by a few selected physical variables. The proper selection...
Go to contribution page -
11/10/2016, 15:30
Load Balancing is one of the technologies enabling deployment of large scale applications on cloud resources. At CERN we have developed a DNS Load Balancer as a cost-effective way to do it for applications accepting DNS timing dynamics and not requiring memory. We serve 378 load balanced aliases with two small VMs acting as master and slave. These aliases are based on 'delegated' DNS zones the...
Go to contribution page
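The abstract above only sketches the mechanism; as a generic illustration of the idea of publishing the "best" hosts behind a delegated DNS alias (host names, scores and the update step below are assumptions, not CERN's actual implementation):

```python
# Hypothetical host scores standing in for whatever health/load probe the
# real service uses; lower score means more willing to take traffic.
hosts = {
    "node1.example.cern.ch": 0.30,
    "node2.example.cern.ch": 0.10,
    "node3.example.cern.ch": 0.95,  # e.g. overloaded or failing its health check
}

def select_best(scores, n=2, unhealthy=0.9):
    """Pick the n healthiest hosts to publish behind the load-balanced alias."""
    candidates = [(score, host) for host, score in scores.items() if score < unhealthy]
    return [host for score, host in sorted(candidates)[:n]]

def publish(alias, selected):
    # Placeholder for the periodic update of the delegated DNS zone.
    print("alias %s -> %s" % (alias, ", ".join(selected)))

publish("myservice.cern.ch", select_best(hosts))
```
-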
Donato De Girolamo (INFN)11/10/2016, 15:30
Requests for computing resources from LHC experiments are constantly mounting, and so is their peak usage. Since dimensioning a site to handle the peak usage times is impractical due to constraints on resources that many publicly-owned computing centres have, opportunistic usage of resources from external, even commercial cloud providers is becoming more and more interesting, and is even...
Go to contribution page -
Jean-Roch Vlimant (California Institute of Technology (US))11/10/2016, 15:30
The CMS experiment at LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning systems. Given the scale of the global queue in CMS, the operators found it increasingly difficult to monitor the pool to find problems and fix them. The operators had to rely on several different web pages, with several different levels of information, and sifting tirelessly...
Go to contribution page -
Marco Mascheroni (Fermi National Accelerator Lab. (US))11/10/2016, 15:30
CRAB3 is a tool used by more than 500 users all over the world for distributed Grid analysis of CMS data. Users can submit sets of Grid jobs with similar requirements (tasks) with a single user request. CRAB3 uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN.
As with most complex...
Go to contribution page -
Lorenzo Rinaldi (Universita e INFN, Bologna (IT))11/10/2016, 15:30
The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC...
Go to contribution page -
11/10/2016, 15:30
EMMA is a framework designed to build a family of configurable systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The architecture relies on asynchronous communicating components as a basis for decomposition of the system.
EMMA is embracing a fine-grained, component-based architecture, which produces a network of...
Go to contribution page
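The excerpt describes a loosely coupled, event-driven composition of components; purely as a generic illustration of that architectural style (not EMMA's actual API, which is not shown here), a minimal publish/subscribe sketch:

```python
from collections import defaultdict

class EventBus(object):
    """Toy message bus: components communicate only via named events."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()

# Two independent components, coupled only through the bus.
bus.subscribe("temperature", lambda value: print("logger: %.1f C" % value))
bus.subscribe("temperature", lambda value: print("alarm!") if value > 40 else None)

bus.publish("temperature", 21.5)
bus.publish("temperature", 42.0)
```
-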
Rolf Seuster (University of Victoria (CA))11/10/2016, 15:30
The use of opportunistic cloud resources by HEP experiments has significantly increased over the past few years. Clouds that are owned or managed by the HEP community are connected to the LHCONE network or the research network with global access to HEP computing resources. Private clouds, such as those supported by non-HEP research funds are generally connected to the international...
Go to contribution page -
wenxiao kan11/10/2016, 15:30
Traditional cluster computing resources can only partly meet the demand for massive data processing in the High Energy Physics (HEP) experiments, and volunteer computing remains a potential resource for this domain. It collects idle CPU time of desktop computers. Desktop Grid is the infrastructure to aggregate multiple volunteer computers to be included into a larger scale heterogeneous...
Go to contribution page -
11/10/2016, 15:30
Within the WLCG project, EOS is evaluated as a platform to demonstrate efficient deployment of geographically distributed storage. The aim of distributed storage deployments is to reduce the number of individual end-points for LHC experiments (>100 today) and to minimize the required effort for small storage sites. The split of the meta-data and data components in EOS makes it possible to operate one regional...
Go to contribution page -
11/10/2016, 15:30
The Simulation at Point1 project is successfully running traditional ATLAS simulation jobs on the trigger and data acquisition high level trigger resources. The pool of the available resources changes dynamically and quickly, therefore we need to be very effective in exploiting the available computing cycles. We will present our experience with using the Event Service that provides...
Go to contribution page -
Tim Smith (CERN)11/10/2016, 15:30
CERN Print Services include over 1000 printers and multi-function devices as well as a centralised print shop. Every year, some 12 million pages are printed. We will present the recent evolution of CERN print services, both from the technical perspective (automated web-based configuration of printers, Mail2Print) and the service management perspective.
Go to contribution page -
David Lange (Princeton University (US))11/10/2016, 15:30
The algorithms and infrastructure of the CMS offline software are under continuous change in order to adapt to a changing accelerator, detector and computing environment. In this presentation, we discuss the most important technical aspects of this evolution, the corresponding gains in performance and capability, and the prospects for continued software improvement in the face of challenges...
Go to contribution page -
Alexandr Zaytsev (Brookhaven National Laboratory (US))11/10/2016, 15:30
Ceph based storage solutions and especially object storage systems based on it are now well recognized and widely used across the HEP/NP community. Both object storage and block storage layers of Ceph are now supporting production ready services for HEP/NP experiments at many research organizations across the globe, including CERN and Brookhaven National Laboratory (BNL), and even the Ceph...
Go to contribution page -
Rainer Schwemmer (CERN)11/10/2016, 15:30
Since its original commissioning in 2008, the LHCb data acquisition system has seen several fundamental architectural changes. The original design had a single, continuous stream of data in mind, going from the read-out boards through a software trigger straight to a small set of files written in parallel. Over the years the enormous increase in available storage capacity has made it possible...
Go to contribution page -
Adrian Mönnich (CERN)11/10/2016, 15:30
Over the last two years, a small team of developers worked on an extensive rewrite of the Indico application based on a new technology stack. The result, Indico 2.0, leverages open source packages in order to provide a web application that is not only more feature-rich but, more importantly, builds on a solid foundation of modern technologies and patterns.
Indico 2.0 has the peculiarity of...
Go to contribution page -
11/10/2016, 15:30
After two years of maintenance and upgrade, the Large Hadron Collider (LHC) has started its second four-year run. In the meantime, the CMS experiment at the LHC has also undergone two years of maintenance and upgrade, especially in the field of the Data Acquisition and online computing cluster, where the system was largely redesigned and replaced. Various aspects of the supporting computing...
Go to contribution page -
Gerhard Ferdinand Rzehorz (Georg-August-Universitaet Goettingen (DE))11/10/2016, 15:30
The researchers at the Google Brain team released their second generation Deep Learning library, TensorFlow, as an open-source package under the Apache 2.0 license in November 2015. Google had already deployed the first generation library, DistBelief, in various systems such as Google Search, advertising systems, speech recognition systems, Google Images, Google Maps, Street View, Google...
Go to contribution page
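As a reminder of the programming model the library exposes, a toy example of our own using the TensorFlow 1.x graph-and-session API that was current at the time (not taken from the contribution):

```python
import tensorflow as tf  # TensorFlow 1.x style API

# Define a tiny dataflow graph: a linear transformation y = x*W + b.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
W = tf.Variable(tf.random_normal([3, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(x, W) + b

# The graph only executes once it is run inside a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```
-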
Mr Felice Pantaleo (CERN - Universität Hamburg)11/10/2016, 15:30
The High Luminosity LHC (HL-LHC) is a project to increase the luminosity of the Large Hadron Collider to 5x10^34 cm^-2 s^-1. The CMS experiment is planning a major upgrade in order to cope with an expected average number of overlapping collisions per bunch crossing of 140. The dataset sizes will increase by several orders of magnitude, and so will the demand for larger computing...
Go to contribution page -
Mr Konstantin Pugachev (Budker Institute of Nuclear Physics (RU))11/10/2016, 15:30
We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part of it is operator access to the experimental databases (configuration, conditions and metadata).
The system is designed in a client-server architecture. A user interacts with it via a web interface. The server side includes several logical layers: user...
Go to contribution page -
Mr Xianghu Zhao (NanJing University), Xiaomei Zhang (Chinese Academy of Sciences (CN))11/10/2016, 15:30
The BESIII experiment located in Beijing is an electron-positron collision experiment to study Tau-Charm physics. Now in its middle age, BESIII has accumulated more than 1 PB of raw data, and a distributed computing system based on DIRAC has been built up and in production since 2012 to deal with peak demands. Nowadays clouds have become a popular way to provide resources among BESIII...
Go to contribution page -
Mr Stefan Pflueger (Helmholtz Institute Mainz), Stefan Pflueger11/10/2016, 15:30
The high precision experiment PANDA is specifically designed to shed new light on the structure and properties of hadrons. PANDA is a fixed target antiproton proton experiment and will be part of Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. When measuring the total cross sections or determining the properties of intermediate states very precisely e.g. via the energy...
Go to contribution page -
11/10/2016, 15:30
Simulated samples of various physics processes are a key ingredient within analyses to unlock the physics behind LHC collision data. Samples with more and more statistics are required to keep up with the increasing amounts of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged particle tracks from energy deposits, which additionally...
Go to contribution page -
Timur Ablyazimov (J)11/10/2016, 15:30
Charmonium is one of the most interesting, yet most challenging observables for the CBM experiment. CBM will try to measure charmonium in the di-muon decay channel in heavy-ion collisions close to or even below the kinematic threshold for elementary interactions. The expected signal yield is consequently extremely low - less than one in a million collisions. CBM as a high-rate experiment shall...
Go to contribution page -
Tim Smith (CERN IT-CDA)11/10/2016, 15:30
In October 2015, CERN's core website moved to a new address, http://home.cern, marking the launch of the brand new top-level domain .cern. In combination with a formal governance and registration policy, the IT infrastructure needed to be extended to accommodate the hosting of web sites in this new top-level domain. We will present the technical implementation in the framework...
Go to contribution page -
Grigori Rybkin (Laboratoire de l'Accelerateur Lineaire (FR))11/10/2016, 15:30
Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed...
Go to contribution page -
Marek Domaracky (CERN)11/10/2016, 15:30
For almost 10 years CERN has been providing live webcasts of events using Adobe Flash technology. This year is finally the year that Flash died at CERN! At CERN we closely follow the broadcast industry and always try to provide our users with the same experience as they have on other commercial streaming services. With Flash being slowly phased out on most of the streaming...
Go to contribution page -
Ksenia Gasnikova (Deutsches Elektronen-Synchrotron (DE))11/10/2016, 15:30
Accurate simulation of the calorimeter response for high energy electromagnetic particles is essential for the LHC experiments. Detailed simulation of the electromagnetic showers using Geant4 is however very CPU intensive, and various fast simulation methods were proposed instead. The frozen shower simulation substitutes the full propagation of the showers for energies below 1 GeV by showers...
Go to contribution page
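A schematic rendering of the substitution logic described above; all names and the 1 GeV threshold handling are illustrative stand-ins, not the actual ATLAS implementation:

```python
import random

FROZEN_SHOWER_THRESHOLD = 1.0  # GeV: below this, substitute a pre-simulated shower

# Hypothetical stand-ins for the shower library lookup and the detailed simulation.
def lookup_frozen_shower(energy, eta):
    return "library shower (E=%.2f GeV, eta=%.2f)" % (energy, eta)

def detailed_geant4_step(energy, eta):
    return "full simulation (E=%.2f GeV, eta=%.2f)" % (energy, eta)

def simulate_em_particle(energy, eta):
    """Swap in a frozen shower once the particle energy drops below threshold,
    otherwise keep running the detailed simulation."""
    if energy < FROZEN_SHOWER_THRESHOLD:
        return lookup_frozen_shower(energy, eta)
    return detailed_geant4_step(energy, eta)

for energy in (5.0, 0.4):
    print(simulate_em_particle(energy, eta=random.uniform(-2.5, 2.5)))
```
-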
Cristovao Cordeiro (CERN)11/10/2016, 15:30
The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing...
Go to contribution page -
11/10/2016, 15:30
The LHCb Software Framework Gaudi was initially designed and developed almost twenty years ago, when computing was very different from today. It has also been used by a variety of other experiments, including ATLAS, Daya Bay, GLAST, HARP, LZ, and MINERVA. Although it has always been actively developed throughout these years, stability and backward compatibility have been favoured, reducing the...
Go to contribution page -
11/10/2016, 15:30
After an initial R&D stage of prototyping portable performance for particle transport simulation, the GeantV project reaches a new phase where the different components such as kernel libraries, scheduling, geometry and physics are rapidly developing. The increase in complexity is accelerated by the multiplication of demonstrator examples and tested platforms, while trying to maintain a...
Go to contribution page -
Derek John Weitzel (University of Nebraska (US)), Robert Quick (Indiana University)11/10/2016, 15:30
Throughout the last decade the Open Science Grid (OSG) has been fielding requests from user communities, resource owners, and funding agencies to provide information about utilization of OSG resources. Requested data include traditional “accounting” - core-hours utilized - as well as user’s certificate Distinguished Name, their affiliations, and field of science. The OSG accounting service,...
Go to contribution page -
Dave Dykstra (Fermi National Accelerator Laboratory)11/10/2016, 15:30
It is well known that submitting jobs to the grid and transferring the resulting data are not trivial tasks, especially when users are required to manage their own X.509 certificates. Asking users to manage their own certificates means that they need to keep the certificates secure, remember to renew them periodically, frequently create proxy certificates, and make them available to...
Go to contribution page -
11/10/2016, 15:30
Grid Site Availability Evaluation and Monitoring at CMS
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyze the vast quantity of scientific data recorded every year.
The computing resources are grouped into sites and organized in a tiered structure. A tier consists of sites in various countries...
Go to contribution page -
Dr Fred Stober (Hamburg University (DE))11/10/2016, 15:30
grid-control is an open source job submission tool that supports common HEP workflows.
Since 2007 it has been used by a number of HEP analyses to process tasks which routinely reach the order of tens of thousands of jobs. The tool is very easy to deploy, either from its repository or the Python Package Index (PyPI). The project aims at being lightweight and portable. It can run in...
Go to contribution page -
Dr Martin Ritter (LMU / Cluster Universe)11/10/2016, 15:30
The Belle II experiment at the SuperKEKB e+e- accelerator is preparing for taking first collision data next year. For the success of the experiment it is essential to have information about varying conditions available in the simulation, reconstruction, and analysis code.
The interface to the conditions data in the client code was designed to make the life for developers as easy as possible....
Go to contribution page -
11/10/2016, 15:30
At the Large Hadron Collider, numerous physics processes expected within the standard model and theories beyond it give rise to very high momentum particles decaying to multihadronic final states. Development of algorithms for efficient identification of such “boosted” particles while rejecting the background from multihadron jets from light quarks and gluons can greatly aid in the sensitivity...
Go to contribution page -
Daniel Fazio (CERN)11/10/2016, 15:30
The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable...
Go to contribution page -
Daniel Murphy-Olson (Argonne National Laboratory)11/10/2016, 15:30
Argonne provides a broad portfolio of computing resources to researchers. Since 2011 we have been providing a cloud computing resource to researchers, primarily using Openstack. Over the last year we’ve been working to better support containers in the context of HPC. Several of our operating environments now leverage a combination of the three technologies which provides infrastructure...
Go to contribution page -
Alexander Dibbo (STFC RAL)11/10/2016, 15:30
The Scientific Computing Department of the STFC runs a cloud service for internal users and various user communities. The SCD Cloud is configured using a Configuration Management System called Aquilon. Many of the virtual machine images are also created/configured using Aquilon. These are not unusual; however, our integrations also allow Aquilon to be altered by the Cloud. For instance, creation...
Go to contribution page -
Francesco Prelz (Università degli Studi e INFN Milano (IT))11/10/2016, 15:30
IPv4 network addresses are running out and the deployment of IPv6 networking in many places is now well underway. Following the work of the HEPiX IPv6 Working Group, a growing number of sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) have deployed dual-stack IPv6/IPv4 services. The aim of this is to support the use of IPv6-only clients, i.e. worker nodes, virtual machines or...
Go to contribution page -
Michele Martinelli (INFN)11/10/2016, 15:30
Hybrid systems are emerging as an efficient solution in the HPC arena, with an abundance of approaches for integration of accelerators into the system (i.e. GPU, FPGA). In this context, one of the most important features is the chance of being able to address the accelerators, whether they be local or off-node, on an equal footing. Correct balancing and high performance in how the network...
Go to contribution page -
Blake Oliver Burghgrave (Northern Illinois University (US))11/10/2016, 15:30
We present an overview of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012...
Go to contribution page -
Dorian Kcira (California Institute of Technology (US))11/10/2016, 15:30
The SDN Next Generation Integrated Architecture (SDN-NGeNIA) program addresses some of the key challenges facing the present and next generations of science programs in HEP, astrophysics, and other fields whose potential discoveries depend on their ability to distribute, process and analyze globally distributed petascale to exascale datasets. The SDN-NGenIA system under development by the...
Go to contribution page -
Mathias Michel (Helmholtz Institute Mainz)11/10/2016, 15:30
A large part of the programs of hadron physics experiments deals with the search for new conventional and exotic hadronic states such as hybrids and glueballs. In a majority of analyses a Partial Wave Analysis (PWA) is needed to identify possible exotic states and to classify known states. Of special interest is the comparison or combination of data from multiple experiments. Therefore, a...
Go to contribution page -
Andrew McNab (University of Manchester)11/10/2016, 15:30
This paper describes GridPP's Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG, other HEP experiments, and some astronomy projects. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack with a push model, or an...
Go to contribution page -
Peter Elmer (Princeton University (US))11/10/2016, 16:45
-
David Britton (University of Glasgow (GB)), Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)), Frank Wuerthwein (Univ. of California San Diego (US)), Graeme Stewart (University of Glasgow (GB)), Ian Bird (CERN), Wahid Bhimji (Lawrence Berkeley National Lab. (US))11/10/2016, 17:15
-
Karan Bahtia (Google), Randy Sobie (University of Victoria (CA)), Taylor Newill (Microsoft), Tim Bell (CERN)12/10/2016, 08:45
-
Federico Carminati (CERN)12/10/2016, 09:40
-
John Martinis (Google)12/10/2016, 10:15
-
Max Fischer (KIT - Karlsruhe Institute of Technology (DE))12/10/2016, 11:15
Over the past several years, rapid growth of data has affected many fields of science. This has often resulted in the need for overhauling or exchanging the tools and approaches in the disciplines’ data life cycles, allowing the application of new data analysis methods and facilitating improved data sharing.
The project Large-Scale Data Management and Analysis (LSDMA) of the German Helmholtz...
Go to contribution page -
Fons Rademakers (CERN)12/10/2016, 11:15
CERN openlab is a unique public-private partnership between CERN and leading IT companies and research institutes. Several of the CERN openlab projects investigate technologies that have the potential to become game changers in HEP software development (like Intel Xeon-FPGA, Intel 3DXpoint memory, Micron Automata Processor, etc.). In this presentation I will highlight a number of these...
Go to contribution page -
Mikhail Hushchyn (Yandex School of Data Analysis (RU))12/10/2016, 11:15
The LHCb collaboration is one of the four major experiments at the Large Hadron Collider at CERN. Petabytes of data are generated by the detectors and Monte-Carlo simulations. The LHCb Grid interware LHCbDIRAC is used to make data available to all collaboration members around the world. The data is replicated to the Grid sites in different locations. However, disk storage on the Grid is...
Go to contribution page -
Lisa Zangrando (Universita e INFN, Padova (IT))12/10/2016, 11:15
Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing their fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a...
Go to contribution page -
Daniel Sherman Riley (Cornell University (US))12/10/2016, 11:15
Limits on power dissipation have pushed CPUs to grow in parallel processing capabilities rather than clock rate, leading to the rise of "manycore" or GPU-like processors. In order to achieve the best performance, applications must be able to take full advantage of vector units across multiple cores, or some analogous arrangement on an accelerator card. Such parallel performance is becoming a...
Go to contribution page -
Luisa Arrabito (LUPM/CNRS)12/10/2016, 11:15
The Cherenkov Telescope Array (CTA) – an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale – is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, therefore resulting in 4 PB of raw data per year and a total of 27...
Go to contribution page -
Matteo Manzali (Universita di Ferrara & INFN (IT))12/10/2016, 11:15
The INFN's project KM3NeT-Italy, supported with Italian PON (National Operative Programs) funding, has designed a distributed Cherenkov neutrino telescope for collecting photons emitted along the path of the charged particles produced in neutrino interactions. The detector consists of 8 vertical structures, called towers, instrumented with a total number of 672 Optical Modules (OMs), and its...
Go to contribution page -
Christian Gumpert (CERN)12/10/2016, 11:30
The reconstruction of charged particle trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under...
Go to contribution page -
Soohyung Lee (Institute for Basic Science)12/10/2016, 11:30
Axion is a candidate of dark matter and is believed to be a breakthrough of strong CP problem in QCD [1]. CULTASK (CAPP Ultra-Low Temperature Axion Search in Korea) experiment is an axion search experiment which is being performed at Center for Axion and Precision Physics Research (CAPP), Institute for Basic Science (IBS) in Korea. Based on Sikivie’s method [2], CULTASK uses a resonant cavity...
Go to contribution page -
12/10/2016, 11:30
The upgraded Dynamic Data Management framework, Dynamo, is designed to manage the majority of the CMS data in an automated fashion. At the moment all CMS Tier-1 and Tier-2 data centers host about 50 PB of official CMS production data, which are all managed by this system. There are presently two main pools that Dynamo manages: the Analysis pool for user analysis data, and the Production pool...
Go to contribution page -
Federica Legger (Ludwig-Maximilians-Univ. Muenchen (DE))12/10/2016, 11:30
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous...
Go to contribution page -
Dr Martin Ritter (LMU / Cluster Universe)12/10/2016, 11:30
Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over 400,000 lines of C++ and python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. To keep a coherent software stack of high quality such that it can be...
Go to contribution page -
Enric Tejedor Saavedra (CERN)12/10/2016, 11:30
SWAN is a novel service to perform interactive data analysis in the cloud. SWAN allows users to write and run their data analyses with only a web browser, leveraging the widely-adopted Jupyter notebook interface. The user code, executions and data live entirely in the cloud. SWAN makes it easier to produce and share results and scientific code, access scientific software, produce tutorials and...
Go to contribution page -
Diego MICHELOTTO (INFN - CNAF)12/10/2016, 11:45
Open City Platform (OCP) is an industrial research project funded by the Italian Ministry of University and Research, started in 2014. It intends to research, develop and test new technological solutions open, interoperable and usable on-demand in the field of Cloud Computing, along with new sustainable organizational models for the public administration, to innovate, with scientific results,...
Go to contribution page -
Marcel Rieger (Rheinisch-Westfaelische Tech. Hoch. (DE))12/10/2016, 11:45
In particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual workflows manually, which is time-consuming and often leads to undocumented relations between particular workloads. We present a generic analysis design pattern...
Go to contribution page
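The excerpt does not name the underlying toolkit; purely as an illustration of what a declaratively defined, dependency-aware analysis workflow can look like, a sketch using the open-source luigi package (an assumption, not necessarily what the authors build on):

```python
import luigi

class Selection(luigi.Task):
    """First workload: produce a selected-events file for one dataset."""
    dataset = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget("selected_%s.txt" % self.dataset)

    def run(self):
        with self.output().open("w") as f:
            f.write("selected events for %s\n" % self.dataset)

class Histograms(luigi.Task):
    """Second workload: documented dependency on the selection step."""
    dataset = luigi.Parameter()

    def requires(self):
        return Selection(dataset=self.dataset)

    def output(self):
        return luigi.LocalTarget("hists_%s.txt" % self.dataset)

    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write("histograms from " + fin.read())

if __name__ == "__main__":
    luigi.build([Histograms(dataset="data2016")], local_scheduler=True)
```
-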
qiulan huang (Institute of High Energy Physics, Beijing)12/10/2016, 11:45
As a new approach to manage resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, with HTCondor as the job queue management system. An accounting system which can record the resource usage of different experiment groups in detail was also developed. There are two types of the...
Go to contribution page -
Maxim Potekhin (Brookhaven National Laboratory (US))12/10/2016, 11:45
The Deep Underground Neutrino Experiment (DUNE) will employ a uniquely large (40kt) Liquid Argon Time Projection chamber as the main component of its Far Detector. In order to validate this design and characterize the detector performance an ambitious experimental program (called "protoDUNE") has been created which includes a beam test of a large-scale DUNE prototype at CERN. The amount of...
Go to contribution page -
Max Fischer (KIT - Karlsruhe Institute of Technology (DE))12/10/2016, 11:45
With the LHC Run 2, end user analyses are increasingly challenging for both users and resource providers. On the one hand, boosted data rates and more complex analyses favor and require larger data volumes to be processed. On the other hand, efficient analyses and resource provisioning require fast turnaround cycles. This puts the scalability of analysis infrastructures to new...
Go to contribution page -
Jason Webb (Brookhaven National Lab)12/10/2016, 11:45
The reconstruction and identification of charmed hadron decays provides an important tool for the study of heavy quark behavior in the Quark Gluon Plasma. Such measurements require high resolution to topologically identify decay daughters at vertices displaced <100 microns from the primary collision vertex, placing stringent demands on track reconstruction software. To enable these...
Go to contribution page -
Dr William Badgett (Fermilab)12/10/2016, 11:45
The LArIAT Liquid Argon Time Projection Chamber (TPC) in a Test Beam experiment explores the interaction of charged particles such as pions, kaons, electrons, muons and protons within the active liquid argon volume of the TPC detector. The LArIAT experiment started data collection at the Fermilab Test Beam Facility (FTBF) in April 2015 and continues to run in 2016. LArIAT provides important...
Go to contribution page -
Sandro Christian Wenzel (CERN)12/10/2016, 12:00
The VecGeom geometry library is a relatively recent effort aiming to provide a modern and high performance geometry service for particle-detector simulation in hierarchical detector geometries common to HEP experiments. One of its principal targets is the effective use of vector SIMD hardware instructions to accelerate geometry calculations for single-track as well as multiple-track...
Go to contribution page
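VecGeom itself is a C++ library, so the following is only a conceptual numpy illustration of the "many tracks per geometry query" batching that SIMD vectorisation exploits; the solid, the query and the data layout are assumptions:

```python
import numpy as np

def distance_to_sphere_exit(points, directions, radius):
    """Distance along each (unit) direction from a point inside a sphere
    centred at the origin to the point where the track exits it, for a whole
    batch of tracks in one vectorised call."""
    b = np.einsum("ij,ij->i", points, directions)          # p . d per track
    c = np.einsum("ij,ij->i", points, points) - radius**2  # |p|^2 - r^2 (< 0 inside)
    return -b + np.sqrt(b * b - c)

# A batch of 1024 tracks starting inside a sphere of radius 5.
points = np.random.uniform(-1, 1, size=(1024, 3))
dirs = np.random.normal(size=(1024, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(distance_to_sphere_exit(points, dirs, radius=5.0)[:5])
```
-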
Dario Berzano (CERN)12/10/2016, 12:00
Apache Mesos is a resource management system for large data centres, initially developed by UC Berkeley, and now maintained under the Apache Foundation umbrella. It is widely used in the industry by companies like Apple, Twitter, and AirBnB and it's known to scale to 10'000s of nodes. Together with other tools of its ecosystem, like Mesosphere Marathon or Chronos, it provides an end-to-end...
Go to contribution page -
12/10/2016, 12:00
One of the challenges a scientific computing center has to face is to keep delivering a computational framework well consolidated within the community (i.e. the batch farm), while complying with modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. HTCondor is a LRMS widely used in the...
Go to contribution page -
Jose Caballero Bejar (Brookhaven National Laboratory (US))12/10/2016, 12:00
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of job requirements and a more precise management of diverse local resources. However,...
Go to contribution page -
PATRICK MEADE (University of Wisconsin-Madison)12/10/2016, 12:00
The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year,...
Go to contribution page -
Tobias Stockmanns (Forschungszentrum Jülich GmbH)12/10/2016, 12:00
One of the large challenges of future particle physics experiments is the trend to run without a first level hardware trigger. The typical data rates easily exceed hundreds of GBytes/s, which is way too much to be stored permanently for an offline analysis. Therefore a strong data reduction has to be performed by selecting only those data which are physically interesting. This implies that all...
Go to contribution page -
12/10/2016, 12:00
With the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. HEP experiments, with their ever-increasing computing requirements, are exploring new methods of computation and data handling. With composite multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing...
Go to contribution page -
Enrico Mazzoni (INFN-Pisa)12/10/2016, 12:15
Clouds and Virtualization are typically used in computing centers to satisfy diverse needs: different operating systems, software releases or fast servers/services delivery. On the other hand, solutions relying on Linux kernel capabilities such as Docker are well suited for application isolation and software development. In our previous work (Docker experience at INFN-Pisa Grid Data Center*) we...
Go to contribution page -
12/10/2016, 12:15
The HTCondor-CE is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller, WLCG Tier-3 sites or opportunistic clusters, it can be a difficult task to install and configure the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node,...
Go to contribution page -
Janusz Martyniak12/10/2016, 12:15
The international Muon Ionization Cooling Experiment (MICE) currently operating at the Rutherford Appleton Laboratory in the UK, is designed to demonstrate the principle of muon ionization cooling for application to a future Neutrino Factory or Muon Collider. We present the status of the framework for the movement and curation of both raw and reconstructed data. We also review the...
Go to contribution page -
Louis-Guillaume Gagnon (Universite de Montreal (CA))12/10/2016, 12:15
ATLAS track reconstruction code is continuously evolving to match the demands from the increasing instantaneous luminosity of LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged particle separations on the order...
Go to contribution page -
Prof. Martin Sevior (University of Melbourne)12/10/2016, 12:15
The Toolkit for Multivariate Analysis (TMVA) is a component of the ROOT data analysis framework and is widely used for classification problems. For example, TMVA might be used for the binary classification problem of distinguishing signal from background events.
The classification methods included in TMVA are standard, well-known machine learning techniques which can be implemented in other...
Go to contribution page
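Such comparisons typically pit TMVA against general-purpose toolkits; as one hedged example of the kind of external implementation meant here (the specific packages compared are not listed in the excerpt), a boosted-decision-tree classifier on a toy signal/background problem in scikit-learn:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy signal and background samples in two discriminating variables.
rng = np.random.RandomState(0)
signal = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(5000, 2))
background = rng.normal(loc=[-1.0, -1.0], scale=0.8, size=(5000, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_train, y_train)
print("test accuracy: %.3f" % bdt.score(X_test, y_test))
```
-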
Dmitry Arkhipkin (Brookhaven National Laboratory)12/10/2016, 12:15
One of the STAR experiment's modular Messaging Interface and Reliable Architecture framework (MIRA) integration goals is to provide seamless and automatic connections with the existing control systems. After an initial proof of concept and operation of the MIRA system as a parallel data collection system for online use and real-time monitoring, the STAR Software and Computing group is now...
Go to contribution page -
Lisa Zangrando (Universita e INFN, Padova (IT))12/10/2016, 12:15
The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were...
Go to contribution page -
Ricardo Brito Da Rocha (CERN)12/10/2016, 12:30
Containers remain a hot topic in computing, with new use cases and tools appearing every day. Basic functionality such as spawning containers seems to have settled, but topics like volume support or networking are still evolving. Solutions like Docker Swarm, Kubernetes or Mesos provide similar functionality but target different use cases, exposing distinct interfaces and APIs.
The CERN...
Go to contribution page -
Malachi Schram12/10/2016, 12:30
Motivated by the complex workflows within Belle II, we propose an approach for efficient execution of workflows on distributed resources that integrates provenance, performance modeling, and optimization-based scheduling. The key components of this framework include modeling and simulation methods to quantitatively predict workflow component behavior; optimized decision making such as choosing...
Go to contribution page -
Dr Thomas Hauth (KIT - Karlsruhe Institute of Technology (DE))12/10/2016, 12:30
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers.
Rather than relying on...
Go to contribution page -
Dr Kenneth Richard Herner (Fermi National Accelerator Laboratory (US))12/10/2016, 12:30
Gravitational wave (GW) events can have several possible progenitors, including binary black hole mergers, cosmic string cusps, core-collapse supernovae, black hole-neutron star mergers, and neutron star-neutron star mergers. The latter three are expected to produce an electromagnetic signature that would be detectable by optical and infrared telescopes. To that end, the LIGO-Virgo...
Go to contribution page -
Peter Hobson (Brunel University (GB))12/10/2016, 12:30
We investigate the combination of Monte Carlo Tree Search, hierarchical space decomposition, Hough Transform techniques and parallel computing applied to the problem of line detection and shape recognition in general. Paul Hough introduced in 1962 a method for detecting lines in binary images; extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform...
Go to contribution page
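For orientation, a minimal numpy implementation of the classic line-detecting Hough transform mentioned above (the contribution's tree-search and space-decomposition extensions are not reproduced here):

```python
import numpy as np

# Every "on" pixel votes for all (theta, rho) parameter pairs of lines passing
# through it; peaks in the accumulator correspond to detected lines.
def hough_lines(image, n_theta=180):
    ys, xs = np.nonzero(image)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*image.shape)))
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        accumulator[rhos, np.arange(n_theta)] += 1
    return accumulator, thetas, diag

# Example: a diagonal line of pixels produces a single strong peak.
img = np.eye(64, dtype=np.uint8)
acc, thetas, offset = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print("rho =", rho_idx - offset, "theta =", np.degrees(thetas[theta_idx]), "deg")
```
-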
Tammy Walton (Fermilab)12/10/2016, 12:30
The Muon g-2 experiment will measure the precession rate of positively charged muons subjected to an external magnetic field in a storage ring. To prevent interference with the magnetic field, both the calorimeter and tracker detectors are situated along the ring and measure the muon's properties via the decay positron. The influence of the magnetic field and oscillation motions of the muon beam...
Go to contribution page
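The quantity being extracted is the anomalous precession frequency; in its simplest textbook form (neglecting the electric-field and pitch corrections that are suppressed by running at the magic momentum, and not taken from the contribution itself):

```latex
% Anomalous spin precession frequency (SI units):
\omega_a = \omega_{\text{spin}} - \omega_{\text{cyclotron}} = a_\mu \, \frac{e B}{m_\mu},
\qquad a_\mu \equiv \frac{g - 2}{2}
```
-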
Lisa Gerhardt (LBNL)12/10/2016, 12:30
Bringing HEP computing to HPC can be difficult. Software stacks are often very complicated with numerous dependencies that are difficult to get installed on an HPC system. To address this issue, amongst others, NERSC has created Shifter, a framework that delivers Docker-like functionality to HPC. It works by extracting images from native formats (such as a Docker image) and converting them to...
Go to contribution page -
Gregor Mittag (Deutsches Elektronen-Synchrotron (DE))12/10/2016, 12:45
The all-silicon design of the tracking system of the CMS experiment provides excellent resolution for charged tracks and an efficient tagging of jets. As the CMS tracker, and in particular its pixel detector, underwent repairs and experienced changed conditions with the start of the LHC Run-II in 2015, the position and orientation of each of the 15148 silicon strip and 1440 silicon pixel...
Go to contribution page -
12/10/2016, 12:45
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O...
Go to contribution page -
Dr Leng Tau (Supermicro)12/10/2016, 12:45
COTS HPC has evolved for two decades to become an undeniable mainstream computing solution. It represents a major shift away from yesterday's proprietary, vector-based processors and architectures to modern supercomputing clusters built on open industry-standard hardware. This shift provided the industry with a cost-effective path to high-performance, scalable and flexible supercomputers (from...
Go to contribution page -
Alessandro Lonardo (Universita e INFN, Roma I (IT))12/10/2016, 12:45
In order to face the LHC luminosity increase planned for the next years, new high-throughput network mechanisms interfacing the detector readout to the software trigger computing nodes are being developed in several CERN experiments. Adopting many-core computing architectures such as Graphics Processing Units (GPUs) or the Many Integrated Core (MIC) architecture would make it possible to drastically reduce the size...
Go to contribution page -
Dr Rongqiang Cao (Computer Network Information Center, Chinese Academy of Sciences)12/10/2016, 12:45
The development of scientific computing is increasingly moving to web and mobile applications. All these clients need high-quality implementations of accessing heterogeneous computing resources provided by clusters, grid computing or cloud computing. We present a web service called SCEAPI and describe how it can abstract away many details and complexities involved in the use of scientific...
Go to contribution page -
Peter Hristov (CERN)13/10/2016, 08:45
-
Mario Cromaz (LBNL)13/10/2016, 09:07
-
Jacek Becla (SLAC)13/10/2016, 09:30
-
Dula Parkinson13/10/2016, 10:00
-
Cristovao Cordeiro (CERN)13/10/2016, 11:00
With the imminent upgrades to the LHC and the consequent increase of the amount and complexity of data collected by the experiments, CERN's computing infrastructures will be facing a large and challenging demand of computing resources. Within this scope, the adoption of cloud computing at CERN has been evaluated and has opened the doors for procuring external cloud services from providers,...
Go to contribution page -
Jeremi Niedziela (Warsaw University of Technology (PL))13/10/2016, 11:00
Events visualisation in ALICE - current status and strategy for Run 3
Jeremi Niedziela for the ALICE Collaboration
A Large Ion Collider Experiment (ALICE) is one of the four big experiments running at the Large Hadron Collider (LHC), which focuses on the study of the Quark-Gluon Plasma (QGP) being produced in heavy-ion collisions.
The ALICE Event Visualisation Environment (AliEVE) is...
Go to contribution page -
Maria Girone (CERN)13/10/2016, 11:00
Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last...
Go to contribution page -
Adrian Mönnich (CERN)13/10/2016, 11:00
The last two years have been atypical to the Indico community, as the development team undertook an extensive rewrite of the application and deployed no less than 9 major releases of the system. Users at CERN have had the opportunity to experience the results of this ambitious endeavour. They have only seen, however, the "tip of the iceberg".
Indico 2.0 employs a completely new stack,...
Go to contribution page -
Patricia Conde Muino (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part)13/10/2016, 11:00
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at...
Go to contribution page -
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)), Dr Roger Cottrell (SLAC National Accelerator Laboratory), Wei Yang (SLAC National Accelerator Laboratory (US)), Dr Wilko Kroeger (SLAC National Accelerator Laboratory)13/10/2016, 11:00
The exponentially increasing need for high speed data transfer is driven by big data, cloud computing together with the needs of data intensive science, High Performance Computing (HPC), defense, the oil and gas industry etc. We report on the Zettar ZX software that has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a...
Go to contribution page -
Dr Andrii Tykhonov (Universite de Geneve (CH))13/10/2016, 11:00
DAMPE is a powerful space telescope launched in December 2015, able to detect electrons and photons in a wide range of energy (5 GeV to 10 TeV) and with unprecedented energy resolution. The silicon tracker is a crucial component of the detector, able to determine the direction of detected particles and trace the origin of incoming gamma rays. This contribution covers the reconstruction software of...
Go to contribution page -
Dr Thomas Hauth (KIT)13/10/2016, 11:15
Today’s analyses for high energy physics experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts - often written in Python or ROOT macros which call complex compiled algorithms in the background - to perform fitting procedures and generate plots. During...
Go to contribution page -
Paul James Laycock (University of Liverpool (GB))13/10/2016, 11:15
In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online...
Go to contribution page -
13/10/2016, 11:15
As many Tier 3 and some Tier 2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either...
Go to contribution page -
David Rohr (Johann-Wolfgang-Goethe Univ. (DE))13/10/2016, 11:15
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online compute farm which reconstructs events measured by the ALICE detector in real time. The most compute-intensive part is the reconstruction of particle trajectories, called tracking, and the most important detector for tracking is the...
Go to contribution page -
Patrick Fuhrmann (Deutsches Elektronen-Synchrotron (DE))13/10/2016, 11:15
INDIGO-DataCloud (INDIGO for short, https://www.indigo-datacloud.eu) is a project started in April 2015, funded under the EC Horizon 2020 framework program. It includes 26 European partners located in 11 countries and addresses the challenge of developing open source software, deployable in the form of a data/computing platform, aimed to scientific communities and designed to be deployed on...
Go to contribution page -
Sebastian Lopienski (CERN)13/10/2016, 11:15
The CERN Computer Security Team is assisting teams and individuals at CERN who want to address security concerns related to their computing endeavours. For projects in the early stages, we help incorporate security in system architecture and design. For software that is already implemented, we do penetration testing. For particularly sensitive components, we perform code reviews. Finally, for...
Go to contribution page -
Helge Meinhard (CERN)13/10/2016, 11:15
HEP is only one of many sciences with sharply increasing compute requirements that cannot be met by profiting from Moore's law alone. Commercial clouds potentially allow for realising larger economies of scale. While some small-scale experience requiring dedicated effort has been collected, public cloud resources have not been integrated yet with the standard workflows of science organisations...
Go to contribution page -
Mr Felice Pantaleo (CERN - Universität Hamburg)13/10/2016, 11:30
In 2019 the Large Hadron Collider will undergo upgrades in order to increase the luminosity by a factor of two compared to today's nominal luminosity. Currently the CMS software parallelization strategy is oriented at scheduling one event per thread. However, the timing performance of tracking depends on the factorial of the pileup, leading the current approach to increased latency. When designing a HEP...
Go to contribution page -
Hannah Short (CERN)13/10/2016, 11:30
HEP has long been considered an exemplary field in Federated Computing; the benefit of this technology has been recognised by the thousands of researchers who have used the grid for nearly 15 years. Whilst the infrastructure is mature and highly successful, Federated Identity Management (FIM) is one area in which the HEP community should continue to evolve.
The ability for a researcher to use...
Go to contribution page -
Riccardo Maria Bianchi (University of Pittsburgh (US))13/10/2016, 11:30
At the beginning, HEP experiments made use of photographical images both to record and store experimental data and to illustrate their findings. Then the experiments evolved and needed to find ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here a brief history of event displays...
Go to contribution page -
Jean-Roch Vlimant (California Institute of Technology (US))13/10/2016, 11:30
The main goal of the project is to demonstrate the ability to use HTTP data federations in a manner analogous to today's AAA infrastructure used by the CMS experiment. An initial testbed at Caltech has been built and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. A set of machines is already set up at the Caltech Tier2 in order to improve the...
Go to contribution page -
Dmitri Smirnov (BNL)13/10/2016, 11:30
Since 2014, the STAR experiment has been exploiting data collected by the Heavy Flavor Tracker (HFT), a group of high precision silicon-based detectors installed to enhance track reconstruction and pointing resolution of the existing Time Projection Chamber (TPC). The significant improvement in the primary vertex resolution resulting from this upgrade prompted us to revisit the variety of...
Go to contribution page -
Martin Gasthuber (DESY)13/10/2016, 11:30
The HNSciCloud project (presented in general by another contribution) faces the challenge to accelerate developments performed by the selected commercial providers. In order to guarantee cost-efficient usage of IaaS resources across a wide range of scientific communities, the technical requirements had to be carefully constructed. With respect to current IaaS offerings, data-intensive science...
Go to contribution page -
Ricardo Brito Da Rocha (CERN)13/10/2016, 11:30
The INDIGO-DataCloud project's ultimate goal is to provide a sustainable European software infrastructure for science, spanning multiple computer centers and existing public clouds.
The participating sites form a set of heterogeneous infrastructures, some running OpenNebula, some running OpenStack. There was the need to find a common denominator for the deployment of both the required PaaS...
Go to contribution page -
Thomas Mc Cauley (University of Notre Dame (US))13/10/2016, 11:45
Modern web browsers are powerful and sophisticated applications that support an ever-wider range of uses. One such use is rendering high-quality, GPU-accelerated, interactive 2D and 3D graphics in an HTML canvas. This can be done via WebGL, a JavaScript API based on OpenGL ES. Applications delivered via the browser have several distinct benefits for the developer and user. For example, they...
Go to contribution page -
Brian Paul Bockelman (University of Nebraska (US))13/10/2016, 11:45
Data federations have become an increasingly common tool for large collaborations such as CMS and Atlas to efficiently distribute large data files. Unfortunately, these typically come with weak namespace semantics and a non-POSIX API. On the other hand, CVMFS has provided a POSIX-compliant read-only interface for use cases with a small working set size (such as software distribution). The...
Go to contribution page -
Dr Paul Millar (Deutsches Elektronen-Synchrotron (DE))13/10/2016, 11:45
For over a decade, X509 Proxy Certificates have been used in High Energy Physics (HEP) to authenticate users and guarantee their membership in Virtual Organizations, on which subsequent authorization, e.g. for data access, is based. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as...
Go to contribution page -
Alessandro Degano (Universita e INFN Torino (IT)), Felice Pantaleo (CERN - Universität Hamburg)13/10/2016, 11:45
The increase in instantaneous luminosity, number of interactions per bunch crossing and detector granularity will pose an interesting challenge for the event reconstruction and the High Level Trigger system in the CMS experiment at the High Luminosity LHC (HL-LHC), as the amount of information to be handled will increase by 2 orders of magnitude. In order to reconstruct the Calorimetric...
Go to contribution page -
Xavier Espinal Curull (CERN)13/10/2016, 11:45
In the competitive 'market' for large-scale storage solutions, EOS has been showing its excellence in the multi-Petabyte high-concurrency regime. It has also shown a disruptive potential in powering the CERNBox service, in providing sync&share capabilities, and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as generic storage...
Go to contribution page -
Wenjing Wu (Computer Center, IHEP, CAS)13/10/2016, 11:45
JUNO (Jiangmen Underground Neutrino Observatory) is a multi-purpose neutrino experiment designed to measure the neutrino mass hierarchy and mixing parameters. JUNO is expected to be in operation in 2019 with a raw data rate of 2 PB/year. The IHEP computing center plans to build up virtualization infrastructure to manage computing resources in the coming years, and JUNO has been selected as one of the...
Go to contribution page -
Kathryn Grimm (Lancaster University (GB))13/10/2016, 11:45
Efficient and precise reconstruction of the primary vertex in an LHC collision is essential both in the reconstruction of the full kinematic properties of a hard-scatter event and of soft interactions as a measure of the amount of pile-up. The reconstruction of primary vertices in the busy, high pile-up environment of Run-2 of the LHC is a challenging task. New methods have been developed by...
Go to contribution page -
Hannah Short (CERN)13/10/2016, 12:00
Access to WLCG resources is authenticated using an X509 and PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern Web Applications by the LHC experiments calls for simplified authentication processes keeping the underlying software unmodified.
In this work we will show an integrated Web-oriented solution (code name Kipper) with...
Go to contribution page -
13/10/2016, 12:00
The DD4hep detector description tool-kit offers a flexible and easy to use solution for the consistent and complete description of particle physics detectors in one single system. The sub-component DDRec provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDRec there is no need to define an additional, separate reconstruction geometry as is often...
Go to contribution page -
Stefano Gallorini (Universita e INFN, Padova (IT))13/10/2016, 12:00
In view of Run3 (2020) the LHCb experiment is planning a major upgrade to fully read out events at the 40 MHz collision rate. This is intended to greatly increase the statistics of the collected samples and push the precision beyond Run2. An unprecedented amount of data will be produced, which will be fully reconstructed in real time to perform fast selection and categorization of interesting events....
Go to contribution page -
Christopher Jones (Fermi National Accelerator Lab. (US))13/10/2016, 12:00
ParaView [1] is a high performance visualization application not widely used in HEP. It is a long standing open source project led by Kitware[2] and involves several DOE and DOD laboratories and has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community...
Go to contribution page -
Daniela Bauer (Imperial College Sci., Tech. & Med. (GB))13/10/2016, 12:00
When first looking at converting a part of our site’s grid infrastructure into a cloud based system in late 2013 we needed to ensure the continued accessibility of all of our resources during a potentially lengthy transition period.
Moving a limited number of nodes to the cloud proved ineffective as users expected a significant number of cloud resources to be available to justify the effort...
Go to contribution page -
Mario Lassnig (CERN)13/10/2016, 12:00
The increasing volume of physics data is posing a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated...
Go to contribution page -
13/10/2016, 12:00
ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizable fraction of overall resources for ATLAS simulation. This paper takes...
Go to contribution page -
David Yu (Brookhaven National Laboratory (US))13/10/2016, 12:15
Randomly restoring files from tapes degrades read performance primarily due to frequent tape mounts. The high latency of time-consuming tape mounts and dismounts is a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of a scheduler...
Go to contribution page -
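A conceptual sketch of the scheduling idea, not BNL's HPSS scheduler: pending restore requests are grouped by tape so that each tape is mounted once, and files are then read in on-tape order. The request fields are assumed for the example.

```python
from collections import defaultdict

def schedule_restores(requests):
    """requests: iterable of dicts with 'path', 'tape', 'position' keys."""
    by_tape = defaultdict(list)
    for req in requests:
        by_tape[req["tape"]].append(req)
    plan = []
    for tape, reqs in by_tape.items():                       # one mount per tape
        plan.append((tape, sorted(reqs, key=lambda r: r["position"])))
    return plan

requests = [
    {"path": "/hpss/run1/a.root", "tape": "T001", "position": 120},
    {"path": "/hpss/run2/b.root", "tape": "T002", "position": 10},
    {"path": "/hpss/run1/c.root", "tape": "T001", "position": 5},
]
for tape, ordered in schedule_restores(requests):
    print(tape, [r["path"] for r in ordered])
```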
Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))13/10/2016, 12:15
Reproducibility is a fundamental piece of the scientific method and increasingly complex problems demand ever wider collaboration between scientists. To make research fully reproducible and accessible to collaborators a researcher has to take care of several aspects: research protocol description, data access, preservation of the execution environment, workflow pipeline, and analysis script...
Go to contribution page -
Dr Robert Andrew Currie (Imperial College Sci., Tech. & Med. (GB))13/10/2016, 12:15
This talk will present the result of recent developments to support new users from the Large Scale Survey Telescope (LSST) group on the GridPP DIRAC instance. I will describe a workflow used for galaxy shape identification analyses whilst highlighting specific challenges as well as the solutions currently being explored. The result of this work allows this community to make best use of...
Go to contribution page -
Daniel Hugo Campora Perez (Universidad de Sevilla (ES))13/10/2016, 12:15
The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software, in order to filter events in real time. 30 million collisions per second will pass through a selection chain, where each step is executed conditionally on the acceptance of the previous one.
The Kalman Filter is a fit applied to all reconstructed tracks which, due to its time...
Go to contribution page -
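For reference, the Kalman filter named above is the standard recursive track fit; a minimal numpy predict/update step for a toy two-parameter state (position and slope) might look as follows. This is not the LHCb implementation, and the matrices and noise values are illustrative assumptions.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, measurement):
    # Predict: propagate state and covariance to the next detector layer.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: combine the prediction with the new measurement.
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    residual = measurement - H @ x_pred
    x_new = x_pred + K @ residual
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dz = 10.0                                        # layer spacing (arbitrary units)
F = np.array([[1.0, dz], [0.0, 1.0]])            # straight-line propagation
H = np.array([[1.0, 0.0]])                       # only the position is measured
x, P = np.zeros(2), np.eye(2) * 100.0            # vague initial state
x, P = kalman_step(x, P, F, np.eye(2) * 1e-4, H, np.array([[0.01]]),
                   np.array([1.2]))
print(x)
```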
Marian Stahl (Ruprecht-Karls-Universitaet Heidelberg (DE))13/10/2016, 12:15
The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was developed and employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever...
Go to contribution page -
Romain Wartel (CERN)13/10/2016, 12:15
This presentation offers an overview of the current security landscape - the threats, tools, techniques and procedures followed by attackers. These attackers range from cybercriminals aiming to make a profit, to nation-states searching for valuable information. Threat vectors have evolved in recent years; focus has shifted significantly, from targeting computer services directly, to aiming for...
Go to contribution page -
Jeffrey Michael Dost (Univ. of California San Diego (US))13/10/2016, 14:00
The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 Gbps network. The LHC @ UC is a proof of concept pilot project that focuses on interconnecting 6 University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego...
Go to contribution page -
Federico Stagni (CERN)13/10/2016, 14:00
The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for...
Go to contribution page -
Lukas Alexander Heinrich (New York University (US))13/10/2016, 14:00
The Durham High Energy Physics Database (HEPData) has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics. It is comprised of data points from plots and tables underlying over eight thousand publications, some of which are from the Large Hadron Collider (LHC) at CERN.
HEPData has been rewritten from the ground up...
Go to contribution page -
13/10/2016, 14:00
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC),...
Go to contribution page -
Maxim Borisyak (National Research University Higher School of Economics (HSE) (RU); Yandex School of Data Analysis (RU))13/10/2016, 14:00
The CRAYFIS experiment proposes the usage of private mobile phones as a ground detector for Ultra High Energy Cosmic Rays. Interacting with Earth's atmosphere, these produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As they interact with the CMOS detector they leave low-energy tracks that sometimes...
Go to contribution page -
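A heavily simplified sketch of the first step of such an analysis: selecting pixels above a noise threshold in a single camera frame. The frame model, shapes and threshold are assumptions; the real pipeline involves calibration and machine-learning based selection.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.poisson(lam=2.0, size=(480, 640)).astype(float)   # dark-noise frame
frame[200, 300:305] += 40.0                                    # fake ionising track

threshold = frame.mean() + 5 * frame.std()                     # simple 5-sigma cut
hit_rows, hit_cols = np.nonzero(frame > threshold)
print(f"{len(hit_rows)} candidate hit pixels above threshold {threshold:.1f}")
```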
Oliver Gutsche (Fermi National Accelerator Lab. (US))13/10/2016, 14:00
Experimental Particle Physics has been at the forefront of analyzing the world’s largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems collectively called “Big Data” technologies have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles...
Go to contribution page -
Hanna Malygina (GSI Darmstadt), Volker Friese (GSI Darmstadt)13/10/2016, 14:15
Precise modelling of detectors in simulations is the key to the understanding of their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report, we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS...
Go to contribution page -
Christian Faerber (CERN)13/10/2016, 14:15
The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a 'triggerless' readout scheme, where all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which also has to be processed to...
Go to contribution page -
Prof. Douglas Thain (University of Notre Dame)13/10/2016, 14:15
Reproducibility is an essential component of the scientific process. It is often necessary to check whether multiple runs of the same software produce the same result. This may be done to validate whether a new machine produces correct results on old software, whether new software produces correct results on an old machine, or to compare the equality of two different approaches to the...
Go to contribution page -
Christoph Paus (Massachusetts Inst. of Technology (US))13/10/2016, 14:15
We describe the development and deployment of a distributed campus computing infrastructure consisting of a single job submission portal linked to multiple local campus resources, as well as the wider computational fabric of the Open Science Grid (OSG). Campus resources consist of existing OSG-enabled clusters and clusters with no previous interface to the OSG. Users accessing the single...
Go to contribution page -
Volodimir Begy (University of Vienna (AT))13/10/2016, 14:15
Following the European Strategy for Particle Physics update 2013, the study explores different designs of circular colliders for the post-LHC era. Reaching unprecedented energies and luminosities requires understanding system reliability behaviour from the concept phase onwards and designing for availability and sustainable operation. The study explores industrial approaches to model and simulate the...
Go to contribution page -
Federico Stagni (CERN)13/10/2016, 14:15
In the last few years, new types of computing models, such as IaaS (Infrastructure as a Service) and IaaC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are in the form of opportunistic ones. Most, but not all, of these new infrastructures are based on virtualization techniques. In addition, some of them present opportunities...
Go to contribution page -
13/10/2016, 14:30
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centers affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centers all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the...
Go to contribution page -
Sang Un Ahn (KiSTi Korea Institute of Science & Technology Information (KR))13/10/2016, 14:30
The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI), located in Daejeon, South Korea, is the only data centre in the country that supports fundamental research fields dealing with large-scale data by providing its computing resources. For historical reasons it has run the Torque batch system, while recently it started running HTCondor for...
Go to contribution page -
507. HEP Track Finding with the Micron Automata Processor and Comparison with an FPGA-based SolutionJohn Freeman (Fermi National Accelerator Lab. (US))13/10/2016, 14:30
Moore’s Law has defied our expectations and remained relevant in the semiconductor industry in the past 50 years, but many believe it is only a matter of time before an insurmountable technical barrier brings about its eventual demise. Many in the computing industry are now developing post-Moore’s Law processing solutions based on new and novel architectures. An example is the Micron...
Go to contribution page -
Ana Trisovic (University of Cambridge (GB))13/10/2016, 14:30
The Large Hadron Collider beauty (LHCb) experiment at CERN specializes in investigating the slight differences between matter and antimatter by studying the decays of beauty or bottom (B) and charm (D) hadrons. The detector has been recording data from proton-proton collisions since 2010. The Data Preservation (DP) project at LHCb ensures preservation of the experimental and simulated (Monte...
Go to contribution page -
Marilena Bandieramonte (CERN)13/10/2016, 14:30
High-energy particle physics (HEP) has advanced greatly over recent years and current plans for the future foresee even more ambitious targets and challenges that have to be coped with. Amongst the many computer technology R&D areas, simulation of particle detectors stands out as the most time consuming part of HEP computing. An intensive R&D and programming effort is required to exploit the...
Go to contribution page -
Luca Menichetti (CERN), Marco Meoni (Universita di Pisa & INFN (IT)), Nicolo Magini (Fermi National Accelerator Lab. (US))13/10/2016, 14:30
The CMS experiment has implemented a computing model where distributed monitoring infrastructures are collecting any kind of data and metadata about the performance of the computing operations. This data can be probed further by harnessing Big Data analytics approaches and discovering patterns and correlations that can improve the throughput and the efficiency of the computing model.
CMS...
Go to contribution page -
Heiko Engel (Johann-Wolfgang-Goethe Univ. (DE))13/10/2016, 14:45
ALICE (A Large Ion Collider Experiment) is a detector system optimized for the study of heavy ion collisions at the CERN LHC. The ALICE High Level Trigger (HLT) is a computing cluster dedicated to the online reconstruction, analysis and compression of experimental data. The High-Level Trigger receives detector data via serial optical links into custom PCI-Express based FPGA...
Go to contribution page -
Prasanth Kothuri (CERN)13/10/2016, 14:45
The statistical analysis of infrastructure metrics comes with several specific challenges, including the fairly large volume of unstructured metrics from a large set of independent data sources. Hadoop and Spark provide an ideal environment in particular for the first steps of skimming rapidly through hundreds of TB of low relevance data to find and extract the much smaller data volume that is...
Go to contribution page -
Prof. Martin Sevior (University of Melbourne)13/10/2016, 14:45
The Belle II experiment will generate very large data samples. In order to reduce the time for data analyses, loose selection criteria will be used to create files rich in samples of particular interest for a specific data analysis (data skims). Even so, many of the resultant skims will be very large, particularly for highly inclusive analyses. The Belle II collaboration is investigating the...
Go to contribution page -
Andreas Gellrich (DESY)13/10/2016, 14:45
We present the consolidated batch system at DESY. As one of the largest resource centres DESY has to support differing work flows by HEP experiments in WLCG or Belle II as well as local users. By abandoning specific worker node setups in favour of generic flat nodes with middleware resources provided via CVMFS, we gain flexibility to subsume different use cases in a homogeneous environment. ...
Go to contribution page -
Torre Wenaus (Brookhaven National Laboratory (US))13/10/2016, 14:45
HEP software today is a rich and diverse domain in itself and exists within the mushrooming world of open source software. As HEP software developers and users we can be more productive and effective if our work and our choices are informed by a good knowledge of what others in our community have created or found useful. The HEP Software and Computing Knowledge Base, [hepsoftware.org][1], was...
Go to contribution page -
Sunanda Banerjee (Fermi National Accelerator Lab. (US))13/10/2016, 14:45
CMS has tuned its simulation program and chosen a specific physics model of Geant4 by comparing the simulation results with dedicated test beam experiments. CMS continues to validate the physics models inside Geant4 using the test beam data as well as collision data. Several physics lists (collection of physics models) inside the most recent version of Geant4 provide good agreement of the...
Go to contribution page -
Simone Stracka (Universita di Pisa & INFN (IT))13/10/2016, 15:00
The goal of the “INFN-RETINA” R&D project is to develop and implement a parallel computational methodology that allows the reconstruction of events with an extremely high number (>100) of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full crossing frequency.
Our approach relies on a massively parallel...
Go to contribution page -
Ilija Vukotic (University of Chicago (US))13/10/2016, 15:00
Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost...
Go to contribution page -
13/10/2016, 15:00
The higher energy and luminosity from the LHC in Run2 has put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) in Run3 and beyond, it becomes clear the current model of CMS computing alone will not scale accordingly. High Performance Computing (HPC) facilities, widely used in scientific computing...
Go to contribution page -
Dr Aurora Tamborini (INFN Section of Pavia)13/10/2016, 15:00
Purpose
The aim of this work is the full simulation and measurement of a GEMPix (Gas Electron Multiplier) detector for a possible application as a monitor for beam verification at the CNAO Center (National Center for Oncological Hadrontherapy).
A triple GEMPix detector read out by 4 Timepix chips could provide beam monitoring, dose verification and quality checks with good resolution...
Go to contribution page -
Christopher Hollowell (Brookhaven National Laboratory)13/10/2016, 15:00
Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We've been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design/administrate some...
Go to contribution page -
Lukas Alexander Heinrich (New York University (US))13/10/2016, 15:00
LHC data analyses consist of workflows that utilize a diverse set of software tools to produce physics results. The different set of tools range from large software frameworks like Gaudi/Athena to single-purpose scripts written by the analysis teams. The analysis steps that lead to a particular physics result are often not reproducible without significant assistance from the original authors....
Go to contribution page -
Philippe Charpentier (CERN)13/10/2016, 15:15
In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, a pilot job to match a task for which the required CPU work is known, or to define the number of events to be processed knowing the CPU work per event. Otherwise one always runs the risk that the task is aborted because...
Go to contribution page -
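The matching arithmetic described above can be illustrated as follows; the benchmark unit, the safety margin and all numbers are assumptions.

```python
def events_to_process(remaining_wallclock_s, slot_power, work_per_event,
                      safety=0.8):
    """slot_power and work_per_event in the same benchmark unit (e.g. a HS06-like score)."""
    budget = remaining_wallclock_s * slot_power * safety   # available CPU work
    return int(budget // work_per_event)

print(events_to_process(remaining_wallclock_s=6 * 3600,    # 6 h slot
                        slot_power=10.0,                    # measured slot power
                        work_per_event=250.0))              # CPU work per event
```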
Prasanth Kothuri (CERN)13/10/2016, 15:15
This contribution is about sharing our recent experiences of building Hadoop-based applications. The Hadoop ecosystem now offers a myriad of tools which can overwhelm new users, yet there are successful ways these tools can be leveraged to solve problems. We look at factors to consider when using Hadoop to model and store data, best practices for moving data in and out of the system and common...
Go to contribution page -
Sergey Panitkin (Brookhaven National Laboratory (US))13/10/2016, 15:15
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While...
Go to contribution page -
Maxim Borisyak (National Research University Higher School of Economics (HSE) (RU); Yandex School of Data Analysis (RU))13/10/2016, 15:15
High-energy physics experiments rely on reconstruction of the trajectories of particles produced at the interaction point. This is a challenging task, especially in the high track multiplicity environment generated by p-p collisions at the LHC energies. A typical event includes hundreds of signal examples (interesting decays) and a significant amount of noise (uninteresting examples).
This...
Go to contribution page -
Tao Lin (IHEP), Tao Lin13/10/2016, 15:15
The JUNO (Jiangmen Underground Neutrino Observatory) is a multipurpose neutrino experiment which is mainly designed to determine neutrino mass hierarchy and precisely measure oscillation parameters. As one of the most important systems, the JUNO offline software is being developed using the SNiPER software. In this presentation, we focus on the requirements of JUNO simulation and present the...
Go to contribution page -
Valerio Bocci (Universita e INFN, Roma I (IT))13/10/2016, 15:30
The advent of microcontrollers with sufficient CPU power and with analog and digital peripherals makes it possible to design a complete acquisition system in one chip. The existence of a worldwide data infrastructure such as the internet allows us to envisage a distributed network of detectors capable of processing and sending data or responding to configuration commands.
The internet infrastructure allows us to do...
Go to contribution page -
Michal Svatos (Acad. of Sciences of the Czech Rep. (CZ))13/10/2016, 15:30
The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains...
Go to contribution page -
Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))13/10/2016, 15:30
The connection of diverse and sometimes non-Grid enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC...
Go to contribution page -
Ms SHAN ZENG (IHEP)13/10/2016, 15:30
High energy physics experiments produce huge amounts of raw data, but because of the sharing characteristics of the network resources, there is no guarantee of the available bandwidth for each experiment, which may cause link competition problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which can ensure...
Go to contribution page -
Elzbieta Banas (Polish Academy of Sciences (PL))13/10/2016, 15:30
The ATLAS Forward Proton (AFP) detector upgrade project consists of two forward detectors located at 205 m and 217 m on each side of the ATLAS experiment. The aim is to measure momenta and angles of diffractively scattered protons. In 2016 two detector stations on one side of the ATLAS interaction point have been installed and are being commissioned.
The detector infrastructure and necessary...
Go to contribution page -
Maria Girone (CERN)13/10/2016, 15:30
LHC Run3 and Run4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analyzed if these challenges are to be met with a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a...
Go to contribution page -
Marcel Rieger (Rheinisch-Westfaelische Tech. Hoch. (DE)), Robert Fischer (Rheinisch-Westfaelische Tech. Hoch. (DE))13/10/2016, 15:30
The Visual Physics Analysis (VISPA) project defines a toolbox for accessing software via the web. It is based on the latest web technologies and provides a powerful extension mechanism that makes it possible to interface a wide range of applications. Beyond basic applications such as a code editor, a file browser, or a terminal, it meets the demands of sophisticated experiment-specific use cases that focus...
Go to contribution page -
Gordon Watts (University of Washington (US))13/10/2016, 15:30
A modern high energy physics analysis code is complex. As it has for decades, it must handle high speed data I/O, corrections to physics objects applied at the last minute, and multi-pass scans to calculate corrections. An analysis has to accommodate multi-100 GB dataset sizes, multi-variate signal/background separation techniques, larger collaborative teams, and reproducibility and data...
Go to contribution page -
Emanuele Angelo Bagnaschi (DESY Hamburg), Isabel Campos Plasencia (Consejo Superior de Investigaciones Cientificas (CSIC) (ES))13/10/2016, 15:30
The MasterCode collaboration (http://cern.ch/mastercode) is concerned with the investigation of supersymmetric models that go beyond the current status of the Standard Model of particle physics. It involves teams from CERN, DESY, Fermilab, SLAC, CSIC, INFN, NIKHEF, Imperial College London,King's College London, the Universities of Amsterdam, Antwerpen, Bristol, Minnesota and ETH...
Go to contribution page -
Andrey Kirianov (B.P. Konstantinov Petersburg Nuclear Physics Institute - PNPI ()13/10/2016, 15:30
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted national physics groups to evaluate new data handling and processing solutions. Russian grid sites and universities’ clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics...
Go to contribution page -
Nathalie Rauschmayr (CERN)13/10/2016, 15:30
Memory has become a critical parameter for many HEP applications and as a consequence some experiments had already to move from single- to multicore jobs. However in the case of LHC experiment software, benchmark studies have shown that many applications are able to run with a much lower memory footprint than what is actually allocated. In certain cases even half of the allocated memory being...
Go to contribution page -
Miguel Rubio-Roy (CNRS)13/10/2016, 15:30
Data quality monitoring (DQM) in high-energy physics (HEP) experiments is essential and widely implemented in most large experiments. It provides important real-time information during the commissioning and production phases that allows the early identification of potential issues and eases their resolution.
Existing and performant solutions for online monitoring exist for large experiments...
Go to contribution page -
13/10/2016, 15:30
In 2016 the Large Hadron Collider (LHC) will continue to explore the physics at the high-energy frontier. The integrated luminosity is expected to be about 25 fb$^{-1}$ in 2016 with the estimated peak luminosity of around 1.1 $\times$ 10$^{34}$ cm$^{-2}$ s$^{-1}$ and the peak mean pile-up of about 30. The CMS experiment will upgrade its hardware-based Level-1 trigger system to keep its...
Go to contribution page -
13/10/2016, 15:30
EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various work-flows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users.
EOS can be used to take advantage of wide-area distributed installations: for the last few years CERN EOS uses a common deployment...
Go to contribution page -
Andrey Shevel (Petersburg Nuclear Physics Institute - PNPI, ITMO University)13/10/2016, 15:30
The volume of data produced in HEP is growing, as is the volume of data that has to be kept for long periods. This large volume of data – big data – is in practice distributed around the planet. In other words, data storage now integrates storage resources from many data centers located far from each other. This means that the methods and approaches used to organize and manage the...
Go to contribution page -
Dr Ran Du (Computing Center, Institute of High Energy Physics, University of Chinese Academy of Sciences)13/10/2016, 15:30
HazelNut is a block-based Hierarchical Storage System, in which logical data blocks are migrated among storage tiers to achieve better I/O performance. In order to choose which blocks to migrate, the data block I/O process is traced to collect enough information for the migration algorithms. There are many ways to trace the I/O process and implement block migration. However, how to choose trace metrics and ...
Go to contribution page -
Helge Meinhard (CERN)13/10/2016, 15:30
The HELIX NEBULA Science Cloud (HNSciCloud) project (presented in general by another contribution) is run by a consortium of ten procurers and two other partners; it is funded partly by the European Commission, has a total volume of 5.5 MEUR and runs from January 2016 to June 2018. By its nature as a pre-commercial procurement (PCP) project, it addresses needs that are not covered by any...
Go to contribution page -
Slava Krutelyov (Univ. of California San Diego (US))13/10/2016, 15:30
High luminosity operations of the LHC are expected to deliver proton-proton collisions to experiments with the average number of pp interactions reaching 200 every bunch crossing. Reconstruction of charged particle tracks in this environment is computationally challenging. At CMS, charged particle tracking in the outer silicon tracker detector is among the largest contributors to the overall CPU...
Go to contribution page -
13/10/2016, 15:30
The development of and new discoveries by a new generation of high-energy physics experiments cannot be separated from mass data processing and analysis. The BESIII experiment studies physics in the tau-charm energy region from 2 GeV to 4.6 GeV at the Institute of High Energy Physics (IHEP) in Beijing, China; it is a typical data-intensive computing case requiring mass storage and efficient computing...
Go to contribution page -
Sergey Linev (GSI DARMSTADT)13/10/2016, 15:30
JavaScript ROOT (JSROOT) aims to provide ROOT-like graphics in web browsers. JSROOT supports reading of binary and JSON ROOT files, and drawing of ROOT classes like histograms (TH1/TH2/TH3), graphs (TGraph), functions (TF1) and many others. JSROOT implements a user interface for THttpServer-based applications.
With the version 4 of JSROOT, many improvements and new features are...
Go to contribution page -
13/10/2016, 15:30
The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000...
Go to contribution page -
Josh Bendavid (California Institute of Technology (US))13/10/2016, 15:30
The increases in both luminosity and center of mass energy of the LHC in Run 2 impose more stringent requirements on the accuracy of the Monte Carlo simulation. An important element in this is the inclusion of matrix elements with high parton multiplicity and NLO accuracy, with the corresponding increase in computing requirements for the matrix element generation step posing a significant...
Go to contribution page -
michele pezzi (Infn-cnaf)13/10/2016, 15:30
The long term preservation and sharing of scientific data is becoming nowadays an integral part of any new scientific project. In High Energy Physics experiments (HEP) this is particularly challenging, given the large amount of data to be preserved and the fact that each experiment has its own specific computing model. In the case of HEP experiments that have already concluded the data taking...
Go to contribution page -
13/10/2016, 15:30
Used as lightweight virtual machines or as enhanced chroot environments, Linux containers, and in particular the Docker abstraction over them, are more and more popular in the virtualization communities.
The LHCb Core Software team decided to investigate how to use Docker containers to provide stable and reliable build environments for the different supported platforms, including the obsolete...
Go to contribution page -
13/10/2016, 15:30
Because of user demand and to support new development workflows based on code review and multiple development streams, LHCb decided to port the source code management from Subversion to Git, using the CERN GitLab hosting service.
Although tools exist for this kind of migration, LHCb specificities and development models required careful planning of the migration, development of migration...
Go to contribution page -
Ben Couturier (CERN), Christophe Haen (CERN)13/10/2016, 15:30
The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VM) or bare metal hardware. Due to the increased workload, high availability is becoming more and...
Go to contribution page -
13/10/2016, 15:30
LStore was developed to satisfy the ever-growing need for cost-effective, fault-tolerant, distributed storage. By using erasure coding for fault-tolerance, LStore has an order of magnitude lower probability of data loss than traditional 3-replica storage while incurring 1/2 the storage overhead. LStore was integrated with the Data Logistics Toolkit (DLT) to introduce LStore to a wider...
Go to contribution page -
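The storage-overhead claim can be checked with back-of-the-envelope arithmetic; the 6+3 code below is an illustrative choice, not necessarily LStore's actual parameters.

```python
# A k+m erasure code keeps k data and m parity blocks on distinct devices,
# tolerates any m simultaneous device losses, and costs (k+m)/k in raw space,
# versus a factor 3 for triple replication (which tolerates 2 losses).
def erasure_overhead(k, m):
    return (k + m) / k

k, m = 6, 3
print("3-replica    : overhead 3.00x, tolerates 2 device losses")
print(f"{k}+{m} erasure  : overhead {erasure_overhead(k, m):.2f}x, tolerates {m} device losses")
```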
Christopher Jon Lee (University of Cape Town (ZA))13/10/2016, 15:30
Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm is comprised of ~4000 servers processing the data read out from ~100 million detector channels through multiple trigger levels. Configuring of these servers is not an easy task,...
Go to contribution page -
Michael David Sokoloff (University of Cincinnati (US))13/10/2016, 15:30
MCBooster is a header-only, C++11-compliant library for the generation of large samples of phase-space Monte Carlo events on massively parallel platforms. It was released on GitHub in the spring of 2016. The library core algorithms implement the Raubold-Lynch method; they are able to generate the full kinematics of decays with up to nine particles in the final state. The library supports the...
Go to contribution page -
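As background, phase-space generators such as the one above build on elementary decay kinematics; the sketch below computes the daughter momentum for a two-body decay from the Källén (triangle) function. It is not the Raubold-Lynch algorithm itself, and the masses are approximate.

```python
from math import sqrt

def kallen(a, b, c):
    """Källén triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * (a * b + a * c + b * c)

def two_body_momentum(M, m1, m2):
    """|p| of either daughter in the rest frame of a parent of mass M."""
    return sqrt(kallen(M * M, m1 * m1, m2 * m2)) / (2 * M)

# Example: D0 -> K- pi+ (masses in GeV, approximate); expect about 0.86 GeV.
print(f"{two_body_momentum(1.865, 0.494, 0.140):.3f} GeV")
```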
13/10/2016, 15:30
The European project INDIGO-DataCloud aims at developing an advanced computing and data platform. It provides advanced PaaS functionalities to orchestrate the deployment of Long-Running Services (LRS) and the execution of jobs (workloads) across multiple sites through a federated AAI architecture.
The multi-level and multi-site orchestration and scheduling capabilities of the INDIGO PaaS...
Go to contribution page -
Jack Cranshaw (Argonne National Laboratory (US))13/10/2016, 15:30
High energy physics experiments are implementing highly parallel solutions for event processing on resources that support concurrency at multiple levels. These range from the inherent large-scale parallelism of HPC resources to the multiprocessing and multithreading needed for effective use of multi-core and GPU-augmented nodes. Such modes of processing, and the efficient opportunistic use of...
Go to contribution page -
13/10/2016, 15:30
Any time you modify an implementation within a program, change the compiler version or the operating system, you should also do regression testing. You can do regression testing by rerunning existing tests against the changes to determine whether this breaks anything that worked prior to the change, and by writing new tests where necessary. At LHCb we have a huge codebase which is maintained by many...
Go to contribution page -
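A minimal example of the kind of regression check described, assuming outputs are stored as JSON files of named numbers; the file names and tolerance are illustrative.

```python
import json

def compare_to_reference(new_path, ref_path, rel_tol=1e-9):
    """Fail if any observable drifts from the stored reference beyond tolerance."""
    with open(new_path) as f_new, open(ref_path) as f_ref:
        new, ref = json.load(f_new), json.load(f_ref)
    assert new.keys() == ref.keys(), "set of observables changed"
    for key in ref:
        a, b = new[key], ref[key]
        assert abs(a - b) <= rel_tol * max(abs(a), abs(b), 1.0), \
            f"{key}: {a} differs from reference {b}"

# Typical use after a rebuild with a new compiler or platform:
# compare_to_reference("results/new_run.json", "references/v42.json")
```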
Andreas Gellrich (DESY)13/10/2016, 15:30
Collaborative services and tools are essential for any (HEP) experiment. They help to integrate global virtual communities by allowing members to share and exchange relevant information by way of web-based services. Typical examples are public and internal web pages, wikis, mailing list services, issue tracking systems, services for meeting organization and document and authorship...
Go to contribution page -
Stephen Jones (Liverpool University)13/10/2016, 15:30
Traditional T2 grid sites still process large amounts of data flowing from the LHC and elsewhere. More flexible technologies, such as virtualisation and containerisation, are rapidly changing the landscape, but the right migration paths to these sunlit uplands are not well defined yet. We report on the innovations and pressures that are driving these changes and we discuss their pros and cons....
Go to contribution page -
Roland Sipos (Eotvos Lorand University (HU))13/10/2016, 15:30
The Compact Muon Solenoid (CMS) experiment makes a vast use of alignment and calibration measurements in several crucial workflows: in the event selection at the High Level Trigger (HLT), in the processing of the recorded collisions and in the production of simulated events. A suite of services addresses the key requirements for the handling of the alignment and calibration conditions such as:...
Go to contribution page -
13/10/2016, 15:30
Monitoring the quality of the data (DQM) is crucial in a high-energy physics experiment to ensure the correct functioning of the apparatus during the data taking. DQM at LHCb is carried out in two phases. The first one is performed on-site, in real time, using unprocessed data directly from the LHCb detector, while the second, also performed on-site, requires the reconstruction of the data...
Go to contribution page -
Andrea Dotti (SLAC National Accelerator Laboratory (US))13/10/2016, 15:30
As more detailed and complex simulations are required in different application domains, there is much interest in adapting the code for parallel and multi-core architectures. Parallelism can be achieved by tracking many particles at the same time. This work presents MPEXS, a CUDA implementation of the core Geant4 algorithm used for the simulation of electro-magnetic interactions (electron,...
Go to contribution page -
Donato De Girolamo (INFN)13/10/2016, 15:30
In a large Data Center, such as an LHC Tier-1, where the structure of the Local Area Network and Cloud Computing Systems varies on a daily basis, network management has become more and more complex.
In order to improve the operational management of the network, this article presents a real-time network topology auto-discovery tool named Netfinder.
The information required for effective...
Go to contribution page -
Jakob Blomer (CERN)13/10/2016, 15:30
The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the ``Docker...
Go to contribution page -
Robert Fay (University of Liverpool (GB))13/10/2016, 15:30
Monitoring of IT infrastructure and services is essential to maximize availability and minimize disruption, by detecting failures and developing issues to allow rapid intervention.
The HEP group at Liverpool have been working on a project to modernize local monitoring infrastructure (previously provided using Nagios and ganglia) with the goal of increasing coverage, improving visualization...
Go to contribution page -
Vincent Garonne (University of Oslo (NO))13/10/2016, 15:30
In this paper, we'll talk about our experiences with different data storage technologies within the ATLAS Distributed Data Management system, and in particular about object-based storage. Object-based storage differs in many points from traditional file system storage and offers a highly scalable, simple and most common storage solution for the cloud. First, we describe the needed changes in...
Go to contribution page -
13/10/2016, 15:30
The offline software for the CMS Level-1 trigger provides a reliable bitwise emulation of the high-speed custom FPGA-based hardware at the foundation of the CMS data acquisition system. The staged upgrade of the trigger system requires flexible software that accurately reproduces the system at each stage using recorded running conditions. The high intensity of the upgraded LHC necessitates new...
Go to contribution page -
Ioannis Charalampidis (CERN)13/10/2016, 15:30
With the demand for more computing power and the widespread use of parallel and distributed computing, applications are looking for message-based transport solutions for fast, stateless communication. There are many solutions already available, with competing performances, but with varying APIs, making it difficult to support all of them. Trying to find a solution to this problem we decided to...
Go to contribution page -
Lisa Zangrando (Universita e INFN, Padova (IT))13/10/2016, 15:30
Managing resource allocation in a Cloud based data center serving multiple virtual organizations is a challenging issue. In fact, while batch systems are able to allocate resources to different user groups according to specific shares imposed by the data center administrators, without a static partitioning of such resources, this is not so straightforward in the most common Cloud frameworks,...
Go to contribution page -
Alexandre Lossent (CERN)13/10/2016, 15:30
The CERN Web Frameworks team has deployed OpenShift Origin to facilitate deployment of web applications and improve resource efficiency. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-service solution oriented for web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site...
Go to contribution page -
13/10/2016, 15:30
The PANDA experiment, one of the four scientific pillars of the FAIR facility currently under construction in Darmstadt, Germany, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5–15 GeV/c on a fixed proton target.
Because of the broad physics scope and the similar signature of signal and background events in the energy region of...
Go to contribution page -
Peter Hobson (Brunel University (GB))13/10/2016, 15:30
This work combines metric trees and parallel computing on both multi-GPU and distributed memory architectures when applied to multi-million or even billion-body simulations. Metric trees are data structures for indexing multidimensional sets of points in arbitrary metric spaces. First proposed by Jeffrey K. Uhlmann [1] as a structure to efficiently solve neighbourhood queries, they have...
Go to contribution page -
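To illustrate why metric trees speed up neighbourhood queries, the sketch below builds a small vantage-point tree, one member of the metric-tree family, and uses the triangle inequality to skip whole subtrees during a range query. It is a conceptual example, not the paper's implementation, and omits all performance refinements.

```python
import random

def build(points, dist):
    """Build a vantage-point tree over `points` under the metric `dist`."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return {"vp": vp, "mu": 0.0, "inner": None, "outer": None}
    mu = sorted(dist(vp, p) for p in rest)[len(rest) // 2]   # median distance
    inner = [p for p in rest if dist(vp, p) <= mu]
    outer = [p for p in rest if dist(vp, p) > mu]
    return {"vp": vp, "mu": mu, "inner": build(inner, dist), "outer": build(outer, dist)}

def query(node, target, radius, dist, found):
    """Collect all points within `radius` of `target`."""
    if node is None:
        return
    d = dist(node["vp"], target)
    if d <= radius:
        found.append(node["vp"])
    if d - radius <= node["mu"]:              # inner ball may overlap the query ball
        query(node["inner"], target, radius, dist, found)
    if d + radius > node["mu"]:               # outer shell may overlap the query ball
        query(node["outer"], target, radius, dist, found)

dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # any metric works
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(1000)]
tree = build(pts, dist)
hits = []
query(tree, (0.5, 0.5), 0.05, dist, hits)
print(len(hits), "points within radius")
```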
Remi Mommsen (Fermi National Accelerator Lab. (US))13/10/2016, 15:30
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz. It transports event data at an aggregate throughput of ~100 GB/s to the high-level trigger (HLT) farm. The CMS DAQ system has been completely rebuilt during the first long shutdown of the LHC in 2013/14. The new DAQ architecture is based on state-of-the-art...
Go to contribution page -
13/10/2016, 15:30
Graphical Processing Units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures available, and they are nowadays entering the High Energy Physics field. GooFit is an open source tool interfacing ROOT/RooFit to the CUDA platform on nVidia GPUs (it also supports OpenMP). Specifically, it acts as an interface between the MINUIT minimization algorithm and a...
Go to contribution page -
Matteo Concas (Universita e INFN Torino (IT))13/10/2016, 15:30
The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns.
The work we present describes the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton[1]: a lightweight fire-and-forget background service that spawns and controls a local pool of Docker...
Go to contribution page -
Baosong Shan (Beihang University (CN))13/10/2016, 15:30
The Alpha Magnetic Spectrometer (AMS) on board of the International Space Station (ISS) requires a large amount of computing power for data production and Monte Carlo simulation. A large fraction of the computing resource has been contributed by the computing centers among the AMS collaboration. AMS has 12 “remote” computing centers outside of Science Operation Center at CERN, with different...
Go to contribution page -
David Schultz (University of Wisconsin-Madison)13/10/2016, 15:30
A major challenge for data production at the IceCube Neutrino Observatory presents itself in connecting a large set of small clusters together to form a larger computing grid. Most of these clusters do not provide a Grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main...
Go to contribution page -
Joshua Heneage Dawes (University of Manchester (GB))13/10/2016, 15:30
The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed in the two years of the first LHC long shutdown to respond to use cases from the preceding...
Go to contribution page -
Wim Lavrijsen (Lawrence Berkeley National Lab. (US))13/10/2016, 15:30
Cppyy provides fully automatic Python/C++ language bindings and in so doing covers a vast number of use cases. Use of conventions and known common patterns in C++ (such as smart pointers, STL iterators, etc.) allows us to make these C++ constructs more "pythonistic." We call these treatments "pythonizations", as the strictly bound C++ code is turned into bound code that has a Python "feel."...
Go to contribution page -
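A small self-contained illustration of the automatic binding and of adding a Python "feel" on top; the C++ class is made up for the example, and attaching __len__ by hand stands in for the library's pythonization hooks.

```python
import cppyy

# Define a tiny C++ class on the fly; cppyy binds it automatically.
cppyy.cppdef("""
class Counter {
    int fCount = 0;
public:
    void add(int n) { fCount += n; }
    int  count() const { return fCount; }
};
""")

Counter = cppyy.gbl.Counter          # bound class, no wrapper code written by hand

# A hand-written "pythonization": make the bound class usable with len().
Counter.__len__ = lambda self: self.count()

c = Counter()
c.add(3)
c.add(4)
print(len(c))                        # -> 7
```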
Krzysztof Marian Korcyl (Polish Academy of Sciences (PL))13/10/2016, 15:30
AFP, the ATLAS Forward Proton detector upgrade project, consists of two forward detectors at 205 m and 217 m on each side of the ATLAS experiment at the LHC. The new detectors aim to measure momenta and angles of diffractively scattered protons. In 2016 two detector stations on one side of the ATLAS interaction point have been installed and are being commissioned.
The front-end electronics...
Go to contribution page -
Tomasz Szumlak (AGH University of Science and Technology (PL))13/10/2016, 15:30
The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1 MHz, a rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown...
Go to contribution page -
Cristovao Cordeiro (CERN)13/10/2016, 15:30
The ongoing integration of clouds into the WLCG raises the need for a detailed health and performance monitoring of the virtual resources in order to prevent problems of degraded service and interruptions due to undetected failures. When working in scale, the existing monitoring diversity can lead to a metric overflow whereby the operators need to manually collect and correlate data from...
Go to contribution page -
Daren Lewis Sawkey13/10/2016, 15:30
In this work we report on recent progress of the Geant4 electromagnetic (EM) physics sub-packages. A number of new interfaces and models recently introduced are already used in LHC applications and may be useful for any type of simulation.
To improve usability, a new set of User Interface (UI) commands and corresponding C++ interfaces have been added for easier configuration of EM physics. In...
Go to contribution page -
Dr Shengsen Sun (Institute of High Energy Physics, Chinese Academy of Sciences)13/10/2016, 15:30
The endcap time-of-flight (TOF) detector of the BESIII experiment at BEPCII was upgraded based on multigap resistive plate chamber technology. During 2015-2016 data taking the TOF system achieved a total time resolution of 65 ps for electrons in Bhabha events. Details of the reconstruction and calibration procedures, detector alignment and performance with data will be described.
Go to contribution page -
Jason Webb (Brookhaven National Lab)13/10/2016, 15:30
The STAR Heavy Flavor Tracker (HFT) was designed to provide high-precision tracking for the identification of charmed hadron decays in heavy ion collisions at RHIC. It consists of three independently mounted subsystems, providing four precision measurements along the track trajectory, with the goal of pointing decay daughters back to vertices displaced by <100 microns from the primary event...
Go to contribution page -
54. Research and application of OpenStack in the Chinese Spallation Neutron Source computing environment - Yakang Li (IHEP)13/10/2016, 15:30
Cloud computing can make IT resource configuration flexible and reduce hardware cost; it can also provide computing services according to real need. We are applying this computing model to the Chinese Spallation Neutron Source (CSNS) computing environment. From the research and practice aspects, firstly, the application status of cloud computing in High Energy Physics experiments...
Go to contribution page -
13/10/2016, 15:30
IhepCloud is a multi-user virtualization platform based on OpenStack Icehouse and deployed in November 2014. The platform provides multiple types of virtual machine, such as test VMs, UIs and WNs, and is part of the local computing system. There are 21 physical machines and 120 users on this platform, with about 300 virtual machines running on it.
Upgrading IhepCloud from Icehouse to Kilo is difficult,... -
Igor Pelevanyuk (Joint Inst. for Nuclear Research (RU)), Mr Jiong Chen (SuZhou University)13/10/2016, 15:30
Multi-VO support based on DIRAC has been set up to provide workload and data management for several high energy experiments at IHEP. The distributed computing platform has 19 heterogeneous sites, including cluster, Grid and cloud resources. The heterogeneous resources belong to different Virtual Organizations. Due to the scale and heterogeneity, it is complicated to monitor and manage these resources...
Go to contribution page -
Vincent Garonne (University of Oslo (NO))13/10/2016, 15:30
One of the biggest challenges with a large-scale data management system is to ensure consistency between the global file catalogue and what is physically present on all storage elements.
To tackle this issue, the Rucio software, which is used by the ATLAS Distributed Data Management system, has been extended to automatically handle lost or unregistered files (aka Dark Data). This system automatically... -
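The core consistency check can be pictured as a set difference between a catalogue dump and a storage-element dump. A schematic Python sketch follows; the file names and their one-path-per-line format are assumptions, and this is not Rucio code:

```python
# Schematic sketch: detecting dark and lost files by comparing a catalogue dump
# with a storage-element dump. Dump names and formats are assumed for illustration.

def load_paths(dump_file):
    """Read one file path per line from a dump."""
    with open(dump_file) as f:
        return {line.strip() for line in f if line.strip()}

catalog = load_paths("rucio_catalog_dump.txt")    # what the catalogue believes exists
storage = load_paths("storage_element_dump.txt")  # what is physically on the storage element

dark_data = storage - catalog   # on disk but unknown to the catalogue: candidates for cleanup
lost_data = catalog - storage   # registered but missing on disk: declare lost or recover

print(f"dark: {len(dark_data)}  lost: {len(lost_data)}")
```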
Thomas Beermann (CERN)13/10/2016, 15:30
With the current distributed data management system for ATLAS, called Rucio, all user interactions, e.g. the Rucio command line tools or the ATLAS workload management system, communicate with Rucio through the same REST API. This common interface makes it possible to interact with Rucio from many different programming languages, including JavaScript. Using common web... -
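A hedged sketch of what such a REST interaction looks like from any client language, here in Python with requests; the server URL, endpoint path and token header name are assumptions for illustration:

```python
# Hedged sketch of talking to a Rucio-style REST API over plain HTTP.
# Server URL, endpoint path and header name below are assumptions, not verified API details.
import requests

RUCIO_SERVER = "https://rucio.example.org"   # hypothetical server
TOKEN = "..."                                # token obtained from the auth endpoint beforehand

resp = requests.get(
    f"{RUCIO_SERVER}/dids/user.jdoe/my_dataset",   # hypothetical dataset lookup endpoint
    headers={"X-Rucio-Auth-Token": TOKEN},         # assumed token header
)
resp.raise_for_status()
print(resp.json())
```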
Miguel Martinez Pedreira (CERN)13/10/2016, 15:30
The AliEn file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are an integral part of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage...
Go to contribution page -
Anna Elizabeth Woodard (University of Notre Dame (US)), Matthias Wolf (University of Notre Dame (US))13/10/2016, 15:30
The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we...
Go to contribution page -
Andrzej Dworak (CERN)13/10/2016, 15:30
A central timing (CT) is a dedicated system responsible for driving an accelerator's behaviour. It allows operation teams to interactively select and schedule cycles. While executing a scheduled cycle, a CT sends out events which (a) provide precise synchronization and (b) tell all equipment operating the accelerator what to do. The events are also used to synchronize accelerators...
Go to contribution page -
Mrs cong wang (CC-IHEP)13/10/2016, 15:30
Simulation has been used for decades in various areas of computer science, such as network protocol design and microprocessor design. By comparison, current practice in storage simulation is in its infancy. We are therefore building a simulator with SimGrid to simulate the storage part of an application. Cluefs is a lightweight utility to collect data on the I/O events induced by an application...
Go to contribution page -
Enrico Bagli (INFN)13/10/2016, 15:30
Beam manipulation of high- and very-high-energy particle beams is a hot topic in accelerator physics. Coherent effects of ultra-relativistic particles in bent crystals allow the steering of particle trajectories thanks to the strong electrical field generated between atomic planes. Recently, a collimation experiment with bent crystals was carried out at the CERN-LHC [1], paving the way to the...
Go to contribution page -
Andrea Dotti (SLAC National Accelerator Laboratory (US))13/10/2016, 15:30
Geant4 is a toolkit for the simulation of the passage of particles through matter. Its areas of application include high energy, nuclear and accelerator physics as well as studies in medical and space science.
The Geant4 collaboration regularly performs validation and regression tests through its development cycle. A validation test compares results obtained with a specific Geant4 version... -
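As a sketch of what such a regression comparison amounts to (this is not the Geant4 validation suite itself), a simple chi-square test between binned results obtained with two versions:

```python
# Illustrative regression check: compare a binned observable produced by two
# Geant4 versions with a chi-square test. Bin contents below are made up.
import numpy as np
from scipy.stats import chi2

def chi_square(counts_a, counts_b):
    """Chi-square statistic between two histograms, assuming Poisson bin errors."""
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    mask = (a + b) > 0
    stat = np.sum((a[mask] - b[mask]) ** 2 / (a[mask] + b[mask]))
    ndf = int(mask.sum())
    return stat, ndf, chi2.sf(stat, ndf)   # p-value of the comparison

old_version = [120, 340, 560, 300, 90]   # toy bin contents for version N
new_version = [118, 352, 548, 310, 85]   # toy bin contents for version N+1
print(chi_square(old_version, new_version))
```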
Dr Mustafa Mustafa (Lawrence Berkeley National Laboratory)13/10/2016, 15:30
The expected growth in HPC capacity over the next decade makes such resources attractive for meeting the future computing needs of HEP/NP experiments, especially as their cost is becoming comparable to traditional clusters. However, HPC facilities rely on features like specialized operating systems and hardware to enhance performance, which make them difficult to use without significant changes...
Go to contribution page -
Claude Andre Pruneau (Wayne State University (US))13/10/2016, 15:30
Swift is a compiled object-oriented language similar in spirit to C++ but with the coding simplicity of a scripting language. Built with the LLVM compiler framework used within Xcode 6 and later versions, Swift features interoperability with C, Objective-C, and C++ code, truly comprehensive debugging and documentation features, and a host of language features that make for rapid and effective...
Go to contribution page -
Vitaly Choutko (Massachusetts Inst. of Technology (US))13/10/2016, 15:30
This paper introduces the storage strategy and tools of the science data of the Alpha Magnetic Spectrometer (AMS) at Science Operation Center (SOC) at CERN.
The AMS science data includes flight data, reconstructed data and simulation data, as well as their metadata. The data volume is 1070 TB per year of operation and has currently reached 5086 TB in total. We have two storage levels:...
Go to contribution page -
David Crooks (University of Glasgow (GB))13/10/2016, 15:30
Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where "fat" Tier-2s ("T2Ds") and "thin" Tier-2s ("T2Cs") provide different levels of service.
In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and... -
Adrian Bevan (University of London (GB))13/10/2016, 15:30
We review the concept of support vector machines before proceeding to discuss examples of their use in a number of scenarios. Using the Toolkit for Multivariate Analysis (TMVA) implementation we discuss examples relevant to HEP including background suppression for H->tau+tau- at the LHC. The use of several different kernel functions and performance benchmarking is discussed.
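As a concept illustration only (the contribution uses TMVA's SVM implementation; this stand-in sketch uses scikit-learn instead), an RBF-kernel SVM separating toy signal from background:

```python
# Stand-in sketch with scikit-learn, not TMVA: an RBF-kernel SVM trained to
# separate toy "signal" from "background" events in two variables.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
signal = rng.normal(loc=+1.0, scale=1.0, size=(1000, 2))      # toy signal events
background = rng.normal(loc=-1.0, scale=1.0, size=(1000, 2))  # toy background events

X = np.vstack([signal, background])
y = np.concatenate([np.ones(1000), np.zeros(1000)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # kernel choice echoes the benchmarking theme
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```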
Go to contribution page -
Christos Lazaridis (University of Wisconsin-Madison (US))13/10/2016, 15:30
The Large Hadron Collider at CERN restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The instantaneous luminosity is expected to increase significantly in the coming years. An upgraded Level-1 trigger system is being deployed in the CMS experiment in order to maintain the same efficiencies for searches and precision measurements as those achieved in the previous run. This system... -
Daniele Francesco Kruse (CERN)13/10/2016, 15:30
CERN currently manages the largest data archive in the HEP domain; over 135 PB of custodial data is archived across 7 enterprise tape libraries containing more than 20,000 tapes and using over 80 tape drives. Archival storage at this scale requires a leading-edge monitoring infrastructure that acquires live and lifelong metrics from the hardware in order to assess and proactively identify...
Go to contribution page -
Riccardo Maria Bianchi (University of Pittsburgh (US))13/10/2016, 15:30
The ATLAS collaboration has recently setup a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithms improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of...
Go to contribution page -
Jose Guillermo Panduro Vazquez (Royal Holloway, University of London)13/10/2016, 15:30
The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Trigger and Data Acquisition system has been upgraded to deal with the increased event rates. The dataflow element of the system is distributed across hardware and software and is responsible for buffering and transporting event data from the Readout system...
Go to contribution page -
Tim Martin (University of Warwick (GB))13/10/2016, 15:30
The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment at the LHC has an average recording rate of about 1000 Hz. To reduce the rate of events but still maintain a high efficiency of selecting rare events such as physics signals beyond the Standard Model, a two-level trigger system is used in ATLAS. Events are selected based on physics signatures such as...
Go to contribution page -
Dr Nicola De Filippis (Politecnico e INFN Bari (IT))13/10/2016, 15:30
An overview of the CMS Data Analysis School (CMSDAS) model and experience is provided. The CMSDAS is the official school that CMS organizes every year in the US, in Europe and in Asia to train students, Ph.D. candidates and young post-docs for physics analysis. It consists of two days of short exercises about physics object reconstruction and identification and 2.5 days of long exercises about physics...
Go to contribution page -
Dr Jiri Chudoba (CESNET), Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))13/10/2016, 15:30
The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET as the Czech National Research and Education Network (NREN) provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage...
Go to contribution page -
Francesco Prelz (Università degli Studi e INFN Milano (IT))13/10/2016, 15:30
In the sociology of small- to mid-sized (O(100) collaborators) experiments the issue of data collection and storage is sometimes felt as a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive towards the integration of otherwise loosely coupled detector systems. As such it may be hard to complete with off-the-shelf... -
Gordon Watts (University of Washington (US))13/10/2016, 15:30
The Data and Software Preservation for Open Science (DASPOS) collaboration has developed an ontology for describing particle physics analyses. The ontology, a series of data triples, is designed to describe dataset, selection cuts, and measured quantities for an analysis. The ontology specification, written in the Web Ontology Language (OWL), is designed to be interpreted by many pre-existing...
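A hedged sketch of what such triples look like, written here with rdflib; the namespace and property names below are hypothetical stand-ins, not the actual DASPOS ontology terms:

```python
# Hedged sketch with rdflib: an analysis described as data triples.
# The namespace and all property/term names are hypothetical.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/analysis#")   # placeholder namespace
g = Graph()

analysis = EX.ExampleSearch2016
g.add((analysis, EX.usesDataset, Literal("data16_13TeV")))          # dataset
g.add((analysis, EX.appliesCut, Literal("pT(lepton) > 25 GeV")))    # selection cut
g.add((analysis, EX.measures, Literal("dilepton invariant mass")))  # measured quantity

print(g.serialize(format="turtle"))
```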
Go to contribution page -
Mr Vincenzo Capone (GÉANT)13/10/2016, 15:30
The growth in size and geographical distribution of scientific collaborations, while enabling researchers to achieve ever higher and bolder results, also poses new technological challenges, one of these being the additional effort to analyse and troubleshoot network flows that travel for thousands of miles, traversing a number of different network domains. While the day-to-day multi-domain...
Go to contribution page -
13/10/2016, 15:30
In order to generate the huge number of Monte Carlo events that will be required by the ATLAS experiment over the next several runs, a very fast simulation is critical. Fast detector simulation alone, however, is insufficient: with very high numbers of simultaneous proton-proton collisions expected in Run 3 and beyond, the digitization (detector response emulation) and event reconstruction...
Go to contribution page -
Paul Millar (DESY)13/10/2016, 15:30
Contemporary distributed computing infrastructures (DCIs) are not easily and securely accessible by common users. Computing environments are typically hard to integrate due to interoperability problems resulting from the use of different authentication mechanisms, identity negotiation protocols and access control policies. Such limitations have a big impact on the user experience making it...
Go to contribution page -
13/10/2016, 15:30
High-throughput computing requires resources to be allocated so that jobs can be run. In a highly distributed environment that may be comprised of multiple levels of queueing, it may not be certain where, what and when jobs will run. It is therefore desirable to first acquire the resource before assigning it a job. This late-binding approach has been implemented in resources managed by batch...
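A schematic sketch of the late-binding idea (all function names and values below are hypothetical): the pilot claims and inspects the resource first, and only then pulls a payload that matches it:

```python
# Schematic late-binding pilot, for illustration only: the resource is acquired
# first, then a payload matching its capabilities is fetched and run.
import subprocess

def inspect_resource():
    """Probe the node the pilot landed on (cores, memory, remaining wall time, ...)."""
    return {"cores": 8, "memory_gb": 16}   # placeholder values

def fetch_matching_payload(capabilities):
    """Ask the central task queue for a job that fits these capabilities."""
    # In a real system this would be an authenticated call to the workload manager.
    return {"command": ["echo", f"running on {capabilities['cores']} cores"]}

def run_pilot():
    capabilities = inspect_resource()                 # the resource is already ours here
    payload = fetch_matching_payload(capabilities)    # the job is bound to the resource late
    subprocess.run(payload["command"], check=True)

if __name__ == "__main__":
    run_pilot()
```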
Go to contribution page -
Mikhail Hushchyn (Yandex School of Data Analysis (RU))13/10/2016, 15:30
The LHCb Grid access is based on the LHCbDirac system. It provides access to data and computational resources to researchers at different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential for the...
Go to contribution page -
13/10/2016, 15:30
Within the HEPiX virtualization group and the WLCG MJF Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a...
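A minimal sketch of how a payload might read this information from the filesystem; the MACHINEFEATURES/JOBFEATURES environment variables and the specific key file names are assumptions based on the MJF specification:

```python
# Hedged sketch of reading Machine/Job Features from inside a payload.
# Environment variable names and feature keys are assumed from the MJF specification.
import os

def read_feature(base_dir, key):
    """Each feature is a small file named after the key, containing a single value."""
    try:
        with open(os.path.join(base_dir, key)) as f:
            return f.read().strip()
    except OSError:
        return None   # feature not provided on this resource

machine_dir = os.environ.get("MACHINEFEATURES")
job_dir = os.environ.get("JOBFEATURES")

if machine_dir and job_dir:
    print("HS06 power of this host:", read_feature(machine_dir, "hs06"))
    print("Wall-time limit (s):", read_feature(job_dir, "wall_limit_secs"))
```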
Go to contribution page -
Dr Vardan Gyurjyan (Jefferson Lab)13/10/2016, 15:30
IO optimizations, along with the vertical and horizontal elasticity of an application, are essential to achieve linear scalability of data-processing performance. However, deploying these three critical concepts in a unified software environment presents a challenge, and as a result most of the existing data processing frameworks rely on external solutions to address them. For example, in a multicore...
Go to contribution page -
Jose Guillermo Panduro Vazquez (Royal Holloway, University of London)13/10/2016, 15:30
The ATLAS Trigger & Data Acquisition project was started almost twenty years ago with the aim of providing a scalable distributed data collection system for the experiment. While the software dealing with physics dataflow was implemented by directly using low level communication protocols, like TCP and UDP, the control and monitoring infrastructure services for the system were implemented on...
Go to contribution page -
Audrius Mecionis (Vilnius University (LT))13/10/2016, 15:30
The Compact Muon Solenoid (CMS) experiment makes a vast use of alignment and calibration measurements in several data processing workflows. Such measurements are produced either by automated workflows or by analysis tasks carried out by experts in charge. Very frequently, experts want to inspect and exchange with others in CMS the time evolution of a given calibration, or want to monitor the...
Go to contribution page -
13/10/2016, 15:30
The Alpha Magnetic Spectrometer (AMS) on board the International Space Station (ISS) requires a large amount of computing power for data production and Monte Carlo simulation. Recently the AMS Offline software was ported to the IBM Blue Gene/Q architecture. The supporting software/libraries which have been successfully ported include: ROOT 5.34, GEANT4.10, CERNLIB, and AMS offline data...
Go to contribution page -
William Panduro Vazquez (Royal Holloway, University of London)13/10/2016, 15:30
The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right for applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors.
The access to resources is managed in a manner similar to what a lock manager... -
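A schematic illustration of the counted-access idea, not the ATLAS TDAQ implementation itself; the resource and application names are hypothetical:

```python
# Schematic illustration: granting access to a resource that exists in a limited
# number of copies, and refusing excess requests instead of letting them conflict.
import threading

class CountedResource:
    def __init__(self, name, copies):
        self.name = name
        self._sem = threading.BoundedSemaphore(copies)

    def acquire(self, holder):
        """Grant the resource if a copy is free; refuse otherwise instead of blocking."""
        granted = self._sem.acquire(blocking=False)
        print(f"{holder}: {'granted' if granted else 'refused'} {self.name}")
        return granted

    def release(self):
        self._sem.release()

readout_link = CountedResource("ReadoutOutputLink", copies=2)   # hypothetical resource
for app in ["hlt-app-01", "hlt-app-02", "hlt-app-03"]:
    readout_link.acquire(app)   # the third request is refused: both copies are in use
```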
Martin Ritter (LMU Munich)13/10/2016, 15:30
SuperKEKB, a next generation B factory, has finished being constructed in Japan as an upgrade of the KEKB e+e- collider. Currently it is running with the BEAST II detector, whose purpose is to understand the interaction and background events at the beam collision region in preparation for the 2018 launch of the Belle II detector. Overall SuperKEKB is expected to deliver a rich data set for the...
Go to contribution page -
Andrii Verbytskyi (Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (D)13/10/2016, 15:30
The ZEUS data preservation (ZEUS DP) project assures continued access to the analysis software, experimental data and related documentation. The ZEUS DP project supports the possibility to derive valuable scientific results from the ZEUS data in the future. The implementation of the data preservation is discussed in the context of contemporary data analyses and of planning of... -
Maxim Borisyak (National Research University Higher School of Economics (HSE) (RU); Yandex School of Data Analysis (RU))13/10/2016, 15:30
Daily operation of a large-scale experimental setup is a challenging task both in terms of maintenance and monitoring. In this work we describe an approach for an automated data quality system. Based on machine learning methods, it can be trained online on data manually labelled by human experts. The trained model can assist data quality managers by filtering obvious cases (both good and bad) and...
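A minimal sketch of the idea, assuming per-run summary features and expert labels; the classifier choice, features and thresholds below are illustrative and are not the system described in the contribution:

```python
# Minimal sketch: a classifier trained on expert-labelled runs flags obvious
# good/bad cases and defers uncertain ones to a human. All data here is toy data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 5))   # e.g. per-run detector summary quantities
labels = (features[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # toy expert labels

clf = GradientBoostingClassifier().fit(features, labels)

proba = clf.predict_proba(features)[:, 1]
auto_good = proba > 0.95                  # confidently good: no manual check needed
auto_bad = proba < 0.05                   # confidently bad: flag immediately
needs_expert = ~(auto_good | auto_bad)    # ambiguous runs go back to the shifter
print(f"automatic: {np.sum(auto_good | auto_bad)}, for experts: {needs_expert.sum()}")
```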
Go to contribution page -
13/10/2016, 15:30
Software development in high energy physics follows the open-source software (OSS) approach and relies heavily on software being developed outside the field. Creating a consistent and working stack out of 100s of external, interdependent packages on a variety of platforms is a non-trivial task. Within HEP, multiple technical solutions exist to configure and build those stacks (so-called... -
Paul Millar13/10/2016, 15:30
As a robust and scalable storage system, dCache has always allowed the number of storage nodes and user accessible endpoints to be scaled horizontally, providing several levels of fault tolerance and high throughput. Core management services like the POSIX name space and central load balancing components however are merely vertically scalable. This greatly limits the scalability of the core...
Go to contribution page -
Mikhail Hushchyn (Yandex School of Data Analysis (RU))13/10/2016, 15:30
SHiP is a new fixed-target experiment at the CERN SPS accelerator. The goal of the experiment is to search for hidden particles predicted by models of Hidden Sectors. The purpose of the SHiP Spectrometer Tracker is to reconstruct the tracks of charged particles from the decay of neutral New Physics objects with high efficiency, while rejecting background events. The problem is to...
Go to contribution page -
Masahiro Tanaka (Tokyo Institute of Technology (JP))13/10/2016, 15:30
Electron, muon and photon triggers covering transverse energies from a few GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons had, for example, an important role in the discovery and measurement of the Higgs particle. Dedicated triggers...
Go to contribution page -
Alexandre Lossent (CERN IT-CDA)13/10/2016, 15:30
CERN’s enterprise Search solution “CERN Search” provides a central search solution for users and CERN service providers. A total of about 20 million public and protected documents from a wide range of document collections is indexed, including Indico, TWiki, Drupal, SharePoint, JACOW, E-group archives, EDMS, and CERN Web pages.
In spring 2015, CERN Search was migrated to a new...
Go to contribution page -
Dr Daniel Traynor (Queen Mary University of London)13/10/2016, 15:30
The Queen Mary University of London grid site's Lustre file system has recently undergone a major upgrade from version 1.8 to the most recent 2.8 release, and the capacity increased to over 3 PB. Lustre is an open source, POSIX compatible, clustered file system presented to the Grid using the StoRM Storage Resource Manager. The motivation and benefits of upgrading including hardware and...
Go to contribution page -
Dr J.J. Nebrensky (Brunel University)13/10/2016, 15:30
The international Muon Ionization Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino Factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory, UK. As presently envisaged, the programme is divided into three Steps:...
Go to contribution page -
Mrs Leah Welty-Rieger (Fermilab)13/10/2016, 15:30
The storage ring for the Muon g-2 experiment is composed of twelve custom vacuum chambers designed to interface with tracking and calorimeter detectors. The irregular shape and complexity of the chamber design made implementing these chambers in a GEANT simulation with native solids difficult. Instead, we have developed a solution that uses the CADMesh libraries to convert STL files from 3D...
Go to contribution page -
Adam Tadeusz Wegrzynek (Warsaw University of Technology (PL))13/10/2016, 15:30
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider).
ALICE has been successfully collecting physics data of Run 2 since spring 2015. In parallel, preparations for a major upgrade of the computing system, called O2 (Online-Offline) and scheduled for...
Go to contribution page -
Marc Paterno (Fermilab)13/10/2016, 15:30
Docker is a container technology that provides a way to "wrap up a piece of software in a complete filesystem that contains everything it needs to run" [1]. We have experimented with Docker to investigate its utility in three broad realms: (1) allowing existing complex software to run in very different environments from that in which the software was built (such as Cori, NERSC's newest... -
13/10/2016, 15:30
CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so called... -
Alexandr Zaytsev (Brookhaven National Laboratory (US))13/10/2016, 15:30
RHIC & ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), BNL Cloud installations, various Open Science Grid (OSG) resources, and many other physics research oriented IT installations of a smaller scale. The...
Go to contribution page -
Burt Holzman (Fermi National Accelerator Lab. (US)), Gabriele Garzoglio, Steven Timm (Fermilab), Stuart Fuess (Fermilab)13/10/2016, 15:30
The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the...
Go to contribution page -
Julian Piotr Maciejewski (CERN)13/10/2016, 15:30
The ATLAS experiment is one of four detectors located on the Large Hadron Collider (LHC) based at CERN. Its detector control system (DCS) stores the slow control data acquired within the back-end of distributed WinCC OA applications. The data can be retrieved for future analysis, debugging and detector development in an Oracle relational database.
The ATLAS DCS Data Viewer (DDV) is a...
Go to contribution page -
Sebastian Lopienski (CERN)13/10/2016, 15:30
In order to patch web servers and web applications in a timely manner, we first need to know which software packages are used, and where. But a typical web stack is composed of multiple layers, including the operating system, web server, application server, programming platform and libraries, database server, web framework, content management system etc., as well as client-side tools. Keeping...
Go to contribution page -
Tim Smith (CERN)13/10/2016, 15:30
Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and Microsoft System Center suite enable automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration does...
Go to contribution page -
Robert Fischer (Rheinisch-Westfaelische Tech. Hoch. (DE))13/10/2016, 15:30
We present the novel Analysis Workflow Management (AWM) that provides users with the tools and competences of professional large-scale workflow systems. The approach presents a paradigm shift from executing parts of the analysis to defining the analysis.
Within AWM an analysis consists of steps. For example, a step may define running a certain executable over multiple files of an input data...
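An illustrative sketch of defining an analysis as steps with declared dependencies; the tiny runner below is not the AWM API, just a stand-in for the concept:

```python
# Illustrative only: an analysis defined as named steps with dependencies,
# which a small runner then executes in the required order.
from collections import OrderedDict

steps = OrderedDict()

def step(name, requires=()):
    """Register a function as an analysis step with its dependencies."""
    def decorator(func):
        steps[name] = {"run": func, "requires": list(requires)}
        return func
    return decorator

@step("skim")
def skim():
    print("running skim over input files")

@step("histograms", requires=["skim"])
def histograms():
    print("filling histograms from the skim")

def run(target, done=None):
    done = set() if done is None else done
    for dep in steps[target]["requires"]:
        if dep not in done:
            run(dep, done)
    steps[target]["run"]()
    done.add(target)

run("histograms")   # executes "skim" first, then "histograms"
```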
Go to contribution page -
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)), Wei Yang (SLAC National Accelerator Laboratory (US))13/10/2016, 15:30
When we first introduced the XRootD storage system to the LHC, we needed a filesystem interface so that the XRootD system could function as a Grid Storage Element. The result was XRootDfs, a FUSE-based mountable POSIX filesystem. It glues all the data servers in an XRootD storage system together and presents them as a single, POSIX-compliant, multi-user networked filesystem. XRootD's unique redirection...
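Because the mount behaves as a POSIX filesystem, ordinary file APIs work unchanged; a small sketch, with the mount point and file name below being assumptions:

```python
# Sketch: once XRootDfs is mounted (mount point assumed), standard POSIX-style
# file access works across the whole storage cluster with no special client code.
import os

MOUNT = "/xrootdfs/atlas"               # hypothetical mount point

for entry in os.scandir(MOUNT):         # plain directory listing over the federated storage
    print(entry.name, entry.stat().st_size)

with open(os.path.join(MOUNT, "user", "example.root"), "rb") as f:   # hypothetical file
    header = f.read(64)                 # ordinary read; server redirection is transparent
```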
Go to contribution page -
Timon Heim (Lawrence Berkeley National Lab. (US))13/10/2016, 15:30
The Yet Another Rapid Readout (YARR) system is a DAQ system designed for the readout of current-generation ATLAS Pixel FE-I4 and next-generation ATLAS ITk chips. It utilises a commercial off-the-shelf PCIe FPGA card as a reconfigurable I/O interface, which acts as a simple gateway to pipe all data from the pixel chips via the high-speed PCIe connection into the host system's memory. Relying on...
Go to contribution page -
Pierre Baldi (UC Irvine)13/10/2016, 16:45
-
Kathy Copic (Insight Data Science)13/10/2016, 17:30
-
Randy Sobie (University of Victoria (CA))14/10/2016, 08:45Oral
-
David Lange (Princeton University (US))14/10/2016, 09:00Oral
-
Latchezar Betev (CERN)14/10/2016, 09:15Oral
-
Elizabeth Gallas (University of Oxford (GB))14/10/2016, 09:30Oral
-
Frank Winklmeier (University of Oregon (US))14/10/2016, 09:45Oral
-
Brian Lee Winer (Ohio State University (US))14/10/2016, 10:00
-
Horst Schellong (Sony Everspan)14/10/2016, 10:30
-
14/10/2016, 11:15
-
Concetta Cartaro (SLAC)14/10/2016, 11:16Oral
-
Olof Barring (CERN)14/10/2016, 11:30Oral
-
Hannah Short (CERN)14/10/2016, 11:45Oral
-
Jan Fridolf Strube14/10/2016, 12:00Oral
-
Peter Hristov (CERN), Petya Tsvetanova Petrova (University of Texas at Arlington (US)), Vasil Vasilev (CERN)14/10/2016, 12:15
-
Richard Philip Mount (SLAC National Accelerator Laboratory (US))14/10/2016, 12:45
-