Conveners
Plenary: S1
- There are no conveners in this block
Plenary: S2
- Marco Cattaneo (CERN)
Plenary: S3
- Maria Girone (CERN)
Plenary: S4
- Graeme Stewart
Plenary: S5
- Simone Campana (CERN)
Plenary: S6
- Gang Chen (Chinese Academy of Sciences (CN))
Plenary: S7
- Patrick Fuhrmann (Deutsches Elektronen-Synchrotron (DE))
Plenary: S8
- Latchezar Betev (CERN)
Plenary: S9
- Latchezar Betev (CERN)
The region of South-East Europe has a long history of successful collaboration in sharing resources and managing distributed electronic infrastructures for the needs of research communities. HPC resources such as supercomputers and large clusters with low-latency interconnects are especially valuable and scarce in the region. Building upon the successfully tested operational and...
Helix Nebula Science Cloud (HNSciCloud) has developed a hybrid cloud platform that links together commercial cloud service providers and research organisations’ in-house IT resources via the GEANT network.
The platform offers data management capabilities with transparent data access, so that applications can be deployed without modification on both sides of the hybrid cloud, and compute services...
As a result of joint R&D work with 10 of Europe’s leading public research organisations, led by CERN and funded by the EU, T-Systems provides a hybrid cloud solution, enabling science users to seamlessly extend their existing e-Infrastructures with one of the leading European public cloud services based on OpenStack, the Open Telekom Cloud.
With this new approach large-scale data-intensive...
Ten of Europe’s leading public research organisations led by CERN launched the Helix Nebula Science Cloud (HNSciCloud) Pre-Commercial Procurement to establish a European hybrid cloud platform that will support the high-performance, data-intensive scientific use-cases of this “Buyers Group” and of the research sector at large. It calls for the design and implementation of innovative...
Machine Learning (known in HEP as Multivariate Analysis) has been used to some extent in the field since the nineties. While Boosted Decision Trees are now commonplace, there is an explosion of novel algorithms following the "deep learning revolution" in industry, applicable to data taking, triggering and handling, reconstruction, simulation and analysis. This talk will review some of these algorithms and...
Initial studies have suggested that generative adversarial networks (GANs) hold promise as fast simulations within HEP. These studies, while promising, have been insufficiently precise and, like GANs in general, suffer from stability issues. We apply GANs to generate full particle physics events (not individual physics objects), and to large weak lensing cosmology convergence maps. We...
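As an illustration of the adversarial setup referred to above, the sketch below shows one GAN training step in PyTorch; the network sizes, the flattened event representation and all dimensions are illustrative assumptions, not the configuration used in this work.

```python
# Minimal GAN sketch (PyTorch): a generator maps latent noise to a flat vector
# of event-level features; a discriminator scores real vs. generated events.
# All sizes below are illustrative only.
import torch
import torch.nn as nn

N_FEATURES = 64   # hypothetical flattened event representation
N_LATENT = 16

generator = nn.Sequential(
    nn.Linear(N_LATENT, 128), nn.ReLU(),
    nn.Linear(128, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_events):
    batch = real_events.size(0)
    noise = torch.randn(batch, N_LATENT)
    fake_events = generator(noise)

    # Discriminator update: real events labelled 1, generated events 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_events), torch.ones(batch, 1)) + \
             bce(discriminator(fake_events.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_events), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage with random "real" events standing in for simulated data.
print(training_step(torch.randn(128, N_FEATURES)))
```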
Machine learning methods are becoming ubiquitous across particle physics. However, the exploration of such techniques in low-latency environments like L1 trigger systems has only just begun. We present here a new software package, based on High Level Synthesis (HLS), to generically port several kinds of network models (BDTs, DNNs, CNNs) into FPGA firmware. As a benchmark physics use case, we consider...
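One central step when porting a trained network into FPGA firmware is reducing weights and activations to fixed-point precision. The NumPy sketch below illustrates that idea generically; the bit widths are chosen arbitrarily and are not taken from the tool presented in this contribution.

```python
# Fixed-point weight quantization in NumPy: FPGA arithmetic typically uses a
# fixed bit width rather than 32-bit floats, so trained weights are rounded
# onto a fixed-point grid before synthesis. Bit widths are illustrative.
import numpy as np

def quantize_fixed_point(weights, total_bits=16, frac_bits=10):
    """Round weights to a signed fixed-point grid with the given widths."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale
    hi = (2 ** (total_bits - 1) - 1) / scale
    return np.clip(np.round(weights * scale) / scale, lo, hi)

w = np.random.normal(scale=0.5, size=(64, 32)).astype(np.float32)
w_q = quantize_fixed_point(w)
# Rounding error is at most half a least-significant bit for in-range weights.
print("max quantization error:", np.abs(w - w_q).max())
```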
The CERN Digital Memory project was started in 2016 with the main goal of preventing loss of historical content produced by the organisation. The first step of the project targeted the risk of deterioration of the most vulnerable materials, mostly the multimedia assets created in analogue formats from 1954 to the late 1990s, like still and moving images on films or magnetic...
The ATLAS and CMS experiments at CERN are planning a second phase of upgrades to prepare for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous runs, protons at 14 TeV center-of-mass energy will collide with an instantaneous luminosity of 7.5 x 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than...
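For orientation, the quoted luminosity corresponds to a mean pileup on the order of 200 interactions per bunch crossing; a back-of-the-envelope estimate is sketched below, with the inelastic cross-section and bunch structure assumed at typical LHC values rather than taken from the abstract.

```python
# Back-of-the-envelope pileup estimate implied by the quoted luminosity.
lumi = 7.5e34            # cm^-2 s^-1, instantaneous luminosity quoted above
sigma_inel = 80e-27      # cm^2 (~80 mb inelastic pp cross-section, assumed)
n_bunches = 2808         # colliding bunch pairs (assumed nominal filling)
f_rev = 11245.0          # Hz, LHC revolution frequency

interactions_per_sec = lumi * sigma_inel
crossings_per_sec = n_bunches * f_rev
mu = interactions_per_sec / crossings_per_sec
print(f"mean pileup per crossing ~ {mu:.0f}")   # roughly 190 collisions
```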
The reconstruction of particle trajectories is one of the most complex and CPU-intensive tasks of event reconstruction at current LHC experiments. The growing particle multiplicity stemming from an increasing number of instantaneous collisions, as foreseen for the upcoming high luminosity upgrade of the LHC (HL-LHC) and future hadron collider studies, will intensify this problem significantly. In...
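A rough sense of why the cost grows so quickly: naive three-hit seeding scales with the cube of the hit occupancy, so a modest increase in multiplicity multiplies the candidate count dramatically. The hit counts in the sketch below are made-up, order-of-magnitude numbers, not figures from this contribution.

```python
# Toy scaling estimate for combinatorial track seeding.
hits_per_layer_run2 = 2_000      # assumed hits in one barrel layer, <mu> ~ 40
hits_per_layer_hllhc = 10_000    # assumed hits at HL-LHC pileup ~ 200

def naive_triplets(n_hits):
    # All layer-1 x layer-2 x layer-3 hit combinations, before any cuts.
    return n_hits ** 3

ratio = naive_triplets(hits_per_layer_hllhc) / naive_triplets(hits_per_layer_run2)
print(f"~{ratio:.0f}x more seed combinations for a 5x increase in occupancy")
```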
Most HEP experiments coming in the next decade will have computing requirements that cannot be met by adding more hardware (HL-LHC, FAIR, DUNE...). A major software re-engineering effort and more collaboration between experiments around software development are needed. This was the reason for setting up the HEP Software Foundation (HSF) in 2015. In 2017, the HSF published "A Roadmap for ...
The Belle II experiment is taking first collision data in 2018. This is an exciting time for the collaboration, allowing it to assess not only the performance of the accelerator and detector, but also that of the computing system and the software. Is Belle II ready to quickly process the data and produce physics results? Which parts are well prepared and where do we have to invest more effort? The...
DUNE will be the world's largest neutrino experiment, due to take data in 2025. Described here are the data acquisition (DAQ) systems for both of its prototypes, ProtoDUNE single-phase (SP) and ProtoDUNE dual-phase (DP), due to take data later this year. The ProtoDUNE detectors also set records as the largest beam test experiments yet constructed, and they are the fundamental elements of CERN's Neutrino...
The HL-LHC will present enormous storage and computational demands, creating a total dataset of up to 200 Exabytes and requiring commensurate computing power to record, reconstruct, calibrate, and analyze these data. Addressing these needs for the HL-LHC will require innovative approaches to deliver the necessary processing and storage resources. The "blockchain" is a recent technology for...
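For readers unfamiliar with the underlying mechanism, the sketch below shows a minimal hash-chained ledger in pure Python; it illustrates the generic blockchain idea only, not the specific design discussed in this contribution, and the records are made up.

```python
# Minimal hash-chained ledger: each record embeds the hash of its predecessor,
# so tampering with any earlier entry invalidates every later one.
import hashlib
import json
import time

def make_block(payload, prev_hash):
    block = {"time": time.time(), "payload": payload, "prev": prev_hash}
    body = json.dumps({k: block[k] for k in ("time", "payload", "prev")},
                      sort_keys=True).encode()
    block["hash"] = hashlib.sha256(body).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"dataset": "run-A", "size_tb": 12}, chain[-1]["hash"]))
chain.append(make_block({"dataset": "run-B", "size_tb": 9}, chain[-1]["hash"]))

# Verify integrity: every block must point at the hash of the one before it.
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev"] == prev["hash"]
print("chain of", len(chain), "blocks verified")
```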
X.509 certificates and VOMS have proved to be a secure and reliable solution for authentication and authorization on the Grid, but also showed usability issues and required the development of ad-hoc services and libraries to support VO-based authorization schemes in Grid middleware and experiment computing frameworks. The need to move beyond X.509 certificates is recognized as an...
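Token-based alternatives typically exchange signed bearer tokens such as JSON Web Tokens instead of X.509 proxies. The standard-library sketch below inspects the claims of a made-up token (without signature verification) to show the kind of identity and group information such tokens carry; it is a generic illustration, not code from any particular AAI service.

```python
# Decode the payload of a JWT-style bearer token with the standard library.
# The token is constructed locally for illustration and is unsigned.
import base64
import json

def jwt_claims(token):
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).decode().rstrip("=")
body = base64.urlsafe_b64encode(json.dumps(
    {"sub": "user123", "iss": "https://iam.example.org", "groups": ["/myvo"]}
).encode()).decode().rstrip("=")
token = f"{header}.{body}."

print(jwt_claims(token))   # identity and group claims replace VOMS attributes
```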
High energy physics is no longer the main user or developer of data analysis tools. Open source tools developed primarily for data science, business intelligence, and finance are available for use in HEP, and adopting them would reduce the in-house maintenance burden and provide users with a wider set of training examples and career options. However, physicists have been analyzing data with...
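As one concrete example of carrying out a typical HEP calculation with general-purpose tools, the sketch below computes a dimuon invariant mass with nothing but NumPy and pandas; the column names and values are illustrative, not from any experiment dataset.

```python
# Dimuon invariant mass in the massless approximation, computed with pandas.
import numpy as np
import pandas as pd

events = pd.DataFrame({
    "pt1": [28.3, 41.0], "eta1": [0.4, -1.2], "phi1": [1.1, 2.8],
    "pt2": [25.7, 33.5], "eta2": [-0.3, -0.6], "phi2": [-2.0, -0.4],
})

# m^2 = 2 * pt1 * pt2 * (cosh(deta) - cos(dphi)) for (nearly) massless particles
events["m_ll"] = np.sqrt(
    2.0 * events.pt1 * events.pt2
    * (np.cosh(events.eta1 - events.eta2) - np.cos(events.phi1 - events.phi2))
)
print(events[["m_ll"]])
```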
After 20 years of evolution, ROOT is currently undergoing a change of gears, bringing our vision of simplicity, robustness and speed closer to physicists' reality. ROOT is now offering a game-changing, fundamentally superior approach to writing analysis code. It is working on a rejuvenation of the graphics system and user interaction. It automatically leverages modern CPU vector and multi-core...
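To give a flavour of the simpler analysis style alluded to above, the sketch below uses ROOT's declarative RDataFrame interface from Python; the file, tree and branch names are hypothetical.

```python
# Declarative analysis sketch with ROOT's RDataFrame (hypothetical inputs).
import ROOT

ROOT.ROOT.EnableImplicitMT()            # let ROOT parallelise over CPU cores

df = ROOT.RDataFrame("Events", "data.root")
h = (df.Filter("nMuon == 2", "exactly two muons")
       .Define("pt_lead", "Muon_pt[0]")
       .Histo1D(("pt_lead", "Leading muon p_{T};p_{T} [GeV];events",
                 50, 0.0, 100.0), "pt_lead"))

h.Draw()    # the event loop runs lazily, only when the result is needed
```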
The EOS project started as a specialized disk-only storage software solution for physics analysis use-cases at CERN in 2010.
Over the years EOS has evolved into an open storage platform, leveraging several open source building blocks from the community. The service at CERN manages around 250 PB, distributed across two data centers and provides user- and project-spaces to all CERN experiments....
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution and---to some extent---a data distribution service. It gives POSIX access to more than half a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. Increasingly, CernVM-FS also provides access to certain...
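Because repositories appear under an ordinary POSIX mount point, plain filesystem calls suffice to browse a published software stack. The short sketch below is purely illustrative and assumes a host with a CernVM-FS client and the named repository configured.

```python
# Browse a CernVM-FS repository through its POSIX mount point.
from pathlib import Path

repo = Path("/cvmfs/sft.cern.ch/lcg/releases")   # example repository path
if repo.exists():
    for entry in sorted(repo.iterdir())[:10]:
        print(entry.name)
else:
    print("CernVM-FS repository not mounted on this host")
```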
The year 2017 was most likely a turning point for the INFN Tier-1. In fact, early in the morning of November 9th 2017, a large pipe of the city aqueduct, located under the road next to CNAF, broke. As a consequence, a river of water and mud flowed towards the Tier-1 data center. The level of the water did not exceed the threshold of safety of the waterproof doors but, due to the porosity of the...