Conveners
Plenary: S1
- There are no conveners in this block
Plenary: S2
- Marco Cattaneo (CERN)
Plenary: S3
- Maria Girone (CERN)
Plenary: S4
- Graeme Stewart
Plenary: S5
- Simone Campana (CERN)
Plenary: S6
- Gang Chen (Chinese Academy of Sciences (CN))
Plenary: S7
- Patrick Fuhrmann (Deutsches Elektronen-Synchrotron (DE))
Plenary: S8
- Latchezar Betev (CERN)
Plenary: S9
- Latchezar Betev (CERN)
- 09/07/2018, 09:00, presentation
- Emanouil Atanassov (Unknown), 09/07/2018, 09:15, presentation
The region of South-East Europe has a long history of successful collaboration in sharing resources and managing distributed electronic infrastructures for the needs of research communities. HPC resources such as supercomputers and big clusters with low-latency interconnects are an especially valuable and scarce resource in the region. Building upon the successfully tested operational and...
- João Fernandes (CERN), 09/07/2018, 09:45, Track 7 – Clouds, virtualization and containers, presentation
Helix Nebula Science Cloud (HNSciCloud) has developed a hybrid cloud platform that links together commercial cloud service providers and research organisations' in-house IT resources via the GEANT network. The platform offers data management capabilities with transparent data access, where applications can be deployed with no modifications on both sides of the hybrid cloud, and compute services...
- Jurry de la Mar (T-Systems International GmbH), 09/07/2018, 10:00, presentation
As the result of joint R&D work with 10 of Europe's leading public research organisations, led by CERN and funded by the EU, T-Systems provides a hybrid cloud solution, enabling science users to seamlessly extend their existing e-Infrastructures with one of the leading European public cloud services based on OpenStack, the Open Telekom Cloud. With this new approach, large-scale data-intensive...
- Mr Alastair Pidgeon (RHEA System S.A.), 09/07/2018, 10:15, presentation
Ten of Europe's leading public research organisations, led by CERN, launched the Helix Nebula Science Cloud (HNSciCloud) Pre-Commercial Procurement to establish a European hybrid cloud platform that will support the high-performance, data-intensive scientific use-cases of this "Buyers Group" and of the research sector at large. It calls for the design and implementation of innovative...
- Lee Bitsoi, 09/07/2018, 16:30
- Philippe Charpentier (CERN), 09/07/2018, 17:00
- Daniel S. Katz (University of Illinois), 09/07/2018, 17:30
- David Rousseau (LAL-Orsay, FR), 10/07/2018, 09:00, presentation
Machine Learning (long known in HEP as Multivariate Analysis) has been used in the field since the nineties. While Boosted Decision Trees are now commonplace, there is an explosion of novel algorithms following the "deep learning revolution" in industry, applicable to data taking, triggering and handling, reconstruction, simulation and analysis. This talk will review some of these algorithms and...
- Steven Andrew Farrell (Lawrence Berkeley National Lab. (US)), 10/07/2018, 09:30, Track 6 – Machine learning and physics analysis, presentation
Initial studies have suggested generative adversarial networks (GANs) have promise as fast simulations within HEP. These studies, while promising, have been insufficiently precise and also, like GANs in general, suffer from stability issues. We apply GANs to generate full particle physics events (not individual physics objects), and to generate large weak-lensing cosmology convergence maps. We...
- Jennifer Ngadiuba (INFN, Milano), 10/07/2018, 09:50, Track 6 – Machine learning and physics analysis, presentation
Machine learning methods are becoming ubiquitous across particle physics. However, the exploration of such techniques in low-latency environments like L1 trigger systems has only just begun. We present here new software, based on High Level Synthesis (HLS), to generically port several kinds of network models (BDTs, DNNs, CNNs) into FPGA firmware. As a benchmark physics use case, we consider...
- Jean-Yves Le Meur (CERN), 10/07/2018, 10:10, Track 4 – Data Handling, presentation
The CERN Digital Memory project was started in 2016 with the main goal of preventing loss of historical content produced by the organisation. The first step of the project targeted the risk of deterioration of the most vulnerable materials, mostly the multimedia assets created in analogue formats from 1954 to the late 1990s, such as still and moving images on film or magnetic...
- Imma Riu (IFAE Barcelona (ES)), 10/07/2018, 17:00, Track 1 – Online computing, presentation
The ATLAS and CMS experiments at CERN are planning a second phase of upgrades to prepare for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous runs, protons at 14 TeV centre-of-mass energy will collide with an instantaneous luminosity of 7.5 x 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than...
- Gerhard Raven (Natuurkundig Laboratorium-Vrije Universiteit (VU)-Unknown), 10/07/2018, 17:30, presentation
- Andreas Salzburger (CERN), 10/07/2018, 18:00, Track 2 – Offline computing, presentation
The reconstruction of particle trajectories is one of the most complex and CPU-intensive tasks of event reconstruction at current LHC experiments. The growing particle multiplicity stemming from an increasing number of instantaneous collisions, as foreseen for the upcoming high-luminosity upgrade of the LHC (HL-LHC) and future hadron collider studies, will intensify this problem significantly. In...
- Michel Jouvin (Université Paris-Saclay (FR)), 11/07/2018, 09:00, presentation
Most HEP experiments coming in the next decade will have computing requirements that cannot be met by adding more hardware (HL-LHC, FAIR, DUNE...). A major software re-engineering and more collaboration between experiments around software development are needed. This was the reason for setting up the HEP Software Foundation (HSF) in 2015. In 2017, the HSF published "A Roadmap for ...
- Thomas Kuhr, 11/07/2018, 09:30, Track 5 – Software development, presentation
The Belle II experiment is taking first collision data in 2018. This is an exciting time for the collaboration, allowing it to assess the performance not only of the accelerator and detector, but also of the computing system and the software. Is Belle II ready to quickly process the data and produce physics results? Which parts are well prepared and where do we have to invest more effort? The...
- Rosie Bolton, 11/07/2018, 10:00, presentation
- Karol Hennessy (University of Liverpool (GB)), 11/07/2018, 10:30, Track 1 – Online computing, presentation
DUNE will be the world's largest neutrino experiment, due to take data in 2025. This talk describes the data acquisition (DAQ) systems of both of its prototypes, ProtoDUNE single-phase (SP) and ProtoDUNE dual-phase (DP), due to take data later this year. The ProtoDUNE detectors also break records as the largest beam-test experiments yet constructed, and are the fundamental elements of CERN's Neutrino...
- Lindsey Gray (Fermi National Accelerator Lab. (US)), 12/07/2018, 09:00, Track 3 – Distributed computing, presentation
The HL-LHC will present enormous storage and computational demands, creating a total dataset of up to 200 exabytes and requiring commensurate computing power to record, reconstruct, calibrate, and analyze these data. Addressing these needs for the HL-LHC will require innovative approaches to deliver the necessary processing and storage resources. The "blockchain" is a recent technology for...
- Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)), 12/07/2018, 09:30, presentation
- Andrea Ceccanti, 12/07/2018, 10:00, Track 3 – Distributed computing, presentation
X.509 certificates and VOMS have proved to be a secure and reliable solution for authentication and authorization on the Grid, but have also shown usability issues and required the development of ad-hoc services and libraries to support VO-based authorization schemes in Grid middleware and experiment computing frameworks. The need to move beyond X.509 certificates is recognized as an...
- Jim Pivarski (Princeton University), 12/07/2018, 16:00, presentation
High energy physics is no longer the main user or developer of data analysis tools. Open-source tools developed primarily for data science, business intelligence, and finance are available for use in HEP, and adopting them would reduce the in-house maintenance burden and provide users with a wider set of training examples and career options. However, physicists have been analyzing data with...
- Axel Naumann (CERN), 12/07/2018, 16:30, Track 2 – Offline computing, presentation
After 20 years of evolution, ROOT is currently undergoing a change of gears, bringing our vision of simplicity, robustness and speed closer to physicists' reality. ROOT is now offering a game-changing, fundamentally superior approach to writing analysis code. It is working on a rejuvenation of the graphics system and user interaction. It automatically leverages modern CPU vector and multi-core...
- Andreas Joachim Peters (CERN), 12/07/2018, 17:00, Track 4 – Data Handling, presentation
The EOS project started as a specialized disk-only storage software solution for physics analysis use-cases at CERN in 2010. Over the years EOS has evolved into an open storage platform, leveraging several open-source building blocks from the community. The service at CERN manages around 250 PB, distributed across two data centres, and provides user- and project-spaces to all CERN experiments....
- Jakob Blomer (CERN), 12/07/2018, 17:20, Track 7 – Clouds, virtualization and containers, presentation
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution and, to some extent, a data distribution service. It gives POSIX access to more than half a billion binary files of experiment application software stacks and operating system containers to end-user devices, grids, clouds, and supercomputers. Increasingly, CernVM-FS also provides access to certain...
- Luca dell'Agnello (INFN), 12/07/2018, 17:40, Track 8 – Networks and facilities, presentation
The year 2017 was most likely a turning point for the INFN Tier-1. On November 9th 2017, early in the morning, a large pipe of the city aqueduct, located under the road next to CNAF, broke. As a consequence, a river of water and mud flowed towards the Tier-1 data centre. The level of the water did not exceed the safety threshold of the waterproof doors but, due to the porosity of the...
- Latchezar Betev (CERN), 13/07/2018, 08:55, presentation
- Catrin Bernius (SLAC National Accelerator Laboratory (US)), 13/07/2018, 09:00, presentation
- Patricia Mendez Lorenzo (CERN), 13/07/2018, 09:20, presentation
- Hannah Short (CERN), 13/07/2018, 09:40, presentation
- Costin Grigoras (CERN), 13/07/2018, 10:00, presentation
- Gene Van Buren (Brookhaven National Laboratory), 13/07/2018, 10:20, presentation
- Sergei Gleyzer (University of Florida (US)), 13/07/2018, 10:40, presentation
- Dave Dykstra (Fermi National Accelerator Lab. (US)), 13/07/2018, 11:30, presentation
- Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas), 13/07/2018, 11:50, presentation
- 13/07/2018, 12:10, presentation
- Dr Waseem Kamleh (University of Adelaide), 13/07/2018, 12:25, presentation