CERN openlab Open Day
Tuesday 9 June 2015 (09:00) to Wednesday 10 June 2015 (18:45)
Tuesday 9 June 2015
09:30 - 09:55
openlab@CERN, Welcome - Alberto Di Meglio (CERN)
Room: 500/1-001 - Main Auditorium
09:55 - 10:10
CERN openlab: an Approach to Successful Technical Achievements - Sverre Jarp (CERN)
Room: 500/1-001 - Main Auditorium
10:10 - 10:25
European Bioinformatics Institute: Research Infrastructure needed for Life Science - Steven Newhouse
Room: 500/1-001 - Main Auditorium
The life science community is an ever-increasing source of data from an increasingly diverse range of instruments and sources. EMBL-EBI has a remit to store and exploit these data, collected and made available openly across the world, for the benefit of the whole research community. The research infrastructure needed to support the big-data analysis around this mission encompasses high-performance networks, high-throughput computing, and a range of cloud and storage solutions, all of which will be described in the presentation.
10:25 - 10:40
GSI Helmholtz Centre for Heavy Ion Research - ALFA: Next generation concurrent framework for ALICE and FAIR experiments - Mohammad Al-Turany
Room: 500/1-001 - Main Auditorium
FAIR is a new, unique international accelerator facility for research with antiprotons and ions, being built at GSI in Darmstadt, Hesse, Germany. The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of a common software framework in an experiment-independent way: ALFA (the ALICE-FAIR framework). ALFA is designed for high-quality parallel data processing and reconstruction on heterogeneous computing systems. It provides a data transport layer and the capability to coordinate multiple data processing components. ALFA is a flexible, elastic system that balances reliability and ease of development with performance by using message-based multi-processing in addition to multi-threading. The framework supports heterogeneous computing architectures either by offloading (portions of code are accelerated on the device) or natively (the full program is executed on the device).
10:40 - 10:55
Data Storage Institute: Speeding-up large-scale storage with non-volatile memory - Khai Leong Yong (DSI), Sergio Ruocco (DSI)
Room: 500/1-001 - Main Auditorium
CERN's EOS is a distributed, scalable, fault-tolerant filesystem that serves data to hundreds to thousands of clients. To speed up the service, it keeps and updates a large, fast data Catalog in RAM, whose footprint can exceed 100 GB. When an EOS server node crashes, the Master node must rebuild a new coherent Catalog in memory from disk logs, a long operation that disrupts the activity of the clients. We propose to design a persistent version of the EOS Catalog, stored in the new Non-Volatile Memory, that will remain consistent after faults. Upon restart, the EOS server process will find the Catalog in NVM and immediately resume serving all the clients, skipping the slow reconstruction from disk-based logs altogether.
10:55 - 11:25
Coffee
Room: 500/1-201 - Mezzanine
11:25 - 11:40
Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration - Niko Neufeld (CERN)
Room: 500/1-001 - Main Auditorium
The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPUs via Intel's OmniPath fabric are the key elements of this project.
11:40 - 11:55
Intel: GeantV - taking up the technology challenge - Andrei Gheata (CERN)
Room: 500/1-001 - Main Auditorium
With the LHC planning to push luminosity well beyond its original limits, forcing experiments to stretch their designed performance, HEP software is waking up from its long "sequential" hibernation. Particle transport simulation stands out as the ideal candidate for redesigning the code towards a massively parallel approach and for responding to the increasing need for simulated samples. The R&D challenge is far from trivial given the expectations: a factor of 3 to 5 gain in throughput with the same or better physics performance, achieved by using parallelism on multiple levels and triggering SIMD vectorisation and better cache usage, while keeping the code portable and able to use standard or opportunistic resources such as GPGPUs or co-processors. The presentation will briefly describe the project's path to meeting these challenges, its current status, and its plans.
11:55 - 12:25
Siemens: Smart Technologies for Large Control Systems - Elisabeth Bakany (Siemens/ETM), Filippo Maria Tilaro (CERN)
Room: 500/1-001 - Main Auditorium
The CERN Large Hadron Collider (LHC) is known to be one of the most complex scientific machines ever built. Its correct functioning relies on the integration of a multitude of interdependent industrial control systems, which provide different and essential services to run and protect the accelerators and experiments. These systems have to deal with several million data points (e.g. sensors, actuators, configuration parameters) that need to be acquired, processed, archived, and analysed. For more than 20 years, CERN and Siemens have maintained a strong collaboration to address the challenges of these large systems. The presentation will cover the current work on SCADA (Supervisory Control and Data Acquisition) systems and data analytics frameworks.
12:25 - 12:40
IDT: RapidIO Interconnect based Low Latency Multi Processor Heterogeneous Solutions - Devashish Paul (Director of Strategic Marketing, Systems Solutions, IDT)
Room: 500/1-001 - Main Auditorium
In this presentation, IDT will give some background on the RapidIO technology, including its global deployment in 100% of wireless macro base stations, and the applicability of this embedded interconnect to CERN's particle physics challenges. The presentation will cover the latency, scalability, and energy-efficiency attributes of the RapidIO interconnect, and will touch on RapidIO-based solutions for the HPC market, including work being done in the Open Compute Project.
12:40 - 14:00
Lunch
Room: Restaurant 1

12:40 - 13:30
Press lunch session
Room: 61/1-009 - Room C
14:00 - 14:10
Welcome note from the CERN Director General, Dr Rolf Heuer
Room: 500/1-001 - Main Auditorium
14:10 - 14:50
CERN Physics and IT Challenges - Sergio Bertolucci (CERN)
Room: 500/1-001 - Main Auditorium
14:50 - 15:05
Brocade: Optimal flow placement in SDN networks - Mohammad Hanif (Brocade)
Room: 500/1-001 - Main Auditorium
Today's networks pose several challenges to network providers. These challenges span a variety of areas, from determining efficient utilization of network bandwidth to identifying which user applications consume the majority of network resources, and how to protect a given network from volumetric and botnet attacks. Optimal flow placement deals with identifying network issues and addressing them in real time. The overall solution helps build new services in which the network is more secure and more efficient. The resulting benefits are increased network efficiency through better capacity and resource planning, better security with real-time threat mitigation, and improved user experience from increased service velocity.
15:05 - 15:35
Oracle: Database and Data Management Innovations with CERN - Manuel Martin Marquez (CERN), Kevin Jernigan (Oracle), Lorena Lobato Pardavila (CERN)
Room: 500/1-001 - Main Auditorium
A short review of Oracle's 33-year partnership with CERN, with an eye towards the future.
15:35 - 15:50
Rackspace: Significance of Cloud Computing to CERN - Giri Fox (Director, Customer Technology Services, Rackspace)
Room: 500/1-001 - Main Auditorium
The research collaboration between Rackspace and CERN is contributing to how OpenStack cloud computing will move science work around the world for CERN, and to reducing the barriers between clouds for Rackspace.
15:50 - 16:20
Coffee
Room: 500/1-001 - Main Auditorium
16:20 - 16:35
Seagate: Expanding CERN's EOS storage with Seagate Kinetic disk technology - Paul Hermann Lensing (Seagate (CH)), James Hughes (Seagate)
Room: 500/1-001 - Main Auditorium
16:35 - 16:50
Cisco: Next Generation Operating Systems - Zeljko Susnjar (Cisco)
Room: 500/1-001 - Main Auditorium
The next-gen computing (NGC) platform is a new architectural concept that will greatly enhance application performance in modern data centres and fundamentally address the following challenges:
- application parallelization (e.g. map/reduce), driven by the rapidly growing number of CPU cores per socket;
- efficient use of resources, driven by the diversity of hardware as well as power constraints;
- data-centre-wide system consistency, where failure of system components will be the norm, not an exception.
The paradox of the IT industry is that hardware evolution has advanced at a much faster pace than system software features. Hardware has become more powerful, using technologies like SR-IOV, VT-x, vNIC, SGX, and multi-core, while operating systems still treat it as a set of simple devices and do all the heavy lifting themselves. NGC proposes an operating system redesign that gives applications control over physical hardware (a.k.a. the data plane) and keeps the kernel out of the way as much as possible, while providing management (a.k.a. control plane) functionality. The recent fast adoption of Linux containers (see e.g. Docker, Rancher, LMCTFY and others), which are essentially an operating-system virtualization technique, is proof that the data centre is on a quest for a more application-aware and efficient way of abstracting the hardware. The Data Plane Computing System project in CERN openlab, executed between Cisco and the ALICE experiment, provides a proof of concept for these ideas.
16:50 - 17:00
Wrap-up
Room: 500/1-001 - Main Auditorium
17:00 - 18:00
Cocktail and poster session
Room: 500/1-001 - Main Auditorium