CERN openlab Open Day

Europe/Zurich
500/1-001 - Main Auditorium (CERN)

Description

CERN openlab is proposing a first-of-its-kind ‘Open Day’ event on 10 June 2015. This will be the first in a series of annual events at which research and industrial teams from CERN openlab present their projects, share their achievements, and collect feedback from their user communities.

Poster
Participants
  • Abhijit Bhattacharyya
  • Abraham Villatoro
  • Alberto Di Meglio
  • Alexis Susset
  • Alfredo Saab
  • Ali Yasar
  • Andrei Gheata
  • Andrew Purcell
  • Andrey Ustyuzhanin
  • Andrzej Nowak
  • Ankita Mehta
  • Anmol Arora
  • Antonio Romero Marin
  • Anuj Jain
  • Artur Barczyk
  • Axel Voitier
  • Benjamin Lesueur
  • Bernd Panzer-Steindel
  • Bernhard Reichl
  • Bob Jones
  • Branko Blagojević
  • Bruno Guillaume
  • Christiaan Kuun
  • Christian Faerber
  • Claudio Bellini
  • Cristian Raul Vintila
  • David Mazur
  • David Burks
  • David Zerulla
  • Devashish Paul
  • Dirk Duellmann
  • DJ Forza
  • Edoardo Martelli
  • Elisabeth Bakany
  • Eric Grancher
  • Erik Smit
  • Eva Dafonte Perez
  • Federico Carminati
  • Filippo Tilaro
  • Fons Rademakers
  • Frederic Hemmer
  • Giacomo Tenaglia
  • Gianluca Valentino
  • Giri Fox
  • Giuseppe Lo Presti
  • Gregor Molan
  • Hakan Kuyucu
  • Harri Toivonen
  • Helder Antunes
  • Hervé Oheix
  • Ian Bird
  • Isabel Redondo
  • James Hughes
  • Jari Koivisto
  • Jean-Marie Le Goff
  • Jochen Klein
  • Joe Fagan
  • Joona Juhani Kurikka
  • Karel Ha
  • Karen Ernst
  • Kevin Jernigan
  • Khai Leong Yong
  • Khin Mi Mi Aung
  • Konstantinos Papangelopoulos
  • Kristina Gunne
  • Lada Ducheckova
  • Livio Mapelli
  • Lorena Lobato Pardavila
  • Luca Canali
  • Luis Rodriguez Fernandez
  • Lupsa Liana
  • Maciej Muszkowski
  • Maite Barroso Lopez
  • Manjit Dosanjh
  • Manuel Gonzalez Berges
  • Manuel Martin Marquez
  • Manuela Cirilli
  • Marcelo Yannuzzi
  • Marco Manca
  • Marco Neumann
  • Marek Kamil Denis
  • Maria Arsuaga-Rios
  • Maria Girone
  • Maria Serrador
  • Marie-Christine Sawley
  • Markus Nordberg
  • Massimo Lamanna
  • Michael Davis
  • Miguel Marquina
  • Mihajlo Zdravkovic
  • Mohammad Al-Turany
  • Mohammad Hanif
  • Frederic Muffat-es-Jacques
  • Nathalie Rauschmayr
  • Nick Ziogas
  • Niko Neufeld
  • Olof Barring
  • Pattabiraman Venkatesan
  • Pawel Szostek
  • Petya Georgieva
  • Philippe Gayet
  • Pierre Hardoin
  • Pierre Vande Vyvre
  • Praveen Sampath Kumar
  • Predrag Buncic
  • Rade Radumilo
  • Rainer Schwemmer
  • Ramandeep Kumar
  • Renu Bala
  • Rick O'Connor
  • Rogerio Iope
  • Sailesh Chittipeddi
  • Sami Kama
  • Sergio Ruocco
  • Sofia Intoudi
  • Sofia Vallecorsa
  • Stephan Monterde
  • Steven Newhouse
  • Sue Foffano
  • Sverre Jarp
  • Sylvia Zoffmann
  • Thomas Langkjaer
  • Thorsten Kollegger
  • Tilmann Wittenhorst
  • Tim Bell
  • Timothy Roscoe
  • Tomas Blazek
  • Ulrich Plechschmidt
  • Vedran Vujinović
  • Volker Wolff
  • Wayne Salter
  • Xavier Espinal
  • Yang Zhang
  • Youssef Ghorbal
  • Zeljko Susnjar
  • Ömer Ufuk Efendioğlu
Webcast
There is a live webcast for this event
  • Wednesday, 10 June
    • 09:30–09:55
      openlab@CERN, Welcome 25m 500/1-001 - Main Auditorium

      Speaker: Dr Alberto Di Meglio (CERN)
      Slides
      Video in CDS
    • 09:55–10:10
      CERN openlab: an Approach to Successful Technical Achievements 15m 500/1-001 - Main Auditorium

      Speaker: Sverre Jarp (CERN)
    • 10:10–10:25
      European Bioinformatics Institute: Research Infrastructure needed for Life Science 15m 500/1-001 - Main Auditorium

      The life science community is an ever-increasing source of data, coming from an increasingly diverse range of instruments and sources. EMBL-EBI has a remit to store and exploit this data, collected and made available openly across the world, for the benefit of the whole research community. The research infrastructure needed to support the big-data analysis around this mission encompasses high-performance networks, high-throughput computing, and a range of cloud and storage solutions, and will be described in the presentation.
      Speaker: Dr Steven Newhouse
      Slides
      Video in CDS
    • 10:25–10:40
      GSI Helmholz Centre for Heavy Ion Research - ALFA: Next generation concurrent framework for ALICE and FAIR experiments 15m 500/1-001 - Main Auditorium

      FAIR is a new, unique international accelerator facility for research with antiprotons and ions, being built at GSI in Darmstadt, Hesse, Germany. The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of a common software framework in an experiment-independent way: ALFA (the ALICE-FAIR framework). ALFA is designed for high-quality parallel data processing and reconstruction on heterogeneous computing systems. It provides a data transport layer and the capability to coordinate multiple data-processing components. ALFA is a flexible, elastic system that balances reliability and ease of development with performance by using message-based multi-processing in addition to multi-threading (a toy sketch of this messaging style follows the entry below). The framework supports heterogeneous computing architectures either by offloading (portions of the code are accelerated on the device) or natively (the full program is executed on the device).
      Speaker: Dr Mohammad Al-Turany
      Slides
      Video in CDS
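      As a toy illustration of this message-passing style (invented for this summary, not actual ALFA code), the Python sketch below runs a producer and a consumer as separate processes connected only by a ZeroMQ PUSH/PULL channel; the port number and message layout are arbitrary:

        # Message-based multi-processing: a producer process pushes event
        # data to a consumer over a socket instead of sharing memory.
        # Requires pyzmq.
        import multiprocessing
        import zmq

        def producer():
            sock = zmq.Context().socket(zmq.PUSH)
            sock.bind("tcp://*:5555")
            for event_id in range(10):
                sock.send_json({"event": event_id, "hits": [1, 2, 3]})

        def consumer():
            sock = zmq.Context().socket(zmq.PULL)
            sock.connect("tcp://localhost:5555")
            for _ in range(10):
                msg = sock.recv_json()                 # blocks until data arrives
                print("processing event", msg["event"])

        if __name__ == "__main__":
            worker = multiprocessing.Process(target=consumer)
            worker.start()
            producer()
            worker.join()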
    • 10:40–10:55
      Data Storage Institute: Speeding-up large-scale storage with non-volatile memory 15m 500/1-001 - Main Auditorium

      CERN's EOS distributed, scalable, fault-tolerant filesystem serves data to hundreds to thousands of clients. To speed up the service, it keeps and updates a large, fast data Catalog in RAM, whose footprint can exceed 100 GB. When an EOS server node crashes, the Master node must rebuild a new coherent Catalog in memory from disk logs, a long operation that disrupts the activity of the clients. We propose to design a persistent version of the EOS Catalog, stored in the new Non-Volatile Memory (NVM), that remains consistent after faults. Upon restart, the EOS server process will find the Catalog in NVM and immediately resume serving all clients, skipping the slow reconstruction from disk-based logs altogether. (A toy sketch of such a persistent catalog follows below.)
      Speakers: Khai Leong Yong (DSI), Sergio Ruocco (DSI)
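      As a toy illustration of the idea (invented for this summary; not the EOS code or API), the sketch below keeps a fixed-size catalog in a memory-mapped region. With the region on NVM, a restarted server would simply remap it and resume serving, with no log replay; an ordinary file stands in for NVM here:

        # Crash-consistent catalog in a memory-mapped region. File name,
        # record layout and sizes are invented for the example.
        import mmap
        import os
        import struct

        RECORD = struct.Struct("64s Q")        # entry: path, file size in bytes
        PATH, NRECORDS = "catalog.nvm", 1024

        def open_catalog():
            if not os.path.exists(PATH):       # created once; remapped on restart
                with open(PATH, "wb") as f:
                    f.truncate(RECORD.size * NRECORDS)
            f = open(PATH, "r+b")
            return mmap.mmap(f.fileno(), 0)

        def put(mem, slot, name, size):
            RECORD.pack_into(mem, slot * RECORD.size, name.encode(), size)
            mem.flush()                        # on real NVM: cache flush + fence

        def get(mem, slot):
            name, size = RECORD.unpack_from(mem, slot * RECORD.size)
            return name.rstrip(b"\0").decode(), size

        mem = open_catalog()
        put(mem, 0, "/eos/user/a/alice/data.root", 2_500_000_000)
        print(get(mem, 0))                     # still there after a crash/restart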
    • 10:55–11:25
      Coffee 30m 500/1-201 - Mezzanine

    • 11:25–11:40
      Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration 15m 500/1-001 - Main Auditorium

      The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators with classical CPUs via Intel's OmniPath fabric are the key elements of this project.
      Speaker: Niko Neufeld (CERN)
      Slides
      Video in CDS
    • 11:40–11:55
      Intel: GeantV - taking up the technology challenge 15m 500/1-001 - Main Auditorium

      In a context where the LHC plans to push luminosity beyond its original limits, forcing the experiments to stretch their design performance, HEP software is waking up from its long "sequential" hibernation. Particle transport simulation stands out as the ideal candidate for redesigning the code towards a massively parallel approach and for meeting the increasing need for simulated samples. The R&D challenge is far from trivial given the expectations: a throughput gain of a factor of 3 to 5 with the same or better physics performance, obtained by using parallelism on multiple levels and by triggering SIMD vectorisation and better cache usage, while keeping the code portable and able to use standard or opportunistic resources such as GPGPUs or co-processors. The presentation will briefly describe the project's path to facing these challenges, its current status and its plans. (A toy illustration of the vectorisation idea follows the entry below.)
      Speaker: Andrei Gheata (CERN)
      Slides
      Video in CDS
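      As a toy illustration of the vectorisation idea (not GeantV code), the sketch below contrasts transporting particles one at a time with updating a whole "basket" of tracks in a single SIMD-friendly step; a straight-line step stands in for real geometry and physics:

        # Scalar vs. basket-based particle transport.
        import numpy as np

        def step_scalar(tracks, dt):
            for t in tracks:                   # one particle at a time
                t["pos"] = t["pos"] + dt * t["vel"]

        def step_basket(pos, vel, dt):
            pos += dt * vel                    # one vectorised update for N tracks

        n = 100_000
        pos = np.random.rand(n, 3)             # positions of a basket of tracks
        vel = np.random.rand(n, 3)             # velocities
        step_basket(pos, vel, 1e-3)            # updates all tracks at once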
    • 11:55–12:25
      Siemens: Smart Technologies for Large Control Systems 30m 500/1-001 - Main Auditorium

      The CERN Large Hadron Collider (LHC) is known to be one of the most complex scientific machines ever built. Its correct functioning relies on the integration of a multitude of interdependent industrial control systems, which provide different and essential services to run and protect the accelerators and experiments. These systems have to deal with several million data points (e.g. sensors, actuators, configuration parameters, etc.) which need to be acquired, processed, archived and analysed. For more than 20 years, CERN and Siemens have maintained a strong collaboration to deal with the challenges of these large systems. The presentation will cover the current work on the SCADA (Supervisory Control and Data Acquisition) systems and data analytics frameworks.
      Speakers: Elisabeth Bakany (Siemens/ETM), Filippo Maria Tilaro (CERN)
      Slides
      Video in CDS
    • 12:25–12:40
      IDT: RapidIO Interconnect based Low Latency Multi Processor Heterogeneous Solutions 15m 500/1-001 - Main Auditorium

      In this presentation, IDT will give some background on RapidIO technology, including its use in 100% of wireless macro base station deployments globally, and the applicability of this embedded interconnect to CERN's particle-physics challenges. The presentation will cover the latency, scalability and energy-efficiency attributes of the RapidIO interconnect, and will touch on RapidIO-based solutions for the HPC market, including what is being done in the Open Compute Project.
      Speaker: Devashish Paul (Director of Strategic Marketing, Systems Solutions, IDT)
      Slides
    • 12:40–14:00
      Lunch 1h 20m Restaurant 1

    • 12:40–13:30
      Press lunch session 50m 61/1-009 - Room C

    • 14:00–14:10
      Welcome note from the CERN Director General, Dr Rolf Heuer 10m 500/1-001 - Main Auditorium

      Video in CDS
    • 14:10–14:50
      CERN Physics and IT Challenges 40m 500/1-001 - Main Auditorium

      Speaker: Sergio Bertolucci (CERN)
      Video in CDS
    • 14:50–15:05
      Brocade: Optimal flow placement in SDN networks 15m 500/1-001 - Main Auditorium

      Today's networks pose several challenges to network providers. These challenges fall into a variety of areas, ranging from determining efficient utilization of network bandwidth, to finding out which user applications consume the majority of network resources, to protecting a given network from volumetric and botnet attacks. Optimal flow placement deals with identifying network issues and addressing them in real time. The overall solution helps in building new services where the network is more secure and more efficient. The resulting benefits are increased network efficiency due to better capacity and resource planning, better security with real-time threat mitigation, and improved user experience as a result of increased service velocity. (A toy flow-placement heuristic is sketched below.)
      Speaker: Mohammad Hanif (Brocade)
      Slides
      Video in CDS
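      As a toy illustration of flow placement (invented for this summary, not Brocade's algorithm), the heuristic below routes each flow over the candidate path whose bottleneck link has the most spare capacity:

        # Greedy flow placement on the least-loaded candidate path.
        def place_flows(flows, paths, capacity):
            load = {link: 0.0 for link in capacity}
            placement = {}
            for flow_id, demand, candidates in flows:
                def headroom(path):            # spare capacity of the worst link
                    return min(capacity[l] - load[l] for l in path)
                best = max((paths[c] for c in candidates), key=headroom)
                placement[flow_id] = best
                for link in best:
                    load[link] += demand
            return placement

        capacity = {"a-b": 10.0, "b-c": 10.0, "a-c": 5.0}
        paths = {"via_b": ["a-b", "b-c"], "direct": ["a-c"]}
        flows = [("f1", 4.0, ["via_b", "direct"]),
                 ("f2", 4.0, ["via_b", "direct"])]
        print(place_flows(flows, paths, capacity))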
    • 15:05–15:35
      Oracle: Database and Data Management Innovations with CERN 30m 500/1-001 - Main Auditorium

      A short review of Oracle's 33-year partnership with CERN, with an eye towards the future.
      Speakers: Kevin Jernigan (Oracle), Lorena Lobato Pardavila (CERN), Manuel Martin Marquez (CERN)
      Slides
      Video
      Video in CDS
    • 15:35–15:50
      Rackspace: Significance of Cloud Computing to CERN 15m 500/1-001 - Main Auditorium

      The research collaboration between Rackspace and CERN is shaping how OpenStack cloud computing will move science workloads around the world for CERN, while reducing the barriers between clouds for Rackspace.
      Speaker: Giri Fox (Rackspace, Director, Customer Technology Services)
      Slides
      Video in CDS
    • 15:50–16:20
      Coffee 30m 500/1-001 - Main Auditorium

    • 16:20–16:35
      Seagate: Expanding CERN’s EOS storage with Seagate Kinetic disk technology 15m 500/1-001 - Main Auditorium

      Speakers: James Hughes (Seagate), Paul Hermann Lensing (Seagate (CH))
      Slides
      Video in CDS
    • 16:35–16:50
      Cisco: Next Generation Operating Systems 15m 500/1-001 - Main Auditorium

      The next-generation computing (NGC) platform is a new architectural concept intended to greatly enhance application performance in modern data centers by addressing the following challenges:
      • application parallelization (e.g. map/reduce), driven by the rapidly growing number of CPU cores per socket;
      • efficient use of resources, driven by the diversity of hardware as well as power constraints;
      • data-center-wide system consistency, where failure of system components is the norm, not the exception.
      The paradox of the IT industry is that hardware evolution has advanced at a much faster pace than system-software features. Hardware has become more powerful, with technologies like SR-IOV, VT-x, vNIC, SGX and multi-core CPUs, while operating systems still treat them as simple devices and do all the heavy lifting themselves. NGC proposes an operating-system redesign that gives applications control over the physical hardware (a.k.a. the data plane) and keeps the kernel out of the way as much as possible, while providing management (a.k.a. control-plane) functionality. The recent fast adoption of Linux containers (e.g. Docker, Rancher, lmctfy and others), which are essentially an operating-system virtualization technique, is proof that the data center is on a quest for a more application-aware and efficient way of abstracting the hardware. The Data Plane Computing System project in CERN openlab, executed between Cisco and the ALICE experiment, provides a proof of concept for these ideas.
      Speaker: Zeljko Susnjar (Cisco)
      Slides
      Video in CDS
    • 16:50–17:00
      Wrap-up 10m 500/1-001 - Main Auditorium

      Video in CDS
    • 17:00–18:00
      Cocktail and poster session 1h 500/1-001 - Main Auditorium

      • Feasibility study to use Intel Xeon/Phi CPUs, Xeon-FPGA computing accelerators and the Omni-Path network in the Event Filter Farm of the LHCb Upgrade. 2m
        This poster describes the planned feasibility studies for using Intel Xeon/Phi CPUs, the new concept of a Xeon-FPGA based accelerated compute platform, and the Omni-Path network in the Event Filter Farm foreseen for the LHCb upgrade in 2018, in which the whole detector readout chain will be modified to make a 40 MHz detector readout possible. The Intel Xeon/Phi is a coprocessor based on the Many Integrated Core architecture and optimized for highly parallelized tasks. The new Knights Landing version is interesting not only for the greater speed-up due to its high number of computational cores (50+), but especially for its better integration with the Intel Omni-Path network, which is also being considered as the network technology for the future Event Filter Farm. The concept platform of a Xeon-FPGA based computing accelerator uses a CPU linked via a high-speed link to a high-performance FPGA on which an accelerator is implemented. The FPGA has cache-coherent access to the main memory of the server and can collaborate with the CPU. As a first step, the performance of a muon trigger running on a normal CPU will be compared with its performance on the accelerator platform.
        Speakers: Christian Faerber (CERN), Karel Ha (Charles University (CZ))
      • Saving LHC grid data storage using machine learning technique 2m
        The LHCb experiment produces several petabytes of data every year, kept on disk and tape storage systems. Disks are much faster than tapes, but are far more expensive, and hence disk space is limited. It is impossible to fit all the data taken during the experiment's lifetime on disk; fortunately, fast access to a dataset is no longer needed once the analyses requiring it are over. It is therefore highly important to identify which datasets should be kept on disk and which should be kept as archives on tape. The metric used to decide which datasets to evict from the disk cache is the "popularity" of the dataset, i.e. whether it is likely to be used in the future or not. We discuss here the approach and the studies carried out to optimize such a data-popularity estimator. The inputs to the estimator are the dataset's usage history and its metadata (size, type, configuration, etc.). The system is designed to select the datasets that may be used in the future and should thus remain on disk. Studies have therefore been performed on how best to use a dataset's past information to predict its future popularity. In particular, we have carried out a detailed comparison of various time-series analysis, machine-learning classification, clustering and regression algorithms. We demonstrate that our approach can significantly improve disk-usage efficiency. (A minimal sketch of the classifier idea follows below.)
        Speaker: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
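        A minimal sketch of the estimator idea, with invented feature names, a synthetic label and an off-the-shelf model (the actual features, algorithms and data differ):

          # Predict from past usage whether a dataset will be accessed again
          # (keep on disk) or not (archive to tape). Requires scikit-learn.
          import numpy as np
          from sklearn.ensemble import RandomForestClassifier

          rng = np.random.default_rng(0)
          # columns: accesses last week, accesses last 6 months, age (weeks), size (GB)
          X = rng.random((1000, 4)) * [50, 500, 200, 5000]
          y = (X[:, 0] + 0.1 * X[:, 1] > 30).astype(int)  # synthetic "popular" label

          clf = RandomForestClassifier(n_estimators=100).fit(X, y)
          keep_on_disk = clf.predict_proba(X[:5])[:, 1] > 0.5
          print(keep_on_disk)                   # True: leave on disk; False: to tape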
      • Evolution of Database Replication Technologies for WLCG 2m
        Thanks to the openlab programme, Oracle and CERN have been working together for several years on the database replication technologies used at WLCG. This collaboration has resulted in notable improvements and changes in this area, especially with the introduction of Oracle GoldenGate and Oracle Active Data Guard as a replacement for Oracle Streams. In this poster, we provide an overview of the evolution of database replication technologies from their first deployment in 2007 until 2015, when the LHC was restarted for Run 2. Each replication solution used within the WLCG project is covered, with a general description of its features in the context of CERN's use cases, as well as the key monitoring tools which, combined with Oracle GoldenGate, provide stability for the replication environments at WLCG.
        Speaker: Lorena Lobato Pardavila (CERN)
      • Evaluating the power efficiency and performance of multi-core platforms using HEP workloads 2m
        As Moore's Law drives the silicon industry towards higher transistor counts, processor designs are becoming more and more complex. Areas of development include core count, execution ports, vector units, uncore architecture and, finally, instruction sets. This increasing complexity leads us to a place where access to shared memory is the major limiting factor, making feeding the cores with data a real challenge. On the other hand, the significant focus on power efficiency paves the way for power-aware computing and brings less complex architectures to data centers. In this poster we examine these trends and present the results of our experiments with the "Haswell-EP" processor family and highly scalable HEP workloads. (A sketch of one way to measure energy per workload follows below.)
        Speaker: Pawel Szostek (CERN)
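        One common way to obtain such energy figures on Linux (an assumption of this summary, not necessarily the authors' setup) is the Intel RAPL counter exposed under /sys/class/powercap; the sketch below reads the package-energy counter before and after a workload:

          # Energy and average power of a workload via the RAPL sysfs counter.
          # The path varies per machine, reading it usually requires root, and
          # the counter (in microjoules) wraps around occasionally.
          import time

          RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package 0

          def read_uj():
              with open(RAPL) as f:
                  return int(f.read())

          def workload():                       # stand-in for a HEP benchmark
              return sum(i * i for i in range(10**7))

          e0, t0 = read_uj(), time.time()
          workload()
          joules = (read_uj() - e0) / 1e6
          print(f"{joules:.2f} J, {joules / (time.time() - t0):.2f} W average")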
      • Big Data Discovery: A Self-service Analytics Platform 2m
        CERN's particle-accelerator infrastructure is comprehensively heterogeneous. A number of critical subsystems, representing cutting-edge technology in several engineering fields, need to be considered: cryogenics, power converters, magnet protection, etc. The historical monitoring and control data derived from these systems has been persisted mainly in Oracle database technologies, but also in other formats such as JSON, XML and plain-text files. All of these must be integrated and combined in order to provide a full picture of the overall status of the accelerator complex. Oracle Big Data Discovery aims to empower self-service data exploration and discovery in such scenarios, using the power of the Hadoop ecosystem to transform and analyze large volumes of heterogeneous data.
        Speakers: Antonio Romero Marin (CERN), Manuel Martin Marquez (CERN)
      • Smart Data analysis of CERN Control Systems 2m
        The CERN Large Hadron Collider (LHC) is known to be one of the most complex scientific machines ever built. Its correct functioning relies on the integration of a multitude of interdependent industrial control systems, which provide different and essential services to run and protect the accelerators. Each of these control systems produces a huge amount of data that holds important potential value for optimizing the entire system in terms of operational efficiency and reliability. These extremely large control data sets, often termed "Big Data", cannot be processed with traditional database tools. This is why it is necessary to design and develop new analytical frameworks, data models and data-mining algorithms scalable enough to cope with the huge volume of data, and at the same time capable of fulfilling the time constraints. Moreover, due to the critical nature of industrial systems, the analysis should be carried out without affecting the industrial processes and system availability: no downtime can be tolerated because of its inherently expensive consequences. CERN also represents a unique and critical environment in which to evaluate the developed analytical solutions, due to the size and heterogeneity of its industrial control systems. The main goal of the openlab collaboration between Siemens and CERN is to design and implement a Big Data platform that makes use of innovative analytical techniques and software packages flexible enough to match the challenging data-analytics requirements of the various industrial control systems.
        Speaker: Filippo Maria Tilaro (CERN)
      • A Success Story: CERN - SIEMENS/ETM Collaboration to Develop WinCC Open Architecture 2m
        Speakers: Fernando Varela Rodriguez (CERN), Piotr Golonka (CERN)
      • Evaluation of the Huawei UDS cloud storage systems for HEP applications 2m
        Amazon S3 is a widely adopted web API for scalable cloud storage that could also fulfill the storage requirements of the high-energy physics community. CERN has been evaluating this option using some key HEP applications, such as ROOT and the CernVM File System (CvmFS), with S3 back-ends. In this contribution, we present an evaluation of two versions of the Huawei UDS storage system stressed with a large number of clients executing HEP software applications. The performance of concurrently storing individual objects is presented alongside more complex data-access patterns, as produced by the ROOT data-analysis framework. Both Huawei UDS generations scale successfully and support multiple-byte-range requests, in contrast with Amazon S3 or Ceph, which do not support these commonly used HEP operations (an example of such a request is sketched below). We further report on the S3 integration with recent CvmFS versions and summarize the experience with CvmFS/S3 for publishing daily releases of the full LHCb experiment software stack.
        Speakers: Maria Arsuaga Rios (CERN), Seppo Heikkila (CERN)
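        For illustration, a multiple-byte-range request is a single HTTP GET whose Range header lists several byte ranges; the URL below is a placeholder. A server that supports the pattern answers 206 Partial Content with a multipart/byteranges body:

          # One GET fetching three scattered byte ranges, as ROOT does when
          # reading selected branches of a remote file. Requires requests.
          import requests

          url = "https://storage.example.com/bucket/events.root"  # placeholder
          ranges = [(0, 99), (4096, 8191), (1_000_000, 1_000_511)]
          header = "bytes=" + ",".join(f"{a}-{b}" for a, b in ranges)

          resp = requests.get(url, headers={"Range": header})
          print(resp.status_code)                 # 206 if ranges are honoured
          print(resp.headers.get("Content-Type")) # multipart/byteranges; ...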