EOS workshop

31/3-004 - IT Amphitheatre (CERN)


The 3rd EOS workshop is being prepared to bring together the EOS community.

This two-day event at CERN is organized to provide a platform for exchange between developers, users and sites running EOS.

The first day of the workshop takes place in the IT Amphitheatre, covering a wide range of topics related to EOS development, operations, deployments, applications, collaborations and various use cases.

The second day will focus on operational aspects: demonstrations, hands-on deep-dive tutorials, and the future roadmap and service evolution. The practical sessions will take place in a CERN computer centre meeting room (513-1-024).

We invite all participants to join a social dinner on Monday evening (at your expense).


Participation in the workshop is free of charge.

Please register for the workshop here. Don't forget to submit an abstract if you would like to share your experience and ideas with the EOS community.

If you are interested in joining the EOS community, this is the perfect occasion!

We look forward to seeing and talking to many of you in February 2019!

Your CERN EOS team.

There is a live webcast for this event
Participants (89/90):
  • A. Egon Cholakian
  • Ahmad Siar Hesam
  • Alberto Pace
  • Aleksei Golunov
  • Alex Davis
  • Alexander Gerbershagen
  • Andrea Ceccanti
  • Andrea Manzi
  • Andreas Joachim Peters
  • Andrey KIRYANOV
  • Andrey Zarochentsev
  • Armin Burger
  • Belinda Chan
  • Bo Jayatilaka
  • Branko Blagojevic
  • Brian Paul Bockelman
  • Caio Costa
  • Cristian Contescu
  • Crystal Michelle Chua
  • Dan Szkola
  • Darrell Long
  • Dejan Cusic
  • Denis Pugnere
  • Dietrich Liko
  • Diogo Castro
  • Dirk Duellmann
  • Ean Mackney
  • Elvin Alin Sindrilaru
  • Enrico Bocchi
  • Erich Birngruber
  • Fabio Luchetti
  • Felix Böhm
  • Franck Eyraud
  • Gavin Kennedy
  • Georgios Bitzes
  • Georgios Kaklamanos
  • German Cancio Melia
  • Germano Massullo
  • Gregor Molan
  • Guido Aben
  • Gyan Shrestha
  • Herve Rousseau
  • Holger Angenent
  • Hugo Gonzalez Labrador
  • Iago Santos Pardo
  • Igor Tkachenko
  • Ingrid Kulkova
  • Ivan Arizanovic
  • Ivan Kadochnikov
  • Ivan Kashunin
  • Jakub Moscicki
  • Jan Iven
  • Jean-Michel Barbet
  • João Vicente
  • Julien Leduc
  • Jörn Friedrich Dreyer
  • Lepeke Phukungoane
  • Luca Mascetti
  • Maria Arsuaga Rios
  • Martin Vala
  • Michael D'Silva
  • Michael Davis
  • Michal Simon
  • Mihai Carabas
  • Mihai Patrascoiu
  • Miloslav Straka
  • Nick Ziogas
  • Nikola Hardi
  • Nitin Agarwal
  • Nuri Twebti
  • Oliver Keeble
  • Paul Musset
  • Petra Loncar
  • Pier Valerio Tognoli
  • Prasun Singh Roy
  • Rainer Toebbicke
  • Roberto Valverde Cameselle
  • Sean Murray
  • Simone Campana
  • Stefan Ost
  • Sumio Kato
  • Tom Needham
  • Valery Mitsyn
  • Vikas Singhal
  • Volodymyr Yurchenko
  • Xavier Espinal
  • Yujiang BI
  • Yuri Butenko
  • Monday, 4 February
    • 09:30–10:00
      Welcome: Registration/Coffee/Introduction 31/3-004 - IT Amphitheatre

    • 10:00–12:00
      EOS Development 31/3-004 - IT Amphitheatre

      Convener: Jakub Moscicki (CERN)
      • 10:00
        EOS Citrine updates and developments 20m

        This presentation will outline the most important developments and changes that have gone into the EOS Citrine release in the past year.

        Speaker: Mr Elvin Alin Sindrilaru (CERN)
      • 10:20
        New namespace in production: An overview, and future plans 20m

        The new EOS namespace implementation based on QuarkDB entered production during 2018 and has been a full success. In this presentation, we report the current status of and experience with running new-namespace instances in production, as well as some preliminary plans for deprecating the old namespace.

        Speaker: Georgios Bitzes (CERN)
      • 10:40
        Status report of eosxd as EOS filesystem interface 20m

        The new FUSE interface for EOS has now been under development for two years and entered production usage at CERN in Q4 2018. This presentation will highlight the current state of development and pending issues, and outline the target performance and usability as a filesystem interface.

        Speaker: Andreas Joachim Peters (CERN)
      • 11:00
        EOS ACL enhancements 10m

        Interfacing EOS ACLs to Linux RichACLs triggered some enhancements, and a spin-off.

        Speaker: Rainer Toebbicke (CERN)
      • 11:10
        EOS Testing Service development: leveraging CI + Kubernetes 15m

        Development work-flows and quality assurance of tagged releases are of pivotal importance in any large scale software project environment.

        In order to prepare EOS for the upcoming storage challenges of LHC run 3, the next evolution in the CI framework is to add a fully automated distributed EOS storage setup and testing deployment using the Kubernetes/OpenShift platform provided by CERN IT.

        Speaker: Fabio Luchetti (CERN)
      • 11:25
        CERN Tape Archive: update on continuous integration 15m

        CTA and EOS integration requires parallel development of features in both software stacks, which needs to be synchronized and systematically tested on a dedicated distributed development infrastructure for each commit in the code base.

        CTA Continuous Integration development initially started as a place to run functional system tests against the freshly built software, but its importance has grown over time.

        This presentation is an update on CTA continuous integration use cases.

        Speaker: Julien Leduc (CERN)
      • 11:40
        XRootD - Releases, Status & Planning 2019 20m

        Overview of XRootD release status and planned developments for 2019.

        Speaker: Michal Kamil Simon (CERN)
    • 12:00–13:40
      Lunch Break 1h 40m 31/3-004 - IT Amphitheatre


      Restaurant 2

    • 13:40–13:45
      Workshop Dinner Information 5m 31/3-004 - IT Amphitheatre

    • 13:45–15:35
      EOS Operations 31/3-004 - IT Amphitheatre

      Convener: Elvin Alin Sindrilaru (CERN)
      • 13:45
        EOSOps @ CERN 20m


        Speakers: Cristian Contescu (CERN), Herve Rousseau (CERN)
      • 14:05
        Migration from EOSUSER to EOSHOME 20m

        This presentation will report on the migration from the EOSUSER instance running EOS Aquamarine to the new EOSHOME cluster architecture running EOS Citrine with the QuarkDB namespace.

        Speaker: Luca Mascetti (CERN)
      • 14:25
        EOSHOME Backup Prototype using restic 10m

        EOS Home (CERNBox) now has more than 16K users and more than 500 million files, which are backed up every day using a set of scheduled scripts. Restoring data is currently an internal process requiring a lot of manual work. We are therefore exploring alternatives for a more modern and scalable backup system that could eventually allow integrating the restore process into end-user tools such as the CERNBox interface.

        In particular, we are exploring a distributed backup system based on the open-source tool restic, using the CERN S3 service as storage back-end.

        Speaker: Roberto Valverde Cameselle (Universidad de Oviedo (ES))
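A minimal sketch of the approach described above: a scheduled backup wrapper driving restic against an S3 repository. The endpoint, bucket and user tree below are hypothetical placeholders, not the actual CERN configuration.

```python
# Minimal sketch of a restic-based backup wrapper with an S3 back-end.
# Repository URL, bucket and user tree are hypothetical examples.
import subprocess

S3_REPO = "s3:https://s3.example.ch/eoshome-backup"

def restic_cmd(action, *args):
    """Build a restic command line against the S3 repository."""
    return ["restic", "--repo", S3_REPO, action, *args]

# One-time repository initialisation, then a nightly incremental run:
init_cmd = restic_cmd("init")
backup_cmd = restic_cmd("backup", "/eos/user/j/jdoe")

# A scheduler would then execute these, e.g.:
# subprocess.run(backup_cmd, check=True)
```

Since restic snapshots are content-addressed and deduplicated, per-user restores (e.g. `restic restore latest --target <dir>`) become feasible building blocks for end-user restore tools.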
      • 14:35
        EOS using Docker and experience 15m

        This talk will present an update on an automatic tool to configure/update EOS storage using Docker (eos-docker-utils). Currently, plain EOS and ALICE EOS storage configurations are supported. The first production storage is running for the ALICE experiment (ALICE::Kosice::EOS) and for the Govorun supercomputer at JINR.

        Speaker: Martin Vala (Technical University of Kosice (SK))
      • 14:50
        EOS at the Fermilab LPC Physics Center 20m

        We report on current experiences and challenges with running an EOS instance for the Fermilab LHC Physics Center (LPC) computing cluster. The LPC cluster is a 4500-core user analysis cluster with 7 PB of EOS storage, an increase of about 40% over the last year. The LPC cluster supports several hundred active CMS users at any given time. We will also discuss our recent upgrade from Aquamarine to Citrine, as well as plans for the near future.

        Speaker: Dan Szkola (Fermi National Accelerator Lab. (US))
      • 15:10
        EOS Documentation and Tesla Box 15m

        The current version of CERN EOS, which provides a storage solution for collecting results from the CERN LHC experiments, is the result of 10 years of research into providing the best disk-based, low-latency, high-availability storage solution. These research results are ready for productization. The first step in CERN-Comtrade EOS productization is providing professional documentation for EOS.

        We present the development of professional documentation for the EOS open-source software. This project unifies CERN's research results in providing the best low-latency storage solution with Comtrade's 23 years of experience in providing professional documentation for cutting-edge storage and data management. Development of the EOS documentation has the following steps:

        (1) Knowledge transfer of CERN research results to Comtrade developers.
        (2) A real deployment of EOS that enables Comtrade's developers to prepare the list of EOS features.
        (3) Documentation of separate EOS features based on experience from real installations.
        (4) Joining Comtrade developers with CERN storage researchers to achieve the goal of EOS productization. This allows Comtrade developers to submit change requests (bugs), improvement requests, and improvement proposals to the CERN EOS team.
        (5) Structuring of the collected and documented EOS features into coherent documentation.
        (6) Providing the complete structure and style of professional documentation for EOS.

        This documentation process started in 2015. Currently, it is an independent research effort: the work of providing professional documentation for end customers.

        Speaker: Gregor Molan (COMTRADE D.O.O (SI))
      • 15:25
        EOS at RO-03-UPB 10m

        Last year we deployed EOS Citrine at RO-03-UPB for the ALICE experiment. This short presentation will share the steps taken, the issues encountered, and the lessons learned.

        ACKNOWLEDGMENT - Work supported by SIMULATE project (no. 15 /17.10.2017): Simulations of Ultra-High Intensity Laser Pulse Interaction with Solid Targets.

        Speaker: Mr Mihai Carabas (University POLITEHNICA of Bucharest)
    • 15:35–16:00
      Coffee Break 25m 31/3-004 - IT Amphitheatre

    • 16:00–17:20
      EOS Ecosystems 31/3-004 - IT Amphitheatre

      Convener: Andreas Joachim Peters (CERN)
      • 16:00
        CloudStor Minio: Improving S3 performance in CloudStor 20m

        We at AARNet, as well as the research community in Australia, need bulk data access to our sync servers, because one-off ingest of seriously large datasets performs poorly across the WebDAV/sync pathway. This presentation will discuss AARNet's experiences, journey and many iterations to achieve high-speed data transfers via the S3 protocol (the de facto standard), and the challenges and improvements made along the way.

        Minio helps some users interact with CloudStor using the S3 protocol. In the beginning we mounted EOS, CloudStor's storage backend, via FUSE and ran Minio on top. We found that with this approach transfers were very slow for large files, and our FUSE mount kept crashing due to overloaded metadata queries. Fortunately, Minio is open source and written in the Go programming language, which means we could start hacking on it!

        We then modified Minio to stage file uploads locally and use xrootd's xrdcopy command as a background task, which, from the user's point of view, increased uploads from ~4 MB/s to ~800 MB/s. Later on, a user group uploaded a dataset with many tiny files (>100,000) in one bucket. This uploaded without issue, but an object listing on the bucket took over 2 hours. We then modified Minio again so that file listing was done via EOS's /proc/user interface rather than via the EOS FUSE mount, reducing the time to 40 seconds. This worked well, but the code was no longer maintainable, with modifications all over the code base.

        From here we made the decision to start over: rather than hacking the Minio code, we decided to write a separate EOS gateway module for Minio. The goal was to fork Minio and improve it to work with EOS in a way that is easy to maintain and update. The other goal of the EOS gateway was to remove the need for EOS's FUSE connector, as it is a source of slowness.

        The EOS gateway for Minio communicates with EOS via EOS's WebDAV, EOS's /proc/user web services, and xrootd's xrdcopy. We also added ownCloud hooks so that files coming in and out of S3 are scanned, allowing users to share uploaded data with other users and groups.

        AARNet's Minio modifications provide an S3 implementation that allows tighter integration with open-source and commercial products already part of users' workflows. These include collaboration and backup products such as FigShare, LabArchive, Alfresco and Commvault, to name just a few. This enables users to upload, download, view and share, as well as use S3, on the same data area.

        Speaker: Michael D'Silva (AARNet)
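To illustrate the /proc/user listing trick described above: instead of a FUSE readdir, a gateway can ask the MGM directly over HTTP. A rough sketch follows; the MGM endpoint and path are hypothetical, while the mgm.cmd/mgm.path parameter form follows EOS's /proc/user interface.

```python
# Sketch: build an EOS /proc/user request that lists a directory via the MGM,
# bypassing the FUSE mount. Host and path are hypothetical.
from urllib.parse import urlencode

MGM = "https://eos-mgm.example.ch:8443"

def proc_ls_url(path):
    """Return a /proc/user URL performing a long listing of `path`."""
    query = urlencode({"mgm.cmd": "ls", "mgm.path": path, "mgm.option": "l"})
    return f"{MGM}/proc/user/?{query}"

url = proc_ls_url("/eos/cloudstor/bucket1")
```

A single HTTP round trip to the MGM returns the whole listing, which is why it scales so much better than walking the directory through FUSE.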
      • 16:20
        EOS as storage back-end for JRC scientific data processing 20m

        The Joint Research Centre (JRC) of the European Commission has set up the JRC Earth Observation Data and Processing Platform (JEODPP) as an infrastructure to enable JRC projects to process and analyze big geospatial data in support of EU policy needs. This platform is built upon commodity hardware and became fully operational in mid-2016, with platform extensions in 2017 and 2018. EOS was deployed on the JEODPP as the core storage system and is used in production with strong support from the CERN EOS team.

        The JEODPP infrastructure is actively used by more than 25 JRC projects for data storage and various types of data processing and analysis. In order to serve the growing needs of the JRC projects for data storage and processing capacity, the platform was extended in 2018. It currently consists of the EOS system as storage back-end, with a total gross capacity of 13 PB, and processing and service nodes with a total of 1400 cores. A further extension to 17 PB gross storage and 1800 CPU cores is already in the pipeline.

        In 2018 the EOS service was migrated to the new Citrine release. The presentation will give an overview of the implemented platform, the current status, issues identified during the migration to the Citrine release, and experience gained with EOS and the FUSE client as the main storage back-end. Testing of new functionalities, such as the MGM namespace in QuarkDB, wider usage of the fusex client, and the workflow engine, is envisaged for 2019.

        Speaker: Mr Armin Burger (European Commission - Joint Research Centre)
      • 16:40
        EOS as an online DAQ buffer for the ProtoDUNE Dual Phase experiment 20m

        The ProtoDUNE-Dual Phase experiment is based at CERN and is carried out with the support of the Neutrino Platform. The two ProtoDUNE experiments are prototypes of the DUNE (Deep Underground Neutrino Experiment) detector, which has just begun construction in the United States.
        The ProtoDUNE-DP detector will generate a data flow of up to 130 Gb/s uncompressed, to which an estimated compression factor of 10 should be applied. The challenges to be met include both the storage and the online processing of this data in a local buffer before the data is exported to remote storage systems.
        In this presentation I will present the tests carried out on the EOS storage system, the choices made, and the infrastructure put in place as part of this experiment.

        Speaker: Denis Pugnere (Centre National de la Recherche Scientifique (FR))
      • 17:00
        CERNBox: EOS Powered CS3 Platform 20m

        CERNBox is the CERN cloud storage hub. It allows synchronizing and sharing files on all major desktop and mobile platforms (Linux, Windows, macOS, Android, iOS), aiming to provide universal access and offline availability to any data stored in the CERN EOS infrastructure. With more than 16000 users registered in the system, CERNBox has responded to the high demand in our diverse community for an easily accessible cloud storage solution that also integrates with other CERN services for big science: visualization tools, interactive data analysis and real-time collaborative editing.

        Collaborative authoring of documents is now becoming standard practice with public cloud services, and within CERNBox we are looking into several options: from the collaborative editing of shared office documents with different solutions (Microsoft, OnlyOffice, Collabora), to integrating markdown and LaTeX editors, to exploring the evolution of Jupyter Notebooks towards collaborative editing, where the latter leverages the existing SWAN physics analysis service.

        We report on our experience managing this technology and applicable use cases, also in a broader scientific and research context, and on its future evolution, with highlights on the current development status and future road map. In particular, we will highlight the future move to an architecture based on microservices, to easily adapt and evolve the service with technology and usage, notably to unify CERN home directory services.

        Speaker: Hugo Gonzalez Labrador (CERN)
    • 19:00–21:00
      Social Dinner 2h 31/3-004 - IT Amphitheatre


      The social dinner will take place in the restaurant Luigia / Petit-Saconnex.

  • Tuesday, 5 February
    • 09:30–10:10
      EOS Ecosystems 513/1-024
      • 09:30
        CERN Tape Archive initial deployments 20m

        CTA is designed to replace CASTOR as the CERN Tape Archive solution in order to face the scalability and performance challenges of LHC Run-3.

        This presentation will give an overview of the initial software deployment on production grade infrastructure. We discuss its performance against various workloads: from artificial stress tests to production conditions with an LHC experiment. CTA's recent participation in the Heavy Ion Data Challenge will also be covered and a roadmap for future deployments will be presented.

        Speaker: Julien Leduc (CERN)
      • 09:50
        EOS XDC Developments 20m

        Short presentation of the XDC project.
        Highlights of EOS developments as part of XDC.
        Future development roadmap.

        Speaker: Mihai Patrascoiu (CERN)
    • 10:10–10:30
      Coffee Break 20m 513/1-024
    • 10:30–11:30
      EOS Tutorials: Citrine & QuarkDB & Geoscheduler 513/1-024
      Convener: Jan Iven (CERN)
      • 10:30
        Demo: Setting up and operating a new-namespace EOS instance 20m

        In this interactive demo, we show how to setup and operate a new-namespace EOS instance, including the corresponding highly-available QuarkDB backend cluster.

        Speaker: Georgios Bitzes (CERN)
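        To give a flavour of what the demo covers: a QuarkDB raft node is an XRootD process whose configuration follows the pattern below (directives per the QuarkDB documentation; hostnames, ports and paths are placeholders).

```
xrd.port 7777
xrd.protocol redis:7777 libXrdQuarkDB.so
redis.mode raft
redis.database /var/lib/quarkdb/node-1
redis.myself qdb1.example.ch:7777
```

Three such nodes typically form the highly-available cluster, and the MGM is then pointed at the raft ensemble.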
      • 10:50
        EOS high-availability using QuarkDB 20m

        This tutorial will walk you through the steps of enabling the master-slave setup when using the new QuarkDB namespace.

        Speaker: Mr Elvin Alin Sindrilaru (CERN)
      • 11:10
        EOS Scheduler tutorial 20m

        The tutorial will go through the configuration of the EOS scheduler.

        Speaker: Andrea Manzi (CERN)
    • 11:30–12:00
      EOS Tutorials: eosxd as a filesystem 513/1-024
      Convener: Jan Iven (CERN)
      • 11:30
        Tips & tricks using eosxd for filesystem access 30m

        This tutorial will introduce how to use and configure eosxd to provide filesystem access to EOS. A particular focus will be on explaining how to debug issues, understand performance, illustrate different authentication mechanisms, and manage thousands of clients in a production setup.

        Speaker: Andreas Joachim Peters (CERN)
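        For orientation, an eosxd mount is driven by a small JSON configuration file. The sketch below is an assumption-laden minimal example (key names per the eosxd documentation; hostname and paths are placeholders, so treat the tutorial/docs as authoritative):

```
{
  "name": "home",
  "hostport": "eos-mgm.example.ch:1094",
  "remotemountdir": "/eos/home/",
  "localmountdir": "/eos/home/",
  "auth": { "krb5": 1 }
}
```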
    • 12:00–13:45
      Lunch Break 1h 45m
    • 13:45–14:00
      Service Evolution: Erasure Encoding 513/1-024
      • 13:45
        Using EOS with Erasure Encoding 5m

        This will be a quick overview about erasure coding and layout handling in EOS.

        Speaker: Andreas Joachim Peters (CERN)
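        For context, EOS selects the file layout per directory via extended attributes; a RAID6-like RAIN layout is forced with attributes along these lines (a sketch based on the EOS layout documentation; the exact values are instance-specific):

```
sys.forced.layout="raid6"
sys.forced.nstripes="6"
sys.forced.checksum="adler"
sys.forced.blockchecksum="crc32c"
```

The `eos attr set default=raid6 <dir>` convenience command sets an equivalent group of attributes in one go.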
      • 13:50
        EOS as tape storage replacement 10m

        Sites have started to consider running a dedicated disk storage system as a replacement for a tape storage system.

        This presentation will discuss various scenarios and possible developments helping to increase the data safety of EOS as a tape-replacement storage system, as well as several possible configuration options and their operational impact.

        Speaker: Andreas Joachim Peters (CERN)
    • 14:00–14:45
      Service Evolution: Protocols - XRootD - HTTP - GRPC - TPC 513/1-024
      • 14:00
        HTTP(S) based APIs for EOS 15m

        EOS is built on the XRootD framework and XRootD protocol. To provide the HTTP protocol for data access (DAV) and metadata access (DAV, REST), libmicrohttpd has been added to the FST and MGM daemons. libmicrohttpd runs a separate thread pool and event loop, and limits the request rate to ~100 Hz. More recently, XRootD gained a native HTTP protocol bridge (XrdHttp) which shares the same thread pool as the XRootD protocol and scales better in terms of requests/s. This presentation will explain how XrdHttp can be integrated to enable the HTTP and HTTPS protocols out of the box, bridging the two APIs without rewriting the dedicated protocol handlers for DAV and the CERNBox (ownCloud) protocols.
        A second add-on to the protocols is gRPC, a widely adopted framework for implementing RPC APIs with synchronous and asynchronous IO. Both protocol extensions are currently in separate development branches but can be added to the next major release of EOS.

        Speaker: Andreas Joachim Peters (CERN)
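        As an illustration of the XrdHttp bridge mentioned above, enabling it in an xrootd configuration file takes only a few directives (per the XRootD documentation; the port and certificate paths are placeholders):

```
# Load the XrdHttp protocol bridge next to the native XRootD protocol
xrd.protocol XrdHttp:8443 libXrdHttp.so
# Host certificate/key for HTTPS
http.cert /etc/grid-security/hostcert.pem
http.key /etc/grid-security/hostkey.pem
```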
      • 14:15
        Microservices, gRPC, Protobuf and EOS 20m

        Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. They also enable the continuous delivery and deployment of large and complex applications, and allow organizations to evolve their technology stack in small steps, since new technologies can be on-boarded at fairly low cost.

        gRPC is an RPC platform originally developed by Google (under the Cloud Native Computing Foundation since 2017), announced and made open source in late February 2015. The name "gRPC" is a recursive acronym: gRPC Remote Procedure Call.

        The protocol itself is based on HTTP/2 and exploits many of its benefits. It supports several built-in features inherited from HTTP/2, such as header compression, persistent single TCP connections, and cancellation and timeout contracts between client and server, and it inherits HTTP/2's built-in flow control on data.

        EOS has recently introduced a gRPC endpoint for metadata operations which can be consumed from a variety of clients, fostering the integration with other services.

        Speaker: Hugo Gonzalez Labrador (CERN)
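        To make the idea concrete, a gRPC metadata endpoint is described by a protobuf service definition. The sketch below is purely illustrative; the actual EOS proto files differ, and the service and message names here are made up.

```
syntax = "proto3";

// Hypothetical metadata service in the spirit of the EOS gRPC endpoint.
service EosMetadata {
  // Stat a namespace entry by path.
  rpc Stat (StatRequest) returns (StatReply) {}
}

message StatRequest {
  string path = 1;
}

message StatReply {
  uint64 size = 1;
  uint64 mtime = 2;
  string checksum = 3;
}
```

From such a definition, clients in many languages can be generated, which is what fosters the integration with other services.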
      • 14:35
        EOS TPC transfers with GSI delegation 10m

        The presentation will outline a possible deployment scenario in which we use a vanilla XRootD server as a gateway for doing TPC transfers to/from EOS while using client credential (GSI) delegation.

        Speaker: Elvin Alin Sindrilaru (CERN)
    • 14:45–15:35
      Service Evolution: Authentication & Authorization - Kerberos - OpenID - Macaroons - Tokens 513/1-024
      • 14:45
        Beyond X.509: token-based AuthN & AuthZ for HEP 25m

        How to use token-based AuthN & AuthZ for HEP.

        Speaker: Andrea Ceccanti (Universita e INFN, Bologna (IT))
      • 15:10
        Progress in token-based auth for the WLCG 25m

        In 2018, as part of the effort to replace the Globus Toolkit's security infrastructure, a flurry of new approaches to token-based authentication and authorization were attempted in the WLCG. This includes work on token formats (such as SciTokens, Macaroons, or WLCG JWT) and token acquisition workflows. After a year of experimentation, some common patterns are starting to emerge.

        The token-based approach, which relies on describing the bearer's capabilities, is more flexible than the traditional GSI setup, which relies on identity mapping. In this presentation, I'll outline the difference between the two and take a tour through the different token schemes. I'll also discuss the new support in XRootD for these different authorization techniques.

        Speaker: Brian Paul Bockelman (University of Nebraska Lincoln (US))
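To illustrate the capability-based model described above, here is an unsigned JWT-style payload in the spirit of the WLCG token profile. The issuer, audience and paths are made up; a real token is signed by the issuer and verified by the storage endpoint.

```python
# Build the header+payload part of a capability token (signature omitted).
# The scope names WHAT the bearer may do, not WHO the bearer is.
import base64
import json
import time

payload = {
    "iss": "https://iam.example.org",        # hypothetical token issuer
    "sub": "jdoe",
    "aud": "https://eos-mgm.example.ch",     # hypothetical storage endpoint
    "exp": int(time.time()) + 3600,
    "scope": "storage.read:/data storage.modify:/data/user/jdoe",
}

def b64url(raw: bytes) -> str:
    """URL-safe base64 without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {"alg": "RS256", "typ": "JWT"}
unsigned_token = b64url(json.dumps(header).encode()) + "." + \
                 b64url(json.dumps(payload).encode())
```

A storage endpoint authorizes the request purely from the `scope` claims, with no identity-to-account mapping step as in GSI.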
    • 15:35–16:00
      EOS Development: Roadmap Discussion 513/1-024
      Convener: Luca Mascetti (CERN)
      • 15:35
        EOS Project Roadmap 2019 10m

        We will present the plan for the development roadmap in 2019. The following discussion should help to complement the plan with requests from the community.

        Speaker: Andreas Joachim Peters (CERN)
      • 15:45
        Discussion / Community Input 15m

        Please put forward what you would like to see (prioritized) in the roadmap proposal. Report missing features and make suggestions for simplifications or enhancements.

    • 16:00–16:15
      Final Session: QA & Closing 513/1-024
      Convener: Andreas Joachim Peters (CERN)