EOS Workshop

Europe/Zurich
31/3-004 - IT Amphitheatre (CERN)

Description

The second EOS workshop is being organized to bring together the EOS community.

This two-day event at CERN provides a platform for exchange between developers, users and sites running EOS.

Outline

The EOS development and operation teams will present the current state of the art, best practices and the future road map.

In particular we will report on the progress of the scale-out namespace architecture and the new FUSE implementation.

We warmly invite sites to present their current deployment, operational experiences, possible future deployment plans and input for improvements.

We encourage experiment representatives to present their future storage requirements.

Hands-on sessions are foreseen to enable exchange of information between operation teams at different sites and the development teams.

The first day will finish with a social dinner (at your own expense).

Fees

Participation in the workshop is free of charge.

If you are interested in joining the EOS community, this is the perfect occasion!

Please register for the workshop here. Don't forget to submit an abstract if you want to share your experience and ideas with the EOS community.

We hope to see many of you in February 2018!

Your CERN EOS team.

Registration
EOS Workshop registration form
Participants
  • Alberto Pace
  • Aleksei Golunov
  • Andrea Manzi
  • Andreas Joachim Peters
  • Andrey Baginyan
  • Andrey Kirianov
  • Andrey Zarochentsev
  • Armin Burger
  • Belinda Chan
  • Bo Jayatilaka
  • Branko Blagojevic
  • Brett Clifford
  • Costin Grigoras
  • Cristian Contescu
  • Crystal Chua
  • Dan van der Ster
  • Daniel Szkola
  • Daniel Valbuena Sosa
  • David Jericho
  • Denis PUGNERE
  • Diana Scannicchio
  • Elvin Sindrilaru
  • Enrico Bocchi
  • FaHui Lin
  • Franck Eyraud
  • Franco Brasolin
  • Georgios Alexandropoulos
  • Georgios Bitzes
  • Giuseppe Lo Presti
  • Gyan Shrestha
  • Haibo li
  • Herve Rousseau
  • Hugo Gonzalez Labrador
  • Igor Doko
  • Ingrid Kulkova
  • Ivan Arizanovic
  • Ivan Kashunin
  • Jakub Moscicki
  • Jan Iven
  • Jean-Michel Barbet
  • Jesus LOPEZ
  • Joel Closier
  • Jozsef Makai
  • Julien Collet
  • Latchezar Betev
  • Lubos Kopecky
  • Luca Mascetti
  • Maria Arsuaga Rios
  • Martin Vala
  • Mason Proffitt
  • Massimo Lamanna
  • Michael D'Silva
  • Michael Davis
  • Michal Strnad
  • Michal Zimniewicz
  • Miguel Martinez Pedreira
  • Mihai Ciubancan
  • Mihai Patrascoiu
  • Miloslav Straka
  • Miroslav Bauer
  • Monika Grothe
  • Nuri Twebti
  • Oliver Keeble
  • Paul Musset
  • Pete Eby
  • Pierre Vande Vyvre
  • Piotr Mrowczynski
  • Qi Mengyao
  • Radu Popescu
  • Remy Pelletier
  • Roberto Valverde Cameselle
  • Simone Campana
  • Tim Hallyburton
  • Ulrich Fuchs
  • Valery Mitsyn
  • Veselin Vasilev
  • Volodymyr Yurchenko
  • Xavier Espinal
  • Yaodong Cheng
  • Yuri Butenko
Surveys
EOS Workshop QA
  • Monday, February 5
    • 9:15 AM - 9:45 AM
      Welcome: Registration/Coffee
    • 9:45 AM - 12:00 PM
      Developing EOS & CO: Kickoff Session
      • 9:45 AM
        Introduction: from workshop to workshop 10m

        This presentation will be a short introduction to the workshop agenda and provide some basic context to understand the current status and the future roadmap.

        Speaker: Andreas Joachim Peters (CERN)
      • 9:55 AM
        The new EOS website 15m

        The aim of this presentation is to introduce the new EOS website, where users and developers can find all the information they need in one place, with easy interaction and accessibility from all types of devices.

        Speaker: Dr Maria Arsuaga Rios (CERN)
      • 10:10 AM
        The EOS Citrine Version 20m

        This presentation will cover the development and current status of the EOS Citrine release.

        Speaker: Elvin Alin Sindrilaru (CERN)
      • 10:30 AM
        The new namespace and QuarkDB 20m

        EOS has outgrown the limits of its legacy in-memory namespace implementation, presenting the need for a more scalable solution. In response to this need we developed QuarkDB, a highly-available datastore capable of serving as the metadata backend for EOS.

        We will present the overall system design, and several important aspects associated with it, such as our efforts in providing comparable performance to the in-memory namespace through extensive caching and latency-hiding techniques.
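
        As a hedged illustration of the client side: QuarkDB speaks the Redis wire protocol, so a stock Redis client can inspect a node. In the sketch below the port and the key names are illustrative assumptions, not the actual EOS namespace layout.

          # Minimal sketch, assuming a QuarkDB node listening on localhost:7777.
          # QuarkDB speaks the Redis wire protocol, so redis-py can act as client.
          import redis

          qdb = redis.StrictRedis(host="localhost", port=7777)

          # Ask the node about its Raft state (leader/follower, term, ...).
          for line in qdb.execute_command("raft-info"):
              print(line.decode())

          # The data model consists of plain Redis structures; this key name is
          # illustrative, not the real namespace layout.
          qdb.hset("demo:container-md", "name", "/eos/demo")
          print(qdb.hgetall("demo:container-md"))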

        Speaker: Georgios Bitzes (CERN)
      • 10:50 AM
        A new FUSE based file system client for EOS 30m

        Since the last workshop, the FUSE client has been rewritten. In this presentation we will discuss in detail the new implementation, its configuration and the new performance metrics.

        Speaker: Andreas Joachim Peters (CERN)
      • 11:20 AM
        The EOS Citrine Scheduler and new Centralized Drain 15m

        This presentation will show the status and plans for the EOS Citrine Scheduler component, focusing in particular on the configuration aspects. The talk will also introduce the new implementation of the Drain subsystem, which now uses the GeoTreeEngine component for drain placement selection.

        Speaker: Andrea Manzi (CERN)
      • 11:35 AM
        A new approach to FST metadata storage 15m

        Until now, EOS FST has stored file metadata in various relational databases. To simplify their handling, file metadata will instead be stored as Base64-encoded, serialized Protobuf objects in extended attributes.

        This approach also makes it easy to compress the metadata, achieving an average compression ratio of 0.5 and thus halving the space consumed by metadata. This can save up to hundreds of GB of storage space per FST machine, which can be used as effective storage space instead.
        The compression is based on the ZStandard algorithm, using a pre-trained compression dictionary for better compression ratios. We extended it with a wrapper that eliminates bottlenecks and makes it thread-safe, so that it can be used in a massively concurrent environment without spending much time on synchronization.

        The FST has also been extended to detect metadata in the old format automatically, so it can perform the conversion to the new approach on its own when necessary.
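
        As a hedged sketch of the pipeline (not the EOS implementation), the steps amount to: serialize the metadata object, compress with ZStandard, Base64-encode, and store the result in an extended attribute. The payload and the xattr name below are hypothetical stand-ins.

          # Sketch: serialize -> ZStandard -> Base64 -> extended attribute.
          # (EOS additionally uses a pre-trained compression dictionary and a
          # thread-safe wrapper; both are omitted here for brevity.)
          import base64, os
          import zstandard as zstd  # pip install zstandard

          blob = b"imagine a serialized Protobuf file-metadata object here" * 10

          encoded = base64.b64encode(zstd.ZstdCompressor().compress(blob))

          path = "/tmp/replica-file"            # stand-in for an FST data file
          open(path, "w").close()
          os.setxattr(path, "user.eos.fmd", encoded)  # hypothetical xattr name

          # Reading back reverses the steps.
          raw = base64.b64decode(os.getxattr(path, "user.eos.fmd"))
          assert zstd.ZstdDecompressor().decompress(raw) == blob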

        Speaker: Jozsef Makai (CERN)
      • 11:50 AM
        WLCG Accounting of EOS and smart files 10m

        WLCG accounting is an important task for monitoring the available and used resources of the LHC computing grid. Accountable resources include the EOS storage space of the experiments.

        To support this task force from the EOS side, EOS has introduced a new accounting interface (see the accounting CLI command) that makes the necessary information easily available. The accounting information consists of quota node statistics and other custom, user-specified data that can be provided as dedicated extended attributes. The output of the command is JSON text standardized for this purpose, and a wide range of caching options is supported.

        EOS has also introduced a feature called “smart files” to make this information easy to access. These are special (empty) files in the EOS namespace which, when read, execute a specified EOS command instead. The accounting command can thus be configured as a “smart file” and the report accessed easily through the REST interface.
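
        As a hedged sketch of the resulting access pattern: a monitoring client could fetch the report over HTTP and parse the JSON. The URL, path and JSON keys below are hypothetical placeholders, not the actual EOS interface.

          # Hedged sketch: read an accounting "smart file" over HTTP and parse
          # the JSON report. URL and JSON keys are hypothetical placeholders.
          import json
          import urllib.request

          url = "https://eos-instance.example.org:8443/eos/accounting/report"
          with urllib.request.urlopen(url) as resp:
              report = json.load(resp)

          # Iterate over per-quota-node entries (structure illustrative).
          for share in report.get("storageshares", []):
              print(share.get("path"), share.get("usedsize"))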

        Speaker: Jozsef Makai (CERN)
    • 12:00 PM - 1:45 PM
      Lunch Break 1h 45m

      Restaurant 2

    • 1:45 PM - 2:40 PM
      Using EOS: Use Cases
      • 1:45 PM
        EOS as a Data Lake technology 20m

        The computing strategy document for HL-LHC identifies storage as one of the main WLCG challenges a decade from now. Under the naive assumption of applying today's computing model, the ATLAS and CMS experiments will need one order of magnitude more storage resources than could realistically be provided by the funding agencies at the same cost as today. The evolution of the computing facilities and the way storage will be organized and consolidated will play a key role in how this possible shortage of resources is addressed. In this contribution we describe the architecture of a WLCG data lake, intended as a storage service geographically distributed across large data centers connected by fast networks with low latency, and how a prototype of such an architecture can be implemented using the EOS technology.

        Speaker: Xavier Espinal Curull (CERN)
      • 2:05 PM
        The eXtreme Data Cloud (XDC) Project 15m

        EOS is participating in the EU-funded eXtreme Data Cloud (XDC) Project which will support work on distributed deployment, caching and federation. This contribution gives an overview of the project and EOS's role within it.

        Speaker: Oliver Keeble (CERN)
      • 2:20 PM
        O2 Disk Buffer - WP15 EOS Performance Testing Framework 20m

        The ALICE Online/Offline (O2) Disk Buffer project will deploy a 60PB EOS filesystem at CERN to accommodate the Pb-Pb data taking period planned for 2020. An initial ~6PB evaluation system is planned for deployment in May 2018.

        Members from CERN, Oak Ridge National Lab (ORNL), and Lawrence Berkeley National Lab (LBNL) are collaborating on Work Package 15 (WP15) in the development of a performance testing and evaluation framework.

        One objective of the framework is to validate the O2 disk buffer storage environment through the development of an EOS testing framework which uses synthetic (fio, etc.) and simulated O2 workloads under the expected levels of concurrency, producing standardized, reproducible results for SE performance analysis.

        It is envisioned that this framework may be of value to the EOS community for storage design decisions, performance evaluation and benchmarking.

        This talk presents a design overview of the planned testing framework modules, their implementation, and how to contribute to the development effort.
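
        To give a flavor of what such a framework automates, the sketch below drives a single synthetic fio job against a hypothetical FUSE-mounted EOS directory and extracts the bandwidth from fio's JSON output; the job parameters are illustrative.

          # Sketch: run one synthetic fio workload and harvest JSON results.
          # The target directory (a FUSE-mounted EOS path) and job parameters
          # are illustrative assumptions.
          import json, subprocess

          cmd = [
              "fio", "--name=eos-randread",
              "--directory=/eos/testarea",       # hypothetical EOS mount
              "--rw=randread", "--bs=1M", "--size=1G",
              "--numjobs=8", "--group_reporting",
              "--output-format=json",
          ]
          out = subprocess.run(cmd, capture_output=True, check=True).stdout
          job = json.loads(out)["jobs"][0]
          print("read bandwidth (KiB/s):", job["read"]["bw"])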

        Speaker: Pete Eby (Oak Ridge National Laboratory - (US))
    • 2:40 PM - 3:05 PM
      Using EOS: CERN Operations
      • 2:40 PM
        EOS Ops at CERN 25m

        The EOS operations team at CERN operates multiple instances of EOS for the physics experiments and other activities of the laboratory.

        In this presentation we will focus on infrastructure changes, best practices and evolution. A second part will mention the upgrade process we're going through to run Citrine, as well as tools we wrote and use to manage our EOS instances. We will end the talk with a glance at what's coming for 2018 and the implications for the team.

        Speaker: Herve Rousseau (CERN)
    • 3:05 PM - 3:25 PM
      Using EOS: Site Reports/Operations
      • 3:05 PM
        EOS as storage back-end for geospatial data analysis 20m

        The Joint Research Centre (JRC) of the European Commission has set up the JRC Earth Observation Data and Processing Platform (JEODPP) as a pilot infrastructure to enable the knowledge production Units to process and analyze big geospatial data in support of EU policy needs. This platform is built upon commodity hardware, and the first operational services were made available in mid-2016. It currently consists of processing and service nodes with a total of 1,200 cores, and the EOS system as storage back-end with a total gross capacity of 1.9 petabytes. EOS was deployed on the JEODPP with strong support from the CERN EOS team thanks to the CERN-JRC collaboration agreement. The JEODPP EOS instance relies on the EOS FUSE client, given that there is currently no XrootD driver for the Geospatial Data Abstraction Library (GDAL), which is mainly used for reading and writing geospatial data files.

        Multiple data processing levels have been implemented in the JEODPP. The batch processing system, based on HTCondor, is used for running large-scale data processing tasks with the HTCondor Docker or parallel universes, and with all application-dependent processes running in Docker containers. The web-based remote desktop level provides access to tools and software libraries for fast prototyping in a standard desktop environment. Interactive data processing in Jupyter notebooks allows for on-the-fly advanced data analysis and visualization.

        The JEODPP platform is actively used by more than 15 JRC projects for data storage and various types of data processing and analysis. This required an additional Grafana-based monitoring system to keep track of the platform status. In order to better address user needs for data transfer and sharing, JRC will test the usage of CERNBox, since it provides better integration with EOS than the currently deployed solution based on NextCloud.

        The intensified usage of the platform and new data sources made a major system extension necessary, which is currently underway. This will increase the EOS storage to a total gross capacity of 13 petabytes and the processing and service nodes to a total of 1,600 cores. The EOS service is planned to be migrated to the new Citrine release, and the usage of the new metadata management environment is envisaged once it is available and stable. The RAIN layout will be tested more extensively in 2018 as an alternative to the replica layout. In 2018 the storage and processing platform is going to be opened to JRC projects with new data domains and shall see more extensive usage of machine learning technology. In this way the platform is becoming the main scientific data hub at JRC.
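
        In practice this means applications open EOS-resident files through the FUSE mount as ordinary POSIX paths; a minimal sketch with a hypothetical file path:

          # Minimal sketch: GDAL reads geospatial data from EOS through the
          # FUSE mount as an ordinary POSIX path. The path is hypothetical.
          from osgeo import gdal

          ds = gdal.Open("/eos/jeodpp/data/example.tif")
          band = ds.GetRasterBand(1)
          stats = band.ComputeStatistics(False)   # [min, max, mean, stddev]
          print("size:", ds.RasterXSize, "x", ds.RasterYSize, "mean:", stats[2])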

        Speakers: Armin Burger (European Commission - Joint Research Centre), Veselin Vasilev (European Commission, Joint Research Centre (JRC))
    • 3:25 PM - 3:45 PM
      Coffee Break 20m
    • 3:45 PM - 5:05 PM
      Using EOS: Site Reports/Operations
      • 3:45 PM
        EOS at the Fermilab LPC Physics Center 20m

        We report on operational experiences and future plans with the Fermilab LHC Physics Center (LPC) computing cluster. The LPC cluster is a 4500-core user analysis cluster with 5 PB of storage running EOS. The LPC cluster supports several hundred users annually, from CMS university groups across the US. We anticipate the total EOS storage pool to grow by 50% by the start of Run 3 of the LHC.

        Speaker: Dan Szkola (Fermi National Accelerator Lab. (US))
      • 4:05 PM
        The Adventures of AARNet Across the EOS Dimension 20m

        AARNet's use of EOS for both our production CDN and our CloudStor platform over the last two years has been an adventure in collaboration, experiencing bugs, and extracting esoteric knowledge from both people and the code base.

        EOS exists in a space that isn't served by any existing open source scale-out storage solution. Neither Ceph nor any of the less common scale-out systems provides the capabilities that EOS can deliver at tens of petabytes per cluster, even assuming they can scale to such a size.

        AARNet is investigating how to scale up to tens of petabytes on its continent-spanning EOS storage environment while maintaining high availability of data. The major concern is not the technical development of EOS, but rather the surrounding issues of governance, technical debt, maintenance and documentation.

        This presentation briefly discusses some of the issues that have been experienced and how they were resolved (or not), and proposes some possible paths for taking EOS from an in-house open source project targeted at CERN to a contender for the increasingly common massive-scale storage clusters.

        Speaker: David Jericho (AARNet)
      • 4:25 PM
        EOS status at the IHEP site 20m

        This report will present the current status and recent updates of EOS at the IHEP site since the first EOS workshop in 2017, covering storage expansion, issues encountered and other related work.

        Speaker: Haibo li (Institute of High Energy Physics Chinese Academy of Science)
      • 4:45 PM
        T2 Experiences Scaling EOS: Capacity and Performance Observations 20m

        During the last two years Oak Ridge National Laboratory (ORNL) has administered the ORNL::EOS T2 site, which has seen two storage capacity expansions, with installed capacity increasing from 1PB to 2.5PB. As utilization and capacity have grown, observations on the performance impact of the underlying storage architecture, RAID size, filesystem design decisions and performance tunings have been evaluated. While deploying the latest 1PB expansion, performance tests iterated through different storage layouts to identify performance effects and help identify a more optimized storage configuration to meet EOS demands. We will share our observations and the evolution of their effects on deploying new capacity.

        Speaker: Pete Eby (Oak Ridge National Laboratory - (US))
    • 7:00 PM - 9:00 PM
      Social Dinner 2h
  • Tuesday, February 6
    • 9:00 AM - 10:00 AM
      Developing EOS & CO: Development Session
      • 9:00 AM
        XROOT development update - year 2017 20m

        XRootD is a distributed, scalable system for low-latency file access. It is the primary data access framework for the high-energy physics community and the backbone of the EOS project.
        In this contribution we (briefly) discuss the most important new features introduced in 2017, including: support for systemd socket inheritance, XrdSsi, Caching Proxy v2, support for local files and redirections, and extreme copy. We also report on the most important bugfixes and enhancements to the client. Finally, we give an overview of the plans for 2018.
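
        For orientation, a minimal remote read with the XRootD Python bindings looks roughly as follows; the server URL and file path are illustrative.

          # Minimal sketch of remote file access with the XRootD Python
          # bindings (pyxrootd). URL and path are illustrative.
          from XRootD import client
          from XRootD.client.flags import OpenFlags

          f = client.File()
          status, _ = f.open("root://eos.example.org//eos/demo/file.dat",
                             OpenFlags.READ)
          if status.ok:
              status, data = f.read(offset=0, size=1024)  # first KiB
              print(len(data), "bytes read")
              f.close()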

        Speaker: Michal Kamil Simon (CERN)
      • 9:20 AM
        XrootD Erasure Coding Plugin 10m

        In order to bring the potential of Erasure Coding (EC) to the XrootD / EOS ecosystem, an effort has been undertaken to implement a native EC XrootD plugin based on the Intel Storage Acceleration Library (ISAL). In this contribution we discuss the architecture of the plugin, carefully engineered to enable low-latency data streaming and 2D erasure coding. We also report on the status and future work.
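
        The underlying technique (though not the plugin itself) can be illustrated with the ISAL-backed Reed-Solomon codec exposed through pyeclib: k data fragments plus m parity fragments, any k of which reconstruct the original. The parameters below are illustrative.

          # Illustration of the underlying technique (not the XrootD plugin):
          # Reed-Solomon erasure coding backed by Intel ISA-L via pyeclib.
          from pyeclib.ec_iface import ECDriver

          driver = ECDriver(k=4, m=2, ec_type="isa_l_rs_vand")
          data = b"x" * (1 << 20)                 # 1 MiB payload

          fragments = driver.encode(data)         # k + m = 6 fragments
          surviving = fragments[2:]               # lose any two, keep four
          assert driver.decode(surviving) == data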

        Speaker: Michal Kamil Simon (CERN)
      • 9:30 AM
        User authentication in eosxd: A tale of /proc/pid/environ and kernel deadlocks 15m

        Supporting multiple parallel users in eosxd requires a mechanism for distinguishing their identities and assigning a different set of credentials to each.

        In this presentation, we detail our efforts in implementing the eosxd authentication subsystem based on process environment variables.

        However, reading the environment variables of a process (/proc/pid/environ) from within a FUSE daemon comes with a major caveat: The possibility of triggering a deadlock in the Linux kernel. We will outline the root cause of this issue, and describe various mitigations and workarounds for preventing it, thus making environment-based authentication in a FUSE daemon feasible.
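
        The lookup itself is simple; the sketch below parses /proc/<pid>/environ (a NUL-separated list of VAR=VALUE entries) for credential-related variables. The deadlock mitigations discussed in the talk are not shown.

          # Sketch of the core lookup: parse /proc/<pid>/environ for credential
          # variables such as KRB5CCNAME or X509_USER_PROXY.
          import os

          def credential_env(pid):
              with open("/proc/%d/environ" % pid, "rb") as f:
                  entries = f.read().split(b"\x00")
              env = dict(e.split(b"=", 1) for e in entries if b"=" in e)
              wanted = (b"KRB5CCNAME", b"X509_USER_PROXY")
              return {k.decode(): v.decode()
                      for k, v in env.items() if k in wanted}

          print(credential_env(os.getpid()))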

        Speaker: Georgios Bitzes (CERN)
      • 9:45 AM
        EOS Code Structure & Testing 15m

        This presentation will give an overview of the code structure, resources, simple docker-based testing and more.

        Speaker: Elvin Alin Sindrilaru (CERN)
    • 10:00 AM - 10:30 AM
      Coffee Break 30m
    • 10:30 AM - 12:00 PM
      Using EOS: Deployment, Configuration, Testing, Hands-on
      • 10:30 AM
        New CI platform for EOS and XrootD 15m

        In the past year, we have migrated the continuous integration platform of EOS, XrootD and all related projects from Jenkins to GitLab CI in order to provide a more agile, satisfying and fully automated build environment.

        Numerous milestones have been reached during the year.

        We have introduced builds and packages for new platforms. For EOS, we have created an all-inclusive dmg package for macOS Sierra. Debian packaging has been made available for both EOS and XrootD, with Ubuntu Artful packages for both and Ubuntu Xenial packages for XrootD. A new, fully functional apt repository has been established to make the built Debian packages widely available.

        For non-release builds, compiler caching has been made available for all platforms to reduce compilation time as much as possible.

        A lot of effort has gone into the verification of the EOS software, with the aim of constantly improving its quality.
        We have introduced unit testing based on the Google Test framework.
        We have started to use multiple static analysis tools: Coverity (once a day), and cppcheck with Sonar on a regular basis, to detect problems as early as possible.
        We have introduced a containerized environment based on Docker images (built and published for each code change) to be able to conduct complex tests (FUSE, FUSEX, EOS CLI, stress tests) requiring a fully functional running instance of EOS (including authentication) for each code change. A similar effort has been made for testing XrootD.

        Released RPMs and all Debian packages are now signed automatically.
        Our continuous integration environment has also been integrated with Koji to publish release SRPMs automatically; these are rebuilt so that client packages become available in the EPEL repositories.

        Speaker: Jozsef Makai (CERN)
      • 10:45 AM
        Down the Rabbit Hole: Adventures in Bug-Hunting 20m

        This talk covers a journey through fuzz-testing CERN's EOS file system with AFL, from compiling EOS with afl-gcc/afl-g++, to learning to use AFL, and finally, making sense of the results obtained.

        Fuzzing is a software testing process that aims to find bugs, and subsequently potential security vulnerabilities, by attempting to trigger unexpected behaviour with random inputs. It is particularly effective on programs or libraries that handle file or input parsing as these areas are often susceptible to buffer overflow or other vulnerabilities, for example libxml2, ImageMagick and even the Bash shell.

        This approach to automated bug discovery dates back to the early 1950s, and has been steadily gaining popularity in recent years as fuzzing tools become more sophisticated - and more importantly, easier to use. Of particular note is american fuzzy lop (AFL), a genetic fuzzer written by Michał Zalewski (lcamtuf@google), which has seen massive success - to date, it has been used in the discovery of over three hundred CVEs and many other non-exploitable bugs, in programs such as firefox, nginx, clang/llvm, and irssi.

        Initial experimental fuzzing attempts against EOS with AFL have been promising, and it is hoped that further efforts to establish a process around this will be greatly beneficial in the long run.
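
        For orientation, a typical AFL run on an instrumented binary looks like the following; the target binary and corpus directories are hypothetical. AFL substitutes @@ with the path of each generated input file.

          # Hedged sketch of an AFL invocation driven from Python; the target
          # binary and corpus directories are hypothetical.
          import subprocess

          subprocess.run([
              "afl-fuzz",
              "-i", "testcases",     # seed corpus directory
              "-o", "findings",      # queue, crashes and hangs end up here
              "--",
              "./eos_parser_under_test", "@@",   # hypothetical target binary
          ])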

        Speaker: Crystal Chua (AARNet)
      • 11:05 AM
        Boxed: Docker-based service deployment in private and public clouds 20m

        Docker containers are rapidly becoming the preferred way for developers and system administrators to distribute, deploy, and run services. Their popularity is rapidly increasing as they constitute an appealing alternative to virtual machines: containers require a negligible amount of time to set up, provide performance comparable to that of the host, and are easy to manage, replicate, and scale out. Docker containers also make it possible to ship software and run it deterministically by bundling all the required dependencies and decoupling the execution environment from the host.

        In this work, we present Boxed: a container-based version of EOS (the CERN disk/cloud storage for science), CERNBox (cloud storage & synchronization service), and SWAN (Service for Web-based ANalysis). Boxed is available in two flavors: (i) a one-click setup for personal use where all services run on a single host, and (ii) a production-oriented deployment with the ability to scale out according to storage and computing needs.

        Boxed demonstrates how CERN core services can be deployed in diverse scenarios, ranging from desktop and laptop computers to private and public clouds. In all contexts, Boxed delivers the same fully-fledged services used daily by CERN scientists in demanding scenarios. All in all, Boxed contributes to the adoption of CERN cloud technologies by helping interested partners deploy CERN services on their cloud infrastructure.
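
        As a hedged illustration of the single-host flavor, the Docker SDK for Python can bring up a service container in a few lines; the image name and port mapping below are hypothetical placeholders, not the actual Boxed artifacts.

          # Hedged sketch of a one-host deployment via the Docker SDK for
          # Python. Image name and port mapping are hypothetical placeholders.
          import docker

          cli = docker.from_env()
          cli.containers.run(
              "example/eos-all-in-one:latest",   # hypothetical image
              name="boxed-demo",
              detach=True,
              ports={"1094/tcp": 1094},          # XRootD port on the host
          )
          print([c.name for c in cli.containers.list()])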

        Speaker: Enrico Bocchi (CERN)
      • 11:25 AM
        EOS storage at ALICE using docker 20m

        This talk will present a new automatic tool to configure and update EOS storage using Docker (eos-docker-utils). Currently, plain EOS and ALICE EOS storage configurations are supported. The first production storage is running for the ALICE experiment (ALICE::Kosice::EOS).

        Speaker: Martin Vala (Technical University of Kosice (SK))
      • 11:45 AM
        Demo: Setting up QuarkDB 15m

        During this presentation, we will demo the setup and operation of a highly-available QuarkDB cluster, ready to be used as the backend for the new EOS namespace.

        Speaker: Georgios Bitzes (CERN)
    • 12:00 PM - 2:00 PM
      Lunch Break 2h
    • 2:00 PM - 2:55 PM
      Using EOS: Storage Tiering
      • 2:00 PM
        CERN Tape Archive Update 15m

        The CERN Tape Archive (CTA) is the tape archival back-end for EOS and the successor to CASTOR. This talk will give an update on CTA developments since last year's EOS workshop.

        Speaker: Michael Davis (CERN)
      • 2:15 PM
        Building Client-Server APIs using the XRootD Scalable Service Interface 30m

        This talk will give an overview of the XRootD Scalable Service Interface (SSI), which provides an asynchronous request-response framework with an emphasis on efficient data transfers. This will include a case study explaining how we used SSI and Google Protocol Buffers to develop the API between EOS and the CERN Tape Archive (CTA). The SSI-Protobuf bindings are available as a generic framework that can be used by other projects needing an efficient client-server protocol stack.

        Speaker: Michael Davis (CERN)
      • 2:45 PM
        Extensions for policy-driven data management 10m

        In this presentation we will briefly explain the foreseen developments to implement the XDC and data lake concepts.

        Speaker: Andreas Joachim Peters (CERN)
    • 2:55 PM - 4:00 PM
      Using EOS: Frontend Services
      • 2:55 PM
        CERNBox: the CERN cloud storage driven by EOS 20m

        CERNBox is the CERN cloud storage service. It allows synchronising and sharing files on all major desktop and mobile platforms (Linux, Windows, MacOSX, Android, iOS), aiming to provide universal access and offline availability to any data stored in the CERN EOS infrastructure.
        With more than 12k users registered in the system, CERNBox has responded to the high demand in our diverse community for an easily accessible cloud storage solution that also provides integration with other CERN services for big science: visualization tools, interactive data analysis and real-time collaborative editing.
        We report on our experience managing the service and offer some insight into the operation of the underlying technologies that allow us to grow the service exponentially.

        Speakers: Luca Mascetti (CERN), Hugo Gonzalez Labrador (CERN)
      • 3:15 PM
        Experience deploying CERNBox/SWAN at SPbSU 10m
        Speaker: Andrey Zarochentsev (St Petersburg State University (RU))
      • 3:25 PM
        CERNBox sharing reloaded: graceful coexistence of POSIX and sync/share in EOS ACLs 15m

        I'll discuss possible improvements to the EOS permission system to gracefully support ACLs for both sync/share access (CERNBox) and filesystem access (POSIX). This will also include an implementation of automatic synchronization of shares from EOS.

        This is to address functional shortcomings for current CERNBox users and to prepare for future massive filesystem access to EOS user instances at CERN.

        Speaker: Jakub Moscicki (CERN)
      • 3:40 PM
        A Microservice Architecture for CERNBox 20m
        Speaker: Hugo Gonzalez Labrador (CERN)
    • 4:00 PM - 5:00 PM
      Final Session: QA, Open Discussion & Workshop Wrap-up