EOS workshop

Europe/Zurich
Andreas Joachim Peters (CERN), Jakub Moscicki (CERN), Luca Mascetti (CERN), Oliver Keeble (CERN)
Description


The 5th EOS workshop is being prepared to bring together the EOS community virtually.

This four-day virtual event provides a platform for exchange between developers, users and sites running EOS.

The workshop will cover a wide range of topics related to EOS development, operations, deployments, applications, collaborations and various use-cases!

A dedicated session will cover the EOS extended tape storage functionalities provided by the CERN Tape Archive project (CTA).

A particular focus will be on operational aspects, native and fabrics deployments in the context of WLCG and EGI, demonstrations, hands-on tutorials with a deep dive, and the future roadmap and service evolution.

Timetable
 

We will try to adapt the agenda schedule to give people around the world the possibility to participate in the workshop. With your prior agreement, we will try to make all presentation recordings available on the same day.

Fees

Participation in the workshop is free of charge.


Registrations

Registration is open to everybody and unlimited.

Please register for the workshop. Don't forget to submit an abstract if you would like to share your experience and ideas with the EOS community.

If you are interested in joining the EOS community, this is the perfect occasion!

We look forward to having you at the virtual workshop in March 2021!

Your CERN EOS team.

Participants
  • Abhishek Lekshmanan
  • Adrian Negru
  • Adrian-Eduard Negru
  • Adrien Ramparison
  • Alberto Pace
  • Aleksandra Wardzinska
  • Alexandre Franck Boyer
  • Alexandre Lossent
  • Alice Suiu
  • Alison Packer
  • Andrea Manzi
  • Andrea Sciabà
  • Andreas Joachim Peters
  • Andreas Stoeve
  • Andreas Wagner
  • Andrey Kirianov
  • Andrey Zarochentsev
  • Aritz Brosa Iartza
  • Armin Burger
  • Armin Nairz
  • Aurelien Gounon
  • Bradley Marshall
  • Branko Blagojevic
  • Caio Costa
  • Cedric Caffy
  • Chien-De Li
  • Chih-Hao Huang
  • Cristian Contescu
  • Crystal Michelle Chua
  • Dan Szkola
  • Dan van der Ster
  • Danila Oleynik
  • David Cohen
  • David Jericho
  • David Smith
  • Denis Lujanski
  • Denis Pugnere
  • Diogo Castro
  • Dirk Duellmann
  • Doug Benjamin
  • Edita Kizinevic
  • Egon Cholakian
  • Elena Gianolio
  • Elena Maria Planas Teruel
  • Elena Planas
  • Elisabetta Maria Pennacchio
  • Elizaveta Ragozina
  • Elvin Alin Sindrilaru
  • Enrico Bocchi
  • Erich Birngruber
  • Fabio Luchetti
  • Fabrizio Furano
  • Federico Gargiulo
  • Franck Eyraud
  • Frederik Ferner
  • Gavin Charles Kennedy
  • George Patargias
  • Germano Massullo
  • Gianmaria Del Monte
  • Giuseppe Lo Presti
  • Gregor Molan
  • Haibo li
  • Hamlet Arrieta
  • Heejune Han
  • Hugo Gonzalez Labrador
  • Ian Liu Rodrigues
  • Ian Rodrigues
  • Ingrid Kulkova
  • Irakli Chakaberia
  • Ishank Arora
  • Ivan Arizanovic
  • Ivan Kashunin
  • izhar ------
  • Jakub Klimek
  • Jakub Moscicki
  • James Walder
  • Janusz Oleniacz
  • Jaroslav Guenther
  • Jean-Michel Barbet
  • Jeff Derbyshire
  • Jeff Porter
  • John White
  • João Calado Vicente
  • Julien Leduc
  • Junyeong Lee
  • Jörn Dreyer
  • Karen Fernsler
  • Karolin Wachsmuth
  • Ken Oyama
  • Klaas Freitag
  • Latchezar Betev
  • Lea Morschel
  • Liviu Mihai Ciubancan
  • Lorena Lobato
  • Lu Wang
  • Luca Mascetti
  • Manuel Jesus Parra Royon
  • Manuel Parra
  • Manuel Reis
  • Marco Femia
  • Marco Leoni
  • Marco Scavazzon
  • Marek Szuba
  • Maria Arsuaga Rios
  • Mario Lassnig
  • Markus Schulz
  • Martin Barisits
  • Martin Gasthuber
  • Martin Vala
  • Masanori Ogino
  • Massimo Lamanna
  • Michael D'Silva
  • Michael Davis
  • Michael Usher
  • Michal Simon
  • Michel Jouvin
  • Mihai Ciubancan
  • Mihai Patrascoiu
  • Miloslav Straka
  • minxing zhang
  • Mohammad Mahdi Akbarzad
  • Mwai Karimi
  • Natalia Gromova
  • Natascha Krammer
  • Neill Cox
  • Nick Papoutsis
  • Nithyasree Mariappan
  • Ofer Rind
  • Olga Chuchuk
  • Oliver Keeble
  • Paul Hasenohr
  • Pete Eby
  • Petr Vokac
  • Pier Valerio Tognoli
  • Pierre Soille
  • PRASUN SINGH ROY
  • Ricardo Macedo
  • Riccardo Di Maria
  • Robert Pocklington
  • Roberto Valverde Cameselle
  • Sami Mohamed Chebbi
  • Samuel Alfageme Sainz
  • Sang Un Ahn
  • Sean Murray
  • Sergiu Weisz
  • Sophie Catherine Ferry
  • Stefan Piperov
  • Steve Moulton
  • Steven Murray
  • Sujan Gowda
  • Svenja Meyer
  • Svetlana milenkovic
  • Tariq Mahmood
  • Teodor Ivanoaica
  • thirsa de boer
  • Tigran Mkrtchyan
  • Tom Byrne
  • Tom Wezepoel
  • Valeri Mitsyn
  • Varun Maheshwari
  • Vikas Singhal
  • Volodymyr Yurchenko
  • William Phukungoane
  • Xavier Espinal
  • Yaodong CHENG
  • Yaodong Cheng
  • Yaosong Cheng
  • Yujiang Bi
  • Yujun Wu
  • Yuri Ivanov
  • Ümit Seren
    • EOS: Introduction

      Core development activities and operations

      Convener: Andreas Joachim Peters (CERN)
    • EOS: Overview, EOS4, EOS5 & Roadmap

      Core development activities and operations

      Convener: Luca Mascetti (CERN)
      • 2
        EOS Basic Concepts and Design

        This talk will give a summary of the main concepts and features of EOS as a storage system.

        • namespace design
        • user concept
        • access control
        • access protocols
        • high availability
          • meta data
          • data
        • scheduling
        Speakers: Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
      • 3
        EOS developments overview 2020

        Summary of the most important developments done throughout 2020.

        Speaker: Elvin Alin Sindrilaru (CERN)
      • 4
        EOS Version 5 Timeline and Roadmap

        We will present an overview of the upcoming EOS Version 5 release (Diopside) and a development roadmap.

        Speakers: Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
    • 09:25
      Virtual Coffee Break
    • OPS: Reports

      Topics related to operations of EOS

      Convener: Luca Mascetti (CERN)
      • 5
        EOS service @CERN

        General description of the EOS service @CERN

        Speaker: Dr Maria Arsuaga Rios (CERN)
      • 6
        EOS at the Joint Research Centre

        The Joint Research Centre (JRC) of the European Commission has set up the Big Data Analytics Platform to enable the JRC projects to process and analyse big data, extracting knowledge and insights in support of EU policy making.

        Since 2016, EOS has been the main storage component of the platform. In 2020, the total gross capacity of this instance reached 19 PiB.

        The Big Data Analytics Platform is actively used by more than 40 JRC projects, covering a wide range of data analysis activities. To support the growing needs for data storage and processing capacity, the platform has been extended over the last year: eight new FSTs have been added, increasing the space by 3.5 PiB. In addition, to increase the security of the platform, the team started to migrate the EOS nodes to a segregated VLAN.

        The presentation will give an overview of the Big Data Analytics Platform and its EOS storage back-end, presenting the current status, the experience gained and the issues identified.

        Speaker: Marco Scavazzon (European Commission)
      • 7
        EOS - ALICE choice for Run3 + large O2 disk buffer
        Speaker: Latchezar Betev (CERN)
    • 15:00
      Virtual Coffee Break
    • OPS: Reports

      Topics related to operations of EOS

      Convener: Maria Arsuaga Rios (CERN)
      • 8
        EOS for Physics at CERN in 2020

        Our team is in charge of providing storage and transfer services for the LHC and non-LHC experiments at CERN. In this presentation we are going to walk you through the activities of the EOS operations team at CERN in 2020. We are going to focus on the achievements, hurdles and lessons learned throughout the past year.

        Speaker: Cristian Contescu (CERN)
      • 9
        EOS at the Fermilab LHC Physics Center

        Fermilab has been running an EOS instance since testing began in June 2012. By May 2013, before becoming production storage, 600 TB had been allocated for EOS. Today, there is approximately 11 PB of storage available in the EOS instance.

        We will give an update on our current experiences and challenges running an EOS instance used by the Fermilab LHC Physics Center (LPC) computing cluster. The LPC cluster is a 4500-core user analysis cluster with 11 PB of EOS storage, an increase of about 80% over 2018. The LPC cluster supports several hundred active CMS users at any given time.

        Speaker: Mr Dan Szkola (Fermi National Accelerator Lab. (US))
      • 10
        EOS at the Austrian T2

        The institutes at the Vienna Biocenter (GMI, IMBA, IMP) have run HPC services for their life sciences research for several years. With our new infrastructure "CLIP", additional partners came on board in 2019, including the Austrian high-energy physics community.

        Beginning in 2020 the Austrian grid T2 setup was modernized and based on the CLIP infrastructure.

        We run a converged EOS instance for Alice, Belle2 and CMS and want to share our experience in getting our setup into production.

        Speakers: Erich Birngruber (Austrian Academy of Sciences (AT)), Umit Seren (Austrian Academy of Sciences (AT))
      • 11
        WebEOS for websites hosting

        This presentation will briefly describe the usage of EOS for website hosting at CERN.

        Speaker: Alexandre Lossent (CERN)
    • EOS: XRootD

      Core development activities and operations

      Convener: Maria Arsuaga Rios (CERN)
      • 12
        Powered by XRootD

        XRootD is a distributed, scalable system for low-latency file access. It is the primary data access framework for the high-energy physics community and the foundation of the EOS project. In this contribution we give an overview of release 5. In particular, we discuss the TLS-based, secure version of the xroot/root protocol and several enhancements tailor-made for EOS, such as the so-called redirect collapse, I/O error recovery at the MGM and kernel buffer support.

        Speaker: Michal Kamil Simon (CERN)
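
        As a companion to the abstract above, the following is a minimal sketch of reading a file with the XrdCl client API; the endpoint and path are placeholders, and with XRootD 5 a roots:// URL would select the TLS-secured variant of the protocol mentioned in the talk.

        ```cpp
        // Minimal XrdCl read sketch (placeholder host and path, error handling trimmed).
        #include <XrdCl/XrdClFile.hh>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        int main()
        {
          XrdCl::File file;
          // With XRootD 5, a "roots://" URL would request the TLS-secured protocol variant.
          XrdCl::XRootDStatus st =
              file.Open("root://eos.example.org//eos/demo/file.dat", XrdCl::OpenFlags::Read);
          if (!st.IsOK()) { std::cerr << st.ToString() << std::endl; return 1; }

          std::vector<char> buffer(4096);
          uint32_t bytesRead = 0;
          st = file.Read(0, buffer.size(), buffer.data(), bytesRead);
          if (st.IsOK())
            std::cout << "read " << bytesRead << " bytes" << std::endl;

          file.Close();
          return st.IsOK() ? 0 : 1;
        }
        ```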
      • 13
        Status and deployment of EOS XrdHttp with TPC support

        Overview of the XrdHttp integration with EOS together with token support.

        Speaker: Elvin Alin Sindrilaru (CERN)
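
        To illustrate the token-based HTTP access mentioned above, here is a hedged libcurl sketch; the endpoint, path and token are placeholders, and the exact token flavour accepted by a given instance depends on its configuration.

        ```cpp
        // Hedged sketch: HTTPS GET against an XrdHttp endpoint using a bearer token.
        // Host, path and token are placeholders; real deployments may require a
        // different token type or additional headers.
        #include <curl/curl.h>

        int main()
        {
          curl_global_init(CURL_GLOBAL_DEFAULT);
          CURL *curl = curl_easy_init();
          if (!curl) return 1;

          struct curl_slist *headers = nullptr;
          headers = curl_slist_append(headers, "Authorization: Bearer PLACEHOLDER_TOKEN");

          curl_easy_setopt(curl, CURLOPT_URL, "https://eos.example.org:8443/eos/demo/file.dat");
          curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
          curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); // follow MGM -> FST redirects

          CURLcode rc = curl_easy_perform(curl);

          curl_slist_free_all(headers);
          curl_easy_cleanup(curl);
          curl_global_cleanup();
          return rc == CURLE_OK ? 0 : 1;
        }
        ```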
      • 14
        XRootD I/O Server Benchmarking on flash and disk

        This presentation will summarize a few bandwidth and IOPS measurements using the root:// and http:// protocols with XRootD version 5 in front of disk, NVMe, SSDs and CephFS, together with an outlook on possible future improvements.

        Speaker: Andreas Joachim Peters (CERN)
    • EOS: SRE Concepts

      Core development activities and operations

      Convener: Maria Arsuaga Rios (CERN)
      • 15
        SRE fundamentals in EOS

        The EOS system is an advanced distributed storage system that deals with many extreme use cases (massive data injection from the LHC, latency-critical online home directories and massive-throughput access from batch farms).

        EOS implements many site reliability engineering best practices to support these use cases at scale and to support the work done by the operations team maintaining the production clusters.

        In this presentation we explain some of the functionalities implemented in the core of EOS (logging, retry mechanisms, QoS) that allow smooth operation of the service while accommodating the diverse use cases cited above.

        Speaker: Hugo Gonzalez Labrador (CERN)
    • OPS: Reports

      Topics related to operations of EOS

      Convener: Jakub Moscicki (CERN)
      • 16
        The first disk-based custodial storage for the ALICE experiment

        We present a disk-based custodial storage system for the ALICE experiment at CERN, an alternative to tape for preserving its raw data, based on the EOS QRAIN layout. In this presentation, we describe the detailed system deployment of the disk-based custodial storage, its integration with the ALICE experiment and the current status of system monitoring, such as hardware error detection and power consumption measurement.

        Speaker: Sang Un Ahn (Korea Institute of Science & Technology Information (KR))
      • 17
        AARNet FST Investigations
        Speaker: Denis Sergeevich Lujanski
      • 18
        EOS at IHEP

        In this presentation, we will report our current experiences and challenges with running EOS instances for use by IHEP CAS. Currently, IHEP has a total of 42 PB of storage, of which EOS accounts for 16 PB, an increase of 10 PB in 2020. At present, the LHAASO experiment mainly uses EOS as its mass storage system. In addition, the JUNO experiment has completed the construction of an EOS testbed, and EOS is also being considered after the evaluation. We will discuss our recent upgrade, including operating experiences, the progress of EOS CTA and EOS tests on ARM. Finally, we will discuss our plans for the future.

        Speaker: Haibo li (Institute of High Energy Physics Chinese Academy of Science)
      • 19
        A scheme to implement local server computation on EOS system based on Xrootd plug-in

        The particle physics computing model includes highly statistical calculations: such applications need to access large amounts of data for analysis and place very high requirements on data I/O capability. For example, the LHAASO experiment generates trillions of events each year, and the large raw data volume needs to be decoded, encoded and tagged before it can be analyzed. This process requires very high I/O bandwidth, otherwise an I/O bottleneck will form. When using the EOS file system, the user does not know the physical storage location of a file; to access a file, the client queries the MGM, the file is transferred from the FST to the client, and the client provides the target file to the user. For I/O-intensive operations like those mentioned above, this introduces two limitations on I/O bandwidth: the read/write efficiency of the storage node's hard disks, and the network bandwidth between the FST and the client. If the data storage unit and the computing unit can be integrated into one, data movement can be significantly reduced, and the parallelism and energy efficiency of computing can be greatly improved. The potential of this kind of integrated computing storage is currently attracting the attention of many companies and standards bodies: SNIA has formed a working group to establish standards for interoperability between computational storage devices, and the OpenFog Consortium is also working on standards for computational storage.
        Therefore, we propose a scheme to implement local server-side computation on an EOS system based on an XRootD plug-in. When a user needs computational storage, flags can be added to the file access request. After receiving the access request, the client forwards the request to the FST where the file is located, and the default decode calculation is performed in the background on the FST. In our tests, using this method to simultaneously decode ten 1 GB raw files stored on the same FST saved about 45.9% of the time compared to the traditional method. The next steps are to push the computation module down onto the hard disk to reduce the CPU consumption of the FST, and to build a custom hardware acceleration module to increase the speed of the computation.

        Speaker: minxing zhang (The Institute of High Energy Physics of the Chinese Academy of Sciences)
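
        As an illustration of the "flag on the access request" idea above, the sketch below appends a hypothetical opaque query flag (compute=decode is an assumed name, not an existing EOS or XRootD option) to the file URL; the proposed FST-side plug-in would be the component interpreting such a flag.

        ```cpp
        // Illustrative only: the "compute=decode" opaque flag is hypothetical and would
        // have to be interpreted by the proposed FST-side XRootD plug-in.
        #include <XrdCl/XrdClFile.hh>
        #include <cstdint>
        #include <string>

        int main()
        {
          const std::string url =
              "root://eos.example.org//eos/lhaaso/raw/run001.dat?compute=decode";

          XrdCl::File file;
          if (!file.Open(url, XrdCl::OpenFlags::Read).IsOK()) return 1;

          // In the proposed scheme, reads would return the decoded output produced
          // on the FST instead of the raw bytes.
          char buffer[4096];
          uint32_t bytesRead = 0;
          file.Read(0, sizeof(buffer), buffer, bytesRead);
          file.Close();
          return 0;
        }
        ```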
    • 09:15
      Virtual Coffee Break
    • EOS: Development

      Core development activities and operations

      Convener: Jakub Moscicki (CERN)
      • 20
        The virtue of composability

        In this contribution we report on the new XRootD client declarative API that is in line with the modern C++ programming practices (ranges v3 inspired, support for lambdas and std::futures), offers much improved code readability and genuine composability.

        Speaker: Michal Kamil Simon (CERN)
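
        A minimal sketch of what such a declarative pipeline can look like, based on the operation objects in the XRootD 5 client headers (Open, Read and Close composed with | and executed with WaitFor); exact signatures may differ between releases, so treat this as an approximation rather than the talk's reference example.

        ```cpp
        // Approximate sketch of the XrdCl declarative API: open | read | close as one
        // pipeline, with a lambda handler attached to the read. Placeholder URL.
        #include <XrdCl/XrdClFileOperations.hh>
        #include <iostream>
        #include <vector>

        int main()
        {
          using namespace XrdCl;

          File file;
          std::vector<char> buffer(4096);

          auto &&pipeline =
              Open(file, "root://eos.example.org//eos/demo/file.dat", OpenFlags::Read)
            | Read(file, 0, buffer.size(), buffer.data()) >>
                [](XRootDStatus &st, ChunkInfo &chunk)
                {
                  if (st.IsOK())
                    std::cout << "read " << chunk.length << " bytes" << std::endl;
                }
            | Close(file);

          // WaitFor runs the pipeline synchronously; Async(...) would return a std::future.
          XRootDStatus status = WaitFor(std::move(pipeline));
          return status.IsOK() ? 0 : 1;
        }
        ```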
      • 21
        High throughput erasure coding with XRootD client

        In this contribution we give the design details of the new Intel ISAL based XRootD erasure coding library and discuss the preliminary results obtained on the Alice O2 cluster.

        Speaker: Michal Kamil Simon (CERN)
      • 22
        EOS-wnc console

        Context: EOS-wnc console for EOS client on Windows operating system.

        Objectives: EOS-wnc on the Windows platform should provide the same functionality and the same level of usability as the EOS client on the Linux platform.

        Method: The EOS client can be used as a set of command-line interface (CLI) commands, where each EOS command is executed independently, or through an EOS client console, where EOS commands are executed interactively.

        Result: EOS-wnc includes a Windows command eos.exe that is a console for all EOS client commands. Besides all the features of the EOS client console on Linux, there is an additional user-friendly feature: command-line completion (tab completion). It provides completion of commands, completion of command arguments, and completion of directory and file names. The last completion feature allows the user to get the directory content “on the fly” for any EOS command without first using the EOS ls command.

        Speaker: Gregor Molan (COMTRADE D.O.O (SI))
      • 23
        Windows drive for EOS-wnc

        Context: Native integration of EOS-wnc with the Windows operating system.

        Objectives: EOS-wnc on the Windows platform should be accessed in the same way as local disks and external disk storage, i.e. through a Windows drive letter.

        Method: Storage on the Windows operating system is presented as a “disk drive letter”. The architecture of Windows storage drivers has the following layers:

        1. IRP for Upper-filter driver
        2. IRP for Storage-class driver
        3. SRB for Lower-filter driver
        4. SRB for Storage port driver
        5. SRB for Bus-specific commands

        where
        IRP (I/O request packets): kernel mode structures that are used by Windows Driver Model (WDM)
        SRB (SCSI Request Block): SCSI command descriptor blocks (CDBs)

        For EOS-wnc, a “thin” Windows disk driver is implemented.

        Result: Windows driver software is low-level software, and bugs in low-level software can be extremely painful; they can cause a loss of all data on the disk drive, including the operating system. Therefore, the EOS-wnc Windows driver is implemented as a “thin driver” to maximize the stability and security of this part of the software.

        Speaker: Gregor Molan (COMTRADE D.O.O (SI))
      • 24
        OSD-Model implementation on EOS-wnc

        Context: The Optimal Software Implementation Model (OSD-Model) is used to supervise and control the development of EOS-wnc, an important extension of the Linux-based EOS system to the Windows platform.

        Objectives: The OSD-Model is used to manage the development process to ensure that the performance of EOS-wnc on the Windows platform is on the same level as the performance of the EOS Linux client. EOS-wnc has the same functionalities as the EOS client on Linux, which fulfils the highest demands of the CERN experiments.

        Method: The development process is managed with the OSD-Model in such a way that graph vertices are requested functionalities and graph edges are test cases and f-influences between requested functionalities. The weights in the functionality graph are
        (a) estimations for development costs for functionalities and functionality influences,
        (b) estimations for test costs for functionality influences,
        (c) functionality and f-influence significance,
        (d) value for end user related to functionalities and f-influences.

        Result: For each required EOS-wnc command, its value and significance are defined. Influences (f-influences) between required EOS-wnc commands are also defined, with their values and significance, similarly to the values and significance of functionalities. According to the available development resources, which may change during the development process, the algorithms of the proposed OSD-Model determine the set of functionalities and f-influences that yields the optimal EOS-wnc. In this case, the optimal EOS-wnc is software that performs at least on the same level as the EOS Linux client.

        Speaker: Gregor Molan (COMTRADE D.O.O (SI))
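
        Purely as an illustration of the weighted functionality graph described in this abstract (not code from the OSD-Model itself), here is a small sketch with assumed type and field names:

        ```cpp
        // Hypothetical data model for the OSD-Model functionality graph: vertices are
        // requested functionalities, edges are f-influences/test cases, and the weights
        // listed in the abstract are carried as plain fields.
        #include <cstddef>
        #include <string>
        #include <vector>

        struct Functionality {
          std::string name;
          double developmentCost;  // (a) estimated development cost
          double significance;     // (c) significance of the functionality
          double endUserValue;     // (d) value for the end user
        };

        struct FInfluence {        // edge between two functionalities
          std::size_t from, to;    // indices into the vertex vector
          double developmentCost;  // (a) cost of developing the influence
          double testCost;         // (b) estimated test cost
          double significance;     // (c) significance of the f-influence
          double endUserValue;     // (d) value for the end user
        };

        struct FunctionalityGraph {
          std::vector<Functionality> vertices;
          std::vector<FInfluence> edges;
        };
        ```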
    • 10:30
      Short Break
    • OPS: Reports

      Topics related to operations of EOS

      Convener: Jakub Moscicki (CERN)
      • 25
        LHC Data Storage: RUN-3 preparation

        LHC Data Storage: RUN-3 preparation

        Speaker: Dr Maria Arsuaga Rios (CERN)
      • 26
        Experience of using EOS LRU and File Converter engines for write buffering in the Data Lake prototype

        In this talk we will share our experience in implementing write buffering with background stage-out of files from a site accessing the Data Lake prototype using EOS built-in LRU and File Converter engines. This study was aimed at improving resource usage for CPU-only sites by reducing the data stage-out overhead.

        Speaker: Mr Andrey Kirianov (NRC Kurchatov Institute PNPI (RU))
      • 27
        EOS for ALICE O2 - HW setup and OS challenges

        This presentation will briefly showcase the ALICE O2 HW setup for the pilot storage nodes and the OS challenges we have faced when trying to tweak it for maximum performance, in view of ALICE's Run3 data taking.

        Speaker: Cristian Contescu (CERN)
      • 28
        What’s coming for EOS Data Durability

        EOS Data Durability is a set of tools that automatically detects and repairs problematic files to ensure that data is not lost or compromised.

        Speaker: Manuel Reis (Universidade de Lisboa (PT))
      • 29
        EOS on CephFS

        This presentation will highlight how to deploy EOS effectively using CephFS as a storage backend, the basic operational aspects for EOS and CephFS and performance expectations.

        Speakers: Andreas Joachim Peters (CERN), Dan van der Ster (CERN)
    • 15:30
      Virtual Coffee Break
    • HANDS-ON: Installation,Operation & Tools
      Convener: Andreas Joachim Peters (CERN)
      • 30
        EOS in 5 Minutes

        How to install EOS in 5 minutes and run it.

        Speakers: Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
      • 31
        EOS Operations: bits and pieces

        In this presentation we will share some tips and recommendations about different operational procedures on EOS, from techniques to reduce the load on the FSTs' system disks to how to use the geoscheduler mechanism for draining and for adding capacity to the instances.

        Speaker: Roberto Valverde Cameselle (CERN)
      • 32
        Practical use cases for eos-ns-inspect tools

        Practical use cases for eos-ns-inspect tools

        Speakers: Dr Maria Arsuaga Rios (CERN), Cristian Contescu (CERN), Roberto Valverde Cameselle (CERN)
      • 33
        EOS and SquashFS

        A quick tutorial on how to use SquashFS images for software and small-file distribution.

        Speaker: Jaroslav Guenther (CERN)
      • 34
        OCIS meets EOS

        In this hands-on we show how to take an out-of-the-box OCIS (ownCloud Infinite Scale) and connect it to an existing EOS instance.

        Speaker: Samuel Alfageme Sainz (CERN)
    • CTA: Tape Service

      All about CTA

      Convener: Oliver Keeble (CERN)
      • 35
        EOS+CTA WorkFlows: Tape Archival and Retrieval

        The CERN Tape Archive (CTA) is the tape back-end to EOS. EOS provides an event-driven interface, the WorkFlow Engine (WFE), which is used to trigger the processes of archival and retrieval. When EOS is configured with its tape back-end enabled, the CREATE and CLOSEW (CLOSE Write) events are used to trigger the archival of a file to tape, while the PREPARE event triggers the retrieval of a file from tape and the creation of a disk replica.

        This talk will present the details of these tape-related workflows, including the state machine for the processes of archival and retrieval, and the metadata which is communicated between EOS and CTA.

        Speaker: Michael Davis (CERN)
      • 36
        Running an EOS instance with tape on the back

        An EOSCTA instance is an EOS instance, commonly called a “tape buffer”, configured with a CERN Tape Archive (CTA) back-end.

        This EOS instance is entirely bandwidth-oriented: it offers an SSD-based tape interconnection, it can contain disks if needed, and it is optimized for the various tape workflows.

        This talk will present the specific details of the EOS tape buffer tweaks and the Swiss clockwork gears in place to maximize tape hardware usage while meeting experiment workflow requirements.

        Speaker: Julien Leduc (CERN)
      • 37
        AARNet CTA Investigations
        Speaker: David Jericho (AARNet)
    • 09:00
      Virtual Coffee Break
    • CTA: Tape Service

      All about CTA

      Convener: Oliver Keeble (CERN)
      • 38
        CTA best practices for data taking workflows

        There is significant diversity in the Data Acquisition (DAQ) systems of the non-LHC experiments supported at CERN. Each system can potentially have its own data taking software and helper scripts, and each can use their preferred data transfer commands and apply different checks and retry policies. The task of the CERN Tape Archive (CTA) team is to provide support for all of these different use cases and to define the best practices for integrating with an EOSCTA instance.

        In this talk we will present an overview of typical DAQ workflows and discuss which protocols, commands and APIs we recommend using with EOSCTA. We will provide examples of submitting archive and retrieve requests using FTS and XRootD tools. We will explain how to monitor the status of a file on tape and the best way to ensure a file is safely stored on tape. We will also give an overview of the CTA authentication policies.

        Speaker: Volodymyr Yurchenko (National Academy of Sciences of Ukraine (UA))
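
        As a hedged illustration of one of the XRootD-based options mentioned above, the sketch below submits a stage (prepare) request for two files via XrdCl::FileSystem::Prepare; the endpoint and paths are placeholders, authentication is omitted, and the talk's actual recommendations may differ.

        ```cpp
        // Hedged sketch: request that two tape-resident files be staged to disk on an
        // EOSCTA endpoint. Placeholder host and paths; no authentication shown.
        #include <XrdCl/XrdClFileSystem.hh>
        #include <iostream>
        #include <string>
        #include <vector>

        int main()
        {
          XrdCl::FileSystem fs(XrdCl::URL("root://eosctapublic.example.org"));

          std::vector<std::string> files = {
            "/eos/ctaexperiment/archive/run001.raw",
            "/eos/ctaexperiment/archive/run002.raw"
          };

          XrdCl::Buffer *response = nullptr;
          XrdCl::XRootDStatus st =
              fs.Prepare(files, XrdCl::PrepareFlags::Stage, 0 /* priority */, response);

          if (st.IsOK() && response)
            std::cout << "prepare request id: " << response->ToString() << std::endl;

          delete response;
          return st.IsOK() ? 0 : 1;
        }
        ```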
      • 39
        A brief overview of the CTA mount scheduling logic

        Accessing data in a tape archival system can be costly in terms of time. Mounting a tape in a drive, positioning the tape head at a file and unmounting the tape once the file has been read can take more than 2 minutes.

        A tape drive cannot be used to archive or retrieve data during the mounting and unmounting of a tape. We therefore need a solution to avoid mounting a tape when it is not worth it. Indeed, imagine a user who retrieves a single file from a tape and then 5 minutes later wants another file from the same tape. Without the CTA scheduling logic, the drive would lose twice the amount of mount, unmount and positioning time! A CTA tape server contains the scheduling logic that decides when to mount a tape in order to optimise drive usage for reading and writing data.

        The aim of this presentation is to explain the different elements taken into account by the scheduler of each CTA tape server to decide whether or not a tape is worth mounting.

        Speaker: Cedric Caffy (CERN)
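
        Not CTA's actual code, only an illustrative sketch of the kind of decision the abstract describes: a tape is only worth mounting once enough data, enough requests, or a sufficiently old request has accumulated in its queue. The threshold names and values below are invented for the example.

        ```cpp
        // Illustrative only: hypothetical "is this tape worth mounting?" check. The real
        // CTA scheduler considers additional criteria (drive availability, priorities, ...).
        #include <chrono>
        #include <cstdint>

        struct TapeQueueSummary {
          std::uint64_t queuedBytes;             // total bytes requested from this tape
          std::uint64_t queuedRequests;          // queued archive/retrieve requests
          std::chrono::seconds oldestRequestAge; // age of the oldest queued request
        };

        bool worthMounting(const TapeQueueSummary &q)
        {
          constexpr std::uint64_t minBytes    = 500ULL * 1024 * 1024 * 1024; // 500 GiB
          constexpr std::uint64_t minRequests = 1000;
          constexpr std::chrono::seconds maxWait{15 * 60};                   // 15 minutes

          // Mount if enough work has accumulated, or if a request has waited too long.
          return q.queuedBytes >= minBytes
              || q.queuedRequests >= minRequests
              || q.oldestRequestAge >= maxWait;
        }
        ```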
      • 40
        ALICE and the CTA Garbage Collectors

        In the standard layout of an EOSCTA deployment there are two SSD buffers in front of the tape drives. One is called the “default” space and is used for writing files to tape and the other is called the “retrieve” space and is used for reading them back. These buffers prevent direct file transfers between HDDs and tape drives. Such direct transfers would suffer from the unacceptable performance penalties incurred by mixing the preferred access patterns of disk and tape. A HDD usually has thousands of concurrently open files with data bandwidth being shared across them. A tape drive on the other hand simply reads or writes one file at a time at high speed. The mechanical thrashing of a HDD that is associated with thousands of open files may be acceptable to end users but it is unacceptable to a tape drive requiring high bandwidth for a single file.

        The lifetime of the files within the two SSD buffers is relatively short. Files being written to tape are deleted from the default space as soon as they have been safely stored on tape. Files being retrieved from tape are deleted from the retrieve space as soon as they have been copied to their destination system.

        The layout of the EOSCTA deployment for the ALICE experiment is different from the standard layout because it has an additional HDD disk cache, called the “spinners” space, which sits between the retrieve SSD buffer and the ALICE end users. The spinners space is a true disk cache because the lifetime of files within it is relatively long. These files are automatically deleted by one of two garbage collectors when space needs to be freed up in order to make room for newly retrieved files. This workshop presentation describes the ALICE HDD disk cache and the automatic garbage collectors that free up space within it.

        Speaker: Steven Murray (CERN)
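
        Again as a hypothetical illustration rather than the actual EOS/CTA implementation: a garbage collector of the kind described above can be pictured as an LRU sweep that evicts tape-backed files from the disk cache until enough space is free. All names below are assumptions.

        ```cpp
        // Hypothetical LRU-style garbage-collection sweep over a disk cache holding
        // tape-backed replicas; not the real CTA garbage collectors.
        #include <cstdint>
        #include <map>
        #include <string>
        #include <utility>

        struct CacheState {
          std::uint64_t freeBytes;
          // last-access timestamp -> (file path, file size); oldest entries first
          std::multimap<std::uint64_t, std::pair<std::string, std::uint64_t>> lru;
        };

        // Evict least-recently-used files until the requested free space is reached.
        // In practice only files already safely stored on tape would be eligible.
        void garbageCollect(CacheState &cache, std::uint64_t targetFreeBytes)
        {
          while (cache.freeBytes < targetFreeBytes && !cache.lru.empty()) {
            auto oldest = cache.lru.begin();
            cache.freeBytes += oldest->second.second;   // reclaim the file's size
            // deleteDiskReplica(oldest->second.first); // placeholder for the real removal
            cache.lru.erase(oldest);
          }
        }
        ```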
    • 10:20
      Short Break
    • STORAGE ANALYTICS
      Convener: Luca Mascetti (CERN)
    • CLOUD: CERNBOX SAMBA

      Cloud Service related topics like CERNBox ...

      Convener: Elvin Alin Sindrilaru (CERN)
      • 42
        EOS for CERNBox report

        EOS provides the backend to CERNBox, the cloud sync and share service used at CERN. EOS for CERNBox stores 12 PB of user and project space data across 9 different instances running in a multi-FST configuration. This presentation will give an overview of the challenges of 2020, how we tried to address them, and the roadmap for the service in 2021.

        Speaker: Roberto Valverde Cameselle (CERN)
      • 43
        CERNBox: Horizon 2030

        CERNBox is a sync and share collaborative cloud storage solution built at CERN on top of EOS. The service is used by more than 37K users and stores over 12 PB of data. CERNBox has responded to the high demand in our diverse community for an easy and accessible cloud storage solution that provides integrations with other CERN services for big science: visualisation tools, interactive data analysis and real-time collaborative editing.

        In this presentation we take a glimpse at the evolution of the service and the vision we have for it for the next decade.

        Speaker: Hugo Gonzalez Labrador (CERN)
      • 44
        SAMBA: lessons learned

        Last year we presented the architecture of the SAMBA service within CERNBox; this year the topic will be the journey to improve the service, the problems faced and the lessons learned for the future.

        Speaker: Aritz Brosa Iartza (CERN)
    • 16:00
      Virtual Coffee Break
    • CLOUD: CERNBOX REVA OCIS

      Cloud Service related topics like CERNBox ...

      • 45
        Backing up CERNBox: Lessons learned.

        CERNBox is the cloud sync and share service implementation at CERN which is used by physicists and collaborators across the globe. Data stored in CERNBox is becoming more and more critical and having a backup system is crucial for its preservation.

        Two years ago we started a prototype of a backup orchestrator based on the open-source tool restic. In 2020 the project reached maturity and became the main production system for backup and restore. At the time of writing, more than 3 PB of backup data is stored in S3 and more than 36K backup jobs are scheduled every day over the eosxd mounts.

        In this presentation, we will give an overview of the project, focusing on the challenges, what we have learned and our future plans.

        Speakers: Roberto Valverde Cameselle (CERN), Joao Calado Vicente (CERN)
      • 46
        Multi-lock support for Office offline and online applications in CERNBox

        This short contribution will describe the offering of online and offline Office applications for our CERNBox users, and how we support their interplay to facilitate user collaboration.

        Speaker: Dr Giuseppe Lo Presti (CERN)
      • 47
        Making Reva talk to EOS: ultimate scalability and performance for CERNBox

        The Reva component, at the heart of the CERNBox project at CERN, will soon get new plugins that build on the experience accumulated with the current production deployment, where its data is stored centrally in EOS at CERN.

        Making Reva natively interface with EOS through high-performance gRPC and standard HTTPS interfaces will open a new scenario in terms of scalability and manageability of the CERNBox service, whose requirements in terms of data will continue to grow in the next decade. In this contribution we will technically introduce this near-future scenario.

        Speaker: Fabrizio Furano (CERN)
      • 48
        EOS meets Helm: K8s-based instances for testing and external deployments

        This contribution reports on the recent development of Helm charts for the deployment of EOS in kubernetes-orchestrated clusters. An excursus on the state of the art will lead to the underlying motivations and the description of several use cases where a container-based deployment of EOS comes in handy, from disposable clusters for internal testing to installations in commercial clouds for HEP analysis and education.

        Speakers: Fabio Luchetti (CERN), Enrico Bocchi (CERN), Samuel Alfageme Sainz (CERN)
    • OPEN: Exchange (Eastern Timezones)

      The session is reserved for Q&A, proposals and exchange between users ...

    • OPEN: Exchange (Western Timezones)

      The session is reserved for Q&A, proposals and exchange between users ...