1–4 Mar 2021
Europe/Zurich timezone

Contribution List

48 contributions
  1. Andreas Joachim Peters (CERN)
    01/03/2021, 08:00
    EOS
  2. Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
    01/03/2021, 08:15
    EOS

    This talk will give a summary of the main concepts and features of EOS as a storage system.

    • namespace design
    • user concept
    • access control
    • access protocols
    • high availability
      • metadata
      • data
    • scheduling
  3. Elvin Alin Sindrilaru (CERN)
    01/03/2021, 08:35
    EOS

    A summary of the most important developments throughout 2020.

  4. Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
    01/03/2021, 08:55
    EOS

    We will present an overview of the upcoming EOS Version 5 release (Diopside) and a development roadmap.

  5. Dr Maria Arsuaga Rios (CERN)
    01/03/2021, 09:50
    OPS

    General description of the EOS service @CERN

  6. Marco Scavazzon (European Commission)
    01/03/2021, 10:10
    OPS

    The Joint Research Centre (JRC) of the European Commission has set up the Big Data Analytics Platform to enable the JRC projects to process and analyse big data, extracting knowledge and insights in support of EU policy making.

    Since 2016, EOS is the main storage component of the platform. In 2020, the total gross capacity of this instance has reached 19 PiB.

    The Big Data Analytics...

  7. Latchezar Betev (CERN)
    01/03/2021, 10:30
    OPS
  8. Cristian Contescu (CERN)
    01/03/2021, 15:20
    OPS

    Our team is in charge of providing storage and transfer services for the LHC and non-LHC experiments at CERN. In this presentation we are going to walk you through the activities of the EOS operations team at CERN in 2020. We are going to focus on the achievements, hurdles and lessons learned throughout the past year.

  9. Mr Dan Szkola (Fermi National Accelerator Lab. (US))
    01/03/2021, 15:40
    OPS

    Fermilab has been running an EOS instance since testing began in June 2012. By May 2013, before becoming production storage, there was 600TB allocated for EOS. Today, there is approximately 11PB of storage available in the EOS instance.

    An update of our current experiences and challenges running an EOS instance for use by the Fermilab LHC Physics Center (LPC) computing cluster. The LPC...

  10. Erich Birngruber (Austrian Academy of Sciences (AT)), Umit Seren (Austrian Academy of Sciences (AT))
    01/03/2021, 16:00
    OPS

    The institutes at the Vienna Biocenter (GMI, IMBA, IMP) have run HPC services for their life sciences research for several years. With our new infrastructure "CLIP", additional partners came on board in 2019, including the Austrian high-energy physics community.

    Beginning in 2020 the Austrian grid T2 setup was modernized and based on the CLIP infrastructure.

    We run a converged EOS...

  11. Alexandre Lossent (CERN)
    01/03/2021, 16:15
    OPS

    This presentation will briefly describe the usage of EOS for website hosting at CERN.

  12. Michal Kamil Simon (CERN)
    01/03/2021, 16:25
    EOS

    XRootD is a distributed, scalable system for low-latency file access. It is the primary data access framework for the high-energy physics community, and the foundation of the EOS project. In this contribution we give an overview of release 5. In particular, we discuss the TLS-based, secure version of the xroot/root protocol and the several tailor-made enhancements for EOS, like the so-called redirect...

  13. Elvin Alin Sindrilaru (CERN)
    01/03/2021, 16:45
    EOS

    Overview of the XrdHttp integration with EOS together with token support.

  14. Andreas Joachim Peters (CERN)
    01/03/2021, 17:00
    EOS

    This presentation will summarize a few bandwidth and IOPS measurements using the root:// and http:// protocols with XRootD Version 5 in front of disks, NVMe, SSDs and CephFS, together with an outlook on possible future improvements.

  15. Hugo Gonzalez Labrador (CERN)
    01/03/2021, 17:10
    EOS

    The EOS system is an advanced distributed storage system that deals with many extreme use cases (massive data injection from the LHC, latency-critical online home directories and massive throughput accesses from batch farms).

    EOS implements many site reliability engineering best practices to support these use cases at scale and also to support the work done by the operations team...

  16. Sang Un Ahn (Korea Institute of Science & Technology Information (KR))
    02/03/2021, 08:00
    OPS

    We present a disk-based custodial storage system for the ALICE experiment at CERN, built with EOS QRAIN as an alternative to tape for preserving its raw data. In this presentation, we describe the detailed system deployment of the disk-based custodial storage, the integration with the ALICE experiment and the current status of system monitoring, such as hardware error detection and power consumption measurement.

  17. Denis Sergeevich Lujanski
    02/03/2021, 08:20
    OPS
  18. Haibo Li (Institute of High Energy Physics Chinese Academy of Science)
    02/03/2021, 08:35
    OPS

    In this presentation, we will report our current experiences and challenges with running the EOS instances used by IHEP CAS. Currently, IHEP has a total of 42PB of storage, of which EOS accounts for 16PB, an increase of 10PB in 2020. At present, the LHAASO experiment mainly uses EOS as its mass storage system. In addition, the JUNO experiment has completed the construction of an EOS testbed, and EOS...

  19. Minxing Zhang (The Institute of High Energy Physics of the Chinese Academy of Sciences)
    02/03/2021, 08:55
    OPS

    The particle physics computing model involves highly statistical calculations: such applications need to access large amounts of data for analysis, which places very high requirements on data I/O capability. For example, the LHAASO experiment generates trillions of events each year, and the large volume of raw data needs to be decoded, encoded and tagged before it can be analyzed. In this process, very high I/O...

  20. Michal Kamil Simon (CERN)
    02/03/2021, 09:35
    EOS

    In this contribution we report on the new XRootD client declarative API that is in line with the modern C++ programming practices (ranges v3 inspired, support for lambdas and std::futures), offers much improved code readability and genuine composability.

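    The declarative, composable style described in the contribution above can be illustrated with a loose Python analogy (the real XRootD client API is C++; every name below is hypothetical and not part of XRootD): steps are chained with `|` into a single operation that runs asynchronously and yields a future for the final result.

```python
from concurrent.futures import ThreadPoolExecutor, Future


class Step:
    """One pipeline step; `|` composes two steps left-to-right."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # chain: feed this step's result into the next one
        return Step(lambda x: other.fn(self.fn(x)))

    def submit(self, pool: ThreadPoolExecutor, x) -> Future:
        # execute the whole composed chain asynchronously
        return pool.submit(self.fn, x)


# hypothetical stand-ins for operations such as Open | Read
open_step = Step(lambda url: {"url": url, "data": b"payload"})
read_step = Step(lambda handle: handle["data"])

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = (open_step | read_step).submit(pool, "root://host//path/file")
    print(fut.result())  # b'payload'
```

    Declaring the chain up front and submitting it once mirrors the pipeline idea: composition is separated from execution, which is what makes such operations genuinely composable.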
  21. Michal Kamil Simon (CERN)
    02/03/2021, 09:45
    EOS

    In this contribution we give the design details of the new Intel ISAL based XRootD erasure coding library and discuss the preliminary results obtained on the Alice O2 cluster.

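    As background to the erasure-coding contribution above, here is a minimal Python sketch of the underlying idea using a single XOR parity stripe. Real libraries such as Intel ISA-L implement Reed-Solomon codes that tolerate multiple simultaneous losses; nothing here reflects the actual library API.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))


def encode(data: bytes, k: int) -> list:
    """Split data into k equal stripes and append one XOR parity stripe."""
    stripe_len = -(-len(data) // k)             # ceiling division
    padded = data.ljust(stripe_len * k, b"\0")  # pad to a multiple of k
    stripes = [padded[i * stripe_len:(i + 1) * stripe_len] for i in range(k)]
    parity = stripes[0]
    for s in stripes[1:]:
        parity = xor(parity, s)
    return stripes + [parity]                   # k data stripes + 1 parity


def recover(stripes: list, lost: int) -> bytes:
    """Rebuild the stripe at index `lost` by XOR-ing all surviving stripes."""
    rebuilt = b"\0" * len(stripes[0])
    for i, s in enumerate(stripes):
        if i != lost:
            rebuilt = xor(rebuilt, s)
    return rebuilt


blocks = encode(b"hello world!", 3)             # 3 data stripes + 1 parity
assert recover(blocks, 1) == blocks[1]          # any single loss is repairable
```

    An EC layout generalizes this: with Reed-Solomon coding, k data stripes plus m parity stripes survive the loss of any m stripes, at the cost of the extra parity capacity.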
  22. Gregor Molan (COMTRADE D.O.O (SI))
    02/03/2021, 10:00
    EOS

    Context: EOS-wnc, a console EOS client for the Windows operating system.

    Objectives: Usage of EOS-wnc on the Windows platform should offer the same functionality as the EOS client on the Linux platform, at the same level of usability.

    Method: The EOS client can be used as a set of command-line interface (CLI) commands, where each EOS command is executed...

  23. Gregor Molan (COMTRADE D.O.O (SI))
    02/03/2021, 10:10
    EOS

    Context: Native integration of EOS-wnc with the Windows operating system.

    Objectives: EOS-wnc on the Windows platform should be accessible in the same way as Windows local disks and external disk storage, i.e. as a Windows drive letter.

    Method: Storage on a Windows operating system is presented as a “drive letter”. The architecture of Windows storage drivers has...

  24. Gregor Molan (COMTRADE D.O.O (SI))
    02/03/2021, 10:20
    EOS

    Context: The Optimal Software Implementation Model (OSD-Model) is used to supervise and control the development of EOS-wnc, an important extension of the Linux-based EOS system to the Windows platform.

    Objectives: The OSD-Model is used to manage the development process to ensure that EOS-wnc performs on the Windows platform at the same level as the EOS client on Linux....

  25. Dr Maria Arsuaga Rios (CERN)
    02/03/2021, 10:35
    OPS

    LHC Data Storage: RUN-3 preparation

  26. Mr Andrey Kirianov (NRC Kurchatov Institute PNPI (RU))
    02/03/2021, 10:55
    OPS

    In this talk we will share our experience in implementing write buffering with background stage-out of files from a site accessing the Data Lake prototype using EOS built-in LRU and File Converter engines. This study was aimed at improving resource usage for CPU-only sites by reducing the data stage-out overhead.

  27. Cristian Contescu (CERN)
    02/03/2021, 11:15
    OPS

    This presentation will briefly showcase the ALICE O2 HW setup for the pilot storage nodes and the OS challenges we have faced when trying to tweak it for maximum performance, in view of ALICE's Run3 data taking.

  28. Manuel Reis (Universidade de Lisboa (PT))
    02/03/2021, 11:35
    OPS

    EOS Data Durability is a set of tools that automatically detects and repairs problematic files to ensure that data is not lost or compromised.

  29. Andreas Joachim Peters (CERN), Dan van der Ster (CERN)
    02/03/2021, 11:55
    OPS

    This presentation will highlight how to deploy EOS effectively using CephFS as a storage backend, the basic operational aspects for EOS and CephFS and performance expectations.

  30. Andreas Joachim Peters (CERN), Elvin Alin Sindrilaru (CERN)
    02/03/2021, 16:00

    How to install EOS in 5 minutes and run it.

  31. Roberto Valverde Cameselle (CERN)
    02/03/2021, 16:20

    In this presentation we will share some tips and recommendations about different operational procedures on EOS, from techniques to reduce the load on the FSTs' system disk to how to use the geoscheduler mechanism for draining and for adding capacity to the instances.

  32. Dr Maria Arsuaga Rios (CERN), Cristian Contescu (CERN), Roberto Valverde Cameselle (CERN)
    02/03/2021, 16:40

    Practical use cases for eos-ns-inspect tools

  33. Jaroslav Guenther (CERN)
    02/03/2021, 17:00

    A quick tutorial on how to use squashfs images for software and small-file distribution.

  34. Samuel Alfageme Sainz (CERN)
    02/03/2021, 17:25

    In this hands-on session we show how to set up an out-of-the-box OCIS (ownCloud Infinite Scale) instance and connect it to an existing EOS instance.

  35. Michael Davis (CERN)
    03/03/2021, 08:00
    CTA

    The CERN Tape Archive (CTA) is the tape back-end to EOS. EOS provides an event-driven interface, the WorkFlow Engine (WFE), which is used to trigger the processes of archival and retrieval. When EOS is configured with its tape back-end enabled, the CREATE and CLOSEW (CLOSE Write) events are used to trigger the archival of a file to tape, while the PREPARE event triggers the retrieval of a file...

  36. Julien Leduc (CERN)
    03/03/2021, 08:20
    CTA

    An EOSCTA instance, commonly called a tape buffer, is an EOS instance configured with a CERN Tape Archive (CTA) back-end.

    This EOS instance is entirely bandwidth oriented: it offers an SSD-based tape interconnection, it can contain disks if needed, and it is optimized for the various tape workflows.

    This talk will present the specific details of the EOS tape buffer tweaks and the Swiss...

  37. David Jericho (AARNet)
    03/03/2021, 08:40
    CTA
  38. Volodymyr Yurchenko (National Academy of Sciences of Ukraine (UA))
    03/03/2021, 09:20
    CTA

    There is significant diversity in the Data Acquisition (DAQ) systems of the non-LHC experiments supported at CERN. Each system can potentially have its own data taking software and helper scripts, and each can use their preferred data transfer commands and apply different checks and retry policies. The task of the CERN Tape Archive (CTA) team is to provide support for all of these different...

  39. Cedric Caffy (CERN)
    03/03/2021, 09:40
    CTA

    Accessing data in a tape archival system can be costly in terms of time. The time taken to mount a tape into a drive, to position the tape head to a file and to unmount the tape when this file has been read can take more than 2 minutes.

    A tape drive cannot be used to archive or retrieve data during the mounting and unmounting of a tape. We therefore need a solution to avoid mounting a tape...

  40. Steven Murray (CERN)
    03/03/2021, 10:00
    CTA

    In the standard layout of an EOSCTA deployment there are two SSD buffers in front of the tape drives. One is called the “default” space and is used for writing files to tape and the other is called the “retrieve” space and is used for reading them back. These buffers prevent direct file transfers between HDDs and tape drives. Such direct transfers would suffer from the unacceptable performance...

  41. Dr Andrea Sciabà (CERN), Federico Gargiulo (Universita e sezione INFN di Napoli (IT)), Olga Chuchuk (Université Côte d'Azur (FR))
    03/03/2021, 10:25
  42. Roberto Valverde Cameselle (CERN)
    03/03/2021, 15:00

    EOS provides the backend to CERNBox, the cloud sync and share service implementation used at CERN. EOS for CERNBox is storing 12PB of user and project space data across 9 different instances running in multi-fst configuration. This presentation will give an overview of 2020 challenges, how we tried to address them and talk about the roadmap for the service for 2021.

  43. Hugo Gonzalez Labrador (CERN)
    03/03/2021, 15:20

    CERNBox is a sync and share collaborative cloud storage solution built at CERN on top of EOS. The service is used by more than 37K users and stores over 12PB of data. CERNBox has responded to the high demand in our diverse community for an easy and accessible cloud storage solution that provides integrations with other CERN services for big science: visualisation tools, interactive data...

  44. Aritz Brosa Iartza (CERN)
    03/03/2021, 15:40

    Last year we presented the architecture of the SAMBA service within CERNBox; this year the topic will be the journey to improve the service, the problems faced, and the lessons learned for the future.

  45. Roberto Valverde Cameselle (CERN), Joao Calado Vicente (CERN)
    03/03/2021, 16:20

    CERNBox is the cloud sync and share service implementation at CERN which is used by physicists and collaborators across the globe. Data stored in CERNBox is becoming more and more critical and having a backup system is crucial for its preservation.

    Two years ago we started a prototype of a backup orchestrator based on the open source tool restic. In 2020 the project reached its maturity...

  46. Dr Giuseppe Lo Presti (CERN)
    03/03/2021, 16:40

    This short contribution will describe the offer of Office online and offline applications for our CERNBox users, and how we support their interplay to facilitate users' collaboration.

  47. Fabrizio Furano (CERN)
    03/03/2021, 16:50

    The Reva component, at the heart of the CERNBox project at CERN, will soon get new plugins that build on the experience accumulated with the current production deployment, where its data is stored centrally in EOS at CERN.

    Making Reva natively interfaced to EOS through high-performance gRPC and standard HTTPS interfaces will open a new scenario in terms of scalability and manageability...

  48. Fabio Luchetti (CERN), Enrico Bocchi (CERN), Samuel Alfageme Sainz (CERN)
    03/03/2021, 17:10

    This contribution reports on the recent development of Helm charts for the deployment of EOS in Kubernetes-orchestrated clusters. An excursus on the state of the art will lead to the underlying motivations and the description of several use cases where a container-based deployment of EOS comes in handy, from disposable clusters for internal testing to installations in commercial clouds for HEP...
