Andreas Joachim Peters (CERN), 05/02/2018, 09:45
This presentation will be a short introduction to the workshop agenda and provide some basic context to understand the current status and the future roadmap.
Dr Maria Arsuaga Rios (CERN), 05/02/2018, 09:55
The aim of this presentation is to introduce the new EOS website, where users and developers can find all the information they need in one place, with easy interaction and accessibility from all types of devices.
Elvin Alin Sindrilaru (CERN), 05/02/2018, 10:10
This presentation will cover the development and current status of the EOS Citrine release.
Georgios Bitzes (CERN), 05/02/2018, 10:30
EOS has outgrown the limits of its legacy in-memory namespace implementation, presenting the need for a more scalable solution. In response to this need we developed QuarkDB, a highly-available datastore capable of serving as the metadata backend for EOS.
We will present the overall system design, and several important aspects associated with it, such as our efforts in providing comparable...
Andreas Joachim Peters (CERN), 05/02/2018, 10:50
Since the last workshop, the FUSE client has been rewritten. In this presentation we will discuss in detail the new implementation, its configuration and the new performance metrics.
Andrea Manzi (CERN), 05/02/2018, 11:20
This presentation will show the status and plans for the EOS Citrine Scheduler component focusing in particular on the configuration aspects. The talk will also introduce the new implementation of the Drain subsystem which now uses the GeoTreeEngine component for the drain placement selection.
Jozsef Makai (CERN), 05/02/2018, 11:35
Until now, the EOS FST has stored file metadata in various relational databases. To simplify their handling, the way of storing file metadata is going to be changed: Base64-encoded, serialized Protobuf metadata objects will be stored as extended attributes.
This approach also gives us the advantage of easily compressing the metadata, achieving an average compression ratio of 0.5 and saving 50%...
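The encoding pipeline described above can be sketched as follows. This is a minimal illustration only: JSON stands in for the actual Protobuf serialization, the metadata fields and the attribute name `user.eos.fmd` are hypothetical, not taken from the EOS code.

```python
import base64
import json
import zlib

# Hypothetical file metadata; JSON is used here purely as a stand-in
# for the serialized Protobuf object EOS would actually store.
metadata = {
    "fid": 123456,
    "size": 4096,
    "checksum": "adler32:0x1a2b3c4d",
    "layout": "replica:2",
}

def encode_metadata(meta: dict) -> bytes:
    """Serialize, compress and Base64-encode metadata for an xattr value."""
    raw = json.dumps(meta).encode("utf-8")
    compressed = zlib.compress(raw)  # the abstract reports an average ratio of 0.5
    return base64.b64encode(compressed)

def decode_metadata(blob: bytes) -> dict:
    """Reverse the encoding to recover the original metadata."""
    return json.loads(zlib.decompress(base64.b64decode(blob)))

encoded = encode_metadata(metadata)
assert decode_metadata(encoded) == metadata
# On Linux, the value could then be attached to the data file itself, e.g.:
#   os.setxattr("/path/to/file", "user.eos.fmd", encoded)
```

The Base64 step keeps the value printable, at the cost of a 4/3 size overhead on top of the compression gain.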
Jozsef Makai (CERN), 05/02/2018, 11:50
WLCG accounting is an important task for monitoring the available and used resources of the LHC computing grid. Accountable resources include the EOS storage space of the experiments.
To support this task force from the EOS side, EOS has introduced a new accounting interface (see the accounting CLI command) to make the necessary information easily available. The accounting information consist...
Xavier Espinal Curull (CERN), 05/02/2018, 13:45
The computing strategy document for HL-LHC identifies storage as one of the main WLCG challenges a decade from now. Under the naive assumption of applying today's computing model, the ATLAS and CMS experiments will need one order of magnitude more storage resources than what could realistically be provided by the funding agencies at the same cost as today. The evolution of the computing...
Oliver Keeble (CERN), 05/02/2018, 14:05
EOS is participating in the EU-funded eXtreme Data Cloud (XDC) Project which will support work on distributed deployment, caching and federation. This contribution gives an overview of the project and EOS's role within it.
Pete Eby (Oak Ridge National Laboratory (US)), 05/02/2018, 14:20
The ALICE Online/Offline (O2) Disk Buffer project will deploy a 60PB EOS filesystem at CERN to accommodate the Pb-Pb data taking period planned for 2020. An initial ~6PB evaluation system is planned for deployment in May 2018.
Members from CERN, Oak Ridge National Lab (ORNL), and Lawrence Berkeley National Lab (LBNL) are collaborating on Work Package 15 (WP15) in the development of a...
Herve Rousseau (CERN), 05/02/2018, 14:40
The EOS operations team at CERN runs multiple EOS instances for the physics experiments and other laboratory activities.
In this presentation we will focus on infrastructure changes, best practices and evolution. A second part will cover the upgrade process we're going through to run Citrine, as well as the tools we wrote and use to manage our EOS instances. We will end the talk...
Armin Burger (European Commission - Joint Research Centre), Veselin Vasilev (European Commission, Joint Research Centre (JRC)), 05/02/2018, 15:05
The Joint Research Centre (JRC) of the European Commission has set up the JRC Earth Observation Data and Processing Platform (JEODPP) as a pilot infrastructure to enable the knowledge production Units to process and analyze big geospatial data in support of EU policy needs. The platform is built upon commodity hardware, and the first operational services were made available in mid-2016. It currently...
Dan Szkola (Fermi National Accelerator Lab. (US)), 05/02/2018, 15:45
We report on operational experiences and future plans with the Fermilab LHC Physics Center (LPC) computing cluster. The LPC cluster is a 4500-core user analysis cluster with 5 PB of storage running EOS. The LPC cluster supports several hundred users annually, from CMS university groups across the US. We anticipate the total EOS storage pool to grow by 50% by the start of Run 3 of the LHC.
David Jericho (AARNet), 05/02/2018, 16:05
AARNet's use of EOS for both our production CDN and our CloudStor platform over the last two years has been an adventure in collaboration, experiencing bugs, and extracting esoteric knowledge from both people and the code base.
EOS occupies a space that isn't served by any existing open-source scale-out storage solution. Neither Ceph nor any of the less common scale-out systems provides the...
Haibo Li (Institute of High Energy Physics, Chinese Academy of Sciences), 05/02/2018, 16:25
This report will cover the current status and recent updates of EOS at the IHEP site since the first EOS workshop in 2017, including storage expansion, issues encountered and other related work.
Pete Eby (Oak Ridge National Laboratory (US)), 05/02/2018, 16:45
During the last two years Oak Ridge National Laboratory (ORNL) has administered the ORNL::EOS T2 site, which has seen two storage capacity expansions, with installed capacity increasing from 1PB to 2.5PB. As utilization and capacity have grown, observations on the performance impact of the underlying storage architecture, RAID size, filesystem design decisions, and performance tunings have been...
Michal Kamil Simon (CERN), 06/02/2018, 09:00
XRootD is a distributed, scalable system for low-latency file access. It is the primary data access framework for the high-energy physics community, and the backbone of the EOS project.
In this contribution we (briefly) discuss the most important new features introduced in 2017, including: support for systemd socket inheritance, XrdSsi, Caching Proxy v2, support for local files and...
Michal Kamil Simon (CERN), 06/02/2018, 09:20
In order to bring the potential of Erasure Coding (EC) to the XRootD/EOS ecosystem, an effort has been undertaken to implement a native EC XRootD plugin based on the Intel Storage Acceleration Library (ISA-L). In this contribution we discuss the architecture of the plugin, carefully engineered to enable low-latency data streaming and 2D erasure coding. We also report on the status,...
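As a conceptual illustration of erasure coding (not the ISA-L plugin itself, which uses Reed-Solomon codes), the sketch below shows the simplest possible scheme: one XOR parity chunk that can rebuild any single lost data chunk.

```python
def xor_parity(chunks):
    """Compute a parity chunk as the byte-wise XOR of equal-size data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_chunks, parity):
    """Rebuild one lost data chunk: XOR of parity and all surviving chunks."""
    return xor_parity(list(surviving_chunks) + [parity])

data = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_parity(data)
# Lose the middle chunk and reconstruct it from the rest plus parity:
rebuilt = recover([data[0], data[2]], parity)
assert rebuilt == data[1]
```

Real EC schemes generalize this idea to tolerate multiple simultaneous losses, which is where libraries like ISA-L and its optimized Galois-field arithmetic come in.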
Georgios Bitzes (CERN), 06/02/2018, 09:30
Supporting multiple parallel users in eosxd requires some mechanism of distinguishing their identities, and assigning a different set of credentials to each.
In this presentation, we detail our efforts in implementing the eosxd authentication subsystem based on process environment variables.
However, reading the environment variables of a process (/proc/pid/environ) from within a FUSE daemon...
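As a rough sketch of the underlying mechanism (not the actual eosxd code), a process's environment can be read from procfs as NUL-separated KEY=VALUE pairs. The function name below is made up for illustration.

```python
import os

def read_process_environ(pid: int) -> dict:
    """Read the environment of a running process from /proc/<pid>/environ.

    Entries are NUL-separated KEY=VALUE pairs. Note that this snapshot
    reflects the environment at exec() time, not later modifications,
    and that reading another user's process requires adequate privileges.
    """
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    env = {}
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            env[key.decode(errors="replace")] = value.decode(errors="replace")
    return env

# Inspect our own environment as a demonstration; a credential-related
# variable such as KRB5CCNAME is what a FUSE daemon would look for.
own_env = read_process_environ(os.getpid())
print(own_env.get("KRB5CCNAME") or own_env.get("PATH"))
```

The abstract's caveat applies: doing this from inside a FUSE daemon is delicate, since the read can itself touch the FUSE mount and deadlock.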
Elvin Alin Sindrilaru (CERN), 06/02/2018, 09:45
This presentation will give an overview of the code structure, resources, simple docker-based testing and more.
Jozsef Makai (CERN), 06/02/2018, 10:30
In the past year, we have migrated the continuous integration platform of EOS, XRootD and all related projects from Jenkins to GitLab CI in order to provide a more agile, satisfying and fully automated build environment.
Numerous achievements have been made during the year.
We have introduced builds and packages for new platforms. For EOS, we have created an all-inclusive dmg package for...
Crystal Chua (AARNet), 06/02/2018, 10:45
This talk covers a journey through fuzz-testing CERN's EOS file system with AFL, from compiling EOS with afl-gcc/afl-g++, to learning to use AFL, and finally, making sense of the results obtained.
Fuzzing is a software testing process that aims to find bugs, and subsequently potential security vulnerabilities, by attempting to trigger unexpected behaviour with random inputs. It is...
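The principle can be sketched in a few lines of Python. This is a toy random fuzzer, not AFL's coverage-guided, instrumented approach, and `parse_header` is a made-up target with a deliberate bug.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a deliberate bug: it assumes a 4-byte length prefix
    and crashes (IndexError) on shorter input."""
    return (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | data[3]

def fuzz(target, iterations=1000, seed=42):
    """Feed random byte strings to `target` and collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # here: every input shorter than 4 bytes
    return crashes

crashes = fuzz(parse_header)
print(f"{len(crashes)} crashing inputs found")
```

AFL improves on this blind loop by instrumenting the compiled binary (hence afl-gcc/afl-g++) and mutating inputs that reach new code paths, which is what makes it effective on something as large as EOS.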
Enrico Bocchi (CERN), 06/02/2018, 11:05
Docker containers are rapidly becoming the preferred way to distribute, deploy, and run services by developers and system administrators. Their popularity is rapidly increasing as they constitute an appealing alternative to virtual machines: containers require a negligible amount of time to set up, provide performance comparable to that of the host, and are easy to manage, replicate, and...
Martin Vala (Technical University of Kosice (SK)), 06/02/2018, 11:25
This talk will present a new automatic tool, eos-docker-utils, for configuring and updating EOS storage using Docker. Currently, plain EOS and ALICE EOS storage configurations are supported. The first production storage is running for the ALICE experiment (ALICE:Kosice::EOS).
Georgios Bitzes (CERN), 06/02/2018, 11:45
During this presentation, we will demo the setup and operation of a highly-available QuarkDB cluster, ready to be used as backend for the new EOS namespace.
Michael Davis (CERN), 06/02/2018, 14:00
The CERN Tape Archive (CTA) is the tape archival back-end for EOS and the successor to CASTOR. This talk will give an update on CTA developments since last year's EOS workshop.
Michael Davis (CERN), 06/02/2018, 14:15
This talk will give an overview of the XRootD Scalable Service Interface (SSI), which provides an asynchronous request-response framework with an emphasis on efficient data transfers. This will include a case study explaining how we used SSI and Google Protocol Buffers to develop the API between EOS and the CERN Tape Archive (CTA). The SSI-Protobuf bindings are available as a generic framework...
Andreas Joachim Peters (CERN), 06/02/2018, 14:45
In this presentation we will briefly explain the foreseen developments to implement the XDC and data lake concepts.
Luca Mascetti (CERN), Hugo Gonzalez Labrador (CERN), 06/02/2018, 14:55
CERNBox is the CERN cloud storage service. It allows synchronising and sharing files on all major desktop and mobile platforms (Linux, Windows, macOS, Android, iOS), aiming to provide universal access and offline availability to any data stored in the CERN EOS infrastructure.
With more than 12k users registered in the system, CERNBox has responded to the high demand in our diverse community...
Andrey Zarochentsev (St Petersburg State University (RU)), 06/02/2018, 15:15
Jakub Moscicki (CERN), 06/02/2018, 15:25
I'll discuss possible improvements to the EOS permission system to gracefully support ACLs for both sync/share access (CERNBox) and filesystem access (POSIX). This will also include an implementation of automatic synchronization of the shares from EOS.
This addresses functional shortcomings for current CERNBox users and prepares for future massive filesystem access to EOS user instances at CERN.
Hugo Gonzalez Labrador (CERN), 06/02/2018, 15:40