This is a quick recap of the EOS community: a who's who and an overview of our communication platforms.
This presentation will give a brief introduction to EOS and the Aquamarine release version.
This presentation summarizes the current EOS service deployment at CERN.
Introduction to the CERNBox Service at CERN for those who do not know it yet.
The Web Application Open Platform Interface (WOPI) protocol allows collaborative editing with Office Online applications to be integrated into CERNBox/EOS.
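As an illustration, a WOPI host answers a CheckFileInfo request with a JSON description of the file. The field names below come from the public WOPI specification; the helper itself is a hypothetical sketch, not the CERNBox implementation.

```python
import json

def check_file_info(filename, size, owner, user, can_write=True):
    """Build a minimal WOPI CheckFileInfo response body.

    Field names (BaseFileName, Size, OwnerId, UserId, UserCanWrite)
    follow the WOPI specification; this helper is only a sketch.
    """
    return json.dumps({
        "BaseFileName": filename,
        "Size": size,
        "OwnerId": owner,
        "UserId": user,
        "UserCanWrite": can_write,
    })

# An Office Online client issues GET /wopi/files/<id> and receives
# a body like this one:
body = check_file_info("report.docx", 4096, "alice", "bob")
```

A real host would also implement the GetFile and PutFile endpoints to serve and save the document content itself.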
Impressed by CERN's EOS filesystem, which provides a POSIX-like capability on top of a tried and tested large-scale storage system, AARNet runs multiple EOS clusters, the most interesting being a three-site single namespace replicating data across links with 65 ms of latency. This filesystem holds user data delivered via ownCloud, FileSender and an internally developed tool for fast parallel bundled uploads.
...
The Copernicus Programme of the European Union, with its fleet of Sentinel satellites, will generate up to 10 terabytes of Earth Observation (EO) data per day once at full operational capacity. These data, combined with other geospatial data sources, form the basis of many JRC knowledge-production activities. In order to handle this large volume of data and its processing, the JRC Earth...
A wide range of detector commissioning, calibration and data analysis tasks is carried out by members of the Compact Muon Solenoid (CMS) collaboration using dedicated storage resources available at the CMS CERN Tier-2 centre.
Relying on the functionalities of the EOS storage technology, the optimal exploitation of the CMS user and group resources has required the introduction of policies for...
XRootD is a distributed, scalable system for low-latency file access. It is the primary data access framework for the high-energy physics community and the backbone of EOS. One of the latest developments in the XRootD client has been the incorporation of metalink and segmented file-transfer technologies. We also report on the implementation of signed requests and ZIP archive support.
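The idea behind segmented transfer can be sketched in a few lines: split a file into byte ranges and fetch them concurrently, possibly from different replicas listed in a metalink. The snippet below illustrates only the concept with an in-memory source; it is not the XRootD client implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def segment_ranges(size, segment_size):
    """Split a file of `size` bytes into (offset, length) segments."""
    return [(off, min(segment_size, size - off))
            for off in range(0, size, segment_size)]

def fetch(source, offset, length):
    # Stand-in for a ranged read; a real client would issue range
    # requests against one of the replicas named in the metalink.
    return source[offset:offset + length]

def segmented_download(source, segment_size=4):
    """Fetch all segments in parallel and reassemble them in order."""
    ranges = segment_ranges(len(source), segment_size)
    with ThreadPoolExecutor(max_workers=4) as pool:
        parts = pool.map(lambda r: fetch(source, *r), ranges)
    return b"".join(parts)
```

Because `pool.map` preserves input order, the segments reassemble into the original byte stream even though they are fetched concurrently.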
We will review our experience from 5 years of EOS in production and introduce a generic architectural evolution.
EOS namespace on top of a key-value store
CERN has been developing and operating EOS as a disk storage solution successfully for over 6 years. The CERN deployment provides 140 PB of storage and more than 1.4 billion replicas distributed over two computer centres. Deployment includes four LHC instances, a shared instance for smaller...
QuarkDB will soon become the storage backend for the EOS namespace. Implemented on top of RocksDB, a key-value store developed by Facebook, QuarkDB offers a Redis-compatible API and high availability through replication.
In this talk, I will go through some of the design decisions behind QuarkDB and detail how replication is achieved through the Raft consensus algorithm.
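To give a feel for how a hierarchical namespace maps onto a flat key-value store, here is a toy sketch: each container becomes a key whose value is the set of its children. The key layout is invented for illustration; QuarkDB's actual schema (and its use of the Redis-compatible API) differs.

```python
class KVNamespace:
    """Toy filesystem namespace backed by a flat key-value store.

    A plain dict stands in for the Redis-compatible backend; the
    "dir:<path>" key layout is hypothetical, for illustration only.
    """

    def __init__(self):
        self.kv = {}  # stand-in for the key-value backend

    def mkdir(self, path):
        # Create the container's own key ...
        self.kv.setdefault("dir:" + path, set())
        # ... and register it as a child of its parent container.
        parent, _, name = path.rpartition("/")
        if name:
            self.kv.setdefault("dir:" + (parent or "/"), set()).add(name)

    def listdir(self, path):
        return sorted(self.kv.get("dir:" + path, set()))
```

Directory listing then becomes a single key lookup, which is what makes a key-value backend attractive for very large namespaces.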
The CITRINE release provides a completely re-engineered scheduling algorithm for geographically aware file placement. The presentation will highlight key concepts: the scheduling tree, proxy groups, file stickiness ...
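The core idea of geographically aware placement can be sketched as a greedy choice that spreads replicas across distinct geotags before reusing a location. This is only an illustration of the concept, not CITRINE's scheduling-tree algorithm.

```python
def place_replicas(filesystems, n):
    """Pick n filesystems, preferring geotag diversity.

    `filesystems` is a list of (fs_id, geotag) pairs. This greedy
    sketch illustrates geo-aware placement only; the real scheduler
    walks a scheduling tree and accounts for load, status and
    proxy groups.
    """
    chosen, used_tags = [], set()
    # First pass: at most one replica per distinct geotag.
    for fs_id, tag in filesystems:
        if len(chosen) == n:
            break
        if tag not in used_tags:
            chosen.append(fs_id)
            used_tags.add(tag)
    # Second pass: fill any remaining slots regardless of geotag.
    for fs_id, _tag in filesystems:
        if len(chosen) == n:
            break
        if fs_id not in chosen:
            chosen.append(fs_id)
    return chosen
```

With two replicas requested and filesystems in two locations, the first pass alone already yields one replica per site.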
During 2016, the usage and scope of EOS at CERN and elsewhere evolved towards its use as a POSIX-like filesystem. The presentation will highlight the current state and the improvements made during the last year. We will introduce the next-generation (reimplemented) FUSE client in EOS, which should overcome most limitations and non-POSIX behaviour by adding dedicated server-side support...
We have added a mechanism similar to that of AFS/Kerberos for binding user applications to user credentials when interacting with EOS over FUSE. In this presentation we will describe the implementation and its integration into the login mechanism on interactive and batch nodes at CERN.
This talk will present the DevOps workflow used to validate and deploy new eos-fuse releases to the CERN computing infrastructure.
The goal of this talk is to share experience installing and using EOS storage at small clusters/sites for local users or students. The simple cluster setup comprises FreeIPA (Kerberos and LDAP) for authentication with its own certificate authority, the SLURM queuing system, CVMFS for software distribution, and EOS as storage for data and home directories. GitLab is used for development and issue tracking....
There are many large scientific projects at the Institute of High Energy Physics (IHEP), such as BESIII, JUNO and LHAASO. These experiments have a huge demand for massive data storage, and EOS, an open-source distributed disk storage system, provides a good solution. IHEP has now deployed two EOS instances: one used for batch computing, and another for public usage (ownCloud + EOS). In this...
In our talk we will cover the development and implementation of a federated data storage prototype for WLCG centres of different levels and university clusters within one Russian national cloud. The prototype is based on computing resources located in Moscow, Dubna, St. Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations with...
The CERN Tape Archive (CTA) will provide EOS instances with a tape backend. It inherits from CASTOR's tape system but will provide a new request-queuing system allowing more efficient use of tape resources.
In this presentation we will cover CTA's architecture and the project's status.
An introduction to using the EOS REST API for management and user interfaces.
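EOS exposes management commands as key-value query strings (mgm.cmd=..., mgm.path=...) via the MGM's "proc" interface. The sketch below only builds such a request URL; the hostname and port are placeholders, and which commands are reachable over HTTP depends on the instance configuration.

```python
from urllib.parse import urlencode

def proc_url(mgm, cmd, **args):
    """Build a request URL for the EOS MGM proc interface.

    `mgm` is a placeholder host:port; the mgm.* query-string
    convention follows the EOS proc interface, but this helper
    is an illustrative sketch, not an official client.
    """
    query = {"mgm.cmd": cmd}
    query.update({"mgm." + k: v for k, v in args.items()})
    return "http://{}/proc/user/?{}".format(mgm, urlencode(sorted(query.items())))

url = proc_url("eos-mgm.example.org:8000", "fileinfo", path="/eos/demo/file")
```

A management or web UI can then issue such requests with any HTTP client and parse the returned command output.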
The talk describes a graphical interface under implementation to help administering an EOS cluster.