Features and changes since the last workshop (5.5.4 to 5.7.1).
In this contribution, we discuss infrastructure updates to XRootD and other development topics.
A roundup of planned features and improvements.
- XRootD Development Model
- Documentation and Submitting Patches
- GitHub Actions Continuous Integration
- Building and Running Tests Locally
Pelican and the OSDF Overview
New resource monitoring infrastructure.
Purge plugin support.
Planned development:
- planned and possible extensions of resource monitoring
- improvements to prefetching
The Pelican XrdCl plugin
We created a set of tests to probe every hardware and software limit related to XRootD. This talk presents the results of these tests, along with some OSDF statistics.
The Pelican Globus/HTTP/S3 OSS backend
With this contribution, I want to present our first production deployment of XCache to optimize the workflow and efficiency of CMS jobs at our local HPC cluster at KIT, HoreKa. The project is part of the preparations for the future German HEP computing strategy, which focuses on HPC contributions.
Our fully containerized setup is deployed on a login node of the cluster and uses a shared...
UK Storage (XrdCeph, CephFS+XRootD, XCache, and VP)
A presentation on CERN Central Monitoring, showcasing how data gets ingested, processed, enriched, aggregated and stored in OpenSearch.
From OpenSearch storage, tools such as Grafana can leverage the data and create dashboards and plots.
The old XRootD monitoring collector, GLED, has been turned off and is being replaced by Shoveler. This presentation looks at the testing and validation of this software, as well as the status of other XRootD monitoring.
We show how we've combined three means of monitoring of our gateways, and suggest some enhancements.
The types of information available via XRootD monitoring
Outline:
- The need to expose multiple IPs
- Network namespaces for isolation
- Manual approach
- k8s/multus approach
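As an illustration of the manual approach above, the following is a minimal sketch of exposing a second IP via a Linux network namespace. All names and addresses (`xrdns`, the veth pair, the 192.0.2.x addresses, and the config path) are illustrative assumptions, not taken from the talk; the commands require root.

```shell
# Create an isolated network namespace for a second XRootD instance
# (namespace, interface names, and addresses are illustrative)
ip netns add xrdns

# Create a veth pair and move one end into the namespace
ip link add veth-host type veth peer name veth-xrd
ip link set veth-xrd netns xrdns

# Assign addresses and bring the links up on both sides
ip addr add 192.0.2.10/24 dev veth-host
ip link set veth-host up
ip netns exec xrdns ip addr add 192.0.2.11/24 dev veth-xrd
ip netns exec xrdns ip link set veth-xrd up
ip netns exec xrdns ip link set lo up

# Start an XRootD server inside the namespace so it binds only
# the namespaced IP (config path is a placeholder)
ip netns exec xrdns xrootd -c /etc/xrootd/xrootd-clustered.cfg
```

The k8s/multus approach achieves the same isolation declaratively, by attaching additional network interfaces to a pod rather than managing namespaces by hand.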
Duration: 20 min. Virtual presentation.
For a long time, the Ceph-based disk storage at the RAL Tier-1 was unable to execute vector read requests effectively, causing problems for some VOs. The talk describes multiple changes made to the XRootD-Ceph plugin and its configuration to solve the problem.
XRootD is a robust, scalable service that supports globally distributed data management for diverse scientific communities. Within GridPP in the UK, XRootD is used by the Astronomy, High-Energy Physics (HEP) and other communities to access >100PB of storage. The optimal configuration for XRootD varies significantly across different sites due to unique technological frameworks and site-specific...
Present motivation & possibilities.
Discussion on feasibility and required changes for prototype implementation.
How to navigate the XRootD GitHub repository and find helpful treasures at every click.
This talk will present an overview of the different components of XRootD and how FTS/XRootD fits into the data management and transfer parts of our Computing Model.