XRootD and FTS Workshop @ PIC (Barcelona)
Lluis Vives I, Hotel Exe Campus
The 2025 XRootD and FTS workshop will take place on 13-17 October, 2025, at Hotel Exe Campus, Universitat Autonoma de Barcelona (UAB), Bellaterra (Barcelona), Spain. This event will be hosted by PIC, which is located at the UAB.
The XRootD and FTS workshop brings together the XRootD and FTS developers with people from academia, research, and industry to discuss current and future data access activities related to the XRootD framework and the FTS project.
Presentations focus on achievements, shortcomings, requirements, and future plans for both the XRootD and FTS projects.
Registration opens on 15 May and closes on 10 October (early-bird extended, closing 19 September at 23:59). The workshop fee is 325€ (early-bird) / 375€ (standard) for the five days, which includes lunches and coffee breaks. There is also the option to join the social dinner for 40€.

Monday, 13 October

11:00 → 12:15  Registration (Lluis Vives I, Hotel Exe Campus)
12:15 → 14:00  Lunch
14:00 → 14:30  Welcome & Logistics
  14:00  Welcome to PIC and UAB (20m)
         Speaker: Gonzalo Merino (IFAE - Institute for High Energy Physics)
  14:20  Workshop logistics (10m)
         Speaker: Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
14:30 → 15:30  FTS Presentations
  14:30  FTS4: Plans and Developments (40m)
         An overview of what's new in the FTS4 world, ending with the short, medium and long-term plans.
         Speaker: Steven Murray (CERN)
  15:10  FTS4: The Reinvented Scheduler (20m)
         An overview of the new FTS4 scheduler.
         Speaker: Nicola Pace
15:30 → 16:00  Coffee break
16:00 → 17:30  FTS Presentations
  16:00  FTS4: The New File API (20m)
         An overview of the new File API, which will replace the usage of the Gfal2 library.
         Speaker: Louis Regnier
  16:20  FTS & Rucio (20m)
         FTS is the de facto transfer tool for Rucio communities to orchestrate site-to-site transfers. This talk will cover a variety of topics regarding the interaction between Rucio and FTS.
         Speaker: Dimitrios Christidis (CERN)
18:00 → 18:30  Welcome drink (Terrace, Hotel Exe Campus)

Tuesday, 14 October
09:10 → 10:30  FTS Presentations
  09:10  FTS3: State of stable affairs (40m)
         An overview of the FTS3 service deployment at CERN, together with the latest changes to the software to keep operations smooth.
         Speaker: Mihai Patrascoiu (CERN)
  09:50  FTS@PIC (20m)
         Description and usage of the PIC FTS instance.
         Speaker: Christian Neissner (PIC)
  10:10  Site update on FTS at RAL (20m)
         General overview of the current status of the FTS service run at RAL and plans for it moving forward.
         Speaker: Rose Cooper
10:30 → 11:00  Coffee break
11:00 → 12:30  FTS Presentations
  11:00  ATLAS Community Talk (20m)
         FTS manages the transfer of millions of files every day for the ATLAS experiment. This talk aims to offer some insights into our use of FTS and outline our hopes for FTS4.
         Speaker: Dimitrios Christidis (CERN)
  11:20  CMS community talk (20m)
         A description of CMS Data Management with particular emphasis on FTS. This will include the latest updates and changes since the last workshop.
         Speaker: Panos Paparrigopoulos (CERN)
  11:40  LHCb <3 FTS (20m)
         I promise to make a better abstract very soon!
         Speaker: Christophe Haen (CERN)
  12:00  FTS4: November test planning (30m)
12:30 → 14:00  Lunch
14:00 → 15:15  WLCG Open Technical Forum #7
  Conveners: Alessandro Di Girolamo (CERN), James Letts (Univ. of California San Diego (US))
  14:00  Introduction (10m)
         Speakers: Alessandro Di Girolamo (CERN), James Letts (Univ. of California San Diego (US))
  14:10  FTS - status and plans (15m)
         Major plans for the coming years, gaps identified in the current situation, and concrete milestones along the way to solving the major problems observed.
         Speaker: Steven Murray (CERN)
  14:40  XRootD - status and plans (20m)
         Major plans for the coming years, gaps identified in the current situation, and concrete milestones along the way to solving the major problems observed.
         Speaker: Andrew Hanushevsky (Stanford University/SLAC)
15:15 → 15:45  Coffee break
15:45 → 17:00  WLCG Open Technical Forum #7
  16:20  Evolution of Grid Data Management clients (20m)
         Speaker: Luca Mascetti (CERN)

Wednesday, 15 October
09:00 → 10:30  FTS Presentations
  09:00  FTS, Tape & Tokens: Making First Contact (30m)
         An overview of the Tape & Tokens proposal and how it was implemented in the FTS4 software.
         Speaker: Mihai Patrascoiu (CERN)
  09:30  EOSCTA tape integration with WLCG tokens (20m)
         As we move towards managing tape staging operations with WLCG tokens, EOSCTA has to be adapted for this new reality. This presentation will cover the latest WLCG token developments on EOSCTA, as well as the plans towards full adoption of tokens in EOSCTA tape workflows.
         Speaker: Joao Afonso (CERN)
  09:50  Tape & Tokens: Discussion (40m)
10:30 → 11:00  Coffee break
11:00 → 12:15  FTS Presentations
  11:00  FTS Open Discussion (1h)
12:15 → 13:45  Lunch
13:45 → 14:00  Welcome & Logistics
14:00 → 15:00  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  14:00  What happened since the last workshop (20m)
         Past developments.
         Speaker: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
  14:20  Plans for XRootD 6.0 and Other News (20m)
         What we are looking forward to, and when.
         Speaker: Guilherme Amadio (CERN)
  14:40  XRootD Evolution Discussion (20m)
         Where to find it and how to drive it.
         Speakers: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)), Guilherme Amadio (CERN)
15:00 → 15:30  Coffee break
15:30 → 17:00  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  15:30  Introducing file cloning in XRootD R6 (15m)
         Speaker: David Smith (CERN)
  15:45  Scitags: A Standardised Framework for Traffic Identification and Network Visibility (30m)
         High-Energy Physics (HEP) experiments rely on complex, large-scale networks spanning vast geographical areas and interconnecting heterogeneous sites, data centres and instruments. Managing these networks in the face of high-intensity data flows such as those intrinsic to HEP workflows poses a significant operational and administrative challenge. These conditions are expected to worsen in the HL-LHC era, with its accompanying increase in data rates. The currently limited visibility into network traffic flows hinders network operators' ability to understand user behaviour across different network segments, optimise them for performance, and effectively debug and troubleshoot issues.
         The Scitags initiative strives to address these challenges by improving network visibility through standardised datagram marking and flow labelling techniques. Formed within the Research Networking Technical Working Group (RNTWG) in 2020, Scitags aims to develop a generic framework and standards for identifying the owner and associated scientific activity of network traffic. The framework's potential use extends beyond the HEP/WLCG experiments to any global community making use of Research and Education (R&E) networks.
         This presentation will detail the current state of the Scitags initiative, including the evolving framework and its implementations, alongside the tried and tested technologies they are built on, including eBPF and IPv6 Extension Headers. The roadmap towards production deployments in both R&E networks and Storage Element implementations such as XRootD will also be discussed. By providing greater network visibility, Scitags will empower network operators to optimise performance, troubleshoot issues more effectively, and allow for a more performant use of networks in support of data-intensive scientific collaborations.
         Speaker: Pablo Collado Soto (Universidad Autonoma de Madrid (ES))
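As a conceptual aside on the flow-labelling idea in the Scitags abstract above: the core mechanism is packing an owner (experiment) ID and an activity ID into a small bitfield carried alongside the traffic, so that network operators can attribute flows. The sketch below illustrates such packing; the field widths (9 and 6 bits) and the function names are illustrative assumptions, not the actual Scitags wire format.

```python
# Illustrative sketch of Scitags-style flow marking: pack an owner
# (experiment) ID and an activity ID into one small integer label.
# NOTE: field widths and layout here are invented for illustration;
# consult the Scitags specification for the real format.

EXPERIMENT_BITS = 9   # assumed width for the owner/experiment ID
ACTIVITY_BITS = 6     # assumed width for the activity ID

def encode_label(experiment_id: int, activity_id: int) -> int:
    """Pack (experiment_id, activity_id) into a single integer label."""
    if not 0 <= experiment_id < (1 << EXPERIMENT_BITS):
        raise ValueError("experiment_id out of range")
    if not 0 <= activity_id < (1 << ACTIVITY_BITS):
        raise ValueError("activity_id out of range")
    return (experiment_id << ACTIVITY_BITS) | activity_id

def decode_label(label: int) -> tuple[int, int]:
    """Recover (experiment_id, activity_id) from a packed label."""
    return label >> ACTIVITY_BITS, label & ((1 << ACTIVITY_BITS) - 1)
```

A receiver that knows the registry of IDs can then map any observed label back to a science domain and activity, which is the visibility gain the talk describes.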
  16:15  Monitoring a distributed XRootD infrastructure (20m)
         Do you know what your XRootD-based service is doing?! Monitoring is a key part of distributing data, and information can be gathered from the network or storage layer at the client or server. Each data point provides insight into how the service is performing and where further investigation may be needed. This talk will overview the monitoring statistics the Pelican project has found most useful in understanding our services, how we aggregate the information, and what inputs we still consider missing.
         Speaker: Brian Paul Bockelman (University of Wisconsin Madison (US))
  16:35  Evolution of Summary Monitoring (20m)
         New format, directives, etc., but with a migration plan.
         Speaker: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
17:30 → 18:30  PIC DataCenter Visit

20:00 → 23:00  Workshop Dinner

Thursday, 16 October
09:30 → 10:30  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  09:30  XCache -- recent developments (20m)
         Speaker: Alja Mrak Tadel (UCSD)
  09:50  XCache: plans, suggestions/feedback and Q&A (20m)
         Speaker: Matevz Tadel (Univ. of California San Diego (US))
  10:10  Pelican and XRootD: Moving Petabytes for the broader US Science and Engineering community (20m)
         The NSF-funded Pelican project provides a software platform for data federations for the broad US Science and Engineering community. The flagship instance of Pelican is the Open Science Data Federation (OSDF), which moves over 100PB a year and connects data from a broad set of science domains, from particle physics to earth systems. Peel back enough layers and you'll find an XRootD server actually moving the data for Pelican. This talk provides an overview of how Pelican leverages the XRootD software and how it has served the science community over the last two years.
         Speaker: Brian Paul Bockelman (University of Wisconsin Madison (US))
10:30 → 11:00  Coffee break
11:00 → 12:30  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  11:00  Implementing functionality via the Open Storage Service (OSS) Interface (20m)
         What can your XRootD server do? The Open Storage Service (OSS) provides a plugin interface allowing one to manage how the server interacts with the underlying storage. This allows XRootD to do more than interact with a POSIX filesystem; plugins can work with other storage (such as HTTP, S3, multiple POSIX users, or Globus) or stack on top of other plugins (filtering the visible filesystem, managing I/O load, or providing more monitoring information). This talk will cover the available plugins in the XRootD ecosystem managed by the Pelican and OSG teams and how they can provide value to deployments.
         Speaker: Brian Paul Bockelman (University of Wisconsin Madison (US))
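The "stacking" idea in the OSS abstract above can be illustrated with a small sketch: one layer delegates to the storage layer beneath it while filtering the visible namespace. Note this is a Python analogy only; the real OSS interface is a C++ plugin API, and every class and method name below (MemoryStorage, PrefixFilter, open, listdir) is invented for illustration.

```python
# Conceptual sketch of OSS-style plugin stacking (not the real XrdOss API).
# A filter layer hides part of the namespace and delegates the rest to the
# storage layer beneath it.

class MemoryStorage:
    """Stand-in for a base storage layer (e.g. a POSIX filesystem)."""
    def __init__(self, files):
        self.files = files
    def open(self, path):
        if path not in self.files:
            raise FileNotFoundError(path)
        return self.files[path]
    def listdir(self):
        return sorted(self.files)

class PrefixFilter:
    """Stacked layer: exposes only paths under a given prefix."""
    def __init__(self, inner, prefix):
        self.inner, self.prefix = inner, prefix
    def open(self, path):
        if not path.startswith(self.prefix):
            raise FileNotFoundError(path)
        return self.inner.open(path)
    def listdir(self):
        return [p for p in self.inner.listdir() if p.startswith(self.prefix)]

base = MemoryStorage({"/data/a": b"A", "/data/b": b"B", "/scratch/tmp": b"T"})
oss = PrefixFilter(base, "/data/")   # clients of `oss` never see /scratch
```

Because both layers expose the same interface, filters can be stacked arbitrarily deep, which is what makes combinations like "S3 backend plus namespace filter plus I/O throttling" possible.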
  11:20  OssArc - Rucio aware backup plug-in (20m)
         An Oss overlay plug-in to provide Rucio dataset backups.
         Speaker: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
  11:40  Building plugins for the XRootD client (20m)
         The XRootD client exports a flexible asynchronous plugin interface that allows the client to manage protocols beyond the core xroot protocol. This talk will review the plugins managed by the Pelican project. While the core pelican:// protocol will remain internal to the project, it is layered on top of a basic HTTPS plugin that has no dependencies beyond libcurl. The same libcurl plugin is also the basis for an s3:// protocol backend, powering connectivity between the XRootD client (or the caching file proxy) and S3-compatible endpoints.
         Speaker: Brian Paul Bockelman (University of Wisconsin Madison (US))
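The client-plugin mechanism described above boils down to dispatching each URL to a handler based on its scheme, with the built-in xroot handling as the fallback. A minimal sketch of that dispatch follows; the handler names are invented, and the real plugin manager and its configuration differ.

```python
# Illustrative scheme-based dispatch, the idea behind XRootD client
# protocol plugins. Handler names are invented for illustration.
from urllib.parse import urlparse

class PluginRegistry:
    def __init__(self):
        self.handlers = {}

    def register(self, scheme, handler):
        """Claim a URL scheme for a plugin."""
        self.handlers[scheme] = handler

    def resolve(self, url):
        """Pick the handler for a URL, falling back to built-in xroot."""
        return self.handlers.get(urlparse(url).scheme, "builtin-xroot")

registry = PluginRegistry()
registry.register("https", "curl-based-https-plugin")
registry.register("s3", "curl-based-https-plugin")  # same libcurl core
```

The point the abstract makes is visible here: two schemes can share one underlying implementation, just as the pelican:// and s3:// backends both sit on the same libcurl-based plugin.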
  12:00  Tracking file integrity with an XRootD plugin (15m)
         Speaker: David Smith (CERN)
  12:15  XrdCeph Streamed Checksums (15m)
         An update on streamed checksums for XrdCeph.
         Speaker: Jyothish Thomas (STFC)
12:30 → 14:00  Lunch
14:00 → 15:30  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  14:00  How the ALICE experiment uses Xrootd (20m)
         Authors: Costin Grigoras, Adrian Sevcenco
         In the ALICE experiment, Xrootd is the foundational protocol for all data interactions. It is the sole method for engaging with the 52 storage endpoints, and the Xrootd client library and command-line utilities are integral to a wide range of tasks, including operations carried out by job agents, ROOT-based applications, user shells, and data transfer agents. Xrootd-based server implementations provide access to 97% of the distributed storage volume, which today consists of more than 400PB of disk space and 375PB of tape archives. This presentation will show how the distributed data storage is federated with the help of a central file catalogue and centrally issued access tokens, how the different clients are used by the experiment middleware and processing framework, and how we monitor the storage infrastructure.
         Speaker: Costin Grigoras (CERN)
  14:20  XRootD developments at RAL (20m)
         A round-up of recent XRootD-related work at RAL.
         Speaker: Katy Ellis (Science and Technology Facilities Council STFC (GB))
  14:40  Experiences for Early SKA Data Movement (15m)
         Recent developments at SKA.
         Speaker: James William Walder (Science and Technology Facilities Council STFC (GB))
  14:55  OU XRootD Site Report and Plans (20m)
         Speaker: Horst Severini (University of Oklahoma (US))
  15:15  Refactoring HTTP-TPC (15m)
         This talk will present the refactoring of the HTTP-TPC subsystem in XRootD, aimed at improving efficiency and maintainability. The update addresses the overhead of creating and destroying libcurl contexts for every transfer by introducing a persistent worker-pool model that reuses connections and manages transfer queues. The talk will outline the motivation behind this change, the key architectural decisions, early observations, and how this work lays the foundation for future improvements in error handling and monitoring.
         Speaker: Rahul Chauhan (University of Wisconsin Madison (US))
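The persistent worker-pool pattern mentioned in the HTTP-TPC abstract can be sketched as follows. This is an illustrative Python analogy, not the actual XRootD code: a fixed set of workers pulls transfer requests from a queue, and each worker reuses one long-lived context instead of creating and destroying one per transfer.

```python
# Sketch of a persistent worker pool: N workers, each holding one reusable
# context (standing in for a long-lived libcurl handle), drain a shared
# queue of transfer requests. All names here are illustrative.
import queue
import threading

def run_pool(transfers, num_workers=4):
    jobs: queue.Queue = queue.Queue()
    results, lock = [], threading.Lock()

    def worker():
        context = object()          # created once, reused for every job
        while True:
            job = jobs.get()
            if job is None:         # sentinel: shut this worker down
                break
            with lock:
                results.append((job, id(context)))
            jobs.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for tr in transfers:
        jobs.put(tr)
    jobs.join()                     # wait until every transfer is handled
    for _ in threads:
        jobs.put(None)              # one sentinel per worker
    for t in threads:
        t.join()
    return results

done = run_pool([f"transfer-{i}" for i in range(10)], num_workers=2)
```

With 2 workers, the 10 transfers are served by at most 2 contexts in total, which is exactly the per-transfer setup/teardown cost the refactoring aims to eliminate.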
15:30 → 16:00  Coffee break
16:00 → 17:35  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  16:00  XRootD Admin Dashboard (20m)
         An admin dashboard for automating bulk operations and monitoring over an XRootD server cluster.
         Speaker: Jyothish Thomas (STFC)
  16:20  Explaining Xrootd to Users by AI (20m)
         Xrootd has many concepts and configuration options, and a set of comprehensive reference documents on its website to explain them. Yet Google's AI-based search engine seems to prefer other Xrootd-related wiki pages and how-to documents. We would like to understand why, if we ever want to build an "AskXrootd" chatbot. In this work, we developed our own Retrieval Augmented Generation (RAG) system and fed it those documents. We learned various techniques for converting the documents into formats preferred by Large Language Models (LLMs) and RAG systems, including format conversion using tools or Python libraries and format improvement using LLMs themselves. We also learned the limitations of LLM/RAG systems: they are good at providing answers by following examples, rather than answers based on a comprehensive understanding of the documents. We will share our experience and lessons learned in this exercise. Finally, we make our RAG system available to the public via an MCP (Model Context Protocol) server, and provide a simple example configuration for accessing the RAG/MCP server using Google's free Gemini CLI.
         Speakers: Ian Erbacher, Sarah Yang, Wei Yang (SLAC National Accelerator Laboratory (US))
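For readers unfamiliar with RAG, the retrieval step the abstract refers to can be sketched in a few lines: score stored document chunks against the question and place the best matches into the LLM prompt. The keyword-overlap scoring below is a deliberately simplified stand-in for the embedding-based retrieval a real system would use; the sample chunks quote real XRootD configuration directives purely as example text.

```python
# Minimal sketch of the "R" in RAG: rank document chunks by overlap with
# the question and return the top-k to feed into the LLM prompt. Real
# systems use vector embeddings; word overlap is only for illustration.

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q_terms = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_terms & set(c.lower().split())),
        reverse=True,
    )[:k]

docs = [
    "the cms.space directive sets free space thresholds for node selection",
    "the xrootd.seclib directive loads the security plugin library",
    "the ofs.osslib directive loads an alternate storage system plugin",
]
best = retrieve("which directive loads the security plugin", docs, k=1)
```

The quality problem the talk describes lives mostly upstream of this step: if the reference documents are in a format the model retrieves or reads poorly, even a good ranking function surfaces unhelpful chunks.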
  16:40  Benchmarking the Open Science Data Federation services - S3 plugin and cache selection (20m)
         Research has become dependent on processing power and storage, with data sharing being one crucial aspect. The Open Science Data Federation (OSDF) project aims to create a global scientific data distribution network based on the Pelican Platform. OSDF does not develop new software but relies on the XRootD and Pelican projects. Nevertheless, OSDF must understand the XRootD limits under various configuration options, including transfer rate limits, proper buffer configuration, and the effect of storage type. This work describes the tests and results obtained using National Research Platform (NRP) hosts, showing the S3 plugin and the cache selection process. The tests cover various file sizes and numbers of parallel streams, using clients at various distances from the server host. We also used several standalone clients (wget, curl, pelican) and the native HTCondor file transfer mechanisms.
         Speaker: Fabio Andrijauskas (Univ. of California San Diego (US))
  17:00  XRootD's improved support for SENSE deployments (20m)
         In this presentation we will talk about how we have been working, together with the XRootD team, towards easing the multi-subnet deployments needed by SENSE.
         Speaker: Diego Davila Foyo (Univ. of California San Diego (US))
  17:20  WebDAV Error Improvement Initiative (15m)
         This session will discuss ongoing efforts to improve the clarity and consistency of WebDAV error reporting in XRootD. The initiative focuses on standardising and defining numeric codes within HTTP error messages, as recommended by WLCG, so that clients can better interpret and act on them, especially given the limited range of HTTP status codes. The talk will present examples of current inconsistencies, outline proposed mappings and validation strategies, and invite discussion on adoption, testing, and integration with existing monitoring tools.
         Speaker: Rahul Chauhan (University of Wisconsin Madison (US))
Friday, 17 October
09:00 → 10:30  XRootD Presentations (Lluis Vives I, Hotel Exe Campus)
  09:00  XRootD Collaboration Meeting (30m)
         Speaker: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
  09:30  XRootD Release Management (15m)
         In this contribution, we will discuss changes to the release management and release cycle of XRootD, as well as updates to the development model.
         Speaker: Guilherme Amadio (CERN)
  09:45  XRootD Continuous Integration Infrastructure (45m)
         In this contribution, we will present the current continuous integration infrastructure for XRootD on GitHub, which uses GitHub Actions. We will also show how to perform several of the tasks required during development, such as running the test suite locally and on GitHub Actions, enabling the CI on a fork repository, running CI builds on demand, and some auxiliary tools which we currently use but have not yet integrated into the CI system (e.g. the ABI compliance checker and Coverity Static Analysis).
         Speaker: Guilherme Amadio (CERN)
10:30 → 11:00  Coffee break
11:00 → 12:00  Workshop Wrap-up (Lluis Vives I, Hotel Exe Campus)

12:00 → 13:00  Boxed lunch (take away) (Lluis Vives I, Hotel Exe Campus)