Eighth Rucio Community Workshop
Square Kilometre Array Observatory Global Headquarters
Rucio is a software framework that provides functionality to organize, manage, and access large volumes of scientific data using customisable policies. The data can be spread across globally distributed locations and across heterogeneous data centres, uniting different storage and network technologies as a single federated entity. Rucio offers advanced features such as distributed data recovery and adaptive replication, and is highly scalable, modular, and extensible. Rucio was originally developed to meet the requirements of the high-energy physics experiment ATLAS, and is continuously extended to support the LHC experiments and other diverse scientific communities.
For this 8th edition of our workshop we will meet at the Square Kilometre Array Observatory Global Headquarters, Jodrell Bank, UK (Nov 3 - Nov 7).
We have a mailing list and a Mattermost channel, to which you can subscribe, where we will send more details about the program.
We also support remote participation in the workshop via Zoom. Please register to receive the connection details.

-
-
11:00
→
12:00
Registration 1h
-
12:00
→
13:00
Lunch & Registration 1h
-
13:00
→
14:30
Welcome and Introduction
-
13:00
Welcome 5m
Speaker: Martin Barisits (CERN)
-
13:05
Logistics 10m
-
13:15
Welcome to SKAO 20m
Speaker: Rosie Bolton (SKA Observatory)
-
13:35
State of the Donkey 45m
Speaker: Martin Barisits (CERN)
-
14:30
→
15:15
Keynote
-
15:15
→
15:40
Community talks
-
15:15
UK Compute Roadmap and its implications on Storage and Data Management 20m
The UK government has recently published a variety of computing strategy documents to take advantage of artificial intelligence and the way it is transforming how we do research. New large-scale facilities are being built to house AI supercomputers, while other data centres will provide complementary services. This talk will cover the high-level UK strategy as well as the approach the STFC is taking.
Speaker: Alastair Dewhurst (Science and Technology Facilities Council STFC (GB))
-
15:40
→
16:00
Coffee & Tea 20m
-
16:00
→
17:00
Community talks
-
16:00
Evaluation of Rucio for EISCAT and EISCAT 3D 20m
EISCAT has operated high-power, large-aperture radars for upper-atmosphere and near-Earth space studies since 1981, and has so far collected a dataset of about 100 TB.
At present, EISCAT is in the process of deploying EISCAT_3D, EISCAT's next-generation imaging radar. EISCAT_3D data volumes and processing requirements will be more similar to those of high-energy physics and radio astronomy, and we are evaluating Rucio for data management. EISCAT also leads two scientific use cases in the RI-SCALE project, where Rucio will be the basis of the Data Exploration Platform.
Speaker: Dr Carl-Fredrik Enell (EISCAT AB) -
16:20
CMS Rucio Community Report 20m
This presentation will provide an overview of the Rucio instance deployed by the CMS Experiment. The focus will be on the operational challenges encountered over the past year, including issues affecting performance and reliability. We will also discuss the ongoing efforts and developments aimed at addressing these challenges to ensure robust and efficient data management for CMS.
Speaker: Hasan Ozturk (CERN) -
16:40
Rucio at LSST/Rubin [Remote] 20m
In this presentation, we will explore the Rucio experience with the Rubin Observatory experiment. Our discussion will cover several key areas:
Scalability Tests: Insights into the performance and scalability evaluations of Rucio in the context of Rubin's data needs and what we have learned, especially with many small files.
Role in Rubin's Data Curation: An overview of how Rucio, along with Rubin's Data Butler and Hermes-K (which involves message passing through Kafka), is integrated into Rubin's data curation system.
Monitoring and Support: Current status of Rucio and PostgreSQL monitoring and Rucio deployment and support within the Rubin environment.
Tape RSE Implementation: Dealing with an order of magnitude more files going to tape than in HEP.
Future Needs: An examination of Rubin's evolving requirements for Rucio services and how we plan to address them.
Speaker: Dennis Lee (Fermi National Accelerator Lab. (US))
-
17:00
→
20:00
Welcome reception 3h
-
-
09:00
→
10:40
Community talks
-
09:00
Enabling Small and Medium Experiments with Rucio-as-a-Service 20m
CERN IT is extending Rucio to support Small and Medium Experiments (SMEs) through a centrally managed, ready-to-use Rucio data management service. Leveraging best practices from large-scale deployments, the service offers reproducible, one-click setups with SME-specific enhancements. In this session, we will present early pilot projects, share lessons learned, and demonstrate how SMEs can quickly deploy Rucio, manage their data, and start collaborating seamlessly within a CERN-supported infrastructure.
Speaker: Hugo Gonzalez Labrador (CERN) -
09:20
Updates on the XENONnT data handling with Rucio [Remote] 20m
We present how the XENONnT experiment handles data with Rucio as its data management tool. We focus mainly on how we distribute data and the strategies adopted to fix transfer issues. Finally, we conclude with some remarks on the needs of next-generation experiments.
Speaker: Luca SCOTTO LAVINA -
09:40
Rucio in the DUNE collaboration: tokens and uploads 20m
The DUNE collaboration has been making heavy use of Rucio for several years. The past few months have brought two particular challenges, namely transitioning from x509 certificates to token based authentication using CILogon tokens, and dealing with scalability problems connected with uploading files to Rucio. As DUNE is the first large experiment to attempt to use a token provider other than IAM with Rucio, as well as the only large experiment to make intensive use of the Rucio upload client, these have presented some unique issues. In both cases DUNE has been working closely with the core Rucio development team to identify solutions and make any necessary changes to the code. In the course of this we hope to improve Rucio's support for OIDC token providers, as well as optimising the upload client.
Speaker: James Perry -
10:00
Rucio for KM3NeT [Remote] 20m
The KM3NeT collaboration is building a neutrino telescope in the Mediterranean Sea to study both the intrinsic properties of neutrinos and cosmic high-energy neutrino sources. Once fully constructed, our needs will rise to an eventual data volume of ~500 TB of new data per year and computing needs of ~2000 cores on average. This will require a transition towards distributed computing and data storage.
This talk will cover our plans for the infrastructure and software required for our distributed storage solution, based on Rucio, the current status and testing of our implementation with ~20% of the detector constructed, and what is still left to do.
Speaker: Dr Francisco Vazquez de Sola Fernandez (Nikhef) -
10:20
Modernizing Rucio Metadata for Earth Observation Intelligence 20m
DaFab AI leverages Rucio to bridge the gap between EO mission realities and advanced analytics. This session breaks down Rucio’s metadata evolution, from fixed metadata columns and “key:value” attributes, to a schema-governed catalog.
Speaker: Dimitrios Xenakis (CERN)
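The metadata evolution this abstract names — fixed columns, free-form "key:value" attributes, and a schema-governed catalog — can be illustrated with a toy sketch. Everything below (the schema, the keys, the `validate()` helper) is hypothetical and only conveys the general idea of moving from untyped attributes to validated, typed records; it is not part of the Rucio API.

```python
# Illustrative only: three metadata styles, modelled as plain Python.

# 1) Fixed columns: a closed set of well-known attribute names.
FIXED_COLUMNS = {"events", "project", "datatype"}

# 2) Free-form "key:value" attributes: any key, any string value.
freeform = {"mission": "sentinel-2", "cloud_cover": "12"}

# 3) Schema-governed catalog: each key declares a type and whether it
#    is required, and records are validated before being stored.
SCHEMA = {
    "mission":     {"type": str,   "required": True},
    "cloud_cover": {"type": float, "required": False},
}

def validate(record, schema):
    """Return a typed copy of `record`, or raise ValueError."""
    out = {}
    for key, spec in schema.items():
        if key not in record:
            if spec["required"]:
                raise ValueError(f"missing required key: {key}")
            continue
        out[key] = spec["type"](record[key])  # coerce, e.g. "12" -> 12.0
    unknown = set(record) - set(schema)
    if unknown:
        raise ValueError(f"keys not in schema: {sorted(unknown)}")
    return out

print(validate(freeform, SCHEMA))
```

The schema-governed style trades flexibility for typed, queryable records — the trade-off the talk presumably explores for EO catalogs.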
-
10:40
→
11:00
Coffee & Tea 20m
-
11:00
→
13:00
Technology talks
-
11:00
Rucio Meets Open Data: Native Support and Tagging 20m
We are extending Rucio with native support for open data to better serve interdisciplinary research and the sharing and re-use of results. Current approaches to open data require costly data duplication to comply with FAIR principles. Since the last workshop we have been working to integrate open data support natively into Rucio. In this session, we will present the development progress and showcase how users can start tagging and managing open data seamlessly in Rucio. This work is funded by OSCARS.
Speaker: Luis Antonio Obis Aparicio (CERN) -
11:20
On DIRAC, DiracX, and Rucio [Remote] 20m
This contribution presents the current status of DIRAC and DiracX, with a focus on the following topics:
- how DIRAC interfaces to Rucio
- how DiracX will interface to Rucio
- integration testing
Speaker: Federico Stagni (CERN) -
11:40
The Secret Life of a Slow Pipeline (and How to Fix It!) 20m
Ever committed code and then waited forever for the build to finish? You’re not alone. CI can quickly become a bottleneck if pipelines aren’t optimized. In this session, I’ll share proven techniques to cut build and test times, with real-world examples from my work on Rucio, where long pipelines directly impacted developer productivity. We’ll also look at handy plugins and actions that automate performance improvements, making CI faster, more reliable, and less painful for the team.
Speaker: Mr Karanjot Singh (CERN) -
12:00
Rucio WebUI 20m
This presentation will introduce the Rucio WebUI with a focus on helping communities get up and running quickly. We will also showcase the key features currently available and the upcoming developments, offering insight into how the WebUI will continue to evolve to support the Rucio community.
Speaker: Mayank Sharma (University of Michigan (US)) -
12:20
FTS Status and Plans, road towards FTS-4 [Remote] 20m
The File Transfer Service (FTS) is and has been a key component for large-scale data movement across distributed scientific infrastructures. This presentation provides an overview of the current status of FTS, highlighting recent developments, performance improvements, and operational experiences that support data-intensive collaborations worldwide. It then outlines the strategic roadmap toward FTS-4, focusing on the vision for the evolution of FTS to meet future data transfer challenges in the High-Luminosity LHC era.
Speaker: Mihai Patrascoiu (CERN)
-
13:00
→
14:00
Lunch 1h
-
14:00
→
15:00
Tutorials
-
14:00
The Future of Rucio Deployments with ArgoCD 45m
CERN IT’s Rucio-as-a-Service for Small and Medium Experiments (SMEs) introduces a modern infrastructure based on ArgoCD, Kubernetes, Vault, automatic DNS management, and more, moving beyond the traditional Flux-based deployments seen in the community. This tutorial will demonstrate how new Rucio clusters can be created in minutes and explain why this approach should become the default for all Rucio deployments: it leverages widely supported tools, aligns with industry best practices, ensures reproducibility, and simplifies cluster management at scale.
Speaker: Luis Antonio Obis Aparicio (CERN)
-
15:00
→
15:40
Community talks
-
15:00
CTAO Use Case 20m
This presentation gives an overview of the CTAO use case, covering the intended and potential usage of Rucio in the operational and data management lifecycle.
Speakers: Matthias Fuessling (CTAO), Maximilian Linhoff (TU Dortmund | CTAO) -
15:20
Rucio at the EST Data Centre: Current Challenges and Considerations 20m
This presentation will analyse the current state of the EST Data Centre and evaluate how the proposed tool aligns with our specific needs. One of the main challenges the EST Data Centre faces is the massive volume of data generated by the telescope, which must be processed, distributed, and accessed efficiently.
Rucio has been identified as a critical component in addressing this challenge, playing a central role in the management and orchestration of data. However, pipeline processing times continue to be a significant bottleneck. Currently, these tasks can take several months to complete, and reducing this time is a key objective moving forward.
Additionally, the presentation will highlight areas that require further investigation and refinement, including the implementation and management of data embargoes.
Speaker: Angela Hernandez (IAC)
-
15:40
→
16:00
Coffee & Tea 20m
-
16:00
→
16:40
Technology talks
-
16:00
WLCG IAM Deployment at CERN [Remote] 20m
This talk covers the deployment of WLCG IAM at CERN, focusing on aspects such as architecture, high availability, monitoring, and user synchronization. It will also include the developers' view of important upcoming changes.
Speaker: Berk Balci (CERN) -
16:20
What’s Next for Tokens in Rucio 20m
The transition from X.509 certificates to OAuth 2.0 tokens is an ongoing effort attracting universal interest. This talk aims to offer a status update since the previous Rucio workshop and outline the expected short- and medium-term developments.
Speaker: Dimitrios Christidis (CERN)
-
16:40
→
17:10
Discussion: Token Discussion
-
-
09:00
→
10:40
Astronomy
-
09:00
CTAO - Data Hierarchies 25m
-
09:25
Deployment strategies with SRCNet for SKA: Infrastructure, Automation, and Integration 25m
This talk explores the deployment of Rucio across SRCNet for SKA data, highlighting infrastructure choices, deployment, integration environments and developments with cloud providers.
Speakers: James Collinson (SKAO), Rob Barnsley -
09:50
Panel Discussion 50m
-
10:40
→
11:00
Coffee & Tea 20m
-
11:00
→
12:00
Astronomy
-
11:00
Open Discussion 1h
-
12:00
→
13:00
Community talks
-
12:00
Rucio in Belle II [Remote] 20m
The Belle II experiment at the SuperKEKB collider in Japan is a next-generation B-factory with a large international collaboration and demanding computing needs. Belle II has been using Rucio as its data management system since early 2021, supporting global distribution and access to physics data. The experiment is now transitioning to Rucio as its primary metadata service, tightly integrating it with DIRAC. Some recent developments include the evaluation of TimeScaleDB for efficient tracking of dataset popularity to guide data placement, the integration of Large Language Models to provide a natural interface for querying and managing data, and the deployment of OIDC tokens as a replacement for X.509 certificates. We will present operational experience, performance results, and lessons learned that may inform similar efforts in other experiments.
Speaker: Cedric Serfon (Brookhaven National Laboratory (US)) -
12:20
The interTwin Digital Twin Engine Data Lake 20m
The interTwin project, funded by Horizon Europe, developed a Digital Twin Engine (DTE) to support interdisciplinary Digital Twins (DTs). It brought together infrastructure providers, technology experts, and scientists from fields such as High Energy Physics, Astrophysics, Radio Astronomy, Climate Research, and Environmental Monitoring.
Our presentation focuses on the design and implementation of the interTwin Data Lake, highlighting key extensions and integrations that support diverse resource providers — from HTC and HPC to cloud environments — while meeting the needs of various user communities. The Data Lake is built around the Rucio data management software deployed at DESY, complemented by FTS as a file transfer service and multiple integrated storage technologies across sites such as EuroHPC VEGA, EODC, DESY, INFN, PSNC, CESGA, and KBFI, with ongoing integrations that include the University of Vilnius and DZA. It leverages EGI Check-in as the identity provider.
To facilitate site integration, we developed Teapot (https://intertwin-eu.github.io/teapot/), a multi-tenant WebDAV application built on StoRM-WebDAV. It includes a manager that handles user authentication, external-to-local identity mapping, and request forwarding to a dedicated StoRM-WebDAV server. It also integrates with ALISE (https://github.com/m-team-kit/alise), which implements site-local account linking, enabling users to associate their local accounts with multiple external identities.
Additionally, we will present integration details from selected DTs and upper architecture layers. For example, we support triggering workflows when new data become available in the Data Lake, integrate community-specific data catalogs, and more.
Speaker: Dijana Vrbanec -
12:40
Rucio data management system for the SPD experiment [Remote] 20m
The Spin Physics Detector (SPD) is a new experiment under construction at the second interaction point of the NICA collider, JINR. Its primary goal is to test fundamental aspects of Quantum Chromodynamics (QCD) by studying the spin structure of the nucleon. This will be achieved through collisions of longitudinally and transversely polarized protons and deuterons, reaching a center-of-mass energy of 27 GeV and a luminosity of up to 10³² cm⁻²s⁻¹. At peak performance, the detector is expected to generate data at a rate of 0.2 Tbit/s.
To manage this substantial data volume, the SPD will rely on a distributed computing environment. The Rucio Data Management system was selected to orchestrate this complex task effectively.
This report details the experience of deploying Rucio for the SPD experiment. It will cover integration with ancillary services, the development of custom utilities, the automation of workflows, and the implementation of comprehensive monitoring systems.
Speaker: Alexey Konak
-
13:00
→
14:00
Lunch 1h
-
14:00
→
15:40
Jodrell Bank Observatory tour
-
14:00
→
15:40
Meet the developers / Q&A
-
15:40
→
16:00
Coffee & Tea 20m
-
16:00
→
17:00
Technology talks
-
16:00
Scitags: Status and Near-term Plans [Remote] 20m
This presentation will detail the current state of the Scitags initiative, including the evolving framework and its implementations, alongside the tried-and-tested technologies they are built on, including eBPF and IPv6 Extension Headers. The roadmap towards production deployments in both R&E networks and Storage Element implementations such as XRootD will also be discussed. By providing greater network visibility, Scitags will empower network operators to optimise performance, troubleshoot issues more effectively, and make more performant use of networks in support of the needs of data-intensive scientific collaborations.
Speaker: Tim Chown -
16:20
The EuroHPC initiative [Hybrid] 20m
This contribution provides updates and news about the EuroHPC initiative.
Speakers: Dr Maria Girone (CERN), Giovanni Guerrieri (CERN) -
16:40
Rucio/SENSE Road Towards Production [Remote] 20m
In this talk we will give an update on the Rucio/SENSE Integration Project. We will cover the production-ready site deployments, new developments to simplify enabling SENSE support from the site's point of view, and the next steps towards having the first end-to-end Rucio/SENSE workflow in production.
Speaker: Diego Davila Foyo (Univ. of California San Diego (US))
-
17:30
→
21:00
Workshop dinner 3h 30m
-
-
09:00
→
09:45
Keynote
-
09:00
The Architecture of SRCNet and Its Integration with Rucio 45m
We explore the architecture of SRCNet, highlighting its design principles for scalable and interoperable access to scientific data. We examine how Rucio integrates within this framework to provide policy-driven data management, replication, and movement. Together, their architectural synergy enables efficient, reliable, and sustainable large-scale scientific workflows.
Speaker: Jesus Salgado
-
09:45
→
10:40
Technology talks
-
09:45
The Rucio JupyterLab extension 20m
This contribution focuses on the recent updates to the Jupyter extension, its use throughout the community, and future plans.
Speaker: Giovanni Guerrieri (CERN) -
10:05
Analysis Facilities data access 20m
A summary of the data access and sharing problem raised by the HEP user community.
Speaker: Alessandra Forti (The University of Manchester (GB)) -
10:25
Rucio Protocol Evolution 15m
Rucio supports different RSE protocol implementations (essentially acting as scheme handlers). With GridFTP and SRM being phased out, and GFAL support expected to cease in the future, now is the time to gather the requirements of our communities and plan ahead.
Speaker: Dimitrios Christidis (CERN)
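As a toy illustration of what "acting as scheme handlers" means in the abstract above, the sketch below dispatches to a protocol implementation based on the URL scheme of a replica. The registry and handler classes are hypothetical, not Rucio's actual protocol classes:

```python
# Illustrative only: a minimal scheme-handler registry. A URL's scheme
# (root://, davs://, ...) selects which protocol implementation is used.
from urllib.parse import urlparse

HANDLERS = {}

def register(scheme):
    """Class decorator that maps a URL scheme to a handler class."""
    def deco(cls):
        HANDLERS[scheme] = cls
        return cls
    return deco

@register("root")
class XRootDHandler:
    def describe(self, url):
        return f"XRootD transfer of {urlparse(url).path}"

@register("davs")
class WebDAVHandler:
    def describe(self, url):
        return f"WebDAV transfer of {urlparse(url).path}"

def handler_for(url):
    """Look up and instantiate the handler for a URL's scheme."""
    return HANDLERS[urlparse(url).scheme]()

url = "root://site.example//store/file.root"
print(handler_for(url).describe(url))
```

Phasing out a protocol then amounts to retiring one entry in such a mapping, which is why gathering community requirements before GridFTP, SRM, and GFAL disappear matters.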
-
10:40
→
11:00
Coffee & Tea 20m
-
11:00
→
12:00
Community talks
-
11:00
The Italian scientific datalake: case studies and insights 20m
The INFN (the Italian National Institute for Nuclear Physics) has operated, for more than two decades, one of Italy's largest distributed computing infrastructures, providing computing and storage resources for more than 100 scientific collaborations. A sizable fraction of the computing capacity integrates with the WLCG (Worldwide LHC Computing Grid) infrastructure, while the rest is currently accessible through multiple interfaces such as interactive, batch/grid, and cloud.
Since 2022, the Italian National Recovery and Resilience Plan has funded a comprehensive infrastructure modernization through the “Italian National Center on HPC, Big Data and Quantum computing” and “Terabit” initiatives. The aim is to address the upcoming computational challenges, among them the integration of the Grid, cloud infrastructures, and, notably, the CINECA HPC centre with its pre-exascale system “Leonardo”.
As a follow-up to the previous workshop, we will showcase how we are leveraging Rucio to create a datalake stack that can serve multiple communities with their own needs, providing seamless access to data over several distributed centres. We will present an update on how the deployment and operation phase is evolving (e.g. dynamic storage areas for HPC data staging), the success stories, and the shortcomings that emerged during these activities. We discuss how we tailored authorization models around our use cases, an evaluation of monitoring and UI solutions, and the results of first experimentation with external metadata catalogs. Furthermore, we are planning the evaluation of a “Rucio-as-a-Service” platform, aiming to streamline adoption within smaller communities.
Finally, we’ll present the main results of a preliminary security assessment based on the OWASP framework, with the aim of starting a discussion about the feasibility of exploiting Rucio/FTS for sensitive data management (e.g. for life science research).
The overall purpose is to provide insights valuable for similar large-scale and heterogeneous scientific data management initiatives, through the sharing of our experience.
Speakers: Ahmad Alkhansa (INFN - CNAF), Diego Ciangottini (INFN, Perugia (IT)) -
11:20
The Rucio Revolutions: possibilities for Rucio within the European Open Science Cloud 20m
With the ESCAPE project, Rucio demonstrated its flexibility in delivering efficient, production-ready solutions for communities beyond high-energy physics.
Within EOSC, the emerging Federation will consist of multiple interconnected Nodes designed to share and manage data, knowledge, and resources across thematic and geographical research domains.
Building on the achievements of ESCAPE, Rucio is well positioned to lead the development of European-scale data lakes, providing the scalability and interoperability needed to support scientific research. This contribution is not a program of work; it aims to share the experience and the challenges that CERN is gathering while participating in the build-up phase of the EOSC Federation, as well as engaging with domains beyond HEP.
Speaker: Giovanni Guerrieri (CERN) -
11:40
MADDEN Project: Multi-RI Rucio with POSIX-like Interfaces for Collaborative Data Access [Remote] 20m
Large-scale experiments, such as those in gravitational-wave (GW) science, generate massive datasets stored in isolated Data Lakes, which hinders collaboration and efficient data analysis. The MADDEN (Multi-RI [Research Infrastructure] Access and Discovery of Data for Experiment Networking) project aims to overcome this by extending Rucio to enable read-only access to data for users of other experiments, without the overhead of user management.
As part of this effort, we have implemented prototype Data Lakes for the Einstein Telescope and the Cosmic Explorer. A prototype Multi-RI Rucio client has been developed, enabling access to the Mock Cosmic Explorer server without the need for dedicated user accounts. Furthermore, initial work has begun on a POSIX-like view application built on Rucio. In particular, RucioFS—an existing POSIX-like prototype—was tested to evaluate its capabilities under conditions involving a large number of files and its performance when scaling to increasing numbers of concurrent users.
These developments represent key steps toward the realization of interoperable, user-friendly data management solutions that will support international collaborations in GW science and beyond.
Speaker: Nikita Avdeev (INFN Torino)
-
12:00
→
13:00
Tutorials
-
12:00
Rucio policy packages tutorial 45m
An overview on how to create, use and maintain a policy package in Rucio
Speakers: James Perry, Riccardo Di Maio (CERN)
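For a flavour of what a policy package can provide, the sketch below models a deterministic LFN-to-path algorithm in the spirit of Rucio's default md5-hash layout. The function name and the exact layout details are illustrative assumptions, not the upstream implementation:

```python
# A minimal sketch of the kind of deterministic name-to-path algorithm
# a policy package can supply (modelled on Rucio's hash-based layout;
# illustrative, not the exact upstream code).
import hashlib

def lfn2pfn_hash(scope: str, name: str) -> str:
    """Map a scope/name pair to a deterministic storage path."""
    digest = hashlib.md5(f"{scope}:{name}".encode()).hexdigest()
    # Turn dotted scopes like "user.jdoe" into nested directories.
    scope_path = scope.replace(".", "/")
    # Two levels of 2-hex-digit directories spread files evenly.
    return f"{scope_path}/{digest[0:2]}/{digest[2:4]}/{name}"

print(lfn2pfn_hash("user.jdoe", "file.root"))
```

Because the path is a pure function of scope and name, any client can recompute a replica's location without a catalog lookup — the property experiment-specific policy algorithms typically need to preserve.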
-
13:00
→
14:00
Lunch 1h
-
14:00
→
15:00
Community talks
-
14:00
Rucio Status at IHEP 20m
This report will introduce the application of Rucio at IHEP over the last year, including running status, upgrades, and plugin development for experiments at IHEP.
Speaker: Xuantong Zhang (Institute of High Energy Physics, Chinese Academy of Sciences (CN)) -
14:20
ePIC Rucio Community Report [Remote] 20m
ePIC is an international collaboration for the primary experiment at the upcoming Electron-Ion Collider. The collaboration is currently co-designing the detector and computing for the experiment and preparing a Preliminary Design Report.
We present the use of Rucio within the ePIC experiment, focusing on Monte Carlo production, and plans for distributed computing. In production, Rucio manages large-scale Monte Carlo datasets, ensuring efficient distribution, replication, and access across multiple sites. We also highlight current challenges and future requirements for Rucio in ePIC, outlining areas for development and optimization to support the experiment’s evolving data management needs.
Speaker: Anil Panta -
14:40
ATLAS 20m
ATLAS
Speaker: Mario Lassnig (CERN)
-
15:00
→
15:40
Meet the developers / Q&A
-
15:40
→
16:00
Coffee & Tea 20m
-
16:00
→
16:45
Discussion: Development roadmap
-
-
09:00
→
10:00
Community talks
-
09:00
Rucio at PIC 20m
This talk will provide an overview of the status of Rucio at the Port d'Informació Científica (PIC). We'll detail our current and future plans for our different Rucio instances, which are used to manage data for experiments like MAGIC. The presentation will also highlight our latest developments within the Rucio ecosystem.
Speaker: Francesc Torradeflot -
09:20
Rucio for Einstein Telescope 20m
The Einstein Telescope is the third-generation ground-based observatory for gravitational waves, in its preparation phase in Europe. It is expected to observe a sky volume one thousand times larger than the (current) second-generation observatories, which will be reflected in a higher observation rate. The physics information contained in the strain time series will increase, while on the machine side the size of the raw data from the interferometers will scale with the number and complexity of the detectors. To meet ET-specific computing needs, an adequate choice of the technologies, tools, and framework to handle the collected data, share them among interested users, and enable their offline analysis is mandatory. The solution currently under test for data management and distribution is based on Rucio and on the concept of a Data Lake. This talk will provide an overview of the requirements and desiderata of the GW community: data types, metadata of interest, and modalities for data access and discovery. Moreover, the test setup deployed so far will be described.
Speaker: Lia Lavezzi (INFN Torino (IT)) -
09:40
RI-SCALE: Building Data Management Infrastructure for European Research Infrastructures 20m
RI-SCALE develops secure, large-scale data management and AI-driven analysis platforms for European Research Infrastructures. The project addresses the challenge of unlocked scientific value in massive, underutilized datasets by providing Data Exploitation Platforms with integrated AI/ML capabilities. We are deploying a Rucio test instance to serve as our core data orchestration layer, enabling secure distributed data access across partner facilities. Current focus includes Identity Provider integration (e.g. Keycloak) and developing comprehensive tests to ensure seamless authentication workflows with various IdP solutions.
Speaker: Marvin Gajek (CERN)
-
10:00
→
10:40
Tutorials
-
10:00
Rucio WebUI Tutorial 30m
This tutorial will guide operators through deploying the Rucio WebUI in a Kubernetes cluster and understanding its requirements. It will also show developers how to set up the development environment and contribute to the WebUI’s pages, client, and API layers.
Speaker: Mayank Sharma (University of Michigan (US))
-
10:40
→
11:00
Coffee & Tea 20m
-
11:00
→
11:40
Technology talks
-
11:00
SRCNet v0.1: Learning from Test Campaigns for SKA data management 20m
This talk presents findings from recent test campaigns within SRCNet v0.1, focusing on how Rucio was exercised across realistic science workflows. These results highlight emerging challenges and opportunities, prompting key questions around future policy decisions—such as data lifecycle rules, access control, and science artefact mapping—that will shape the evolution of data management practices.
Speaker: James William Walder (Science and Technology Facilities Council STFC (GB)) -
11:20
CTA: improving tape collocation with Archive Metadata [Remote] 20m
Archive metadata will be used during Run-4 to improve data collocation on tape by grouping logically related files sequentially on tape media.
Speaker: Julien Leduc (CERN)
-
11:40
→
12:10
Welcome and Introduction: Closing
-
12:15
→
13:15
Lunch 1h