Alastair Dewhurst (Science and Technology Facilities Council STFC (GB)) – 09/09/2024, 13:45
Start of the FTS & XRootD 2024 workshop
Mihai Patrascoiu (CERN) – 09/09/2024, 14:10
Updates on recent developments, releases and all that's new in the FTS world of 2024
Joao Pedro Lopes – 09/09/2024, 14:40
A description of the EOSC project and where FTS fits in the picture.
Christophe Haen (CERN) – 09/09/2024, 15:05
To be added
Alessandra Forti (University of Manchester (GB)) – 09/09/2024, 16:00
ATLAS & FTS: Reflections and ideas
Katy Ellis (Science and Technology Facilities Council STFC (GB)) – 09/09/2024, 16:25
A description of CMS Data Management with particular emphasis on FTS. This will include the latest updates and changes since the last workshop.
Steven Murray (CERN) – 10/09/2024, 09:00
General overview of the FTS deployment at CERN
Rose Cooper – 10/09/2024, 09:30
General overview of the current status of the FTS service run at RAL and plans for it going forward.
Hironori Ito (Brookhaven National Laboratory (US)) – 10/09/2024, 09:50
Report on the status of FTS at BNL
James William Walder (Science and Technology Facilities Council STFC (GB)) – 10/09/2024, 10:10
The Square Kilometre Array (SKA) Observatory will be supported by a global network of SKA Regional Centres (SRCNet) distributed across its member states. SRCNet v0.1 – to be deployed in 2025 – represents the prototype compute, storage and service infrastructure needed to prepare for full operations.
For SRCNet v0.1, Rucio, FTS and Storage endpoint technologies from the HEP Community have...
Steven Murray (CERN) – 10/09/2024, 11:00
An overview of what's being prepared for the new generation of FTS.
Mihai Patrascoiu (CERN) – 11/09/2024, 09:00
Overview of FTS in the token ecosystem, reflections on DC'24 and decisions moving forward
Dimitrios Christidis (CERN) – 11/09/2024, 09:40
The Rucio communities depend on FTS to orchestrate site-to-site transfers. Over the past year, the two development teams have worked closely to drive the transition from X.509 certificates to OAuth 2.0 tokens. This talk will focus on that effort. It will cover the original design, the preparation leading up to the Data Challenge 2024, the Data Challenge itself and the lessons learned from...
Rahul Chauhan (CERN) – 11/09/2024, 10:45
Since Run 1, CMS has relied on certificates for user identification and experiment/group membership through extensions. However, as support for both certificates and extensions declines, CMS is transitioning to token-based authentication, aligned with the WLCG profile, for the upcoming High-Luminosity LHC run. With certificates, sites were responsible for mapping roles to capabilities. Tokens...
Alastair Dewhurst (Science and Technology Facilities Council STFC (GB)) – 11/09/2024, 12:15
Details about Wednesday's reception
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) – 11/09/2024, 14:00
Features and changes since the last workshop (5.5.4 to 5.7.1).
Guilherme Amadio (CERN) – 11/09/2024, 14:30
In this contribution, we discuss infrastructure updates to XRootD and other development topics.
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) – 11/09/2024, 14:50
A roundup of planned features and improvements.
Guilherme Amadio (CERN) – 11/09/2024, 15:10
- XRootD Development Model
- Documentation and Submitting Patches
- GitHub Actions Continuous Integration
- Building and Running Tests Locally
Brian Paul Bockelman (University of Wisconsin Madison (US)) – 12/09/2024, 09:00
Pelican and the OSDF Overview
Matevz Tadel (Univ. of California San Diego (US)) – 12/09/2024, 09:30
New resource monitoring infrastructure.
Purge plugin support.
Planned development:
- extensions of resource monitoring, planned and possible
- improvement of prefetching
Brian Paul Bockelman (University of Wisconsin Madison (US)) – 12/09/2024, 09:55
The Pelican XrdCl plugin
Fabio Andrijauskas (Univ. of California San Diego (US)) – 12/09/2024, 11:00
We created a set of tests to probe every hardware and software limit related to XRootD. This talk presents the results of these tests, along with some OSDF statistics.
Brian Paul Bockelman (University of Wisconsin Madison (US)) – 12/09/2024, 11:20
The Pelican Globus/HTTP/S3 OSS backend
Robin Hofsaess (KIT - Karlsruhe Institute of Technology (DE)) – 12/09/2024, 11:50
With this contribution, I want to present our first production deployment of XCache for workflow and efficiency optimizations of CMS jobs at our local HPC cluster at KIT, HoreKa. The project is part of the preparations for the future German HEP computing strategy, focusing on HPC contributions.
Our fully containerized setup is deployed on a login node of the cluster and uses a shared...
Horst Severini (University of Oklahoma (US)) – 12/09/2024, 12:20
Alastair Dewhurst (Science and Technology Facilities Council STFC (GB)) – 12/09/2024, 14:00
UK Storage (XrdCeph, CephFS+XRootD, XCache, and VP)
Borja Garrido Bear (CERN) – 12/09/2024, 14:30
A presentation on CERN Central Monitoring, showcasing how data gets ingested, processed, enriched, aggregated and stored in OpenSearch. From OpenSearch storage, tools such as Grafana can leverage the data and create dashboards and plots.
Katy Ellis (Science and Technology Facilities Council STFC (GB)) – 12/09/2024, 15:00
The old XRootD monitoring ('GLED') has been turned off and will be replaced with the Shoveler. This presentation looks at the testing and validation of this software, as well as the status of other XRootD monitoring.
Steven Simpson (Lancaster University) – 12/09/2024, 16:00
We show how we've combined three means of monitoring our gateways, and suggest some enhancements.
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) – 12/09/2024, 16:20
The types of information available via XRootD Monitoring
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) – 12/09/2024, 16:30
Discussion
Diego Davila (UCSD) – 13/09/2024, 09:00
Outline:
- The need to expose multiple IPs
- Network namespaces for isolation
- Manual approach
- k8s/multus approach
Duration: 20 min (virtual presentation)
Alexander Rogovskiy (Rutherford Appleton Laboratory) – 13/09/2024, 09:20
For a long time, the Ceph-based disk storage at the RAL Tier-1 was not able to execute vector read requests effectively, causing problems for some VOs. The talk describes multiple changes made to the XrootD-Ceph plugin and its configuration to solve the problem.
Dr Robert Andrew Currie (The University of Edinburgh (GB)) – 13/09/2024, 09:50
XRootD is a robust, scalable service that supports globally distributed data management for diverse scientific communities. Within GridPP in the UK, XRootD is used by the Astronomy, High-Energy Physics (HEP) and other communities to access >100PB of storage. The optimal configuration for XRootD varies significantly across different sites due to unique technological frameworks and site-specific...
Mariam Demir – 13/09/2024, 10:20
As Tier 1 storage continues to expand, an increasing number of sites are contributing to the Worldwide LHC Grid, making efficient data transfer a critical component for big data analytics. XRootD is pivotal for scientific data management, facilitating seamless data movement and access across the 3 tiers. However, with the growing complexity and scale of grid infrastructures, it is essential to...
Jyothish Thomas (STFC) – 13/09/2024, 11:00
To address the need for high transfer throughput seen for large datacentres using XRootD for projects such as the LHC experiments, it is important to make optimal and sustainable use of our available capacity. Load balancing algorithms play a crucial role in distributing incoming network traffic across multiple servers, ensuring optimal resource utilization, preventing server overload, and...
Matevz Tadel (Univ. of California San Diego (US)) – 13/09/2024, 11:30
Present motivation & possibilities.
Discussion on feasibility and required changes for prototype implementation.
Emmanuel Bejide – 13/09/2024, 12:00
StorageD is the data aggregator component within archiving systems that supports the work of the Diamond Light Source (DLS) and the Centre for Environmental Data Analysis (CEDA) at the Rutherford Appleton Laboratory (RAL). StorageD provides file ingest and recall services to scientists and engineers internationally through DLS and CEDA. StorageD currently supports ingest of over 100 TB daily....
Guilherme Amadio (CERN) – 13/09/2024, 13:30
How to maneuver around the XRootD GitHub repository and find helpful treasures at every click.
Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
This talk will present an overview of the different components of XRootD and how FTS/XRootD fits into the data management and transfer parts of our Computing Model.