-
Jan Jona Javorsek (Jozef Stefan Institute, Slovenia) 27/03/2023, 14:00
Welcome talk: workshop logistics, Ljubljana survival guide and an introductory word from the head of the institute
-
Mihai Patrascoiu (CERN) 27/03/2023, 14:15
The last year in review, showcasing the evolution of the FTS project, as well as touching on what's new in the FTS world, community engagement, and future direction.
-
Joao Pedro Lopes 27/03/2023, 15:00
This talk will present recent QoS improvements, go into the details of the Tape REST API and how it is implemented in FTS & Gfal2, showcase Gfal2 tape interaction over HTTP and finally, look at what's upcoming in the tape world, such as Archive Metadata and Tape REST API evolution.
-
Shubhangi Misra 27/03/2023, 15:50
This talk will describe the future strategy of tokens in FTS, as well as implementation milestones to fully integrate tokens into the FTS landscape.
-
Steven Murray (CERN) 27/03/2023, 16:20
The FTS3 @ CERN site report, presenting the number of instances, the volume of data served each year, the database setup, and various operational tips and tricks discovered over the years.
-
Hironori Ito (Brookhaven National Laboratory (US)) 27/03/2023, 16:45
An overview of the FTS3 deployment at BNL
-
Rose Cooper 27/03/2023, 17:00
The File Transfer Service (FTS3) is a data movement service developed at CERN, designed to move the majority of the LHC’s data across the WLCG infrastructure. Currently, the Rutherford Appleton Laboratory (RAL) Tier 1 runs two production instances of FTS, serving WLCG users (lcgfts3), and the EGI community (fts3egi). During this talk, we are going to present the status of these production...
-
Lorena Lobato Pardavila (Fermi National Accelerator Lab. (US)) 27/03/2023, 17:15
- Outline
- Introduction
- Configurations
- CMS configuration – physical server
- Public configuration – containers
- Differences
- Advantages & disadvantages of each configuration
- Summary
-
Mario Lassnig (CERN) 28/03/2023, 09:30
The ATLAS view on data management and FTS involvement
-
Katy Ellis (Science and Technology Facilities Council STFC (GB)) 28/03/2023, 09:55
This presentation will describe the usage of FTS by the CMS experiment at the Large Hadron Collider during the start of Run-3. I will describe the particular features recently developed for, and employed by, CMS for our unique use case, as well as current challenges and efforts to optimise performance on the boundary between FTS and Rucio. I will also discuss the future transfer requirements of CMS.
-
Ben Couturier (CERN) 28/03/2023, 10:25
-
Andrea Manzi 28/03/2023, 11:15
The talk will focus on EGI activities related to data transfer and orchestration, in particular the integration with the EGI Check-in AAI in the context of the EGI-ACE project and the new EOSC Data Transfer service in the EOSC Future project. An overview of the new EGI-led project interTwin will also be given, along with the role FTS plays in the infrastructure supporting Scientific Digital Twins.
-
Radu Carpa (CERN) 28/03/2023, 11:40
This talk focuses on the Rucio data management framework and its interaction with FTS.
-
Y. Richard Yang 28/03/2023, 12:05
An overview of the FTS-Alto project in collaboration with Dr. Richard Yang and his research group (Yale University)
-
Joao Pedro Lopes 28/03/2023, 16:00
The word "monitoring" is used everywhere in the FTS world. This talk dives into the different types of monitoring present in the FTS world and explains what each of them means.
-
Mihai Patrascoiu (CERN) 28/03/2023, 16:35
This talk will give an overview of the health and alarm metrics used in the FTS3@CERN deployment. The full lifecycle will be presented, from the software changes and scripts needed, to log extraction via FluentBit, and ultimately to the Grafana display.
-
Edoardo Martelli (CERN), Maria Del Carmen Misa Moreira (CERN) 28/03/2023, 17:00
An overview of the FTS-Noted project, aimed at shaping traffic through dynamic network switches.
-
Jan Jona Javorsek (Jozef Stefan Institute, Slovenia) 29/03/2023, 14:00
Welcome and logistics
-
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) 29/03/2023, 14:10
We will review the new XRootD features added since the last workshop.
-
Michal Kamil Simon (CERN) 29/03/2023, 15:00
-
Guilherme Amadio (CERN) 29/03/2023, 16:00
- Current release procedure/automation
- Discussion on development workflow
- Plans for 5.6 and 6.0 releases later this year
- Python bindings (drop Python2 for good, packaging work)
-
Guilherme Amadio (CERN) 29/03/2023, 16:20
- Recent CI developments (+Alpine, +Alma, -Ubuntu 18)
- Supported platforms and compilers
- Full (or almost full) migration from GitLab CI to GitHub Actions
- Test coverage and static analysis
- Plans for improving the docker-based tests, running them in CI
-
Horst Severini (University of Oklahoma (US)) 29/03/2023, 16:40
-
Soren Lars Gerald Fleischer (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) 29/03/2023, 16:55
-
Hironori Ito (Brookhaven National Laboratory (US)) 29/03/2023, 17:15
-
James William Walder (Science and Technology Facilities Council STFC (GB)) 29/03/2023, 17:30
ECHO is the Ceph-backed erasure-coded object store deployed at the Tier-1 facility RAL-LCG2. Its frontend access to data is provided via XRootD, using the XrdCeph plugin with the libradosstriper library of Ceph, with a current usable capacity in excess of 40PB.
This talk will cover the work and experiences of optimising for, and operating in, Run-3 of the LHC, and the developments towards... -
Fabio Andrijauskas (Univ. of California San Diego (US)) 30/03/2023, 09:30
All research fields require tools to be successful, and a crucial tool today is the computer. The Open Science Grid (OSG) provides ways to access computational power from different sites. The Open Science Data Federation (OSDF) provides data access to the OSG pool using several software stacks. OSDF has received upgrades related to storage space, monitoring checks, monitoring stream collection, and...
-
Brian Bockelman (Morgridge Institute for Research) 30/03/2023, 10:10
The Open Science Data Federation (OSDF) delivers petabytes of data each month to workflows running on the OSPool. To do so, one requires a reliable set of client tools. This presentation will take a look "under the hood" of the current OSDF client tooling, covering:
- Discovery of nearby cache instances.
- Acquisition of credentials for transfer, automated or otherwise.
- Experiences...
-
Matevz Tadel (Univ. of California San Diego (US)) 30/03/2023, 11:00
-
Ilija Vukotic (University of Chicago (US)) 30/03/2023, 11:30
Virtual Placement is a way to approximate a CDN-like network for the ATLAS experiment. XCache is an important component in the Virtual Placement mechanism and is expected to substantially improve performance and reliability, while simultaneously decreasing the bandwidth needed. I will explain how we configure, deploy, and use it, and share our experience from more than a year of running it.
-
Carlos Perez Dengra (PIC-CIEMAT) 30/03/2023, 12:00
Over the last few years, the PIC Tier-1 and CIEMAT Tier-2 sites in Spain have been exploring XCache as a content delivery network service for CMS data in the region. This service aligns with the WLCG data management strategy towards HL-LHC. The caching mechanism allows data to be located closer to compute nodes, which has the potential to improve CPU efficiency for jobs, especially for...
-
Ilija Vukotic (University of Chicago (US)) 30/03/2023, 12:20
-
Robin Hofsaess (KIT - Karlsruhe Institute of Technology (DE)) 30/03/2023, 14:00
In the talk, I want to present our ideas for a data-aware scheduling mechanism for our opportunistic resources attached to GridKa, the T1 center in Germany.
Opportunistic resources are non-permanent computing sites (partly with cache storage) distributed in Germany that provide resources for the HEP community from time to time.
We are planning to implement a hash-based distribution of... -
Brian Paul Bockelman (University of Wisconsin Madison (US)) 30/03/2023, 14:30
-
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) 30/03/2023, 14:50
-
Michal Kamil Simon (CERN) 30/03/2023, 15:10
-
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) 30/03/2023, 16:00
-
Gregor Molan (Comtrade 360's AI Lab) 30/03/2023, 16:30
XRootD provides fast, low-latency, and scalable data access, as well as a hierarchical, filesystem-like namespace organized as a directory tree. As part of CERN EOS, XRootD also provides a fast connection for data transfer between the client and the EOS FST.
This presentation covers Comtrade's work on CERN's project for the productization of EOS, and...
-
Albert Rossi (Fermi National Accelerator Laboratory) 30/03/2023, 17:00
-
Jakob Blomer (CERN) 31/03/2023, 09:30
This talk provides an introduction to RNTuple, ROOT's designated TTree successor. RNTuple is active R&D, available in the ROOT::Experimental namespace. Benchmarks using common analysis tasks and experiment AODs suggest 3x-5x better single-core performance and 10-20% smaller files compared to TTree. The talk will specifically focus on RNTuple's I/O scheduling and optimization opportunities...
-
Edoardo Martelli (CERN), Marian Babik (CERN) 31/03/2023, 10:00
In this talk we’ll give an update on the LHCOPN/LHCONE networks: current activities, challenges, and recent updates. We will also focus on the various ongoing R&D projects that could impact XRootD and FTS. Finally, we will cover our plans for mini-challenges and major milestones in anticipation of DC24.
-
Fabio Andrijauskas (Univ. of California San Diego (US)) 31/03/2023, 10:20
Bioscience, materials science, physics, and other research fields require several tools to achieve new results, discoveries, and innovations. All these research fields require computational power. The Open Science Grid (OSG) provides ways to access computational power from different sites for several research fields. Besides the processing power, it is essential to access the data for all...
-
Guilherme Amadio (CERN) 31/03/2023, 11:00
-
Jan Knedlik (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) 31/03/2023, 11:30
-
Matevz Tadel (Univ. of California San Diego (US)) 31/03/2023, 11:40
30 minutes: 10-minute introduction, 20-minute discussion -
Michal Kamil Simon (CERN) 31/03/2023, 12:00
-
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)) 31/03/2023, 12:15
-
Ilija Vukotic (University of Chicago (US))
-
Mihai Patrascoiu (CERN)
This presentation will show all that's needed to get your FTS instance configured to serve cloud storage transfers. The final part of the presentation will show our plan to simplify this process and make things easier to configure and more intuitive overall.
-
Ilija Vukotic (University of Chicago (US))
XCache has grown into a quite stable, performant, and feature-rich caching server for the HEP community.
I will propose a few developments that could help its adoption, simplify and optimize its operation in large distributed systems. -
Brian Bockelman (Morgridge Institute for Research)
A cornerstone of translating the raw capacity of a distributed system into an effective source of shared computing power is the methodical management of all the resources. While one commonly thinks of managing processing resources - CPUs, GPUs, memory - there's surprisingly little attention paid to the management of storage resources. Questions abound: How much storage should be set aside? ...
-
Matevz Tadel (Univ. of California San Diego (US))
XCache overview: developments in 5.x and plans (30 minutes) -
Matevz Tadel (Univ. of California San Diego (US))
-
Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))