CTA has been in operation at DESY for nearly two years, during which time additional experiments have been integrated and over 80 PB of data has been written to tape. This presentation will provide an overview of recent developments, along with insights and experiences from running dCache+CTA so far.
Fermilab will move to CTA this spring with dCache as the frontend file system. The modifications made to CTA to enable reading Enstore (Fermilab's legacy tape management software) files will be discussed, as will our solution for reading the existing Enstore Small File Aggregation (SFA) files.
Operational issues arising during our push to production will be highlighted. Details on our...
We will share our experiences and challenges with using CTA over the past year. We optimized the CTA configuration and upgraded both EOS and CTA. Additionally, we expanded the scale of two experimental applications, the LHCb Tier-1 and HEPS, and enhanced the monitoring of the CTA system.
The CERN Tape Archive (CTA) was designed to meet the demands of data archival from the LHC experiments, in terms of both data volume and throughput. In order to ingest data at the rates demanded by the LHC data acquisition (DAQ) systems, the system is built on the scalable architecture principles of EOS and CTA. To optimise the performance of both disk and tape hardware and to achieve the desired...
Operating the CERN Tape Archive all year round does not come without surprises and challenges: massive recall campaigns, peak system throughput for archival during the data-taking period, and (not so) transparent upgrades to critical services we depend on push the system to its limits, popping some nuts and bolts from time to time.
In this presentation, we will share insights gained from...
Until now, repack and archival of new files were carried out on the same tapepool. This could lead to users' new archive files being mixed on the same tape as old repacked files, reducing performance during retrieval. With the latest version of CTA, we can create dedicated REPACK archive routes in order to repack to separate tapepools. We have modified ATRESYS to accommodate this new...
CTA was designed with two goals in mind: throughput to and from the tape system and minimising the stress on the tape infrastructure (minimising the number of tape mounts). These two constraints become particularly challenging in retrieval dataflows when elements external to the system start to misbehave.
In this presentation, we will explore the internal logic behind CTA’s retrieval...
Antares is the tape archive service at RAL that manages both Tier-1 and local Facilities data. In this talk, we present the main operational changes and developments in the service since last year's CTA workshop. These include, among others, the migration of the service off SL7, the separation of the CTA Frontend, and the deployment of the new EOS nodes connected to the LHC-OPN network in our Tier-1...
This year, we have upgraded CTA to Alma 9 and worked on automating the platform installation using Puppet. Additionally, we have tested the CTA Operations modules along with other features, such as policy mount rules. In February-March, we plan to conduct performance tests by allocating more resources to our test environment.
Unfortunately, we are still facing issues with humidity in the...
Tapeguy is TRIUMF's home-built tape system for the ATLAS Tier-1 data centre. It was designed as a stable, tiered HSM system that can reliably store and retrieve LHC-produced data. We also remain open to other solutions and evaluated CERN's CTA at our site in 2024. This talk will present the current Tapeguy status, recent updates, and the evaluation done at the site.
This talk is a follow-up to the 2024 BoF session on Offsite Tape Backup between sites. We will present the proof-of-concept architecture that we plan to develop in 2025. We propose to test it with one collaborating Tier-1 site (yet to be identified).