Actions: dedicated meeting with TAPE providers
Rucio 1.25.4 comes with support for both the SRM+GridFTP and the SRM+HTTP protocol
- only one of these protocols can be configured per RSE (see the registration sketch after this list)
- the FTS transfer protocol preference for SRM must be set to https;gsiftp;root
- there is no FTS interface to use a different SRM preference for individual transfers
- SRM+GridFTP used only for storage that doesn't support SRM+HTTP at all
- this is sufficient to cover ATLAS use-cases - tape <-> disk transfers
- motivation - Data Challenges with as little GridFTP as possible (RAL Castor system)
- CMS plans for tape transfers(?)
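A minimal sketch of how a single SRM protocol could be registered on a tape RSE via the Rucio client API; the RSE name, hostname, port, prefix and web_service_path below are placeholders, not our actual configuration:

    # minimal sketch, assuming RSEClient.add_protocol(); all names are placeholders
    from rucio.client import Client

    client = Client()
    client.add_protocol('EXAMPLE_TAPE', {          # hypothetical tape RSE
        'scheme': 'srm',                           # only one SRM protocol per RSE
        'hostname': 'srm.example.org',
        'port': 8443,
        'prefix': '/data/rucio/',
        'impl': 'rucio.rse.protocols.gfal.Default',
        'extended_attributes': {
            'web_service_path': '/srm/managerv2?SFN=',   # site specific
        },
        'domains': {
            'lan': {'read': 1, 'write': 1, 'delete': 1},
            'wan': {'read': 1, 'write': 1, 'delete': 1, 'third_party_copy': 1},
        },
    })

Whether FTS then talks HTTP or GridFTP behind SRM is decided by the FTS protocol preference (https;gsiftp;root), not per transfer.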
New / additional tape bringonline test
- upload a ~10TB dataset with 1GB files to each tape endpoint (see the sketch after this list)
- ask dCache/StoRM administrators to clean these files from disk buffer
- unfortunately storage administrators can't easily remove individual files, and a cleanup of the whole buffer would certainly affect production
- use existing old production data(sets) with a high probability of being on tape(?)
- it would be necessary to use production Rucio instance
- would require config overrides (patches for Rucio) similar to those used e.g. by ATLAS Functional Tests WebDAV(?)
- we would have to be more careful, but at some point we have to move SRM+HTTP to production anyway
- add a Rucio rule to trigger a transfer of a NEARLINE file (sketch after this list)
- don't reuse files, because after test transfer they'll be ONLINE
- once we run out of NEARLINE source files ask again for disk buffer cleanup
- with the current test infrastructure all files will be used in ~30 days
- run fewer tests or ask for more space to reduce cleanup requests(?)
- what would be good test for transfers with SRM+HTTP TAPE destination(?)
- is transfer to normal disk instead of disk buffer sufficient(?)
- how to verify that a file really reached the tape storage(?) (see the locality check sketch below)
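A rough sketch of the per-endpoint test steps with the Rucio Python clients; the scope, dataset and RSE names (tests, EXAMPLE_TAPE, EXAMPLE_DISK) are made up for illustration:

    # rough sketch, assuming the Rucio Python clients; all names are made up
    from rucio.client import Client
    from rucio.client.uploadclient import UploadClient

    # 1) upload test files into a dataset on the tape endpoint
    UploadClient().upload([{
        'path': 'file-0001.data',
        'rse': 'EXAMPLE_TAPE',
        'did_scope': 'tests',
        'dataset_scope': 'tests',
        'dataset_name': 'tape-test-2021-06',
    }])

    # 2) once the disk buffer copy is gone and the replica is NEARLINE,
    #    trigger a tape -> disk transfer by adding a replication rule
    Client().add_replication_rule(
        dids=[{'scope': 'tests', 'name': 'tape-test-2021-06'}],
        copies=1,
        rse_expression='EXAMPLE_DISK',
        lifetime=86400,   # let the rule (and the disk replica) expire after a day
    )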
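One possible way to check whether a replica is actually on tape (locality includes NEARLINE) before and after the test, sketched with the gfal2 Python bindings; the SURL below is a placeholder:

    # sketch with the gfal2 Python bindings; the SURL is a placeholder
    import gfal2

    ctx = gfal2.creat_context()
    surl = 'srm://srm.example.org:8443/srm/managerv2?SFN=/data/rucio/tests/file-0001.data'
    # user.status reports the locality, e.g. ONLINE, NEARLINE or ONLINE_AND_NEARLINE
    print(ctx.getxattr(surl, 'user.status'))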
Keep current Functional Tests TAPE(?)
- not very useful to test TAPE
- just SRM+HTTP transfer from tape disk buffer
- concern that files are not really deleted from tapes
- test files will be physically stored on tapes for years
- currently 200GB/day
- modify to SRM+HTTP tests from disks?
- e.g. "read timeout" issue is visible also for disks