Disk space allocated on EOS for NP04 and NP02: 1 PB
e-group-based ACLs to control access rights for different activities and users, e.g. a DAQ endpoint with a few "writers" (example sketched below)
Access granted on CASTOR for tape backup purposes
"Lazy" aggregation, cold-storage behaviour
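As a rough illustration of the e-group ACL scheme above, the sketch below sets an EOS sys.acl extended attribute so that a DAQ e-group can write and an analysis e-group can read. The path and e-group names are placeholders, not the actual NP04/NP02 configuration.

    import subprocess

    # Placeholders: the real NP04/NP02 namespace and e-group names differ.
    MGM = "root://eospublic.cern.ch"
    PATH = "/eos/experiment/neutplatform/protodune/np04/raw"

    # sys.acl grants: the DAQ e-group gets write, the analysis e-group read-only.
    acl = "egroup:np04-daq-writers:rwx,egroup:np04-analysis:rx"

    # Equivalent to: eos <mgm-url> attr set sys.acl=<acl> <path>
    subprocess.run(["eos", MGM, "attr", "set", f"sys.acl={acl}", PATH], check=True)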
Data access:
Local users (batch and interactive): XRootD and FUSE data access available and exercised over the last months (see the XRootD sketch after this list)
"Grid-like" workflows enabled: ACLs based on VOMS certificates fetched from the DUNE VO, plus some local overrides in place to steer NP02 and NP04 distributed data together (see the gfal2 sketch below)
GridFTP doors dimensioned to meet future ProtoDUNE needs, scaled out from 2×1 GB/s to 8×1 GB/s (reminder: this bandwidth is shared among all eospublic experiments)
EOS and CASTOR endpoints linked: FTS/xrdcp used for data aggregation and data recalls (see the FTS sketch below)
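To make the local XRootD access path concrete, here is a minimal read sketch using the XRootD Python bindings; the file path is an assumption for illustration only.

    from XRootD import client
    from XRootD.client.flags import OpenFlags

    # Hypothetical file; the real NP04 raw-data layout may differ.
    url = ("root://eospublic.cern.ch//eos/experiment/neutplatform/"
           "protodune/np04/raw/run001.dat")

    with client.File() as f:
        status, _ = f.open(url, OpenFlags.READ)
        assert status.ok, status.message
        status, data = f.read(offset=0, size=1024)  # read the first 1 kB
        assert status.ok, status.message

Via FUSE the same file would appear as an ordinary POSIX path under /eos/, usable directly from batch and interactive nodes.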
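For the "grid-like" path, a hedged sketch with the gfal2 Python bindings: it assumes a valid DUNE VOMS proxy has already been created (e.g. with voms-proxy-init --voms dune), and the URL is again illustrative.

    import gfal2

    # Assumes a valid DUNE VOMS proxy is in place; URL is illustrative.
    ctx = gfal2.creat_context()
    info = ctx.stat("root://eospublic.cern.ch//eos/experiment/neutplatform/"
                    "protodune/np04/raw/run001.dat")
    print(f"size: {info.st_size} bytes")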
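And for the EOS/CASTOR link, a sketch of a transfer submission with the fts3 REST Python client. The endpoint is assumed to be the public CERN FTS instance, and the source/destination paths are placeholders, not the real layout.

    import fts3.rest.client.easy as fts3

    # Endpoint and paths are assumptions for illustration.
    context = fts3.Context("https://fts3-public.cern.ch:8446")
    transfer = fts3.new_transfer(
        "root://eospublic.cern.ch//eos/experiment/neutplatform/protodune/np04/raw/run001.dat",
        "root://castorpublic.cern.ch//castor/cern.ch/neutplatform/protodune/np04/raw/run001.dat",
    )
    job = fts3.new_job([transfer], verify_checksum=True)
    print("submitted:", fts3.submit(context, job))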
16:20 Tea/Coffee break
Overview of the network implementation in EHN1 and connectivity to the CC