WLCG DOMA Network Data Challenges
-
Provisional date: last week of September (the Rucio DB update could shift it by at most one week, since the last two weeks of October host test beams).
-
Central infrastructure ready to submit transfers with the label --activity “Data Challenge”.
-
Functional tests run with all ATLAS SCRATCHDISK sites using the
`rucio01:wlcg_doma_dc_test.0001` container, which includes only
mc15_13TeV:EVNT.10267991._000001.pool.root.1
Total files : 1
Total size : 150.365 MB
Total events : 5000
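Submitting such a tagged transfer could look roughly as follows. This is a minimal sketch that only assembles the CLI call: the destination RSE is a placeholder, and the exact `rucio add-rule` option set should be checked against the deployed Rucio release.

```python
# Sketch: assemble the rucio CLI call used to submit the functional-test
# transfer with the "Data Challenge" activity label.
# DEST_RSE is a hypothetical placeholder; consult `rucio add-rule --help`
# on the deployed release for the authoritative option list.

DID = "rucio01:wlcg_doma_dc_test.0001"   # test container from the notes
DEST_RSE = "SOME-SITE_SCRATCHDISK"       # placeholder destination

def build_add_rule_cmd(did, copies, rse_expression, activity):
    """Return the argv list for a `rucio add-rule` tagged with an activity."""
    return [
        "rucio", "add-rule",
        "--activity", activity,
        did, str(copies), rse_expression,
    ]

cmd = build_add_rule_cmd(DID, 1, DEST_RSE, "Data Challenge")
print(" ".join(cmd))
```

The activity label is what lets FTS (and the FTS Status Board) separate challenge traffic from production transfers.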
-
Results available at the prototype of the common monitoring (Data Challenges / FTS Status Board: https://monit-grafana.cern.ch/d/ZqU5ugjMz/fts-status-board?orgId=20 ).
-
Two different approaches to consider:
-
A1: to test the link siteA-siteB, files present at siteA but not at siteB are identified and the related dataset is attached to a new dedicated container (`rucio01:wlcg_doma_dc_test.0001`).
-
A2: existing files are replicated under new `scope:name` pairs and the copies are later deleted.
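The selection step of A1 can be sketched as a set difference over the two sites' replica lists. This is a pure-Python illustration with made-up DIDs; a real implementation would query the Rucio replica catalogue instead of hard-coded sets.

```python
# Sketch of the A1 selection: given the replica lists of two sites,
# pick the DIDs present at siteA but not at siteB; these are the files
# whose dataset would be attached to the dedicated test container.
# All DIDs below are invented for illustration.

replicas_site_a = {
    "mc15_13TeV:EVNT.A._000001.pool.root.1",
    "mc15_13TeV:EVNT.A._000002.pool.root.1",
    "mc15_13TeV:EVNT.A._000003.pool.root.1",
}
replicas_site_b = {
    "mc15_13TeV:EVNT.A._000002.pool.root.1",
}

def select_for_link_test(at_source, at_destination):
    """DIDs to transfer when exercising the source -> destination link."""
    return sorted(at_source - at_destination)

to_transfer = select_for_link_test(replicas_site_a, replicas_site_b)
print(to_transfer)
```

A2 avoids this bookkeeping by minting fresh `scope:name` copies, at the cost of extra deletions afterwards.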
-
Experiments' feedback and status:
-
ATLAS: ongoing effort to efficiently integrate machinery for A1.
-
CMS: the central infrastructure is waiting for feedback (after a dedicated meeting) to identify the CMS responsible and the preferred approach. Whether the central infrastructure will be used is still under discussion. CMS needs to upgrade to at least Rucio release 1.25.6 to profit from --activity “Data Challenge”. A2 is proposed, as something similar has already been done.
-
LHCb: agreed to integrate the --activity “Data Challenge” label as well (at the FTS level). Will NOT use the central infrastructure.
-
ALICE: participating in the xrootd monitoring discussions.
Data Challenges general
- Common meeting with the TAPE challenges and T0 challenges: decided to cooperate so as not to step on each other with too many concurrent requests
- We think the two challenges should be kept technically separate, i.e. the data challenges should not involve tape
- The tape challenges should also happen after the network challenges, since they measure an extra layer
- ALICE is only interested in the tape challenges
- LHCb's position is not yet clear
- Tape information requested from the T1 sites: https://twiki.cern.ch/twiki/bin/view/LCG/TapeTestsPreparation
- xrootd monitoring review
- Meeting held to review the xrootd monitoring
- Replace the current Gled collector with a new collector
- Add a UDP -> TCP packet translator to avoid losing information over the WAN
- There is almost an agreement, but this might become a medium-term activity
- The xrootd fields may also be reduced; need to understand who needs all that detailed information
- Some fields were requested by ATLAS and CMS to debug streaming
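A UDP -> TCP translator has to preserve datagram boundaries on the stream side, since TCP carries an undelimited byte stream. A common approach is length-prefixed framing, sketched below; this is an illustration of the technique, not the actual collector or translator code.

```python
import struct

# Sketch of datagram framing for a UDP -> TCP relay: each UDP monitoring
# packet is prefixed with a 4-byte big-endian length before being written
# to the TCP connection, so packet boundaries survive the WAN hop.
# Illustrative only -- not the real xrootd monitoring translator.

def frame(datagram: bytes) -> bytes:
    """Prefix a datagram with its length for the TCP leg."""
    return struct.pack("!I", len(datagram)) + datagram

def unframe(stream: bytes):
    """Split a received TCP byte stream back into the original datagrams."""
    out, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        offset += 4
        out.append(stream[offset:offset + length])
        offset += length
    return out

packets = [b"xrootd-summary-1", b"xrootd-detail-2"]
wire = b"".join(frame(p) for p in packets)
print(unframe(wire) == packets)
```

Unlike raw UDP, the TCP leg retransmits lost segments, which is the point of the translator for lossy WAN paths.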
- SRM+https testing setup
- Moved under ATLAS for final transfers to and from tape
- INFN-T1 has already created a test tape endpoint
- BNLLAKE and BNL-ATLAS will be next
- Manchester will be used for these specific tests
- Plan to test all the T1s
- Important to have the data challenges use SRM+https instead of SRM+gsiftp
TPC Update
-
TPC migration
- ATLAS: 87% done, missing 9 T2 sites
- 3 testing the new xrootd
- 1 new EOS
- 4 waiting for a stable xrootd release in the production repositories
- 1 StoRM site to be upgraded soon
- xrootd still uncovering bugs as new sites are added to the tests
- CMS: 64% done, missing 19 T2 sites
- 3 are ready for production (but not yet enabled)
- 3 are having issues with their loadTests
- 1 just passed the manual tests
- 10 have an endpoint but haven't passed the manual tests
- 2 don't have an endpoint yet