https://tinyurl.com/T1-GGUS-Open
https://tinyurl.com/T1-GGUS-Closed
https://lcgwww.gridpp.rl.ac.uk/utils/availchart/
https://cms-site-readiness.web.cern.ch/cms-site-readiness/SiteReadiness/HTML/SiteReadinessReport.html#T1_UK_RAL
http://hammercloud.cern.ch/hc/app/atlas/siteoverview/?site=RAL-LCG2&startTime=2020-01-29&endTime=2020-02-06&templateType=isGolden
New accounting period starts:
CPU (HEP-SPEC06): 132,125 -> 156,436
Disk (Tbytes): 11,000 -> 13,024
Tape (Tbytes): 27,625 -> 32,708
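All three allocations above grow by the same factor; a quick sanity check of the uplift (pure arithmetic on the figures quoted, not part of the original minutes):

```python
# Sanity-check the uplift factor across the new accounting allocations.
old = {"CPU (HEP-SPEC06)": 132125, "Disk (TB)": 11000, "Tape (TB)": 27625}
new = {"CPU (HEP-SPEC06)": 156436, "Disk (TB)": 13024, "Tape (TB)": 32708}

for key in old:
    factor = new[key] / old[key]
    print(f"{key}: x{factor:.3f}")  # each resource grows by exactly 18.4%
```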
Data Carousel:
Data15 started yesterday (9 datasets); already >50% done
GU (Grand Unified) queues:
New queues have been created; we will start disabling the old queues.
GGUS 146360: deletion failures from Echo:
e.g.:
gsiftp://gridftp.echo.stfc.ac.uk:2811/atlas:datadisk/rucio/data17_13TeV/e1/71/data17_13TeV.00339387.physics_Main.daq.RAW._lb0525._SFO-6._0002.data
The deletion error rate peaked at about 10% at 16:00 on 31 March (with ~70 TB/h deletion volume and ~12 GB/s deletion throughput).
It has since improved to near-100% efficiency, with ~2-3 TB/h deletion volume and ~1 GB/s throughput.
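For scale, the quoted deletion volumes can be converted from TB/h to GB/s (decimal units; a rough check, not part of the original minutes — note the separately quoted throughput figures are a different measurement and need not equal this conversion):

```python
def tb_per_hour_to_gb_per_s(tb_h: float) -> float:
    """Convert a deletion volume in TB/hour to GB/second (decimal units)."""
    return tb_h * 1000 / 3600

print(f"70 TB/h  = {tb_per_hour_to_gb_per_s(70):.1f} GB/s")   # peak-period volume
print(f"2.5 TB/h = {tb_per_hour_to_gb_per_s(2.5):.2f} GB/s")  # current volume
```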
This may be related to the Data Carousel processing model.
No monitoring seems to capture deletion information from Vande / Kibana; is this needed?
Do we know where the 'bottleneck' appears?
'Lifetime model' data deletion is set for next week (we suspect a smaller effect).