https://tinyurl.com/T1-GGUS-Open
https://tinyurl.com/T1-GGUS-Closed
https://lcgwww.gridpp.rl.ac.uk/utils/availchart/
https://cms-site-readiness.web.cern.ch/cms-site-readiness/SiteReadiness/HTML/SiteReadinessReport.html#T1_UK_RAL
http://hammercloud.cern.ch/hc/app/atlas/siteoverview/?site=RAL-LCG2&startTime=2020-01-29&endTime=2020-02-06&templateType=isGolden
MCTape deletions: deletions are slow, affecting the overall deletion rate:
- https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=382696
* Dedicated RAL Rucio Reaper set up for these deletions
- Number of threads modified to give a deletion rate of ~3.3 Hz
- Expect completion around the beginning of next week (see the rough estimate below)
- It would still be nice to understand whether ~3 s per (asynchronous) deletion request is usual.
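As a quick sanity check on that completion estimate, a back-of-the-envelope calculation from the ~3.3 Hz rate; the backlog figure below is a placeholder, not a number from the ticket:

    # Rough ETA for the MCTape deletion backlog at the observed ~3.3 Hz rate.
    # BACKLOG_FILES is a placeholder; substitute the real remaining count from the ticket.
    from datetime import datetime, timedelta

    DELETION_RATE_HZ = 3.3
    BACKLOG_FILES = 1_500_000  # hypothetical

    seconds_needed = BACKLOG_FILES / DELETION_RATE_HZ
    eta = datetime.now() + timedelta(seconds=seconds_needed)
    print(f"~{seconds_needed / 86400:.1f} days remaining, ETA {eta:%Y-%m-%d}")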
WN Slots - modification to the slot count for the tranche
- https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=381995
Jose hardcoded 48 slots for the test tranche, but Quattor does not appear to be propagating the change to HTCondor?
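One quick way to check whether the change is reaching the batch system is to look at what a test-tranche node actually advertises to HTCondor; a minimal sketch, assuming condor_status is available on a login node (the hostname below is a placeholder):

    # Print the slot layout a test-tranche worker node advertises to HTCondor,
    # to check whether the hardcoded 48-slot configuration has propagated.
    import subprocess

    NODE = "lcg1234.gridpp.rl.ac.uk"  # hypothetical worker-node hostname
    out = subprocess.run(
        ["condor_status", "-constraint", f'Machine == "{NODE}"', "-af", "Name", "Cpus"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout or f"No slots advertised for {NODE}")
    # If the slot/CPU layout still reflects the old count rather than 48,
    # the Quattor change is not being propagated to HTCondor.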
Tape is still full, so no new transfers are coming in; this is why the FTS status is currently at zero. CMS does not currently need additional tape to be provided.
I thought I had accidentally deleted part of the CSA07 folder on CASTOR. After investigation, I found that the data is no longer needed by CMS (2007 training data), that it had in fact already been deleted, and that I had only removed empty folders. I sent a list of the deleted empty folders to AD and DM.
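For reference, the empty-folder check could be done with something like the following; a minimal sketch, assuming the CASTOR name-server client nsls is available, with a placeholder path rather than the actual CSA07 location:

    # List subdirectories of a CASTOR path that are empty (no entries under nsls),
    # e.g. to produce the "empty folders deleted" list sent to AD and DM.
    # The base path is a placeholder, not the real CSA07 location.
    import subprocess

    BASE = "/castor/ads.rl.ac.uk/prod/cms/CSA07"  # hypothetical path

    def nsls(path):
        """Return the entries nsls prints for a path (empty list for an empty directory)."""
        out = subprocess.run(["nsls", path], capture_output=True, text=True, check=True)
        return out.stdout.split()

    for entry in nsls(BASE):
        path = f"{BASE}/{entry}"
        if not nsls(path):
            print(path)  # empty folder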
There was supposed to be a network intervention today (jumbo frames), but we don't think it happened. There was, however, a switch-over of the OPN to the new 100 Gb link (linking RAL to CERN and the other Tier-1s).
CMS job efficiency has been up and down. Some jobs were probably running purely on on-site data, although I cannot confirm this.
LHCb:
DUNE:
Averaging ~100 concurrent jobs over the last few days, and ~300 over the last few hours. Jobs run in 4-core slots.