https://stfc365.sharepoint.com/:w:/r/sites/RAL-LCG2Tier1Liaison/Shared%20Documents/General/Weekly%20Reports/Weekly%20Report%205%20July%202021%20.docx?d=wb14eb80bd2a3413bab99eb7abb877752&csf=1&web=1&e=1ECJn9
https://tinyurl.com/T1-GGUS-Open
https://tinyurl.com/T1-GGUS-Closed
https://lcgwww.gridpp.rl.ac.uk/utils/availchart/
https://cms-site-readiness.web.cern.ch/cms-site-readiness/SiteReadiness/HTML/SiteReadinessReport.html#T1_UK_RAL
http://hammercloud.cern.ch/hc/app/atlas/siteoverview/?site=RAL-LCG2&startTime=2020-01-29&endTime=2020-02-06&templateType=isGolden
ATLAS is requesting a maximum work-dir size of 20 GB/core to help with merge jobs:
Should be OK on all tranches; the 2019 tranche is a bit tight if XCache is included (per-tranche figures below; see the sketch after the list).
wn-2019-dell: 3.2 TB / 128 cores ≈ 25 GB/core
wn-2018-xma: 3.3 TB / 64 cores ≈ 50 GB/core
wn-2017-dell: 3.5 TB / 64 cores ≈ 55 GB/core
wn-2017-xma: 3.3 TB / 56 cores ≈ 59 GB/core
wn-2016-dell: 5.0 TB / 40 cores ≈ 125 GB/core (also running reduced slots)
wn-2015-xma: 3.4 TB / 32 cores ≈ 106 GB/core (also running reduced slots)
wn-2015-hpe: 3.4 TB / 32 cores ≈ 106 GB/core (also running reduced slots)
wn-2014-viglen-highmem: 3.4 TB / 32 cores ≈ 106 GB/core (also running reduced slots)
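A rough sketch of the per-core scratch arithmetic behind the figures above (not part of the original notes): the disk-per-node and cores-per-node values are taken from the tranche list, the 20 GB/core threshold is the ATLAS request, and the XCache reservation is a purely hypothetical figure used only to illustrate why the 2019 tranche gets tight.

    # Per-tranche work-dir space per core vs the ATLAS request of 20 GB/core.
    # Disk (TB/node) and cores/node come from the list above; XCACHE_RESERVED_TB
    # is a hypothetical illustration, not a measured value.
    ATLAS_REQUEST_GB_PER_CORE = 20
    XCACHE_RESERVED_TB = 1.0  # assumed, for illustration only

    tranches = {
        # name: (local disk per node in TB, cores per node)
        "wn-2019-dell": (3.2, 128),
        "wn-2018-xma": (3.3, 64),
        "wn-2017-dell": (3.5, 64),
        "wn-2017-xma": (3.3, 56),
        "wn-2016-dell": (5.0, 40),
        "wn-2015-xma": (3.4, 32),
        "wn-2015-hpe": (3.4, 32),
        "wn-2014-viglen-highmem": (3.4, 32),
    }

    for name, (disk_tb, cores) in tranches.items():
        plain = disk_tb * 1000 / cores                              # GB/core, no XCache
        with_xcache = (disk_tb - XCACHE_RESERVED_TB) * 1000 / cores  # GB/core after reservation
        flag = "OK" if with_xcache >= ATLAS_REQUEST_GB_PER_CORE else "tight with XCache"
        print(f"{name:24s} {plain:6.1f} GB/core ({with_xcache:5.1f} with XCache)  {flag}")

With these figures only wn-2019-dell drops below 20 GB/core once the hypothetical XCache reservation is subtracted, which matches the "a bit tight" remark above.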
Major problems with Rucio on Monday night; sites were put into test until the afternoon.
Still need to discuss the RAL / ATLAS discrepancy in HS06 values.
Some SAM test SRM/tape endpoint failures on Monday/Tuesday. The Castor team found a large number of requests in the system, which was having an impact on all VOs. This was possibly caused by tests being performed by LHCb reading from the Castor buffer, but there were also a lot of reads from NA62.
Job efficiency is going through a good period; this is due to a job mix including a very large proportion of GEN-SIM jobs, which do not use significant I/O.
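For context, the job efficiency quoted here is essentially CPU time divided by wall-clock time summed over the allocated cores; the sketch below (not from the notes, with made-up numbers) shows why a CPU-bound GEN-SIM job with little I/O wait scores close to 1 while an I/O-heavy job does not.

    def cpu_efficiency(cpu_time_s: float, wall_time_s: float, cores: int) -> float:
        # CPU time over wall-clock time multiplied by the number of allocated cores.
        return cpu_time_s / (wall_time_s * cores)

    # Hypothetical 8-core GEN-SIM job: almost no time spent waiting on I/O.
    print(cpu_efficiency(cpu_time_s=27_000, wall_time_s=3_600, cores=8))  # ~0.94

    # Hypothetical I/O-heavy job in the same slot, stalled for half its wall time.
    print(cpu_efficiency(cpu_time_s=14_400, wall_time_s=3_600, cores=8))  # 0.50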