https://tinyurl.com/T1-GGUS-Open
https://tinyurl.com/T1-GGUS-Closed
https://lcgwww.gridpp.rl.ac.uk/utils/availchart/
https://cms-site-readiness.web.cern.ch/cms-site-readiness/SiteReadiness/HTML/SiteReadinessReport.html#T1_UK_RAL
http://hammercloud.cern.ch/hc/app/atlas/siteoverview/?site=RAL-LCG2&startTime=2020-01-29&endTime=2020-02-06&templateType=isGolden
Investigating failing tape transfers from RAL and elsewhere for CMS. The current suspect is the FNAL FTS, which was recently upgraded: a test with the CERN FTS on a subset of the same data gave successful transfers where the FNAL FTS gave none. The data stages successfully from tape but is then not instructed by FTS to move to Echo (at least that is the current working theory). Steve Murray is looking at it; he says the FNAL FTS is misconfigured for Antares.
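As a rough sketch of the comparison test above (a minimal example, assuming the fts3 Python bindings from fts-rest; all endpoints and file URLs are placeholders, not the production values), the same transfer can be submitted to both FTS instances and the outcomes compared:

    # Sketch: submit the same tape-to-Echo transfer via two FTS instances.
    # Endpoint and file URLs are placeholders, not actual production values.
    import fts3.rest.client.easy as fts3

    SOURCE = "root://antares.example:1094//store/cms/test/file.root"  # hypothetical
    DEST = "root://echo.example:1094//store/cms/test/file.root"       # hypothetical

    for endpoint in ("https://fts-cern.example:8446",   # stand-in for the CERN FTS
                     "https://fts-fnal.example:8446"):  # stand-in for the FNAL FTS
        context = fts3.Context(endpoint)
        job = fts3.new_job([fts3.new_transfer(SOURCE, DEST)],
                           bring_online=3600)  # request tape staging before the copy
        print(endpoint, fts3.submit(context, job))

If the job on one instance stages the files but never schedules the copy to Echo while the other completes, that would point at the instance's configuration rather than the tape system itself.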
Also on tape failures: a few CMS tapes were 'disabled' this week (they were re-enabled by the script, but still caused significant failures). Is this happening more often than usual?
Intermittent webdav SAM test failures over the last 2 days, coincident with critical status on a number of gateways, mainly svc01/02 and gw14/15.
RAL was in HC (HammerCloud) test exclusion overnight:
- Stage-out failures (svc02) and a rack power-off triggered HC test failures. One of the HC tests stopped running, so RAL was not put back online.
- The site has been forced online and is being followed up; experts have now reinjected the tests.
BNL -> RAL (and CNAF) transfers over the OPN have been very slow for ~1 week; the problem appears to be on the BNL side, however.
Accounting differences observed between the VO monitoring and the WLCG accounting figures, starting ~September. See the attached plot.
DNS issues reappeared on Sunday morning. Possibly due to the TTL changes to the webdav alias, fewer transfer failures were observed during this period.
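As a quick way to confirm what TTL resolvers are actually being handed for the alias (a minimal sketch, assuming the dnspython library; the hostname below is a placeholder, not the real webdav alias):

    # Sketch: check the TTL currently served for a DNS alias, to confirm
    # that a TTL change has propagated. Hostname is a placeholder.
    import dns.resolver  # pip install dnspython

    ALIAS = "webdav.example.ac.uk"  # hypothetical; substitute the real alias

    answer = dns.resolver.resolve(ALIAS, "A")
    addresses = [record.address for record in answer]
    print(f"{ALIAS} -> {addresses} (TTL {answer.rrset.ttl}s)")

A shorter TTL lets clients pick up gateway changes faster, which would be consistent with the reduced failure rate seen during the incident.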