Please be sure to register for the Facilities meeting at Argonne:
Dark data cleanup at BNL is being followed up in the DDM Ops JIRA: https://its.cern.ch/jira/browse/ATLDDMOPS-5465. After the cleanup a significant leftover remains (300-400 TB on DATADISK and about 100 TB on SCRATCHDISK), which could be a reporting issue or unreported usage. This needs to be checked on the storage side.
Independently of the previous point, BNL storage reporting has been stuck since Nov. 15, showing absolutely no change in the storage numbers for any token since then. This may result in the storage filling up. This was also mentioned in the same ticket, with the BNL admins in CC.
There is a storage reporting consistency issue at MWT2_UC_SCRATCHDISK, with the storage-reported numbers below the Rucio ones. This appears to have happened after a deletion of ~600K files (~90 TB) on Nov. 8-9, with subsequent transfers filling the freed space; a way to cross-check the storage vs Rucio numbers is sketched after these storage items.
The SLACXRD_LOCALGROUPDISK space reporting value dropped a couple of days ago; this is probably just a reporting issue.
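As a rough cross-check for the reporting discrepancies above, something along the following lines could compare the Rucio-reported and storage-reported usage for an RSE. This is a minimal sketch assuming a configured Rucio Python client and that the RSE publishes both a 'rucio' and a 'storage' usage source; the RSE name is only an example taken from the item above.

    # Sketch: compare Rucio-reported vs storage-reported usage for one RSE.
    # Assumes the Rucio Python client is installed and configured (rucio.cfg,
    # valid credentials) and that the RSE publishes 'rucio' and 'storage' sources.
    from rucio.client import Client

    RSE = "MWT2_UC_SCRATCHDISK"  # example RSE from the minutes

    client = Client()
    usage = {u["source"]: u["used"] for u in client.get_rse_usage(RSE)}

    rucio_used = usage.get("rucio")
    storage_used = usage.get("storage")

    if rucio_used is None or storage_used is None:
        print(f"{RSE}: missing usage source(s); available: {sorted(usage)}")
    else:
        diff_tb = (rucio_used - storage_used) / 1e12
        print(f"{RSE}: rucio={rucio_used / 1e12:.1f} TB, "
              f"storage={storage_used / 1e12:.1f} TB, diff={diff_tb:.1f} TB")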
Working on issues with the OSG/WLCG MaDDash instance: https://psmad.opensciencegrid.org/maddash-webui/
The PWA (pSConfig GUI) at https://psconfig.opensciencegrid.org has some issues getting all the hosts published in OIM and GOCDB. We are working on tracking down the problem in the code on GitHub: https://github.com/soichih/gocdb2sls
We have seen some cases where perfSONAR toolkit deployments have default limits set that prevent testing from working. The toolkits seem to be OK, but test results are not showing up. In some cases this is because of a 10 GByte directory size limit. The file to check on latency nodes is /etc/owamp-server/owamp-server.limits; the value to increase is 'disk=10G'. Increase it to at least 50G (assuming your disk can hold this much).
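As a quick check for this, a small script along the following lines could flag owamp-server disk limits that are still at the 10G default. This is a sketch only; the parsing is a simplified assumption about the limits-file format, so verify it against the actual file syntax on your toolkit.

    # Sketch: warn if /etc/owamp-server/owamp-server.limits still carries the
    # default 10G disk limit. The regex is a simplified assumption about the
    # limits-file format, not a reference parser.
    import re
    from pathlib import Path

    LIMITS_FILE = Path("/etc/owamp-server/owamp-server.limits")
    MIN_DISK_GB = 50  # recommended minimum from the item above

    def parse_gb(value: str) -> float:
        """Convert a size like '10G', '500M' or '1T' to gigabytes (roughly)."""
        match = re.fullmatch(r"(\d+(?:\.\d+)?)([MGT])", value.strip(), re.IGNORECASE)
        if not match:
            raise ValueError(f"unrecognised size: {value!r}")
        number, unit = float(match.group(1)), match.group(2).upper()
        return number * {"M": 1 / 1024, "G": 1, "T": 1024}[unit]

    text = LIMITS_FILE.read_text()
    for match in re.finditer(r"disk\s*=\s*(\S+)", text):
        raw = match.group(1).rstrip(",;")
        try:
            size_gb = parse_gb(raw)
        except ValueError as exc:
            print(f"could not parse disk limit {raw!r}: {exc}")
            continue
        status = "OK" if size_gb >= MIN_DISK_GB else f"increase to >= {MIN_DISK_GB}G"
        print(f"disk={raw} (~{size_gb:.0f} GB): {status}")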
No tickets/incidents
We finished upgrading the slave PostgreSQL database node for the dCache head node from SL6 to CentOS 7. ZFS is used to host the PostgreSQL database, and we upgraded PostgreSQL from 10.5 to 10.6 on both the host and slave nodes.
We built the OpenAFS 1.8.2 RPMs on the CentOS 7.5 node. The new OpenAFS client (1.8.2) is running well on the CentOS 7.5 node; we plan to test it on the SL6/7 nodes.