To enable an iCal export link, your account needs an API key. This key allows other applications to access data from Indico through the link provided, even when you are not using or logged into Indico yourself. Once created, you can manage your key at any time by going to 'My Profile' and looking under the tab entitled 'HTTP API'. Further information about HTTP API keys can be found in the Indico documentation.
In addition to an API key, exporting private event information requires the use of a persistent signature. This enables API URLs that do not expire after a few minutes, so while the setting is active, anyone in possession of the link can access the information. For this reason it is extremely important that you keep these links private and for your use only. If you suspect that someone else has acquired access to a link using this key, immediately create a new key pair on the 'My Profile' page under the 'HTTP API' tab and update your iCalendar links afterwards. (A sketch of how such signed links are built follows below.)
Permanent link for public information only:
Permanent link for all public and protected information:
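To make the persistent-signature mechanism concrete, here is a minimal sketch of building a signed export URL, following the HMAC-SHA1 scheme described in the Indico HTTP API documentation; the host name, event ID, and key values are placeholders, not real credentials.

```python
# Minimal sketch of a signed Indico HTTP API export URL, following
# the HMAC-SHA1 scheme from the Indico HTTP API docs. The host name,
# event ID, and both key values below are placeholders.
import hashlib
import hmac
import urllib.parse

API_KEY = "00000000-0000-0000-0000-000000000000"     # placeholder
SECRET_KEY = "11111111-1111-1111-1111-111111111111"  # placeholder

def build_signed_url(host, path, params, api_key, secret_key):
    # With a persistent signature no timestamp is included, so the
    # resulting link never expires -- which is exactly why it must
    # be kept private.
    items = sorted(dict(params, apikey=api_key).items())
    query = urllib.parse.urlencode(items)
    signature = hmac.new(secret_key.encode(),
                         f"{path}?{query}".encode(),
                         hashlib.sha1).hexdigest()
    return f"https://{host}{path}?{query}&signature={signature}"

# e.g. an iCal export of a single (hypothetical) event:
print(build_signed_url("indico.example.com",
                       "/export/event/12345.ics", {},
                       API_KEY, SECRET_KEY))
```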
Additional large block storage request from IT-DB for the Oracle recovery servers (~500TB eventually, 200TB for now). We are waiting on the new vault cluster before granting this.
Following this and the AFS request (+800TB), it was decided that:
New Ceph servers in the vault will be set up as a new cluster, "cephvault", which will be exposed in Cinder as new volume types "vault-100" and "vault-500", one for each of the two QoS types (see the sketch after this list). (The original plan was for this vault hardware to replace the beesly RA racks.)
We expect another 4 quads in July (8.5PB) -- these were originally meant to replace the S3/CASTOR/CephFS machines in the RJ and EC racks. Instead we will use the July delivery to replace the current beesly hardware (RA racks).
Bernd will aim to get us 8 more quads for Q4 delivery to replace S3, CASTOR, and CephFS.
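A hedged sketch of what the Cinder side of the vault-100/vault-500 setup could look like with python-cinderclient; the IOPS values are an assumption read off the type names, and all auth parameters are placeholders:

```python
# Hedged sketch of registering the two vault volume types in Cinder
# via python-cinderclient. The IOPS caps (100/500) are assumptions
# read off the type names; all auth parameters are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from cinderclient import client

auth = v3.Password(auth_url="https://keystone.example.ch:5000/v3",  # placeholder
                   username="admin", password="***",
                   project_name="admin",
                   user_domain_id="default", project_domain_id="default")
cinder = client.Client("3", session=session.Session(auth=auth))

for name, iops in (("vault-100", 100), ("vault-500", 500)):
    vtype = cinder.volume_types.create(name)
    qos = cinder.qos_specs.create(f"{name}-qos",
                                  {"total_iops_sec": str(iops)})
    cinder.qos_specs.associate(qos, vtype.id)
```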
The nethub S3 cluster has been suffering from slow ping times for quite some time. On Friday I ran some iperf3 tests and found that one of the racks is very slow.
I propose that we configure a regular iperf3 test between all Ceph OSD hosts: e.g. run an iperf3 server on port 8001, then from an hourly cron job run an iperf3 test and send an email to ceph-alerts if the measured bandwidth is below 500 Mbps.
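A minimal sketch of that hourly check, assuming iperf3 servers are already listening on port 8001 on the peers; the peer host names and sender address are placeholders, while the 500 Mbps threshold and the ceph-alerts recipient come from the proposal above:

```python
#!/usr/bin/env python3
# Hourly cron job sketch for the proposed OSD-to-OSD bandwidth check.
# Assumes an iperf3 server already listens on port 8001 on each peer.
import json
import smtplib
import subprocess
from email.message import EmailMessage

PEERS = ["cephosd-01.example.ch", "cephosd-02.example.ch"]  # placeholders
PORT = 8001
THRESHOLD_BPS = 500e6  # 500 Mbps, from the proposal

def measure(host):
    """Run a short iperf3 test and return received bits per second."""
    result = subprocess.run(
        ["iperf3", "-J", "-t", "5", "-p", str(PORT), "-c", host],
        capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"]

def alert(host, bps):
    """Mail the ceph-alerts list about a slow link."""
    msg = EmailMessage()
    msg["Subject"] = f"slow network to {host}: {bps / 1e6:.0f} Mbps"
    msg["From"] = "ceph-probe@example.ch"  # placeholder sender
    msg["To"] = "ceph-alerts@cern.ch"      # list from the proposal; domain assumed
    msg.set_content(
        f"iperf3 to {host}:{PORT} measured {bps / 1e6:.0f} Mbps "
        f"(threshold {THRESHOLD_BPS / 1e6:.0f} Mbps)")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

for peer in PEERS:
    bps = measure(peer)
    if bps < THRESHOLD_BPS:
        alert(peer, bps)
```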
This morning one ceph/gabe OSD host went down and did not come back after a reboot. Connected to the console, the machine looked fine, but it had no network. I pinged Vincent Ducret on Mattermost:
There was an intervention fixing CRC errors on port 12 of the switch -- our server is on port 11.
He went to the switch, saw that there were no activity LEDs, replugged the cable, and our server came back up.
He asked the 2nd-line CS operators to exercise more caution when manipulating cables.
On the Ceph side, the cluster was degraded for a few hours -- I set the noout flag so that backfilling wouldn't start.
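The flag in question is the standard `ceph osd set noout`; a tiny illustrative wrapper (the helper itself is invented for this note):

```python
# noout prevents down OSDs from being marked out, so no backfilling
# starts during a short intervention. The wrapper is illustrative;
# the underlying `ceph osd set/unset noout` commands are standard.
import subprocess

def osd_flag(flag, enable=True):
    """Set or clear a cluster-wide OSD flag such as 'noout'."""
    action = "set" if enable else "unset"
    subprocess.run(["ceph", "osd", action, flag], check=True)

osd_flag("noout")                  # host down: keep OSDs in, no backfill
# ... after the host is back and PGs are active+clean:
osd_flag("noout", enable=False)
```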
Repair Service Liaison (5m)
Backend Cluster Maintenance (5m)
Added new capacity to ceph/beesly
Formatted its OSDs
Removed its OSDs
Removed new capacity from ceph/beesly
Created new "vault" cluster
Moved new machines to the vault cluster
Picked 3 machines on different racks to also act as mons (see the sketch below)
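A small sketch of the rack-aware mon selection in that last step; the host names and rack labels are invented, and only the one-mon-per-rack rule is the point:

```python
# Pick mon hosts so that no two share a rack. The host -> rack
# mapping and all names below are invented for illustration.
rack_map = {
    "cephvault-01": "RA01", "cephvault-02": "RA01",
    "cephvault-03": "RB03", "cephvault-04": "RB03",
    "cephvault-05": "RC05", "cephvault-06": "RC05",
}

def pick_mons(rack_map, count=3):
    """Pick `count` hosts, no two from the same rack."""
    chosen, used_racks = [], set()
    for host, rack in sorted(rack_map.items()):
        if rack not in used_racks:
            chosen.append(host)
            used_racks.add(rack)
            if len(chosen) == count:
                return chosen
    raise ValueError(f"fewer than {count} distinct racks available")

print(pick_mons(rack_map))  # ['cephvault-01', 'cephvault-03', 'cephvault-05']
```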