Cooperating with Leduc and Vlado to get CTA machines for Ceph.
Enrico (barn, beesly, gabe, meredith, nethub, vault) 5m
(from last week) Benchmarking and finalization for enrollment in OpenStack
Slowly removing OSDs on new HW
1800 OSDs in crush -- PG map getting too big
Will remove 10x48 OSDs --> 1500 remain in crush
Hosts will remain in root=incoming for easier recreation
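For context, a rough sketch of that kind of drain-then-remove flow, assuming a host named cephdata-host01 (hypothetical) whose OSDs are being taken out of crush:

```
# Mark the OSDs on one host out so data drains off them first
for id in $(ceph osd ls-tree cephdata-host01); do
  ceph osd out osd.$id
done

# Once the cluster is healthy again, purge them from the crush map
for id in $(ceph osd ls-tree cephdata-host01); do
  ceph osd purge osd.$id --yes-i-really-mean-it
done

# Keep (or move) the emptied host bucket under root=incoming so it is easy to recreate later
ceph osd crush move cephdata-host01 root=incoming
```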
MGR struggles to export metrics to Prometheus
Likely due to big PG map
`ceph config set mgr mgr/prometheus/scrape_interval 120`
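If useful, the new value can be confirmed and the module reloaded; the commands below assume the standard ceph-mgr prometheus module, and the Prometheus server's own scrape interval would likely need to be aligned with it:

```
# Confirm the setting took effect
ceph config get mgr mgr/prometheus/scrape_interval

# Optionally reload the module so the new interval is picked up immediately
ceph mgr module disable prometheus
ceph mgr module enable prometheus
```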
(from last week) 2 nodes require reinstallation to get RAID1 on the system disk (CEPH-1045)
CEPH-1078: one rgw marked out for network switch replacement
Now back. Thanks @Dan!
Meredith, Nethub, Vault: NTR
Discussion with the OpenStack team:
Embrace the AZ model for Cinder volume provisioning
One cluster missing (Oscar?) for "standard" volumes, then Beesly + Vault
IO volumes provided by Meredith (io2, io3) + Kelly (hyperc)
Barn will do the critical-power volumes
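From the consumer side, the AZ model discussed above might look roughly like the sketch below; the AZ and volume-type names used here are placeholders, not confirmed names:

```
# List the availability zones Cinder exposes for volumes
openstack availability zone list --volume

# Create volumes of a given type in a given AZ
openstack volume create --size 500 --type standard --availability-zone vault-az data-vol
openstack volume create --size 100 --type io2 --availability-zone meredith-az fast-vol
```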
Dan (dwight, flax, kopano, jim) 5m
Dan van der Ster
flax: slowness reported Thursday:
linuxsoft reported slowness early in the day, around 9:20am, in ~Ceph
Dan checked, no slow requests. Increased MDS cache size around 9:40 from 4GB to 8GB (command sketch after this block).
CephFS load plots don't show any correlated increases. None of the clients seemed exceptionally busy/active.
Wojciech reported all looks ok to him.
Dan started checking the Samba gateways around 2pm -- they were hammering an msg.sock dir with hundreds of socket files. Giuseppe cleaned that up over the afternoon -- not sure exactly when. After (possibly) restarting Samba, the msg.sock files were back to ~normal.
Around 17h00 Dan moved JIRA and Webcast from mds.0 to mds.2 -- they are relatively metadata-active, but it's not clear these were related to any slowness during the day.
My best theory is that the Samba thrashing made mds.0 so busy that some metadata requests were slowed down.
Giuseppe is reviewing whether msg.sock even needs to be on a shared filesystem, and he will move it to Levinson.
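A minimal sketch of the two interventions above (the 4GB to 8GB MDS cache bump and moving JIRA and Webcast off mds.0); the notes don't say how the move was actually done, so the directory pinning and paths below are assumptions for illustration:

```
# Raise the MDS cache memory limit to 8GB (8 * 1024^3 bytes)
ceph config set mds mds_cache_memory_limit 8589934592

# One way to move a tree to another active MDS rank: pin it via xattr
# (paths are hypothetical; requires multiple active MDS ranks)
setfattr -n ceph.dir.pin -v 2 /cephfs/volumes/jira
setfattr -n ceph.dir.pin -v 2 /cephfs/volumes/webcast
```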
CEPH-1068: dwight crush and peering issue (PGs undersized / clean depending on the order OSDs boot) -- reproduced the issue again with debug_osd = 20 and posted the logs to the tracker. During this testing I managed to break dwight for a couple of minutes by making huge crush map changes in a short time.
CEPH-1078: one rgw marked out for network switch replacement.
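For reference, that debug level can be raised and lowered at runtime; the sketch below targets all OSDs, which would normally be scoped down to the ones being investigated:

```
# Bump OSD debug logging while reproducing the peering issue
ceph tell osd.* injectargs '--debug_osd 20'

# Drop back to the default once the logs are captured
ceph tell osd.* injectargs '--debug_osd 1/5'
```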
CEPH-1005: PG removal slowness. There is a new config to test this week on gabe for the PG removal issue. (Devs found that the RocksDB cache entries are not removed even after an index entry is deleted from RocksDB -- so eventually the effective RocksDB cache size drops to zero, which might explain why we see thrashing on the SSDs during removal.)
FILER-140: filer-carbon moving to an io2 volume -- all but one io1 volume have been removed from the zpool. No noticeable performance change, afaict.
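Since the note mentions zpool device removal, a generic sketch of that kind of migration follows; the pool and device names are hypothetical:

```
# Add the new io2-backed disk, then evacuate a remaining io1 disk
zpool add carbon /dev/vdd
zpool remove carbon /dev/vdb

# Device removal runs in the background; check progress here
zpool status carbon
```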