To enable an iCal export link, your account needs an API key. This key lets other applications access data in Indico through the link provided, even when you are not using or logged into Indico yourself. Once created, you can manage your key at any time by going to 'My Profile' and looking under the tab entitled 'HTTP API'. Further information about HTTP API keys can be found in the Indico documentation.
In addition to having an API key associated with your account, exporting private event information requires the use of a persistent signature. This enables API URLs that do not expire after a few minutes, so while the setting is active, anyone in possession of the link provided can access the information. For this reason, it is extremely important that you keep these links private and for your own use only. If you think someone else may have acquired access to a link using this key, immediately create a new key pair on the 'My Profile' page under the 'HTTP API' tab and update your iCalendar links afterwards.
Permanent link for public information only:
Permanent link for all public and protected information:
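As a sketch of how such links are produced, the snippet below builds a signed Indico HTTP API URL following Indico's documented HMAC-SHA1 request-signing scheme: the query parameters (including the API key, and a timestamp for non-persistent requests) are sorted, the path plus query string is signed with the secret key, and the hex digest is appended as the `signature` parameter. The function name and the example path/parameters are illustrative, not part of Indico's API.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def build_signed_url(path, params, api_key, secret_key, persistent=True):
    """Build a signed Indico HTTP API URL (illustrative sketch).

    With persistent=True no timestamp is included, so the resulting
    link does not expire -- which is why such links must be kept private.
    """
    items = dict(params)
    items['apikey'] = api_key
    if not persistent:
        # Non-persistent signatures include a timestamp and expire shortly after.
        items['timestamp'] = str(int(time.time()))
    # Parameters are sorted alphabetically before signing.
    query = urlencode(sorted(items.items()))
    signature = hmac.new(secret_key.encode(),
                         f'{path}?{query}'.encode(),
                         hashlib.sha1).hexdigest()
    return f'{path}?{query}&signature={signature}'
```

For example, `build_signed_url('/export/event/716743.ics', {'detail': 'events'}, my_key, my_secret)` would yield a stable, shareable (and therefore sensitive) URL; regenerating the key pair invalidates every link signed with the old secret.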
Hops Hadoop, Hopsworks and Q&A with Guest Speaker (1h)
This session follows up on the morning Computing seminar; see https://indico.cern.ch/event/716743/
Hops is a drop-in replacement for Hadoop that can scale the Hadoop Distributed File System (HDFS) to over 1 million ops/s by migrating the NameNode metadata to an external scale-out in-memory database. This talk will introduce recent improvements in HopsFS: storing small files in the database (in both in-memory and on-SSD tables), a new scalable block-reporting protocol, support for erasure coding with data locality, and work on multi-data-center replication. For small files (under 64-128 KB), HopsFS can reduce read latency to under 10 ms, while also improving read throughput by 3-4X and write throughput by more than 15X. Our new block-reporting protocol reduces block-reporting traffic by up to 99% for large clusters, at the cost of a small increase in metadata, while our solution for erasure coding is implemented at the block level, preserving data locality. Finally, our ongoing work on geographic replication points a way forward for HDFS in the cloud, providing data-center-level high availability without any performance hit.
One novel aspect of Hops we will discuss is its use of TLS certificates as an alternative authentication/authorization mechanism to Kerberos. Apart from the improved scalability of certificate managers compared to the Kerberos KDC, certificates make it possible to support multi-tenancy and to integrate more easily with devices/clients in external administrative domains. Finally, we will discuss operational support for Hops, and how it supports new features such as Anaconda, Spark, Hive, and TensorFlow.
(KTH Royal Institute of Technology in Stockholm)