To enable an iCal export link, your account needs to have an API key created. This key allows other applications to access data from Indico via the link provided, even while you are not using or logged into Indico yourself. Once created, you can manage your key at any time by going to 'My Profile' and looking under the tab entitled 'HTTP API'. Further information about HTTP API keys can be found in the Indico documentation.
In addition to having an API key associated with your account, exporting private event information requires the use of a persistent signature. This enables API URLs which do not expire after a few minutes, so while the setting is active, anyone in possession of the link provided can access the information. For this reason, it is extremely important that you keep these links private and for your use only. If you think someone else may have acquired access to a link using this key, you must immediately create a new key pair on the 'My Profile' page under the 'HTTP API' tab and update the iCalendar links afterwards.
Permanent link for public information only:
Permanent link for all public and protected information:
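Under the hood, such links follow Indico's HTTP API request-signing scheme: the query parameters (including the API key) are sorted, the path plus query string is signed with HMAC-SHA1 using the secret key, and the signature is appended as a final parameter. Leaving out the expiring timestamp parameter is what makes the link persistent. The sketch below illustrates this signing step under those assumptions; exact parameter handling may differ between Indico versions, and the key values shown are placeholders, not real credentials.

```python
import hashlib
import hmac
from urllib.parse import urlencode

def build_signed_url(path, params, api_key, secret_key):
    """Sketch of building a persistently signed Indico HTTP API URL.

    Assumed scheme: parameters (with the API key added) are sorted
    alphabetically, the path plus query string is signed with HMAC-SHA1
    using the secret key, and the hex signature is appended last.
    Omitting a 'timestamp' parameter makes the link non-expiring, which
    is why persistent links must be kept private.
    """
    items = sorted(list(params.items()) + [("apikey", api_key)])
    payload = f"{path}?{urlencode(items)}"
    signature = hmac.new(secret_key.encode(), payload.encode(),
                         hashlib.sha1).hexdigest()
    return f"{payload}&signature={signature}"

# Placeholder key pair, for illustration only.
url = build_signed_url("/export/event/123.ics", {"detail": "events"},
                       "00000000-0000-0000-0000-000000000000", "secret")
```

Because the signature is deterministic for a given key pair, the resulting URL never changes until a new key pair is generated, at which point previously issued links stop validating.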
Please contact email@example.com if you have specific topics you want to discuss, so that we can better organize the discussion and time for Q&A.
Additional questions and follow-up from the morning's computing seminar.
Discussion on topics regarding integrating Spark with Python, performance and usability - including ideas on further use of Arrow integration to pass data from Spark to the Coffea framework developed at FNAL (Lindsey Gray, Andrew Melo, CMS)
Drill-down on performance and Spark+Parquet in the context of speeding up data extraction for the Spark-based framework developed for the NXCALS project. Several optimizations have been tested or are in the pipeline so far (including sorting by timestamp, partitioning, and splitting into multiple files). There is interest in understanding the roadmap and current work in this area from the Spark and open source communities, which could help with further tuning of the platform (Jakub Wozniak, BE-CO).
Interest in discussing Spark Structured Streaming, its evolution in Spark 3, integration with Kafka, and a possible Kafka client upgrade to 2.0 (from the IT-CM monitoring team)
Possible interest from the team working on Kubernetes and Kubeflow (Ricardo Brito Da Rocha, IT-CM)
Possible interest from the team working on SWAN, which integrates Spark with Jupyter and is developing distributed processing for ROOT with Spark (Enric Tejedor Saavedra, EP-SFT)
(CERN), Lindsey Gray (Fermi National Accelerator Lab. (US))