Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success on industrial speech, text, and vision benchmarks. Consequently, it has evolved into a central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input.
The main obstacle to learning deep neural networks is the vanishing gradient problem. A vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, and therefore limits model selection. Major advances in Deep Learning can be attributed to techniques that avoid the vanishing gradient, such as stacking, ReLUs, residual networks, highway networks, and LSTM.
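The mechanism behind the vanishing gradient, and why residual connections counteract it, can be sketched numerically. The toy setup below is an illustrative assumption, not the abstract's own experiment: it multiplies the local sigmoid derivatives of a 50-layer chain (each factor is at most 0.25, so the product collapses), while a residual layer computes x + f(x), whose derivative 1 + f'(x) keeps an identity path open for the gradient.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 50-layer chain; for simplicity every layer sees the
# same pre-activation value x, so each local derivative is identical.
depth = 50
x = 0.5

grad_plain = 1.0     # gradient through a plain sigmoid stack
grad_residual = 1.0  # gradient through a residual (skip-connected) stack
for _ in range(depth):
    local = sigmoid(x) * (1.0 - sigmoid(x))  # sigmoid derivative, <= 0.25
    grad_plain *= local            # plain chain: small factors multiply away
    grad_residual *= 1.0 + local   # residual: d/dx [x + f(x)] = 1 + f'(x)

print(f"plain 50-layer gradient:    {grad_plain:.3e}")    # vanishes
print(f"residual 50-layer gradient: {grad_residual:.3e}")  # stays >= 1
```

The plain product drops below 1e-30, so the first layers receive essentially no learning signal, while the residual path preserves a usable gradient; this is the effect the techniques listed above exploit.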
For Deep Learning, we suggested self-normalizing neural networks (SNNs), which automatically avoid the vanishing gradient. In unsupervised Deep Learning, generative adversarial networks (GANs) excel at generating realistic images, outperforming all previous approaches. We proved that a two time-scale update rule for training GANs converges, under mild assumptions, to a local Nash equilibrium. For deep reinforcement learning, we introduced a new approach to learning long-delayed rewards, for which methods that estimate value functions, like temporal difference, Monte Carlo, or Monte Carlo Tree Search, fail.
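The self-normalizing property of SNNs comes from the SELU activation: with fixed constants alpha and scale, activations propagated through appropriately initialized layers are pushed toward zero mean and unit variance, which keeps gradients well-scaled. A minimal sketch follows; the layer width, depth, and batch size are arbitrary choices for this demonstration, while the two constants are the fixed values from the SNN formulation.

```python
import numpy as np

# Fixed SELU constants (alpha, scale) from self-normalizing networks.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit."""
    return SCALE * np.where(x > 0.0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal((2000, n))  # standard-normal inputs
for _ in range(20):
    # SNN initialization: zero-mean weights with variance 1/n
    w = rng.standard_normal((n, n)) / np.sqrt(n)
    x = selu(x @ w)

# Activations remain roughly zero-mean, unit-variance after 20 layers,
# which is the self-normalizing fixed point of SELU.
print(f"mean after 20 layers: {x.mean():.3f}, std: {x.std():.3f}")
```

Because the mean/variance fixed point is attracting, no batch normalization is needed; depth alone does not cause the activations, and hence the gradients, to vanish or explode.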
Current applications of Deep Learning in physics comprise the analysis of ATLAS data, e.g. to identify measurements of the Higgs boson, quantum chemistry, energy prediction without the Schrödinger equation and wave functions, and quantum state classification. Conversely, methods from physics are used to describe Deep Learning systems. The Fokker-Planck equation describes the behavior of stochastic gradient descent, which finds flat minima in error surfaces. We use electric field equations to define a new GAN objective, which can be proven via the continuity equation to have a single (global) Nash equilibrium.