Third HSE-Yandex autumn school on generative models
from Tuesday 23 November 2021 (09:45) to Friday 26 November 2021 (22:00)
Tuesday 23 November 2021
09:45 - 10:00
Welcome - Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
Room: Princeton
10:00 - 11:20
Generative Models Intro - Denis Derkach (National Research University Higher School of Economics (RU))
Room: Princeton
11:20 - 11:40
Coffee Break
Room: Princeton
11:40 - 13:00
Introduction to Generative Models. Practice
Room: Princeton
13:00 - 14:00
Lunch
14:00 - 15:20
GANs Introduction - Denis Derkach (National Research University Higher School of Economics (RU))
Room: Princeton
15:20 - 15:40
Tea Break
Room: Princeton
15:40 - 17:00
Generative Adversarial Networks. Practice
Room: Princeton
Wednesday 24 November 2021
10:00 - 11:20
Diffusion Generative Models-1
Room: Princeton
11:20 - 11:40
Coffee Break
Room: Princeton
11:40 - 13:00
Diffusion Generative Models-1. Practice
Room: Princeton
13:00 - 14:00
Lunch
14:00 - 15:20
Diffusion Generative Models-2
Room: Princeton
Contributions
14:00
Generative adversarial networks - Artem Maevskiy (HSE University)
15:20 - 15:40
Tea Break
Room: Princeton
15:40 - 17:00
Diffusion Generative Models-2. Practice
Room: Princeton
Thursday 25 November 2021
10:00 - 11:20
Efficient sampling methods-1
Room: Princeton
11:20 - 11:40
Coffee Break
Room: Princeton
11:40 - 13:00
Efficient sampling methods-2
Room: Princeton
13:00 - 14:00
Lunch
14:00 - 15:20
Efficient sampling methods-3
Room: Princeton
15:20 - 15:40
Tea Break
Room: Princeton
15:40 - 17:00
Efficient sampling methods-4
Room: Princeton
Friday 26 November 2021
10:00 - 10:30
Monte Carlo Variational Auto-Encoders - Maxim Panov
Room: Princeton
10:30 - 11:00
Survey of methods of k-means clustering with optimal transport - Alexandra Suvorikova
Room: Princeton
TBA
11:00 - 11:30
RICH GAN - Sergey Mokhnenko
Room: Princeton
11:30 - 12:10
The Robustness of Deep Networks: A Geometrical Perspective - Evgenii Burnaev
Room: Princeton
12:10 - 12:30
Coffee Break
Room: Princeton
12:30 - 13:00
Resolution-robust Large Mask Inpainting with Fourier Convolutions - Roman Suvorov, Aleksei Silvestrov
Room: Princeton
13:00 - 13:30
Problems with Deep Learning: new AI winter or new synthesis? - Mikhail Burtsev
Room: Princeton
13:30 - 14:30
Lunch
14:30 - 15:00
Material generation with AI - Mikhail Lazarev
Room: Princeton
15:00 - 15:30
Uncertainty of Generative Models - Agatha Shishigina
Room: Princeton
15:30 - 15:50
Tea Break
Room: Princeton
15:50 - 16:10
Black-Box Optimization with Local Generative Surrogates - Sergey Shirobokov
Room: Princeton
16:10 - 16:30
Weather tuning via surrogates - Sergey Popov
Room: Princeton
16:30 - 17:10
Tackling the Challenge of Uncertainty Estimation and Robustness to Distributional Shift in Real-World applications - Andrey Malinin
Room: Princeton
While much research has been done on developing methods for improving robustness to distributional shift and uncertainty estimation, most of these methods were developed only for small-scale regression or image classification tasks. Limited work has examined developing standard datasets and benchmarks for assessing these approaches. Furthermore, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction. In this work, we propose the Shifts Dataset for evaluation of uncertainty estimates and robustness to distributional shift. The dataset, which has been collected from industrial sources and services, is composed of three tasks, with each corresponding to a particular data modality: tabular weather prediction, machine translation, and self-driving car (SDC) vehicle motion prediction. All of these data modalities and tasks are affected by real, 'in-the-wild' distributional shifts and pose interesting challenges with respect to uncertainty estimation. We hope that this dataset will enable researchers to meaningfully evaluate the plethora of recently developed uncertainty quantification methods, assessment criteria and baselines, and accelerate the development of safe and reliable machine learning in real-world risk-critical applications.

An additional challenge to uncertainty estimation in real-world tasks is that standard approaches, such as model ensembles, are computationally expensive. Ensemble Distribution Distillation (EnDD) is an approach that allows a single model to efficiently capture both the predictive performance and uncertainty estimates of an ensemble. Although theoretically principled, this work shows that the original Dirichlet log-likelihood criterion for EnDD exhibits poor convergence when applied to large-scale tasks where the number of classes is very high. Specifically, we show that in such conditions the original criterion focuses on the distribution of the ensemble tail-class probabilities rather than the probability of the correct and closely related classes. We propose a new training objective which resolves the gradient issues of EnDD and enables its application to tasks with many classes, as we demonstrate on the ImageNet, LibriSpeech, and WMT17 En-De datasets containing 1000, 5000, and 40,000 classes, respectively.
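For readers unfamiliar with the original EnDD criterion mentioned in the abstract, the following is a minimal sketch of the Dirichlet negative log-likelihood used to distill an ensemble into a single student model. It assumes a PyTorch setup; the function name, tensor shapes, and the exponential parameterisation of the concentrations are illustrative assumptions rather than details taken from the talk, and the improved objective proposed by the speakers is not reproduced here.

```python
import torch

def endd_dirichlet_nll(student_logits, ensemble_probs, eps=1e-8):
    """Dirichlet negative log-likelihood of ensemble predictions
    (a sketch of the original EnDD criterion).

    student_logits: [batch, num_classes] raw outputs of the student model.
    ensemble_probs: [batch, num_members, num_classes] categorical
                    predictions of the ensemble being distilled.
    """
    # Parameterise Dirichlet concentrations from the student's logits.
    alphas = torch.exp(student_logits).clamp_min(eps)      # [batch, K]
    alpha0 = alphas.sum(dim=-1)                            # precision, [batch]

    # Log of the Dirichlet normalising constant, per example.
    log_norm = torch.lgamma(alpha0) - torch.lgamma(alphas).sum(dim=-1)

    # (alpha_k - 1) * log pi_k, summed over classes, averaged over members.
    log_pi = torch.log(ensemble_probs.clamp_min(eps))      # [batch, M, K]
    log_lik = ((alphas.unsqueeze(1) - 1.0) * log_pi).sum(dim=-1).mean(dim=1)

    return -(log_norm + log_lik).mean()
```

With very many classes, most of the concentration parameters correspond to tail classes, which is the convergence issue the abstract describes and the proposed objective addresses.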
17:10 - 17:20
The end
Room: Princeton
18:00 - 20:30
School dinner