Introduction
Accurate climate prediction hinges on the ability to resolve multi-scale turbulent dynamics in the atmosphere and oceans [1]. An important mechanism of energy exchange between the ocean and the atmosphere is mesoscale turbulence, which comprises motions with length scales of $\mathcal{O}(100\,\mathrm{km})$. Two-layer quasi-geostrophic (QG) simulations [2] are a popular technique for simulating these motions numerically because of their computational tractability. QG models strike a balance between computational cost and physical fidelity by resolving large-scale turbulent processes at the scale of the discretization grid and modeling smaller, subgrid-scale processes. However, the accuracy and generalizability of QG simulations depend strongly on the subgrid-scale model. Recently, machine learning has been introduced as a tool for learning effective stochastic subgrid-scale turbulence models [2].
In this work, we propose a novel approach for using geometric generative models to learn subgrid-scale closure schemes. By leveraging the geometric structure inherent in geophysical flows and the representational power of modern generative models, we aim to capture the essential physics of scale interactions more faithfully than conventional parameterizations, enabling more accurate and generalizable QG subgrid-scale models.
Problem Statement
Predict the closure term within a two-layer quasi-geostrophic fluid simulation of an idealized ocean system using a generative approach.
Given a potential vorticity field $q$, its horizontal geostrophic velocity field $u$, and a coarse-graining operator $\overline{(\cdot)}$, the task is to predict the unresolved subgrid forcing from the coarsely resolved (low-pass filtered) fields:
$S_q = \overline{(u \cdot \nabla q)} - (\,\overline{u} \cdot \nabla \overline{q}\,)$
where $\overline{u}$ and $\overline{q}$ denote the coarsely resolved fields.
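The definition above can be made concrete with a minimal sketch, not the authors' code: it computes $S_q$ on a doubly periodic grid, using a sharp spectral truncation as an illustrative choice of coarse-graining operator and centered differences for the advection term.

```python
import numpy as np

def lowpass(f, keep):
    """Coarse-grain a 2D periodic field by zeroing Fourier modes above wavenumber `keep`."""
    F = np.fft.fft2(f)
    n0, n1 = f.shape
    kx = np.fft.fftfreq(n0) * n0
    ky = np.fft.fftfreq(n1) * n1
    mask = (np.abs(kx)[:, None] <= keep) & (np.abs(ky)[None, :] <= keep)
    return np.real(np.fft.ifft2(F * mask))

def advection(u, v, q, dx):
    """u . grad q with centered differences on a periodic grid (dx == dy assumed)."""
    dqdx = (np.roll(q, -1, axis=0) - np.roll(q, 1, axis=0)) / (2 * dx)
    dqdy = (np.roll(q, -1, axis=1) - np.roll(q, 1, axis=1)) / (2 * dx)
    return u * dqdx + v * dqdy

def subgrid_forcing(u, v, q, dx, keep):
    """S_q = bar(u . grad q) - (ubar . grad qbar)."""
    filtered_full = lowpass(advection(u, v, q, dx), keep)
    coarse = advection(lowpass(u, keep), lowpass(v, keep), lowpass(q, keep), dx)
    return filtered_full - coarse
```

Note that $S_q$ arises from the nonlinearity of advection: filtering does not commute with the product $u \cdot \nabla q$, and the residual is exactly the quantity the generative model must predict.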
Method
Variational autoencoders (VAEs) [3] have been shown to encode images into compressed representations and to serve as powerful generative models for images. Various works in the literature have incorporated group symmetries into VAEs. In this work, we propose an equivariant encoder-decoder framework that maintains the training simplicity of standard VAEs while enabling the generation of samples that respect the underlying group symmetries, built using the e3nn framework [4]. Our approach incorporates adjustable sensitivity to the equivariance constraints, allowing flexible control over the strictness of symmetry adherence during learning.
Dataset
Ground-truth training data are generated on a $256 \times 256 \times 2$ grid in pyqg ($256 \times 256$ horizontal resolution, 2 layers), which we treat as 'high-resolution' ground truth. For more details, see [5].
Results
The figure compares the performance of a standard VAE (without group symmetries) against our model with different group symmetries (C4, SO(3)), each with and without the regularized equivariant loss.
References
[1] Brunton, Steven L., et al. "Machine learning for fluid mechanics." Annual Review of Fluid Mechanics, 2020.
[2] Perezhogin, Pavel, et al. "Generative data-driven approaches for stochastic subgrid parameterizations in an idealized ocean model." Journal of Advances in Modeling Earth Systems, 2023.
[3] Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational Bayes." 2013.
[4] Geiger, Mario, et al. "e3nn: Euclidean neural networks." 2022.
[5] Ross, Andrew, et al. "Benchmarking of Machine Learning Ocean Subgrid Parameterizations in an Idealized Model." Journal of Advances in Modeling Earth Systems, 2023.