Speaker
Dr. Mahsa Taheri Ganjhobadi (Postdoc, University of Hamburg)
Description
While physical systems are often described in high-dimensional spaces, they frequently exhibit hidden low-dimensional structure. A powerful way to exploit this characteristic is through sparsity. In this talk, we explore the role of sparsity in neural networks in two key contexts: (1) generative models, particularly diffusion models, where we demonstrate how sparsity can accelerate the sampling process; and (2) structured sparsity, at the level of connections, nodes, and layers, where we analyze its impact on the generalization error of neural networks. We also discuss how concepts of sparsity in machine learning extend to physics, uncovering deep connections between the two fields.
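The three levels of structured sparsity named above can be made concrete with a short sketch. The snippet below is a minimal illustration, not the speakers' method: it uses PyTorch's standard pruning utilities for connection- and node-level sparsity, and a hypothetical gated residual block (GatedBlock) for layer-level sparsity; all widths and pruning fractions are arbitrary choices for demonstration.

```python
# Minimal sketch (not the talk's method): structured sparsity at the
# three levels named in the abstract, via PyTorch's pruning utilities.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
layer = nn.Linear(16, 8)  # arbitrary widths, for illustration only

# (1) Connection-level sparsity: zero out individual weights
#     (unstructured L1 pruning of half the entries).
prune.l1_unstructured(layer, name="weight", amount=0.5)

# (2) Node-level sparsity: zero out entire rows of the weight matrix,
#     i.e. remove whole output neurons (structured pruning along dim=0).
prune.ln_structured(layer, name="weight", amount=0.25, n=2, dim=0)

sparsity = (layer.weight == 0).float().mean().item()
print(f"fraction of zero weights: {sparsity:.2f}")

# (3) Layer-level sparsity: skip an entire layer, e.g. via a residual
#     branch whose scalar gate can be driven to zero.
class GatedBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.fc = nn.Linear(width, width)
        self.gate = nn.Parameter(torch.ones(1))  # 0 => layer is pruned

    def forward(self, x):
        return x + self.gate * torch.relu(self.fc(x))

block = GatedBlock(8)
block.gate.data.zero_()          # prune the whole layer
x = torch.randn(4, 8)
assert torch.equal(block(x), x)  # block now acts as the identity
```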
Author
Dr. Mahsa Taheri Ganjhobadi (Postdoc, University of Hamburg)
Co-author
Prof. Johannes Lederer