Geometric deep learning exploits the symmetry and transformation properties of the input and output spaces to design neural network architectures. By building invariance or equivariance into the model, the resulting architecture can be easier to train, generalize better, and use fewer parameters. Recent equivariant architectures are competitive in compute cost, but open questions remain about the trade-off between enforcing exact symmetries and relying on a softer inductive bias toward simpler models.
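
To make the distinction between invariance and equivariance concrete, the following is a minimal sketch for the permutation group acting on a set of feature vectors, in the style of a Deep Sets layer; the function names and parameters are illustrative assumptions, not a specific architecture discussed here.

```python
# Minimal sketch (illustrative, assumed example): a permutation-equivariant
# layer followed by a permutation-invariant readout.
import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(x, w_self, w_agg):
    """Each set element is updated from itself and from a symmetric
    aggregate of the whole set, so permuting the input permutes the output."""
    agg = x.mean(axis=0, keepdims=True)        # symmetric in the set ordering
    return np.tanh(x @ w_self + agg @ w_agg)   # same update applied to every element

def invariant_readout(x):
    """Sum pooling discards the ordering entirely."""
    return x.sum(axis=0)

n, d = 5, 3
x = rng.normal(size=(n, d))
w_self = rng.normal(size=(d, d))
w_agg = rng.normal(size=(d, d))
perm = rng.permutation(n)

# Equivariance: permuting the input permutes the output in the same way.
assert np.allclose(equivariant_layer(x, w_self, w_agg)[perm],
                   equivariant_layer(x[perm], w_self, w_agg))

# Invariance: the pooled representation is unchanged by the permutation.
assert np.allclose(invariant_readout(equivariant_layer(x, w_self, w_agg)),
                   invariant_readout(equivariant_layer(x[perm], w_self, w_agg)))
```

Because the symmetry is built into the layer rather than learned from data, the model does not need to see permuted copies of its inputs to behave consistently on them, which is one source of the parameter and generalization gains mentioned above.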