Statistical inference is a key part of any HEP analysis. Today, there is a multitude of Python libraries (zfit, pyhf, iminuit, hepstats, ComPWA and more) specifically designed to cover different corners of the spectrum. Amongst them, zfit emerged years ago with the goal of addressing two key needs: a standardized interface and a performant backend. This talk will give an overview of the different fits that we do in HEP and the libraries that cover them, outlining their niche and raison d'être. We will discuss how the ecosystem, from our perspective, should move forward and what the landscape should look like in the future. Two important topics will be covered: the standardization of interfaces for models and minimizers, including HS3 (the HEP Statistics Serialization Standard), and the choice of backends for performance (TensorFlow, JAX, NumPy, Numba, SymPy, Aesara).