Speaker
Description
The Python HEP analysis ecosystem and its user base have grown significantly in the last few years, and with them the need for advanced statistical inference tools built around likelihood fits, a core part of most HEP analyses.
zfit was started over five years ago with the goal of providing this capability: a library for model fitting in HEP that is scalable, in terms of model-building complexity and performance, and pythonic, well integrated into the Python ecosystem.
After many iterations with users and a long development process, zfit has reached a stage of maturity.
In this talk, we will go over the extensive feature set of zfit: binned and unbinned fits, extensive model building including the ability to create custom models, advanced likelihood building, weighted fits, and a variety of available minimizers. Thanks to its modern NumPy-like backend, TensorFlow, with just-in-time compilation and the ability to run on CPUs and GPUs, zfit is highly performant. zfit is also well embedded into the Scikit-HEP ecosystem and beyond: it integrates seamlessly with tools for data loading, plotting, and further statistical inference, and allows libraries that build sophisticated models, such as ComPWA, to use zfit for statistical inference.
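For illustration, a minimal sketch of an unbinned fit with zfit; the toy data, observable range, and parameter values are chosen here purely for the example:

```python
import numpy as np
import zfit

# Observable space and toy data (hypothetical, for illustration only)
obs = zfit.Space("x", limits=(-5, 5))
data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(0.0, 1.0, size=10_000))

# Model: a Gaussian with free mean and width
mu = zfit.Parameter("mu", 0.5, -1, 1)
sigma = zfit.Parameter("sigma", 1.2, 0.1, 5)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

# Unbinned negative log-likelihood, minimized with Minuit
nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(nll)
result.hesse()  # parameter uncertainties
print(result)
```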
| Requested talk length | 20 |
| --- | --- |