Description
The core driving force behind the data-intensive nature of research in particle physics is the level of statistical precision achievable in measuring the parameters of the Standard Model. The RooFit and RooStats data modelling packages are the gateway between the complex data-processing capabilities provided by ROOT and the precise statistical inference tools required to probe the nature of the Standard Model.
Typical modern LHC analyses are described at the likelihood level by a statistical model containing a large number of signal, control and validation regions. To streamline this model-building process, several higher-level languages have been developed that allow users to bypass many of the statistical concepts of the underlying RooFit probability models. However, given the complexity of such models, users need a good grasp of both high-level and low-level model-building concepts and their interaction with the RooStats statistical analysis tools in order to understand the computational performance of the fit, to validate its proper convergence, and to successfully validate the final physics results.
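The statistical concept underlying such signal-and-control-region models can be illustrated without any specific tool. Below is a minimal sketch in plain Python (deliberately not RooFit itself, and with invented example numbers: signal yield s = 10, transfer factor tau = 2, observed counts 25 and 40) of a single-bin counting experiment whose background is constrained by a control region; the nuisance parameter b is profiled out by a grid minimisation, which is conceptually what the fitting machinery does at much larger scale.

```python
import math

def nll(mu, b, *, s=10.0, tau=2.0, n_sr=25, n_cr=40):
    """Poisson negative log-likelihood (additive constants dropped) for a
    signal region with expectation mu*s + b and a control region with
    expectation tau*b.  All numbers here are illustrative, not from data."""
    exp_sr = mu * s + b
    exp_cr = tau * b
    return (exp_sr - n_sr * math.log(exp_sr)) + (exp_cr - n_cr * math.log(exp_cr))

def profile_nll(mu, b_grid):
    """Profile out the nuisance parameter b by minimising over a grid."""
    return min(nll(mu, b) for b in b_grid)

b_grid = [0.1 * i for i in range(1, 400)]    # b in (0, 40)
mu_grid = [0.01 * i for i in range(0, 201)]  # signal strength mu in [0, 2]
mu_hat = min(mu_grid, key=lambda mu: profile_nll(mu, b_grid))
print(f"best-fit signal strength: mu_hat = {mu_hat:.2f}")
```

With these example counts the control region fixes b near n_cr/tau = 20, so the profiled fit lands at mu_hat = 0.5, matching the analytic expectation (n_sr - n_cr/tau)/s.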
We report on an effort to set up a new documentation project that blends descriptions of both high-level and low-level modelling tools with links and branches to general descriptions of the underlying statistical methods, and with links to 'good practices' of model building that have emerged from LHC analyses. An important feature of this comprehensive documentation, which covers statistical methods as well as software tools, is that all code examples, both in the statistics text and in the tool documentation, are interactive through the CERN SWAN service. Centrally maintained documentation such as this ensures that all statistical tools benefit from the latest improvements in ROOT performance, that new features and tools are more easily incorporated into the core functionality, and that issues may be tracked more efficiently.