We give an overview of common statistical issues encountered in the heavy-flavour physics experiments LHCb and Belle. The focus is on limit setting in searches, the use of weighted events in inference (as computed with the sPlot technique), and the handling of systematic uncertainties.
The high luminosity and large cross sections enjoyed by the LHC experiments mean that statistical errors are minimal, and the rigorous treatment of systematic errors becomes very important - an area which lacks the "safety net" of chi-squared and other goodness-of-fit measures. This entails including all uncertainties, estimating them properly, and not inflating the error by including the...
I will discuss aspects of the frequentist and Bayesian approaches to testing a point null hypothesis (say mu=0) versus a continuous alternative hypothesis (say mu>0). This test arises frequently in particle physics, where mu is the signal strength of a previously unobserved signal (within or beyond the Standard Model). The frequentist testing approach maps identically onto the frequentist...
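As a minimal illustration (not taken from the talk itself), the simplest instance of this test is a Poisson counting experiment with known background, where the one-sided significance for rejecting mu=0 follows from the standard asymptotic q0 statistic; the event counts below are invented for the example:

```python
import math

def discovery_significance(n_obs, b):
    """One-sided significance Z for testing mu = 0 against mu > 0 in a
    Poisson counting experiment with known expected background b, using
    the asymptotic statistic q0 = 2*(n*ln(n/b) - (n - b)) and Z = sqrt(q0)."""
    if n_obs <= b:  # downward fluctuations carry no evidence for mu > 0
        return 0.0
    q0 = 2.0 * (n_obs * math.log(n_obs / b) - (n_obs - b))
    return math.sqrt(q0)

# e.g. 15 events observed over an expected background of 5:
z = discovery_significance(15, 5.0)   # about 3.6 sigma
```

The one-sided character of the alternative (mu>0) is what motivates setting Z to zero for deficits; a two-sided test would treat them differently.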
Interval estimation is one of the most common types of inference practiced by experimentalists, and the flavour sector is no less interested in it than high-pT physics. In fact, the physics of flavour brings to the table some quite interesting problems, with complex multi-dimensional parameter spaces, non-linearities, and significant systematic effects. While much has been written on the...
The ATLAS and CMS collaborations have produced numerous results during the first two data-taking runs of the LHC, ranging from precision measurements of SM processes to searches for exotic phenomena and the discovery of the Higgs boson. These results make use of (often complex) statistical techniques, both for the publications and during the development and review of the data analysis. In this...
In any experimental science, the knowledge available on a given phenomenon is formalized into a statistical model. The latter encapsulates our understanding of its nature and its properties, as well as our uncertainties. Experimental measurements are then collected and statistical hypothesis tests are used to answer the important question: is our model valid? As a result, a variety of tests...
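The simplest such validity test is Pearson's chi-squared comparison of binned data with model expectations; a minimal sketch, with bin contents invented for the example:

```python
def chi2_statistic(observed, expected):
    """Pearson chi-squared statistic comparing binned observed counts
    to the expectations of a statistical model."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Four bins, model predicting 50 events in each:
obs = [52, 48, 61, 39]
exp = [50.0, 50.0, 50.0, 50.0]
chi2 = chi2_statistic(obs, exp)
# Compare to the chi-squared critical value for the relevant number of
# degrees of freedom, e.g. 7.81 for 3 dof at 95% CL: here chi2 = 5.0,
# so the data show no evidence against the model.
```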
A brief introduction to bootstrap estimates of accuracy; this talk does not assume familiarity with the topic. Bootstrap standard errors and confidence intervals are described using a small but genuine data set.
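The core idea of the bootstrap can be sketched in a few lines: resample the data with replacement many times, recompute the statistic on each resample, and take the spread of those replicates as the standard error. The data below are invented for the example (they are not the talk's data set):

```python
import random

def bootstrap_se(data, statistic, n_boot=2000, seed=1):
    """Bootstrap standard error of `statistic` (any function of a sample),
    estimated from n_boot resamples drawn with replacement."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        reps.append(statistic(resample))
    mean_rep = sum(reps) / n_boot
    var = sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1)
    return var ** 0.5

def sample_mean(xs):
    return sum(xs) / len(xs)

data = [4.1, 5.2, 6.3, 4.8, 5.9, 5.0, 4.4, 6.1]
se = bootstrap_se(data, sample_mean)
```

For the mean, the bootstrap answer should land close to the familiar s/sqrt(n); the method's appeal is that the same recipe works for statistics with no such closed-form error.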
When an observable density is a superposition of signal and background PDFs that each factorise in a "discriminant" and a "control" variable, sWeights allow one to determine the signal density in the control variable using information from only the discriminant variable.
After reviewing the basics of the method and casting the formalism into the framework of orthogonal functions, the talk...
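A minimal numeric sketch of the classic sPlot construction, using an invented toy with a Gaussian signal and flat background in the discriminant variable; in a real analysis the yields would come from an extended maximum-likelihood fit to that variable, whereas here we simply plug in the true values:

```python
import math
import random

def sweights(masses, f_sig, f_bkg, n_sig, n_bkg):
    """Classic sPlot signal weights from fitted yields (n_sig, n_bkg) and
    normalised discriminant-variable PDFs f_sig, f_bkg."""
    # total density per event
    dens = [n_sig * f_sig(m) + n_bkg * f_bkg(m) for m in masses]
    # elements of the 2x2 matrix M_ij = sum_e f_i(m_e) f_j(m_e) / d_e^2
    a = sum(f_sig(m) ** 2 / d ** 2 for m, d in zip(masses, dens))
    b = sum(f_sig(m) * f_bkg(m) / d ** 2 for m, d in zip(masses, dens))
    c = sum(f_bkg(m) ** 2 / d ** 2 for m, d in zip(masses, dens))
    det = a * c - b * b
    V_ss, V_sb = c / det, -b / det   # elements of the inverse, V = M^-1
    return [(V_ss * f_sig(m) + V_sb * f_bkg(m)) / d
            for m, d in zip(masses, dens)]

# Toy sample: 400 signal events (Gaussian peak) plus 600 flat background
random.seed(42)
mlo, mhi = 5.0, 5.5
masses = ([random.gauss(5.28, 0.02) for _ in range(400)]
          + [random.uniform(mlo, mhi) for _ in range(600)])

def f_sig(m):
    return math.exp(-0.5 * ((m - 5.28) / 0.02) ** 2) / (0.02 * math.sqrt(2 * math.pi))

def f_bkg(m):
    return 1.0 / (mhi - mlo)

w = sweights(masses, f_sig, f_bkg, 400, 600)
```

Two characteristic properties: events under the peak get weights near one, events far from it get small negative weights, and the weights sum (approximately, for fixed true yields) to the signal yield.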
Fits of weighted events, for example to correct for acceptance effects or to statistically subtract background events using sWeights, have recently seen increasing use in the flavour physics community. This talk will discuss the determination of parameters and their uncertainties using weighted events, with particular focus on unbinned fits of weighted data.
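One widely used prescription for the uncertainty on a parameter estimated from weighted events (not necessarily the one advocated in the talk) rescales the naive error by the effective sample size n_eff = (sum w)^2 / sum w^2; a minimal sketch for the weighted mean of a control variable:

```python
def weighted_mean_and_error(values, weights):
    """Weighted mean of a control variable with the sum(w^2)-based
    uncertainty commonly applied to sWeighted (or acceptance-weighted) data."""
    sw = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / sw
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values)) / sw
    sw2 = sum(w * w for w in weights)
    # error = sqrt(var / n_eff) with n_eff = sw^2 / sw2
    err = (var * sw2) ** 0.5 / sw
    return mean, err

# invented example values and (sWeight-like) event weights
vals = [0.21, 0.35, 0.12, 0.44, 0.28]
wts = [0.9, 1.1, -0.1, 0.8, 1.05]
m, e = weighted_mean_and_error(vals, wts)
```

With unit weights this reduces to the usual unweighted mean and its standard error; negative sWeights enter the sums just like positive ones.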
Global fits are an indispensable tool in the search for New Physics (NP). On the one hand, they can provide interpretations of measurements that deviate from Standard Model (SM) predictions, and on the other hand, they allow the wealth of experimental data to be used for testing the viability of NP models. Being "global" means that these fits include hundreds of observables, whose theoretical...