Felice Pantaleo (University of Pisa (IT))
Data analyses based on the evaluation of likelihood functions are commonly used in the high-energy physics community for fitting statistical models to data samples. The likelihood functions require the evaluation of several probability density functions on the data, which is accomplished using loops. The standard accuracy for these evaluations is double precision floating point. The probability density functions require the evaluation of several transcendental functions (mainly exponentials and square roots). Therefore, fast evaluation of the likelihood functions can be achieved either by a faster execution of the transcendental expressions or by vectorizing the loops. The former can be achieved by reducing the numerical accuracy, i.e. using single precision floating point or, in general, less accurate functions. The latter requires special techniques to vectorize the transcendental functions. The impact of this optimization can nevertheless be significant, particularly in the future as vector units become wider. Several compilers offer the possibility to apply auto-vectorization and various floating point optimizations. We will show results obtained with different compilers on different hardware systems for several probability distribution functions.
Vincenzo Innocente (CERN)