21-25 May 2012
New York City, NY, USA
US/Eastern timezone

Numerical accuracy and auto-vectorization of probability density functions used in high energy physics

24 May 2012, 13:30
4h 45m
Rosenthal Pavilion (10th floor) (Kimmel Center)

Poster · Software Engineering, Data Stores and Databases (track 5) · Poster Session


Speaker

Felice Pantaleo (University of Pisa (IT))


Data analyses based on the evaluation of likelihood functions are commonly used in the high-energy physics community to fit statistical models to data samples. The likelihood functions require the evaluation of several probability density functions on the data, which is done inside loops, with double precision floating point as the standard accuracy. The probability density functions involve several transcendental functions (mainly exponentials and square roots). Faster evaluation of the likelihood functions can therefore be achieved either by speeding up the transcendental expressions or by vectorizing the loops. The former can be done by reducing the numerical accuracy, i.e. using single precision floating point or, in general, less accurate functions. The latter requires special techniques to vectorize the transcendental functions. The impact of this optimization can nonetheless be significant, particularly in the future as vector units become wider. Several compilers offer auto-vectorization and various floating point optimizations. We will show results obtained with different compilers on different hardware systems for several probability density functions.

Primary authors

Dr Alfio Lazzaro (CERN openlab)
Andrzej Nowak (CERN)
Felice Pantaleo (University of Pisa (IT))
Julien Leduc
Mr Sverre Jarp (CERN)
Yngve Sneen Lindal (Norwegian University of Science and Technology (NO))

