Speaker
Domenico Giordano
(CERN)
Description
Performance measurements and monitoring are essential for the efficient use of computing resources, as they allow the most effective resources for a given processing workflow to be selected and validated. In a commercial cloud environment, exhaustive resource profiling has additional benefits due to the intrinsic variability of a virtualised environment. In this context, resource profiling via initial benchmarking makes it possible to quickly identify issues and mitigate them. Ultimately, it provides additional information for comparing the presumed performance of the invoiced resources with the performance actually delivered, as perceived at the client level.
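As a minimal illustration of such an initial acceptance check (not the procedure described in the report), the sketch below runs a short CPU loop on a freshly provisioned VM and flags it when the measured score falls below the value expected for the purchased flavour; the benchmark, the expected score, and the tolerance are all hypothetical placeholders.

```python
import time

# Hypothetical nominal score for the invoiced flavour (arbitrary units) and
# acceptance threshold: the VM is flagged if it delivers less than 80% of it.
EXPECTED_SCORE = 10.0
TOLERANCE = 0.8


def quick_cpu_score(iterations: int = 5_000_000) -> float:
    """Tiny arithmetic loop; returns millions of iterations per second (higher is better)."""
    start = time.time()
    x = 0.0
    for i in range(iterations):
        x += (i % 7) * 0.5
    return iterations / (time.time() - start) / 1e6


if __name__ == "__main__":
    score = quick_cpu_score()
    if score < TOLERANCE * EXPECTED_SCORE:
        print(f"VM under-performing: {score:.2f} < {TOLERANCE * EXPECTED_SCORE:.2f} -> flag for mitigation")
    else:
        print(f"VM accepted with score {score:.2f}")
```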
In this report we will discuss the experience acquired in benchmarking commercial cloud resources during production activities, such as the recent ATLAS Monte Carlo production run on Helix Nebula cloud providers. The workflow put in place to collect and analyse performance metrics will be described. Results of the comparison study among commonly used benchmark metrics will also be reported. These benchmarks range from generic open-source benchmarks (encoding algorithms and kernel compilation) to experiment-specific benchmarks (ATLAS KitValidation) and fast benchmarks based on random number generation.
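For orientation only, the following is a minimal sketch in the spirit of a fast benchmark based on random number generation: it times the generation and summation of pseudo-random numbers and reports a throughput-style score. It is an illustrative assumption, not the exact metric compared in the report.

```python
import random
import time


def fast_random_benchmark(n_loops: int = 3, n_numbers: int = 1_000_000) -> float:
    """Time the generation and summation of pseudo-random numbers;
    return the best throughput over n_loops repetitions, in millions
    of numbers per second (higher is better)."""
    best = 0.0
    for _ in range(n_loops):
        start = time.time()
        total = 0.0
        for _ in range(n_numbers):
            total += random.random()
        elapsed = time.time() - start
        best = max(best, n_numbers / elapsed / 1e6)
    return best


if __name__ == "__main__":
    print(f"Fast benchmark score: {fast_random_benchmark():.2f} Mnum/s")
```

A benchmark of this kind runs in seconds, which is what makes it suitable for profiling every newly provisioned cloud instance before production work is scheduled on it.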
Length of presentation (max. 20 minutes): 20
Primary author
Domenico Giordano
(CERN)
Co-authors
Alessandro Di Girolamo
(CERN)
Cristovao Cordeiro
(CERN)
Laurence Field
(CERN)
Luis Villazon Esteban
(Universidad de Oviedo (ES))