The amount of data to be processed by experiments in high energy physics will increase tremendously in the coming years. For the first time, the expected advance of technology by itself will not be sufficient to close the emerging gap between required and available resources, assuming that the current flat-budget hardware procurement strategy is maintained. This poses unprecedented challenges to both HEP software development and computing models. A first step towards meeting these challenges is the development of load-balancing strategies for the existing workflow management systems in order to ensure the most effective utilization of resources when processing the comprehensive, world-wide distributed datasets.
We report on using Palladio [1], an established tool for simulating the performance of abstract software architecture models, to model and simulate the performance of computing jobs executed at the GridKa Tier 1 center. We validate the model against real-world performance measurements. With this model, we will enable the model-based evaluation of different load-balancing strategies.
[1] Becker, Steffen, Heiko Koziolek, and Ralf Reussner. "The Palladio component model for model-driven performance prediction." Journal of Systems and Software 82, no. 1 (2009): 3-22.
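To illustrate what "model-based evaluation of load-balancing strategies" can mean in the simplest terms, the sketch below simulates assigning batch jobs to heterogeneous worker nodes and compares two toy dispatch strategies. It is not Palladio and does not use the Palladio Component Model; all names, node configurations, and job-length parameters are hypothetical, chosen only for illustration.

```python
import random


def simulate(num_jobs, node_cores, pick_node, seed=0):
    """Assign jobs to nodes and return the makespan (latest node finish time).

    node_cores: list with the core count of each node (assumed values).
    pick_node:  strategy callable(load, node_cores) -> chosen node index.
    Each node works off its assigned CPU time at a rate proportional to its
    core count, so a node finishes at (assigned CPU seconds) / cores.
    """
    rng = random.Random(seed)
    load = [0.0] * len(node_cores)  # total CPU seconds assigned to each node
    for _ in range(num_jobs):
        cpu_time = rng.expovariate(1.0 / 3600.0)  # assumed mean: 1 CPU-hour
        i = pick_node(load, node_cores)
        load[i] += cpu_time
    return max(l / c for l, c in zip(load, node_cores))


def round_robin():
    """Toy strategy: cycle through the nodes regardless of their size."""
    state = {"next": 0}

    def pick(load, node_cores):
        i = state["next"] % len(node_cores)
        state["next"] += 1
        return i

    return pick


def least_loaded(load, node_cores):
    """Toy strategy: pick the node that would finish its current work earliest."""
    return min(range(len(node_cores)), key=lambda i: load[i] / node_cores[i])


if __name__ == "__main__":
    cores = [8, 16, 32, 64]  # hypothetical heterogeneous worker nodes
    print("round-robin  makespan:", simulate(5000, cores, round_robin()))
    print("least-loaded makespan:", simulate(5000, cores, least_loaded))
```

In a Palladio-based study, the node and job characteristics would instead be captured in architectural models calibrated against GridKa measurements, but the comparison principle, running the same workload under different dispatch strategies and comparing the resulting performance metrics, is the same.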