Impact
The proposed solution has been adopted by the CMS experiment through the implementation of a service dedicated to the execution of end-user analysis workflows. More than a thousand users from different countries run more than 40K jobs every day. The deployment activity was performed successfully, and our large community is using the service both for Monte Carlo simulations and for the analysis of the data collected during the first operational phase of the LHC, up to the end of 2009.
Detailed analysis
HEP computing models require the coexistence of specific and complex workflows, with high scalability as their main quality-of-service requirement. The Grid provides an effective solution to scalability problems thanks to infrastructures that allow distributed applications to be built for specific use cases. We propose the adoption of a server layer, based on CRAB, between the users and the Grid middleware. This intermediate service guarantees modularity and overcomes the heterogeneity of the Grid; it is also the key to implementing a flexible and reliable application, since it further isolates end users from the details of their specific application workflows (e.g., interactions with data-location catalogs and data-movement duties). The client/server paradigm additionally allows building a manageable environment in which policies and the deployment of new features can be handled centrally, as sketched below.
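As an illustration only, the following minimal Python sketch shows the client/server split described above: a thin client hands a physics-level task description to an intermediate server, which performs the catalog lookup and job splitting on the user's behalf. All names here (`TaskServer`, `AnalysisTask`, `lookup_sites`, the example dataset) are hypothetical stand-ins, not the actual CRAB interfaces.

```python
# Sketch of the intermediate server layer described above.
# Hypothetical names throughout; not the real CRAB API.
from dataclasses import dataclass


@dataclass
class AnalysisTask:
    """What the thin client sends: physics-level intent only."""
    dataset: str      # dataset name to analyze
    executable: str   # user analysis code
    n_jobs: int       # requested number of jobs


@dataclass
class GridJob:
    task: AnalysisTask
    site: str
    job_index: int


class TaskServer:
    """Intermediate layer: hides catalog interactions, job
    splitting and data-movement duties from the end user, and
    is the single place where central policies are applied."""

    def lookup_sites(self, dataset: str) -> list[str]:
        # Stand-in for a query to a data-location catalog.
        return ["T2_IT_Bari", "T2_US_UCSD"]

    def submit(self, task: AnalysisTask) -> list[GridJob]:
        sites = self.lookup_sites(task.dataset)
        # Example central policy: round-robin jobs over the
        # sites that host the requested dataset.
        return [
            GridJob(task, sites[i % len(sites)], i)
            for i in range(task.n_jobs)
        ]


# Client side: the user never talks to the Grid middleware directly.
server = TaskServer()
jobs = server.submit(AnalysisTask("/ExampleDataset/RECO", "analyze.py", 4))
for j in jobs:
    print(f"job {j.job_index} -> {j.site}")
```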
Conclusions and Future Work
Significant experience has been gained in CMS during the past years through many computing challenges, and a solid methodology for designing flexible and scalable Grid servers has been acquired. Future activities aim at generalizing and spreading the proposed approach by defining a common core of libraries to be shared across the various CMS workload-management projects.
URL for further information
https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrab
Keywords
distributed analysis, HEP, Client/Server