Description
The High Energy Physics (HEP) community has been both an early adopter of grid technology and a driving force behind EGEE and other grid projects. The Worldwide LHC Computing Grid (WLCG) builds on the infrastructures established by these projects and has been used successfully for LHC data taking, processing and analysis.
The most important elements of the Grid infrastructure provided for the experiments will be presented during this session: the computing models of the experiments, the operation protocols used at the sites, and the end-user analysis infrastructures. In addition, related communities such as Synchrotron Radiation Facilities are represented in this session. The HEP session will therefore be a forum for discussing the common aspects of the Grid computing infrastructure used by different large communities.
The EGEE User Forums have always provided an ideal framework for sharing experiences between Grid users and for triggering collaborations between different clusters. The topics presented and discussed during this session will encourage the continuation of existing collaborations and, hopefully, initiate new ones.
-
Dr Roberto Santinelli (CERN/IT/GD), 14/04/2010, 11:00
Support services and tools for user communities (Oral)
The proliferation of tools for monitoring both activities and the status of critical services, together with the pressing need for prompt reactions to problems impacting data taking, user analysis and scheduled activities (e.g. MC simulation), creates the need to better organize the huge amount of information available. The monitoring system for the LHCb Grid Computing relies on many...
-
Dr Jose Flix Molina (Cent. Invest. Energ. Medioamb. Tec. (CIEMAT)), 14/04/2010, 11:20
Scientific results obtained using distributed computing technologies (Oral)
By the end of 2009, the CMS Experiment at the Large Hadron Collider (LHC) had already started data taking with proton collisions at 450 GeV and 1.2 TeV per beam. CMS has invested several years in building a robust Distributed Computing system to meet the expected performance in terms of data transfer/archiving, calibration, reconstruction, and analysis. Here, we will focus on the readiness of the...
-
George Kourousias, Milan Prica, Roberto Pugliese (Sincrotrone Trieste S.C.p.A.), 14/04/2010, 11:40
Experiences from application porting and deployment (Oral)
Synchrotron Radiation Facilities (SRF), as large research establishments, play a very important role in science and have a great impact on the community. Due to the data parallelism of the computational problems in the general field of the physical sciences and the very high data volumes in terms of storage, Grid Computing has been a successful paradigm. But other than their Computational...
-
Dr Xavier Espinal (PIC/IFAE), 14/04/2010, 12:00
National and international activities and collaborations (Oral)
The ATLAS PIC cloud is composed of the Iberian sites PIC (Tier-1) and IFAE, IFIC, UAM, LIP-Lisbon, LIP-Coimbra and NCG-INGRID-PT (Tier-2s), and is finalising preparations for LHC data taking. To achieve readiness for data taking, all sites have been involved in the ATLAS Distributed Computing activities since early 2006: simulated event production, reprocessing, data and user analysis...
-
Dr Santiago Gonzalez De La Hoz (Instituto de Física Corpuscular (IFIC), Universitat de València-CSIC), 14/04/2010, 14:00
Scientific results obtained using distributed computing technologies (Oral)
The distributed analysis tests during the STEP09 and UAT exercises were a success for ATLAS and for the sites involved in the ATLAS computing model. The services were exercised at record levels with good efficiencies, and real users continued to get real work done at the same time. Among the problems found: data access was troublesome under heavy loads; pilot factories need to be...
-
Mr Erik Edelmann (CSC - Finnish IT Centre for Science), 14/04/2010, 14:20
Software services exploiting and/or extending grid middleware (gLite, ARC, UNICORE etc.) (Oral)
The Compact Muon Solenoid (CMS) is one of the general-purpose experiments at the CERN Large Hadron Collider (LHC). For distributing analysis jobs to computational resources scattered across the world, the CMS project has developed the CMS Remote Analysis Builder software (CRAB). Until now CRAB has only supported the gLite and OSG middlewares, but with the help of a new plugin, the CMS...
-
Dr Massimo Lamanna (CERN), 14/04/2010, 14:40
Scientific results obtained using distributed computing technologies (Oral)
In this talk we will describe the experience of the ATLAS experiment in commissioning the system (large-scale user-analysis exercises in late 2009) and the first experience with real data. First, we will describe the tools to commission and control the sites contributing to user analysis. We will then describe the support structure and the user experience (study of monitoring data and...
-
Filippo Spiga (Dipartimento di Fisica "G. Occhialini", Universita degli Studi Milano-Bicocca), 14/04/2010, 15:00
End-user environments, scientific gateways and portal technologies (Oral)
Grid applications adopting a client/server paradigm allow easier enforcement, improvement and optimization of the middleware. A server allows us to enact specific and complex workflows in order to centralize application management and hide Grid complexities from the end users. These features aim to open the Grid to a large and heterogeneous community not only at the infrastructure level, but...