Dr
Jeremy Coles
(University of Cambridge - GridPP)
3/23/09, 2:00 PM
Grid Middleware and Networking Technologies
oral
During 2008 we have seen several notable changes in the way the LHC experiments have tried to tackle outstanding gaps in the implementation of their computing models. The development of space tokens and changes in job submission and data movement tools are key examples. The first section of this paper will review these changes and the technical/configuration impacts they have had at the site...
Tobias Koenig
(Karlsruhe Institute of Technology (KIT))
3/23/09, 2:20 PM
Grid Middleware and Networking Technologies
oral
Offering sustainable Grid services to users and other computing centres is the main aim of GridKa, the German Tier-1 centre of the WLCG infrastructure. The availability and reliability of IT services directly influence customers' satisfaction as well as the reputation of the service provider, not to mention the economic aspects. It is thus important to concentrate on processes and...
Ms
Maite Barroso
(CERN),
Nicholas Thackray
(CERN)
3/23/09, 2:40 PM
Grid Middleware and Networking Technologies
oral
A review of the evolution of WLCG/EGEE grid operations
Authors: Maria BARROSO, Diana BOSIO, David COLLADOS, Maria DIMOU, Antonio RETICO, John SHADE, Nick THACKRAY, Steve TRAYLEN, Romain WARTEL
As the EGEE grid infrastructure continues to grow in size, complexity and usage, the task of ensuring the
continued, uninterrupted availability of the grid services to the ever increasing number...
Laura Perini
(INFN Milano),
Tiziana Ferrari
(INFN CNAF)
3/23/09, 3:00 PM
Grid Middleware and Networking Technologies
oral
International research collaborations increasingly require secure sharing of resources owned by the partner organizations and distributed among different administration domains. Examples of resources include data, computing facilities (commodity computer clusters, HPC systems, etc.), storage space, metadata from remote archives, scientific instruments, sensors, etc. Sharing is made possible...
Mrs
Ruth Pordes
(FERMILAB)
3/23/09, 3:20 PM
Grid Middleware and Networking Technologies
oral
The Open Science Grid usage has ramped up more than 25% in the past twelve months, due both to the increase in throughput of the core stakeholders (US LHC, LIGO and Run II) and to the increase in usage by non-physics communities. We present and analyze this ramp-up together with the issues encountered and the implications for the future.
It is important to understand the value of collaborative...
Dr
Donatella Lucchesi
(University and INFN Padova)
3/23/09, 3:40 PM
Grid Middleware and Networking Technologies
oral
The CDF II experiment has been taking data at FNAL since 2001. The CDF computing architecture has evolved from initially using dedicated computing farms to using decentralized Grid-based resources on the EGEE grid, Open Science Grid and FNAL Campus grid.
In order to deliver high quality physics results in a timely manner to a running experiment,
CDF has had to adapt to Grid with minimum...
Mr
Gilles Mathieu
(STFC, Didcot, UK)
3/23/09, 4:30 PM
Grid Middleware and Networking Technologies
oral
All grid projects have to deal with topology and operational information like resource distribution, contact lists and downtime declarations. Storing, maintaining and publishing this information properly is one of the key elements to successful grid operations. The solution adopted by EGEE and WLCG projects is a central repository that hosts this information and makes it available to users and...
Dr
Jose Hernandez
(CIEMAT)
3/23/09, 4:50 PM
Grid Middleware and Networking Technologies
oral
Establishing efficient and scalable operations of the CMS distributed
computing system critically relies on the proper integration,
commissioning and scale testing of the data and workload management
tools, the various computing workflows and the underlying computing
infrastructure located at more than 50 computing centres worldwide
interconnected by the Worldwide LHC Computing...
Mr
Xin Zhao
(Brookhaven National Laboratory, USA)
3/23/09, 5:10 PM
Grid Middleware and Networking Technologies
oral
ATLAS Grid production, like many other VO applications, requires the
software packages to be installed on remote sites in advance. Therefore,
a dynamic and reliable system for installing the ATLAS software releases
on Grid sites is crucial to guarantee the timely and smooth start of
ATLAS production and reduce its failure rate.
In this talk, we discuss the issues encountered in the...
Dr
Graeme Andrew Stewart
(University of Glasgow)
3/23/09, 5:30 PM
Grid Middleware and Networking Technologies
oral
The ATLAS Production and Distributed Analysis System (PanDA) is a key
component of the ATLAS distributed computing infrastructure. All ATLAS
production jobs, and a substantial amount of user and group analysis
jobs, pass through the PanDA system which manages their execution on
the grid. PanDA also plays a key role in production task definition
and the dataset replication request system....
Dr
Andrea Sciabà
(CERN)
3/23/09, 5:50 PM
Grid Middleware and Networking Technologies
oral
The LHC experiments (ALICE, ATLAS, CMS and LHCb) rely on complex computing systems for data acquisition, processing, distribution, analysis and simulation. These systems are run using a variety of services provided by the experiments themselves, the WLCG Grid and the different computing centres. The services range from the most basic (network, batch systems, file systems) to the mass storage services or the...
Prof.
Harvey Newman
(Caltech)
3/23/09, 6:10 PM
Grid Middleware and Networking Technologies
oral
I will review the status, outlook, recent technology trends and
state-of-the-art developments in the major networks serving the
high energy physics community in the LHC era.
I will also cover the progress in reducing or closing the Digital Divide
separating scientists in several world regions from the mainstream,
from the perspective of the ICFA Standing Committee on
Inter-regional Connectivity.
Olivier Martin
(Ictconsulting)
3/24/09, 3:20 PM
Grid Middleware and Networking Technologies
oral
Despite many coordinated efforts to promote the use of IPv6, the migration from IPv4 has fallen far short of the expectations of most Internet experts. However, time is running out: the unallocated IPv4 address space should be exhausted within the next 3 years or so. The speaker will attempt to explain the reasons behind the lack of enthusiasm for IPv6, in particular the lack of suitable migration...
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
3/24/09, 3:40 PM
Grid Middleware and Networking Technologies
oral
Grids enable uniform access to resources by implementing standard interfaces to resource gateways. Gateways control access privileges to resources using the user's identity and personal attributes, which are available through Grid credentials. Typically, gateways implement access control by mapping Grid credentials to local privileges.
In the Open Science Grid (OSG), privileges are granted on...
Andrea Ceccanti
(INFN CNAF, Bologna, Italy),
Tanya Levshina
(FERMI NATIONAL ACCELERATOR LABORATORY)
3/24/09, 4:30 PM
Grid Middleware and Networking Technologies
oral
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs).
The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grid for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject...
Andrea Ceccanti
(CNAF - INFN),
John White
(Helsinki Institute of Physics HIP)
3/24/09, 4:50 PM
Grid Middleware and Networking Technologies
oral
The new authorization service of the gLite middleware stack is presented.
In the EGEE-II project, the overall authorization study and review gave
recommendations that the authorization should be rationalized throughout
the middleware stack. As per the accepted recommendations, the new
authorization service is designed to focus on EGEE gLite computational
components: WMS, CREAM, and...
Dr
Oliver Keeble
(CERN)
3/24/09, 5:10 PM
Grid Middleware and Networking Technologies
oral
Grid computing as currently understood is normally enabled through the
deployment of integrated software distributions which expose specific
interfaces to core resources (data, CPU), provide clients and also
higher level services. This paper examines the reasons for this reliance
on large distributions and discusses whether the benefits are genuinely
worth the considerable investment...
Dr
Simone Pagan Griso
(University and INFN Padova)
3/24/09, 5:30 PM
Grid Middleware and Networking Technologies
oral
Large international collaborations that use de-centralized computing
models are becoming the custom rather than the exception in High Energy Physics.
A good computing model for such large and geographically dispersed collaborations has to
deal with the distribution of the experiment-specific software around the world.
When the CDF experiment developed its software infrastructure,
most computing was done on...
Dr
Ian Bird
(CERN)
3/24/09, 5:50 PM
Grid Middleware and Networking Technologies
oral
This paper will provide a review of the middleware that is currently used in WLCG, and how that compares to what was initially expected when the project started. The talk will look at some of the lessons to be learned, and why what is in use today is sometimes quite different from what may have been anticipated. For the future it is clear that finding the effort for long term support and...
Pablo Saiz
(CERN)
3/24/09, 6:10 PM
Grid Middleware and Networking Technologies
oral
AliEn is the Grid interface that ALICE has developed to carry out its distributed computing. AliEn provides all the components needed to build a distributed environment, including a file and metadata catalogue, a priority-based job execution model and a file replication system.
Another of the components provided by AliEn is an automatic software package installation service, PackMan....
Julia Andreeva
(CERN)
3/26/09, 2:00 PM
Grid Middleware and Networking Technologies
oral
Job processing and data transfer are the main computing activities on the
WLCG infrastructure. Reliable monitoring of job processing at the WLCG
scope is a complicated task, due to the complexity of the infrastructure itself
and the diversity of the job submission methods currently in use.
The talk will describe the new strategy for job monitoring at the WLCG
scope, covering...
Mr
Ricky Egeland
(Minnesota)
3/26/09, 2:00 PM
Grid Middleware and Networking Technologies
oral
The PhEDEx Data Service provides access to information from the central PhEDEx database, as well as certificate-authenticated managerial operations such as requesting the transfer or deletion of data. The Data Service is integrated with the 'SiteDB' service for fine-grained access control, providing a safe and secure environment for operations. A plugin architecture allows server-side modules...
Mr
Alberto Pace
(CERN)
3/26/09, 2:20 PM
Grid Middleware and Networking Technologies
oral
Data management components at CERN form the backbone for the production and analysis activities of the LHC experiments. Significant amounts of data (15 PB/y) will need to be collected from the online systems, reconstructed and distributed to other sites participating in the Worldwide LHC Computing Grid for further analysis. More recently, also significant resources to support local...
Dr
Janusz Martyniak
(Imperial College London)
3/26/09, 2:20 PM
Grid Middleware and Networking Technologies
oral
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008.
The RTM gathers information from EGEE sites...
Ákos Frohner
(CERN)
3/26/09, 2:40 PM
Grid Middleware and Networking Technologies
oral
Data management is one of the cornerstones of the distributed
production computing environment that the EGEE project aims to
provide for an e-Science infrastructure.
We have designed and implemented a set of services and client
components, addressing the diverse requirements of all user
communities. LHC experiments as main users will generate and
distribute...
Mr
David Collados
(CERN)
3/26/09, 2:40 PM
Grid Middleware and Networking Technologies
oral
Authors: David Collados, Judit Novak, John Shade, Konstantin Skaburskas, Lapka Wojciech
It is four years now since the first prototypes of tools and tests started to monitor the Worldwide LHC Computing Grid (WLCG) services. One of these tools is the Service
Availability Monitoring (SAM) framework, which superseded the SFT tool, and has become a keystone for the monthly WLCG availability...
Dr
Patrick Fuhrmann
(DESY)
3/26/09, 3:00 PM
Grid Middleware and Networking Technologies
oral
At the time of CHEP'09, the LHC Computing Grid approach and implementation are rapidly approaching the moment they finally have to prove their feasibility. The same is true for dCache, the grid middleware storage component, meant to store and manage the largest share of LHC data outside of the LHC Tier 0.
This presentation will report on the impact of recently deployed dCache sub-components,...
Ramiro Voicu
(California Institute of Technology)
3/26/09, 3:00 PM
Grid Middleware and Networking Technologies
oral
USLHCNet provides transatlantic connections of the Tier1 computing facilities at Fermilab and Brookhaven with the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2 and GÉANT, USLHCNet also supports connections between the Tier2 centers. The USLHCNet core infrastructure is using the Ciena Core Director devices that provide...
Luca Magnoni
(INFN CNAF)
3/26/09, 3:20 PM
Grid Middleware and Networking Technologies
oral
StoRM is a Storage Resource Manager (SRM) service adopted in the context of WLCG to provide data management capabilities on high-performing cluster and parallel file systems such as Lustre and GPFS. The experience gained in the readiness challenges of the LHC Grid infrastructure proves that scalability and performance of SRM services are key characteristics to provide effective and reliable storage...
Robert Quick
(Indiana University)
3/26/09, 3:20 PM
Grid Middleware and Networking Technologies
oral
The Open Science Grid (OSG) Resource and Service Validation (RSV) project seeks to provide solutions for several grid fabric monitoring problems, while at the same time providing a bridge between the OSG operations and monitoring infrastructure and the WLCG (Worldwide LHC Computing Grid) infrastructure. The RSV-based OSG fabric monitoring begins with local resource fabric monitoring, which...
Dr
Stephen Burke
(RUTHERFORD APPLETON LABORATORY)
3/26/09, 3:40 PM
Grid Middleware and Networking Technologies
oral
The GLUE information schema has been in use in the LCG/EGEE production Grid since the first version was defined in 2002. In 2007 a major redesign of GLUE, version 2.0, was started in the context of the Open Grid Forum following the creation of the GLUE Working Group. This process has taken input from a number of Grid projects, but as a major user of the version 1 schema LCG/EGEE has had a...
Dr
Jamie Shiers
(CERN)
3/26/09, 3:40 PM
Grid Middleware and Networking Technologies
oral
The WLCG service was declared officially open for production and analysis during the LCG Grid Fest held at CERN, with live contributions from around the world, on Friday 3rd October 2008. But the service is not without its problems: services or even sites suffer degradation or complete outages, with painful repercussions on experiment activities; the operations and service model is...
Dr
Jukka Klem
(Helsinki Institute of Physics HIP)
3/26/09, 4:30 PM
Grid Middleware and Networking Technologies
oral
The Compact Muon Solenoid (CMS) is one of the LHC (Large Hadron Collider) experiments at CERN. CMS computing relies on different grid infrastructures to provide calculation and storage resources. The major grid middleware stacks used for CMS computing are gLite, OSG and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) builds one of the Tier-2 centers for CMS computing....
Dr
Alessandro di Girolamo
(CERN IT/GS), Dr
Andrea Sciaba
(CERN IT/GS), Dr
Elisa Lanciotti
(CERN IT/GS), Dr
Nicolo Magini
(CERN IT/GS), Dr
Patricia Mendez Lorenzo
(CERN IT/GS), Dr
Roberto Santinelli
(CERN IT/GS), Dr
Simone Campana
(CERN IT/GS), Dr
Vincenzo Miccio
(CERN IT/GS)
3/26/09, 4:30 PM
Grid Middleware and Networking Technologies
oral
In a few months, the four LHC detectors will collect data at a significant rate that is expected to ramp up to around 15 PB per year. To process such a large quantity of data, the experiments have, over the last few years, developed distributed computing models that build on the overall WLCG service. These implement the different services provided by the gLite middleware into the computing models of...
Dr
Oxana Smirnova
(Lund University / NDGF)
3/26/09, 4:50 PM
Grid Middleware and Networking Technologies
oral
The Advanced Resource Connector (ARC) middleware introduced by
NorduGrid is one of the leading Grid solutions used by scientists
worldwide. Its simplicity, reliability and portability, matched by
unparalleled efficiency, make it attractive for large-scale facilities
like the Nordic DataGrid Facility (NDGF) and its Tier1 center, and
also for smaller scale projects. Being well-proven in...
Giuseppe Codispoti
(Dipartimento di Fisica)
3/26/09, 4:50 PM
Grid Middleware and Networking Technologies
oral
The CMS experiment at the LHC started using the Resource Broker (from the EDG and LCG projects) to submit production and analysis jobs to distributed computing resources of the WLCG infrastructure over 6 years ago. In 2006 it started using the gLite Workload Management System (WMS) and Logging & Bookkeeping (LB) services. In the current configuration, the interaction with the gLite WMS/LB happens through the CMS...
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
3/26/09, 5:10 PM
Grid Middleware and Networking Technologies
oral
The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users based on their membership in a Virtual Organization (VO), rather than on personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that resources will be...
Massimo Sgaravatto
(INFN Padova)
3/26/09, 5:10 PM
Grid Middleware and Networking Technologies
oral
In this paper we describe the use of CREAM and CEMON for job
submission and management within the gLite Grid middleware. Both CREAM
and CEMON address one of the most fundamental operations of a Grid
middleware, that is job submission and management. Specifically, CREAM
is a job management service used for submitting, managing and
monitoring computational jobs. CEMON is an event...
Dr
Marian Zvada
(Fermilab)
3/26/09, 5:30 PM
Grid Middleware and Networking Technologies
oral
Many members of large science collaborations already have specialized grids available to advance their research. The need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources.
Nowadays, the CDF experiment is increasingly relying on...
Mr
Maxim Grigoriev
(FERMILAB)
3/26/09, 5:30 PM
Grid Middleware and Networking Technologies
oral
Fermilab hosts the US Tier-1 center for data storage and analysis of the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment. To satisfy operational requirements for the LHC networking model, the networking group at Fermilab, in collaboration with Internet2 and ESnet, is participating in the perfSONAR-PS project. This collaboration has created a collection of network...
Dr
Andrei Tsaregorodtsev
(CNRS-IN2P3-CPPM, MARSEILLE)
3/26/09, 5:50 PM
Grid Middleware and Networking Technologies
oral
DIRAC, the LHCb community Grid solution, was considerably
reengineered in order to meet all the requirements for processing the data
coming from the LHCb experiment. It covers all the tasks, starting
with raw data transportation from the experiment area to grid storage,
through data processing, up to the final user analysis. The reengineered DIRAC3
version of the system includes a...
Mr
Philip DeMar
(FERMILAB)
3/26/09, 5:50 PM
Grid Middleware and Networking Technologies
oral
Fermilab has been one of the earliest sites to deploy data circuits in production for wide-area high impact data movement. The US-CMS Tier-1 Center at Fermilab uses end-to-end (E2E) circuits to support data movement with the Tier-0 Center at CERN, as well as with all of the US-CMS Tier-2 sites. On average, 75% of the network traffic into and out of the Laboratory is carried on E2E circuits....
Dr
Alina Grigoras
(CERN PH/AIP), Dr
Andreas Joachim Peters
(CERN IT/DM), Dr
Costin Grigoras
(CERN PH/AIP), Dr
Fabrizio Furano
(CERN IT/GS), Dr
Federico Carminati
(CERN PH/AIP), Dr
Latchezar Betev
(CERN PH/AIP), Dr
Pablo Saiz
(CERN IT/GS), Dr
Patricia Mendez Lorenzo
(CERN IT/GS), Dr
Predrag Buncic
(CERN PH/SFT), Dr
Stefano Bagnasco
(INFN/Torino)
3/26/09, 6:10 PM
Grid Middleware and Networking Technologies
oral
With the startup of LHC, the ALICE detector will collect data at a rate that, after two years, will reach 4PB per year. To process such a large quantity of data, ALICE has developed over ten years a distributed computing environment, called AliEn, integrated with the WLCG environment. The ALICE environment presents several original solutions, which have shown their viability in a number of...