Frank Wuerthwein
(UCSD for the OSG consortium),
Ruth Pordes
(Fermi National Accelerator Laboratory (FNAL))
2/13/06, 2:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
We report on the status and plans for the Open Science Grid (OSG) Consortium, an open,
shared national distributed facility in the US that supports a multi-disciplinary
suite of science applications. More than fifty university and laboratory groups,
including two in Brazil and three in Asia, now have their resources and services
accessible to OSG. Sixteen Virtual Organizations have registered their...
Robert Gardner
(University of Chicago)
2/13/06, 2:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
We describe the purpose, architectural definition, deployment and operational
processes for the Integration Testbed (ITB) of the Open Science Grid (OSG). The ITB
has been successfully used to integrate the set of functional interfaces and services
required for the OSG Deployment Activity, leading to two major deployments of the OSG
grid infrastructure. We discuss the methods and logical...
Dr
Peter Malzacher
(Gesellschaft fuer Schwerionenforschung mbH (GSI))
2/13/06, 2:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
The German Ministry for Education and Research has announced a 100 million euro German
e-science initiative focused on Grid computing, e-learning and knowledge management.
In a first phase, started in September 2005, the Ministry made available 17 million
euro for D-Grid, which currently comprises six research consortia: five community
grids - HEP-Grid (high-energy physics),...
Dr
Keith Chadwick
(Fermilab)
2/13/06, 3:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
FermiGrid is a cooperative project across the Fermilab Computing Division and its
stakeholders that includes four key components: centrally managed and supported
common Grid services; stakeholder bilateral interoperability; development of OSG
interfaces for Fermilab; and exposure of the permanent storage system. The
initial goals, current status and future plans for FermiGrid will...
Mr
Michel Jouvin
(LAL / IN2P3)
2/13/06, 4:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
Several HENP laboratories in the Paris region have joined together to provide an LCG/EGEE
Tier2 center. This resource, called GRIF, will focus on the LCG experiments but will also
be open to EGEE users from other disciplines and to local users. It will provide
resources for both analysis and simulation and offer a large storage space (350 TB
planned by end of 2007).
This Tier2 will have...
Prof.
Arshad Ali
(National University of Sciences & Technology (NUST) Pakistan)
2/13/06, 4:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
We present a report on Grid activities in Pakistan over the last three years and
conclude that there is significant technical and economic activity due to the
participation in Grid research and development. Our collaboration began with
participation in the CMS software development group at CERN and Caltech in 2001. This
has led to the current setup for CMS production and the LCG Grid...
Mr
Gilles Mathieu
(IN2P3, Lyon), Ms
Helene Cordier
(IN2P3, Lyon), Mr
Piotr Nyczyk
(CERN)
2/13/06, 4:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
The paper reports on the evolution of the operational model set up in the
"Enabling Grids for E-sciencE" (EGEE) project, and on the implications for Grid
Operations in the LHC Computing Grid (LCG).
The primary tasks of Grid Operations cover monitoring of resources and services,
notification of failures to the relevant contacts and problem tracking through a
ticketing system. Moreover,...
Dr
Flavia Donno
(CERN), Dr
Marco Verlato
(INFN Padova)
2/13/06, 5:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
The organization and management of the user support in a global e-science computing
infrastructure such as the Worldwide LHC Computing Grid (WLCG) is one of the
challenges of the grid. Given the widely distributed nature of the organization, and
the spread of expertise for installing, configuring, managing and troubleshooting the
grid middleware services, a standard centralized model could...
Mr
Rajesh Kalmady
(Bhabha Atomic Research Centre)
2/13/06, 5:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
The LHC Computing Grid (LCG) connects together hundreds of sites consisting of
thousands of components such as computing resources, storage resources, network
infrastructure and so on. Various Grid Operation Centres (GOCs) and Regional
Operations Centres (ROCs) are set up to monitor the status and operations of the grid.
This paper describes Gridview, a Grid Monitoring and Visualization...
Mr
Sergio Andreozzi
(INFN-CNAF)
2/13/06, 5:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
The Grid paradigm enables the coordination and sharing of a large number of
geographically-dispersed heterogeneous resources that are contributed by different
institutions. These resources are organized into virtual pools and assigned to groups
of users. The monitoring of such a distributed and dynamic system raises a number of
issues, such as the need to deal with administrative...
Dr
Maria Cristina Vistoli
(Istituto Nazionale di Fisica Nucleare (INFN))
2/14/06, 2:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
Moving from a National Grid Testbed to a Production quality Grid service for the HEP
applications requires an effective operations structure and organization, proper user
and operations support, flexible and efficient management and monitoring tools.
Moreover, the middleware releases should be easily deployable using flexible
configuration tools, suitable for various and different local...
Anja Vest
(University of Karlsruhe)
2/14/06, 2:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
Computer clusters at universities are usually shared among many groups. As an
example, the Linux cluster at the "Institut fuer Experimentelle Kernphysik" (IEKP),
University of Karlsruhe, is shared between working groups of the high energy physics
experiments AMS, CDF and CMS, and has successfully been integrated into the SAM grid
of CDF and the LHC computing grid LCG for CMS while it still...
Mr
Laurence Field
(CERN)
2/14/06, 2:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
Open Science Grid (OSG) and the LHC Computing Grid (LCG) are two grid infrastructures
that were built independently on top of the Virtual Data Toolkit (VDT) core. Due to the
demands of the LHC Virtual Organizations (VOs), it has become necessary to ensure
that these grids interoperate so that the experiments can seamlessly use them as one
resource. This paper describes the work that was...
Dr
Graeme A Stewart
(University of Glasgow)
2/14/06, 3:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
Data management has proved to be one of the hardest jobs in the grid
environment. In particular, file replication has suffered problems of transport
failures, client disconnections, duplication of current transfers and resultant
server saturation.
To address these problems the Globus and gLite grid middlewares offer new services
which improve the resiliency and robustness of...
Mr
Paolo Badino
(CERN)
2/14/06, 4:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
In this paper we report on the lessons learned, from the middleware point of view,
while running the gLite File Transfer Service (FTS) on the LCG Service Challenge 3
setup. The FTS was designed based on the experience gathered from the Radiant
service used in Service Challenge 2, as well as the CMS PhEDEx transfer service. The
first implementation of the FTS was put to use in the...
Dr
David Cameron
(European Organization for Nuclear Research (CERN))
2/14/06, 4:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
The ATLAS detector currently under construction at CERN's Large Hadron Collider
presents data handling requirements of an unprecedented scale. From 2008 the ATLAS
distributed data management (DDM) system must manage tens of petabytes of event data
per year, distributed around the world: the collaboration comprises 1800 physicists
participating from more than 150 universities and...
Timur Perelmutov
(FERMI NATIONAL ACCELERATOR LABORATORY)
2/14/06, 4:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
The dCache collaboration actively works on the implementation and improvement of
features and Grid support for dCache storage. It has delivered a Storage Resource
Manager (SRM) interface, a GridFTP server, a Resilient Manager and interactive web
monitoring tools. SRMs are middleware components whose function is to provide dynamic
space allocation and file management of shared storage...
Dr
Dantong Yu
(BROOKHAVEN NATIONAL LABORATORY), Dr
Xin Zhao
(BROOKHAVEN NATIONAL LABORATORY)
2/14/06, 5:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
We describe two illustrative cases in which Grid middleware (GridFTP, dCache and SRM)
was used successfully to transfer hundreds of terabytes of data between BNL and its
remote RHIC and ATLAS collaborators. The first case involved PHENIX production data
transfers to CCJ, a regional center in Japan, during the 2005 RHIC run. Approximately
270TB of data, representing 6.8 billion polarized...
Dr
Alexandre Vaniachine
(ANL)
2/14/06, 5:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
High energy and nuclear physics applications on computational grids require efficient
access to terabytes of data managed in relational databases. Databases also play a
critical role in grid middleware: file catalogues, monitoring, etc. Crosscutting the
computational grid infrastructure, a hyperinfrastructure of the databases emerges.
The Database Access for Secure Hyperinfrastructure...
Dr
Birger Koblitz
(CERN)
2/14/06, 5:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
We present the AMGA (ARDA Metadata Grid Application) metadata catalog, which is a
part of the gLite middleware. AMGA provides a very lightweight metadata service as
well as basic database access functionality on the Grid. Following a brief overview
of the AMGA design, functionality, implementation and security features, we will show
performance comparisons of AMGA with direct database...
Dr
Rodney Walker
(SFU)
2/15/06, 2:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
The Condor-G meta-scheduling system has been used to create a single Grid of GT2
resources from LCG and GridX1, and ARC resources from NorduGrid. Condor-G provides
the submission interfaces to GT2 and ARC gatekeepers, enabling transparent submission
via the scheduler. Resource status from the native information systems is converted
to the Condor ClassAd format and used for matchmaking to...
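The matchmaking step this abstract describes can be illustrated with a toy sketch in Python (not Condor's implementation; real ClassAds are a full expression language, and the attribute names below are hypothetical): resource status is reduced to a ClassAd-like attribute set, and a job's requirements are evaluated against each advertisement.

```python
# Toy sketch of ClassAd-style matchmaking (illustrative only; attribute
# names are hypothetical, not Condor's actual schema).

def matches(job, machine):
    """A job matches a machine if every job requirement is satisfied."""
    return (machine["Arch"] == job["RequiredArch"]
            and machine["Memory"] >= job["RequestMemory"]
            and machine["State"] == "Unclaimed")

# "ClassAds" as they might look after conversion from a native
# information system.
machines = [
    {"Name": "node1", "Arch": "X86_64", "Memory": 2048, "State": "Claimed"},
    {"Name": "node2", "Arch": "X86_64", "Memory": 4096, "State": "Unclaimed"},
]

job = {"RequiredArch": "X86_64", "RequestMemory": 3000}

matched = [m["Name"] for m in machines if matches(job, m)]
print(matched)  # ['node2']
```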
Giuseppe AVELLINO
(Datamat S.p.A.)
2/15/06, 2:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
Contemporary Grids are characterized by a middleware that provides the necessary
virtualization of computation and data resources for the shared working environment
of the Grid. In a large-scale view, different middleware technologies and
implementations have to coexist. The SOA approach provides the needed architectural
backbone for interoperable environments, where different...
Abhishek Singh RANA
(University of California, San Diego, CA, USA)
2/15/06, 2:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
We report on first experiences with building and operating an Edge Services
Framework (ESF) based on Xen virtual machines instantiated via the Workspace Service
available in Globus Toolkit, and developed as a joint project between EGEE, LCG, and
OSG. Many computing facilities are architected with their compute and storage
clusters behind firewalls. Edge Services are instantiated on a small...
Mr
Marcus Hardt
(Unknown)
2/15/06, 3:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
One problem in distributed computing is bringing together application developers
and resource providers to ensure that applications work well on the resources
provided. A layer of abstraction between resources and applications provides new
possibilities in designing Grid solutions.
This paper compares different virtualisation environments, among which are Xen
(developed at the...
Abhishek Singh Rana
(UCSD)
2/15/06, 4:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
Securely authorizing incoming users with appropriate privileges on distributed grid
computing resources is a difficult problem. In this paper we present the work of the
Open Science Grid Privilege Project which is a collaboration of developers from
universities and national labs to develop an authorization infrastructure to provide
finer grained authorization consistently to all grid...
Mr
Philippe Canal
(FERMILAB)
2/15/06, 4:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
We will describe the architecture and implementation of the new accounting service
for the Open Science Grid. Gratia's main goal is to provide the OSG stakeholders
with a reliable and accurate set of views of the usage of resources across the OSG.
Gratia implements a service oriented, secure framework for the necessary collectors
and sensors. Gratia also provides repositories and access...
Mr
Adrian Casajus Ramo
(Departamento d' Estructura i Constituents de la Materia)
2/15/06, 4:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
DIRAC is the LHCb Workload and Data Management System and is based on a
service-oriented architecture. It enables generic distributed computing with
lightweight Agents and Clients for job execution and data transfers. The DIRAC code
base is 99% Python, with all remote requests handled using the XML-RPC protocol. DIRAC
is used for the submission of production and analysis jobs by the LHCb...
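The XML-RPC request pattern the abstract mentions can be sketched with Python's standard library. This is a minimal round trip under stated assumptions: the method name `submit_job` and the reply shape are hypothetical, not DIRAC's actual service API.

```python
# Minimal XML-RPC service/client round trip using only the standard library.
# Illustrative of the request pattern only; method name and reply shape are
# hypothetical, not DIRAC's API.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def submit_job(jdl):
    # Pretend to accept a job description and hand back an identifier.
    return {"OK": True, "JobID": 42}

# Bind an ephemeral port and serve requests in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(submit_job)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sees an ordinary method call; marshalling is transparent.
client = ServerProxy(f"http://{host}:{port}")
result = client.submit_job("Executable = 'sim.sh';")
server.shutdown()
```

A lightweight RPC layer like this is why thin Agents and Clients suffice: the transport is plain HTTP and the payload is portable XML.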
Mrs
Tanya Levshina
(FERMILAB)
2/15/06, 5:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
Currently, grid development projects require end users to be authenticated under the
auspices of a "recognized" organization, called a Virtual Organization (VO). A VO
establishes resource-usage agreements with grid resource providers. The VO is
responsible for authorizing its members and optionally assigning them to groups and
roles within the VO. This enables fine-grained authorization...
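The group/role mechanism behind such fine-grained authorization can be sketched as a VOMS-style attribute lookup. This is a simplified illustration with made-up attribute strings and account names; a real service would verify signed VOMS attributes carried in an X.509 proxy certificate.

```python
# Simplified sketch of VO group/role based account mapping (illustrative
# only; attribute strings and local account names below are hypothetical).

# VOMS-style attributes look like /<vo>/<group>[/Role=<role>].
# More specific entries are listed first so privileged roles win.
ACL = {
    "/cms/production/Role=admin": "cmsprod",  # privileged production account
    "/cms": "cmsuser",                        # plain VO members
}

def map_user(voms_attributes):
    """Return the local account for the first matching attribute, if any."""
    for attr in voms_attributes:
        if attr in ACL:
            return ACL[attr]
    return None  # not authorized

print(map_user(["/cms/production/Role=admin"]))  # cmsprod
print(map_user(["/cms"]))                        # cmsuser
print(map_user(["/atlas"]))                      # None
```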
Dr
Andrew McNab
(UNIVERSITY OF MANCHESTER)
2/15/06, 5:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
GridSite has extended the industry-standard Apache webserver for use within Grid
projects, both by adding support for Grid security credentials such as GSI and VOMS,
and with the GridHTTP protocol for bulk file transfer via HTTP. We describe how
GridHTTP combines the security model of X.509/HTTPS with the performance of Apache,
in local and wide area bulk transfer applications. GridSite...
Mr
Levente HAJDU
(BROOKHAVEN NATIONAL LABORATORY)
2/15/06, 5:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
In the distributed computing world of heterogeneity, sites may offer anything from the
bare minimum Globus package to a plethora of advanced services. Moreover, sites
may have restrictions and limitations which need to be understood by resource brokers
and planners in order to take best advantage of resources and computing cycles.
Facing this reality, and to take full advantage of any...
Dr
Joel Snow
(Langston University)
2/16/06, 2:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
Periodically an experiment will reprocess data taken previously to take advantage of
advances in its reconstruction code and improved understanding of the detector.
Within a period of ~6 months the DØ experiment has reprocessed, on the grid, a large
fraction (0.5 fb⁻¹) of the Run II data. This corresponds to some 1 billion events or
250TB of data and used raw data as input, requiring...
Mrs
Mona Aggarwal
(Imperial College London)
2/16/06, 2:20 PM
Grid middleware and e-Infrastructure operation
oral presentation
The LCG is an operational Grid currently running at 136 sites in 36 countries,
offering its users access to nearly 14,000 CPUs and approximately 8PB of storage [1].
Monitoring the state and performance of such a system is challenging but vital to
successful operation. In this context the primary motivation for this research is to
analyze LCG performance by doing a statistical analysis of...
Dr
Dirk Pleiter
(DESY)
2/16/06, 2:40 PM
Grid middleware and e-Infrastructure operation
oral presentation
Numerical simulations of QCD formulated on the lattice (LQCD) require a huge amount
of computational resources. Grid technologies can help to improve exploitation of
these precious resources, e.g. by sharing the produced data on a global level. The
International Lattice DataGrid (ILDG) has been founded to define the required
standards needed for a grid infrastructure to be used for...
Go Iwai
(JST)
2/16/06, 3:00 PM
Grid middleware and e-Infrastructure operation
oral presentation
A new project for advanced simulation technology in radiotherapy was launched in
October 2003 with funding from JST (Japan Science and Technology Agency) in Japan. The
project aims to develop an ample set of simulation packages for radiotherapy based on
Geant4 in collaboration between Geant4 developers and medical users. They need much more
computing power and strong security for accurate and...