Conveners
Grid middleware and tools: GM 1
- Ian Bird (CERN)
Grid middleware and tools: GM 2
- Jeff Templon (NIKHEF)
Grid middleware and tools: GM 3
- Robert Gardner (University of Chicago)
Grid middleware and tools: GM 4
- Ian Bird (CERN)
Grid middleware and tools: GM 5
- Jeff Templon (NIKHEF)
Grid middleware and tools: GM 6
- Robert Gardner (University of Chicago)
Grid middleware and tools: GM 7
- Ian Bird (CERN)
- Pablo Saiz (CERN), 03/09/2007, 14:00, oral presentation
  Thanks to the grid, users have access to computing resources distributed all over the world. The grid hides the complexity and the differences of its heterogeneous components. For this to work, it is vital that all the elements are set up properly and that they can interact with each other. It is also very important that errors are detected as soon as possible, and that the...
- Dr Oliver Gutsche (FERMILAB), 03/09/2007, 14:20, oral presentation
  The CMS computing model for processing and analyzing LHC collision data follows a data-location-driven approach and uses the WLCG infrastructure to provide access to Grid resources. In preparation for data taking beginning at the end of 2007, CMS tests its computing model during dedicated data challenges. Within the CMS computing model, user analysis plays an important role in the CMS...
- Dr Amber Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY), 03/09/2007, 14:40, oral presentation
  High energy physics experiments periodically reprocess data in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment will reprocess a substantial fraction of its dataset. This consists of half a billion events, corresponding to more than 100 TB of data, organized in 300,000 files. The...
- Tadashi Maeno (Brookhaven National Laboratory), 03/09/2007, 15:00, oral presentation
  A new distributed software system was developed in the fall of 2005 for the ATLAS experiment at the LHC. This system, called PanDA, provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with the ATLAS distributed data management (DDM) system, advanced error discovery and recovery procedures, and other features. In this...
- Dr Pablo Saiz (CERN), 03/09/2007, 15:20, oral presentation
  Starting from the end of this year, the ALICE detector will collect data at a rate that, after two years, will reach 4 PB per year. To process such a large quantity of data, ALICE has developed over the last seven years a distributed computing environment, called AliEn, integrated in the WLCG environment. The ALICE environment presents several original solutions, which have shown their...
- Dr Flavia Donno (CERN), 03/09/2007, 15:40, oral presentation
  Storage services are crucial components of the Worldwide LHC Computing Grid (WLCG) infrastructure, which spans more than 200 sites and serves computing and storage resources to the High Energy Physics LHC communities. Up to tens of petabytes of data are collected every year by the four LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very...
- Dr Patrick Fuhrmann (DESY), 03/09/2007, 16:30, oral presentation
  With the start of the Large Hadron Collider at CERN at the end of 2007, the associated experiments will feed the major share of their data into the dCache Storage Element technology at most of the Tier 1 centers and many of the Tier 2s, including the larger sites. For a project not having its center of gravity at CERN, and receiving contributions from various loosely coupled sites in...
- Dr Gerd Behrmann (Nordic Data Grid Facility), 03/09/2007, 16:50, oral presentation
  The LCG collaboration encompasses a number of Tier 1 centers. The Nordic LCG Tier 1, in contrast to the other Tier 1 centers, is distributed over most of Scandinavia. A distributed setup was chosen for both political and technical reasons, but it also presents a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen...
- Dr Markus Schulz (CERN), 03/09/2007, 17:10, oral presentation
  As part of the EGEE project, the data management group at CERN has developed and supports a number of tools for various aspects of data management: a file catalog (LFC), a key store for encryption keys (Hydra), a grid file access library (GFAL) which transparently uses various byte-access protocols to access data in various storage systems, a set of utilities (lcg_utils) for higher-level...
- Mr Mario Lassnig (CERN & University of Innsbruck, Austria), 03/09/2007, 17:30, oral presentation
  The ATLAS detector at CERN's Large Hadron Collider presents data handling requirements on an unprecedented scale. From 2008 on, the ATLAS distributed data management system (DQ2) must manage tens of petabytes of event data per year, distributed globally via the LCG, OSG and NDGF computing grids, now known collectively as the WLCG. Since its inception in 2005, DQ2 has continuously managed all datasets...
- Dr Syed Naqvi (CoreGRID Network of Excellence), 03/09/2007, 17:50, oral presentation
  Security requirements of service-oriented architectures (SOA) are considerably higher than those of classical information technology (IT) architectures. Loose coupling, the inherent benefit of SOA, stipulates security as a service so as to circumvent tight binding of the services. The service integration interfaces are developed with minimal assumptions between the sending and receiving...
- Dr Marianne Bargiotti (European Organization for Nuclear Research (CERN)), 04/09/2007, 11:00, oral presentation
  The DIRAC Data Management System (DMS) relies on both WLCG data management services (LCG File Catalogues, Storage Resource Managers and FTS) and LHCb-specific components (the Bookkeeping Metadata File Catalogue). The complexity of the DMS and of its interactions with numerous WLCG components, as well as the instability of the facilities concerned, has frequently led to unexpected problems...
- Dr Markus Schulz (CERN), 04/09/2007, 11:20, oral presentation
  A key feature of WLCG's multi-tier model is a robust and reliable file transfer service that efficiently moves bulk data sets between the various tiers, corresponding to the different stages of production and user analysis. We describe the file transfer service in detail, covering both the Tier-0 data export and the inter-tier data transfers, discussing the transition and lessons learned in moving...
- Mr Igor Sfiligoi (FNAL), 04/09/2007, 11:40, oral presentation
  Grids are making it possible for Virtual Organizations (VOs) to run hundreds of thousands of jobs per day. However, the resources are distributed among hundreds of independent Grid sites, so a higher-level Workload Management System (WMS) is necessary. glideinWMS is a pilot-based WMS, inheriting several useful features: 1) Late binding: pilots are sent to all suitable Grid...
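Several contributions above (glideinWMS, PanDA, DIRAC) describe the same pilot-based, late-binding pattern: a lightweight pilot lands on a worker node, validates its environment, and only then pulls a real job from a central queue. The following is a minimal illustrative sketch of that pattern, not code from any of these systems; the names `JobQueue`, `pilot` and `environment_ok` are invented for illustration.

```python
import queue

class JobQueue:
    """Central task queue of the VO; jobs stay here until a pilot claims one."""
    def __init__(self):
        self._q = queue.Queue()

    def submit(self, payload):
        self._q.put(payload)

    def claim(self):
        # Late binding: a job is matched to a worker only now, after the
        # pilot has already started and validated its environment.
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None

def environment_ok():
    # Placeholder for real sanity checks (disk space, software releases, ...).
    return True

def pilot(jobs, results):
    """A pilot lands on a worker node, checks it, then pulls work until the queue is empty."""
    if not environment_ok():
        return  # a broken node never receives a real job
    while (job := jobs.claim()) is not None:
        results.append(job())  # execute the user payload
```

The key benefit the abstracts point to is that a faulty worker node only wastes a pilot, never a user job, because the binding happens after the environment check.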
- Mr Luigi Zangrando (INFN Padova), 04/09/2007, 12:00, oral presentation
  Modern Grid middlewares are built around components providing basic functionality, such as data storage, authentication and security, job management, and resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web-service-based job execution and management capability for Grid systems; in particular, it is...
- Paul Nilsson (UT-Arlington), 05/09/2007, 14:00, oral presentation
  The PanDA software provides a highly performant distributed production and distributed analysis system. It is the first system in the ATLAS experiment to use a pilot-based late job delivery technique. In this talk, we will describe the architecture of the pilot system used in PanDA. Unique features have been implemented for highly reliable automation in a distributed environment....
- Dr Stuart Paterson (CERN), 05/09/2007, 14:20, oral presentation
  The LHCb DIRAC Workload and Data Management System employs advanced optimization techniques in order to dynamically allocate resources. The paradigms realized by DIRAC, such as late binding through the Pilot Agent approach, have proven to be highly successful. For example, this has allowed the principles of workload management to be applied not only at the time of user job submission to...
- Mr Marco Cecchi (INFN CNAF), 05/09/2007, 14:40, oral presentation
  The gLite Workload Management System (WMS) is a collection of components providing a service responsible for the distribution and management of tasks across the resources available on a Grid. Its main purpose is to accept a job execution request from a client, find appropriate resources to satisfy it, and follow it until completion. Different aspects of job management are accomplished...
- Mr Igor Sfiligoi (FNAL), 05/09/2007, 15:00, oral presentation
  The advent of the Grid has made it possible for any user to run hundreds of thousands of jobs in a matter of days. However, the batch slots are not organized in a common pool, but are instead grouped in independent pools at hundreds of Grid sites distributed among the five continents. A higher-level Workload Management System (WMS) that aggregates resources from many sites is thus...
- Dr Sanjay Padhi (University of Wisconsin-Madison), 05/09/2007, 15:20, oral presentation
  With the evolution of various Grid technologies, along with the first LHC collisions foreseen this year, a homogeneous and interoperable production system for ATLAS is a necessity. We present CRONUS, a Condor glide-in based ATLAS production executor. The Condor glide-in daemons, submitted via Condor-G or the gLite RB, travel to the worker nodes. Once activated, they preserve the...
- Dr Steve Fisher (RAL), 05/09/2007, 15:40, oral presentation
  R-GMA, as deployed by LCG, is a large distributed system. We are currently addressing some design issues to make it highly reliable and fault tolerant. In validating the new design, there were two classes of problems to consider: one related to the flow of data and the other to the loss of control messages. R-GMA streams data from one place to another; there is a need to consider the...
- Mr Philippe Canal (FERMILAB), 05/09/2007, 16:30, oral presentation
  We will review the architecture and implementation of the accounting service for the Open Science Grid. Gratia's main goal is to provide the OSG stakeholders with a reliable and accurate set of views of the usage of resources across the OSG. We will review the status of the deployment of Gratia across the OSG and its upcoming development. We will also discuss some aspects of current OSG...
- Martin Flechl (IKP, Uppsala Universitet), 05/09/2007, 16:50, oral presentation
  A Grid is defined as "coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations". Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which have slightly...
- Julia Andreeva (CERN), 05/09/2007, 17:10, oral presentation
  The goal of the Grid is to provide coherent access to distributed computing resources. All LHC experiments are using several Grid infrastructures and a variety of middleware flavors. Due to the complexity and heterogeneity of a distributed system, monitoring represents a challenging task. Independently of the underlying platform, the experiments need to have a complete and uniform...
- Dr Antonio Pierro (INFN-BARI), 05/09/2007, 17:30, oral presentation
  Monitoring of grid user activity and application performance is extremely useful for planning resource usage strategies, particularly in the case of complex applications. Large VOs, like the LHC ones, do their monitoring by means of dashboards. Other VOs or communities, like for example the BioinforGRID one, are characterized by a greater diversification of application types: so...
- James Casey (CERN), 05/09/2007, 17:50, oral presentation
  During 2006, the Worldwide LHC Computing Grid Project (WLCG) constituted several working groups in the area of fabric and application monitoring, with the mandate of improving the reliability and availability of the grid infrastructure through improved monitoring of the grid fabric. This talk will discuss the "Grid Service Monitoring" Working Group, which has the aim of evaluating the...
- Dr Stephen Burke (Rutherford Appleton Laboratory, UK), 06/09/2007, 14:00, oral presentation
  A common information schema for the description of Grid resources and services is an essential requirement for interoperating Grid infrastructures, and its implementation interacts with every Grid component. In this context, the GLUE information schema was originally defined in 2002 as a joint project between the European DataGrid and DataTAG projects and the US iVDGL (the...
- Dr David Groep (NIKHEF), 06/09/2007, 14:20, oral presentation
  The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management is based around the well-known and trusted concepts of "user IDs" and "group IDs" that are local to the resource; in contrast, the grid concepts of user and group management are centered around globally assigned user identities and...
- Dr Andrew McNab (University of Manchester), 06/09/2007, 14:40, oral presentation
  Components of the GridSite system are used within WLCG and gLite to process security credentials and access policies. We describe recent extensions to this system to include the Shibboleth authentication framework of Internet2, and how the GridSite architecture can now import a wide variety of credential types, including one-time passcodes, X.509, GSI, VOMS, Shibboleth and OpenID, and then...
- Mr Riccardo Zappi (INFN-CNAF), 06/09/2007, 15:00, oral presentation
  In the near future, data on the order of hundreds of petabytes will be spread over multiple storage systems worldwide, dispersed in potentially billions of replicated data items. Users are typically agnostic about the location of their data and want access either by specifying logical names or by using some lookup mechanism. A global namespace is a logical layer that allows...
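The global-namespace idea described above (and underlying catalogues such as the LFC) can be reduced to one operation: resolving a location-independent logical file name (LFN) to one or more site-specific replica URLs. The sketch below is an invented toy illustration of that lookup, not the API of any of the catalogues mentioned; the catalogue contents and site names are made up.

```python
# Toy replica catalogue: logical file name -> list of physical replica URLs.
# Entries are invented examples for illustration only.
replica_catalogue = {
    "/grid/vo/run123/events.root": [
        "srm://se.site-a.example/data/run123/events.root",
        "srm://se.site-b.example/store/run123/events.root",
    ],
}

def resolve(lfn, prefer=None):
    """Return replica URLs for a logical name, optionally listing a preferred site first."""
    replicas = replica_catalogue.get(lfn, [])
    if prefer:
        # Stable sort: replicas whose URL mentions the preferred site come first,
        # so a job can read from a nearby storage element when one exists.
        replicas = sorted(replicas, key=lambda url: prefer not in url)
    return replicas
```

The point of the layer is exactly this indirection: users and applications keep using the stable logical name while replicas are added, moved, or removed underneath.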
- Valerio Venturi (INFN), 06/09/2007, 15:20, oral presentation
  The Virtual Organization Membership Service (VOMS) is a system for managing users in a Virtual Organization. It manages and releases user information such as group membership, roles, and other authorization data. VOMS was born with the aim of supporting dynamic, fine-grained, and multi-stakeholder access control to enable coordinated sharing in virtual organizations. The current software...
- Dirk Duellmann (CERN), 06/09/2007, 15:40, oral presentation
  The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start, CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. This contribution focuses on the description of the CORAL features for distributed database deployment. In particular we cover...
- Dr Greig A Cowan (University of Edinburgh), 06/09/2007, 16:30, oral presentation
  The start of data taking this year at the Large Hadron Collider will herald a new era in data volumes and distributed processing in particle physics. Data volumes of hundreds of terabytes will be shipped to Tier-2 centres for analysis by the LHC experiments using the Worldwide LHC Computing Grid (WLCG). In many countries Tier-2 centres are distributed between a number of institutes, e.g.,...
- Mr Ian Gable (University of Victoria), 06/09/2007, 16:50, oral presentation
  Deployment of HEP applications in heterogeneous grid environments can be challenging because many of the applications are dependent on specific OS versions and have a large number of complex software dependencies. Virtual machine monitors such as Xen could ease the deployment burden by allowing applications to be packaged complete with their execution environments. Our previous work has...
- Dr Alfredo Pagano (INFN/CNAF, Bologna, Italy), 06/09/2007, 17:10, oral presentation
  Worldwide grid projects such as EGEE and WLCG need services with high availability, not only for grid usage, but also for associated operations. In particular, tools used for daily activities or operational procedures are considered critical. In this context, the goal of the work done to solve the EGEE failover problem is to propose, implement and document well-established mechanisms and...
- Mr Robert Stober (Platform Computing), 06/09/2007, 17:30, oral presentation
  Universus is an extension to Platform LSF that provides a secure, transparent, one-way interface from an LSF cluster to any foreign cluster. A foreign cluster is a local or remote cluster managed by a non-LSF workload management system. Universus schedules work to foreign clusters as it would to any other execution host. Beyond its ability to interface with foreign workload...