-
T. Coviello (INFN, Via E. Orabona 4, I-70126 Bari, Italy) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
A grid system is a set of heterogeneous computational and storage resources, distributed on a large geographic scale, which belong to different administrative domains and serve several different scientific communities named Virtual Organizations (VOs). A virtual organization is a group of people or institutions which collaborate to achieve common objectives. Therefore such a system has...
-
G. Rubini (INFN-CNAF) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Analyzing Grid monitoring data requires the capability of dealing with multidimensional concepts intrinsic to Grid systems. The meaningful dimensions identified in recent works are the physical dimension, referring to the geographical location of resources, the Virtual Organization (VO) dimension, the time dimension and the monitoring metrics dimension. In this paper, we discuss the...
-
M. Jones (Manchester University) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The BaBar experiment has accumulated many terabytes of data on particle physics reactions, accessed by a community of hundreds of users. Typical analysis tasks are C++ programs, individually written by the user, using shared templates and libraries. The resources have outgrown a single platform and a distributed computing model is needed. The grid provides the natural toolset....
-
T. Coviello (DEE, Politecnico di Bari, V. Orabona 4, 70125 Bari, Italy) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
A computing grid is a large-scale, geographically distributed and heterogeneous system that provides a common platform for running different grid-enabled applications. As each application has different characteristics and requirements, it is difficult to develop a scheduling strategy able to achieve optimal performance, because application-specific and dynamic system status have...
-
The ARDA Team | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow fast feedback between the experiment and the middleware development teams via the construction and usage of end-to-end...
-
D. Malon (ANL) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
As ATLAS begins validation of its computing model in 2004, requirements imposed upon ATLAS data management software move well beyond simple persistence, and beyond the "read a file, write a file" operational model that has sufficed for most simulation production. New functionality is required to support the ATLAS Tier 0 model, and to support deployment in a globally distributed...
-
L. Poncet (LAL-IN2P3) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
In the last few years grid software (middleware) has become available from various sources. However, there are no standards yet which allow for an easy integration of different services. Moreover, middleware was produced by different projects with the main goal of developing new functionalities rather than production quality software. In the context of the LHC Computing Grid...
-
T. Wlodek (Brookhaven National Lab) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
A description of a Condor-based, Grid-aware batch software system configured to function asynchronously with a mass storage system is presented. The software is currently used in a large Linux Farm (2700+ processors) at the RHIC and ATLAS Tier 1 Computing Facility at Brookhaven Lab. Design, scalability, reliability, features and support issues with a complex Condor-based batch...
-
A. Wagner (CERN) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
CERN has about 5500 Desktop PCs. These computers offer a large pool of resources that can be used for physics calculations outside office hours. The paper describes a project to make use of the spare CPU cycles of these PCs for LHC tracking studies. The client-server application is implemented as a lightweight, modular screensaver and a Web Application containing the physics job...
-
P. Love (Lancaster University) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Building on several years of success with the MCRunjob projects at DZero and CMS, the Fermilab-sponsored joint Runjob project aims to provide a workflow description language common to three experiments: DZero, CMS and CDF. This project will encapsulate the remote processing experiences of the three experiments in an extensible software architecture using web services as...
-
T. Harenberg (University of Wuppertal) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The D0 experiment at the Tevatron is collecting some 100 Terabytes of data each year and has a very high need for computing resources for the various parts of the physics program. D0 meets these demands with a worldwide distributed computing infrastructure, increasingly based on GRID technologies. Distributed resources are used for D0 MC production and data reprocessing of 1 billion events, requiring 250 TB to be...
-
O. Smirnova (Lund University, Sweden) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
In common grid installations, the services responsible for storing big chunks of data, replicating those data and indexing their availability are usually completely decoupled. The task of synchronizing data is passed either to user-level tools or to separate services (like spiders), which are subject to failure and usually cannot perform properly if one of the underlying services fails too. The...
-
D. Wicke (Fermilab) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The D0 experiment faces many challenges enabling access to large datasets for physicists on 4 continents. The strategy of solving these problems on worldwide distributed computing clusters is followed. Since the beginning of Tevatron Run II (March 2001), all Monte Carlo simulations have been produced outside of Fermilab at remote systems. For analyses, a system of regional...
-
L. Lueking (Fermilab) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivery of the data to users and processing farms based around the world has represented major challenges to both experiments. The range of applications employing databases includes data management, calibration (conditions), trigger information, run...
-
S. Stonjek (Fermi National Accelerator Laboratory / University of Oxford) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
CDF is an experiment at the Tevatron at Fermilab. One dominating factor of the experiment's computing model is the high volume of raw, reconstructed and generated data. The distributed data handling services within SAM move these data to physics analysis applications. The SAM system was already in use at the D-Zero experiment. Due to differences in the computing model of the...
-
I. Stokes-Rees (University of Oxford Particle Physics) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The DIRAC system developed for the CERN LHCb experiment is a grid infrastructure for managing generic simulation and analysis jobs. It enables jobs to be distributed across a variety of computing resources, such as PBS, LSF, BQS, Condor, Globus, LCG, and individual workstations. A key challenge of distributed service architectures is that there is no single point of control over...
-
V. Garonne (CPPM-IN2P3, Marseille) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The Workload Management System (WMS) is the core component of the DIRAC distributed MC production and analysis grid of the LHCb experiment. It uses a central Task database which is accessed via a set of central Services, with Agents running on each of the LHCb sites. DIRAC uses a 'pull' paradigm where Agents request tasks whenever they detect that their local resources are available. The...
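The 'pull' paradigm described in this abstract can be sketched in a few lines of Python. This is an illustrative toy, not DIRAC's actual API: the class names, the task queue, and the polling logic are all invented here to show the idea that agents, not a central scheduler, initiate work transfer when they see free local capacity.

```python
import queue

class TaskDB:
    """Toy central task database: a thread-safe FIFO of pending tasks."""
    def __init__(self, tasks):
        self.q = queue.Queue()
        for t in tasks:
            self.q.put(t)

    def request_task(self):
        """Return the next pending task, or None if the queue is empty."""
        try:
            return self.q.get_nowait()
        except queue.Empty:
            return None

class SiteAgent:
    """Toy site agent: pulls work only while local slots are free."""
    def __init__(self, name, free_slots):
        self.name = name
        self.free_slots = free_slots
        self.running = []

    def poll(self, task_db):
        # The agent, not the server, initiates the transfer ("pull").
        while self.free_slots > 0:
            task = task_db.request_task()
            if task is None:
                break
            self.running.append(task)
            self.free_slots -= 1

db = TaskDB(["mc-001", "mc-002", "mc-003"])
a, b = SiteAgent("siteA", free_slots=2), SiteAgent("siteB", free_slots=2)
a.poll(db)   # siteA pulls two tasks
b.poll(db)   # siteB pulls the remaining one
```

The central service stays simple (it only hands out tasks on request), which is one reason pull architectures scale well across unreliable wide-area sites.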
-
M.G. Pia (INFN Genova) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
We show how nowadays it is possible to achieve the goal of accuracy and fast computation response in radiotherapy dosimetry using Monte Carlo methods, together with a distributed computing model. Monte Carlo methods have never been used in clinical practice because, even if they are more accurate than available commercial software, the calculation time needed to accumulate sufficient...
-
L. Guy (CERN) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Extensive and thorough testing of the EGEE middleware is essential to ensure that a production quality Grid can be deployed on a large scale as well as across the broad range of heterogeneous resources that make up the hundreds of Grid computing centres both in Europe and worldwide. Testing of the EGEE middleware encompasses the tasks of both verification and validation. In addition...
-
L. Matyska (CESNET, Czech Republic) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The Logging and Bookkeeping service tracks jobs passing through the Grid. It collects important events generated by both the grid middleware components and applications, and processes them at a chosen L&B server to provide the job state. The events are transported through secure reliable channels. Job tracking is fully distributed and does not depend on a single information source, the...
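The core idea (folding a stream of middleware events into a single job state) can be sketched as follows. This is a deliberately simplified illustration: the event names, the linear state machine, and the "furthest state wins" rule are assumptions for the sketch, not the actual L&B state model.

```python
# Toy reduction of middleware events into a job state. Events may
# arrive out of order from different Grid components, so we keep the
# state that is furthest along the (assumed) lifecycle ordering.
ORDER = ["SUBMITTED", "WAITING", "READY", "SCHEDULED", "RUNNING", "DONE"]

def job_state(events):
    """Fold (timestamp, event) pairs into the current job state."""
    state = "SUBMITTED"
    for _ts, ev in events:
        if ev in ORDER and ORDER.index(ev) > ORDER.index(state):
            state = ev
    return state

# Events collected from several sources, not in chronological order:
events = [(3, "RUNNING"), (1, "SUBMITTED"), (2, "SCHEDULED")]
print(job_state(events))  # RUNNING
```

Making the reduction independent of arrival order is what lets job tracking work without a single, ordered information source.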
-
P. Mendez Lorenzo (CERN IT/GD) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
In a Grid environment, access to information on system resources is a necessity in order to perform common tasks such as matching job requirements with available resources, accessing files or presenting monitoring information. Thus both middleware services, like workload and data management, and applications, like monitoring tools, require an interface to the Grid information...
-
X. Zhao (Brookhaven National Laboratory) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
This paper describes the deployment and configuration of the production system for ATLAS Data Challenge 2 starting in May 2004, at Brookhaven National Laboratory, which is the Tier1 center in the United States for the International ATLAS experiment. We will discuss the installation of Windmill (supervisor) and Capone (executor) software packages on the submission host and the relevant...
-
R. Santinelli (CERN/IT/GD) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The management of Application and Experiment Software represents a very common issue in emerging grid-aware computing infrastructures. While the middleware is often installed by system administrators at a site via customized tools that serve also for the centralized management of the entire computing facility, the problem of installing, configuring and validating Gigabytes of Virtual...
-
R. Walker (Simon Fraser University) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
A large number of Grids have been developed, motivated by geo-political or application requirements. Despite being mostly based on the same underlying middleware, the Globus Toolkit, they are generally not inter-operable for a variety of reasons. We present a method of federating those disparate grids which are based on the Globus Toolkit, together with a concrete example of interfacing...
-
V. Fine (Brookhaven National Laboratory) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Most HENP experiment software includes a logging or tracing API that allows important feedback from the core application to be displayed in a particular format. However, inserting log statements into the code is a low-tech method for tracing the program execution flow and often leads to a flood of messages in which the relevant ones are occluded. In a distributed computing...
-
R. Barbera (Univ. Catania and INFN Catania) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
Computational and data grids are now entering a more mature phase in which experimental test-beds are turned into production-quality infrastructures operating around the clock. All this is becoming true both at the national level, where an example is the Italian INFN production grid (http://grid-it.cnaf.infn.it), and at the continental level, where the most striking example is the European Union...
-
T. Antoni (GGUS) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
For very large projects like the LHC Computing Grid Project (LCG), involving 8,000 scientists from all around the world, well-organized user support is an indispensable requirement. The Institute for Scientific Computing at the Forschungszentrum Karlsruhe started implementing a Global Grid User Support (GGUS) after official assignment by the Grid Deployment Board in March...
-
A. Retico (CERN) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The installation and configuration of LCG middleware, as it is currently being done, is complex and delicate. An “accurate” configuration of all the services of LCG middleware requires a deep knowledge of the inside dynamics and hundreds of parameters to be dealt with. On the other hand, the number of parameters and flags that are strictly needed in order to run a working “default”...
-
L. Field (CERN) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
This paper reports on the deployment experience of the de facto grid information system, Globus MDS, in a large scale production grid. The results of this experience led to the development of an information caching system based on a standard OpenLDAP database. The paper then describes how this caching system was developed further into a production quality information system including a...
-
H. Tallini (Imperial College London) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
GROSS (GRidified Orca Submission System) has been developed to provide CMS end users with a single interface for running batch analysis tasks over the LCG-2 Grid. The main purpose of the tool is to carry out job splitting, preparation, submission, monitoring and archiving in a transparent way which is simple to use for the end user. Central to its design has been the requirement for...
-
A. Gellrich (DESY) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
DESY is one of the world-wide leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until 2006 at least. The computer center manages a data volume of order 1 PB and is home to around 1000 CPUs. In 2003 DESY started to set up a...
-
M. Burgon-Lyon (University of Glasgow) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
JIM (Job and Information Management) is a grid extension to the mature data handling system called SAM (Sequential Access via Metadata) used by the CDF, DZero and Minos experiments based at Fermilab. JIM uses a thin client to allow job submissions from any computer with Internet access, provided the user has a valid certificate or Kerberos ticket. On completion the job output can be...
-
A. Anjum (NIIT) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
In the context of the Interactive Grid-Enabled Analysis Environment (GAE), physicists desire bi-directional interaction with the jobs they submit. In one direction, monitoring information about the job, and hence a “progress bar”, should be provided to them. In the other direction, physicists should be able to control their jobs. Before submission, they may direct the job to some specified...
-
A. Anjum (NIIT) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The Grid is emerging as a great computational resource, but its dynamic behaviour makes the Grid environment unpredictable. System failure or network failure can occur, or the system performance can degrade. So once a job has been submitted, monitoring becomes essential for the user to ensure that the job is completed in an efficient way. In current environments, once a user submits a job he...
-
G. Donvito (Università degli Studi di Bari), G. Tortone (INFN Napoli) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
In a wide-area distributed and heterogeneous grid environment, monitoring represents an important and crucial task. It includes system status checking, performance tuning, bottleneck detection, troubleshooting and fault notification. In particular, a good monitoring infrastructure must provide the information to track down the current status of a job in order to locate any problems....
-
E.M.V. Fasanelli (I.N.F.N.) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The infn.it AFS cell has been providing a useful single file-space and authentication mechanism for the whole of INFN, but the lack of a distributed management system has led several INFN sections and labs to set up local AFS cells. The hierarchical transitive cross-realm authentication introduced in the Kerberos 5 protocol and the new versions of the OpenAFS and MIT implementations of...
-
D. Rebatto (INFN Milano) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
In this paper we present an overview of the implementation of the LCG interface for the ATLAS production system. In order to take advantage of the features provided by DataGRID software, on which LCG is based, we implemented a Python module, seamlessly integrated into the Workload Management System, which can be used as an object-oriented API to the submission services. On top of it we...
-
L. Tuura (Northeastern University, Boston, MA, USA) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Experiments frequently produce many small data files for reasons beyond their control, such as output splitting into physics data streams, parallel processing on large farms, database technology incapable of concurrent writes into a single file, and constraints from running farms reliably. Resulting data file size is often far from ideal for network transfer and mass storage performance....
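A common remedy for the small-file problem this abstract describes is to pack many small files into merge groups of a target size before transfer or tape archival. The greedy grouping below is a minimal sketch of that idea; the file names, sizes, and the packing rule are illustrative, not the experiment's actual merging tool.

```python
def pack(files, target_size):
    """Greedily pack (name, size) pairs into merge groups whose total
    size reaches a target, so that tape drives and networks see large
    sequential transfers instead of many small ones."""
    groups, current, total = [], [], 0
    for name, size in files:
        current.append(name)
        total += size
        if total >= target_size:
            groups.append(current)
            current, total = [], 0
    if current:
        groups.append(current)  # leftover partial group
    return groups

# Hypothetical small output files with sizes in MB:
small = [("run1.root", 40), ("run2.root", 70), ("run3.root", 30),
         ("run4.root", 90), ("run5.root", 20)]
print(pack(small, target_size=100))
```

Real merging must also track provenance (which events ended up in which merged file), which is where bookkeeping systems come in.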
-
S. Thorn | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The University of Edinburgh has a significant interest in mass storage systems as it is one of the core groups tasked with the roll-out of storage software for the UK's particle physics grid, GridPP. We present the results of a development project to provide software interfaces between the SDSC Storage Resource Broker, the EU DataGrid and the Storage Resource Manager. This project was...
-
I. Legrand (Caltech) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The design and optimization of the Computing Models for the future LHC experiments, based on the Grid technologies, requires a realistic and effective modeling and simulation of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workflow created by many concurrent, data intensive jobs on large scale distributed systems. This paper...
-
E. Berman (Fermilab) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Fermilab operates a petabyte scale storage system, Enstore, which is the primary data store for experiments' large data sets. The Enstore system regularly transfers greater than 15 Terabytes of data each day. It is designed using a client-server architecture providing sufficient modularity to allow easy addition and replacement of hardware and software components. Monitoring of this...
-
G. Zito (INFN Bari) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The complexity of the CMS Tracker (more than 50 million channels to monitor), now under construction in ten laboratories worldwide with hundreds of people involved, will require new tools for monitoring both the hardware and the software. In our approach we use both visualization tools and Grid services to make this monitoring possible. The use of visualization enables us to represent...
-
D. Sanders (University of Mississippi) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent...
-
I. Adachi (KEK) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity has exceeded 800 pb-1. This requires a more efficient and reliable way of event processing. To meet this requirement, a new offline processing scheme has been constructed, based upon techniques employed for the Belle online reconstruction farm. Event processing is...
-
E. Berdnikov (Institute for High Energy Physics, Protvino, Russia) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The scope of this work is the study of the scalability limits of a Certification Authority (CA) running for large scale GRID environments. The operation of the Certification Authority is analyzed from the point of view of the rate of incoming requests, the complexity of authentication procedures, LCG security restrictions and other limiting factors. It is shown that standard CA operational...
-
C. Nicholson (University of Glasgow) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
In large-scale Grids, the replication of files to different sites is an important data management mechanism which can reduce access latencies and give improved usage of resources such as network bandwidth, storage and computing power. In the search for an optimal data replication strategy, the Grid simulator OptorSim was developed as part of the European DataGrid project. Simulations of...
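The kind of replication decision such a simulator evaluates can be reduced to a cost comparison: copy a file locally only when the predicted cost of continued remote reads exceeds the one-off cost of making the replica. The sketch below is a toy version of that trade-off; the cost parameters and the naive demand prediction are assumptions for illustration, not OptorSim's actual economic model.

```python
def should_replicate(recent_accesses, file_size_mb,
                     remote_cost_per_mb=1.0, replica_cost_per_mb=5.0):
    """Toy replication decision: replicate when serving the predicted
    remote reads costs more than transferring one local copy.
    Here the predicted demand is simply the recent access count."""
    remote_cost = recent_accesses * file_size_mb * remote_cost_per_mb
    replica_cost = file_size_mb * replica_cost_per_mb
    return remote_cost > replica_cost

# A rarely-read file is cheaper to keep remote; a hot file is not:
print(should_replicate(recent_accesses=2, file_size_mb=100))  # False
print(should_replicate(recent_accesses=8, file_size_mb=100))  # True
```

Simulation lets one sweep these parameters against recorded access patterns before committing real storage and bandwidth.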
-
G. Shabratova (Joint Institute for Nuclear Research (JINR)) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The report presents an analysis of the ALICE Data Challenge 2004. This Data Challenge has been performed on two different distributed computing environments. The first one is the Alice Environment for distributed computing (AliEn), used standalone. Presently this environment allows ALICE physicists to obtain results on simulation, reconstruction and analysis of data in ESD format for...
-
S. Mrenna (Fermilab) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
PATRIOT is a project that aims to provide better predictions of physics events for the high-Pt physics program of Run2 at the Tevatron collider. Central to PATRIOT is an Enstore (mass storage) repository for files describing the high-Pt physics predictions. These are typically stored as StdHep files which can be handled by CDF and D0 and run through detector and triggering...
-
B. Quinn (The University of Mississippi) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of collaborators poses further serious...
-
A. Anjum (NIIT) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Grid computing provides key infrastructure for distributed problem solving in dynamic virtual organizations. However, Grids are still the domain of a few highly trained programmers with expertise in networking, high-performance computing, and operating systems. One of the big issues in the full-scale usage of a grid is the matching of the resource requirements of a job submission to...
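The matchmaking problem mentioned here (pairing a job's resource requirements with advertised resources) can be sketched as a simple predicate filter. The requirement keys and the resource records below are invented for illustration; production systems such as Condor's ClassAd matchmaking use a far richer expression language.

```python
def match(job, resources):
    """Toy matchmaker: return the names of resources that satisfy
    every requirement of the job (CPU slots, memory, installed
    software). All attribute names here are illustrative."""
    def ok(res):
        return (res["free_cpus"] >= job["cpus"]
                and res["memory_gb"] >= job["memory_gb"]
                and job["software"] in res["software"])
    return [r["name"] for r in resources if ok(r)]

# Hypothetical resource advertisements and a job request:
resources = [
    {"name": "ce01.example.org", "free_cpus": 16, "memory_gb": 2,
     "software": ["atlas-8.0"]},
    {"name": "ce02.example.org", "free_cpus": 4, "memory_gb": 4,
     "software": ["atlas-8.0", "cms-3.1"]},
]
job = {"cpus": 2, "memory_gb": 4, "software": "cms-3.1"}
print(match(job, resources))  # ['ce02.example.org']
```

Hiding this filtering behind a portal or broker is precisely what lets non-expert users submit jobs without knowing the grid's topology.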
-
For the BaBar Computing Group | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
BaBar has recently moved away from using Objectivity/DB for its event store towards a ROOT-based event store. Data in the new format is produced at about 20 institutions worldwide as well as at SLAC. Among the new challenges are the organization of data export from remote institutions, archival at SLAC and making the data visible to users for analysis and...
-
A. Hasan (SLAC) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
We describe the production experience gained from implementing and using exclusively the San Diego Super Computer Center developed Storage Resource Broker (SRB) to distribute the BaBar experiment's production event data, stored in ROOT files, from the experiment center at SLAC, California, USA to a Tier A computing center at CC-IN2P3, Lyon, France. In addition we outline how the system can...
-
D. Andreotti (INFN Sezione di Ferrara) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility to evolve toward a distributed computing model in a Grid environment. In 2003, a new computing model, described in other talks, was implemented, and ROOT I/O is now being used as the Event Store. We implemented a system, based on the LHC Computing Grid (LCG) tools, to submit...
-
I. Terekhov (Fermi National Accelerator Laboratory) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
SAMGrid is a globally distributed system for data handling and job management, developed at Fermilab for the D0 and CDF experiments in Run II. The Condor system is being developed at the University of Wisconsin for management of distributed resources, computational and otherwise. We briefly review the SAMGrid architecture and its interaction with Condor, which was presented earlier. We...
-
A. Lyon (Fermi National Accelerator Laboratory) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The SAMGrid team is in the process of implementing a monitoring and information service, which fulfills several important roles in the operation of the SAMGrid system, and will replace the first generation of monitoring tools in the current deployments. The first generation tools are in general based on text logfiles and represent solutions which are not scalable or maintainable. The...
-
E. Slabospitskaya (Institute for High Energy Physics, Protvino, Russia) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
Storage Resource Manager (SRM) and Grid File Access Library (GFAL) are GRID middleware components used for transparent access to Storage Elements. SRM provides a common interface (WEB service) to backend systems, giving dynamic space allocation and file management. GFAL provides a mechanism whereby application software can access a file at a site without having to know which transport...
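The "transparent access" idea can be illustrated as a resolution chain: the application names a logical file, a replica catalogue maps it to the storage URLs of physical copies, and the library picks a replica, all without the application knowing transport details. The catalogue contents, URL forms, and site-preference rule below are invented for this sketch and are not GFAL's actual interface.

```python
# Toy replica catalogue: logical file name -> storage URLs of copies.
# Entries here are hypothetical examples.
CATALOGUE = {
    "lfn:/grid/dteam/higgs.dat": [
        "srm://se1.example.org/dteam/higgs.dat",
        "srm://se2.example.net/dteam/higgs.dat",
    ],
}

def resolve(lfn, preferred_site):
    """Return a replica at the preferred site if one exists,
    otherwise fall back to the first known replica (or None)."""
    replicas = CATALOGUE.get(lfn, [])
    for surl in replicas:
        if preferred_site in surl:
            return surl
    return replicas[0] if replicas else None

print(resolve("lfn:/grid/dteam/higgs.dat", "example.net"))
```

The application only ever handles the logical name; everything after that is the middleware's concern, which is what makes storage backends interchangeable.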
-
V. Bartsch (Oxford University) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
To distribute computing for CDF (Collider Detector at Fermilab) a system managing local compute and storage resources is needed. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) system which is already in use at Fermilab. DCAF has to work with the data handling system SAM (Sequential Access to data via Metadata). However, both DCAF and SAM are mature systems which...
-
R. Jones (LANCAS) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The ATLAS Computing Model is under continuous active development. Previous exercises focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource implications and only a high-level view of the data and workflow. The work presented here considerably revises the resource implications, and attempts to describe in some detail the data and control flow from the High Level...
-
Douglas Smith (Stanford Linear Accelerator Center) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The new BaBar bookkeeping system comes with tools to directly support data analysis tasks. This Task Manager system acts as an interface between datasets defined in the bookkeeping system, which are used as input to analyses, and the offline analysis framework. The Task Manager organizes the processing of the data by creating specific jobs to be either submitted to a batch system, or...
-
A. Boehnlein (Fermi National Accelerator Laboratory) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The D0 experiment relies on large-scale computing systems to achieve its physics goals. As the experiment lifetime spans multiple generations of computing hardware, it is fundamental to make projective models in order to use available resources to meet the anticipated needs. In addition, computing resources can be supplied as in-kind contributions by collaborating institutions and...
-
C. Arnault (CNRS) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
One of the most important problems in software management of a very large and complex project such as Atlas is how to deploy the software on the running sites. By running sites we include computer sites ranging from computing centers in the usual sense down to individual laptops, but also the computer elements of a computing grid organization. The deployment activity consists in...
-
S. Bagnasco (INFN Torino) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
AliEn (ALICE Environment) is a GRID middleware developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. In order to run Data Challenges exploiting both AliEn “native” resources and any infrastructure based on EDG-derived middleware (such as the LCG and the Italian GRID.IT), an interface system was designed and implemented; some details of a prototype were already...
-
J. Kennedy (LMU Munich) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
This paper presents an overview of the legacy interface provided for the ATLAS DC2 production system. The term legacy refers to any non-grid system which may be deployed for use within DC2. The reasoning behind providing such a service for DC2 is twofold. Firstly, the legacy interface provides a backup solution should unforeseen problems occur while developing the grid...
-
A. Kreymer (Fermilab) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
The Fermilab CDF Run-II experiment is now providing official support for remote computing, expanding this to about 1/4 of the total CDF computing during the Summer of 2004. I will discuss in detail the extensions to CDF software distribution and configuration tools and procedures, in support of CDF GRID/DCAF computing for Summer 2004. We face the challenge of unreliable networks, time...
-
29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
In the High Energy Physics (HEP) community, Grid technologies have been accepted as solutions to the distributed computing problem. Several Grid projects have provided software in the last years. Among all of them, the LCG, especially aimed at HEP applications, provides a set of services and respective client interfaces, both in the form of command line tools as well as programming...
-
P. Cerello (INFN Torino) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
Breast cancer screening programs require managing and accessing a huge amount of data, intrinsically distributed, as they are collected in different Hospitals. The development of an application based on Computer Assisted Detection algorithms for the analysis of digitised mammograms in a distributed environment is a typical GRID use case. In particular, AliEn (ALICE Environment)...
-
O. Smirnova (Lund University, Sweden) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The Nordic Grid facility (NorduGrid) came into production operation during the summer of 2002, when the Scandinavian Atlas HEP group started to use the Grid for the Atlas Data Challenges, and was thus the first Grid ever contributing to an Atlas production. Since then, the Grid facility has been in continuous 24/7 operation, offering an increasing number of resources to a growing set of...
-
E. Perez-Calle (CIEMAT) | 29/09/2004, 10:00 | Track 4 - Distributed Computing Services | poster
The expansion of large computing fabrics/clusters throughout the world creates a need for stricter security. Otherwise any system could suffer damage such as data loss, data falsification or misuse. Perimeter security and intrusion detection systems (IDS) are the two main aspects that must be taken into account in order to achieve system security. The main target of an intrusion...
-
F. Furano (INFN Padova) | 29/09/2004, 10:00 | Track 5 - Distributed Computing Systems and Experiences | poster
This paper describes XTNetFile, the client side of a project conceived to address the high-demand data access needs of modern physics experiments such as BaBar using the ROOT framework. In this context, a highly scalable and fault tolerant client/server architecture for data access has been designed and deployed which allows thousands of batch jobs and interactive sessions to...