- A. Ceseracciu (SLAC / INFN PADOVA), 27/09/2004, 14:00, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The Event Reconstruction Control System of the BaBar experiment was redesigned in 2002 to satisfy the following major requirements: flexibility and scalability. Because of its very nature, this system is continuously maintained to implement the changing policies typical of a complex, distributed production environment. In 2003, a major revolution in the BaBar computing model, the...
- J. Andreeva (UC Riverside), 27/09/2004, 14:20, Track 5 - Distributed Computing Systems and Experiences, oral presentation. One of the goals of the CMS Data Challenge in March-April 2004 (DC04) was to run reconstruction for a sustained period at a 25 Hz input rate, with distribution of the produced data to CMS T1 centers for further analysis. The reconstruction was run at the T0 using CMS production software, of which the main components are RefDB (the CMS Monte Carlo 'Reference Database' with a Web interface) and McRunjob...
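To put the 25 Hz target in perspective, a back-of-the-envelope calculation (assuming round-the-clock running, which the abstract does not state) gives the event volume such a sustained rate implies:

    # Scale implied by a sustained 25 Hz reconstruction rate.
    # Assumption: continuous 24-hour running; the duty cycle is not given in the abstract.
    rate_hz = 25                            # events reconstructed per second
    events_per_day = rate_hz * 24 * 60 * 60
    print(f"{events_per_day:,} events/day")              # 2,160,000 events/day
    print(f"{30 * events_per_day:,} events in 30 days")  # 64,800,000 events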
- L. Goossens (CERN), 27/09/2004, 14:40, Track 5 - Distributed Computing Systems and Experiences, oral presentation. In order to validate the Offline Computing Model and the complete software suite, ATLAS is running a series of Data Challenges (DC). The main goals of DC1 (July 2002 to April 2003) were the preparation and the deployment of the software required for the production of large event samples, and the production of those samples as a worldwide distributed activity. DC2 (May 2004 until...
- S. Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI "R. CACCIOPPOLI"), 27/09/2004, 15:00, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The standard procedures for the extraction of gravitational wave signals coming from coalescing binaries, provided by the output signal of an interferometric antenna, may require computing power generally not available in a single computing centre or laboratory. A way to overcome this problem consists of using the computing power available in different places as a single geographically...
- P. Buncic (CERN), 27/09/2004, 15:20, Track 5 - Distributed Computing Systems and Experiences, oral presentation. AliEn (ALICE Environment) is a Grid framework developed by the ALICE Collaboration and used in production for almost 3 years. From the beginning, the system was constructed using Web Services, standard network protocols and Open Source components. The main thrust of the development was on the design and implementation of an open and modular architecture. A large part of the component...
- H. Kornmayer (FORSCHUNGSZENTRUM KARLSRUHE (FZK)), 27/09/2004, 15:40, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The observation of high-energy gamma rays with ground-based air Cherenkov telescopes is one of the most exciting areas in modern astroparticle physics. At the end of 2003 the MAGIC telescope started operation. The low energy threshold for gamma rays, together with different background sources, leads to a considerable amount of data. The analysis will be done in different institutes...
- S. Burke (Rutherford Appleton Laboratory), 27/09/2004, 16:30, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The European DataGrid (EDG) project ran from 2001 to 2004, with the aim of producing middleware which could form the basis of a production Grid, and of running a testbed to demonstrate the middleware. HEP experiments (initially the four LHC experiments and subsequently BaBar and D0) were involved from the start in specifying requirements, and subsequently in evaluating the performance...
- M. Schulz (CERN), 27/09/2004, 16:50, Track 5 - Distributed Computing Systems and Experiences, oral presentation. LCG2 is a large-scale production grid formed by more than 40 sites distributed worldwide. The aggregate number of CPUs exceeds 3000, and several MSS systems are integrated in the system. Almost every site forms an independent administrative domain. At most of the larger sites the local computing resources have been integrated into the grid. The system has been used for large-scale...
- R. Pordes (FERMILAB), 27/09/2004, 17:10, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and running experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The...
- S. Dasu (UNIVERSITY OF WISCONSIN), 27/09/2004, 17:30, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The University of Wisconsin distributed computing research groups developed a software system called Condor for high-throughput computing using commodity hardware. An adaptation of this software, Condor-G, is part of the Globus grid computing toolkit. However, the original Condor has additional features that allow the building of an enterprise-level grid. Several UW departments have Condor computing...
- A. Lyon (FERMI NATIONAL ACCELERATOR LABORATORY), 27/09/2004, 17:50, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The SAMGrid team has recently refactored its test harness suite for greater flexibility and easier configuration. This makes possible more interesting applications of the test harness: component tests, integration tests, and stress tests. We report on the architecture of the test harness and its recent application to stress tests of a new analysis cluster at Fermilab, to explore...
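Since the abstract does not describe the harness's actual interfaces, the following sketch only illustrates the general shape of a pluggable test harness in which component, integration and stress tests share one registry and take their parameters from a configuration; every name below is invented, not SAMGrid's API:

    # Hypothetical test-harness skeleton: tests register under a category and
    # are selected and configured at run time. Illustrative only.
    import time

    REGISTRY = {}

    def test(kind):
        """Register a test under a category: component, integration or stress."""
        def wrap(fn):
            REGISTRY.setdefault(kind, []).append(fn)
            return fn
        return wrap

    @test("stress")
    def many_small_requests(config):
        # Stand-in for hammering an analysis cluster with many short operations.
        for _ in range(config["iterations"]):
            pass

    def run(kind, config):
        for fn in REGISTRY.get(kind, []):
            start = time.time()
            fn(config)
            print(f"{fn.__name__}: {time.time() - start:.3f}s")

    run("stress", {"iterations": 100_000})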
- A. Shevel (STATE UNIVERSITY OF NEW YORK AT STONY BROOK), 27/09/2004, 18:10, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The PHENIX collaboration records large volumes of data for each experimental run (now about 1/4 PB/year). Efficient and timely analysis of this data can benefit from a framework for distributed analysis via a growing number of remote computing facilities in the collaboration. The grid architecture has been, or is being, deployed at most of these facilities. The experience being...
- D. Smith (STANFORD LINEAR ACCELERATOR CENTER), for the BaBar Computing Group, 29/09/2004, 14:00, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The analysis of the BaBar experiment requires simulated data amounting to many times the measured data. This requirement has resulted in one of the largest distributed computing projects ever completed. The latest round of simulation for BaBar started in early 2003 and completed in early 2004, and encompassed over 1 million jobs and over 2.2...
- 29/09/2004, 14:20, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The CMS 2004 Data Challenge (DC04) was devised to test several key aspects of the CMS Computing Model in three ways: by trying to sustain a 25 Hz reconstruction rate at the Tier-0; by distributing the reconstructed data to six Tier-1 Regional Centers (FNAL in the US, FZK in Germany, Lyon in France, CNAF in Italy, PIC in Spain, RAL in the UK) and handling catalogue issues; and by redistributing...
- A. Klimentov (MIT), 29/09/2004, 14:40, Track 5 - Distributed Computing Systems and Experiences, oral presentation. AMS-02 Computing and Ground Data Handling. V. Choutko (MIT, Cambridge), A. Klimentov (MIT, Cambridge) and M. Pohl (Geneva University). AMS (Alpha Magnetic Spectrometer) is an experiment to search in space for dark matter and antimatter on the International Space Station (ISS). The AMS detector had a precursor flight in 1998 (STS-91, June 2-12, 1998)....
- A. Fanfani (INFN-BOLOGNA (ITALY)), 29/09/2004, 15:00, Track 5 - Distributed Computing Systems and Experiences, oral presentation. In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC...
- Rob Kennedy (FNAL), 29/09/2004, 15:20, Track 5 - Distributed Computing Systems and Experiences, oral presentation. Most of the simulated events for the DZero experiment at Fermilab have historically been produced by the "remote" collaborating institutions. One of the principal challenges reported concerns the maintenance of the local software infrastructure, which is generally different from site to site. As the community's understanding of distributed computing over distributively owned and...
- A. Peters (CERN), 29/09/2004, 15:40, Track 5 - Distributed Computing Systems and Experiences, oral presentation. During the first half of 2004 the ALICE experiment performed a large distributed computing exercise with two major objectives: to test the ALICE computing model, including distributed analysis, and to provide a data sample for a refinement of the ALICE jet physics Monte Carlo studies. Simulation, reconstruction and analysis of several hundred thousand events were performed, using the...
- J. Closier (CERN), 29/09/2004, 16:30, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The LHCb experiment performed its latest Data Challenge (DC) in May-July 2004. The main goal was to demonstrate the ability of the LHCb grid system to carry out massive production and efficient distributed analysis of the simulation data. The LHCb production system, called DIRAC, provided all the necessary services for the DC: Production and Bookkeeping Databases, File catalogs, Workload...
- M. Mambelli (UNIVERSITY OF CHICAGO), 29/09/2004, 16:50, Track 5 - Distributed Computing Systems and Experiences, oral presentation. We describe the design and operational experience of the ATLAS production system as implemented for execution on Grid3 resources. The execution environment consisted of a number of grid-based tools: Pacman for installation of VDT-based Grid3 services and ATLAS software releases, the Capone execution service built from the Chimera/Pegasus virtual data system for directed acyclic graph...
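The Chimera/Pegasus virtual data system mentioned above plans workflows as directed acyclic graphs. Purely as a sketch of the underlying idea (the graph and job names below are invented, not an ATLAS workflow), executing such a DAG means starting each job only after all of its predecessors have finished:

    # Toy DAG execution in dependency order (Python 3.9+ for graphlib).
    # The jobs and edges are illustrative, not taken from the production system.
    from graphlib import TopologicalSorter

    dag = {
        "simulate": [],
        "digitize": ["simulate"],
        "reconstruct": ["digitize"],
        "register_output": ["reconstruct"],
    }

    for job in TopologicalSorter(dag).static_order():
        print(f"running {job}")    # a real system would submit to a grid site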
- 29/09/2004, 17:10, Track 5 - Distributed Computing Systems and Experiences, oral presentation. This talk describes the various stages of ATLAS Data Challenge 2 (DC2) with respect to the usage of resources deployed via NorduGrid's Advanced Resource Connector (ARC). It also describes the integration of these resources with the ATLAS production system using the Dulcinea executor. DC2, run in 2004, was designed to be a step forward in the distributed data...
- 30/09/2004, 14:00, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The SETI@HOME project has proven to be one of the biggest successes of distributed computing in recent years. With a quite simple approach, SETI@HOME manages to process huge amounts of data using a vast amount of distributed computing power. To extend the generic usage of these kinds of distributed computing tools, BOINC (Berkeley Open Infrastructure for Network Computing) is being...
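The heart of the SETI@HOME/BOINC approach is splitting a large dataset into independent work units that volunteer clients fetch, process and report back. A minimal single-process sketch of that pattern (all names invented, no BOINC API involved):

    # Work-unit pattern in miniature: a queue of independent chunks, each
    # processed by whichever client asks next, results aggregated centrally.
    from queue import Queue

    def make_work_units(data, size):
        """Cut the dataset into independent, individually processable chunks."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    pending = Queue()
    for unit in make_work_units(list(range(100)), size=10):
        pending.put(unit)

    results = []
    while not pending.empty():
        unit = pending.get()        # a client fetches a work unit...
        results.append(sum(unit))   # ...processes it locally...
    print(sum(results))             # ...and the server combines the reports.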
- P.E. Tissot-Daguette (CERN), 30/09/2004, 14:20, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The AliEn system, an implementation of the Grid paradigm developed by the ALICE Offline Project, is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The AliEn Web Portal is built around Open Source components with a backend based on Grid Services and compliant with the OGSA model. An easy and intuitive presentation layer gives the...
- Julia Andreeva (CERN), 30/09/2004, 14:40, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow fast feedback between the experiment and middleware development teams via the construction and usage of end-to-end...
- M. Ballintijn (MIT), 30/09/2004, 15:00, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The Parallel ROOT Facility, PROOF, enables a physicist to analyze and understand very large data sets on an interactive time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. Scaling to many hundreds of servers is essential to process tens or hundreds of...
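PROOF itself is a C++/ROOT system, but the pattern the abstract describes, farming disjoint event ranges to workers that fill partial results which a master then merges, can be sketched generically; the Python stand-in below is not PROOF's API:

    # Master/worker event analysis: each worker histograms its own slice of
    # the events, the master merges the partial histograms. Illustrative only.
    from collections import Counter
    from multiprocessing import Pool

    def analyse(events):
        hist = Counter()
        for e in events:
            hist[e % 10] += 1      # toy "histogram" of a per-event quantity
        return hist

    if __name__ == "__main__":
        events = list(range(1_000_000))
        chunks = [events[i::4] for i in range(4)]  # one disjoint slice per worker
        with Pool(4) as pool:
            merged = sum(pool.map(analyse, chunks), Counter())
        print(merged)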
- D. Adams (BNL), 30/09/2004, 15:20, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The ATLAS distributed analysis (ADA) system is described. The ATLAS experiment has more than 2000 physicists from 150 institutions in 34 countries. Users, data and processing are distributed over these sites. ADA makes use of a collection of high-level web services whose interfaces are expressed in terms of AJDL (abstract job definition language), which includes descriptions of...
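The abstract gives only the name AJDL, not its schema, so the dataclasses below are merely a guess at the kind of fields an abstract job definition separates so that one job description can be bound to different sites and backends; nothing here is the actual AJDL:

    # Hypothetical abstract job definition: application, task and datasets
    # described separately from any concrete site or backend. Not real AJDL.
    from dataclasses import dataclass, field

    @dataclass
    class Application:
        name: str
        release: str

    @dataclass
    class Dataset:
        logical_files: list = field(default_factory=list)

    @dataclass
    class JobDefinition:
        application: Application
        task: str                  # e.g. the transformation to run
        input: Dataset
        output: Dataset

    job = JobDefinition(
        application=Application("example-reco", "1.0"),
        task="reconstruction",
        input=Dataset(["lfn:example.simul.0001"]),
        output=Dataset(),
    )
    print(job.application.name, job.task)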
- F. van Lingen (CALIFORNIA INSTITUTE OF TECHNOLOGY), 30/09/2004, 15:40, Track 5 - Distributed Computing Systems and Experiences, oral presentation. In this paper we report on the implementation of an early prototype of distributed high-level services supporting grid-enabled data analysis within the LHC physics community, as part of the ARDA project and within the context of the GAE (Grid Analysis Environment), and begin to investigate the associated complex behaviour of such an end-to-end system. In particular, the prototype...
- 30/09/2004, 16:30, Track 5 - Distributed Computing Systems and Experiences, oral presentation. Any physicist who will analyse data from the LHC experiments will have to deal with data and computing resources which are distributed across multiple locations and accessed with different methods. GANGA helps the end user by tying in specifically to the solutions for a given experiment, ranging from the specification of data to the retrieval and post-processing of produced output. For LHCb and ATLAS...
- N. De Filippis (UNIVERSITA' DEGLI STUDI DI BARI AND INFN), 30/09/2004, 16:50, Track 5 - Distributed Computing Systems and Experiences, oral presentation. During the CMS Data Challenge 2004 a real-time analysis was attempted at the INFN and PIC Tier-1 and Tier-2 sites in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 sites synchronously with the data transfer from the Tier-0 at CERN. The system was implemented in the Grid...
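One simple realization of such an agent (the directory layout and the submission step below are stand-ins, not the actual DC04 machinery) is a loop that polls the transfer destination and hands each newly arrived file to the analysis exactly once:

    # Sketch of a transfer-watching agent: poll a drop area and trigger the
    # analysis for each new file once. Paths and submit step are hypothetical.
    import time
    from pathlib import Path

    DROP_AREA = Path("/data/incoming")       # hypothetical transfer destination
    seen = set()

    def submit_analysis(path):
        print(f"submitting analysis job for {path.name}")

    while True:
        if DROP_AREA.is_dir():
            for f in sorted(DROP_AREA.glob("*.root")):
                if f not in seen:
                    seen.add(f)
                    submit_analysis(f)
        time.sleep(30)                       # poll interval in seconds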
- K. Wu (LAWRENCE BERKELEY NATIONAL LAB), 30/09/2004, 17:10, Track 5 - Distributed Computing Systems and Experiences, oral presentation. Nuclear and High Energy Physics experiments such as STAR at BNL are generating millions of files with petabytes of data each year. In most cases, analysis programs have to read all events in a file in order to find the interesting ones. Since most analyses are only interested in some subsets of events in a number of files, a significant portion of the computer time is wasted on...
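The truncated abstract does not spell out the proposed remedy, so the sketch below shows only the generic idea behind event-level indexing: build a small index over a selection attribute once, then answer range queries by binary search instead of reading every event:

    # Generic event-index sketch (not the authors' implementation): index one
    # attribute once, then find qualifying events without a full scan.
    from bisect import bisect_right
    import random

    events = [{"pt": random.uniform(0.0, 100.0)} for _ in range(1_000_000)]

    # One-time build: attribute values sorted together with their event ids.
    index = sorted((e["pt"], i) for i, e in enumerate(events))
    keys = [value for value, _ in index]

    def select(pt_cut):
        """Return ids of events with pt > pt_cut via binary search."""
        return [i for _, i in index[bisect_right(keys, pt_cut):]]

    print(f"{len(select(95.0))} of {len(events)} events pass the cut")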
- T. Johnson (SLAC), 30/09/2004, 17:30, Track 5 - Distributed Computing Systems and Experiences, oral presentation. The aim of the service is to allow fully distributed analysis of large volumes of data while maintaining true (sub-second) interactivity. All the Grid-related components are based on OGSA-style Grid services and use existing Globus Toolkit 3.0 (GT3) services to the maximum extent. All transactions are authenticated and authorized using the GSI (Grid Security Infrastructure) mechanism -...
- G.R. Moloney, 30/09/2004, 17:50, Track 5 - Distributed Computing Systems and Experiences, oral presentation. We have developed and deployed a data grid for the processing of data from the Belle experiment, and for the production of simulated Belle data. The Belle Analysis Data Grid brings together compute and storage resources across five separate partners in Australia, and the Computing Research Centre at the KEK laboratory in Tsukuba, Japan. The data processing resources are general...
- A. Sill (TEXAS TECH UNIVERSITY), 30/09/2004, 18:10, Track 5 - Distributed Computing Systems and Experiences, oral presentation. To maximize the physics potential of the data currently being taken, the CDF collaboration at Fermi National Accelerator Laboratory has started to deploy user analysis computing facilities at several locations throughout the world. Over 600 users are signed up and able to submit their physics analysis and simulation applications directly from their desktop or laptop computers to these...