- G. Cancio (CERN), 27/09/2004, 14:00, Track 6 - Computer Fabrics, oral presentation
  This paper describes the evolution of fabric management at CERN's T0/T1 Computing Center, from the selection and adoption of prototypes produced by the European DataGrid (EDG) project [1] to the enhancements made to them. In the last year of the EDG project, developers and service managers have been working to understand and solve operational and scalability issues. CERN has adopted and...
- Tomasz Wlodek (BNL), 27/09/2004, 14:20, Track 6 - Computer Fabrics, oral presentation
  This presentation describes the experiences and lessons learned by the RHIC/ATLAS Computing Facility (RACF) in building and managing its 2,700+ CPU (and growing) Linux farm over the past 6+ years. We describe how hardware cost, end-user needs, infrastructure, footprint, hardware configuration, vendor selection, software support and other considerations have played a role in...
- Don Petravick, 27/09/2004, 14:40, Track 6 - Computer Fabrics, oral presentation
  As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" project, Fermilab builds and operates production clusters for lattice QCD simulations. We currently operate three clusters: a 128-node dual-Xeon Myrinet cluster, a 128-node Pentium 4E Myrinet cluster, and a 32-node dual-Xeon Infiniband cluster. We will discuss the operation of these systems and examine their...
- S. Thorn, 27/09/2004, 15:00, Track 6 - Computer Fabrics, oral presentation
  ScotGrid is a prototype regional computing centre formed as a collaboration between the universities of Durham, Edinburgh and Glasgow as part of the UK's national particle physics grid, GridPP. We outline the resources available at the three core sites and our optimisation efforts for our user communities. We discuss the work which has been conducted in extending the centre to embrace new...
- J. Rodriguez (University of Florida), 27/09/2004, 15:20, Track 6 - Computer Fabrics, oral presentation
  The High Energy Physics Group at the University of Florida is involved in a variety of projects, ranging from high-energy experiments at hadron and electron-positron colliders to cutting-edge computer science experiments focused on grid computing. In support of these activities, members of the Florida group have developed and deployed a local computational facility which consists of...
- S. Canon (National Energy Research Scientific Computing Center), 27/09/2004, 15:40, Track 6 - Computer Fabrics, oral presentation
  Supporting multiple large collaborations on shared compute farms has typically resulted in divergent requirements from the users on the configuration of these farms. As the frameworks used by these collaborations are adapted to use Grids, this issue will likely have a significant impact on the effectiveness of Grids. To address these issues, a method was developed at Lawrence Berkeley...
- P. DeMar (FNAL), 27/09/2004, 16:30, Track 6 - Computer Fabrics, oral presentation
  Management of a large site network such as the FNAL LAN presents many technical and organizational challenges. This highly dynamic network consists of around 10,000 network nodes. The nature of the activities FNAL is involved in, and its computing policy, require that the network remain as open as reasonably possible, both in terms of connectivity to outside networks and with respect...
- J. VanWezel (Forschungszentrum Karlsruhe), 27/09/2004, 16:50, Track 6 - Computer Fabrics, oral presentation
  The HEP experiments that use the regional center GridKa will handle large amounts of data. Traditional access methods via local disks or large network storage servers show limitations in size, throughput or data management flexibility. High-speed interconnects like Fibre Channel, iSCSI or Infiniband, as well as parallel file systems, are becoming increasingly important in large cluster...
- O. Tatebe (Grid Technology Research Center, AIST), 27/09/2004, 17:10, Track 6 - Computer Fabrics, oral presentation
  Gfarm v2 is designed to facilitate reliable file sharing and high-performance distributed and parallel data computing in a Grid across administrative domains by providing a Grid file system. A Grid file system is a virtual file system that federates multiple file systems. It is possible to share files or data by mounting the virtual file system. This paper discusses the design...
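  The federation idea described in this abstract (one virtual namespace over several independent file systems) can be sketched in a few lines. This is a toy illustration only; the class and method names (`FederatedFS`, `mount`, `read`) are hypothetical and are not the Gfarm v2 API.

  ```python
  # Toy sketch of a "virtual file system" federating several backing stores
  # under one namespace, as a Grid file system does across sites.
  class FederatedFS:
      def __init__(self):
          self._mounts = {}  # mount point -> backing store (a plain dict here)

      def mount(self, point, store):
          """Attach a backing store at a mount point in the virtual namespace."""
          self._mounts[point] = store

      def read(self, path):
          """Resolve the longest matching mount point, then delegate the read."""
          for point in sorted(self._mounts, key=len, reverse=True):
              if path.startswith(point + "/"):
                  return self._mounts[point][path[len(point) + 1:]]
          raise FileNotFoundError(path)

  # Two "administrative domains", each holding its own files.
  site_a = {"raw.dat": b"run-1 events"}
  site_b = {"mc.dat": b"simulated events"}

  fs = FederatedFS()
  fs.mount("/grid/siteA", site_a)
  fs.mount("/grid/siteB", site_b)
  print(fs.read("/grid/siteA/raw.dat"))  # b'run-1 events'
  ```

  A real Grid file system additionally handles replication, consistency and authentication across domains; the sketch only shows the namespace-federation aspect.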
- Y. Cheng (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences), 27/09/2004, 17:30, Track 6 - Computer Fabrics, oral presentation
  With the development of Linux and the improvement of PC performance, PC clusters used as high-performance computing systems are becoming increasingly popular. The performance of the I/O subsystem and the cluster file system is critical to a high-performance computing system. In this work the basic characteristics of cluster file systems and their performance are reviewed. The performance of four...
- T. Mkrtchyan (DESY), 27/09/2004, 17:50, Track 6 - Computer Fabrics, oral presentation
  After successful implementation and deployment of the dCache system over the last years, one of the additional required services, the namespace service, is faced with additional and completely new requirements. Most of these are caused by scaling the system, integration with Grid services, and the need for redundant (high-availability) configurations. The existing system, having only...
- T. Smith (CERN), 29/09/2004, 14:00, Track 6 - Computer Fabrics, oral presentation
  This paper discusses the challenges of maintaining a stable Managed Storage Service for users built upon dynamic underlying disk and tape layers. Early in 2004 the tools and techniques used to manage disk, tape, and stage servers were refreshed with the adoption of the QUATTOR tool set. This has markedly increased the coherency and efficiency of the configuration of data servers. The LEMON...
- A. Moibenko (Fermi National Accelerator Laboratory, USA), 29/09/2004, 14:20, Track 6 - Computer Fabrics, oral presentation
  Fermilab has developed and successfully operates the Enstore Data Storage System. It is the primary data store for the Run II collider experiments, as well as for others. It provides data storage in robotic tape libraries according to the requirements of the experiments. High fault tolerance and availability, as well as multilevel priority-based request processing, allow experiments to effectively...
- H. Meinhard (CERN-IT), 29/09/2004, 14:40, Track 6 - Computer Fabrics, oral presentation
  By 2008, the T0/T1 centre for the LHC at CERN is estimated to use about 5,000 TB of disk storage. This is a very significant increase over the roughly 250 TB in use now. In order to be affordable, the chosen technology must provide the required performance and at the same time be cost-effective and easy to operate and use. We will present an analysis of the cost (both in terms of...
- S. Wiesand (DESY), 29/09/2004, 15:00, Track 6 - Computer Fabrics, oral presentation
  64-bit commodity clusters and farms based on AMD technology have meanwhile been proven to achieve high computing power in many scientific applications. This report first gives a short introduction to the special features of the amd64 architecture and the characteristics of two-way Opteron systems. Then results from measuring the performance and behavior of such systems in various...
- S. Jarp (CERN), 29/09/2004, 15:20, Track 6 - Computer Fabrics, oral presentation
  For the last 18 months CERN has collaborated closely with several industrial partners to evaluate, through the opencluster project, technology that may (and hopefully will) play a strong role in future computing solutions, primarily for LHC but possibly also for other HEP computing environments. Unlike conventional field testing, where solutions from industry are evaluated rather...
- A. Heiss (Forschungszentrum Karlsruhe), 29/09/2004, 15:40, Track 6 - Computer Fabrics, oral presentation
  Distributed physics analysis techniques as provided by the rootd and proofd concepts require a fast and efficient interconnect between the nodes. Apart from the required bandwidth, the latency of message transfers is important, in particular in environments with many nodes. Ethernet is known to have large latencies, between 30 and 60 microseconds for common Gigabit Ethernet. The...
- J-D. Durand (CERN), 29/09/2004, 16:30, Track 6 - Computer Fabrics, oral presentation
  The CERN Advanced STORage (CASTOR) system is a scalable, high-throughput hierarchical storage system developed at CERN. CASTOR was first deployed for full production use in 2001 and has expanded to now manage around two petabytes and almost 20 million files. CASTOR is a modular system, providing a distributed disk cache, a stager, and a back-end tape archive, accessible via a global...
- P. Fuhrmann (DESY), 29/09/2004, 16:50, Track 6 - Computer Fabrics, oral presentation
  The dCache software system has been designed to manage a huge number of individual disk storage nodes and let them appear under a single file system root. Besides a variety of other features, it supports the GridFTP dialect, implements the Storage Resource Manager interface (SRM v1), and can be linked against the CERN GFAL software layer. These abilities make dCache a perfect Storage...
- T. Perelmutov (Fermi National Accelerator Laboratory), 29/09/2004, 17:10, Track 6 - Computer Fabrics, oral presentation
  Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid. SRMs support protocol negotiation and reliable replication mechanisms. The SRM standard allows independent institutions to implement their own SRMs, thus allowing uniform access to heterogeneous storage...
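  The protocol-negotiation idea mentioned in this abstract can be illustrated with a minimal sketch: the client sends an ordered list of transfer protocols it can use, and the storage system picks the first one it supports. The function and protocol names below are illustrative assumptions, not part of the SRM specification or any particular implementation.

  ```python
  # Toy sketch of transfer-protocol negotiation between an SRM client and
  # a storage system: first client-preferred protocol the server supports wins.
  def negotiate_protocol(client_prefs, server_supported):
      """Return the first protocol in client_prefs that the server supports."""
      for proto in client_prefs:
          if proto in server_supported:
              return proto
      raise RuntimeError("no common transfer protocol")

  # Hypothetical example: the client prefers dcap, falling back to gsiftp.
  chosen = negotiate_protocol(["dcap", "gsiftp", "http"], {"gsiftp", "dcap"})
  print(chosen)  # dcap
  ```

  Because each site's SRM advertises only what its own storage back end supports, this kind of negotiation is what lets heterogeneous storage systems be accessed uniformly.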
- Y. Iida (High Energy Accelerator Research Organization), 29/09/2004, 17:30, Track 6 - Computer Fabrics, oral presentation
  The Belle experiment has accumulated an integrated luminosity of more than 240 fb⁻¹ so far, and the daily logged luminosity now exceeds 800 pb⁻¹. These numbers correspond to more than 1 PB of raw and processed data stored on tape, with raw data accumulating at a rate of 1 TB/day. The processed, compactified data, together with Monte Carlo simulation data for the final physics...
- L. Magnoni (INFN-CNAF), 29/09/2004, 17:50, Track 6 - Computer Fabrics, oral presentation
  Within a Grid, the possibility of managing storage space is fundamental, in particular before and during application execution. On the other hand, the increasing availability of high-performance computing resources raises the need for fast and efficient I/O operations, and drives the development of parallel distributed file systems able to satisfy these needs by granting access to distributed...
- S. Veseli (Fermilab), 29/09/2004, 18:10, Track 6 - Computer Fabrics, oral presentation
  The SAMGrid Database Server encapsulates several important services, such as access to file metadata and the replica catalog, keeping track of processing information, and providing runtime support for SAMGrid station services. The recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes...