27 September 2004 to 1 October 2004
Interlaken, Switzerland
Europe/Zurich timezone

Session

Computer Fabrics

27 Sept 2004, 14:00
Interlaken, Switzerland

  1. G. Cancio (CERN)
    27/09/2004, 14:00
    Track 6 - Computer Fabrics
    oral presentation
    This paper describes the evolution of fabric management at CERN's T0/T1 Computing Center, from the selection and adoption of prototypes produced by the European DataGrid (EDG) project[1] to enhancements made to them. In the last year of the EDG project, developers and service managers have been working to understand and solve operational and scalability issues. CERN has adopted and...
    Go to contribution page
  2. Tomasz WLODEK (BNL)
    27/09/2004, 14:20
    Track 6 - Computer Fabrics
    oral presentation
    This presentation describes the experiences and the lessons learned by the RHIC/ATLAS Computing Facility (RACF) in building and managing its 2,700+ CPU (and growing) Linux Farm over the past 6+ years. We describe how hardware cost, end-user needs, infrastructure, footprint, hardware configuration, vendor selection, software support and other considerations have played a role in...
    Go to contribution page
  3. Don Petravick
    27/09/2004, 14:40
    Track 6 - Computer Fabrics
    oral presentation
    As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" project, Fermilab builds and operates production clusters for lattice QCD simulations. We currently operate three clusters: a 128-node dual Xeon Myrinet cluster, a 128-node Pentium 4E Myrinet cluster, and a 32-node dual Xeon Infiniband cluster. We will discuss the operation of these systems and examine their...
    Go to contribution page
  4. S. Thorn
    27/09/2004, 15:00
    Track 6 - Computer Fabrics
    oral presentation
    ScotGrid is a prototype regional computing centre formed as a collaboration between the universities of Durham, Edinburgh and Glasgow as part of the UK's national particle physics grid, GridPP. We outline the resources available at the three core sites and our optimisation efforts for our user communities. We discuss the work which has been conducted in extending the centre to embrace new...
    Go to contribution page
  5. J. Rodriguez (UNIVERSITY OF FLORIDA)
    27/09/2004, 15:20
    Track 6 - Computer Fabrics
    oral presentation
    The High Energy Physics Group at the University of Florida is involved in a variety of projects ranging from High Energy Experiments at hadron and electron positron colliders to cutting edge computer science experiments focused on grid computing. In support of these activities members of the Florida group have developed and deployed a local computational facility which consists of...
    Go to contribution page
  6. S. Canon (NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUTING CENTER)
    27/09/2004, 15:40
    Track 6 - Computer Fabrics
    oral presentation
    Supporting multiple large collaborations on shared compute farms has typically resulted in divergent requirements from the users on the configuration of these farms. As the frameworks used by these collaborations are adapted to use Grids, this issue will likely have a significant impact on the effectiveness of Grids. To address these issues, a method was developed at Lawrence Berkeley...
    Go to contribution page
  7. P. DeMar (FNAL)
    27/09/2004, 16:30
    Track 6 - Computer Fabrics
    oral presentation
Management of a large site network such as the FNAL LAN presents many technical and organizational challenges. This highly dynamic network consists of around 10 thousand network nodes. The nature of the activities FNAL is involved in and its computing policy require that the network remain as open as reasonably possible, both in terms of connectivity to outside networks and with respect...
    Go to contribution page
  8. J. VanWezel (FORSCHUNGZENTRUM KARLSRUHE)
    27/09/2004, 16:50
    Track 6 - Computer Fabrics
    oral presentation
    The HEP experiments that use the regional center GridKa will handle large amounts of data. Traditional access methods via local disks or large network storage servers show limitations in size, throughput or data management flexibility. High speed interconnects like Fibre Channel, iSCSI or Infiniband as well as parallel file systems are becoming increasingly important in large cluster...
    Go to contribution page
  9. O. Tatebe (GRID TECHNOLOGY RESEARCH CENTER, AIST)
    27/09/2004, 17:10
    Track 6 - Computer Fabrics
    oral presentation
    Gfarm v2 is designed for facilitating reliable file sharing and high-performance distributed and parallel data computing in a Grid across administrative domains by providing a Grid file system. A Grid file system is a virtual file system that federates multiple file systems. It is possible to share files or data by mounting the virtual file system. This paper discusses the design...
    Go to contribution page
  10. Y. CHENG (COMPUTING CENTER,INSTITUTE OF HIGH ENERGY PHYSICS,CHINESE ACADEMY OF SCIENCES)
    27/09/2004, 17:30
    Track 6 - Computer Fabrics
    oral presentation
With the development of Linux and the improvement of PC performance, PC clusters have become popular as high performance computing systems. The performance of the I/O subsystem and the cluster file system is critical to a high performance computing system. In this work the basic characteristics of cluster file systems and their performance are reviewed. The performance of four...
    Go to contribution page
  11. T. Mkrtchyan (DESY)
    27/09/2004, 17:50
    Track 6 - Computer Fabrics
    oral presentation
After the successful implementation and deployment of the dCache system over the last years, one of its additional required services, the namespace service, is faced with additional and completely new requirements. Most of these are caused by scaling the system, the integration with Grid services and the need for redundant (high availability) configurations. The existing system, having only...
    Go to contribution page
  12. T. Smith (CERN)
    29/09/2004, 14:00
    Track 6 - Computer Fabrics
    oral presentation
    This paper discusses the challenges in maintaining a stable Managed Storage Service for users built upon dynamic underlying disk and tape layers. Early in 2004 the tools and techniques used to manage disk, tape, and stage servers were refreshed in adopting the QUATTOR tool set. This has markedly increased the coherency and efficiency of the configuration of data servers. The LEMON...
    Go to contribution page
  13. A. Moibenko (FERMI NATIONAL ACCELERATOR LABORATORY, USA)
    29/09/2004, 14:20
    Track 6 - Computer Fabrics
    oral presentation
Fermilab has developed and successfully operates the Enstore Data Storage System. It is the primary data store for the Run II Collider Experiments, as well as for others. It provides data storage in robotic tape libraries according to the requirements of the experiments. High fault tolerance and availability, as well as multilevel priority-based request processing, allow experiments to effectively...
    Go to contribution page
  14. H. Meinhard (CERN-IT)
    29/09/2004, 14:40
    Track 6 - Computer Fabrics
    oral presentation
By 2008, the T0/T1 centre for the LHC at CERN is estimated to use about 5000 TB of disk storage. This is a very significant increase over the roughly 250 TB in use now. In order to be affordable, the chosen technology must provide the required performance and at the same time be cost-effective and easy to operate and use. We will present an analysis of the cost (both in terms of...
    Go to contribution page
  15. S. Wiesand (DESY)
    29/09/2004, 15:00
    Track 6 - Computer Fabrics
    oral presentation
64-bit commodity clusters and farms based on AMD technology have meanwhile proven to deliver high computing power in many scientific applications. This report first gives a short introduction to the specialties of the amd64 architecture and the characteristics of two-way Opteron systems. Then results from measuring the performance and the behavior of such systems in various...
    Go to contribution page
  16. S. Jarp (CERN)
    29/09/2004, 15:20
    Track 6 - Computer Fabrics
    oral presentation
For the last 18 months CERN has collaborated closely with several industrial partners to evaluate, through the opencluster project, technology that may (and hopefully will) play a strong role in future computing solutions, primarily for LHC but possibly also for other HEP computing environments. Unlike conventional field testing, where solutions from industry are evaluated rather...
    Go to contribution page
  17. A. Heiss (FORSCHUNGSZENTRUM KARLSRUHE)
    29/09/2004, 15:40
    Track 6 - Computer Fabrics
    oral presentation
Distributed physics analysis techniques as provided by the rootd and proofd concepts require a fast and efficient interconnect between the nodes. Apart from the required bandwidth, the latency of message transfers is important, in particular in environments with many nodes. Ethernet is known to have large latencies, between 30 and 60 microseconds for common Gigabit Ethernet. The...
    Go to contribution page
  18. J-D. Durand (CERN)
    29/09/2004, 16:30
    Track 6 - Computer Fabrics
    oral presentation
The CERN Advanced STORage (CASTOR) system is a scalable high-throughput hierarchical storage system developed at CERN. CASTOR was first deployed for full production use in 2001 and has expanded to now manage around two petabytes and almost 20 million files. CASTOR is a modular system, providing a distributed disk cache, a stager, and a back-end tape archive, accessible via a global...
    Go to contribution page
  19. P. Fuhrmann (DESY)
    29/09/2004, 16:50
    Track 6 - Computer Fabrics
    oral presentation
The dCache software system has been designed to manage a huge number of individual disk storage nodes and let them appear under a single file system root. Besides a variety of other features, it supports the GridFtp dialect, implements the Storage Resource Manager interface (SRM V1) and can be linked against the CERN GFAL software layer. These abilities make dCache a perfect Storage...
    Go to contribution page
  20. T. Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)
    29/09/2004, 17:10
    Track 6 - Computer Fabrics
    oral presentation
    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid. SRMs support protocol negotiation and reliable replication mechanism. The SRM standard allows independent institutions to implement their own SRMs, thus allowing for a uniform access to heterogeneous storage...
    Go to contribution page
  21. Y. Iida (HIGH ENERGY ACCELERATOR RESEARCH ORGANIZATION)
    29/09/2004, 17:30
    Track 6 - Computer Fabrics
    oral presentation
    The Belle experiment has accumulated an integrated luminosity of more than 240fb-1 so far, and a daily logged luminosity now exceeds 800pb-1. These numbers correspond to more than 1PB of raw and processed data stored on tape and an accumulation of the raw data at the rate of 1TB/day. The processed, compactified data, together with Monte Carlo simulation data for the final physics...
    Go to contribution page
  22. L. Magnoni (INFN-CNAF)
    29/09/2004, 17:50
    Track 6 - Computer Fabrics
    oral presentation
Within a Grid the possibility of managing storage space is fundamental, in particular before and during application execution. On the other hand, the increasing availability of highly performant computing resources raises the need for fast and efficient I/O operations and drives the development of parallel distributed file systems able to satisfy these needs by granting access to distributed...
    Go to contribution page
  23. S. Veseli (Fermilab)
    29/09/2004, 18:10
    Track 6 - Computer Fabrics
    oral presentation
    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes...
    Go to contribution page