13–17 Feb 2006
Tata Institute of Fundamental Research
Europe/Zurich timezone

Session

Computing Facilities and Networking

CFN
13 Feb 2006, 14:00
Tata Institute of Fundamental Research

Homi Bhabha Road, Mumbai 400005, India


  1. Mr Vladimir Bahyl (CERN IT-FIO)
    13/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
Availability approaching 100% and response time converging to 0 are two factors that users expect of any system they interact with. Even if the real importance of these factors is a function of the size and nature of the project, today's users are rarely tolerant of performance issues with systems of any size. Commercial solutions for load balancing and failover are plentiful. Citrix...
  2. Dr Doris Ressmann (Forschungszentrum Karlsruhe)
    13/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    At GridKa an initial capacity of 1.5 PB online and 2 PB background storage is needed for the LHC start in 2007. Afterwards the capacity is expected to grow almost exponentially. No computing site will be able to keep this amount of data in online storage, hence a highly accessible tape connection is needed. This paper describes a high-performance connection of the online storage to an IBM...
  3. Michal Kwiatek (CERN)
    13/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
Over the last few years, we have experienced a growing demand for hosting Java web applications. At the same time, it has been difficult to find an off-the-shelf solution that would enable load balancing, easy administration and a high level of isolation between applications hosted within a J2EE server. The architecture developed and used in production at CERN is based on a Linux...
  4. Dr Patrick Fuhrmann (DESY)
    13/02/2006, 15:00
    Computing Facilities and Networking
    oral presentation
For the last two years, the dCache/SRM Storage Element has been successfully integrated into the LCG framework and is in heavy production at several dozens of sites, spanning a range from single-host installations up to those with some hundreds of terabytes of disk space, delivering more than 50 TBytes per day to clients. Based on the permanent feedback from our users and the detailed...
  5. Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)
    13/02/2006, 16:00
    Computing Facilities and Networking
    oral presentation
After successfully deploying dCache over the last few years, the dCache team reevaluated the potential of using dCache for extremely large and heavily used installations. We identified the filesystem namespace module as one of the components that would very likely need a redesign to cope with expected requirements in the medium-term future. Having presented the initial design of Chimera...
  6. Dr Roger Cottrell (Stanford Linear Accelerator Center)
    13/02/2006, 16:20
    Computing Facilities and Networking
    oral presentation
The future of computing for HENP applications depends increasingly on how well the global community is connected. With South Asia and Africa accounting for about 36% of the world's population, the issues of internet/network facilities are a major concern for these regions if they are to successfully partake in scientific endeavors. However, not only is the international bandwidth for these...
  7. Richard Cavanaugh (University of Florida)
    13/02/2006, 16:40
    Computing Facilities and Networking
    oral presentation
    UltraLight is a collaboration of experimental physicists and network engineers whose purpose is to provide the network advances required to enable petabyte-scale analysis of globally distributed data. Current Grid-based infrastructures provide massive computing and storage resources, but are currently limited by their treatment of the network as an external, passive, and largely unmanaged...
  8. Dr Roger JONES (LANCASTER UNIVERSITY)
    13/02/2006, 17:00
    Computing Facilities and Networking
    oral presentation
Following on from the LHC experiments' computing Technical Design Reports, HEPiX, with the agreement of the LCG, formed a Storage Task Force. This group was to: examine the current LHC experiment computing models; attempt to determine the data volumes, access patterns and required data security for the various classes of data, as a function of Tier and of time; consider the current...
  9. Mr Francois Fluckiger (CERN)
    13/02/2006, 17:20
    Computing Facilities and Networking
    oral presentation
    The openlab, created three years ago at CERN, was a novel concept: to involve leading IT companies in the evaluation and the integration of cutting-edge technologies or services, focusing on potential solutions for the LCG. The novelty lay in the duration of the commitment (three years during which companies provided a mix of in-kind and in-cash contributions), the level of the...
  10. Mr Dinesh Sarode (Computer Division, BARC, Mumbai-85, India)
    14/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
Today, huge datasets result from computer simulations (CFD, physics, chemistry, etc.) and sensor measurements (medical, seismic and satellite). There is exponential growth in computational requirements in scientific research. Modern parallel computers and Grids provide the required computational power for simulation runs. Rich visualization is essential in...
  11. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    14/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    The computing models for HEP experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage technology investments). To support such computing models, the network and end systems (computing and storage) face unprecedented...
  12. Igor Mandrichenko (FNAL)
    14/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
    Fermilab is a high energy physics research lab that maintains a dynamic network which typically supports around 10,000 active nodes. Due to the open nature of the scientific research conducted at FNAL, the portion of the network used to support open scientific research requires high bandwidth connectivity to numerous collaborating institutions around the world, and must facilitate...
  13. Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY), Dr Dimitrios Katramatos (BROOKHAVEN NATIONAL LABORATORY)
    14/02/2006, 15:00
    Computing Facilities and Networking
    oral presentation
A DOE MICS/SciDAC funded project, TeraPaths, deployed and prototyped the use of differentiated networking services based on a range of new transfer protocols to support the global movement of data in the high energy physics distributed computing environment. While this MPLS/LAN QoS work specifically targets networking issues at BNL, the experience acquired and expertise developed are...
  14. Dr Dirk Pleiter (DESY)
    14/02/2006, 16:00
    Computing Facilities and Networking
    oral presentation
apeNEXT is the latest generation of massively parallel machines optimized for simulating QCD formulated on a lattice (LQCD). In autumn 2005 the commissioning of several large-scale installations of apeNEXT started, which will provide a total of 15 TFlops of compute power. This fully custom-designed computer has been developed by a European collaboration composed of groups from INFN...
  15. Dr Chih-Hao Huang (Fermi National Accelerator Laboratory)
    14/02/2006, 16:20
    Computing Facilities and Networking
    oral presentation
ENSTORE is a very successful petabyte-scale mass storage system developed at Fermilab. Since its inception in the late 1990s, ENSTORE has been serving the Fermilab community, as well as its collaborators, and now holds more than 3 petabytes of data on tape. New data is arriving at an ever-increasing rate. One practical issue that we are confronted with is: storage technologies have been...
  16. Dr Gidon Moont (GridPP/Imperial)
    14/02/2006, 16:40
    Computing Facilities and Networking
    oral presentation
A working prototype portal for the LHC Computing Grid (LCG) is being customised for use by the T2K 280m Near Detector software group. This portal is capable of submitting jobs to the LCG and retrieving the output on behalf of the user. The T2K-specific development of the portal will create customised submission systems for the suites of production and analysis software being written by...
  17. Shawn Mc Kee (High Energy Physics)
    14/02/2006, 17:00
    Computing Facilities and Networking
    oral presentation
We will describe the networking details of the NSF-funded UltraLight project and report on its status. The project's goal is to meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused agenda. The UltraLight network is a hybrid packet- and circuit-switched network infrastructure employing both "ultrascale"...
  18. Dr Les Cottrell (Stanford Linear Accelerator Center (SLAC))
    14/02/2006, 17:20
    Computing Facilities and Networking
    oral presentation
    High Energy and Nuclear Physics (HENP) experiments generate unprecedented volumes of data which need to be transferred, analyzed and stored. This in turn requires the ability to sustain, over long periods, the transfer of large amounts of data between collaborating sites, with relatively high throughput. Groups such as the Particle Physics Data Grid (PPDG) and Globus are developing and...
  19. Mr Rohitashva Sharma (BARC)
    15/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
It is important, both for users and for load-balancing programs like LSF, PBS and CONDOR, to know the Quality of Service offered by nodes in a cluster before submitting a job to a given node. This will help in achieving optimal utilization of nodes in a cluster. Simple metrics like load average, memory utilization etc. do not adequately describe load on the nodes or Quality of Service (QoS)...
  20. Mr Andrey Bobyshev (FERMILAB)
    15/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    High Energy Physics collaborations consist of hundreds to thousands of physicists and are world-wide in scope. Experiments and applications now running, or starting soon, need the data movement capabilities now available only on advanced and/or experimental networks. The Lambda Station project steers selectable traffic through site infrastructure and onto these "high-impact" wide-area ...
  21. Prof. Manuel Delfino Reznicek (Port d'Informació Científica)
    15/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
Efficient hierarchical storage management of small size files continues to be a challenge. Storing such files directly on tape-based tertiary storage leads to extremely low operational efficiencies. Commercial tape virtualization products are few, expensive and only proven in mainframe environments. Asking the users to deal with the problem by "bundling" their files leads to a plethora of...
  22. Dr Ian Fisk (FERMILAB)
    15/02/2006, 15:00
    Computing Facilities and Networking
    oral presentation
    CMS is preparing seven remote Tier-1 computing facilities to archive and serve experiment data. These centers represent the bulk of CMS's data serving capacity, a significant resource for reprocessing data, all of the simulation archiving capacity, and operational support for Tier-2 centers and analysis facilities. In this paper we present the progress on deploying the largest remote...
  23. Dr Hans Wenzel (FERMILAB)
    15/02/2006, 16:00
    Computing Facilities and Networking
    oral presentation
We report on the ongoing evaluation of new 64-bit processors as they become available to us. We present the results of benchmarking these systems in various operating modes and of measuring their power consumption. To measure the performance we use HEP- and CMS-specific applications including: the analysis tool ROOT (C++), the Monte Carlo generator Pythia (FORTRAN), OSCAR (C++), the GEANT 4...
  24. Mr Carsten Germer (DESY IT)
    15/02/2006, 16:20
    Computing Facilities and Networking
    oral presentation
Taking the implementation of ZOPE/ZMS at DESY as an example, we will show and discuss various approaches and procedures for introducing a Content Management System in a HEP institute. We will show how requirements were gathered to make decisions regarding software and hardware, how existing systems and management procedures needed to be taken into consideration, and how the project was...
  25. Dr Stefan Stancu (University of California, Irvine)
    15/02/2006, 16:40
    Computing Facilities and Networking
    oral presentation
    The ATLAS experiment will rely on Ethernet networks for several purposes. A control network will provide infrastructure services and will also handle the traffic associated with control and monitoring of trigger and data acquisition (TDAQ) applications. Two independent data networks (dedicated TDAQ networks) will be used exclusively for transferring the event data within the High Level...
  26. Abhishek Singh RANA (University of California, San Diego, CA, USA)
    15/02/2006, 17:00
    Computing Facilities and Networking
    oral presentation
    We introduce gPLAZMA (grid-aware PLuggable AuthoriZation MAnagement) Architecture. Our work is motivated by a need for fine-grain security (Role Based Access Control or RBAC) in Storage Systems, and utilizes VOMS extended X.509 certificate specification for defining extra attributes (FQANs), based on RFC 3281. Our implementation, the gPLAZMA module for dCache, introduces Storage...
  27. Iosif Legrand (CALTECH)
    16/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
    To satisfy the demands of data intensive grid applications it is necessary to move to far more synergetic relationships between applications and networks. The main objective of the VINCI project is to enable data intensive applications to efficiently use and coordinate shared, hybrid network resources, to improve the performance and throughput of global-scale grid systems, such as those...
  28. Dr Mathias de Riese (DESY)
    16/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
DESY is one of the world's leading centers for research with particle accelerators and synchrotron light. The computer center manages a data volume of the order of 1 PB and houses around 1000 CPUs. During DESY's engagement as a Tier-2 center for LHC experiments these numbers will at least double. In view of these increasing activities an improved fabric management infrastructure is being...
  29. Mr Dirk Jahnke-Zumbusch (DESY)
    16/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
DESY operates some thousand computers, based on different operating systems. On servers and workstations, not only the operating systems but also many centrally supported software systems are in use. Most of these operating and software systems come with their own user and account management tools. Typically they do not know of each other, which makes life harder for users who have...