21–25 May 2012
New York City, NY, USA
US/Eastern timezone

Session 04

Computer Facilities, Production Grids and Networking

21 May 2012, 13:30

Conveners

  • Daniele Bonacorsi (University of Bologna)
  • Andreas Heiss (Forschungszentrum Karlsruhe GmbH)
  • Maria Girone (CERN)


  1. Dr Domenico Vicinanza (DANTE)
    21/05/2012, 13:30
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The Large Hadron Collider (LHC) is currently running at CERN in Geneva, Switzerland. Physicists are using the LHC to recreate the conditions just after the Big Bang, by colliding two beams of particles and heavy ions head-on at very high energy. The project is expected to generate 27 TB of raw data per day, plus 10 TB of "event summary data". This data is sent out from CERN to eleven Tier 1...
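The quoted daily volumes translate into a sustained export bandwidth that is easy to estimate. The sketch below is illustrative only: it assumes the 27 TB/day raw and 10 TB/day event-summary figures from the abstract and decimal terabytes.

```python
# Rough sustained-bandwidth estimate for the Tier-0 export,
# using the 27 TB/day raw + 10 TB/day ESD figures quoted above.
# Assumes decimal units (1 TB = 1e12 bytes); illustrative only.

RAW_TB_PER_DAY = 27
ESD_TB_PER_DAY = 10
SECONDS_PER_DAY = 86_400

total_bits_per_day = (RAW_TB_PER_DAY + ESD_TB_PER_DAY) * 1e12 * 8
avg_gbps = total_bits_per_day / SECONDS_PER_DAY / 1e9

print(f"Average export rate: {avg_gbps:.2f} Gb/s")  # ~3.43 Gb/s sustained
```

Peak rates and retransfers push the actual provisioning well above this average, which is why the Tier 0/Tier 1 links were built with multi-10 Gb/s capacity.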
  2. Edoardo Martelli (CERN)
    21/05/2012, 13:55
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The much-heralded exhaustion of the IPv4 address space has finally started. While many research and education networks have been ready and poised for years to carry IPv6 traffic, there is a well-known lack of academic institutes using the new protocols. One reason for this is an obvious absence of pressure, owing to the extensive use of NAT, or that most currently still have...
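The scale gap that makes the IPv4-to-IPv6 transition worthwhile can be shown with the standard library alone; this is a minimal sketch, not part of the talk's material.

```python
# Why IPv4 exhaustion was inevitable: compare the two address-space sizes.
# Uses only the standard-library ipaddress module.
import ipaddress

ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
ipv6_space = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(f"IPv4 addresses: {ipv4_space:,}")            # 4,294,967,296
print(f"IPv6/IPv4 ratio: {ipv6_space // ipv4_space:.3e}")  # ~7.923e+28
```

Roughly four billion IPv4 addresses, against an IPv6 space larger by a factor of 2^96 — NAT stretches the former, but only the latter removes the pressure.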
  3. Mr Andrey Bobyshev (FERMILAB)
    21/05/2012, 14:20
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The LHC is entering its fourth year of production operation. Many Tier1 facilities can count up to a decade of existence when development and ramp-up efforts are included. LHC computing has always been heavily dependent on high capacity, high performance network facilities for both the LAN and WAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to...
  4. Jan Iven (CERN), Massimo Lamanna (CERN)
    21/05/2012, 14:45
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Large-volume physics data storage at CERN is based on two services, CASTOR and EOS: CASTOR, in production for many years, now handles the Tier0 activities (including WAN data distribution) as well as all tape-backed data; EOS, in production since 2011, supports the fast-growing need for high-performance, low-latency (i.e. disk-only) data access for user analysis. In 2011, a large...
  5. Rapolas Kaselis (Vilnius University (LT))
    21/05/2012, 15:10
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The CMS experiment operates a distributed computing infrastructure whose performance depends heavily on the fast and smooth distribution of data between different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for storage and archiving, and speed and quality of service are vital to avoid overflowing CERN's storage buffers. At the same time, processed data has to be distributed...
  6. Shawn Mc Kee (University of Michigan (US))
    21/05/2012, 16:35
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration is projected to include the regular exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof...
  7. Niko Neufeld (CERN)
    21/05/2012, 17:00
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The upgraded LHCb experiment, which is due to go into operation in 2018/19, will require a massive increase in its compute facilities. A new 2 MW data-centre is planned at the LHCb site. Apart from the obvious requirement of minimizing the cost, the data-centre has to tie in well with the needs of online processing, while at the same time staying open for future and offline use. We present...
  8. Dr Horst Göringer (GSI)
    21/05/2012, 17:25
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    GSI in Darmstadt (Germany) is a center for heavy ion research. It hosts an Alice Tier2 center and is the home of the future FAIR facility. The planned data rates of the largest FAIR experiments, CBM and Panda, will be similar to those of the current LHC experiments at CERN. gStore is a hierarchical storage system with a unique name space, successfully in operation for more than...
  9. Dmitry Ozerov (Deutsches Elektronen-Synchrotron (DE)), Martin Gasthuber (Deutsches Elektronen-Synchrotron (DE)), Patrick Fuhrmann (DESY), Yves Kemp (Deutsches Elektronen-Synchrotron (DE))
    21/05/2012, 17:50
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    We present results on different approaches to mounted filesystems in use or under investigation at DESY. dCache, long established as a storage system for physics data, has implemented the NFS v4.1/pNFS protocol. New performance results will be shown with the most current version of the dCache server. In addition to the native usage of the mounted filesystem in a LAN environment, the...
  10. Wahid Bhimji (University of Edinburgh (GB))
    22/05/2012, 13:30
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and...
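The simplest member of the family of tests such frameworks run is a sequential-read throughput probe. The sketch below is a hypothetical, minimal illustration, not the GridPP tooling; real frameworks also exercise random and vector-read patterns and experiment-like access.

```python
# Minimal sequential-read micro-benchmark in the spirit of the I/O testing
# frameworks above (illustrative; real tools measure many access patterns).
import os
import tempfile
import time

def read_throughput(path, block=1 << 20):
    """Read `path` sequentially in `block`-sized chunks; return MB/s (decimal)."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / 1e6 / max(elapsed, 1e-9)

# Generate an 8 MiB test file and measure (page cache will inflate the number).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(8 << 20))
rate = read_throughput(f.name)
print(f"{rate:.0f} MB/s")
os.unlink(f.name)
```

Even this toy shows why methodology matters: without dropping the page cache or exceeding RAM, the measured figure reflects memory, not the storage system.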
  11. Erik Mattias Wadenstein (Unknown)
    22/05/2012, 13:55
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Distributed storage systems are critical to the operation of the WLCG. These systems are not limited to fulfilling the long term storage requirements. They also serve data for computational analysis and other computational jobs. Distributed storage systems provide the ability to aggregate the storage and IO capacity of disks and tapes, but at the end of the day IO rate is still bound by the...
  12. Brian Paul Bockelman (University of Nebraska (US))
    22/05/2012, 14:20
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to...
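The federation idea described above — one global entry point that redirects a client to whichever site actually holds a replica — can be sketched as a toy lookup. All site and file names here are invented; real Xrootd federations use the xrootd/cmsd daemons and the xroot protocol, not this code.

```python
# Toy sketch of storage federation via a redirector.
# Site names and paths are hypothetical, for illustration only.

SITE_CATALOGS = {
    "T2_US_Nebraska": {"/store/data/run1.root"},
    "T2_UK_Edinburgh": {"/store/data/run2.root"},
}

def redirect(path):
    """Return the first site holding `path`, or None if no replica exists."""
    for site, catalog in SITE_CATALOGS.items():
        if path in catalog:
            return site
    return None

print(redirect("/store/data/run2.root"))     # T2_UK_Edinburgh
print(redirect("/store/data/missing.root"))  # None
```

The point of the design is that the client needs only the global name: location discovery moves from the physicist's tools into the redirection layer.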
  13. Dr Xavier Espinal Curull (Universitat Autรฒnoma de Barcelona (ES))
    22/05/2012, 14:45
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage...
  14. Jason Alexander Smith (Brookhaven National Laboratory (US))
    22/05/2012, 15:10
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of automated services. Puppet is a seasoned, open-source tool designed for enterprise-class centralized configuration management. At the RHIC/ATLAS Computing Facility at Brookhaven National Laboratory, we have adopted Puppet as part of a suite of tools, including Git, GLPI, and...
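The core of the Puppet model the abstract refers to is declarative, idempotent convergence: describe the desired state, observe the current state, and apply only the difference. The sketch below illustrates that idea in plain Python with invented resource names; it is not Puppet's DSL or implementation.

```python
# Minimal sketch of the declarative, idempotent model Puppet is built on:
# converge observed state toward desired state, applying only the delta.
# Service names are invented for illustration.

desired = {"ntp": "running", "telnet": "stopped"}
observed = {"ntp": "stopped", "telnet": "stopped"}

def plan(desired, observed):
    """Return the changes needed to converge on the desired state."""
    return {svc: state for svc, state in desired.items()
            if observed.get(svc) != state}

changes = plan(desired, observed)
print(changes)  # {'ntp': 'running'}

# Applying the plan, then planning again, yields no further changes
# (idempotence): repeated runs are safe.
observed.update(changes)
assert plan(desired, observed) == {}
```

Idempotence is what makes centralized configuration management safe to run continuously across a large facility.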
  15. Jason Zurawski (Internet2)
    24/05/2012, 13:30
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources,...
  16. Steve Barnet (University of Wisconsin Madison)
    24/05/2012, 13:55
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which combines Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at...
  17. Mr Pier Paolo Ricci (INFN CNAF)
    24/05/2012, 14:20
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The storage solution currently used in production at the INFN Tier-1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration of a fast and reliable parallel filesystem (IBM GPFS) with a completely integrated tape backend based on TIVOLI TSM Hierarchical...
  18. Artur Jerzy Barczyk (California Institute of Technology (US)), Azher Mughal (California Institute of Technology), sandor Rozsa (California Institute of Technology (CALTECH))
    24/05/2012, 14:45
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    40Gb/s network technology is increasingly available today in data centers as well as in network backbones. We have built and evaluated storage systems equipped with the latest generation of 40GbE Network Interface Cards. The recently available motherboards with the PCIe v3 bus provide the possibility to reach the full 40Gb/s rate per network interface. A fast caching system was built...
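Why PCIe v3 matters for a 40GbE NIC follows from a back-of-envelope calculation, assuming the standard PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding) and a typical x8 slot:

```python
# Back-of-envelope check that a PCIe 3.0 x8 slot can feed a 40GbE NIC.
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding;
# protocol overheads beyond encoding are ignored here.

GT_PER_S = 8e9        # transfers per second per lane
ENCODING = 128 / 130  # usable payload fraction after line encoding
LANES = 8

usable_gbps = GT_PER_S * ENCODING * LANES / 1e9
print(f"PCIe 3.0 x8 usable bandwidth: {usable_gbps:.1f} Gb/s")  # ~63.0 Gb/s
assert usable_gbps > 40  # headroom above 40GbE line rate
```

A PCIe 2.0 x8 slot (5 GT/s, 8b/10b encoding) would top out at 32 Gb/s, short of line rate — hence the abstract's emphasis on PCIe v3 motherboards.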
  19. Tim Bell (CERN)
    24/05/2012, 15:10
    Computer Facilities, Production Grids and Networking (track 4)
    Parallel
    The CERN Computer Centre is reviewing strategies for optimizing use of its existing infrastructure in the future, in the likely scenario that any extension will be remote from CERN, and in the light of how other large facilities are operated today. Over the past six months, CERN has been investigating modern and widely-used tools and procedures used for virtualisation,...