Conveners
Hardware and Computing Fabrics: Monday
- Isidro Gonzalez Caballero (CERN)
Hardware and Computing Fabrics: Tuesday
- Jiri Chudoba (FZU)
Hardware and Computing Fabrics: Tuesday
- Sverre Jarp (CERN)
Michele Michelotto
(INFN + Hepix)
23/03/2009, 16:30
Hardware and Computing Fabrics
oral
The SPEC INT benchmark has been used as a performance reference for computing in the HEP community for the past 20 years. The SPEC CPU INT 2000 (SI2K) unit of performance has been used by the major HEP experiments both in the Computing Technical Design Report for the LHC experiments and in the evaluation of the Computing Centres. At recent HEPiX meetings several HEP sites have reported...
Mr
Sverre Jarp
(CERN)
23/03/2009, 16:50
Hardware and Computing Fabrics
oral
In CERN openlab we have been running tests with a server using a low-power Atom N330 dual-core/dual-thread processor, deploying both HEP offline and online programs.
The talk will report on the results, both for single runs and for maximum-throughput runs, as well as on thermal measurements. It will also show how the price/performance of an Atom system compares to a...
Tony Cass
(CERN)
23/03/2009, 17:10
Hardware and Computing Fabrics
oral
The current level of demand for Green Data Centres has created a growing market for consultants providing advice on how to meet the requirement for high levels of electrical power and, above all, cooling capacity both economically and ecologically. How should one choose, in the face of the many competing claims, the right concept for a cooling system in order to reach the right power level,...
Mr
Simon Liu
(TRIUMF)
23/03/2009, 17:30
Hardware and Computing Fabrics
oral
We describe in this paper the design and implementation of Tapeguy, a high-performance non-proprietary Hierarchical Storage Management system (HSM) which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a very large amount of data (approximately 3.5...
Mr
Pavel JAKL
(Nuclear Physics Inst., Academy of Sciences, Praha)
23/03/2009, 17:50
Hardware and Computing Fabrics
oral
Any experiment facing petabyte-scale problems needs a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete dataset availability on "live storage" (centralized or aggregated space such as that provided by Scalla/Xrootd) cost-prohibitive, implying that a dynamic...
Stephen Wolbers
(FNAL)
23/03/2009, 18:10
Hardware and Computing Fabrics
oral
As part of its mission to provide integrated storage for a variety of experiments and use patterns, Fermilab's Computing Division examines emerging technologies and reevaluates existing ones to identify the storage solutions satisfying stakeholders' requirements, while providing adequate reliability, security, data integrity and maintainability. We formulated a set of criteria and then...
Roberto Divià
(CERN)
24/03/2009, 14:00
Hardware and Computing Fabrics
oral
The ALICE (A Large Ion Collider Experiment) Data Acquisition (DAQ) system has the unprecedented requirement to ensure a very high-volume, sustained data stream between the ALICE detector and the Permanent Data Storage (PDS) system, which is used as the main data repository for event processing and offline computing. The key component to accomplish this task is the Transient Data Storage System...
Oliver Oberst
(Karlsruhe Institute of Technology)
24/03/2009, 14:20
Hardware and Computing Fabrics
oral
Today's experiments in HEP use only a limited number of operating system flavours. Their software might be validated on only a single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements....
Ricardo SALGUEIRO DOMINGUES DA SILVA
(CERN)
24/03/2009, 14:40
Hardware and Computing Fabrics
oral
The ramping up of available resources for LHC data analysis
at the different sites continues. Most sites are currently
running on SL(C)4. However, this operating system is already
rather old, and it is becoming difficult to get the required
hardware drivers to get the best out of recent hardware.
A possible way out is the migration to SL(C)5-based systems
where possible, in...
Andreas Haupt
(DESY),
Yves Kemp
(DESY)
24/03/2009, 15:00
Hardware and Computing Fabrics
oral
In the framework of a broad collaboration among German particle physicists, the strategic Helmholtz Alliance "Physics at the Terascale", an analysis facility has been set up at DESY. The facility is intended to provide the best possible analysis infrastructure for researchers of the ATLAS, CMS, LHCb and ILC experiments, and also for theory researchers.
In a first part of the contribution, we...
Dr
Isidro Gonzalez Caballero
(Instituto de Fisica de Cantabria, Grupo de Altas Energias)
24/03/2009, 15:20
Hardware and Computing Fabrics
oral
In the CMS computing model, about one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the...
Dr
Graeme Andrew Stewart
(University of Glasgow), Dr
Michael John Kenyon
(University of Glasgow), Dr
Samuel Skipsey
(University of Glasgow)
24/03/2009, 15:40
Hardware and Computing Fabrics
oral
ScotGrid is a distributed Tier-2 centre in the UK with sites in
Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion
in hardware in anticipation of the LHC and now provides more than
4MSI2K and 500TB to the LHC VOs.
Scaling up to this level of provision has brought many challenges to
the Tier-2 and we show in this paper how we have adopted new methods
of organising...
Dr
Sergey Panitkin
(Department of Physics - Brookhaven National Laboratory (BNL))
24/03/2009, 16:30
Hardware and Computing Fabrics
oral
Solid State Drives (SSDs) are a very promising storage technology for High Energy Physics parallel analysis farms.
Their combination of low random-access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which...
Mr
Rune Sjoen
(Bergen University College)
24/03/2009, 16:50
Hardware and Computing Fabrics
oral
The ATLAS data network interconnects up to 2000 processors using up to
200 edge switches and five multi-blade chassis devices. Classical,
SNMP-based, network monitoring provides statistics on aggregate traffic,
but something else is needed to be able to quantify single traffic
flows.
sFlow is an industry standard which enables an Ethernet switch to take a
sample of the packets...
Mr
Eric Grancher
(CERN)
24/03/2009, 17:10
Hardware and Computing Fabrics
oral
The Oracle database system is used extensively in the High Energy Physics community. Access to the storage subsystem is one of the major components of the Oracle database. Oracle has introduced new ways to access and manage the storage subsystem in recent years, such as ASM (10.1), Direct NFS (11.1) and Exadata (11.1).
This paper presents our experience with the different features linked to...
Dr
Jason Smith
(Brookhaven National Laboratory), Ms
Mizuki Karasawa
(Brookhaven National Laboratory)
24/03/2009, 17:30
Hardware and Computing Fabrics
oral
The RACF provides computing support to a broad spectrum of scientific
programs at Brookhaven. The continuing growth of the facility, the diverse
needs of the scientific programs and the increasingly prominent role of
distributed computing require the RACF to change from a system-based to a
service-based SLA with our user communities.
A service-based SLA allows the RACF to coordinate more...
Mr
Christopher Hollowell
(Brookhaven National Laboratory), Mr
Robert Petkus
(Brookhaven National Laboratory)
24/03/2009, 17:50
Hardware and Computing Fabrics
oral
The RHIC/ATLAS Computing Facility (RACF) processor farm at Brookhaven
National Laboratory currently provides over 7200 cpu cores (over 13 million
SpecInt2000 of processing power) for computation. Our ability to supply this
level of computational capacity in a data-center limited by physical space,
cooling and electrical power is primarily due to the availability of increasingly
dense...
Dr
Josva Kleist
(Nordic Data Grid Facility)
24/03/2009, 18:10
Grid Middleware and Networking Technologies
oral
The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF)
differs significantly from other Tier-1s in several aspects: it is not
located at one or a few locations but is instead distributed throughout
the Nordic countries, and it is not under the governance of a single
organisation but is instead built from resources under the control of
a number of different national organisations.
Being...