Speaker
Dr Xavier Espinal Curull (Universitat Autònoma de Barcelona (ES))
Description
Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to grow. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centres has shifted from coping efficiently with petabyte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components of High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, so as to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out the relevant features for running a successful storage setup in an intensive HTC environment.
Author
Dr Xavier Espinal Curull (Universitat Autònoma de Barcelona (ES))
Co-authors
Mr Arnau Bria (Port d'Informació Científica (PIC))
Elena Planas (PIC)
Esther Accion Garcia (Unknown)
Fernando Lopez Munoz (PIC)
Francisco Martinez Ramirez De Loaysa (Unknown)
Gerard Bernabeu Altayó (PIC (Tier-1))
Prof. Manuel Delfino Reznicek (Universitat Autònoma de Barcelona (ES))
Marc Caubet Serrabou (Universitat Autònoma de Barcelona)