14–18 Oct 2013
Amsterdam, Beurs van Berlage
Europe/Amsterdam timezone

Toward the Cloud Storage Interface of the INFN CNAF Tier-1 Mass Storage System

14 Oct 2013, 15:00
45m
Grote zaal (Amsterdam, Beurs van Berlage)

Poster presentation
Data Stores, Data Bases, and Storage Systems
Poster presentations

Speakers

Daniele Gregori (Istituto Nazionale di Fisica Nucleare (INFN))
Luca dell'Agnello (INFN-CNAF)
Pier Paolo Ricci (INFN CNAF)
Tommaso Boccali (Sezione di Pisa (IT))
Dr Vincenzo Vagnoni (INFN Bologna)
Dr Vladimir Sapunenko (INFN)

Description

The Mass Storage System installed at the INFN CNAF Tier-1 is one of the biggest hierarchical storage facilities in Europe. It currently provides storage resources for about 12% of all LHC data, as well as for other High Energy Physics experiments. The Grid Enabled Mass Storage System (GEMSS) is the solution currently deployed at the INFN CNAF Tier-1; it is based on a custom integration between a high-performance parallel file system (General Parallel File System, GPFS) and a tape management system for long-term back-end storage on magnetic media (Tivoli Storage Manager, TSM). Data access for Grid users has been provided for several years by the Storage Resource Manager (StoRM), an implementation of the standard SRM interface widely adopted within the WLCG (Worldwide LHC Computing Grid) collaboration.

Requirements from the experiments at the Large Hadron Collider (LHC), and from other High Energy Physics use cases, are leading us to investigate more flexible and user-friendly methods for accessing the storage over the WAN. These include storage federations (an approach in which computing sites are grouped into geographic federations, permitting direct file access to the "federated" storage across sites) and, more generally, promising cloud-like approaches to sharing data storage. In particular, at CNAF a specific integration between GEMSS and Xrootd has been developed to meet the requirements of the CMS experiment. This was already the case for ALICE, using ad-hoc Xrootd modifications; the changes needed for CMS have been validated and are already available in the official Xrootd builds. This integration is currently in pre-production, and appropriate large-scale tests are under way. Moreover, an alternative approach to storage federations based on HTTP/WebDAV is under development, in particular for the ATLAS use case.

In this paper we present the emerging methods and technologies, with particular attention to cloud data access protocols such as WebDAV and to the Xrootd and HTTP data federation approaches. We also discuss the solutions adopted to increase the availability and optimize the overall performance of the services behind the system, and we provide a short summary and comparison of the results obtained with the different data access protocols in a cloud-like environment.
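As an illustration of the cloud-like, WAN-oriented access the abstract refers to, the sketch below shows how a client might read part of a file through a WebDAV endpoint using plain HTTP semantics. The endpoint URL, namespace path, and credential locations are illustrative assumptions, not the actual CNAF configuration.

```python
# Minimal sketch: reading part of a replica over WebDAV/HTTP.
# The endpoint, path and credential locations are hypothetical examples.
import requests

WEBDAV_ENDPOINT = "https://storage.example.org:8443"    # hypothetical WebDAV door
FILE_PATH = "/gpfs/experiment/data/run001/events.root"  # hypothetical namespace path

# Grid sites typically authenticate with an X.509 proxy; a plain cert/key pair
# is used here only to keep the example self-contained.
session = requests.Session()
session.cert = ("/tmp/usercert.pem", "/tmp/userkey.pem")

# Read only the first megabyte: HTTP range requests are one of the features
# that make WebDAV attractive for sparse, WAN-based data access.
resp = session.get(WEBDAV_ENDPOINT + FILE_PATH,
                   headers={"Range": "bytes=0-1048575"},
                   timeout=60)
resp.raise_for_status()
print(f"Fetched {len(resp.content)} bytes, status {resp.status_code}")

# The same file would be reachable through an Xrootd federation with a URL of
# the form root://federation.example.org//gpfs/experiment/data/run001/events.root
```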

Primary author

Pier Paolo Ricci (INFN CNAF)

Co-authors

Daniele Gregori (Istituto Nazionale di Fisica Nucleare (INFN))
Luca dell'Agnello (INFN-CNAF)
Tommaso Boccali (Sezione di Pisa (IT))
Dr Vincenzo Vagnoni (INFN Bologna)
Dr Vladimir Sapunenko (INFN)

Presentation materials