Speaker
Dr Gabriele Garzoglio (Fermi National Accelerator Laboratory)
Description
As Big Data becomes ever more central to science, networks around the world are upgrading their infrastructure to support high-speed interconnections. As a pioneer in Big Data, the high-energy physics community has long relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need has only grown in recent years, with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this regularly produced peaks of data movement of about 30 Gbit/s on the WAN in and out of the laboratory, and of 160 Gbit/s on the LAN between storage and computational farms. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gbit/s link.
To understand the optimal system- and application-level configuration for interfacing computational systems with the new high-speed interconnect, Fermilab has deployed a Network R&D facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program has been using the Testbed to identify gaps in data movement middleware when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP, XrootD, and Squid. This work presents the new R&D facility and the continuation of the evaluation program.
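One system-level concern when tuning hosts for a link of this class is the TCP window required to keep a high-bandwidth, high-latency path fully utilized, given by the bandwidth-delay product. As an illustrative sketch (the 50 ms cross-country RTT is an assumed example, not a figure from the program's measurements):

```python
def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to
    fill the link, i.e. the minimum TCP window size needed."""
    return bandwidth_bits_per_s * rtt_s / 8  # bits -> bytes

# Assumed example: a 100 Gbit/s link with a 50 ms round-trip time.
window = bdp_bytes(100e9, 0.050)
print(f"Required TCP window: {window / 1e6:.0f} MB")  # 625 MB
```

A single TCP stream rarely sustains a window this large in practice, which is one reason tools such as GridFTP are typically run with many parallel streams on paths like this.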
Primary authors
Dave Dykstra (Fermi National Accelerator Lab. (US))
Dr Gabriele Garzoglio (Fermi National Accelerator Laboratory)
Dr Hyunwoo Kim (Fermilab)
Dr Marko Slyz (Fermilab)
Parag Mhashilkar (Fermi National Accelerator Laboratory)