10-14 October 2016
San Francisco Marriott Marquis
America/Los_Angeles timezone

Next Generation high performance, multi-dimensional scalable data transfer

13 Oct 2016, 11:00
GG C3 (San Francisco Marriott Marquis)

Oral Track 4: Data Handling


Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)), Dr Roger Cottrell (SLAC National Accelerator Laboratory), Wei Yang (SLAC National Accelerator Laboratory (US)), Dr Wilko Kroeger (SLAC National Accelerator Laboratory)


The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, and others. We report on the Zettar ZX software, developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced, easy-to-deploy-and-use way while minimizing power and space utilization. Working with the Zettar development team, we have deployed the ZX software on existing Data Transfer Nodes (DTNs) at SLAC and NERSC and transferred data between a Lustre cluster at SLAC and a GPFS cluster at NERSC.

ZX supports multi-dimensional scaling in terms of computational power, network interfaces, and analysis and customization of the storage interface configuration, resulting in optimized use of IOPS. It also pays close attention to transferring large numbers of small files successfully and efficiently, and avoids the latency issues of external services. We have verified that the data rates are comparable with those achieved by a commonly used high-performance data transfer application (bbcp).

In collaboration with several commercial vendors, a Proof of Concept (PoC) has also been put together using off-the-shelf components to test ZX's scalability and its ability to balance services across multiple cores and network interfaces to provide much higher throughput. The PoC consists of two 4-node clusters with the flash storage per node aggregated by a parallel file system, plus 10 Gbps, 40 Gbps and 100 Gbps network interfaces. Each cluster, plus its two associated switches, occupies 4 rack units and draws less than 3.6 kW. Using the PoC, we have achieved 155 Gbps memory-to-memory between clusters over a 16x10 Gbps link-aggregated channel (LAG) and 70 Gbps file-to-file with encryption over a 5000-mile 100 Gbps link.
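As a rough sanity check on the throughput figures above (illustrative only, not part of the ZX software), the link utilization implied by the PoC results can be computed directly from the numbers in the abstract:

```python
# Back-of-the-envelope utilization for the PoC results quoted above.
# All figures come from the abstract; the calculation is illustrative.

lag_links = 16                 # 16 bonded 10 Gbps interfaces (LAG)
link_gbps = 10
achieved_memory_gbps = 155     # memory-to-memory over the LAG
achieved_file_gbps = 70        # file-to-file with encryption over the WAN
wan_gbps = 100                 # single 100 Gbps long-haul link

lag_capacity = lag_links * link_gbps                    # 160 Gbps aggregate
lag_utilization = achieved_memory_gbps / lag_capacity   # ~0.97
wan_utilization = achieved_file_gbps / wan_gbps         # 0.70

print(f"LAG capacity: {lag_capacity} Gbps")
print(f"Memory-to-memory LAG utilization: {lag_utilization:.0%}")
print(f"Encrypted file-to-file WAN utilization: {wan_utilization:.0%}")
```

That is, the memory-to-memory result uses roughly 97% of the aggregated 160 Gbps channel, while the encrypted file-to-file transfer sustains 70% of the single 100 Gbps link.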

Primary Keyword (Mandatory): Network systems and solutions
Secondary Keyword (Optional): Distributed data handling

Primary authors

Dr Chin Fang (Zettar Inc.), Dr Roger Cottrell (SLAC National Accelerator Laboratory)


Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US)), Wei Yang (SLAC National Accelerator Laboratory (US)), Dr Wilko Kroeger (SLAC National Accelerator Laboratory)

Presentation Materials
