27 September 2004 to 1 October 2004
Interlaken, Switzerland
Europe/Zurich timezone

InfiniBand for High Energy Physics

29 Sep 2004, 15:40
Harder (Interlaken, Switzerland)



Oral presentation, Track 6 - Computer Fabrics




Distributed physics analysis techniques as provided by the rootd and proofd concepts require a fast and efficient interconnect between the nodes. Apart from the required bandwidth, the latency of message transfers is important, in particular in environments with many nodes. Ethernet is known to have large latencies, between 30 and 60 microseconds for the common Gigabit Ethernet. The InfiniBand architecture is a relatively new, open industry standard. It defines a switched high-speed, low-latency fabric designed to connect compute nodes and I/O nodes with copper or fibre cables. The theoretical bandwidth is up to 30 Gbit/s. The Institute for Scientific Computing (IWR) at the Forschungszentrum Karlsruhe has been testing InfiniBand technology since the beginning of 2003, and has a cluster of dual Xeon nodes using the 4X (10 Gbit/s) version of the interconnect. Bringing the RFIO protocol - which is part of the CERN CASTOR facilities for sequential file transfers - to InfiniBand has been a big success, allowing a significant reduction of CPU consumption and an increase of file transfer speed. A first prototype of a direct interface to InfiniBand for the ROOT toolkit has been designed and implemented. Experiences with hardware and software, in particular MPI performance results, will be reported. The methods and first performance results on RFIO and ROOT will be shown and compared to other fabric technologies like Ethernet.
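The latency figures quoted above are typically obtained with a ping-pong benchmark: one node sends a small message, the peer echoes it back, and half the average round-trip time is taken as the one-way latency. As a hedged illustration of the method (not the authors' benchmark code, which is not given in the abstract), the following self-contained Python sketch measures ping-pong latency over a loopback TCP socket; the function name and parameters are hypothetical:

```python
import socket
import threading
import time

def recv_exact(sock, n):
    """Receive exactly n bytes from a TCP socket (TCP may split messages)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed prematurely")
        buf += chunk
    return buf

def pingpong_latency(n_iters=1000, msg_size=64):
    """Estimate one-way latency as half the mean ping-pong round-trip time."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # ephemeral port on loopback
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo_server():
        conn, _ = srv.accept()
        with conn:
            for _ in range(n_iters):
                conn.sendall(recv_exact(conn, msg_size))

    t = threading.Thread(target=echo_server)
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    # Disable Nagle's algorithm so small messages are sent immediately,
    # as a real latency benchmark would.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    msg = b"x" * msg_size

    start = time.perf_counter()
    for _ in range(n_iters):
        cli.sendall(msg)
        recv_exact(cli, msg_size)
    elapsed = time.perf_counter() - start

    t.join()
    cli.close()
    srv.close()
    return elapsed / n_iters / 2  # one-way estimate in seconds

lat = pingpong_latency()
print(f"estimated one-way latency: {lat * 1e6:.1f} microseconds")
```

On real hardware the same pattern is run over the interconnect under test (e.g. with an MPI ping-pong between two nodes) rather than over loopback; loopback numbers only illustrate the measurement technique.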

Primary authors

A. Heiss (Forschungszentrum Karlsruhe), G. Ganis (CERN), U. Schwickerath (Forschungszentrum Karlsruhe)
