Description
Recent years have seen the mass adoption of streaming in mobile computing, an increase in the size and frequency of bulk long-haul data transfers in science in general, and the use of large data sets in job processing that demands real-time long-haul access and is therefore highly sensitive to variations in latency. The Physics and climate research communities have shown that the need to exchange petabytes of data with global collaborators can be seriously hampered by TCP congestion control and latency. The demand for faster, lower-latency transfers is further stressed by the increasing need for encryption, both in mobile computing and in computational science.
Two recent and promising additions to the Internet protocols are TCP-BBR and QUIC. BBR implements a congestion-control policy that promises better handling of TCP bottlenecks on long-haul transfers. TCP-BBR is available in Linux kernels from version 4.9 onward. It has been shown, however, to require some fine-tuning of its interaction with, for example, the Linux Fair Queue packet scheduler. QUIC, on the other hand, replaces TCP and TLS with a protocol on top of UDP, plus a thin layer to serve HTTP. It has been reported to account today for 7% of Google's traffic. It has not yet been used for server-to-server transfers, even though its creators consider that a real possibility.
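
As a brief illustration, the sketch below shows one way to request BBR on a single connection under Linux 4.9+ without changing the system-wide default. The per-socket TCP_CONGESTION option is standard Linux; everything else (the socket itself, the buffer length) is illustrative only.

    import socket

    # Minimal sketch, assuming a Linux 4.9+ kernel with the tcp_bbr module
    # loaded. TCP_CONGESTION selects the congestion-control algorithm for
    # this socket only, leaving the system-wide default
    # (net.ipv4.tcp_congestion_control) untouched.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")

    # Read back which algorithm the kernel actually applied.
    algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print(algo.rstrip(b"\x00").decode())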
Our work evaluates the applicability and tuning of TCP-BBR and QUIC for WLCG and data-science transfers. We describe the integration of each into the transfer tool iperf and the xroot protocol. Possibly for the first time, we present a server-to-server deployment of QUIC, together with a performance evaluation of both QUIC and TCP-BBR on long-haul transfers involving WLCG servers.
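
To give a flavor of the kind of comparison involved, the sketch below drives iperf3 against a remote endpoint once per congestion-control algorithm. The hostname is a placeholder, not a real WLCG server; iperf3's -C option (available on Linux and FreeBSD) selects the algorithm for the test.

    import subprocess

    # Illustrative sketch only: compare CUBIC and BBR throughput over a
    # long-haul path. "iperf.example.org" is a placeholder endpoint.
    for algo in ("cubic", "bbr"):
        # iperf3's -C flag selects the TCP congestion-control algorithm;
        # -t 30 runs each test for 30 seconds.
        subprocess.run(
            ["iperf3", "-c", "iperf.example.org", "-C", algo, "-t", "30"],
            check=True,
        )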