Presentation:
High Touch DC24 data analyses
- Most of the flows are between 100 MB and 1 GB. Only a small fraction is >4 GB, and most of those are generated by perfSONAR.
- Only 40k out of 157k flows are xrootd or https (see the classification sketch after this list).
- The number of flows using jumbo frames is very low; those flows do show better performance, though.
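A minimal sketch of the kind of per-flow classification behind the numbers above. The input file, column names, and the port-to-protocol mapping (1094 for xrootd, 443 for https) are assumptions for illustration, not the actual analysis pipeline:

    # Hypothetical flow-record CSV with `bytes` and `dst_port` columns.
    import csv
    from collections import Counter

    size_buckets = Counter()
    protocols = Counter()

    with open("dc24_flows.csv") as f:                 # assumed input file
        for row in csv.DictReader(f):
            nbytes = int(row["bytes"])
            if nbytes > 4e9:
                size_buckets[">4GB"] += 1
            elif nbytes > 1e9:
                size_buckets["1GB-4GB"] += 1
            elif nbytes >= 1e8:
                size_buckets["100MB-1GB"] += 1
            else:
                size_buckets["<100MB"] += 1
            proto = {1094: "xrootd", 443: "https"}.get(int(row["dst_port"]), "other")
            protocols[proto] += 1

    print(size_buckets)
    print(protocols)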
Discussion:
- We need to convey to the experiments that their way of transferring files is highly inefficient.
- We should discuss the issue with the FTS and EOS developers.
From Chat:
Justas Balcas to Everyone (Jul 2, 2024, 5:55 PM)
Just to drop this for everyone's information (as it is a lot and time is running out): the SENSE team with UCSD is working on XRootD testing over different RTTs between two powerful gen5 machines (abstract accepted for CHEP). Some highlights:
For 2 servers connected back to back (with a huge memory disk), the maximum we have been able to achieve is 240 Gbit/s, and this looks to be a software limitation. One item we are investigating is how XRootD communicates with the storage layer and feeds all the data back to the transfer socket. There are hardcoded parts inside the XRootD code which limit reads/writes into a socket to a maximum of 1 MB. For storage layers, 1 MB chunk reads mean a lot of head movement, which strongly affects what is sent back to the network.
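A minimal sketch (not XRootD code) of the effect being investigated: reading a file in fixed-size chunks and pushing each chunk into a TCP socket, with the chunk size as the only variable. The file path, receiver host, and port are placeholders.

    import socket
    import time

    def stream_file(path, host, port, chunk_size):
        """Read `path` in `chunk_size` blocks and write each block to a TCP socket."""
        sent = 0
        start = time.monotonic()
        with open(path, "rb") as f, socket.create_connection((host, port)) as sock:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                sock.sendall(chunk)
                sent += len(chunk)
        return sent / (time.monotonic() - start) / 1e9   # GB/s

    if __name__ == "__main__":
        for size in (1 << 20, 4 << 20, 16 << 20):        # 1 MiB (current limit), 4 MiB, 16 MiB
            rate = stream_file("/tmp/testfile", "receiver.example.org", 5001, size)
            print(f"chunk {size >> 20} MiB: {rate:.2f} GB/s")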
What would happen if XRootD used 4 MB or 16 MB chunks to read data and pass it back to the socket? From my observation with Ceph (no XRootD involved), moving from 4 MB to 16 MB chunks allowed CephFS to reach read/write rates of up to 20 GB/s locally (in production), while 4 MB chunks could not exceed 10 GB/s ("local" here means random read/write from a kernel mount; whether increasing the read chunk size would improve XRootD remains to be seen).
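And the filesystem side of the same comparison: a sketch that times sequential reads from a kernel mount at 4 MiB vs 16 MiB chunk sizes. The mount path is a placeholder, and a real measurement would also drop the page cache between runs.

    import os
    import time

    def read_throughput(path, chunk_size):
        """Sequentially read `path` in `chunk_size` blocks and return GB/s."""
        fd = os.open(path, os.O_RDONLY)
        total = 0
        start = time.monotonic()
        while True:
            buf = os.read(fd, chunk_size)
            if not buf:
                break
            total += len(buf)
        os.close(fd)
        return total / (time.monotonic() - start) / 1e9

    for size in (4 << 20, 16 << 20):                      # 4 MiB vs 16 MiB
        rate = read_throughput("/cephfs/testfile", size)  # placeholder CephFS mount
        print(f"{size >> 20} MiB chunks: {rate:.2f} GB/s")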
For reference: https://ceph.io/en/news/blog/2021/diving-deeper/
Hiro to Everyone (Jul 2, 2024, 5:56 PM)
Also, keep in mind that FTS only requests the third-party transfer from the storage; the storage is the one that actually starts the transfer.
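For context, a sketch of the kind of HTTP third-party-copy request FTS issues in pull mode: the COPY verb goes to the destination storage with a Source header, and it is the destination endpoint that then fetches the data from the source while streaming performance markers back on the control channel. URLs and tokens are placeholders.

    import requests

    destination = "https://dest-se.example.org:443/store/path/file.root"    # placeholder
    source      = "https://source-se.example.org:443/store/path/file.root"  # placeholder

    resp = requests.request(
        "COPY",
        destination,
        headers={
            "Source": source,                                 # destination pulls from here
            "Authorization": "Bearer <token>",                # credential for the destination
            "TransferHeaderAuthorization": "Bearer <token>",  # credential forwarded to the source
        },
        stream=True,
    )

    # The destination storage performs the actual data movement and streams back
    # performance-marker lines; the client (FTS) only watches this control channel.
    for line in resp.iter_lines():
        print(line.decode())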