HTTP-TPC multistream
- Summary of the current status (support in FTS/gfal2 and in individual storage implementations, acting as passive and active HTTP-TPC endpoints for push & pull)
- FTS/gfal2 - the configured number of streams is passed to the COPY HTTP request via the X-Number-Of-Streams header
- dCache
- DPM
- EOS - pull transfers fail with multistream enabled, push ignores(?) multistream
- Echo
- StoRM
- XRootD - pull transfers fail with multistream enabled, push ignores(?) multistream
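As a minimal sketch of the mechanism described above, the request below shows where the X-Number-Of-Streams header sits in a pull-mode COPY; the hostnames and paths are made up for illustration, and only the method and header names come from the notes.

```python
# Sketch of the COPY request gfal2 issues for an HTTP-TPC pull transfer.
# Hostnames and file paths are hypothetical; the COPY method, the Source
# header and X-Number-Of-Streams follow the behaviour described above.

def build_tpc_copy_request(source_url: str, path: str, nstreams: int) -> str:
    """Return the raw request line and headers for a pull-mode COPY."""
    headers = {
        "Source": source_url,                  # remote source for a pull transfer
        "X-Number-Of-Streams": str(nstreams),  # stream count configured in FTS
    }
    lines = [f"COPY {path} HTTP/1.1"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_tpc_copy_request(
    "https://source-se.example.org:443/disk/file.root",
    "/disk/file.root",
    nstreams=8,
)
print(request)
```

Whether the receiving storage honours the header (or fails, as with EOS and XRootD pulls) is exactly the open question above.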
- Discussion triggered by GGUS:157985
- Should we invest time to fix / implement multistream support, or is a single stream acceptable to DOMA?
- What are the benefits? Could such functionality reduce operational effort?
- Overloading storage with multiple connections (e.g. dCache movers)?
- Other parameters ignored by the FTS/gfal2 HTTP-TPC implementation
- TCP buffer size
- enforce IPv4 vs. IPv6
- disable proxy delegation
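To make the first two ignored parameters concrete, the sketch below shows the OS-level knobs they correspond to; this is not gfal2 code, just the underlying socket options a transfer client would set.

```python
import socket

# Illustration of what "TCP buffer size" and "enforce IPv4 vs. IPv6"
# would control at the socket level (not gfal2's actual API).

def make_transfer_socket(ipv6: bool, tcp_buffer_bytes: int) -> socket.socket:
    family = socket.AF_INET6 if ipv6 else socket.AF_INET  # enforce IP family
    s = socket.socket(family, socket.SOCK_STREAM)
    # TCP buffer size: tune the kernel send/receive buffers
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, tcp_buffer_bytes)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, tcp_buffer_bytes)
    return s

s = make_transfer_socket(ipv6=False, tcp_buffer_bytes=4 * 1024 * 1024)
print(s.family)
s.close()
```

Since FTS exposes these as generic transfer parameters, silently ignoring them for one protocol is what makes the configuration interface confusing.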
- The configuration interface in FTS is the same for all protocols
- it is not clear which features are generally supported and which are supported only by a fraction of SE implementations
Non-standard disknode ports in default configuration (HTTPS)
- dCache - disknodes by default use the port range 20000-25000 for all protocols
- DPM - disknodes by default use port 443 for HTTPS, 1095 for xroot, and a random port for gsiftp
- Echo - doors use 1094 for HTTPS
- EOSATLAS - disknodes use 8443 for HTTPS
- StoRM - headnode / webdav doors use 8443 for HTTPS
- XRootD - uses 1094 for HTTPS
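The defaults above can be collected into a small lookup; the mapping is taken directly from the notes, while the hostname and helper function below are purely illustrative.

```python
# Default HTTPS (WebDAV) ports per storage implementation, as listed above.
# dCache has no single fixed port: disknodes pick from a 20000-25000 range.

DEFAULT_HTTPS_PORT = {
    "dCache": None,   # port range 20000-25000, no fixed default
    "DPM": 443,
    "Echo": 1094,
    "EOSATLAS": 8443,
    "StoRM": 8443,
    "XRootD": 1094,
}

def https_url(implementation: str, host: str, path: str) -> str:
    """Build a transfer URL using the implementation's default HTTPS port."""
    port = DEFAULT_HTTPS_PORT[implementation]
    if port is None:
        raise ValueError(f"{implementation} has no fixed default HTTPS port")
    return f"https://{host}:{port}{path}"

print(https_url("DPM", "se.example.org", "/dpm/file.root"))
```

The spread of defaults (443, 1094, 8443, or a whole range) is what makes firewall configuration for HTTP-TPC non-trivial.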
dCache per-pool transfer limit
- LHCb users observed problems accessing files on dCache (GGUS:153653)
- hadd normally opens all files at the start, so the per-pool transfer limit can be reached quite easily
- can be avoided with the maxopenedfiles CLI parameter
- it should not be so easy to "overload" dCache with the number of opened files
- a too low limit was also observed by ATLAS at some dCache sites
- jobs using "directio" can keep (multiple) files open for a long time
- FZK limits the number of movers because of memory usage with the xroot protocol
- dCache can even crash when it exhausts dcache.java.memory.direct memory
- can be triggered by just one "misbehaving" client
- extensively discussed with dCache developers
- I would not be surprised to see similar issues once users start to extensively use e.g. RDataFrame
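The client-side mitigation mentioned above (hadd's maxopenedfiles parameter) amounts to processing inputs in bounded batches rather than opening everything at once. The sketch below mimics that idea with plain in-memory file objects standing in for remote dCache reads; names and the merge logic are illustrative, not hadd's implementation.

```python
import io

# Mitigation sketch: cap the number of concurrently open files, as hadd's
# maxopenedfiles parameter does, so a single merge job cannot exhaust a
# pool's per-pool transfer (mover) limit. io.StringIO objects stand in
# for remote files.

def merge_in_batches(sources, sink, max_open=10):
    """Concatenate sources into sink, keeping at most max_open files open."""
    for start in range(0, len(sources), max_open):
        batch = [io.StringIO(s) for s in sources[start:start + max_open]]
        for f in batch:
            sink.write(f.read())
        for f in batch:
            f.close()  # release the "mover" before opening the next batch

inputs = [f"chunk-{i};" for i in range(25)]
out = io.StringIO()
merge_in_batches(inputs, out, max_open=10)
print(out.getvalue().count(";"))  # → 25
```

This only helps well-behaved clients; the server-side limits (movers, direct memory) discussed above still decide what one misbehaving client can do.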