Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Start / Stop / Start: 2 problems left on the O2 side:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. The stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing while accessing CCDB objects. Not clear whether this is the same problem as above or a problem in the task itself.
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Tested; the 2 problems with new oldestPossible errors and with the input-proxy not sending EoS are still there.
- Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?
Sync reconstruction
- Waiting for RC to test COSMIC replay data set.
- Waiting for RC to test STOP timeout impact.
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time (up to 2 minutes). Checking 2 things:
- does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
- MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
- Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
AliECS related topics:
- Extra env var field still not multi-line by default; created https://its.cern.ch/jira/browse/OGUI-1624 to follow this up separately from other tickets.
GPU ROCm / compiler topics:
- List of important issues with AMD:
- Random server reboots on MI100: Tried several workarounds, but no solution found so far. Giada spotted some weird FairMQ problems in the large-scale test, which could be due to memory corruption.
- Random crashes on MI100 due to a memory error; can be worked around by serializing all kernels and DMA transfers, at the cost of ~20% performance.
- Miscompilation leading to crashes, worked around by changing our code, but compiler bug still there.
- Provide an RPM ROCm version with all fixes, so that we don't need to compile clang manually with custom patches.
- Proper way to enable amdgpu-function-calls instead of hacking AMD scripts and binaries.
- hipHostRegister has become very slow when more than 1 GPU visible (via ROCR_VISIBLE_DEVICES).
- EPNs provided 3 servers for ROCm 6.2.4 / Alma 9.4.
- Set up the reproducer for the reboot there; works reliably. Passed instructions to AMD on how to run it.
- EPN also provided 3 servers with the new minor versions ROCm 6.3.2 / Alma 9.5.
- Tried to reproduce it on these 3 servers for 2 days; it didn't happen.
- Exact same software for sure crashes with ROCm 6.3.1 / Alma 9.4.
- Also tried manual reboots of the servers in between, thinking perhaps they are sometimes in a good and sometimes in a bad state.
- Perhaps it is really fixed now, but we want to gather more statistics before we can tell AMD to stop looking into this and focus instead on the memory error leading to application crashes.
- Reboot issue has "disappeared" with bumping from Alma 9.4 to 9.5.
- Tested from Thursday till this morning in FSTs on 3 MI100 and 1 MI50 EPN by me, and on 2 more servers by EPN team. No crash so far.
- Double-checked that the only change is really Alma 9.4 vs. 9.5: same ROCm, same O2 code.
- The other GPU issues also remain with 9.5.
- Damon from AMD is back working for us, and will look at the GPU memory error next.
- Giulio and Sergio updated the "staging" container in Jenkins. We need to test if the RPM deployment problem is fixed now and whether the RPMs we build can do the FST on the new Alma 9.5 EPNs.
- Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code.
TPC / GPU Processing
- WIP: Use alignas() or find a better solution to fix alignment of monte carlo labels: https://its.cern.ch/jira/browse/O2-5314
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check PR which uses full cluster errors including average charge and occupancy map errors during seeding.
- Final solution: merging transformation maps on the fly into a single flat object: Still WIP
- Pending OpenCL2 issues:
- printf not working due to confirmed bug in clang, fix is being prepared. Prevents further debugging for now.
- GPU MemClean not working in TPC clusterization, need to debug.
- Crash in merger, which can be worked around by disabling clang SPIRV optimization. Probably bug in clang, but need to fix printf first to debug.
- Also with optimization disabled, crashing later in TPC merging, need printf to debug.
- Solved memset issue with OpenCL, but Clusterizer still gives slightly different clusters running on OpenCL.
- Felix reported the problem is due to off-by-one offset in NoiseSuppression. Need to check how that can happen only in OpenCL.
- Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
Other Topics
- Hiring new fellow: open until 17.3., 7 applications so far, 4 of them declined by HR for formal reasons.
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose track of which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance; currently cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to distribute based on buffer utilization and give more data to MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to MI100 nodes. This increases memory pressure on the MI50 nodes, which is already a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
- Should discuss when we can upgrade the OS to Alma 9.5, probably starting with staging.
Other EPN topics: