Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Start / Stop / Start: 2 problems on O2 side left:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing while accessing CCDB objects. Not clear if it is the same problem as above, or a problem in the task itself.
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308. Reported 2 issues to Giulio, waiting for a fix. Status?
- Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?
Sync reconstruction
- Waiting for RC to test COSMIC replay data set.
- Waiting for RC to test STOP timeout impact.
- Smooth data taking until the end of the run. Minor errors were still fixed, but the fixes were not deployed since there was no time and the issues were not critical, e.g. corrupt MFT data leading to crashes at start of run, and TPC calibration causing trouble in the CCDB when older objects arrive later.
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
- does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
- MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
- Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
- Changed the MI100 vobox to run the default container, and disabled matching for 96-core GPU jobs, so it should not run normal 8-core async jobs CPU-only.
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be necessary to stop the SLURM schedulers when moving nodes, and there should be no limitation for ongoing runs at P2 or ongoing async jobs.
- We must not lose which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance: currently it cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in absence of backpressure, but it would be better to distribute based on buffer utilization, and give more data to the MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397 (see the sketch after this list)
- TfBuilders should stop in ERROR when they lose connection.
- Allow the epn user and grid user to set the nice level of processes: https://its.cern.ch/jira/browse/EPN-349 (see the sketch after this list)
- Tentative time for ALMA9 deployment: December 2024.
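
The buffer-utilization-based distribution of EPN-397 essentially means replacing the round-robin target choice by a weighted choice. A minimal sketch, assuming hypothetical node bookkeeping (names and fields are illustrative, not DataDistribution code):

  #include <string>
  #include <vector>

  // Hypothetical per-node state; the real DataDistribution bookkeeping differs.
  struct NodeState {
    std::string name;
    double shmUsedFraction; // current SHM buffer utilization, 0..1
    double weight;          // relative processing capacity, e.g. MI100 > MI50
  };

  // Pick the node with the most free weighted capacity instead of round-robin,
  // so faster (MI100) nodes receive proportionally more TFs and nodes close to
  // their buffer limit are avoided.
  const NodeState* pickTargetNode(const std::vector<NodeState>& nodes) {
    const NodeState* best = nullptr;
    double bestScore = -1.;
    for (const auto& n : nodes) {
      double score = (1. - n.shmUsedFraction) * n.weight; // free buffer times capacity
      if (score > bestScore) {
        bestScore = score;
        best = &n;
      }
    }
    return best; // nullptr only if the node list is empty
  }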
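
For the nice-level request (EPN-349), the underlying mechanism is the standard POSIX priority interface. A minimal sketch, not the EPN / grid integration; note that without CAP_SYS_NICE an unprivileged user can only increase the nice value, which is why a dedicated permission is needed:

  #include <cerrno>
  #include <cstdio>
  #include <cstring>
  #include <sys/resource.h>
  #include <unistd.h>

  // Set the nice level of the current process (higher nice value = lower priority).
  bool setOwnNiceLevel(int nice) {
    if (setpriority(PRIO_PROCESS, getpid(), nice) != 0) {
      std::fprintf(stderr, "setpriority failed: %s\n", std::strerror(errno));
      return false;
    }
    return true;
  }

  int main() {
    return setOwnNiceLevel(10) ? 0 : 1; // set nice value 10 for the current process
  }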
Other EPN topics:
AliECS related topics:
- Extra env var field still not multi-line by default.
GPU ROCm / compiler topics:
- ROCm 6.2.2 available from AMD, old problems seem fixed, but we see 2 new types of crashes
- New miscompilation for >ROCm 6.0
- Waiting for AMD to fix the reproducer we provided (not yet fixed in 6.2.2, but we have a workaround).
- Try to find a better solution for the problem with __device__ inline functions leaking symbols into the host code (see the sketch after this list).
- Merged the GPUDataTypes and GPUDataTypeHeaders libraries, since the split was causing trouble for Jens with the ROOT DebugStreamers, because I had some hack in there with respect to ROOT dictionaries for the TRD GPU track model.
- A lot of progress on the C++ for OpenCL / OpenCL 3.0 backend: using POCL (Portable Computing Language) and Clang 19, we can run our SPIR-V IL code on the CPU. Clusterization works, but then the TPC tracking crashes (under investigation).
- The plan is to fix this, and then remove all the obsolete support for ROOT 5 / the AliRoot build / OpenCL 1.2 / pre-C++17 standards. Run 2 data will still be supported by using the existing raw dump in AliRoot and loading it in the standalone benchmark.
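
Regarding the __device__ inline symbol leakage mentioned above: one way such symbols can leak is when the same header is also compiled by a plain host compiler (with the device attributes compiled away), so the functions become ordinary inline functions with weak external symbols in every host object file. A hedged sketch of one possible guarding scheme (macro names are illustrative, not the O2 GPU macros):

  // gpu_inline_example.h - illustration only, not the actual O2 header layout.
  #pragma once

  #if defined(__CUDACC__) || defined(__HIPCC__)
    #define GPU_DEVICE_INLINE __device__ inline // real device function for GPU compilers
  #else
    #define GPU_DEVICE_INLINE static inline     // internal linkage on plain host compilers,
                                                // so no symbol leaks into host binaries
  #endif

  GPU_DEVICE_INLINE float square(float x) { return x * x; }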
TPC GPU Processing
- WIP: Use alignas() or find a better solution to fix the alignment of Monte Carlo labels: https://its.cern.ch/jira/browse/O2-5314 (see the sketch after this list)
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check the PR which uses the full cluster errors, including average charge and occupancy map, during seeding.
- The next PR will allow removing multiple clusters in the same pad row, keeping only the best one (see the sketch after this list).
- Added a feature to cut TPC clusters above a certain time bin in the TF, to remove bogus data due to the ALTRO sync signal.
- Fixed a regression that led to an RTC compilation failure due to using system headers.
- Fixed a bug in TPC GPU track model decoding when TF was truncated during sync processing due to buffer overflows / TPC trips.
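
On the MC-label alignment issue (O2-5314) mentioned above, alignas() on the label container type is the simplest direction. A minimal, hypothetical sketch (names and layout are illustrative, not the O2 types):

  #include <cstdint>

  // Hypothetical stand-in for an MC label entry; the real O2 type differs.
  struct MCLabel {
    int32_t trackID;
    int32_t eventID;
    int32_t sourceID;
  };

  // Force each label block to start on a 16-byte boundary, so the GPU code can
  // use aligned accesses regardless of what precedes the block in a shared buffer.
  struct alignas(16) MCLabelBlock {
    uint32_t nLabels;
    MCLabel labels[3];
  };

  static_assert(alignof(MCLabelBlock) == 16, "label block must stay 16-byte aligned");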
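
For the per-pad-row cluster selection mentioned above, the core of the logic is a per-row reduction that keeps the candidate with the best quality. A hedged sketch with made-up structures, not the actual PR code:

  #include <cstdint>
  #include <vector>

  // Illustrative cluster attachment candidate; the real O2 data layout differs.
  struct ClusterCandidate {
    uint8_t padRow;        // pad row index, 0..151 in the TPC
    float quality;         // e.g. chi2-based weight, larger = better here
    uint32_t clusterIndex; // index into the cluster storage
  };

  // Keep only the best candidate per pad row, dropping duplicates in the same row.
  std::vector<ClusterCandidate> keepBestPerPadRow(const std::vector<ClusterCandidate>& in) {
    constexpr int kNRows = 152;
    int best[kNRows];
    for (int i = 0; i < kNRows; i++) {
      best[i] = -1;
    }
    for (size_t i = 0; i < in.size(); i++) {
      const int row = in[i].padRow;
      if (best[row] < 0 || in[i].quality > in[static_cast<size_t>(best[row])].quality) {
        best[row] = static_cast<int>(i);
      }
    }
    std::vector<ClusterCandidate> out;
    for (int i = 0; i < kNRows; i++) {
      if (best[i] >= 0) {
        out.push_back(in[static_cast<size_t>(best[i])]);
      }
    }
    return out;
  }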
TPC processing performance regression:
- Final solution: merging the transformation maps on the fly into a single flat object. Still WIP.
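
The idea of merging on the fly is to fold the reference map and the scaled correction into one flat object once per setting change, so the per-cluster hot loop evaluates a single map instead of several. A simplified 1D sketch with hypothetical names (the real TPC maps are multi-dimensional splines):

  #include <cstddef>
  #include <vector>

  // Hypothetical flat 1D map: equidistant knots with linear interpolation.
  // Assumes at least 2 knots.
  struct FlatMap1D {
    float x0 = 0.f, dx = 1.f;
    std::vector<float> values;

    float eval(float x) const {
      float u = (x - x0) / dx;
      std::size_t i = u < 0.f ? 0 : static_cast<std::size_t>(u);
      if (i + 1 >= values.size()) {
        i = values.size() - 2;
      }
      const float f = u - static_cast<float>(i);
      return values[i] * (1.f - f) + values[i + 1] * f;
    }
  };

  // Merge a reference map and a scaled residual map once, so that the hot loop
  // evaluates a single object instead of two maps plus a scale factor.
  FlatMap1D mergeMaps(const FlatMap1D& ref, const FlatMap1D& residual, float scale) {
    FlatMap1D out = ref; // reuse the knot grid of the reference map
    for (std::size_t i = 0; i < out.values.size(); i++) {
      const float x = out.x0 + static_cast<float>(i) * out.dx;
      out.values[i] = ref.values[i] + scale * residual.eval(x);
    }
    return out;
  }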
General GPU Processing