Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Start / Stop / Start: 2 problems on O2 side left:
- Processes are crashing randomly (usually ~2 out of >10k) when restarting. The stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashes when accessing CCDB objects. Unclear whether this is the same problem as above or a problem in the task itself.
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308:
- RC did some timeout tests; they need to be repeated in physics runs, then we can decide on the actual timeouts.
- Ernst will verify the processing of calib data after data processing timeout.
- PR with InfoLogger improvements still WIP.
- Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?
- ONNXRuntime update merged.
- Problem with the GPU and non-GPU containers building ONNXRuntime with the same hash and uploading it to the binary repository:
- The GPU O2 build thus did not get an ONNXRuntime binary with GPU support.
- Fixed by https://github.com/alisw/alidist/pull/5855 using new alibuild feature of versioned system packages.
Sync reconstruction
- Waiting for RC to test COSMIC replay data set.
- Waiting for RC to test STOP timeout impact.
- Problem with high CPU load due to DPL metrics; disabled GUI metrics in online mode. Issue mostly fixed, but yesterday we had some problems?
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
- Does the situation get better without GPU monitoring? --> Inconclusive.
- Increased GPU process priority can be used as a mitigation, but it does not fully fix the issue.
- Need to investigate short GPU stall problem.
- The limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
AliECS related topics:
- Extra env var field still not multi-line by default; created https://its.cern.ch/jira/browse/OGUI-1624 to follow this up separately from the other tickets.
GPU ROCm / compiler topics:
- List of important issues with AMD:
- Issues that disappeared but are not yet understood: random server reboots with Alma 9.4, miscompilation with ROCm 6.2, GPU getting stuck when the DMA engine is turned off, MI100 stalling with ROCm 5.5.
- EPN deployed the fix by AMD a second time, this time it works. Automatic workaround for MI100 removed in O2/dev. Will be deployed with next SW update.
- Problem with building ONNXRuntime with MIGraphX support, to be checked.
- The slc9-gpu-builder container was lacking dependencies for building NVIDIA GPU ONNX support with TensorRT. Fixed and tested, but currently not really needed since we cannot build ONNXRuntime with both AMD and NVIDIA support.
- Try to find a better solution for the problem of __device__ inline functions leaking symbols into the host code (see the sketch below).
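- A minimal sketch of the leak pattern, assuming a compatibility macro in the spirit of the O2 GPU wrappers (macro and function names here are hypothetical, as is the containment option at the end):

    // Header compiled both in the device pass and in the host pass:
    #if defined(__CUDACC__) || defined(__HIPCC__)
    #define GPU_DEVICE_INLINE __device__ inline
    #else
    #define GPU_DEVICE_INLINE inline // host pass: becomes a plain inline
    #endif

    // Every host translation unit that includes this header now emits a
    // weak symbol for the host version of this function:
    GPU_DEVICE_INLINE float invCharge(float q) { return 1.f / q; }

    // One possible containment is internal linkage for the host fallback:
    // #define GPU_DEVICE_INLINE static inline
    // (or wrapping the host versions in an anonymous namespace).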
- Once we bump arrow (PR opened by Giulio), we can bump LLVM to 19.
- AMD reported a regression where deterministic mode shows slight differences between CPU and GPU. Need to check whether the regression is in O2 code or in ROCm.
- ROCm 6.4 released. AMD split the driver and the ROCm part. Need to check if something needs to be done on our side. In any case, the fix for synchronization is missing, so we cannot use 6.4 yet.
TPC / GPU Processing
- WIP: Use alignas() or find a better solution to fix the alignment of Monte Carlo labels (see the sketch below): https://its.cern.ch/jira/browse/O2-5314
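- A minimal sketch of the alignas() direction, with a hypothetical label layout (not the actual MC label type):

    #include <cstdint>
    // Pin the element alignment so CPU and GPU agree on the buffer layout;
    // the static_assert catches accidental layout changes at compile time.
    struct alignas(8) MCLabelSketch {
      int32_t trackID;
      int16_t eventID;
      int16_t sourceID;
    };
    static_assert(sizeof(MCLabelSketch) == 8 && alignof(MCLabelSketch) == 8,
                  "label layout must match between CPU and GPU");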
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check the PR which uses full cluster errors, including average charge and occupancy-map errors, during seeding.
- Final solution: merging the transformation maps on the fly into a single flat object. Still WIP.
- Pending OpenCL2 issues:
- printf not working due to a confirmed bug in clang; a fix is being prepared. Prevents further debugging for now.
- Crash in the merger, which can be worked around by disabling the clang SPIR-V optimization. Probably a bug in clang, but printf needs to be fixed first to debug it.
- Even with optimization disabled, it crashes later in TPC merging; printf is needed to debug.
- Felix traced the OpenCL clusterization problem to an off-by-one offset in NoiseSuppression. Need to check how that can happen only in OpenCL.
- Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
- Improved code generation and application of the DETERMINISTIC mode flags, such that GPU RTC can enable the deterministic mode and no_fast_math flags in the code it compiles. RTC now also yields 100% the same results as the CPU in deterministic mode.
- However, the original plan to make deterministic mode a runtime flag when RTC is used turns out not to work, since configuration parameters might be rounded when scaled on the host before being passed on to the GPU. I.e., it is unavoidable to recompile the host code in deterministic mode (see the sketch below).
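- A minimal sketch of why the host build matters, with a hypothetical parameter name and upload helper:

    #include <cstddef>
    struct ConfigSketch { float errorScale; int nBins; };
    void uploadToGPU(const void* src, std::size_t size); // hypothetical helper

    void prepareConfig(const ConfigSketch& cfg)
    {
      // The rounding of this host-side pre-scaling depends on the host
      // compiler's fast-math / FMA settings, not on the GPU build:
      float scaled = cfg.errorScale / static_cast<float>(cfg.nBins);
      uploadToGPU(&scaled, sizeof(scaled));
      // Even a fully deterministic GPU kernel consumes whatever value the
      // host produced, hence the host needs the same deterministic flags.
    }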
- Switched from the thrust library to the CUB library for sorting on the full GPU device (see the sketch below). Thrust was adding unnecessary synchronizations; I had patched them away in CUDA's thrust, but never had time to do the same in HIP's thrust, and my CUDA patch no longer worked with the latest CUDA, so switching to CUB seemed the simplest solution. Time per TF reduced from 4.1 to 4.0 seconds on my NVIDIA GPU; unfortunately no improvement on MI50/MI100.
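- For reference, the CUB pattern looks roughly like this (a sketch, not the actual O2 call site); both calls are enqueued asynchronously on the given stream, without the implicit synchronizations of the thrust wrappers:

    #include <cub/cub.cuh>
    // Two-phase CUB device-wide sort: the first call only queries the
    // required temp storage size, the second call performs the sort.
    void sortKeysSketch(unsigned int* dKeysIn, unsigned int* dKeysOut,
                        int nKeys, cudaStream_t stream)
    {
      void* dTemp = nullptr;
      size_t tempBytes = 0;
      cub::DeviceRadixSort::SortKeys(dTemp, tempBytes, dKeysIn, dKeysOut,
                                     nKeys, 0, 32, stream);
      cudaMallocAsync(&dTemp, tempBytes, stream); // or reuse a scratch buffer
      cub::DeviceRadixSort::SortKeys(dTemp, tempBytes, dKeysIn, dKeysOut,
                                     nKeys, 0, 32, stream);
      cudaFreeAsync(dTemp, stream);
    }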
- With RTC now able to compile with NO_FAST_MATH, working to fix the issue that some clusters fail track-model decoding when RTC is enabled, due to floating point rounding.
- We had fixed this without RTC using per-kernel compilation and per-kernel compile flags, but so far this was not possible with RTC.
- We now have a runtime config object with all GPU launch-bound parameters and can automatically generate RTC code from it. (We can also load the runtime object from a preprocessor-define header.)
- This is now used by Gabriele to tune launch-bound parameters with RTC (much faster than recompiling O2).
- Working on a PR to add more GPU compile-time parameters to the config object in the same way, so that they can be changed for RTC. See the sketch below.
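- A sketch of the config-object idea (struct, field, and macro names are hypothetical): one runtime object holds the launch-bound parameters and is serialized into a #define preamble for the RTC source, so kernels can be retuned without rebuilding O2:

    #include <string>
    struct KernelConfigSketch {
      int clustererBlockSize = 512;
      int clustererMinBlocks = 2;
    };
    std::string toRTCPreamble(const KernelConfigSketch& c)
    {
      // The generated defines are prepended to the RTC source; device code
      // can then consume them, e.g. in
      //   __launch_bounds__(CLUSTERER_BLOCK_SIZE, CLUSTERER_MIN_BLOCKS)
      return "#define CLUSTERER_BLOCK_SIZE " + std::to_string(c.clustererBlockSize) + "\n" +
             "#define CLUSTERER_MIN_BLOCKS " + std::to_string(c.clustererMinBlocks) + "\n";
    }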
Other Topics
- Selected Felix as the first candidate for the Quest position. Now at HR, but it is going pretty slowly...
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be necessary to stop the SLURM schedulers when moving nodes, and there should be no limitation for ongoing runs at P2 or ongoing async jobs.
- We must not lose track of which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023. Status?
- Improve DataDistribution file replay performance: currently we cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to distribute based on buffer utilization and give more data to the MI100 nodes (see the sketch below). Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is a critical point anyway. https://alice.its.cern.ch/jira/browse/EPN-397
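- A hedged sketch of what utilization-aware dispatch could look like (illustrative only, not the DataDistribution API; the capacity weights are made up):

    #include <cstddef>
    #include <vector>
    // Pick the EPN with the lowest capacity-weighted buffer utilization, so
    // higher-weight MI100 nodes absorb more TFs before MI50 nodes are
    // pushed into backpressure.
    struct EPNStateSketch { double bufferUsed, bufferSize, weight; };
    std::size_t pickTarget(const std::vector<EPNStateSketch>& epns)
    {
      std::size_t best = 0;
      double bestScore = 1e300;
      for (std::size_t i = 0; i < epns.size(); ++i) {
        double score = (epns[i].bufferUsed / epns[i].bufferSize) / epns[i].weight;
        if (score < bestScore) { bestScore = score; best = i; }
      }
      return best;
    }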
- TfBuilders should stop in ERROR state when they lose the connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
- EPN would like to bump SLURM; for that we also need to bump the async voboxes. I'd suggest moving them to ALMA9 directly. We probably need to sit together to do this. From then on, we also plan to put the vobox handling into the EPN Ansible, so that EPN takes over its maintenance.
Other EPN topics: