Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Fix dropping lifetime::timeframe for good: Still pending: problem with CCDB objects getting lost by DPL leading to "Dropping lifetime::timeframe", saw at least one occurrence during SW validation.
- Found yet another "last problem with oldestPossibleTimeframe": Errors at P2 during STOP about oldestPossibleOutput decreasing, which is not allowed.
- Problem is due to TimingInfo not being filled properly during EoS, so old or uninitialized counters were used. Fixed, which cures my reproducer in the FST. To be deployed at P2, then we need to check again for bogus messages (see the sketch at the end of this section).
- Start / Stop / Start: 2 problems on O2 side left:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing when accessing CCDB objects. Not clear if it is the same problem as above, or a problem in the task itself.
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Status?
- Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
- Expendable tasks - 2 problems reported, 1 already fixed. Giulio fixed the second problem in one reproducer, but it still fails in another reproducer and at P2. Being checked.
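For illustration of the oldestPossibleOutput issue above: the value published downstream must never decrease, which is the invariant that was violated during STOP. A minimal sketch of such a monotonicity check, with hypothetical names, not the actual DPL code:

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Hypothetical guard illustrating the invariant violated at STOP:
// the oldest possible timeslice reported downstream must never decrease.
class OldestPossibleGuard {
  uint64_t mLast = 0;

 public:
  // Returns the value to publish; throws if the new value goes backwards,
  // which corresponds to the "oldestPossibleOutput decreasing" error at P2.
  uint64_t update(uint64_t oldestPossible) {
    if (oldestPossible < mLast) {
      throw std::runtime_error("oldestPossibleOutput decreased: " + std::to_string(oldestPossible) +
                               " < " + std::to_string(mLast));
    }
    mLast = oldestPossible;
    return mLast;
  }
};
```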
Global calibration topics:
- TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.
Sync reconstruction
- Had a problem with COSMIC runs crashing due to a bug in the ITS code.
- Recorded cosmic raw data. Status?
- Crash came from an incorrect printout in the ITS reco, fixed in the new O2 deployed at P2.
- Again crashes in the its-tracker, fixed by Ruben in O2/dev, needs to be deployed at P2.
- gpu-reconstruction crashing due to receiving bogus CCDB objects.
- Found that the corruption occurs when the CCDB object download times out. Today's new O2 tag has proper error detection, and will give a FATAL error message instead of shipping corrupted objects (see the sketch at the end of this section).
- CCDB experts so far do not understand the problem and cannot reproduce it locally.
- They gave me some tests to run on the EPN. Depending on the outcome, we should instruct them to use the EPNs and my reproducer to follow this up without involving us for each test.
- TOF fix solves the memory corruption leading to boost interprocess lock errors.
- Some crashes of tpc-tracking yesterday in SYNTHETIC runs, to be investigated.
- Will discuss with RC tomorrow what timeouts to use for STOP of run, and new Calib Scheme.
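For illustration of the CCDB corruption handling above: the idea is to detect an incomplete or corrupted download and fail with FATAL instead of shipping the object. A minimal sketch under that assumption; the function and its parameters are hypothetical, not the actual CCDB/O2 API:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Hypothetical check: a download that timed out can leave a truncated buffer.
// Instead of passing it on, detect the mismatch and abort with a fatal error.
void validateCcdbBuffer(const void* buffer, size_t receivedSize, size_t expectedSize)
{
  if (buffer == nullptr || receivedSize != expectedSize) {
    std::fprintf(stderr, "FATAL: corrupted CCDB object (got %zu bytes, expected %zu)\n",
                 receivedSize, expectedSize);
    std::abort(); // fail hard rather than ship a corrupted object downstream
  }
}
```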
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
- Does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue (see the sketch at the end of this section).
- MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
- Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on the EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
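For reference on the priority mitigation above: raising the GPU process priority amounts to lowering its nice value, e.g. via the standard POSIX setpriority() call. A minimal sketch; the value -10 is only an example, and negative values normally require elevated privileges or a matching RLIMIT_NICE:

```cpp
#include <sys/resource.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main()
{
  // Lower the nice value of the current process (higher scheduling priority).
  if (setpriority(PRIO_PROCESS, 0, -10) != 0) {
    std::fprintf(stderr, "setpriority failed: %s\n", std::strerror(errno));
    return 1;
  }
  return 0;
}
```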
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance, currently cannot do faster than 0.8 Hz, cannot test MI100 EPN in Pb-Pb at nominal rate, and cannot test pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization, and give more data to the MI100 nodes. Now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
- Improve core dumps and time stamps: https://its.cern.ch/jira/browse/EPN-487
- Tentative time for ALMA9 deployment: December 2024.
Other EPN topics:
Full system test issues:
Topology generation:
- Should test deploying the topology with the DPL driver, to have the remote GUI available.
- DPL driver needs to implement FMQ state machine. Postponed until YETS issues solved.
- 2 occurrences where the git repository in the topology cache was corrupted. Not really clear how this can happen; also not reproducible. Was solved by wiping the cache. Will add a check to the topology scripts for a corrupt repository, and in that case delete it and check it out anew.
AliECS related topics:
- Extra env var field still not multi-line by default.
GPU ROCm / compiler topics:
- Compilation failure due to missing symbols when compiling with -O0. Similar problem found by Matteo, being debugged. Sent a reproducer to AMD.
- Internal compiler error with LOG(...) macro: we have a workaround, AMD has a reproducer, waiting for a fix.
- New miscompilation for ROCm > 6.0
- Waiting for AMD to fix the reproducer we provided.
- ROCm 6.2 / ALMA 9.2
- Building on the EPNs. Christian can use them for ML framework tests. Waiting for ROCm 6.2.1 to test O2.
- Bumping GCC:
- Fixed the ROCm problems with new GCC 13/14.
- Fixed bogus -Werror warnings in O2 with the new GCC 13; now the CI is failing to compile O2Physics with GCC 13.
- LLVM bumped to 18 by Sergio:
- Fixed relocation problems; the vanilla 18.1 release has a regression for OpenCL compilation, backported the fix from LLVM 19.
- GPU RTC deployed on EPNs and working properly.
- O2 Code Checker: failed to find header files due to a relocation problem. Investigated yesterday and finally managed to solve it with Giulio.
- New clang has additional checks, e.g. for modernization of shared pointers. Thus the CI will now report some new errors, to be fixed over time (see the example at the end of this section).
- Long-pending issue of CodeChecker not finding omp.h. After the problem caused by not finding <iostream> etc. above, started to look into this, since we learned it can have side effects.
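As an example of the new shared-pointer modernization checks mentioned above (clang-tidy's modernize-make-shared), this is the kind of change the CI will now request; the struct is made up:

```cpp
#include <memory>

struct Cluster {
  int pad = 0;
  int row = 0;
};

// Old style: flagged by the modernize-make-shared check.
std::shared_ptr<Cluster> makeOld() { return std::shared_ptr<Cluster>(new Cluster()); }

// Preferred: std::make_shared does a single allocation and is exception safe.
std::shared_ptr<Cluster> makeNew() { return std::make_shared<Cluster>(); }
```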
TPC GPU Processing
- Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of MC labels, so it confuses tracks of signal and background events (see the sketch below).
- Started to look into this, should have a fix this week.
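A minimal sketch of the sourceID issue above: with embedding, labels from the signal and the background source can carry the same event/track IDs, so any lookup key must include the sourceID. This assumes the usual MCCompLabel accessors; the key type and function are made up for illustration:

```cpp
#include <tuple>

#include "SimulationDataFormat/MCCompLabel.h"

// With MC embedding, (eventID, trackID) alone is ambiguous between the signal
// and the background source; the sourceID has to be part of any lookup key.
using LabelKey = std::tuple<int, int, int>; // sourceID, eventID, trackID

LabelKey makeKey(const o2::MCCompLabel& label)
{
  return {label.getSourceID(), label.getEventID(), label.getTrackID()};
}
```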
- Christian reported a bug when the O2 noise label is set. TPC QC was derived from AliRoot and didn't know about a noise label besides the fake label. Fixed.
- Sandro is starting to look into the GPU event display, asking whether one can color tracks based on the collision. Currently not possible for Run 3, since the MC label is not read correctly. Fix should be easy.
- New problem with bogus values in the TPC fast transformation map is still pending. Sergey is investigating, but waiting for input from Alex. Ruben reported that he still sees such bogus values.
- Status of cluster error parameterizations
- No progress yet on newly requested debug streamers.
- Waiting for TPC to check PR with full cluster errors during seeding.
- TPC reported a problem with laser runs. In case of bad data (TPC data outside of the triggered drift time), GPUs can sometimes get stuck, so apparently the skipping of bad data is not fully working. Recorded some laser raw data to check.
- Fully fixed. The actual crash came from incorrect handling of detected buffer overflows during the creation of the fast search grid in TPC tracking, which is fixed now.
- The buffer overflows came from bogus values of the TPC transformation, moving clusters by 10^20 cm, leading to bogus fast search grids. Fixed for now by a temporary workaround to not apply any SCD correction > 100 cm. Should be reverted once TPC has a proper solution.
- In addition, there were failures from FPEs during tracking when the track parameters and covariance matrix became inf / NaN, which is fixed by the same workaround for the TPC transform map (see the sketch below).
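A sketch of what the temporary workaround above amounts to: corrections with an unphysical magnitude or non-finite values are not applied, so they cannot produce absurd fast search grids or FPEs. The 100 cm cut is from the note above; the function name is made up:

```cpp
#include <cmath>

// Temporary safeguard: an SCD correction moving a cluster by more than
// maxCorrection (100 cm) or containing non-finite values is treated as bogus
// and not applied.
float sanitizeCorrection(float correction, float maxCorrection = 100.f)
{
  if (!std::isfinite(correction) || std::fabs(correction) > maxCorrection) {
    return 0.f; // skip the correction instead of propagating bogus values
  }
  return correction;
}
```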
- Want to switch to fixed-width int8 ... uint64 types instead of char, short, ... (see the sketch below).
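For clarity on the type switch above, presumably this means the fixed-width integer types from <cstdint> (or equivalent typedefs) instead of the builtin char / short / long; a trivial illustration with arbitrary variable names:

```cpp
#include <cstdint>

// Before: widths depend on platform / compiler conventions.
// char rowIndex; short padIndex; unsigned long long clusterMask;

// After: explicit, portable widths.
std::int8_t rowIndex = 0;
std::int16_t padIndex = 0;
std::uint64_t clusterMask = 0;
```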
TPC processing performance regression:
- Final solution: merging transformation maps on the fly into a single flat object: Still WIP
- See attached PDF for performance improvements.
General GPU Processing