Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority framework topics:
- Problem with EndOfStream / Dropping Lifetime::Timeframe data
- 3rd debug feature in development (verify that lifetime::timeframe messages were created): status?
- Change consumeWhenAll completion policy to wait for oldestPossibleTimeframe: PR opened but has merge conflicts.
- If consumeWhenAny, do not check for lifetime::timeframe input / output agreement. Status?
- Fix parsing of spec strings to allow specifying the lifetime without specifying the subspec. Apparently this is not a limitation in DPL but in QC? Status?
- Problem with QC topologies with expendable tasks - Fixed in DPL, waiting for feedback.
- New issue: sometimes the CCDB populator produces backpressure without processing data. Has already crashed several Pb-Pb runs: https://alice.its.cern.ch/jira/browse/O2-4244
- Disappeared after disabling the CPV gain calib, which was very slow. However, this can only have hidden the problem. Apparently there is a race condition that can trigger a problem in the input handling, which leaves the CCDB populator stuck. Since the run function of the CCDB populator is not called, and it does not have a special completion policy but simply consumeWhenAny, this is likely a generic problem (see the completion-policy sketch after this list).
- Cannot be debugged during Pb-Pb right now, since it is mitigated, but must be understood afterwards.
- C++20 / ROOT 6.30 status?
- Implement new EndOfStream scheme for calibration.
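For reference, below is a minimal sketch of how a DPL workflow overrides the completion policy of a device. The device-name matching and the exact CompletionPolicyHelpers overload are assumptions for illustration and may differ from the real ccdb-populator workflow.

```cpp
// Minimal sketch (not the actual O2 code) of how a DPL workflow overrides the
// completion policy for a device. The device-name match and the exact helper
// overload used here are assumptions and may differ from the real workflow.
#include "Framework/CompletionPolicy.h"
#include "Framework/CompletionPolicyHelpers.h"
#include "Framework/DeviceSpec.h"

#include <string>
#include <vector>

// customize() has to be defined before runDataProcessing.h is included,
// so that the driver picks up the override.
void customize(std::vector<o2::framework::CompletionPolicy>& policies)
{
  // consumeWhenAny: run the processing callback as soon as any input of a
  // timeslice has arrived, instead of waiting for the full set (consumeWhenAll).
  policies.push_back(o2::framework::CompletionPolicyHelpers::consumeWhenAny(
    "consume-any-for-ccdb-populator",
    [](o2::framework::DeviceSpec const& spec) {
      return spec.name.find("ccdb-populator") != std::string::npos;
    }));
}

#include "Framework/runDataProcessing.h"

o2::framework::WorkflowSpec defineDataProcessing(o2::framework::ConfigContext const&)
{
  return {}; // actual DataProcessorSpecs omitted in this sketch
}
```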
Other framework tickets:
- TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
- Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
- Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
- Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
- https://alice.its.cern.ch/jira/browse/O2-1900 : Fix is in a PR, but it has side effects which must also be fixed.
- https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
- https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
- https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
- https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
- https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it happens only at termination, and fixing the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
- Support in DPL GUI to send individual START and STOP commands.
- Problem I mentioned last time with non-critical QC tasks and DPL CCDB fetcher is real. Will need some extra work to solve it. Otherwise non-critical QC tasks will stall the DPL chain when they fail.
- DPL sending SHM metrics for all devices, not only input proxy: https://alice.its.cern.ch/jira/browse/O2-4234
- Some improvements to ease debugging: https://alice.its.cern.ch/jira/browse/O2-4196 https://alice.its.cern.ch/jira/browse/O2-4195 https://alice.its.cern.ch/jira/browse/O2-4166
- After Pb-Pb, we need to do a cleanup session and go through all these pending DPL tickets with a higher priority, and finally try to clean up the backlog.
Global calibration topics:
- TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.
Sync processing:
- Created a JIRA ticket summarizing my proposal for a script that parses and summarizes InfoLogger messages: https://alice.its.cern.ch/jira/browse/R3C-992
Async reconstruction:
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
- Checking 2 things: does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
- Currently cannot use the MI100 nodes due to an AMD GPU problem, could use them as CPU-only nodes.
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or ongoing async jobs.
- We must not lose which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance; currently cannot do faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes (see the sketch after this list). Now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
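Regarding the buffer-utilization-based distribution above, a hypothetical sketch of how the target selection could look, as opposed to round-robin. This is not DataDistribution code; all types and names are illustrative.

```cpp
// Hypothetical sketch of buffer-utilization-based TF assignment, as opposed to
// round-robin: always pick the builder with the most free buffer (capped by a
// utilization threshold), so the larger/faster MI100 nodes naturally receive
// more timeframes. This is NOT DataDistribution code; all names are illustrative.
#include <algorithm>
#include <cstddef>
#include <vector>

struct TfBuilderState {
  std::size_t bufferTotal; // total SHM buffer size of the node
  std::size_t bufferUsed;  // currently occupied part of the buffer
};

// Returns the index of the builder with the most free buffer space, or -1 if
// every builder is above the utilization threshold (i.e. the TF must wait).
int pickTarget(const std::vector<TfBuilderState>& builders, double maxUtilization = 0.9)
{
  int best = -1;
  std::size_t bestFree = 0;
  for (std::size_t i = 0; i < builders.size(); ++i) {
    const auto& b = builders[i];
    if (b.bufferTotal == 0) {
      continue;
    }
    double util = static_cast<double>(b.bufferUsed) / static_cast<double>(b.bufferTotal);
    std::size_t freeBytes = b.bufferTotal - std::min(b.bufferUsed, b.bufferTotal);
    if (util < maxUtilization && freeBytes > bestFree) {
      bestFree = freeBytes;
      best = static_cast<int>(i);
    }
  }
  return best;
}
```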
Other EPN topics:
Raw decoding checks:
- Add additional check on DPL level, to make sure firstOrbit received from all detectors is identical, when creating the TimeFrame first orbit.
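A rough sketch of what such a check could look like inside a DPL processing callback, assuming the per-part firstTForbit of the o2::header::DataHeader is the value to compare; the function name is illustrative.

```cpp
// Rough sketch of a per-timeframe consistency check on the first orbit, assuming
// it runs inside a DPL processing callback and iterates over all input parts.
// The function name is illustrative, not an existing O2 API.
#include "Framework/DataRefUtils.h"
#include "Framework/InputRecord.h"
#include "Framework/Logger.h"
#include "Headers/DataHeader.h"

#include <cstdint>
#include <string>

void checkFirstOrbitConsistency(o2::framework::InputRecord& inputs)
{
  bool haveReference = false;
  uint32_t referenceOrbit = 0;
  for (auto const& ref : inputs) {
    auto const* dh = o2::framework::DataRefUtils::getHeader<o2::header::DataHeader*>(ref);
    if (dh == nullptr) {
      continue; // part without a DataHeader
    }
    if (!haveReference) {
      referenceOrbit = dh->firstTForbit;
      haveReference = true;
    } else if (dh->firstTForbit != referenceOrbit) {
      LOGP(error, "Inconsistent firstTForbit {} for {}/{}, expected {}",
           dh->firstTForbit, dh->dataOrigin.as<std::string>(),
           dh->dataDescription.as<std::string>(), referenceOrbit);
    }
  }
}
```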
Full system test issues:
Topology generation:
- Should test deploying the topology with the DPL driver, to have the remote GUI available. Status?
QC / Monitoring / InfoLogger updates:
- TPC has opened the first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side, and the plan is to extend this to all detectors and to also include trending for raw data sizes.
AliECS related topics:
- Extra env var field still not multi-line by default.
FairMQ issues:
- Disabled FMQ RefCount segment on EPNs, since it required too much memory for calib runs, where one/few EPNs receive all data.
High priority RC YETS issues:
- Fix dropping lifetime::timeframe for good
- Need 3rd debug feature.
- Problem in flp199 dcs workflow. Fixed.
- Need to test / merge PR for updated consumeWhenAny policy.
- Need to understand what to do for parsing spec strings with lifetime and without subspec in QC.
- We (mostly Ole) managed to fix all the remaining incompatible sporadic / timeframe input / output definitions. Global workflows can run without the debug env variable now.
- For the problem with network buffer limits in FMQ, we discussed possible solutions in https://alice.its.cern.ch/jira/browse/O2-4414. Unfortunately, with the current FMQ / ZMQ there is no good solution. QC could try to use REQ/REP instead of PUB/SUB (see the sketch after this list). To be tested.
- Expendable tasks in QC: everything merged, needs to be tested.
- Stabilize calibration / fix EoS: we have a plan for how to implement it. Will take some time, but hopefully done before the restart of data taking.
- Fix problem with ccdb-populator: no idea yet, no ETA.
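For context on the REQ/REP vs PUB/SUB point above, a small standalone sketch using the plain libzmq API (endpoints and the high-water mark value are illustrative): a PUB socket that reaches its send high-water mark for a slow subscriber silently drops further messages for that subscriber, while a REQ socket strictly alternates send and receive and therefore stalls on a slow peer, i.e. it propagates backpressure instead of dropping.

```cpp
// Standalone sketch (plain libzmq C API, usable from C++) contrasting the two
// socket patterns with respect to backpressure. Endpoints and the HWM value
// are illustrative.
#include <zmq.h>

int main()
{
  void* ctx = zmq_ctx_new();

  // PUB/SUB: once ZMQ_SNDHWM messages are queued for a slow subscriber, the
  // PUB socket does not block; additional messages for that subscriber are
  // silently dropped, i.e. there is no backpressure towards the publisher.
  void* pub = zmq_socket(ctx, ZMQ_PUB);
  int hwm = 1000;
  zmq_setsockopt(pub, ZMQ_SNDHWM, &hwm, sizeof(hwm));
  zmq_bind(pub, "tcp://127.0.0.1:5555");

  // REQ/REP: a REQ socket must strictly alternate send and receive, so a slow
  // replier stalls the requester instead of causing message drops; this gives
  // backpressure at the cost of a synchronous request/reply pattern.
  void* req = zmq_socket(ctx, ZMQ_REQ);
  zmq_connect(req, "tcp://127.0.0.1:5556");

  zmq_close(req);
  zmq_close(pub);
  zmq_ctx_term(ctx);
  return 0;
}
```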
GPU ROCm / compiler topics:
- Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
- Found a new miscompilation with -ffast-math enabled in looper following; -ffast-math is disabled for now.
- Must create new minimal reproducer for compile error when we enable LOG(...) functionality in the HIP code. Check whether this is a bug in our code or in ROCm. Lubos will work on this.
- Another compiler problem with template treatment was found by Ruben. Have a workaround for now. Need to create a minimal reproducer and file a bug report.
- Debugging the calibration, debug output triggered another internal compiler error in HIP compiler. No problem for now since it happened only with temporary debug code. But should still report it to AMD to fix it.
- New compiler regression in ROCm 5.6, need to create testcase and send to AMD.
- ROCm 5.7.1 not working, waiting for AMD to reply.
- New problem in async reco on MI100 - GPUs get stalled for several hours:
- Not clear whether this is the same problem as the shorter stalls of a few seconds we see in sync and async.
- It happens on both MI50 and MI100, but on MI100 it seems much more likely, and a large fraction of the jobs is affected. On MI50 nodes it seems very rare.
- Sometimes only a single GPU process is stuck, sometimes all 4 GPU processes of a NUMA domain are stuck. Sometimes there is an AMD kworker thread running at 100% load, sometimes not, and sometimes a ROCm runtime thread is running at 100% load.
- Attaching GDB, the workflow seems stuck in random places. Sometimes I cannot attach gdb at all (gdb itself gets stuck). I have seen cases where it is stuck in init, so in that case it cannot be data driven. Could be a real ROCm problem.
- Apparently the GPU recovers after a long time. I have seen single time frames take 10h – 15h due to this. Not clear whether it always recovers, since some jobs time out after 24h.
- I have never seen this in my tests (though they were mostly on MI50, where it is rare). So either it happens only with GRID jobs that run in the Apptainer, or it could be due to 2 jobs running in parallel on both NUMA domains, or it is just statistics.
- AMD released ROCm 6.0. EPN has set up a test node. Will test this week.
TPC GPU Processing:
- Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of the signal and of the background events.
- New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex.
- TPC has provided the dead channel map as a calib object. The next step is to respect it during tracking, and not to abort tracking if no hits are found because the channels are dead.
- Ruben reported a problem with FPEs in TPC reco. Was a severe bug where tracks for which the refit failed were not rejected but written to the output incomplete, thus dEdx values were uninitialized. Fixed.
- Problem with tracks having invalid sinPhi in OuterParam. Reported by Ruben. Found to be a problem during tracking, when the last propagation step outwards failed and no further cluster was fitted afterwards. Fix upcoming.