Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC
1. Discussion
Speakers: David Rohr (CERN), Ole Schmidt (CERN)
Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority framework topics:
- Problem at end of run that produces lots of error messages and breaks many calibration runs: https://alice.its.cern.ch/jira/browse/O2-3315
- Did some investigation of the dropping lifetime::timeframe errors; it seems an overlap of many issues is causing this, some of which are fixed:
- Missing inputs were not handled correctly with the original 0xDEADBEEF mechanism. Should be fixed now that we inject dummy data at the readout-proxy level.
- TRD pulseheight QC seems to stop sending output after STOP of run, causing a bunch of messages at end of run.
- When running on CTFs, a bunch of messages at EOR since the CTF reader did one additional iteration.
- Checked Giulio's change to prevent counting up oldestPossible in case the input proxy receives raw data. It was correct and merged, but there was another problem: the current oldestPossible was cached and still sent after processing, i.e. it was not counting up but still repeatedly sent a value too high by one. Created an additional PR that limits oldestPossible updates to the rewound TF counter in case it was rewound (see the sketch after this list). For me, that cleaned up the issues I could reproduce in the full system test.
- Still, there seems to be a general problem: running on staging there are rare sporadic messages during the run, and still a flood at end of run. Current best guess is a problem when forwarding messages / CCDB objects from one DPL device to another.
- NB: The fix for EoS for calib workflows (see below) fixed some premature EoS, also reducing the bogus messages at end of run, but still not to 0.
- Fix START-STOP-START for good
- https://github.com/AliceO2Group/AliceO2/pull/9895 still not merged due to a conflict.
- GPU Standalone multi-threading:
- Fully commissioned and active since yesterday. No problem in 30 kHz fill last night. Ruben checked online QC, no difference. Also async QC will be checked.
- Remaining issues that were fixed:
- Special completion policy to guarantee ordering was not active, crashing a test on staging.
- Failure in CTF code handling missing CTP/LUMI data, also in several other places. Fixed by Ruben. Before, it simply went unnoticed when CTP lumi was missing and timeframes were silently dropped. This is no longer possible with the new dummy data injection at readout-proxy level.
- Failure in GPU code merging calib uploads for different time frames (necessary since each thread processes only every other TF, so there can be 2 calib changes in between).
- GPU utilization on MI50 nodes at 47 kHz was 88%; MI100 nodes have more spare capacity. I.e. safe for Pb-Pb data taking.
- Problem with QC topologies with expendable tasks. For items to do see: https://alice.its.cern.ch/jira/browse/QC-953 - Status?
- Problem in QC where we collect messages in memory while the run is stopped: https://alice.its.cern.ch/jira/browse/O2-3691
- Tests ok, will be deployed after HI and then we see.
- Saw similar issues on calib aggregator nodes.
- Switch 0xdeadbeef handling from creating dummy messages on the fly for optional messages to injecting them at readout-proxy level.
- Done on EPN, enabled in the global workflow, and working correctly now.
- After Pb-Pb, need to change FLP workflows and all detector workflows on EPN.
- New issue: sometimes the CCDB populator produces backpressure without processing data. Has already crashed several Pb-Pb runs: https://alice.its.cern.ch/jira/browse/O2-4244
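The sketch referenced in the oldestPossible item above: a minimal, hypothetical illustration (none of the names below are the actual DPL code) of clamping a cached oldestPossible value to a rewound TF counter, so that a stale value that is too high by one is never re-sent.
```cpp
// Hypothetical sketch only: clamp a cached oldestPossible value to the rewound
// TF counter, so a stale value that is too high by one is never forwarded again.
#include <algorithm>
#include <cstdint>
#include <iostream>

struct OldestPossibleTracker {
  uint64_t oldestPossible = 0;  // last value computed/cached during processing
  uint64_t tfCounter = 0;       // current TF counter (may have been rewound)

  void update(uint64_t candidate) { oldestPossible = std::max(oldestPossible, candidate); }
  void rewind(uint64_t rewoundCounter) { tfCounter = rewoundCounter; }

  // Value that may actually be sent downstream: never beyond the rewound counter.
  uint64_t valueToSend() const { return std::min(oldestPossible, tfCounter); }
};

int main() {
  OldestPossibleTracker t;
  t.update(42);  // cached from earlier processing
  t.rewind(41);  // counter rewound because raw data arrived at the input proxy
  std::cout << t.valueToSend() << "\n";  // prints 41, not the stale 42
}
```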
Other framework tickets:
- TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
- Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
- Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
- Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
- https://alice.its.cern.ch/jira/browse/O2-1900 : FIX in PR, but has side effects which must also be fixed.
- https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
- https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
- https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
- https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
- https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it is only at the termination, and the fix of the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
- Support in DPL GUI to send individual START and STOP commands.
- The problem I mentioned last time with non-critical QC tasks and the DPL CCDB fetcher is real. Will need some extra work to solve; otherwise non-critical QC tasks will stall the DPL chain when they fail.
Global calibration topics:
- TPC IDC workflow problem.
- TPC has issues with SAC workflow. Need to understand if this is the known long-standing DPL issue with "Dropping lifetime::timeframe" or something else.
- Even with latest changes, difficult to ensure guaranteed calibration finalization at end of global run (as discussed with Ruben yesterday).
- After discussion with Peter and Giulio this morning: we should push for 2 additional states in the state machine at the end of run, between RUNNING and READY (see the sketch after this list):
- DRAIN: For all but O2, the transition RUNNING --> FINALIZE is identical to what we do at STOP (RUNNING --> READY) at the moment, i.e. no more data will come in afterwards.
- O2 could finalize the current TF processing with some timeout, where it stops processing incoming data and at EndOfStream triggers the calibration postprocessing.
- FINALIZE: No more data is guaranteed to come in, but the calibration could still be running, so we leave the FMQ channels open and have a timeout to finalize the calibration. If the input proxies have not yet received the EndOfStream, they will inject it to trigger the final calibration.
- This would require changes in O2, DD, ECS, FMQ, but all changes except for in O2 should be trivial, since all other components would not do anything in these states.
- Started to draft a document, but want to double-check it will work out this way before making it public.
- The problem with endOfStream in the middle of a run stopping calib processing is most likely fixed.
- It happened again a few times and killed runs.
- (I could not reproduce the exact problem on staging, but something similar, which is now fixed.)
- There were 2 independent issues:
- EndOfStream messages carry runNumber = 0, which triggers the newRun flag in the readout proxy, resetting the peer counters, so the check whether the EoS has arrived from all peers will always be true.
- When a process crashes online, ODC takes down the remaining processes in that collection, which triggered an EndOfStream from the readout proxy during the STOP transition. This is now suppressed when running online and the criterion of having received the EndOfStream from all peers is not met.
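A small illustrative sketch of the proposed end-of-run sequence referenced above; the state names and behavior follow the description above, but everything else is hypothetical and subject to the document being drafted:
```cpp
// Illustrative sketch of the proposed extra end-of-run states between RUNNING
// and READY. Not the actual ECS / O2 / DD / FMQ state machine.
#include <iostream>

enum class State { RUNNING, DRAIN, FINALIZE, READY };

const char* describe(State s) {
  switch (s) {
    case State::RUNNING:  return "RUNNING";
    case State::DRAIN:    return "DRAIN: no more data comes in, current TF processing finishes with a timeout";
    case State::FINALIZE: return "FINALIZE: FMQ channels stay open, calibration finalizes with a timeout";
    case State::READY:    return "READY";
  }
  return "?";
}

State next(State s) {
  switch (s) {
    case State::RUNNING:  return State::DRAIN;
    case State::DRAIN:    return State::FINALIZE;
    case State::FINALIZE: return State::READY;
    default:              return State::READY;
  }
}

int main() {
  // Walk through the proposed end-of-run sequence.
  for (State s = State::RUNNING; s != State::READY; s = next(s)) {
    std::cout << describe(s) << " -> " << describe(next(s)) << "\n";
  }
}
```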
CCDB:
- Bump to libwebsockets 4.x / JAliEn-ROOT 0.7.4: Status? Costin asked to check in async production before merging.
Sync reconstruction / Software at P2:
- Had runs with up to 47 kHz. TPC GPU reco stable and fast enough, 88% GPU utilization on MI50 nodes, more spare capacity on MI100 nodes.
- Now at SW version .31, .32 already deployed and again more in the pipeline.
- Large SW update on Saturday, also on FLPs and QC.
- Ruben discovered a severe bug in HMPID raw decoding yesterday; perhaps the HMPID data taken so far is bad. Needs a fix ASAP.
- If we want to port the ITS-TPC matching QC, it will again require a synchronous update on EPN, FLP, and a new QC.
CTF Size:
- Discussed again with RC the switch to the new CTF coding scheme (it will save a significant amount of money; thanks to Peter and Andreas for the support).
- New SW build .32 with new CTF coding available, tested in FST, staging, and locally encoding and decoding CTFs (by Ruben and David).
- Took a data replay run, and that was ok.
- Next step: test in PHYSICS for one run, wait for async QC of that run (<24h), then enable by default.
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
- Checking 2 things: does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue (see the sketch below).
- Performance issue seen in async reco on MI100, need to investigate.
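The priority mitigation referenced above, as a sketch: the minutes do not specify the exact mechanism used on the EPNs, so this only shows one standard Linux way to raise a process's scheduling priority (an assumption, not the actual deployment):
```cpp
// Hypothetical illustration: raise the scheduling priority of the current process
// (a lower nice value means higher priority). Needs the appropriate privileges.
#include <sys/resource.h>
#include <cerrno>
#include <cstring>
#include <iostream>

int main() {
  errno = 0;
  if (setpriority(PRIO_PROCESS, 0 /* this process */, -10) != 0) {
    std::cerr << "setpriority failed: " << std::strerror(errno) << "\n";
    return 1;
  }
  std::cout << "new nice value: " << getpriority(PRIO_PROCESS, 0) << "\n";
}
```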
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose track of which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance; currently cannot do faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to MI100 nodes (see the sketch after this list). Now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- New problem seen in 3 runs: 544300, 544305, and 544384. There were errors before on the FLPs, which perhaps caused this behavior, but what is then seen: many EPNs stop receiving TFs, and the remaining EPNs cannot handle the rate and create backpressure. No clear error is printed to InfoLogger. The runs were eventually stopped due to the backpressure; it was initially not understood why the processing created backpressure, which was simply the too-high rate going to the remaining EPNs.
- Not clear what the root cause is; it could be a symptom of a prior problem on the FLPs, or a connectivity issue.
- A possible mitigation would be at least to shut down TfBuilders that no longer receive data, such that bad nodes count against nmin.
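The sketch referenced in the round-robin item above: a hypothetical illustration (not DataDistribution code) of picking the target node by buffer utilization instead of round-robin, so that nodes with more free buffer, e.g. MI100, automatically receive more TFs:
```cpp
// Hypothetical sketch: choose the EPN / TfBuilder with the lowest buffer
// utilization instead of going round-robin, so nodes with more free SHM
// (e.g. MI100) automatically receive more timeframes.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Node {
  std::string name;
  std::size_t bufferUsed;   // bytes currently buffered
  std::size_t bufferTotal;  // total SHM buffer size
  double utilization() const { return double(bufferUsed) / double(bufferTotal); }
};

// Index of the node that should receive the next TF.
std::size_t pickNode(const std::vector<Node>& nodes) {
  std::size_t best = 0;
  for (std::size_t i = 1; i < nodes.size(); ++i) {
    if (nodes[i].utilization() < nodes[best].utilization()) best = i;
  }
  return best;
}

int main() {
  std::vector<Node> nodes = {{"epn-mi50", 90, 100}, {"epn-mi100", 40, 200}};
  std::cout << "next TF goes to " << nodes[pickNode(nodes)].name << "\n";  // epn-mi100
}
```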
Other EPN topics:
- Check NUMA balancing after SHM allocation, sometimes nodes are unbalanced and slow: https://alice.its.cern.ch/jira/browse/EPN-245
- Fix problem with SetProperties string > 1024/1536 bytes: https://alice.its.cern.ch/jira/browse/EPN-134 and https://github.com/FairRootGroup/DDS/issues/440
- After software installation, check whether it succeeded on all online nodes (https://alice.its.cern.ch/jira/browse/EPN-155) and consolidate software deployment scripts in general.
- Improve InfoLogger messages when environment creation fails due to too few EPNs / calib nodes available, ideally report a proper error directly in the ECS GUI: https://alice.its.cern.ch/jira/browse/EPN-65
- Create user for epn2eos experts for debugging: https://alice.its.cern.ch/jira/browse/EPN-383
- EPNs sometimes get in a bad state, with CPU stuck, probably due to AMD driver. To be investigated and reported to AMD.
TPC Raw decoding checks:
- Add an additional check on DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit.
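A minimal sketch of the kind of check meant here (hypothetical names, not the actual DPL code): verify that the firstOrbit reported by all detectors agrees before using it as the TimeFrame first orbit.
```cpp
// Hypothetical sketch: all detectors must report the same firstOrbit, otherwise
// the discrepancy is reported before the TimeFrame first orbit is set.
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <string>

std::optional<uint32_t> checkFirstOrbit(const std::map<std::string, uint32_t>& firstOrbits) {
  std::optional<uint32_t> reference;
  for (const auto& [detector, orbit] : firstOrbits) {
    if (!reference) {
      reference = orbit;
    } else if (orbit != *reference) {
      std::cerr << "firstOrbit mismatch: " << detector << " reports " << orbit
                << ", expected " << *reference << "\n";
      return std::nullopt;
    }
  }
  return reference;
}

int main() {
  auto orbit = checkFirstOrbit({{"ITS", 1000}, {"TPC", 1000}, {"TRD", 1000}});
  if (orbit) std::cout << "consistent firstOrbit: " << *orbit << "\n";
}
```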
Full system test issues:
Topology generation:
- Should test to deploy topology with DPL driver, to have the remote GUI available. Status?
QC / Monitoring / InfoLogger updates:
- TPC has opened a first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side, and the plan is to extend this to all detectors and to also include trending for raw data sizes.
AliECS related topics:
- Extra env var field still not multi-line by default.
GPU ROCm / compiler topics:
- Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
- Found a new miscompilation with -ffast-math enabled in looper following; for now -ffast-math is disabled.
- Must create new minimal reproducer for compile error when we enable LOG(...) functionality in the HIP code. Check whether this is a bug in our code or in ROCm. Lubos will work on this.
- Another compiler problem with template treatment was found by Ruben. Have a workaround for now. Need to create a minimal reproducer and file a bug report.
- While debugging the calibration, the debug output triggered another internal compiler error in the HIP compiler. No problem for now since it happened only with temporary debug code, but we should still report it to AMD to get it fixed.
- New compiler regression in ROCm 5.6, need to create testcase and send to AMD.
- ROCm 5.7 released, didn't check yet. The AMD MI50 will reach end of maintenance in Q2 2024. Checking with AMD whether the card will still be supported by future ROCm versions.
TPC GPU Processing
- Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of the signal and of the background events.
- Online runs at low IR / low energy show weird clusters-per-track statistics.
- The problem was due to an incorrect vdrift, though it is not clear why this breaks the tracking so badly; being investigated.
- Ruben reported an issue with the global track refit, which sometimes does not produce the TPC track fit results. To be investigated.
- Robert reported a problem in the tracking of laser runs triggering the buffer overflow protection on the GPU. Probably due to the different local occupancy, leading to an unsuitable estimate of the maximum number of seeds / tracks. TPC took some laser raw data to check locally.
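The buffer overflow protection mentioned for the laser runs works along these lines; this is a hypothetical host-side sketch, not the actual GPU tracking code: seeds are counted atomically and anything beyond the pre-estimated maximum is dropped and flagged instead of writing past the end of the buffer.
```cpp
// Hypothetical sketch of an overflow guard: seeds are counted atomically, and a
// seed beyond the pre-allocated maximum is dropped (and the overflow flagged)
// rather than written past the end of the buffer.
#include <atomic>
#include <cstdint>
#include <iostream>
#include <vector>

struct Seed { uint32_t row; uint32_t cluster; };

struct SeedBuffer {
  std::vector<Seed> seeds;
  std::atomic<uint32_t> count{0};
  std::atomic<bool> overflow{false};

  explicit SeedBuffer(uint32_t maxSeeds) : seeds(maxSeeds) {}

  void addSeed(const Seed& s) {
    uint32_t idx = count.fetch_add(1);
    if (idx >= seeds.size()) {
      overflow = true;  // the estimated maximum was too small for this occupancy
      return;
    }
    seeds[idx] = s;
  }
};

int main() {
  SeedBuffer buf(2);      // deliberately small maximum for illustration
  buf.addSeed({0, 1});
  buf.addSeed({1, 2});
  buf.addSeed({2, 3});    // exceeds the estimate -> flagged, not written
  std::cout << "overflow flagged: " << std::boolalpha << buf.overflow.load() << "\n";
}
```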
ANS Encoding
- The new CTF coding scheme is merged and was commissioned, but it is not yet active. It would give ~10% better compression than what we currently have. We can also gain ~2.5% by updating the dictionaries for the current encoding, which Ruben will do once we go to 50 kHz.
Issues currently lacking manpower, waiting for a volunteer:
- For debugging, it would be convenient to have a proper tool that (using FairMQ debug mode) can list all messages currently in the SHM segments, similarly to what I had hacked together for https://alice.its.cern.ch/jira/browse/O2-2108
- Redo / Improve the parameter range scan for tuning GPU parameters. In particular, on the AMD GPUs, since they seem to be affected much more by memory sizes, we have to use test time frames of the correct size, and we have to separate training and test data sets.
- Problem at end of run with lots of error messages and breaks many calibration runs https://alice.its.cern.ch/jira/browse/O2-3315
2. TRD Tracking
Speaker: Ole Schmidt (CERN)
3. TPC ML Clustering
Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
4. ITS Tracking
Speaker: Matteo Concas (CERN)
- ITS tracking:
- GPU: no progress, will try simplified propagation (like it used to be in early days of ITS tracking) if I find time this week.
- Currently optimising seeding vertexing for async.
- HIP as a language:
- Autodetection failure is not fed back even in the latest CMake (3.27.7) -> PR in alidist to externally override GPU architectures, to be parsed after every GPU setting.
- Currently understanding an issue with OpenCL compilation seen in CI and reproduced also locally.