Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Zoom Meeting ID
61230224927
Host
David Rohr
    • 11:00 11:20
      Discussion 20m
      Speakers: David Rohr (CERN), Ole Schmidt (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority framework topics:

      • Problem at end of run with lots of error messages, which breaks many calibration runs: https://alice.its.cern.ch/jira/browse/O2-3315
        • Did some investigation of the "Dropping lifetime::timeframe" errors; it seems an overlap of many issues is causing this, some of which are fixed:
          • Missing inputs were not handled correctly by the original 0xDEADBEEF mechanism. Should be fixed now that we inject dummy data at the readout-proxy level.
          • TRD pulseheight QC seems to stop sending output after the STOP of the run, causing a bunch of messages at the end of run.
          • When running on CTFs, a bunch of messages appear at EOR since the CTF reader did one additional iteration.
          • Input-proxy on calib nodes increments oldestPossibleCounter when counting the EoS from different channels, but should do so only when forwarding the EoS. Discussed a potential fix with Giulio, PR should come soon.
          • Still, there seems to be a general problem, since we see such messages also in the middle of online runs without backpressure and with all inputs present. Current best guess is a problem when forwarding messages / CCDB objects from one DPL device to another.
      • Fix START-STOP-START for good
      • GPU Standalone multi-threading:
        • Fully commissioned and active since yesterday. No problem in 30 kHz fill last night. Ruben checked online QC, no difference. Also async QC will be checked.
        • Remaining issues that were fixed:
          • The special completion policy to guarantee ordering was not active, crashing a test on staging.
          • Failure in the CTF code handling missing CTP/LUMI data, also in several other places. Fixed by Ruben. Before, it simply went unnoticed when the CTP lumi was missing and time frames were silently dropped. This is no longer possible with the new dummy data injection at the readout-proxy level.
          • Failure in the GPU code merging calib uploads for different time frames (necessary since each thread processes only every other TF, so there can be 2 calib changes in between); see the sketch after this list.
      • Problem with QC topologies with expendable tasks. For items to do see: https://alice.its.cern.ch/jira/browse/QC-953 - Status?
      • Problem in QC where we collect messages in memory while the run is stopped: https://alice.its.cern.ch/jira/browse/O2-3691
        • Tests OK; will be deployed after the heavy-ion run, and then we will see.
        • Saw similar issues on calib aggregator nodes.
      • Switch the 0xDEADBEEF handling from creating dummy messages for optional inputs on the fly to injecting them at the readout-proxy level.
        • Done on EPN, enabled in the global workflow, and working correctly now.
        • After Pb-Pb, need to change FLP workflows and all detector workflows on EPN.
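      A minimal sketch of the calib-merging logic from the multi-threading item above (illustrative C++ only; all names are hypothetical, not the actual O2 code). With 2 pipeline threads each processing every other TF, up to 2 calibration updates can become valid between two TFs handled by the same thread, so each thread has to keep a queue of pending updates and apply them in order:

        // One instance per GPU pipeline thread; the calib-upload path pushes
        // each new object into every thread's queue (synchronization omitted).
        #include <cstdint>
        #include <map>
        #include <utility>

        struct CalibObject { /* payload omitted */ };

        class CalibQueue {
          std::map<uint32_t, CalibObject> mPending; // keyed by first TF of validity
         public:
          void push(uint32_t firstValidTF, CalibObject obj) { mPending[firstValidTF] = std::move(obj); }
          // Called before processing TF tf: since this thread handles only every
          // other TF, up to 2 updates may have become valid in between; apply
          // all of them in order instead of assuming at most one.
          template <typename Apply>
          void applyPendingUpTo(uint32_t tf, Apply&& apply)
          {
            for (auto it = mPending.begin(); it != mPending.end() && it->first <= tf;) {
              apply(it->second);
              it = mPending.erase(it);
            }
          }
        };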

      Other framework tickets:

      • TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
      • Grafana metrics: might want to introduce additional rate metrics that subtract the header overhead to show the pure payload rate (see the sketch after this list). Low priority.
      • Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
      • Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
      • https://alice.its.cern.ch/jira/browse/O2-1900 : Fix in PR, but it has side effects which must also be fixed.
      • https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
      • https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
      • https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
      • https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
      • https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it happens only at termination, and fixing the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
      • Support in DPL GUI to send individual START and STOP commands.
      • The problem I mentioned last time with non-critical QC tasks and the DPL CCDB fetcher is real and will need some extra work to solve. Otherwise, non-critical QC tasks will stall the DPL chain when they fail.
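      For the Grafana payload-rate item above, the computation is just subtracting the per-message header size from the raw byte counter (a sketch with hypothetical names, not the actual monitoring code):

        #include <cstdint>

        struct RateSample {
          uint64_t bytes;    // total bytes (headers + payload) in the interval
          uint64_t messages; // number of messages in the interval
          double seconds;    // interval length
        };

        // Pure-payload rate in bytes/s, assuming a fixed per-message header size.
        inline double payloadRateBps(const RateSample& s, uint64_t headerBytesPerMsg)
        {
          uint64_t headerBytes = s.messages * headerBytesPerMsg;
          uint64_t payload = s.bytes > headerBytes ? s.bytes - headerBytes : 0;
          return s.seconds > 0 ? payload / s.seconds : 0.;
        }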

      Global calibration topics:

      • TPC IDC workflow problem.
      • TPC has issues with SAC workflow. Need to understand if this is the known long-standing DPL issue with "Dropping lifetime::timeframe" or something else.
      • Even with the latest changes, it is difficult to ensure guaranteed calibration finalization at the end of a global run (as discussed with Ruben yesterday).
        • After discussion with Peter and Giulio this morning: we should push for 2 additional states in the state machine at the end of run, between RUNNING and READY:
          • DRAIN: For all but O2, the transition RUNNING --> FINALIZE is identical to what we currently do in STOP: RUNNING --> READY. I.e. no more data will come in afterwards.
            • O2 could finalize the current TF processing with some timeout, during which it stops processing incoming data, and at the EndOfStream trigger the calibration postprocessing.
          • FINALIZE: No more data is guaranteed to come in, but the calibration could still be running. So we leave the FMQ channels open and have a timeout to finalize the calibration. If the input proxies have not yet received the EndOfStream, they inject it to trigger the final calibration.
        • This would require changes in O2, DD, ECS, and FMQ, but all changes except those in O2 should be trivial, since the other components would not do anything in these states.
        • Started to draft a document, but want to double-check that it will work out this way before making it public.
      • Problem when calibration aggregators suddenly receive an endOfStream in the middle of the run and stop processing:
        • Happens since ODC 0.78, which checks device states while running: if one device fails, it sends SIGKILL to all devices of the collection, and FairMQ takes the shortest way through the state machine to EXIT, which involves a STOP transition, which then sends the EoS. The DPL input proxy on the calib node should in principle check that it has received the EoS from all nodes, but for some reason that is not working. To do:
          • Fix the input proxy to correctly count the number of EoS (see the sketch below).
          • Change the device behavior such that we do not send an EoS on SIGKILL when running on FLP/EPN.
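      A minimal sketch of the corrected EoS bookkeeping for the input proxy (illustrative only, not the actual DPL implementation): the EoS must be counted per channel, and only when it has arrived on every channel may it be forwarded (and only then may the oldestPossible counter advance):

        #include <cstddef>
        #include <vector>

        class EosTracker {
          std::vector<bool> mSeen; // one flag per input channel
          std::size_t mCount = 0;
         public:
          explicit EosTracker(std::size_t nChannels) : mSeen(nChannels, false) {}
          // Returns true exactly once, when the last outstanding channel has
          // delivered its EndOfStream. Only then should the proxy forward the
          // EoS downstream and increment the oldestPossible counter, not on
          // every per-channel EoS.
          bool onEndOfStream(std::size_t channel)
          {
            if (!mSeen[channel]) {
              mSeen[channel] = true;
              ++mCount;
            }
            return mCount == mSeen.size();
          }
        };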

      CCDB:

      • Bump to libwebsockets 4.x / JAliEn-ROOT 0.7.4: Status? Costin asked to check in async production before merging.
      • Problem with CCDB objects created at P2 not being synced fast enough, so testing async reco directly failed with a CCDB error. Costin took some steps to improve the syncing.

      Sync reconstruction / Software at P2:

      • Run with 30 kHz last night. Still severe issues with ITS beam background and with TRD.
        • Software mostly OK; needed to increase the CPU resources for MFT decoding. Need to see how it goes at 50 kHz.
        • Now with GPU multi-threading merged, should be OK for 50 kHz in TPC with some margin.
        • EMCAL raw decoder fixes merged, but still crashing regularly. Needs updated QC, and another raw decoder fix.
        • Some more fixes pending: improve the dummy injection at the readout proxy, improve error messages, and downscale detector messages to InfoLogger.
        • Now at version .27, i.e. 27 builds with cherry-picks after software freeze.
        • Possibly build .28 soon with pending fixes for EPN only, and at the next occasion (probably Saturday) bump also on the FLPs and switch to the new QC, needed for the TPC DCAr QC.

      CTF Size:

      • Extrapolation of MC data:
        • Correction factors: 30% more clusters in 2022 Pb-Pb than in MC (130 GB), and on top of that: +10% more clusters seen now compared to 2022, +4% worse looper rejection than in MC, +5-10% less efficient entropy encoding while sticking to the old scheme:
        • 208 - 217 GB/s (1 GB = 10^9 bytes)
      • Extrapolation from real data sizes:
        • 198 GB/s (extrapolated from 8 kHz interaction rate).
        • Should be repeated with latest higher IR runs.
        • See only 110 GB/s at 30 kHz rate from last night; need to check why it is so much lower now.
        • There is still a discrepancy between the number of vertices we count and the luminosity readings of the ZDC; the ZDC reports lower values. We should understand this to avoid leveling at 50 kHz while actually running at higher rates.

      Async reconstruction:

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
        • Checking 2 things: does the situation get better without GPU monitoring? --> Inconclusive.
        • We can use an increased GPU process priority as a mitigation (see the sketch below), but it doesn't fully fix the issue.
      • Performance issue seen in async reco on MI100, need to investigate.
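      The priority mitigation above essentially means raising the scheduling priority of the GPU processes. On Linux this boils down to something like the following sketch (how it is actually applied on the nodes may differ, and a negative nice value needs e.g. CAP_SYS_NICE):

        #include <sys/resource.h>
        #include <cstdio>

        // Lower the nice value of the current process so the GPU process is
        // scheduled ahead of other jobs; a mitigation, not a full fix.
        bool raisePriority(int niceValue = -10)
        {
          if (setpriority(PRIO_PROCESS, 0 /* this process */, niceValue) != 0) {
            std::perror("setpriority");
            return false;
          }
          return true;
        }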

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or ongoing async jobs.
          • We must not lose track of which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023. Status?
      • Improve DataDistribution file replay performance; currently it cannot go faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • Interface to communicate list of active EPNs to epn2eos monitoring: https://alice.its.cern.ch/jira/browse/EPN-381
        • Calib nodes integrated by Federico, working
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes (see the sketch below). Currently, we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
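      A sketch of the utilization-based alternative to round-robin (illustrative only, not DataDistribution code): always send the next TF to the EPN with the lowest fractional buffer utilization, so nodes with larger or emptier buffers (e.g. the MI100 nodes) automatically absorb more TFs before any MI50 node reaches backpressure:

        #include <cstddef>
        #include <vector>

        struct EpnState {
          double usedBytes;  // currently buffered bytes
          double totalBytes; // SHM buffer size (larger on MI100 nodes)
        };

        // Pick the EPN with the lowest relative buffer utilization.
        inline std::size_t pickEpn(const std::vector<EpnState>& epns)
        {
          std::size_t best = 0;
          double bestUtil = 2.; // any real utilization is below this
          for (std::size_t i = 0; i < epns.size(); ++i) {
            double util = epns[i].usedBytes / epns[i].totalBytes;
            if (util < bestUtil) {
              bestUtil = util;
              best = i;
            }
          }
          return best;
        }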

      Other EPN topics:

      TPC Raw decoding checks:

      • Add an additional check on the DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit; a sketch follows below.
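      A minimal sketch of such a check (hypothetical names, not the actual O2 code): compare the firstOrbit of each detector against the first one seen and flag the TimeFrame on any mismatch:

        #include <cstdint>
        #include <cstdio>
        #include <optional>
        #include <string>
        #include <vector>

        struct DetectorInput {
          std::string name;
          uint32_t firstOrbit;
        };

        // Returns the common firstOrbit, or std::nullopt if the detectors
        // disagree, in which case the TimeFrame should be flagged.
        inline std::optional<uint32_t> commonFirstOrbit(const std::vector<DetectorInput>& inputs)
        {
          std::optional<uint32_t> ref;
          for (const auto& in : inputs) {
            if (!ref) {
              ref = in.firstOrbit;
            } else if (in.firstOrbit != *ref) {
              std::fprintf(stderr, "firstOrbit mismatch: %s reports %u, expected %u\n",
                           in.name.c_str(), in.firstOrbit, *ref);
              return std::nullopt;
            }
          }
          return ref;
        }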

      Full system test issues:

      Topology generation:

      • Should test deploying the topology with the DPL driver, to have the remote GUI available. Status?

      QC / Monitoring / InfoLogger updates:

      • TPC has opened a first PR for the monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side, and the plan is to extend this to all detectors and to also include trending for raw data sizes.

      AliECS related topics:

      • Extra env var field still not multi-line by default.

      GPU ROCm / compiler topics:

      • Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
      • Found a new miscompilation with -ffast-math enabled in the looper following; -ffast-math is disabled for now.
      • Must create a new minimal reproducer for the compile error that appears when we enable the LOG(...) functionality in the HIP code, to check whether this is a bug in our code or in ROCm (a skeleton follows after this list). Lubos will work on this.
      • Another compiler problem with template treatment, found by Ruben. We have a workaround for now; need to create a minimal reproducer and file a bug report.
      • While debugging the calibration, debug output triggered another internal compiler error in the HIP compiler. No problem for now, since it happened only with temporary debug code, but we should still report it to AMD to get it fixed.
      • New compiler regression in ROCm 5.6; need to create a testcase and send it to AMD.
      • ROCm 5.7 is released, didn't check it yet. The AMD MI50 will go end-of-maintenance in Q2 2024; checking with AMD whether the card will still be supported by future ROCm versions.
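      For the LOG(...) reproducer mentioned above, the target is a self-contained HIP file without any O2 dependencies that still triggers the error. A skeleton of what such a reproducer typically reduces to (device-side printf standing in for the logging call; the actual failing pattern still has to be extracted, and error checking is omitted):

        #include <hip/hip_runtime.h>
        #include <cstdio>

        __global__ void kernel(int* out)
        {
          // Stand-in for the LOG(...) call suspected to trigger the ICE.
          printf("tid %d\n", (int)threadIdx.x);
          out[threadIdx.x] = threadIdx.x;
        }

        int main()
        {
          int* buf = nullptr;
          hipMalloc(&buf, 32 * sizeof(int));
          hipLaunchKernelGGL(kernel, dim3(1), dim3(32), 0, 0, buf);
          hipDeviceSynchronize();
          hipFree(buf);
          return 0;
        }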

      TPC GPU Processing:

      • Bug in TPC QC with MC embedding: the TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of the signal and of the background events (see the sketch after this list).
      • Online runs at low IR / low energy show weird clusters-per-track statistics.
        • The problem was due to an incorrect vdrift, though it is not clear why this breaks the tracking so badly; being investigated.
      • Ruben reported an issue with the global track refit, which sometimes does not produce the TPC track fit results. To be investigated.
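      The sourceID issue above amounts to comparing MC labels by track ID only; with embedding, a label is only unique as the full (source, event, track) triple. A simplified stand-in type to illustrate this (the real O2 label type carries this information):

        #include <cstdint>
        #include <tuple>

        struct McLabel {
          int32_t trackID;
          uint16_t eventID;
          uint8_t sourceID; // distinguishes signal from embedded background

          // Comparing by trackID alone confuses signal and background tracks
          // that happen to share a trackID; all three fields must match.
          bool operator==(const McLabel& o) const
          {
            return std::tie(sourceID, eventID, trackID) ==
                   std::tie(o.sourceID, o.eventID, o.trackID);
          }
        };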

      ANS Encoding:

      • The new CTF coding scheme is merged and was commissioned, but it is not yet active. It would give ~10% better compression than what we currently have. We can also gain ~2.5% by updating the dictionaries for the current encoding, which Ruben will do once we go to 50 kHz.

      Issues currently lacking manpower, waiting for a volunteer:

      • For debugging, it would be convenient to have a proper tool that (using the FairMQ debug mode) can list all messages currently in the SHM segments, similar to what I had hacked together for https://alice.its.cern.ch/jira/browse/O2-2108
      • Redo / improve the parameter range scan for tuning the GPU parameters. In particular on the AMD GPUs, which seem to be affected much more by memory sizes, we have to use test time frames of the correct size, and we have to separate training and test data sets (see the sketch below).
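      A sketch of the intended scan procedure (illustrative only; the benchmark driver is a hypothetical placeholder): choose the optimum on a training set of time frames of realistic size, then confirm it on a disjoint test set, so the tuning does not overfit the TFs it was scanned on:

        #include <cstddef>
        #include <limits>
        #include <vector>

        struct ParamPoint { /* GPU tuning parameters */ };

        // Bench is a callable double(const ParamPoint&, const std::vector<int>&),
        // returning e.g. the mean TF processing time for that parameter set.
        template <typename Bench>
        ParamPoint scan(const std::vector<ParamPoint>& grid,
                        const std::vector<int>& trainTFs,
                        const std::vector<int>& testTFs,
                        Bench&& benchmark)
        {
          ParamPoint best{};
          double bestTime = std::numeric_limits<double>::max();
          for (const auto& p : grid) { // optimize on the training TFs only
            double t = benchmark(p, trainTFs);
            if (t < bestTime) {
              bestTime = t;
              best = p;
            }
          }
          benchmark(best, testTFs); // verify the optimum on the disjoint test set
          return best;
        }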
    • 11:20 11:25
      TRD Tracking 5m
      Speaker: Ole Schmidt (CERN)
    • 11:25 11:30
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
    • 11:30 11:35
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
      • ITS tracking: on hold
      • HIP as a language (debugging made difficult on the EPNs because rocm-gdb breaks if the driver version != runtime version):
        • Previous inconsistent behaviours were related to the architecture being wrongly detected by rocm_agent_enumerator.
        • rocm_agent_enumerator fails if run in any alienv-entered environment (the usual Python failure with encodings); nonetheless there is a default behaviour that seems to depend on the CMake version.
        • The CMake module for the HIP language changes a lot around the versions where the feature was enabled, which makes the investigation across different versions longer.
        • The default behaviour upon failed autodetection needs to be better understood, to see if we can override it with a meaningful default.