Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

    • 11:00 AM – 11:20 AM
      Discussion 20m
      Speakers: David Rohr (CERN), Ole Schmidt (CERN)

      Color code: critical / news during the meeting: green; news from this week: blue; news from last week: purple; no news: black


      High priority framework topics:

      • Regression of the START-STOP-START work that makes all runs fail with lots of error messages and breaks many calibration runs.
        • The revert didn't help for some reason; need to investigate further. We should fix the regression ASAP, and the general START-STOP-START issue before data taking restarts.
      • Async workflow for the 1-NUMA-domain setup with higher multiplicities gets stuck.
      • Fix START-STOP-START for good
      • Multi-threaded pipeline still not working in FST / sync processing; it works only in the standalone benchmark.
      • Support marking QC tasks as non-critical in the DDS and O2Control topology export.
      • Bumped DebugGUI; the issue with visualization of large workflows with >64k vertices is fixed.

      Other framework tickets:

      • Grafana metrics: might want to introduce additional rate metrics that subtract the header overhead, to get the pure payload rate. Low priority.
      • Backpressure reporting when there is only 1 input channel: no progress.
      • Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes.
      • Fix in PR, but has side effects which must also be fixed.
      • Cannot override debug severity for tpc-tracker.
      • Improve DebugGUI information.
      • Better error message (or a message at all) when an input is missing.
      • Problem with 2 devices of the same name.
      • Usage of valgrind in an external terminal: the testcase currently causes a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • DPL Raw Sequencer segfaults when an HBF is missing. Fixed by changing the way the raw-reader sends the parts, but Matthias will add a check to prevent such failures in the future.
      • Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it occurs only at termination, and the fix of the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
      • Support in DPL GUI to send individual START and STOP commands.
      • Problem I mentioned last time with non-critical QC tasks and DPL CCDB fetcher is real. Will need some extra work to solve it. Otherwise non-critical QC tasks will stall the DPL chain when they fail.

      Global calibration topics:

      • TPC IDC / SAC calibration:
        • Debug session tomorrow morning with Robert, to continue working on IDC+SAC workflow.

      Async reconstruction:

      • Severe memory issues over the Christmas break; most jobs on the EPN crashed due to going OOM.
      • Compared to the run numbers / O2 versions that were used for GPU tuning / release validation, other run numbers / the new O2 need somewhat more memory. The difference is only a few GB, but since we have literally no margin, jobs are killed for going OOM.
        • This affected the EPN more than the GRID, since the EPN used a larger SHM size to hold more time frames, owing to the faster GPU processing. With a reduced SHM size it seems to work on the EPN, but we have to reduce the number of TFs in flight to the level of CPU jobs.
      • Some investigation of the memory usage:
        • ~100 O2 + QC processes running:
          • min memory usage is 134 MB
          • Median is ~250 MB
          • Processes with large memory usage are: its-tracking, tpc-tracking, its-tpc-matching, tpc entropy decoding, ctf reader, qc file sink, tof and some other QC tasks.
        • But nothing too excessive: TPC + ITS tracking are at ~3 GB, the rest at 1 GB or below.
        • The sum of the processes' memory is ~38 GB, the SHM size was 19 GB, and the cgroup memory limit is 60 GB, so there is basically no margin.
      • Should try to reduce the memory of some processes, particularly QC, but although we have to do that in any case, it will not help much.
      • The only way out is to switch to the 1-NUMA-domain workflow, for which we have to fix the problem that the workflow gets stuck.
      • Switched the vobox to the 1-NUMA-domain Slurm queue setup. Still running the 1-GPU workflow, but this gives us more memory, so we can run the optimized GPU setup.
        • Bug in the NUMA-aware GPU selection when submitting two 4-GPU jobs to the same node: fixed in O2, with a workaround in place for GRID jobs until we switch to a new O2 tag.
      • High failure rates tonight: 1 bad EPN node, which was taken down and the EPN team informed. The other errors seem CCDB-related.
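      The "basically no margin" statement above can be made concrete with simple arithmetic on the approximate numbers quoted in this report (a sketch; values rounded as given above):

```python
# Rough per-node memory budget for async reconstruction on the EPN,
# using the approximate numbers quoted above.
process_rss_gb = 38    # sum of memory of the ~100 O2 + QC processes
shm_gb = 19            # SHM segment size
cgroup_limit_gb = 60   # cgroup memory limit

margin_gb = cgroup_limit_gb - (process_rss_gb + shm_gb)
print(f"margin: {margin_gb} GB")  # only ~3 GB left, so a few GB extra usage kills jobs
```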

      EPN major topics:

      • The new AMD ROCm >= 5.4 no longer supports CentOS as the operating system; officially supported are now only RHEL, SLES, and Ubuntu. Checking with AMD whether Alma or Rocky Linux would work. We should switch the EPN farm to a new OS before data taking, otherwise we will not be able to deploy new fixes from AMD.
      • Update ROCm to 5.3 for now.
      • Need a procedure / tool to move nodes quickly between the online and async partitions. EPN is working on this. Currently most EPNs usually stay in online, and we have to ask to get some in async. We should arrive at a state where all EPNs that are not needed online are in async by default.
      • Opened a JIRA ticket for EPN to follow up on the interface to change SHM memory sizes when no run is ongoing (requested 1 year ago). Otherwise we cannot tune the workflow for both Pb-Pb and pp.

      Other EPN topics:

      Topology generation:

      • Should change the dpl-workflow script to fail if any process in the DPL pipe (workflow | workflow | ...) has a non-zero exit code, e.g. via bash's set -o pipefail or by checking PIPESTATUS.
      • Switching phase 1 of topology generation to using updatable RPMs instead of a script in the home folder (basically just copying the existing script to another place). Timo will set up the Jenkins builder.

      QC / Monitoring / InfoLogger updates:

      • TPC has opened the first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side, and the plan is to extend this to all detectors and to also include trending for raw data sizes.

      CCDB topics:

      AliECS related topics:

      • Improve the error message in the AliECS GUI for EPN-related failures. PDP error messages are sent via ODC in the Run reply, e.g. for topology generation failures, but ECS does not show them; it only shows a generic "EPN Partition Initialize Failed".
      • Send list of FLPs in run to topology generation.
      • Send flag whether it is a production / staging environment to topology generation.

      GPU ROCm / compiler topics:

      • Locally tested OpenCL compilation with Clang 14, bumping -cl-std from clc++ (OpenCL 2.0) to CLC++2021 (OpenCL 3.0) and using the clang-internal SPIR-V backend. The Arrow bump to 8.0, which was a prerequisite, is done.
        • Work on bumping GCC is still ongoing (by Giulio); will follow up with Clang 15 afterwards, once we are at Arrow 10.
      • Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
      • Found a new miscompilation with -ffast-math enabled in the looper following; -ffast-math is disabled for now.
      • Must create a new minimal reproducer for the compile error that appears when we enable the LOG(...) functionality in the HIP code, to check whether this is a bug in our code or in ROCm. Lubos will work on this.
      • Another compiler problem with template treatment was found by Ruben. We have a workaround for now; need to create a minimal reproducer and file a bug report.

      TPC GPU Processing:

      • Random GPU crashes under investigation.
      • TPC CTF Skimming finalized.
      • Bug in the TPC tracking on the CPU depending on the number of threads: 5 threads give an incorrect result; 1, 2, 3, 4, 6, 64, and 128 seem to be OK. Investigating.
      • TPC CTF decoding now accepts "no input", not only "empty input".
      • Problem in TPC tracking, when some TPC pad rows in a sector have issues, track merging across these pad rows seems not to work correctly. Investigating.
      • Fixed several issues after report by Ruben about problem in refit:
        • Storage of outer parameters for looping tracks (now stored at outermost position of primary leg): should we do this only when secondary legs are dropped, or always?
        • Fix removal of cluster association of dropped secondary legs, if cluster is shared and attached to a primary leg of a different track.
        • Fix leg counting for ce-crossing tracks.
        • The problem in the refit of low-pT tracks in the TrackParCov model was due to large cluster errors and high covariances for very low-pT tracks. Ruben is working to improve this.
      • Working to improve TPC track time assignment:
        • Start with an average eta estimate (eta = 0.5, configurable) in the seeding, instead of with z = 0.
        • Improved algorithm to propagate to beamline.
        • If a track doesn't come close to the beamline, assume it is a secondary and use eta = 0.5 for the innermost hit.
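      For illustration, the eta = 0.5 seed corresponds to a z position via z = r·sinh(eta) for a straight line from the origin (a sketch only; the 85 cm radius below is an example value, not taken from this report, and this is not the O2 implementation):

```python
import math

def seed_z(r_xy_cm: float, eta: float = 0.5) -> float:
    """z at transverse radius r_xy for a straight track from the origin
    at pseudorapidity eta: eta = -ln(tan(theta/2))  =>  z = r_xy * sinh(eta).
    Illustrative sketch only."""
    return r_xy_cm * math.sinh(eta)

# Seeding with eta = 0.5 instead of z = 0, evaluated at e.g. r = 85 cm:
print(round(seed_z(85.0), 1))  # ~44.3 cm instead of 0
```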

      TRD Tracking:

      • Some minor updates to fix issues with too-accurate time in pile-up scenarios leading to fake vertices, and added a parameter to remove TRD tracks with fewer than 3 matches.

      ITS GPU Tracking and Vertexing:

      • Work on tracking ongoing, splitting of TF to reduce memory size implemented.

      ANS Encoding:

      • Michael is still working on the Elias delta encoding and needs to fix the algorithm to respect C++ pointer-alignment constraints; afterwards he will continue with the integration.
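      For reference, a minimal sketch of the textbook Elias delta code for positive integers (this illustrates the coding scheme itself, not Michael's O2 implementation or its alignment handling):

```python
def elias_delta(n: int) -> str:
    """Elias delta code of a positive integer n, as a bit string.
    N = bit length of n is written with an Elias gamma code
    (floor(log2 N) zeros, then N in binary), followed by the
    N-1 low-order bits of n."""
    assert n >= 1
    nbits = n.bit_length()            # N
    lbits = nbits.bit_length() - 1    # floor(log2 N) leading zeros
    return "0" * lbits + format(nbits, "b") + format(n, "b")[1:]

for n in (1, 2, 10, 17):
    print(n, elias_delta(n))
```

      A real bitstream packer would concatenate these codes into machine words, which is where the pointer-alignment constraints mentioned above come into play.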

      Issues currently lacking manpower, waiting for a volunteer:

      • For debugging, it would be convenient to have a proper tool that (using FairMQ debug mode) can list all messages currently in the SHM segments, similarly to what I had hacked together for
      • Redo / improve the parameter range scan for tuning GPU parameters. In particular on the AMD GPUs, since they seem to be affected much more by memory sizes, we have to use test time frames of the correct size, and we have to separate training and test data sets.