Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

    • 11:00 → 11:20
      Discussion 20m
      Speakers: David Rohr (CERN), Ole Schmidt (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority RC YETS issues:

      • Fix dropping lifetime::timeframe for good: No news
        • Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
        • Ruben reported a similar problem in async reco, but only during the last TFs of a run, probably an independent bug.
        • No other instances of Dropping lifetime::timeframe seen at P2.
      • Expendable tasks in QC. Everything merged on our side.
        • There was a new problem on the EPNs: the DPL DDS export had the wrong format. Fixed by Ole + Giulio. A patch by EPN for the XML merging is needed for the next test.
      • Start / Stop / Start:
        • Problems in readout and QC fixed. Now 3 new problems, at least 2 on our side: No news
          • GPU multi-thread pipeline gets stuck after restart. Should be trivial to fix. https://its.cern.ch/jira/browse/O2-4638
          • Some processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace points to FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC-ITS matching QC crashes when accessing CCDB objects. Not clear whether it is the same problem as above or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308
        • Work in progress, partial PR open.
      • Fix problem with ccdb-populator: no idea yet, no ETA.

       

      High priority framework topics:

      • See YETS issues

       

      Other framework tickets:

      • TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
      • Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
      • Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
      • Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
      • https://alice.its.cern.ch/jira/browse/O2-1900 : Fix in PR, but it has side effects which must also be fixed.
      • https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
      • https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
      • https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
      • https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
      • https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it is only at the termination, and the fix of the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
      • Support in DPL GUI to send individual START and STOP commands.
      • The problem I mentioned last time with non-critical QC tasks and the DPL CCDB fetcher is real. It will need some extra work to solve; otherwise non-critical QC tasks will stall the DPL chain when they fail.
      • DPL sending SHM metrics for all devices, not only input proxy: https://alice.its.cern.ch/jira/browse/O2-4234
      • Some improvements to ease debugging: https://alice.its.cern.ch/jira/browse/O2-4196 https://alice.its.cern.ch/jira/browse/O2-4195 https://alice.its.cern.ch/jira/browse/O2-4166
      • After Pb-Pb, we need to do a cleanup session and go through all these pending DPL tickets with a higher priority, and finally try to clean up the backlog.

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

      Sync processing

      • Proposal to parse InfoLogger message and alert automatically: https://alice.its.cern.ch/jira/browse/R3C-992
      • Seen crashes in pp replay: corrupt CCDB objects, but also general corruption in SHM.
        • Tried downgrading ROOT to check whether it caused the CCDB object corruption; it does not seem to be the case.
        • The SHM corruption (which can also corrupt the CCDB objects) seems to come from ITS decoding.
          • A bit weird, since the code wasn't changed, and the problem appears only with pp replay data, not with Pb-Pb replay, and not with pp or Pb-Pb SYNTHETIC.
      • Problem that the Ethernet route to the DCS node FLP199 wasn't working. O2DPG was fixed to use the IB route to FLP199 directly in objects created on the EPN.
        • We should add an IB alias for alidcs.cern.ch... and use that.

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
        • Checking 2 things: does the situation get better without GPU monitoring? --> Inconclusive
        • We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • Network problems on EPN farm solved, back in operation.
      • Improvement by Giulio to reduce QC memory consumption in async reco by changing ROOT serialization. To be validated / tested.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes, and there should be no limitation for ongoing runs at P2 or ongoing async jobs.
          • We must not lose the information which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023, Status?
      • Improve DataDistribution file replay performance: currently it cannot go faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes (see the sketch after this list). Right now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Need fix for XML merging for topologies with expendable tasks.
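
      A minimal sketch of the buffer-utilization idea from the DataDistribution point above. This is purely illustrative and not the actual DataDistribution code or API; the node structure, the capacityWeight values, and the selection function are assumptions.

        // Hypothetical illustration: instead of round-robin, pick the EPN with the
        // largest weighted free-buffer fraction, so MI100 nodes receive more TFs.
        #include <cstddef>
        #include <vector>

        struct EpnNode {
          std::size_t bufferTotal;  // SHM buffer size on this node
          std::size_t bufferUsed;   // currently occupied bytes
          double capacityWeight;    // e.g. 1.0 for MI50, higher for MI100 (assumed values)
        };

        // Returns the index of the node that should receive the next TimeFrame.
        std::size_t pickTarget(const std::vector<EpnNode>& nodes)
        {
          std::size_t best = 0;
          double bestScore = -1.0;
          for (std::size_t i = 0; i < nodes.size(); ++i) {
            const double freeFrac = 1.0 - double(nodes[i].bufferUsed) / double(nodes[i].bufferTotal);
            const double score = freeFrac * nodes[i].capacityWeight;
            if (score > bestScore) {
              bestScore = score;
              best = i;
            }
          }
          return best;
        }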

       

      Other EPN topics:

       

      Raw decoding checks:

      • Add an additional check on the DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit (minimal sketch below).
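
      A minimal sketch of the proposed firstOrbit consistency check, assuming per-detector inputs that each report a firstOrbit; the types and function name are illustrative, not the actual DPL implementation.

        #include <cstdint>
        #include <optional>
        #include <stdexcept>
        #include <string>
        #include <vector>

        struct DetectorInput {
          std::string detector; // e.g. "TPC", "ITS"
          uint32_t firstOrbit;  // first orbit reported in this detector's headers
        };

        // Returns the common firstOrbit to use as the TimeFrame first orbit,
        // or throws if the detectors disagree.
        uint32_t commonFirstOrbit(const std::vector<DetectorInput>& inputs)
        {
          std::optional<uint32_t> reference;
          for (const auto& in : inputs) {
            if (!reference) {
              reference = in.firstOrbit;
            } else if (*reference != in.firstOrbit) {
              throw std::runtime_error("firstOrbit mismatch for detector " + in.detector);
            }
          }
          if (!reference) {
            throw std::runtime_error("no detector inputs received");
          }
          return *reference;
        }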

       

      Full system test issues:

      Topology generation:

      • Should test deploying the topology with the DPL driver, to have the remote GUI available.
        • The DPL driver needs to implement the FMQ state machine. Postponed until the YETS issues are solved.

       

      QC / Monitoring / InfoLogger updates:

      • TPC has opened the first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side, and the plan is to extend this to all detectors and to also include trending for raw data sizes.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
      • Found a new miscompilation with -ffast-math enabled in looper following; -ffast-math is disabled for now.
      • Must create new minimal reproducer for compile error when we enable LOG(...) functionality in the HIP code. Check whether this is a bug in our code or in ROCm. Lubos will work on this.
      • Another compiler problem with template treatment was found by Ruben. We have a workaround for now; need to create a minimal reproducer and file a bug report.
      • While debugging the calibration, debug output triggered another internal compiler error in the HIP compiler. Not a problem for now, since it happened only with temporary debug code, but we should still report it to AMD so they can fix it.
      • Had a call with AMD yesterday, basically summarized the status. Now again waiting for them to work on it.

       

      TPC GPU Processing

      • Bug in TPC QC with MC embedding: the TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of signal and background events.
      • New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex.
      • Implemented all debug streamers as requested by TPC.
      • Almost all cluster error parameterization changes are implemented. Still pending:
        • Provide average cluster qMax
        • Use a better formula after tuning with the debug streamers (a generic illustrative error form is sketched after this list)
        • Still need to decide whether we just want to exclude edge clusters, as Ruben and I propose, or have a smooth error masking, as Marian proposes.
        • Implement edge correction and cluster masking / rejection.
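
      For illustration of what such an error parameterization typically looks like (a generic form with placeholder coefficients, not the actual O2 formula or the tuning under discussion):

        struct ClusterErrorParams {
          float sigma0Sq;  // intrinsic resolution squared [cm^2]
          float diffCoeff; // diffusion term coefficient [cm^2 per cm of drift]
          float angCoeff;  // angular (pad-length) term coefficient [cm^2]
        };

        // Generic sketch: sigma^2 of the pad-direction cluster position error grows
        // with drift length (diffusion) and with the local track inclination.
        inline float clusterErrorY2(const ClusterErrorParams& p, float driftLength, float tanPhi)
        {
          return p.sigma0Sq + p.diffCoeff * driftLength + p.angCoeff * tanPhi * tanPhi;
        }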

      General GPU Processing

      • Consistency between CPU and GPU processing status:
        • Trying to get fully deterministic tracking with GPUCA_NO_FAST_MATH + additional debug options, which will introduce many intermediate sorting steps.
          • Fixed one more significant race condition bug in TPC tracking (now in total 6 bugs fixed).
          • Improved sorting kernels (some were still not fully deterministic) and sped up sorting, so that we can run this with >= 10 PbPb collision TFs.
          • Inconsistency in clusterization still to be investigated by Felix when he finds time. For now using only data sets where it doesn't appear.
          • State now:
            • On the 10 Pb-Pb collision dataset: repeatedly running on the GPU always gives the same result, and the same holds for the CPU.
            • CPU and GPU results still differ. In-sector and adjacent-sector merging are consistent; differences appear in CE merging and the refit.
      • Started work to make the O2 propagator easily usable in ITS tracking, which is not part of the GPU reconstruction library - TODOs (see the sketch after this list):
        • Provide an (optionally device-relocatable-code) object that can be linked to other GPU code, e.g. ITS, which provides all code needed to use the propagator. The same mechanism as for the other kernel files will obtain and fill the constant cache: WIP
        • Use constant memory in fewer places, to disentangle the code. In particular, pass the processing context as a kernel argument instead of via the constant cache.
        • Once this is all working in CUDA, port over all the work to the HIP backend, including RTC.
        • Switch the HIP backend to autogenerate the HIP code from the CUDA code.
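
      A sketch of the "context as kernel argument" item above. The names are hypothetical and not the actual GPUTracking interfaces; the point is that an external user such as ITS tracking can launch the kernel without depending on the GPU library's constant-memory setup.

        // Context holding what the propagator needs on the device (illustrative).
        struct PropagatorContext {
          const float* bzField;   // magnetic field data (device pointer)
          const float* matBudget; // material budget lookup (device pointer)
        };

        // Old style for comparison: context lives in constant memory and is filled
        // by the GPU reconstruction library before the launch.
        // __constant__ PropagatorContext gContext;

        // New style: the context is passed as an ordinary kernel argument.
        __global__ void propagateTracksKernel(PropagatorContext ctx, float* trackParams, int nTracks)
        {
          const int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= nTracks) {
            return;
          }
          // ... use ctx.bzField / ctx.matBudget to propagate track i ...
          (void)ctx;
          (void)trackParams;
        }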
    • 11:20 → 11:25
      TRD Tracking 5m
      Speaker: Ole Schmidt (CERN)
    • 11:25 → 11:30
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      Main focus

      • Visualizations and QA: RootInteractive for Tracks and Clusters
      • Data-readout / transformation: Get the (X, Y, Z) position from (sector, row, pad, time), get all clusters assigned to tracks (after tracking), read out track properties (an illustrative coordinate conversion is sketched below).
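
      An illustrative version of the (sector, row, pad, time) -> (X, Y, Z) conversion. This is not the O2 TPCFastTransform (which also applies calibrations and distortion corrections); all constants, the sign convention for z, and the function signature are assumptions for illustration only.

        #include <cmath>

        struct GlobalPos { float x, y, z; };

        // rowX is the radius of the pad row taken from the geometry tables (assumed input).
        GlobalPos toGlobal(int sector, float rowX, int padsInRow, float pad, float time)
        {
          const float kPi = 3.14159265f;
          const float kPadWidth    = 0.6f;   // cm, placeholder (varies per pad region)
          const float kDriftVel    = 2.58f;  // cm per time bin, placeholder calibration value
          const float kMaxDriftLen = 250.0f; // cm
          const float kSectorAngle = 2.0f * kPi / 18.0f; // 18 sectors per side

          // Local sector coordinates: x along the row radius, y along the pad direction.
          const float lx = rowX;
          const float ly = (pad - 0.5f * (padsInRow - 1)) * kPadWidth;

          // Rotate into the global frame by the sector's azimuthal angle.
          const float alpha = (sector % 18 + 0.5f) * kSectorAngle;
          GlobalPos g;
          g.x = lx * std::cos(alpha) - ly * std::sin(alpha);
          g.y = lx * std::sin(alpha) + ly * std::cos(alpha);

          // z from the drift time; the sign depends on the TPC side.
          const float drift = kMaxDriftLen - time * kDriftVel;
          g.z = (sector < 18) ? drift : -drift;
          return g;
        }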

       

      QA and visualizations

      Figures shown: clusters for a single event; tracks for 50 events @ 50 kHz Pb-Pb, native (top) vs. network (bottom); dE/dx vs. tpcInnerParam; Chi2 / NCl; number of clusters.

      Current issues to solve

      • The reco workflow with QA and MC enabled crashes, since not every cluster has an ideal cluster attached to it after the assignment process -> tried a dummy label, no label, explicitly unsetting it...
      • Getting the tracks transformed correctly is not so easy: have to check how some other tasks do it (linear transformations just looked completely off...)

       

      Next steps

      • Better training data selection for the network -> create a quality score for the training data based on charge contribution by MC charge, sector boundary, etc., and weight the training data accordingly
      • Feel comfortable interfacing clusters / tracks now -> implementing the PyTorch C++ API in O2. Will try to get a simple GPU script for ROCm working within O2 (a minimal sketch follows after this list)
      • Return to network training
      • To be discussed: Looper fitting based on tagged clusters (could start with simple helix model)
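
      A minimal sketch of the "simple GPU script" starting point with the PyTorch C++ API (libtorch). The assumption here is that a ROCm build of libtorch exposes the HIP device through the usual torch::kCUDA device type; everything else is plain libtorch.

        #include <torch/torch.h>
        #include <iostream>

        int main()
        {
          if (!torch::cuda::is_available()) {
            std::cout << "No GPU visible to libtorch\n";
            return 1;
          }
          torch::Device device(torch::kCUDA);

          // Trivial computation on the GPU: a small matrix multiplication.
          auto a = torch::randn({512, 512}, device);
          auto b = torch::randn({512, 512}, device);
          auto c = torch::mm(a, b);

          std::cout << "GPU result mean: " << c.mean().item<float>() << "\n";
          return 0;
        }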
    • 11:30 → 11:35
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
    • 11:35 → 11:55
      TPC Track Model Decoding on GPU 20m
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))

      5.3×10^7 clusters TimeFrame

      Intel CPU 12 cores / Nvidia GPU

      EPN - AMD CPU 128 cores / AMD GPU

      2.7×10^8 clusters TimeFrame

      EPN - AMD CPU 128 cores / AMD GPU