Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Zoom Meeting ID: 61230224927
Host: David Rohr
    • 11:00 AM – 11:20 AM
      Discussion 20m
      Speakers: David Rohr (CERN), Ole Schmidt (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority RC YETS issues:

      • Fix dropping lifetime::timeframe for good: No news
        • Still pending: problem with CCDB objects getting lost by DPL leading to "Dropping lifetime::timeframe"; at least one occurrence was seen during SW validation.
      • Start / Stop / Start:
        • Problems in readout and QC fixed. Now 2 new problems left:
          • Some processes crash randomly (usually ~2 out of >10k) when restarting. The stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC ITS matching QC crashes when accessing CCDB objects. Not clear whether this is the same problem as above or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: No news
      • Problem with DPL getting stuck waiting for oldestPossibleTimeframe reappeared in physics data taking when the above CCDB "Dropping lifetime::timeframe" bug happens for 2 TFs in a row. Should give this higher priority; I'll try to have a look.
      • Fix problem with ccdb-populator: no idea yet, no ETA.
      • Memory leak in DPL internal-ccdb-backend - no progress.

       

      High priority framework topics:

      • See YETS issues

       

      Other framework tickets:

       

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with the new O2 version at the restart of data taking. Cannot reproduce the problems any more.

       

      Async reconstruction:

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
        • Checking 2 things: does the situation get better without GPU monitoring? --> Inconclusive
        • We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or ongoing async jobs.
          • We must not lose track of which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023. Status?
      • Improve DataDistribution file replay performance: currently we cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and to give more data to the MI100 nodes (see the sketch after this list). Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • AMD ROCm 5.5/5.6 (our current stack) does not support ALMA 8.9, so we can only bump to 8.8. Asked AMD how long they will keep supporting 8.8; if they drop it with ROCm 6.1, it doesn't make sense for us to bump.
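      A minimal sketch of what buffer-utilization-based TF assignment could look like, as an alternative to round-robin (hypothetical illustration, not the DataDistribution implementation; the struct, the MI100 weight of 1.5, and the 5% free-buffer threshold are assumptions):

        #include <cstddef>
        #include <string>
        #include <vector>

        struct EpnState {
          std::string name;
          bool isMI100 = false;          // MI100 nodes should receive proportionally more data than MI50 nodes
          std::size_t shmTotalBytes = 0; // total SHM buffer on the node
          std::size_t shmUsedBytes = 0;  // currently occupied SHM
        };

        // Returns the index of the node that should receive the next TF,
        // or -1 if all nodes are effectively full (i.e. backpressure).
        int pickTargetEpn(const std::vector<EpnState>& epns)
        {
          int best = -1;
          double bestScore = 0.05; // require at least ~5% free buffer before assigning a TF
          for (std::size_t i = 0; i < epns.size(); ++i) {
            const auto& e = epns[i];
            if (e.shmTotalBytes == 0) {
              continue;
            }
            double freeFraction = 1.0 - static_cast<double>(e.shmUsedBytes) / static_cast<double>(e.shmTotalBytes);
            double score = freeFraction * (e.isMI100 ? 1.5 : 1.0); // bias toward MI100 nodes
            if (score > bestScore) {
              bestScore = score;
              best = static_cast<int>(i);
            }
          }
          return best;
        }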

       

      Other EPN topics:

       

      Raw decoding checks:

      • Add an additional check at the DPL level to make sure that the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit (a sketch follows below).
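      A minimal sketch of such a consistency check (illustration only, not the actual DPL code; the function name and input layout are assumptions):

        #include <cstdint>
        #include <iostream>
        #include <optional>
        #include <string>
        #include <utility>
        #include <vector>

        // Given the firstOrbit reported by each detector, return it if all detectors agree,
        // or std::nullopt (after logging the offending detector) if there is a mismatch.
        std::optional<uint32_t> checkFirstOrbitConsistency(
          const std::vector<std::pair<std::string, uint32_t>>& perDetectorFirstOrbit)
        {
          std::optional<uint32_t> reference;
          for (const auto& [detector, firstOrbit] : perDetectorFirstOrbit) {
            if (!reference) {
              reference = firstOrbit;
            } else if (firstOrbit != *reference) {
              std::cerr << "firstOrbit mismatch: " << detector << " reports " << firstOrbit
                        << " while previous detectors report " << *reference << "\n";
              return std::nullopt; // mismatch: do not silently pick one of the values
            }
          }
          return reference; // identical for all detectors (empty input yields an empty result)
        }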

       

      Full system test issues:

      Topology generation:

      • Should test deploying the topology with the DPL driver, to have the remote GUI available.
        • DPL driver needs to implement FMQ state machine. Postponed until YETS issues solved.

       

      QC / Monitoring / InfoLogger updates:

      • TPC has opened the first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side; the plan is to extend this to all detectors and to also include trending of raw data sizes.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
      • Found a new miscompilation with -ffast-math enabled in looper following; disabled -ffast-math for now.
      • Must create a new minimal reproducer for the compile error when we enable the LOG(...) functionality in the HIP code, to check whether this is a bug in our code or in ROCm. Lubos will work on this.
      • Ruben found another compiler problem with template treatment. We have a workaround for now; need to create a minimal reproducer and file a bug report.
      • While debugging the calibration, debug output triggered another internal compiler error in the HIP compiler. Not a problem for now since it happened only with temporary debug code, but we should still report it to AMD so they can fix it.
      • Had a discussion with the AMD engineer about the status. No clear commitment on what can be fixed when, but there are multiple internal bug reports open and certain fixes are already in PR. However, the earliest release that could include them is probably ROCm 6.2, which will be too late for TS1, so most likely we'll have to stick to the current ROCm stack for this year's data taking.

       

      TPC GPU Processing:

      • Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of signal and background events.
      • New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex.
      • Status of cluster error parameterizations
        • Full cluster errors available in refit, occupancy maps shipped via DPL, and created if TPC tracking is not running.
        • No progress yet on newly requested debug streamers.
        • Porting to stable-async postponed, since not clear whether it will actually be used.
      • TPC processing performance regression:
        • O2/dev:
          • Total time 4.695s, Track Fit Time 1.147s, Seeding Time 1.241s
        • O2/dev with the commit from 4.3. reverted:
          • Total time 4.351s, Track Fit Time 1.089s, Seeding Time 1.008s
        • For reference, before introduction of the V-Shape map:
          • Total time 3.8421s (didn't measure individual times)
        • O2/dev with scaling factors hard-coded to 0 (essentially using one single transformation map without any scaling):
          • Total time 3.093s, Track Fit Time 0.682s, Seeding Time 0.429s
        • Proposed 3 ideas to speed up the map access (a sketch of idea 2 follows after this list):
          1. We merge the maps on-the-fly into one combined map and query only that single map.
          2. We could add plenty of #ifdefs in the code, to make sure that for online purposes none of the code for the non-static map is seen.
          3. We could try to optimize the code to make it easier for the compiler.
      • Meeting to discuss TPCFastTransform with TPC next Thursday; RC will give a short summary in the TB tomorrow.
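      A minimal sketch of what idea 2 could look like (hypothetical names and structure, not the real TPCFastTransform interface; SKETCH_ONLINE_BUILD is an assumed compile-time switch defined only for the online build):

        // Idea 2 sketch: hide the non-static map handling behind a compile-time switch,
        // so the online build compiles only the single static-map lookup.
        struct MapSketch {
          // stand-in for a correction-map lookup
          float eval(int row, float u, float v) const { return 0.001f * row + 0.0001f * u + 0.0001f * v; }
        };

        struct TransformSketch {
          MapSketch staticMap;
        #ifndef SKETCH_ONLINE_BUILD
          MapSketch derivativeMap, vShapeMap;             // extra maps needed only offline
          float scaleDerivative = 0.f, scaleVShape = 0.f; // scaling factors applied only offline in this sketch
        #endif

          float correction(int row, float u, float v) const
          {
        #ifdef SKETCH_ONLINE_BUILD
            // Online: a single static map, no scaling logic for the compiler to carry around.
            return staticMap.eval(row, u, v);
        #else
            // Offline: combine the static map with the scaled correction maps.
            float c = staticMap.eval(row, u, v);
            if (scaleDerivative != 0.f) {
              c += scaleDerivative * derivativeMap.eval(row, u, v);
            }
            if (scaleVShape != 0.f) {
              c += scaleVShape * vShapeMap.eval(row, u, v);
            }
            return c;
        #endif
          }
        };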

       

      General GPU Processing:

      • Porting CUDA features to HIP:
        • RTC compilation now fully working with HIP. Unfortunately, the performance benefit is only 2% compared to 5% with CUDA; not clear why. I had actually expected a larger improvement, since the AMD compiler / hardware seems to suffer more from complicated control flow than NVIDIA, and the constexpr optimization should reduce the control flow (see the sketch after this list).
        • Per-kernel compilation still not available with HIP
      • GPU code now compiles with C++20
      • Can bump to GCC 13 from the GPU side (tested locally). The problem is now with flatbuffers / onnxruntime. To be discussed in the WP3 / WP4/14 meetings.
      • Cannot bump LLVM/clang beyond 15 due to a bug in their OpenCL code. Verified that it is still broken in LLVM 18.1.
        • Filed a bug report, they fixed one problem but my reproducer still fails with internal compiler error.
      • Fixed GPU compilation for Run2 data / with AliRoot.
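      A minimal sketch of the general idea behind the RTC constexpr optimization (hypothetical example, not the O2 GPU RTC machinery; the kernel and parameter names are made up): the host embeds the run configuration as constexpr constants into the generated source before run-time compilation, so the compiler can fold away the corresponding control flow.

        #include <sstream>
        #include <string>

        // Generate kernel source with the configuration baked in as constexpr, so the
        // run-time compiler sees fixed loop bounds and branches it can eliminate.
        std::string makeRtcKernelSource(int nLayers, bool doDedx)
        {
          std::ostringstream src;
          src << "constexpr int kNLayers = " << nLayers << ";\n";
          src << "constexpr bool kDoDedx = " << (doDedx ? "true" : "false") << ";\n";
          src << R"(
        extern "C" __global__ void processTracks(float* out, const float* in, int n)
        {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) {
            return;
          }
          float v = in[i];
          for (int l = 0; l < kNLayers; ++l) { // trip count known at compile time
            v += 0.1f * l;
          }
          if (kDoDedx) {                       // branch removed entirely when kDoDedx == false
            v *= 2.0f;
          }
          out[i] = v;
        }
        )";
          return src.str();
        }
        // The generated string would then be handed to hiprtcCreateProgram / nvrtcCreateProgram.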
    • 11:20 AM – 11:25 AM
      TRD Tracking 5m
      Speaker: Ole Schmidt (CERN)
    • 11:25 AM – 11:30 AM
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
    • 11:30 AM – 11:35 AM
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)

      Nothing relevant to report; work in progress.

    • 11:35 AM – 11:55 AM
      TPC Track Model Decoding on GPU 20m
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))

      Basic version

      Improved version

      Results on EPN:
      • Node with MI50 GPU: dataset with 5.3*10^7 clusters; dataset with 2.7*10^8 clusters
      • Node with MI100 GPU: dataset with 5.3*10^7 clusters; dataset with 2.7*10^8 clusters