
Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Videoconference
ALICE GPU Meeting
Zoom Meeting ID
61230224927
Host
David Rohr
    • 11:00 11:20
      Discussion 20m
      Speakers: David Rohr (CERN), Giulio Eulisse (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority Framework issues:

      • Fix dropping lifetime::timeframe for good: Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
        • Checked with Ernst, we still get such errors at EOR, but these are false errors. Giulio has suppressed them in O2/dev. Need to wait for SW update at P2.
        • Giulio: Related problem with multi-output proxy - not a problem!
      • Start / Stop / Start: 2 problems on O2 side left:
          • All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC ITS matching QC crashing accessing CCDB objects. Not clear if same problem as above, or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Status?
      • Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
      • Expendable tasks - 2 problems reported, 1 already fixed by Giulio. The other is a cyclic channel of the DPL dummy sink - still under investigation.

       

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

       

      Sync reconstruction

      • Had a problem with COSMIC runs crashing due to bug in ITS code.
        • Will create COSMIC REPLAY data sets as well, so this can be tested in staging. Recorded cosmic raw data. Status?
        • Crash was from incorrect printout in ITS reco, fixed in new O2 deployed at P2.
      • Asked Ernst if he can prepare SYNTHETIC and REPLAY data sets from now on. He is looking into it.
      • gpu-reconstruction crashing due to receiving bogus CCDB objects.
        • Found that the corruption occurs when the CCDB object download times out. The new O2 tag of today has proper error detection and will give a FATAL error message instead of shipping corrupted objects (see the sketch after this list).
        • Still not clear why it sometimes times out, particularly since only the ccdb-memory daemon, which runs on the local node, is used. To be debugged.
      • Crashes with boost interprocess lock error were due to memory corruption caused by bug in TOF compressor and corrupt TOF raw data not treated correctly. Tentative fix deployed at P2, need to validate that it doesn't happen any more.
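
      A minimal sketch of the CCDB-download check described above, in plain C++. The structure and function names are made up for illustration and do not reflect the actual O2 / CCDB API; the point is only that a timed-out or truncated transfer raises a fatal error instead of being passed on for deserialization.

        #include <cstdint>
        #include <stdexcept>
        #include <string>
        #include <vector>

        // Hypothetical result of a CCDB object download (not the real O2 API).
        struct CcdbDownload {
          std::vector<uint8_t> payload; // bytes actually received
          uint64_t expectedSize = 0;    // size announced by the server (e.g. Content-Length)
          bool timedOut = false;        // whether the transfer hit the timeout
        };

        // Validate the transfer before handing the buffer to deserialization.
        // A truncated or timed-out download must never be shipped downstream.
        const std::vector<uint8_t>& validateOrFatal(const CcdbDownload& d, const std::string& path)
        {
          if (d.timedOut) {
            throw std::runtime_error("FATAL: CCDB download of " + path + " timed out");
          }
          if (d.payload.size() != d.expectedSize) {
            throw std::runtime_error("FATAL: CCDB object " + path + " truncated: received " +
                                     std::to_string(d.payload.size()) + " of " +
                                     std::to_string(d.expectedSize) + " bytes");
          }
          return d.payload; // only a complete, non-timed-out buffer is passed on
        }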

       

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
        • does the situation get better without GPU monitoring? --> Inconclusive
        • We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • The limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on the EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
          • We must not lose which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
      • Improve DataDistribution file replay performance, currently cannot do faster than 0.8 Hz, cannot test MI100 EPN in Pb-Pb at nominal rate, and cannot test pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point (see the sketch after this list). https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
      • Improve core dumps and time stamps: https://its.cern.ch/jira/browse/EPN-487
      • Tentative time for ALMA9 deployment: December 2024.
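
      A sketch of the buffer-utilization-based distribution proposed in EPN-397, as a plain C++ illustration rather than actual DataDistribution code; the structure, field names, and the MI100 weight factor are assumptions. The idea is to pick the TfBuilder with the most free SHM buffer and to prefer MI100 nodes, instead of strict round-robin.

        #include <cstddef>
        #include <vector>

        // Hypothetical per-node state (not the real DataDistribution structures).
        struct TfBuilderInfo {
          std::size_t bufferTotal; // SHM buffer size in bytes
          std::size_t bufferUsed;  // currently occupied bytes
          bool isMI100;            // faster node, should absorb proportionally more TFs
        };

        // Pick the target for the next TF: highest weighted free-buffer fraction
        // instead of round-robin, with a bonus weight for MI100 nodes.
        int pickNextBuilder(const std::vector<TfBuilderInfo>& nodes, double mi100Weight = 1.5)
        {
          int best = -1;
          double bestScore = -1.0;
          for (std::size_t i = 0; i < nodes.size(); ++i) {
            const auto& n = nodes[i];
            const double freeFrac = 1.0 - double(n.bufferUsed) / double(n.bufferTotal);
            const double score = freeFrac * (n.isMI100 ? mi100Weight : 1.0);
            if (score > bestScore) {
              bestScore = score;
              best = int(i);
            }
          }
          return best; // index of the node that should receive the next TF
        }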

       

      Other EPN topics:

       

      Full system test issues:

      Topology generation:

      • Should test to deploy topology with DPL driver, to have the remote GUI available.
        • DPL driver needs to implement FMQ state machine. Postponed until YETS issues solved.
      • 2 occurrences where the git repository in the topology cache was corrupted. Not really clear how this can happen; also not reproducible. It was solved by wiping the cache. Will add a check to the topology scripts for a corrupt repository and, in that case, delete it and check it out anew.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • Compilation failure due to missing symbols when compiling with -O0. Similar problem found by Matteo, being debugged. Sent a reproducer to AMD.
      • Internal compiler error with LOG(...) macro: we have a workaround, AMD has a reproducer, waiting for a fix.
      • New miscompilation for >ROCm 6.0
        • Waiting for AMD to fix the reproducer we provided.
      • ROCm 6.2 / ALMA 9.2
        • Building on the EPNs. Christian can use them for ML framework tests. Waiting for ROCm 6.2.1 to test O2.
      • Bumping GCC:
        • Bumping to GCC 13 needs undoing the Geant4 revert, bumping the GPU build container to a new custom container with CUDA 12.6 and again a custom ROCm compiler, and there is one remaining problem, not yet understood: ROCm fails to compile with GCC 13/14 headers on RHEL (while it works on my laptop).
        • For GCC 14.2 in the future, this also needs bumping ROOT, either fixing compile warnings in ONNXRuntime or bumping ONNXRuntime, bumping json-c (switching from autoconf to CMake), and waiting for a new CUDA > 12.6 supporting GCC 14.
          • Made all this and disabled CUDA support in a test on the EPN, so Christian can use GCC 14 for ML clustering tests.
          • On my laptop, hacked CUDA to use GCC 13 headers while compiling with GCC 14, which also works.
      • Bumping LLVM:
        • LLVM bumped to 17, Sergio now working to bump to 18.1 to fix the relocation issue (Anton has meanwhile fixed the problem with arrow for LLVM > 17).
      • GPU RTC on EPNs fixed, deploying an O2 version to test it at P2 today.

       

      TPC GPU Processing

      • Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of MC labels, so it confuses tracks of signal and of background events.
      • New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex. Ruben reported that he still sees such bogus values.
      • Status of cluster error parameterizations
        • No progress yet on newly requested debug streamers.
        • Waiting for TPC to check PR with full cluster errors during seeding.
      • TPC reported a problem with laser runs. In case of bad data (TPC data outside of the triggered drift time), GPUs can sometimes get stuck, so apparently the skipping of bad data is not fully working. Recorded some laser raw data to check.
        • Fully fixed. Actual crash came from incorrect handling of detected buffer overflows during creation of fast search grid in TPC tracking, which is fixed now.
        • Buffer overflows came from bogus values of the TPC transformation, moving clusters by 10^20 cm, leading to bogus fast search grids. Fixed this now by a temporary workaround to not apply any SCD correction > 100 cm. Should be reverted once TPC has a proper solution.
        • In addition there were failures from FPEs during the tracking when track parameters and cov matrix became inf / NaN, which is fixed by the same workaround for the TPC transform map.
      • Fixed bug that FMQ SHM message for occupancy map was allocated 4x too large (sizeof(int) vs sizeof(char) problem, and not 32hbf vs 128hbf confusion as originally anticipated).
      • Improved GPU / TPC configuration dump, now same syntax in the standalone test and in O2, making it easily comparable.
      • Added a mode to the standalone benchmark to impose the default O2 settings, for better reproducibility.
      • Problem in GPU code since "char" on aarch64 Linux is unsigned, not signed. Will probably move all of the GPU folder to using int8_t ... uint64_t. For now, Giulio switched the failing variables from "char" to "signed char" (see the sketch below).
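
      A small standalone illustration of the "char" signedness issue mentioned in the last item (standard C++, not the affected O2 GPU code): the signedness of plain "char" is implementation-defined, so code that stores negative values in it behaves differently on aarch64 Linux (unsigned) than on x86_64 (signed), while "signed char" or the fixed-width types are unambiguous.

        #include <cstdint>
        #include <cstdio>

        int main()
        {
          char c = -1;         // implementation-defined: holds -1 on x86_64, 255 on aarch64 Linux
          signed char sc = -1; // always -1: the minimal fix applied to the failing variables
          int8_t i8 = -1;      // fixed-width alternative the GPU code may migrate to

          // On aarch64 the first comparison is false because c holds 255.
          std::printf("char == -1: %d, signed char == -1: %d, int8_t == -1: %d\n",
                      c == -1, sc == -1, i8 == -1);
          return 0;
        }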

       

      TPC processing performance regression:

      • Final solution: merging transformation maps on the fly into a single flat object:
      • Temporary mitigation with RTC implemented:
        • Speeds up processing by 19% (O2/dev with the feature enabled vs. disabled).
        • We are now 13% slower than the software that ran in 2023 Pb-Pb.
        • Summing up the performance reductions from all commits for cluster errors (building charge average, building occupancy map, IFC errors, etc.) yields an 11.5% slowdown.
        • Only 1.5% slowdown by all other changes since 2023 Pb-Pb, probably not much we can do, except for further overall code optimization.
        • Still hoping that the final solution will gain another ~19% (by going from 2 maps to 1, as achieved by going from 3 maps to 2 in the mitigation).

       

      General GPU Processing

      • Pending problems with using GPU RTC at P2:
        • /tmp is inside the slurm container and wiped afterwards. Fixed by using /var/tmp.
        • RTC is started from one of the GPU processes, which has a NUMA pinning to one NUMA domain, thus it uses only half of the CPU cores. Need to extend the CPU pinning for the compilation subprocesses.
        • RTC compiles for the architectures of the original build, which is currently MI50/MI100, i.e. all nodes compile twice, which takes extra time. Need to add an option to select an architecture, and the topology generation must put in the setting for the MI50 / MI100 architectures.
        • AMD compiler leaves stale temp folders; this can be avoided by setting $TMPDIR (see the sketch after this list).
        • RTC compilation fails in an online run since headers (e.g. <cmath>) are not found - Not understood yet.
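
      A minimal sketch of the $TMPDIR workaround mentioned above, in plain POSIX C++; the directory path and function name are assumptions, not the actual O2 RTC launcher code. Setting TMPDIR before spawning the compiler keeps its temporary files out of the container-local /tmp and in a location we control and can clean up.

        #include <cstdlib>
        #include <stdexcept>

        // Point the RTC compiler at a temp directory outside the slurm container's
        // /tmp, so stale temp folders do not accumulate in the wrong place.
        void setupRtcTempDir(const char* dir = "/var/tmp/o2_gpu_rtc") // hypothetical path
        {
          if (::setenv("TMPDIR", dir, /*overwrite=*/1) != 0) {
            throw std::runtime_error("failed to set TMPDIR for RTC compilation");
          }
          // compiler subprocesses spawned after this point inherit TMPDIR
        }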

       

    • 11:20 11:25
      Following up JIRA tickets 5m
      Speaker: Ernst Hellbar (CERN)

      Overview of low-priority framework issues

       

      Updates

        • everything works as expected -> closed
      • Dropping incomplete errors at run stop
        • fix deployed with SW update on 12/08
          • error messages mostly gone
        • rare errors still occur regularly
          • mch-data-decoder
      Expected Lifetime::Timeframe data MCH/HBPACKETS/0 was not created for timeslice 47383 and might result in dropped timeframes
      Expected Lifetime::Timeframe data MCH/ERRORS/0 was not created for timeslice 20 and might result in dropped timeframes
          • from MCH QC task
      Dropping incomplete <matcher query: (and origin:MCH (and description:ORBITS (just startTime:$0 )))> Lifetime::timeframe data in slot 14 with timestamp 20 < 21 as it can never be completed.
      Dropping incomplete <matcher query: (and origin:MCH (and description:DIGITS (just startTime:$0 )))> Lifetime::timeframe data in slot 14 with timestamp 20 < 21 as it can never be completed.
      Missing <matcher query: (and origin:MCH (and description:HBPACKETS (just startTime:$0 )))> (lifetime:timeframe) while dropping incomplete data in slot 14 with timestamp 20 < 21.
          • usually happens for the last empty TFs on one or two EPNs
          • in some recent synthetic runs, it happened on many more EPNs (~40)
    • 11:25 11:30
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      Updates on NN performance

      • CF clusterizer, 21.6 mio. clusters

      • Classification (trained on MC) + CF regression, 16 mio. clusters

      • Classification (trained on real data, native clusters) + CF regression, 18.5 mio. clusters

      • Next subjects for study
        • Improve regression network (probably connected to training data)
        • Use one of the classification networks, vary cuts and observe impact on tracking efficiency
        • Automate the working point search (potentially via a hyperparameter optimization strategy)

       

      Updates on NN speed & implementation

      • With ROCm 6.2, ONNXRuntime now compiles for gfx906 (MI50) & gfx908 (MI100), and also with support for float16
      • Successfully imported and evaluated NNs in a standalone task (in O2) with ~19 mio. clusters/s (MI50) and ~25 mio. clusters/s (MI100)
        • MI100 should be ~8x faster than MI50 for FP16 -> We are "scheduling"-bound!
      • session->Run() can currently only run with a max. of 3910 tensors of size 346 (= 7x7x7 + 3) in FP16: potentially limited by GPU memory page size
        • Increased performance by spawning multiple sessions (~30) and using multithreading to load the GPU (see the sketch below)
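
      A sketch of the multi-session, multi-threaded evaluation pattern described above, using the ONNXRuntime C++ API; the model path, tensor names, and execution-provider setup are assumptions (and float32 is used here instead of FP16 for brevity). Each worker thread owns one Ort::Session and feeds it batches of cluster candidates.

        #include <onnxruntime_cxx_api.h>

        #include <array>
        #include <thread>
        #include <vector>

        int main()
        {
          constexpr int kNumSessions = 30;    // ~30 sessions as mentioned above
          constexpr int64_t kBatch = 3910;    // max. tensors per Run() observed so far
          constexpr int64_t kInputSize = 346; // 7x7x7 charge window + 3 extra features

          Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "tpc-nn");
          Ort::SessionOptions opts; // GPU execution-provider setup omitted here

          // One session per worker thread, all loading the same (assumed) model file.
          std::vector<Ort::Session> sessions;
          sessions.reserve(kNumSessions);
          for (int i = 0; i < kNumSessions; ++i) {
            sessions.emplace_back(env, "clusterizer.onnx", opts);
          }

          auto memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
          const char* inputNames[] = {"input"};   // assumed tensor names
          const char* outputNames[] = {"output"};

          std::vector<std::thread> workers;
          for (int i = 0; i < kNumSessions; ++i) {
            workers.emplace_back([&, i] {
              // Each thread evaluates its own batch with its own session.
              std::vector<float> data(kBatch * kInputSize, 0.f);
              std::array<int64_t, 2> shape{kBatch, kInputSize};
              Ort::Value input = Ort::Value::CreateTensor<float>(
                memInfo, data.data(), data.size(), shape.data(), shape.size());
              auto out = sessions[i].Run(Ort::RunOptions{nullptr}, inputNames, &input, 1,
                                         outputNames, 1);
              (void)out;
            });
          }
          for (auto& w : workers) {
            w.join();
          }
          return 0;
        }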

       

      • CPU implementation (top is ONNX, bottom is standard reco)


    • 11:30 11:35
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
      • ITS GPU tracking
        • When running with more than a handful of threads, HIP "loses" its deterministic nature in ITS tracking, while CUDA maintains it.
        • Upgraded to ROCm 6.2 -> cannot run HIP anymore (runtime / target-architecture problem; it is not the first time).
        • Using the GPU workflows still gives slightly different results compared with native GPU tracking; this is genuine and depends on some difference in the configuration, to be understood.
    • 11:35 11:55
      TPC Track Model Decoding on GPU 20m
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))