Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Videoconference: ALICE GPU Meeting
Zoom Meeting ID: 61230224927
Host: David Rohr
    • 11:00 11:20
      Discussion 20m
      Speakers: David Rohr (CERN), Ole Schmidt (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority RC YETS issues:

      • Fix dropping lifetime::timeframe for good: No news
        • Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
      • Expendable tasks in QC. Everything merged on our side.
        • Everything fixed and deployed on staging; EPN did not yet deploy their topology merger fixes to production. RC wants better reporting of failed tasks from ECS before it is used in production. Nothing more to do on our side for now.
      • Start / Stop / Start:
        • Problems in readout and QC fixed. Now 3 new problems, at least 2 on our side:
          • GPU multi-thread pipeline gets stuck after restart. Should be trivial to fix. https://its.cern.ch/jira/browse/O2-4638 FIXED
          • Some processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC-ITS matching QC crashes when accessing CCDB objects. Not clear whether this is the same problem as above or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308
        • Work in progress, partial PR open.
      • Problem with bogus oldestPossible messages coming from colliding QC timers: Fixed
      • Problem with the FIT workflow and a single EPN causing backpressure hopefully fixed by improving the metric-feedback mechanism.
        • Since this week, with the fix in, we see workflows with a large number of time frames in flight getting stuck.
          • Though it is not clear whether this is related to the fix at all.
        • As a workaround, limited the max-TF-in-flight variable to be smaller than DPL_PIPELINE_LENGTH.
        • Will create a JIRA ticket, so Giulio can try to reproduce in staging and check.
      • Fix problem with ccdb-populator: no idea yet, no ETA.

       

      High priority framework topics:

      • See YETS issues

       

      Other framework tickets:

      • TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
      • Grafana metrics: might want to introduce additional rate metrics that subtract the header overhead to show the pure payload rate: low priority.
      • Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
      • Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
      • https://alice.its.cern.ch/jira/browse/O2-1900 : FIX in PR, but has side effects which must also be fixed.
      • https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
      • https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
      • https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
      • https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
      • https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it is only at the termination, and the fix of the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
      • Support in DPL GUI to send individual START and STOP commands.
      • The problem mentioned last time with non-critical QC tasks and the DPL CCDB fetcher is real. It will need some extra work to solve; otherwise non-critical QC tasks will stall the DPL chain when they fail.
      • DPL sending SHM metrics for all devices, not only input proxy: https://alice.its.cern.ch/jira/browse/O2-4234
      • Some improvements to ease debugging: https://alice.its.cern.ch/jira/browse/O2-4196 https://alice.its.cern.ch/jira/browse/O2-4195 https://alice.its.cern.ch/jira/browse/O2-4166
      • We urgently need a cleanup session to go through all these pending DPL tickets with higher priority and finally reduce the backlog.

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

       

      Sync processing

      • Proposal to parse InfoLogger message and alert automatically: https://alice.its.cern.ch/jira/browse/R3C-992
      • Saw crashes in pp replay: corrupt CCDB objects, but also general corruption in SHM.
        • Was due to ITS raw data corruption that was not handled correctly and led to memory corruption - fixed.

       

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
        • Checking two things: does the situation get better without GPU monitoring? --> Inconclusive
        • We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • Merged Gabriele's PR for GPU TPC track model decoding.
      • Ruben saw a memory increase that makes his workflow go OOM.
        • Partly caused by a regression in Giulio's change for the metric-feedback - fixed now.
        • Also the new GPU TPC track model decoding causes a memory increase. Added an option to run the old version instead for now.
        • But still goes OOM. Needs more checks.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or for ongoing async jobs.
          • We must not lose the information which nodes are marked as bad while moving them.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023, status?
      • Improve DataDistribution file replay performance: currently we cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPNs in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes (see the sketch after this list). Now we are driving the MI50 nodes at 100% capacity with backpressure, and only the backpressured TFs then go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Need fix for XML merging for topologies with expendable tasks: Done
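
      A possible shape of such a utilization-aware assignment, purely as an illustration (NodeState, freeBufferFraction, capacityWeight and pickTargetNode are hypothetical names, not the DataDistribution API):

      #include <cstddef>
      #include <vector>

      // Hypothetical per-EPN bookkeeping; not the actual DataDistribution interface.
      struct NodeState {
        double freeBufferFraction; // 0.0 = buffers full, 1.0 = buffers empty
        double capacityWeight;     // e.g. larger for MI100 than for MI50 nodes
      };

      // Pick the node with the largest weighted free buffer instead of plain
      // round-robin, so faster nodes with more headroom receive more TFs.
      std::size_t pickTargetNode(const std::vector<NodeState>& nodes)
      {
        std::size_t best = 0;
        double bestScore = -1.0;
        for (std::size_t i = 0; i < nodes.size(); ++i) {
          const double score = nodes[i].freeBufferFraction * nodes[i].capacityWeight;
          if (score > bestScore) {
            bestScore = score;
            best = i;
          }
        }
        return best;
      }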

       

      Other EPN topics:

       

      Raw decoding checks:

      • Add an additional check on the DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit (see the sketch below).
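
      A minimal sketch of such a consistency check (illustrative only; the container and the error handling are not the actual DPL code):

      #include <cstdint>
      #include <optional>
      #include <vector>

      // Return the common firstOrbit if all detectors reported the same value,
      // or std::nullopt if any detector disagrees (caller should raise an error).
      std::optional<uint32_t> commonFirstOrbit(const std::vector<uint32_t>& firstOrbits)
      {
        if (firstOrbits.empty()) {
          return std::nullopt;
        }
        const uint32_t reference = firstOrbits.front();
        for (const uint32_t orbit : firstOrbits) {
          if (orbit != reference) {
            return std::nullopt; // mismatch between detectors
          }
        }
        return reference;
      }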

       

      Full system test issues:

      Topology generation:

      • Should test deploying the topology with the DPL driver, to have the remote GUI available.
        • The DPL driver needs to implement the FMQ state machine. Postponed until the YETS issues are solved.

       

      QC / Monitoring / InfoLogger updates:

      • TPC has opened a first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side; the plan is to extend this to all detectors and to also include trending for raw data sizes.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
      • Found a new miscompilation with -ffast-math enabled in the looper following; disabled -ffast-math for now.
      • Must create new minimal reproducer for compile error when we enable LOG(...) functionality in the HIP code. Check whether this is a bug in our code or in ROCm. Lubos will work on this.
      • Another compiler problem with template treatment was found by Ruben. We have a workaround for now; need to create a minimal reproducer and file a bug report.
      • While debugging the calibration, debug output triggered another internal compiler error in the HIP compiler. Not a problem for now, since it happened only with temporary debug code, but we should still report it to AMD so they can fix it.
      • Now that we have the deterministic tracking, this can hopefully help AMD in debugging.
      • Checked deterministic mode on AMD GPUs, and we get the exact same results as with CPU / CUDA --> at least there is no miscompilation in the current version that shows up only in rare cases and would have been hidden by concurrency before.
      • AMD asked to check with new minor version ROCm 6.0.2, EPN provided test nodes this morning.

       

      TPC GPU Processing

      • Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of signal and background events (see the sketch after this list).
      • New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex.
      • All features requested by TPC were implemented on Monday.
        • Was in O2/dev, not in production branch. Took a day to port everything, now PR available with everything backported to stable branch.
        • Since then, Marian spotted a bug in the error formula and a misunderstanding between us about whether to use sqrt(qMax) - both fixed, also in the cherry-pick PR to production.
        • Yesterday, Marian provided new pseudo-code for additional errors on the C side for sectors 1 and 2 - to be implemented.
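
      For illustration of the sourceID issue above: o2::MCCompLabel carries a sourceID next to the event and track IDs, so the QC task could select labels from the signal source before filling histograms. The helper below is a hypothetical sketch, not the existing QC code; only the MCCompLabel accessors are from O2:

      #include "SimulationDataFormat/MCCompLabel.h"
      #include <vector>

      // Hypothetical helper: keep only labels from the signal source, so that
      // embedded background tracks do not enter the signal QC histograms.
      std::vector<o2::MCCompLabel> filterSignalLabels(const std::vector<o2::MCCompLabel>& labels,
                                                      int signalSourceID)
      {
        std::vector<o2::MCCompLabel> selected;
        for (const auto& label : labels) {
          if (label.isValid() && label.getSourceID() == signalSourceID) {
            selected.push_back(label);
          }
        }
        return selected;
      }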

       

      General GPU Processing

      • Consistency between CPU and GPU processing status:
        • Trying to get fully deterministic tracking with GPUCA_NO_FAST_MATH + additional debug options, which will introduce many intermediate sorting steps.
          • Fully deterministic GPU tracking now available. Did not find additional bugs compared to the ones previously reported, but remaining differences were due to sorting issues / real concurrency.
          • In order to use it, set the CMake option GPUCA_NO_FAST_MATH and the configKeyValue deterministicGPUReconstruction=1.
      • Started work to make O2 propagator easily usable in ITS tracking, which is not part of the GPU reconstruction library:
        • O2 propagator on GPU now available to external libraries - tested with ITS tracking. The only requirement is to link against a CMake object library, which will set up everything using static objects.
        • Matteo spotted two bugs with the propagator, in nominal bz field initialization and rotation (both not used by TPC) - fixed.
        • There is still the problem that the magnetic field cannot be set for ITS/TPC/TRD; need to check how to do it. But currently ITS uses only the constant nominal field, so it doesn't matter.
      • Started to port features from CUDA to HIP using hipify; it seems to work so far, at least some of the files can already be auto-translated.

    • 11:20 11:25
      TRD Tracking 5m
      Speaker: Ole Schmidt (CERN)
    • 11:25 11:30
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      PyTorch

      Tested several possibilities for installing PyTorch

      • The Python libraries use pre_cxx11 ABI builds and are hence not compatible (this leads to undefined references for certain libraries when linking; see the check sketched below)
      • Abandoned the python-install approach on Monday -> switched to the pre-built cxx11-ABI binaries from the PyTorch website. Modified the alidist recipe and achieved a build on my private machine (CPU-only version: small and very fast)
      • Tried the GPU version, but the Internet in SGP is extremely slow -> have to download the binaries at CERN and try tonight
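
      The incompatibility is the libstdc++ dual ABI: the pip wheels ship libtorch built with _GLIBCXX_USE_CXX11_ABI=0, while the rest of the stack uses the cxx11 ABI, so std::string-based symbols do not resolve at link time. A minimal, purely illustrative compile-time guard (not part of the alidist recipe) that makes such a mismatch visible early:

      // Illustrative guard: fail at compile time if this translation unit is built
      // with the old (pre-cxx11) std::string ABI, which would not match a cxx11-ABI
      // libtorch / O2 build and would surface later as undefined references.
      #include <string>

      #if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI == 0
      #error "Build with the cxx11 ABI (_GLIBCXX_USE_CXX11_ABI=1) to match libtorch and O2"
      #endif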

       

      • Needs #undef of conflicting macros (e.g. ROOT's ClassDef) before including the PyTorch headers:
      // undefine macros that clash with identifiers used in the PyTorch headers
      #ifdef ClassDef
      #undef ClassDef
      #endif

      #ifdef TreeRef
      #undef TreeRef
      #endif

       

      • Now testing on macOS

      QA

      Developing a secondary ROOT script which runs after the QA task to produce some simple QA output (makes life easier)

      • After tuning the network training data, the position resolution matches the native clusterizer

      • Investigated Q_max dE/dx plot and can confirm visually identical behaviour

      • Found a bug in the sigma estimation of the ideal clusterizer -> needs fixing (will do it today, using Welford's algorithm for iterative sigma calculation; see the sketch below)
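
      As a reference for the iterative sigma calculation mentioned above, a small generic sketch of Welford's online algorithm (not tied to the clusterizer code):

      #include <cmath>
      #include <cstdint>

      // Welford's online algorithm: numerically stable running mean and sigma,
      // updated one sample at a time without storing all values.
      struct RunningStats {
        std::uint64_t n = 0;
        double mean = 0.0;
        double m2 = 0.0; // sum of squared deviations from the current mean

        void add(double x)
        {
          ++n;
          const double delta = x - mean;
          mean += delta / static_cast<double>(n);
          m2 += delta * (x - mean);
        }

        double variance() const { return n > 1 ? m2 / static_cast<double>(n) : 0.0; } // population variance
        double sigma() const { return std::sqrt(variance()); }
      };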

       

    • 11:30 11:35
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
      • Testing O2::Propagator in ITS GPU tracking
        • Following the "deterministic" approach to ensure bit-by-bit compatibility between CPU and GPU results.
        • Apart from a bug in non-production code, there are still some cases where results slightly diverge, even though the involved arithmetic is really trivial (https://github.com/AliceO2Group/AliceO2/blob/902f2db1417c6770f950d252a5f324d1304bb14b/Detectors/Base/include/DetectorsBase/Ray.h#L126).
        • This propagates down to material budget estimation -> material correction -> track covariance matrix 


    • 11:35 11:55
      TPC Track Model Decoding on GPU 20m
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))
      • Tuned block and grid size for decoding kernels
      • GPUMemCpy for attached and unattached clusters in two separate streams
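
      A generic sketch of the two-stream copy pattern (host-side C++ using plain CUDA runtime calls for illustration; the actual code goes through the framework's GPUMemCpy wrapper, and the pointers, sizes and copy direction below are placeholders):

      #include <cuda_runtime.h>
      #include <cstddef>

      // Issue the copies of attached and unattached clusters in two independent
      // streams so the transfers can overlap instead of running sequentially.
      void copyClustersAsync(void* dstAttached, const void* srcAttached, std::size_t sizeAttached,
                             void* dstUnattached, const void* srcUnattached, std::size_t sizeUnattached,
                             cudaMemcpyKind kind)
      {
        cudaStream_t streamA, streamB;
        cudaStreamCreate(&streamA);
        cudaStreamCreate(&streamB);

        cudaMemcpyAsync(dstAttached, srcAttached, sizeAttached, kind, streamA);
        cudaMemcpyAsync(dstUnattached, srcUnattached, sizeUnattached, kind, streamB);

        // Wait for both transfers before using the data.
        cudaStreamSynchronize(streamA);
        cudaStreamSynchronize(streamB);
        cudaStreamDestroy(streamA);
        cudaStreamDestroy(streamB);
      }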