Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Timezone: Europe/Zurich
Zoom Meeting ID: 61230224927
Host: David Rohr
    • 1
      Discussion
      Speakers: David Rohr (CERN), Ole Schmidt (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority Framework issues:

      • Fix dropping lifetime::timeframe for good: Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
        • Checked with Ernst, we still get such errors at EOR, but these are false errors. Giulio has suppressed them in O2/dev. Need to wait for SW update at P2.
        • Giulio: Related problem with multi-output proxy - to be checked - Status?
      • Start / Stop / Start: 2 problems on O2 side left:
          • All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC ITS matching QC crashing when accessing CCDB objects. Not clear if it is the same problem as above or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Still WiP.
      • Fix problem with ccdb-populator: no idea yet; since Ole left, someone else will have to take care of it.

       

      Other framework tickets:

      • Will from now on be followed up in separate minutes.
      • We need to make progress with these tickets at some point...
        • Ernst will start following these up and create a mother ticket in JIRA for this.
      • TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
        • Last updates 1 year ago; unsure about the history of tof-compressor (Ernst).
        • Unable to reproduce the issue (all but the first TF being dropped as incomplete) locally with a rawTF from a TOF run.
        • First look at FST output on the EPN shows no errors.
          • More cross-checks (e.g. FST env variables) to be done, but it looks like this resolved itself.
      • Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
      • Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
      • Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
      • https://alice.its.cern.ch/jira/browse/O2-1900 : FIX in PR, but has side effects which must also be fixed.
      • https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
      • https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
      • https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
      • https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
      • https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • https://its.cern.ch/jira/browse/O2-4759: Run getting stuck when too many TFs are in flight.
      • https://its.cern.ch/jira/browse/O2-4234: Reduce obsolete DPL metrics
      • https://its.cern.ch/jira/browse/O2-4860: Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined.
      • Support in DPL GUI to send individual START and STOP commands.
      • DPL sending SHM metrics for all devices, not only input proxy: https://alice.its.cern.ch/jira/browse/O2-4234
      • Some improvements to ease debugging: https://alice.its.cern.ch/jira/browse/O2-4196 https://alice.its.cern.ch/jira/browse/O2-4195 https://alice.its.cern.ch/jira/browse/O2-4166
      • Add an additional check on the DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit (see the sketch after this list).
      • Implement a proper solution to detect whether a device is firstInChain: https://its.cern.ch/jira/browse/O2-4999
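
      A minimal sketch of the proposed firstOrbit consistency check, assuming a simplified detector-info structure; the names (DetectorInfo, buildTfFirstOrbit) are illustrative, not the actual DPL API:

      ```cpp
      #include <cstdint>
      #include <optional>
      #include <stdexcept>
      #include <string>
      #include <vector>

      struct DetectorInfo {
        std::string name;    // detector name, e.g. "TPC"
        uint32_t firstOrbit; // firstOrbit reported by this detector
      };

      // Returns the common firstOrbit, throwing if any detector disagrees.
      uint32_t buildTfFirstOrbit(const std::vector<DetectorInfo>& detectors)
      {
        std::optional<uint32_t> ref;
        for (const auto& det : detectors) {
          if (!ref) {
            ref = det.firstOrbit; // first detector sets the reference
          } else if (det.firstOrbit != *ref) {
            throw std::runtime_error("firstOrbit mismatch for " + det.name + ": " +
                                     std::to_string(det.firstOrbit) + " != " + std::to_string(*ref));
          }
        }
        if (!ref) {
          throw std::runtime_error("no detector provided a firstOrbit");
        }
        return *ref;
      }
      ```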

       

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

       

      Sync reconstruction

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
        • does the situation get better without GPU monitoring? --> Inconclusive
        • Increased GPU process priority can be used as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on the EPNs). Enabled multi-threading as recommended by Matthias; need to check if it works.
      • Again problems accessing CCDB for me: JAlien-ROOT / libwebsockets getting stuck on my laptop.
        • Understood and fixed:
          • 1st problem: The Orange DNS server has problems replying via TCP, which was necessary due to the long list of alien hosts. I stopped using that bogus server myself; to help other people in France, Costin has shortened the list of active servers to fit in the 512-byte UDP DNS reply (see the sketch after this list).
          • 2nd problem: If the system libwebsockets was built without libuv support, JAlien still uses it but just gets stuck, since the socket callbacks are never called. Opened a PR to JAlien adding a check for libuv support.
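
      For context on the first problem, a small illustration of the DNS constraint involved: a plain DNS reply over UDP is limited to 512 bytes (without EDNS0), so an oversized answer must be truncated with the TC bit set, and the client then has to retry over TCP, which the server in question failed to answer. The sketch below shows only the size rule; real servers truncate at record boundaries:

      ```cpp
      #include <cstddef>
      #include <cstdint>
      #include <utility>
      #include <vector>

      constexpr std::size_t kMaxUdpDnsReply = 512; // RFC 1035 limit without EDNS0

      struct DnsReply {
        std::vector<uint8_t> payload;
        bool truncated = false; // corresponds to the TC bit in the DNS header
      };

      // Server-side rule: answers exceeding 512 bytes cannot be sent whole
      // over UDP; they are cut down and flagged so the client retries over TCP.
      DnsReply prepareUdpReply(std::vector<uint8_t> fullAnswer)
      {
        DnsReply reply;
        if (fullAnswer.size() > kMaxUdpDnsReply) {
          fullAnswer.resize(kMaxUdpDnsReply); // real servers cut at record boundaries
          reply.truncated = true;             // client must fall back to TCP
        }
        reply.payload = std::move(fullAnswer);
        return reply;
      }
      ```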

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or ongoing async jobs.
          • We must not lose which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023. Status?
      • Improve DataDistribution file replay performance, currently cannot do faster than 0.8 Hz, cannot test MI100 EPN in Pb-Pb at nominal rate, and cannot test pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to MI100 nodes (see the sketch after this list). Now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
      • Improve core dumps: https://its.cern.ch/jira/browse/EPN-487
        • A GPU process crashed tonight, but I cannot find the core dump. I believe what happened is that ODC took down the collection, killing all processes, which all produced core dumps, and only the last dumps were kept.
        • Changes to suppress core dumps for SIGABRT seem to work; the last incident yielded core dumps. Waiting to understand the other minor issues before closing.
      • Tentative time for ALMA9 deployment: December 2024.
        • First test node with ALMA9 / ROCm 6.2 installed.
        • Idea is to make sure everything works on that node, create a separate build container to have new RPMs in parallel to test in staging, so we can easily switch.
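
      A minimal sketch of the buffer-utilization-based distribution idea from EPN-397, picking the TfBuilder with the largest weighted free-buffer fraction instead of going round-robin. NodeInfo, pickTarget, and the MI100 weight factor are illustrative assumptions, not the actual DataDistribution interface:

      ```cpp
      #include <cstddef>
      #include <vector>

      struct NodeInfo {
        std::size_t bufferTotal; // SHM buffer size in bytes
        std::size_t bufferUsed;  // currently occupied bytes
        bool isMI100;            // faster node that should attract more TFs
      };

      // Pick the TfBuilder with the largest (weighted) free-buffer fraction.
      std::size_t pickTarget(const std::vector<NodeInfo>& nodes, double mi100Weight = 1.5)
      {
        std::size_t best = 0;
        double bestScore = -1.0;
        for (std::size_t i = 0; i < nodes.size(); ++i) {
          const auto& n = nodes[i];
          double freeFraction = 1.0 - double(n.bufferUsed) / double(n.bufferTotal);
          double score = freeFraction * (n.isMI100 ? mi100Weight : 1.0);
          if (score > bestScore) {
            bestScore = score;
            best = i;
          }
        }
        return best; // index of the node that should receive the next TF
      }
      ```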

       

      Other EPN topics:

       

      Full system test issues:

      Topology generation:

      • Should test to deploy topology with DPL driver, to have the remote GUI available.
        • DPL driver needs to implement FMQ state machine. Postponed until YETS issues solved.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • Compilation failure due to missing symbols when compiling with -O0. Similar problem found by Matteo, being debugged. Sent a reproducer to AMD.
      • Internal compiler error with LOG(...) macro: we have a workaround, AMD has a reproducer, waiting for a fix.
      • New miscompilation for ROCm 6.1
        • Found a workaround for us.
        • Provided a standalone reproducer to AMD.
        • Fix will certainly not make it into ROCm 6.2.1, but hopefully into 6.3, so that this won't hit us again.
      • ROCm 6.2
        • Was released last weekend. Unfortunately, a workaround we need was removed while the fix didn't make it into the release, so we cannot use this release (or we'd have to port the workaround and compile the ROCm LLVM ourselves).
        • Waiting for 6.2.1, which will hopefully contain the fix.
      • EPNs prepared test nodes with ALMA9 / ROCm 6.2.
        • We cannot use them for O2 with GPU yet, but want to make sure everything builds, test new compilers, and Christian can check if that solves his ML framework issues.
      • Bumping GCC:
        • PR to bump to GCC 13 does not pass CI: https://github.com/alisw/alidist/pull/5541
          • Giulio mentioned a GPU/CMake issue, but the CI builders fail with other reasons.
        • Locally on my laptop, I build O2 with GCC 13 and 14.
        • GCC 14 should fix some bugs in the ML frameworks relevant for Christian.
          • Current CUDA (12.5) is not compatible with GCC 14 yet; on my laptop I had to hack some things and pull in GCC 13 headers when compiling CUDA code. Not sure we want this.
          • But once we get GCC 13 working, it would be good to prepare the GCC 14 bump already, such that it passes all CIs but the GPU CI. I'll see if I find a good workaround, otherwise it can be used locally until CUDA support is there.
      • Bumping LLVM:
        • Prerequisite is to bump arrow: https://github.com/alisw/alidist/pull/5439
          • Currently still some issues, some are only in the CI.
        • LLVM18 does not work for aarch64 yet. LLVM17 misses some fixes for OpenCL compilation.
        • We can either use LLVM18 only for o2-epn defaults, or for all x86 builds, or we have to backport the fix to LLVM17 into our build.

       

      TPC GPU Processing

      • Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of MC labels, so it confuses tracks of signal and background events (see the sketch after this list).
      • New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex. Ruben reported that he still sees such bogus values.
      • Status of cluster error parameterizations
        • No progress yet on newly requested debug streamers.
        • Waiting for TPC to check PR with full cluster errors during seeding.
      • TPC reported a problem with laser runs. In case of bad data (TPC data outside of the triggered drift time), GPUs can sometimes get stuck, so apparently the skipping of bad data is not fully working. Recorded some laser raw data to check.
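
      To illustrate the embedding bug: matching MC labels by trackID and eventID alone is ambiguous once signal and background events are embedded, because both sources can reuse the same IDs. The struct below is a simplified stand-in for the O2 MC label; the point is that the comparison must include the sourceID:

      ```cpp
      #include <tuple>

      // Simplified stand-in for an O2 MC label in an embedding setup.
      struct MCLabel {
        int trackID;
        int eventID;
        int sourceID; // e.g. 0 = background, 1 = signal
      };

      // Buggy comparison: a signal track matches a background track that
      // happens to reuse the same trackID/eventID.
      inline bool sameTrackBuggy(const MCLabel& a, const MCLabel& b)
      {
        return a.trackID == b.trackID && a.eventID == b.eventID;
      }

      // Correct comparison: the full (sourceID, eventID, trackID) triplet.
      inline bool sameTrack(const MCLabel& a, const MCLabel& b)
      {
        return std::tie(a.sourceID, a.eventID, a.trackID) ==
               std::tie(b.sourceID, b.eventID, b.trackID);
      }
      ```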

       

      TPC processing performance regression:

      • Final solution: merging transformation maps on the fly into a single flat object (see the sketch after this list):
        • Ruben has created a PR with a first version of the flat object. It still needs to be adapted when the map format changes, and its GPU performance needs to be checked.
        • Discussion with Sergey in TPC SCD meeting today how to proceed.
      • Temporary mitigation with RTC:
        • Still stuck due to GPU RTC problems.
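
      A hedged sketch of the merge-on-the-fly idea: sample the sum of the static reference map and the residual correction on a regular grid and store it as a single flat, pointer-free array that a GPU kernel can index directly. The grid layout and map interfaces are assumptions, not the actual TPC fast-transform code:

      ```cpp
      #include <cstddef>
      #include <functional>
      #include <vector>

      // Single flat, pointer-free object that can be copied to the GPU as-is.
      struct FlatMap {
        std::size_t nx = 0, ny = 0;
        std::vector<float> corr; // row-major grid of merged corrections
        float at(std::size_t ix, std::size_t iy) const { return corr[iy * nx + ix]; }
      };

      // Merge the reference map and the residual correction by sampling
      // their sum on a regular grid (nx, ny >= 2 assumed).
      FlatMap mergeMaps(std::size_t nx, std::size_t ny,
                        const std::function<float(float, float)>& refMap,
                        const std::function<float(float, float)>& residual)
      {
        FlatMap out;
        out.nx = nx;
        out.ny = ny;
        out.corr.resize(nx * ny);
        for (std::size_t iy = 0; iy < ny; ++iy) {
          for (std::size_t ix = 0; ix < nx; ++ix) {
            float x = float(ix) / float(nx - 1); // normalized grid coordinate
            float y = float(iy) / float(ny - 1);
            out.corr[iy * nx + ix] = refMap(x, y) + residual(x, y);
          }
        }
        return out;
      }
      ```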

       

      General GPU Processing

      • Pending problems with using GPU RTC at P2:
        • /tmp is inside the slurm container and wiped afterwards. Fixed by using /var/tmp.
        • RTC is started from one of the GPU processes, which has a NUMA pinning to one NUMA domain, thus it uses only half of the CPU cores. Need to extend the CPU pinning for the compilation subprocesses.
        • RTC compiles for the architectures of the original build, which is currently MI50/MI100, i.e. all nodes compile twice, which takes extra time. Need to add an option to select an architecture, and the topology generation must put in the setting for MI50 / MI100 architectures.
        • AMD compiler leaves stale temp folders; can avoid this by setting $TMPDIR (see the sketch after this list).
        • RTC compilation fails in an online run since headers (e.g. <cmath>) are not found - Not understood yet.
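
      A small sketch of the temp-directory handling implied by the points above: honor $TMPDIR if set (which also steers where the AMD compiler puts its scratch files) and fall back to /var/tmp, which survives the slurm container, instead of /tmp. mkdtemp is POSIX; the function name is illustrative:

      ```cpp
      #include <stdexcept>
      #include <stdlib.h> // mkdtemp (POSIX), getenv
      #include <string>

      // Create a scratch directory for RTC, preferring $TMPDIR and falling
      // back to /var/tmp (which, unlike /tmp, survives the slurm container).
      std::string makeRtcScratchDir()
      {
        const char* base = getenv("TMPDIR"); // also read by the AMD compiler
        std::string templ = std::string(base ? base : "/var/tmp") + "/o2rtc_XXXXXX";
        if (mkdtemp(templ.data()) == nullptr) {
          throw std::runtime_error("cannot create RTC scratch dir " + templ);
        }
        return templ; // caller should remove the directory when done
      }
      ```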

       

      Oncalls:

      • All weeks covered this year.
    • 2
      Following up JIRA tickets
      Speaker: Ernst Hellbar (CERN)

      Overview of low-priority framework issues

    • 3
      TPC ML Clustering
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
    • 4
      ITS Tracking
      Speaker: Matteo Concas (CERN)
    • 5
      TPC Track Model Decoding on GPU
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))
    • 6
      Sym Matrices on GPU
      Speaker: Matteo Concas (CERN)