Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Timezone: Europe/Zurich
Zoom Meeting ID: 61230224927
Host: David Rohr
    • 10:00 AM - 10:20 AM
      Discussion 20m
      Speakers: David Rohr (CERN), Giulio Eulisse (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority Framework issues:

      • Start / Stop / Start: 2 problems on O2 side left:
          • All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints at FairMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC-ITS matching QC crashes when accessing CCDB objects. Not clear whether it is the same problem as above or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Tentative fix by Giulio available, will test it this week?
      • Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
      • TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?

       

      Sync reconstruction

      • Waiting for RC to test COSMIC replay data set.
      • Waiting for RC to test STOP timeout impact.

       

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
        • does the situation get better without GPU monitoring? --> Inconclusive
        • Increased GPU process priority can be used as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on the EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.

       

      EPN major topics:

      • ALMA9 / ROCm 6.x deployment on hold, until AMD manages to fix the random server reboots or we find a workaround.
      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes, and there should be no impact on ongoing runs at P2 or on ongoing async jobs.
          • We must not lose the information about which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
      • Improve DataDistribution file replay performance: currently it cannot go faster than 0.8 Hz, so we cannot test the MI100 EPNs in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to distribute based on buffer utilization and give more data to the MI100 nodes (a minimal sketch of such a scheme follows after this list). Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is already a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
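
      As mentioned above for the DataDistribution scheduling, a minimal illustration of buffer-utilization-based TF assignment (a hypothetical sketch, not the actual DataDistribution code): the next TF goes to the node with the lowest relative buffer usage, so MI100 nodes, which drain their buffers faster, automatically receive more data.

      #include <cstddef>
      #include <vector>

      // Hypothetical sketch of buffer-utilization-based TF scheduling
      // (illustration only, not the actual DataDistribution code).
      struct NodeState {
          std::size_t bufferUsed;      // bytes currently buffered on this EPN
          std::size_t bufferCapacity;  // total SHM buffer size of this EPN (assumed > 0)
      };

      // Pick the node with the lowest relative buffer utilization instead of
      // distributing TFs round-robin. Faster nodes (e.g. MI100) keep their
      // buffers emptier and therefore naturally get more TFs.
      std::size_t pickTarget(const std::vector<NodeState>& nodes)
      {
          std::size_t best = 0;
          double bestUtil = 2.0;  // utilization is always <= 1.0
          for (std::size_t i = 0; i < nodes.size(); ++i) {
              const double util = static_cast<double>(nodes[i].bufferUsed) / nodes[i].bufferCapacity;
              if (util < bestUtil) {
                  bestUtil = util;
                  best = i;
              }
          }
          return best;
      }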

       

      Other EPN topics:

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • List of important issues with AMD:
        • Random server reboots on MI100: Tried several workarounds, but no solution found so far. Giada spotted some weird FairMQ problems in the large-scale test, which could be due to memory corruption.
        • Random crashes on MI100 due to a memory error; can be worked around by serializing all kernels and DMA transfers, which causes a ~20% performance degradation.
        • Miscompilation leading to crashes, worked around by changing our code, but compiler bug still there.
        • Provide an RPM ROCm version with all fixes, so that we don't need to compile clang manually with custom patches.
        • Proper way to enable amdgpu-function-calls instead of hacking AMD scripts and binaries.
        • hipHostRegister has become very slow when more than 1 GPU visible (via ROCR_VISIBLE_DEVICES).
      • New engineer assigned to us, Damon will return in March.
        • Provided instructions how to run the standalone benchmark again.
        • EPN is setting up 6 servers, 3 for ROCm 6.3, 3 for 6.2. In the end we aim for 6.3, but the problem should be easier to debug with 6.2.
      • Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code.
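
      Regarding the last point, a minimal illustration of the issue and of one possible mitigation via internal linkage (a hedged sketch, not the O2 code; whether this fits our build setup still needs to be checked):

      // Hypothetical illustration of the __device__ inline symbol issue (not the O2 code).
      // When such a header is also seen by a host compilation pass, the inline function
      // can end up as a weak symbol exported from the host objects.
      __device__ inline float deviceHelper(float x) { return x * x; }

      // Possible mitigation: internal linkage (static or an anonymous namespace),
      // so no symbol leaks out of the translation units that include the header.
      __device__ static float deviceHelperInternal(float x) { return x * x; }

      __global__ void exampleKernel(float* out, const float* in, int n)
      {
          const int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) {
              out[i] = deviceHelperInternal(in[i]);
          }
      }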

       

      TPC GPU Processing 

      • WIP: Use alignas() or find a better solution to fix the alignment of Monte Carlo labels: https://its.cern.ch/jira/browse/O2-5314
      • Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
      • Waiting for TPC to check PR which uses full cluster including average charge and occupancy map errors during seeding.
      • Added protection that system headers cannot be included in GPU device code, which was the cause of the RTC compilation failures in November that were not seen in the CI.
      • Added option requested by TPC to reject clusters for dEdx based on a cluster flag mask (a minimal sketch of such a mask-based rejection follows after this list).
      • With POCL trunk and Clang 19, managed to run our OpenCL 2 code for the first time. After some fixes, full slice tracking with OpenCL 2 was working with exactly the same results as OpenCL 1 / CPU, thus OpenCL 2 now supersedes the OpenCL 1 code.
      • With OpenCL 1 no longer needed, and since I have not done any tests with AliRoot in the last year, started a larger cleanup campaign:
        • Removed all OpenCL 1 code, and merged OpenCL common and OpenCL2 into one OpenCL library.
        • Removed all code related to AliRoot.
          • Still ongoing: adding a feature to the standalone benchmark to create new default calib objects if the format changes. This will allow us to keep using Run 2 data with the standalone benchmark, as long as they are without distortions.
        • Removed all workarounds we had for running with ROOT 5, for compilers without C++11 support, and for missing C++17 support.
          • Unfortunately, C++ for OpenCL 2021 is still based on C++17, so we cannot yet have full C++20 support.
        • In the process of removing several code paths that are obsolete now, since they were used only in legacy code (e.g. OpenCL 1 could not run all kernels on the CPU, AliRoot needed slice data output between sector tracking and merging).
      • Pending OpenCL2 issues:
        • printf not working due to confirmed bug in clang, fix is being prepared. Prevents further debugging for now.
        • GPU MemClean not working in TPC clusterization, need to debug.
        • Crash in the merger, which can be worked around by disabling the clang SPIR-V optimization. Probably a bug in clang, but need to fix printf first to debug it.
        • Even with the optimization disabled, it crashes later in TPC merging; printf is needed to debug.
      • Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
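
      For the dEdx flag-mask rejection mentioned above, a minimal sketch of the idea (illustrative only; the flag values and function names are assumptions, not the actual O2 interface):

      #include <cstdint>

      // Illustrative sketch of rejecting clusters for dE/dx via a flag mask
      // (flag bits and names are placeholders, not the actual O2 code).
      constexpr std::uint8_t FlagEdge = 0x1;          // hypothetical flag bit
      constexpr std::uint8_t FlagSingleCharge = 0x2;  // hypothetical flag bit

      struct SketchCluster {
          float charge;
          std::uint8_t flags;
      };

      // Accumulate charge for dE/dx, skipping clusters whose flags match the
      // user-provided rejection mask (e.g. FlagEdge | FlagSingleCharge, set via an option).
      float sumChargeForDEdx(const SketchCluster* clusters, int n, std::uint8_t rejectMask)
      {
          float sum = 0.f;
          for (int i = 0; i < n; ++i) {
              if (clusters[i].flags & rejectMask) {
                  continue;  // cluster carries a rejected flag -> exclude from dE/dx
              }
              sum += clusters[i].charge;
          }
          return sum;
      }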

       

      TPC processing performance regression:

      • Final solution: merging transformation maps on the fly into a single flat object: Still WIP

       

      General GPU Processing

       

    • 10:20 AM - 10:25 AM
      Following up JIRA tickets 5m
      Speaker: Ernst Hellbar (CERN)
      Low-priority framework issues https://its.cern.ch/jira/browse/O2-5226
      • Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
      • Merged workflow fails if outputs defined after being used as input
        • needs to be implemented by Giulio
      • Cannot override options for individual processors in a workflow
        • requires development by Giulio first 
      • Problem with 2 devices of the same name
        • https://github.com/AliceO2Group/AliceO2/pull/13963
        • when adding processors to the workflow, use DataProcessorInfo to compare the spec name and the executable against existing processors, instead of DataProcessorSpec, which compares only the name
        • crashes when both its-stf-decoder and mft-stf-decoder are present in the workflow
      • Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • Run getting stuck when too many TFs are in flight.
      • Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined.
      • Support in DPL GUI to send individual START and STOP commands.
      • Add an additional check on DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit (see the sketch after this list).
      • Implement a proper solution to detect whether a device is firstInChain
      • Deploy topology with DPL driver
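
      For the firstOrbit consistency check above, a minimal sketch of what such a DPL-level check could do (the function name and data structure are placeholders, not an existing DPL API):

      #include <cstdint>
      #include <stdexcept>
      #include <string>
      #include <unordered_map>

      // Hypothetical sketch: verify that all detectors report the same firstOrbit
      // before it is used as the TimeFrame first orbit (placeholder names only).
      uint32_t checkedTimeFrameFirstOrbit(const std::unordered_map<std::string, uint32_t>& firstOrbitPerDetector)
      {
          bool haveReference = false;
          uint32_t reference = 0;
          for (const auto& [detector, firstOrbit] : firstOrbitPerDetector) {
              if (!haveReference) {
                  reference = firstOrbit;
                  haveReference = true;
              } else if (firstOrbit != reference) {
                  throw std::runtime_error("firstOrbit mismatch for " + detector + ": " +
                                           std::to_string(firstOrbit) + " vs " + std::to_string(reference));
              }
          }
          return reference;
      }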

       

      PDP-SRC issues
      • Check if we can remove dependencies on /home/epn/odc/files in DPL workflows to remove the dependency on the NFS
        • reading / writing already disabled
        • remaining checks for file existence?
        • check after Pb-Pb by removing the files and finding the remaining dependencies
      • logWatcher.sh and logFetcher scripts modified by EPN to remove dependencies on epnlog user
        • node access privileges fully determined by e-groups
        • new log_access role to allow access in logWatcher mode to retrieve log files, e.g. for on-call shifters
        • to be validated on STG
        • waiting for EPN for further feedback and modifications of the test setup
      • Promote critical DPL Errors to ILG Ops level
        • new "critical" severity in latest FairLogger + unrelated changes (string  -> string_view )
        • new InfoLogger tag (2.8.1) compatible with new severity and latest FairLogger changes
        • known IL messages for QC shifter
          • RC will follow up with detectors if some of the issues are fixed
        • todo
          • see if there are other critical stderr messages 
          • promote documented messages
          • check content of Fatal and Critical messages, make sure they are appropriate for shifters, if needed putting extra information in an Error message right in front
      • new BEAMTYPE for oxygen period
        • https://its.cern.ch/jira/browse/O2-5797
        • beam types
          • p-O and O-O
          • Ne-Ne still to be confirmed
        • scripts to be adjusted to set proper workflow parameters
        • O2 code to be checked for pp and PbPb specific variables
    • 10:25 AM - 10:30 AM
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      Main efforts

      • Improve network regression fit
      • Improve QA:
        • Add percentile plots
      • Evaluation on real data
      • Created fully parallelizable version of clusterization code

       

      1. Improve the network fits.

      • Noticed that GPU CF has a lot of entries at 0.
        • Reason 1: Many clusters have a small size, so the GPU CF sees "everything", especially within the inner 3x3 window
        • Reason 2: Especially for clusters with sigma_pad = 0 (in reality), CoG_pad == Max_pad -> NN smears distribution slightly (same goes for time)
      • Fix: The network receives as input the CoG and sigma calculated from the inner 3x3 inputs (no exclusion) in the old-fashioned way and applies a multiplicative correction -> Improves the network fit (a minimal sketch follows below)
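
      A minimal sketch of this input scheme (variable names and the NN call are placeholders, not the actual clusterizer code): compute CoG and sigma from the inner 3x3 charge window in the classical way, and let the network provide a multiplicative correction on top.

      #include <cmath>

      // Illustrative sketch: classical CoG and sigma in the pad direction from the
      // inner 3x3 window, which the NN then corrects multiplicatively (placeholder names).
      struct PadEstimate {
          float cog;    // centre of gravity relative to the maximum pad
          float sigma;  // width in the pad direction
      };

      PadEstimate cogSigma3x3(const float charge[3][3])
      {
          float sumQ = 0.f, sumQPad = 0.f, sumQPad2 = 0.f;
          for (int dt = 0; dt < 3; ++dt) {      // time direction
              for (int dp = 0; dp < 3; ++dp) {  // pad direction
                  const float q = charge[dt][dp];
                  const float pad = static_cast<float>(dp - 1);  // -1, 0, +1 around the maximum
                  sumQ += q;
                  sumQPad += q * pad;
                  sumQPad2 += q * pad * pad;
              }
          }
          const float cog = sumQPad / sumQ;
          const float sigma2 = sumQPad2 / sumQ - cog * cog;
          return {cog, std::sqrt(sigma2 > 0.f ? sigma2 : 0.f)};
      }

      // The network receives the 3x3-based estimate as an additional input and returns
      // a multiplicative correction factor (placeholder for the actual inference call).
      inline float correctedCogPad(const PadEstimate& est, float nnCorrectionFactor)
      {
          return est.cog * nnCorrectionFactor;
      }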

       

      Left: Purely fully connected network; Right: Fully connected network with inner 3x3 CoG and σ input

           

      Improvement noticeable in track-chi2 distribution

      2. Percentile comparison

      Noticeable differences:

      • Distribution is slightly asymmetric and also non-Gaussian
      • GPU CF distribution is slightly more strongly peaked at the center (at delta CoG_pad = 0).
      • NN has narrower tails -> At the 60% and 80% percentiles, the NN curves are closer to 0 (a sketch of such a percentile computation follows below).
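
      A minimal sketch of how such a percentile comparison can be computed from the residuals (illustrative only; the residual here stands for e.g. delta CoG_pad between the two clusterizers):

      #include <algorithm>
      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Illustrative sketch: symmetric percentile width of a residual distribution.
      // Returns the half-width below which the given fraction of |residuals| lies,
      // so a smaller value means a distribution closer to 0.
      float percentileWidth(std::vector<float> residuals, float fraction /* e.g. 0.6 or 0.8 */)
      {
          if (residuals.empty()) {
              return 0.f;
          }
          for (float& r : residuals) {
              r = std::fabs(r);
          }
          std::sort(residuals.begin(), residuals.end());
          const std::size_t idx = static_cast<std::size_t>(fraction * (residuals.size() - 1));
          return residuals[idx];
      }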

       

      3. Real data

      • All raw TFs of LHC24af, run 550536
      • Using corrections: "--lumi-type 2 --corrmap-lumi-mode 1 --enable-M-shape-correction"
      • Using a rather strict cut on NN clusterizer: Removing ~23% of clusters and ~10% of tracks

      • Loosening the cut on the NN: Removing 8.2% of clusters and ~3.5% of tracks -> Almost no effect on Lambda and K0S spectra!
        -> Investigating dN/dη vs. pT next: Most probably losing tracks at very low momentum

       

    • 10:30 AM - 10:35 AM
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
    • 10:35 AM - 10:45 AM
      TPC Track Model Decoding on GPU 10m
      Speaker: Gabriele Cimador (Universita e INFN Torino (TO))
    • 10:45 AM - 10:55 AM
      Efficient Data Structures 10m
      Speaker: Dr Oliver Gregor Rietmann (CERN)

      Overview

      The goal is to develop a C++ library that abstracts the data layout of an array. Possible data layouts include array of structs (AoS) and struct of arrays (SoA); see the following example.

      #include <cstddef>  // std::size_t

      constexpr std::size_t N = 42;

      struct Point { int x, y, z; };
      Point point_aos[N];  // data layout: AoS

      template <std::size_t N>
      struct PointSoA {
          int x[N];
          int y[N];
          int z[N];
      };

      PointSoA<N> point_soa;  // data layout: SoA

      We aim to write a class that takes the struct Point, a data layout, and possibly more arguments. The class then allows AoS-style access, but stores the data in a possibly different layout, thereby hiding the data layout from the user.

      template <
          template <class> class F,                   // container
          template <template <class> class> class S,  // e.g. "Point"
          layout L                                    // data layout
      >
      struct wrapper;
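
      As a concrete illustration of how such a wrapper could be filled in (a hypothetical sketch, not the actual library code: it assumes the element struct is itself parameterized over the container template F, as the signature of S above suggests; allocation and ownership are omitted):

      #include <cstddef>

      // Simplest possible "container": a raw pointer (alias template usable as F).
      template <class T>
      using dynamic_array = T*;

      // Element struct parameterized over the container template (assumption).
      template <template <class> class F>
      struct Point {
          F<int> x;
          F<int> y;
          F<int> z;
      };

      enum class layout { aos, soa };

      // Primary template, repeated here from above for self-containedness.
      template <
          template <class> class F,                   // container
          template <template <class> class> class S,  // e.g. "Point"
          layout L                                    // data layout
      >
      struct wrapper;

      // SoA specialization: stores one container per member (Point<dynamic_array>
      // holds three int*), but operator[] hands out a proxy with AoS-style access.
      // The proxy hardcodes Point's members for brevity; the real library would
      // have to generate this generically.
      template <template <class> class F, template <template <class> class> class S>
      struct wrapper<F, S, layout::soa> {
          S<F> data;

          struct reference {
              int& x;
              int& y;
              int& z;
          };

          reference operator[](std::size_t i) { return {data.x[i], data.y[i], data.z[i]}; }
      };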

      New: Unit Tests

      • Added tests using only plain pointers as "container" (F is a pointer type in the class "wrapper" above.)
      • Unit test for CUDA code.

      New: RAII classes as containers for device or unified memory

      Reason:

      • For unit testing, I want to hide the CUDA code behind plain C++ interfaces, since the unit tests are C++ only.

      Investigated approaches:

      • Custom "uniqu_ptr" clone
      • Custom "shared_ptr" clone
      • Classes that manage a pointer (AoS) of multiple pointers (SoA) managing the memory
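
      As an illustration of the first approach, a minimal sketch of a unique_ptr-like RAII owner for device memory (assumes plain CUDA runtime calls; error handling and the multi-pointer SoA variant are omitted):

      #include <cstddef>
      #include <cuda_runtime.h>

      // Minimal sketch of a "custom unique_ptr clone" for device memory
      // (illustration only). Test code only sees an RAII object and never
      // has to call CUDA directly.
      template <class T>
      class device_buffer {
       public:
          explicit device_buffer(std::size_t n) : size_(n) {
              cudaMalloc(reinterpret_cast<void**>(&ptr_), n * sizeof(T));  // error handling omitted
          }
          ~device_buffer() { cudaFree(ptr_); }  // cudaFree(nullptr) is a no-op

          device_buffer(const device_buffer&) = delete;             // sole owner
          device_buffer& operator=(const device_buffer&) = delete;

          device_buffer(device_buffer&& other) noexcept : ptr_(other.ptr_), size_(other.size_) {
              other.ptr_ = nullptr;
              other.size_ = 0;
          }

          T* get() const { return ptr_; }
          std::size_t size() const { return size_; }

       private:
          T* ptr_ = nullptr;
          std::size_t size_ = 0;
      };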

      New: Linker Error with GTest

      • Cannot link libraries using CUDA code when linking gtest at the same time.
      • Happens even if all CUDA is hidden behind a wrapping C++ function.
      • Linking only fails if I also link gtest and if I use CUDA under the hood.