Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Zoom Meeting ID: 61230224927
Host: David Rohr
    • 10:00 → 10:20
      Discussion 20m
      Speakers: David Rohr (CERN), Giulio Eulisse (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      Run Coordination / Framework Topics:

      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308:
        • Using 20/25 and 20/40 as EndOfStream timeouts (see the sketch after this list).
        • Improvement deployed to treat all sporadic messages as calibration data (prevents dropping calibration data that started processing before EndOfStream but finished after the first timeout).
        • Still missing: processes should go to error when the second timeout expires - this will indicate that the calibration failed, and will take servers that did not finish in a clean state out of the run - needed for future START/STOP/START development.
        • PR with InfoLogger improvements still WIP - status?
        • Processes crashing at shutdown if the STOP timeout was short; not clear if related, but should be checked. This still seems to happen, e.g. in the STOP tests with 1/1 and 5/10.
      • Fix problem with ccdb-populator and CPV calib: no idea yet - since Ole left, someone else will have to take care of it.
      • TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy.
      • Trying to get devices with Lifetime::Sporadic / Lifetime::Timer inputs last, to work around the issue mentioned yesterday by Ruben
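
      For illustration, the intended two-stage timeout behavior could be sketched like this (hypothetical code, not the actual DPL implementation; the type names and the exact handling are assumptions):

      #include <chrono>

      // Hypothetical sketch: after the first timeout the device forces
      // EndOfStream but still accepts late calibration output; after the
      // second timeout it should go to ERROR, so that servers which did not
      // finish in a clean state can be taken out of the run.
      enum class EosAction { None, ForceEndOfStream, GoToError };

      struct EosTimeouts {
        std::chrono::seconds first{20};   // e.g. the "20" in 20/25 and 20/40
        std::chrono::seconds second{40};  // e.g. the "40" in 20/40

        EosAction onTick(std::chrono::steady_clock::time_point stopReceived,
                         std::chrono::steady_clock::time_point now) const
        {
          const auto elapsed = now - stopReceived;
          if (elapsed >= second) {
            return EosAction::GoToError;        // calibration failed
          }
          if (elapsed >= first) {
            return EosAction::ForceEndOfStream; // stop waiting for regular data
          }
          return EosAction::None;
        }
      };

      int main()
      {
        // Usage: poll periodically after STOP was received.
        EosTimeouts t;
        const auto stop = std::chrono::steady_clock::now();
        return t.onTick(stop, stop + std::chrono::seconds(30)) == EosAction::ForceEndOfStream ? 0 : 1;
      }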

       

      AliECS related topics:

      • Extra env var field still not multi-line by default; created https://its.cern.ch/jira/browse/OGUI-1624. Deployed by FLP, seems to work.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be necessary to stop the SLURM schedulers when moving nodes; there should be no limitation for ongoing runs at P2 or ongoing async jobs.
          • We must not lose track of which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide interface to query current EPN SHM settings - ETA July 2023, status?
      • Improve DataDistribution file replay performance; currently we cannot do faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA.
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to distribute based on buffer utilization and give more data to MI100 nodes (see the sketch after this list). Now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
      • Slurm bump
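
      For illustration, a minimal sketch of the utilization-based policy (hypothetical code, not actual DataDistribution internals; the node descriptor and function names are mine):

      #include <cstddef>
      #include <vector>

      // Hypothetical node descriptor; DataDistribution's real bookkeeping differs.
      struct EpnNode {
        std::size_t bufferTotal; // SHM buffer size (larger on MI100 nodes)
        std::size_t bufferUsed;  // currently occupied by in-flight TFs
      };

      // Return the index of the node with the lowest buffer utilization, so
      // nodes with larger / emptier buffers naturally attract more TFs.
      std::size_t pickTfTarget(const std::vector<EpnNode>& nodes)
      {
        std::size_t best = 0;
        double bestUtil = 2.0; // utilization is always <= 1.0
        for (std::size_t i = 0; i < nodes.size(); ++i) {
          const double util = static_cast<double>(nodes[i].bufferUsed) / nodes[i].bufferTotal;
          if (util < bestUtil) {
            bestUtil = util;
            best = i;
          }
        }
        return best;
      }

      int main()
      {
        std::vector<EpnNode> nodes{{128, 64}, {256, 64}}; // MI50-like, MI100-like
        return static_cast<int>(pickTfTarget(nodes));     // picks the MI100-like node
      }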

       

      Other EPN topics:

       

      GPU Benchmarks in HS23: Contribution from ALICE

      • Will have a meeting with Gabriele and Domenico today to discuss details.

       

      Sync reconstruction

      Async reconstruction

      • Need to investigate short GPU stall problem.
      • Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
      • Managed to run the offline build in a Singularity container on my laptop with an NVIDIA GPU.
      • Waiting for gpu-system.sh to be updated (needs new aliBuild) to build with NVIDIA A100 support, then we can test on NERSC Perlmutter. At least I can now start the standalone benchmark on Perlmutter; the NVIDIA library issues are solved.
        • The problem was due to older ITS code (from the current async build) using special NVIDIA tracing libraries, which were moved to header-only with CUDA 12.8, so the problem was already fixed in the latest O2/dev.

       

      GPU ROCm / compiler topics:

      • Issues that disappeared but are not yet understood: random server reboot with Alma 9.4, miscompilation with ROCm 6.2, GPU getting stuck when the DMA engine is turned off, MI100 stalling with ROCm 5.5.
      • Problem with building ONNXRuntime with MigraphX support, to be checked.
      • Need to find a way to build ONNXRuntime with support for CUDA and for ROCm.
      • Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code.
      • LLVM Bump to 20.1: status?
      • ROCm 6.4.1 status:
        • AMD is checking the reproducer. I have some idea how to narrow down where it miscompiles using different compile flags in per-kernel mode.
      • New gpu-system package:
        • After some aliBuild refactoring, more fixes were needed. A new aliBuild is now tagged. Sergio is debugging some issues on macOS; then we should be able to bump it everywhere.
      • Improved Standalone Benchmark CI, now compiles with -Werror and checks for CMake errors, suppressed some warnings from clang / nvcc / CMake.

       

      TPC / GPU Processing 

      • WIP: Use alignas() or find a better solution to fix the alignment of Monte Carlo labels (see the sketch after this list): https://its.cern.ch/jira/browse/O2-5314
      • Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
      • Waiting for TPC to check PR which uses full cluster errors including average charge and occupancy map errors during seeding.
      • Final solution: merging transformation maps on the fly into a single flat object: Still WIP
      • Pending OpenCL2 issues:
        • printf not working due to confirmed bug in clang, fix is being prepared. Prevents further debugging for now.
        • Crash in merger, which can be worked around by disabling clang SPIRV optimization. Probably bug in clang, but need to fix printf first to debug.
        • Also with optimization disabled, crashing later in TPC merging, need printf to debug.
        • Felix traced the OpenCL clusterization problem to an off-by-one offset in NoiseSuppression. Need to check how that can happen only in OpenCL.
      • Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
      • Need to check the problem with ONNX external memory allocator.
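
      A minimal sketch of the alignas() idea (the struct and field names are made up for the example, not the actual O2 label types):

      #include <cstdint>

      // Hypothetical label element: alignas() pads the struct so that arrays
      // of labels stay aligned for GPU access, regardless of the members'
      // natural 4-byte alignment.
      struct alignas(8) MCLabelElement {
        uint32_t eventID;
        uint16_t trackID;
        uint16_t sourceID;
      };
      static_assert(alignof(MCLabelElement) == 8, "alignment enforced at compile time");

      int main() { return alignof(MCLabelElement) == 8 ? 0 : 1; }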

       

       

    • 10:20 → 10:25
      Following up JIRA tickets 5m
      Speaker: Ernst Hellbar (CERN)
      Low-priority framework issues https://its.cern.ch/jira/browse/O2-5226
      • Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to get the pure payload; low priority.
      • Merged workflow fails if outputs are defined after being used as input
        • needs to be implemented by Giulio
      • Cannot override options for individual processors in a workflow
        • requires development by Giulio first 
      • Problem with 2 devices of the same name
      • Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
      • Run getting stuck when too many TFs are in flight.
      • Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined.
      • Support in DPL GUI to send individual START and STOP commands.
      • Add additional check on DPL level, to make sure firstOrbit received from all detectors is identical, when creating the TimeFrame first orbit.
      • Implement a proper solution to detect whether a device is firstInChain
      • Deploy topology with DPL driver
      • Start / Stop / Start: 2 problems on O2 side left:
          • All processes are crashing randomly (usually ~2 out of >10k) when restarting. The stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC ITS matching QC crashing when accessing CCDB objects. Not clear if it is the same problem as above, or a problem in the task itself.

       

      PDP-SRC issues
      • Check if we can remove dependencies on /home/epn/odc/files in DPL workflows to remove the dependency on the NFS
        • reading / writing already disabled
        • remaining checks for file existence?
        • check after Pb-Pb by removing files and find remaining dependencies
      • logWatcher.sh and logFetcher scripts modified by EPN to remove dependencies on epnlog user
        • node access privileges fully determined by e-groups
        • new log_access role to allow access in logWatcher mode to retrieve log files, e.g. for on-call shifters
        • to be validated on STG
        • waiting for EPN for further feedback and modifications of the test setup
      • TMinuit errors (Initial matrix not pos. def.) in QC tasks during the fitting procedure
        • mainly in vertex x, y fits, but also in some TPC QC tasks
        • rare random errors, can reproduce locally
        • check whether they can be avoided with extra options in the Fit call; otherwise filter them in the EPNstderrMonitor
    • 10:25 → 10:30
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      Framework

      • Added GPU timer to ONNX inference for profiling
      • Added deconvolution flags to NN inference for exact matching with GPU CF

       

      Physics

      • Cluster attachment efficiency vs. fake rate for different network inputs and thresholds
        • Attachment efficiency = (correctly attached cls NN / total cls) * (correctly attached cls NN / correctly attached cls GPU CF)
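
      In code form (a direct transcription of the definition above; the function and variable names are mine):

      // Attachment efficiency as defined above: the fraction of all clusters
      // the NN attaches correctly, weighted by the NN-to-GPU-CF ratio of
      // correctly attached clusters.
      float attachmentEfficiency(float correctlyAttachedNN, float totalClusters,
                                 float correctlyAttachedGPUCF)
      {
        return (correctlyAttachedNN / totalClusters) *
               (correctlyAttachedNN / correctlyAttachedGPUCF);
      }

      int main()
      {
        // Example: NN attaches 95 of 100 clusters correctly vs. 97 for GPU CF
        // -> efficiency = (95/100) * (95/97) ≈ 0.93
        return attachmentEfficiency(95.f, 100.f, 97.f) > 0.f ? 0 : 1;
      }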

       

      • Network outperforms GPU CF for all thresholds and input sizes. The choice of threshold is determined by the number of correctly attached clusters in the next plot.
      • Significant benefits from using 3D networks

      • Number of correctly attached clusters

      • Threshold choice for classification network:
        • <= 0.01: Almost no loss in number of correctly attached clusters
        • >0.01 && <0.1: Maximum loss of 5% correctly attached clusters, but can lead to 18% savings in total clusters (see next plot)

      • Number of total clusters

      • CoG (pad) resolution as a function of occupancy for different network sizes (2 to 5 hidden layers; 16, 32, 64, 128 neurons per layer)

      • More layers work better
      • Network with 5 hidden layers and 128 neurons per layer -> not great performance, reason: overtraining! Immediately visible in the logs: validation loss goes up while training loss goes down / remains constant.
        -> Improvement for the future: save the network at best training loss, at best validation loss, and after all epochs are done.

       

      A cool thing to look at... and more

       

      Neural network loss landscape:

      • Take 1024 elements from the training data sample and perform PCA
        • Finds the axes that maximise the variance when projecting the data onto them, i.e. the most relevant axes to describe the data
      • Take first two principal components and add them to the data as X_new = X + a*PCA1 + b*PCA2, where a and b are scale factors
      • Choose a regular grid for a and b of arbitrary size, calculate the loss for each grid point using the training data output and network output
      • Color = z = MSE loss; Smoothed with cubic splines, 400 grid points per direction
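
      A minimal sketch of the procedure, assuming Eigen for the linear algebra (the input dimensionality, grid range, and the stand-in loss below are assumptions; the real loss evaluates the trained network against the training targets):

      #include <Eigen/Dense>
      #include <cstdio>

      // Stand-in for the network's MSE loss; the real version runs the network
      // on X and compares against the training-data targets.
      float mseLoss(const Eigen::MatrixXf& X) { return X.squaredNorm() / X.size(); }

      int main()
      {
        const int nSamples = 1024, nFeatures = 16; // input dimensionality assumed
        Eigen::MatrixXf X = Eigen::MatrixXf::Random(nSamples, nFeatures);

        // PCA: eigenvectors of the covariance matrix of the centered data.
        Eigen::MatrixXf centered = X.rowwise() - X.colwise().mean();
        Eigen::MatrixXf cov = centered.transpose() * centered / float(nSamples - 1);
        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> eig(cov);
        // Eigenvalues come sorted ascending: the last two columns are PCA1, PCA2.
        Eigen::VectorXf pca1 = eig.eigenvectors().col(nFeatures - 1);
        Eigen::VectorXf pca2 = eig.eigenvectors().col(nFeatures - 2);

        // Regular grid of scale factors (a, b); the talk used 400 per direction.
        const int nGrid = 50;
        for (int i = 0; i < nGrid; ++i) {
          for (int j = 0; j < nGrid; ++j) {
            const float a = -1.f + 2.f * i / (nGrid - 1);
            const float b = -1.f + 2.f * j / (nGrid - 1);
            // X_new = X + a*PCA1 + b*PCA2, added to every sample (row).
            Eigen::RowVectorXf shift = (a * pca1 + b * pca2).transpose();
            Eigen::MatrixXf Xnew = X.rowwise() + shift;
            std::printf("%g %g %g\n", a, b, mseLoss(Xnew)); // z = loss at (a, b)
          }
        }
        return 0;
      }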

      • Interpretation: This loss landscape is (almost) convex! Networks are (almost) guaranteed to land in the global optimum! This makes the method reliable.

       

      Addendum

    • 10:30 → 10:35
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
    • 10:35 → 10:40
      TPC Track Model Decoding on GPU 5m
      Speaker: Gabriele Cimador (Universita e INFN Torino (TO))

      News from GPU parameters tuning

      Single kernel optimization

      MergerTrackFit

       

      MergerSectorRefit

      MergerCollect

      CompressionStep1unattached

      4 dimensions optimization (MergerFollowLoopers + CompressionStep0attached)

      21 dimensions optimization - SectorTracker step

      Clusterizer step

      Automated tuning

      • Developed script for automated tuning
      • Tunes most of the steps
      • Tried on a 750 kHz pp simulated dataset
      • Results are for the sync time of the standalone benchmark
        • Sync mean time default: 1440.69 ms ± 3.94 ms
        • Sync mean time optimised: 1318.73 ms ± 4.97 ms
        • Performance gain 8.47%

      Next two weeks

      • Absence for CERN School of Computing
      • Plan to run some more automated tuning (on more datasets)
      • Create collection of parameter dumps

       

       

       

    • 10:40 → 10:45
      Efficient Data Structures 5m
      Speaker: Dr Oliver Gregor Rietmann (CERN)

      Summer Student

      • ALICE summer student has arrived: Milla Bramsted
      • She is working on benchmarking SoA code on GPUs.
      • We will track her project in this google doc.
      • She will add CUDA kernels to this repo.

      ALICE O2 CI-Pipelines on NGT Cluster

      • A fork of the AliceO2 repo is now in the NextGenTriggers (NGT) GitHub organization.
      • It has a GitHub action running the standalone benchmark on NGT GPUs (H100).
      • Uses the O2 and dependency builds from /cvmfs/alice.cern.ch/ (the pipeline takes about 7 minutes).

       

    • 10:45 → 10:50
      Following up GPU to-dos 5m
      Speaker: Dr Vikas Singhal (Department of Atomic Energy (IN))
    • 10:50 → 10:55
      TPC Clusterization / OpenCL / Highly Ionizing Particles 5m
      Speaker: Felix Weiglhofer (Goethe University Frankfurt (DE))

      OpenCL

      Clusterizer Issues

      • Fixed out-of-bounds write. Somehow it never caused issues in HIP/CUDA, but it fixes an off-by-one error in OpenCL.
      • ZS decoding: warps seem to diverge, or there is a miscompilation. Have to set the warp size to 1 for the kernel to work; using barriers doesn't fix the issue.
        • Needs more investigation

      -> With both fixes: the clusterizer now works in OpenCL

      Recursion in TPCFastTransformation

      Lots of helper functions are written for compile-time recursion, e.g.:

      GPUd() static constexpr uint32_t factorial(const uint32_t n) { return (n == 0) || (n == 1) ? 1 : n * factorial(n - 1); }

      Fine for CUDA / HIP, but OpenCL C++ prohibits recursion. This would be trivial to fix in C++20 with consteval, but OpenCL C++ is still based on C++17, so templates are used instead for the recursion (see the sketch below): https://github.com/AliceO2Group/AliceO2/pull/14462
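
      For illustration, the template version of the factorial could look like this (a minimal sketch, not necessarily the exact code of the PR; the GPUd() qualifier is omitted):

      #include <cstdint>

      // Recursion moved to template instantiation, which OpenCL C++ accepts:
      // the compiler unrolls Factorial<N> at compile time, no call stack needed.
      template <uint32_t N>
      struct Factorial {
        static constexpr uint32_t value = N * Factorial<N - 1>::value;
      };
      template <>
      struct Factorial<0> {
        static constexpr uint32_t value = 1;
      };

      static_assert(Factorial<5>::value == 120, "evaluated during compilation");

      int main() { return Factorial<5>::value == 120 ? 0 : 1; }

      The argument now has to be a template parameter rather than a runtime value, which matches the compile-time-recursion use case of these helpers.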

      -> Waiting for feedback from TPC 

      PoCL issues

      PoCL rejects recursive functions, but only tells you where the function is called, not which recursive function it found. E.g. you get errors like this:

      Recursion detected in function: '_ZNU3AS42o23gpu7GPUdEdx11fillClusterEffihffRU3AS4KNS0_23GPUCalibObjectsTemplateINS0_8ConstPtrEEEfff'

      This made debugging the issue in TPCFastTransform very tedious.

      Patched PoCL to demangle C++ symbols and print the infringing function. So errors now look like this:

      Recursion detected in function: 'o2::gpu::GPUdEdx::fillCluster(float, float, int, unsigned char, float, float, o2::gpu::GPUCalibObjectsTemplate<o2::gpu::ConstPtr> const&, float, float, float)'
      -> Infringing function: 'o2::gpu::MultivariatePolynomialParametersHelper::factorial(unsigned int)'

      (Side note: the LLVM demangler can't demangle OpenCL symbols because of the address space qualifiers in the mangled name, which have the form 'U1AS1'. -> Hack: prune the qualifiers before demangling; a sketch follows below.)
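
      A sketch of that hack (my reconstruction, not the actual PoCL patch):

      #include <cstdio>
      #include <cstdlib>
      #include <cxxabi.h>
      #include <string>

      // Prune OpenCL address-space qualifiers from an Itanium-mangled name,
      // then demangle it with the regular C++ ABI demangler.
      std::string demangleOpenCL(std::string mangled)
      {
        // In the example above the qualifier appears as "U3AS4": vendor
        // qualifier 'U', length 3, name "AS4". Erase "U3AS" plus one digit.
        for (std::size_t pos; (pos = mangled.find("U3AS")) != std::string::npos;) {
          mangled.erase(pos, 5);
        }
        int status = 0;
        char* out = abi::__cxa_demangle(mangled.c_str(), nullptr, nullptr, &status);
        std::string result = (status == 0 && out) ? out : mangled; // fall back
        std::free(out);
        return result;
      }

      int main()
      {
        // The mangled name from the PoCL error message above.
        std::printf("%s\n", demangleOpenCL(
          "_ZNU3AS42o23gpu7GPUdEdx11fillClusterEffihffRU3AS4"
          "KNS0_23GPUCalibObjectsTemplateINS0_8ConstPtrEEEfff").c_str());
      }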

      Other issues

      Couple of other issues that make debugging harder in PoCL:

      • Disabling optimization with -cl-opt-disable in the PoCL compiler -> crash in the clusterizer kernel
      • Choosing a more conservative kernel vectorization with POCL_WORK_GROUP_METHOD -> crash in the clusterizer kernel
      • Dumping debug output from LLVM with POCL_DEBUG_LLVM_PASSES -> crashes during kernel compilation