
Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Videoconference
ALICE GPU Meeting
Zoom Meeting ID
61230224927
Host
David Rohr
    • 11:00 11:20
      Discussion 20m
      Speakers: David Rohr (CERN), Giulio Eulisse (CERN)

      Color code: (critical / news during the meeting: green; news from this week: blue; news from last week: purple; no news: black)

      High priority Framework issues:

      • Fix dropping lifetime::timeframe for good: Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
        • PR https://github.com/AliceO2Group/AliceO2/pull/13481 for newly spotted bogus message about decreased oldestPossible counter merged. To be tested at P2. 
      • Start / Stop / Start: 2 problems on O2 side left:
          • All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace points to FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC ITS matching QC crashing while accessing CCDB objects. Not clear if it is the same problem as above or a problem in the task itself.
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308. Status?
      • Fix problem with ccdb-populator: no idea yet; since Ole left, someone else will have to take care of it.
      • Giulio fixed the second reproducer I obtained from the full topology at P2. Ernst deployed a new SW test version fixing the spurious cyclic channels. Waiting for RC to test expendable tasks with it?

       

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

       

      Sync reconstruction

      • Waiting for RC to test COSMIC replay data set.
      • Some crashes of tpc-tracking yesterday in Pb-Pb REPLAY runs, to be investigated.
        • After some tests with RC, the crash happens only when dEdx is enabled. Found a problem with TPC dEdx in combination with GPU RTC. Fixed in O2/dev. As we could not reproduce the problem anywhere but in production, waiting for a test in production?
      • STOP timeout discussion: RC did some tests, which show a significant dependence on the STOP timeout. However, this was only tested in SYNTHETIC; should be repeated in PHYSICS.
        • Alice will add monitoring of the number of CTF orbits to the GRID DAQ monitoring website.
      • O2DPG/GenTopo set an incorrect IS_SIMULATED_DATA flag for SYNTHETIC runs, leading to EMCAL warnings; fixed.
      • Reported a bug where the ECS GUI sends the default string instead of an empty string if a GUI field is empty; fixed by ECS.
      • Problem after EPN SLURM changes: topology generation failed regularly on staging.
        • Debugged to be due to stdin not reporting empty while reads from it fail with "bad file descriptor". Fixed by overriding stdin (a minimal sketch follows after this list).
        • The fix had a side effect on the ccdb-populator, which was fixed by Ernst.
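
      A minimal sketch of the stdin-override idea: reopen stdin on /dev/null so that later reads see a valid, empty stream instead of failing with "bad file descriptor". This only illustrates the technique; the actual topology-generation fix may live in a wrapper script rather than in C++ code.

      #include <cstdio>

      int main() {
          // Reopen stdin on /dev/null: subsequent reads return EOF
          // instead of failing with "bad file descriptor".
          if (!std::freopen("/dev/null", "r", stdin)) {
              std::perror("freopen");
              return 1;
          }
          // ... topology generation would proceed here ...
          return 0;
      }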

       

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
        • Does the situation get better without GPU monitoring? --> Inconclusive
        • We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias; need to check if it works.
      • Problem reported by Max/Ruben that analysis fails, since sometimes at EOR we receive TFs with no data and a bogus orbit.
        • Bogus TFs are now detected in the readout-proxy: https://github.com/AliceO2Group/AliceO2/pull/13495.
        • Giulio will implement a tf-status message sent by the readout-proxy.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
          • We must not lose which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
      • Improve DataDistribution file replay performance: currently cannot go faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and only the backpressured TFs go to the MI100 nodes. This increases memory pressure on the MI50 nodes, which is a critical point anyway (a sketch of the idea follows after this list). https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
      • Improve core dumps and time stamps: https://its.cern.ch/jira/browse/EPN-487
      • Tentative time for ALMA9 deployment: December 2024.
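
      A minimal sketch of the buffer-utilization-based scheduling idea (illustration only, not DataDistribution code; the Node fields are hypothetical): pick the EPN with the most free SHM buffer instead of cycling round-robin, so the faster MI100 nodes naturally receive more TFs.

      #include <algorithm>
      #include <cstddef>
      #include <cstdint>
      #include <iterator>
      #include <vector>

      struct Node {
          std::uint64_t bufferSize;  // total TF buffer on this EPN
          std::uint64_t bufferUsed;  // currently occupied
      };

      // Pick the node with the largest free buffer. MI100 nodes drain
      // faster, so under load they expose more free buffer and receive
      // more TFs than the MI50 nodes, instead of only the backpressured ones.
      std::size_t pickNode(const std::vector<Node>& nodes) {
          return static_cast<std::size_t>(std::distance(nodes.begin(),
              std::max_element(nodes.begin(), nodes.end(),
                  [](const Node& a, const Node& b) {
                      return (a.bufferSize - a.bufferUsed) <
                             (b.bufferSize - b.bufferUsed);
                  })));
      }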

       

      Other EPN topics:

       

      Full system test issues:

      Topology generation:

      • 2 occurrences where the git repository in the topology cache was corrupted. Not clear how this can happen; also not reproducible. Solved by wiping the cache. Will add a check to the topology scripts for a corrupt repository, deleting it and checking it out anew in that case.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • Compilation failure due to missing symbols when compiling with -O0. A similar problem was found by Matteo and is being debugged. Sent a reproducer to AMD.
        • Implemented a fix to make sure that all symbols are always defined.
        • Since this has a significant impact on GPU compile time and the fix is at the CMake level, we enable it only for CMAKE_BUILD_TYPE=DEBUG.
      • Internal compiler error with LOG(...) macro: we have a workaround, AMD has a reproducer, waiting for a fix.
      • New miscompilation with ROCm > 6.0.
        • Waiting for AMD to fix the issue shown by the reproducer we provided.
      • Matteo's problems in the DCAFitter come from a bug / unexpected or undocumented behavior of the CUDA compiler, which puts bogus host symbols in the object file that kill the application.
        • Filed a bug report with NVIDIA, but they explain that they cannot fix it due to how their compiler works, and since they say it is in any case illegal to have the same device and host functions in different libraries, they closed the bug report as "won't fix": https://developer.nvidia.com/bugs/4853553
          • (Note: the problem comes from an inline __device__ function in the CUDA code that has the same name as an inline function in a different shared object in the host code; the two collide at linking. IMHO such a condition is very difficult to prevent a priori.)
        • For now, found a workaround by adding additional inline keywords to GPUd() function definitions during the host compile pass and disabling the CUDA warnings this causes, and, for templated classes, adding some tricky #ifdefs for all explicit class instantiations (see the sketch after this list).
          • I don't really like this, since I don't want to disable warnings, and the #ifdef fix needs to be repeated for all future class instantiations, which can otherwise cause undefined behavior again that is not easy to understand.
        • Attempting a different idea: use a linker version script to hide the symbols. So far this seems not to work, since we cannot hide everything in o2::..., and it is not easily possible to distinguish what to hide...
        • Another possibility would be namespaces, but namespacing all GPU code would be a mess, since we'd need different namespaces for host and GPU compilation, i.e. use a macro definition everywhere we have code used on GPU, which includes the propagator, matlut, mathutils, etc.
        • Still trying to think of a better solution.
      • Created PR to bump GCC to 14.2 (just to test). Problem with JSON-C already fixed by Sergio, Giulio, David. Investigated another problem in binutils (Giulio, David); will probably just downgrade to a working version. Now failing due to compile errors in AlfFred. Status?
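
      A minimal sketch of the macro-level workaround described above, under assumptions about the macro shape (the actual O2 GPUd() definition differs): forcing inline linkage on the host compile pass avoids exporting a strong host symbol that could collide with a same-named inline function in another shared object.

      // Device compile pass: plain __device__ qualifier.
      // Host compile pass: add "inline", so the host object file does not
      // export a strong symbol that could collide at dynamic-link time
      // with a same-named inline function from another shared object.
      #if defined(__CUDA_ARCH__) || defined(__HIP_DEVICE_COMPILE__)
      #define GPUd() __device__
      #else
      #define GPUd() inline
      #endif

      // Example of a GPU-usable function; on the host pass it is now
      // inline, so the GPU library ships no competing host definition.
      GPUd() float scaleCharge(float q) { return 0.5f * q; }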

       

      TPC GPU Processing

      • GPU reconstruction crashing in the 50 kHz data REPLAY; SYNTHETIC seems OK. Under investigation: possibly fixed, see above.
      • Tests with TPC MC processing revealed a problem with incorrectly aligned labels, leading to undefined behavior (an illustration follows after this list).
        • A simple proper fix using an alignas() statement works, but makes old MC label data unreadable.
        • Tried several approaches to fix this in a transparent way, but all failed. Documented in this JIRA: https://its.cern.ch/jira/browse/O2-5314
        • The only solution that still comes to my mind is to change the (anyway custom) ROOT streamer for the MC labels to use the FlatHeader version information (which is available anyway) and copy the data while inserting the necessary padding.
      • Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
      • Waiting for TPC to check the PR that uses full cluster errors, including average charge and occupancy map, during seeding.
      • Want to switch to fixed-width int8 ... uint64 types instead of char, short, ...
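
      An illustration of the alignment issue and of why the alignas() fix breaks old data (hypothetical field names, not the actual MC label layout): adding alignment changes the struct's size and member offsets, so previously serialized buffers no longer match.

      #include <cstdint>
      #include <cstdio>

      // Old layout: a 64-bit field directly after a 1-byte field in a
      // packed flat buffer can sit at a misaligned address -> UB on access.
      struct __attribute__((packed)) LabelOld {
          std::uint8_t flags;
          std::uint64_t payload;  // may start at offset 1
      };

      // Fixed layout: aligning the field guarantees valid access, but
      // inserts padding, so old serialized data no longer reads back as-is.
      struct LabelNew {
          std::uint8_t flags;
          alignas(8) std::uint64_t payload;  // now at offset 8
      };

      int main() {
          std::printf("old: %zu bytes, new: %zu bytes\n",
                      sizeof(LabelOld), sizeof(LabelNew));  // 9 vs 16 bytes
      }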

       

      TPC processing performance regression:

      • Final solution: merging transformation maps on the fly into a single flat object: Still WIP

       

      General GPU Processing

       

       

    • 11:20 11:25
      Following up JIRA tickets 5m
      Speaker: Ernst Hellbar (CERN)
    • 11:25 11:30
      TPC ML Clustering 5m
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      Updates on ONNX runtime implementation

      • Draft PR open on alidist for the build of ORT on systems with NVIDIA / ROCm GPUs. Currently checking if we can build for both at the same time (with a change to https://github.com/microsoft/onnxruntime/blob/main/cmake/CMakeLists.txt).
      • Draft PR open on O2 for the ORT library implementation (works, and would deliver ~25-30 million clusters/s; a minimal usage sketch follows).
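
      A minimal sketch of creating an ONNX Runtime session from C++ (the model path and the execution-provider setup are assumptions for illustration; the actual O2 integration differs):

      #include <onnxruntime_cxx_api.h>

      int main() {
          Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "tpc-nn-clustering");
          Ort::SessionOptions opts;
          // Execution-provider setup depends on how ORT was built (CUDA
          // vs ROCm). For a CUDA build one could, e.g., append the CUDA
          // provider for device 0 (assumption, for illustration only):
          // OrtSessionOptionsAppendExecutionProvider_CUDA(opts, 0);
          Ort::Session session(env, "model.onnx", opts);  // hypothetical model file
          return 0;
      }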

       

      Updates on NN side

      • Put QA and NN training into suite submissions: if any change is made to the training data, submission can now be performed for any network architecture and configuration as a "suite" (no need to submit them individually).
      • Evaluation on real data causes some issues with missing files (o2simdigitizerworkflow_configuration.ini, ctpdigits.root).

       

      • Training converges well, also for N-class classification, revealing sampling bugs

       

      Updates on simulation

      • Neutron capture simulation now included, so we can generate loopers and enhance the training data to classify them for exclusion.
    • 11:30 11:35
      ITS Tracking 5m
      Speaker: Matteo Concas (CERN)
      • DCA Fitter on GPU:
        • As the tests worked, created a first showcase application on an analysis task (plot attached for reference): link.
          • Currently, we need to copy the track parameters from the arrow table to construct the tracks. Ideally, this could be done on the GPU as well.
          • Will try to adapt a task (this or similar in HF, where DCA fits are a bottleneck) with the processBulk version (see below).
        • Added processBulk() CPU handler to perform simultaneous fitting (1 Fit/thread) of a vector of TrackParCovs: PR.
          • Added kernel timing. With 10000 fits, 20 blocks, and 512 threads, we are comparable with the CPU; a scaling study is planned (a sketch of the one-fit-per-thread mapping follows after this list).
      • ITS Tracking on GPU:
        • No progress this week; I will start benchmarking on EPN this afternoon.
        • Re-create consistency checks 
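
      A minimal sketch of the one-fit-per-thread bulk mapping (hypothetical names and a stub fit; not the actual O2 DCAFitter API):

      struct TrackParCov { float par[5]; float cov[15]; };
      struct FitResult { float vtx[3]; float chi2; bool ok; };

      // Stub standing in for the real DCA minimization on the device.
      __device__ FitResult fitVertex(const TrackParCov&, const TrackParCov&) {
          FitResult r{};
          r.ok = true;
          return r;
      }

      // One fit per thread over a flat array of track pairs.
      __global__ void fitBulk(const TrackParCov* t0, const TrackParCov* t1,
                              FitResult* out, int nFits) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < nFits) {
              out[i] = fitVertex(t0[i], t1[i]);
          }
      }

      // Launch matching the quoted configuration: 20 blocks x 512 threads
      // = 10240 threads, covering the 10000 fits:
      //   fitBulk<<<20, 512>>>(t0, t1, out, 10000);
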
    • 11:35 11:45
      TPC Track Model Decoding on GPU 10m
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))
    • 11:45 11:55
      Efficient Data Structures 10m
      Speaker: Dr Oliver Gregor Rietmann (CERN)

      Struct of Arrays (SoA) using C++ reflection

      #include <array>
      #include <iostream>
      #include <experimental/meta>
      
      template <typename T, std::size_t N>
      struct struct_of_arrays {
          // Using reflection over the members of T, generate one
          // std::array<Member, N> per member, with the same name;
          // e.g. for T = point: std::array<float, N> x, y, z;
          // (implementation omitted)
      };
      
      struct point {
          float x;
          float y;
          float z;
      };
      
      int main() {
          struct_of_arrays<point, 2> p;
      
          p.x = {1.1f, 2.2f};
          p.y = {3.3f, 4.4f};
          p.z = {5.5f, 6.6f};
      
          std::cout << p.x[0] << ", " << p.x[1] << std::endl;  // output: 1.1, 2.2
      
          return 0;
      }

      Remarks:

      • Reflection will presumably be introduced in C++26.
      • It allows inspection of and operations on the members of a struct (similar to std::tuple).

      Application to O2

      For some computations, especially those suited for GPUs, the SoA memory layout might yield better performance, since neighboring threads then access the same member of consecutive elements in contiguous (coalesced) memory.

      My current work

      • Try to achieve a similar SoA using classical template metaprogramming (a sketch of this direction follows below).
      • Try to understand O2 code.
      • Try to understand reflections.
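
      A minimal sketch of the classical-template route, under the assumption that it follows the usual tuple-of-arrays pattern: without reflection the member names are lost and must be replaced by indices (or hand-written accessors), which is exactly what reflection would avoid.

      #include <array>
      #include <cstddef>
      #include <iostream>
      #include <tuple>

      // SoA as a tuple of arrays: one std::array column per member type.
      template <std::size_t N, typename... Members>
      struct struct_of_arrays_tmp {
          std::tuple<std::array<Members, N>...> columns;

          template <std::size_t I>
          auto& get() { return std::get<I>(columns); }
      };

      int main() {
          // Columns 0, 1, 2 play the roles of point::x, point::y, point::z.
          struct_of_arrays_tmp<2, float, float, float> p;
          p.get<0>() = {1.1f, 2.2f};
          p.get<1>() = {3.3f, 4.4f};
          p.get<2>() = {5.5f, 6.6f};

          std::cout << p.get<0>()[0] << ", " << p.get<0>()[1] << std::endl;  // output: 1.1, 2.2

          return 0;
      }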