Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC

Europe/Zurich
Zoom Meeting ID
61230224927
Host
David Rohr
Useful links
Join via phone
Zoom URL
    • 1
      Discussion
      Speakers: David Rohr (CERN), Giulio Eulisse (CERN)

      Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)

      High priority Framework issues:

      • Fix dropping lifetime::timeframe for good: Still pending: problem with CCDB objects getting lost by DPL leading to "Dropping lifetime::timeframe"; saw at least one occurrence during SW validation.
      • Newly spotted bogus message about decreased oldestPossible counter should be suppressed. Status?
      • Start / Stop / Start: 2 problems on O2 side left:
          • All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
          • TPC ITS matching QC crashing accessing CCDB objects. Not clear if same problem as above, or a problem in the task itself:
      • Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Need to test at P2?
      • Fix problem with ccdb-populator: no idea yet. Since Ole left, someone else will have to take care of it.
      • Metric rate overloading Influx Server fixed.

       

      Global calibration topics:

      • TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.

       

      Sync reconstruction

      • Waiting for RC to test COSMIC replay data set.
      • Waiting for RC to test STOP timeout impact.
      • Change for CCDB broke CPV and some other calibrations at P2. Fix available. Waiting for test of new software.
      • Bug in TPC code led to crash of IDC calibration task at end of run, fix available but not yet tested.

       

      Async reconstruction

      • Remaining oscillation problem: GPUs sometimes get stalled for up to 2 minutes. Checking 2 things:
        • Does the situation get better without GPU monitoring? --> Inconclusive
        • Increased GPU process priority can be used as a mitigation, but it doesn't fully fix the issue.
      • MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
      • Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costing ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias; need to check whether it works.
      • Problem reported by Max/Ruben that analysis fails, since sometimes at EOR we receive TFs with no data and bogus orbit.
        • Giulio will implement tf-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy.

       

      EPN major topics:

      • Fast movement of nodes between async / online without EPN expert intervention.
        • 2 goals I would like to set for the final solution:
          • It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
          • We must not lose which nodes are marked as bad while moving.
      • Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
        • Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
      • Improve DataDistribution file replay performance, currently cannot do faster than 0.8 Hz, cannot test MI100 EPN in Pb-Pb at nominal rate, and cannot test pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
      • DataDistribution distributes data round-robin in absence of backpressure, but it would be better to do it based on buffer utilization, and give more data to MI100 nodes. Now, we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go on MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
      • TfBuilders should stop in ERROR when they lose connection.
      • Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
      • Tentative time for ALMA9 deployment: December 2024.

       

      Other EPN topics:

       

      Full system test issues:

      Topology generation:

      • 2 occurrences where the git repository in the topology cache was corrupted. Not clear how this can happen; also not reproducible. Solved by wiping the cache. Will add a check to the topology scripts for a corrupt repository, which will then be deleted and checked out anew.

       

      AliECS related topics:

      • Extra env var field still not multi-line by default.

       

      GPU ROCm / compiler topics:

      • ROCm 6.2.2 available from AMD, old problems seem fixed, but we see 2 new types of crashes.
        • GPU memory error has been reproduced by AMD, they are investigating.
        • We provided a reproducer setup for the server crash on one EPN for remote access. Waiting for AMD to check.
        • Did not find a workaround yet, need a workaround or fix for the server crash to move forward and update EPN OS.
      • New miscompilation for >ROCm 6.0
        • Waiting for AMD to fix the reproducer we provided (not yet fixed in 6.2.2, but we have a workaround).
      • Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code.
      • Created PR to bump GCC to 14.2 (just to test).
        • Fails in ROOT compilation for AliRoot: AliRoot switched to default ROOT, but CI must rerun?
        • Fails to compile clang on SLC9 for ARM: Status?

       

      TPC GPU Processing 

      • WIP: Use alignas() or find a better solution to fix alignment of monte carlo labels: https://its.cern.ch/jira/browse/O2-5314
      • Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
      • Waiting for TPC to check PR which uses full cluster including average charge and occupancy map errors during seeding.
      • Implemented cluster rejection during CTF decoding; added a class to filter clusters in a custom way, to be used by the EPN for studies; now working to make this custom class work also during clusterization instead of CTF decoding.

       

      TPC processing performance regression:

      • Final solution: merging transformation maps on the fly into a single flat object: Still WIP

       

      General GPU Processing

      • Switched entire GPU code in O2 to using <cstdint> int types, which should fix some more issues with signed vs unsigned char on ARM.

       

       

    • 2
      Following up JIRA tickets
      Speaker: Ernst Hellbar (CERN)
    • 3
      TPC ML Clustering
      Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))

      Updates since last week

      • Made some calculations for a floating-point class label based on track inclination. Modified the sigmoid function so that its "steepness" can be set by two parameters -> gives a better handle on the value at which the network rejects clusters (threshold value)
      • Created a function which calculates the overlap of two or more clusters. Returns 3 values:
        • total area covered by a cluster (units in pad and time direction),
        • area that has overlap with charges of a different MC label, as a fraction of the total area,
        • sum of charge that has overlap with charges of a different MC label, as a fraction of the total charge.
      • Currently in the process of creating a branch for a PR on O2 dev with the NN clusterizer code to have some space for discussion
    • 4
      ITS Tracking
      Speaker: Matteo Concas (CERN)
    • 5
      TPC Track Model Decoding on GPU
      Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))
    • 6
      Efficient Data Structures
      Speaker: Dr Oliver Gregor Rietmann (CERN)

      Efficient Data Structures

      Context

      • Create data structures for controlling the data layout (AoS vs SoA)
      • These data structures should hide the underlying data layout.
      • We want to change the underlying data layout without affecting the code using it.

      Approach using a skeleton class (Stephen)

      /*
      ... omitted code ...
        ... omitted code ...
        ... omitted code ...
      */


      template <template <typename> typename F>
      struct S {
        using tuple_t = std::tuple<int, int, double, int>;
        F<int> x, y;
        F<double> activation;
        F<int> identifier;
      };

      int main() {
        array_wrapper<aos, S>::owner my_array_owner(4);         // aos can be changed to soa
        array_wrapper<aos, S>::handle my_array(my_array_owner); // aos can be changed to soa
        for (int i = 0; i < 4; i++) {
          auto my_element = my_array[i];
          my_element.x = i - 10;
          my_element.y = i + 50;
          my_element.activation = std::sin(static_cast<double>(i));
          my_element.identifier = i;
        }
        return 0;
      }

       

      Advantages:

      • We can provide a nice interface, like the ".x" syntax.
      • Allows a seamless switch between AoS and SoA.

       

      Disadvantages:

      • The skeleton class has to be implemented for every class that we want to support.

      David's variation of this approach

      • We want to support a "placement new", i.e. allocate within an existing buffer.
      • Needs "counting" for structured bindings (Question to Giulio: How did you do it?)