Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC
10:00 → 10:20
Discussion 20m. Speakers: David Rohr (CERN), Giulio Eulisse (CERN)
Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
GPU Benchmarks in HS23 Contribution from ALICE
High priority Framework issues:
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308:
- Timeout tests done, asked RC to repeat some tests since shutdown was not clean.
- Seems like 10/20 to 10/40 yield similar STOP times, so we can go with 10/40, but will try some more settings.
- PR with InfoLogger improvements still WIP - Status?
- Processes crashing at shutdown if STOP timeout was short, not clear if related, but should be checked. This still seems to happen, e.g. in the STOP tests with 1/1 and 5/10.
- Fix problem with ccdb-populator and cpv calib: no idea yet - since Ole left, someone else will have to take care of it.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy.
Sync reconstruction
Async reconstruction
- Need to investigate short GPU stall problem.
- Limiting factor for pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
AliECS related topics:
- Extra env var field still not multi-line by default; created https://its.cern.ch/jira/browse/OGUI-1624 to follow this up separately from other tickets.
GPU ROCm / compiler topics:
- Issues that disappeared but are not yet understood: random server reboot with Alma 9.4, miscompilation with ROCm 6.2, GPU getting stuck when DMA engine turned off, MI100 stalling with ROCm 5.5.
- Problem with building ONNXRuntime with MigraphX support, to be checked.
- Need to find a way to build ONNXRuntime with support for CUDA and for ROCm.
- Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code (see the illustrative sketch after this list).
- LLVM Bump to 20.1: status?
- ROCm 6.4.1 status:
- AMD is checking the reproducer. I have some idea how to narrow down where it miscompiles using different compile flags in per-kernel mode.
- New mitigation for MI100 serialization problem seems to work well.
- New gpu-system package:
- Package is ready, but needs a new version of aliBuild, with some features still to be merged.
- With new clang in new ROCm, receiving warnings about NaN being disabled with -ffast-math, coming e.g. from libfmt. Suppressed this warning.
- Improved Standalone Benchmark CI, now compiles with -Werror and checks for CMake errors, suppressed some warnings from clang / nvcc / CMake.
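Purely illustrative note on the __device__ inline symbol-leak item above (names and macros below are hypothetical, not the O2 code): one common mitigation is to give such helpers internal linkage, so the host compilation pass does not export a symbol for them.

    // Hypothetical sketch: keep device-only inline helpers out of the host symbol table.
    #if defined(__CUDACC__) || defined(__HIPCC__)
    #define GPU_DEVICE_INLINE __device__ __forceinline__
    #else
    #define GPU_DEVICE_INLINE inline
    #endif

    namespace { // anonymous namespace -> internal linkage, no exported host symbol

    GPU_DEVICE_INLINE float weightedSum(float a, float b, float w)
    {
      return a * w + b * (1.0f - w);
    }

    } // namespace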
TPC / GPU Processing
- WIP: Use alignas() or find a better solution to fix alignment of Monte Carlo labels (see the sketch after this list): https://its.cern.ch/jira/browse/O2-5314
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check PR which uses full cluster errors including average charge and occupancy map errors during seeding.
- Final solution: merging transformation maps on the fly into a single flat object: Still WIP
- Pending OpenCL2 issues:
- printf not working due to confirmed bug in clang, fix is being prepared. Prevents further debugging for now.
- Crash in merger, which can be worked around by disabling clang SPIRV optimization. Probably bug in clang, but need to fix printf first to debug.
- Also with optimization disabled, crashing later in TPC merging, need printf to debug.
- Felix debugged the OpenCL clusterization problem to be due to off-by-one offset in NoiseSuppression. Need to check how that can happen only in OpenCL.
- Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
- Need to check the problem with ONNX external memory allocator.
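Regarding the alignas() item above (O2-5314), a minimal sketch of the idea, with a hypothetical label layout and alignment value (not the real O2 MC label type):

    #include <cstdint>
    #include <vector>

    // Hypothetical label type for illustration only; the alignment value is an assumption.
    struct alignas(8) MCLabelSketch {
      int32_t trackID;
      int16_t eventID;
      int16_t sourceID;
    };

    static_assert(alignof(MCLabelSketch) == 8, "labels must keep 8-byte alignment");
    static_assert(sizeof(MCLabelSketch) == 8, "no unexpected padding");

    int main()
    {
      std::vector<MCLabelSketch> labels(16); // contiguous, correctly aligned storage
      return labels.size() == 16 ? 0 : 1;
    }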
Other Topics
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance, currently cannot do faster than 0.8 Hz, cannot test MI100 EPN in Pb-Pb at nominal rate, and cannot test pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA.
- DataDistribution distributes data round-robin in absence of backpressure, but it would be better to do it based on buffer utilization, and give more data to MI100 nodes (see the sketch after this list). Now, we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go on MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
- Slurm bump
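On the buffer-utilization point above, a hedged sketch of what "distribute based on buffer utilization, with a weight for MI100 nodes" could look like (hypothetical types and numbers, not the DataDistribution code):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct EpnState {
      double bufferUsedFraction; // current SHM buffer utilization, 0.0 .. 1.0
      double weight;             // e.g. > 1.0 for MI100 nodes, 1.0 for MI50 nodes
    };

    // Pick the node with the lowest utilization-to-weight ratio instead of round-robin.
    std::size_t pickTarget(const std::vector<EpnState>& epns)
    {
      std::size_t best = 0;
      double bestScore = 1e300;
      for (std::size_t i = 0; i < epns.size(); ++i) {
        double score = epns[i].bufferUsedFraction / epns[i].weight;
        if (score < bestScore) {
          bestScore = score;
          best = i;
        }
      }
      return best;
    }

    int main()
    {
      std::vector<EpnState> epns = {{0.8, 1.0}, {0.5, 1.0}, {0.6, 1.5}}; // MI50, MI50, MI100
      std::printf("next TF goes to node %zu\n", pickTarget(epns)); // prints 2 (the MI100 node)
      return 0;
    }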
Other EPN topics:
- Check NUMA balancing after SHM allocation, sometimes nodes are unbalanced and slow: https://alice.its.cern.ch/jira/browse/EPN-245
- Fix problem with SetProperties string > 1024/1536 bytes: https://alice.its.cern.ch/jira/browse/EPN-134 and https://github.com/FairRootGroup/DDS/issues/440
- After software installation, check whether it succeeded on all online nodes (https://alice.its.cern.ch/jira/browse/EPN-155) and consolidate software deployment scripts in general.
- Improve InfoLogger messages when environment creation fails due to too few EPNs / calib nodes available, ideally report a proper error directly in the ECS GUI: https://alice.its.cern.ch/jira/browse/EPN-65
- Create user for epn2eos experts for debugging: https://alice.its.cern.ch/jira/browse/EPN-383
- EPNs sometimes get in a bad state, with CPU stuck, probably due to AMD driver. To be investigated and reported to AMD.
- Understand different time stamps: https://its.cern.ch/jira/browse/EPN-487
10:20 → 10:25
Following up JIRA tickets 5m. Speaker: Ernst Hellbar (CERN)
Low-priority framework issues https://its.cern.ch/jira/browse/O2-5226
- Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
- Merged workflow fails if outputs defined after being used as input
- needs to be implemented by Giulio
- Cannot override options for individual processors in a workflow
- requires development by Giulio first
- Problem with 2 devices of the same name
- Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Run getting stuck when too many TFs are in flight.
- Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined.
- Support in DPL GUI to send individual START and STOP commands.
- Add additional check on DPL level, to make sure firstOrbit received from all detectors is identical, when creating the TimeFrame first orbit.
- Implement a proper solution to detect whether a device is firstInChain
- Deploy topology with DPL driver
- Start / Stop / Start: 2 problems on O2 side left:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing accessing CCDB objects. Not clear if same problem as above, or a problem in the task itself
PDP-SRC issues
- Check if we can remove dependencies on /home/epn/odc/files in DPL workflows to remove the dependency on the NFS
- reading / writing already disabled
- remaining checks for file existence?
- check after Pb-Pb by removing files and find remaining dependencies
- logWatcher.sh and logFetcher scripts modified by EPN to remove dependencies on the epnlog user
- node access privileges fully determined by e-groups
- new log_access role to allow access in logWatcher mode to retrieve log files, e.g. for on-call shifters - to be validated on STG
- waiting for EPN for further feedback and modifications of the test setup
- new BEAMTYPE for the oxygen period: https://its.cern.ch/jira/browse/O2-5797
- RC asked for a synthetic OO dataset
- RC request for a pO dataset
- question if we can use the same CTP config for pO as for OO -> yes
- O2 code to be checked for pp and PbPb specific variables
- TMinuit errors (Initial matrix not pos. def.) in QC tasks during the fitting procedure
- mainly in vertex x, y fits, but also in some TPC QC tasks
- rare random errors, can reproduce locally
- to see and understand if they can be avoided with extra options in the Fit call, otherwise filter them in the EPN stderrMonitor
10:25 → 10:30
TPC ML Clustering 5m. Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
Framework developments
- https://github.com/AliceO2Group/AliceO2/pull/14378
- Bug-fix for time boundary value filling
- Added deconvolution kernel setting for NN evaluation (makes it fully compatible with current clusterizer for flag setting)
Physics
- Now working on both data types: SC distorted and non-distorted; enforcing 0-5 centrality for better occupancy coverage
- Training data: Combination of 0-100 centrality, woSC (50%) and 0-5 centrality, SC (50%)
- Tested two settings for MC clusterizer, accumulation window (pad, time): (2, 4) and (3, 8)
- For the current CoG: (abs(charge_pos - CoG) < window && mc_id_charge == mc_id_cog) ? accumulate : don't accumulate (see the sketch after this list)
- Conclusion: Large window performs slightly worse
- Most probably due to misassignment of MC clusters to peaks
- More MC CoGs trigger the looper tagger earlier -> Potentially some regions are not tagged with a larger window and the network training data gets some "confused" samples
(left: larger window, right: smaller window)
- Similar observation for CoG position vs. occupancy: At higher occupancies, the smaller window size works better
- Efficiency and fake rate improve for NN, however clone rate goes up for both primaries and secondaries. (Example in figure below: clone rate for secondaries)
Total number of clone tracks stays the same (within the uncertainty) -> Higher clone rate comes from overall reduction in number of tracks
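A small sketch of the accumulation rule quoted above, with hypothetical variable names (these are not the real O2 digit/CoG types):

    #include <cmath>

    struct DigitSketch { float pos; int mcID; float charge; };
    struct CoGSketch   { float pos; int mcID; float charge; };

    // Accumulate a charge deposit into an MC centre-of-gravity only if it lies inside
    // the (pad or time) window and carries the same MC label, mirroring
    // (abs(charge_pos - CoG) < window && mc_id_charge == mc_id_cog).
    inline void accumulate(CoGSketch& cog, const DigitSketch& d, float window)
    {
      if (std::abs(d.pos - cog.pos) < window && d.mcID == cog.mcID) {
        cog.charge += d.charge;
      }
    }

    int main()
    {
      CoGSketch cog{10.0f, 42, 0.0f};
      accumulate(cog, {11.5f, 42, 3.0f}, 2.0f); // inside window, same label -> accumulated
      return cog.charge == 3.0f ? 0 : 1;
    }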
Steps ahead
- Getting the full PID calibration to work
- Working until skim tree creation (at least it runs through)
- More thesis writing
10:30 → 10:35
ITS Tracking 5m. Speaker: Matteo Concas (CERN)
10:35 → 10:40
TPC Track Model Decoding on GPU 5m. Speaker: Gabriele Cimador (Universita e INFN Torino (TO))
News from GPU parameters tuning
- Benchmark backend accepts only a pair of block and grid size per kernel
- Clusterizer step needs to tune only the grid size, as the block size is constrained by static asserts
- Almost modified the backend to also accept a single variable, still a tiny bug to fix
- Developing scripts to automate the tuning process of the whole TPC GPU sync tasks (see the sketch below)
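A hedged sketch of the single-variable scan mentioned above (hypothetical helper, not the actual benchmark backend): with the block size fixed by the static asserts, the tuner only needs to time the kernel for a list of candidate grid sizes and keep the fastest.

    #include <chrono>
    #include <cstdio>
    #include <functional>
    #include <vector>

    // runKernel is assumed to launch the kernel with the given grid/block size and to
    // synchronize the device before returning, so wall-clock timing is meaningful.
    static double timeConfigMs(const std::function<void(int, int)>& runKernel, int grid, int block)
    {
      auto t0 = std::chrono::steady_clock::now();
      runKernel(grid, block);
      auto t1 = std::chrono::steady_clock::now();
      return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    static int bestGridSize(const std::function<void(int, int)>& runKernel, int fixedBlockSize,
                            const std::vector<int>& gridCandidates)
    {
      int best = gridCandidates.front();
      double bestMs = 1e300;
      for (int grid : gridCandidates) {
        double ms = timeConfigMs(runKernel, grid, fixedBlockSize);
        if (ms < bestMs) {
          bestMs = ms;
          best = grid;
        }
      }
      return best;
    }

    int main()
    {
      // Dummy stand-in for a real kernel launch, only to make the sketch self-contained.
      auto fakeKernel = [](int grid, int block) {
        volatile long s = 0;
        for (long i = 0; i < (long)grid * block; ++i) { s += i; }
      };
      std::printf("best grid size: %d\n", bestGridSize(fakeKernel, 256, {30, 60, 120, 240}));
      return 0;
    }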
10:40 → 10:45
Efficient Data Structures 5m. Speaker: Dr Oliver Gregor Rietmann (CERN)
SoA Benchmarks
- We have set up a benchmark repository: https://github.com/cern-nextgen/wp1.7-soa-benchmark
- We spotted the following issues:
- Much lower performance with GCC on one specific example: Fixed by David by force-inlining.
- In some examples, some loops were not vectorized: Fixed
- Baseline and our code are 2x slower with clang (compared to gcc) on one specific example: Still investigating.
Simplify the SoA Code
We got feedback from other developers using our code in their frameworks.
They said the code is too complicated. In particular, too many template parameters have to be specified.
This is an example of how to evaluate a struct of arrays, for each member, at index i.
helper::apply_to_members<M, const array_type&, proxy_type<const_reference, S>>(*this, evaluate_at<F>(i));
The (template) code was simplified. For example, the code above now looks as follows:
helper::apply_to_members<const_reference>(*this, evaluate_at<F>(i));
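For illustration of what such a helper does conceptually (a standalone sketch, not the actual wp1.7-soa-benchmark implementation or its template parameters): apply a callable to every member array of the SoA at one index and collect the results into a "row".

    #include <cstddef>
    #include <tuple>
    #include <vector>

    template <class... Columns>
    struct SoASketch {
      std::tuple<std::vector<Columns>...> columns;

      // Evaluate every member array at index i and return the results as one tuple,
      // i.e. a materialized "element" of the struct-of-arrays.
      template <class F>
      auto apply_to_members(F&& f, std::size_t i) const
      {
        return std::apply([&](auto const&... col) { return std::make_tuple(f(col, i)...); }, columns);
      }
    };

    int main()
    {
      SoASketch<float, int> s;
      std::get<0>(s.columns) = {1.0f, 2.0f};
      std::get<1>(s.columns) = {10, 20};
      auto row = s.apply_to_members([](auto const& col, std::size_t i) { return col[i]; }, 1);
      return std::get<1>(row) == 20 ? 0 : 1; // row is (2.0f, 20)
    }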
ALICE O2 CI-Pipelines on NGT Cluster
- Got the OK from Ricardo Rocha to set up a CI-pipeline for the O2 standalone benchmark running on NGT hardware.
- I am creating a proof-of-concept pipeline doing the following:
- Compile on GitHub hosted Runner
- Copy the executables to the NGT self-hosted runners
- Run the executables on NGT GPUs
- The goal is to have a CI-pipeline that tests and benchmarks O2 standalone in this fashion.
10:45 → 10:50
Following up GPU to-dos 5m. Speaker: Dr Vikas Singhal (Department of Atomic Energy (IN))
Below are the ToDos (file, TODO/FIXME, change made, correctness):
- ./Base/cuda/GPUReconstructionCUDAIncludesSystem.h
- TODO/FIXME: #pragma GCC diagnostic push // FIXME: Is this still needed?
- After changing: //#pragma GCC diagnostic push // VS: It can be removed.
- Correctness: Yes
- ./Base/GPUReconstructionConvert.cxx
- TODO/FIXME: #include "AliHLTTPCRawCluster.h" // TODO: Is this still needed at all, or can it be removed?
- After changing: //#include "AliHLTTPCRawCluster.h" // VS: It cannot be removed. Need to understand the semantics of this and look more into it.
- Correctness: No (compile time error during Make)
- ./Base/opencl/GPUReconstructionOCL.cl
- TODO/FIXME: #include "GPUCommonTypeTraits.h" // TODO: Once possible in OpenCL, should use GPUStdSystemHeaders.h here
- After changing: #include "GPUStdSystemHeaders.h" // VS: It can be replaced by GPUStdSystemHeaders.h
- Correctness: Yes
- ./Base/opencl/GPUReconstructionOCL.cl
- TODO/FIXME: #include "GPUCommonArray.h" // TODO: Same
- After changing: //#include "GPUCommonArray.h" // VS: These declarations are also available in GPUStdSystemHeaders.h
- Correctness: Yes
- ./Global/GPUChainTrackingCompression.cxx
- TODO/FIXME: #include "GPUConstantMem.h" // TODO: Try to get rid of as many GPUConstantMem includes as possible!
- After changing: //#include "GPUConstantMem.h" // VS: Try to get rid of as many GPUConstantMem includes as possible!
- Correctness: No (no error during compilation, but GPU.out is different)
10:50 → 10:55
TPC Clusterization / OpenCL / Highly Ionizing Particles 5m. Speaker: Felix Weiglhofer (Goethe University Frankfurt (DE))
OpenCL
Good news:
- Off-by-one error gone with Clang 20.1.6 + POCL 7
Bad news:
- Build failure in GPUdEdx::fillCluster, due to recursion? Doesn't affect the clusterizer.
- Still some differences in found & cluster count (even with deterministic mode)
Highly Ionizing Particles
- Noisy pad filter already scans TPC pads along the time axis, but neighboring pads are processed in parallel
- Need a separate step that filters HIP tails beforehand
11:00 → 11:20
HS23 20m. Speakers: David Rohr (CERN), Domenico Giordano (CERN)
Gabriele will most likely be the person from ALICE to take care of it.
We could provide a recipe to build a container to run the benchmark, or even some way to just run it from CVMFS.
Most practical approach is most likely:
- We take all the dependencies from CVMFS; this will keep the build times short.
- We build the standalone benchmark for the container, with some option to define which architectures to build for. This will also allow running on future hardware.
- We have to see how to provide the data sets for the standalone benchmark (as part of the container? On CVMFS?). It is not necessary to update often, but we might need to update to support new CUDA/ROCm versions or new GPUs.
We should run the standalone benchmark in sync and async mode, exporting both performances independently, so they get results for online and for offline.
This will load only the GPU + 1 CPU core.
Should be enough for the start.
For loading GPU + CPU fully, we would need to run other algorithms on the CPU and need manual tuning like for async reco on the EPNs, which is infeasible in a generic form for the time being.
We could think about running the standalone benchmark on CPU and GPU in parallel though, which would not be an overly complicated development and could yield a benchmark to fully load the server.