Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC
10:00 AM → 10:20 AM
Discussion (20m). Speakers: David Rohr (CERN), Giulio Eulisse (CERN)
Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Start / Stop / Start: 2 problems on O2 side left:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. The stack trace hints at FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing when accessing CCDB objects. Not clear if this is the same problem as above or a problem in the task itself.
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308:
- RC did some timeout tests, need to repeat in physics, then decide on actual timeouts.
- Ernst will verify the processing of calib data after data processing timeout.
- PR with InfoLogger improvements still WIP.
- Fix problem with ccdb-populator: no idea yet; since Ole left, someone else will have to take over.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?
- ONNXRuntime update merged.
- Problem: the GPU and the non-GPU container build ONNXRuntime with the same hash and upload it to the binary repository.
- Thus the GPU O2 build does not get the ONNXRuntime binary with GPU support.
- Fixed by https://github.com/alisw/alidist/pull/5855 using new alibuild feature of versioned system packages.
- New gRPC still causing trouble with Mesos; SW at P2 uses the old alidist. Can we do anything to help them debug?
Sync reconstruction
- Waiting for RC to test COSMIC replay data set.
- Waiting for RC to test STOP timeout impact.
- Problem with high CPU load due to DPL metrics. Status?
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
- Does the situation get better without GPU monitoring? -> Inconclusive.
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
- Need to investigate short GPU stall problem.
- The limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias; need to check if it works.
AliECS related topics:
- Extra env var field still not multi-line by default; created https://its.cern.ch/jira/browse/OGUI-1624 to follow this up separately from other tickets.
GPU ROCm / compiler topics:
- List of important issues with AMD:
- Issues that disappeared but are not yet understood: random server reboot with Alma 9.4, miscompilation with ROCm 6.2, GPU getting stuck when the DMA engine is turned off, MI100 stalling with ROCm 5.5.
- EPN deployed the fix by AMD a second time, this time it works. Automatic workaround for MI100 removed in O2/dev. Will be deployed with next SW update.
- Problem with building ONNXRuntime with MIGraphX support, to be checked. Status?
- slc9-gpu-builder container was lacking dependencies for building NVIDIA GPU ONNX support with TensorRT. Fixed and tested, but currently not really needed since we cannot build ONNXRuntime with both AMD and NVIDIA support.
- Try to find a better solution for the problem of __device__ inline functions leaking symbols into the host code (see the sketch after this list).
- Once we bump arrow (PR opened by Giulio), we can bump LLVM to 19.
- AMD reported a regression where deterministic mode shows slight CPU vs. GPU differences. Needed to check whether the regression was in O2 code or in ROCm. Fixed; it was a bug in our CMakeLists.txt.
- ROCm 6.4 released. AMD split the driver and the ROCm part; need to check if something has to be done on our side. The synchronization fix shall come with ROCm 6.4.1, so waiting for that point release.
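To make the symbol-leak problem above concrete, here is a minimal, hypothetical sketch (the GPUdevice macro is made up; O2 uses its own annotation macros): when the device annotation compiles away on the host pass, the header definition becomes an ordinary inline host function whose symbol can end up in host objects.

// device_inline_leak.cpp -- sketch of a __device__ inline function from a
// shared header leaking into host code (hypothetical macro names).
#if defined(__CUDACC__) || defined(__HIPCC__)
#define GPUdevice __device__  // device pass: real annotation
#else
#define GPUdevice             // host pass: the annotation disappears
#endif

// Intended as device-only, but on the host pass this is a plain inline
// function that host code can ODR-use, emitting its symbol into the host
// object file.
GPUdevice inline float deviceOnlyHelper(float x) { return x * x; }

int main() {
  // Accidental host use compiles without complaint; one fix is to guard the
  // definition so it does not exist in host translation units at all.
  return static_cast<int>(deviceOnlyHelper(2.0f)) == 4 ? 0 : 1;
}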
TPC / GPU Processing
- WIP: Use alignas() or find a better solution to fix the alignment of Monte Carlo labels: https://its.cern.ch/jira/browse/O2-5314 (see the alignas() sketch after this list).
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check PR which uses full cluster errors including average charge and occupancy map errors during seeding.
- Final solution: merging transformation maps on the fly into a single flat object: Still WIP
- Pending OpenCL2 issues:
- printf not working due to confirmed bug in clang, fix is being prepared. Prevents further debugging for now.
- Crash in merger, which can be worked around by disabling clang SPIRV optimization. Probably bug in clang, but need to fix printf first to debug.
- Also with optimization disabled, crashing later in TPC merging, need printf to debug.
- Felix traced the OpenCL clusterization problem to an off-by-one offset in NoiseSuppression. Need to check how that can happen only in OpenCL.
- Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
- Now almost all GPU constant parameters are available as runtime configuration settings for RTC (some exceptions in the TPC clusterization still need to be preprocessor defines during host compilation; need to check if I can fix that).
- Switched many settings from preprocessor defines to constexpr, which also fixed some bugs where settings were not picked up correctly due to #ifdef typos (see the sketch after this list).
- Need to talk to Gabriele, so that he can include these parameters into his tuning.
- Fixed NN clusterization compilation with CUDA (this is not tested in the CI, since we can build ONNX with either ROCm or CUDA support but not both, so what we build has only ROCm support; should see if that can be fixed in the future).
- GPU Standalone benchmark can now also compile with ONNX support.
- Will add a GPU Standalone Benchmark build to the FullCI, to avoid breaking it in the future.
- Merged Christian's PR for the ONNX stream implementation; would like to check the problem with the ONNX external memory allocator.
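As referenced above, a minimal sketch of the alignas() approach for the label alignment issue; the struct here is hypothetical, not the actual O2 MC label type.

// align_sketch.cpp -- pinning a struct's alignment with alignas() so arrays
// of labels stay correctly aligned inside flat, byte-addressed GPU buffers.
#include <cstdint>
#include <cstdio>

struct alignas(8) MCLabelLike {
  std::int32_t eventId;
  std::int32_t trackId;
};

static_assert(alignof(MCLabelLike) == 8, "unexpected alignment");

int main() {
  MCLabelLike label{1, 42};
  std::printf("sizeof=%zu alignof=%zu\n", sizeof(MCLabelLike), alignof(MCLabelLike));
  return label.trackId == 42 ? 0 : 1;
}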
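And a minimal sketch of the #define-to-constexpr point above (setting names are made up): a typo in an #ifdef silently drops the code path, while a misspelled constexpr name is a compile error.

// constexpr_sketch.cpp -- why constexpr settings are safer than macros.
#include <cstdio>

#define ENABLE_FAST_FIT 1
constexpr bool enableFastFit = true;

int main() {
#ifdef ENABLE_FAST_FTI             // typo: branch is silently compiled out
  std::puts("fast fit (macro)");
#endif
  if constexpr (enableFastFit) {   // a typo in this name would not compile
    std::puts("fast fit (constexpr)");
  }
  return 0;
}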
Other Topics
- Felix is hired and will start on the 1st of June.
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose track of which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide an interface to query the current EPN SHM settings. ETA was July 2023; status?
- Improve DataDistribution file replay performance: currently we cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate and cannot test the pp workflow for 100 EPNs in the FST, since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to MI100 nodes (see the sketch after this list). Now we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
- EPN would like to bump SLURM; for that we also need to bump the async voboxes. I'd suggest moving them to Alma 9 directly. We probably need to sit together to do this. From then on, we also plan to put the vobox handling into the EPN Ansible, so that EPN will take over its maintenance.
- Status? If I understand correctly, SLURM is bumped on staging now?
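As a hypothetical sketch of the buffer-utilization idea above (not DataDistribution code): pick the TfBuilder whose buffer is least utilized relative to its capacity, so MI100 nodes with more memory naturally receive more TFs.

// dd_sched_sketch.cpp -- utilization-based target selection instead of
// round-robin; node names and numbers are made up.
#include <algorithm>
#include <cstdio>
#include <vector>

struct NodeBuffer {
  const char* name;
  double usedBytes;
  double capacityBytes;  // MI100 nodes have a larger capacity
  double utilization() const { return usedBytes / capacityBytes; }
};

NodeBuffer* pickTarget(std::vector<NodeBuffer>& nodes) {
  // Least-utilized buffer wins, so free capacity (not turn order) decides.
  return &*std::min_element(nodes.begin(), nodes.end(),
                            [](const NodeBuffer& a, const NodeBuffer& b) {
                              return a.utilization() < b.utilization();
                            });
}

int main() {
  std::vector<NodeBuffer> nodes{{"mi50-a", 80, 100}, {"mi50-b", 70, 100}, {"mi100-a", 90, 200}};
  for (int tf = 0; tf < 4; ++tf) {
    NodeBuffer* target = pickTarget(nodes);
    target->usedBytes += 10;  // pretend each TF occupies 10 units
    std::printf("TF %d -> %s (utilization now %.2f)\n", tf, target->name, target->utilization());
  }
  return 0;
}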
Other EPN topics:
- Check NUMA balancing after SHM allocation, sometimes nodes are unbalanced and slow: https://alice.its.cern.ch/jira/browse/EPN-245
- Fix problem with SetProperties string > 1024/1536 bytes: https://alice.its.cern.ch/jira/browse/EPN-134 and https://github.com/FairRootGroup/DDS/issues/440
- After software installation, check whether it succeeded on all online nodes (https://alice.its.cern.ch/jira/browse/EPN-155) and consolidate software deployment scripts in general.
- Improve InfoLogger messages when environment creation fails due to too few EPNs / calib nodes available, ideally report a proper error directly in the ECS GUI: https://alice.its.cern.ch/jira/browse/EPN-65
- Create user for epn2eos experts for debugging: https://alice.its.cern.ch/jira/browse/EPN-383
- EPNs sometimes get into a bad state with the CPU stuck, probably due to the AMD driver. To be investigated and reported to AMD.
- Understand different time stamps: https://its.cern.ch/jira/browse/EPN-487
10:20 AM → 10:25 AM
Following up JIRA tickets (5m). Speaker: Ernst Hellbar (CERN)
Low-priority framework issues https://its.cern.ch/jira/browse/O2-5226
- Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to obtain the pure payload; low priority.
- Merged workflow fails if outputs are defined after being used as inputs
- needs to be implemented by Giulio
- Cannot override options for individual processors in a workflow
- requires development by Giulio first
- Problem with 2 devices of the same name
- Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Run getting stuck when too many TFs are in flight.
- Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined (see the sketch after this list).
- Support in DPL GUI to send individual START and STOP commands.
- Add an additional check at the DPL level when creating the TimeFrame first orbit, to make sure the firstOrbit received from all detectors is identical.
- Implement a proper solution to detect whether a device is firstInChain
- Deploy topology with DPL driver
- Automatic creation of CTP/Config/Config by o2-ecs-grp-create call from the ECS
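A minimal, hypothetical illustration of the processor-type point above (this struct is not the real DeviceSpec): carry an explicit type tag and check it instead of parsing the user-defined name.

// devicetype_sketch.cpp -- explicit type tag vs. fragile string matching.
#include <cassert>
#include <string>

enum class ProcessorType { DataProcessor, Dispatcher, Sink };

struct DeviceSpecLike {
  std::string name;    // user-defined, unreliable for type decisions
  ProcessorType type;  // explicit tag set by the framework
};

bool isDispatcher(const DeviceSpecLike& spec) {
  return spec.type == ProcessorType::Dispatcher;
  // fragile alternative: spec.name.find("dispatcher") != std::string::npos
}

int main() {
  DeviceSpecLike spec{"my-renamed-router", ProcessorType::Dispatcher};
  assert(isDispatcher(spec));  // works regardless of the chosen name
  return 0;
}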
PDP-SRC issues
- Check if we can remove dependencies on /home/epn/odc/files in DPL workflows to remove the dependency on the NFS
  - reading / writing already disabled
  - remaining checks for file existence?
  - check after Pb-Pb by removing the files and finding remaining dependencies
- logWatcher.sh and logFetcher scripts modified by EPN to remove dependencies on the epnlog user
  - node access privileges fully determined by e-groups
  - new log_access role to allow access in logWatcher mode to retrieve log files, e.g. for on-call shifters; to be validated on STG
  - waiting for EPN for further feedback and modifications of the test setup
- new BEAMTYPE for oxygen period: https://its.cern.ch/jira/browse/O2-5797
  - beam types
    - p-O and O-O
    - Ne-Ne still to be confirmed
  - scripts to be adjusted to set proper workflow parameters
    - will ask RC to create new configurations; workflow parameters to be set in the configuration instead of the scripts
  - tests using Pb-Pb replay data with new beam types
    - some performance issues due to unset process multiplicities, otherwise everything looks fine
  - RC asked for a synthetic O-O dataset; will check with Sandro if our generators are already set up for this
    - new generator config files for O-O, p-O and Ne-Ne set up by Marco
  - O2 code to be checked for pp- and PbPb-specific variables
10:25 AM → 10:30 AM
TPC ML Clustering (5m). Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
Framework
- PR with recent changes merged: https://github.com/AliceO2Group/AliceO2/pull/14117
- GPU stream implementation for ONNX sessions
- Full support for IO binding, memory arena allocations (tbd) and multiple ORT evaluations (see the IO-binding sketch below)
- CCDB support for fetching NNs
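For reference, a minimal sketch of the IO-binding pattern in the ONNXRuntime C++ API (not the O2 wrapper code; model path and tensor names are placeholders). Binding pre-allocated buffers avoids per-inference copies, and a GPU build binds device memory through the same mechanism.

// ort_iobinding_sketch.cpp -- hypothetical standalone example.
#include <onnxruntime_cxx_api.h>
#include <array>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "sketch");
  Ort::SessionOptions opts;
  Ort::Session session(env, "model.onnx", opts);  // placeholder model path

  // Bind an input tensor that lives in a caller-owned buffer.
  auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  std::vector<float> input(16, 1.f);
  std::array<int64_t, 2> shape{1, 16};
  Ort::Value in = Ort::Value::CreateTensor<float>(mem, input.data(), input.size(),
                                                  shape.data(), shape.size());

  Ort::IoBinding binding(session);
  binding.BindInput("input", in);     // tensor names depend on the model
  binding.BindOutput("output", mem);  // let ORT allocate the output there

  session.Run(Ort::RunOptions{nullptr}, binding);
  std::vector<Ort::Value> outputs = binding.GetOutputValues();
  return outputs.empty() ? 1 : 0;
}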
-----------------------------------
tmux FST script for software validation on epn279
- Runs reconstruction for full FST simulation
- ORT models are loaded correctly from local files (CCDB optional)
- Regular reconstruction
- NN reconstruction: crashes due to invalid reads / writes. The application logic seems correct; investigating why the output gets corrupted.
--------------------------------------
To be improved
- ✅ Initial memory reservation for ONNX models works with a call to AllocateVolatileMemory -> printout confirmed; switched off for now due to missing protections
- ❌ "Secondary" allocation within ONNX (which is the bulk) somehow doesn't use that function yet (pointer addresses are sane and match expectation)
- I know exactly where everything happens: https://github.com/microsoft/onnxruntime/blob/fcf2c588972e748c1a5b72f2aa58f1783023f4d2/onnxruntime/core/framework/bfc_arena.cc#L114
- Tested my own dev package of ONNX with printouts, but for some unknown reason it doesn't use the right function yet
- When using multiple models I see messages of the form
Schema error: Trying to register schema with name Abs (domain: version: 1)
from file /local/workspace/DailyBuilds/DailyO2Physics-slc9/daily-tags.uO7Ig4H9DC/
SOURCES/onnx/v1.17.0-alice2/v1.17.0-alice2/onnx/defs/math/old.cc line 2729,
but it is already registered from file /local/workspace/DailyBuilds/
DailyO2Physics-slc9/daily-tags.uO7Ig4H9DC/SOURCES/onnx/v1.17.0-alice2/
v1.17.0-alice2/onnx/defs/math/old.cc line 2729
--------------------------------------
Next two weeks
- Vacation + LHCP. Talk is ready, just some plots need approval
10:30 AM → 10:35 AM
ITS Tracking (5m). Speaker: Matteo Concas (CERN)
ITS GPU tracking
- General priorities:
- F. Schlepper found some rare (more often in PbPb data) deviations in the deterministic mode, investigations restricted to a few blocks of code (findCellsNeighbours, post-CHEP).
- [WIP] Thrust allocator with external memory management: for the moment it compiles but it does not work, needs dedicated discussion.
- Focusing on porting everything possible to the device, extending the state of the art, and minimising computing on the host.
- Moving vertexing routines to the externally managed memory system. -> WIP
- Currently optimising the o2::its::timeframeGPU initialisation GPU transfers, trying to maximise data reuse and anticipating loads where possible.
- Optimizations:
- Asynchronous parallelisation in the tracklet finding, i.e. multi-streaming for obvious parallelisations.
- Intelligent scheduling and multi-streaming can happen right after.
- Kernel-level optimisations to be investigated.
TODO:
- Reproducer for HIP bug on multi-threaded track fitting: no follow-up yet.
- Fix possible execution issues and known discrepancies when using gpu-reco-workflow: no progress.

DCAFitterGPU
- Deterministic approach via SMatrixGPU on the host, under a particular configuration: no progress.
10:35 AM → 10:45 AM
TPC Track Model Decoding on GPU (10m). Speaker: Gabriele Cimador (Universita e INFN Torino (TO))
Grid search vs. LHS sampling for per-kernel optimisation
MergerTrackFit
Grid Search
Beamtype | IR     | Best Block Size | Best Grid Size | Best Mean Kernel Time (ms) | Search Duration (minutes)
pp       | 100kHz | 64              | 960            | 152.66                     | 29.53
pp       | 2MHz   | 256             | 180            | 1899.74                    | 74.41
PbPb     | 5kHz   | 512             | 60             | 337.52                     | 34.35
PbPb     | 50kHz  | 256             | 960            | 4816.70                    | 137.68

LHS sampling

Beamtype | IR     | Best Block Size | Best Grid Size | Best Mean Kernel Time (ms) | Search Duration (minutes)
pp       | 100kHz | 192             | 720            | 148.55                     | 22.38
pp       | 2MHz   | 192             | 180            | 1871.80                    | 40.78
PbPb     | 5kHz   | 64              | 840            | 336.51                     | 21.06
PbPb     | 50kHz  | 192             | 780            | 5160.10                    | 95.41

MergerSectorRefit

Grid Search

Beamtype | IR     | Best Block Size | Best Grid Size | Best Mean Kernel Time (ms) | Search Duration (minutes)
pp       | 100kHz | 64              | 60             | 4.49                       | 29.80
pp       | 2MHz   | 256             | 900            | 17.17                      | 73.80
PbPb     | 5kHz   | 64              | 900            | 4.76                       | 34.23
PbPb     | 50kHz  | 256             | 960            | 28.94                      | 135.18

LHS sampling

Beamtype | IR     | Best Block Size | Best Grid Size | Best Mean Kernel Time (ms) | Search Duration (minutes)
pp       | 100kHz | 64              | 60             | 4.49                       | 17.88
pp       | 2MHz   | 256             | 720            | 19.22                      | 39.40
PbPb     | 5kHz   | 64              | 660            | 4.80                       | 22.51
PbPb     | 50kHz  | 256             | 120            | 32.18                      | 59.51

MergerCollect

Grid Search

Beamtype | IR     | Best Block Size | Best Grid Size | Best Mean Kernel Time (ms) | Search Duration (minutes)
pp       | 100kHz | 128             | 900            | 24.64                      | 30.05
pp       | 2MHz   | 256             | 300            | 241.75                     | 71.96
PbPb     | 5kHz   | 256             | 300            | 52.54                      | 34.05
PbPb     | 50kHz  | 256             | 300            | 368.05                     | 128.08

LHS sampling

Beamtype | IR     | Best Block Size | Best Grid Size | Best Mean Kernel Time (ms) | Search Duration (minutes)
pp       | 100kHz | 192             | 480            | 22.86                      | 23.06
pp       | 2MHz   | 256             | 240            | 271.63                     | 63.20
PbPb     | 5kHz   | 320             | 540            | 44.60                      | 26.70
PbPb     | 50kHz  | 256             | 450            | 341.04                     | 87.90

MergerFollowLoopers / CompressionKernels_step0 dependency
Checked with --debug 0 to serialize every kernel. The dependency seems mitigated but is still present; will investigate further. In the meantime, the two kernels are treated as one step with 4 parameters.
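To illustrate why LHS needs fewer benchmark runs than the full grid, a minimal C++ sketch of Latin hypercube sampling over the (block size, grid size) space; the candidate values are made up, not the ones used above.

// lhs_sketch.cpp -- each stratum per dimension is used exactly once, so n
// samples cover both axes evenly, versus n*n runs for the full grid search.
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

int main() {
  std::vector<int> blockSizes{64, 128, 192, 256, 320, 384, 448, 512};
  std::vector<int> gridSizes{60, 120, 240, 360, 480, 600, 720, 840};
  const size_t n = blockSizes.size();  // one sample per stratum

  // Pair stratum i of the block-size axis with a random stratum of the
  // grid-size axis, each used exactly once.
  std::vector<size_t> perm(n);
  for (size_t i = 0; i < n; ++i) perm[i] = i;
  std::mt19937 rng(42);
  std::shuffle(perm.begin(), perm.end(), rng);

  for (size_t i = 0; i < n; ++i) {
    // Each pair would be benchmarked; the full grid would need 64 runs,
    // while LHS covers both axes with only 8.
    std::printf("benchmark block=%d grid=%d\n", blockSizes[i], gridSizes[perm[i]]);
  }
  return 0;
}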
10:45 AM → 10:55 AM
Efficient Data Structures (10m). Speaker: Dr Oliver Gregor Rietmann (CERN)
NextGenTrigger Hardware
- Can be accessed via a Kubernetes cluster
- Thus the code needs to run in a container
- Might require explicit approval for non-NGT people (to be checked)
- Needs Kubernetes installed and configured correctly (on a personal machine or on lxplus)
How to Run Our Code
Create a file "session.yml" defining a pod. You might have to change the marked lines.
apiVersion: v1
kind: Pod
metadata:
  name: session-1
  labels:
    mount-eos: "true"
    inject-oauth2-token-pipeline: "true"
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  containers:
  - name: session-1
    image: registry.cern.ch/ngt-wp1.7/wp1.7-soa-wrapper:latest
    command: ["sleep", "infinity"]
    resources:
      limits:
        nvidia.com/gpu: 1
    securityContext:
      runAsUser: 0
      runAsGroup: 0

In the terminal, run the following commands to start the pod and enter an interactive session.
kubectl apply -f session.yml
kubectl exec -it session-1 -- /bin/bash

Now you can "git clone" your code and build it.
TODOs
- Persistent volume claim (PVC) to save stuff between sessions
- A secret for the ssh key when we use git in the container
- Edit code remotely, e.g. with Visual Studio Code
- Merge everything into a YAML file that works out of the box (maybe using a Helm chart?)
- Find a setting that allows comparable benchmarks
10:55 AM → 11:00 AM
Following up GPU to-dos (5m). Speaker: Dr Vikas Singhal (Department of Atomic Energy (IN))
- Compiled the O2 GPU TPC Standalone Benchmark following O2-5321.
- Used David's data set to run the benchmark.
- Ran ./ca on the o2-pbpb-100 and o2-pp-10 data with --debug 1; the output contains timing information for the different kernels.
- To visualise, tried to run with --display, but could not open the display.
[vsinghal@epn000 alisoft]$ source ~/alisoft/sw/SOURCES/O2/daily-20250326-0000/daily-20250326-0000/GPU/GPUTracking/Standalone/cmake/prepare.sh
[vsinghal@epn000 alisoft]$ cd ../standalone/
[vsinghal@epn000 standalone]$ ./ca -e o2-pp-10 --gpuDevice 0 --display
Reading events from Directory events/o2-pp-10
GPU Tracker library loaded and GPU tracker object created sucessfully
Created GPUReconstruction instance for device type HIP (3)
Read event settings from dir events/o2-pp-10/ (solenoidBz: -5.006680, home-made events 0, constBz 0, maxTimeBin 57025)
Standalone Test Framework for CA Tracker - Using GPU
Enabling event display (X11 backend)
HIP Initialisation successfull (Device 0: AMD Instinct MI50/MI60 (Frequency 1725000, Cores 60), 6442516480 / 6442516480 bytes host / global memory, Stack frame 8192, Constant memory 26887)
GPU Tracker initialization successfull
Rescaling buffer size limits from 20500000000 to 6442516480 bytes of memory (factor 0.314269)
Using random seed 414215023
Loading time: 576,053 us
Processing Event 0
Trigger handling only possible with TPC Dense Link Based data, received version 2, disabling
Event has 2794 8kb TPC ZS pages (version 2), 388869 digits
Event has 46278 TPC Clusters, 0 TRD Tracklets
Output Tracks: 502 (0 / 31692 / 0 / 46278 clusters (fitted / attached / adjacent / total) - O2 format)
could not open display
Error occured
Maximum Memory Allocation: Host 1,488,519,168 / Device 1,521,374,144
HIP Uninitialized
[vsinghal@epn000 standalone]$

Used -XY options during ssh, but how to enable the display via srun?
- Any documentation available for the deterministic and non-deterministic modes?