Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC
10:00 → 10:20  Discussion (20m)
Speakers: David Rohr (CERN), Giulio Eulisse (CERN)
Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Start / Stop / Start: 2 problems on O2 side left:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing when accessing CCDB objects. Not clear whether it is the same problem as above or a problem in the task itself.
- Stabilize calibration / fix EoS: new scheme: https://its.cern.ch/jira/browse/O2-4308. Tentative fix by Giulio available; still need to find time to test it.
- Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?
Sync reconstruction
- Waiting for RC to test COSMIC replay data set.
- Waiting for RC to test STOP timeout impact.
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
- does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
- MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
- Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
AliECS related topics:
- Extra env var field still not multi-line by default; created https://its.cern.ch/jira/browse/OGUI-1624 to follow this up separately from other tickets.
GPU ROCm / compiler topics:
- List of important issues with AMD:
- Random server reboots on MI100: tried several workarounds, but no solution found so far. Giada spotted some weird FairMQ problems in the large-scale test, which could be due to memory corruption.
- Random crashes on MI100 due to a memory error; can be worked around by serializing all kernels and DMA transfers, at the cost of ~20% performance.
- Miscompilation leading to crashes, worked around by changing our code, but compiler bug still there.
- Provide an RPM ROCm version with all fixes, so that we don't need to compile clang manually with custom patches.
- Proper way to enable amdgpu-function-calls instead of hacking AMD scripts and binaries.
- hipHostRegister has become very slow when more than 1 GPU visible (via ROCR_VISIBLE_DEVICES).
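As a minimal illustration of the hipHostRegister report above (a hypothetical timing sketch, not our production code; the buffer size is arbitrary), one can time the call with different ROCR_VISIBLE_DEVICES settings:

#include <hip/hip_runtime.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    std::vector<char> buf(1ull << 30); // 1 GiB host buffer (arbitrary size)
    auto t0 = std::chrono::steady_clock::now();
    // page-lock the buffer; reportedly much slower when >1 GPU is visible
    hipHostRegister(buf.data(), buf.size(), hipHostRegisterDefault);
    auto t1 = std::chrono::steady_clock::now();
    std::printf("hipHostRegister took %lld ms\n", (long long)
        std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());
    hipHostUnregister(buf.data());
    return 0;
}

Run e.g. with ROCR_VISIBLE_DEVICES=0 and then ROCR_VISIBLE_DEVICES=0,1 to compare.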
- EPNs provided 3 servers for ROCm 6.2.4 / Alma 9.4.
- Set up the reproducer for the reboot there; it works reliably. Passed instructions to AMD on how to run it.
- EPN also provided 3 servers with the new minor versions ROCm 6.3.2 / Alma 9.5.
- Have been trying to reproduce it on these 3 servers for 2 days; it didn't happen.
- Exact same software for sure crashes with ROCm 6.3.1 / Alma 9.4.
- Also tried manual reboots of the servers in between, thinking they might sometimes be in a good and sometimes in a bad state.
- Perhaps it is really fixed now, but we want to gather more statistics before we can tell AMD to stop looking into this and to focus instead on the memory error leading to application crashes.
- Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code.
TPC / GPU Processing
- WIP: Use alignas() or find a better solution to fix the alignment of Monte Carlo labels: https://its.cern.ch/jira/browse/O2-5314 (see the alignas sketch after this list).
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check PR which uses full cluster errors including average charge and occupancy map errors during seeding.
- Final solution: merging transformation maps on the fly into a single flat object: Still WIP
- Pending OpenCL2 issues:
- printf not working due to confirmed bug in clang, fix is being prepared. Prevents further debugging for now.
- GPU MemClean not working in TPC clusterization, need to debug.
- Crash in merger, which can be worked around by disabling clang SPIRV optimization. Probably bug in clang, but need to fix printf first to debug.
- Also with optimization disabled, crashing later in TPC merging, need printf to debug.
- Solved memset issue with OpenCL, but Clusterizer still gives slightly different clusters running on OpenCL. Sent the logs to Felix.
- Next high priority topic: Improvements for cluster sharing and cluster attachment at lower TPC pad rows.
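As referenced above, a minimal sketch of the alignas() approach (the struct and its members here are hypothetical placeholders, not the actual O2 label type):

#include <cstdint>

// force a fixed 8-byte alignment so host and GPU code agree on the layout
// regardless of compiler defaults
struct alignas(8) MCLabel {
    int32_t trackID;
    int16_t eventID;
    int16_t sourceID;
};
static_assert(alignof(MCLabel) == 8 && sizeof(MCLabel) == 8, "unexpected layout");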
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide an interface to query current EPN SHM settings - ETA July 2023. Status?
- Improve DataDistribution file replay performance: currently cannot replay faster than 0.8 Hz, so we cannot test the MI100 EPN in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
Other EPN topics:
- Check NUMA balancing after SHM allocation, sometimes nodes are unbalanced and slow: https://alice.its.cern.ch/jira/browse/EPN-245
- Fix problem with SetProperties string > 1024/1536 bytes: https://alice.its.cern.ch/jira/browse/EPN-134 and https://github.com/FairRootGroup/DDS/issues/440
- After software installation, check whether it succeeded on all online nodes (https://alice.its.cern.ch/jira/browse/EPN-155) and consolidate software deployment scripts in general.
- Improve InfoLogger messages when environment creation fails due to too few EPNs / calib nodes available, ideally report a proper error directly in the ECS GUI: https://alice.its.cern.ch/jira/browse/EPN-65
- Create user for epn2eos experts for debugging: https://alice.its.cern.ch/jira/browse/EPN-383
- EPNs sometimes get in a bad state, with CPU stuck, probably due to AMD driver. To be investigated and reported to AMD.
- Understand different time stamps: https://its.cern.ch/jira/browse/EPN-487
10:20 → 10:25  Following up JIRA tickets (5m)
Speaker: Ernst Hellbar (CERN)
Low-priority framework issues https://its.cern.ch/jira/browse/O2-5226
- Grafana metrics: might want to introduce additional rate metrics that subtract the header overhead to show the pure payload rate: low priority.
- Merged workflow fails if outputs defined after being used as input
- needs to be implemented by Giulio
- Cannot override options for individual processors in a workflow
- requires development by Giulio first
- Problem with 2 devices of the same name
- Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Run getting stuck when too many TFs are in flight.
- Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined.
- Support in DPL GUI to send individual START and STOP commands.
- Add additional check on DPL level, to make sure firstOrbit received from all detectors is identical, when creating the TimeFrame first orbit.
- Implement a proper solution to detect whether a device is firstInChain
- Deploy topology with DPL driver
PDP-SRC issues
- Check if we can remove dependencies on /home/epn/odc/files in DPL workflows to remove the dependency on the NFS
  - reading / writing already disabled
  - remaining checks for file existence?
    - check after Pb-Pb by removing the files and finding remaining dependencies
- logWatcher.sh and logFetcher scripts modified by EPN to remove dependencies on the epnlog user
  - node access privileges fully determined by e-groups
  - new log_access role to allow access in logWatcher mode to retrieve log files, e.g. for on-call shifters - to be validated on STG
  - waiting for EPN for further feedback and modifications of the test setup
- Promote critical DPL Errors to ILG Ops level
  - new "critical" severity in latest FairLogger + unrelated changes (string -> string_view)
  - new InfoLogger tag (2.8.1) compatible with the new severity and latest FairLogger changes
  - known IL messages for QC shifter
    - RC will follow up with detectors if some of the issues are fixed
  - todo
    - see if there are other critical stderr messages
    - promote documented messages
    - check content of Fatal and Critical messages, make sure they are appropriate for shifters, eventually putting extra information in an Error message right in front
- new BEAMTYPE for oxygen period - https://its.cern.ch/jira/browse/O2-5797
  - beam types
    - p-O and O-O
    - Ne-Ne still to be confirmed
  - scripts to be adjusted to set proper workflow parameters
  - O2 code to be checked for pp and PbPb specific variables
10:25 → 10:30  TPC ML Clustering (5m)
Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
- Discovered problem in the creation of training data for the NN: noise suppression and deconvolution change digits in place
- Dumping digits from the reco workflow now. First training: with noise suppression, without deconvolution -> almost no effect on the resulting chi2 distribution (max. 7x7x7 window). Noise suppression doesn't change digits.
- Retraining with deconvolution kernel now (this changes digits in place)
- Will also try reconstruction on ideal clusters to see if the chi2 is also worse here -> If that's the case, the network cannot get better...
-------------------------
Adding stuff here for next week.
- Finally found the bug that was killing the performance: wrong boundary conditions in the reconstruction and explicit casting from uint8_t to int / float using static_cast
- Used NN without deconvolution kernel and GPUCF with deconvolution (standard)
- Newest insight
- CNN works better for initial layers than fully connected
- 3D input works much better for deconvolution than 2D input. Current input size: 5x11x11 (row x pad x time)
- Now comparing the reconstruction quality.
- Similar results for non-distorted and distorted data
- NN still appears a bit worse by the official metrics (e.g. chi2), but I don't think that's correct (see plots)
- Resulting Chi2/NCL distribution
- Checking Chi2 at high NCL region
- Checked some individual matched tracks, their Chi2/Ncl and the resulting MSE error in Z direction
- The network is consistently better in the MSE even if the chi2/Ncl is way worse for the NN
- This means the resulting differences in Chi2/NCl must come from some calibrations and potentially a (self-made) bug in the reco
10:30 → 10:35  ITS Tracking (5m)
Speaker: Matteo Concas (CERN)
10:35 → 10:45  TPC Track Model Decoding on GPU (10m)
Speaker: Gabriele Cimador (Universita e INFN Torino (TO))
10:45 → 10:55  Efficient Data Structures (10m)
Speaker: Dr Oliver Gregor Rietmann (CERN)
Overview
The goal is to develop a C++ library that abstracts the data layout of an array. Possible data layouts include array of structs (AoS) and struct of arrays (SoA); see the following example.
#include <cstddef> // std::size_t

constexpr std::size_t N = 42;

struct Point { int x, y, z; };
Point point_aos[N]; // data layout: AoS

template <std::size_t N>
struct PointSoA {
    int x[N];
    int y[N];
    int z[N];
};
PointSoA<N> point_soa; // data layout: SoA
We aim to write a class that takes the struct Point, a data layout, and possibly more arguments. The class then allows for AoS-style access but stores the data in a possibly different layout, thereby hiding the data layout. This class is called "wrapper":
template <
    template <class> class F,                  // container
    template <template <class> class> class S, // e.g. "Point"
    layout L                                   // data layout
>
struct wrapper;

The first template parameter F could, for example, be one of the following:
- std::vector
- std::span
- pre-existing buffer of type char*
- std::pmr::vector
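To illustrate the intended behavior, here is a minimal sketch of an SoA specialization that still offers AoS-style element access (the names vec, layout, and the reference proxy are assumptions for this example, not the actual library API; the proxy is hard-coded to Point's members for brevity):

#include <cstddef>
#include <vector>

enum class layout { aos, soa };

// template form of "Point": each member becomes a container of int
template <template <class> class F>
struct Point { F<int> x, y, z; };

template <class T> using vec = std::vector<T>;

template <template <class> class F,
          template <template <class> class> class S,
          layout L>
struct wrapper;

// SoA specialization: one container per member, but operator[] returns an
// AoS-looking proxy referring to element i
template <template <class> class F,
          template <template <class> class> class S>
struct wrapper<F, S, layout::soa> {
    S<F> data;
    struct reference { int &x, &y, &z; };
    reference operator[](std::size_t i) { return {data.x[i], data.y[i], data.z[i]}; }
};

// Usage:
// wrapper<vec, Point, layout::soa> w{{vec<int>(42), vec<int>(42), vec<int>(42)}};
// w[0].x = 1; // AoS-style access, SoA storage underneath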
New: std::span on the Device
Idea:
- F1 is implicitly convertible to F2 --> wrapper<F1, S, L> implicitly convertible to wrapper<F2, S, L>
- Example: F1 = std::vector and F2 = std::span
template <template <class> class F, template <template <class> class> class S>
struct wrapper<F, S, layout::aos> {
    // ...
    template <template <class> class F_out>
    operator wrapper<F_out, S, layout::aos>() { return {data}; }
    // ...
};

Remarks:
- std::span can point to device memory
- std::span can be used in CUDA kernels
- Example above is only AoS, but SoA is also quite trivial
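Since the remark above notes that the SoA case is also quite trivial, here is a sketch of what the analogous SoA conversion could look like (member names are again hard-coded to the Point example; the actual implementation may differ):

template <template <class> class F, template <template <class> class> class S>
struct wrapper<F, S, layout::soa> {
    S<F> data;
    // convert each member container individually, e.g. std::vector -> std::span
    template <template <class> class F_out>
    operator wrapper<F_out, S, layout::soa>() {
        return {{data.x, data.y, data.z}};
    }
};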
New: Use std::shared_ptr as "Container" F on the Host
For testing, we need a simple container that saves boilerplate code:
template <class T>
struct Allocator {
    using value_type = T;
    Allocator() = default;
    template <class U>
    Allocator(const Allocator<U>&) {}
    T* allocate(std::size_t n) {
        T* ptr;
        // allocate managed memory, accessible from both host and device
        cuda_malloc_managed((void**) &ptr, sizeof(T) * n);
        return ptr;
    }
    void deallocate(T* p, std::size_t n) noexcept { cuda_free(p); }
};

// Use as "Container" type F like
// std::shared_ptr<int[]> ptr = std::allocate_shared<int[]>(Allocator<int>{}, N);

Remarks:
- std::shared_ptr doesn't know the size of the array it is pointing to
- Not implicitly convertible to std::span
- Same container type for different kinds of device memory
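A short usage sketch tying these remarks together (assumes C++20 array support in std::allocate_shared and the Allocator above; the span size must be supplied by hand since shared_ptr does not store it):

#include <cstddef>
#include <memory>
#include <span>

constexpr std::size_t N = 42;

// managed-memory allocation through the custom allocator
std::shared_ptr<int[]> ptr = std::allocate_shared<int[]>(Allocator<int>{}, N);

// no implicit conversion: the element count must be passed explicitly
std::span<int> view{ptr.get(), N};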