Alice Weekly Meeting: Software for Hardware Accelerators / PDP-SRC
1. Discussion
Speakers: David Rohr (CERN), Giulio Eulisse (CERN)
Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority Framework issues:
- Start / Stop / Start: 2 problems on O2 side left:
- All processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC ITS matching QC crashing accessing CCDB objects. Not clear if same problem as above, or a problem in the task itself:
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308: Reported 2 issues to Giulio, waiting for a fix.
- Fix problem with ccdb-populator: no idea yet - since Ole left, someone else will have to take care of it.
- TF-status message (from https://github.com/AliceO2Group/AliceO2/pull/13495) sent by readout-proxy. Status?
Sync reconstruction
- Waiting for RC to test COSMIC replay data set.
- Waiting for RC to test STOP timeout impact.
- Had first fill with leveling at 50 kHz yesterday, run 559933.
- 255 MI50 nodes 100% busy, 64 MI100 nodes 80% busy. --> 17 MI50 equivalent nodes margin out of 340 in production (5%).
- Checked number of clusters in online processing (before CTF cluster removal, compared to 2023 47 kHz Pb-Pb): now 10% fewer clusters. Not clear why the processing requirements are still higher than with replay of last year's data.
- From the software side, smooth data-taking; today there is the possibility to install a new fix by Ruben for corrupt ITS/MFT data.
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes. Checking 2 things:
- does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue (see the sketch after this list).
- MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
- Limiting factor for the pp workflow is now the TPC time series, which is too slow and creates backpressure (costs ~20% performance on EPNs). Enabled multi-threading as recommended by Matthias - need to check if it works.
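As a side note on the priority mitigation mentioned above, a minimal, hypothetical sketch of how a process can raise its own scheduling priority via the standard setpriority(2) call; the nice value of -10 is an arbitrary example, raising priority this way needs CAP_SYS_NICE or root, and this is not the actual EPN/ODC mechanism.

#include <sys/resource.h> // getpriority, setpriority, PRIO_PROCESS
#include <unistd.h>       // getpid
#include <cerrno>
#include <cstdio>

int main() {
  const pid_t pid = getpid();
  const int targetNice = -10; // arbitrary example; negative values need CAP_SYS_NICE or root

  errno = 0;
  const int before = getpriority(PRIO_PROCESS, pid);
  if (errno != 0) {
    std::perror("getpriority");
    return 1;
  }

  // Request a higher scheduling priority (lower nice value) for this process.
  if (setpriority(PRIO_PROCESS, pid, targetNice) != 0) {
    std::perror("setpriority"); // typically EACCES / EPERM when not privileged
    return 1;
  }

  std::printf("nice level changed from %d to %d\n", before, getpriority(PRIO_PROCESS, pid));
  return 0;
}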
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose track of which nodes are marked as bad while moving.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide interface to query current EPN SHM settings - ETA July 2023, Status?
- Improve DataDistribution file replay performance, currently cannot do faster than 0.8 Hz, cannot test MI100 EPN in Pb-Pb at nominal rate, and cannot test pp workflow for 100 EPNs in FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to MI100 nodes (see the sketch after this list). Now, we are driving the MI50 nodes at 100% capacity with backpressure, and then only backpressured TFs go to MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point. https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
- Allow epn user and grid user to set nice level of processes: https://its.cern.ch/jira/browse/EPN-349
- Tentative time for ALMA9 deployment: December 2024.
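Regarding the buffer-utilization-based TF distribution above, a minimal illustrative sketch of the two policies; this is not DataDistribution code, and the NodeState struct, the weight field and the scoring are assumptions made purely for illustration.

#include <cstddef>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

// Hypothetical view of an EPN node as seen by the TF scheduler (illustrative only).
struct NodeState {
  std::string name;
  double bufferUtilization; // fraction of the TF buffer currently in use, 0.0 .. 1.0
  double weight;            // relative processing power, e.g. MI100 > MI50
};

// Current behaviour (simplified): plain round-robin, ignoring buffer state.
std::size_t pickRoundRobin(std::size_t& cursor, std::size_t nNodes) {
  return cursor++ % nNodes;
}

// Proposed behaviour: send the next TF to the node with the lowest weighted buffer
// utilization, so faster (MI100) nodes receive more TFs before backpressure builds up.
std::size_t pickByUtilization(const std::vector<NodeState>& nodes) {
  std::size_t best = 0;
  double bestScore = std::numeric_limits<double>::max();
  for (std::size_t i = 0; i < nodes.size(); ++i) {
    const double score = nodes[i].bufferUtilization / nodes[i].weight;
    if (score < bestScore) {
      bestScore = score;
      best = i;
    }
  }
  return best;
}

int main() {
  std::vector<NodeState> nodes{{"mi50-01", 0.9, 1.0}, {"mi50-02", 0.8, 1.0}, {"mi100-01", 0.3, 1.6}};
  std::cout << "next TF goes to " << nodes[pickByUtilization(nodes)].name << "\n"; // mi100-01
  return 0;
}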
Other EPN topics:
- Check NUMA balancing after SHM allocation, sometimes nodes are unbalanced and slow: https://alice.its.cern.ch/jira/browse/EPN-245
- Fix problem with SetProperties string > 1024/1536 bytes: https://alice.its.cern.ch/jira/browse/EPN-134 and https://github.com/FairRootGroup/DDS/issues/440
- After software installation, check whether it succeeded on all online nodes (https://alice.its.cern.ch/jira/browse/EPN-155) and consolidate software deployment scripts in general.
- Improve InfoLogger messages when environment creation fails due to too few EPNs / calib nodes available, ideally report a proper error directly in the ECS GUI: https://alice.its.cern.ch/jira/browse/EPN-65
- Create user for epn2eos experts for debugging: https://alice.its.cern.ch/jira/browse/EPN-383
- EPNs sometimes get in a bad state, with CPU stuck, probably due to AMD driver. To be investigated and reported to AMD.
- Understand different time stamps: https://its.cern.ch/jira/browse/EPN-487
AliECS related topics:
- Extra env var field still not multi-line by default.
GPU ROCm / compiler topics:
- ROCm 6.2.2 available from AMD, old problems seem fixed, but we see 2 new types of crashes
- No update
- New miscompilation for >ROCm 6.0
- Waiting for AMD to fix the reproducer we provided (not yet fixed in 6.2.2, but we have a workaround).
- Try to find a better solution for the problem with __device__ inline functions leaking symbols in the host code.
- Merged the GPUDataTypes and GPUDataTypeHeaders libraries, since the split was causing trouble for Jens with the ROOT DebugStreamers, because I had a hack in there with respect to ROOT dictionaries for the TRD GPU track model.
- A lot of progress for C++ for OpenCL / OpenCL 3.0 backend, using POCL (Portable OpenCL C++ runtime) and Clang 19, we can run our SPIR-V IL code on CPU. Clusterization works, but then TPC tracking crashes (under investigation).
- Plan is to fix this, and then remove all the obsolete support for ROOT 5 / AliRoot build / OpenCL 1.2 / C++ < C++17. Run 2 data will still be supported by using the existing raw dump in AliRoot and loading it in the standalone benchmark.
TPC GPU Processing
- WIP: Use alignas() or find a better solution to fix alignment of Monte Carlo labels: https://its.cern.ch/jira/browse/O2-5314
- Waiting for TPC to fix bogus TPC transformations for good, then we can revert the workaround.
- Waiting for TPC to check PR which uses full cluster including average charge and occupancy map errors during seeding.
- Changing the IFC inner radius filter to remove the cluster instead of increasing its error. Requested by Ruben.
- Next PR will allow removing multiple clusters in the same pad row and keeping only the best one (see the sketch below).
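A small illustrative sketch of the "keep only the best cluster per pad row" step mentioned above; the AttachedCluster struct and the residual-based quality criterion are assumptions for illustration, not the actual O2 data structures.

#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// Hypothetical attached-cluster record; in O2 the attachment lives in the GPU
// tracking data structures, this struct is purely illustrative.
struct AttachedCluster {
  std::uint8_t padRow;
  float residual; // assumed quality measure: distance of the cluster to the track
  std::uint32_t clusterId;
};

// Keep only the best (smallest-residual) cluster per pad row, drop the others.
std::vector<AttachedCluster> keepBestPerPadRow(const std::vector<AttachedCluster>& clusters) {
  std::unordered_map<unsigned int, AttachedCluster> best;
  for (const auto& c : clusters) {
    auto it = best.find(c.padRow);
    if (it == best.end() || c.residual < it->second.residual) {
      best[c.padRow] = c;
    }
  }
  std::vector<AttachedCluster> out;
  out.reserve(best.size());
  for (const auto& entry : best) {
    out.push_back(entry.second);
  }
  return out;
}

int main() {
  std::vector<AttachedCluster> clusters{{10, 0.5f, 1}, {10, 0.2f, 2}, {11, 0.3f, 3}};
  for (const auto& c : keepBestPerPadRow(clusters)) {
    std::cout << "row " << int(c.padRow) << " keeps cluster " << c.clusterId << "\n"; // row 10 -> 2, row 11 -> 3
  }
  return 0;
}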
TPC processing performance regression:
- Final solution: merging transformation maps on the fly into a single flat object: Still WIP
General GPU Processing
2. Following up JIRA tickets
Speaker: Ernst Hellbar (CERN)
Low-priority framework issues https://its.cern.ch/jira/browse/O2-5226
- Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
- Backpressure reporting when there is only 1 input channel: no progress
- Merged workflow fails if outputs defined after being used as input
- needs to be implemented by Giulio
- Cannot override options for individual processors in a workflow
- requires development by Giulio first
- Problem with 2 devices of the same name
- Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Run getting stuck when too many TFs are in flight.
- Do not use string comparisons to derive the processor type, since DeviceSpec.name is user-defined.
- Support in DPL GUI to send individual START and STOP commands.
- DPL sending SHM metrics for all devices, not only input proxy
- moved checking for the readout-proxy name from when sending the metric to when registering the metric
- sending the metric only when registered
- tested locally with GUI
- calibrations on aggregator nodes have different proxy names
- standalone calibrations sometimes have different input proxy names
- Some improvements to ease debugging with the GUI:
- ROOT messages in the output of a workflow should not generally be interpreted as errors
- https://github.com/AliceO2Group/AliceO2/pull/13683
- assignment of proper severity for ROOT logs in Utilities/EPNMonitoring/src/EPNstderrMonitor.cxx for the EPN InfoLogger
- assignment of proper severity for ROOT logs printed in terminal output (+GUI)
- new variable DeviceInfo::logLevel, set to --severity from the command line arguments
- now determines the minimum severity of printed logs for each device
- independent of DeviceControl::logLevel, which determines the filter severity in the GUI
- merged
- Add additional check on DPL level, to make sure firstOrbit received from all detectors is identical, when creating the TimeFrame first orbit.
- Implement a proper solution to detect whether a device is firstInChain
- Deploy topology with DPL driver
PDP-SRC issues
- Check if we can remove dependencies on /home/epn/odc/files in DPL workflows to remove the dependency on the NFS
- reading / writing already disabled
- remaining checks for file existence?
- check after Pb-Pb by removing files and finding remaining dependencies
- logWatcher.sh and logFetcher scripts modified by EPN to remove dependencies on the epnlog user
- node access privileges fully determined by e-groups
- new log_access role to allow access in logWatcher mode to retrieve log files, e.g. for on-call shifters
- to be validated on STG
- waiting for EPN for further feedback and modifications of the test setup
3. TPC ML Clustering
Speaker: Christian Sonnabend (CERN, Heidelberg University (DE))
General
- Checks on area overlap: the trending of the CoG resolution degradation cannot be reproduced when using q < Q_cut, and also not for q > Q_cut, although the number of histogram entries matches the expected number
- Black data: Asked Alex Schmah, he provided me with the path at GSI, so I will now also investigate that data
Framework
- Several PRs are now ongoing; this is the most recent one, which includes the ONNX Runtime library in O2: https://github.com/AliceO2Group/AliceO2/pull/13709
- Not sure why the copyright header check fails even though I have the specific file in 3rdparty
- o2-cs8 check fails with 1 error (looks like one track ref is not found):
  549/549 Test #506: o2sim_checksimkinematics_G3 ... Subprocess aborted *** Exception: 1.90 sec
  99% tests passed, 1 tests failed out of 549
- But this PR is fully independent of any code that runs (no implementation in any task yet).
- Otherwise it compiles for all checks, so it would be nice to merge this
Multi-class networks
- Ran the full chain but found significant degradation in tracking efficiency / fake-rate -> investigating
Plans & To-Do's
- Neural networks
- Cluster splitter network
- (✓) N-class classifier network: Probably only going until split-level 2 or 3. Higher split levels get really sparse in the training data
- (✓) N-class regression network: Similar approach as the N-class classifier, but need to see how good performance is...
- Pass momentum vector to downstream reconstruction
- Check performance of NNs on black data
- GPU developments
- Pull requests
- O2
- PR, ORT library integration: https://github.com/AliceO2Group/AliceO2/pull/13709
- PR, Full clusterization integration: https://github.com/AliceO2Group/AliceO2/pull/13610
- alidist
- PR, ORT GPU build: https://github.com/alisw/alidist/pull/5622
- Issues & Feature requests
- OnnxRuntime
- Coming up...
- QA task & algorithmic developments
- Include SC distortion simulation
- Use black PbPb data to evaluate performance and for training
- Redo 2D study with NN
4. ITS Tracking
Speaker: Matteo Concas (CERN)
ITS GPU tracking
- General priorities:
- Focusing on porting all of what is possible to the device, extending the state of the art, and minimising computing on the host.
- Optimizations via intelligent scheduling and multi-streaming can happen right after.
- Kernel-level optimisations to be investigated.
- Tracklet finding:
- WIP, it requires some more new developments to port the features we have for the CPU version (delta-rof, timeframe fractioning, per-vertex tracking...).
- TODO:
- Reproducer for HIP bug on multi-threaded track fitting: no progress yet.
- Move more of the track-finding tricky steps on GPU: no progress yet.
- Fix possible execution issues and known discrepancies when using gpu-reco-workflow: no progress yet; will start after the tracklet finding is ported.
DCAFitterGPU
- Deterministic approach via using SMatrixGPU on the host, under a particular configuration: no progress yet.
5. TPC Track Model Decoding on GPU
Speaker: Gabriele Cimador (Universita e INFN Trieste (IT))
6. Efficient Data Structures
Speaker: Dr Oliver Gregor Rietmann (CERN)
Context
- Create data structures for controlling the data layout (AoS vs SoA)
- These data structures should hide the underlying data layout.
- We want to change the underlying data layout without affecting the code using it (see the sketch below).
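A minimal sketch of this layout-hiding idea, with hypothetical names rather than the actual interface under development: the field accessors stay the same while the storage layout is a compile-time choice.

#include <array>
#include <cstddef>
#include <iostream>

// Hypothetical layout-hiding container for a simple "point" record with fields x and y.
// Client code only uses the x()/y() accessors; the layout is selected by a template parameter.
enum class Layout { AoS, SoA };

template <std::size_t N, Layout L>
class Points;

// AoS: one struct per point, stored contiguously.
template <std::size_t N>
class Points<N, Layout::AoS> {
  struct Point { float x, y; };
  std::array<Point, N> data_{};
public:
  float& x(std::size_t i) { return data_[i].x; }
  float& y(std::size_t i) { return data_[i].y; }
};

// SoA: one array per field.
template <std::size_t N>
class Points<N, Layout::SoA> {
  std::array<float, N> x_{};
  std::array<float, N> y_{};
public:
  float& x(std::size_t i) { return x_[i]; }
  float& y(std::size_t i) { return y_[i]; }
};

// Client code is layout-agnostic: switching Layout::AoS to Layout::SoA needs no changes here.
template <std::size_t N, Layout L>
float sumX(Points<N, L>& points) {
  float sum = 0.f;
  for (std::size_t i = 0; i < N; ++i) sum += points.x(i);
  return sum;
}

int main() {
  Points<4, Layout::AoS> aos;
  Points<4, Layout::SoA> soa;
  for (std::size_t i = 0; i < 4; ++i) {
    aos.x(i) = soa.x(i) = static_cast<float>(i);
  }
  std::cout << sumX(aos) << " " << sumX(soa) << "\n"; // prints "6 6"
  return 0;
}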
Using Memory of an Existing Buffer
Problem: Given an existing buffer, we want to allocate our data structures within that buffer.
Solution: We use an allocator that takes a pointer to the buffer and its size in bytes.
#include <cstddef>
#include <new> // std::bad_alloc

template <typename T>
class BufferAllocator {
public:
  using value_type = T;

  BufferAllocator(char* buffer, std::size_t size)
    : buffer_(buffer), size_(size), offset_(0) {}

  // Converting copy constructor, used when the container rebinds the allocator.
  template <typename U>
  BufferAllocator(const BufferAllocator<U>& other) noexcept
    : buffer_(other.buffer_), size_(other.size_), offset_(other.offset_) {}

  T* allocate(std::size_t n) {
    std::size_t bytes = n * sizeof(T);
    if (offset_ + bytes > size_) throw std::bad_alloc();
    T* ptr = reinterpret_cast<T*>(buffer_ + offset_);
    offset_ += bytes;
    return ptr;
  }

  // Individual deallocations are a no-op; memory is only reclaimed by reusing the buffer.
  void deallocate(T* ptr, std::size_t n) noexcept {}

private:
  // Needed so that the converting constructor can read the members of BufferAllocator<U>.
  template <typename U>
  friend class BufferAllocator;

  char* buffer_;
  std::size_t size_;
  std::size_t offset_;
};

The allocator can then be used as follows.

#include <iostream>
#include <vector>

int main() {
  constexpr std::size_t bufferSize = 1024;
  char buffer[bufferSize];

  // vector of int
  BufferAllocator<int> allocator(buffer, bufferSize);
  std::vector<int, BufferAllocator<int>> v(allocator);
  for (int i = 0; i < 10; ++i) v.push_back(i);
  for (int value : v) std::cout << value << " "; // 0, 1, 2, ...
  std::cout << std::endl;

  // vector of double, reusing the same buffer from offset 0
  BufferAllocator<double> new_allocator(buffer, bufferSize);
  std::vector<double, BufferAllocator<double>> w(new_allocator);
  for (int i = 0; i < 10; ++i) w.push_back(-1 * (double)i);
  for (double value : w) std::cout << value << " "; // 0, -1, -2, ...
  std::cout << std::endl;

  // reading from the int vector now yields garbage
  for (int value : v) std::cout << value << " "; // 2352344, 34553553, ...
  std::cout << std::endl;

  return 0;
}
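Note on the example: both vectors are carved out of the same 1024-byte buffer, and new_allocator starts again at offset 0, so the doubles written through w overwrite the storage previously handed out to v; that is why reading v afterwards yields garbage. A more robust variant could share the running offset between allocator copies (for example through a pointer to common state) so that successive allocations do not alias.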