Color code: (critical, news during the meeting: green, news from this week: blue, news from last week: purple, no news: black)
High priority RC YETS issues:
- Fix dropping lifetime::timeframe for good
- Still pending: problem with CCDB objects getting lost by DPL, leading to "Dropping lifetime::timeframe"; seen at least once during SW validation.
- Ruben reported a similar problem in async reco, but only during the last TFs of a run; probably an independent bug.
- No other instances of Dropping lifetime::timeframe seen at P2.
- Expendable tasks in QC. Everything merged on our side.
- With the latest fix by Giulio it seems to work; should now be tested at P2 at larger scale.
- Start / Stop / Start:
- Problems in readout and QC fixed. Now 3 new problems, at least 2 on our side: No news
- GPU multi-thread pipeline gets stuck after restart. Should be trivial to fix. https://its.cern.ch/jira/browse/O2-4638
- Some processes are crashing randomly (usually ~2 out of >10k) when restarting. Stack trace hints to FMQ. https://its.cern.ch/jira/browse/O2-4639
- TPC-ITS matching QC crashes when accessing CCDB objects. Not clear whether it is the same problem as above or a problem in the task itself.
- Stabilize calibration / fix EoS: New scheme: https://its.cern.ch/jira/browse/O2-4308
- Fix problem with ccdb-populator: no idea yet, no ETA.
- Proposal to proceed:
- For the expendable tasks, we wait for the next test; if there are still problems, Giulio needs to check again.
- David will look at the lost CCDB messages to free up Giulio a bit.
- Giulio can start implementing the new EoS scheme, such that we have it before restart of data taking.
- Other things are postponed: the Start/Stop/Start FMQ issue and the ccdb-populator issue are difficult to reproduce and will be a nightmare to debug, so we try to finish what has a good chance to converge within the YETS.
High priority framework topics:
Other framework tickets:
- TOF problem with receiving condition in tof-compressor: https://alice.its.cern.ch/jira/browse/O2-3681
- Grafana metrics: Might want to introduce additional rate metrics that subtract the header overhead to have the pure payload: low priority.
- Backpressure reporting when there is only 1 input channel: no progress: https://alice.its.cern.ch/jira/browse/O2-4237
- Stop entire workflow if one process segfaults / exits unexpectedly. Tested again in January, still not working despite some fixes. https://alice.its.cern.ch/jira/browse/O2-2710
- https://alice.its.cern.ch/jira/browse/O2-1900 : FIX in PR, but has side effects which must also be fixed.
- https://alice.its.cern.ch/jira/browse/O2-2213 : Cannot override debug severity for tpc-tracker
- https://alice.its.cern.ch/jira/browse/O2-2209 : Improve DebugGUI information
- https://alice.its.cern.ch/jira/browse/O2-2140 : Better error message (or a message at all) when input missing
- https://alice.its.cern.ch/jira/browse/O2-2361 : Problem with 2 devices of the same name
- https://alice.its.cern.ch/jira/browse/O2-2300 : Usage of valgrind in external terminal: The testcase is currently causing a segfault, which is an unrelated problem and must be fixed first. Reproduced and investigated by Giulio.
- Found a reproducible crash (while fixing the memory leak) in the TOF compressed-decoder at workflow termination, if the wrong topology is running. Not critical, since it is only at the termination, and the fix of the topology avoids it in any case. But we should still understand and fix the crash itself. A reproducer is available.
- Support in DPL GUI to send individual START and STOP commands.
- The problem I mentioned last time with non-critical QC tasks and the DPL CCDB fetcher is real. It will need some extra work to solve; otherwise non-critical QC tasks will stall the DPL chain when they fail.
- DPL sending SHM metrics for all devices, not only input proxy: https://alice.its.cern.ch/jira/browse/O2-4234
- Some improvements to ease debugging: https://alice.its.cern.ch/jira/browse/O2-4196 https://alice.its.cern.ch/jira/browse/O2-4195 https://alice.its.cern.ch/jira/browse/O2-4166
- After Pb-Pb, we need to do a cleanup session and go through all these pending DPL tickets with a higher priority, and finally try to clean up the backlog.
Global calibration topics:
- TPC IDC and SAC workflow issues to be reevaluated with new O2 at restart of data taking. Cannot reproduce the problems any more.
Sync processing
- Proposal to parse InfoLogger message and alert automatically: https://alice.its.cern.ch/jira/browse/R3C-992
- Software updated yesterday. OK from our side, but EPN farm still down after maintenance.
Async reconstruction
- Remaining oscillation problem: GPUs sometimes get stalled for a long time, up to 2 minutes.
- Checking 2 things: does the situation get better without GPU monitoring? --> Inconclusive
- We can use increased GPU process priority as a mitigation, but it doesn't fully fix the issue.
- MI100 GPU stuck problem will only be addressed after AMD has fixed the operation with the latest official ROCm stack.
- EPN farm unavailable for async jobs since the EPN maintenance yesterday.
EPN major topics:
- Fast movement of nodes between async / online without EPN expert intervention.
- 2 goals I would like to set for the final solution:
- It should not be needed to stop the SLURM schedulers when moving nodes, there should be no limitation for ongoing runs at P2 and ongoing async jobs.
- We must not lose track of which nodes are marked as bad while moving them.
- Interface to change SHM memory sizes when no run is ongoing. Otherwise we cannot tune the workflow for both Pb-Pb and pp: https://alice.its.cern.ch/jira/browse/EPN-250
- Lubos to provide an interface to query the current EPN SHM settings - ETA July 2023. Status?
- Improve DataDistribution file replay performance; currently we cannot do faster than 0.8 Hz, so we cannot test the MI100 EPNs in Pb-Pb at nominal rate, and cannot test the pp workflow for 100 EPNs in the FST since DD injects TFs too slowly. https://alice.its.cern.ch/jira/browse/EPN-244 NO ETA
- DataDistribution distributes data round-robin in the absence of backpressure, but it would be better to do it based on buffer utilization and give more data to the MI100 nodes. Currently we drive the MI50 nodes at 100% capacity with backpressure, and only backpressured TFs go to the MI100 nodes. This increases the memory pressure on the MI50 nodes, which is anyway a critical point (see the sketch after this list). https://alice.its.cern.ch/jira/browse/EPN-397
- TfBuilders should stop in ERROR when they lose connection.
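As a rough illustration of the utilization-based scheme proposed above (all structure and field names are hypothetical, not DataDistribution code), the target TfBuilder could be picked from weighted buffer occupancy instead of round-robin:

```cpp
// Hypothetical sketch of utilization-based TF scheduling; not DataDistribution code.
#include <cstddef>
#include <limits>
#include <string>
#include <vector>

struct TfBuilderInfo {
  std::string id;
  std::size_t bufferUsed;    // bytes currently buffered on this node
  std::size_t bufferTotal;   // total SHM buffer size of this node
  double capacityWeight;     // e.g. > 1 for MI100 nodes, 1 for MI50 nodes
};

// Choose the builder with the lowest weighted buffer utilization, so faster
// nodes with more free memory receive more TFs before backpressure builds up.
const TfBuilderInfo* pickTarget(const std::vector<TfBuilderInfo>& builders)
{
  const TfBuilderInfo* best = nullptr;
  double bestScore = std::numeric_limits<double>::max();
  for (const auto& b : builders) {
    if (b.bufferTotal == 0) {
      continue; // skip nodes without a configured buffer
    }
    double utilization = static_cast<double>(b.bufferUsed) / b.bufferTotal;
    double score = utilization / b.capacityWeight; // lower score = better target
    if (score < bestScore) {
      bestScore = score;
      best = &b;
    }
  }
  return best; // nullptr if no builder is available
}
```

The capacityWeight would simply be set higher for MI100 than for MI50 nodes, so the faster machines receive proportionally more data.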
Other EPN topics:
Raw decoding checks:
- Add an additional check on DPL level to make sure the firstOrbit received from all detectors is identical when creating the TimeFrame first orbit.
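A minimal sketch of such a check (names and types are illustrative only, not the actual DPL interface):

```cpp
// Illustrative only: stand-in types, not the actual DPL data structures.
#include <cstdint>
#include <optional>
#include <stdexcept>
#include <string>
#include <vector>

struct DetectorHeader {
  std::string detector;
  uint32_t firstOrbit;
};

// Verify that all detectors report the same firstOrbit before using it as the
// TimeFrame first orbit; fail loudly on a mismatch instead of silently picking one.
uint32_t checkedTimeFrameFirstOrbit(const std::vector<DetectorHeader>& headers)
{
  std::optional<uint32_t> reference;
  for (const auto& h : headers) {
    if (!reference) {
      reference = h.firstOrbit;
    } else if (h.firstOrbit != *reference) {
      throw std::runtime_error("firstOrbit mismatch for detector " + h.detector);
    }
  }
  if (!reference) {
    throw std::runtime_error("no detector headers received");
  }
  return *reference;
}
```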
Full system test issues:
- Found another case where the workflow failed if CTP was not included; fixed for the Tuesday update.
Topology generation:
- Should test deploying the topology with the DPL driver, to have the remote GUI available.
- DPL driver needs to implement FMQ state machine. Postponed until YETS issues solved.
QC / Monitoring / InfoLogger updates:
- TPC has opened the first PR for monitoring of cluster rejection in QC. Trending for TPC CTFs is work in progress. Ole will join from our side, and the plan is to extend this to all detectors and to also include trending for raw data sizes.
AliECS related topics:
- Extra env var field still not multi-line by default.
GPU ROCm / compiler topics:
- Found a new HIP internal compiler error when compiling without optimization: -O0 makes the compilation fail with an unsupported LLVM intrinsic. Reported to AMD.
- Found a new miscompilation with -ffast-math enabled in the looper following; -ffast-math is disabled for now.
- Must create new minimal reproducer for compile error when we enable LOG(...) functionality in the HIP code. Check whether this is a bug in our code or in ROCm. Lubos will work on this.
- Ruben found another compiler problem with template treatment. We have a workaround for now; need to create a minimal reproducer and file a bug report.
- While debugging the calibration, debug output triggered another internal compiler error in the HIP compiler. Not a problem for now since it happened only with temporary debug code, but we should still report it to AMD so it gets fixed.
- Had a call with AMD yesterday, basically summarized the status. Now again waiting for them to work on it.
TPC GPU Processing
- Bug in TPC QC with MC embedding: TPC QC does not respect the sourceID of the MC labels, so it confuses tracks of signal and of background events (see the sketch after this list).
- New problem with bogus values in TPC fast transformation map still pending. Sergey is investigating, but waiting for input from Alex.
- Discussion with Marian and TPC on Monday, clarified what additional cluster error formulas to implement.
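To illustrate the embedding issue above, with a hypothetical stand-in label type (not the actual o2::MCCompLabel class): in an embedded sample a label identifies a track only via the triplet (trackID, eventID, sourceID), so any comparison that ignores sourceID mixes signal and background tracks.

```cpp
// Illustrative only: a stand-in MC label type, not the actual O2 class.
#include <cstdint>

struct MCLabel {
  int32_t trackID;
  int32_t eventID;
  int32_t sourceID; // distinguishes the signal source from embedded background
};

// Two labels refer to the same MC track only if track, event AND source match;
// comparing trackID/eventID alone confuses signal and background, which is the
// bug described above.
inline bool sameMCTrack(const MCLabel& a, const MCLabel& b)
{
  return a.trackID == b.trackID && a.eventID == b.eventID && a.sourceID == b.sourceID;
}
```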
General GPU Processing
- Consistency between CPU and GPU processing status:
- Trying to get fully deterministic tracking with GPUCA_NO_FAST_MATH + additional debug options, which introduces many intermediate sorting steps (a sketch of the idea follows after this status block).
- Several sorting kernels implemented. Status so far:
- Spotted 5 real bugs in the GPU reconstruction that lead to different / random results and were hidden by the fluctuations of parallel processing.
- 2 could affect up to ~1% of clusters, so they could be a source of the discrepancies we saw in the CPU vs. GPU relval.
- Found another inconsistency in the clusterization, which should not be there by design. Felix will check, but he is busy and can only have a look in March. Affects fewer than ~10^-6 of the clusters in some TFs, so it is irrelevant for physics, but a problem for validation.
- On small TFs: fully reproducible when running multiple times on CPU or on GPU, but there are differences between CPU and GPU.
- On larger TFs, there are also differences when rerunning on CPU or on GPU.
- Differences appear only after the sector tracking, so the sector-tracking part seems already fully deterministic.
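To illustrate why the intermediate sorting steps are needed (a hypothetical host-side sketch, not the actual GPU reconstruction code): outputs appended concurrently by parallel kernels come back in a nondeterministic order, so sorting them by a stable key before the next stage consumes them restores bitwise reproducibility.

```cpp
// Hypothetical illustration of an intermediate sorting step for determinism.
#include <algorithm>
#include <tuple>
#include <vector>

struct ClusterRef {
  int sector;
  int row;
  int index; // index within the row, unique per cluster
};

// Sort by a stable key so the ordering no longer depends on the (random)
// order in which parallel threads appended the entries.
void sortForDeterminism(std::vector<ClusterRef>& refs)
{
  std::sort(refs.begin(), refs.end(), [](const ClusterRef& a, const ClusterRef& b) {
    return std::tie(a.sector, a.row, a.index) < std::tie(b.sector, b.row, b.index);
  });
}
```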
- Started work to make the O2 propagator easily usable in ITS tracking, which is not part of the GPU reconstruction library. TODOs:
- Provide an (optionally device-relocatable-code) object that can be linked to other GPU code, e.g. ITS, and provides all code needed to use the propagator. The constant cache will be obtained and filled via the same mechanism as for the other kernel files: WIP
- Use constant memory in fewer places, to disentangle the code. In particular, pass the processing context as a kernel argument instead of via the constant cache (see the sketch at the end of this list).
- Once this is all working in CUDA, port over all the work to the HIP backend, including RTC.
- Switch the HIP backend to autogenerate the HIP code from the CUDA code.
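A placeholder sketch of the constant-cache vs. kernel-argument difference mentioned above (type and kernel names are made up, not the actual GPUReconstruction code):

```cpp
// Placeholder CUDA-style sketch; names are not the actual O2 GPU code.
#include <cuda_runtime.h>

struct ProcessingContext {
  float bz; // e.g. solenoid field, as an example of per-run data
};

// Current style: the context lives in the constant cache and is filled from the
// host, e.g. via cudaMemcpyToSymbol(gContext, &ctx, sizeof(ctx)).
__constant__ ProcessingContext gContext;

__global__ void kernelUsingConstantCache(float* out)
{
  out[threadIdx.x] = gContext.bz;
}

// Proposed style for code shared with e.g. ITS: pass the context as a plain
// kernel argument, so the caller does not need access to the library's
// constant-memory symbol.
__global__ void kernelUsingArgument(ProcessingContext ctx, float* out)
{
  out[threadIdx.x] = ctx.bz;
}
```

Passing the context as an argument is what would make the propagator easy to link into external GPU code such as ITS tracking, since the consumer no longer has to fill a constant-memory symbol owned by the GPU reconstruction library.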