Framework developments
- https://github.com/AliceO2Group/AliceO2/pull/14378
- Bug-fix for time boundary value filling
- Added deconvolution kernel setting for NN evaluation (makes it fully compatible with the current clusterizer for flag setting)
Physics
- Now working on both data types, SC-distorted and non-distorted; enforcing 0-5% centrality for better occupancy coverage
- Training data: combination of 0-100% centrality without SC (50%) and 0-5% centrality with SC (50%)
- Tested two settings for MC clusterizer, accumulation window (pad, time): (2, 4) and (3, 8)
- For the current CoG, a charge is accumulated only if (abs(charge_pos - CoG) < window && mc_id_charge == mc_id_cog), otherwise it is not
- Conclusion: Large window performs slightly worse
- Most probably due to misassignment of MC clusters to peaks
- More MC CoGs trigger the looper tagger earlier -> with a larger window some regions are potentially not tagged, and the network training data gets some "confused" samples
(left: larger window, right: smaller window)


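The accumulation rule above can be sketched as follows. This is a minimal Python sketch, assuming a (pad, time) tuple layout; the function name and argument shapes are illustrative, not the actual O2 implementation:

```python
def accumulate_charge(charge_pos, cog_pos, mc_id_charge, mc_id_cog,
                      window=(2, 4)):
    """Accumulate a charge into an MC cluster CoG only if it lies inside
    the (pad, time) window around the CoG and carries the same MC label.

    Hypothetical sketch of the accumulation condition described above;
    window=(2, 4) and (3, 8) correspond to the two tested settings.
    """
    within_window = all(abs(q - c) < w
                        for q, c, w in zip(charge_pos, cog_pos, window))
    return within_window and mc_id_charge == mc_id_cog
```

With the larger (3, 8) window more charges pass the distance cut, which is where the misassignment risk described above comes from.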
- Similar observation for CoG position vs. occupancy: at higher occupancies, the smaller window size works better
- Efficiency and fake rate improve for the NN; however, the clone rate goes up for both primaries and secondaries (example in the figure below: clone rate for secondaries)
- The total number of clone tracks stays the same (within the uncertainty) -> the higher clone rate comes from an overall reduction in the number of tracks

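The clone-rate observation can be checked with back-of-the-envelope numbers (the values below are purely illustrative, not the actual analysis results): if the absolute number of clone tracks stays fixed while the total track count shrinks, the rate necessarily rises.

```python
def clone_rate(n_clones, n_tracks):
    # clone rate = clone tracks / all reconstructed tracks
    return n_clones / n_tracks

# Illustrative numbers only: same absolute clone count, fewer total tracks.
baseline = clone_rate(50, 10_000)  # 0.5 % clone rate
with_nn  = clone_rate(50, 9_500)   # higher rate despite equal clone count
```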
Steps ahead
- Getting the full PID calibration to work
- Pipeline works up to skim tree creation (at least it runs through)
- More thesis writing