Description
The landscape of frameworks for (re)implementing HEP analyses for reinterpretation is, fortunately, diverse. One advantage of this diversity is that multiple reimplementations of the same HEP analysis in different frameworks can be cross-validated against each other. On the downside, statistical combinations of analyses across different frameworks must carefully avoid double-counting events, which would erroneously inflate the statistical significance. In both cases, validation and statistical combination alike, assessing the correlation between implementations in different frameworks is essential.
In this talk, we present studies quantifying this correlation for analysis implementations in two selected frameworks: MadAnalysis and Rivet. We highlight key achievements, challenges encountered, and lessons learned, providing insights for future developments.
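As a concrete illustration of what such a correlation can look like in practice, the sketch below computes the phi (Matthews) correlation between per-event accept/reject decisions exported from two implementations run on the same event sample. This is an assumed setup for illustration only: the file names, the flat-text dump format, and the choice of phi as the metric are hypothetical, not the methodology of the talk.

    # Minimal sketch, assuming each framework dumps one 0/1 "accepted" flag
    # per event, in the same event order, for the same generated sample.
    # File names and format are hypothetical.
    import numpy as np

    def phi_correlation(accept_a: np.ndarray, accept_b: np.ndarray) -> float:
        """Phi (Matthews) correlation between two per-event pass/fail decisions."""
        a = accept_a.astype(bool)
        b = accept_b.astype(bool)
        n11 = int(np.sum(a & b))    # accepted by both implementations
        n00 = int(np.sum(~a & ~b))  # rejected by both
        n10 = int(np.sum(a & ~b))   # accepted only by implementation A
        n01 = int(np.sum(~a & b))   # accepted only by implementation B
        denom = ((n11 + n10) * (n11 + n01) * (n00 + n10) * (n00 + n01)) ** 0.5
        return (n11 * n00 - n10 * n01) / denom if denom > 0 else 0.0

    # Hypothetical usage: event-level flags dumped by the two implementations.
    ma5_flags = np.loadtxt("madanalysis_accept.txt", dtype=int)
    rivet_flags = np.loadtxt("rivet_accept.txt", dtype=int)
    print(f"phi correlation: {phi_correlation(ma5_flags, rivet_flags):.3f}")

A phi value near 1 indicates the two implementations select essentially the same events (safe to cross-validate bin by bin), while a lower value flags overlap that a statistical combination must account for to avoid double counting.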