WG1 - VH subgroup

4/S-030 (CERN)

Giancarlo Ferrera (Università degli Studi e INFN Milano (IT)), Ciaran Williams (SUNY Buffalo), Alessandro Calandri (Eidgenoessische Technische Hochschule Zuerich (CH)), Hannah Arnold (Nikhef)
Participants
  • Congqiao Li
  • Elisabeth Schopf
  • Giancarlo Ferrera
  • Hannah Arnold
  • Huilin Qu
  • Kajari Mazumdar
  • Luca Mastrolorenzo
  • Marko Stamenkovic
  • Matthew Henry Klein
  • Paolo Francavilla
  • Philipp Windischhofer
  • Pierluigi Bortignon
  • Stephane Brunet Cooperstein
  • Yihui Lai
    • 16:00 16:05
      Introduction 5m
      Speakers: Giancarlo Ferrera (Università degli Studi e INFN Milano (IT)), Ciaran Williams (SUNY Buffalo), Alessandro Calandri (Eidgenoessische Technische Hochschule Zuerich (CH)), Hannah Arnold (Nikhef)
    • 16:05 16:30
      ATLAS VHcc full Run 2 results 25m
      Speaker: Maria Mironova (University of Oxford (GB))

      1) Will follow-up offline on b-tagging calibration

      2) Do you split the V+jets SFs across data-taking years as well? Do you observe significant differences? No, the signal regions are inclusive in data-taking years and so are our SFs; we have never looked at this.

      3) Have you checked the VHcc signal extraction without the b-tag veto? No, the c-tagging WP was optimised with the veto in place. For the combination with VH(bb) we checked the VH(cc) “contamination” in the VH(bb) categories: it is at most a few percent of the VH(cc) yield in the VH(cc) categories, and in the combined fit there was no change in the signal-strength uncertainty, i.e. we are not losing much sensitivity by applying the veto.

      4) How can you separate the different background-enriched categories using only the dRcc selection? There is one dRcc CR per c-tag SR, and to control V+light jets there are also the 0 c-tag regions in the 1- and 2-lepton channels.

      5) sl 10: Is the relative fraction used as a prior calculated wrt the alternative generators? Yes; it is the quadrature sum of the various variations (alternative generators and internal weight variations).
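
      A quick numerical illustration of the quadrature sum mentioned in the answer above (the variation values are invented for the example and are not taken from the analysis):

```python
import math

# Hypothetical relative variations of a background fraction, e.g. one
# alternative-generator comparison plus two internal weight variations.
variations = [0.04, 0.02, 0.01]

# Prior width = quadrature sum of the individual variations
prior = math.sqrt(sum(v**2 for v in variations))  # ~0.046
```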

      6) sl 11: One would expect these two shape uncertainties in the SR and CR, based on the cc mass, to be quite correlated in the fit model, since the nominal and alternative samples are the same and the main difference is the phase space of the comparison. Is that the case? Can you comment on the correlation? Yes, this is correct: these shapes are typically correlated. You can see that when deriving them, and also in the fit, where they usually show anti-correlations. It is worth noting, though, that the SR shape variations are usually small.

      7) sl 13: What is the primary uncertainty on the top background component? The comparison of different generators (ME or PS variations), implemented as two-point systematics.
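
      A minimal sketch of what a symmetrised two-point systematic looks like in practice (per-bin yields are invented for illustration; this is not the analysis code): the alternative-generator prediction defines one variation, and its mirror around the nominal defines the other.

```python
# Nominal and alternative per-bin yields from two different generators
# (e.g. an ME or PS variation); numbers are purely illustrative.
nominal = [120.0, 80.0, 30.0]
alternative = [115.0, 86.0, 27.0]

# Two-point systematic: "down" is the alternative prediction,
# "up" is its mirror around the nominal in each bin.
down = alternative[:]
up = [2 * n - a for n, a in zip(nominal, alternative)]
```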

      8) sl 20: Why do you expect such a difference in impact between the Z+jets and W+jets modelling (7% vs 3.9%)? The Z+jets modelling enters via both the 2L and 0L channels, and the 0L channel drives the sensitivity.

      9) sl 20: Why don’t you have post-fit constraints on ‘stat 0L SR’, which I assume is a bin-by-bin MC-stats NP? Is it expected that the fit can constrain this parameter that much? We have one gamma NP for each bin, and in a high-statistics region such as this one (the 1 c-tag region of the 0-lepton channel) we have a lot of data statistics.
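
      For reference, a minimal sketch of how such a per-bin MC-statistics ("gamma") nuisance parameter acts on a bin yield (the yields are invented and this is not the actual fit model): each bin gets its own multiplicative gamma, constrained by the effective number of MC events in that bin.

```python
# One bin of a template; numbers are purely illustrative.
mc_yield = 500.0      # nominal MC prediction in the bin
mc_stat_unc = 5.0     # MC statistical uncertainty on that prediction

# Effective MC event count sets the width of the gamma constraint:
# relative prior width = 1 / sqrt(n_eff)
n_eff = (mc_yield / mc_stat_unc) ** 2

gamma = 1.02                    # value the fit might assign to this bin's NP
expected = gamma * mc_yield     # post-fit expected yield in the bin
```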

      10) What is the accuracy of the Sherpa V+jets samples? NLO-accurate MEs for up to 2 jets, LO-accurate MEs for up to 4 jets.

    • 16:30 16:45
      Discussion 15m
    • 16:45 17:10
      CMS VHcc full Run 2 results (25'+5') 25m
      Speaker: Spandan Mondal (RWTH Aachen (DE))

      1) dRcc reweighting: ATLAS also observes mismodelling at low dRcc with the MG and Sherpa samples, which is why we cut mCC > 50 GeV to avoid propagating any mismodelling to higher dRcc/mCC. Do you trust your reweighting enough to still use the low-mCC region as a control region (which is highly correlated with low dRcc)? CMS uses corrections and propagates the uncertainties associated with these corrections in the fit.

      2) Correlation scheme: all NPs are correlated between the SRs and CRs.

      3) sl 7: Why do merged jets have a better acceptance than resolved jets in the medium-VPT region? ATLAS observed slightly different behaviour in the medium-VPT region. The acceptance depends strongly on the clustering radius used for the fat-jet reconstruction, which differs between ATLAS and CMS, so it is difficult to compare the reconstruction efficiencies.

      4) sl 7: Is this plot produced with a requirement on flavour tagging? No, this is truth level: in the resolved regime any small-R jets passing the selection are considered for the Higgs candidate if they contain a c-hadron; in the merged regime the leading large-R jet is used.

      5) sl 23: Why the per-year splitting of the floating normalisation rate parameters? Different MC tunes for 2016 and 2017/2018, hence minor differences are expected. We observe very similar SFs across 2017 and 2018, but they are still kept separate to give the fit more flexibility.

      6) sl 31: What do you use as the signal in the training? VHcc is used as the signal.

      7) sl 39: Do you retrain separate BDTs for VZcc in the resolved regime? Yes. Have you tried a simultaneous VZcc and VHcc fit? Not in the resolved regime, because these are different BDTs. In the boosted regime yes, and the signal-extraction results did not change much compared to the nominal strategy (of constraining VZcc in the VH(cc) fit).

      8) sl 25/12: Are the various sources of uncertainty in the c-tagging calibration propagated to the analysis? Yes, there are some generator comparisons considered…  Are the scale variations correlated with the ones in the analysis? No.

      9) sl 13: The lower pTV threshold in 1L than in VH(bb) (100 instead of 150 GeV) makes sense, as the channel is less affected by ttbar. We went as low as possible without compromising the data/MC modelling.

    • 17:10 17:40
      Discussion 30m