- no comments, thanks to Martijn!
ATLAS ttbar diff
- bias from parton matching? - No, checked this - Olaf will follow up with analyzers offline
- when using the CMS binning, this is a comparison to the CMS parton-level dilepton analysis
CMS single top Vtd, Vts
- constraining of systematic uncertainties in the fit - will consider adding information on constraints on ttbar modeling
- main effect on ttbar from normalization
- main result is dependent on contributions of different Vtx to the different regions, shape of different signal contributions is similar
- JES and JER are not included in the fit because they affect the fit too strongly
- Vts and Vtd are not included in ttbar or other single top contributions - checked for ttbar that these give the same shape
- In the constrained fit, the unitarity assumption could be shown more explicitly. What’s shown on page 18 for Vtb violates unitarity once uncertainties are considered. It would be nice to see the effect of the unitarity constraint in a 2D likelihood plot
- Modeling of fakes background in multi-lepton ttbar+X events - special discussion at next closed meeting? or open meeting?
ttW/Z as signal
- make 13 TeV ttW and ttZ summary plots from LHCtopWG?
- Should have harmonization of ttW and ttZ modeling uncertainties? These are dominant in ATLAS but not CMS
- theory cross-section predictions for ttW and ttZ - even for ttW, p3 shows different predictions
- p8: what is the composition of the nonprompt background? Mainly heavy-flavor decays for muons, while for electrons there is also a significant fraction of jets misidentified as electrons
- Diboson as background - estimation of diboson + HF production is the most important part
- WZ as background - HF estimate? Z+b is not a similar process to WZ: Z+b has b’s from the PDF, while WZ does not. - Focus on a comparison in a phase space that emphasizes b’s from gluon splitting for validation.
- non-prompt background in the CMS analysis - estimated fake scale factors are applied to the loose control region, with prompt contributions subtracted.
ttW/Z as background
- p9: is the ttZ XS really used with an m(ll) threshold of 1 GeV? Yes, in ATLAS it is calculated with a threshold of 1 GeV, which is much too low, since the XS diverges there. In YR4, an on-shell Z was assumed. The region down to 1 GeV is important in the analysis, since it contributes when one lepton is identified and the other is lost; it is therefore included even though the theory prediction diverges there.
- p21: Is the njet distribution the one that was reweighted?
- Different PDFs than NNPDF? Both experiments evaluate PDF uncertainties
- p25: Why does ATLAS split ttW scale factor into SS dilepton and 3-lepton? Different kinematic ranges for lepton pT
- p13: Can this process be added by hand? Yes, but currently this won’t work in MG5 because it doesn’t know that the interference term is zero. Need to follow up with Rikkert on this and use the most recent beta version.
- Correction is small from this, as the table shows, but it is potentially larger in special regions of phase space because this produces a forward top quark.
Session Thu afternoon 1
Diff dilepton exp:
- p3: leptons are dressed leptons
- p14: theory predictions turn around for ATLAS but not for CMS - the last bin in ATLAS has a large statistical uncertainty, so one can’t say that this is physics
- Taking into account correlations between the two b-tags - yes, and this is modeled well by MC.
- Treatment of dilepton events where statistics are correlated - yes, correctly implemented.
Diff dilepton theory
- Request by Alex to ATLAS to remove the green line/band from the dilepton delta phi plot since this is not an accurate presentation of the NLO+EW prediction.
- Fiducial definition? How defined at NNLO? For leptons this is the same, and for jets this is comparing jet clustering with b-quarks and gluons (NNLO) with clustering of colorless objects (experiments)
- should experiments cluster b-quarks and gluons from NLO real emission? No, this is not well defined, and doesn’t compare to NNLO
- Experiments could compare the clustering of NLO to fiducial just to get an idea of the size of the effect
- Look at impact of fiducial requirement as a function of delta phi - acceptance the same? jet distributions the same for small and large delta phi? - yes, this was checked by ATLAS, changing jet pT cuts changes delta phi distributions.
- what about p2: using expansion or not using expansion makes a difference at NLO but not NNLO. So why not use the one NLO that agrees with NNLO? Because p2 shows inclusive, for fiducial selection both show differences.
- Need to decompose problem into even smaller individual steps to be able to break down where exactly there is agreement theory/experiment and where there isn’t.
BSM model for delta phi
- Contribution of this model to total XS? 40%
- Has a pseudo-scalar model been considered that is not a full resonance but gives a peak-and-dip structure? No, masses above threshold were tried, but not very heavy masses. But to change the distributions, the cross-section will have to increase at some point.
- Correlation between different observables holds information.
- Would be nice to have Delta Eta from CMS, they currently only provide Delta|Eta|
Session Thu afternoon 2
- p20, comment on new LHCtopWG summary plot: the plot has both pole mass (from XS) and from decay in it.
  - both measurements based on the XS (with phase-space cuts) and decay-based measurements suffer from non-perturbative issues, such as renormalons, but the specific effects differ between the two cases.
- p21: means top parton level (not particle level)
- sensitivity to scale variations? smaller for relative differential distributions than absolute distributions. Sensitivity here comes specifically from being close to the threshold for ttbar production
- More options for exploring shower variations are now available experimentally (for example in ATLAS since recently), while CMS has already been pursuing this for many of the measurements (for the measurement part).
Sven, running mass
- p17: m(ttbar) is well defined theoretically and shouldn’t depend on the mass scheme, but differences are visible here; in the MSbar scheme, the curve turns on sooner. This is actually an issue of being very close to threshold, where the MSbar mass is not a good definition for theoretical reasons.
- Running of the mass - the measurement should improve (which will happen), and the theory calculation also needs to improve (going to NNLO for the extraction).
- What about tt+2j? In principle could form heavier objects and have additional sensitivity by squeezing closer to threshold, but would need the theory calculation.
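For orientation on the running-mass bullets above, the leading-order MSbar running is the standard one-loop textbook result (included here for reference, not taken from the talk):

```latex
\mu \frac{d\,\overline m(\mu)}{d\mu}
  = -\frac{\gamma_0\,\alpha_s(\mu)}{4\pi}\,\overline m(\mu) + \mathcal{O}(\alpha_s^2),
  \qquad \gamma_0 = 6\,C_F = 8,
```
```latex
\overline m(\mu) = \overline m(\mu_0)
  \left[\frac{\alpha_s(\mu)}{\alpha_s(\mu_0)}\right]^{\gamma_0/(2\beta_0)},
  \qquad \beta_0 = 11 - \tfrac{2}{3}\,n_f ,
```

so the observed running is fixed at this order by the number of active flavors; the NNLO extraction discussed above refines the coefficients, not this structure.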
- Using SMEFTsim: the full package is very heavy; is it possible to load a subset? One can always use restriction cards, which will speed it up.
- Translation table available - yes, she can send it out.
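The restriction-card mechanism mentioned above works by importing the UFO model with a restriction suffix; MG5 then reads the matching `restrict_<name>.dat` file from the model directory and fixes the listed parameters, which reduces the number of vertices and speeds up generation. A sketch with placeholder model/restriction names (not necessarily the ones used in the talk):

```
# In the MadGraph5_aMC@NLO interpreter (names below are hypothetical):
import model SMEFTsim_example_UFO-my_restriction
generate p p > t t~
```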
Global EFT fit
- p7, 8: importance of quadratic terms - only important for large energies, but these slides seem to show that it makes a difference for small energies.
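For reference, the decomposition behind the linear-vs-quadratic discussion (standard EFT power counting, not from the slides): for a single operator with Wilson coefficient $c$,

```latex
\sigma(c) \;=\; \sigma_{\mathrm{SM}}
  \;+\; \frac{c}{\Lambda^{2}}\,\sigma_{\mathrm{int}}
  \;+\; \frac{c^{2}}{\Lambda^{4}}\,\sigma_{\mathrm{quad}},
```

where the quadratic term is formally higher order in $1/\Lambda^2$ and typically grows faster with energy than the interference term, which is why a sizeable quadratic effect at small energies is the surprising point here.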
- If ATLAS and CMS measure deviations from the SM, would these show up in an EFT fit? Depends on how the excesses are distributed over the different measurements.
- Several of the XS measurements are based on MVAs and profile-likelihood fits to many distributions, which makes them mostly insensitive to EFT contributions, since these would give different kinematic distributions. Thus, they cannot be included in such a fit. Ideally one would use only differential distributions, and particle level would be good for this. But even then, EFT contributions to the background estimates are still not included.
- Systematic uncertainties - would also need to include correlations of these between different measurements.
- ATLAS and CMS are also working on similar efforts, but try to account for all of this, thus it takes longer.
- Using comparison to common generator? Unfolding the same way? Yes, this was part of charge asymmetry paper already
- Also, the common MC effort for 13 TeV will show how ATLAS and CMS compare for the default generators
- Combining regularized cross-sections? Ideally one would regularize after the combination; otherwise regularization is applied multiple times
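A toy illustration of why the order matters (all numbers, the response matrix, and the first-difference penalty are invented): a Tikhonov-regularized unfolding is biased toward a smooth spectrum even on noiseless inputs, and the bias grows with the regularization strength. Combining pre-regularized results carries the full bias of each input, whereas regularizing once after combination, with the weaker regularization that the combined statistics allow, gives a smaller bias.

```python
import numpy as np

# Toy 4-bin truth spectrum and a response matrix with bin migrations
# (illustrative numbers only).
truth = np.array([10.0, 8.0, 6.0, 4.0])
R = np.array([[0.8, 0.1, 0.0, 0.0],
              [0.2, 0.7, 0.1, 0.0],
              [0.0, 0.2, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])
L = np.diff(np.eye(4), axis=0)  # first-difference (smoothness) penalty matrix

def unfold(d, tau):
    """Tikhonov-regularized unfolding: minimize ||R x - d||^2 + tau ||L x||^2."""
    return np.linalg.solve(R.T @ R + tau * L.T @ L, R.T @ d)

# Even on noiseless folded data the regularized estimator is biased toward
# smoothness, and the bias grows with the regularization strength tau.
bias_strong = np.linalg.norm(unfold(R @ truth, tau=1.0) - truth)
bias_weak = np.linalg.norm(unfold(R @ truth, tau=0.1) - truth)
# Averaging two results unfolded with tau=1.0 retains the bias_strong-sized
# bias; unfolding the combined data once with the weaker tau does better.
```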
- tau to lepton decays? These are not included in the signal for the theory prediction, but tau signal 'background' is scaled together with the signal component of the fit
- p7: any discrimination between signal and background? yes, there is, and this improves the fit
- fraction of events where photon comes from top or other (ISR or FSR)? - Estimate about 50%
CMS boosted top mass
- are the standard CMS JES corrections used? Not for boosted jets - what are the uncertainties on that extra correction? - p12 doesn’t seem to show the additional corrections?
- sideband regions in unfolding -
- systematic uncertainties also have large statistical component
- Top mass is extracted from fit to MC templates
ATLAS SMT top mass and uncertainties in ATLAS/CMS
- SMT uncertainty from b-quark FSR? See page 14, changing alpha_S and then keeping XB constant.
- taking b-quark mass into account? yes, but not varying b-quark mass, variation of rB takes care of this.
- CMS has ten times larger variation of rB, but obtains similar mass uncertainty due to rB - there are some differences in the procedure, but this is not yet fully understood.
color reconnection theory
- p7: gluon move model - we don’t want to use something that is intentionally extreme as a systematic uncertainty
- Measuring the quantities for b fragmentation and branching ratios at the LHC would be very useful in constraining the Pythia parameters. Should really do this in ttbar decays to have confidence.
- Observables: for example high-pT b-jets from top, look in cone around this, the tip of the jet should give information about vacuum-like color reconnection
Fri late morning
- should discuss this at a closed meeting
- should ideally come to an agreement on what tests to show and publish - closure, bottomline test
- Bottom-line test - only tests the response matrix and doesn’t include systematics; it also assumes the unfolding is ideal, otherwise the unfolded chi2 will be worse.
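A minimal numerical illustration of that statement (toy numbers, square response matrix, plain matrix-inversion unfolding): without regularization, the detector-level and unfolded chi2 are algebraically identical, so any difference between the two is introduced by the unfolding method itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy response matrix, truth-level prediction, and pseudo-data (invented numbers).
R = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
pred = np.array([20.0, 15.0, 10.0])
cov = np.diag([2.0, 1.5, 1.0])  # detector-level covariance
data = R @ pred + rng.multivariate_normal(np.zeros(3), cov)

# Detector-level ("bottom-line") chi2: fold the prediction, compare to data.
r_det = data - R @ pred
chi2_det = r_det @ np.linalg.solve(cov, r_det)

# Unfolded-level chi2 with exact (unregularized) matrix-inversion unfolding.
Rinv = np.linalg.inv(R)
x, cov_x = Rinv @ data, Rinv @ cov @ Rinv.T
r_unf = x - pred
chi2_unf = r_unf @ np.linalg.solve(cov_x, r_unf)
# chi2_unf equals chi2_det here; regularization breaks this identity.
```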
- p17: nice idea, but of course requires optimization studies and tests, have to be cautious.
- Marginalizing introduces correlations between systematic uncertainties, similar to profiling, reducing overall impact of systematic uncertainties.
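A toy sketch of that mechanism (all templates and uncertainties are invented): a chi2 with two nuisance parameters whose templates overlap in one bin; inverting the Hessian shows post-fit constraints below the unit prior and a nonzero correlation between the nuisances, which is what reduces their combined impact.

```python
import numpy as np

# Toy 3-bin measurement: signal template s, and two nuisance-parameter
# templates a, b that overlap in the middle bin (illustrative numbers).
s = np.array([1.0, 1.0, 1.0])
a = np.array([1.0, 0.5, 0.0])
b = np.array([0.0, 0.5, 1.0])
sigma = np.array([0.2, 0.2, 0.2])  # per-bin data uncertainties

# chi2(mu, t1, t2) = sum_i ((d_i - mu*s_i - t1*a_i - t2*b_i)/sigma_i)^2
#                    + t1^2 + t2^2   (unit Gaussian priors on the nuisances)
X = np.column_stack([s, a, b])
W = np.diag(1.0 / sigma**2)
P = np.diag([0.0, 1.0, 1.0])          # prior terms; none on the signal strength
cov = np.linalg.inv(X.T @ W @ X + P)  # post-fit covariance (Gaussian approx.)

post_fit_t1 = np.sqrt(cov[1, 1])      # < 1: the fit constrains the nuisance
corr_t1_t2 = cov[1, 2] / np.sqrt(cov[1, 1] * cov[2, 2])  # induced correlation
```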
CMS top in PbPb
- b-tag calibration: did the analyzers have to do it themselves?
- nuclear PDF set? Only used one, but expect effect to be small
- p12: excess in 2 b-tag events in data?
- how is the b-tagging calibrated? b-tagging is re-trained in the HI environment, then the p-p efficiencies are used, the uncertainties are inflated, and the overall b-tag scale factor is a nuisance parameter in the fit. Not a dedicated calibration.
- isolation - also measured in Z events