MW topical meeting

4/3-006 - TH Conference Room (CERN)


This meeting is dedicated to physics modeling issues in the W-mass measurement at the LHC. It is part of the 2017 TH Institute on

"LHC and the Standard Model: Physics and Tools".

Registered participants:
  • Alessandro Vicini
  • Andrzej Konrad Siodmok
  • Aram Apyan
  • Arie Bodek
  • Bennie Ward
  • Doreen Wackeroth
  • Elizabeth Locci
  • Elzbieta Richter-Was
  • Fabrice Balli
  • Fulvio Piccinini
  • Giancarlo Ferrera
  • Grigorios Chachamis
  • Ignazio Scimemi
  • Josh McFayden
  • Karla Pena
  • Louis Fayard
  • Ludovica Aperio Bella
  • Maarten Boonekamp
  • Marco Cipriani
  • Mariarosaria D'Alfonso
  • Matthias Schott
  • Michelangelo Mangano
  • Mika Anton Vesterinen
  • Nansi Andari
  • Nicolò Foppiani
  • Olli Lupton
  • Pedro Vieira De Castro Ferreira Da Silva
  • Simone Amoroso
  • Stefano Camarda
  • Suman Chatterjee
  • Tairan Xu

Minutes and action items of the MW topical meeting on June 22, 2017:

  - write up a short summary of methods used in the ATLAS analysis (Maarten)

   Most of the discussion was about understanding the master formula for
   the physics modeling on slides 7-8 of Stefano's talk, and how theory
   uncertainties propagate to the W-mass systematics.
   It was also asked what would happen if a single theory prediction were
   used for all components instead of this 'composite' model.

  - initiate a quantitative study of the correlation/decorrelation of QCD
   uncertainties in W and Z production (pT distribution in particular)

   Are there theory arguments for a particular procedure? For instance,
   it was suggested that there are good arguments for keeping the scale
   variations for light quarks and gluons correlated in the W/Z ratio,
   but not for heavy flavors.

   Another suggestion was to revisit the definition of the ratio, e.g., to
   study the effect of kinematic differences to first 'align the Sudakovs'
   and then take the ratio.

   It was emphasized that NNLO+NNLL uncertainties are still relatively
   large at small qT, even in normalized distributions; that there is a
   5% scale uncertainty in the ratio (fully correlated); and that,
   interestingly, the observed behaviour of the ratio agrees better with
   NNLL when the parton evolution is done at LO (see Massimiliano's talk).
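The difference between treating scale variations as correlated or decorrelated between the W and Z predictions can be illustrated with a toy numerical sketch. All cross-section values below are hypothetical placeholders chosen for illustration, not actual QCD predictions:

```python
# Toy illustration of correlated vs. decorrelated scale variations
# in a W/Z cross-section ratio. All numbers are hypothetical
# placeholders, not real predictions.

# Hypothetical cross sections (arbitrary units) at seven scale
# choices: the central scale plus six mu_R/mu_F variations.
sigma_W = [20.0, 20.8, 19.3, 20.5, 19.6, 21.0, 19.1]
sigma_Z = [2.00, 2.07, 1.94, 2.05, 1.96, 2.09, 1.92]

central_ratio = sigma_W[0] / sigma_Z[0]

# Correlated: vary W and Z with the SAME scale point, then take the
# envelope of the resulting ratios.
corr_ratios = [w / z for w, z in zip(sigma_W, sigma_Z)]
corr_unc = max(abs(r - central_ratio) for r in corr_ratios)

# Decorrelated: vary W and Z independently; envelope over all pairs.
decorr_ratios = [w / z for w in sigma_W for z in sigma_Z]
decorr_unc = max(abs(r - central_ratio) for r in decorr_ratios)

print(f"central ratio     = {central_ratio:.3f}")
print(f"correlated unc.   = {corr_unc:.3f}")
print(f"decorrelated unc. = {decorr_unc:.3f}")
```

Because the scale dependence largely cancels point by point in the correlated case, the decorrelated envelope comes out much larger, which is why the choice of procedure matters for the W-mass systematics.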

  - further studies on understanding the origin of the hardening of the
   pTW distribution (with respect to pTZ) when going to NNLL.
   Shall we plan a dedicated comparison of DYRes, ResBos, CuTe, DIRE,
   Sherpa, and Geneva?
   See the remark above and Massimiliano's slides showing a comparison of
   DYRes, ResBos, and CuTe with Pythia 8 AZ.

  - role of heavy quarks:
   discussion about the role played by collinear PDFs and the sensitivity
   to improvements in the description of the heavy-quark kinematics
   (follow-up on Alessandro's study of b-quark mass effects;
    improved charm treatment?)

  After a discussion with Emanuele Re, it seems that the Les Houches study
  will mostly focus on the different approaches implemented in parton-shower
  tools to handle the transition from bottom to g -> b bbar in the backward
  evolution, and on the impact of the different choices on the pTZ
  prediction.

  Frank Krauss suggested checking the y dependence of the flavour
  decomposition (already in progress) and the effect of a flavour-blind
  kT kick to partons.

  Frank Krauss had a number of ideas for further studies of parton-shower
  uncertainties (e.g., is the scale of alpha_s varied in Pythia?) and noted
  that Sherpa has only one parameter for tuning. There are now two parton
  showers in Sherpa (one of them Dire), and it was suggested to compare them.

  - Z+jet at NNLO matched to analytic resummation is work in progress (see
   Aude Gehrmann's talk at the TH Institute, with P.F. Monni). We also need
   W+jet at NNLO (+resummation) (timescale?).

  - Maria's talk on Z pT and PDFs mainly concentrated on the impact of the
   inclusion of large-pT Z data. It was pointed out that, to be able to
   include the experimentally very precise small-pT Z data, one needs to
   include resummation in the predictions. Other issues are how to estimate
   the size of non-perturbative corrections and how to treat the relatively
   large statistical uncertainty of the Monte Carlo samples providing the
   higher-order predictions. It was also asked whether we need to worry
   about power corrections at this level of precision.

