Area 3 meeting: experimental measurements and observables

Europe/Zurich
Andrei Gritsan (Johns Hopkins University (US)), Eleni Vryonidou, Florencia Canelli (Universitaet Zuerich (CH)), Nuno Castro (LIP and University of Minho (PT)), Pietro Govoni (Universita & INFN, Milano-Bicocca (IT))
Description

The agenda of this meeting is under construction.
The meeting will be held online via Zoom.
 


Minutes of 2021-01-11 meeting 

 

Agenda: https://indico.cern.ch/event/971725/

 

~100 participants


 

speaker: Florencia (Introduction)

 

  • No comments/questions

 

speaker: Jay (Inclusive, fiducial, and differential measurements in application to EFT)

 

  • Andrei: Use multiple 1D measurements or multi-dimensional measurements? The latter is better, but it depends on availability. Are there statistical problems with multiple 1D measurements? One could argue that the approximation for the statistical uncertainties is not that bad, and it is used extensively in other contexts (MC tuning, PDF fits, etc.); a toy chi-square sketch is given after this list.

  • Kyle: about the transfer matrix: does it depend on the detector only? There are variables over which an integration is performed, i.e. the transfer function still carries some imprint of the physics used to calculate it.

    • True, but in practice this has never been seen to matter.

  • Kyle: moving a lot of procedures currently done at the experiment level to the unfolded level, such as MC tuning, may be very expensive (not sure I got the comment). A possible way to deal with tuning would be to use reweighting, which could be rederived each time a new feature becomes available in the MC. Kyle sent me a reference for his proposal via chat: https://beta.briefideas.org/ideas/8106c030eba22dd3a8d268940d5e42d8

  • black again
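
A minimal illustration of the Gaussian-approximation fit mentioned above (editor's toy sketch, not from the talk): an EFT coefficient is fit to a binned differential measurement by minimizing a chi-square built from a covariance matrix, with the prediction assumed quadratic in the coefficient. All numbers and names are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical unfolded differential cross section (3 bins) and its covariance.
data = np.array([12.0, 7.5, 3.1])          # measured values (toy numbers)
cov = np.diag([0.8, 0.5, 0.3])**2          # covariance matrix (here diagonal)
cov_inv = np.linalg.inv(cov)

# Assumed quadratic dependence of the prediction on one Wilson coefficient c:
# pred(c) = sm + c * lin + c^2 * quad  (per bin, illustrative numbers only).
sm   = np.array([11.5, 7.8, 3.0])
lin  = np.array([ 1.0, 0.6, 0.2])
quad = np.array([ 0.3, 0.2, 0.1])

def chi2(c):
    residual = data - (sm + c * lin + c**2 * quad)
    return residual @ cov_inv @ residual

result = minimize_scalar(chi2)
print(f"best-fit c = {result.x:.3f}, chi2_min = {result.fun:.2f}")
```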


 

speaker: Pietro (Current approach with dedicated EFT measurements)

 

  • Kyle: in the past, the full information from experimental data could only be handled in internal multi-experiment combinations. Publishing the likelihoods on HEPData is now standard practice, so it is also an option (RECAST analyses by ATLAS: https://home.cern/news/news/knowledge-sharing/new-open-release-allows-theorists-explore-lhc-data-new-way). More information, with links to published likelihoods, is available here: https://iris-hep.org/projects/pyhf.html (a minimal pyhf sketch is given after this list).

  • Andrei: a likelihood as a function of what? How do you define the measurement? The particular model/parameters have to be agreed upon (Kyle: agrees with the comment; one needs to define which EFT parameters are to be constrained, etc.).

  • Ulascan: in principle publishing a likelihood is a good idea, but only if the number of parameters is small. With hundreds of systematics, etc., is it viable? Kyle: yes; some theory papers using them have already been published. [FC: we should check the theory papers that Kyle referred to, the ones using the likelihoods provided by the SUSY and EXO papers, as well as the implementation in HEPData.]

  • Andrei: what about flat directions when reporting the full likelihood? Technically not a problem, but it depends on how the reporting is done (e.g. an approximation with a covariance matrix will not work).
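
As an illustration of the published-likelihood workflow mentioned above, a minimal pyhf sketch (editor's toy model, not one of the released ATLAS workspaces): a single-channel model is built, the signal strength is fit, and a profile-likelihood-based test is run. A published HEPData workspace would instead be loaded with pyhf.Workspace from its JSON file.

```python
import pyhf

# Toy single-channel model; in practice one would load a published JSON workspace
# from HEPData via pyhf.Workspace(json.load(...)) and build the model from it.
model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0, 8.0], bkg=[50.0, 52.0], bkg_uncertainty=[4.0, 3.0]
)
observations = [53.0, 65.0]
data = observations + model.config.auxdata

# Maximum-likelihood fit of the signal strength and nuisance parameters.
best_fit = pyhf.infer.mle.fit(data, model)
mu_hat = best_fit[model.config.poi_index]
print(f"best-fit signal strength mu = {mu_hat:.2f}")

# Profile-likelihood-based hypothesis test at mu = 1 (returns CLs).
cls = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
print(f"CLs(mu=1) = {float(cls):.3f}")
```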


 

speaker: Ulascan (MELA: matrix element inspired approach for EFT measurements)

 

  • MELA is supposed to be optimal given the Neyman-Pearson (NP) lemma, but for composite hypotheses this is no longer true (Pietro V.). True, but one can use two discriminants at the same time. Andrei: this is a key aspect of the MELA approach: it is shown that only two discriminants are sufficient to be fully optimal for a given parameter, so it works for a continuous set of parameters. In the limit of EFT validity, a single discriminant is optimal (for a small EFT parameter); a schematic sketch of the two-discriminant construction is given after this list.

  • Kyle: NP is optimal for simple tests, but when we go to continuous multi-parameter tests things become complicated for several reasons; in particular, "optimal" becomes ill-defined in this context. Contours will always include the SM. In terms of these observables, the approach is no longer optimal once systematics and detector effects are included. Ulascan: detector effects can be treated as transfer functions; approximations are always needed, as are the correlations between observables.

  • Andrei: this is answered on slide 8: the same approach with two discriminants remains valid, and the detector can be incorporated with ML. Kyle agrees; the difference is whether detector effects are incorporated before or after the method.
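
A schematic illustration of the two-discriminant construction discussed above (editor's sketch; in practice the per-event probabilities are computed with the MELA package): matrix-element probabilities for the SM and BSM hypotheses, plus the interference term, are combined into two discriminants per event.

```python
import numpy as np

def mela_discriminants(p_sm, p_bsm, p_int):
    """Schematic MELA-style discriminants built from per-event matrix-element
    probabilities: p_sm and p_bsm for the two pure hypotheses, p_int for the
    interference contribution (all arrays of the same shape)."""
    d_bsm = p_sm / (p_sm + p_bsm)                  # separates SM from pure BSM
    d_int = p_int / (2.0 * np.sqrt(p_sm * p_bsm))  # sensitive to the interference
    return d_bsm, d_int

# Toy numbers for three events (illustrative only).
p_sm  = np.array([0.9, 0.4, 0.7])
p_bsm = np.array([0.2, 0.5, 0.1])
p_int = np.array([0.1, -0.2, 0.05])
print(mela_discriminants(p_sm, p_bsm, p_int))
```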


 

speaker: Kyle (MadMiner: Machine learning–based inference for particle physics)

 

  • Jay: (slide 28) how are systematics dealt with in the framework?

    • Starting from the simulation (Geant), which implements some smearing, the smearing itself is assigned an uncertainty described by additional nuisance parameters. The dependence on the nuisance parameters is studied with additional generations.

  • Andrei: do you confirm that there are two ways to use ML: (1) create an optimal observable to be used in familiar ways in an analysis, and (2) perform the inference directly with the ML method, and that you advocate the latter? Kyle confirms: the typical approach (SALLY, SALLINO) is to construct observables and perform the analysis, or alternatively to use the full likelihood as input (RASCAL).

  • Ulascan: concern about the dependence on the simulation, which is used for training; the simulation does not always reproduce the data well, so how much do you trust it? Referring in particular to tricky aspects, like the correlation between jets and MET, where not only the resolution but also the scale factors matter.

    • It is a generic problem, true for every observable, and it has to be addressed by the experimentalists. One needs to be careful with the systematics and choose the inputs provided to a NN wisely, according to which pieces can be trusted.

  • Alexander: what is the status of the connection between MadMiner and MadGraph?

    • MadGraph is used internally when calculating the propagation of uncertainties in building the morphing; in any case, the goal is to factorize the morphing part as much as possible (a generic morphing sketch is given after this list).

  • Alexander: what is causing the deformation in the plot on page 14?

    • There is some kind of degeneracy at play, plus an effect from higher orders.
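
To illustrate the morphing mentioned above (a generic editor's sketch, not MadMiner's actual implementation): for a single Wilson coefficient entering the amplitude linearly, the event weight is a quadratic polynomial in the coefficient, so weights generated at three benchmark points determine the weight at any other point.

```python
import numpy as np

# Benchmark values of the Wilson coefficient at which event weights are generated.
benchmarks = np.array([0.0, 1.0, -1.0])

# Weights of one event at the three benchmarks (toy numbers).
w_bench = np.array([1.00, 1.45, 0.75])

# Solve w(c) = a0 + a1*c + a2*c^2 for the polynomial coefficients.
vandermonde = np.vander(benchmarks, 3, increasing=True)   # columns: 1, c, c^2
a0, a1, a2 = np.linalg.solve(vandermonde, w_bench)

def morphed_weight(c):
    """Event weight at an arbitrary coefficient value c."""
    return a0 + a1 * c + a2 * c**2

print(morphed_weight(0.5))   # weight of this event at c = 0.5
```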

 

speaker: Cen Zhang (Fitting EFT models with experimental measurements in Top Physics)

 

  • Andrei: what do you take as input from the experimental measurements? The list of SM measurements (histograms) given in the talk. What about slide 19, the 5D measurements? Yes, but this has not been done yet.

  • Slide 30: we need to be consistent and compute all the diagrams at the same precision.


 

speaker: Tevong You (Fitting EFT models with experimental measurements in Higgs and EW Physics)

 

  • Robert: would the effect of the neglected correlations on some of the backgrounds be considered (e.g. ttX, dibosons, etc.)? The short answer is no, since it has not been studied so far.

  • Andrei: the same issue of four 1D measurements vs one 4D measurement. Answer: only single 1D measurements are used in this fit. Furthermore, analyses from Run 1 should not be used for EFT since they assumed the SM, and the kappa framework does not include any acceptance effects. Run 2 STXS measurements also require corrections and/or systematics due to SM assumptions; see for example the ATLAS parameterization of a large effect in the H->4l decay. This effect is large due to substituting gamma* for Z compared to the SM case, and the acceptance is very different.


 

speaker: Andrei (summary)

 

  • Kyle: for the ME approach, we want to work as close as possible to the full kinematics. For ML we can choose how much of the kinematic information we want to use, which can be an advantage or not. If likelihoods are provided, people will use them.

  • Ken: we should use as many measurements as possible, but as few observables as possible, in order to avoid correlations as much as possible (or at least to have them). Dedicated measurements might be tricky to use since they will overlap with other measurements; they may not be as optimal for certain measurements, but they are more general otherwise.

    • Kyle: beware of the underlying assumptions on correlations (e.g. BLUE combinations assume the Gaussian regime); a toy BLUE sketch is given after this list.

  • Ken: acceptance effects in fiducial measurements. In some cases parametric dependencies are reported. Andrei: the problem is that this is often done only for very specific operators, depending on the analysis.

  • Next steps: document the discussions held so far. 
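
For reference on the BLUE caveat above, a minimal sketch (editor's toy numbers) of a BLUE combination of two correlated measurements of the same quantity: the weights follow from the covariance matrix, and the result is only strictly valid in the Gaussian regime.

```python
import numpy as np

# Two measurements of the same quantity with correlated uncertainties (toy numbers).
measurements = np.array([172.5, 173.1])
cov = np.array([[0.49, 0.21],
                [0.21, 0.64]])   # covariance matrix including the correlation

# BLUE weights: w = C^-1 1 / (1^T C^-1 1); the combination is w . measurements.
ones = np.ones_like(measurements)
cov_inv = np.linalg.inv(cov)
weights = cov_inv @ ones / (ones @ cov_inv @ ones)

combined = weights @ measurements
uncertainty = np.sqrt(1.0 / (ones @ cov_inv @ ones))
print(f"combined = {combined:.2f} +- {uncertainty:.2f}, weights = {weights}")
```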


 
