Dear all,

See below for a summary of the discussion and plans following the first forum meeting on 16 January. The short-term goal of the forum is now to produce a candidate list of models. Analyzers should contact the relevant theorists and their counterparts from the other experiment and produce a proposal that works for their analysis. A dedicated higher-traffic email list will be created for that purpose, and the results will be reported back to this forum at the next meeting, which will be announced shortly.

Sarah Alam Malik (CMS),
Caterina Doglioni (ATLAS),
Antonio Boveia (ATLAS),
Steven Lowette (CMS),
Steve Mrenna (CMS)

=================================
***Introduction -  Antonio Boveia***
=================================

Presenting a limited-scope forum to bring the best benchmarks for DM searches to the experiments on a short timescale. Experimenters need help from theorists to give the work already done on simplified models the relevance it deserves.

**Twiki**

https://twiki.cern.ch/twiki/bin/view/LHCDMF/WebHome

**Mandate:**

1) Well-motivated, prioritized, small, practical set of simplified models
  * the EFT is a simple and general, but limited benchmark
  * simplified models are a step beyond: which ones to choose, and with which parameters?
     * make a conscious decision given many constraints and past experience
     * don't reinvent the wheel: either start from existing proposal or choose the best motivated from a complete set

2) Common implementation of matrix element + choices needed to generate models
  * Needed for experimentalists: choice of event generator, order of calculation, matching scales for MC simulations
  * Needed for theorists: once settled, comparison and reinterpretation is easier

3) EFT validity for Run-2
  * ATLAS has adopted a procedure for truncation: should this be discussed and agreed by both collaborations?

4) Conclusion of the work: Write-up (on arXiv)
  * Coherent and convincing documentation is needed from the two experiments and the theorists involved in the effort before Run-2.

Other topics: what is needed to put LHC DM searches in a broader context? Agreements and a collection of central tools would be useful for the presentation of results.

**Aim and organization of meeting:**

Round-table to collect ideas, understand who can work on what and where more effort is needed, decide how to divide the work.

=================================
***Round table***
=================================

**List of models**

Linda Carpenter: mono-Higgs and EW gauge boson models. Advantages of searches at colliders: useful to rule out or confirm other evidence, interesting portals with low backgrounds. A suite of models exists, both EFT and simplified, and it would be useful to cross-correlate across channels.

Uli Haisch & collaborators: DM with gauge-boson couplings, where the weak interaction motivates the "darkness" of Dark Matter. Work and reinterpretation of various searches has already been done in arXiv:1501.00907 and references therein. These operators are interesting for colliders because both DD and ID have weaker limits for all or some DM masses. The forecast for these models is that the limits will improve by a factor of two in the first year, after which searches become systematics-dominated. Suggestion also to look at jet-jet angular correlations after VBF cuts (deltaPhi): the angular decomposition can distinguish between models.

Matt Buckley: scalar and pseudoscalar models, summarizing the extensive work done in arXiv:1410.6497. Discussion on width: could be treated as an additional free parameter, or set as "minimal".
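As an illustration of the "minimal" width option, the sketch below sums the open tree-level channels of an s-channel scalar or pseudoscalar mediator (DM pair plus quark pairs). The coupling conventions, Yukawa normalisation, and quark mass values are assumptions for illustration only, not the Forum's agreed choices:

```python
import math

V = 246.0  # Higgs vev in GeV, used to normalise Yukawa-like couplings (assumption)

def beta(m_phi, m):
    """Velocity factor sqrt(1 - 4 m^2 / m_phi^2); zero below threshold."""
    x = 1.0 - 4.0 * m * m / (m_phi * m_phi)
    return math.sqrt(x) if x > 0.0 else 0.0

def gamma_dm(m_phi, m_dm, g_dm, scalar=True):
    """Partial width phi -> chi chibar (beta^3 for scalar, beta^1 for pseudoscalar)."""
    n = 3 if scalar else 1
    return g_dm**2 * m_phi / (8.0 * math.pi) * beta(m_phi, m_dm)**n

def gamma_ff(m_phi, m_f, g_q, n_c=3, scalar=True):
    """Partial width phi -> f fbar with an assumed Yukawa-like coupling g_q * y_f."""
    y_f = math.sqrt(2.0) * m_f / V
    n = 3 if scalar else 1
    return n_c * (g_q * y_f)**2 * m_phi / (16.0 * math.pi) * beta(m_phi, m_f)**n

def minimal_width(m_phi, m_dm, g_dm=1.0, g_q=1.0, scalar=True):
    """Minimal width: sum of the open DM and quark channels."""
    quark_masses = [0.0022, 0.0047, 0.096, 1.28, 4.18, 173.0]  # GeV (approximate)
    width = gamma_dm(m_phi, m_dm, g_dm, scalar)
    width += sum(gamma_ff(m_phi, m_q, g_q, scalar=scalar) for m_q in quark_masses)
    return width
```

Treating the width as a free parameter instead would amount to replacing this sum with an input value (e.g. the "2x minimal" option mentioned below).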

**List of models and implementation (experimental contributions)**

Andy Nelson and Amelia Brennan: ATLAS monoW/Z. The simplified models considered for the 8 TeV ATLAS monoZ paper are a subset of those shown by Linda Carpenter.
* Question/wishlist: why don't we have a simplified-model version of the VVChiChi EFT model? Work has started in looking at the s-channel (scalar & vector) and t-channel (scalar). Further questions:
  * need an agreement on the approach to widths, either fixed or determined by the other parameters
  * do we scan over coupling strengths?

Valerio Ippolito and David Salek: ATLAS monojet. Presenting a brief outline of the plan for Run-2: simulation needs to be produced in a timely and effective manner:
1) focus on settings in the generators, but do studies to compare generators for similar models
2) be economical: experimentalists can't generate too many points at full-simulation level.

Lashkar Kashif and collaborators: ATLAS mono-Higgs. Run-2 plans: use only simplified models of DM production, studying both s- and t-channel processes with various helicity structures and various decay channels. Issues: Higgs width and mediator width; a common prescription is needed before doing a grid scan.

Ren-jie Wang, Darien Wood and collaborators: CMS Mono-Z. Models being prepared for Run-2: s-channel with a vector/scalar mediator, t-channel with a colored scalar. Implementation in FeynRules -> Madgraph (MG_aMC_2.1), respecting the coupling-width relation.

Phil Harris and collaborators: overview of simplified models. Work has been done in arXiv:1411.0535 towards a thorough scan of simplified models for all jet/V/top/b-quark signatures, based on the following parameters:
 * process (axial/vector/scalar/pseudoscalar)
 * coupling
 * mediator
 * width (minimal width, 2x minimal width)
 * DM mass
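A scan over these parameters can be organized as a simple grid, as in the sketch below. The grid values and the on-shell filter are placeholders for illustration, not the scan of arXiv:1411.0535:

```python
from itertools import product

# Illustrative grid; values are placeholders, not the Forum's agreed scan.
mediator_types = ["vector", "axial", "scalar", "pseudoscalar"]
couplings = [0.25, 1.0]
mediator_masses = [100.0, 300.0, 1000.0, 3000.0]  # GeV
dm_masses = [1.0, 50.0, 500.0]                    # GeV
width_factors = [1.0, 2.0]                        # times the minimal width

grid = [
    {"type": t, "g": g, "m_med": mm, "m_dm": md, "width_factor": wf}
    for t, g, mm, md, wf in product(
        mediator_types, couplings, mediator_masses, dm_masses, width_factors)
    if mm > 2.0 * md  # e.g. keep only on-shell points to trim the grid
]
print(len(grid), "points to generate")
```

Even this modest grid produces over a hundred points, which illustrates why the discussion below stresses being economical with full simulation.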

Barbara Alvarez and collaborators: ATLAS monotop. Summary of the models used for the ATLAS leptonic search (arXiv:1410.5404). Two benchmark models and parameters from arXiv:1407.7529, where CMS used a slightly different set. Plans for the Run-2 scan are available on the Forum twiki (https://twiki.cern.ch/twiki/bin/view/LHCDMF/MonoTop; the JO are restricted); help and feedback from theorists and collaboration with CMS are welcome.

Bjoern Penning and collaborators: presenting ongoing work on simplified models with light and heavy quarks. Various simulation choices and first feasibility studies are available; they are collaborating with others in the Forum beyond Durham/Bristol/Imperial who are interested in the implementation and testing of models. The material related to the b-flavored models has been collected for the DM@LHC proceedings: https://twiki.cern.ch/twiki/bin/view/DMLHC/DmRepository. The scan probes mediator mass, DM mass, coupling, and width.

**Implementation**

Emanuele Re: Powheg implementation of spin-0 and spin-1 mediators with various coupling structures. Main innovation: QCD corrections to the monojet signature; the ggg vertex through a top loop will also be available soon. If Powheg is used, consistent matching is done up to 3 jets. The team is willing to make changes to the Powheg implementation on a short timescale if needed. Other models are suggested, in particular DM+2-jets as an EFT with a top loop: this could be looked at with azimuthal correlations between the two jets, close to a H->inv VBF search.

**EFT validity**

Thomas Jacques, Steven Schramm and collaborators: EFT validity in ATLAS. Review of the procedure: with a simple mediator completion of the EFT, one can recover the relation of Mmed to M* and test the validity assumption. The relation depends on the couplings, and it yields a rescaling/truncation procedure. The robustness of LHC constraints obtained with the EFT depends on this.
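A minimal sketch of the truncation logic, assuming the tree-level matching Mmed = sqrt(g_q * g_chi) * M* for an s-channel mediator; the event-level momentum transfers Q_tr are inputs one would take from the generator:

```python
import math

def m_med(m_star, g_q, g_chi):
    """Tree-level matching for an s-channel completion: Mmed = sqrt(g_q g_chi) M*."""
    return math.sqrt(g_q * g_chi) * m_star

def truncated_fraction(q_tr_values, m_star, g_q=1.0, g_chi=1.0):
    """Fraction of simulated events with momentum transfer below Mmed,
    i.e. the part of the sample where the contact-operator picture holds."""
    cut = m_med(m_star, g_q, g_chi)
    kept = sum(1 for q in q_tr_values if q < cut)
    return kept / len(q_tr_values)
```

In a truncation procedure of this kind, the signal cross section at a given M* is rescaled by this fraction (iterating until the limit is stable), so the quoted constraint only uses events where the EFT approximation is defensible.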

=================================
***Discussion***
=================================

This discussion covers the organization of the work: what are the next steps, and how do we carry forward the discussion we started? Writing down a list of the models, at the level of the appropriate Lagrangian and free parameters for each model, is the first step.
We can organize the further discussion in a few ways: we (the organizers) would like people to self-organize and start discussions as much as possible. Our proposal is to set up meeting times for subsequent discussion, but to carry on the discussion on the mailing list.

Main points of the discussion:

**There are various proposals in the literature that have converged on lists of models, but these lists overlap and are not fully distinct. We need to converge on a minimal set. How do we do so?**

Phil Harris and others have a framework that could identify which regions of parameter space are most interesting and which possibilities are truly distinct. Scans should tell us:
  * the phase space where LHC is effective, useful points where colliders complement DD bounds
  * whether models have degenerate kinematics: which model points experimentalists should fully simulate and study, and which points theorists can obtain by recasting results from these fully simulated points.

A discussion on the computational cost of such a framework follows. It is clear that the experiments cannot perform a scan of huge proportions with full simulation. It is argued that full simulation may not be needed: generator level with parton shower would be sufficient, reducing the strain on the experimental MC production resources. However, previous experience with SUSY scans shows that effects such as the impact of pile-up on lepton isolation cannot be easily recast, that it might not be possible to recast more complicated searches, and that the collaborations will require generator-level results to be checked against full simulation.

It seems convenient to have generator-level scans of the kinematic quantities relevant for the searches as a preliminary step in choosing the models and parameters to simulate. In some cases, the kinematic distributions will be nearly the same but the rates will change. One cannot, however, design right away all searches that would be able to distinguish models based on more complicated variables (e.g. angular correlations); this can be left as a problem to solve in case of discovery. Rather, the goal of these scans should be to identify regions of parameter space where existing analyses must be optimized differently, or where new searches should be done instead.
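One simple way to flag degenerate kinematics at generator level is to compare unit-normalised distributions of a search variable between two model points, as sketched below. The choice of MET, the binning, and the agreement threshold are all illustrative assumptions, not an agreed prescription:

```python
import numpy as np

def shapes_degenerate(met_a, met_b, bins=None, threshold=0.1):
    """Compare normalised MET shapes of two model points at generator level.
    Returns True if the two unit-normalised histograms agree bin by bin
    within `threshold`, i.e. the points differ only in rate, not in shape."""
    if bins is None:
        bins = np.linspace(200.0, 1200.0, 21)  # GeV; illustrative binning
    h_a, _ = np.histogram(met_a, bins=bins, density=True)
    h_b, _ = np.histogram(met_b, bins=bins, density=True)
    # density * bin width = per-bin probability; compare the largest difference
    return bool(np.max(np.abs(h_a - h_b)) * (bins[1] - bins[0]) <= threshold)
```

Points flagged as degenerate would then be candidates for reinterpretation by rescaling rates, while only the distinct shapes need full simulation.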

A first-principles distinction like this would easily set the scene and define what work on benchmarks is done by experimentalists and what is done by theorists via recasts.

***Shall we divide the work in terms of model type/final states?***

The Twiki proposes a division in terms of:

1) monojet-style-models
2) scalar simplified models (heavy flavors)
3) EW and mono-H models (non-monojet)

Although these might be useful working divisions, we should keep coherence between the models used in the various signatures. We should also make sure models unique to a signature are not left out. This can be achieved with open discussions on the mailing lists.

***Shall we use non-collider/non-mono-X constraints to restrict the parameter space these models will cover?***

There are two points of view:
1) phenomenological studies of direct-detection/indirect-detection rates, or of cosmological constraints, should be done before starting the simulation, in order to restrict the parameter space. If those constraints are not applied, the comparison with DD/ID loses meaning.
2) simplified models should not be constrained. Constraints can be applied correctly only once a full UV completion has been specified. Simplified models are intended as benchmarks, or building blocks out of which a UV completion can be constructed. The community is aware of the limitations and uses them accordingly. It may be useful to understand the parameter space favored by such constraints in an example completion, but this knowledge should not bias our choices of parameter space.

*** Additional criteria to prioritize and select models:***
1) rank models based on their usefulness with early vs. late data. This is useful for experimentalists as it follows the data-taking schedule.
2) have a wishlist from experimentalists and theorists to cross-check the choices made (e.g. practical constraints in terms of the number of events).

***Practical suggestion on how to move forward***

1) The organizers divide the contributors into groups: monojet-like, scalar and HF, EW models. In each of these groups, theorists and experimentalists are paired up in sub-topics - this approach to the discussion takes advantage of existing boundaries but should not be regarded as a rigid constraint.
2) Each of the groups has ~1 week to discuss and report to the organizers a list of relevant models, preferably from the existing literature, though new ideas can also be investigated. The lists have to include: process/coupling/mediator/width and DM mass range, and the rationale behind these choices. The additional information (e.g. MadGraph cards) needed to propagate the information will be discussed and agreed upon.
3) Participants investigate kinematic distributions of the various models to identify which models and parameter ranges are distinct from an analysis optimization point of view, and which are purely a matter of different signal rates and theoretical reinterpretation. At the next meeting (last week of January) we discuss the results and converge on a minimal list. We should also agree on what plots we need to make our decisions.
4) Those interested in the implementation are welcome to produce complementary/cross-check results of these scans, and show them at the next meeting.