Speaker: Zhangqier Wang (Massachusetts Inst. of Technology (US))
P6: mixed QCD-EW corrections -- are these now used as standard in all searches at the LHC?
Yes, for all V+jets
For the VBF topologies?
No, because different phase space
P10, H → invisible: the mono-jet channel shows a 1.5 sigma excess and mono-V a 0.5 sigma excess -- naively, why is the combined excess only 0.25 sigma?
When we do the combined fit, many NPs are correlated, so the combination settles into a different minimum.
Does this point to issues with the tagger?
No, rather to lepton ID; it is a generic feature of a fit with many parameters. There is no specific direction that this points to; it could be a statistical fluctuation.
Does that point to complex correlations between the NPs, and a combined result with a dependence on the correlation model between mono-jet and mono-V?
That’s exactly it.
Is there recent work on mono-V in the context of SUSY -- Higgsino, well-tempered neutralino?
In short, no. Maybe there is a specific SUSY search.
You mentioned a MadAnalysis card (with Delphes) to reproduce the results. How much does that depend on the specific signal? You show ADD -- does it work for rather different signals, e.g. H → inv?
H → inv is very similar to ADD. It would be odd to have a systematic which affects one model but not another.
What trigger do you use? Which thresholds?
MET trigger: 120 GeV online, 250 GeV offline; a small HT requirement to reduce noise.
Speaker: Guglielmo Frattari (Sapienza Universita e INFN, Roma I (IT))
Comparing to CMS: the obvious difference is the fit. ATLAS uses V+jets with one freely floating NP, CMS has one NP per bin; since the fit has a lot of freedom, the modelling must have been good already before any fit?
As you know, the modelling is not strikingly good, but the NPs are used to adjust the shape; the theory systematic uncertainties also do this.
These theory corrections are at level of 1% (p4)
This is post-fit. Slightly larger pre-fit.
Curiously, the full Run 2 plots indicate that the systematic uncertainties have halved. Does this not really propagate to the limits?
Likely due to finer binning
Is the V+jets treatment used everywhere it is possible?
At ATLAS, only in the mono-jet analysis; it depends on the variable one wants to explore.
Background generation -- Sherpa. Have you considered Sherpa+OpenLoops with EW corrections? A comparison with aMC@NLO?
We use Sherpa 2.2.1 as per the slides, then reweight to the theory predictions. OpenLoops is not used. We don't use a MadGraph alternative setup with the same calculation accuracy.
Only one bin is inconsistent with the data, but all the SUSY plots show a weaker observed limit?
P9: no, not quite -- several bins around 500-900 GeV have a trend which mirrors this.
HEPdata material does not have xsec upper limits, only contours -- why?
They are available in auxmat.
For reinterpretation we need the limits on the cross sections in HEPData; it is not convenient to read off the cross sections from the auxiliary material.
Where is the tau veto? It is dominant, especially at high pT.
Since we apply the tau veto in the same way in every region, the impact of the uncertainty cancels out in the fit procedure, even though it is among the largest uncertainties.
They would not be the same, since in W → τν there are real taus. Z → ττ is not used / would not give a strong constraint.
The tau uncertainty impact is greatest at low recoil pT; we use quite a high tau pT threshold (20 GeV) and don't see big impacts. We are talking about hadronic taus, in all regions.
Speaker: Elena Pompa Pacchi (Sapienza Universita e INFN, Roma I (IT))
P8: this is shown for different BRs. Does it scale linearly with the coupling?
Our limits are 50%, hence this is what we show.
Have you thought of new LLP models outside of the Higgs domain?
Do you have something in mind?
Mono-jet is not necessarily sensitive to disappearing tracks. Mediators don't need to be neutral; one could consider non-neutral mediators.
Brought up in the past. Did not have time to suggest alternative interpretations.
Speaker: Wolfgang Waltenberger (Austrian Academy of Sciences (AT))
Two analyses with very similar NPs.
Each analysis is to publish likelihoods -- it would be great to have common conventions for correlating NPs, naming schemes, etc.
For the NN, what is the advantage? Is it because one can have fewer parameters?
Maybe it is easier to publish? It is more difficult to interpret, less explicit.
Comment: The hands-on workshop for publication of statistical models: https://indico.cern.ch/event/1088121/
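On the naming-convention point above, a minimal sketch (my addition, using pyhf with purely hypothetical channel names, yields and uncertainties) of why shared NP names matter: pyhf treats modifiers with the same name as a single, fully correlated nuisance parameter, so two published likelihoods can only be combined consistently if the analyses agree on those names.

import pyhf

# Two-channel workspace spec; "monojet"/"monoV" and all numbers are made up.
spec = {
    "channels": [
        {
            "name": name,
            "samples": [
                {
                    "name": "signal",
                    "data": sig,
                    "modifiers": [{"name": "mu", "type": "normfactor", "data": None}],
                },
                {
                    "name": "background",
                    "data": bkg,
                    "modifiers": [
                        # Reusing the *same* modifier name in both channels is what
                        # makes pyhf treat it as one correlated nuisance parameter.
                        {"name": "jes_scale", "type": "normsys",
                         "data": {"hi": 1.0 + unc, "lo": 1.0 - unc}},
                    ],
                },
            ],
        }
        for name, sig, bkg, unc in [
            ("monojet", [5.0], [50.0], 0.05),
            ("monoV", [3.0], [20.0], 0.08),
        ]
    ]
}

model = pyhf.Model(spec)       # default POI name is "mu"
print(model.config.par_order)  # one shared "jes_scale" parameter, not two

With independent naming schemes the two channels would instead carry two uncorrelated parameters, which is exactly the situation that common conventions would avoid.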
Speaker: Felix Kahlhoefer (RWTH Aachen)
P8: either everything is heavy or everything is light -- these are things that can be tested. Is this true for the EFT specifically?
Imagine theories in the top-left corner: one massive mediator and 2-3 light DOF. This is difficult to make work.
Maybe coannihilation could make this work. You have one common scale?
Yes, that's not included here. Lambda is common, but the Wilson coefficients are different.
P6: difficult to generate events in the tails. Why not reweighting?
A priori you don't know this when coding it up.
Clipping LHs?
Interference is always challenging -- we rely on background estimates from the experimentalists; signal-background interference is very difficult. (Uli: no interference in these models.)
Is there a deeper reason?
Imagine a 2.5 sigma excess. Everywhere the background hypothesis is neglected; this is not how we do LHC data analysis.
How do you determine the maximum likelihood?
This is the outcome of the scan.
Does that guarantee it's the maximum possible?
Safeguards: there is also the background-only LH; one can calculate the delta-LH with respect to both (also a reference LH). We don't give p-values.
Is this applicable to PBC freeze-in scenarios?
DM is light, everything else is heavy. The relic density is set by the standard mechanisms via the non-zero DM operators (relic density via freeze-out via the standard operators). Is it difficult to generate large hierarchies in the NP scales/masses?
P4: what’s missing from the ATLAS side to make this possible?
For ATLAS there will be a step chain. For CMS, waiting for a similar format of data to become available?
What do you need?
Correlation matrix, but nothing beyond at this stage.
Speaker: Phil Harris (MIT)
Dark photon -- there are a dozen spin-1 resonances between 1 and 10 GeV; the plot from PBC clearly shows them. It is cumbersome, and not clear if this is worthwhile?
People have gone up to 10 GeV from the low end; not sure about the details. For us, below 10 GeV is tricky.
How about RD?
This is done already from a theory perspective
Comment: the ultimate goal is to compare with the RD experiments in the range they are sensitive to. It is up to the LHC DM WG down to which threshold to compare with DD. If you do the limits at 1 GeV, you need to fix to 10 GeV.
Comment: a pseudovector mediator with scalar DM (or pseudo-Dirac fermion DM) is preferred, because Dirac DM with a vector mediator is excluded by the CMB.
P17
Comment: visible dilepton bounds included.
Could not bring up CMB constraint
Plot for dark photon -- inelastic DM? What do you assume for the mass splitting? Fixed?
Yes; the mass splitting is very small (1e-6?).
Why the rescaling with (mchi/mA')^4? It enters into the relic density.
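For context (my addition, not from the discussion), in the standard accelerator convention for dark-photon models the rescaled variable and its link to the relic density are roughly
\[
y \equiv \epsilon^{2}\,\alpha_D \left(\frac{m_\chi}{m_{A'}}\right)^{4},
\qquad
\langle \sigma v \rangle_{\chi\bar\chi \to A'^{*} \to f\bar f}
\;\propto\;
\frac{\epsilon^{2}\,\alpha_D\, m_\chi^{2}}{m_{A'}^{4}}
= \frac{y}{m_\chi^{2}}
\quad (m_\chi < m_{A'}),
\]
so quoting limits in terms of y makes the thermal-relic target largely independent of the individual choices of alpha_D and m_chi/m_A'.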
Why below 10 GeV?
In this framework Belle is better; it needs to be included.
It would.
There are theory limitations; one can use existing codes to do this.
Comment: How can LHC push further down for low mediator masses? Can we surpass the nu floor?
Comment: Everything below 10 GeV easy, hadron/mu BR well known and can be done in a data-driven way. DarkCast can do this.
CMS mono-jet interpretation in t-channel models
Speaker: Sunil Dogra (Kyungpook National University, Daegu)
Pp > xd xd j, yy, why associated production of DM with one visible state?
At LO this is implied.
Speaker: Thomas Dieter Flacke (Korea Advanced Institute of Science and Technology (KR))
Two questions related to this dim-5 operator: is it not dim-6?
One would normally need a dim-6 operator to include the Higgs; we write it as dim-5 because we are in the broken phase, with the parameterisation C × Higgs/Λ².
We scanned C/Λ in [1e-5, 1e-3]. The scaling for dim-6 is different -- we aim at a couple of TeV. In the scan we don't go too low, because the BSM physics just decouples.
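One plausible reading of the answer (my sketch, assuming the operator couples a DM bilinear to H†H): the gauge-invariant dimension-6 operator reduces to an effective dimension-5 term in the broken phase,
\[
\frac{C}{\Lambda^{2}}\,(H^{\dagger}H)\,\bar\chi\chi
\;\xrightarrow{\;H \to (v+h)/\sqrt{2}\;}\;
\frac{C\,v^{2}}{2\Lambda^{2}}\,\bar\chi\chi
+ \frac{C\,v}{\Lambda^{2}}\,h\,\bar\chi\chi
+ \frac{C}{2\Lambda^{2}}\,h^{2}\,\bar\chi\chi ,
\]
so the h term behaves like a dimension-5 interaction with coefficient C v/Λ², while the operator remains dimension-6 in the unbroken phase; this is also why the scaling of the limits differs from a genuine dimension-5 suppression.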
Speaker: Gilly Elor
Is this asymmetric DM?
Not necessarily, but it does need to have some asymmetric component.
Usual DM signatures like WIMPs? Weaker couplings?
The couplings enter through the mediator. Embedding in a SUSY model (referenced earlier in the talk) would also give sterile neutrino (sneutrino?) searches. The signals are very different.
Speaker: Saul Lopez Solino (Universidade de Santiago de Compostela (ES))
Why cannot this analysis be done at Belle?
In fact it can, but only for some operators originating from B0 or B+.
What are the systematics here?
Between 1 and 10%; it is fair to say that the systematics will be the limiting factor.
Comment: Belle is currently doing a search for exactly the example decay mode shown in the talk, B → Λ + MET.
Speakers: Benedikt Maier (CERN), David Yu (BNL)
Many models give MET + visible, with contributions from both s- and t-channel diagrams. Will you only take the t-channel and cut out the rest?
The focus is on the simplified t-channel models (Sections 1-5); additional contributions to Section 6, "Going Beyond", would be very welcome.
Comment: the Overleaf link is not public.
Comment: HL-LHC projections could be interesting for Snowmass