The LHC collaborations are pursuing searches for new physics in a vast variety of channels. While the collaborations typically provide interpretations of their results themselves, for instance in terms of simplified models, fully understanding the implications of these searches requires interpreting the experimental results in the context of all kinds of theoretical models. This is a very active field, with close theory-experiment interaction and with several public tools being developed.
A Forum on the interpretation of the LHC results for BSM studies was thus initiated to discuss topics related to the BSM (re)interpretation of LHC data, including the development of the necessary public recasting tools and related infrastructure, and to provide a platform for continued interaction between theorists and the experiments.
This is the fifth workshop of this Forum, held at the Blackett Laboratory, Imperial College London, in Lecture Theatre 2 (LT2) on Level 1 of the Blackett Lab.
Previous meetings took place
A Vidyo connection will be available.
Steering Group
Jon Butterworth
Andy Buckley
Kyle Cranmer
Daniel Dercks
Matthias Danninger
Matthew Dolan
Benjamin Fuks
Marie-Helene Genest
Ahmed Ismail
Sabine Kraml
Frank Krauss
Michael Krämer
Nazila Mahmoudi
Michelangelo Mangano
Stefano Moretti
Pat Scott
Sezen Sekmen
Wolfgang Waltenberger
Nick Wardle
Andreas Weiler
The workshop is supported by the Institute of Physics and the Institute for Particle Physics Phenomenology.
The CMS collaboration aims to provide high-quality analysis results to the particle physics community. The rich search program for beyond-the-standard-model phenomena in various topologies is an excellent basis for reinterpretation by external scientists. In this talk, I will discuss the steps taken by the collaboration in order to facilitate such reinterpretations.
In order to maximise the sensitivity to parameters of interest (e.g. Wilson coefficients), theoretical uncertainties can be constrained using the data itself. However, this builds theoretical assumptions into the measurements. Being able to reinterpret such measurements is crucial to prevent them from becoming obsolete with theoretical progress. In this talk, I will discuss the challenges and benefits of providing the additional information that allows such a reinterpretation.
With the second data-taking run of the LHC concluded, the analyses of the full Run-2 dataset currently in preparation are expected to be the leading measurements of final states sensitive to BSM phenomena for an extended period of time. Infrastructure to aid both external and internal reinterpretation is therefore gaining in importance. We will provide an update on the efforts of the ATLAS experiment to make code as well as data publicly available, including e.g. multivariate discriminants. In addition, we will review internal reinterpretation results which were partly derived through the reuse of the original analysis implementations in the context of the CERN Analysis Preservation Portal and RECAST.
The CMS collaboration has made a large fraction of their collision data recorded in 2010-2012 and corresponding simulation data sets publicly available. These data have been used for physics analysis, teaching, and outreach by non-CMS members. In this presentation, we will review the current status and discuss our plans for making CMS data and analyses more accessible.
This talk will cover a variety of developments on the performance of the ATLAS experiment. The definition of the missing-transverse-momentum significance will be discussed, along with the benefits to searches of using this definition and how this quantity can be estimated in the re-interpretation of these searches. Identifying boosted hadronically decaying vector bosons, top quarks and Higgs bosons is important in many searches. The methods by which this is achieved, including the use of machine learning, will be described, and how the efficiency of these taggers is evaluated in data will be detailed. Additionally, other combined-performance developments will be mentioned, with particular focus on those that could cause complications in re-interpreting ATLAS results - for example the interplay between pile-up suppression techniques and models with long-lived particles.
The large and growing library of measurements from the Large Hadron Collider has significant power to constrain extensions of the Standard Model. We consider such constraints on a well-motivated model involving a gauged and spontaneously-broken B − L symmetry, within the Contur framework. The model contains an extra Higgs boson, a gauge boson, and right-handed neutrinos with Majorana masses. This new particle content implies a varied phenomenology highly dependent on the parameters of the model, very well-suited to a general study of this kind. We find that existing LHC measurements significantly constrain the model in interesting regions of parameter space. Other regions remain open, some of which are within reach of future LHC data.
Besides the particle-level information from LHC measurements, displaced-vertex searches for long-lived particles, such as the Majorana neutrinos in the B-L model, can complement this research. Such processes mediated by the SM Higgs boson have been studied in particular, showing an upper bound on the active-sterile neutrino mixing of $V_{lN} < 10^{-7}$.
HEPData is a unique open-access repository for high-level data from experimental particle physics papers, typically the numbers behind the plots or tables that appear in publications. It is the primary repository for publication-related data from the LHC experiments. The HEPData software was completely rewritten in 2015-16, in collaboration with CERN, and the data was migrated to a new platform hosted at https://hepdata.net. In this presentation, I will give an overview of the HEPData project and describe some developments following previous talks in this forum in June 2016 and December 2016.
Is a new approach needed to fully exploit the data provided by the LHC?
As the LHC begins Long Shutdown 2, despite hundreds of dedicated searches, there has been no sign of new physics. In these searches, much effort is put into understanding the control regions, but this information is almost never made public and is therefore lost to posterity. In this talk, I will argue for a new approach where, in addition to the traditional limit-setting (assuming no signal was seen!), future LHC search papers could publish simple differential particle-level measurements in the control regions, and possibly even the search regions. This information could be used to improve generator descriptions of the background, thus helping to improve the sensitivity of future searches. Furthermore, the unfolded data can be used as an input to tools like CONTUR to set constraints on BSM models. In the talk, I will present the results of a prototype ATLAS paper of this type: a search for 1st/2nd generation leptoquarks in which differential particle-level measurements of SM processes were made in six regions. The talk will illustrate how such measurements could be used for generator tuning and re-interpretation in CONTUR.
Beyond Standard Model (BSM) scenarios often involve particles with significantly longer lifetimes than the particles of the Standard Model. Reliable Monte Carlo tools for the simulation of processes involving such long-lived particles (LLPs) are essential for BSM searches, and a variety of frameworks already exist (such as MadGraph accompanied by MadSpin, to name one).
However, a fully accurate simulation of LLPs, with production and decay treated as a single process (with the LLP as an intermediate resonance), can be problematic due to various numerical instabilities caused by the narrow peaks in the amplitudes, and one often has to rely on some approximation to perform the simulations efficiently.
In this talk, I will discuss a specific solution to the numerical inefficiencies in LLP simulations, using the custom event generator for long-lived neutrino production in the Left-Right Symmetric Model (KSEG) as an example. A more general approach to such simulations (inspired by some already existing specialised event generators), with a potential interface to well-established Monte Carlo tools, will also be discussed.
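For reference, the numerical instability mentioned above stems from the squared Breit-Wigner propagator of the LLP; the standard workaround (not necessarily the approach taken in KSEG) is the narrow-width approximation, which factorises production and decay,
$$\frac{1}{(q^2-m_N^2)^2 + m_N^2\Gamma_N^2} \;\xrightarrow{\;\Gamma_N \ll m_N\;}\; \frac{\pi}{m_N\Gamma_N}\,\delta(q^2-m_N^2)\,, \qquad \sigma(pp \to N \to f) \;\approx\; \sigma(pp \to N)\times \mathrm{BR}(N\to f)\,,$$
at the cost of losing off-shell and interference effects.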
We will present a software package called HEPLike, which can be used to share the likelihood functions of experimental measurements within the HEP community. It uses YAML files to encode the experimental measurements and constructs the proper likelihood functions. It can be interfaced with global fitting programs. The YAML files can be easily exchanged inside the community, allowing for more direct comparison of results.
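As a minimal illustration of the idea (a hypothetical file layout and function names, not the actual HEPLike API), a YAML-encoded Gaussian measurement could be turned into a likelihood along these lines:

# Minimal sketch of a YAML-encoded measurement turned into a likelihood.
# The file layout and names below are hypothetical, not the HEPLike API,
# and the numbers are purely illustrative.
import yaml
from scipy.stats import norm

measurement_yaml = """
observable: BR(Bs -> mu mu)
central_value: 3.0e-9
stat_error: 0.6e-9
syst_error: 0.25e-9
"""

def build_gaussian_loglike(doc: str):
    """Return log L(prediction) for a single Gaussian measurement."""
    meas = yaml.safe_load(doc)
    sigma = (meas["stat_error"] ** 2 + meas["syst_error"] ** 2) ** 0.5
    mu = meas["central_value"]
    return lambda prediction: norm.logpdf(prediction, loc=mu, scale=sigma)

loglike = build_gaussian_loglike(measurement_yaml)
print(loglike(3.57e-9))  # log-likelihood of an example theory prediction

A real implementation would also need asymmetric and correlated uncertainties, or full likelihood grids, which this simple Gaussian sketch omits.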
I will review detector-corrected measurements that have recently been made at the LHC with the aim of allowing reinterpretation for new physics searches. I will discuss new proof-of-concept measurements designed with searches for and measurement of new phenomena in mind, new results characterising anomalous Higgs boson interactions, and effective field theory interpretations of electroweak interactions. I will discuss aspects of what has been learned in existing approaches and future challenges.
I will present the latest work on electroweak supersymmetry from the GAMBIT collaboration. With GAMBIT we have performed a large-scale fit of the electroweakino (neutralino and chargino) sector of the MSSM. The fit incorporates simulations of several recent ATLAS and CMS SUSY searches, in addition to constraints from SUSY searches at LEP and the invisible decay widths of the Z and Higgs bosons. When considering the general four-dimensional electroweakino parameter space, we find that current results do not robustly exclude any range of electroweakino masses. Further, due to multiple small excesses in the data, there is a weak preference for a scenario where all electroweakinos are lighter than ~500 GeV. I will also present preliminary results from ongoing follow-up studies, looking at electroweakino models beyond the MSSM.
Searches for dark photons provide serendipitous discovery potential for other types of vector particles. We develop a framework, DarkCast, for recasting dark photon searches to obtain constraints on more general theories. The framework includes a data-driven method for determining hadronic decay rates.
DarkCast can be applied to any massive gauge boson with vector couplings to the Standard Model fermions.
We demonstrate our approach by deriving constraints on a vector that couples to the $B-L$ current, on a leptophobic $B$ boson that couples directly to baryon number and to leptons via $B$-$\gamma$ kinetic mixing, and on a vector that mediates a protophobic force.
DarkCast is a public code, available at https://gitlab.com/philten/darkcast.
Particle-level measurements, especially of differential cross-sections, made in fiducial regions of phase-space have a high degree of model-independence and can therefore be used to give information about a wide variety of Beyond the Standard Model (BSM) physics implemented in Monte Carlo generators, using a broad range of final states. The Contur package is used to make such comparisons. We summarise a snapshot of current results for a number of BSM scenarios including several Dark Matter simplified models, and two generic light scalar models.
SModelS is a tool to make systematic use of simplified-model results. In this talk I shall briefly report on recent and ongoing developments of the tool.
The nature of Dark Matter (DM) is one of the most pressing issues in contemporary physics. For over 80 years, astrophysical and cosmological observations have indicated its existence indirectly. If the DM is composed of weakly-interacting massive particles (WIMPs) that were in thermal equilibrium with Standard Model (SM) particles in the early Universe, freeze-out calculations suggest that the WIMP is likely to weigh O(TeV), in which case it could be produced at accelerators, notably the Large Hadron Collider (LHC) at CERN. In this talk, we present the results from recent collider searches and a global likelihood analysis within the MasterCode framework of Dark Matter Simplified Models. We combine constraints from the LHC, cosmological DM density indicated by Planck, and limits on spin-independent and -dependent scattering from direct DM search experiments.
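For orientation (a textbook freeze-out estimate, not a result of the MasterCode fit itself), the O(TeV) expectation follows from the relic abundance scaling inversely with the thermally averaged annihilation cross section,
$$\Omega_\chi h^2 \;\approx\; \frac{3\times10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle\sigma v\rangle}\,, \qquad \langle\sigma v\rangle \;\sim\; \frac{g^4}{16\pi^2 m_\chi^2}\,,$$
so that reproducing the observed $\Omega_\chi h^2 \simeq 0.12$ with weak-scale couplings points to WIMP masses of order a TeV.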
Calculating p-values to quantify the statistical significance of any excesses in an individual LHC analysis is usually a routine task for LHC experimentalists; however, theories of physics Beyond the Standard Model typically make predictions that are relevant to many LHC analyses at once. If excesses appear in several analyses, it is thus of great importance to accurately assess their joint statistical significance. In this talk I will discuss some methods for doing this, as well as some important limitations that exist at present; in particular, performing accurate look-elsewhere corrections for BSM signals is often prohibitively difficult.
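As a minimal illustration of the combination step (assuming fully independent analyses and, crucially, ignoring the look-elsewhere corrections highlighted above), local p-values from several analyses can be combined with Fisher's method:

# Sketch: combine local p-values from statistically independent analyses.
# This ignores correlations and the look-elsewhere effect, which are the
# hard parts discussed in the talk.
import numpy as np
from scipy.stats import chi2

def fisher_combined_p(p_values):
    """Fisher's method: -2*sum(ln p_i) follows a chi2 with 2k dof under H0."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.sum(np.log(p))
    return chi2.sf(statistic, df=2 * len(p))

# Example: mild excesses seen in three independent search regions
print(fisher_combined_p([0.05, 0.10, 0.20]))

For a specific BSM signal hypothesis, a joint likelihood over the analyses is preferable to combining p-values, but it requires the per-analysis likelihoods (or enough published material to reconstruct them) to be available.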
Various MSSM scenarios have been well investigated at the LHC, with current bounds having implications for naturalness that make the consideration of non-minimal scenarios timely. The purpose of this talk is to present limits on the gluino (fermionic partner of the gluon) and squarks (scalar partners of the quarks) in the minimal Dirac gaugino extension of the MSSM, derived through a recasting of the ATLAS-SUSY-16-07 analysis and presented in arXiv:1812.09293. New states in the extended particle content - the Dirac-adjoint scalars - couple to the Higgs and lead to more varied decay signatures and a more complex electroweakino spectrum than in the MSSM. We look at four representative scenarios for these new couplings, and compare the limits in the gluino vs. squark mass plane to those obtained in equivalent MSSM scenarios.
In this talk, I will discuss the use of vacuum stability as a phenomenological constraint on BSM models, highlighting its complementarity with constraints coming from e.g. collider experiments. I will describe how to treat constraints from decays to e.g. color- and charge-breaking minima at both zero and non-zero temperature in models with extended scalar sectors. I will then introduce the new version of the software Vevacious, a complete rewrite in C++ that is modular, does not depend on external codes for the calculation of the bounce action, and whose model files can be generated automatically starting at the Lagrangian level. I will finish with an outlook on future projects using the code, including its upcoming inclusion in world-leading global fits.
We shall review simplified Z' models as explanations for discrepancies between measurements of certain neutral-current B meson decays and Standard Model predictions. We provide estimates of LHC and future collider sensitivity. Then a more complete model is introduced: the Third Family Hypercharge Model, which also explains some coarse features of the fermion mass spectrum.
In Left-Right symmetric theories, neutrino mass originates from the spontaneous breaking of an extended gauge symmetry, which restores parity at high scales. Searches for the heavy Majorana neutrinos, from single production via $W_R$ and pair production through the Higgs portal, feature striking lepton-number-violating signatures. However, the final states vary drastically, from same-sign leptons and jets to displaced neutrino jets and missing energy. We will discuss the signal features of heavy neutrinos, the tools that were developed to simulate neutrino-jet displacement, and how these overlap with the recasting of prompt searches.
Effective Field Theory (EFT) is a powerful tool to parametrise high-scale New Physics in a largely model-independent way. We employ this method to study the top-quark sector of the dimension-six EFT extension of the Standard Model. In particular, we perform a global fit of top-quark-related Wilson coefficients using experimental data. While fit results for Tevatron and LHC Run I data have been presented previously, we additionally include Run II data. Introducing an efficient method of sampling the Wilson coefficient parameter space facilitates the extension of the fit to fiducial particle-level measurements, which would otherwise be computationally very expensive.
The ATLAS and CMS collaborations have recently released significant new data on Higgs and diboson production in LHC Run 2. Measurements of Higgs properties have improved in many channels, while kinematic information for h→γγ and h→ZZ can now be more accurately incorporated in fits using the STXS method, and W+W− diboson production at high pT gives new sensitivity to deviations from the Standard Model. We have performed an updated global fit to precision electroweak data, W+W− measurements at LEP, and Higgs and diboson data from Runs 1 and 2 of the LHC in the framework of the Standard Model Effective Field Theory (SMEFT), allowing all coefficients to vary across the combined dataset, and present the results in both the Warsaw and SILH operator bases. We exhibit the improvement in the constraints on operator coefficients provided by the LHC Run 2 data, and discuss the correlations between them. We also explore the constraints our fit results impose on several models of physics beyond the Standard Model, including models that contribute to the operator coefficients at the tree level and stops in the MSSM that contribute via loops.
The SM EFT is a field-theoretical framework for describing high-scale physics. The unique sensitivity of Higgs measurements to new physics can be demonstrated by constraining EFT parameters with these measurements. I will describe procedures for translating Higgs measurements into EFT constraints.
Rare semileptonic $b \to s \ell^+ \ell^-$ transitions provide one of the most promising frameworks to search for New Physics effects. Recent analyses of these decays have indicated an anomalous pattern in measurements of angular distributions of the decay $B^0 \to K^{*0} \mu^+ \mu^-$ and in lepton-flavour-universality observables. A direct determination of the Wilson coefficients from data is shown to be possible via an amplitude analysis of $B^0 \to K^{*0} \mu^+ \mu^-$ decays. Prospects for disentangling New Physics effects from non-local hadronic contributions are investigated, together with the determination of the difference of the Wilson coefficients $C_9$ and $C_{10}$ between electrons and muons in a simultaneous amplitude analysis of $B^0 \to K^{*0} \mu^+ \mu^-$ and $B^0 \to K^{*0} e^+ e^-$ decays.