Thank you for making this a successful meeting!
Recordings of presentations are now available for most talks. The timetable lists all talks, and each contribution (where the speaker gave permission) should have a link labelled "Recording".
There will be no proceedings for DPF21.
Registration and abstract submission are now closed.
The APS Division of Particles & Fields (DPF) Meeting brings the members of the Division together to review results and discuss future plans and directions for our field, and is an opportunity for attendees, especially young researchers, to present their findings. Each day of the meeting opened with plenary sessions, followed by selected community sessions and then parallel sessions to complete the day.
Topics covered included: LHC Run 2 Results; LHC Run 3 and HL-LHC Projections; Accelerators & Detectors; Computing, Machine Learning, and AI; Quantum Computing and Sensing; Electroweak & Top Quark Physics; Higgs Physics; QCD & Heavy Ion Physics; Rare Processes and Precision Measurements; Neutrino Physics; Physics Beyond the Standard Model; Particle Astrophysics; Dark Matter; Cosmology & Dark Energy; Gravity & Gravitational Waves; Lattice Gauge Theory; Field & String Theory; Outreach & Education; Diversity, Equity, & Inclusion.
DPF2021 was held as a virtual-only event via Zoom. It was hosted by the Florida State University High Energy Physics group, with the scientific program determined by the DPF Program Committee.
There was no registration fee, and APS membership was not required to register or to submit an abstract.
Support for this meeting was provided by the FSU Office of Research, the FSU College of Arts and Sciences, the FSU Physics Department, and the FSU High Energy Physics group.
Follow the meeting on Twitter @apsdpf2021 or #apsdpf2021.
Experimental hints of lepton flavor non-universality in the decays of b-hadrons are an exciting sign of possible new physics beyond the Standard Model. Results from multiple experiments indicate that electrons, muons, and tau leptons may differ by more than just their masses. I will review the experimental situation of these “b-anomalies”, including recent developments and prospects for the near future. Further results from LHCb, Belle II, and other experiments in the coming years should be able to confirm or rule out the presence of new physics in these decays.
This talk will review the role of the CKM matrix in governing meson-antimeson oscillations and CP violation in the Standard Model. Recent measurements of $B_s$ oscillations and decays by LHCb, CMS, and ATLAS will be discussed in this context, as will measurements of CP violation in $B_d$ and $B_u$ decays. The direct measurements of the CKM angle $\gamma$ (from decays produced by tree-level amplitudes) are combined and the resulting value compared to that determined indirectly.
Flavor physics is addressing two complementary questions. First, what is the origin of the hierarchical flavor structure of the Standard Model quarks and leptons? Second, are there sources of flavor and CP violation beyond the Standard Model? I will discuss recent theoretical developments in this area, focusing mainly on the so-called "B-anomalies" -- persistent hints for the violation of lepton flavor universality in decays of B mesons. I will review the status of the anomalies, discuss possible new physics explanations, and outline the prospects of resolving the anomalies with expected experimental data.
HEP is funded essentially entirely with public funds. Since there are many organizations that wish to receive federal funds, it is imperative that the HEP community raise its visibility among the general public. However, outreach to the public has long been neglected by the HEP community. In this talk, I will discuss the importance of outreach, describe some of the methods that work, and mention some programs that the Snowmass process is developing to make it easier for individuals to engage in communicating with the public.
Many have observed that, for organizations, there is evidence to support the belief that their cultures are their destinies. During the summer of 2020, the DELTA-PHY initiative was launched as an effort for the APS to deliberate upon, and if needed move to transform, its culture. It does this by asking three key questions: (a) What are the values of the APS? (b) Aside from producing world-class physics, what are the inputs, outputs, practices, and traditions of the APS? (c) Do the answers to these questions align with each other, and are they in alignment with the APS 2019 strategic plan? DELTA-PHY activities are envisioned to be timely and to cut across the society's 'stove pipe' structures.
A search is presented for chargino pair-production and chargino-neutralino production, where the almost mass-degenerate chargino and neutralino each decay via $R$-Parity-violating couplings to a boson ($W/Z/H$) and a charged lepton or neutrino. This analysis searches for a trilepton invariant mass resonance in data corresponding to an integrated luminosity of 139 fb$^{-1}$ recorded in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector at the Large Hadron Collider at CERN.
A search for production of the supersymmetric partners of the top quark, top squarks, is presented. The search is based on proton-proton collision events containing multiple jets, no leptons, and large transverse momentum imbalance. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 137 fb$^{-1}$. The targeted signal production scenarios are direct and gluino-mediated top squark production, including scenarios in which the top squark and neutralino masses are nearly degenerate. The search utilizes novel algorithms based on deep neural networks that identify hadronically decaying top quarks and W bosons, which are expected in many of the targeted signal models. No statistically significant excess of events is observed relative to the expectation from the standard model, and limits on the top squark production cross section are obtained in the context of simplified supersymmetric models for various production and decay modes.
With several recent anomalies observed that are in tension with the Standard Model, and with no clear roadmap to the source of new physics, this is an exciting time to search for new particles at the LHC. Supersymmetry (SUSY) is an elegant solution to many of the Standard Model's mysteries, and SUSY models with electroweakly produced sparticles are particularly interesting as possible explanations of the g-2 anomaly, the observed dark-matter density, and more. ATLAS has a rich program of complementary electroweak SUSY searches, and the latest Run 2 results using 139 fb$^{-1}$ of 13 TeV proton-proton collision data are discussed, shedding light on where new physics may be found, such as in the three-lepton final state.
The Minimal Supersymmetric Standard Model (MSSM) is one of the most well-motivated and well-studied scenarios for going beyond the Standard Model (SM). Apart from solving the hierarchy problem, one of its primary motivations is the presence of a suitable dark matter (DM) candidate, namely the lightest neutralino, in the particle spectrum of SUSY. Measurements of the DM relic density of the universe by the WMAP/PLANCK experiments put the model under stringent scrutiny. In addition, strong constraints on the masses of strongly interacting sparticles have been set at the Large Hadron Collider (LHC) by analysing Run II data for specific simplified models. However, many assumptions made by the experimental collaborations cannot be realized in actual theoretically motivated models. In this study, we revisit the bound on the gluino mass placed by the ATLAS collaboration. We show that the exclusion region shrinks in the $M_{\widetilde{g}}-M_{\widetilde{\chi}^0_1}$ plane in the pMSSM scenario corresponding to different hierarchies of the left and right squark mass parameters. Importantly, for higgsino-like lighter electroweakinos, the bound on the gluino mass from the 1l + jets + MET search practically does not exist. We have also performed a detailed analysis of neutralino dark matter and find that over most of the LSP mass range the required relic density is achieved, while the direct as well as the indirect detection constraints are satisfied.
A search for supersymmetry involving the pair production of gluinos decaying via top squarks into the lightest neutralino $\tilde{\chi}^{0}_{1}$ is reported. It uses LHC $pp$ collision data at $\sqrt{s} = 13$ TeV with an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector in 2015-2018. The search is performed in events containing large missing transverse momentum and several energetic jets, at least three of which must be identified as originating from b-quarks. The analysis considers two final states: one requires at least one charged lepton (electron or muon), and the other requires a lepton veto. Expected exclusion limits on the gluino and neutralino masses are evaluated using a simplified signal model: gluino masses of less than $2275$ GeV are excluded at the $95\%$ confidence level for neutralino masses of $800$ GeV and below.
A Left-Right Symmetric Model which utilizes vector-like fermions (VLFs) to generate fermion masses via a universal see-saw mechanism is studied. In this talk, I will present the latest results of our analysis of the flavor observables constraining the model. The Cabibbo anomaly can be easily resolved in this model, thereby predicting the mass of the vector-like quarks. Further, I will discuss the possibility of explaining the neutral-current B-anomalies using this model.
The discovery of a Higgs boson with mass near 125 GeV in 2012 marked one of the most important milestones in particle physics. The low mass of this Higgs boson, together with its diverging loop corrections, adds motivation to look for new physics Beyond the Standard Model (BSM). Several BSM theories introduce new heavy quark partners, called vector-like quarks (VLQ), with masses at the TeV scale. In particular, the vector-like top quark (T) can cancel the largest correction, due to the top quark loop, which is one of the main contributions to the divergence, and thereby stabilize the scalar Higgs boson mass. This analysis searches for pair production of vector-like T or B quarks with charge 2e/3 and e/3 in proton-proton collisions at 13 TeV at the LHC. Theories predict three decay modes for T and B, respectively: bW, tZ, tH and tW, bZ, bH. The branching ratios vary among different theoretical models. We focus on events where the bosons decay leptonically, resulting in a final state with a same-sign (SS) di-lepton pair or a final state with multiple (3 or more) leptons. We analyze data collected by the CMS detector at the LHC in 2017 and 2018 with integrated luminosities of 41.5 and 59.7 fb$^{-1}$. Besides Standard Model (SM) processes with the same final states, lepton misidentification contributes a significant part of the background to both the SS dilepton and multilepton channels and is estimated with a data-driven method. In addition, charge misidentification is another source of background for the SS dilepton channel, which is also estimated with a data-driven method. Comparing the estimated background with data, and considering uncertainties, we determine an upper limit on the TT or BB production cross section. We calculate limits at different mass points of T and B and different branching fraction combinations.
Vector-like quarks (VLQ) are predicted in many extensions to the Standard Model (SM), especially those aimed at solving the hierarchy problem. Their vector-like nature allows them to extend the SM while remaining compatible with electroweak sector measurements. In many models, VLQs decay to a SM boson and a third-generation quark. Pair production of VLQs provides a model-independent search method, since the production proceeds through quantum chromodynamics. This talk presents a search for pair production of vector-like top quarks that each decay into a SM W boson and a bottom quark, with one W boson decaying leptonically and the other decaying hadronically. The analysis takes advantage of boosted boson identification and data-driven correction of the dominant ttbar background prediction to improve sensitivity. Further, this analysis extends the previous analysis' sensitivity by including the full 140 fb$^{-1}$ dataset of $pp$ collisions at $\sqrt{s}=$13 TeV collected with the ATLAS detector.
We present the status of our all-hadronic analysis in search of pair-produced Vector-Like Quarks (VLQs) using the Boosted Event Shape Tagger (BEST) with the CMS detector, using 137 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collisions at the LHC. VLQs are motivated by models which predict compositeness of the scalar Higgs boson and which avoid increasing constraints from Higgs measurements. In the all-hadronic channel, this analysis is sensitive to all possible VLQ decay modes, T(B)->t(b)H/t(b)Z/b(t)W, capturing the highest branching fraction of each process. The high mass of the VLQs produces highly boosted objects in the final state, which can be reconstructed as anti-kt R=0.8 jets and identified as QCD/b/W/Z/H/t using the BESTagger. The tagger boosts jet constituents into various rest frames and uses neural networks to find correlations between event shape variables, such as Fox-Wolfram moments and sphericity, to determine the identification category. We define signal regions by the classification of the four highest-$p_T$ jets. We estimate our QCD-dominated background with a data-driven 3-jet control region, then fit its normalization simultaneously with simulations of well-modeled sub-dominant backgrounds such as ttbar and W/Z+jets. The $H_T$ (scalar sum of $p_T$) of the event is scanned for an excess of signal in 120 of the 126 possible combinations, and the six least signal-rich combinations are used as validation regions for the QCD estimation. The analysis is in progress and is expected to be completed soon.
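To make the event-shape inputs concrete, here is a minimal toy illustration (invented four-vectors and a fixed boost; not the CMS tagger code) of the kind of quantity BEST consumes: jet constituents are boosted into a candidate rest frame and Fox-Wolfram moments are evaluated over the boosted constituents.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def boost_z(p4, beta):
    """Boost four-vectors (E, px, py, pz) along z by velocity beta."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    E, pz = p4[:, 0], p4[:, 3]
    out = p4.copy()
    out[:, 0] = gamma * (E - beta * pz)
    out[:, 3] = gamma * (pz - beta * E)
    return out

def fox_wolfram(p4, lmax=4):
    """H_l = sum_ij |p_i||p_j| P_l(cos theta_ij) / (sum_i E_i)^2."""
    p = p4[:, 1:]
    mag = np.linalg.norm(p, axis=1)
    cos = np.clip((p @ p.T) / np.outer(mag, mag), -1.0, 1.0)
    evis2 = p4[:, 0].sum() ** 2
    H = []
    for l in range(lmax + 1):
        coeffs = np.zeros(l + 1)
        coeffs[l] = 1.0                       # select Legendre polynomial P_l
        H.append((np.outer(mag, mag) * legval(cos, coeffs)).sum() / evis2)
    return np.array(H)

rng = np.random.default_rng(3)
constituents = np.abs(rng.normal(size=(30, 4))) * [50, 10, 10, 40]  # toy jet
constituents[:, 0] = np.linalg.norm(constituents[:, 1:], axis=1)    # massless
boosted = boost_z(constituents, beta=0.6)     # into a hypothesized rest frame
print(fox_wolfram(boosted))
```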
In many models that address the naturalness problem, top-quark partners are postulated in order to cure the issue of quadratic corrections to the mass of the Higgs boson. In this work, we study alternative modes for the production of top- and bottom-quark partners ($T$ and $B$), $pp\rightarrow B$ and $pp\rightarrow T\bar{t}$, via a chromo-magnetic moment coupling. We adopt the simplest composite Higgs effective theory for the top-quark sector incorporating partial compositeness, and investigate the sensitivity of the 14 TeV LHC to these production modes.
The recently updated measurement of the muon anomalous magnetic moment strengthens the motivation for new particles beyond the Standard Model. We discuss two well-motivated 2HDM scenarios with vectorlike leptons, as well as the Standard Model extended with vectorlike lepton doublets and singlets, as possible explanations for the anomalous measurement. In these models we find that, with couplings of order 1, new leptons as heavy as 8 TeV can explain the anomaly, well beyond the expected reach of the LHC. We summarize the implications of future precision measurements of Higgs- and Z-boson couplings, which can provide indirect probes of these scenarios and of their viability to explain the anomalous magnetic moment of the muon.
In this talk, we will introduce a technique to train neural networks into being good event variables, which are useful to an analysis over a range of values for the unknown parameters of a model.
We will use our technique to learn event variables for several common event topologies studied in colliders. We will demonstrate that the networks trained using our technique can mimic powerful, previously known event variables like invariant mass, transverse mass, and MT2.
We will describe how the machine learned event variables can go beyond the hand-derived event variables in terms of sensitivity, while retaining several attractive properties of event variables, including the robustness they offer against unknown modeling errors.
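As a toy check of the "mimicking" claim, the sketch below trains a small network to reproduce the hand-derived transverse mass $m_T = \sqrt{2\,p_T\,E_T^{\text{miss}}\,(1-\cos\Delta\phi)}$. This is an assumed setup for illustration only: the authors' actual training objective optimizes sensitivity across a range of model parameters rather than regressing onto a known target.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n = 20000
pt = rng.uniform(20, 200, n)              # lepton transverse momentum [GeV]
met = rng.uniform(20, 200, n)             # missing transverse energy [GeV]
dphi = rng.uniform(0, np.pi, n)           # azimuthal angle between them
mt = np.sqrt(2 * pt * met * (1 - np.cos(dphi)))   # hand-derived event variable

x = torch.tensor(np.stack([pt, met, dphi], axis=1), dtype=torch.float32)
y = torch.tensor(mt, dtype=torch.float32).unsqueeze(1)

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(300):                  # simple full-batch regression fit
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.1f} GeV^2")
```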
To compare collider data with theoretical calculations, the data must first be corrected for various detector effects, namely noise processes, detector acceptance, detector distortions, and detector efficiency; this process is called “unfolding” in high energy physics (or “deconvolution” elsewhere). While most unfolding procedures are carried out over only one or two binned observables at a time, OmniFold is a simulation-based maximum likelihood procedure which employs deep learning to do unbinned, high-dimensional unfolding. We apply OmniFold to a measurement of all charged particle properties in $Z+$jets events using the full Run 2 $pp$ collision dataset recorded by the ATLAS detector, completing the first application of OmniFold to physical collider data.
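The core reweighting step of OmniFold can be illustrated compactly. The sketch below is a toy with Gaussian pseudo-data, not the ATLAS analysis code: following the published two-step procedure, a classifier trained to separate data from simulation at detector level yields likelihood-ratio weights, which a second classifier then pulls back to particle level; in the full method this is iterated.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
sim_gen = rng.normal(0.0, 1.0, (20000, 1))              # particle-level simulation
sim_det = sim_gen + rng.normal(0, 0.5, sim_gen.shape)   # toy detector smearing
data_det = rng.normal(0.2, 1.1, (20000, 1))             # observed detector-level data

# Step 1: classifier separating data from simulation at detector level;
# its score s gives likelihood-ratio weights w(x) ~ p_data(x) / p_sim(x)
X = np.vstack([data_det, sim_det])
y = np.concatenate([np.ones(len(data_det)), np.zeros(len(sim_det))])
s = GradientBoostingClassifier().fit(X, y).predict_proba(sim_det)[:, 1]
s = np.clip(s, 1e-6, 1 - 1e-6)
w_det = s / (1 - s)

# Step 2: pull the weights back to particle level by training a second
# classifier to separate weighted from unweighted simulation at gen level
Xg = np.vstack([sim_gen, sim_gen])
yg = np.concatenate([np.ones(len(sim_gen)), np.zeros(len(sim_gen))])
wg = np.concatenate([w_det, np.ones(len(sim_gen))])
sg = GradientBoostingClassifier().fit(Xg, yg, sample_weight=wg).predict_proba(sim_gen)[:, 1]
sg = np.clip(sg, 1e-6, 1 - 1e-6)
w_gen = sg / (1 - sg)     # unfolded particle-level weights after one iteration
```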
We examine the problem of unfolding in particle physics, or de-corrupting observed distributions to estimate underlying truth distributions, through the lens of Empirical Bayes and deep generative modeling. The resulting method, Neural Empirical Bayes (NEB), can unfold continuous multi-dimensional distributions, in contrast to traditional approaches that treat unfolding as a discrete linear inverse problem. We exclusively apply our method in the absence of a tractable likelihood function, as is typical in scientific domains relying on computer simulations. Moreover, combining NEB with suitable sampling methods allows posterior inference for individual samples, thus enabling the possibility of reconstruction with uncertainty estimation.
As the search for physics beyond the Standard Model widens, 'model-agnostic' searches, which do not assume any particular model of new physics, are increasing in importance. One promising model-agnostic search strategy is Classification Without Labels (CWoLa), in which a classifier is trained to distinguish events in a signal region from similar events in a sideband region, thereby learning about the presence of signal in the signal region. The CWoLa strategy was recently used in a full search for new physics in dijet events from Run 2 ATLAS data; in this search, only the masses of the two jets were used as classifier inputs. It has since been observed that while CWoLa performs well in such low-dimensional use cases, difficulties arise when adding additional jet features as classifier inputs. In this talk, we will describe ongoing work to combat these problems and extend the sensitivity of a CWoLa search by adding new observables to an ongoing analysis using 139 fb$^{-1}$ of data from $pp$ collisions at $\sqrt{s}=$ 13 TeV in the ATLAS detector. In particular, we will discuss the anticipated benefits of adding classifier features, as well as the implementation of a simulation-assisted version of CWoLa which makes the strategy more robust.
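The core CWoLa training step is simple to illustrate. The sketch below is a toy with Gaussian pseudo-data, not the ATLAS analysis: a classifier is trained to separate the signal region from the sideband using only region labels, and, if signal is present only in the signal region, its score becomes a signal-versus-background discriminant.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
bkg_sr = rng.normal(0.0, 1.0, (20000, 2))     # background-like events, signal region
sig_sr = rng.normal(1.5, 0.5, (500, 2))       # small injected "signal"
sideband = rng.normal(0.0, 1.0, (20000, 2))   # background-only sideband

# Mixed-vs-mixed training: label events by region, not by truth
X = np.vstack([bkg_sr, sig_sr, sideband])
y = np.concatenate([np.ones(len(bkg_sr) + len(sig_sr)), np.zeros(len(sideband))])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=200).fit(X, y)

# The optimal region-vs-region classifier is monotonically related to the
# signal-vs-background likelihood ratio, so high scores are signal-enriched
print("mean score, injected signal:", clf.predict_proba(sig_sr)[:, 1].mean())
print("mean score, background:     ", clf.predict_proba(bkg_sr)[:, 1].mean())
```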
Excursion is a tool to efficiently estimate level sets of computationally expensive black-box functions using Active Learning. Excursion uses Gaussian Process Regression as a surrogate model for the black-box function and queries the target function iteratively in order to increase the available information about the desired level sets. We implement Excursion using GPyTorch, which provides state-of-the-art fast posterior fitting techniques and takes advantage of GPUs to scale computations to higher dimensions.

In this talk, we demonstrate that Excursion significantly outperforms traditional grid search approaches, and we detail the current work in progress on improving Exotics searches as an intermediate step towards the ATLAS Run 2 pMSSM scan on $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector.
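A minimal sketch of the underlying idea follows, assuming a one-dimensional toy target and a simple entropy-based acquisition rule; the Excursion package's own acquisition functions are more sophisticated, and the class and function names here are illustrative.

```python
import torch
import gpytorch

class Surrogate(gpytorch.models.ExactGP):
    """GP surrogate: constant mean + scaled RBF kernel."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

def black_box(x):                      # stand-in for an expensive simulation
    return torch.sin(3 * x).squeeze(-1)

level = 0.5                            # we want the set {x : f(x) = level}
train_x = torch.rand(4, 1)             # small initial design
train_y = black_box(train_x)

for step in range(15):                 # active-learning loop
    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = Surrogate(train_x, train_y, likelihood)
    model.train(); likelihood.train()
    opt = torch.optim.Adam(model.parameters(), lr=0.1)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    for _ in range(100):               # refit GP hyperparameters
        opt.zero_grad()
        loss = -mll(model(train_x), train_y)
        loss.backward(); opt.step()

    model.eval(); likelihood.eval()
    grid = torch.linspace(0, 1, 256).unsqueeze(-1)
    with torch.no_grad(), gpytorch.settings.fast_pred_var():
        post = likelihood(model(grid))
        # P(f(x) > level) under the GP posterior at each candidate point
        p = 1 - torch.distributions.Normal(post.mean, post.stddev).cdf(
            torch.full_like(post.mean, level))
        p = p.clamp(1e-6, 1 - 1e-6)
        # query where level-set membership is most ambiguous
        entropy = -(p * p.log() + (1 - p) * (1 - p).log())
    x_next = grid[entropy.argmax()].unsqueeze(0)
    train_x = torch.cat([train_x, x_next])
    train_y = torch.cat([train_y, black_box(x_next)])
```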
Data Quality Monitoring (DQM) is an important part of collecting high-quality data for physics analysis. Currently, the DQM workflow is manpower-intensive, requiring experts to scrutinize and certify hundreds of histograms. Identifying good-quality, reliable data is necessary for accurate predictions and simulations, so anomalies in the detector must be identified promptly to minimize data loss. With Machine Learning (ML) algorithms, raising alarms on anomalies or failures can be automated and the data certification process made more efficient. The Tracker DQM team at the CMS Experiment (at the LHC) has been working on designing and implementing ML features to monitor this complex detector. This contribution presents the recent progress in this direction.
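As a hypothetical illustration of automated alarms of this kind (not the CMS Tracker DQM implementation; the histograms, network size, and threshold are invented), one can train an autoencoder on histograms from certified-good runs and flag runs whose reconstruction error is unusually large:

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(4)
good = rng.poisson(100.0, size=(500, 64)).astype(np.float32)  # toy occupancy histograms
good /= good.sum(axis=1, keepdims=True)                       # compare shapes only
x = torch.tensor(good)

ae = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(500):                       # train on certified-good runs only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x), x)
    loss.backward()
    opt.step()

threshold = 5 * loss.item()                # crude alarm threshold from training error
bad = good[0].copy()
bad[10:20] = 0.0                           # simulate a dead detector region
err = nn.functional.mse_loss(ae(torch.tensor(bad[None])), torch.tensor(bad[None]))
print("ALARM" if err.item() > threshold else "ok")
```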
Some of the open questions in fundamental physics can be addressed by looking at the distribution of matter in the Universe as a function of scale and time (or redshift). We can study the nature of dark energy, which causes the accelerated expansion of the Universe. We can measure the sum of the neutrino masses, and potentially determine their hierarchy. We can test the standard model at energies higher than those accessible in the laboratory, by studying the primordial density perturbations. The Dark Energy Spectroscopic Instrument (DESI) has just started a five-year program to generate the largest and most accurate 3D map of the distribution of galaxies and quasars. By measuring the statistical properties of these catalogs, DESI will be able to reconstruct the expansion history of the Universe over the last 11 billion years, while making precise measurements of the growth of structure. In this presentation, I will review the forecasted performance of the DESI survey, and show how it will dramatically improve our understanding of dark energy, inflation, and the mass of the neutrinos.
The Dark Energy Spectroscopic Instrument (DESI) has embarked on an ambitious survey to explore the nature of dark energy with spectroscopic measurements of 35 million galaxies and quasars in just five years. DESI will determine precise redshifts and employ the Baryon Acoustic Oscillation method to measure distances from the local universe to beyond 11 billion light years, as well as employ Redshift Space Distortions to measure the growth of structure and probe potential modifications to general relativity. In this presentation I will describe the instrumentation we developed to conduct the DESI survey, as well as the flow-down from the science requirements to the technical requirements on the instrumentation. The new instrumentation includes a wide-field, 3.2-degree-diameter prime-focus corrector that focuses the light onto 5020 robotic fiber positioners on the 0.8-m diameter, aspheric focal surface. This high density is only possible because of the very compact positioner design, which allows a minimum separation of only 10.4 mm. The positioners and their fibers are evenly divided among ten wedge-shaped petals, and each bundle directs the light of 500 fibers into one of ten spectrographs via a contiguous, high-efficiency, nearly 50-m fiber cable bundle. The ten identical spectrographs each use a pair of dichroics to split the light into three wavelength channels, each optimized for a distinct wavelength range and spectral resolution, together recording light from 360-980 nm. I will conclude with some highlights from the on-sky validation of the instrument.
The Dark Energy Spectroscopic Instrument (DESI) has started its main survey. Over 5 years, it will measure the spectra and redshifts of about 35 million galaxies and quasars over 14,000 square degrees. This 3D map will be used to reconstruct the expansion history of the universe up to z=3.5, and to measure the growth rate of structure in the redshift range 0.7-1.6 with unequaled precision. The start of the survey marks the end of a successful survey validation period during which more than one million cosmological redshifts were measured, already about as many as in any previous survey. This data set, along with many commissioning studies, has demonstrated that the project meets the science requirements written many years ago. I will present how we have validated the target selection, the observation strategy, and the data processing, demonstrating that we can achieve our goals in terms of the density of galaxies and quasars with measured redshifts, with the required precision, for exposure times that allow us to cover one third of the sky in five years.
An intriguing and well-motivated possibility for the particle makeup of the dark sector is that a small fraction of the observed abundance is made up of light, feebly-interacting particle species. Due to their weakness of interaction but comparatively large number abundance, cosmological datasets are particularly powerful tools here. In this talk I discuss the impact of these new particle species on observables, the cosmic microwave background (CMB) and large-scale structure (LSS) in particular, and present the strongest constraints to date on the existence of light relics in our universe.
GAMBIT (the Global and Modular Beyond-the-standard-model Inference Tool) is a flexible and extensible framework that can be used to undertake global fits of essentially any BSM theory to relevant experimental data sets. Currently included in the code are results from collider searches for new physics, cosmology, neutrino experiments, astrophysical and terrestrial dark matter searches, and precision measurements. In this talk I will begin with a brief update on recent additions to the code and then present the results of a recent global fit that we have undertaken. In this study, we simultaneously varied the coefficients of 14 EFT operators describing the interactions between dark matter, quarks, gluons, and the photon, in order to determine the most general current constraints on the allowed properties of WIMP dark matter.
Automated tools for the computation of amplitudes and cross sections have become the backbone of phenomenological studies beyond the standard model. We present the latest developments in MadDM, a calculator of dark matter observables based on MadGraph5_aMC@NLO. The new version enables the fully automated computation of loop-induced annihilation processes, relevant for indirect detection of dark matter. Of particular interest is the electroweak annihilation into $\gamma X$, where $X=\gamma$, $Z$, $h$ or any new unstable particle even under the dark symmetry. These processes lead to the sharp spectral feature of monochromatic gamma lines: a smoking-gun signature for dark matter annihilation in our Galaxy. MadDM provides the predictions for the respective fluxes near Earth and derives constraints from the $\gamma$-ray line searches by Fermi-LAT and HESS. As an application, we present the implications for the parameter space of the Inert Doublet model and a top-philic $t$-channel mediator model.
The WIMP proposed here yields the observed abundance of dark matter, and is consistent with the current limits from direct detection, indirect detection, and collider experiments, if its mass is $\sim 72$ GeV/$c^2$. It is also consistent with analyses of the gamma rays observed by Fermi-LAT from the Galactic center (and other sources), and of the antiprotons observed by AMS-02, in which the excesses are attributed to dark matter annihilation. These successes are shared by the inert doublet model (IDM), but the phenomenology is very different: The dark matter candidate of the IDM has first-order gauge couplings to other new particles, whereas the present candidate does not. In addition to indirect detection through annihilation products, it appears that the present particle can be observed in the most sensitive direct-detection and collider experiments currently being planned.
If dark matter interacts too feebly with ordinary matter, it would not have been able to thermalize with the bath in the early universe. Such Feebly Interacting Massive Particles (FIMPs) would instead be produced via the freeze-in mechanism. Testing FIMPs is a challenging task, given the smallness of their couplings. In this talk, I will discuss our recent proposal of a $Z’$ portal where freeze-in can be currently tested by many experiments. In our model, $Z’$ bosons with masses in the MeV-PeV range have both vector and axial couplings to ordinary and dark fermions. We place constraints on our parameter space with bounds from direct detection, atomic parity violation, leptonic anomalous magnetic moments, neutrino-electron scattering, collider, and beam dump experiments.
Dark chiral fermions carrying lepton flavor quantum numbers are natural candidates for freeze-in. Small couplings with the Standard Model fermions, of the order of the lepton Yukawas, are ‘automatic’ in the limit of Minimal Flavor Violation. In the absence of total lepton-number-violating interactions, particles in certain representations of the flavor group remain absolutely stable. For masses in the GeV-TeV range, the simplest model with three flavors leads to signals at future direct detection experiments like DARWIN. Interestingly, freeze-in with a smaller flavor group such as SU(2) is already being probed by XENON1T.
The DAMIC experiment at SNOLAB uses thick, fully-depleted, scientific-grade charge-coupled devices (CCDs) to search for interactions between proposed dark matter particles in the galactic halo and the ordinary silicon atoms in the detector. DAMIC CCDs operate with extremely low instrumental noise and dark current, making them particularly sensitive to the ionization signals expected from low-mass dark matter particles. Throughout 2017-18, DAMIC collected data with an array of seven CCDs (40-gram target) installed in a low-radiation environment in the SNOLAB underground laboratory. This talk will focus on the recent dark matter search results from DAMIC. We will present the search methodology and results from an 11 kg-day exposure WIMP search, including the strictest limit on the WIMP-nucleon scattering cross section for a silicon target for $m_\chi < 9\ \mathrm{GeV}/c^2$. Additionally, we will discuss recent limits on light dark matter that could interact with the electrons of the silicon atoms.
SuperCDMS searches for dark matter with cryogenic germanium and silicon detectors that are sensitive in both the athermal phonon and ionization channels. In order to observe such a small potential signal, all background sources need to be well understood and then mitigated.
Low-background shielding was designed such that the environmental background is negligible compared to the irreducible background due to cosmogenic activation in the detectors themselves. The overall background budget of the SuperCDMS experiment will be presented, along with the iterative process of design, assay, and fabrication of the now complete shielding system.
The third science run of SuperCDMS HVeV detectors (single-charge sensitive detectors with high Neganov-Trofimov-Luke phonon gain) took place at the NEXUS underground test facility in early 2021, incorporating two important changes to test background hypotheses and enhance sensitivity. First, this was the first HVeV dataset taken underground (300 mwe) and in a shielded environment. Second, the run utilized three detectors operated simultaneously to identify sources of background events that produce 2 or more electron-hole pairs. We will present preliminary results and interpretation from these tests as well as an estimate of the expected sensitivity of the dataset.
We present the theoretical case, along with some early measurements with diamond test chips, that demonstrates the viability of transition-edge sensors (TES) on diamond as a potential platform for direct detection of sub-GeV dark matter.
Diamond targets can be sensitive to both nuclear and electron recoils from dark matter scattering in the MeV-and-above mass range, as well as to absorption processes of dark matter with masses between sub-eV and tens of eV.
Compared to other proposed semiconducting targets such as germanium and silicon, diamond detectors can probe lower dark matter masses via nuclear recoils due to the lightness of the carbon nucleus. The expected reach for electron recoils is comparable to that of germanium and silicon, with the advantage that dark counts are expected to be under better control. Via absorption processes, unconstrained QCD axion parameter space can be probed in diamond for masses of order 10 eV.
HeRALD, the Helium Roton Apparatus for Light Dark Matter, will use a superfluid $^4$He target to probe the sub-GeV dark matter parameter space. The HeRALD design is sensitive to all signal channels produced by nuclear recoils in superfluid helium: singlet and triplet excimers, as well as phonon-like excitations of the superfluid medium. Excimers are detected via calorimetry with Transition-Edge-Sensor readout in and around the superfluid helium. Phonon-like vibrational excitations eject helium atoms from the superfluid-vacuum interface, which are detected by adsorption onto calorimetry suspended above the interface. I will discuss the design, sensitivity projections, and ongoing R&D for the HeRALD experiment. In particular, I will present an initial light yield measurement of superfluid helium down to order 50 keV.
Absorption of dark matter (DM) allows direct detection experiments to probe a broad range of DM candidates with masses much smaller than kinematically allowed via scattering. It has been known for some time that for vector and pseudoscalar DM the absorption rate can be related to the target's optical properties, i.e. the conductivity/dielectric function. However, this is not the case for scalar DM, where the absorption rate is determined by a formally NLO operator which does not appear in the photon absorption process; the absorption rate must therefore be determined by other methods. We use a combination of first-principles numerical calculations and semi-analytic modeling to compute the absorption rate in silicon, germanium, and a superconducting aluminum target. We also find good agreement between these approaches and the data-driven approach for the vector and pseudoscalar DM models.
It has long been known that the coarse-grained approximation to the black hole density of states can be computed using classical Euclidean gravity. In this talk I will present evidence for another entry in the dictionary between Euclidean gravity and black hole physics, namely that Euclidean wormholes describe a coarse-grained approximation to the energy level statistics of black hole microstates. Our main result is an integral representation for wormhole amplitudes in Einstein gravity and in full-fledged AdS/CFT. These amplitudes are non-perturbative corrections to the two-boundary problem in AdS quantum gravity. The full amplitude is UV sensitive, dominated by small wormholes, but it admits an integral transformation with a macroscopic, weakly curved saddle-point approximation. In the boundary description this saddle appears to dominate a smeared version of the connected two-point function of the black hole density of states, and suggests level repulsion in the spectrum of AdS black hole microstates.
We will discuss constructions of a string-inspired, higher-derivative, non-local extension of particle theory which is explicitly ghost-free. Through quantum loop calculations in the weak perturbation limit, we explore the implications for the hierarchy problem and the vacuum instability problem in Higgs theory. We then discuss abelian and non-abelian model-building in infinite-derivative QFT in 4D, which naturally leads to predictions of dynamical conformal invariance in the UV at the quantum level, due to the vanishing of the $\beta$-functions above the energy scale of non-locality $M$. The theory remains finite and perturbative up to infinite energy scales, resolving the issue of Landau poles. We move on to the implications of infinite derivatives for LHC, dark matter, astrophysical, and inflationary observables, and comment on constraints on the scale $M$ and its dimensional transmutation. Next we discuss the strong perturbation limit and show that the mass gap that arises due to the interactions in the theory gets diluted in the UV by the higher derivatives, again reaching a conformal limit in the asymptotic regions, both for the scalar field and Yang-Mills cases. For Yang-Mills, the gauge theory is confining without fermions, and we explore the exact beta function of the theory. We conclude by summarising non-locality as a framework for UV-completion in particle theory and gravity, and the road ahead for model-building with respect to BSM physics, particularly neutrinos, dark matter, and axions.
We derive an expression for the one-loop determinant of a massive vector field in the Anti-de Sitter black brane geometry in the large-dimension limit. We utilize the Denef, Hartnoll, and Sachdev method, which constructs the one-loop determinant from the quasinormal modes of the field. The large-dimension limit decouples the equations of motion for the different field components, and also selects a specific set of quasinormal modes that contribute to the non-polynomial part of the one-loop determinant. We hope this result can provide useful information even when the number of dimensions $D$ is finite, since it is the leading-order contribution when we treat $D$ as a parameter and expand in $1/D$.
Generic arguments lead to the idea of a minimal length scale in quantum gravity. An observational signal of such a minimal length scale is that photons would exhibit dispersion. In 2009, the observation of a short gamma ray burst seemed to push the minimal length scale to distances smaller than the Planck length. This poses a challenge for minimal length models. Here we propose a modification of the position and momentum operators which leads to a minimal length scale but preserves the photon energy-momentum relationship $E=pc$. In this way there is no dispersion of photons with different energies. This can be accomplished without modifying the commutation relation $[x,p]=i\hbar$.
The quantization of Einstein's general relativity leads to a nonrenormalizable quantum field theory. However, the potential harm of nonrenormalizability can be overcome in the effective field theory (EFT) framework, where there is an unambiguous way to define a well-behaved and reliable quantum theory of gravitation, provided we restrict ourselves to energies low compared to the Planck scale. Although the effective field theory of gravitation is perfectly well-defined as a quantum field theory, some subtleties arise from its nonrenormalizability, such as the use of the renormalization group equations, as illustrated by the controversy over the gravitational corrections to the beta function of gauge theories. In 2005, Robinson and Wilczek announced their conclusion that gravity contributes a negative term to the beta function of the gauge coupling, meaning that quantum gravity could make gauge theories asymptotically free. This result was soon contested: it was shown that the claimed gravitational correction is gauge dependent, and much subsequent research on the subject followed, with varying conclusions. In this work we use the framework of effective field theory to couple Einstein's gravity to quantum electrodynamics and determine the gravitational corrections to the two-loop beta function of the electric charge. Our results indicate that gravitational corrections do not alter the running behavior of the electric charge; on the contrary, we observe a positive contribution to the beta function, making the electric charge grow faster.
Measurements of Higgs boson production cross sections are carried out in the diphoton decay channel using 139 fb$^{-1}$ of $pp$ collision data at $\sqrt{s}=$13 TeV collected by the ATLAS experiment. Cross sections for gluon fusion, weak vector boson fusion, associated production with a $W$ or $Z$ boson, and top quark associated production are reported. An upper limit of eight times the Standard Model prediction is set for the associated production of a Higgs boson with a single top quark. Higgs boson production is further characterized through measurements of the Simplified Template Cross-Sections (STXS) in 27 fiducial regions. All the measurement results are compatible with the Standard Model predictions.
The precision measurements of the properties of the Higgs boson are among the principal goals of the LHC Run-2 program. This talk reports on the measurements of the fiducial and differential Higgs boson production cross sections via Vector Boson Fusion with a muon, an electron, and two neutrinos from the decay of W bosons, along with the presence of two energetic jets in the final state. The analysis uses $pp$ collision data at a center-of-mass energy of 13 TeV collected with the ATLAS detector between 2015 and 2018, corresponding to an integrated luminosity of 139 fb$^{-1}$. The optimizations of the selection criteria and the signal extraction methods will be discussed in detail, in particular the use of machine learning techniques for performing a multidimensional fit for extracting the signal and normalizing the simulated backgrounds to data.
The Large Hadron Collider (LHC) is a “top quark factory”, allowing precise measurements of several top quark properties. In addition, it is now possible for the first time to measure rare processes involving top quarks. Associated production of top and anti-top quarks along with the Higgs boson or with electroweak gauge bosons such as the W or Z has been observed at the LHC. Precise measurements of these processes have implications for the Standard Model of particle physics and even for cosmology. Recent results from measurements of these rare top quark processes involving multileptonic final states at the ATLAS experiment, in $pp$ collisions at $\sqrt{s}=13$ TeV with 80 fb$^{-1}$ of data, will be discussed.
Following the discovery of the Higgs boson in 2012 by both the ATLAS and CMS experiments, a wealth of papers have been published concerning measurements or observations of the Higgs boson's decay modes. However, the most dominant decay mode, $H \rightarrow b\bar{b}$, proved to be an elusive and challenging search due to the low signal-to-background environment and a diverse range of backgrounds arising from multiple Standard Model processes, including $W$+jets, $Z$+jets, and $t\bar{t}$ production, amongst others. Measurements of $WH$ and $ZH$ production, with the $W$ or $Z$ boson decaying into charged leptons (electrons or muons, including those produced from the leptonic decay of a tau lepton), in the $H\rightarrow b\bar{b}$ decay channel in $pp$ collisions at 13 TeV, corresponding to an integrated luminosity of 139 fb$^{-1}$, were performed with the ATLAS detector. The production of a Higgs boson in association with a $W$ or $Z$ boson has been established with observed (expected) significances of 4.0 (4.1) and 5.3 (5.1) standard deviations, respectively.
In this talk I will present results of the simulation of electroweak Higgs boson production at the CERN LHC using the Herwig 7 general-purpose event generator, with one-loop matrix elements obtained via the interface to HJets. The main result will be the simulation of next-to-leading-order merging of Higgs boson plus 2 and 3 jets with a dipole parton shower. Additionally, I will comment on non-factorizable radiative corrections to this important Higgs boson production process. I will also provide a comparison of the full calculation with the well-known t-channel approximation (a.k.a. VBF) provided by the parton-level Monte Carlo program VBFNLO.
With the standard model working well in describing the collider data, the focus is now on determining the standard model parameters as well as on any hint of deviation. In particular, the determination of the couplings of the Higgs boson with itself and with other particles of the model is important to better understand the electroweak symmetry breaking sector of the model. In this letter, we look at the process $pp \to WWH$, in particular through the fusion of bottom quarks. Due to the non-negligible coupling of the Higgs boson with the bottom quarks, there is a dependence on the $WWHH$ coupling in this process. This sub-process receives the largest contribution when the $W$ bosons are longitudinally polarized. We compute one-loop QCD corrections to various final states with polarized $W$ bosons. We find that the corrections to the final state with longitudinally polarized $W$ bosons are large. It is shown that the measurement of the polarization of the $W$ bosons can be used as a tool to probe the $WWHH$ coupling in this process. We also examine the effect of varying the $WWHH$ coupling in the $\kappa$-framework.
Experimentally probing the charm-Yukawa coupling at the LHC is important, but very challenging due to an enormous QCD background. We study a new channel that can be used to search for the Higgs decay $H\to c\bar c$, using the vector boson fusion (VBF) mechanism with an associated photon. In addition to suppressing the QCD background, the photon provides an effective trigger handle. We discuss the trigger implications of this final state that can be utilized in ATLAS and CMS. We propose a novel search strategy for $H\to c\bar c$ in association with VBF jets and a photon, where we find a projected sensitivity of about 5 times the SM charm-Yukawa coupling at 95$\%$ C.L. at the High Luminosity LHC (HL-LHC). Our result is comparable and complementary to existing projections at the HL-LHC. We also discuss the implications of increasing the center-of-mass collision energy to 30 TeV and 100 TeV.
The measurements at the Large Hadron Collider (LHC) have so far established that the Higgs Yukawa couplings to fermions are close to the Standard Model (SM) expectation for the third fermion generation. However, the rather ad hoc assumption of universal Yukawa couplings for the other fermion generations has little experimental constraint. This is very challenging to probe due to small branching fractions, extensive quantum chromodynamics (QCD) backgrounds, and difficulties in jet flavor identification. A direct search by the ATLAS experiment for the SM Higgs boson decaying to a pair of charm quarks is presented. The dataset delivered by the LHC in $pp$ collisions at $\sqrt{s}=$ 13 TeV and recorded by the ATLAS detector corresponds to an integrated luminosity of 139 fb$^{-1}$. Charm-tagging algorithms are optimized to distinguish $c$-quark jets from both light-flavor jets and $b$-quark jets. The analysis method is validated with the study of diboson (WW, WZ, and ZZ) production, with observed (expected) significances of 2.6 (2.2) standard deviations above the background-only hypothesis for the $(W/Z)Z(\to c\bar{c})$ process and 3.8 (4.6) standard deviations for the $(W/Z)W(\to cq)$ process. The $(W/Z)H(\to c\bar{c})$ search yields an observed (expected) limit of 26 (31) times the predicted cross-section times branching fraction for a Higgs boson with a mass of 125 GeV, corresponding to an observed (expected) constraint on the charm Yukawa coupling modifier of $|\kappa_c| < 8.5\,(12.4)$ at the 95% confidence level.
The dimuon decay of the Higgs boson is the most promising process for probing the Yukawa couplings to the second generation fermions at the Large Hadron Collider (LHC). We present a search for this important process using the data corresponding to an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector in $pp$ collisions at $\sqrt{s} = 13~\mathrm{TeV}$ at the LHC. Events are divided into several regions using boosted decision trees to target different production modes of the Higgs boson. The measured signal strength (defined as the ratio of the observed signal yield to the one expected in the Standard Model) is $\mu = 1.2 \pm 0.6$. The observed (expected) significance over the background-only hypothesis for a Higgs boson with a mass of 125.09 GeV is 2.0$\sigma$ (1.7$\sigma$).
The Higgs boson is expected to decay to $b\bar{b}$ approximately 58% of the time. Despite the large branching fraction, the large background from Standard Model events with $b$-jets has made this decay less precisely measured than other, less frequent, decays. Searches for $H\to b\bar{b}$ in the vector boson fusion production mode have historically lacked sensitivity, but developments in the background estimates and discrimination, as well as improvements in the signal extraction techniques, have resulted in an observed (expected) significance of 2.6 (2.8) standard deviations from the background-only hypothesis. This analysis uses a dataset with an integrated luminosity of 126 fb$^{-1}$, collected in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector at the Large Hadron Collider (LHC) during LHC Run 2, and considers only fully-hadronic final states. This talk will focus on the background estimation and signal extraction techniques that are unique to this analysis, as well as the results.
The ever-growing interest in high-energy production of the Higgs boson, motivated by an enhanced sensitivity to New Physics scenarios, pushes the development of experimental techniques for the reconstruction of boosted decay products from hadronic Higgs-boson decays.
This talk will discuss recent studies of inclusive Higgs-boson production with sizable transverse momentum decaying to a $b\bar{b}$ quark pair (ATLAS-CONF-2021-010). The analyzed data were recorded with the ATLAS detector in proton-proton collisions with a center-of-mass energy of $\sqrt{s}=13\,$ TeV at the Large Hadron Collider between 2015 and 2018, corresponding to an integrated luminosity of $136\,\text{fb}^{-1}$.
Higgs bosons decaying to $b\bar{b}$ are reconstructed as single large-radius jets and identified by the experimental signature of two $b$-hadron decays. The analysis takes advantage of an analytical model for the description of the multi-jet background, and combines multiple regions rich in Higgs-boson signal and specific background signatures. The experimental techniques are validated in the same kinematic regime using the $Z\to b\bar{b}$ process.
For Higgs-boson production at transverse momenta above 450 GeV, the production cross section is found to be $13 \pm 57\,(\text{stat.}) \pm 22\,(\text{syst.}) \pm 3\,(\text{theo.})$ fb. The 95% confidence-level upper limits on the differential cross section as a function of Higgs-boson transverse momentum are $\sigma_H(300<p_{\text{T}}^H<450~\text{GeV})<2.8$ pb, $\sigma_H(450<p_{\text{T}}^H<650~\text{GeV})<91$ fb, $\sigma_H(p_{\text{T}}^H>650~\text{GeV})<40.5$ fb, and $\sigma_H(p_{\text{T}}^H>1~\text{TeV})<10.3$ fb. Evidence for the production of $Z\to b\bar{b}$ with $p_{\text{T}}^Z>650\,\text{GeV}$ is obtained. All results are consistent with the Standard Model predictions.
A search for the Standard Model Higgs boson produced in association with a high-energy photon is performed using 132 fb$^{-1}$ of $pp$ collision data at $\sqrt{s}=13$ TeV collected with the ATLAS detector at the Large Hadron Collider. The vector boson fusion production mode of the Higgs boson is particularly powerful for studying the $H(\rightarrow b \bar{b})\gamma$ final state, because the photon requirement greatly reduces the multijet background and because the Higgs boson decays primarily to bottom quark-antiquark pairs. Utilization of Monte Carlo, machine learning, and model fitting techniques resulted in a measured Higgs boson signal strength of $1.3 \pm 1.0$ relative to the Standard Model prediction. This corresponds to an observed signal significance above the background of 1.3 standard deviations, compared to 1.0 standard deviations expected.
ProtoDUNE-SP and ProtoDUNE-DP are the large-scale single-phase and dual-phase prototypes of DUNE's far detector modules, operated at the CERN Neutrino Platform. ProtoDUNE-SP finished its Phase-1 running in 2020 and successfully collected test beam and cosmic ray data. In this talk, I will discuss the first results on ProtoDUNE-SP Phase-1's physics performance and ProtoDUNE-DP's design and progress.
Large liquid argon time projection chambers (LAr TPCs) at SBN and DUNE will provide an unprecedented amount of information about GeV-scale neutrino interactions. By taking advantage of the excellent tracking and calorimetric performance of LAr TPCs, we present a novel method for estimating the neutrino energy in neutral current interactions that significantly improves upon conventional methods in terms of energy resolution and bias. We present a toy study exploring the application of this new method to the sterile neutrino search at SBN under a 3+1 model.
The Deep Underground Neutrino Experiment (DUNE) is an upcoming long-baseline neutrino experiment which will study neutrino oscillations. Neutrino oscillations will be detected at the DUNE far detector, 1300 km away from the start of the beam at Fermilab. The DUNE near detector (ND) will be located on-site at Fermilab and will be used to provide an initial characterization of the neutrino beam, as well as to constrain systematic uncertainties on neutrino oscillation measurements. The detector suite consists of a modular 50-ton LArTPC (ND-LAr), a magnetized 1-ton gaseous argon time projection chamber (ND-GAr) surrounded by an electromagnetic calorimeter, and the System for on-Axis Neutrino Detection (SAND), composed of a magnetized electromagnetic calorimeter and an inner tracker. In this talk, these detectors and their physics goals will be discussed.
In order to achieve a precise measurement of the leptonic CP violation phase, the Deep Underground Neutrino Experiment (DUNE) will employ four 10-kt-scale far detector modules and a near detector complex.
In the near detector complex, the System for on-Axis Neutrino Detection (SAND) is located downstream of a liquid-argon TPC (LAr) and a high-pressure gaseous-argon TPC (GAr). SAND consists of an inner tracking system surrounded by the KLOE superconducting magnet, with an electromagnetic calorimeter inside. Due to its high event rate and accurate neutrino energy reconstruction capability, SAND can serve as a good beam monitor. Besides, SAND provides comprehensive measurements on non-Ar targets, allowing constraints on the A-dependence of neutrino interaction models. In addition, with the capability of neutron kinetic energy detection, a full reconstruction of neutrino interactions would be possible, which opens new ways to analyze the events. In this talk, a number of physics studies and the latest design of SAND will be presented.
The XENON collaboration has recently published results lowering the energy threshold to search for nuclear recoils produced by solar $^8$B neutrinos using a $0.6$ tonne-year exposure with the XENON1T detector. Due to the low energy threshold, a number of novel techniques are required to reduce the consequent increase in backgrounds. No significant $^8$B neutrino-like excess is found after unblinding. New upper limits are placed on the dark matter-nucleus cross section for dark matter masses as low as $3~\mathrm{GeV}/c^2$, as well as on a model of non-standard neutrino interactions. This talk will present the techniques used to lower backgrounds and to validate signal and background models.
The electromagnetic calorimeter (ECAL) of the Compact Muon Solenoid (CMS) is a high-granularity lead tungstate crystal calorimeter operating at the CERN Large Hadron Collider. The ECAL is designed to achieve excellent energy resolution, which is crucial for studies of Higgs boson decays with electromagnetic particles in the final state, as well as for searches for new physics involving electrons and photons. Recently the energy response of the calorimeter has been precisely calibrated exploiting the full Run 2 data, with the goal of achieving optimal performance. A dedicated calibration of each detector channel has been performed with physics events, using electrons from W and Z boson decays, photons from pi0/eta decays, and the azimuthally symmetric energy distribution of minimum bias events. We will describe the calibration strategies that have been implemented and the excellent performance achieved by the CMS ECAL with the ultimate calibration of the Run 2 data, in terms of energy scale stability and energy resolution.
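As a simplified illustration of the idea behind calibrating with physics events (a toy with a single global scale factor and invented numbers; the actual CMS procedure derives per-channel constants with far more sophistication), one can extract an energy-scale correction from the position of the reconstructed Z boson mass peak:

```python
import numpy as np

MZ = 91.19                               # reference Z mass [GeV]
rng = np.random.default_rng(5)

true_response = 0.97                     # unknown miscalibration to be corrected
m_true = rng.normal(MZ, 2.5, 10000)      # toy dielectron mass spectrum
m_reco = true_response * m_true          # both electron energies scaled together

# m ~ sqrt(E1 * E2), so a common energy scale shifts the peak linearly;
# estimate the multiplicative correction from the reconstructed peak position
correction = MZ / np.median(m_reco)
print(f"derived scale correction: {correction:.4f} (true: {1 / true_response:.4f})")
```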
The CMS electromagnetic calorimeter (ECAL) is a high-resolution crystal calorimeter operating at the CERN LHC. The on-detector readout electronics digitizes the signals and provides information on the deposited energy in the ECAL to the hardware-based Level-1 (L1) trigger system. The L1 trigger system receives information from different CMS subdetectors at 40 MHz, the proton bunch collision rate, and decides for each collision whether the full detector must be read out, reducing the rate of accepted events to about 100 kHz. The increased luminosity of LHC Run 2 with respect to Run 1 has required frequent calibrations during data-taking operations, to account at trigger level for radiation-induced changes in crystal and photodetector response. For LHC Run 3 (2022-24), further improvements in the energy and time reconstruction of the CMS ECAL trigger primitives are being explored. These exploit additional features of the on-detector electronics. In this presentation we will review the ECAL trigger primitive performance during LHC Run 2 and present the improvements to the ECAL trigger system envisaged for LHC Run 3.
To address the challenges of providing high-performance calorimetry and other instrumentation in future experiments under high-luminosity, high-radiation, high-pileup conditions, R&D is being conducted on promising optical-based technologies that can inform the design of future detectors. The emphasis is on ultra-compactness, excellent energy and spatial resolution, and especially fast timing capability.
The strategy builds upon the following concepts: dense materials to minimize the cross sections and lengths (depths) of detector elements; structures with Molière radii kept as small as possible; radiation-hard materials; optical techniques that provide high efficiency and fast response while keeping optical paths as short as possible; and radiation-resistant, high-efficiency photosensors.
High material density is achieved by using thin layers of tungsten absorber interleaved with active layers of dense, highly efficient crystal or ceramic scintillator. Several scintillator approaches are currently being explored, including materials activated with the trivalent rare-earth ions Ce3+ and Pr3+ for brightness, and Ca co-doping for improved (faster) fluorescence decay time.
Light collection and transfer from the scintillation layers to the photosensors is enabled by the development and refinement of new wavelength shifters (WLS) and the incorporation of these materials into radiation-hard quartz waveguide elements. WLS dye developments include fast organic dyes of the DSB1 type; ESIPT (excited-state intramolecular proton transfer) dyes, whose very large Stokes shifts give very low optical self-absorption; and inorganic fluorescent materials such as LuAG:Ce, noted for its radiation resistance.
Optical waveguide approaches include quartz capillaries containing WLS elements in several configurations: (1) WLS cores, to provide high-resolution EM energy measurement; (2) WLS materials strategically placed at the location of the EM shower maximum, to provide high-resolution timing of EM showers; and (3) WLS elements placed at various depths, to provide depth segmentation and angular measurement of the EM shower development.
Light, either directly from the scintillators or indirectly via the wavelength shifters, is detected by pixelated Geiger-mode photosensors that have high quantum efficiency over a wide spectral range and are designed to avoid saturation. These include the development of very-small-pixel (5-7 micron) silicon photomultiplier devices (SiPMs) operated at low gain and cooled (typically to -35°C or below), and longer-term R&D on photosensors based upon large-band-gap materials including GaInP. Both efforts are directed toward improved device performance in high radiation fields.
The main emphases of the RADiCAL R&D program are: (1) bench, beam, and radiation testing of individual scintillator, wavelength-shifter, and photosensor elements; and (2) combining these into ultra-compact modular structures and characterizing their performance for energy measurement, fast timing, and depth segmentation. Recent results and program plans will be presented.
A challenge in large LArTPCs is efficient photon collection for low-energy, MeV-scale deposits. Past studies have demonstrated that augmenting traditional ionization-based calorimetry with information from the scintillation signals can greatly improve the precision of energy-deposition measurements. We propose the use of photosensitive dopants to convert the scintillation light of the liquid argon directly into ionization signals. This could enable the collection of more than 40% of all the scintillation information, a considerable improvement over conventional light-collection solutions. We will discuss the implications for LArTPC physics programs, the hints of performance improvements we can gather from past studies, and the R&D we envision is needed to establish the use of these dopants in large LArTPCs.
The “muon-to-electron conversion” (Mu2e) experiment at Fermilab will search for the charged-lepton-flavour-violating, neutrinoless coherent conversion of a muon into an electron in the field of an aluminum nucleus. Observation of this process would be unambiguous evidence of physics beyond the Standard Model. The Mu2e detectors comprise a straw tracker, an electromagnetic calorimeter, and an external cosmic-ray veto. The calorimeter provides excellent electron identification, complementary information to aid pattern recognition and track reconstruction, and a fast calorimetric online trigger. It has been designed as a state-of-the-art crystal calorimeter and employs 1340 pure cesium iodide (CsI) crystals read out by UV-extended silicon photosensors with fast front-end and digitization electronics. A design consisting of two identical annular matrices (named “disks”), positioned at a relative distance of 70 cm along the muon beamline downstream of the aluminum target, satisfies the Mu2e physics requirements.
The hostile Mu2e operating conditions, in terms of radiation levels (a total ionizing dose of 12 krad and a neutron fluence of 5x10^10 n/cm^2 (1 MeV-eq (Si)) per year), magnetic field intensity (1 T), and vacuum level (10^-4 Torr), have posed tight constraints on the design of the detector mechanical structures and the choice of materials. The support structure of the two 670-crystal matrices employs two aluminum hollow rings and parts made of vacuum-compatible open-cell carbon fiber. The photosensors and front-end electronics serving each crystal are assembled in a single mechanical unit inserted in a machined copper holder. The 670 units are supported by a machined plate made of vacuum-compatible plastic. The plate also integrates the cooling system, a network of copper lines circulating a low-temperature radiation-hard fluid, placed in thermal contact with the copper holders to form a low-resistance thermal bridge. The data-acquisition electronics are hosted in custom aluminum crates positioned on the external lateral surface of the two disks; the crates also integrate the electronics cooling system, with lines running in parallel to those of the front-end system.
In this talk we will review the constraints on the design of the calorimeter mechanical structures; the development from conceptual design to the specifications of all structural components, including the mechanical and thermal simulations that determined the material and technological choices and the specifications of the cooling station; the status of component production; the component quality-assurance tests; the detector assembly procedures; and the procedures for detector transportation and installation in the experimental area.
Measurements of dileptonic top-antitop events at the LHC have revealed several notable excesses. We examine the possibility that these excesses are consequences of neglecting the non-perturbative enhancement of the production cross section near the t-tbar threshold. While sub-dominant in terms of total rates, the so-far-neglected toponium effects yield additional dileptonic systems of small invariant mass and small azimuthal-angle separation, which could account for the above-mentioned deviations from the Standard Model. We propose a method to discover toponium in present and future data, and our results should pave the way for further experimental and phenomenological studies of toponium. A deeper understanding of the threshold behavior of top-pair production is also necessary to accurately determine the top-quark mass, one of the most important parameters of the SM.
We investigate the prospects of discovering the top quark decay into a charm quark and a Higgs boson ($t \to c h^0$) in top quark pair production at the CERN Large Hadron Collider (LHC). A general two Higgs doublet model is adopted to study flavor-changing neutral Higgs (FCNH) interactions. We perform a parton-level analysis as well as Monte Carlo simulations using \textsc{Pythia}~8 and \textsc{Delphes} to study the flavor-changing top quark decay $t \to c h^0$, followed by the Higgs decaying into $\tau^+ \tau^-$, with the other top quark decaying to a bottom quark ($b$) and two light jets ($t\to bW\to bjj$). To reduce the physics background to the Higgs signal, only the leptonic decays of the tau leptons are considered, $\tau^+\tau^- \to e^\pm\mu^\mp +$ MET, where MET represents the missing transverse energy from the neutrinos. In order to reconstruct the Higgs boson and top quark masses, as well as to reduce the physics background, the collinear approximation for the highly boosted tau decays is employed. Furthermore, the energy distribution of the charm quark helps set the acceptance criteria used to reduce the background and improve the statistical significance of the signal. We study the discovery potential for the FCNH top decay at the LHC with collider energies $\sqrt{s} = 13$ and 14 TeV, as well as at a future hadron collider with $\sqrt{s} = 27$ TeV. Our analysis suggests that a high-energy LHC at $\sqrt{s} = 27$ TeV will be able to discover this FCNH signal with an integrated luminosity $\mathcal{L} = 3$ ab$^{-1}$ for a branching fraction ${\cal B}(t \to ch^0) > 1.4 \times 10^{-4}$, which corresponds to an FCNH coupling $|\lambda_{tch}| > 0.023$. This coupling is significantly below the current ATLAS combined upper limit of $|\lambda_{tch}| = 0.064$.
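For reference, the collinear approximation mentioned above is the standard di-tau reconstruction trick (a textbook sketch, not necessarily this analysis's exact implementation): each $\tau$ is assumed to carry a fraction $x_i$ of its momentum in visible decay products, with the neutrinos exactly collinear, so the measured MET determines $x_1$ and $x_2$ through
$$\vec{E}_T^{\,\mathrm{miss}} = \left(\frac{1}{x_1}-1\right)\vec{p}_T^{\,\mathrm{vis},1} + \left(\frac{1}{x_2}-1\right)\vec{p}_T^{\,\mathrm{vis},2}, \qquad m_{\tau\tau} \simeq \frac{m_{\mathrm{vis}}}{\sqrt{x_1 x_2}},$$
and the di-tau (Higgs-candidate) mass then follows from the visible mass and the fitted fractions.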
Variable Importance is a variable-ranking framework that uses machine learning methods, such as neural networks, to construct a quantitative metric for characterizing a variable's discriminatory power in binary classification problems. The framework is presented in the context of the CMS search for the rare Standard Model process of three-top-quark production in the single-lepton final state. The importance metrics for a set of 76 multivariate variables describing a 13 TeV proton-proton collision are determined, and a neural network discriminator is trained on a subset of the ranked variables for use in a likelihood analysis. The framework includes hyperparameter optimization and k-fold cross-validation training when constructing the final discriminator. Preliminary results for the expected three-top signal and cross section, using 101 $\mathrm{fb}^{-1}$ of simulated Monte Carlo samples with the Run 2 CMS detector, are shown. Additionally, a study of Variable Importance's ability to predict the expected significance, using a cumulative importance metric, is shown to further validate the accuracy of its quantitative ranking.
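To make the ranking procedure concrete, the sketch below shows one common realization of such a metric, permutation importance with a small neural network; this is our own illustration on a synthetic stand-in dataset, not the talk's actual framework code or metric.

    # Sketch of a variable-importance ranking via permutation importance.
    # Synthetic data stands in for the 76 kinematic variables.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=5000, n_features=76, n_informative=10)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)

    # Importance = mean drop in classification score when one variable is shuffled.
    result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
    ranking = np.argsort(result.importances_mean)[::-1]  # most important first

A discriminator would then be retrained on the top-ranked subset, with cross-validation guarding against overfitting the ranking itself.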
We present a search for Standard Model four-top-quark (tttt) production in the single-lepton final state. We analyze proton-proton collision data collected by the CMS experiment at a center-of-mass energy of 13 TeV, corresponding to integrated luminosities of 35.8 fb$^{-1}$ in 2016, 41.5 fb$^{-1}$ in 2017, and 59.97 fb$^{-1}$ in 2018. The single-lepton final state features a high jet multiplicity, with at least four jets coming from the hadronization of bottom quarks, one electron or muon, and missing transverse momentum from the neutrino. To discriminate the signal from the background we use the distributions of HT, the scalar sum of the transverse momenta of all jets, and of the discriminator from a multivariate analysis based on boosted decision trees (BDT). In the absence of a signal, we set limits on the tttt production cross section. The expected limits and significances for the 2016-2018 data-taking periods, and for their combination, are presented.
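Explicitly, the first selection variable is
$$H_T = \sum_{j\,\in\,\mathrm{jets}} p_T^{\,j},$$
with the sum running over all jets selected in the event.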
We discuss heavy-flavor production in modern global QCD analyses used to determine the structure of the proton. We discuss new factorization schemes in the presence of heavy quarks in proton-proton collisions, as well as the impact of the latest charm and bottom production data from HERA and top-quark pair production data from the LHC on recent PDF analyses from CTEQ.
The top quark pair production cross section is measured in proton-proton and lead-lead collisions at a center-of-mass energy of 5.02 TeV. The data, collected in 2017 and 2018 by the CMS experiment at the LHC, correspond to proton-equivalent integrated luminosities of 304 and 78 pb$^{-1}$, respectively. The measurements are performed using events with one electron and one muon of opposite sign, and at least two jets. The measured cross sections are found to be consistent with each other and with perturbative QCD calculations, including state-of-the-art free- and bound-nucleon parton distribution functions. They constitute the first step towards using the top quark as a novel probe of the quark-gluon plasma, an exotic state of strongly interacting matter routinely produced in collisions of ultrarelativistic heavy nuclei.
The Higgs boson could provide the key to discovering new physics at the Large Hadron Collider. We investigate novel decays of the Standard Model (SM) Higgs boson into leptophobic gauge bosons, which can be light in agreement with all experimental constraints. We study the associated production of the SM Higgs and the leptophobic gauge boson, which could be crucial to test the existence of a leptophobic force. Our results demonstrate that a simple gauge extension of the SM at the low scale is possible, without assuming very small couplings, in agreement with all experimental bounds, and testable at the LHC (arXiv:2003.09426).
We explore the implications of the new $g_\mu-2$ result for five models based on the $SU(3)_C\times SU(3)_L\times U(1)_N$ gauge symmetry and put our conclusions into perspective with LHC bounds. We show that previous conclusions found in the context of such models change if more than one heavy particle runs in the loop. Moreover, with the projected precision of the $g_\mu-2$ experiment at Fermilab in mind, we place lower mass bounds on the particles that contribute to the muon anomalous magnetic moment, assuming the anomaly is resolved otherwise. Lastly, we discuss how these models could accommodate the anomaly in agreement with existing bounds.
In a particle theory model whose most readily discovered new particle is a $\sim 1$ TeV bilepton resonance in same-sign leptons, currently being sought at CERN's LHC, there exist three quarks ${\cal D, S, T}$ which will be bound by QCD into baryons and mesons. We consider the decays of these additional baryons and mesons, whose detailed experimental study will be beyond the reach of the 14 TeV CERN collider and accessible only at an O(100 TeV) collider.
Recently, there has been great interest in beyond-the-Standard Model (BSM) physics involving new low-mass matter and mediator particles. One such model, $U(1)_{T3R}$, proposes a new U(1) gauge symmetry under which only right-handed fermions of the standard model are charged, as well as the addition of new vector-like fermions (e.g., $\chi_t$) and a new dark scalar particle ($\phi$) whose vacuum expectation value breaks the $U(1)_{T3R}$ symmetry. For this work, we perform a feasibility study to explore the mass ranges for which these new particles can be probed at the LHC. We consider the interaction $pp\rightarrow \chi_t t \phi$ in which the top quark decays purely hadronically, the $\chi_t$ decays semi-leptonically ($\chi_t\rightarrow W+b$), and the $\phi$ decays to two photons. The proposed search is expected to achieve a discovery reach with signal significance greater than $5\sigma$ for $\chi_t$ masses up to 1.8 TeV and $\phi$ masses as low as 1 MeV, assuming an integrated luminosity of 3000 fb$^{-1}$.
Scenarios in which right-handed light Standard Model fermions couple to a new gauge group, $U(1)_{T3R}$, can naturally generate a sub-GeV dark matter candidate. But such models necessarily have large couplings to the Standard Model, generally yielding tight experimental constraints. We show that the contributions to $g_\mu-2$ from the dark photon and dark Higgs largely cancel in the narrow window where all experimental constraints are satisfied, leaving a net correction consistent with recent measurements from Fermilab. These models inherently violate lepton universality, and their UV completions can include quark-flavor violation that can explain the $R_{K^{(\ast)}}$ anomalies observed at LHCb, after satisfying constraints on $Br(B_s\rightarrow\mu^+\mu^-)$ and various other constraints in the allowed parameter space of the model. This scenario can be probed by FASER, SeaQuest, SHiP, LHCb, Belle, and other experiments.
A non-Abelian $SU(2)_X$ gauge extension of the Standard Model is considered, under which leptons carry non-trivial charge. Gauge anomaly cancellation requires additional vectorlike fermions, which, along with neutral vector bosons that play the role of dark matter, correct the muon and electron anomalous magnetic moments as preferred by experiments. When collider bounds, electroweak precision data, the dark matter relic abundance, and lepton $g-2$ are considered, the model is viable only within a narrow range of parameter space, corresponding to a 1-3 TeV dark matter mass.
The $\eta$ and $\eta'$ mesons are almost unique in the particle universe: they are Goldstone bosons, and the dynamics of their decays are strongly constrained. The integrated eta-meson samples collected in earlier experiments amount to about $10^9$ events, dominated by the WASA-at-COSY experiment, considerably limiting the search for rare decays. A new experiment, REDTOP, is being proposed with the intent of collecting more than $10^{13}$ eta/yr ($10^{11}$ eta'/yr) for the study of rare $\eta$ decays.
Such statistics are sufficient for investigating several symmetry violations, and for searches for new particles beyond the Standard Model.
In a tagged-eta experiment, the fully constrained kinematics of the process allow searches for light dark matter with a "missing 4-momentum" technique which, at present, cannot be exploited by any other existing or proposed experiment.
The physics program and the detector for REDTOP will be discussed during the presentation.
Searches for permanent electric dipole moments (EDMs) of elementary particles constitute one of the most powerful tools to probe physics beyond the Standard Model (SM). The existence of an EDM could help explain the dominance of matter over antimatter in the universe, which is still considered one of the most puzzling questions in physics.
The JEDI Collaboration is conducting experimental EDM searches on protons and deuterons at the Cooler Synchrotron (COSY) storage ring at Forschungszentrum Jülich (Germany).
This talk will report on some of the major milestones achieved so far by the JEDI Collaboration, many of which were world-first achievements, including intermediate and preliminary results of the latest precursor EDM experiment conducted on deuterons. Furthermore, an overview of the activities towards a prototype ring by the newly formed CPEDM collaboration will also be briefly presented.
The REDTOP experiment aims at collecting more than $10^{13}$ $\eta$/yr and $10^{11}$ $\eta'$/yr for studying rare meson decays.
Such large statistics provide the basis for the investigation of several discrete symmetries and for searches for particles beyond the Standard Model.
The physics program and the ongoing sensitivity studies will be discussed during the presentation.
The Gamma Factory is a proposal to back-scatter laser photons off a beam of partially-stripped ions at the LHC, producing a beam of $\sim 10$ MeV to $1$ GeV photons with intensities of $10^{16}$ to $10^{18}~\text{s}^{-1}$. This implies $\sim 10^{23}$ to $10^{25}$ photons on target per year, many orders of magnitude greater than existing accelerator light sources and also far greater than all current and planned electron and proton fixed target experiments. We determine the Gamma Factory's discovery potential through "dark Compton scattering", $\gamma e \to e X$, where $X$ is a new, weakly-interacting particle. For dark photons and other new gauge bosons with masses in the 1 to 100 MeV range, the Gamma Factory has the potential to discover extremely weakly-interacting particles with just a few hours of data and will probe couplings as low as $\sim 10^{-9}$ with a year of running. The Gamma Factory therefore may probe couplings lower than all other terrestrial experiments and is highly complementary to astrophysical probes. We outline the requirements of an experiment to realize this potential and determine the sensitivity reach for various experimental configurations.
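As a consistency check of the quoted rates (our own back-of-the-envelope, assuming a canonical $\sim 10^7$ s of beam time per year):
$$N_\gamma \sim \left(10^{16}\text{--}10^{18}~\mathrm{s}^{-1}\right)\times 10^7~\mathrm{s} \approx 10^{23}\text{--}10^{25}~\text{photons on target per year},$$
in agreement with the figures above.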
A search is presented for new physics beyond the standard model, including versions of supersymmetry characterized by R-parity violation (RPV) and Stealth SUSY. The search targets events with two top quarks, no significant additional missing transverse momentum, and many light-flavor jets in the final state of top squark decays. The Run 2 data were collected with the CMS detector at the LHC from 2016 to 2018, and correspond to a total integrated luminosity of 137 fb$^{-1}$. The search is performed using events with at least seven jets and exactly one electron or muon. A neural network based on event-shape and kinematic variables is used for background discrimination. The method of gradient reversal is used to ensure that the neural network score is independent of jet multiplicity, as required by the primary background-estimation method. Top squark masses up to 670 (870) GeV are excluded at 95% confidence level for the RPV (Stealth) scenario, and the maximum observed local significance is 2.8 standard deviations, for the RPV scenario with a top squark mass of 400 GeV.
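For readers unfamiliar with gradient reversal, a minimal PyTorch-style sketch of the layer is given below; it illustrates the general technique, not the analysis's actual code. The layer acts as the identity in the forward pass but negates (and scales) the gradient in the backward pass, so an adversarial head can penalize any dependence of the network score on jet multiplicity.

    # Minimal sketch of a gradient-reversal layer (PyTorch); illustrative only.
    import torch

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lambd: float):
            ctx.lambd = lambd
            return x.view_as(x)  # identity in the forward pass

        @staticmethod
        def backward(ctx, grad_output):
            # Flip (and scale) the gradient flowing back into the features,
            # discouraging features that let an adversary predict jet multiplicity.
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)

In use, the shared features would pass through grad_reverse before an adversarial head trained to predict jet multiplicity, decorrelating the main classifier score from that variable.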
In a well-motivated class of beyond-the-Standard-Model scenarios, dark matter interacts mainly with SM neutrinos via a neutrinophilic mediator. This scenario can leave a striking signature in neutrino detectors: the mono-neutrino signature. In this process, invisible particles (either dark matter or the mediators) are radiated off neutrinos when they undergo charged-current weak interactions, resulting in missing transverse momentum with respect to the incoming neutrino. In this talk we discuss the possibility of probing neutrinophilic scalar mediators via the mono-neutrino signature at the proposed Forward Physics Facility (FPF) at the LHC. Because of the high-energy neutrino flux produced in the forward direction of the LHC detectors, the FPF will play a leading role in probing neutrinophilic scalars in so-far unconstrained parameter space and shed light on the origin of neutrinophilic dark matter scenarios.
We present a phenomenological investigation of color-octet scalars (sgluons) in supersymmetric models with Dirac gaugino masses that feature an explicitly broken $R$ symmetry ($R$-broken models). We have constructed such models by augmenting minimal $R$-symmetric models with a set of supersymmetric and softly supersymmetry-breaking operators that explicitly break $R$ symmetry. We have found new features that appear as a result of $R$ symmetry breaking, including enhancements to existing decay rates, novel tree- and loop-level decays, and enhanced cross sections for single sgluon production. We have also explored constraints on these models from the Large Hadron Collider. We find that, in general, $R$ symmetry breaking quantitatively affects existing limits on color-octet scalars, closing loopholes for light CP-odd (pseudoscalar) sgluons while perhaps opening one for a light CP-even (scalar) particle. Altogether, scenarios with broken $R$ symmetry and two sgluons at or below the TeV scale can be accommodated by existing searches.
We explore the implications of supersymmetric grand unified theories for the muon anomalous magnetic moment (muon g-2). The discrepancy between the Standard Model (SM) prediction and the experimental measurements of muon g-2 can be resolved by contributions from supersymmetric particles, and the fundamental parameter space of the muon g-2 resolution typically favors light sleptons (<~ 800 GeV), charginos (<~ 900 GeV), and an LSP neutralino (<~ 600 GeV). The current LHC experiments can probe the mass scales of these particles, and LHC Run 3 is expected to have a stronger impact. We find that chargino masses can currently be probed up to about 600 GeV, and LHC Run 3 is expected to test chargino masses up to about 700 GeV. Even though there is no direct impact on the slepton masses, these experiments are able to probe sleptons up to about 350 GeV. However, these scales depend on the chirality of the lighter slepton states, and one can still realize solutions with lighter charginos when the lighter slepton is mostly right-handed.
Clockwork models can explain the flavor hierarchies in the Standard Model quark and lepton spectrum. We construct supersymmetric versions of such flavor clockwork models. The zero modes of the clockwork are identified with the fermions and sfermions of the Minimal Supersymmetric Standard Model. In addition to generating a hierarchical fermion spectrum, the clockwork also predicts a specific flavor structure for the soft SUSY breaking sfermion masses. We find sizeable flavor mixing among first and second generation squarks. Constraints from Kaon oscillations require the masses of either squarks or gluinos to be above a scale of ~3 PeV.
Though collider searches are constraining the supersymmetric parameter space, generic model-independent bounds on sneutrinos remain very low. We calculate new model-independent lower bounds for general supersymmetric scenarios with sneutrino LSPs and NLSPs. By recasting ATLAS LHC exotic searches in mono-boson channels, we place upper bounds on the cross section for $pp\rightarrow\tilde{\nu}\tilde{\nu}+V$ processes in the mono-photon, mono-$Z$, and mono-Higgs channels. We also evaluate the LHC discovery potential for sneutrinos in the HL-LHC 3 $\text{ab}^{-1}$ run.
In this work we study the collider phenomenology of color-octet scalars (sgluons) in minimal supersymmetric models endowed with a global continuous R symmetry. We systematically catalog the significant decay channels of scalar and pseudoscalar sgluons and identify novel features that are natural in these models. These include decays in nonstandard diboson channels, such as to a gluon and a photon; three-body decays with considerable branching fractions; and long-lived particles with displaced vertex signatures. We also discuss the single and pair production of these particles and show that they can evade existing constraints from the Large Hadron Collider, to varying extents, in large regions of reasonable parameter space. We find, for instance, that a 725 GeV scalar and a 350 GeV or lighter pseudoscalar can still be accommodated in realistic scenarios.
Naturalness suggests that the masses of the lightest electroweak gauginos (electroweakinos) are near the electroweak scale and, as a result, are within the scope of current LHC searches. However, LHC searches have not yet provided evidence of any supersymmetric (SUSY) particles. While exclusion limits for SUSY particles have been commonly reported assuming a simplified model where the branching ratio of targeted decays is 100%, this is not realistic in the minimal supersymmetric standard model (MSSM) and does not represent the effect of a large number of competing production and decay processes. If the decay branching ratios of these particles are lower, however, the reported mass limits are likely to be optimistic.
Focusing on chargino-neutralino production in a conventionally considered scenario where the LSP is bino-like and the next-lightest SUSY particle is wino-like, the electroweakino decay branching ratios into on-shell SM bosons can be expressed in terms of electroweak phenomenological MSSM (pMSSM) parameters. This talk will present the dependence of the branching ratios on the electroweak MSSM parameters, restate mass limits from simplified-model ATLAS searches in terms of the pMSSM, and, ultimately, present the implications of a pMSSM-based approach for SUSY searches.
The application of machine learning methods in high energy physics has seen tremendous success in recent years, with rapidly growing use cases. A key aspect of improving the performance of a given machine learning model is the optimization of its hyperparameters, which is usually computationally expensive. A framework has been developed to provide a high-level interface for automatic hyperparameter optimization that utilizes the ATLAS grid computing resources with hardware acceleration from GPU machines. The framework is equipped with a wide variety of hyperparameter optimization algorithms, distributed optimization schemes, an intelligent job-scheduling strategy based on available resources, flexible hyperparameter configuration-space generation, and adaptation to the ATLAS intelligent Data Delivery Service. An example use case, the hyperparameter optimization of a Boosted Decision Tree model in the $HH \to b\bar{b}\gamma\gamma$ non-resonant analysis in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector, is also presented.
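As a rough illustration of what such a high-level optimization interface looks like, the sketch below uses the open-source Optuna library and a generic BDT on a synthetic dataset; the ATLAS framework's actual API, algorithms, and grid integration differ.

    # Hedged sketch of automated hyperparameter optimization with Optuna.
    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20)  # stand-in dataset

    def objective(trial):
        # Flexible configuration-space generation for a BDT.
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
            "max_depth": trial.suggest_int("max_depth", 2, 8),
            "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        }
        clf = GradientBoostingClassifier(**params)
        # 5-fold cross-validated AUC as the figure of merit.
        return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)
    print(study.best_params)

In the framework described in the talk, each trial would instead be dispatched as a grid job, potentially on GPU resources.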
The intelligent Data Delivery Service (iDDS) has been developed to cope with the huge increase in computing and storage resource usage expected in upcoming Large Hadron Collider (LHC) data taking. It is designed to intelligently orchestrate workflow and data management systems, decoupling data pre-processing, delivery, and main processing in various workflows. It is an experiment-agnostic service built around a workflow-oriented structure with Directed Acyclic Graph (DAG) support, serving existing and emerging use cases in ATLAS and other experiments. Here we will present the motivation for iDDS, its design schema and architecture, use cases and current status for ATLAS and a Rubin Observatory exercise, and plans for the future.
The Reproducible Open Benchmarks for Data Analysis Platform (ROB) allows for the evaluation of different data analysis algorithms in a controlled, competition-style format [1]. One example of such a comparison and evaluation is "The Machine Learning Landscape of Top Taggers" paper, which compiled and compared multiple top-tagger neural networks [2]. Motivated by the significant amount of time required to organize and evaluate such benchmarks, ROB automates the collection, execution, and comparison of participant submissions. Although convenient, ROB currently requires participants to package their submissions into Docker containers, which can pose an additional burden given Docker's steep learning curve.
To increase ease of use, we implement support for the widely used Jupyter Notebook format [3] in ROB. Jupyter Notebooks are a popular tool that many physicists are already familiar with; they combine live code, comments, and documentation inside one document. By utilizing the Papermill package [4], we allow ROB users to submit their implementations directly as Jupyter Notebooks, so that different data analysis algorithms can be evaluated without packaging the code into Docker containers (a minimal usage sketch follows the references below). To demonstrate functionality and spur usage of ROB, we provide demos using bottom- and top-tagging neural networks that display the application of ROB within particle physics as a competition-style platform for algorithm evaluation [5].
References:
[1] “Reproducible and Reusable Data Analysis Workflow Server”, https://github.com/scailfin/flowserv-core
[2] Kasieczka, G., Plehn, T., Butter, A., Cranmer, K., Debnath, D., Dillon, B. M., et al. (2019). The Machine Learning landscape of top taggers. SciPost Physics, 7(1), 014.
[3] “Jupyter Notebooks”, https://jupyter.org/
[4] “Papermill”, https://papermill.readthedocs.io/en/latest/
[5] “Particle Physics”, https://github.com/anrunw/ROB
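A minimal usage sketch of the Papermill-based execution path is shown below; the file names and parameter names are hypothetical placeholders, not part of the ROB interface.

    # Sketch: executing a participant's notebook submission with Papermill.
    # "submission.ipynb" and the parameters are hypothetical placeholders.
    import papermill as pm

    pm.execute_notebook(
        "submission.ipynb",           # participant's analysis notebook
        "submission_executed.ipynb",  # executed copy with outputs captured
        parameters={"data_path": "benchmark_data.h5", "n_events": 100000},
    )

Parameterizing the notebook this way lets the benchmark inject its own input files and settings while capturing the executed outputs for scoring.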
We introduce CaloFlow, a fast detector simulation framework based on normalizing flows. For the first time, we demonstrate that normalizing flows can reproduce many-channel calorimeter showers with extremely high fidelity, providing a fresh alternative to computationally expensive GEANT4 simulations, as well as other state-of-the-art fast simulation frameworks based on GANs and VAEs. Besides the usual histograms of physical features and images of calorimeter showers, we introduce a new metric for judging the quality of generative modeling: the performance of a classifier trained to differentiate real from generated images. We show that GAN-generated images can be identified by the classifier with 100% accuracy, while images generated from CaloFlow are able to fool the classifier much of the time. More broadly, normalizing flows offer several advantages compared to other state-of-the-art approaches (GANs and VAEs), including: tractable likelihoods; stable and convergent training; and principled model selection. Normalizing flows also provide a bijective mapping between data and the latent space, which could have other applications beyond simulation, for example, to detector unfolding.
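The classifier metric can be sketched as follows (our own generic illustration with stand-in arrays, not the CaloFlow code): train a binary classifier to separate real from generated showers; an AUC near 0.5 means the generator fools the classifier, while an AUC near 1.0 exposes it.

    # Sketch of the classifier-based quality metric for generative models.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score

    # Stand-ins for flattened calorimeter shower images (real vs. generated).
    x_real = np.random.rand(2000, 504)
    x_gen = np.random.rand(2000, 504)

    X = np.vstack([x_real, x_gen])
    y = np.concatenate([np.ones(len(x_real)), np.zeros(len(x_gen))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    # AUC ~ 0.5: generated showers indistinguishable; AUC ~ 1.0: easily flagged.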
We put forth a technique to generate images of particle trajectories (particularly electrons and protons) in a liquid argon time projection chamber (LArTPC). LArTPCs are a type of particle physics detector used by several current and future experiments focused on studies of the neutrino. We implement a quantized variational autoencoder and an autoregressive model that produces momentum-conditioned images with LArTPC-like features. We adopt a hybrid approach to generative modeling, combining the decoder from the autoencoder with an explicit generative model for the latent space to produce momentum-conditioned images of particle trajectories in a LArTPC.
Current measurements of Standard Model parameters suggest that the electroweak vacuum is metastable. This metastability has important cosmological implications, because large fluctuations in the Higgs field could trigger vacuum decay in the early universe. For the false vacuum to survive, interactions which stabilize the Higgs during inflation---e.g., inflaton-Higgs interactions or non-minimal couplings to gravity---are typically necessary. However, the post-inflationary preheating dynamics of these same interactions could also trigger vacuum decay, thereby recreating the problem we sought to avoid. These dynamics are often assumed to be catastrophic for models exhibiting scale invariance, since these generically allow for unimpeded growth of fluctuations. In this talk, we examine the dynamics of such "massless preheating" scenarios and show that the competing threats to metastability can nonetheless be balanced to ensure viability. We find that fully accounting for both the backreaction from particle production and the effects of perturbative decays reveals a large number of disjoint "islands of (meta)stability" over the parameter space of couplings. Ultimately, the interplay among Higgs-stabilizing interactions plays a significant role, leading to a sequence of dynamical phases that effectively extend the metastable regions to large Higgs-curvature couplings.
I will present the Sejong Suite, an extensive collection of state-of-the-art high-resolution cosmological hydrodynamical simulations spanning a variety of cosmological and astrophysical parameters, primarily developed for modeling the Lyman-Alpha forest and the high-redshift cosmic web. Adopting a particle-based implementation, we follow the evolution of gas, dark matter (cold and warm), massive neutrinos, and dark radiation, and consider several combinations of box sizes and number of particles. Noticeably, for the first time, we simulate extended mixed scenarios describing the combined effects of warm dark matter, neutrinos, and dark radiation, modeled consistently by taking into account the neutrino mass splitting. Along the way, I will also highlight some new results on cosmological neutrinos and the dark sector focused on the matter and flux statistics.
The axion is a well-motivated candidate for the inflaton, as the radiative corrections that spoil many single-field models are avoided by virtue of its shift symmetry. However, axions generically couple to gauge sectors. As the axion rolls through its potential, this coupling can result in the production of a co-evolving thermal bath, a situation known as "warm inflation." Inflationary dynamics in this warm regime can be dramatically altered and result in significantly different observable predictions. In this talk, I will show that for large regions of parameter space, axion inflation models once assumed to be safely "cold" are in fact warm, and must be reevaluated in this context.
The cold dark matter (CDM) paradigm with weakly interacting massive particles can successfully explain the observed dark matter relic density on cosmic scales and the large-scale structure of the Universe. However, a number of observations at the satellite-galaxy scale appear inconsistent with CDM simulations; this is known as the small-scale problem of CDM. In recent years, it has been demonstrated that self-interacting dark matter (SIDM) with a light mediator offers a reasonable explanation for the small-scale problem. We adopt a simple SIDM model and focus on the effects of Sommerfeld enhancement. In this model, the dark matter candidate is a leptonic scalar particle with a light mediator. We have found favored regions of the parameter space, with masses and coupling strengths that generate a relic density consistent with the observed CDM relic density. Furthermore, this model satisfies the constraints from recent direct and indirect dark matter searches, as well as from the effective number of neutrinos and the observed small-scale structure of the Universe. In addition, with the favored parameters the model can resolve the discrepancies between astrophysical observations and $N$-body simulations.
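For orientation (a textbook formula, not necessarily the exact expression used in this work), the s-wave Sommerfeld enhancement for an attractive Coulomb-like potential with dark coupling $\alpha_X$ and relative velocity $v$ is
$$S = \frac{\pi\alpha_X/v}{1-e^{-\pi\alpha_X/v}},$$
which grows as $1/v$ at low velocities; a massive (Yukawa) mediator additionally produces resonant peaks in $S$.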
We present models of resonant self-interacting dark matter in a dark sector with its own QCD-like dynamics, based on analogies to the meson spectra of Standard Model QCD. For dark mesons made of two light quarks, we present a simple model that realizes resonant self-interaction (analogous to the $\phi$-K-K system) and thermal freeze-out. We also consider asymmetric dark matter composed of heavy and light dark quarks to realize a resonant self-interaction (analogous to the $\Upsilon(4S)$-B-B system) and discuss the experimental probes of both setups. Finally, we comment on the possible resonant self-interactions already built into the SIMP and ELDER mechanisms, making use of lattice results to determine feasibility.
Dark matter self-interactions have been proposed as a solution to various astrophysical small-scale structure anomalies. We explore the scenario in which dark matter self-interacts through a continuum of low-mass states. This happens if dark matter couples to a strongly-coupled nearly-conformal hidden sector. This type of theory is holographically described by brane-localized dark matter interacting with bulk fields in a slice of 5D anti-de Sitter space. The long-range potential in this scenario depends on a non-integer power of the spatial separation. We find that continuum mediators introduce novel power-law scalings for the scattering cross section, opening new possibilities for dark matter self-interaction phenomenology.
I will discuss a dark matter production mechanism based on decays of a messenger WIMP-like state into a pair of dark matter particles that self-interact via the exchange of a light, stable mediator. A natural by-product of this mechanism is the possibility of a late-time transition to a subdominant dark radiation component, which increases the present-day Hubble rate. A simple realization of the proposed mechanism was studied in a Higgs-portal dark matter model. We found a significant region of the parameter space that leads to a mild relaxation of the Hubble tension while simultaneously having the potential to address the small-scale structure problems of ΛCDM.
We present two distinct models which rely on first-order phase transitions in a dark sector. The first is a minimal model for baryogenesis which employs a new dark SU(2) gauge group with two doublet Higgs bosons, two lepton doublets, and two singlets. The singlets act as a neutrino portal that transfers the generated baryon asymmetry to the Standard Model. The model predicts extra relativistic degrees of freedom, exotic decays of the Higgs and Z bosons, and stochastic gravitational waves detectable by future experiments.
The second model additionally produces (asymmetric) dark matter, with the dark sector expanded to an SU(3)xSU(2)xU(1) gauge group. Dark matter is composed of dark neutrons, or of dark protons and pions. This model is highly discoverable at both dark matter direct detection and dark photon search experiments, and the strong dark matter self-interactions may ameliorate small-scale structure problems.
The XENONnT experiment has made great commissioning strides in the last year. Operating at the INFN Gran Sasso National Laboratory in Italy, XENONnT substantially improves upon its predecessor, XENON1T, which to date is the most sensitive direct-detection dark matter experiment for spin-independent WIMPs above 6 GeV/c^2. As part of its multi-pronged physics program, XENONnT aims to reach a sensitivity of 2.6x10^-48 cm^2 for the WIMP-nucleon cross section. In this talk, I will describe the improved subsystems (liquid purification, radon distillation, the neutron veto, and data processing, among others) and their impact on the various physics searches.
The Scintillating Bubble Chamber (SBC) is a rapidly developing new technology for 0.7 - 7 GeV nuclear recoil detection. Demonstrations in liquid xenon at the few-gram scale have confirmed that this technique combines the event-by-event energy resolution of a liquid-noble scintillation detector with the world-leading electron-recoil discrimination capability of the bubble chamber, and in fact maintains that discrimination capability at much lower thresholds than traditional Freon-based bubble chambers. The promise of unambiguous identification of sub-keV nuclear recoils in a scalable detector makes this an ideal technology for both GeV-mass WIMP searches and CEvNS detection at reactor sites. We will present progress from the SBC Collaboration towards the construction of a pair of 10-kg argon bubble chambers at Fermilab and SNOLAB to test the low-threshold performance of this technique in a physics-scale device and search for dark matter, respectively.
The Scintillating Bubble Chamber (SBC) Collaboration is constructing a 10-kg liquid argon bubble chamber with scintillation readout. The goal for this new technology is to achieve a nuclear recoil detection threshold as low as 100 eV with near complete discrimination against electron recoil events. Following initial characterization in a near-surface site at Fermilab, an underground deployment is planned at SNOLAB for a dark matter search. The sub-keV nuclear recoil threshold would enable sensitivity to GeV-mass WIMPs, and a future ton-scale version could probe for dark matter down to the solar neutrino floor. The same technology has been considered for a first measurement of coherent elastic neutrino nucleus scattering (CEvNS) with reactor neutrinos. With high statistics and high signal-to-background, precision searches for beyond-standard-model physics would be possible. I will discuss the physics case for the liquid argon bubble chamber technology, and SBC studies of backgrounds and nuclear recoil calibration approaches.
The detection of low-mass dark matter is advancing with the development of new experimental techniques. Among the proposed setups, the superfluid helium-4 detector covers an extensive detection range, from keV to GeV dark matter masses. I will present a complete theoretical framework for all processes within the superfluid, filling in the missing theory for sub-GeV DM detection. First, we use effective field theories to construct the interaction Lagrangian between quasiparticles. Second, we use a spontaneously broken U(1) gauge symmetry and a current-element method to derive the interaction between test particles and quasiparticles. Finally, I will discuss the relevant cross sections and decay rates.
Recent theoretical calculations have shown that it is possible to attempt the direct detection of dark matter in the laboratory through its gravitational interaction alone. This is particularly relevant around the well-motivated Planck mass scale (22 micro-g or $10^{19}$ GeV). The Windchime collaboration is working on arrays of mechanical accelerometers with quantum-enhanced readout to ultimately achieve this goal. In this talk, I will present the idea of Windchime, our recent prototype setup, sensor development, and simulation and analysis frameworks.
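The quoted Planck-mass scale can be checked directly:
$$M_{\mathrm{Pl}} = \sqrt{\hbar c/G} \approx 2.2\times10^{-8}~\mathrm{kg} \approx 22~\mu\mathrm{g} \approx 1.2\times10^{19}~\mathrm{GeV}/c^2.$$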
In this talk, we correct previous work on magnetic charge in the presence of a photon mass. We show that, contrary to previous claims, this system has a very simple, closed-form solution: the Dirac string potential multiplied by an exponentially decaying factor. Interesting features of this solution are discussed, namely: (i) the Dirac string becomes a real feature of the solution; (ii) the breaking of gauge symmetry via the photon mass leads to a breaking of the rotational symmetry of the monopole's magnetic field; and (iii) the Dirac quantization condition is potentially altered.
Quantum field theories generally contain small quantum excitations around a true vacuum, which we call particles, and large classical structures, called solitons, that interpolate between different degenerate vacua. Often the solitons have a topological character and are then also known as topological defects, of which kinks, domain walls, strings, and magnetic monopoles are all examples. After a quantum phase transition, the quantum vacuum can break up to form these classical topological defects. We study such phase transitions with global symmetry breaking and their dynamics, where the only interactions are with external parameters that induce the phase transition. We evaluate the number densities of the defects in 1, 2, and 3 dimensions (kinks, vortices, and monopoles, respectively) and find that they scale as $t^{-d/2}$ and evolve towards attractor solutions that are independent of the externally controlled time dependence.
A method to construct the asymptotic eigenstates of two-dimensional adjoint QCD in all parton sectors is described. It is used to explain known properties of the spectrum of QCD$_{2A}$, as well as the basis of a numerical approach to tackle the full theory. First results in a discrete approximation and a continuous formulation are presented. Prospects to uncover the true single-particle content of the theory are discussed.
Non-topological solitons like Q-balls and Q-shells are fascinating field theory objects. They may also relate to what lies beyond the Standard Model, for instance as a macroscopic dark matter candidate. I describe recent improvements in the analytic understanding of these objects, leading to accurate descriptions of their essential characteristics, such as size, charge, and mass. I also discuss new classes of solutions that this new understanding has revealed. These advances pave the way for systematic investigations of how Q-balls and Q-shells can interact with Standard Model fields.
A search for resonant Higgs boson pair production in the four b-jet final state is conducted. The analysis uses 36 fb$^{-1}$ of pp collision data at $\sqrt{s}$ = 13 TeV collected with the ATLAS detector. The analysis is divided into two regimes, targeting Higgs boson decays which are reconstructed as pairs of b-tagged small-radius jets or as single large-radius jets associated with b-tagged track-jets. Spin-0 and spin-2 benchmark signal models are considered, both of which correspond to resonant HH production via gluon–gluon fusion and decaying to two Standard Model Higgs bosons. No significant evidence for a signal is observed. Upper limits are set on the production cross-section times branching ratio to Higgs boson pairs of a new resonance in the mass range from 251 GeV to 3 TeV.
We present a search for non-resonant di-Higgs production in the $HH\rightarrow b\bar{b}\gamma\gamma$ decay channel. The measurement uses 139 $\mathrm{fb}^{-1}$ of pp collisions recorded by the ATLAS experiment at a center-of-mass energy of 13 TeV. Selected events are separated into multiple regions, targeting both the Standard Model (SM) signal and Beyond the Standard Model (BSM) signals with modified Higgs self-couplings. Further details on the optimization of the event selection are highlighted. No excess with respect to background expectations is found, and upper limits at 95% confidence level are set on the di-Higgs production cross sections. The observed (expected) limit on the Standard Model cross section is 130 fb (180 fb), corresponding to 4.1 (5.5) times the predicted value. The observed (expected) Higgs trilinear coupling modifier is constrained to lie in the interval [-1.5, 6.7] ([-2.4, 7.7]).
After the Higgs boson, with a mass of 125 GeV, was discovered in 2012, studies of single Higgs boson production largely confirmed that this particle has properties similar to those of the Higgs boson predicted by the Standard Model (SM). However, physics beyond the SM is clearly required to explain many observed phenomena in nature, and there remains the possibility that the Higgs boson acts as a portal to BSM physics. Studies of Higgs boson pair production (HH) represent the next crucial step in constraining the Higgs sector, allowing exploration of resonant HH production as well as refined measurements of the Higgs boson self-coupling. While previous searches have focused on HH production in the gluon-gluon and vector-boson fusion modes, this analysis documents a new search, with 136 fb$^{-1}$ of pp collisions at $\sqrt{s} = 13$ TeV collected by ATLAS in LHC Run 2, for both resonant and non-resonant HH production in association with a vector boson (VHH). Three channels are considered, corresponding to $Z(\to\ell\ell)HH$, $Z(\to\nu\nu)HH$, and $W(\to\ell\nu)HH$, in order to have good coverage of the different final states. Only $H\to b\bar{b}$ is considered, for simplicity and for high statistics. The analysis benefits from small backgrounds and attempts to set limits on VHH production for the first time. Analysis techniques and expected significance will be presented.
Precision measurement of Higgs boson couplings to SM particles is a central task at the LHC today and for the future HL-LHC. Due to the $\sim$ O(nb) $t\bar{t}$ cross section and the large top Yukawa coupling, measurements of the interaction of the Higgs with top quarks are particularly compelling. The $t\bar{t}HH$ signal can be used to probe this coupling and also provides a direct measurement of the trilinear Higgs self-coupling. We search for $t\bar{t}HH$ production with the CMS detector at the LHC, both in the SM and in an EFT model. In the SM search we look for the semi-leptonic decay of the top-quark pair and the decay of both Higgs bosons to b-quarks, using the full Run 2 data. We also develop a simplified EFT model to study this signal independently of $t\bar{t}H$, in which dimension-6 and dimension-8 gauge-invariant operators are included to modify $t\bar{t}HH$ while keeping $t\bar{t}H$ unchanged at tree level. In this model, which includes a BSM $t\bar{t}HH$ vertex, Higgs bosons are produced at higher $p_T$ than in SM production. Due to the resulting Lorentz boost, we observe an enhancement around the Higgs mass in the single b-jet mass spectrum.
A search for Higgs boson pair production in the $b\bar{b}\ell\ell$+MET final state with the ATLAS experiment will be presented. The analysis uses the full Run 2 dataset (139 fb$^{-1}$) collected at the LHC in pp collisions at $\sqrt{s} = 13$ TeV. Di-Higgs production from the SM trilinear Higgs boson interaction and from BSM resonant decays is investigated in a final state containing two jets (one or two tagged as b-jets) and two leptons of opposite electric charge. Three channels, in which one Higgs boson decays via $H\to b\bar{b}$ and the other via $H\to WW^*/ZZ^*/\tau^+\tau^-$, are included as di-Higgs signal contributions in the analysis. A deep neural network is used for event selection, improving the ATLAS di-Higgs detection sensitivity. Expected upper limits on the cross sections were obtained from MC-simulated events.
Higgs boson pair production (HH) is one of the most interesting processes to study at the LHC, as it allows us to probe the Higgs boson self-coupling and the associated parameters of the Higgs potential, as well as to search for physics beyond the standard model. The $b\bar{b}\tau\tau$ final state is one of the most sensitive channels for HH studies, owing to an appreciable branching ratio and a relatively clean background. In this talk, the methods used to calculate a few of the more important theoretical uncertainties in this analysis are presented, in particular for perturbative QCD (pQCD) calculations and parton showers for single-Higgs backgrounds. In pQCD, the three main sources of uncertainty are (i) missing higher orders in the perturbative expansion of the partonic cross section, (ii) parton distribution functions, and (iii) the experimental determination of the strong coupling constant. These pQCD uncertainties correspond to parton-level final states. Since the simulated samples also pass through showering and hadronization generators, which convert the parton-level cross section to a hadron-level cross section, additional uncertainties arise from (i) the modelling of parton showering and hadronization, through the algorithm or its parameters, and (ii) the matching to matrix-element next-to-leading-order calculations.
NOvA is a long-baseline neutrino oscillation experiment designed to precisely measure the neutrino oscillation parameters. We do this by directing a beam of predominantly muon neutrinos from Fermilab towards northern Minnesota and measuring the rate of electron-neutrino appearance. The experiment consists of two functionally equivalent detectors, each located 14.6 mrad off the central axis of Fermilab's nearly 700 kW NuMI beam, the world's most intense neutrino beam. Both the Near Detector, located 1 km downstream from the beam source, and the Far Detector, located 810 km away in Ash River, MN, were constructed from plastic extrusions filled with liquid scintillator. Because the Near Detector data are used to accurately determine the expected rate at the Far Detector, it is very important to have automated and accurate monitoring of the recorded data, so that any hardware, data-acquisition, or beam issues arising in the 344k (20k) channels of the Far (Near) Detector that could affect the quality of the physics datasets are promptly identified. I will present the monitoring techniques and systems used at the various stages of data taking, and show the NOvA detectors' data-taking performance through the end of the most recent beam run.
NOvA is a long-baseline neutrino experiment optimized to observe the oscillation of muon neutrinos to electron neutrinos. It uses a high purity muon neutrino beam produced at Fermilab with central energy of approximately 1.8 GeV. NOvA consists of a near detector located 1 km downstream of the neutrino production target at Fermilab and a far detector located 810 km away in Ash River, Minnesota. Neutrino cross-section measurements performed at the near detector are affected by a large uncertainty on the absolute neutrino flux. Since the neutrino-electron elastic-scattering cross section can be accurately calculated, the measured rate of these interactions can be used to constrain the neutrino flux. We present the status of the neutrino-electron elastic-scattering measurement using a Convolutional Neural Network (CNN) to identify signal events with high purity.
NOvA (NuMI Off-Axis $\nu_e$ Appearance) is a long-baseline neutrino oscillation experiment composed of two functionally identical detectors, a 300-ton Near Detector and a 14-kton Far Detector, separated by 810 km and placed 14 mrad off-axis from the NuMI neutrino beam created at Fermilab. This configuration enables NOvA's rich neutrino physics program, which includes measuring neutrino mixing parameters, determining the neutrino mass hierarchy, and probing CP violation in the leptonic sector. The NOvA Test Beam experiment uses a scaled-down 30-ton detector to analyze tagged beamline particles. A new tertiary beamline deployed at Fermilab can select and identify electrons, muons, pions, kaons, and protons with momenta ranging from 0.3 to 2.0 GeV/c. The Test Beam data will give NOvA a better understanding of the largest systematic uncertainties impacting its analyses, including the detector response, calibration, and hadronic energy resolution. In this talk, I will present the status and future plans of the NOvA Test Beam program, along with the most recent results.
NOvA is a long-baseline neutrino experiment based at Fermilab that studies neutrino oscillation parameters via electron neutrino appearance and muon neutrino disappearance. In these measurements, we compare the Far Detector data to a predicted energy spectrum constrained by the Near Detector (ND) data. The ND data is simulated using GENIE, with the neutrino cross section model adjusted to better describe the data by modifying the rate of Meson Exchange Current (MEC) interactions and the Final State Interactions. To characterize the performance of these adjustments, the ND simulation and data are divided into a set of samples based on multiplicity and topology. A fit to constrain MEC and other cross section parameters using these samples will be described.
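The final step can be pictured as a binned template fit in which the MEC normalization (and, in practice, shape parameters) floats. A toy sketch with invented numbers, not the NOvA fit itself:

```python
# Toy one-parameter template fit: scale the MEC template to match "data".
import numpy as np
from scipy.optimize import minimize

data   = np.array([120., 150., 170., 140., 90.])   # placeholder bin counts
qe_res = np.array([100., 120., 130., 110., 80.])   # non-MEC prediction
mec    = np.array([ 15.,  30.,  35.,  25., 10.])   # MEC template

def chi2(theta):
    pred = qe_res + theta[0] * mec
    return np.sum((data - pred) ** 2 / np.maximum(pred, 1.0))

res = minimize(chi2, x0=[1.0], bounds=[(0.0, 5.0)])
print(f"fitted MEC scale: {res.x[0]:.2f}")
```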
NOvA is a long-baseline neutrino oscillation experiment, designed to make precision neutrino oscillation measurements using $\nu_\mu$ disappearance and $\nu_e$ appearance. It consists of two functionally equivalent detectors and utilizes the Fermilab NuMI neutrino beam. NOvA uses a convolutional neural network for particle identification of $\nu_e$ events in each detector. As part of the validation process of this classifier’s performance, we apply a data-driven technique called Muon Removal. In a Muon-Removed Electron-Added study we select $\nu_\mu$ charged current candidates from both data and simulation in our Near Detector and then replace the muon candidate with a simulated electron of the same energy. In a Muon-Removed Decay-in-Flight study we remove the muonic hits from events where cosmic muons entering the detector have decayed in flight, resulting in samples of pure electromagnetic showers. Each sample is then evaluated by our classifier to obtain selection efficiencies. Our recent analysis found agreement between the selection efficiencies of data and simulation within our uncertainties, showing that our classifier selection is generally robust in $\nu_e$ charged current signal selection.
The NOvA experiment uses a convolutional neural network (CNN) that analyzes topological features to determine neutrino flavor. Alternative approaches to flavor identification using machine learning are being investigated with the goal of developing a network trained with both event-level and particle-level images in addition to reconstructed physical variables while maintaining the performance of the CNN. Such a network could be used to analyze the individual prediction importances of these inputs. An original network that uses a combination of transformer and MobileNet CNN blocks will be discussed.
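A hybrid of this kind can be sketched as a multi-input model; the shapes and layer choices below are invented, and the transformer blocks for particle-level inputs are elided for brevity:

```python
# Multi-input model: MobileNet-style depthwise-separable convolutions over an
# event image, concatenated with reconstructed physical variables.
import tensorflow as tf

img_in = tf.keras.Input(shape=(80, 100, 1), name='event_image')
var_in = tf.keras.Input(shape=(8,), name='reco_variables')

x = tf.keras.layers.SeparableConv2D(32, 3, activation='relu')(img_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.SeparableConv2D(64, 3, activation='relu')(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

h = tf.keras.layers.Concatenate()([x, var_in])
h = tf.keras.layers.Dense(64, activation='relu')(h)
out = tf.keras.layers.Dense(3, activation='softmax', name='flavor')(h)

model = tf.keras.Model([img_in, var_in], out)
```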
The upgrade of the Mu2e experiment at Fermilab, Mu2e-II, is proposed to improve the expected Mu2e sensitivity. Mu2e-II will search for the neutrinoless conversion of a muon into an electron in the field of an Al nucleus, with a sensitivity of a few $10^{-18}$.
As in Mu2e, the tracker system for Mu2e-II will be responsible for precisely measuring the momentum of the conversion electron to distinguish it from the background electrons coming from muon decay in orbit.
To meet the requirements, a preliminary calculation indicates that the Mu2e-II tracker system should be even lighter than the Mu2e tracker, reaching a total material budget of about 4$\times$10$^{-3}$ X/X$_{0}$. Moreover, it must preserve or exceed the rate capability of the Mu2e tracker. We present the ongoing R&D studies and some preliminary simulation results for a tracker made of about 20,000 thin-walled (8 $\mu$m) straw tubes operating in a vacuum of $10^{-4}$ torr, as well as possible alternatives.
The Inner Tracker is an all-silicon detector that will replace ATLAS' inner tracking layers for the High-Luminosity LHC. SLAC National Accelerator Laboratory is responsible for the loading and integration of the pixel layers closest to the LHC beamline, the Inner System. We will mount the silicon pixel detectors on their mechanical supports, then connect the loaded mechanical supports ("loaded local supports") to integrate the full Inner System. This talk will present the loading of the first thermomechanical local-support prototype.
The inner tracking detector of the ATLAS experiment at CERN is currently preparing for an upgrade to operate at the High-Luminosity LHC, scheduled to start in 2027. A complete replacement of the existing Inner Detector of ATLAS is required to cope with the expected radiation damage. The all-silicon Inner Tracker (ITk) now under construction comprises a mixture of pixel and strip layers. At the core of the strip-detector barrel are the staves, each of which hosts 28 silicon modules. A thorough characterization of the modules before assembly onto each stave is critical; therefore, each module undergoes electrical and thermal quality-control (QC) testing between module production and stave assembly. All modules must be thermally cycled ten times between -35°C and +40°C. This talk will show the thermal and electrical performance of the US testing setup, focusing on the difficulties encountered in meeting the QC requirements. It will also give an overview of the results obtained by analyzing the first batch of produced modules.
The Upstream Tracker (UT) is a silicon tracking sub-detector currently under construction that will sit just upstream of LHCb's dipole magnet during Run 3 of the LHC. It improves on the previous tracker in several ways, including enabling LHCb's new 40 MHz fully software trigger, and comprises 968 silicon sensors mounted in four planes together with their requisite readout electronics and cooling systems.
This talk will give an overview of the UT and describe its construction with an emphasis on its mechanical and thermal structures.
The Large Hadron Collider (LHC) will soon undergo an upgrade, referred to as the High-Luminosity LHC (HL-LHC), which will increase the instantaneous luminosity beyond the LHC's design value. The ATLAS experiment is upgrading the innermost portion of the detector to the ITk pixel detector to accommodate the increase in luminosity. The RD53 collaboration was formed to develop the ASIC readout chips used inside the ITk pixel detector. In the preproduction RD53B chip, an encoding scheme was implemented to help shrink data streams and reduce the overall bandwidth of the system. An exploratory effort was undertaken to create a hardware decoder for field-programmable gate arrays (FPGAs) to cut down on CPU usage from software decoders later in the system. A parallelized hardware decoder was designed to meet the data rates produced by an RD53B chip. The final product is a base hardware-decoder design that can handle the throughput constraints of a single RD53B and is resource-efficient. In this talk I will report the necessary background, the hardware decoder design, and conclusions based on this design.
In Run 3 of the LHC (2022-2024), the Level-1 trigger system of the ATLAS experiment will introduce three feature extractors (FEX): eFEX for electron/photon, jFEX for jets/MET, and gFEX for global quantities. The increased calorimeter granularity is useful for all physics channels that deposit energy in the calorimeter, from high-bandwidth items like electrons to MET (missing transverse momentum). An overview of the hardware implementation will be discussed. Details of the algorithm design will be presented, along with the projected performance for electron/photon, jet, and MET triggers.
There is a significant gap between the inclusive measurement of the $B \rightarrow X_{c} l \nu$ branching fraction and the sum of the measurements of the exclusive $B \rightarrow X_{c} l \nu$ channels. The dominant contributions, $B \rightarrow D^{*} l \nu$ and $B \rightarrow D l \nu$, are precisely known, but the branching fractions of $B \rightarrow D^{**} l \nu$ carry larger uncertainties. Here, the $D^{**}$ is an orbitally excited charmed meson, which can decay into $D^{*} \pi$ and $D \pi$. The decay $B \rightarrow D^{(*)} \pi\pi l \nu$ with two bachelor pions in the final state has so far only been observed by the BaBar collaboration.
Over the course of about 10 years the Belle collaboration has recorded about 772 million $B\overline{B}$ pairs produced in $e^{+} e^{-} \rightarrow \Upsilon(4S)$ at the KEKB asymmetric-energy $e^{+} e^{-}$ collider. The status of the measurement of the branching fraction of $B \rightarrow D^{(*)} \pi l \nu$ as well as $B \rightarrow D^{(*)} \pi\pi l \nu$ using the full Belle data sample will be presented. In addition, an analysis of the invariant $D^{(*)} \pi$ mass distribution will be shown.
We report the first search for CP violation using T-odd triple-product asymmetries, and the most precise branching-fraction measurement, for the singly Cabibbo-suppressed decay $D^{0}\rightarrow K_{s}^{0} K_{s}^{0} \pi^{+} \pi^{-}$. These results are obtained using a $922\,{\rm fb}^{-1}$ data sample collected with the Belle detector at the KEKB asymmetric-energy $e^+ e^-$ collider. The branching fraction is measured relative to the normalization channel $D^{0}\rightarrow K_{s}^{0} \pi^{+} \pi^{-}$. Charm decays are expected to exhibit very small CP violation in the Standard Model, which makes CP-violation searches in charm decays an excellent probe of physics beyond the Standard Model. We probe the asymmetries in the observable $C_{T} = \vec{p}_{K_{s}^{0}} \cdot (\vec{p}_{\pi^{+}} \times \vec{p}_{\pi^{-}})$ for $D^{0}$ and $\overline{D}^{0}$ decays. The difference of the T-odd asymmetries between CP-conjugate $D^{0}$ and $\overline{D}^{0}$ decays provides a CP-asymmetry observable free from strong-interaction effects.
We study flavor-conserving M1 radiative decays of heavy-flavor bottom baryons in the framework of the Effective Mass Scheme (EMS) within the quark model. The premise of the EMS is that the masses of the quarks inside the baryon are modified by one-gluon-exchange interactions with the spectator quarks, with all quarks treated on the same footing. The baryon mass can then be written as the sum of the constituent quark masses and the spin-dependent hyperfine interaction among them. We show that the EMS can successfully describe the masses, magnetic moments, transition moments, and radiative decay widths of the lowest-lying singly heavy-flavor baryons in a parameter-independent way. The exchange contribution is worked out through interaction terms $b_{ij}$, using the recently measured masses of the heavy-flavored charm and bottom baryons to calculate the effective quark masses. We then compute the magnetic moments of the ground-state $J^P = \frac{1}{2}^{+}$ and $J^P = \frac{3}{2}^{+}$ baryons and the transition moments for $\frac{1}{2}^{\prime+} \to \frac{1}{2}^{+}$, $\frac{3}{2}^{+} \to \frac{1}{2}^{+}$, and $\frac{3}{2}^{+} \to \frac{1}{2}^{\prime+}$ heavy-flavor charm and bottom baryon states. Finally, we make robust, model-independent predictions for the radiative M1 decay widths of heavy-flavored baryons. The radiative transitions between these states proceed mainly through the M1 type, with negligible contributions from E2-type transitions, which are therefore ignored. We also extend our analysis to the triply heavy charm and bottom baryons.
We present measurements of the branching fractions and $CP$ asymmetries for $D_s^{+} \rightarrow K^{+} \eta $, $D_s^{+} \rightarrow K^{+} \pi^0 $, and $D_s^{+} \rightarrow \pi^{+} \eta $ decays, and the branching fraction for $D_s^{+} \rightarrow \pi^{+} \pi^0$ based on the full data sample collected by the Belle detector at the KEKB $e^+e^-$ asymmetric-energy collider. No evidence for $CP$ violation is found.
A measurement of the $B_s^0\rightarrow J/\psi\phi$ decay parameters using $80~\mathrm{fb}^{-1}$ of integrated luminosity collected with the ATLAS detector from $13~\mathrm{TeV}$ proton-proton collisions at the LHC is presented. The measured parameters include the CP-violating phase $\phi_s$, the width difference $\Delta\Gamma_s$ between the $B_s^0$ meson mass eigenstates, and the average decay width $\Gamma_s$. The values measured for the physical parameters are combined with those from $19.2~\mathrm{fb}^{-1}$ of $7~\mathrm{TeV}$ and $8~\mathrm{TeV}$ data, leading to the following:
$\phi_s=-0.087\pm0.036~(\mathrm{stat.})\pm0.021~(\mathrm{syst.})~\mathrm{rad}$
$\Delta\Gamma_s=0.0657\pm0.0043~(\mathrm{stat.})\pm0.0037~(\mathrm{syst.})~\mathrm{ps}^{-1}$
$\Gamma_s=0.6703\pm0.0014~(\mathrm{stat.})\pm0.0018~(\mathrm{syst.})~\mathrm{ps}^{-1}$
Results for $\phi_s$ and $\Delta\Gamma_s$ are also presented as $68\%$ confidence-level contours in the $\phi_s$-$\Delta\Gamma_s$ plane. Furthermore, the transversity amplitudes and corresponding strong phases are measured. The $\phi_s$ and $\Delta\Gamma_s$ measurements are in agreement with the Standard Model predictions.
Many analyses in ATLAS rely on the identification of jets containing $b$-hadrons ($b$-jets) with high efficiency while rejecting more than 99% of non-$b$-jets. Identification algorithms, called $b$-taggers, exploit $b$-hadron properties such as their long lifetime, high mass, and high decay multiplicity. Recently developed ATLAS $b$-taggers using neural networks are expected to outperform previous $b$-taggers by a factor of two in light-jet rejection. Nevertheless, contributions from light-jet mistags can be non-negligible in certain analysis phase spaces. It is therefore important to precisely measure the light-jet mistag rate in both data and simulation in order to correct the corresponding rate in simulation.
Due to the high light-jet rejection of the $b$-taggers, the mistag rate cannot be measured directly but rather by means of a modified tagger designed to decrease the $b$-jet efficiency while leaving the light-jet response unchanged. This so-called "negative tag method" has been improved recently: uncertainties are reduced by constraining non-light-flavour contributions with a data-driven method, and the dominant systematic uncertainty has been reduced significantly, from 10-60% to 5-20%, thanks to improved inner-detector modeling and an auxiliary analysis. The method and a selection of results recently released by the ATLAS collaboration, using $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector, will be presented.
The Dark Energy Survey (DES) has observed large-scale structure over 5000 sq. deg. of sky. This effort, carried out in collaboration with hundreds of scientists, has culminated in the most accurate and precise constraints to date on the cosmology of the late-time Universe.
In this talk, I discuss the methodology and measurements of the third year of the Dark Energy Survey. I review the results of the cosmological analysis and, finally, I discuss the tension between large-scale-structure constraints from DES and those from the Planck experiment, as well as the efforts already underway for the next generation of DES data.
Twenty years ago, in an experiment at Brookhaven National Laboratory, physicists detected what seemed to be a discrepancy between measurements of the muon's magnetic moment and theoretical calculations of what that measurement should be, raising the tantalizing possibility of physical particles or forces as yet undiscovered. The Fermilab team has just announced that their precise measurement supports this possibility, with a reported significance for new physics of 4.2 sigma, just slightly below the discovery level of 5 sigma. However, an extensive new calculation of the muon's magnetic moment using lattice QCD by the BMW collaboration reduces the gap between theory and experimental measurements; the lattice result appeared in Nature on the day of the Fermilab announcement. In this talk both the theoretical and experimental aspects are summarized with two possible narratives: (a) almost a discovery, or (b) the Standard Model reinforced. Some details of the lattice calculation are also shown.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment. Its main physics goals are the precise measurement of the neutrino oscillation parameters, in particular a possible violation of charge-parity symmetry, and the determination of the neutrino mass hierarchy. DUNE consists of a Far Detector (FD) complex with four multi-kiloton liquid argon detectors and a Near Detector (ND) complex located close to the neutrino source at Fermilab (USA). Here we present an overview of the DUNE experiment, its detectors, and its physics capabilities, within the context of the long-baseline program of the next decade.
Ultralight axions (ULAs), whose masses can lie in a wide range of values and can be even smaller than $10^{-28}$ eV, are generically predicted in UV theories such as string theory. In the cosmological context, the early Universe may have been filled with a network of ULA cosmic strings which, depending upon the mass of the axion, can survive until very late times. If the ULA also couples to electromagnetism, and the network survives past recombination, then the interaction between the strings and the CMB photons induces a rotation of the polarization axis of the CMB photons, known as the birefringence effect. This effect is independent of the string tension and depends only on the coupling between the ULA and the photon (which in turn is sensitive to UV physics). In this talk I will present results for this birefringence effect on the CMB for three different models of the string network. Interestingly, the effect is within the reach of some current and future CMB experiments.
The cosmological collider physics program aims at probing particle physics at energies as high as the inflationary Hubble scale, $H \le 10^{13}$ GeV, using precision measurements from CMB, large scale structure surveys, and 21-cm cosmology. Heavy particles produced during inflation can impart unique correlations in the density fluctuations across the sky, leading to non-gaussianity (NG) in the cosmological observables. This presents a unique opportunity for the “direct detection” of particles with masses as large as $H$. However, the strength of this signal drops exponentially due to a Boltzmann-like factor as masses exceed $H$. In this talk, I will discuss a mechanism that overcomes this suppression and broadens the scope of cosmological collider physics, focusing on the case of a massive complex scalar field. The mechanism allows us to harness large kinetic energy of the inflaton to produce particles with masses as large as $\sim 60H$. I will show that NG with $f_{\rm NL} \sim {\cal O}(0.01-10)$ can be obtained, and delineate a procedure to infer the mass of the heavy field from the signal.
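Schematically, for a heavy principal-series scalar the squeezed-limit non-gaussianity oscillates in the ratio of the long and short momenta and carries the Boltzmann-like suppression mentioned above:
\[
f_{\rm NL}(k_L/k_S) \;\propto\; e^{-\pi\mu}\,\cos\!\left[\mu\,\ln(k_L/k_S) + \varphi\right],
\qquad \mu \equiv \sqrt{\frac{m^2}{H^2} - \frac{9}{4}} \;\approx\; \frac{m}{H} \quad (m \gg H),
\]
so that, absent an enhancement mechanism such as the one discussed in this talk, signals from $m \gtrsim$ a few $H$ are exponentially small.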
SPT-3G is the third survey receiver operating on the South Pole Telescope, dedicated to high-resolution observations of the cosmic microwave background (CMB). Sensitive measurements of the temperature and polarization anisotropies of the CMB provide a powerful dataset for constraining the fundamental physics of the early universe, including models of inflation and the neutrino sector. Additionally, CMB surveys with arcminute-scale resolution are capable of detecting galaxy clusters, millimeter-wave-bright galaxies, and a variety of transient phenomena. The SPT-3G instrument provides a significant improvement in mapping speed over its predecessors, SPT-SZ and SPTpol. The broadband optics design of the instrument achieves a 430 mm diameter image plane across observing bands of 95 GHz, 150 GHz, and 220 GHz, with a 1.2 arcmin FWHM beam response at 150 GHz. In the receiver, this image plane is populated with 2690 dual-polarization, tri-chroic pixels (~16000 detectors) read out using a 68x digital frequency-domain multiplexing readout system. In 2018, SPT-3G began a multiyear survey of 1500 deg$^{2}$ of the southern sky. I will summarize the unique optical, cryogenic, detector, and readout technologies employed in SPT-3G, and I will report on the integrated performance of the instrument.
Line-intensity mapping (LIM) of millimeter-wavelength tracers is a promising new technique for mapping cosmic structure at redshifts beyond the reach of galaxy surveys. I will describe the design and science motivation for the South Pole Telescope Summertime Line Intensity Mapper (SPT-SLIM), which seeks to demonstrate the use of on-chip spectrometers based on microwave kinetic inductance detectors (MKIDs) for LIM observations of CO at z~1-3. The design of SPT-SLIM is enabled by key technical developments, including MKID-coupled R=300 filter-bank spectrometers between 120-180 GHz, as well as a new low-cost, high-throughput MKID readout architecture based on the ICE platform. When deployed in the 2022-23 austral summer, SPT-SLIM will produce strong constraints on the CO power spectrum, while developing the experimental and observational techniques needed to use LIM as a cosmological probe in future survey instruments.
In the inflationary paradigm, a background of primordial gravitational waves is predicted. These tensor perturbations would leave a unique signature in the curl component of the cosmic microwave background (CMB) polarization (B-modes). A detection of B-mode power at degree angular scales would constrain the amplitude of the tensor perturbations generated during inflation, which is encoded in the tensor-to-scalar ratio r.
The B-mode power spectrum is contaminated by foregrounds such as synchrotron emission and polarized dust at large angular scales, and by curl-free CMB polarization (E-modes) converted into B-modes by gravitational lensing at small angular scales. To isolate the large-angular-scale primordial B-mode signal, small-aperture telescopes such as BICEP/Keck (BK) must work in conjunction with large-aperture telescopes (LATs), such as the South Pole Telescope, which have higher resolution and greater sensitivity at small angular scales, in order to cleanly remove the non-primordial signals.
To date there are only upper limits on r, with several collaborations pursuing ever more stringent limits using state-of-the-art instruments.
The combined efforts of the SPT and BK collaborations in a joint analysis group, the South Pole Observatory (SPO), will significantly improve the constraint ($\sigma(r)\sim0.003$ for SPO) relative to BK data alone ($\sigma(r)\sim0.02$ for BK15). The SPO limit on r will stand until the CMB-S4 results ($\sigma(r)\sim5\times10^{-4}$ forecast).
Moreover, thanks to its large number of sensitive detectors, its scan strategy, and its coverage of a foreground-clean patch of the sky, SPT-3G will be able to deliver an independent constraint on r that will inform the performance of the Large Aperture Telescope design for CMB-S4.
Many physics models beyond the Standard Model predict heavy new particles preferentially decaying to at least one top quark. Three searches for a heavy resonance decaying into at least one top quark in pp collisions at a center-of-mass energy of 13 TeV at the LHC will be presented in this talk: a search for a heavy resonance decaying to a top quark and a W boson in the fully hadronic final state, the corresponding search in the lepton+jets final state, and a search for W' bosons decaying to a top and a bottom quark in the all-hadronic final state. The three searches use the data collected by the CMS experiment between 2016 and 2018, corresponding to an integrated luminosity of 137 fb$^{-1}$. Novel machine-learning and reconstruction techniques, including the use of non-isolated leptons and jet substructure, are used to optimize the discrimination of top quarks with high Lorentz boosts and significantly improve the analysis sensitivity compared with earlier results. No significant excess of events relative to the expected yield from Standard Model processes is observed, and the most stringent limits to date are obtained from these searches.
A search for dijet resonances in events with identified leptons has been performed using the full Run 2 dataset collected in $pp$ collisions at $\sqrt{s}=13$ TeV by the ATLAS detector, corresponding to an integrated luminosity of 139 fb$^{-1}$. The dijet invariant-mass ($m_{jj}$) distribution from events with at least one isolated electron or muon is probed in the range $0.22 < m_{jj} < 6.3$ TeV. The analysis probes much lower $m_{jj}$ than traditional inclusive dijet searches and is sensitive to a large range of new-physics models with a final-state lepton. As no statistically significant deviation from the Standard Model background hypothesis was found, limits were set on contributions from generic Gaussian signals and on various beyond-the-Standard-Model scenarios, including the Sequential Standard Model, a technicolor model, a charged Higgs boson model, and a simplified dark matter model.
Many theories beyond the Standard Model predict new phenomena, such as $Z'$ bosons and vector-like quarks, in final states containing bottom or top quarks. It is challenging to reconstruct and identify the decay products and to model the major backgrounds. Nevertheless, such final states offer great potential to reduce the Standard Model backgrounds thanks to their characteristic decay signatures. The latest searches in two-quark final states using the full Run-2 ($139$ fb$^{-1}$) proton-proton collision dataset collected at a center-of-mass energy of $\sqrt{s} = 13$ TeV with the ATLAS detector will be presented. In particular, this presentation will summarize recent results of dijet and top-antitop resonance searches in the hadronic top-quark final state. This talk will also highlight associated improvements from deep-learning-based $b$-quark and top-quark identification techniques. Furthermore, the interpretation of these results in the context of $s$-channel dark matter mediator models will be discussed.
This talk presents a search for a new resonance $W^\prime$ decaying into a $W$ boson and a $125~\text{GeV}$ Higgs boson $H$ in the ${\ell^{\pm}{\nu}b\bar{b}}$ final states, where $\ell = e,~\mu,~\mathrm{or}~\tau$, using $pp$ collision data at 13 TeV corresponding to an integrated luminosity of 139 fb$^{-1}$ collected by the ATLAS detector at LHC. The search considers the one-lepton channel, where an electron, muon, or leptonically decaying tau lepton is successfully reconstructed. Both resolved and merged regimes, as well as one and two b-tag regions, are employed to reconstruct the $H\rightarrow bb$ decay across the range of $W^{\prime}$ masses. The search is conducted by examining the reconstructed invariant mass distributions of $W^\prime \to WH$ candidates in the mass range from $400~\text{GeV}$ to $5~\text{TeV}$. Upper limits are placed at the 95% confidence level on the production cross-section times branching fraction of heavy $W^{\prime}$ resonances in heavy-vector-triplet models.
A search for a new heavy boson $W^{\prime}$ in proton-proton collisions at $\sqrt{s}$ = 13 TeV is presented. The search focuses on the decay of the $W^{\prime}$ to a top quark and a bottom quark, using the full Run 2 dataset collected with the ATLAS detector at the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$. The talk will give an overview of the analysis, which includes hadronically decaying top-quark identification using a deep neural network trained on jet-substructure variables, as well as the data-driven background estimation. It will show the search sensitivity as expected exclusion limits on the $W^{\prime}$ production cross-section times the top-bottom branching ratio for several $W^{\prime}$ masses between 1.5 and 6 TeV.
A search for electroweak production of charginos and neutralinos at the Large Hadron Collider was conducted in 139 fb$^{-1}$ of proton-proton collision data collected at a center-of-mass energy of $\sqrt{s} = 13$ TeV with the ATLAS detector. This search utilizes fully hadronic final states with missing transverse momentum to identify signal events in which a pair of charginos or neutralinos decays into high-$p_T$ gauge or Higgs bosons and a lighter chargino or neutralino. The light chargino or neutralino produces missing transverse momentum, and each of the bosons can decay to light- or heavy-flavor quark pairs. Fully hadronic final states have a large branching ratio compared to leptonic or semi-leptonic decays, allowing the search to probe high-mass signals with smaller production cross-sections, which strongly motivates exploring this final state. The additional signal acceptance brings additional background, which is suppressed by exploiting boosted-boson tagging techniques: the high-$p_T$ SM bosons are reconstructed and identified using large-radius jets and their substructure. No significant excess is found beyond Standard Model expectations. Under various assumptions on the decay branching ratios and the type of LSP, exclusion limits are set on wino and higgsino production at the 95% confidence level, excluding masses up to 1050 GeV (wino) and 900 GeV (higgsino) when the lightest SUSY particle has a mass below 400 GeV and 250 GeV, respectively.
Most searches for new physics at the Large Hadron Collider assume that a new particle produced in pp-collisions decays almost immediately or is non-interacting and escapes the detector. However, a variety of new physics models predict particles that decay inside the detector at a discernible distance from the interaction point. Such long-lived particles would create spectacular signatures that evade many prompt searches. This talk will present recent CMS searches for new long-lived particles using Run 2 data. This talk will also highlight the experimental challenges that these signatures pose for the trigger, offline reconstruction, and non-standard backgrounds.
A search for neutral long-lived particles decaying into displaced jets in the ATLAS hadronic calorimeter, performed in $pp$ collisions at $\sqrt{s} = 13 \textrm{ TeV}$ during 2016 with data corresponding to $10.8 \textrm{ fb}^{-1}$ or $33.0 \textrm{ fb}^{-1}$ of integrated luminosity (depending on the trigger), is preserved in RECAST and used here to constrain three new-physics models not studied in the original work. A Stealth SUSY model and a Higgs-portal baryogenesis model, both predicting long-lived particles and therefore displaced decays, are probed for proper decay lengths between a few cm and 500 m. A dark-sector model predicting Higgs and heavy-boson decays to collimated hadrons via long-lived dark photons is also probed. The cross-section times branching ratio for the Higgs channel is constrained for proper decay lengths between a few millimeters and a few meters, while for a heavier 800 GeV boson the constraints extend from tenths of a millimeter to a few tens of meters. The original data-analysis workflow was completely captured using virtualization techniques, allowing for an accurate and efficient reinterpretation of the published result in terms of new signal models following the RECAST protocol.
Triggering on long-lived particles (LLPs) at the first stage of the trigger system is crucial in LLP searches to ensure that such events are not lost at the very beginning of the readout chain. The future High-Luminosity runs of the Large Hadron Collider will have an increased number of pile-up events per bunch crossing, accompanied by major hardware, firmware, and software upgrades, such as tracking at level-1 (L1). The L1 trigger menu will also be modified to cope with pile-up and maintain sensitivity to physics processes. In our study we found that the usual L1 triggers, designed mostly for prompt particles, will not be very efficient for LLP searches in the 140 pile-up environment of the HL-LHC, pointing to the need for dedicated L1 LLP triggers in the menu. We consider the decay of the LLP into jets and develop dedicated jet triggers using the track information available at L1 to select LLP events. We show that these triggers give promising results in identifying LLP events with moderate trigger rates.
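The underlying idea can be illustrated with a toy "trackless jet" requirement: a displaced LLP decay yields a jet with few matched prompt L1 tracks. The thresholds and track fields below are hypothetical, not the triggers studied in the talk:

```python
# Toy L1 displaced-jet flag: high-pT jet with at most one matched prompt track.
def is_llp_jet_candidate(jet_pt, matched_tracks,
                         pt_min=40.0, max_prompt_tracks=1):
    """jet_pt and track pt in GeV; track d0 (transverse impact parameter,
    used here as a promptness proxy) in mm. All cut values are placeholders."""
    prompt = [t for t in matched_tracks
              if t['pt'] > 2.0 and abs(t['d0']) < 1.0]
    return jet_pt > pt_min and len(prompt) <= max_prompt_tracks

# Example: a 60 GeV jet with only one soft matched track passes the toy flag.
print(is_llp_jet_candidate(60.0, [{'pt': 1.5, 'd0': 0.2}]))  # True
```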
The search for long-lived particles (LLP) at the LHC can be improved with timing information. If the visible decay products of the LLP form jets, the arrival time is not well-defined. In this talk, I will discuss possible definitions and how they are affected by the kinematics of the underlying parton-level event.
Weakly coupled light new physics is a well-motivated lamppost, often referred to as a dark sector. At low masses and weak couplings, dark-sector particles are generically long-lived. In this talk I will describe how neutrino portals to a dark sector can be efficiently probed by looking for the decay of heavy neutral leptons (HNLs) that are produced via the upscattering of solar neutrinos within the Earth's core and mantle. Large-volume detectors (such as Borexino or Super-Kamiokande) can search for MeV-scale photons and electron-positron pairs from HNLs decaying while passing through their detectors.
Searches for physics beyond the Standard Model (SM) at collider experiments—mostly focused on prompt signatures with high momentum and high missing transverse energy—have thus far produced no definitive evidence for such phenomena. But what if they have been looking in the wrong places? Just as long-lived particles exist in the SM, beyond the SM physics may too feature such particles. Here, a novel search for displaced photons is introduced, using 139 fb$^{−1}$ of $pp$ collision data at center-of-mass energy $\sqrt{s} =$ 13 TeV collected with the ATLAS detector. The search specifically targets the relatively unconstrained branching ratio of the Higgs boson to invisible particles, where there is still ample room for signatures featuring relatively soft photons and modest missing transverse energy. Exploiting the longitudinal segmentation and the excellent precision timing capabilities of the ATLAS detector’s liquid argon electromagnetic calorimeter, the striking, smoking-gun signature of a displaced photon that both fails to point back to the interaction point and arrives significantly delayed can be employed as a powerful discrimination variable. The analysis strategy, including the entirely data-driven background estimation method and expected sensitivities, is presented in detail.
We present a search for dark matter candidates produced in association with a Higgs boson, using data collected from $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector corresponding to an integrated luminosity of 139 fb$^{-1}$. This search targets events that contain large missing transverse momentum and a Higgs boson reconstructed either as two $b$-tagged small-radius jets or as a single large-radius jet associated with two $b$-tagged sub-jets. Compared to the previous iteration, this search features an optimised event selection and advances in object identification that enhance the expected sensitivity and simplify the analysis. No significant excess from the Standard Model prediction is observed. The results are interpreted in two benchmark models, each a two-Higgs-doublet model extended by either a heavy vector boson $Z'$ or a pseudoscalar singlet $a$, which provide dark matter candidates.
Many beyond-the-Standard-Model (BSM) theories suggest the existence of multiple fundamental scalar fields and associated Higgs bosons, with the Standard Model Higgs boson being the lightest and most easily discovered. The dimension-4 interactions between a theorized generic heavy Higgs boson and Standard Model (SM) particles have already been explored in all major Higgs boson production channels, particularly in gluon-gluon fusion, with no evidence of BSM effects so far. Our study therefore takes a new direction by accounting for effective dimension-6 interactions with SM particles in addition to dimension-4 interactions, and by probing the VH channel for a heavy Higgs boson. If the generic heavy Higgs boson is connected with BSM physics at the scale of a few TeV, these dimension-6 operators will dramatically boost the heavy Higgs boson momentum such that it can be distinguished from background. This region of phase space has not been investigated by previous LHC studies, enhancing its potential for discovery of BSM physics and a generic heavy Higgs boson.
In this talk, I will present the motivations for the Generic Heavy Higgs Search and the reason for exploring this particular corner of the phase space, as well as the work-in-progress Monte-Carlo kinematic distributions and upper limits describing various signal hypotheses.
Charged Higgs bosons produced either in top-quark decays or in association with a top-quark, subsequently decaying via $H^{\pm} \to \tau^{\pm}\nu_{\tau}$, are searched for in $36.1 \mathrm{fb^{-1}}$ of proton-proton collision data at $\sqrt{s}=13$ TeV recorded with the ATLAS detector. Depending on whether the associated top-quark decays hadronically or leptonically, the search targets $\tau$+jets and $\tau$+lepton final states. In both cases, the $\tau$-lepton decays hadronically. No evidence of a charged Higgs boson is found. For the mass range of $m_{H^{\pm}} =$ 90-2000 GeV, upper limits at the 95% confidence level are set on the production cross-section of the charged Higgs boson times the branching fraction $\mathrm{\cal{B}}(H^{\pm} \to \tau^{\pm}\nu_{\tau})$ in the range 4.2-0.0025 pb. In the mass range 90-160 GeV, assuming the Standard Model cross-section for $t\overline t$ production, this corresponds to upper limits between 0.25% and 0.031% for the branching fraction $\mathrm{\cal{B}}(t\to bH^{\pm}) \times \mathrm{\cal{B}}(H^{\pm} \to \tau^{\pm}\nu_{\tau})$. In the newest iteration of the search, the mass range has been extended to $m_{H^{\pm}}$ = 80-3000 GeV and novel machine learning techniques have been developed to sift through $139\,\mathrm{fb^{-1}}$ of data. A parameterized neural network (PNN) is trained across the entire mass spectrum to provide signal-background discrimination in $\tau$+jets or $\tau$+lepton final states.
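The parameterized-network idea can be sketched as follows: a single classifier takes the charged-Higgs-mass hypothesis as an extra input, so that it interpolates across the whole search range. The feature count and layer sizes below are invented, not those of the analysis:

```python
# Minimal parameterized neural network (PNN): mass hypothesis as an input.
import tensorflow as tf

feat_in = tf.keras.Input(shape=(12,), name='kinematic_features')
mass_in = tf.keras.Input(shape=(1,), name='mass_hypothesis')

h = tf.keras.layers.Concatenate()([feat_in, mass_in])
for _ in range(3):
    h = tf.keras.layers.Dense(64, activation='relu')(h)
out = tf.keras.layers.Dense(1, activation='sigmoid')(h)  # signal score

pnn = tf.keras.Model([feat_in, mass_in], out)
pnn.compile(optimizer='adam', loss='binary_crossentropy')
```

In training such a network, signal events carry their true generated mass while background events are typically assigned mass values drawn at random from the signal grid, so the mass input carries no discriminating power by itself.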
Four-top-quark production, a rare process in the Standard Model (SM) with a cross-section of around 12 fb, is one of the heaviest final states produced at the LHC and is naturally sensitive to physics beyond the Standard Model (BSM). A data excess of about twice the SM expectation has been observed. A follow-up analysis is the search for a heavy (pseudo)scalar Higgs boson A/H produced in association with a top-antitop quark pair, leading to a final state with four top quarks. The data analyzed correspond to an integrated luminosity of 139 fb$^{-1}$ of proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the LHC. In this talk, four-top-quark decay final states containing either a pair of same-sign leptons or multiple leptons (SSML) are considered. To enhance the search sensitivity, a mass-parameterized BDT is introduced to discriminate the BSM signal against the irreducible SM four-top and other dominant SM backgrounds. Expected upper bounds on the production cross-section of A/H are derived in the mass range from 400 GeV to 1000 GeV.
Many extensions of the Standard Model include the addition of charged Higgs bosons. The two-Higgs-doublet model (2HDM) is one such extension: it predicts three neutral Higgs bosons along with a pair of positively and negatively charged Higgs bosons. In this talk, we present a search for these charged Higgs bosons decaying into a top and a bottom quark in single-lepton final states. We perform a multivariate analysis using a gradient-boosted decision tree to aid in signal-to-background discrimination. CMS data collected at 13 TeV in 2016 (35.9 fb$^{-1}$), 2017 (41.5 fb$^{-1}$), and 2018 (59.7 fb$^{-1}$) are considered in this search.
A search is presented for a light pseudoscalar Higgs boson ($a$) using data collected by the CMS experiment at the LHC at a center-of-mass energy of 13 TeV. The study looks into the decay of a Higgs boson (H) via the $H\rightarrow aa\rightarrow \mu\mu\tau\tau$ channel, where the Higgs boson can be either standard-model-like (125 GeV) or heavier. The pseudoscalar mass falls within the range $m_a \in [2m_\tau, m_H/2]$. The large mass difference between the Higgs boson and the pseudoscalar means that the final tau-lepton decay products are highly boosted along the decay direction and collimated. A modified version of tau reconstruction is used to account for the highly overlapping decay products. The modified reconstruction gives higher reconstruction efficiency than the standard tau reconstruction, and hence better signal significance and background rejection. This technique is especially useful for final states in which one of the taus decays hadronically while the other decays leptonically ($\mu$/e). The performance of the modified reconstruction technique, compared with the standard tau reconstruction, is also presented. Results from the 2016 and 2017 CMS datasets will be shown.
Athena is the software framework used in the ATLAS experiment throughout the data-processing path, from the software trigger system through offline event reconstruction to physics analysis. The shift from high-power single-core CPUs to multi-core systems in the computing market means that the throughput capabilities of the framework have become limited by the available memory per process. For Run 2 of the Large Hadron Collider (LHC), ATLAS exploited a multi-process forking approach with the copy-on-write mechanism to reduce memory use. To better match the increasing CPU core count, and the correspondingly decreasing memory available per core, a multi-threaded framework, AthenaMT, has been designed and is now being implemented. The ATLAS High Level Trigger (HLT) system has been remodelled to fit the new framework and to rely on solutions common to online and offline software to a greater extent than in Run 2.
We present the implementation of the new HLT system within AthenaMT, which is being commissioned now for ATLAS data-taking during LHC Run 3.
We present a novel implementation of classification using boosted decision trees (BDTs) on field-programmable gate arrays (FPGAs). Two example problems are presented: the separation of electrons vs. photons, and the selection of vector-boson-fusion-produced Higgs bosons vs. the rejection of multijet processes. The firmware implementation of binary classification requiring 100 training trees with a maximum depth of 4, using four input variables, gives a latency of about 10 ns. Implementations such as these enable level-1 trigger systems to be more sensitive to new physics in high-energy experiments. The work is described in arXiv:2104.03408.
The hls4ml library [1] is a powerful tool that provides automated deployment of ultra-low-latency, low-power deep neural networks. We extend the hls4ml library to recurrent architectures and demonstrate low latency on multiple benchmark applications. We consider Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) models trained on the CERN Large Hadron Collider top-tagging data [2], jet-flavor-tagging data [3], and the Quick, Draw! dataset [4] as our benchmark applications. By scanning a large parameter range around these benchmark models, we demonstrate low-latency inference across a wide variety of model weights, and show that the resource utilization of recurrent neural networks can be significantly reduced with little loss of model accuracy. (A minimal conversion sketch follows the references below.)
References:
[1] J. Duarte et al., "Fast inference of deep neural networks in FPGAs for particle physics", JINST 13 (2018) P07027, doi:10.1088/1748-0221/13/07/P07027, arXiv:1804.06913.
[2] CERNbox, https://cernbox.cern.ch/index.php/s/AgzB93y3ac0yuId?path=%2F, 2016.
[3] D. Guest et al., "Jet flavor classification in high-energy physics with deep neural networks", Phys. Rev. D 94 (2016) 112002.
[4] Google, "Quick, Draw!", https://quickdraw.withgoogle.com/.
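As a rough illustration of the hls4ml workflow described above, a minimal conversion sketch follows; the toy model, configuration, output directory, and FPGA part are placeholders, not the talk's benchmarks:

```python
# Convert a small Keras recurrent model to an HLS project with hls4ml.
import tensorflow as tf
import hls4ml

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 6)),   # (timesteps, features)
    tf.keras.layers.GRU(16),
    tf.keras.layers.Dense(5, activation='softmax'),
])

config = hls4ml.utils.config_from_keras_model(model, granularity='model')
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config,
    output_dir='hls4ml_gru_prj',
    part='xcvu9p-flga2104-2-e',              # example FPGA part
)
hls_model.compile()   # C simulation for bit-accurate validation
# hls_model.build()   # full HLS synthesis (requires the vendor toolchain)
```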
This talk introduces and shows the simulated performance of an FPGA-based technique to improve fast track finding in the ATLAS trigger. A fast track trigger is being developed in ATLAS for the High-Luminosity upgrade of the Large Hadron Collider (HL-LHC), the goal of which is to provide the high-level trigger with full-scan tracking at 100 kHz in the high pile-up conditions of the HL-LHC. Options under development for achieving this include a method based on matching detector hits to pattern banks of simulated tracks stored in a custom-made Associative Memory ASIC (Hardware Track Trigger, "HTT") and one using the Hough transform on FPGAs ("Hough"), whereby detector hits are mapped onto a 2D parameter space with one parameter related to the transverse momentum and one to the initial track direction.
Both of these methods can benefit from a pre-filtering step that reduces the number of hit clusters to be considered, and hence the overall system size and/or power consumption, by examining pairs of clusters in adjacent strip-detector layers (or the lack thereof). This stub filtering was first investigated by CMS but had been unexplored in ATLAS until now; we will show the reduction in throughput it enables, its performance impact on both the HTT and Hough track-finding systems, and estimates of resource usage.
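The Hough idea itself is compact enough to sketch: each hit votes along a line in the (q/pT, phi0) parameter plane, and bins where votes from many layers pile up define track candidates. A toy accumulator (not the ATLAS implementation; the curvature constant assumes a uniform 2 T field and mm/GeV units):

```python
# Toy Hough-transform accumulator for the (q/pT, phi0) plane.
import numpy as np

A = 3e-4  # dphi per mm per unit q/pT [1/GeV], for an assumed 2 T field

def hough_accumulate(hits, qpt_bins, phi0_edges):
    """hits: iterable of (r [mm], phi [rad]). To first order in curvature each
    hit satisfies phi0 = phi - A * r * (q/pT), so it votes once per q/pT bin.
    Returns the 2D vote array."""
    acc = np.zeros((len(qpt_bins), len(phi0_edges) - 1), dtype=np.int32)
    for r, phi in hits:
        phi0 = phi - A * r * qpt_bins            # candidate phi0 per q/pT bin
        idx = np.digitize(phi0, phi0_edges) - 1  # phi0 bin index
        ok = (idx >= 0) & (idx < acc.shape[1])
        acc[np.nonzero(ok)[0], idx[ok]] += 1
    return acc

# Local maxima of `acc` above a layer-count threshold seed track candidates.
```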
The high collision energy and luminosity of the LHC allow studying jets and hadronically-decaying tau leptons at extreme energies with the ATLAS detector. These signatures lead to topologies with charged particles, which are reconstructed as tracks with the ATLAS inner detector, at an angular separation smaller than the size of a charge cluster in the ATLAS pixel detector, forming merged pixel clusters. In the presence of these merged clusters, the track reconstruction efficiency is reduced, as hits can no longer be uniquely assigned to individual tracks. Well-defined tracks are very important for many analyses. To partially recover the track reconstruction efficiency loss, a neural network (NN) based approach was adopted in the ATLAS pixel detector in 2011 to split the merged clusters by estimating particle hit multiplicity, hit positions, and associated uncertainties. An improved algorithm based on Mixture Density Networks (MDN) shows promising performance and will be used in the ATLAS inner detector track reconstruction in Run-3. An overview of the MDN algorithm and its performance will be highlighted in this presentation. This talk will also show a performance comparison between the Run-2 NN and Run-3 MDN.
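The core of a mixture density network is its loss: the network emits mixture weights, means, and widths, and is trained on the negative log-likelihood of the target under that Gaussian mixture. A minimal sketch of such a loss (illustrative; the ATLAS network's targets and mixture size are not reproduced here):

```python
# Negative log-likelihood of scalar targets under a K-component 1D Gaussian
# mixture; pi_logits, mu, log_sigma have shape (batch, K), y has shape (batch,).
import numpy as np
import tensorflow as tf

def mdn_nll(y, pi_logits, mu, log_sigma):
    log_pi = tf.nn.log_softmax(pi_logits, axis=-1)        # mixture weights
    sigma = tf.exp(log_sigma)
    log_comp = (-0.5 * tf.square((y[:, None] - mu) / sigma)
                - log_sigma - 0.5 * np.log(2.0 * np.pi))  # log N(y; mu, sigma)
    return -tf.reduce_logsumexp(log_pi + log_comp, axis=-1)
```

Unlike a plain regression head, the predicted widths give per-cluster position uncertainties directly, which is what makes the MDN attractive for cluster splitting.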
We report on the development of a track finding algorithm for the Fermilab Muon g-2 Experiment’s straw tracker using advanced Deep Learning techniques. Taking inspiration from original studies by the HEP.TrkX project, our algorithm relies on a Recurrent Neural Network with bi-directional LSTM layers to build and evaluate track candidates. The model achieves good performance on a 2D representation of the Muon g-2 tracker detector. We will discuss our targets for improving efficiency and performance, and plans towards application on real data via training on a synthetic dataset.
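A bidirectional-LSTM candidate scorer of the kind described can be sketched as below; the input features and hyperparameters are placeholders rather than the g-2 configuration:

```python
# Score variable-length hit sequences as track candidates (1) or not (0).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 2)),   # sequence of 2D hit coordinates
    tf.keras.layers.Masking(),                # ignore zero-padded timesteps
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```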
The Axion Dark Matter Experiment (ADMX) searches for dark matter axions with a resonant cavity in a strong magnetic field. In previous operations, ADMX achieved DFSZ sensitivity between 2.66 and 3.31 $\mu$eV with yoctowatt-level backgrounds using a quantum amplifier and a dilution refrigerator. The latest operation searched from 3.3 to 4.2 $\mu$eV between October 2019 and May 2021 and implemented several improvements, including synthetic axion injections and a more efficient data-taking cycle. I will show new axion search results from this operation as well as improvements in operation and analysis.
I will describe two precision experiments searching for ultralight axion-like dark matter. The SHAFT experiment uses ferromagnetic toroidal magnets, and is sensitive to the electromagnetic coupling in the 12 peV to 12 neV mass range. The CASPEr-e experiment is based on precision magnetic resonance, and is sensitive to the EDM and the gradient couplings in the 162-166 neV mass range. These two searches have recently produced leading experimental limits on all three of the possible interactions of axion-like dark matter in those mass ranges.
Cosmic Axion Spin Precession Experiment (CASPEr) is a laboratory scale experiment searching for ultralight axion-like dark matter, using nuclear magnetic resonance [D. Budker, et al. Phys. Rev. X, 4,021030 and D. Aybas, J. Adam, et al., Phys. Rev. Lett. 126, 141802]. I will describe our work on the next phase of the experiment, with the goal of searching in the kHz – MHz frequency band, using SQUID sensors. I will also describe our study of transient light-induced paramagnetic centers in ferroelectric PMN-PT ($\mathrm{(PbMg_{1/3}Nb_{2/3}O_3)_{2/3} - (PbTiO_3)_{1/3}}$) crystals. We use these paramagnetic centers to control the polarization and relaxation of the nuclear spin qubit ensemble, allowing us to improve sensitivity to axion-like dark matter.
Detection and understanding of dark matter is one of the major unsolved problems of modern particle physics and cosmology. Several theories of fundamental physics predict bosonic dark matter candidates that can modify Maxwell’s equations resulting in additional photon emission from conducting surfaces. One of these promising dark matter candidates is known as the axion, which could be detected by observing the emitted electromagnetic radiation resulting from axion-photon coupling.
The Broadband Reflector Experiment for Axion Detection (BREAD) haloscope will investigate currently under-probed dark matter parameter space using novel reflector technology. The experiment will develop technology for a new type of wideband axion dark matter search capable of detecting axions in the mass range of approximately 10 meV to 30 eV, a range not currently accessible by other techniques. This target mass range corresponds to an observable dark matter signal in the under-probed terahertz regime.
This presentation will cover the commissioning and building of a preliminary, room-temperature, terahertz photon source testing and calibration system that is intended to be used for a prototype BREAD detector.
This work is supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. JL is supported by the Grainger Fellowship.
The QCD axion is a well-motivated new-physics candidate capable of explaining dark matter and the absence of a neutron electric dipole moment. If the Peccei-Quinn symmetry is broken after the end of inflation, the late-time number density of axions is jointly determined by the radiation of axions from topological defects known as strings and by the dynamics of the axion field as it acquires a mass through the QCD phase transition. Here I present the results of simulations of axion radiation from strings using the block-structured adaptive-mesh-refinement code AMReX, which greatly extends the dynamical range of conventional simulation techniques, working towards a precise determination of the QCD axion mass that produces the observed relic abundance of dark matter.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus on a narrower range of masses: the natural scenario in which dark matter originates from thermal contact with familiar matter in the early Universe requires the DM mass to lie within about an MeV to 100 TeV. Considerable experimental attention has been given to exploring Weakly Interacting Massive Particles at the upper end of this range (a few GeV to ~TeV), while the region from ~MeV to ~GeV is largely unexplored. Most of the stable constituents of known matter have masses in this lower range, tantalizing hints for physics beyond the Standard Model have been found here, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well; it is therefore a priority to explore. If there is an interaction between light DM and ordinary matter, as there must be in the case of a thermal origin, then there necessarily is a production mechanism in accelerator-based experiments. The most sensitive way to search for this production (if the interaction is not electron-phobic) is to use a primary electron beam to produce DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment with unique sensitivity to light DM in the sub-GeV range. This contribution will give an overview of the theoretical motivation, the main experimental challenges and how they are addressed, and projected sensitivities in comparison to other experiments.
New theoretical developments have motivated "hidden sector" dark matter with mass below the proton mass. The Light Dark Matter eXperiment (LDMX) will use an electron beam to produce dark matter in fixed-target collisions. A low-current, high-repetition-rate (37.2 MHz) electron beam extracted from SLAC's LCLS-II will provide LDMX with sufficient luminosity to explore many dark matter candidates. Using a novel detector design, LDMX is expected to definitively search for thermal-relic dark matter with masses between 1 MeV and several hundred MeV. The LDMX trigger system will reduce the 37.2 MHz repetition rate down to about 5 kHz. To identify signal events, a missing-energy trigger will be used that relies on knowledge of the number of incoming electrons; to determine the electron multiplicity, arrays of fast scintillators will be used. A strategy for the missing-energy trigger will be described, along with an overview of the LDMX trigger scintillators and the current status of simulation studies.
New physics beyond the Standard Model (SM) could be responsible for the presence of Dark Matter in the Universe. A hidden, or "dark", sector interacting with SM particles via new force carriers is a natural scenario to explain the features of Dark Matter. In the last decade, growing interest has been dedicated to the search for dark sectors with force carriers in the MeV-GeV mass range. A well motivated model envisions the presence of a $U(1)$ gauge boson, the heavy photon $A'$, whose existence can be probed with fixed-target experiments at accelerators.
The Heavy Photon Search Experiment (HPS) at the Thomas Jefferson National Accelerator Facility (JLAB) searches for heavy photons and other new force carriers that are produced via electro-production and decay visibly to electron-positron pairs. This talk presents recent developments in reconstruction and calibration of the 2019 Data Run at 4.55 GeV, including performance of the newly adopted Kalman Filter track reconstruction algorithm.
Cosmological observations indicate that our universe contains dark matter (DM), yet we have no measurements of its microscopic properties. Whereas the gravitational interaction of DM is well understood, its interaction with the Standard Model is not. Direct-detection experiments, the current standard, search for nuclear-recoil interactions and have low-mass sensitivities down to ~1 GeV. A path to detecting DM with masses below 1 GeV is the use of accelerators producing boosted low-mass DM. The Coherent CAPTAIN Mills (CCM) experiment uses a 10-ton liquid argon scintillation detector at the Lujan Center at LANSCE to search for physics beyond the Standard Model. The Lujan Center delivers a 100-kW, 800 MeV, 290-ns-wide proton pulse onto a tungsten target at 20 Hz to generate a stopped-pion source. The fast pulse, in combination with the speed of the CCM scintillation detector, is crucial for isolating prompt, speed-of-light particles generated by the stopped-pion source and for reducing neutron and steady-state backgrounds. In this talk I will discuss CCM's search for vector-portal dark matter, showing results from our Fall 2019 run as well as the projected reach of the experiment based on the current upgrades to the CCM detector.
The existence of dark matter is ubiquitous in cosmological data, yet numerous particle detectors have searched for it thoroughly without success. For strongly interacting dark matter, the bounds from these experiments are actually irrelevant: as dark matter enters the atmosphere, it scatters and slows down, so that by the time it reaches underground laboratories its velocity is well below detector thresholds. In this case, however, it would accumulate within the Earth and reach a density much greater than that of the dark matter halo. Here, I will describe a scheme for adapting present-day underground nuclear physics experiments to detect dark matter in this context. In particular, I will show that accumulated dark matter can be up-scattered to resolvable energies using underground nuclear accelerators, such as LUNA at Gran Sasso, and captured in nearby low-background detectors.
The existence of dark matter is widely accepted, with a well-motivated theoretical candidate being a class of particles known as WIMPs (weakly interacting massive particles), which appear in the spectra of many extensions to the Standard Model.
We explore a particular WIMP-like model in which fermionic dark matter couples weakly to the muon/tau sectors of the Standard Model through a new vector boson Z', and to electrically charged particles through kinetic mixing of the Z' with the SM photon. Besides potentially providing a candidate dark matter particle, the hypothetical Z' could also help explain the discrepancy between the predicted and observed values of the anomalous magnetic dipole moment of the muon.
Cosmological observations of the dark matter relic density, together with the results of direct detection experiments, allow us to tightly constrain the parameter space of the model. If one assumes a momentum-independent kinetic mixing parameter, it is difficult for the resulting parameter space to satisfy the restrictions imposed by both sets of experimental results. In this talk, we focus on work done to remedy this disagreement. We attempt to soften the direct detection constraint by considering the general case in which the mixing parameter is momentum dependent, constructed so that it vanishes in the zero-momentum-transfer limit, which yields a viable parameter space. Our goal is then to compare model-derived quantities, including interaction cross sections and early-universe annihilation rates, to well-established experimental bounds, to determine whether the resulting parameter space is consistent with the constraints imposed by both direct detection and the relic abundance.
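One illustrative functional form with the stated property (the specific construction used in this work may differ) is

\[
\epsilon(q^{2}) = \epsilon_{0}\,\frac{q^{2}}{q^{2} + \Lambda^{2}},
\]

which approaches $\epsilon_{0}$ at large momentum transfer $q^{2} \gg \Lambda^{2}$ (the regime relevant for early-universe annihilation) but vanishes as $q^{2} \to 0$, suppressing the low-momentum-transfer scattering probed by direct detection.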
The ongoing pandemic has exacerbated the isolation of people with disabilities, due to the loss of physical access to habilitation personnel and facilities. However, the elimination of business travel in favor of virtual meetings has simplified the participation of physically disabled scientists in the intellectual life of the particle physics community. In view of the imminent restart of in-person conferences, it behooves us to re-examine the accessibility standards for all our upcoming events. In this talk, I will give an overview of accessibility considerations for in-person scientific meetings and highlight a few suggestions for improvement, with the goal of making our community more inclusive, and our conferences more enjoyable for all attendees.
The ATLAS Collaboration has developed a variety of printables for education and outreach activities. We present two ATLAS Coloring Books, the ATLAS Fact Sheets, the ATLAS Physics Cheat Sheets, and ATLAS Activity Sheets. These materials are intended to cover key topics of the work done by the ATLAS Collaboration and the physics behind the experiment, for a broad audience of all ages and levels of experience. In addition, there is ongoing work on translating these documents into different languages, with one of the coloring books already available in 18 languages. These printables are prepared to complement the information found in all ATLAS digital channels, and they are particularly useful at outreach events and in the classroom.
We created an extremely successful planetarium show called Phantom of the Universe: The Hunt for Dark Matter, which has been seen in more than 600 planetariums in 67 countries and 42 US states. It has been translated into 22 languages. We were motivated in part by envisioning several scenes that could only work in a planetarium. Our target audiences were the public and students. We found that many planetariums had an interest in a dark matter show; they present our show for many months at a time, longer than feature films run. Planetariums have the perfect science-interested audience for us. None of the physicist organizers had ever made a planetarium show (which involves projecting onto a spherical screen) before. To create the show, we worked with renowned people with extensive experience in filmmaking and with people at seven planetariums in multiple countries. We hired a Hollywood producer and screenwriter. Our narrator for the English-language version was Academy Award-winning actor Tilda Swinton, and sound editing and sound effects were done by an Academy Award-winning team at Skywalker Sound. As we developed the show, we never imagined such success.
The Simulating Particle Detection (SPD) stream is a research program within UMD's FIRE, a gen-ed sequential course-based undergraduate research experience program. SPD introduces undergraduate students to experimental high energy particle physics, concentrating on computing, data analysis, and visualization, specifically using simulations of the upgrade calorimeters (HGCAL) of the CMS experiment at CERN. After an introduction to the stream's wide-ranging research outcomes, pedagogical principles, and diverse community, I will share my experiences with the measures taken to address the challenges imposed by the remote-learning period during the pandemic.
Since 1984 the Italian groups of the Istituto Nazionale di Fisica Nucleare (INFN) and Italian universities, in collaboration with the US DOE laboratory Fermilab, have been running a two-month summer training program for Italian university students. While in the first year the program involved only four physics students from the University of Pisa, in the following years it was extended to engineering students. This extension was very successful, and the engineering students have since been extremely well received by the Fermilab Technical, Accelerator, and Scientific Computing Division groups. Over the many years of its existence, this program has proven to be the most effective way to engage new students in Fermilab endeavors. Many students have extended their collaboration with Fermilab through Master's theses and PhDs.
Since 2004 the program has been supported in part by DOE within the framework of an exchange agreement with INFN. An additional agreement to share support for engineers from the School of Advanced Studies of S. Anna (SSSA) of Pisa was established in 2007 between SSSA and Fermilab; within this program four SSSA students are supported each year. Over its 35-year history, the program has grown in scope and size and has involved more than 500 Italian students from more than 20 Italian universities. Since the program does not exclude appropriately selected non-Italian students, a handful of students from European and non-European universities have also been accepted over the years.
Each intern is supervised by a Fermilab mentor responsible for carrying out the training program. Training programs have spanned Tevatron, CMS, Muon g-2, Mu2e, and SBN design and experimental data analysis; development of particle detectors (silicon trackers, calorimeters, drift chambers, neutrino and dark matter detectors); design of electronic and accelerator components; development of infrastructure and software for tera-scale data handling; and research on superconducting elements, accelerating cavities, and the theory of particle accelerators.
Since 2010, within an extended program supported by the Italian Space Agency and the Italian National Institute of Astrophysics, a total of 30 students in physics, astrophysics, and engineering have been hosted for two summer months at US space-science research institutes and laboratories.
In 2015 the University of Pisa incorporated these programs within its own educational offerings. Accordingly, summer school students are enrolled at the University of Pisa for the duration of the internship and are identified and insured as such. At the end of the internship the students are required to write summary reports on their achievements; after a positive evaluation by a University Examining Board, interns are awarded 6 ECTS credits toward their Diploma Supplement.
Information on student recruiting methods, on the training programs of recent years, and on the final student evaluation process at Fermilab and at the University of Pisa will be given in the presentation.
A direct measurement of the Higgs self-coupling is crucial to understanding the nature of electroweak symmetry breaking. This requires observing Higgs boson pair production, which suffers from a very low event rate even in the current LHC run. In our work, we study the prospects of observing Higgs pair production at the high-luminosity run of the 14 TeV LHC (HL-LHC) and at the proposed high-energy upgrade of the LHC at 27 TeV, the HE-LHC. For the HL-LHC study, we choose multiple final states based on event rate and cleanliness, namely the $b\bar{b}\gamma \gamma$, $b\bar{b} \tau^+ \tau^-$, $b\bar{b} WW^*$, $WW^*\gamma \gamma$ and $4W$ channels, and perform a collider study employing cut-based as well as multivariate analyses using the Boosted Decision Tree (BDT) algorithm. For the HE-LHC study, we select various di-Higgs final states based on their cleanliness and production rates, namely the $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^{+}\tau^{-}$, $b\bar{b}WW^{*}$, $WW^{*}\gamma\gamma$, $b\bar{b}ZZ^{*}$ and $b\bar{b}\mu^{+}\mu^{-}$ channels, and adopt multivariate analyses using the BDT algorithm, the XGBoost toolkit, and a Deep Neural Network (DNN) for signal-background discrimination. We also study the ramifications of varying the Higgs self-coupling from its Standard Model (SM) value.
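As a sketch of how such BDT-based signal-background discrimination is typically set up with the XGBoost toolkit (the four "kinematic" features, the toy data, and the hyperparameters below are placeholders, not those of this analysis):

```python
# Sketch of BDT-based signal/background discrimination with XGBoost.
# Features, data, and hyperparameters are illustrative placeholders.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
# Toy stand-ins for kinematic features (e.g. invariant masses, pT, angles).
X_sig = rng.normal(loc=1.0, scale=1.0, size=(5000, 4))   # signal, label 1
X_bkg = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))   # background, label 0
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4,
                        learning_rate=0.1, eval_metric="logloss")
clf.fit(X, y)

scores = clf.predict_proba(X)[:, 1]   # per-event signal probability
# A cut on `scores`, or its use as a fit variable, separates the classes.
```

In practice the classifier output either replaces a chain of rectangular cuts or serves as the discriminating variable in the final statistical fit.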
In this talk we will discuss the production of three Higgs bosons at the LHC and at a proton-proton collider running at a centre-of-mass energy of 100 TeV. We will argue that the seemingly challenging 6-bottom-jet final state is a very good candidate for investigating triple Higgs production within and beyond the SM at proton-proton colliders. In particular we will consider three different scenarios: one in which the triple and quartic Higgs boson self-couplings are not affected by new physics phenomena beyond the Standard Model (SM), and, in addition, two possible SM extensions by one and two new scalars. We will show that a 100 TeV machine can impose competitive constraints on the quartic coupling in the SM-like scenario. In the case of the scalar extensions of the SM, we will show that large significances can be obtained at the LHC and the 100 TeV collider while obeying current theoretical and experimental constraints, including a first-order electroweak phase transition.
One of the assumptions of simplified models is that only a few new particles and interactions are accessible at the LHC, and that all other new particles are heavy and decoupled. The effective field theory (EFT) approach provides a consistent method to test this assumption: simplified models can be augmented with higher-order operators involving the new particles accessible at the LHC, and any UV completion of the simplified model can be matched onto these beyond-the-Standard-Model EFTs (BSM-EFTs). In this paper we study the simplest simplified model: the Standard Model extended by a real gauge-singlet scalar. In addition to the usual renormalizable interactions, we include dimension-5 interactions of the singlet scalar with Standard Model particles. As we will show, even when the cutoff scale is 3 TeV, these new effective interactions can drastically change the interpretation of Higgs precision measurements and scalar searches.
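For orientation, a representative subset of dimension-5 operators coupling a singlet $S$ to SM fields takes the form (the normalization and operator basis here are illustrative, not necessarily those of this work):

\[
\mathcal{L}_{5} \supset \frac{S}{\Lambda}\left[ c_{H}\,(H^{\dagger}H)^{2} + c_{B}\,B_{\mu\nu}B^{\mu\nu} + c_{W}\,W^{a}_{\mu\nu}W^{a\,\mu\nu} + c_{G}\,G^{a}_{\mu\nu}G^{a\,\mu\nu} \right],
\]

where $\Lambda$ is the cutoff scale and the $c_{i}$ are Wilson coefficients. Operators such as $S\,B_{\mu\nu}B^{\mu\nu}/\Lambda$ induce singlet couplings to gauge bosons that are absent in the renormalizable model, which is why they can reshape the interpretation of Higgs measurements and scalar searches.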
An Effective Field Theory (EFT) re-interpretation of the differential measurement of vector-boson-fusion Higgs production with decay to two W bosons will be reported. The analysis used the full Run 2 dataset (2015-2018) of $pp$ collisions at $\sqrt{s}$=13 TeV recorded with the ATLAS detector at the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$. Events with an electron and a muon from the decays of the W bosons and two energetic jets in the final state are considered as signal. At particle level, Standard Model predictions can be modified by expressing the differential cross section of various observables as a function of parameters that represent new phenomena predicted by the EFT. The background-subtracted and unfolded data are used to set limits on these new-physics parameters.
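Schematically, the standard parameterization expands each differential cross section to quadratic order in the Wilson coefficients $c_{i}$ of the higher-dimensional operators:

\[
\frac{d\sigma}{dX} \;=\; \frac{d\sigma^{\mathrm{SM}}}{dX} \;+\; \sum_{i} \frac{c_{i}}{\Lambda^{2}}\,\frac{d\sigma^{\mathrm{int}}_{i}}{dX} \;+\; \sum_{i \le j} \frac{c_{i}c_{j}}{\Lambda^{4}}\,\frac{d\sigma^{\mathrm{quad}}_{ij}}{dX},
\]

where the interference and quadratic templates are computed once per observable $X$, so limits on the $c_{i}$ follow from comparing this parameterized prediction to the unfolded data.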
Rare decays of the Higgs boson are promising laboratories in which to search for physics beyond the standard model (BSM). Such BSM physics might alter Yukawa couplings to lighter quarks and add loop diagrams, possibly resulting in higher decay rates than predicted by the standard model. For the first time in four-lepton final states, decays of the Higgs boson into $ZJ/\psi$ or $Z\psi(2S)$ final states are searched for. In addition, Higgs decays into $J/\psi$ pairs, $\Upsilon$ pairs, $\psi(2S)J/\psi$, or $\psi(2S)\psi(2S)$ final states are studied. Events with subsequent decays of the $Z$ boson into lepton pairs ($e^{+}e^{-}$ or $\mu^{+}\mu^{-}$) and of $J/\psi$ or $\Upsilon$ mesons into muon pairs are selected using online event filters. Final states with $\psi(2S)$ mesons are accessed via the inclusive decay of $\psi(2S)$ into $J/\psi$. A data sample of proton-proton collisions collected at a center-of-mass energy of 13 TeV with the Compact Muon Solenoid detector at the Large Hadron Collider, corresponding to an integrated luminosity of about 137 fb$^{-1}$, is used. This talk will present recent searches and the implications for future searches for such BSM signatures with higher luminosities.
Neutrino mass generation is an essential implication of the neutrino oscillation experiments, and it requires a major revision of the Standard Model, which was formulated with massless neutrinos. A possible and interesting scenario is the seesaw mechanism, in which SM gauge-singlet right-handed neutrinos (RHNs) are introduced. Another interesting avenue is the extension of the SM with $SU(2)_L$ triplet fermions. Alternatively, a general $U(1)_L$ extension of the SM is also an interesting idea, involving three generations of SM-singlet RHNs to generate the tiny neutrino masses through the seesaw mechanism. Additionally, such models can contain a $Z^\prime$ boson which could be tested at colliders through the pair production of the RHNs.
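For reference, in the type-I seesaw the light-neutrino mass matrix is

\[
m_{\nu} \simeq -\, m_{D}\, M_{R}^{-1}\, m_{D}^{T},
\]

where $m_{D}$ is the Dirac mass matrix and $M_{R}$ the RHN Majorana mass matrix. As an illustrative scale: $m_{D} \sim 100$ GeV and $M_{R} \sim 10^{14}$ GeV give $m_{\nu} \sim (100\ \mathrm{GeV})^{2}/10^{14}\ \mathrm{GeV} = 0.1$ eV, the observed order of magnitude; collider-accessible RHNs correspondingly require smaller Yukawa couplings or additional structure.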
We propose a model-independent framework to classify and study neutrino mass models and their phenomenology. The idea is to introduce one particle beyond the Standard Model which couples to leptons and carries lepton number, together with an operator which violates lepton number by two units and contains this particle. This allows one to study processes which do not violate lepton number while still working with an effective field theory. The contribution to neutrino masses translates into a robust upper bound on the mass of the new particle. We compare it to the stronger but less robust upper bound from Higgs naturalness and discuss several lower bounds.
We consider the generation of neutrino masses via a singly-charged scalar singlet. Under general assumptions we identify two distinct structures for the neutrino mass matrix which are realised in several well-known radiative models. Either structure implies a constraint for the antisymmetric Yukawa coupling of the singly-charged scalar singlet to two left-handed lepton doublets, irrespective of how the breaking of lepton-number conservation is achieved. The constraint disfavours large hierarchies among the Yukawa couplings. We study the implications for the phenomenology of lepton-flavour non-universality, measurements of the $W$-boson mass, flavour violation in the charged-lepton sector and decays of the singly-charged scalar singlet. We also discuss the parameter space that can address the Cabibbo Angle Anomaly.
In a seesaw scenario, GUT and family symmetries can severely constrain the structure of the Dirac and Majorana mass matrices of neutrinos. We will discuss an interesting case where these matrices are related in such a way that definite predictions for light neutrino masses are achieved without specifying the seesaw scale. This opens up the possibility of considering both high- and low-scale leptogenesis. We will explore both of these possibilities in an $SU(5) \times \mathcal{T}_{13}$ model and show that sub-GeV right-handed neutrinos with active-sterile mixing large enough to be probed by DUNE can explain the baryon asymmetry of the Universe through resonant leptogenesis.
We study a possible structural linkage between neutrinos and charged leptons, in which neutrino mixing reflects neutrinos being in maximal mutual contact. It is posited that a single neutrino (a fermion) and a neutrino pair (a boson) may interact, subject to the constraint that one neutrino makes a maximum contact number of six with six other neutrinos under two-dimensional mixing. A structural linkage between neutrinos and charged leptons then emerges vertically and appears to be common and recurrent across the three generations. The winding angles of the series of neutrino pairs from which the heavier charged leptons (μ, τ) may be built up fall almost exactly on the observed neutrino mixing angles θ12 and θ23, respectively.
The challenging experimental environment at the High-Luminosity LHC (HL-LHC) will require replacement of the existing endcap calorimeters of the CMS experiment. In their place, the new HGCAL detector will provide a radiation-hard, high-granularity calorimeter that meets this challenge and offers improved capabilities for physics-object reconstruction. We review the design and current status of the HGCAL upgrade project.
The HL-LHC upgrade of the CMS experiment includes a replacement of the endcap calorimeters with the new High-Granularity Calorimeter (HGCAL). Development of radiation-hard 8" silicon sensors is an important part of the upgrade project. We will review the status of the sensor development, including radiation tests, and describe the plans toward full sensor production.
The HGCAL endcap calorimeter of the CMS experiment at HL-LHC will include a hadronic compartment that is based partly on the SiPM-on-tile concept. Building a performant SiPM-on-tile system involves the development and testing of rad-hard scintillators and SiPMs to meet the challenges of the HL-LHC experimental environment. We will review the design of the SiPM-on-tile part of the calorimeter, and describe the current status of the effort.
T2K is a long-baseline accelerator neutrino oscillation experiment which has precisely measured neutrino oscillation parameters and hinted at a significant matter-antimatter asymmetry in the lepton sector. In view of the upcoming program of beam-intensity upgrades, a novel plastic-scintillator detector called SuperFGD has been proposed for the T2K near-detector upgrade, aiming to reduce the statistical and systematic uncertainties of the measurements. The scintillation light from particle interactions in SuperFGD is collected by Multi-Pixel Photon Counters (MPPCs), solid-state photomultipliers with high internal gain. In this talk, the characterization of the MPPC sensors is presented.
A novel particle detector design is proposed utilizing a modified bandgap reference circuit. The output of the circuit is calibrated to be proportional to the work function of gallium nitride, which provides a reference voltage that is independent of temperature variations, supply variations and loading. It is hypothesized that particle interactions with the detector cause temporal fluctuations in the output. Experimental data of transient signals observed under neutron and alpha irradiation are presented.
The ATLAS missing transverse momentum trigger is susceptible to the impact of multiple proton-proton interactions (pileup) in the same event. To mitigate this impact, sophisticated subtraction schemes are utilized. During Run 2 data-taking (2015-2018), these methods relied only on information from the calorimeter, due to the limited time available for the algorithms to utilize tracks in the High Level Trigger (HLT), the software-based second-level trigger subsystem. In this talk, I will present updates on missing transverse momentum trigger algorithms that utilize tracking information for Run 3 (2022-2024).
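A cartoon of the kind of track-based pileup mitigation at stake here (the event content, variable names, and dz cut are illustrative, not the ATLAS algorithm):

```python
# Cartoon of track-assisted pileup mitigation in a missing-transverse-momentum
# calculation. Content and cuts are illustrative, not the ATLAS Run 3 trigger.
import math

def met_with_track_soft_term(hard_objects, tracks, z_pv, dz_cut_mm=2.0):
    """Vector-sum the hard objects; add only soft tracks compatible with the
    primary vertex, so tracks from pileup vertices do not enter the sum."""
    px = -sum(o["pt"] * math.cos(o["phi"]) for o in hard_objects)
    py = -sum(o["pt"] * math.sin(o["phi"]) for o in hard_objects)
    for t in tracks:
        if abs(t["z0"] - z_pv) < dz_cut_mm:      # keep hard-scatter tracks
            px -= t["pt"] * math.cos(t["phi"])
            py -= t["pt"] * math.sin(t["phi"])
    return math.hypot(px, py)

# Hypothetical event: two jets plus two soft tracks, one from a pileup vertex.
jets = [{"pt": 80.0, "phi": 0.1}, {"pt": 60.0, "phi": 2.9}]
trks = [{"pt": 3.0, "phi": 1.0, "z0": 0.2},
        {"pt": 5.0, "phi": -2.0, "z0": 35.0}]    # pileup track, rejected
print(met_with_track_soft_term(jets, trks, z_pv=0.0))
```

The longitudinal vertex association is what calorimeter-only methods lack, which is why adding tracking information in the HLT improves pileup robustness.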
We present measurements of the CMS jet energy scale (JES) and resolution, based on a data sample collected in proton-proton collisions at a center-of-mass energy of 13 TeV. The corrections, extracted from data and simulated events using a combination of several channels and methods, account successively for the effects of pileup, the simulated jet response, and residual JES dependences on eta and pT. The jet energy resolution is measured in data and simulated events, where it is studied as a function of pileup and of jet pT and eta. The studies exploit events with dijet, photon+jet, Z+jet, and multijet topologies.
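The "successive" structure of the corrections can be sketched as a factorized chain; the correction functions below are trivial placeholders, whereas real corrections are parameterizations in pT and eta derived from data and simulation.

```python
# Sketch of a factorized jet-energy-correction chain of the kind described
# above. All correction values are placeholders, not CMS corrections.

def pileup_offset(pt_raw, rho, area):
    """Subtract the average pileup energy density rho times the jet area."""
    return max(pt_raw - rho * area, 0.0)

def response_correction(pt, eta):
    """Placeholder for the simulated-response correction (a (pT, eta) lookup)."""
    return 1.05

def residual_correction(pt, eta):
    """Placeholder for residual data/simulation corrections in eta and pT."""
    return 0.99

def corrected_jet_pt(pt_raw, eta, rho, area):
    pt = pileup_offset(pt_raw, rho, area)   # step 1: pileup offset
    pt *= response_correction(pt, eta)      # step 2: simulated jet response
    pt *= residual_correction(pt, eta)      # step 3: residual eta/pT terms
    return pt

print(corrected_jet_pt(pt_raw=52.0, eta=1.2, rho=20.0, area=0.5))  # ~43.7
```

Factorizing the corrections lets each step be derived from the channel that constrains it best and updated independently of the others.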
The ongoing CMS analysis measuring the full spin-density production matrix, which includes multi-differential measurements of variables sensitive to top quark spin correlations, polarization, and related angular observables, is presented. Events containing two leptons, two b-jets, additional jets, and missing transverse momentum, produced in proton-proton collisions at a center-of-mass energy of 13 TeV, are considered. The data correspond to an integrated luminosity of 137 fb$^{-1}$ collected with the CMS detector at the LHC. The results are used to challenge Standard Model predictions and to indirectly search for contributions of new physics.
We present our recent NNLO calculation of t-channel single-top-quark production and decay that resolves a disagreement between two previous calculations whose size at the inclusive level was comparable to the NNLO correction itself, and was even larger differentially. Moving beyond those comparisons, we have included b-quark tagging to allow for comparison with experiment, and added the ability to use double deep inelastic scattering (DDIS) scales ($\mu^2=Q^2$ for the light-quark line and $\mu^2=Q^2+m_t^2$ for the heavy-quark line) that allow for direct testing of parton distribution function (PDF) stability. All code will be publicly available in MCFM.
We demonstrate that several characteristic fiducial and differential Standard Model observables, as well as observables sensitive to new physics, are stable between NLO and NNLO, but point out that there is a sizable difference in the prediction of some exclusive t+n-jet cross sections. Finally, we use this calculation to present preliminary results indicating that some commonly used PDF sets are in significant disagreement, both with each other and with themselves between perturbative orders, when evaluated at Tevatron energies.
A simultaneous measurement of the three components of the top-quark and top-antiquark polarization vectors in $t$-channel single-top-quark production is presented. Due to the large mass of the top quark, the $t\rightarrow Wb$ decay occurs before hadronization, giving access to the polarization through the angular distributions of the decay products. The analysis uses an integrated luminosity of 139 fb$^{-1}$ of proton-proton collisions at 13 TeV, collected with the ATLAS detector at the LHC. We also discuss the more intricate analysis of the quadruple-differential decay rate in $t$-channel single-top-quark production, which is currently in progress; its purpose is the simultaneous determination of four decay amplitudes and their phases, in addition to all three components of the polarization vector, for top quarks and antiquarks separately. Prospects for constraining anomalous couplings and Effective Field Theory coefficients with this analysis are also discussed.
The Mu2e experiment, currently under construction at Fermilab, will search for charged lepton flavor violation (CLFV) in the form of coherent neutrinoless conversion of muons to electrons in the presence of an atomic nucleus. In order to reach its projected single-event sensitivity of $3 \times 10^{-17}$, Mu2e will need to create the most intense muon beam ever developed, with $10^{10}$ muons per second stopping in the stopping target. These muons will be produced from pions originating in the production target, so optimizing pion production is a vital component of the Mu2e design. This talk will discuss how the design of the production target, solenoid, and instrumentation is optimized for pion production.
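To see the scale involved, a back-of-the-envelope estimate of the single-event sensitivity (with all factors illustrative) is

\[
\mathrm{SES} \sim \frac{1}{N_{\mu}^{\mathrm{stop}}\; f_{\mathrm{capture}}\; \epsilon_{\mathrm{sig}}} \approx \frac{1}{10^{18} \times 0.6 \times 0.1} \approx 2 \times 10^{-17},
\]

where $10^{10}$ stopped muons per second accumulated over a few years of running gives $N_{\mu}^{\mathrm{stop}} \sim 10^{18}$, $f_{\mathrm{capture}}$ is the fraction of stopped muons captured by the nucleus, and $\epsilon_{\mathrm{sig}}$ is the signal acceptance times efficiency. The values here are round numbers chosen only to show how the quoted order of magnitude arises from the beam intensity.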
The Mu2e experiment being