Thank you for making this a successful meeting!
Recordings of presentations are now available for most talks. The timetable lists all talks. Each contribution (where the speaker allowed us) should have a link labelled "Recording".
There will be no proceedings for DPF21.
Registration and abstract submission are now closed.
The APS Division of Particles & Fields (DPF) Meeting brings the members of the Division together to review results and discuss future plans and directions for our field. It is an opportunity for attendees, especially young researchers, to present their findings. The meeting opened each day with plenary sessions, followed by selected community sessions and then parallel sessions to complete the day.
Topics covered included: LHC Run 2 Results; LHC Run 3 and HL-LHC Projections; Accelerators & Detectors; Computing, Machine Learning, and AI; Quantum Computing and Sensing; Electroweak & Top Quark Physics; Higgs Physics; QCD & Heavy Ion Physics; Rare Processes and Precision Measurements; Neutrino Physics; Physics Beyond the Standard Model; Particle Astrophysics; Dark Matter; Cosmology & Dark Energy; Gravity & Gravitational Waves; Lattice Gauge Theory; Field & String Theory; Outreach & Education; Diversity, Equity, & Inclusion.
DPF2021 was held as a virtual-only event via Zoom. It was hosted by the Florida State University High Energy Physics group, with the scientific program determined by the DPF Program Committee.
There was no registration fee. APS membership was not required to register or to submit an abstract.
Support for this meeting was provided by the FSU Office of Research, the FSU College of Arts and Sciences, the FSU Physics Department, and the FSU High Energy Physics group.
Follow the meeting on Twitter @apsdpf2021 or #apdpf2021.
Experimental hints of lepton flavor non-universality in the decays of b-hadrons are an exciting sign of possible new physics beyond the Standard Model. Results from multiple experiments indicate that electrons, muons, and tau leptons may differ by more than just their masses. I will review the experimental situation of these “b-anomalies”, including recent developments and prospects for the near future. Further results from LHCb, Belle II, and other experiments in the coming years should be able to confirm or rule out the presence of new physics in these decays.
This talk will review the role of the CKM matrix in governing meson-antimeson oscillations and CP violation in the Standard Model. Recent measurements of B_s oscillations and decays by LHCb, CMS, and ATLAS will be discussed in this context, as will be measurements of CP violation in B_d and B_u decays. The direct measurements of the CKM angle gamma (from decays produced by tree-level amplitudes) are combined and the resulting value compared to that determined indirectly.
Flavor physics is addressing two complementary questions. First, what is the origin of the hierarchical flavor structure of the Standard Model quarks and leptons? Second, are there sources of flavor and CP violation beyond the Standard Model? I will discuss recent theoretical developments in this area, focusing mainly on the so-called "B-anomalies" -- persistent hints for the violation of lepton flavor universality in decays of B mesons. I will review the status of the anomalies, discuss possible new physics explanations, and outline the prospects of resolving the anomalies with expected experimental data.
HEP is funded essentially entirely with public funds. Since there are many organizations that wish to receive federal funds, it is imperative that the HEP community raise its visibility among the general public. However, outreach to the public has long been neglected by the HEP community. In this talk, I will discuss the importance of outreach. I will also describe some of the methods that work, and will also mention some programs that the Snowmass process is developing to make it easier for individuals to engage in communicating with the public.
Many have observed that, for organizations, there is evidence to support the belief that their cultures are their destinies. During the summer of 2020, the DELTA-PHY initiative was launched as an effort for the APS to deliberate upon and, if needed, move to transform its culture. It does this by asking three key questions: (a) What are the values of the APS? (b) Aside from producing world-class physics, what are the inputs, outputs, practices, and traditions of the APS? (c) Do the answers to these questions align with each other, and are they in alignment with the APS 2019 strategic plan? DELTA-PHY activities are envisioned to be timely and to cut across the society's 'stovepipe' structures.
A search is presented for chargino pair-production and chargino-neutralino production, where the almost mass-degenerate chargino and neutralino each decay via $R$-Parity-violating couplings to a boson ($W/Z/H$) and a charged lepton or neutrino. This analysis searches for a trilepton invariant mass resonance in data corresponding to an integrated luminosity of 139 fb$^{-1}$ recorded in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector at the Large Hadron Collider at CERN.
A search for production of the supersymmetric partners of the top quark, top squarks, is presented. The search is based on proton-proton collision events containing multiple jets, no leptons, and large transverse momentum imbalance. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 137 fb-1. The targeted signal production scenarios are direct and gluino-mediated top squark production, including scenarios in which the top squark and neutralino masses are nearly degenerate. The search utilizes novel algorithms based on deep neural networks that identify hadronically decaying top quarks and W bosons, which are expected in many of the targeted signal models. No statistically significant excess of events is observed relative to the expectation from the standard model, and limits on the top squark production cross section are obtained in the context of simplified supersymmetric models for various production and decay modes.
With several recent anomalies observed that are in tension with the Standard Model, and with no clear roadmap to the source of new physics, this is an exciting time to explore for new particles at the LHC. Supersymmetry (SUSY) is an elegant solution to many of the Standard Model mysteries, and SUSY models with electroweakly produced sparticles are particularly interesting as possible explanations to the g-2 anomaly, the observed dark-matter density, and more. ATLAS has a rich program of complementary electroweak SUSY searches, and the latest Run 2 results using 139 fb$^{-1}$ of 13 TeV proton-proton collision data are discussed that shed light on where new physics may be found, such as in the three lepton final state.
The Minimal Supersymmetric Standard Model (MSSM) is one of the most well-motivated and well-studied scenarios for going beyond the Standard Model (SM). Apart from solving the hierarchy problem, one of its primary motivations is the presence of a suitable dark matter (DM) candidate, namely the lightest neutralino, in the SUSY particle spectrum. Measurements of the DM relic density of the universe by the WMAP/Planck experiments put the model to the test. In addition, stringent constraints on the masses of strongly interacting sparticles have been placed at the Large Hadron Collider (LHC) by analysing Run II data for specific simplified models. However, many assumptions made by the experimental collaborations cannot be realized in actual theoretically motivated models. In this study, we revisit the bound on the gluino mass placed by the ATLAS collaboration. We show that the exclusion region in the $M_{\widetilde{g}}-M_{\widetilde{\chi}^0_1}$ plane shrinks in the pMSSM scenario for different hierarchies of the left and right squark mass parameters. Importantly, for higgsino-like lighter electroweakinos, the bound on the gluino mass from the 1l + jets + MET search practically does not exist. We have also performed a detailed analysis of neutralino dark matter and find that, over most of the LSP mass range, the required relic density is achieved and both the direct and indirect detection constraints are satisfied.
A search for supersymmetry involving the pair production of gluinos decaying via top squarks into the lightest neutralino $\tilde{\chi}^{0}_{1}$ is reported. It uses LHC $pp$ collision data at $\sqrt{s} = 13$ TeV with an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector in 2015-2018. The search is performed in events containing large missing transverse momentum and several energetic jets, at least three of which must be identified as originating from b-quarks. The analysis considers two final states: one requires at least one charged lepton (electron or muon), and the other requires a veto on leptons. Expected exclusion limits on gluino and neutralino masses are evaluated using a simplified signal model: gluino masses of less than 2275 GeV are excluded at the 95% confidence level for neutralino masses of 800 GeV and below.
A Left-Right Symmetric Model which utilizes vector-like fermions (VLFs) to generate fermion masses via a universal see-saw mechanism is studied. In this talk, I will present the latest results of our analysis of the flavor observables constraining the model. The Cabibbo anomaly can be easily resolved in this model, thereby predicting the mass of the vector-like quarks. Further, I will discuss the possibility of explaining the neutral-current B-anomalies using this model.
The discovery of a Higgs boson with mass near 125 GeV in 2012 marked one of the most important milestones in particle physics. The low mass of this Higgs boson, together with its diverging loop corrections, adds motivation to look for new physics Beyond the Standard Model (BSM). Several BSM theories introduce new heavy quark partners, called vector-like quarks (VLQ), with masses at the TeV scale. In particular, the vector-like top quark (T) can cancel the largest correction due to the top quark loop, which is one of the main contributions to the divergence, and stabilize the scalar Higgs boson mass. This analysis searches for pair production of vector-like T or B quarks with charge 2e/3 and e/3 in proton-proton collisions at 13 TeV at the LHC. Theories predict three decay modes each for T and B: bW, tZ, tH and tW, bZ, bH, respectively. The branching ratios vary over different theoretical models. We focus on events where the bosons decay leptonically, resulting in a final state with a same-sign (SS) dilepton pair or a final state with multiple (3 or more) leptons. We analyze data collected by the CMS detector at the LHC in 2017 and 2018 with integrated luminosities of 41.5 and 59.7 fb$^{-1}$. Besides Standard Model (SM) processes with the same final states, lepton misidentification contributes a significant part of the background to both the SS dilepton and multilepton channels and is estimated by a data-driven method. In addition, charge misidentification is another source of background for the SS dilepton channel, which is also estimated by a data-driven method. Comparing the estimated background with data, and considering uncertainties, we determine an upper limit on the TT or BB production cross section. We calculate limits at different mass points of T and B and different branching fraction combinations.
Vector-like quarks (VLQ) are predicted in many extensions to the Standard Model (SM), especially those aimed at solving the hierarchy problem. Their vector-like nature allows them to extend the SM while remaining compatible with electroweak sector measurements. In many models, VLQs decay to a SM boson and a third-generation quark. Pair production of VLQs provides a model-independent search channel, since the production proceeds via quantum chromodynamics. This talk presents a search for pair production of vector-like top quarks that each decay into a SM W boson and a bottom quark, with one W boson decaying leptonically and the other decaying hadronically. The analysis takes advantage of boosted boson identification and a data-driven correction of the dominant ttbar background prediction to improve sensitivity. Further, this analysis extends the previous analysis sensitivity by including the full 140 fb$^{-1}$ dataset of $pp$ collisions at $\sqrt{s}=$13 TeV collected with the ATLAS detector.
We present the status of our all-hadronic analysis searching for pair-produced Vector-Like Quarks (VLQs) using the Boosted Event Shape Tagger (BEST) with the CMS detector, using 137 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collisions at the LHC. VLQs are motivated by models which predict compositeness of the scalar Higgs boson and which avoid increasing constraints from Higgs measurements. In the all-hadronic channel, this analysis is sensitive to all possible VLQ decay modes, T(B)->t(b)H/t(b)Z/b(t)W, capturing the highest branching fraction of each process. The high mass of the VLQs produces highly boosted objects in the final state, which can be reconstructed as anti-kt R=0.8 jets and identified as QCD/b/W/Z/H/t using the BESTagger. The tagger boosts jet constituents into various rest frames and uses neural networks to find correlations between event shape variables, such as Fox-Wolfram moments and sphericity, to determine the identification category. We define signal regions by the classification of the four highest-pT jets. We estimate our QCD-dominated background with a data-driven 3-jet control region, then fit its normalization simultaneously with simulations of well-modeled sub-dominant backgrounds such as ttbar and W/Z+jets. The HT (scalar sum $p_T$) of the event is scanned for an excess of signal in 120 of the 126 possible combinations, and the 6 least signal-rich combinations are used as validation regions for the QCD estimation. The analysis is in progress and is expected to be completed soon.
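As a reference for the event-shape inputs mentioned above, the Fox-Wolfram moments are conventionally defined (in a given reference frame, with $P_\ell$ the Legendre polynomials and $\theta_{ij}$ the opening angle between constituents $i$ and $j$) as
$$ H_\ell \;=\; \sum_{i,j} \frac{|\vec{p}_i|\,|\vec{p}_j|}{E_{\mathrm{vis}}^2}\, P_\ell(\cos\theta_{ij}), $$
where the sum runs over pairs of constituents and $E_{\mathrm{vis}}$ is the total visible energy; the exact conventions used inside the BEST tagger may differ.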
In many models that address the naturalness problem, top-quark partners are often postulated in order to cure the quadratic corrections to the mass of the Higgs boson. In this work, we study alternative modes for the production of top- and bottom-quark partners ($T$ and $B$), $pp\rightarrow B$ and $pp\rightarrow T\bar{t}$, via a chromo-magnetic moment coupling. We adopt the simplest composite Higgs effective theory for the top-quark sector incorporating partial compositeness, and investigate the sensitivity of the 14 TeV LHC to these production modes.
The recently updated measurement of the muon anomalous magnetic moment strengthens the motivations for new particles beyond the Standard Model. We discuss two well-motivated 2HDM scenarios with vectorlike leptons as well as the Standard Model extended with vectorlike lepton doublets and singlets as possible explanations for the anomalous measurement. In these models we find that, with couplings of order 1, new leptons as heavy as 8 TeV can explain the anomaly, well out of reach of expectations for the LHC. We summarize the implications of future precision measurements of Higgs- and Z- boson couplings which can provide indirect probes of these scenarios and their viability to explain the anomalous magnetic moment of the muon.
In this talk, we will introduce a technique to train neural networks into being good event variables, which are useful to an analysis over a range of values for the unknown parameters of a model.
We will use our technique to learn event variables for several common event topologies studied in colliders. We will demonstrate that the networks trained using our technique can mimic powerful, previously known event variables like invariant mass, transverse mass, and MT2.
We will describe how the machine learned event variables can go beyond the hand-derived event variables in terms of sensitivity, while retaining several attractive properties of event variables, including the robustness they offer against unknown modeling errors.
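For reference, the hand-derived variables mentioned above have standard definitions (the conventions may differ slightly from those used in the talk). For a single leptonic decay with missing transverse momentum, the transverse mass is
$$ m_T^2 \;=\; m_\ell^2 + m_\nu^2 + 2\left(E_T^{\ell}E_T^{\nu} - \vec{p}_T^{\,\ell}\cdot\vec{p}_T^{\,\nu}\right), \qquad E_T = \sqrt{m^2 + |\vec{p}_T|^2}, $$
and for pair-produced particles each decaying to a visible system $v_i$ plus an invisible particle, the stransverse mass is
$$ M_{T2} \;=\; \min_{\vec{q}_T^{\,(1)}+\vec{q}_T^{\,(2)}=\vec{p}_T^{\,\mathrm{miss}}} \; \max\!\left[\, m_T\big(v_1,\vec{q}_T^{\,(1)}\big),\; m_T\big(v_2,\vec{q}_T^{\,(2)}\big) \right]. $$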
To compare collider data with theoretical calculations, the data must first be corrected for various detector effects, namely noise processes, detector acceptance, detector distortions, and detector efficiency; this process is called “unfolding” in high energy physics (or “deconvolution” elsewhere). While most unfolding procedures are carried out over only one or two binned observables at a time, OmniFold is a simulation-based maximum likelihood procedure which employs deep learning to perform unbinned, high-dimensional unfolding. We apply OmniFold to a measurement of all charged-particle properties in $Z+$jets events using the full Run 2 $pp$ collision dataset recorded by the ATLAS detector, completing the first application of OmniFold to real collider data.
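Schematically (following the published OmniFold method; the details of this application may differ), each iteration trains a classifier $f(x)$ to distinguish data from simulation at detector level and converts its output into an event weight via the likelihood-ratio trick,
$$ w(x) \;=\; \frac{f(x)}{1-f(x)} \;\approx\; \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{sim}}(x)}, $$
after which the weights are pulled back to particle level and the procedure is iterated until convergence.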
We examine the problem of unfolding in particle physics, or de-corrupting observed distributions to estimate underlying truth distributions, through the lens of Empirical Bayes and deep generative modeling. The resulting method, Neural Empirical Bayes (NEB), can unfold continuous multi-dimensional distributions, in contrast to traditional approaches that treat unfolding as a discrete linear inverse problem. We exclusively apply our method in the absence of a tractable likelihood function, as is typical in scientific domains relying on computer simulations. Moreover, combining NEB with suitable sampling methods allows posterior inference for individual samples, thus enabling the possibility of reconstruction with uncertainty estimation.
As the search for physics beyond the Standard Model widens, 'model-agnostic' searches, which do not assume any particular model of new physics, are increasing in importance. One promising model-agnostic search strategy is Classification Without Labels (CWoLa), in which a classifier is trained to distinguish events in a signal region from similar events in a sideband region, thereby learning about the presence of signal in the signal region. The CWoLa strategy was recently used in a full search for new physics in dijet events from Run-2 ATLAS data; in this search, only the masses of the two jets were used as classifier inputs. It has since been observed that while CWoLa performs well in such low-dimensional use cases, difficulties arise when adding additional jet features as classifier inputs. In this talk, we will describe ongoing work to combat these problems and extend the sensitivity of a CWoLa search by adding new observables to an ongoing analysis using $139$ $\text{fb}^{-1}$ of data from $pp$ collisions at $\sqrt{s}=$ 13 TeV in the ATLAS detector. In particular, we will discuss the anticipated benefits of adding classifier features, as well as the implementation of a simulation-assisted version of CWoLa which makes the strategy more robust.
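To illustrate the core idea of CWoLa (a toy sketch, not the ATLAS analysis code; the dataset shapes and parameters below are invented for illustration), a classifier trained only on region labels learns to rank events by how signal-like they are, provided the signal is localized in the signal region:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# Toy CWoLa setup: the "signal region" is background plus a small injected signal,
# the "sideband" is background only. Labels refer to the region, not to truth.
rng = np.random.default_rng(0)
n_bkg, n_sig = 50_000, 1_000
bkg_sr   = rng.normal(0.0, 1.0, size=(n_bkg, 2))   # background events in the signal region
sig_sr   = rng.normal(1.5, 0.5, size=(n_sig, 2))   # injected signal events
sideband = rng.normal(0.0, 1.0, size=(n_bkg, 2))   # background-like sideband events

X = np.vstack([bkg_sr, sig_sr, sideband])
y = np.concatenate([np.ones(n_bkg + n_sig), np.zeros(n_bkg)])  # 1 = signal region, 0 = sideband

# Train on region labels only; by the CWoLa argument the optimal such classifier
# is monotonically related to the signal-vs-background likelihood ratio.
clf = HistGradientBoostingClassifier(max_iter=200).fit(X, y)

# Signal-region events with the highest scores are enriched in the injected signal.
scores = clf.predict_proba(X[: n_bkg + n_sig])[:, 1]
print("mean score, background:", scores[:n_bkg].mean())
print("mean score, signal:    ", scores[n_bkg:].mean())
```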
Excursion is a tool to efficiently estimate level sets of computationally expensive black-box functions using active learning. Excursion uses Gaussian process regression as a surrogate model for the black-box function and queries the target function (the black box) iteratively in order to increase the available information about the desired level sets. We implement Excursion using GPyTorch, which provides state-of-the-art fast posterior fitting techniques and takes advantage of GPUs to scale computations to higher dimensions.
In this talk, we demonstrate that Excursion significantly outperforms traditional grid-search approaches, and we will detail the current work in progress on improving Exotics searches as an intermediate step towards the ATLAS Run 2 pMSSM scan on $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector.
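A minimal sketch of the kind of GP surrogate and level-set acquisition step described above, written with GPyTorch (illustrative only: the black-box function, kernel choice, and acquisition rule here are assumptions, not the Excursion package itself):

```python
import torch
import gpytorch

# Exact GP regression surrogate with a standard RBF kernel.
class SurrogateGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Placeholder for an expensive black-box function (e.g. a significance map).
def black_box(x):
    return torch.sin(3.0 * x).squeeze(-1)

train_x = torch.rand(10, 1)            # initial random queries
train_y = black_box(train_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = SurrogateGP(train_x, train_y, likelihood)

# Fit GP hyperparameters by maximizing the marginal log likelihood.
model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Simple acquisition step: query the candidate whose posterior is most
# compatible with lying on the target level set y = 0.
model.eval(); likelihood.eval()
cand_x = torch.linspace(0.0, 1.0, 200).unsqueeze(-1)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    post = likelihood(model(cand_x))
    score = torch.distributions.Normal(0.0, 1.0).log_prob((0.0 - post.mean) / post.stddev)
next_x = cand_x[score.argmax()]        # next point to evaluate with the black box
```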
Data Quality Monitoring (DQM) is an important part of collecting high-quality data for physics analysis. Currently, the DQM workflow is labor intensive, requiring experts to scrutinize and certify hundreds of histograms. Identifying good-quality, reliable data is necessary for accurate predictions and simulations, so anomalies in the detector must be identified promptly to minimize data loss. With machine learning (ML) algorithms, raising alarms on anomalies or failures can be automated and the data certification process made more efficient. The Tracker DQM team at the CMS experiment (at the LHC) has been working on designing and implementing ML features to monitor this complex detector. This contribution presents recent progress in this direction.
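As a purely illustrative example of automated histogram certification (a generic sketch on toy inputs, not the CMS Tracker DQM implementation), one can train an autoencoder-like model on histograms from certified-good runs and flag runs with a large reconstruction error:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy "good run" monitoring histograms (500 runs, 50 bins each), normalized
# to unit area so that only the shape matters.
rng = np.random.default_rng(1)
good = rng.poisson(lam=100, size=(500, 50)).astype(float)
good /= good.sum(axis=1, keepdims=True)

# Autoencoder-like model: reproduce good-run histogram shapes through a narrow hidden layer.
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(good, good)

def anomaly_score(hist):
    """Mean squared reconstruction error of a normalized histogram."""
    shape = hist / hist.sum()
    recon = ae.predict(shape.reshape(1, -1))[0]
    return float(np.mean((shape - recon) ** 2))

# A histogram with a dead region scores much higher than a typical good run.
bad = rng.poisson(lam=100, size=50).astype(float)
bad[20:30] = 0.0
print(anomaly_score(bad), anomaly_score(rng.poisson(lam=100, size=50).astype(float)))
```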
Some of the open questions in fundamental physics can be addressed by looking at the distribution of matter in the Universe as a function of scale and time (or redshift). We can study the nature of dark energy, which causes the accelerated expansion of the Universe. We can measure the sum of the neutrino masses, and potentially determine their hierarchy. We can test the standard model at energies higher than those accessible in the laboratory by studying the primordial density perturbations. The Dark Energy Spectroscopic Instrument (DESI) has just started a five-year program to generate the largest and most accurate 3D map of the distribution of galaxies and quasars. By measuring the statistical properties of these catalogs, DESI will be able to reconstruct the expansion history of the Universe over the last 11 billion years, while making precise measurements of the growth of structure. In this presentation, I will review the forecasted performance of the DESI survey and show how it will dramatically improve our understanding of dark energy, inflation, and the mass of the neutrinos.
The Dark Energy Spectroscopic Instrument (DESI) has embarked on an ambitious survey to explore the nature of dark energy with spectroscopic measurements of 35 million galaxies and quasars in just five years. DESI will determine precise redshifts and employ the Baryon Acoustic Oscillation method to measure distances from the local universe to beyond 11 billion light years, as well as employ Redshift Space Distortions to measure the growth of structure and probe potential modifications to general relativity. In this presentation I will describe the instrumentation we developed to conduct the DESI survey, as well as the flowdown from the science requirements to the technical requirements on the instrumentation. The new instrumentation includes a wide-field, 3.2 degree diameter prime-focus corrector that focuses the light onto 5020 robotic fiber positioners on the 0.8-m diameter, aspheric focal surface. This high density is only possible because of the very compact positioner design, which allows a minimum separation of only 10.4 mm. The positioners and their fibers are evenly divided among ten wedge-shaped petals, and each bundle directs the light of 500 fibers into one of ten spectrographs via a contiguous, high-efficiency, nearly 50-m fiber cable bundle. The ten identical spectrographs each use a pair of dichroics to split the light into three wavelength channels; each channel is optimized for a distinct wavelength range and spectral resolution, and together they record light from 360-980 nm. I will conclude with some highlights from the on-sky validation of the instrument.
The Dark Energy Spectroscopic Instrument (DESI) has started its main survey. Over 5 years, it will measure the spectra and redshifts of about 35 million galaxies and quasars over 14,000 square degrees. This 3D map will be used to reconstruct the expansion history of the universe up to z=3.5 and to measure the growth rate of structure in the redshift range 0.7-1.6 with unequaled precision. The start of the survey marks the end of a successful survey validation period during which more than one million cosmological redshifts were measured, already about as many as in any previous survey. This data set, along with many commissioning studies, has demonstrated that the project meets the science requirements written many years ago. I will present how we have validated the target selection, the observation strategy, and the data processing, demonstrating that we can achieve our goals in terms of the density of galaxies and quasars with measured redshifts, with the required precision, for exposure times that allow us to cover one third of the sky in five years.
An intriguing and well-motivated possibility for the particle makeup of the dark sector is that a small fraction of the observed abundance is made up of light, feebly-interacting particle species. Because of their weak interactions but comparatively large number abundance, cosmological datasets are particularly powerful tools here. In this talk I discuss the impact of these new particle species on observables, the CMB and LSS in particular, and present the strongest constraints to date on the existence of light relics in our universe.
GAMBIT (the Global and Modular Beyond-the-standard-model Inference Tool) is a flexible and extensible framework that can be used to undertake global fits of essentially any BSM theory to relevant experimental data sets. Currently included in the code are results from collider searches for new physics, cosmology, neutrino experiments, astrophysical and terrestrial dark matter searches, and precision measurements. In this talk I will begin with a brief update on recent additions to the code and then present the results of a recent global fit that we have undertaken. In this study, we simultaneously varied the coefficients of 14 EFT operators describing the interactions between dark matter, quarks, gluons, and the photon, in order to determine the most general current constraints on the allowed properties of WIMP dark matter.
Automated tools for the computation of amplitudes and cross sections have become the backbone of phenomenological studies beyond the standard model. We present the latest developments in MadDM, a calculator of dark matter observables based on MadGraph5_aMC@NLO. The new version enables the fully automated computation of loop-induced annihilation processes, relevant for indirect detection of dark matter. Of particular interest is the electroweak annihilation into $\gamma X$, where $X=\gamma$, $Z$, $h$ or any new unstable particle even under the dark symmetry. These processes lead to the sharp spectral feature of monochromatic gamma lines: a smoking-gun signature for dark matter annihilation in our Galaxy. MadDM provides the predictions for the respective fluxes near Earth and derives constraints from the $\gamma$-ray line searches by Fermi-LAT and HESS. As an application, we present the implications for the parameter space of the Inert Doublet model and a top-philic $t$-channel mediator model.
The WIMP proposed here yields the observed abundance of dark matter, and is consistent with the current limits from direct detection, indirect detection, and collider experiments, if its mass is $\sim 72$ GeV/$c^2$. It is also consistent with analyses of the gamma rays observed by Fermi-LAT from the Galactic center (and other sources), and of the antiprotons observed by AMS-02, in which the excesses are attributed to dark matter annihilation. These successes are shared by the inert doublet model (IDM), but the phenomenology is very different: The dark matter candidate of the IDM has first-order gauge couplings to other new particles, whereas the present candidate does not. In addition to indirect detection through annihilation products, it appears that the present particle can be observed in the most sensitive direct-detection and collider experiments currently being planned.
If dark matter interacts too feebly with ordinary matter, it was not able to thermalize with the bath in the early universe. Such Feebly Interacting Massive Particles (FIMPs) would therefore be produced via the freeze-in mechanism. Testing FIMPs is a challenging task, given the smallness of their couplings. In this talk, I will discuss our recent proposal of a $Z’$ portal where freeze-in can be currently tested by many experiments. In our model, $Z’$ bosons with masses in the MeV-PeV range have both vector and axial couplings to ordinary and dark fermions. We place constraints on our parameter space with bounds from direct detection, atomic parity violation, leptonic anomalous magnetic moments, neutrino-electron scattering, collider, and beam dump experiments.
Dark, chiral fermions carrying lepton flavor quantum numbers are natural candidates for freeze-in. Small couplings with the Standard Model fermions, of the order of the lepton Yukawas, are ‘automatic’ in the limit of Minimal Flavor Violation. In the absence of total lepton number violating interactions, particles in certain representations of the flavor group remain absolutely stable. For masses in the GeV-TeV range, the simplest model with three flavors leads to signals at future direct detection experiments like DARWIN. Interestingly, freeze-in with a smaller flavor group such as SU(2) is already being probed by XENON1T.
The DAMIC experiment at SNOLAB uses thick, fully-depleted, scientific grade charge-coupled devices (CCDs) to search for the interactions between proposed dark matter particles in the galactic halo and the ordinary silicon atoms in the detector. DAMIC CCDs operate with an extremely low instrumental noise and dark current, making them particularly sensitive to ionization signals expected from low-mass dark matter particles. Throughout 2017-18, DAMIC has collected data with an array of seven CCDs (40-gram target) installed in a low radiation environment in the SNOLAB underground laboratory. This talk will focus on the recent dark matter search results from DAMIC. We will present the search methodology and results from an 11 kg day exposure WIMP search, including the strictest limit on the WIMP-nucleon scattering cross section for a silicon target for $m_\chi < 9 \ \rm GeV \ c^{-2}$. Additionally, we will discuss recent limits on light dark matter that could interact with the electrons of the silicon atoms.
SuperCDMS deploys cryogenic germanium and silicon detectors which are sensitive in both the athermal phonon and ionization channels to search for dark matter. In order to observe such a small potential signal, all background sources need to be well understood and then mitigated.
Low-background shielding was designed such that the environmental background is negligible compared to the irreducible background due to cosmogenic activation in the detectors themselves. The overall background budget of the SuperCDMS experiment will be presented, along with the iterative process of design, assay, and fabrication of the now complete shielding system.
The third science run of SuperCDMS HVeV detectors (single-charge sensitive detectors with high Neganov-Trofimov-Luke phonon gain) took place at the NEXUS underground test facility in early 2021, incorporating two important changes to test background hypotheses and enhance sensitivity. First, this was the first HVeV dataset taken underground (300 mwe) and in a shielded environment. Second, the run utilized three detectors operated simultaneously to identify sources of background events that produce 2 or more electron-hole pairs. We will present preliminary results and interpretation from these tests as well as an estimate of the expected sensitivity of the dataset.
We present the theoretical case along with some early measurements with diamond test chips that demonstrate the viability of TES on diamond as a potential platform for direct detection of sub-GeV dark matter.
Diamond targets can be sensitive to both nuclear and electron recoils from dark matter scattering in the MeV and above mass range, as well as to absorption processes of dark matter with masses between sub-eV and tens of eV.
Compared to other proposed semiconducting targets such as germanium and silicon, diamond detectors can probe lower dark matter masses via nuclear recoils due to the lightness of the carbon nucleus. The expected reach for electron recoils is comparable to that of germanium and silicon, with the advantage that dark counts are expected to be under better control. Via absorption processes, unconstrained QCD axion parameter space can be probed in diamond for masses of order 10 eV.
HeRALD, the Helium Roton Apparatus for Light Dark Matter, will use a superfluid 4He target to probe the sub-GeV dark matter parameter space. The HeRALD design is sensitive to all signal channels produced by nuclear recoils in superfluid helium: singlet and triplet excimers, as well as phonon-like excitations of the superfluid medium. Excimers are detected via calorimetry with Transition-Edge-Sensor readout in and around the superfluid helium. Phonon-like vibrational excitations eject helium atoms from the superfluid-vacuum interface which are detected by adsorption onto calorimetry suspended above the interface. I will discuss the design, sensitivity projections, and ongoing R&D for the HeRALD experiment. In particular, I will present an initial light yield measurement of superfluid helium down to order 50 keV.
Absorption of dark matter (DM) allows direct detection experiments to probe a broad range of DM candidates with masses much smaller than kinematically allowed via scattering. It has been known for some time that for vector and pseudoscalar DM the absorption rate can be related to the target's optical properties, i.e. the conductivity/dielectric function. However, this is not the case for scalar DM, where the absorption rate is determined by a formally NLO operator which does not appear in the photon absorption process; the absorption rate must therefore be determined by other methods. We use a combination of first-principles numerical calculations and semi-analytic modeling to compute the absorption rate in silicon, germanium, and a superconducting aluminum target. We also find good agreement between these approaches and the data-driven approach for the vector and pseudoscalar DM models.
It has long been known that the coarse-grained approximation to the black hole density of states can be computed using classical Euclidean gravity. In this talk I will present evidence for another entry in the dictionary between Euclidean gravity and black hole physics, namely that Euclidean wormholes describe a coarse-grained approximation to the energy level statistics of black hole microstates. Our main result is an integral representation for wormhole amplitudes in Einstein gravity and in full-fledged AdS/CFT. These amplitudes are non-perturbative corrections to the two-boundary problem in AdS quantum gravity. The full amplitude is UV sensitive, dominated by small wormholes, but it admits an integral transformation with a macroscopic, weakly curved saddle-point approximation. In the boundary description this saddle appears to dominate a smeared version of the connected two-point function of the black hole density of states, and suggests level repulsion in the spectrum of AdS black hole microstates.
We will discuss constructions of string-inspired, higher-derivative, non-local extensions of particle theory which are explicitly ghost-free. Presenting quantum loop calculations in the weak perturbation limit, we explore the implications for the hierarchy problem and the vacuum instability problem in Higgs theory. We will then discuss abelian and non-abelian model building in infinite-derivative QFT in 4D, which naturally leads to predictions of dynamical conformal invariance in the UV at the quantum level due to the vanishing of the $\beta$-functions above the energy scale of non-locality M. The theory remains finite and perturbative up to infinite energy scales, resolving the issue of Landau poles. We move on to the implications of infinite derivatives for LHC, dark matter, astrophysical, and inflationary observables, and comment on constraints on the scale M and dimensional transmutation of the scale M. Next we will discuss the strong perturbation limit and show that the mass gap that arises due to the interactions in the theory gets diluted in the UV by the higher derivatives, again reaching a conformal limit in the asymptotic regions, both for the scalar field case and the Yang-Mills case. For Yang-Mills, the gauge theory is confining without fermions, and we explore the exact beta function of the theory. We conclude by summarizing non-locality as a framework for UV completion in particle theory and gravity, and the road ahead for model building with respect to BSM physics, particularly neutrinos, dark matter, and axions.
We derive an expression for the one-loop determinant of a massive vector field in the Anti-de Sitter black brane geometry in the large-dimension limit. We utilize the Denef, Hartnoll, and Sachdev method, which constructs the one-loop determinant from the quasinormal modes of the field. The large-dimension limit decouples the equations of motion for different field components, and also selects a specific set of quasinormal modes that contribute to the non-polynomial part of the one-loop determinant. We hope this result can provide useful information even when the number of dimensions D is finite, since it is the leading-order contribution when we treat D as a parameter and expand in 1/D.
Generic arguments lead to the idea of a minimal length scale in quantum gravity. An observational signal of such a minimal length scale is that photons would exhibit dispersion. In 2009, the observation of a short gamma ray burst seemed to push the minimal length scale to distances smaller than the Planck length. This poses a challenge for minimal length models. Here we propose a modification of the position and momentum operators which lead to a minimal length scale, but preserve the photon energy-momentum relationship E=pc. In this way there is no dispersion of photons with different energies. This can be accomplished without modifying the commutation relationship [x,p]=iℏ.
The quantization of Einstein's general relativity leads to a nonrenormalizable quantum field theory. However, the potential harm of nonrenormalizability can be overcome in the effective field theory (EFT) framework, where there is an unambiguous way to define a well-behaved and reliable quantum theory of gravitation, provided we restrict ourselves to energies low compared to the Planck scale. Although the effective field theory of gravitation is perfectly well defined as a quantum field theory, some subtleties arise from its nonrenormalizability, such as the use of the renormalization group equations, as illustrated by the controversy involving the gravitational corrections to the beta function of gauge theories. In 2005, Robinson and Wilczek announced their conclusion that gravity contributes a negative term to the beta function of the gauge coupling, meaning that quantum gravity could make gauge theories asymptotically free. This result was soon contested: it was shown that the claimed gravitational correction is gauge dependent, and a lot of subsequent research on the subject followed with varying conclusions. In this work we use the framework of effective field theory to couple Einstein's gravity to quantum electrodynamics and determine the gravitational corrections to the two-loop beta function of the electric charge. Our results indicate that gravitational corrections do not alter the running behavior of the electric charge; on the contrary, we observe that they give a positive contribution to the beta function, making the electric charge grow faster.
Measurements of Higgs boson production cross sections are carried out in the diphoton decay channel using 139 $fb^{-1}$ of $pp$ collision data at $\sqrt{s}=$13 TeV collected by the ATLAS experiment. Cross-sections for gluon fusion, weak vector boson fusion, associated production with a $W$ or $Z$ boson, and top quark associated production processes are reported. An upper limit of eight times the Standard Model prediction is set for the associated production of a Higgs boson with a single top quark process. Higgs boson production is further characterized through measurements of the Simplified Template Cross-Sections (STXS) in 27 fiducial regions. All the measurement results are compatible with the Standard Model predictions.
The precision measurements of the properties of the Higgs boson are among the principal goals of the LHC Run-2 program. This talk reports on the measurements of the fiducial and differential Higgs boson production cross sections via Vector Boson Fusion with a muon, an electron, and two neutrinos from the decay of W bosons, along with the presence of two energetic jets in the final state. The analysis uses $pp$ collision data at a center-of-mass energy of 13 TeV collected with the ATLAS detector between 2015 and 2018 corresponding to an integrated luminosity of 139 fb$^{−1}$. The optimizations of the selection criteria and the signal extraction methods will be discussed in detail, in particular the use of machine learning techniques for performing a multidimensional fit for extracting the signal and normalizing the simulated backgrounds to data.
The Large Hadron Collider (LHC) is a “top quark factory”. It allows for precise measurements of several top quark properties. In addition, for the first time it is now possible to measure rare processes involving top quarks. Associated production of top and anti-top quarks along with the Higgs boson or with electroweak gauge bosons like the W or Z has been observed at the LHC. Precise measurements of these processes have implications for the Standard Model of particle physics and even for cosmology. Recent results from measurements of these rare top quark processes involving multileptonic final states at the ATLAS experiment, in $pp$ collisions at $\sqrt{s}=13$ TeV with 80 fb$^{-1}$ of data, will be discussed.
Following the discovery of the Higgs boson in 2012 by both the ATLAS and CMS experiments, a wealth of papers have been published concerning measurements or observations of its decay modes. However, the most dominant decay mode, $H \rightarrow b\bar{b}$, proved to be an elusive and challenging search due to the low signal-to-background environment and a diverse range of backgrounds arising from multiple Standard Model processes, including $W$+jets, $Z$+jets, and $t\bar{t}$ production, amongst others. Measurements of $WH$ and $ZH$ production, with the $W$ or $Z$ boson decaying into charged leptons (electrons or muons, including those produced from the leptonic decay of a tau lepton), in the $H\rightarrow b\bar{b}$ decay channel were performed in $pp$ collisions at 13 TeV, corresponding to an integrated luminosity of 139 fb$^{-1}$, with the ATLAS detector. The production of a Higgs boson in association with a $W$ or $Z$ boson has been established with observed (expected) significances of 4.0 (4.1) and 5.3 (5.1) standard deviations, respectively.
In this talk I will present results for the simulation of electroweak Higgs boson production at the CERN LHC using the Herwig 7 general-purpose event generator, with one-loop matrix elements obtained via the interface to HJets. The main result will be the simulation of next-to-leading-order merging of Higgs boson plus 2 and 3 jets with a dipole parton shower. Additionally, I will comment on non-factorizable radiative corrections to this important Higgs boson production process. I will also provide a comparison of the full calculation with the well-known t-channel approximation (a.k.a. VBF) provided by the parton-level Monte Carlo program VBFNLO.
With the standard model working well in describing the collider data, the focus is now on determining the standard model parameters as well as on any hint of deviation. In particular, the determination of the couplings of the Higgs boson with itself and with other particles of the model is important to better understand the electroweak symmetry breaking sector of the model. In this work, we look at the process $pp \to WWH$, in particular through the fusion of bottom quarks. Due to the non-negligible coupling of the Higgs boson with the bottom quarks, there is a dependence on the $WWHH$ coupling in this process. This sub-process receives its largest contribution when the W bosons are longitudinally polarized. We compute one-loop QCD corrections to various final states with polarized W bosons. We find that the corrections to the final state with longitudinally polarized W bosons are large. It is shown that the measurement of the polarization of the W bosons can be used as a tool to probe the $WWHH$ coupling in this process. We also examine the effect of varying the $WWHH$ coupling in the $\kappa$-framework.
Experimentally probing the charm-Yukawa coupling at the LHC experiments is important, but very challenging due to an enormous QCD background. We study a new channel that can be used to search for the Higgs decay $H\to c\bar c$, using the vector boson fusion (VBF) mechanism with an associated photon. In addition to suppressing the QCD background, the photon provides an effective trigger handle. We discuss the trigger implications of this final state that can be utilized in ATLAS and CMS. We propose a novel search strategy for $H\to c\bar c$ in association with VBF jets and a photon, where we find a projected sensitivity of about 5 times the SM charm-Yukawa coupling at 95$\%$ C.L. at the High Luminosity LHC (HL-LHC). Our result is comparable and complementary to existing projections for the HL-LHC. We also discuss the implications of increasing the center-of-mass collision energy to 30 TeV and 100 TeV.
Measurements at the Large Hadron Collider (LHC) have so far established that the Higgs Yukawa couplings to third-generation fermions are close to the Standard Model (SM) expectation. However, the rather ad hoc assumption of universal Yukawa couplings for the other fermion generations has little experimental constraint. This is very challenging to probe due to small branching fractions, extensive quantum chromodynamics (QCD) backgrounds, and difficulties in jet flavor identification. A direct search by the ATLAS experiment for the SM Higgs boson decaying to a pair of charm quarks is presented. The dataset delivered by the LHC in $pp$ collisions at $\sqrt{s}=$ 13 TeV and recorded by the ATLAS detector corresponds to an integrated luminosity of 139 fb$^{-1}$. Charm tagging algorithms are optimized to distinguish c-quark jets from both light-flavor jets and b-quark jets. The analysis method is validated with a study of diboson ($WW$, $WZ$, and $ZZ$) production, with observed (expected) significances of 2.6 (2.2) standard deviations above the background-only hypothesis for the $(W/Z)Z(\to c\bar{c})$ process and 3.8 (4.6) standard deviations for the $(W/Z)W(\to cq)$ process. The $(W/Z)H(\to c\bar{c})$ search yields an observed (expected) limit of 26 (31) times the predicted cross-section times branching fraction for a Higgs boson with a mass of 125 GeV, corresponding to an observed (expected) constraint on the charm Yukawa coupling modifier of $|\kappa_c| < 8.5\,(12.4)$ at the 95% confidence level.
The dimuon decay of the Higgs boson is the most promising process for probing the Yukawa couplings to the second generation fermions at the Large Hadron Collider (LHC). We present a search for this important process using the data corresponding to an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector in $pp$ collisions at $\sqrt{s} = 13 \mathrm{TeV}$ at the LHC. Events are divided into several regions using boosted decision trees to target different production modes of the Higgs boson. The measured signal strength (defined as the ratio of the observed signal yield to the one expected in the Standard Model) is $\mu = 1.2 \pm 0.6$. The observed (expected) significance over the background-only hypothesis for a Higgs boson with a mass of 125.09 GeV is 2.0$\sigma$ (1.7$\sigma$).
The Higgs boson is expected to decay to $b\bar{b}$ approximately 58% of the time. Despite the large branching fraction, because of the large background from Standard Model events with b-jets, measuring this decay has been less precise than other, less frequent, decays. Searches for $H(b\bar{b})$ in the vector boson fusion production mode have historically had limited sensitivity, but developments in the background estimates and discrimination, as well as improvements in the signal extraction techniques, have resulted in an observed (expected) significance of 2.6 (2.8) standard deviations from the background-only hypothesis. This analysis uses a dataset with an integrated luminosity of 126 fb$^{-1}$, collected in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector at the Large Hadron Collider (LHC) during LHC Run 2, and considers only fully hadronic final states. This talk will focus on the background estimation and signal extraction techniques that are unique to this analysis, as well as the results.
The ever-growing interest into high-energy production of the Higgs boson, motivated by an enhanced sensitivity to New Physics scenarios, pushes the development of experimental techniques for the reconstruction of boosted decay products from the Higgs-boson hadronic decays.
This talk will discuss recent studies of inclusive Higgs-boson production with sizable transverse momentum decaying to a $b\bar{b}$ quark pair (ATLAS-CONF-2021-010). The analyzed data were recorded with the ATLAS detector in proton-proton collisions with a center-of-mass energy of $\sqrt{s}=13\,$ TeV at the Large Hadron Collider between 2015 and 2018, corresponding to an integrated luminosity of $136\,\text{fb}^{-1}$.
Higgs bosons decaying to $b\bar{b}$ are reconstructed as single large-radius jets and identified by the experimental signature of two $b$-hadron decays. The analysis takes advantage of an analytical model for the description of the multi-jet background, and combines multiple regions rich in Higgs-boson signal and specific background signatures. The experimental techniques are validated in the same kinematic regime using the $Z\to b\bar{b}$ process.
For Higgs-boson production at transverse momenta above 450 GeV, the production cross section is found to be $13 \pm 57\,\text{(stat.)} \pm 22\,\text{(syst.)} \pm 3\,\text{(theo.)}$ fb. The 95% confidence level upper limits on the differential cross section as a function of the Higgs boson transverse momentum are $\sigma_H(300<p_{\text{T}}^H<450\,\text{GeV})<2.8$ pb, $\sigma_H(450<p_{\text{T}}^H<650\,\text{GeV})<91$ fb, $\sigma_H(p_{\text{T}}^H>650\,\text{GeV})<40.5$ fb, and $\sigma_H(p_{\text{T}}^H>1\,\text{TeV})<10.3$ fb. Evidence for the production of $Z\to b\bar{b}$ with $p_{\text{T}}^Z>650\,\text{GeV}$ is obtained. All results are consistent with the Standard Model predictions.
A search for the Standard Model Higgs boson produced in association with a high-energy photon is performed using 132 fb$^{-1}$ of $pp$ collision data at $\sqrt{s}={13}$ TeV collected with the ATLAS detector at the Large Hadron Collider. The vector boson fusion production mode of the Higgs boson is particularly powerful for studying the $H(\rightarrow b \bar{b})\gamma$ final state because the photon requirement greatly reduces the multijet background and because the Higgs boson decays primarily to bottom quark-antiquark pairs. Utilizing Monte Carlo, machine learning, and model fitting techniques, the measured Higgs boson signal strength is $1.3 \pm 1.0$ relative to the Standard Model prediction. This corresponds to an observed signal significance above the background-only expectation of 1.3 standard deviations, compared to 1.0 standard deviations expected.
ProtoDUNE-SP and ProtoDUNE-DP are DUNE's large-scale single-phase and dual-phase prototypes of the DUNE far detector modules, operated at the CERN Neutrino Platform. ProtoDUNE-SP finished its Phase-1 running in 2020 and successfully collected test beam and cosmic-ray data. In this talk, I will discuss the first results on the physics performance of ProtoDUNE-SP Phase-1 and the design and progress of ProtoDUNE-DP.
Large liquid argon time projection chambers (LAr TPCs) at SBN and DUNE will provide an unprecedented amount of information about GeV-scale neutrino interactions. By taking advantage of the excellent tracking and calorimetric performance of LAr TPCs, we present a novel method for estimating the neutrino energy in neutral current interactions that significantly improves upon conventional methods in terms of energy resolution and bias. We present a toy study exploring the application of this new method to the sterile neutrino search at SBN under a 3+1 model.
The Deep Underground Neutrino Experiment (DUNE) is an upcoming long-baseline neutrino experiment which will study neutrino oscillations. Neutrino oscillations will be detected at the DUNE far detector, 1300 km away from the start of the beam at Fermilab. The DUNE near detector (ND) will be located on-site at Fermilab and will be used to provide an initial characterization of the neutrino beam, as well as to constrain systematic uncertainties on neutrino oscillation measurements. The detector suite consists of a modular 50-ton LArTPC (ND-LAr), a magnetized 1-ton gaseous argon time projection chamber (ND-GAr) surrounded by an electromagnetic calorimeter, and the System for on-Axis Neutrino Detection (SAND), composed of a magnetized electromagnetic calorimeter and an inner tracker. In this talk, these detectors and their physics goals will be discussed.
In order to achieve a precise measurement of the leptonic CP violation phase, Deep Underground Neutrino Experiment (DUNE) will employ four 10 kt scale far detector modules and a near detector complex.
In the near detector complex, a System for on-Axis Neutrino Detection (SAND) is located downstream of a liquid-argon TPC (LAr) and a high pressure gaseous-argon TPC (GAr). SAND consists of an inner tracking system, surrounded by the KLOE superconducting magnet with an electromagnetic calorimeter inside. Due to the high event rate and accurate neutrino energy reconstruction capability, SAND can serve as a good beam monitor. Besides, SAND provides comprehensive measurements on non-Ar targets allowing constraints on the A-dependence of neutrino interaction models. In addition, with the capability of neutron kinetic energy detection, a full reconstruction of neutrino interaction would be possible, which opens new ways to analyze the events. In this talk, a number of physics studies and the latest design of SAND will be presented.
The XENON collaboration has recently published results lowering the energy threshold to search for nuclear recoils produced by solar $^8$B neutrinos using a $0.6$ tonne-year exposure with the XENON1T detector. Due to the low energy threshold, a number of novel techniques are required to reduce the consequent increase in backgrounds. No significant $^8$B neutrino-like excess is found after unblinding. New upper limits are placed on the dark matter-nucleus cross section for dark matter masses as low as $3~\mathrm{GeV}/c^2$, as well as on a model of non-standard neutrino interactions. This talk will present the techniques used to lower backgrounds and to validate signal and background models.
The electromagnetic calorimeter (ECAL) of the Compact Muon Solenoid (CMS) is a high-granularity lead tungstate crystal calorimeter operating at the CERN Large Hadron Collider. The ECAL is designed to achieve excellent energy resolution, which is crucial for studies of Higgs boson decays with electromagnetic particles in the final state, as well as for searches for new physics involving electrons and photons. Recently the energy response of the calorimeter has been precisely calibrated using the full Run 2 data, with the goal of achieving optimal performance. A dedicated calibration of each detector channel has been performed with physics events, using electrons from W and Z boson decays, photons from pi0/eta decays, and the azimuthally symmetric energy distribution of minimum bias events. We will describe the calibration strategies that have been implemented and the excellent performance achieved by the CMS ECAL with the ultimate calibration of the Run 2 data, in terms of energy scale stability and energy resolution.
The CMS electromagnetic calorimeter (ECAL) is a high resolution crystal calorimeter operating at the CERN LHC. The on-detector readout electronics digitizes the signals and provides information on the deposited energy in the ECAL to the hardware-based Level-1 trigger system. The L1 trigger system receives information from different CMS subdetectors at 40 MHz, the proton bunch collision rate, and decides for each collision whether the full detector must be read out, reducing the rate of accepted events to about 100 kHz. The increased luminosity of the LHC Run2 with respect to Run1 has required frequent calibrations during data taking operations to account at trigger level for radiation-induced changes in crystal and photodetector response. For the LHC Run3 (2022-24), further improvements in the energy and time reconstruction of the CMS ECAL trigger primitives are being explored. These exploit additional features of the on-detector electronics. In this presentation we will review the ECAL trigger primitives performance during LHC Run2 and present the improvements to the ECAL trigger system envisaged for the LHC Run3.
To address the challenges of providing high performance calorimetry and other types of instrumentation in future experiments under high luminosity and difficult radiation and pileup conditions, R&D is being conducted on promising optical-based technologies that can inform the design of future detectors, with emphasis on ultra-compactness, excellent energy resolution and spatial resolution, and especially fast timing capability.
The strategy builds upon the following concepts: use of dense materials to minimize the cross sections and lengths (depths) of detector elements; keeping the Molière radii of the structures as small as possible; use of radiation-hard materials; use of optical techniques that provide high efficiency and fast response while keeping optical paths as short as possible; and use of radiation-resistant, high-efficiency photosensors.
High material density is achieved by using thin layers of tungsten absorber interleaved with active layers of dense, highly efficient crystal or ceramic scintillator. Several scintillator approaches are currently being explored, including materials activated with trivalent rare-earth ions (Ce3+ and Pr3+) for brightness, and Ca co-doping for improved (faster) fluorescence decay time.
Light collection and transfer from the scintillation layers to photosensors is enabled by the development and refinement of new waveshifters (WLS) and the incorporation of these materials into radiation-hard quartz waveguide elements. WLS dye developments include fast organic dyes of the DSB1 type; ESIPT (excited-state intramolecular proton transfer) dyes having very large Stokes shifts and hence very low optical self-absorption; and inorganic fluorescent materials such as LuAG:Ce, which is noted for its radiation resistance.
Optical waveguide approaches include quartz capillaries containing WLS cores that (1) provide high-resolution EM energy measurement; (2) with WLS material placed at the location of the EM shower maximum, provide high-resolution timing of EM showers; and (3) with WLS elements placed at various depths, provide depth segmentation and angular measurement of the EM shower development.
Light coming directly from the scintillators, or indirectly via waveshifters, is detected by pixelated, Geiger-mode photosensors that have high quantum efficiency over a wide spectral range and are designed to avoid saturation. These include the development of very-small-pixel (5-7 micron) silicon photomultiplier devices (SiPMs) operated at low gain and cooled (typically -35°C or below), and longer-term R&D on photosensors based upon large-band-gap materials including GaInP. Both efforts are directed toward improved device performance in high radiation fields.
The main emphases of the RADiCAL R&D program are: (1) bench, beam, and radiation testing of individual scintillator, waveshifter, and photosensing elements; and (2) combining these into ultra-compact modular structures and characterizing their performance for energy measurement, fast timing, and depth segmentation. Recent results and program plans will be presented.
A challenge in large LArTPCs is efficient photon collection for low-energy, MeV-scale, deposits. Past studies have demonstrated that augmenting traditional ionization-based calorimetry with information from the scintillation signals can greatly improve the precision of measurements of the energy deposited. We propose the use of photosensitive dopants to efficiently convert the scintillation signals of the liquid argon directly into ionization signals. This could enable the collection of more than 40% of all the scintillation information, a considerable improvement over conventional light collection solutions. We will discuss the implications this can have on LArTPC physics programs, what hints of performance improvements we can gather from past studies, and what R&D we envision is needed to establish the use of these dopants in large LArTPCs.
The Mu2e ("muon-to-electron conversion") experiment at Fermilab will search for the charged-lepton-flavour-violating, neutrinoless coherent conversion of a muon into an electron in the field of an aluminum nucleus. The observation of this process would be unambiguous evidence of physics beyond the Standard Model. The Mu2e detectors comprise a straw tracker, an electromagnetic calorimeter, and an external cosmic-ray veto. The calorimeter provides excellent electron identification, complementary information to aid pattern recognition and track reconstruction, and a fast calorimetric online trigger. The detector has been designed as a state-of-the-art crystal calorimeter and employs 1340 pure cesium iodide (CsI) crystals read out by UV-extended silicon photosensors with fast front-end and digitization electronics. A design consisting of two identical annular matrices (named "disks"), positioned 70 cm apart downstream of the aluminum target along the muon beamline, satisfies the Mu2e physics requirements.
The hostile Mu2e operating conditions, in terms of radiation levels (a total ionizing dose of 12 krad and a neutron fluence of 5x10^10 n/cm^2 at 1 MeV-equivalent (Si) per year), magnetic field intensity (1 T), and vacuum level (10^-4 Torr), have placed tight constraints on the design of the detector mechanical structures and on the choice of materials. The support structure of the two 670-crystal matrices employs two aluminum hollow rings and parts made of open-cell, vacuum-compatible carbon fiber. The photosensors and front-end electronics serving each crystal are assembled in a single mechanical unit inserted in a machined copper holder. The 670 units are supported by a machined plate made of vacuum-compatible plastic material. The plate also integrates the cooling system, a network of copper lines carrying a low-temperature, radiation-hard fluid and placed in thermal contact with the copper holders to form a low-resistance thermal bridge. The data acquisition electronics is hosted in custom aluminum crates positioned on the external lateral surface of the two disks. The crates also integrate the electronics cooling system, with lines running in parallel to those of the front-end system.
In this talk we will review the constraints on the design of the calorimeter mechanical structures; the development from conceptual design to the specifications of all structural components, including the mechanical and thermal simulations that determined the material and technological choices and the specifications of the cooling station; the status of component production and quality-assurance tests; the detector assembly procedures; and the procedures for detector transportation and installation in the experimental area.
Measurements of di-leptonic top-antitop events at the LHC have revealed several notable excesses. We examine the possibility that those excesses are a consequence of neglecting the non-perturbative enhancement of the production cross section near the t-tbar threshold. While sub-dominant in terms of total rates, so-far neglected toponium effects yield additional di-leptonic systems of small invariant mass and small azimuthal angle separation, which could contribute to the above-mentioned deviations from the Standard Model. We propose a method to discover toponium in present and future data, and our results should pave the way for further experimental and phenomenological studies of toponium. A deeper understanding of the threshold behavior of top-pair production is necessary to accurately determine the top quark mass, one of the most important parameters of the SM.
We investigate the prospects of discovering the top quark decay into a charm quark and a Higgs boson ($t \to c h^0$) in top quark pair production at the CERN Large Hadron Collider (LHC). A general two-Higgs-doublet model is adopted to study flavor-changing neutral Higgs (FCNH) interactions. We perform a parton-level analysis as well as Monte Carlo simulations using \textsc{Pythia}~8 and \textsc{Delphes} to study the flavor-changing top quark decay $t \to c h^0$, followed by the Higgs decaying into $\tau^+ \tau^-$, with the other top quark decaying to a bottom quark ($b$) and two light jets ($t\to bW\to bjj$). To reduce the physics background to the Higgs signal, only the leptonic decays of tau leptons are considered, $\tau^+\tau^- \to e^\pm\mu^\mp +$ MET, where MET represents the missing transverse energy from the neutrinos. In order to reconstruct the Higgs boson and top quark masses as well as to reduce the physics background, the collinear approximation for the highly boosted tau decays is employed. Furthermore, the energy distribution of the charm quark helps set the acceptance criteria used to reduce the background and improve the statistical significance of the signal. We study the discovery potential for the FCNH top decay at the LHC with collider energy $\sqrt{s} = 13$ and 14 TeV as well as at a future hadron collider with $\sqrt{s} = 27$ TeV. Our analysis suggests that a high-energy LHC at $\sqrt{s} = 27$ TeV will be able to discover this FCNH signal with an integrated luminosity $\mathcal{L} = 3$ ab$^{-1}$ for a branching fraction ${\cal B}(t \to ch^0) > 1.4 \times 10^{-4}$, which corresponds to an FCNH coupling $|\lambda_{tch}| > 0.023$. This FCNH coupling is significantly below the current ATLAS combined upper limit of $|\lambda_{tch}| = 0.064$.
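For context on the collinear approximation mentioned above, a standard formulation (notation introduced here, not taken from the authors) assumes the neutrinos from each tau are emitted along the corresponding visible decay products, so that $\vec{p}_{\tau_i} = \vec{p}^{\,\mathrm{vis}}_i/x_i$ with visible momentum fractions $x_i$. The two unknowns are fixed by the measured missing transverse momentum,
$$\vec{p}_T^{\,\mathrm{miss}} = \Big(\tfrac{1}{x_1}-1\Big)\,\vec{p}_T^{\,\mathrm{vis},1} + \Big(\tfrac{1}{x_2}-1\Big)\,\vec{p}_T^{\,\mathrm{vis},2},$$
after which the di-tau mass is reconstructed as $m_{\tau\tau} \simeq m_{\mathrm{vis}}/\sqrt{x_1 x_2}$, allowing the Higgs boson (and, together with the charm-jet candidate, the top quark) mass to be rebuilt.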
Variable Importance is a variable-ranking framework that uses machine learning methods, such as neural networks, to construct a quantitative metric characterizing a variable's discriminatory power in binary classification problems. The Variable Importance framework is presented in the context of the CMS search for the rare Standard Model process of three-top-quark production in the single-lepton final state. The importance metrics for a set of 76 multivariate variables describing a 13 TeV proton-proton collision are determined, and a neural network discriminator is trained on a subset of the ranked variables to be used in a likelihood analysis. The Variable Importance framework includes hyperparameter optimization and k-fold cross-validation training when constructing the final discriminator. Preliminary results for the expected three-top signal and cross section, using 101 $\mathrm{fb}^{-1}$ of simulated Monte Carlo samples with the Run 2 CMS detector, are shown. Additionally, a study of Variable Importance's ability to predict the expected significance using a cumulative importance metric is shown, further validating the accuracy of its quantitative ranking.
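The CMS Variable Importance framework itself is not reproduced here; as a rough, generic illustration of ranking variables by their discriminatory power, the sketch below scores each input by the drop in ROC AUC when that variable is shuffled (a permutation-importance variant; the dataset, network size, and variable names are placeholders, not the analysis code):

# Hedged sketch: generic permutation-based variable ranking for a binary
# classifier. This is NOT the CMS Variable Importance framework.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def rank_variables(X, y, names, n_repeats=5, seed=0):
    """Rank input variables by the drop in ROC AUC when each is shuffled."""
    rng = np.random.default_rng(seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=seed)
    clf.fit(X_tr, y_tr)
    base = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    scores = {}
    for j, name in enumerate(names):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_te.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the variable's correlation with y
            drops.append(base - roc_auc_score(y_te, clf.predict_proba(X_perm)[:, 1]))
        scores[name] = float(np.mean(drops))  # larger drop = more discriminating
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)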
We present a search for Standard Model four-top-quark (tttt) production in the single-lepton final state. We analyze the proton-proton collision data collected by the CMS experiment at a center-of-mass energy of 13 TeV, corresponding to integrated luminosities of 35.8 $fb^{-1}$ in 2016, 41.5 $fb^{-1}$ in 2017, and 59.97 $fb^{-1}$ in 2018. The single-lepton final state features a high jet multiplicity, with at least four jets coming from the hadronization of bottom quarks, one electron or muon, and missing transverse momentum from the neutrino. We use the distributions of HT and of a BDT discriminant to separate the signal from the background, where HT is the scalar sum of the transverse momenta of all jets and the BDT is the output of a multivariate analysis based on the boosted-decision-tree approach. In the absence of a signal, we set limits on the four-top production cross section. The expected limits and significances on the tttt production cross section for the 2016-2018 data-taking periods, and for their combination, are presented.
We discuss heavy-flavor production in modern global QCD analyses that determine the structure of the proton. We discuss new factorization schemes in the presence of heavy quarks in proton-proton collisions, as well as the impact of the latest charm and bottom production data from HERA and top-quark pair production data from the LHC on recent PDF analyses from CTEQ.
The top quark pair production cross-section is measured in proton-proton and lead-lead collisions at a center-of-mass energy of 5.02 TeV. The data, collected in 2017 and 2018 by the CMS experiment at the LHC, correspond to a proton-equivalent integrated luminosity of 304 and 78 pb$^{-1}$, respectively. The measurements are performed using events with one electron and one muon of opposite sign, and at least two jets. The measured cross-sections are found to be consistent with each other as well as perturbative QCD calculations, including state-of-the-art free- or bound-nucleon parton distribution functions. They constitute the first step towards using the top quark as a novel tool to probe the quark-gluon plasma, an exotic state of strongly interacting quantum chromodynamics matter which is routinely produced in ultrarelativistic heavy nuclei collisions.
The Higgs boson could provide the key to discovering new physics at the Large Hadron Collider. We investigate novel decays of the Standard Model (SM) Higgs boson into leptophobic gauge bosons, which can be light in agreement with all experimental constraints. We study the associated production of the SM Higgs and the leptophobic gauge boson, which could be crucial to test the existence of a leptophobic force. Our results demonstrate that it is possible to have a simple gauge extension of the SM at the low scale, without assuming very small couplings and in agreement with all experimental bounds, that can be probed at the LHC (arXiv:2003.09426).
We explore the implications of the new $g_\mu-2$ result for five models based on the $SU(3)_C \times SU(3)_L \times U(1)_N$ gauge symmetry and put our conclusions into perspective with LHC bounds. We show that previous conclusions found in the context of such models change if more than one heavy particle runs in the loop. Moreover, having in mind the precision projected by the Muon $g-2$ experiment at Fermilab, we place lower mass bounds on the particles that contribute to the muon anomalous magnetic moment, assuming the anomaly is resolved otherwise. Lastly, we discuss how these models could accommodate the anomaly in agreement with existing bounds.
In a particle theory model whose most readily discovered new particle is the $\sim 1$ TeV bilepton resonance in same-sign leptons, currently being sought at CERN's LHC, there exist three quarks ${\cal D, S, T}$ which will be bound by QCD into baryons and mesons. We consider the decays of these additional baryons and mesons, whose detailed experimental study will be beyond the reach of the 14 TeV CERN collider and accessible only at an O(100 TeV) collider.
Recently, there has been great interest in beyond-the-Standard Model (BSM) physics involving new low-mass matter and mediator particles. One such model, $U(1)_{T3R}$, proposes a new U(1) gauge symmetry under which only right-handed fermions of the standard model are charged, as well as the addition of new vector-like fermions (e.g., $\chi_t$) and a new dark scalar particle ($\phi$) whose vacuum expectation value breaks the $U(1)_{T3R}$ symmetry. For this work, we perform a feasibility study to explore the mass ranges for which these new particles can be probed at the LHC. We consider the interaction $pp\rightarrow \chi_t t \phi$ in which the top quark decays purely hadronically, the $\chi_t$ decays semi-leptonically ($\chi_t\rightarrow W+b$), and the $\phi$ decays to two photons. The proposed search is expected to achieve a discovery reach with signal significance greater than $5\sigma$ for $\chi_t$ masses up to 1.8 TeV and $\phi$ masses as low as 1 MeV, assuming an integrated luminosity of 3000 fb$^{-1}$.
Scenarios in which right-handed light Standard Model fermions couple to a new gauge group, $U(1)_{T3R}$, can naturally generate a sub-GeV dark matter candidate. But such models necessarily have large couplings to the Standard Model, generally yielding tight experimental constraints. We show that the contributions to $g_\mu-2$ from the dark photon and dark Higgs largely cancel in the narrow window where all experimental constraints are satisfied, leaving a net correction consistent with recent measurements from Fermilab. These models inherently violate lepton universality, and UV completions of these models can include quark flavor violation, which can explain the $R_{K^{(\ast)}}$ anomalies observed by the LHCb experiment while satisfying constraints on $Br(B_s\rightarrow\mu^+\mu^-)$ and various other constraints in the allowed parameter space of the model. This scenario can be probed by FASER, SeaQuest, SHiP, LHCb, Belle, and other experiments.
A non-Abelian $SU(2)_X$ gauge extension of the Standard Model is considered, under which leptons carry non-trivial charge. Gauge anomaly cancellation requires additional vector-like fermions, which, along with neutral vector bosons that play the role of dark matter, correct the muon and electron anomalous magnetic moments as preferred by experiments. When collider bounds, electroweak precision data, the dark matter relic abundance, and lepton $g-2$ are considered, the model is viable only within a narrow range of parameter space corresponding to a dark matter mass of 1-3 TeV.
The $\eta$ and $\eta'$ mesons are almost unique in the particle universe since they are Goldstone bosons and the dynamics of their decays are strongly constrained. The integrated eta-meson samples collected in earlier experiments have been about $10^9$ events, dominated by the WASA-at-COSY experiment, considerably limiting searches for rare decays. A new experiment, REDTOP, is being proposed with the intent of collecting more than $10^{13}$ eta/yr ($10^{11}$ eta'/yr) for the study of rare $\eta$ decays. Such statistics are sufficient for investigating several symmetry violations and for searches for new particles beyond the Standard Model. In a tagged-eta experiment, the fully constrained kinematics of the process allow searches for light dark matter with a "missing 4-momentum" technique which, at present, cannot be exploited by any other existing or proposed experiment. The physics program and the detector for REDTOP will be discussed during the presentation.
Searches for permanent electric dipole moments (EDMs) of elementary particles constitute one of the most powerful tools to probe physics beyond the Standard Model (SM). The existence of an EDM could provide an explanation for the dominance of matter over antimatter in the universe, which is still considered one of the most puzzling questions in physics.
The JEDI Collaboration is conducting experimental EDM searches on protons and deuterons at the Cooler Synchrotron (COSY) storage ring at Forschungszentrum Jülich (Germany).
This talk will report on some of the major milestones achieved so far by the JEDI Collaboration, many of which were world-first achievements, including intermediate and preliminary results of the latest precursor EDM experiment conducted on deuterons. Furthermore, an overview of the activities towards a prototype ring by the newly formed CPEDM collaboration will also be briefly presented.
The REDTOP experiment aims at collecting more than $10^{13}$ $\eta$/yr and $10^{11}$ $\eta'$/yr for studying rare meson decays. Such large statistics provide the basis for the investigation of several discrete symmetries and the search for particles beyond the Standard Model. The physics program and the ongoing sensitivity studies will be discussed during the presentation.
The Gamma Factory is a proposal to back-scatter laser photons off a beam of partially-stripped ions at the LHC, producing a beam of $\sim 10$ MeV to $1$ GeV photons with intensities of $10^{16}$ to $10^{18}~\text{s}^{-1}$. This implies $\sim 10^{23}$ to $10^{25}$ photons on target per year, many orders of magnitude greater than existing accelerator light sources and also far greater than all current and planned electron and proton fixed target experiments. We determine the Gamma Factory's discovery potential through "dark Compton scattering", $\gamma e \to e X$, where $X$ is a new, weakly-interacting particle. For dark photons and other new gauge bosons with masses in the 1 to 100 MeV range, the Gamma Factory has the potential to discover extremely weakly-interacting particles with just a few hours of data and will probe couplings as low as $\sim 10^{-9}$ with a year of running. The Gamma Factory therefore may probe couplings lower than all other terrestrial experiments and is highly complementary to astrophysical probes. We outline the requirements of an experiment to realize this potential and determine the sensitivity reach for various experimental configurations.
A search is presented for new physics beyond the standard model, including versions of supersymmetry characterized by R-parity violation (RPV) and Stealth SUSY. The search targets top squark decays producing events with two top quarks, little additional missing transverse momentum, and many light-flavor jets in the final state. The Run 2 data used were collected with the CMS detector at the LHC from 2016 to 2018 and correspond to a total integrated luminosity of 137 fb$^{-1}$. The search is performed using events with at least seven jets and exactly one electron or muon. A neural network based on event-shape and kinematic variables is used for background discrimination. The method of gradient reversal is used to ensure that the neural network score is independent of jet multiplicity, as required by the primary background estimation method. Top squark masses up to 670 (870) GeV are excluded at 95% confidence level for the RPV (Stealth) scenario, and the maximum observed local significance is 2.8 standard deviations, for the RPV scenario with a top squark mass of 400 GeV.
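Gradient reversal is a generic adversarial-decorrelation technique; the following is a minimal PyTorch sketch of such a layer (an illustration of the method named above, not the CMS implementation), in which an auxiliary head predicting jet multiplicity trains against the shared feature extractor so that the classifier score becomes insensitive to multiplicity:

# Hedged sketch of a gradient-reversal layer; the adversary target (jet
# multiplicity) and scaling factor lam are illustrative placeholders.
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips and scales the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The adversary head placed after this layer pushes the shared features
        # to be uninformative about its target (here: jet multiplicity).
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradientReversal.apply(x, lam)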
In a well-motivated class of beyond-the-Standard-Model scenarios, dark matter interacts mainly with neutrinos of the SM via a neutrinophilic mediator. This scenario can leave a striking signature in neutrino detectors -- the mono-neutrino signature. In this process, invisible particles (either dark matter or the mediators) can be radiated off neutrinos when they undergo charged-current weak interactions, resulting in missing transverse momentum with respect to the incoming neutrino. In this talk we discuss the possibility of probing neutrinophilic scalar mediators via the mono-neutrino signature at the proposed Forward Physics Facility (FPF) at the LHC. Because of the high-energy neutrino flux produced in the forward direction of the LHC detectors, the FPF will play a leading role in probing neutrinophilic scalars in so-far unconstrained parameter space and shed light on the origin of neutrinophilic dark matter scenarios.
We present a phenomenological investigation of color-octet scalars (sgluons) in supersymmetric models with Dirac gaugino masses that feature an explicitly broken $R$ symmetry ($R$-broken models). We have constructed such models by augmenting minimal $R$-symmetric models with a set of supersymmetric and softly supersymmetry-breaking operators that explicitly break $R$ symmetry. We find new features that appear as a result of $R$ symmetry breaking, including enhancements to existing decay rates, novel tree- and loop-level decays, and enhanced cross sections for single sgluon production. We have also explored constraints on these models from the Large Hadron Collider. We find that, in general, $R$ symmetry breaking quantitatively affects existing limits on color-octet scalars, closing loopholes for light CP-odd (pseudoscalar) sgluons while perhaps opening one for a light CP-even (scalar) particle. Altogether, scenarios with broken $R$ symmetry and two sgluons at or below the TeV scale can be accommodated by existing searches.
We explore the implications of supersymmetric grand unified theories for the muon anomalous magnetic moment (muon g-2). The discrepancy between the Standard Model (SM) prediction and the experimental value of muon g-2 can be resolved by contributions from supersymmetric particles, and the fundamental parameter space of the muon g-2 resolution typically favors light sleptons (<~ 800 GeV), charginos (<~ 900 GeV), and an LSP neutralino (<~ 600 GeV). On the other hand, the current LHC experiments can probe the mass scales of these particles, and a stronger impact is expected from LHC Run 3. We find that the chargino mass can currently be probed up to about 600 GeV, and LHC Run 3 is expected to test chargino masses up to about 700 GeV. Even though there is no direct impact on the slepton masses, these experiments are able to probe sleptons up to about 350 GeV. However, these scales depend on the chirality of the lighter slepton states, and one can still realize solutions with lighter charginos when the lighter slepton is mostly right-handed.
Clockwork models can explain the flavor hierarchies in the Standard Model quark and lepton spectrum. We construct supersymmetric versions of such flavor clockwork models. The zero modes of the clockwork are identified with the fermions and sfermions of the Minimal Supersymmetric Standard Model. In addition to generating a hierarchical fermion spectrum, the clockwork also predicts a specific flavor structure for the soft SUSY-breaking sfermion masses. We find sizeable flavor mixing among first- and second-generation squarks. Constraints from kaon oscillations require the masses of either the squarks or the gluinos to be above a scale of ~3 PeV.
Though collider searches are constraining the supersymmetric parameter space, generic model-independent bounds on sneutrinos remain very low. We calculate new model-independent lower bounds for general supersymmetric scenarios with sneutrino LSPs and NLSPs. By recasting ATLAS LHC exotic searches in mono-boson channels, we place upper bounds on the cross section for $pp\rightarrow\tilde{\nu}\tilde{\nu}+V$ processes in the mono-photon, mono-$Z$, and mono-Higgs channels. We also evaluate the LHC discovery potential for sneutrinos in the HL-LHC 3 $\text{ab}^{-1}$ run.
In this work we study the collider phenomenology of color-octet scalars (sgluons) in minimal supersymmetric models endowed with a global continuous R symmetry. We systematically catalog the significant decay channels of scalar and pseudoscalar sgluons and identify novel features that are natural in these models. These include decays in nonstandard diboson channels, such as to a gluon and a photon; three-body decays with considerable branching fractions; and long-lived particles with displaced vertex signatures. We also discuss the single and pair production of these particles and show that they can evade existing constraints from the Large Hadron Collider, to varying extents, in large regions of reasonable parameter space. We find, for instance, that a 725 GeV scalar and a 350 GeV or lighter pseudoscalar can still be accommodated in realistic scenarios.
Naturalness suggests that the masses of the lightest electroweak gauginos (electroweakinos) are near the electroweak scale and, as a result, are within the scope of current LHC searches. However, LHC searches have not yet provided evidence of any supersymmetric (SUSY) particles. While exclusion limits for SUSY particles have been commonly reported assuming a simplified model where the branching ratio of targeted decays is 100%, this is not realistic in the minimal supersymmetric standard model (MSSM) and does not represent the effect of a large number of competing production and decay processes. If the decay branching ratios of these particles are lower, however, the reported mass limits are likely to be optimistic.
Focusing on chargino-neutralino production in the conventionally considered scenario where the LSP is bino-like and the next-lightest SUSY particle is wino-like, the electroweakino decay branching ratios into on-shell SM bosons can be expressed in terms of electroweak phenomenological MSSM (pMSSM) parameters. This talk will present the dependence of the branching ratios on the electroweak MSSM parameters, restate mass limits from simplified-model ATLAS searches in terms of the pMSSM, and, ultimately, present the implications of a pMSSM-based approach to SUSY searches.
The application of machine learning methods in high energy physics has seen tremendous success in recent years, with rapidly growing use cases. A key aspect of improving the performance of a given machine learning model is the optimization of its hyperparameters, which is usually computationally expensive. A framework has been developed to provide a high-level interface for automatic hyperparameter optimization that utilizes the ATLAS grid computing resources with hardware acceleration from GPU machines. The framework is equipped with a wide variety of hyperparameter optimization algorithms, distributed optimization schemes, an intelligent job scheduling strategy based on available resources, flexible hyperparameter configuration space generation, and adaptation to the ATLAS intelligent Data Delivery Service. An example use case, the hyperparameter optimization of a Boosted Decision Tree model in the $HH \to b\bar{b}\gamma\gamma$ non-resonant analysis in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector, is also presented.
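For readers unfamiliar with automated hyperparameter optimization, the snippet below sketches a single-node version of the idea using Optuna and scikit-learn; the ATLAS framework described above wraps this kind of loop with grid submission, GPU scheduling, and iDDS integration. The dataset, parameter ranges, and figure of merit here are placeholders, not the analysis configuration:

# Hedged sketch of an automated hyperparameter scan for a BDT-style classifier.
import optuna
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def make_objective(X, y):
    def objective(trial):
        clf = GradientBoostingClassifier(
            n_estimators=trial.suggest_int("n_estimators", 100, 1000),
            max_depth=trial.suggest_int("max_depth", 2, 6),
            learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        )
        # k-fold cross-validated AUC as the figure of merit
        return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    return objective

# Usage (X, y are placeholder training arrays):
# study = optuna.create_study(direction="maximize")
# study.optimize(make_objective(X, y), n_trials=50)
# print(study.best_params)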
The intelligent Data Delivery Service (iDDS) has been developed to cope with the huge increase of computing and storage resource usage in the coming Large Hadron Collider (LHC) data taking. It has been designed to intelligently orchestrate workflow and data management systems, decoupling data pre-processing, delivery, and main processing in various workflows. It is an experiment-agnostic service around a workflow-oriented structure with Directed Acyclic Graph (DAG) support to work with existing and emerging use cases in ATLAS and other experiments. Here we will present the motivation for iDDS, its design schema and architecture, use cases and current status for the ATLAS and Rubin Observatory exercise, and plans for the future.
The Reproducible Open Benchmarks for Data Analysis Platform (ROB) allows different data analysis algorithms to be evaluated in a controlled, competition-style format [1]. One example of such a comparison and evaluation of different algorithms is "The Machine Learning Landscape of Top Taggers" paper, which compiled and compared multiple top-tagger neural networks [2]. Motivated by the significant amount of time required to organize and evaluate such benchmarks, ROB provides a platform that automates the collection, execution, and comparison of participant submissions in a benchmark. Although convenient, ROB currently requires participants to package their submissions into Docker containers, which can pose an additional burden due to the steep learning curve.
To increase ease of use, we implement support for the commonly used Jupyter Notebook [3] in ROB. Jupyter Notebooks are a popular tool that many physicists are already familiar with; they allow live code, comments, and documentation to be combined in one document. By utilizing the Papermill package [4], we allow ROB users to submit their implementations directly as Jupyter Notebooks, so that different data analysis algorithms can be evaluated without packaging the code into Docker containers. To demonstrate functionality and spur usage of ROB, we provide demos using bottom- and top-tagging neural networks that show the application of ROB within particle physics as a competition-style platform for algorithm evaluation [5].
References:
[1] “Reproducible and Reusable Data Analysis Workflow Server”, https://github.com/scailfin/flowserv-core
[2] Kasieczka, Gregor, Plehn, Tilman, Butter, Anja, Cranmer, Kyle, Debnath, Dipsikha, Dillon, Barry M, . . . Varma, Sreedevi. (2019). The Machine Learning landscape of top taggers. SciPost Physics, 7(1), 014.
[3] “Jupyter Notebooks”, https://jupyter.org/
[4] “Papermill”, https://papermill.readthedocs.io/en/latest/
[5] “Particle Physics”, https://github.com/anrunw/ROB
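As a minimal illustration of the Papermill-based execution path described above, the snippet below runs a parameterized notebook and captures its outputs; the notebook name and parameters are hypothetical placeholders, not part of ROB itself:

# Hedged sketch: executing a parameterized Jupyter notebook with Papermill [4].
import papermill as pm

pm.execute_notebook(
    "tagger_submission.ipynb",       # participant's notebook (hypothetical name)
    "tagger_submission_out.ipynb",   # executed copy with outputs captured
    parameters={"input_file": "test_events.h5", "output_file": "scores.csv"},
)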
We introduce CaloFlow, a fast detector simulation framework based on normalizing flows. For the first time, we demonstrate that normalizing flows can reproduce many-channel calorimeter showers with extremely high fidelity, providing a fresh alternative to computationally expensive GEANT4 simulations, as well as other state-of-the-art fast simulation frameworks based on GANs and VAEs. Besides the usual histograms of physical features and images of calorimeter showers, we introduce a new metric for judging the quality of generative modeling: the performance of a classifier trained to differentiate real from generated images. We show that GAN-generated images can be identified by the classifier with 100% accuracy, while images generated from CaloFlow are able to fool the classifier much of the time. More broadly, normalizing flows offer several advantages compared to other state-of-the-art approaches (GANs and VAEs), including: tractable likelihoods; stable and convergent training; and principled model selection. Normalizing flows also provide a bijective mapping between data and the latent space, which could have other applications beyond simulation, for example, to detector unfolding.
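The classifier-based figure of merit described above can be sketched generically as follows (a minimal illustration with scikit-learn on flattened shower images; the inputs are placeholders, not the CaloFlow code): train a binary classifier to separate real from generated showers, and read off how close its performance is to random guessing.

# Hedged sketch of a classifier two-sample test for generative-model quality.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def generator_quality(real_showers, generated_showers, seed=0):
    X = np.vstack([real_showers, generated_showers])  # flattened calorimeter images
    y = np.concatenate([np.ones(len(real_showers)), np.zeros(len(generated_showers))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=200, random_state=seed)
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    return auc  # ~0.5: generator fools the classifier; ~1.0: easily separated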
We put forth a technique to generate images of particle trajectories (particularly electrons and protons) in a liquid argon time projection chamber (LArTPC). LArTPCs are a type of particle physics detector used by several current and future experiments focused on studies of the neutrino. We implement a quantized variational autoencoder and an autoregressive model, which together produce momentum-conditioned images with LArTPC-like features. We adopt a hybrid approach to generative modeling, combining the decoder from the autoencoder with an explicit generative model for the latent space to produce momentum-conditioned images of particle trajectories in a LArTPC.
Current measurements of Standard Model parameters suggest that the electroweak vacuum is metastable. This metastability has important cosmological implications because large fluctuations in the Higgs field could trigger vacuum decay in the early universe. For the false vacuum to survive, interactions which stabilize the Higgs during inflation (e.g., inflaton-Higgs interactions or non-minimal couplings to gravity) are typically necessary. However, the post-inflationary preheating dynamics of these same interactions could also trigger vacuum decay, thereby recreating the problem we sought to avoid. These dynamics are often assumed to be catastrophic for models exhibiting scale invariance, since these generically allow for unimpeded growth of fluctuations. In this talk, we examine the dynamics of such "massless preheating" scenarios and show that the competing threats to metastability can nonetheless be balanced to ensure viability. We find that fully accounting for both the backreaction from particle production and the effects of perturbative decays reveals a large number of disjoint "islands of (meta)stability" over the parameter space of couplings. Ultimately, the interplay among Higgs-stabilizing interactions plays a significant role, leading to a sequence of dynamical phases that effectively extends the metastable regions to large Higgs-curvature couplings.
I will present the Sejong Suite, an extensive collection of state-of-the-art high-resolution cosmological hydrodynamical simulations spanning a variety of cosmological and astrophysical parameters, primarily developed for modeling the Lyman-Alpha forest and the high-redshift cosmic web. Adopting a particle-based implementation, we follow the evolution of gas, dark matter (cold and warm), massive neutrinos, and dark radiation, and consider several combinations of box sizes and number of particles. Noticeably, for the first time, we simulate extended mixed scenarios describing the combined effects of warm dark matter, neutrinos, and dark radiation, modeled consistently by taking into account the neutrino mass splitting. Along the way, I will also highlight some new results on cosmological neutrinos and the dark sector focused on the matter and flux statistics.
The axion is a well-motivated candidate for the inflaton, as the radiative corrections that spoil many single-field models are avoided by virtue of its shift symmetry. However, axions generically couple to gauge sectors. As the axion rolls through its potential, this coupling can result in the production of a co-evolving thermal bath, a situation known as "warm inflation." Inflationary dynamics in this warm regime can be dramatically altered and result in significantly different observable predictions. In this talk, I will show that for large regions of parameter space, axion inflation models once assumed to be safely "cold" are in fact warm, and must be reevaluated in this context.
The cold dark matter (CDM) paradigm with weakly interacting massive particles can successfully explain the observed dark matter relic density on cosmic scales and the large-scale structure of the Universe. However, a number of observations at the satellite-galaxy scale seem to be inconsistent with CDM simulations; this is known as the small-scale problem of CDM. In recent years, it has been demonstrated that self-interacting dark matter (SIDM) with a light mediator offers a reasonable explanation for the small-scale problem. We adopt a simple SIDM model and focus on the effects of Sommerfeld enhancement. In this model, the dark matter candidate is a leptonic scalar particle with a light mediator. We find favored regions of parameter space, with appropriate masses and coupling strengths, that generate a relic density consistent with the observed CDM relic density. Furthermore, this model satisfies the constraints from recent direct and indirect dark matter searches, as well as from the effective number of neutrinos and the observed small-scale structure of the Universe. In addition, with the favored parameters the model can resolve the discrepancies between astrophysical observations and $N$-body simulations.
We present models of resonant self-interacting dark matter in a dark sector with QCD, based on analogies to the meson spectra in Standard Model QCD. For dark mesons made of two light quarks, we present a simple model that realizes resonant self-interaction (analogous to the ϕ-K-K system) and thermal freeze-out. We also consider asymmetric dark matter composed of heavy and light dark quarks to realize a resonant self-interaction (analogous to the Υ(4S)-B-B system) and discuss the experimental probes of both setups. Finally, we comment on the possible resonant self-interactions already built into SIMP and ELDER mechanisms while making use of lattice results to determine feasibility.
Dark matter self-interactions have been proposed as a solution to various astrophysical small-scale structure anomalies. We explore the scenario in which dark matter self-interacts through a continuum of low-mass states. This happens if dark matter couples to a strongly-coupled nearly-conformal hidden sector. This type of theory is holographically described by brane-localized dark matter interacting with bulk fields in a slice of 5D anti-de Sitter space. The long-range potential in this scenario depends on a non-integer power of the spatial separation. We find that continuum mediators introduce novel power-law scalings for the scattering cross section, opening new possibilities for dark matter self-interaction phenomenology.
I will discuss a dark matter production mechanism based on decays of a messenger WIMP-like state into a pair of dark matter particles that self-interact via the exchange of a light, stable mediator. A natural by-product of this mechanism is the possibility of a late-time transition to a subdominant dark radiation component, which increases the present-day Hubble rate. A simple realization of the proposed mechanism was studied in a Higgs portal dark matter model. We found a significant region of parameter space that leads to a mild relaxation of the Hubble tension while simultaneously having the potential to address the small-scale structure problems of ΛCDM.
We present two distinct models which rely on first-order phase transitions in a dark sector. The first is a minimal model for baryogenesis which employs a new dark SU(2) gauge group with two doublet Higgs bosons, two lepton doublets, and two singlets. The singlets act as a neutrino portal that transfers the generated baryon asymmetry to the Standard Model. The model predicts extra relativistic degrees of freedom, exotic decays of the Higgs and Z bosons, and stochastic gravitational waves detectable by future experiments.
The second model additionally produces (asymmetric) dark matter, with the dark sector expanded to an SU(3)xSU(2)xU(1) gauge group. Dark matter is comprised of dark neutrons, or of dark protons and pions. This model is highly discoverable at both dark matter direct detection and dark photon search experiments, and the strong dark matter self-interactions may ameliorate small-scale structure problems.
The XENONnT experiment has made great commissioning strides in the last year. Operating at the INFN Gran Sasso National Laboratory in Italy, XENONnT substantially improves upon its predecessor, XENON1T, which to date is the most sensitive direct-detection dark matter experiment for spin-independent WIMPs above 6 GeV/c^{2}. As part of its multi-pronged physics program, XENONnT aims to reach a sensitivity of 2.6x10^{-48} cm^{2} for the WIMP-nucleon cross section. In this talk, I will describe the improved subsystems (ranging from liquid purification and radon distillation to the neutron veto and data processing) and their impact on the various physics searches.
The Scintillating Bubble Chamber (SBC) is a rapidly developing new technology for 0.7 - 7 GeV nuclear recoil detection. Demonstrations in liquid xenon at the few-gram scale have confirmed that this technique combines the event-by-event energy resolution of a liquid-noble scintillation detector with the world-leading electron-recoil discrimination capability of the bubble chamber, and in fact maintains that discrimination capability at much lower thresholds than traditional Freon-based bubble chambers. The promise of unambiguous identification of sub-keV nuclear recoils in a scalable detector makes this an ideal technology for both GeV-mass WIMP searches and CEvNS detection at reactor sites. We will present progress from the SBC Collaboration towards the construction of a pair of 10-kg argon bubble chambers at Fermilab and SNOLAB to test the low-threshold performance of this technique in a physics-scale device and search for dark matter, respectively.
The Scintillating Bubble Chamber (SBC) Collaboration is constructing a 10-kg liquid argon bubble chamber with scintillation readout. The goal for this new technology is to achieve a nuclear recoil detection threshold as low as 100 eV with near complete discrimination against electron recoil events. Following initial characterization in a near-surface site at Fermilab, an underground deployment is planned at SNOLAB for a dark matter search. The sub-keV nuclear recoil threshold would enable sensitivity to GeV-mass WIMPs, and a future ton-scale version could probe for dark matter down to the solar neutrino floor. The same technology has been considered for a first measurement of coherent elastic neutrino nucleus scattering (CEvNS) with reactor neutrinos. With high statistics and high signal-to-background, precision searches for beyond-standard-model physics would be possible. I will discuss the physics case for the liquid argon bubble chamber technology, and SBC studies of backgrounds and nuclear recoil calibration approaches.
The detection of low-mass dark matter is advancing with the development of new experimental techniques. Among the proposed setups, superfluid helium-4 detectors cover an extensive range of dark matter masses, from keV to GeV. I will present a complete theoretical framework for all processes within the superfluid, filling in the missing theory for sub-GeV DM detection. First, we use effective field theories to construct the interaction Lagrangian between quasiparticles. Second, we use spontaneous U(1) gauge-symmetry breaking and a current-element method to derive the interaction between test particles and quasiparticles. Finally, I will discuss the relevant cross sections and decay rates.
Recent theoretical calculations have shown that it is possible to attempt the direct detection of dark matter in the laboratory through its gravitational interaction alone. This is particularly relevant around the well-motivated Planck mass scale (22 micro-g or $10^{19}$ GeV). The Windchime collaboration is working on arrays of mechanical accelerometers with quantum-enhanced readout to ultimately achieve this goal. In this talk, I will present the idea of Windchime, our recent prototype setup, sensor development, and simulation and analysis frameworks.
In this talk, we correct previous work on magnetic charge in the presence of a photon mass. We show that, contrary to previous claims, this system has a very simple, closed-form solution: the Dirac string potential multiplied by an exponentially decaying factor. Interesting features of this solution are discussed, namely: (i) the Dirac string becomes a real feature of the solution; (ii) the breaking of gauge symmetry via the photon mass leads to a breaking of the rotational symmetry of the monopole's magnetic field; (iii) the Dirac quantization condition is potentially altered.
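Schematically (our reconstruction from the abstract, with conventions chosen here and not necessarily those of the authors), the quoted closed form would read
$$\vec{A}(r,\theta) \;=\; \frac{g}{r}\,\frac{1-\cos\theta}{\sin\theta}\,e^{-m_\gamma r}\,\hat{\phi},$$
i.e. the familiar Dirac string potential times a Yukawa-like suppression $e^{-m_\gamma r}$ set by the photon mass $m_\gamma$.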
Quantum field theories generally contain small quantum excitations around a true vacuum, which we call particles, and large classical structures, called solitons, that interpolate between different degenerate vacua. Often the solitons have a topological character and are then also known as topological defects, of which kinks, domain walls, strings, and magnetic monopoles are all examples. After a quantum phase transition, the quantum vacuum can break up to form these classical topological defects. We study such phase transitions with global symmetry breaking and their dynamics, where the only interactions are with external parameters that induce the phase transition. We evaluate the number densities of the defects in 1, 2, and 3 dimensions (kinks, vortices, and monopoles, respectively) and find that they scale as $t^{-d/2}$ and evolve towards attractor solutions that are independent of the externally controlled time dependence.
A method to construct the asymptotic eigenstates of two-dimensional adjoint QCD in all parton sectors is described. It is used to explain known properties of the spectrum of QCD$_{2A}$, as well as the basis of a numerical approach to tackle the full theory. First results in a discrete approximation and a continuous formulation are presented. Prospects to uncover the true single-particle content of the theory are discussed.
Non-topological solitons such as Q-balls and Q-shells are fascinating field theory objects. They may also relate to what lies beyond the Standard Model, for instance as a macroscopic dark matter candidate. I describe recent improvements in the analytic understanding of these objects, leading to accurate descriptions of their essential characteristics, such as size, charge, and mass. I also discuss new classes of solutions which this new understanding has revealed. These advances pave the way for systematic investigations of how Q-balls and Q-shells can interact with Standard Model fields.
A search for resonant Higgs boson pair production in the four b-jet final state is conducted. The analysis uses 36 fb$^{-1}$ of pp collision data at $\sqrt{s}$ = 13 TeV collected with the ATLAS detector. The analysis is divided into two regimes, targeting Higgs boson decays which are reconstructed as pairs of b-tagged small-radius jets or as single large-radius jets associated with b-tagged track-jets. Spin-0 and spin-2 benchmark signal models are considered, both of which correspond to resonant HH production via gluon–gluon fusion and decaying to two Standard Model Higgs bosons. No significant evidence for a signal is observed. Upper limits are set on the production cross-section times branching ratio to Higgs boson pairs of a new resonance in the mass range from 251 GeV to 3 TeV.
We present a search for non-resonant di-Higgs production in the $HH\rightarrow b\bar{b}\gamma\gamma$ decay channel. The measurement uses 139 $\mathrm{fb}^{-1}$ of pp collisions recorded by the ATLAS experiment at a center-of-mass energy of 13 TeV. Selected events are separated into multiple regions, targeting both the Standard Model (SM) signal and Beyond Standard Model (BSM) signals with modified Higgs self-couplings. Further details on the optimization of the event selection are highlighted. No excess with respect to background expectations is found, and upper limits at 95% confidence level are set on the di-Higgs production cross sections. The observed (expected) limit on the Standard Model cross section is 130 fb (180 fb), corresponding to 4.1 (5.5) times the predicted value. The observed (expected) Higgs trilinear coupling modifier is constrained to be in the range [-1.5, 6.7] ([-2.4, 7.7]).
After the Higgs boson, with a mass of 125 GeV, was discovered in 2012, studies of single Higgs boson production have largely confirmed that this particle has properties similar to those of the Higgs boson predicted by the Standard Model (SM). However, it is clear that physics beyond the SM is required to explain many observed phenomena in nature, and there remains the possibility that the Higgs boson can act as a portal for BSM physics. Studies of Higgs boson pair production (HH) represent the next crucial step in constraining the Higgs sector, allowing for the exploration of resonant HH production as well as refined measurements of the Higgs boson self-coupling. While previous searches have focused on HH production in the gluon-gluon and vector-boson fusion modes, this analysis documents a new search, with 136 fb^-1 of pp collisions at √s = 13 TeV collected by ATLAS in LHC Run 2, for both resonant and non-resonant HH production in association with a vector boson (VHH). Three different channels are considered, corresponding to ZllHH, ZvvHH, or WlvHH, in order to have good coverage of the different final states. Only H→bb is considered, for simplicity and for the sake of high statistics. The analysis benefits from small backgrounds and attempts to set limits on VHH production for the first time. Analysis techniques and expected significance will be presented.
Precision measurement of the Higgs boson couplings to SM particles is a central task at the LHC today and for the future HL-LHC. Due to the $\sim$ O(nb) $t\bar{t}$ cross section and the large Yukawa coupling, measurements of the interaction of the Higgs with top quarks are particularly compelling. The $t\bar{t}HH$ signal can be used to probe this coupling and also provides a direct measurement of the trilinear Higgs self-coupling. We search for $t\bar{t}HH$ production with the CMS detector at the LHC, both in the SM and in an EFT model. In the SM search we look for the semi-leptonic decay of the top-quark pair and the decay of both Higgs bosons to b quarks, using the full Run 2 data. We also develop a simplified EFT model to study this signal independently of $t\bar{t}H$, in which dimension-6 and dimension-8 gauge-invariant operators are included to modify $t\bar{t}HH$ while keeping $t\bar{t}H$ unchanged at tree level. In this model, which includes a BSM $t\bar{t}HH$ vertex, Higgs bosons are produced at higher $p_T$ compared with those from SM production. Due to the resulting Lorentz boost, we observe an enhancement around the Higgs mass in the single b-jet mass spectrum.
A search for Higgs boson pair production in the bbll+MET final state with the ATLAS experiment will be presented. The analysis uses the full Run 2 dataset (139 fb−1) collected at the LHC in pp collisions at √s = 13 TeV. Di-Higgs boson production from the SM trilinear Higgs self-interaction and from BSM resonant decays is investigated in a final state containing two jets (one or two tagged as b-jets) and two leptons with opposite electric charge. Three different channels, where one of the Higgs bosons decays via H→bb and the other via H→WW∗/ZZ∗/τ+τ−, are included as di-Higgs signal contributions in the analysis. A deep-learning neural network is used for event selection to improve the ATLAS di-Higgs detection sensitivity. The expected upper limits on the cross sections were investigated based on MC-simulated events.
Higgs boson pair production (HH) is one of the more interesting processes to study at the LHC, as it allows us to probe the Higgs boson self-coupling and associated parameters of the Higgs potential, as well as to search for physics beyond the Standard Model. The $b\bar{b}\tau\tau$ final state is one of the most sensitive channels for HH studies due to an appreciable branching ratio and a relatively clean background. In this talk, the methods used in calculating a few of the more important theoretical uncertainties associated with this analysis are presented, in particular perturbative QCD (pQCD) calculations and parton showers for single-Higgs backgrounds. In pQCD, three main sources of uncertainty are (i) missing higher orders in the perturbative expansion of the partonic cross section, (ii) parton distribution functions, and (iii) the experimental determination of the strong coupling constant. These pQCD uncertainties correspond to parton-level final states. Since the simulated samples also pass through showering and hadronization generators that convert the parton-level cross section to a hadron-level cross section, additional uncertainties arise from (i) the modelling of parton showering and hadronization through the choice of algorithm and parameters and (ii) matrix-element next-to-leading-order calculations.
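For context on item (i) of the pQCD list, missing higher orders are commonly (though not necessarily in this particular analysis) estimated with a seven-point scale variation: the renormalization and factorization scales are varied as
$$(\mu_R,\mu_F)\in\big\{(a\,\mu_0,\,b\,\mu_0):a,b\in\{\tfrac12,1,2\}\big\}\setminus\big\{(\tfrac12\mu_0,\,2\mu_0),\,(2\mu_0,\,\tfrac12\mu_0)\big\},$$
and the uncertainty is taken as the envelope of the resulting cross sections about the central choice $\mu_R=\mu_F=\mu_0$.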
NOvA is a long-baseline neutrino oscillation experiment designed to precisely measure the neutrino oscillation parameters. We do this by directing a beam of predominantly muon neutrinos from Fermilab towards northern Minnesota and measuring the rate of electron-neutrino appearance. The experiment consists of two functionally equivalent detectors, each located 14.6 mrad off the central axis of Fermilab's nearly 700 kW NuMI neutrino beam, the world's most intense neutrino beam. Both the Near Detector, located 1 km downstream from the beam source, and the Far Detector, located 810 km away in Ash River, MN, were constructed from plastic extrusions filled with liquid scintillator. Since the data measured at the Near Detector are used to accurately determine the expected rate at the Far Detector, it is very important to have automated and accurate monitoring of the data recorded by the detectors, so that any hardware, data-acquisition, or beam issues arising in the 344k (20k) channels of the Far (Near) Detector that could affect the quality of the datasets collected for physics analyses are identified. I will present the monitoring techniques and systems used at the various stages of data taking and show the NOvA detectors' data-taking performance up to the end of the most recent beam run.
NOvA is a long-baseline neutrino experiment optimized to observe the oscillation of muon neutrinos to electron neutrinos. It uses a high-purity muon neutrino beam produced at Fermilab with a central energy of approximately 1.8 GeV. NOvA consists of a near detector located 1 km downstream of the neutrino production target at Fermilab and a far detector located 810 km away in Ash River, Minnesota. Neutrino cross-section measurements performed at the near detector are affected by a large uncertainty on the absolute neutrino flux. Since the neutrino-electron elastic-scattering cross section can be accurately calculated, the measured rate of these interactions can be used to constrain the neutrino flux. We present the status of the neutrino-electron elastic-scattering measurement, which uses a Convolutional Neural Network (CNN) to identify signal events with high purity.
NOvA (NuMI Off-Axis νe Appearance) is a long-baseline neutrino oscillation experiment composed of two functionally identical detectors, a 300-ton Near Detector and a 14-kton Far Detector, separated by 809 km and placed 14 mrad off-axis from the NuMI neutrino beam created at Fermilab. This configuration enables NOvA's rich neutrino physics program, which includes measuring neutrino mixing parameters, determining the neutrino mass hierarchy, and probing CP violation in the leptonic sector. The NOvA Test Beam experiment uses a scaled-down 30-ton detector to analyze tagged beamline particles. A new tertiary beamline deployed at Fermilab can select and identify electrons, muons, pions, kaons, and protons with momenta ranging from 0.3 to 2.0 GeV/c. The Test Beam data will provide NOvA with a better understanding of the largest systematic uncertainties impacting NOvA's analyses, which include the detector response, calibration, and hadronic energy resolution. In this talk, I will present the status and future plans of the NOvA Test Beam program, along with the most recent results.
NOvA is a long-baseline neutrino experiment based at Fermilab that studies neutrino oscillation parameters via electron-neutrino appearance and muon-neutrino disappearance. In these measurements, we compare the Far Detector data to a predicted energy spectrum constrained by the Near Detector (ND) data. The ND prediction is simulated using GENIE, with the neutrino cross-section model adjusted to better describe the data by modifying the rates of meson-exchange-current (MEC) interactions and final-state interactions. To characterize the performance of these adjustments, the ND simulation and data are divided into a set of samples based on multiplicity and topology. A fit using these samples to constrain MEC and other cross-section parameters will be described.
NOvA is a long-baseline neutrino oscillation experiment, designed to make precision neutrino oscillation measurements using $\nu_\mu$ disappearance and $\nu_e$ appearance. It consists of two functionally equivalent detectors and utilizes the Fermilab NuMI neutrino beam. NOvA uses a convolutional neural network for particle identification of $\nu_e$ events in each detector. As part of the validation process of this classifier’s performance, we apply a data-driven technique called Muon Removal. In a Muon-Removed Electron-Added study we select $\nu_\mu$ charged current candidates from both data and simulation in our Near Detector and then replace the muon candidate with a simulated electron of the same energy. In a Muon-Removed Decay-in-Flight study we remove the muonic hits from events where cosmic muons entering the detector have decayed in flight, resulting in samples of pure electromagnetic showers. Each sample is then evaluated by our classifier to obtain selection efficiencies. Our recent analysis found agreement between the selection efficiencies of data and simulation within our uncertainties, showing that our classifier selection is generally robust in $\nu_e$ charged current signal selection.
The NOvA experiment uses a convolutional neural network (CNN) that analyzes topological features to determine neutrino flavor. Alternative approaches to flavor identification using machine learning are being investigated with the goal of developing a network trained with both event-level and particle-level images in addition to reconstructed physical variables while maintaining the performance of the CNN. Such a network could be used to analyze the individual prediction importances of these inputs. An original network that uses a combination of transformer and MobileNet CNN blocks will be discussed.
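Below is a minimal tf.keras sketch of the kind of hybrid architecture described above: a MobileNet-style depthwise-separable convolution block and a transformer encoder block applied to an event image, combined with a vector of reconstructed physical variables. The input shapes, layer sizes, number of reconstructed variables, and output classes are illustrative assumptions, not the actual NOvA network.

```python
import tensorflow as tf
from tensorflow.keras import layers

def separable_block(x, filters):
    # MobileNet-style block: depthwise 3x3 convolution followed by a pointwise 1x1
    x = layers.DepthwiseConv2D(3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def transformer_block(seq, heads=4, key_dim=32):
    # Standard transformer encoder block over the sequence of image cells
    attn = layers.MultiHeadAttention(num_heads=heads, key_dim=key_dim)(seq, seq)
    seq = layers.LayerNormalization()(seq + attn)
    ff = layers.Dense(4 * key_dim, activation="relu")(seq)
    ff = layers.Dense(seq.shape[-1])(ff)
    return layers.LayerNormalization()(seq + ff)

img_in = layers.Input(shape=(80, 100, 1))       # event-level image (assumed size)
x = separable_block(img_in, 32)
x = layers.MaxPooling2D()(x)
x = separable_block(x, 32)
seq = layers.Reshape((-1, 32))(x)               # flatten spatial cells into a sequence
img_feat = layers.GlobalAveragePooling1D()(transformer_block(seq))

reco_in = layers.Input(shape=(10,))             # reconstructed physical variables (assumed)
reco_feat = layers.Dense(32, activation="relu")(reco_in)
merged = layers.Concatenate()([img_feat, reco_feat])
out = layers.Dense(3, activation="softmax")(merged)   # e.g. nu_e CC / nu_mu CC / NC scores

model = tf.keras.Model([img_in, reco_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```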
The upgrade of the Mu2e experiment at Fermilab, Mu2e-II, is proposed to improve the expected Mu2e sensitivity. Mu2e-II will search for the neutrinoless conversion of a muon into an electron in the field of an Al nucleus, with a sensitivity up to a few 10$^{-18}$.
As in Mu2e, the Mu2e-II tracker system will be responsible for precisely measuring the momentum of the conversion electron to distinguish it from the background electrons coming from muon decay in orbit.
To meet these requirements, a preliminary calculation indicates that the Mu2e-II tracker should be even lighter than the Mu2e tracker, reaching a total material budget of about 4$\times$10$^{-3}$ X/X$_{0}$. Moreover, it must preserve or exceed the rate capability of the Mu2e tracker. We present the ongoing R&D studies and some preliminary simulation results for a tracker made with about 20,000 8 $\mu$m thin-walled straw tubes operating in a vacuum of 10$^{-4}$ torr, as well as for possible alternatives.
The Inner Tracker is an all-silicon detector that will replace ATLAS's inner tracking layers for the High-Luminosity LHC. SLAC National Accelerator Laboratory is responsible for the loading and integration of the pixel layers closest to the LHC beamline, the Inner System. We will mount the silicon pixel detectors on their mechanical supports, then connect the loaded mechanical supports ("loaded local supports") to integrate the full Inner System. This talk will present the loading of the first thermomechanical local-support prototype.
The inner tracking detector of the ATLAS experiment at CERN is currently preparing for an upgrade to operate at the High-Luminosity LHC, scheduled to start in 2027. A complete replacement of the existing ATLAS Inner Detector is required to cope with the expected radiation damage. The all-silicon Inner Tracker (ITk) under construction comprises a mixture of pixel and strip layers. At the core of the strip detector barrel are the staves, each of which hosts 28 silicon modules. A thorough characterization of the modules before assembly onto each stave is critical; therefore, each module undergoes electrical and thermal quality control (QC) testing between module production and stave assembly. All modules must be thermally cycled ten times between -35°C and +40°C. This talk will show the thermal and electrical performance of the US testing setup, focusing on the difficulties encountered in meeting the QC requirements. It will also give an overview of the results obtained by analyzing the first batch of produced modules.
The Upstream Tracker (UT) is a silicon tracking sub-detector, currently under construction, that will sit just upstream of LHCb's dipole magnet during Run 3 of the LHC. It improves on the previous tracker in several ways, including enabling LHCb's new 40 MHz fully software trigger, and comprises 968 silicon sensors mounted in four planes together with their requisite readout electronics and cooling systems.
This talk will give an overview of the UT and describe its construction with an emphasis on its mechanical and thermal structures.
The Large Hadron Collider (LHC) will soon undergo an upgrade, referred to as the High-Luminosity LHC (HL-LHC), which will increase the instantaneous luminosity beyond the LHC's design value. The ATLAS experiment is upgrading the innermost portion of the detector to the ITk pixel detector to accommodate the increase in luminosity. The RD53 collaboration was formed to develop the ASIC readout chips used inside the ITk pixel detector. In the pre-production RD53B chip, an encoding scheme was implemented to help shrink data streams and reduce the overall bandwidth of the system. An exploratory effort was undertaken to create a hardware decoder for Field Programmable Gate Arrays (FPGAs) to cut down on CPU usage from software decoders later in the system. A parallelized hardware decoder was designed to meet the data rates produced by an RD53B chip. Overall, the final product is a base hardware decoder design that can handle the throughput constraints of a single RD53B and is resource efficient. In this talk I will report the necessary background, the hardware decoder design, and conclusions based on this design.
In Run 3 of the LHC (2022-2024), the Level-1 trigger system of the ATLAS experiment will introduce three feature extractors (FEX): eFEX for electron/photon, jFEX for jets/MET, and gFEX for global quantities. The increased calorimeter granularity is useful for all physics channels that deposit energy in the calorimeter, from high-bandwidth items like electrons to MET (missing transverse momentum). An overview of the hardware implementation will be discussed. Details of the algorithm design will be presented, along with the projected performance for electron/photon, jet, and MET triggers.
There is a significant gap between the inclusive measurement of the $B \rightarrow X_{c} l \nu$ branching fraction and the sum of the measurements of the exclusive $B \rightarrow X_{c} l \nu$ channels. The dominant contributions, $B \rightarrow D^{*} l \nu$ and $B \rightarrow D l \nu$, are precisely known, but the branching fractions of $B \rightarrow D^{**} l \nu$ have larger uncertainties. Here, the $D^{**}$ is an orbitally excited charmed meson, which can decay into $D^{*} \pi$ and $D \pi$. The decay $B \rightarrow D^{(*)} \pi\pi l \nu$ with two bachelor pions in the final state has so far only been observed by the BaBar collaboration.
Over the course of about 10 years the Belle collaboration has recorded about 772 million $B\overline{B}$ pairs produced in $e^{+} e^{-} \rightarrow \Upsilon(4S)$ at the KEKB asymmetric-energy $e^{+} e^{-}$ collider. The status of the measurement of the branching fraction of $B \rightarrow D^{(*)} \pi l \nu$ as well as $B \rightarrow D^{(*)} \pi\pi l \nu$ using the full Belle data sample will be presented. In addition, an analysis of the invariant $D^{(*)} \pi$ mass distribution will be shown.
We report the first search for CP violation using T-odd triple-product asymmetries and the most precise branching fraction measurement for the singly Cabibbo-suppressed decay $D^{0}\rightarrow K_{s}^{0} K_{s}^{0} \pi^{+} \pi^{-}$. These results are obtained using the $922\,{\rm fb}^{-1}$ data sample collected with the Belle detector at the KEKB asymmetric-energy $e^+ e^-$ collider. The branching fraction is measured relative to the normalization channel $D^{0}\rightarrow K_{s}^{0} \pi^{+} \pi^{-}$. Charm decays are expected to exhibit very small CP violation in the Standard Model, which makes CP-violation searches in charm decays an excellent probe of physics beyond the Standard Model. We probe the asymmetries in the observable $C_{T} = \vec{P}_{K_{s}^{0}} \cdot (\vec{P}_{\pi^{+}} \times \vec{P}_{\pi^{-}})$ for $D^{0}$ and $\overline{D}^{0}$ decays. Taking the difference of the T-odd asymmetries between the CP-conjugate $D^{0}$ and $\overline{D}^{0}$ decays provides a CP-asymmetry observable free from strong-interaction effects.
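As an illustration of the observables above, the sketch below computes the triple product $C_T$ and the T-odd asymmetries for toy momentum samples, following one common convention ($A_T$ from $C_T$ in $D^0$ decays, $\bar{A}_T$ from $-\bar{C}_T$ in $\overline{D}^0$ decays, and $a_T^{CP} = \frac{1}{2}(A_T - \bar{A}_T)$). The momenta and sample sizes are toy values, not Belle data.

```python
import numpy as np

def c_t(p_k0s, p_pip, p_pim):
    # Triple product of the three daughter 3-momenta
    return np.dot(p_k0s, np.cross(p_pip, p_pim))

def asymmetry(values):
    # A_T = [N(>0) - N(<0)] / [N(>0) + N(<0)]
    n_pos = np.sum(values > 0)
    n_neg = np.sum(values < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Toy distributions of C_T (D0) and Cbar_T (D0bar); real values come from the
# reconstructed daughter momenta event by event, e.g. c_t(p_k0s, p_pip, p_pim)
rng = np.random.default_rng(1)
ct_d0 = rng.normal(0.002, 1.0, 10000)
ctbar_d0bar = rng.normal(-0.001, 1.0, 10000)

a_t = asymmetry(ct_d0)
a_t_bar = asymmetry(-ctbar_d0bar)
a_t_cp = 0.5 * (a_t - a_t_bar)   # strong-phase effects cancel in the difference
print(f"A_T = {a_t:.4f}, Abar_T = {a_t_bar:.4f}, a_T^CP = {a_t_cp:.4f}")
```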
We study flavor-conserving M1 radiative decays of heavy-flavor bottom baryons in the framework of the Effective Mass Scheme (EMS) within the quark model. In the EMS, the masses of the quarks inside the baryon are modified by the one-gluon-exchange interaction with the spectator quarks, and all quarks are treated on the same footing. The baryon mass can then be written as the sum of the constituent quark masses and the spin-dependent hyperfine interaction among them. We show that the EMS successfully describes the masses, magnetic moments, transition moments, and radiative decay widths of the lowest-lying singly heavy-flavor baryons in a parameter-independent way. The exchange contribution is worked out through interaction terms $b_{ij}$ extracted from the recently observed experimental masses of the heavy-flavor charm and bottom baryons, which are used to calculate the effective quark masses. We then compute the magnetic moments of the ground-state $J^P = \frac{1}{2}^{+}$ and $J^P = \frac{3}{2}^{+}$ states and the transition moments for $\frac{1}{2}^{\prime+} \to \frac{1}{2}^{+}$, $\frac{3}{2}^{+} \to \frac{1}{2}^{+}$, and $\frac{3}{2}^{+} \to \frac{1}{2}^{\prime+}$ heavy-flavor charm and bottom baryon states. Finally, we make robust model-independent predictions for the radiative M1 decay widths of heavy-flavor baryons. The radiative transitions between these states proceed mainly through M1-type amplitudes, while the E2-type contributions are negligible and therefore ignored. We also extend our analysis to the triply heavy charm and bottom baryons.
We present measurements of the branching fractions and $CP$ asymmetries for $D_s^{+} \rightarrow K^{+} \eta $, $D_s^{+} \rightarrow K^{+} \pi^0 $, and $D_s^{+} \rightarrow \pi^{+} \eta $ decays, and the branching fraction for $D_s^{+} \rightarrow \pi^{+} \pi^0$ based on the full data sample collected by the Belle detector at the KEKB $e^+e^-$ asymmetric-energy collider. No evidence for $CP$ violation is found.
A measurement of the $B_s^0\rightarrow J/\psi\phi$ decay parameters using $80~\mathrm{fb}^{-1}$ of integrated luminosity collected with the ATLAS detector from $13~\mathrm{TeV}$ proton-proton collisions at the LHC is presented. The measured parameters include the CP-violating phase $\phi_s$, the width difference $\Delta\Gamma_s$ between the $B_s^0$ meson mass eigenstates, and the average decay width $\Gamma_s$. The values measured for the physical parameters are combined with those from $19.2~\mathrm{fb}^{-1}$ of $7~\mathrm{TeV}$ and $8~\mathrm{TeV}$ data, leading to the following:
$\phi_s=-0.087\pm0.036~(\mathrm{stat.})\pm0.021~(\mathrm{syst.})~\mathrm{rad}$
$\Delta\Gamma_s=0.0657\pm0.0043~(\mathrm{stat.})\pm0.0037~(\mathrm{syst.})~\mathrm{ps}^{-1}$
$\Gamma_s=0.6703\pm0.0014~(\mathrm{stat.})\pm0.0018~(\mathrm{syst.})~\mathrm{ps}^{-1}$
Results for $\phi_s$ and $\Delta\Gamma_s$ are also presented as $68\%$ confidence-level contours in the $\phi_s$-$\Delta\Gamma_s$ plane. Furthermore, the transversity amplitudes and corresponding strong phases are measured. The $\phi_s$ and $\Delta\Gamma_s$ measurements are in agreement with the Standard Model predictions.
Many analyses in ATLAS rely on the identification of jets containing $b$-hadrons ($b$-jets) with high efficiency while rejecting more than 99% of non-$b$-jets. Identification algorithms, called $b$-taggers, exploit $b$-hadron properties such as their long lifetime, high mass, and high decay multiplicity. Recently developed ATLAS $b$-taggers based on neural networks are expected to outperform previous $b$-taggers by a factor of two in light-jet rejection. Nevertheless, contributions from light-jet mistags can be non-negligible in certain analysis phase-space regions. It is therefore important to precisely measure the light-jet mistag rate in both data and simulation, and to correct the corresponding rate in simulation.
Due to the high light jet rejection of the $b$-taggers, the mistag rate cannot be measured directly but rather by means of a modified tagger, designed to decrease the $b$-jet efficiency while leaving the light jet response unchanged. This so-called "negative tag method" has been improved recently: uncertainties are reduced by constraining non-light flavour contributions with a data-driven method and the dominant systematic uncertainty has been reduced significantly, from 10-60% to 5-20% due to improved inner detector modeling and an auxiliary analysis. The method and a selection of results released recently to the ATLAS collaboration using $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector will be presented.
The Dark Energy Survey has observed large-scale-structure data over 5000 sq. deg. of sky. This effort was carried out in collaboration with hundreds of scientists, culminating in the most accurate and precise constraints to date on the cosmology of the late-time Universe.
In this talk, I discuss the methodology and measurements of the third year of the Dark Energy Survey. I review the results of the cosmological analysis and, finally, I discuss the tension between large-scale-structure constraints from DES and the Planck experiment, and the efforts already under way for the next generation of DES data.
Twenty years ago, in an experiment at Brookhaven National Laboratory, physicists detected what seemed to be a discrepancy between measurements of the muon's magnetic moment and theoretical calculations of what that measurement should be, raising the tantalizing possibility of physical particles or forces as yet undiscovered. The Fermilab team has just announced that their precise measurement supports this possibility. The reported significance for new physics is 4.2 sigma, just slightly below the discovery level of 5 sigma. However, an extensive new calculation of the muon's magnetic moment using lattice QCD by the BMW collaboration reduces the gap between theory and experimental measurements. The lattice result appeared in Nature on the day of the Fermilab announcement. In this talk both the theoretical and experimental aspects are summarized, with two possible narratives: a) almost a discovery or b) the Standard Model reinforced. Some details of the lattice calculation are also shown.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment. Its main physics goals are the precise measurement of the neutrino oscillation parameters, in particular the violation of the charge-parity symmetry and the neutrino mass hierarchy. DUNE consists of a Far Detector (FD) complex with four multi-kiloton liquid argon detectors, and a Near Detector (ND) complex located close to the neutrino source at Fermilab (USA). Here we present an overview of the DUNE experiment, its detectors, and physics capabilities, within the context of the long baseline program of the next decade.
Ultralight axions (ULAs), whose masses can lie in a wide range of values and can be even smaller than $10^{-28}$ eV, are generically predicted in UV theories such as string theory. In the cosmological context, the early Universe may have been filled with a network of ULA cosmic strings which, depending on the mass of the axion, can survive until very late times. If the ULA also couples to electromagnetism, and the network survives past recombination, then the interaction between the strings and the CMB photons induces a rotation of the polarization axis of the CMB photons (known as the birefringence effect). This effect is independent of the string tension and depends only on the coupling between the ULA and the photon (which in turn is sensitive to UV physics). In this talk I will present results for this CMB birefringence effect for three different models of the string network. Interestingly, the effect is within the reach of some current and future CMB experiments.
The cosmological collider physics program aims at probing particle physics at energies as high as the inflationary Hubble scale, $H \le 10^{13}$ GeV, using precision measurements from CMB, large scale structure surveys, and 21-cm cosmology. Heavy particles produced during inflation can impart unique correlations in the density fluctuations across the sky, leading to non-gaussianity (NG) in the cosmological observables. This presents a unique opportunity for the “direct detection” of particles with masses as large as $H$. However, the strength of this signal drops exponentially due to a Boltzmann-like factor as masses exceed $H$. In this talk, I will discuss a mechanism that overcomes this suppression and broadens the scope of cosmological collider physics, focusing on the case of a massive complex scalar field. The mechanism allows us to harness large kinetic energy of the inflaton to produce particles with masses as large as $\sim 60H$. I will show that NG with $f_{\rm NL} \sim {\cal O}(0.01-10)$ can be obtained, and delineate a procedure to infer the mass of the heavy field from the signal.
SPT-3G is the third survey receiver operating on the South Pole Telescope dedicated to high-resolution observations of the cosmic microwave background (CMB). Sensitive measurements of the temperature and polarization anisotropies of the CMB provide a powerful dataset for constraining the fundamental physics of the early universe, including models of inflation and the neutrino sector. Additionally, CMB surveys with arcminute-scale resolution are capable of detecting galaxy clusters, millimeter-wave bright galaxies, and a variety of transient phenomena. The SPT-3G instrument provides a significant improvement in mapping speed over its predecessors, SPT-SZ and SPTpol. The broadband optics design of the instrument achieves a 430 mm diameter image plane across observing bands of 95 GHz, 150 GHz, and 220 GHz, with 1.2 arcmin FWHM beam response at 150 GHz. In the receiver, this image plane is populated with 2690 dual-polarization, tri-chroic pixels (~16000 detectors) read out using a 68X digital frequency-domain multiplexing readout system. In 2018, SPT-3G began a multiyear survey of 1500 deg$^{2}$ of the southern sky. I will summarize the unique optical, cryogenic, detector, and readout technologies employed in SPT-3G, and I will report on the integrated performance of the instrument.
Line-intensity mapping (LIM) of millimeter-wavelength tracers is a promising new technique for mapping cosmic structure at redshifts beyond the reach of galaxy surveys. I will describe the design and science motivation for the South Pole Telescope Summertime Line Intensity Mapper (SPT-SLIM), which seeks to demonstrate the use of on-chip spectrometers based on microwave kinetic inductance detectors (MKIDs) for LIM observations of CO at z~1-3. The design of SPT-SLIM is enabled by key technical developments, including MKID-coupled R=300 filter-bank spectrometers between 120-180 GHz, as well as a new low-cost, high-throughput MKID readout architecture based on the ICE platform. When deployed in the 2022-23 austral summer, SPT-SLIM will produce strong constraints on the CO power spectrum, while developing the experimental and observational techniques needed to use LIM as a cosmological probe in future survey instruments.
In the inflationary paradigm, a background of primordial gravitational waves is predicted. These perturbations would leave a unique signature in the curl component of the cosmic microwave background (CMB) polarization (B-modes). A detection of B-mode power at degree angular scales would constrain the amplitude of the tensor perturbations generated during inflation. This information is encoded in the tensor-to-scalar ratio r.
The B-mode power spectrum is dominated by foregrounds such as synchrotron emission and polarized dust at large angular scales and by lensed curl-free CMB polarization (E-modes) at small angular scales. To isolate the large-angular-scale primordial B-mode signal, small-aperture telescopes such as BICEP/Keck (BK) must work in conjunction with large-aperture telescopes (LATs) such as the South Pole Telescope, which have higher resolution and are more sensitive to smaller angular scales, in order to cleanly remove the non-primordial signals.
To date, we have only upper limits on r.
Several other collaborations are setting stringent upper limits with state-of-the-art instruments.
The combined efforts of the SPT and BK collaborations in the joint analysis group, the South Pole Observatory (SPO), will significantly improve the constraint ($\sigma(r) \sim 0.003$ for SPO) compared with what BK data alone could achieve ($\sigma(r) \sim 0.02$ for BK15). The SPO limit on r will hold until the CMB-S4 results ($\sigma(r) \sim 5\times10^{-4}$ forecast).
Moreover, thanks to its large number of sensitive detectors, its scan strategy, and its sky coverage of a foreground-clean patch of the sky, SPT-3G will be able to deliver an independent constraint on r, which will be informative for the performance of the large-aperture-telescope design for CMB-S4.
Many physics models beyond the Standard Model predict heavy new particles decaying preferentially to at least one top quark. Three searches for a heavy resonance decaying into at least one top quark in pp collisions at a center-of-mass energy of 13 TeV at the LHC will be presented in this talk. These searches include the search for a heavy resonance decaying to a top quark and a W boson in the fully hadronic and lepton+jets final states, and the search for W' bosons decaying to a top and a bottom quark in the all-hadronic final state. The three searches use the data collected by the CMS experiment between 2016 and 2018, corresponding to an integrated luminosity of 137 fb$^{-1}$. Novel machine-learning and reconstruction techniques, including the use of non-isolated leptons and jet substructure, are used to optimize the discrimination of top quarks with high Lorentz boosts and allow for a significant improvement in the analysis sensitivity compared with earlier results. No significant excess of events relative to the expected yield from standard model processes is observed. The most stringent limits to date are obtained from these searches.
A search for dijet resonances in events with identified leptons has been performed using the full Run 2 dataset collected in $pp$ collisions at $\sqrt{s}=13$ TeV by the ATLAS detector, corresponding to an integrated luminosity of 139 fb$^{-1}$. The dijet invariant-mass ($m_{jj}$) distribution from events with at least one isolated electron or muon was probed in the range $0.22 < m_{jj} < 6.3$ TeV. The analysis probes much lower $m_{jj}$ than traditional inclusive dijet searches and is sensitive to a large range of new-physics models with a final-state lepton. As no statistically significant deviation from the Standard Model background hypothesis was found, limits were set on contributions from generic Gaussian signals and on various beyond-the-Standard-Model scenarios, including the Sequential Standard Model, a technicolor model, a charged Higgs boson model, and a simplified dark matter model.
Many theories beyond the Standard Model predict new phenomena, such as $Z'$ bosons and vector-like quarks, in final states containing bottom or top quarks. It is challenging to reconstruct and identify the decay products and to model the major backgrounds. Nevertheless, such final states offer great potential to reduce the Standard Model backgrounds thanks to their characteristic decay signatures. The latest searches in two-quark final states using the full Run 2 ($139$ fb$^{-1}$) proton-proton collision dataset collected at a center-of-mass energy of $\sqrt{s} = 13$ TeV with the ATLAS detector will be presented. In particular, this presentation will summarize the recent results of dijet and top-antitop resonance searches in the hadronic top-quark final state. This talk will also highlight associated improvements from deep-learning-based $b$-quark and top-quark identification techniques. Furthermore, the interpretation of these results in the context of $s$-channel dark matter mediator models will be discussed.
This talk presents a search for a new resonance $W^\prime$ decaying into a $W$ boson and a $125~\text{GeV}$ Higgs boson $H$ in the ${\ell^{\pm}{\nu}b\bar{b}}$ final states, where $\ell = e,~\mu,~\mathrm{or}~\tau$, using $pp$ collision data at 13 TeV corresponding to an integrated luminosity of 139 fb$^{-1}$ collected by the ATLAS detector at LHC. The search considers the one-lepton channel, where an electron, muon, or leptonically decaying tau lepton is successfully reconstructed. Both resolved and merged regimes, as well as one and two b-tag regions, are employed to reconstruct the $H\rightarrow bb$ decay across the range of $W^{\prime}$ masses. The search is conducted by examining the reconstructed invariant mass distributions of $W^\prime \to WH$ candidates in the mass range from $400~\text{GeV}$ to $5~\text{TeV}$. Upper limits are placed at the 95% confidence level on the production cross-section times branching fraction of heavy $W^{\prime}$ resonances in heavy-vector-triplet models.
A search for a new heavy boson $W^{\prime}$ in proton-proton collisions at $\sqrt{s}$ = 13 TeV is presented. The search focuses on the decay of the $W^{\prime}$ to a top quark and a bottom quark, using the full Run 2 dataset collected with the ATLAS detector at the LHC, corresponding to an integrated luminosity of 139 $\text{fb}^{-1}$. The talk will give an overview of the analysis, which includes the identification of hadronically decaying top quarks using a deep neural network trained on jet substructure variables and a data-driven background estimation. It will show the search sensitivity as expected exclusion limits on the $W^{\prime}$ production cross-section times the top-bottom branching ratio for several $W^{\prime}$ masses between 1.5 and 6 TeV.
A search for electroweak production of charginos and neutralinos at the Large Hadron Collider was conducted in 139 fb$^{-1}$ of proton-proton collision data collected at a center-of-mass energy of $\sqrt{s} = 13$ TeV with the ATLAS detector. This search utilizes fully hadronic final states with missing transverse momentum to identify signal events in which a pair of charginos or neutralinos decays into high-$p_T$ gauge or Higgs bosons and a lighter chargino or neutralino. The light chargino or neutralino produces missing transverse momentum, and each of the bosons can decay to light- or heavy-flavor quark pairs. Fully hadronic final states have a large branching ratio compared to leptonic or semi-leptonic decays, allowing high-mass signals with smaller production cross-sections to be probed and giving strong motivation to explore this final state. The inclusion of more signal also admits more background; by exploiting boosted-boson tagging techniques, this additional background can be suppressed. The boson tagging is achieved by reconstructing and identifying the high-$p_T$ SM bosons using large-radius jets and their substructure. No significant excess is found beyond Standard Model expectations. Under various assumptions on the decay branching ratios and the type of LSP, exclusion limits are set on wino and higgsino production at the 95% confidence level, excluding masses up to 1050 GeV (900 GeV) for the wino (higgsino) when the lightest SUSY particle has a mass below 400 GeV (250 GeV).
Most searches for new physics at the Large Hadron Collider assume that a new particle produced in pp-collisions decays almost immediately or is non-interacting and escapes the detector. However, a variety of new physics models predict particles that decay inside the detector at a discernible distance from the interaction point. Such long-lived particles would create spectacular signatures that evade many prompt searches. This talk will present recent CMS searches for new long-lived particles using Run 2 data. This talk will also highlight the experimental challenges that these signatures pose for the trigger, offline reconstruction, and non-standard backgrounds.
A search for neutral long-lived particles decaying into displaced jets in the ATLAS hadronic calorimeter, performed in $pp$ collisions at $\sqrt{s} = 13 \textrm{ TeV}$ during 2016 with data corresponding to $10.8 \textrm{ fb}^{-1}$ or $33.0 \textrm{ fb}^{-1}$ of integrated luminosity (depending on the trigger), has been preserved in RECAST and is used here to constrain three new-physics models not studied in the original work. A Stealth SUSY model and a Higgs-portal baryogenesis model, both predicting long-lived particles and therefore displaced decays, are probed for proper decay lengths between a few centimeters and 500 m. A dark-sector model predicting Higgs and heavy-boson decays to collimated hadrons via long-lived dark photons is also probed. The cross-section times branching ratio for the Higgs channel is constrained for proper decay lengths between a few millimeters and a few meters, while for a heavier 800 GeV boson the constraints extend from tenths of a millimeter to a few tens of meters. The original data-analysis workflow was completely captured using virtualization techniques, allowing for an accurate and efficient reinterpretation of the published result in terms of new signal models following the RECAST protocol.
Triggering on long-lived particles (LLPs) at the first stage of the trigger system is crucial in LLP searches, to ensure that such events are not lost at the very beginning. The future High-Luminosity runs of the Large Hadron Collider will have an increased number of pile-up events per bunch crossing. There will be major upgrades on the hardware, firmware, and software sides, such as tracking at Level-1 (L1). The L1 trigger menu will also be modified to cope with pile-up and maintain sensitivity to physics processes. In our study we found that the usual L1 triggers, designed mostly for prompt particles, will not be very efficient for LLP searches in the 140 pile-up environment of the HL-LHC, pointing to the need for dedicated L1 triggers for LLPs in the menu. We consider the decay of the LLP into jets and develop dedicated jet triggers using the track information at L1 to select LLP events. We show that these triggers give promising results in identifying LLP events with moderate trigger rates.
The search for long-lived particles (LLP) at the LHC can be improved with timing information. If the visible decay products of the LLP form jets, the arrival time is not well-defined. In this talk, I will discuss possible definitions and how they are affected by the kinematics of the underlying parton-level event.
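As a small illustration of the kind of definitions at play, the sketch below compares an energy-weighted mean constituent time with the earliest-constituent time for a toy displaced jet; both definitions and the toy constituents are assumptions for illustration, not the specific definitions studied in the talk.

```python
import numpy as np

# Toy jet constituents: (energy [GeV], arrival time at the calorimeter [ns])
energies = np.array([35.0, 12.0, 7.5, 3.1, 1.2])
times = np.array([1.9, 2.4, 1.7, 3.0, 5.2])

t_weighted = np.sum(energies * times) / np.sum(energies)   # energy-weighted mean time
t_earliest = times.min()                                    # time of the earliest constituent

print(f"energy-weighted jet time: {t_weighted:.2f} ns")
print(f"earliest-constituent time: {t_earliest:.2f} ns")
# Soft or wide-angle constituents pull the weighted time but not the earliest one,
# which is one way the parton-level kinematics enters the choice of definition.
```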
Weakly coupled light new physics is a well-motivated lamppost, often referred to as a dark sector. At low masses and weak couplings, dark-sector particles are generically long-lived. In this talk I will describe how neutrino portals to a dark sector can be efficiently probed by looking for the decay of heavy neutral leptons (HNLs) that are produced via the upscattering of solar neutrinos within the Earth's core and mantle. Large-volume detectors (such as Borexino or Super-Kamiokande) can search for MeV-scale photons and electron-positron pairs from HNLs decaying while passing through their detectors.
Searches for physics beyond the Standard Model (SM) at collider experiments, mostly focused on prompt signatures with high momentum and high missing transverse energy, have thus far produced no definitive evidence for such phenomena. But what if they have been looking in the wrong places? Just as long-lived particles exist in the SM, physics beyond the SM may also feature such particles. Here, a novel search for displaced photons is introduced, using 139 fb$^{-1}$ of $pp$ collision data at a center-of-mass energy of $\sqrt{s} =$ 13 TeV collected with the ATLAS detector. The search specifically targets the relatively unconstrained branching ratio of the Higgs boson to invisible particles, where there is still ample room for signatures featuring relatively soft photons and modest missing transverse energy. Exploiting the longitudinal segmentation and excellent precision-timing capabilities of the ATLAS liquid-argon electromagnetic calorimeter, the striking, smoking-gun signature of a displaced photon that both fails to point back to the interaction point and arrives significantly delayed can be employed as a powerful discriminating variable. The analysis strategy, including the entirely data-driven background estimation method and the expected sensitivities, is presented in detail.
We present a search for dark matter candidates produced in association with a Higgs boson, using data from $pp$ collisions at $\sqrt{s}=13$ TeV collected with the ATLAS detector, corresponding to an integrated luminosity of 139 fb$^{-1}$. This search targets events that contain large missing transverse momentum and a Higgs boson reconstructed either as two $b$-tagged small-radius jets or as a single large-radius jet associated with two $b$-tagged sub-jets. Compared to the previous iteration, this search features an optimised event selection and advances in object identification that enhance the expected sensitivity and simplify the analysis. No significant excess over the Standard Model prediction is observed. The results are interpreted in two benchmark models, in which a two-Higgs-doublet model is extended by either a heavy vector boson $Z'$ or a pseudoscalar singlet $a$, both of which provide dark matter candidates.
Many beyond the Standard Model (BSM) theories suggest the existence of multiple fundamental scalar fields and associated Higgs bosons, with the standard model Higgs boson being the lightest and most easily discovered. The dimension-4 interactions between a theorized generic heavy Higgs boson and Standard Model (SM) particles have already been explored in all major Higgs boson production channels, particularly in gluon-gluon fusion, with no evidence of BSM effects so far. Thus, our study takes a new direction by accounting for effective dimension-6 interactions with SM particles in addition to dimension-4 interactions, and by probing the VH channel for a heavy Higgs boson. If the generic heavy Higgs boson is connected with BSM physics at the scale of a few TeV, these dimension-6 operators will dramatically boost heavy Higgs boson momentum such that it can be distinguished from background. This particular region of the phase space has not been investigated by previous LHC studies, enhancing its potential for discovery of BSM physics and a generic heavy Higgs boson.
In this talk, I will present the motivations for the Generic Heavy Higgs Search and the reason for exploring this particular corner of the phase space, as well as the work-in-progress Monte-Carlo kinematic distributions and upper limits describing various signal hypotheses.
Charged Higgs bosons produced either in top-quark decays or in association with a top-quark, subsequently decaying via $H^{\pm} \to \tau^{\pm}\nu_{\tau}$, are searched for in $36.1 \mathrm{fb^{-1}}$ of proton-proton collision data at $\sqrt{s}=13$ TeV recorded with the ATLAS detector. Depending on whether the associated top-quark decays hadronically or leptonically, the search targets $\tau$+jets and $\tau$+lepton final states. In both cases, the $\tau$-lepton decays hadronically. No evidence of a charged Higgs boson is found. For the mass range of $m_{H^{\pm}} =$ 90-2000 GeV, upper limits at the 95% confidence level are set on the production cross-section of the charged Higgs boson times the branching fraction $\mathrm{\cal{B}}(H^{\pm} \to \tau^{\pm}\nu_{\tau})$ in the range 4.2-0.0025 pb. In the mass range 90-160 GeV, assuming the Standard Model cross-section for $t\overline t$ production, this corresponds to upper limits between 0.25% and 0.031% for the branching fraction $\mathrm{\cal{B}}(t\to bH^{\pm}) \times \mathrm{\cal{B}}(H^{\pm} \to \tau^{\pm}\nu_{\tau})$. In the newest iteration of the search, the mass range has been extended to $m_{H^{\pm}}$ = 80-3000 GeV and novel machine learning techniques have been developed to sift through $139\,\mathrm{fb^{-1}}$ of data. A parameterized neural network (PNN) is trained across the entire mass spectrum to provide signal-background discrimination in $\tau$+jets or $\tau$+lepton final states.
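For context, a parameterized neural network of the kind mentioned above appends the signal mass hypothesis to the kinematic inputs, and assigns background training events a mass drawn from the signal grid so that a single network interpolates across the whole spectrum. The sketch below, with illustrative feature counts, layer sizes, and mass grid, shows the idea; it is not the ATLAS implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

N_FEATURES = 12                                            # kinematic inputs (assumed)
MASS_GRID = np.array([200., 500., 1000., 2000., 3000.])    # GeV, illustrative

inputs = layers.Input(shape=(N_FEATURES + 1,))             # kinematics + m_H+ hypothesis
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
pnn = tf.keras.Model(inputs, outputs)
pnn.compile(optimizer="adam", loss="binary_crossentropy")

def add_mass_parameter(features, true_masses, is_signal, rng):
    # Signal events carry their generated mass; background events get a random
    # mass from the signal grid (the standard PNN training trick). At evaluation
    # time the mass column is fixed to the hypothesis being tested.
    masses = np.where(is_signal, true_masses, rng.choice(MASS_GRID, size=len(features)))
    return np.hstack([features, masses[:, None]])
```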
Four-top-quark production, a rare process in the Standard Model (SM) with a cross-section of around 12 fb, is one of the heaviest final states produced at the LHC and is naturally sensitive to physics beyond the Standard Model (BSM). A data excess has been observed, at about twice the SM expectation. A follow-up analysis is the search for a heavy (pseudo)scalar Higgs boson A/H produced in association with a top-antitop quark pair, leading to a final state with four top quarks. The data analyzed correspond to an integrated luminosity of 139 fb$^{-1}$ of proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the LHC. In this talk, the four-top-quark final states containing either a pair of same-sign leptons or multiple leptons (SSML) are considered. To enhance the search sensitivity, a mass-parameterized BDT is introduced to discriminate the BSM signal against the irreducible SM four-top and other dominant SM backgrounds. Expected upper bounds on the production cross-section of A/H are derived in the mass range from 400 GeV to 1000 GeV.
Many extensions of the Standard Model include additional charged Higgs bosons. The two-Higgs-doublet model (2HDM) is one such extension: it predicts three neutral Higgs bosons along with a pair of positively and negatively charged Higgs bosons. In this talk, we present a search for these charged Higgs bosons decaying into a top and a bottom quark in single-lepton final states. We perform a multivariate analysis using a gradient-boosted decision tree to aid in signal-to-background discrimination. CMS data collected at 13 TeV in 2016 (35.9 fb$^{-1}$), 2017 (41.5 fb$^{-1}$), and 2018 (59.97 fb$^{-1}$) are considered in this search.
A search is presented for a light pseudoscalar Higgs boson (a) using data collected by the CMS experiment at the LHC at a center-of-mass energy of 13 TeV. The study looks into the decay of a Higgs boson (H) via the H→aa→μμττ channel, where the Higgs boson can be either standard-model-like (125 GeV) or heavier. The pseudoscalar mass falls within the range $m_a \in [2m_\tau, m_H/2]$. The large mass difference between the Higgs boson and the pseudoscalar means that the final-state tau-lepton decay products are highly boosted along the decay direction and collimated. A modified version of the tau reconstruction is used to account for the highly overlapping decay products. The modified reconstruction gives higher efficiency than the standard tau reconstruction, and hence better signal significance and background rejection. This technique is also useful when looking into various final states, especially those where one of the taus decays hadronically while the other decays leptonically (μ/e). The performance of the modified reconstruction technique, compared with the standard tau reconstruction, is also presented. Results from the 2016 and 2017 CMS datasets will be shown.
Athena is the software framework used in the ATLAS experiment throughout the data-processing path, from the software trigger system through offline event reconstruction to physics analysis. The shift from high-power single-core CPUs to multi-core systems in the computing market means that the throughput capabilities of the framework have become limited by the available memory per process. For Run 2 of the Large Hadron Collider (LHC), ATLAS exploited a multi-process forking approach with the copy-on-write mechanism to reduce memory use. To better match the increasing CPU core count and the correspondingly decreasing memory available per core, a multi-threaded framework, AthenaMT, has been designed and is now being implemented. The ATLAS High Level Trigger (HLT) system has been remodelled to fit the new framework and to rely on common solutions between online and offline software to a greater extent than in Run 2.
We present the implementation of the new HLT system within AthenaMT, which is being commissioned now for ATLAS data-taking during LHC Run 3.
We present a novel implementation of classification using boosted decision trees (BDTs) on field-programmable gate arrays (FPGAs). Two example problems are presented: the separation of electrons from photons, and the selection of vector-boson-fusion-produced Higgs bosons against the rejection of multijet processes. The firmware implementation of binary classification, requiring 100 training trees with a maximum depth of 4 and using four input variables, gives a latency of about 10 ns. Implementations such as these enable level-1 trigger systems to be more sensitive to new physics at high-energy experiments. The work is described in [2104.03408].
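The sketch below trains the classifier configuration quoted above (100 trees of maximum depth 4 on four input variables) with scikit-learn on synthetic data; it reproduces only the model, not the FPGA firmware, and the dataset is a stand-in for the electron-versus-photon or VBF-versus-multijet inputs.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the four discriminating variables
X, y = make_classification(n_samples=20000, n_features=4,
                           n_informative=4, n_redundant=0, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=4)
bdt.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, bdt.decision_function(X_test)))

# Each depth-4 tree reduces to at most four threshold comparisons per event, which
# is what makes a fully parallel, ~10 ns evaluation of all 100 trees feasible in firmware.
```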
The hls4ml library [1] is a powerful tool that provides automated deployment of ultra-low-latency, low-power deep neural networks. We extend the hls4ml library to recurrent architectures and demonstrate low latency on multiple benchmark applications. We consider Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) models trained using the CERN Large Hadron Collider top-tagging data [2], jet flavor tagging data [3], and the QuickDraw dataset [4] as our benchmark applications. By scanning a large parameter range between these benchmark models, we demonstrate low-latency inference across a wide variety of model weights and show that the resource utilization of recurrent neural networks can be significantly reduced with little loss in model accuracy. A minimal sketch of the conversion workflow is given after the references below.
References:
[1] J. Duarte et al., "Fast inference of deep neural networks in FPGAs for particle physics", JINST 13 (2018) P07027, doi:10.1088/1748-0221/13/07/P07027, arXiv:1804.06913.
[2] CERNbox, https://cernbox.cern.ch/index.php/s/AgzB93y3ac0yuId?path=%2F, 2016.
[3] D. Guest et al., "Jet flavor classification in high-energy physics with deep neural networks", Phys. Rev. D 94 (2016) 112002.
[4] Google, "Quick, Draw!", https://quickdraw.withgoogle.com/.
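A minimal sketch of the workflow, under the assumption that the standard hls4ml Keras conversion entry points are used and that the installed hls4ml release supports recurrent layers; the model shape, FPGA part, and configuration granularity are illustrative choices, not those of the benchmarks above.

```python
import tensorflow as tf
from tensorflow.keras import layers
import hls4ml

# Toy sequence classifier: e.g. 20 constituents with 6 features each
model = tf.keras.Sequential([
    layers.Input(shape=(20, 6)),
    layers.GRU(16),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

config = hls4ml.utils.config_from_keras_model(model, granularity="name")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_gru_project",
    part="xcvu9p-flga2104-2-e",   # example Xilinx part, an assumption
)
hls_model.compile()   # builds the C++ emulation for bit-accurate validation
```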
This talk introduces and shows the simulated performance of an FPGA-based technique to improve fast track finding in the ATLAS trigger. A fast track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), the goal of which is to provide the high-level trigger with full-scan tracking at 100 kHz in the high pile-up conditions of the HL-LHC. Options under development for achieving this include a method based on matching detector hits to pattern banks of simulated tracks stored in a custom made Associative Memory ASIC (Hardware Track Trigger, “HTT”) and one using the Hough transform (whereby detector hits are mapped onto a 2D parameter space with one parameter related to the transverse momentum and one to the initial track direction) on FPGAs (“Hough”).
Both of these methods can benefit from a pre-filtering step that reduces the number of hit clusters to be considered, and hence the overall system size and/or power consumption, by examining pairs of clusters (or the lack thereof) in adjacent strip-detector layers. This stub filtering was first investigated by CMS but had been unexplored in ATLAS until now. We will show the data-rate reduction it enables, its impact on the track-finding performance of both the HTT and Hough systems, and estimates of the resource usage.
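A simplified numpy sketch of the Hough transform described above: each hit at radius r and azimuth phi defines a line in the (q/pT, phi0) parameter space via phi0 ≈ phi + A·r·(q/pT), and a 2D accumulator is filled; bins shared by many layers mark track candidates. The constant A, the binning, and the toy hits are illustrative assumptions, not the ATLAS configuration.

```python
import numpy as np

A = 3e-4  # ~0.3*B/2 for B = 2 T, with r in mm and pT in GeV (small-angle approximation)

def hough_accumulator(hits, n_qpt=64, n_phi0=128, qpt_max=1.0, phi0_range=(-0.5, 0.5)):
    # hits: iterable of (r [mm], phi [rad]); returns the filled 2D accumulator
    acc = np.zeros((n_qpt, n_phi0), dtype=np.int32)
    qpt_bins = np.linspace(-qpt_max, qpt_max, n_qpt)        # q/pT hypotheses [1/GeV]
    phi0_edges = np.linspace(*phi0_range, n_phi0 + 1)
    for r, phi in hits:
        phi0 = phi + A * r * qpt_bins                       # the hit's line in parameter space
        idx = np.digitize(phi0, phi0_edges) - 1
        valid = (idx >= 0) & (idx < n_phi0)
        acc[np.arange(n_qpt)[valid], idx[valid]] += 1
    return acc

# Toy event: hits from one track with q/pT = 0.4/GeV and phi0 = 0.1 on 8 layers
radii = np.linspace(300.0, 1000.0, 8)
hits = [(r, 0.1 - A * r * 0.4) for r in radii]
acc = hough_accumulator(hits)
print("max accumulator count:", acc.max())   # a bin shared by many layers marks a candidate
```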
The high collision energy and luminosity of the LHC allow studying jets and hadronically-decaying tau leptons at extreme energies with the ATLAS detector. These signatures lead to topologies with charged particles, which are reconstructed as tracks with the ATLAS inner detector, at an angular separation smaller than the size of a charge cluster in the ATLAS pixel detector, forming merged pixel clusters. In the presence of these merged clusters, the track reconstruction efficiency is reduced, as hits can no longer be uniquely assigned to individual tracks. Well-defined tracks are very important for many analyses. To partially recover the track reconstruction efficiency loss, a neural network (NN) based approach was adopted in the ATLAS pixel detector in 2011 to split the merged clusters by estimating particle hit multiplicity, hit positions, and associated uncertainties. An improved algorithm based on Mixture Density Networks (MDN) shows promising performance and will be used in the ATLAS inner detector track reconstruction in Run-3. An overview of the MDN algorithm and its performance will be highlighted in this presentation. This talk will also show a performance comparison between the Run-2 NN and Run-3 MDN.
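A minimal tf.keras sketch of a Mixture Density Network head of the kind described above: from a flattened cluster charge matrix it predicts mixture weights, local-position means, and per-component uncertainties, trained with a Gaussian-mixture negative log-likelihood. The cluster size, number of components, and layer widths are illustrative assumptions, not the ATLAS MDN.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_MIX = 3                                   # mixture components ~ candidate hits per cluster
LOG_2PI = tf.math.log(2.0 * 3.141592653589793)

cluster_in = layers.Input(shape=(7 * 7,))   # flattened charge matrix (assumed 7x7 window)
h = layers.Dense(64, activation="relu")(cluster_in)
h = layers.Dense(64, activation="relu")(h)
weights = layers.Dense(N_MIX, activation="softmax")(h)   # mixture weights
means = layers.Dense(N_MIX)(h)                           # local-position means
log_sigmas = layers.Dense(N_MIX)(h)                      # log of position uncertainties
mdn = tf.keras.Model(cluster_in, layers.Concatenate()([weights, means, log_sigmas]))

def mdn_nll(y_true, y_pred):
    # y_true: true local position, shape (batch, 1); y_pred: [weights, means, log_sigmas]
    pi, mu, log_sigma = tf.split(y_pred, 3, axis=-1)
    z = (y_true - mu) * tf.exp(-log_sigma)
    log_component = -0.5 * tf.square(z) - log_sigma - 0.5 * LOG_2PI
    log_mixture = tf.reduce_logsumexp(tf.math.log(pi + 1e-9) + log_component, axis=-1)
    return -tf.reduce_mean(log_mixture)

mdn.compile(optimizer="adam", loss=mdn_nll)
```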
We report on the development of a track finding algorithm for the Fermilab Muon g-2 Experiment’s straw tracker using advanced Deep Learning techniques. Taking inspiration from original studies by the HEP.TrkX project, our algorithm relies on a Recurrent Neural Network with bi-directional LSTM layers to build and evaluate track candidates. The model achieves good performance on a 2D representation of the Muon g-2 tracker detector. We will discuss our targets for improving efficiency and performance, and plans towards application on real data via training on a synthetic dataset.
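A tf.keras sketch in the spirit of the approach described above: a bidirectional LSTM reads a padded sequence of 2D hit coordinates for a track candidate and returns a per-hit probability of belonging to the track. The sequence length, features, and layer sizes are illustrative assumptions, not the Muon g-2 model.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_HITS = 32      # padded length of the hit sequence (assumed)
N_FEATURES = 2     # 2D representation, e.g. (straw layer index, transverse position)

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_HITS, N_FEATURES)),
    layers.Masking(mask_value=0.0),                        # ignore zero-padded hits
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```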
The Axion Dark Matter Experiment (ADMX) searches for dark matter axions with a resonant cavity in a strong magnetic field. In previous operations, ADMX achieved DFSZ sensitivity between 2.66 and 3.31 $\mu$eV with yoctowatt-level backgrounds, using a quantum amplifier and a dilution refrigerator. The latest operation searched between 3.3 and 4.2 $\mu$eV from October 2019 to May 2021 and implemented several improvements, including synthetic axion injections and a more efficient data-taking cycle. I will show new axion search results from the latest operation as well as improvements in operation and analysis.
I will describe two precision experiments searching for ultralight axion-like dark matter. The SHAFT experiment uses ferromagnetic toroidal magnets, and is sensitive to the electromagnetic coupling in the 12 peV to 12 neV mass range. The CASPEr-e experiment is based on precision magnetic resonance, and is sensitive to the EDM and the gradient couplings in the 162-166 neV mass range. These two searches have recently produced leading experimental limits on all three of the possible interactions of axion-like dark matter in those mass ranges.
The Cosmic Axion Spin Precession Experiment (CASPEr) is a laboratory-scale experiment searching for ultralight axion-like dark matter using nuclear magnetic resonance [D. Budker et al., Phys. Rev. X 4, 021030; D. Aybas, J. Adam, et al., Phys. Rev. Lett. 126, 141802]. I will describe our work on the next phase of the experiment, with the goal of searching in the kHz–MHz frequency band using SQUID sensors. I will also describe our study of transient light-induced paramagnetic centers in ferroelectric PMN-PT ($\mathrm{(PbMg_{1/3}Nb_{2/3}O_3)_{2/3} - (PbTiO_3)_{1/3}}$) crystals. We use these paramagnetic centers to control the polarization and relaxation of the nuclear-spin qubit ensemble, allowing us to improve sensitivity to axion-like dark matter.
Detection and understanding of dark matter is one of the major unsolved problems of modern particle physics and cosmology. Several theories of fundamental physics predict bosonic dark matter candidates that can modify Maxwell’s equations resulting in additional photon emission from conducting surfaces. One of these promising dark matter candidates is known as the axion, which could be detected by observing the emitted electromagnetic radiation resulting from axion-photon coupling.
The Broadband Reflector Experiment for Axion Detection (BREAD) is a haloscope experiment that will investigate a currently under-probed dark matter parameter space using novel reflector technology. It will develop technology for a wideband axion dark matter search capable of detecting axions in the mass range of approximately 10 meV to 30 eV, a range not currently accessible by other techniques. This target mass range corresponds to an observable dark matter signal in the under-probed terahertz regime.
This presentation will cover the commissioning and building of a preliminary, room-temperature, terahertz photon source testing and calibration system that is intended to be used for a prototype BREAD detector.
This work is supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. JL is supported by the Grainger Fellowship.
The QCD axion is a well-motivated new-physics candidate capable of explaining dark matter and the absence of a neutron electric dipole moment. If the Peccei-Quinn symmetry is broken after the end of inflation, the late-time number density of axions is determined jointly by the radiation of axions from topological defects known as strings and by the dynamics of the axion field as it acquires a mass through the QCD phase transition. Here I present results of simulations of axion radiation from strings using the block-structured adaptive-mesh-refinement code AMReX, which greatly extends the dynamical range of conventional simulation techniques, working towards a precise determination of the QCD axion mass that produces the observed relic abundance of dark matter.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus to a narrower range of masses: the natural scenario in which dark matter originates from thermal contact with familiar matter in the early Universe requires the DM mass to lie between about an MeV and 100 TeV. Considerable experimental attention has been given to exploring Weakly Interacting Massive Particles in the upper end of this range (a few GeV to ~TeV), while the region from ~MeV to ~GeV is largely unexplored. Most of the stable constituents of known matter have masses in this lower range, tantalizing hints for physics beyond the Standard Model have been found here, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. It is therefore a priority to explore it. If there is an interaction between light DM and ordinary matter, as there must be in the case of a thermal origin, then there necessarily is a production mechanism in accelerator-based experiments. The most sensitive way to search for this production (if the interaction is not electron-phobic) is to use a primary electron beam to produce DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment with unique sensitivity to light DM in the sub-GeV range. This contribution will give an overview of the theoretical motivation, the main experimental challenges and how they are addressed, and projected sensitivities in comparison to other experiments.
New theoretical developments have motivated "hidden sector" dark matter with mass below the proton mass. The Light Dark Matter eXperiment (LDMX) will use an electron beam to produce dark matter in fixed-target collisions. A low-current, high-repetition-rate (37.2 MHz) electron beam extracted from SLAC's LCLS-II will provide LDMX with sufficient luminosity to explore many dark matter candidates. Using a novel detector design, LDMX is expected to search definitively for thermal-relic dark matter with masses between 1 MeV and several hundred MeV. The LDMX trigger system will reduce the 37.2 MHz repetition rate down to about 5 kHz. In order to identify signal events, a missing-energy trigger will be used that relies on knowledge of the number of incoming electrons. To determine the electron multiplicity, arrays of fast scintillators will be used. A strategy for the missing-energy trigger will be described. An overview of the LDMX trigger scintillators and the current status of simulation studies will be presented.
New physics beyond the Standard Model (SM) could be responsible for the presence of Dark Matter in the Universe. A hidden, or "dark", sector interacting with SM particles via new force carriers is a natural scenario to explain the features of Dark Matter. In the last decade, growing interest has been dedicated to the search for dark sectors with force carriers in the MeV-GeV mass range. A well motivated model envisions the presence of a $U(1)$ gauge boson, the heavy photon $A'$, whose existence can be probed with fixed-target experiments at accelerators.
The Heavy Photon Search Experiment (HPS) at the Thomas Jefferson National Accelerator Facility (JLab) searches for heavy photons and other new force carriers that are produced via electroproduction and decay visibly to electron-positron pairs. This talk presents recent developments in the reconstruction and calibration of the 2019 data run at 4.55 GeV, including the performance of the newly adopted Kalman-filter track-reconstruction algorithm.
Cosmological observations indicate that our universe contains dark matter (DM), yet we have no measurements of its microscopic properties. Whereas the gravitational interaction of DM is well understood, its interaction with the Standard Model is not. Direct-detection experiments, the current standard, search for nuclear-recoil interactions and have low-mass sensitivities down to ~1 GeV. A path to detecting DM with masses below 1 GeV is the use of accelerators producing boosted low-mass DM. The Coherent CAPTAIN-Mills (CCM) experiment uses a 10-ton liquid-argon scintillation detector at the Lujan Center at LANSCE to search for physics beyond the Standard Model. The Lujan Center delivers a 100 kW, 800 MeV, 290 ns wide proton pulse onto a tungsten target at 20 Hz to generate a stopped-pion source. The fast pulse, in combination with the speed of the CCM scintillation detector, is crucial for isolating prompt, speed-of-light particles generated by the stopped-pion source and for reducing neutron and steady-state backgrounds. In this talk I will discuss CCM's search for vector-portal dark matter, showing the results from our Fall 2019 run as well as the projected reach of the experiment based on the current upgrades to the CCM detector.
The existence of dark matter is ubiquitous in cosmological data, yet numerous particle detectors have searched for it thoroughly without success. For strongly interacting dark matter, the bounds from these experiments are actually irrelevant: as dark matter enters the atmosphere, it scatters and slows down, so that by the time it reaches underground laboratories its velocity is far too low to produce signals above detector thresholds. In this case, however, it accumulates within the Earth and reaches a density much greater than that of the dark matter halo. Here, I will describe a scheme for adapting present-day underground nuclear-physics experiments to detect dark matter in this context. In particular, I will show that accumulated dark matter can be up-scattered to resolvable energies using underground nuclear accelerators, such as LUNA at Gran Sasso, and captured in nearby low-background detectors.
The existence of dark matter is widely accepted, with a well-motivated theoretical candidate being a class of particles known as WIMPs (weakly interacting massive particles), which appear in the spectra of many extensions to the standard model.
We explore a particular WIMP-like model in which fermionic dark matter weakly couples to the muon/tau sectors of the standard model through a new vector boson Z’, in addition to electrically charged particles through kinetic mixing of the Z’ with the SM photon. As well as the model potentially providing a candidate dark matter particle, the hypothetical Z’ could also aid in explaining the discrepancy between the predicted and observed value of the anomalous magnetic dipole moment of the muon.
Cosmological observations of the dark matter relic density along with findings from direct detection attempts allow us to tightly constrain the parameter space of the model. If one initially assumes a momentum-independent kinetic mixing parameter, it is difficult for the resulting parameter space to satisfy the restrictions imposed by both sets of experimental results. In this talk, we focus on the work done to remedy this disagreement. Our work attempts to soften the direct detection constraint by considering the general case in which the mixing parameter is momentum dependent. We construct it in such a way that it vanishes in the zero-momentum-transfer limit, which results in a viable parameter space. Our goal is then to compare model-derived quantities, including interaction cross sections and early-universe annihilation rates, to well-established experimental bounds to determine whether the resulting parameter space is consistent with the constraints imposed by both direct detection and relic abundance.
The ongoing pandemic has exacerbated the isolation of people with disabilities, due to the loss of physical access to habilitation personnel and facilities. However, the elimination of business travel in favor of virtual meetings has simplified the participation of physically disabled scientists in the intellectual life of the particle physics community. In view of the imminent restart of in-person conferences, it behooves us to re-examine the accessibility standards for all our upcoming events. In this talk, I will give an overview of accessibility considerations for in-person scientific meetings and highlight a few suggestions for improvement, with the goal of making our community more inclusive, and our conferences more enjoyable for all attendees.
The ATLAS Collaboration has developed a variety of printables for education and outreach activities. We present two ATLAS Coloring Books, the ATLAS Fact Sheets, the ATLAS Physics Cheat Sheets, and the ATLAS Activity Sheets. These materials are intended to cover key topics of the work done by the ATLAS Collaboration and the physics behind the experiment for a broad audience of all ages and levels of experience. In addition, there is ongoing work in translating these documents to different languages, with one of the coloring books already available in 18 languages. These printables are prepared to complement the information found in all ATLAS digital channels; they are particularly useful at outreach events and in the classroom.
We created an extremely successful planetarium show, Phantom of the Universe: The Hunt for Dark Matter, which has been seen in more than 600 planetariums in 67 countries and 42 US states. It has been translated into 22 languages. We were motivated in part by envisioning several scenes that could only work in a planetarium. Our target audiences were the public and students. We found that many planetariums had an interest in a dark matter show. They present our show for many months at a time (longer than feature films). Planetariums have the perfect science-interested audience for us. None of the physicist organizers had ever made a planetarium show (with its spherical screen) before. To create the show, we worked with renowned people with extensive experience in filmmaking and with people at seven planetariums (in multiple countries). We hired a Hollywood producer and screenwriter. Our narrator for the English-language version was Academy Award-winning actor Tilda Swinton. Sound editing and sound effects were done by an Academy Award-winning team at Skywalker Sound. As we developed the show, we never imagined such success.
The Simulating Particle Detection (SPD) stream is a research program within UMD's FIRE, a gen-ed sequential course-based undergraduate research experience program. SPD introduces undergraduate students to experimental high energy particle physics. It concentrates on computing, data analysis, and visualization, specifically using simulations of the upgrade calorimeters (HGCAL) of the CMS experiment at CERN. After an introduction to the stream's wide-ranging research outcomes, pedagogical principles, and diverse community, I will share my experiences with the measures taken to address the challenges imposed by the remote-learning period during the pandemic.
Since 1984 the Italian groups of the Istituto Nazionale di Fisica Nucleare (INFN) and Italian universities, collaborating with the DOE laboratory Fermilab (US), have been running a two-month summer training program for Italian university students. While in the first year the program involved only four physics students from the University of Pisa, in the following years it was extended to engineering students. This extension was very successful, and the engineering students have since been extremely well received by the Fermilab Technical, Accelerator and Scientific Computing Division groups. Over the many years of its existence, this program has proven to be the most effective way to engage new students in Fermilab endeavors. Many students have extended their collaboration with Fermilab through their Master's theses and PhDs.
Since 2004 the program has been supported in part by DOE in the framework of an exchange agreement with INFN. An additional agreement for sharing support for engineers from the School of Advanced Studies of S. Anna (SSSA) of Pisa was established in 2007 between SSSA and Fermilab; within this program four SSSA students are supported each year. Over its 35 years of history, the program has grown in scope and size and has involved more than 500 Italian students from more than 20 Italian universities. Since the program does not exclude appropriately selected non-Italian students, a handful of students from European and non-European universities have also been accepted over the years.
Each intern is supervised by a Fermilab mentor responsible for the training program. Training programs have spanned Tevatron, CMS, Muon g-2, Mu2e and SBN design and experimental data analysis; development of particle detectors (silicon trackers, calorimeters, drift chambers, neutrino and dark matter detectors); design of electronic and accelerator components; development of infrastructure and software for tera-scale data handling; research on superconducting elements and accelerating cavities; and the theory of particle accelerators.
Since 2010, within an extended program supported by the Italian Space Agency and the Italian National Institute of Astrophysics, a total of 30 students in physics, astrophysics and engineering have been hosted for two months in summer at US space science Research Institutes and laboratories.
In 2015 the University of Pisa included these programs within its own educational offerings. Accordingly, Summer School students are enrolled at the University of Pisa for the duration of the internship and are identified and insured as such. At the end of the internship the students are required to write summary reports on their achievements. After positive evaluation by a University Examining Board, interns are awarded 6 ECTS credits for their Diploma Supplement.
Information on student recruiting methods, on the training programs of recent years, and on the final student evaluation process at Fermilab and at the University of Pisa will be given in the presentation.
A direct measurement of the Higgs self-coupling is crucial for understanding the nature of electroweak symmetry breaking. This requires observing Higgs boson pair production, which suffers from a very low event rate even in the current LHC run. In our work, we study the prospects of observing Higgs pair production at the high-luminosity run of the 14 TeV LHC (HL-LHC) and at the proposed high-energy upgrade of the LHC at 27 TeV, the HE-LHC. For the HL-LHC study, we choose multiple final states based on event rate and cleanliness, namely the $b\bar{b}\gamma \gamma$, $b\bar{b} \tau^+ \tau^-$, $b\bar{b} WW^*$, $WW^*\gamma \gamma$ and $4W$ channels, and perform a collider study employing cut-based as well as multivariate analyses using the Boosted Decision Tree (BDT) algorithm. For the HE-LHC study, we select various di-Higgs final states based on their cleanliness and production rates, namely the $b\bar{b}\gamma\gamma$, $b\bar{b}\tau^{+}\tau^{-}$, $b\bar{b}WW^{*}$, $WW^{*}\gamma\gamma$, $b\bar{b}ZZ^{*}$ and $b\bar{b}\mu^{+}\mu^{-}$ channels. We adopt multivariate analyses using the BDT algorithm, the XGBoost toolkit, and a Deep Neural Network (DNN) for signal-background discrimination. We also study the ramifications of varying the Higgs boson self-coupling from its Standard Model (SM) value.
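As a rough illustration of the BDT/XGBoost-based signal-background discrimination used in such di-Higgs studies, the minimal sketch below trains an XGBoost classifier on toy kinematic features; the feature choices, data, and hyperparameters are hypothetical placeholders, not the analysis configuration.

```python
# Minimal, hypothetical sketch of BDT-based signal/background separation for a
# di-Higgs-like final state; toy features stand in for quantities such as
# m_bb, m_gammagamma, pT(hh) and angular separations.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
signal = rng.normal(loc=[125.0, 125.0, 150.0, 1.0], scale=[15.0, 2.0, 60.0, 0.5], size=(n, 4))
background = rng.normal(loc=[100.0, 120.0, 80.0, 2.0], scale=[40.0, 20.0, 60.0, 1.0], size=(n, 4))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
bdt.fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
```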
In this talk we will discuss the production of three Higgs bosons at the LHC and at a proton-proton collider running at a centre-of-mass energy of 100 TeV. We will argue that the seemingly challenging 6-bottom-jet final state is a very good candidate for investigating triple Higgs production within and beyond the SM at proton-proton colliders. In particular, we will consider three different scenarios: one in which the triple and quartic Higgs boson self-couplings are not affected by new physics phenomena beyond the Standard Model (SM), and, in addition, two possible SM extensions by one and two new scalars. We will show that a 100 TeV machine can impose competitive constraints on the quartic coupling in the SM-like scenario. In the case of the scalar extensions of the SM, we will show that large significances can be obtained at the LHC and the 100 TeV collider while obeying current theoretical and experimental constraints, including a first-order electroweak phase transition.
One of the assumptions of simplified models is that there are a few new particles and interactions accessible at the LHC and that all other new particles are heavy and decoupled. The effective field theory (EFT) approach provides a consistent way to test this assumption. Simplified models can be augmented with higher-order operators involving the new particles accessible at the LHC. Any UV completion of the simplified model can then be matched onto these beyond-the-Standard-Model EFTs (BSM-EFTs). In this work we study the simplest simplified model: the Standard Model extended by a real gauge-singlet scalar. In addition to the usual renormalizable interactions, we include dimension-5 interactions of the singlet scalar with Standard Model particles. As we will show, even when the cutoff scale is 3 TeV, these new effective interactions can drastically change the interpretation of Higgs precision measurements and scalar searches.
An Effective Field Theory (EFT) reinterpretation of the differential measurement of vector-boson-fusion Higgs production with decay to two W bosons will be reported. The analysis uses the full Run-2 dataset of $pp$ collisions at $\sqrt{s}=13$ TeV recorded in 2015--2018 with the ATLAS detector at the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$. Events with an electron and a muon from the decay of the W bosons and two energetic jets in the final state are considered as signal. At particle level, Standard Model predictions can be modified by expressing the differential cross section of various observables as a function of parameters that represent new phenomena predicted by the EFT. The background-subtracted and unfolded data are used to set limits on these new-physics parameters.
Rare decays of the Higgs boson are promising laboratories in which to search for physics beyond the standard model (BSM). Such BSM physics might alter Yukawa couplings to lighter quarks and add loop diagrams, possibly resulting in higher decay rates than predicted by the standard model. For the first time in four-lepton final states, decays of the Higgs boson into $ZJ/\psi$ or $Z\psi(2S)$ final states are searched for. In addition, Higgs decays into $J/\psi$ pair, $\Upsilon$ pair, $\psi(2S)J/\psi$, or $\psi(2S)\psi(2S)$ final states are studied. Events with subsequent decays of the $Z$ boson into lepton pairs ($e^{+}e^{-}$ or $\mu^{+}\mu^{-}$) and of the $J/\psi$ or $\Upsilon$ mesons into muon pairs are selected using online event filters. Final states with $\psi(2S)$ mesons are accessed via the inclusive decay of the $\psi(2S)$ into $J/\psi$. A data sample of proton-proton collisions collected at a center-of-mass energy of 13 TeV with the Compact Muon Solenoid detector at the Large Hadron Collider, corresponding to an integrated luminosity of about 137 fb$^{-1}$, is used. This talk will present recent searches and implications for future searches of such BSM signatures with higher luminosities.
The generation of neutrino mass is an essential conclusion from the neutrino oscillation experiments, and it calls for a revision of the Standard Model, which was formulated with massless neutrinos. A possible and interesting scenario is the seesaw mechanism, in which SM gauge-singlet right-handed neutrinos (RHNs) are introduced. Another interesting option is the extension of the SM with SU$(2)_L$ triplet fermions. Alternatively, a general U$(1)_L$ extension of the SM is also an interesting idea, involving three generations of SM-singlet RHNs to generate the tiny neutrino masses through the seesaw mechanism. Additionally, such models can contain a $Z^\prime$ boson which could be tested at colliders through the pair production of the RHNs.
We propose a model-independent framework to classify and study neutrino mass models and their phenomenology. The idea is to introduce one particle beyond the Standard Model which couples to leptons and carries lepton number, together with an operator which violates lepton number by two units and contains this particle. This allows one to study processes which do not violate lepton number, while still working with an effective field theory. The contribution to neutrino masses translates to a robust upper bound on the mass of the new particle. We compare it to the stronger but less robust upper bound from Higgs naturalness and discuss several lower bounds.
We consider the generation of neutrino masses via a singly-charged scalar singlet. Under general assumptions we identify two distinct structures for the neutrino mass matrix which are realised in several well-known radiative models. Either structure implies a constraint for the antisymmetric Yukawa coupling of the singly-charged scalar singlet to two left-handed lepton doublets, irrespective of how the breaking of lepton-number conservation is achieved. The constraint disfavours large hierarchies among the Yukawa couplings. We study the implications for the phenomenology of lepton-flavour non-universality, measurements of the $W$-boson mass, flavour violation in the charged-lepton sector and decays of the singly-charged scalar singlet. We also discuss the parameter space that can address the Cabibbo Angle Anomaly.
In a seesaw scenario, GUT and family symmetries can severely constrain the structure of the Dirac and Majorana mass matrices of neutrinos. We will discuss an interesting case where these matrices are related in such a way that definite predictions for light neutrino masses are achieved without specifying the seesaw scale. This opens up the possibility to consider both high- and low-scale leptogenesis. We will explore both of these possibilities in an $SU(5) \times \mathcal{T}_{13}$ model and show that sub-GeV right-handed neutrinos with active-sterile mixing large enough to be probed by DUNE can explain the baryon asymmetry of the Universe through resonant leptogenesis.
A possible structural link between neutrinos and charged leptons is studied under the assumption that, in neutrino mixing, neutrinos are in maximal contact with one another. It is postulated that a single neutrino (a fermion) and a neutrino pair (a boson) may interact, subject to the constraint that one neutrino makes maximal contact (a contact number of six) with six other neutrinos under two-dimensional mixing. A structural link between neutrinos and charged leptons then emerges vertically, and it appears common and recurrent across the three generations. The winding angles of the series of neutrino pairs from which the heavier charged leptons (μ, τ) are built are found to fall close to the observed neutrino mixing angles θ12 and θ23, respectively.
The challenging experimental environment at the High Luminosity LHC (HL-LHC) will require the replacement of the existing endcap calorimeters of the CMS experiment. In their place, the new HGCAL detector will provide a radiation-hard, high-granularity calorimeter which meets this challenge and offers improved capabilities for physics-object reconstruction. We review the design and current status of the HGCAL upgrade project.
The HL-LHC upgrade of the CMS experiment includes a replacement of the endcap calorimeters with the new High-Granularity Calorimeter (HGCAL). Development of radiation-hard 8" silicon sensors is an important part of the upgrade project. We will review the status of the sensor development, including radiation tests, and describe the plans towards full sensor production.
The HGCAL endcap calorimeter of the CMS experiment at HL-LHC will include a hadronic compartment that is based partly on the SiPM-on-tile concept. Building a performant SiPM-on-tile system involves the development and testing of rad-hard scintillators and SiPMs to meet the challenges of the HL-LHC experimental environment. We will review the design of the SiPM-on-tile part of the calorimeter, and describe the current status of the effort.
T2K is a long-baseline accelerator neutrino oscillation experiment which has precisely measured neutrino oscillation parameters and hinted at a significant matter-antimatter asymmetry in the lepton sector. In view of the upcoming program of upgrades of the beam intensity, a novel plastic-scintillator detector for the T2K near detector upgrade, called SuperFGD, is proposed aiming to reduce the statistical and systematic uncertainties of the measurements. The scintillation light from particle interactions in SuperFGD is collected by Multi-Pixel Photon Counter (MPPC), a solid state photomultiplier with high internal gain. In this talk, the characterization of the MPPC sensors is presented.
A novel particle detector design is proposed utilizing a modified bandgap reference circuit. The output of the circuit is calibrated to be proportional to the work function of gallium nitride, which provides a reference voltage that is independent of temperature variations, supply variations and loading. It is hypothesized that particle interactions with the detector cause temporal fluctuations in the output. Experimental data of transient signals observed under neutron and alpha irradiation are presented.
The ATLAS missing transverse momentum trigger is susceptible to the impact of multiple proton-proton interactions (pileup) in the same event. To mitigate the impact of pileup, sophisticated subtraction schemes are utilized. During Run 2 data-taking (2015-2018), these methods relied only on information from the calorimeter, due to the limited time available for the algorithms to utilize tracks in the High Level Trigger (HLT), the software-based second-level trigger subsystem. In this talk, I will present updates on the missing transverse momentum trigger algorithms utilizing tracking information for Run 3 (2022-2024).
We present measurements of the CMS jet energy scale (JES) and resolution, based on a data sample collected in proton-proton collisions at a center-of-mass energy of 13 TeV. The corrections, extracted from data and simulated events using a combination of several channels and methods, account successively for the effects of pileup, simulated jet response, and residual JES eta and pT dependences. The jet energy resolution is measured in data and simulated events, where it is studied as a function of pileup and of jet pT and eta. The studies exploit events with dijet, photon+jet, Z+jet, and multijet topologies.
The ongoing CMS measurement of the full spin density matrix of top quark pair production, which includes multi-differential measurements of variables sensitive to the top quark spin correlations, polarization, and related angular observables, is presented. Events containing two leptons, two b jets, additional jets, and missing transverse momentum, produced in proton-proton collisions at a center-of-mass energy of 13 TeV, are considered. The data correspond to an integrated luminosity of 137 fb$^{-1}$ collected with the CMS detector at the LHC. The results are used to challenge Standard Model predictions and to search indirectly for contributions from new physics.
We present our recent NNLO calculation of t-channel single-top-quark production and decay that resolves a disagreement between two previous calculations whose size at the inclusive level was comparable to the NNLO correction itself, and was even larger differentially. Moving beyond those comparisons, we have included b-quark tagging to allow for comparison with experiment, and added the ability to use double deep inelastic scattering (DDIS) scales ($\mu^2=Q^2$ for the light-quark line and $\mu^2=Q^2+m_t^2$ for the heavy-quark line) that allow for direct testing of parton distribution function (PDF) stability. All code will be publicly available in MCFM.
We demonstrate that several characteristic fiducial and differential Standard Model observables, as well as observables sensitive to new physics, are stable between NLO and NNLO, but we point out a sizable difference in the prediction of some exclusive t+n-jet cross sections. Finally, we use this calculation to present preliminary results indicating that some commonly used PDF sets are in significant disagreement, both with each other and with themselves between perturbative orders, when evaluated at Tevatron energies.
A simultaneous measurement of the three components of the top-quark and top-antiquark polarization vectors in $t$-channel single-top-quark production is presented. Due to the large mass of the top quark, the $t\rightarrow Wb$ decay occurs before hadronization, giving access to the top-quark polarization through the angular distributions of the decay products. The analysis uses an integrated luminosity of 139 fb$^{-1}$ of proton-proton collisions at 13 TeV, collected with the ATLAS detector at the LHC. We also discuss the more intricate analysis of the quadruple-differential decay rate in $t$-channel single-top-quark production, which is currently in progress; its purpose is the simultaneous determination of four decay amplitudes and their phases, in addition to all three components of the polarization vector, for top quarks and antiquarks separately. Prospects for constraining anomalous couplings and Effective-Field-Theory coefficients with this analysis are also discussed.
The Mu2e experiment, currently under construction at Fermilab, will search for charged lepton flavor violation (CLFV) in the form of coherent neutrinoless conversion of muons to electrons in the presence of an atomic nucleus. In order to reach its projected single-event sensitivity of $3 \times 10^{-17}$, Mu2e will need to create the most intense muon beam ever developed, with $10^{10}$ muons per second stopping in the stopping target. These muons will be produced from pions originating from the production target. Optimizing pion production is therefore a vital component of the Mu2e design. This talk will discuss how the design of the production target, solenoid, and instrumentation is optimized for pion production.
The Mu2e experiment being constructed at Fermilab will search for indications of Charged Lepton Flavor Violation by measuring 105-MeV electrons emitted in conversions of negative muons into electrons in the nuclear field without emission of neutrinos. Mu2e-II is a proposed upgrade to the baseline Mu2e experiment to extend the reach by an order of magnitude. To enhance charged-pion production, the Mu2e-II upgrade will rely on a 100-kW 800-MeV proton beam from the dedicated linac accelerator complex PIP-II to be built at Fermilab. Mu2e-II will reach a higher sensitivity than the Mu2e baseline by increasing the rate of negative muons stopped in the detector's stopping target foils by a factor of about 10. Such sensitivity will allow Mu2e-II to reach New Physics mass scales up to $2\times 10^{4}$ TeV. For Mu2e-II we are considering a novel conveyor target with spherical target elements moved through the beam path both mechanically and by a gas flow. We will discuss our recent advances in conceptual design R&D for a Mu2e-II target station, based on energy deposition and radiation damage simulations involving Monte-Carlo codes (MARS15, G4beamline, and FLUKA) as well as thermal and mechanical analyses to estimate the survivability of the system. The concurrent use of several simulation codes is intended to help us elucidate the systematic uncertainties inherent in the simulations. Our simulations indicate that some alternative designs are less preferable and support our assessment of the target station's required working parameters and constraints. We show how thermal and mechanical analyses determine the choice of cooling scheme and prospective materials for the conveyor's spherical elements.
Quantum machine learning could possibly become a valuable alternative to classical machine learning for applications in High Energy Physics by offering computational speed-ups. In this study, we apply a support vector machine with a quantum kernel estimator (the QSVM-Kernel method) to a recent LHC flagship physics analysis: $\mathrm{t\overline{t}}$H (Higgs boson production in association with a top quark pair). In our quantum-simulation study using up to 20 qubits and up to 50000 events, the QSVM-Kernel method performs as well as its classical counterparts on three different platforms from Google TensorFlow Quantum, IBM Quantum and Amazon Braket. Additionally, using 15 qubits and 100 events, the application of the QSVM-Kernel method on IBM superconducting quantum hardware approaches the performance of a noiseless quantum simulator. Our study confirms that the QSVM-Kernel method can use the large dimensionality of the quantum Hilbert space to replace the classical feature space.
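To make the kernel-SVM workflow behind the QSVM-Kernel method concrete, here is a minimal sketch in which a classical RBF kernel stands in for the quantum fidelity kernel $K(x,x') = |\langle\phi(x)|\phi(x')\rangle|^2$ that would be estimated on a quantum simulator or device; the data, features, and kernel below are stand-ins, not the study's setup.

```python
# Minimal sketch of the precomputed-kernel SVM workflow; a classical RBF kernel
# is used as a placeholder for the quantum fidelity kernel of the QSVM-Kernel method.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def kernel_matrix(A, B, gamma=0.5):
    # Stand-in for the quantum kernel estimator: replace with circuit-fidelity
    # estimates to obtain the quantum variant.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X_sig = rng.normal(0.5, 1.0, size=(500, 6))   # e.g. six kinematic variables
X_bkg = rng.normal(-0.5, 1.0, size=(500, 6))
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(500), np.zeros(500)])

n_train = 700
K_train = kernel_matrix(X[:n_train], X[:n_train])
K_test = kernel_matrix(X[n_train:], X[:n_train])

svm = SVC(kernel="precomputed", C=1.0, probability=True)
svm.fit(K_train, y[:n_train])
print("ROC AUC:", roc_auc_score(y[n_train:], svm.predict_proba(K_test)[:, 1]))
```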
Clustering of charged particle tracks along the beam axis is the first step in reconstructing the positions of proton-proton (p-p) collisions at Large Hadron Collider (LHC) experiments. In this talk, we formulate this problem for a 2048 qubit D-Wave quantum computer that works by quantum annealing. We showcase the performance of the quantum annealer on artificial events generated from p-p collision and track distributions measured by the Compact Muon Solenoid experiment at the LHC. This performance is enhanced via multiple hardware optimizations which are outlined in the talk. The quantum clustering algorithm is found to be limited by the connectivity of the qubits and the overall efficiency of the algorithm in addressing event topologies with more than 5 collisions. Current research directions are highlighted in extending this algorithm to be compatible with operating at the full LHC-scale problem complexities relevant for particle physics research.
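For illustration only, the sketch below shows how a track-to-vertex clustering problem of this kind can be cast as a QUBO, the form accepted by a quantum annealer; here it is solved by brute force on a toy event, and the variables, penalty strengths, and sizes are hypothetical and far smaller than the real problem.

```python
# Minimal, hypothetical QUBO formulation of track-to-vertex clustering.
# Binary variable x[i, k] = 1 assigns track i to vertex candidate k; pairs of
# tracks far apart in z are penalized for sharing a vertex, and a quadratic
# penalty enforces one assignment per track. Solved by brute force for a toy event.
import itertools
import numpy as np

z_tracks = np.array([0.10, 0.12, 0.08, 5.00, 5.10])  # toy track z0 positions [cm]
n_tracks, n_vertices = len(z_tracks), 2
lam = 10.0  # strength of the one-assignment constraint

def qubo_energy(bits):
    x = bits.reshape(n_tracks, n_vertices)
    e = 0.0
    for i, j in itertools.combinations(range(n_tracks), 2):
        e += abs(z_tracks[i] - z_tracks[j]) * np.dot(x[i], x[j])
    e += lam * ((x.sum(axis=1) - 1.0) ** 2).sum()
    return e

best = min((np.array(b) for b in
            itertools.product([0, 1], repeat=n_tracks * n_vertices)),
           key=qubo_energy)
print(best.reshape(n_tracks, n_vertices))  # expected: two clusters in z
```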
We present a minimal UV-complete framework to embed inflation and dark matter by extending the standard model with a singlet real scalar field (the inflaton) and a singlet fermionic field acting as dark matter. The inflaton potential features the most general renormalizable polynomial up to quartic order, which is flat due to the existence of a perturbed inflection point, comfortably fitting CMB measurements. We also analyze (p)reheating by considering Higgs production via inflaton decay. In the early universe, dark matter can be generated by the mediation of gravitons or inflatons. However, production via the direct decay of the inflatons dominates, making viable a large range of dark matter masses, from $\mathcal{O}(10^{-5})$ GeV to $\mathcal{O}(10^{11})$ GeV.
Axion couplings to photons could induce photon-axion conversion in the presence of magnetic fields in the Universe. This conversion could impact various cosmic distance measurements, such as luminosity distances to type Ia supernovae and angular distances to galaxy clusters, in different ways. In this paper we consider different combinations of the most up-to-date distance measurements to constrain the axion-photon coupling. Employing the conservative cell magnetic field model for the magnetic fields in the intergalactic medium (IGM) and ignoring the conversion in the intracluster medium (ICM), we find the upper bounds on axion-photon couplings to be around $5 \times 10^{-12}$ (nG/$B$) $\sqrt{\mathrm{Mpc}/s}$ GeV$^{-1}$ for axion masses $m_a$ below $10^{-13}$ eV, where $B$ is the strength of the IGM magnetic field, and $s$ is the comoving size of the magnetic domains. When including the conversion in the ICM, the upper bound is lowered and could reach $5 \times 10^{-13}\, $GeV$^{-1}$ for $m_a < 5 \times 10^{-12}$ eV. While this stronger bound depends on the ICM modeling, it is independent of the strength of the IGM magnetic field, for which there is no direct evidence yet. These constraints could be placed on firmer footing with an enhanced understanding and control of the astrophysical uncertainties associated with the IGM and ICM. All the bounds are determined by the shape of the Hubble rate as a function of redshift reconstructable from various distance measurements, and insensitive to today's Hubble rate, of which there is a tension between early and late cosmological measurements.
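For orientation, the quoted scaling of the bound with the field strength $B$ and domain size $s$ follows from the standard small-mixing, low-mass ($m_a \to 0$) photon-axion conversion probability in a single magnetic domain, summed incoherently over the $N = d/s$ domains along a path of length $d$ (a textbook estimate, not a result specific to this analysis):
$$ P_{\gamma\to a} \simeq \tfrac{1}{4}\, g_{a\gamma}^{2}\, B_{\perp}^{2}\, s^{2}, \qquad P_{\rm tot} \simeq \frac{d}{s}\, P_{\gamma\to a} \;\propto\; g_{a\gamma}^{2}\, B^{2}\, s\, d , $$
so for a fixed allowed conversion probability the limit on $g_{a\gamma}$ scales as $1/B$ and $1/\sqrt{s}$.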
Self-interaction among the neutrinos in the early Universe has been proposed as a solution to the Hubble tension, a discrepancy between the values of the Hubble constant measured from the CMB and from low-redshift data. However, flavor-universal neutrino self-interaction is highly constrained by BBN and by several laboratory measurements, such as tau and K-meson decays and double beta decay. In this talk, I will discuss the cosmology in which only one or two neutrino states are self-interacting. Such flavor-specific interactions are less constrained by laboratory experiments. Finally, I will discuss the feasibility of addressing the Hubble tension in the framework of such flavor-specific neutrino self-interactions.
The existence of feebly interacting massive particles (FIMPs) could have significant implications on the effective number of relativistic species Neff in the early Universe. In this work, we investigate in detail how short-lived FIMPs that can decay into neutrinos affect Neff and highlight the relevant effects that govern its evolution. We show that even if unstable FIMPs inject most of their energy into neutrinos, they may still decrease Neff, and identify neutrino spectral distortions as the driving power behind this effect. As a case study, we consider Heavy Neutral Leptons (HNLs) and indicate which regions of their parameter space increase or decrease Neff. Moreover, we derive bounds on the HNL lifetime from the Cosmic Microwave Background and comment on the possible role that HNLs could play in alleviating the Hubble tension.
I will discuss the implications of self-interacting dark sectors with light degrees of freedom and mass thresholds for early-universe physics. Such models exhibit a relative increase in the energy density of the dark sector when the temperature crosses a mass threshold. Of special interest are models with mass thresholds below $\mathcal{O}({\rm MeV})$. In this region of parameter space, the transition (increase in energy density) occurs after BBN, allowing for $N_\mathrm{eff} > 3$ and consequently a larger value of $H_0$. Additionally, the transition occurs during the epochs probed by cosmological data. I will talk about the cosmological constraints on these models using the most recent data, including the 2018 Planck CMB spectra, baryon acoustic oscillation (BAO) data and local measurements of the Hubble constant. The analysis shows a preference for a transition at the keV scale, and we find that these models can alleviate the $H_0$ tension, allowing for $H_0 = 71.4$ km s$^{-1}$ Mpc$^{-1}$.
I will describe how to use the 21-cm line of hydrogen to learn about the nature of dark matter in the early universe. I will begin with an overview of the 21-cm signal during cosmic dawn and reionization, and give a brief update of the 2021 status of both theory and measurements. Then, I will show how to use the depth of the signal as a thermometer to learn about anomalous cooling or heating due to dark-matter interactions. Finally, I will illustrate how the timing of this signal can be used to study dark matter at smaller scales than currently accessible. This will determine whether dark matter is cold, warm, or self-interacting, and settle whether there is a small-scale CDM problem.
The Standard Model of particle physics is in remarkable agreement with most experimental data so far. However, many questions remain unanswered, such as the origin of neutrino masses or the need for extra sources of CP violation. Possible solutions rest on scalar-sector extensions, popular beyond-the-Standard-Model scenarios, in which the addition of scalar triplets is an attractive possibility. Such models are much studied in the literature, yet some of their features remain hidden. In the Higgs-triplet model, in which small neutrino masses may be generated via the type-II seesaw mechanism, the theory can a priori develop a charge-breaking vacuum as its global minimum, which would spoil electromagnetism. Furthermore, although not possible with just one triplet, a CP-breaking vacuum is possible with the addition of two triplets, which could lead to interesting leptonic CP-violating effects; however, it also introduces novel and unexpected features in the scalar spectrum. In this work, we briefly present such hidden features.
A modest extension of the Standard Model by two additional Higgs doublets - the Higgs Troika Model - can provide a well-motivated scenario for successful baryogenesis if neutrinos are Dirac fermions. Adapting the ``Spontaneous Flavor Violation'' framework, we consider a version of the Troika model where light quarks have significant couplings to the new multi-TeV Higgs states. Resonant production of new scalars leading to di-jet or top-pair signals is a typical prediction of this setup. The initial and final state quarks relevant to the collider phenomenology also play a key role in baryogenesis, potentially providing direct access to the relevant early Universe physics in high energy experiments. Viable baryogenesis generally prefers some hierarchy of masses between the observed and the postulated Higgs states. We show that there is a complementarity between direct searches at a future 100 TeV $pp$ collider and indirect searches at flavor experiments, with both sensitive to different regions of parameter space relevant for baryogenesis. In particular, measurements of $D-\bar{D}$ mixing at LHCb probe much of the interesting parameter space. Direct and indirect searches can uncover the new Higgs states up to masses of $\mathcal{O}(10)$ TeV, thereby providing an impressive reach to investigate this model.
The stringent constraints from the direct searches for exotic scalars at the LHC as well as indirect bounds from flavour physics measurements have imposed severe restrictions on the parameter space of new physics models featuring extended Higgs sectors. In the Type-II 2HDM, this implies a lower bound on the charged Higgs masses of $\mathcal{O}$(600 GeV). In this work we analyse the phenomenology of a Z3HDM in the alignment limit focusing on the impact of flavour physics constraints on its parameter space. We show that the couplings of the two charged Higgs bosons in this model feature an additional suppression factor compared to Type-II 2HDM. This gives rise to a significant relaxation of the flavour physics constraints in this model, allowing the charged Higgs masses to be as low as $\mathcal{O}$(200 GeV). We also consider the constraints coming from precision electroweak observables and the observed diphoton decay rate of the 125 GeV Higgs boson at the LHC. The bounds coming from the direct searches of nonstandard Higgs bosons at the LHC, particularly those from resonance searches in the ditau channel, prove to be very effective in constraining this scenario further.
A search for a light pseudoscalar ($a$) in the composite Higgs model is performed in the gluon-gluon fusion production channel with decay to di-tau. The lightness of the $a$ means that the most promising topology is a boosted di-tau one, in which the $a$ is produced with significant momentum transverse to the LHC beam line before decaying into a pair of taus. Preliminary investigations show that this search is accessible with CMS Run 2 data.
In this talk, we explore the collider phenomenology of the charged Higgs boson in the context of a Beyond Standard Model scenario with extended gauge and scalar sectors. Because of the intricate pattern of symmetry breaking, the charged Higgs can decay via heavy-gauge-boson-mediated channels ($W^{'}Z/WZ^{'}$) alongside the traditional SM decay modes. Our goal here is to formulate a search strategy for the $H^{\pm}$ in the channel $\sigma(g b \rightarrow H^{\pm}t)\,\mathcal{BR}(H^{\pm} \rightarrow W^{'} Z)$. Since the $W^{'}$ boson in this model does not couple to SM fermions at tree level, it opens up an interesting cascade decay chain: $H^{\pm} \rightarrow W^{'} Z \rightarrow W^{\pm} Z Z$. As a result, the charged Higgs can be discovered in final states with multiple hard leptons and/or b quarks, which future LHC runs with sufficiently large luminosity (say, $\mathcal{L} = 1000~\mathrm{fb}^{-1}$) can probe.
I discuss a scenario in which the SM scalar sector is extended by two real scalar singlets. This model features three CP-even neutral states that mix and allow for interesting decay chains, e.g. an asymmetric $h_3 \to h_1 h_2$ decay with non-degenerate masses or $h_2 \to h_1 h_1$ decays with non-SM-like masses. I will present several benchmark planes within this model which lead to interesting novel signatures at hadron colliders.
We present a search for a low-mass dark photon, below 1 MeV, radiated from a muon in proton-proton collisions at a center-of-mass energy of 13 TeV. Such a light dark photon has no available decay channel to standard model particles and is hence stable. We assume that the dark photon interacts directly with detector material through bremsstrahlung and that, because of its small kinetic mixing, through which it converts to and from the ordinary photon, it deposits energy beyond the CMS electromagnetic calorimeter. The search is model-independent and applies to all dark photons with mass below 1 MeV. I will present preliminary feasibility studies using Monte Carlo simulation, which show that a dark photon shower in the HCAL is distinguishable from standard model backgrounds.
Originally, the Large Hadron Collider (LHC) was designed to complete the Standard Model (SM) of particle physics, and while great progress has been made in validating the SM, many phenomena remain unexplained. It is therefore important to pursue searches for physics beyond the Standard Model (BSM). The high-luminosity upgrade of the LHC (HL-LHC) can improve current BSM searches by employing hardware and track-triggering systems optimized for long-lived particles (LLPs) and other exotic signatures of new physics. The hardware design and the track-triggering algorithms needed for BSM searches should be highly efficient (reducing background and the number of events lost) and computationally affordable (fast algorithms). This talk addresses the triggering parameters needed to cover a large range of LLP signatures and exotic models, specifically displaced leptons.
The Heavy Photon Search experiment searches for electro-produced dark photons using an electron beam provided by CEBAF at the Thomas Jefferson National Accelerator Facility. HPS looks for dark photons through two distinct methods: a resonance search in the e+e− invariant-mass distribution above the large QED background, sensitive to large dark photon couplings to SM particles, and a displaced-vertex search for long-lived dark photons with small couplings. An engineering run in 2016 collected 5.4 days (92.5 mC) of data using a 2.3 GeV, 200 nA electron beam, and both analyses are unblinded and nearing completion. Even though the result from the 2016 displaced-vertex search (the focus of this talk) is insufficient to set physically meaningful limits, it demonstrates the full functionality of the experiment, including both signal expectation and background mitigation, which will enable probing unexplored parameter space with existing data and future, higher-luminosity runs.
We revisit the solution to the $(g-2)_\mu$ puzzle based on a kinetically mixed dark photon. Despite this scenario being excluded in minimal models with fully visible and fully invisible dark photon decays, we show that semi-visible scenarios are still allowed by explicitly re-evaluating constraints from B-factories and fixed-target experiments. Such a solution points to dark photons with masses around a few GeV that couple to dark sectors with co-annihilating dark matter particles or heavy neutral leptons. In all models, we predict a large rate of multi-leptons at Belle-II and NA64.
Axion-like particles (ALPs) provide a promising direction in the search for new physics, and a wide range of models incorporate ALPs. We point out that neutrino and dark matter experiments, such as DUNE and CCM, possess competitive sensitivity to ALP signals. High-intensity proton beams not only produce copious amounts of neutrinos, but also cascade photons created by charged-particle showers stopping in the target. Therefore, ALPs interacting with photons can be produced (often energetically) with high intensity via the Primakoff effect $\gamma Z \to a Z$ and then leave their signatures via inverse Primakoff scattering or decays to photon pairs, $a \to \gamma\gamma$. The proton beam may also induce an electron flux which, together with the cascade photons, can produce ALPs via their couplings to electrons through bremsstrahlung-like and Compton-like processes. Through this coupling, ALP detection via decays to $e^+e^-$ and inverse Compton scattering $a e^- \to \gamma e^-$ is also possible.
In this talk, I will propose the use of the Earth as a transducer for ultralight dark-matter detection. In particular I will point out a novel signal of kinetically mixed dark-photon dark matter: a monochromatic oscillating magnetic field generated at the surface of the Earth. Similar to the signal in a laboratory experiment in a shielded box (or cavity), this signal arises because the lower atmosphere is a low-conductivity air gap sandwiched between the highly conductive interior of the Earth below and ionosphere or interplanetary medium above. At low masses (frequencies) the signal in a laboratory detector is usually suppressed by the size of the detector multiplied by the dark-matter mass. Crucially, in our case the suppression is by the radius of the Earth, and not by the (much smaller) height of the atmosphere. The magnetic field signal exhibits a global vectorial pattern that is spatially coherent across the Earth, which enables sensitive searches for this signal using unshielded magnetometers dispersed over the surface of the Earth. I will summarize the results of such a search using a publicly available dataset from the SuperMAG collaboration. The constraints from this search are complementary to existing astrophysical bounds. Future searches for this signal may improve the sensitivity over a wide range of ultralight dark-matter candidates and masses.
One of the key sub-detectors of the CMS experiment, located at the CERN Large Hadron Collider, is the electromagnetic calorimeter (ECAL), a homogeneous calorimeter designed to detect electrons and photons with energies from as low as 500 MeV up to 1 TeV. It consists of ~76,000 scintillating crystals arranged around the collision point in an 8 m long barrel with a diameter of 3 m and two endcaps, all within a 4 T magnetic field. Electrons and photons are detected as showers that spread across many crystals due to the magnetic field and interactions with the detector material.
A multi-step procedure has been developed to reconstruct the energy of these particles from the energy deposits in the crystals of the ECAL. In the first step of this procedure, a clustering algorithm is employed to gather the energies deposited around a "seed" crystal, which are above a certain threshold and subject to certain topological requirements to ensure that the energy being clustered is not electronic noise and belongs to the showering electromagnetic particle. In the next step, corrections are applied to this clustered energy in order to account for effects such as out-of-cluster energy deposits, energy losses upstream of the ECAL as well as in dead crystals and gaps in the ECAL, energy leakage, etc. To derive this correction, a gradient-boosted regression is trained using a detector simulation to correct the reconstructed energy of electrons and photons to their true energies. This regression uses several high-level variables which describe the electromagnetic shower as its input features. This regression-based correction is crucial to ensure that electrons and photons are reconstructed with the correct energy scale and with an optimal resolution, and consequently it is a key ingredient for several precision measurements, such as the masses of the Higgs and W bosons. This implies that any improved algorithm would result in gains in these flagship CMS measurements.
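As a minimal sketch of the regression-based correction described above (not the CMS implementation), the following example trains a gradient-boosted regressor on a toy "simulation" to predict a multiplicative correction factor E_true/E_raw from a few hypothetical shower variables.

```python
# Minimal, hypothetical sketch of a BDT-based energy-regression correction:
# toy shower variables are mapped to the correction factor E_true / E_raw.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 50000
e_true = rng.uniform(5.0, 200.0, n)                      # true energy [GeV]
eta = rng.uniform(-1.4, 1.4, n)                          # barrel pseudorapidity
r9 = np.clip(rng.normal(0.93, 0.04, n), 0.5, 1.0)        # shower compactness proxy
# Toy detector response: containment losses depending on eta and R9, plus smearing.
e_raw = e_true * (0.95 + 0.04 * r9 - 0.01 * np.abs(eta)) * rng.normal(1.0, 0.02, n)

X = np.column_stack([e_raw, eta, r9])
target = e_true / e_raw                                   # multiplicative correction

reg = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
reg.fit(X, target)
e_corrected = reg.predict(X) * e_raw
print("raw scale:", np.mean(e_raw / e_true), " corrected scale:", np.mean(e_corrected / e_true))
```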
We have recently developed an updated energy-regression correction using graph neural networks which uses only low-level information about the electromagnetic shower as input features. The algorithm already shows improved energy-reconstruction performance compared to the current legacy algorithm, and this has also been validated in collision data. Efforts are now underway to extend it to other use cases, for example discriminating prompt electrons and photons from jets that can fake them. In this report we will present this novel machine-learning-based algorithm and the results obtained with it, using both simulated and collision data from the CMS experiment.
Calibrating the pion energy response is a core component of reconstruction in the ATLAS calorimeter. Deep learning techniques have shown the best energy resolution for a wide range of particle momenta [1]; to further improve the pion energy resolution, a Mixture Density Network (MDN) based deep learning algorithm is explored. In addition to estimating the energy, the MDN also estimates the associated energy resolution for each individual pion; this enables the resolution to be quantified on a per pion basis for the first time. This work demonstrates the potential of MDN-based low-level hadronic calibrations to significantly improve the quality of particle reconstruction in the ATLAS calorimeter. This work is done in the context of the ML4Pions group in the ATLAS Collaboration.
[1] ATLAS Collaboration, Deep Learning for Pion Identification and Energy Calibration with the ATLAS Detector, ATL-PHYS-PUB-2020-018, 2020
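A minimal sketch of a Mixture Density Network of the kind described above is given below (PyTorch): the network predicts mixture weights, means, and widths of a few Gaussians for the target, providing both a per-event estimate and a per-event width. The architecture, inputs, and training data are toy placeholders, not the ATLAS setup.

```python
# Minimal, hypothetical MDN sketch: predict a Gaussian mixture over the target,
# giving a per-event estimate (mixture mean) and a per-event resolution (mixture width).
import torch
import torch.nn as nn

K = 3  # number of mixture components

class MDN(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 3 * K)  # mixture logits, means, log-widths

    def forward(self, x):
        logits, mu, log_sigma = self.head(self.body(x)).chunk(3, dim=-1)
        return logits, mu, log_sigma

def mdn_nll(logits, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted Gaussian mixture.
    log_pi = torch.log_softmax(logits, dim=-1)
    log_prob = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(y.unsqueeze(-1))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Toy data standing in for cluster-level features and a true energy.
x = torch.randn(4096, 10)
y = 2.0 * x[:, 0] + 1.0 + 0.3 * torch.randn(4096)

model = MDN(10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    loss = mdn_nll(*model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    logits, mu, log_sigma = model(x)
    pi = torch.softmax(logits, dim=-1)
    mean = (pi * mu).sum(-1)                                           # per-event estimate
    var = (pi * (log_sigma.exp() ** 2 + mu ** 2)).sum(-1) - mean ** 2  # per-event width^2
```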
As an unsupervised machine learning strategy, optimal transport (OT) has been applied to jet physics for the computation of distances between collider events. Here we generalize the Energy Mover's Distance to include both the balanced Wasserstein-2 (W2) distance and the unbalanced Hellinger-Kantorovich (HK) distance. Whereas the W2 distance only allows mass to be transported, the HK distance allows mass to be transported, created, and destroyed, therefore naturally incorporating the total $p_T$ difference between the jets. Both distances enjoy a weak Riemannian structure and thus admit a linear approximation. Such a linear framework significantly reduces the computational cost and, in addition, provides a Euclidean embedding amenable to simple machine learning algorithms and visualization techniques downstream. Here we demonstrate the benefit of this linear approach for jet classification and study its behavior in the presence of pileup.
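As a toy illustration of the balanced (W2-like) case, the sketch below computes an optimal-transport distance between two sets of equally weighted jet constituents by solving an assignment problem; this only covers the special case of equal constituent multiplicities and weights, whereas the unbalanced HK distance would additionally allow pt to be created or destroyed rather than only moved.

```python
# Minimal, hypothetical sketch of a balanced OT (Wasserstein-2-like) distance
# between two toy "jets", each a set of equally weighted constituents in the
# (rapidity, phi) plane; solved as an assignment problem via scipy.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_like_distance(jet_a, jet_b):
    # jet_a, jet_b: arrays of shape (n, 2) with (y, phi) of n constituents each.
    diff = jet_a[:, None, :] - jet_b[None, :, :]
    cost = (diff ** 2).sum(-1)               # squared ground distance
    rows, cols = linear_sum_assignment(cost)  # optimal pairing of constituents
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(3)
jet1 = rng.normal(0.0, 0.20, size=(16, 2))
jet2 = rng.normal(0.1, 0.25, size=(16, 2))
print("W2-like jet distance:", w2_like_distance(jet1, jet2))
```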
Deep learning techniques have gained tremendous attention from researchers in many fields, including particle physics. However such techniques typically do not capture model uncertainty. Bayesian models offer a solid framework to quantify the uncertainty, but they normally come with a high computational cost. A recent paper develops a new theoretical framework casting dropout in Neural Networks (NNs) as approximate Bayesian inference for Gaussian processes without changing either the models or the training.
In this talk, I will present how this method can be applied to evaluate multi-class classification uncertainty using the Modified National Institute of Standards and Technology (MNIST) dataset. The results of this evaluation include both the model uncertainty and uncertainties from systematic mis-modeling of the training data. I will also present preliminary results of this method applied to the ATLAS identification of jets coming from b-quarks with high momentum, and compare the difference in uncertainties between NNs trained on samples of low-momentum jets only and those including high-momentum jets.
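The mechanics of Monte-Carlo dropout uncertainty estimation can be sketched in a few lines (PyTorch); the network, data, and number of passes below are toy placeholders rather than the configurations used in the studies above.

```python
# Minimal sketch of MC dropout: keep dropout active at inference and use the
# spread of the softmax outputs over repeated stochastic passes as an
# uncertainty estimate. Architecture and inputs are toy (MNIST-like) placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(256, 10))

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)   # predictive mean and model uncertainty

x = torch.randn(8, 784)                  # a batch of toy inputs
mean_p, std_p = mc_dropout_predict(model, x)
print(mean_p.argmax(-1), std_p.max(-1).values)
```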
A framework is presented to extract and understand decision-making information from a deep neural network classifier of jet substructure tagging techniques. The general method studied is to provide expert variables that augment inputs (“eXpert AUGmented” variables, or XAUG variables), then apply layerwise relevance propagation (LRP) to networks that have been provided XAUG variables and those that have not. The XAUG variables are concatenated to the classifier’s intermediate input to the final layer or decision. The results show that XAUG variables can be used to interpret classifier behavior, increase discrimination ability when combined with low-level features, and in some cases capture the behavior of the classifier completely. The LRP technique can be used to find relevant information the network is using, and when combined with the XAUG variables, can be used to rank features, allowing one to find a reduced set of features that capture part of the network performance. These XAUG variables can also be added to low-level networks as a guide to improve performance.
*This work was supported under NSF Grants PHY-1806573, PHY-1719690 and PHY-1652066. Computations were performed at the Center for Computational Research at the University at Buffalo.
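A minimal sketch of the XAUG idea, concatenating expert variables with the network's final hidden representation just before the decision layer, is shown below; the layer sizes, inputs, and variable counts are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal, hypothetical sketch of an "eXpert AUGmented" (XAUG) classifier:
# expert variables bypass the low-level feature extractor and are concatenated
# into the input of the final decision layer.
import torch
import torch.nn as nn

class XAugClassifier(nn.Module):
    def __init__(self, n_lowlevel, n_xaug, hidden=64):
        super().__init__()
        self.lowlevel = nn.Sequential(nn.Linear(n_lowlevel, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        # The final layer sees both the learned representation and the expert variables.
        self.final = nn.Linear(hidden + n_xaug, 1)

    def forward(self, x_lowlevel, x_xaug):
        h = self.lowlevel(x_lowlevel)
        return torch.sigmoid(self.final(torch.cat([h, x_xaug], dim=-1)))

net = XAugClassifier(n_lowlevel=60, n_xaug=4)   # e.g. 4 substructure expert variables
score = net(torch.randn(32, 60), torch.randn(32, 4))
```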
As the LHC prepares to enter its third run, analyses are increasingly focused on a drive for precision physics. One of the great tools for precision physics in this field is unfolding. This talk describes the development and usage of RooUnfold, RooFitUnfold, and RooUnfoldML in particle physics. Together they form a complete series of statistical software packages for the treatment of unfolding problems, including most of the unfolding methods commonly used in particle physics, common uniform tools to evaluate their performance, and proposed methods for future analyses.
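To illustrate the class of methods such packages implement, here is a minimal numpy sketch of iterative Bayesian (d'Agostini-style) unfolding on a toy response matrix; it is not the RooUnfold API, and the response matrix, prior, and iteration count are arbitrary choices.

```python
# Minimal, hypothetical sketch of iterative Bayesian unfolding on a toy spectrum.
import numpy as np

def iterative_unfold(response, measured, n_iter=4, prior=None):
    # response[j, i] = P(reco bin j | true bin i); assumed fully efficient here.
    n_true = response.shape[1]
    truth = np.full(n_true, measured.sum() / n_true) if prior is None else prior.copy()
    for _ in range(n_iter):
        folded = response @ truth                         # expected reco spectrum
        # Bayes "unfolding matrix": P(true i | reco j) given the current prior.
        unfold_matrix = response * truth[None, :] / folded[:, None]
        truth = unfold_matrix.T @ measured
    return truth

# Toy example: a falling true spectrum smeared by a simple migration matrix.
true_spec = np.array([100.0, 80.0, 60.0, 40.0, 20.0])
response = np.array([[0.8, 0.1, 0.0, 0.0, 0.0],
                     [0.2, 0.7, 0.1, 0.0, 0.0],
                     [0.0, 0.2, 0.7, 0.1, 0.0],
                     [0.0, 0.0, 0.2, 0.7, 0.1],
                     [0.0, 0.0, 0.0, 0.2, 0.9]])
measured = response @ true_spec
print(iterative_unfold(response, measured, n_iter=10))  # approaches true_spec
```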
Observations of dark matter structure at the smallest scales can tell us about physical processes taking place in the dark sector at very early times. Here, we point out that the presence of light degrees of freedom coupling to dark matter in the early Universe introduces a localized feature in the halo mass function. This leads to a mass function whose shape is distinct from that of either warm dark matter or cold dark matter, hence distinguishing these models from other leading classes of dark matter theories. We present analytical calculations of these mass functions and show that they closely match N-body simulation results. We briefly discuss how current constraints on the abundance of small-scale dark matter structure do not directly apply to these models due to the multi-scale nature of their mass function.
Dwarf galaxies are relatively pristine objects for testing dark matter microphysics due to the weak baryonic feedback within them. We use a particular class of gas-rich dwarfs to probe DM interactions with ordinary matter. We require that the rate of heat exchange between DM and gas not exceed the low radiative cooling rate of the gas. This gives strong constraints on popular DM models: our constraints on axion-like particles (ALPs), millicharged DM and magnetic PBHs are complementary and comparable to other probes, while they are the strongest to date for dark photon DM with $10^{-20}\,\mathrm{eV} < m_\mathrm{DM} < 10^{-14}\,\mathrm{eV}$. We therefore show that observations of gas-rich dwarfs from current and upcoming optical and 21cm surveys open a new way to probe physics beyond the standard model.
Indirect detection experiments typically measure the flux of annihilating dark matter (DM) particles propagating freely through galactic halos. We consider a new scenario where celestial bodies "focus" DM annihilation events, increasing the efficiency of halo annihilation. In this setup, DM is first captured by celestial bodies, such as neutron stars or brown dwarfs, and then annihilates within them. If DM annihilates to sufficiently long-lived particles, they can escape and subsequently decay into detectable radiation. This produces a distinctive annihilation morphology, which scales as the product of the DM and celestial body densities, rather than as DM density squared. We show that this signal can dominate over the halo annihilation rate in γ-ray observations in both the Milky Way Galactic center and globular clusters. We use \textit{Fermi} and H.E.S.S. data to constrain the DM-nucleon scattering cross section, setting powerful new limits down to $\sim 10^{-39}$ cm$^2$ for sub-GeV DM using brown dwarfs, which is up to nine orders of magnitude stronger than existing limits. We demonstrate that neutron stars can set limits for TeV-scale DM down to about $10^{-47}$ cm$^2$.
Macroscopic dark matter is almost unconstrained over a wide ``asteroid-like'' mass range, where it could scatter on baryonic matter with geometric cross section. We show that when such an object travels through a star, it produces shock waves which reach the stellar surface, leading to a distinctive transient optical, UV and X-ray emission. This signature can be searched for on a variety of stellar types and locations. In a dense globular cluster, such events occur far more often than flare backgrounds, and an existing UV telescope could probe orders of magnitude in dark matter mass in one week of dedicated observation.
The capture of Dark Matter in Neutron Stars has garnered considerable interest in recent years. This interest is driven by the prospect that the energy deposited by dark matter scattering can heat these objects to infra-red temperatures, which may soon be within reach of observations. In order to obtain reliable results from these searches, proper incorporation of the physics of Neutron stars into the capture process is necessary. Key among these are gravitational focusing, relativistic kinematics, Pauli blocking, and multiple scattering. Additionally, we incorporate the internal structure of the Neutron star through the adoption of an equation of state coupled to the Tolman-Oppenheimer-Volkoff equations. In the case of hadronic targets, we must also account for strong interactions of the targets, which induce an effective mass, and that the momentum transfer is sufficiently large that hadrons cannot be treated as pointlike objects. Accounting for these effects allows us to project sensitivities for dark matter-lepton and nucleon cross sections using dimension-6 effective operators. In many cases, limits are potentially stronger than those obtained from direct detection searches.
In recent years, the usefulness of astrophysical objects as Dark Matter (DM) probes has become more and more evident, especially in view of null results from direct detection and particle production experiments. The potentially observable signatures of DM gravitationally trapped inside a star, or another compact astrophysical object, have been used to forecast stringent constraints on the nucleon-Dark Matter interaction cross section. Currently, the probes of interest are: at high red-shifts, Population III stars that form in isolation, or in small numbers, in very dense DM minihalos at z~15−40, and, in our own Milky Way, neutron stars, white dwarfs, brown dwarfs, exoplanets, etc. Of those, only neutron stars are single-component objects, and, as such, they are the only objects for which the common assumption made in the literature of single-component capture, i.e. capture of DM by multiple scatterings with one single type of nucleus inside the object, is valid. In this paper, we present an extension of this formalism to multi-component objects and apply it to Pop III stars, thereby investigating the role of He on the capture rates of Pop III stars. As expected, we find that the inclusion of the heavier He nuclei leads to an enhancement of the overall capture rates, further improving the potential of Pop III stars as Dark Matter probes.
Studies of CP violation and anomalous couplings of the Higgs boson to vector bosons and fermions are presented. The data were acquired by the CMS experiment at the LHC and correspond to an integrated luminosity of 137 fb$^{-1}$ at a proton-proton collision energy of 13 TeV. The kinematic effects in the Higgs boson's four-lepton decay H → 4ℓ and its production in association with two jets, a vector boson, or top quarks are analyzed, using a full detector simulation and matrix element techniques to identify the production mechanisms and to increase sensitivity to the tensor structure of the Higgs boson interactions. A simultaneous measurement is performed of up to five Higgs boson couplings to electroweak vector bosons (HVV), two couplings to gluons (Hgg), and two couplings to top quarks (Htt). The CP measurement in the Htt interaction is combined with the recent measurement in the H →γγ channel. The results are presented in the framework of anomalous couplings and are also interpreted in the framework of effective field theory, including the first study of CP properties of the Htt and effective Hgg couplings from a simultaneous analysis of the gluon fusion and top-associated processes. The results are consistent with the standard model of particle physics.
The quest for lepton-flavor-violating processes at the LHC represents one of the key searches for new physics beyond the Standard Model (SM). We present a search for Higgs boson decays into a tau lepton and either an electron or a muon. The analysis uses data from proton-proton collisions at the LHC at $\sqrt{s}= 13$ TeV, collected by the ATLAS detector and corresponding to an integrated luminosity of $36.1$ $\text{fb}^{-1}$. No significant excess of events was found over the SM expectation and upper limits at 95% CL were placed on the branching ratios $\mathcal{B}(H \to e\tau)$ and $\mathcal{B}(H \to \mu\tau)$ of 0.47% and 0.28%, respectively. We conclude with a brief overview of an ongoing analysis of a larger data set with more sophisticated techniques that is expected to yield improved results.
We explore the new physics reach of the off-shell Higgs boson measurement in the $pp \rightarrow H^* \rightarrow Z(\ell^+\ell^-)Z(\nu\bar{\nu})$ channel at the high-luminosity LHC. The new physics sensitivity is parametrized in terms of the Higgs boson width, the effective field theory framework, and a non-local Higgs-top coupling form factor. Adopting machine-learning techniques, we demonstrate that the combination of a large signal rate and a precise phenomenological probe of the process energy scale, through the transverse $ZZ$ mass, leads to sensitivities significantly beyond the existing results in the literature for the new physics scenarios considered.
The top-quark Yukawa coupling $y_t$ is the strongest interaction of the Higgs boson in the Standard Model (SM) with $y_t \sim 1$. Due to its magnitude, it plays a central role in Higgs phenomenology in the SM and would be most sensitive to physics beyond the Standard Model. The top Yukawa can be directly measured at the LHC via top pair production in association with a Higgs boson $t\bar{t}h$. We study new physics effects for the Higgs-top coupling at high scales, using jet substructure techniques. We present the high-luminosity LHC sensitivity to new physics parametrized in the EFT framework and through a general Higgs-top form factor.
We explore the direct Higgs-top CP structure via the $pp \to t\bar{t}h$ channel with machine learning techniques, considering the clean $h \to \gamma\gamma$ final state at the high luminosity LHC~(HL-LHC). We show that a combination of a comprehensive set of observables, that include the $t\bar{t}$ spin-correlations, with mass minimization strategies to reconstruct the $t\bar{t}$ rest frame provide large CP-sensitivity.
The LHC is exploring electroweak (EW) physics at the scale at which EW symmetry is broken. As the LHC and new high-energy colliders push our understanding of the Standard Model to ever-higher energies, it will be possible to probe not only the breaking but also the restoration of EW symmetry. We propose to observe EW restoration in double EW boson production via the convergence of the Goldstone boson equivalence theorem. We measure this convergence through the ratio of differential cross sections for VH production. We present a method to extract this ratio from collider data. With a full signal and background analysis, we demonstrate that the 14 TeV HL-LHC can confirm that this ratio converges to one with 40% precision, while at the 27 TeV HE-LHC the precision will be 6%. We also investigate statistical tests to quantify the convergence at high energies. Our analysis provides a roadmap for stress testing the Goldstone boson equivalence theorem and our understanding of spontaneously broken symmetries, in addition to confirming the restoration of EW symmetry.
PROSPECT is a reactor antineutrino experiment designed to search for short-baseline sterile neutrino oscillations and to perform a precise measurement of the U-235 reactor antineutrino spectrum. The PROSPECT detector collected data at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory, with its ~4-ton volume covering a baseline range of 7-9 m. To operate in this environment with tight space constraints, limited overburden, and the possibility of reactor-correlated backgrounds, the PROSPECT antineutrino detector (AD) incorporates design features that provide excellent background rejection. These include detector segmentation and the use of Li-6 doped liquid scintillator with high light yield, world-leading energy resolution, and good pulse-shape discrimination properties. This talk will describe the operations of PROSPECT at HFIR and report the latest results from the antineutrino spectrum measurement of U-235 fissions. Additionally, limits from PROSPECT searches for sub-GeV boosted dark matter upscattered by cosmic rays will be reported.
The Precision Reactor Oscillation and Spectrum Experiment (PROSPECT) is an above-ground antineutrino experiment at short baselines located at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). The PROSPECT detector comprises 4 tons of Li-6 doped liquid scintillator (6LiLS) divided into an 11x14 array of optically separated segments. This experiment's physics goals include searching for the existence of sterile neutrinos and precisely measuring the antineutrino energy spectrum. Antineutrinos are detected via the inverse beta decay (IBD) interaction which provides a near-unique space-time correlated signal pair consisting of a positron energy deposition and a delayed neutron capture in the liquid scintillator. The correlation between prompt and delayed signals is an excellent handle for background suppression. The highly segmented nature of the PROSPECT detector, as well as the double-ended readout structure in each segment, provides good position reconstruction for both prompt and delayed signals. In this talk, I will give an overview of the experiment, as well as current efforts to use the position resolution of the detector and the kinematics of the IBD reaction to study the neutrino directional reconstruction capabilities of PROSPECT.
The Daya Bay and PROSPECT experiments have made world-leading measurements of the $^{235}$U antineutrino fission spectra using liquid scintillator detectors located at nuclear reactors. The Daya Bay experiment has deconvolved a $^{235}$U spectrum from $\sim$3.5 million detected antineutrinos generated from power reactors with an isotopic mixture of fuels, and PROSPECT has detected $\sim$50,000 antineutrinos generated by a research reactor highly enriched in $^{235}$U. Combining the high-statistics Daya Bay measurement and PROSPECT's direct $^{235}$U measurement we derive a more precise measurement of the $^{235}$U antineutrino spectrum and improve the deconvolution of the power reactor fission spectrum into its individual isotopic components. In this talk, I will present the current status of the joint spectral analyses between these experiments.
The PROSPECT and STEREO experiments recently reported modern measurements of the $^{235}$U antineutrino energy spectra from highly-enriched uranium (HEU) research reactors using liquid scintillator based detectors. At HEU reactors, 99% of the antineutrino flux comes from $^{235}$U, providing a direct measure of the energy spectrum and antineutrino flux from a single isotope. STEREO and PROSPECT have provided independent measurements with different systematics from detectors at ILL (France) and HFIR (US). This analysis compares and combines both measurements to test their consistency and provide the best combined measurement of the pure $^{235}$U antineutrino spectrum. In this talk, I will present the current status of this joint spectral analysis.
The Precision Reactor Oscillation and Spectrum Experiment, PROSPECT, at the High Flux Isotope Reactor at ORNL has made world-leading measurements of reactor antineutrinos at short baselines. PROSPECT provides some of the best limits on eV-scale sterile neutrinos, has made a precision measurement of the reactor antineutrino spectrum of $^{235}$U from a highly-enriched uranium reactor, and has demonstrated the observation of reactor antineutrinos in an aboveground detector with good energy resolution and well-controlled backgrounds. The PROSPECT collaboration is now preparing an upgraded detector, PROSPECT-II, to probe yet unexplored parameter space for sterile neutrinos and fully resolve the Reactor Antineutrino Anomaly, a longstanding puzzle in neutrino physics. By pressing forward on the world's most precise measurement of the $^{235}$U antineutrino spectrum and measuring the absolute flux of antineutrinos from $^{235}$U, PROSPECT-II will sharpen a tool with potential value for basic neutrino science, nuclear data validation, and nuclear security applications. An additional deployment at a low-enriched uranium reactor would expand this contribution with complementary measurements of the antineutrino yield from other fission isotopes. PROSPECT-II provides a unique opportunity to continue the study of reactor antineutrinos at short baselines in the US while training a new cohort of neutrino physicists.
The aim of the Reactor Operations Antineutrino Detection Surface Testbed Rover (ROADSTR) project is to observe and monitor electron antineutrinos from nuclear reactors. ROADSTR has been designed as a readily mobile detector, allowing measurements at multiple sites using the same instrument. Besides the clear advantages for nuclear safeguards and verification applications, an easily redeployable detector also provides a unique chance to contribute to flux and spectrum predictions for different nuclear fuels while minimizing the detector-related systematic uncertainties. Such measurements could prove crucial to understanding the different anomalies observed in short-baseline oscillation experiments, while providing benchmark measurements for different applications. In this talk, the current efforts underway within ROADSTR will be summarized, including the development of pulse-shape-discrimination-capable scintillators based on 6Li-doped plastic technology, the implementation of detector mobility, and the study of correlated backgrounds and their mitigation strategies.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-823743.
Do neutrinos have sizable self-interactions? This fundamental question, whose answer directly affects future precision astrophysical and cosmological observations, is notoriously difficult to answer with laboratory experiments. In recent years, neutrino telescopes have been identified as unique tools to explore neutrino self-interactions. The discovery of astrophysical neutrinos and the advent of future neutrino telescopes, together with a more precise understanding of neutrino masses from laboratory and cosmological probes, call for a robust theoretical description of the underlying particle physics and its connections with other neutrino observables. In this work, we set up such a theoretical framework for present and future studies. We quantify the relevance of previously ignored effects, and we clarify the interplay with other experimental probes of neutrino properties. These directly affect the interpretation of present data in terms of self-interactions, as well as the testability of current "hints" in future facilities. Applying our formalism, we find that current IceCube data show no evidence of neutrino self-interactions, and are beginning to exclude self-interactions that have been argued to affect cosmological parameter extraction (most notably $H_0$). Furthermore, our results show that the future IceCube-Gen2 observatory should be sensitive to many cosmologically relevant neutrino self-interaction models.
The ICARUS detector, exposed at shallow depth to the FNAL BNB beam, will search for LSND-like neutrino oscillations in the context of the SBN program. In the approved FNAL SBN experiment the impact of cosmic rays is mitigated by a $4\pi$ Cosmic Ray Tagger (CRT) detector encapsulating the TPCs inside the pit and by a ~3 m concrete overburden, both for the near and the far detectors. Cosmic background rejection is particularly relevant for the ICARUS detector. Due to its larger size and distance from the target compared to SBND, in ICARUS the neutrino signal to cosmic background ratio is 40 times less favorable, with in addition a more than 3 times larger out-of-spill cosmic rate. In this talk, I will address this problematic background to genuine neutrino events, especially in the electron neutrino appearance analysis, using a detailed Monte Carlo calculation of the cosmic rays crossing the ICARUS detector.
The Supernova Early Warning System (SNEWS) is a public alert system that provides a warning to astronomers about the observation of neutrinos from a Galactic supernova. These events are extremely rare, so it will be crucial to gather all the physics possible from the next event. SNEWS has been operating as a simple coincidence between neutrino experiments from all around the world for more than two decades. In the current era of multi-messenger astrophysics, there are new opportunities for SNEWS to optimize the science reach from the next Galactic supernova beyond the simple early alert. In this talk, we will discuss the upgrades and new capabilities of SNEWS 2.0.
The next generation of neutrino telescopes, including Baikal-GVD, KM3NeT, P-ONE, TAMBO, and IceCube-Gen2, will be able to determine the flavor of high-energy astrophysical neutrinos with 10% uncertainties. With the aid of future neutrino oscillation experiments --- in particular JUNO, DUNE, and Hyper-Kamiokande --- the regions of flavor composition at Earth that are allowed by neutrino oscillations will shrink by a factor of ten between 2020 and 2040. We critically examine the capabilities of future experiments and show how these improvements will help us pin down the source of high-energy astrophysical neutrinos and identify a sub-dominant neutrino production mechanism, with and without unitarity assumed. As an illustration of beyond-the-Standard-Model physics, we also show that future neutrino measurements will constrain the decay rate of heavy neutrinos to be below $2\times 10^{-5}\,(m/\mathrm{eV})~\mathrm{s}^{-1}$, assuming they decay into invisible particles.
Many particle physics experiments utilise a pull term method to perform fits to data, in which systematic uncertainties are treated as nuisance parameters that reweight the predicted spectrum. However, this approach scales poorly in fit complexity as the number of systematic uncertainties increases. Conversely, one can utilise a Gaussian multivariate technique, in which systematic uncertainties are encoded into a covariance matrix. This approach is convenient for performing joint fits, and scales well to an arbitrarily large ensemble of systematic uncertainties; however, it also treats statistical uncertainties as Gaussian, which is inappropriate in any experiment which operates in a low-statistics environment, such as a neutrino physics experiment.
We present a novel method named PISCES (Parameter Inference with Systematic Covariance and Exact Statistics), a hybrid technique that combines a Gaussian multivariate treatment of systematic uncertainties with a Poisson likelihood treatment of statistical uncertainties. Under this method, only physics nuisance parameters are profiled using a fitter, while optimal systematic pulls in each analysis bin are calculated using a covariance matrix with each evaluation of the objective function. This technique is fast and memory-efficient, and convenient for performing joint fits over many samples. Following an introduction to the method, demonstrations using toy experiments and time and memory benchmarks are presented.
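As an illustrative, non-authoritative sketch of the hybrid idea described above (a Poisson term for the observed counts plus a covariance-based Gaussian penalty on per-bin systematic pulls), the following toy Python code evaluates such a likelihood. All inputs are invented toy values, not PROSPECT data, and in the full method the per-bin pulls would be solved for at each evaluation of the physics fit rather than held fixed as here.

import numpy as np
from scipy.stats import poisson

def neg_log_likelihood(data, prediction, syst_cov, pulls):
    # -2 ln L for one choice of per-bin systematic pulls (delta_i):
    # a Poisson term for the counts plus a Gaussian penalty correlating
    # the pulls through the systematic covariance matrix.
    shifted = np.clip(prediction + pulls, 1e-9, None)   # keep the Poisson mean positive
    poisson_term = -2.0 * np.sum(poisson.logpmf(data, shifted))
    penalty = pulls @ np.linalg.inv(syst_cov) @ pulls
    return poisson_term + penalty

# Toy example: 5 analysis bins with a fully correlated 5% normalization systematic.
prediction = np.array([100.0, 80.0, 60.0, 40.0, 20.0])
data = np.array([104, 75, 63, 38, 22])
syst_cov = np.outer(0.05 * prediction, 0.05 * prediction) + np.diag(1e-6 * np.ones(5))

print(neg_log_likelihood(data, prediction, syst_cov, pulls=np.zeros(5)))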
The Mu2e experiment at Fermilab will search for the charged-lepton flavor-violating process of neutrinoless muon-to-electron conversion in the presence of a nucleus. The sensitivity goal of the experiment is four orders of magnitude below the current strongest limits on this process. This requires all backgrounds to sum to fewer than one event over the lifetime of the experiment. One major background is due to cosmic-ray muons producing electrons that fake a signal inside the Mu2e apparatus. The Mu2e Cosmic Ray Veto (CRV) has been designed to veto these cosmic-ray backgrounds with an efficiency of approximately 99.99%, while incurring low dead time and operating in a high-intensity environment. In this talk the design is motivated, and the fabrication processes and status of the CRV components being produced at the University of Virginia are presented in detail.
The cosmic-ray-veto detector (CRV) for the Mu2e experiment consists of four layers of plastic scintillating counters read out by silicon photomultipliers (SiPMs) through wavelength-shifting fibers. This presentation reports the testing procedure and light properties of wavelength-shifting fibers with a diameter of 1.8 mm that were purchased to improve the CRV efficiency in certain critical regions. The measurements were performed using a custom-built scanner designed to ensure the fiber quality for the CRV. These results will be discussed and compared with the performance of the 1.4 mm fibers used for the bulk of the CRV.
The Mu2e experiment is designed to search for the charged-lepton-flavor-violating conversion of a $\mu^-$ to an $e^-$ with unprecedented sensitivity. The single 105-MeV electron that results from this process can be mimicked by electrons produced by cosmic-ray muons traversing the detector. An active veto detector surrounding the apparatus is used to detect incoming cosmic-ray muons. To reduce the backgrounds to the required level it must have an efficiency of about 99.99\% as well as excellent hermeticity. The detector consists of four layers of scintillator counters, each with two embedded wavelength-shifting fibers, whose light is detected by silicon photomultipliers. An upgrade of the experiment, Mu2e-II, that will provide an order of magnitude more sensitivity is under design. The cosmic-ray veto detector is being redesigned to handle the higher rates. This redesign is also being used for a proposed high-resolution probe of the interior of the Great Pyramid of Khufu. The design and expected performance of the detector will be described.
The Mu2e experiment aims to measure the neutrinoless muon-to-electron conversion process in the field of a nucleus with a single event sensitivity of $2.8\times10^{-17}$. The Mu2e tracker utilizes an array of straw tube panels in a solenoidal magnetic field to track the conversion electrons and measure their momenta. Using pre-production panels, tracker operation and diagnosis schemes were developed and implemented. Straw channel noise levels were shown to meet the design requirements at optimized thresholds. Using $^{55}$Fe radioactive source data and cosmic data, a wire-based time division calibration was performed and the associated longitudinal wire position resolution was measured to be better than 35 mm.
The Mu2e experiment will search for beyond-the-Standard-Model charged lepton flavor violation (CLFV) in the neutrinoless muon-to-electron conversion process $\mu^- + \text{Al}\rightarrow e^- + \text{Al}$. The number of muons stopped and captured by the aluminum Stopping Target is measured by the Stopping Target Monitor (STM) using muon atomic capture x-rays and muon nuclear capture $\gamma$-rays. An HPGe detector with $\sim$0.8 keV Gaussian resolution at 662 keV and an estimated photon rate capability of $\sim$100 kcps, together with a LaBr$_3$ detector with 7 keV Gaussian energy resolution at 662 keV and an estimated photon rate capability of $\sim$800 kcps, are used to measure the muon capture rate. In one beam-on second, $2.3 \times 10^{13}$ protons hit the Production Target and $3.7 \times 10^{10}$ muons are stopped in the Stopping Target; together they generate an energy flux of $3.2 \times 10^8$ TeV cm$^{-2}$ sec$^{-1}$ consisting of muons, electrons, neutrons, x-rays, and $\gamma$-rays, with mean particle energy $\sim$ 10 MeV. In order to measure the number of stopped muons in the experiment, the energy flux must be reduced by a factor of $5 \times 10^{8}$ for the LaBr$_3$ detector and $3 \times 10^{9}$ for the HPGe detector. To accomplish this reduction, a detector shielding house is placed 35 m from the target, downstream of a beam line consisting of poly absorbers and a sweeping magnet, and containing a tungsten collimator with 0.5 cm$^2$ apertures. A combination of lead, tungsten, copper and aluminum is layered to achieve the shielding goals. Borated polyethylene is used to absorb neutrons. Separate protection plans are made for the HPGe detector and the LaBr$_3$ detector because of their different rate and radiation sensitivities. The rate and energy flux requirements for the detectors are shown to be satisfied using Geant4 simulations.
The Large Hadron Collider at CERN is being upgraded to a High Luminosity version that will increase the instantaneous luminosity to $5\times10^{34}~\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. This substantial increase means that the current experiments will need to be modified in order to cope with the higher rates. The Compact Muon Solenoid (CMS) detector is installing a new muon station consisting of 144 Gas Electron Multipliers (GEMs) that will work with the existing Cathode Strip Chambers (CSCs) to provide a more precise measurement of the muon bending angle. The new GEM detectors have now been installed in the CMS experiment and are in the commissioning phase, with operation scheduled to begin in LHC Run 3. This talk will present the status of this new muon station at the CMS experiment.
The CMS muon system played an important role in the discovery of the Higgs boson and remains central to searches for new particles. The next phase of the LHC is planned to increase luminosity to improve the discovery power. The high-luminosity LHC (HL-LHC) will be a harsh pp collision environment and will require high-performance muon triggering and muon track reconstruction, especially in the endcap region. In order to maintain the performance of the CMS muon system, the CMS collaboration has been developing Gas Electron Multiplier (GEM) detectors for the endcap regions of the CMS muon system. The new sub-detector system requires a new commissioning and alignment procedure to be developed. We report the status of the GE1/1 commissioning and alignment.
At the high luminosity Large Hadron Collider (LHC), the instantaneous luminosity will be up to $5{-}7.5 \times 10^{34}~\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. This necessitates the upgrade of the muon spectrometer of the ATLAS detector. The Small Wheel, the innermost station of the muon end-cap system, will be replaced by the New Small Wheel (NSW). For the high luminosity runs, the new system is required to improve trigger selectivity for the end-cap region in a high background environment, while maintaining excellent tracking capability for events in $pp$ collisions at $\sqrt{s}=$ 13 TeV to 14 TeV with the ATLAS detector. To accomplish this, it should deliver hardware-based online track measurements with a pointing accuracy of 1 mrad at Level-1 in the end-cap region. All of this is achieved by two detector technologies, the small-strip Thin Gap Chamber (sTGC) and the Micro-Mesh Gaseous structure (MM). The sTGC is the primary trigger detector because of its bunch-crossing identification capability. Along with this state-of-the-art detector technology, radiation-tolerant custom-made Application-Specific Integrated Circuits (ASICs) are built to create high-speed data interconnections, which achieve up to 1 MHz of Level-1 data readout using the back-end FELIX (Front End LInk eXchange) system. This complex system of $\sim$400K physical channels and more than $\sim$14K ASICs creates many challenges, which include achieving precise alignment of the readout channels for high spatial resolution and maintaining simultaneous trigger and readout with a background rate of $\sim 20~\mathrm{kHz}\,\mathrm{cm}^{-2}$. The sTGC detector quadruplets are assembled and aligned into wedges at CERN. We summarize our experience from the integration and commissioning of the sTGC detector wedges. These studies are performed for the first time together with the final front-end and back-end electronics. Our work includes an alignment survey of the detector channels, establishing proper connectivity between the detector and the front-end channels, and verifying the robustness of the detector performance against various noise sources, while tuning numerous clock phases and delays for synchronous trigger and readout at high data rates.
Advances in semiconductor research and development have enabled engineering of scintillation materials based on quantum dot (QD) photoluminescence. This has yielded low-mass and radiation-tolerant scintillators with excellent timing and light-yield performance awaiting application in high energy physics experiments. We introduce a detector system based on such a scintillator that consists of bulk GaAs with embedded sheets of self-assembled InAs QDs combined with physically integrated photodiodes for light collection. Early research and development of $\sim20$ micron thin prototype sensors detecting 5.5 MeV $\alpha$-particles have shown fast decay constants of $\sim$500 ps with $\sim$70 ps time resolution and a light collection of $3.0\times10^4$ electrons / MeV using simple electronics readout. We describe results of recent measurements, discuss ongoing improvements in the detector and readout design, and present plans for performance testing to assess applications to high energy charged particle detection.
ATLAS is preparing for the HL-LHC upgrade, where integrated and instantaneous luminosity will reach unprecedented values. For this, an all-silicon Inner Tracker (ITk) is under development with a pixel detector surrounded by a strip detector. The strip system consists of 4 barrel layers and 6 endcap disks. Prototyping has been completed successfully, and pre-production is about to start. We present an overview of the Strip System, highlighting the deliverables from the US. We will outline the current status of pre-production on various detector components, with an emphasis on QA and QC procedures. We will also discuss the plans for the pre-production and production phase distributed over many institutes.
In this talk I will highlight recent results from the CMS and ATLAS Collaborations. Additionally, an overview will be given of the status of the two detectors during the LHC shutdown and the preparations for the upcoming data-taking period.
In recent years, Machine Learning (ML) and Artificial Intelligence (AI) methods have become ubiquitous in High Energy Physics research. Though initial implementations of ML for HEP focused mainly on supervised learning for classification problems like jet tagging or event selection, HEP researchers are increasingly employing cutting edge techniques for a wide range of applications and even contributing to advances in the field of ML and AI. This talk will describe current ML methods being studied for a range of HEP use cases including data collection, reconstruction, tagging, simulation, and inference; methods discussed will include geometric machine learning, deep learning, unsupervised and weakly supervised models, and more.
The LHC provides one of the most potent and comprehensive physics tests of the TeV realm and is on course to realize its full potential with upgrades. Beyond the vast class of successful searches, many new and exciting opportunities are emerging. In this talk, I will highlight several new angles and directions at the LHC, such as precision physics and exotic signatures, to motivate us for the new adventures ahead.
The holographic principle tells us that quantum theories of gravity behave like lower dimensional theories without gravity. We will review lessons learned from applying this principle in the context of gauge/gravity duality and black hole thermodynamics, show how it has guided recent efforts to recast scattering amplitudes in terms of a conformal field theory living on the celestial sphere, and discuss what we might hope to gain by combining amplitudes and conformal bootstrap technology.
A 10-dimensional model with $\mathcal{N}=1$ SUSY and $E_8$ as a gauge group will be presented. It will be shown that through the orbifold $\mathbb{T}^6/(\mathbb{Z}_3\times \mathbb{Z}_3)$, only the Standard Model remains after compactification, with feasible Yukawa couplings. Gauge coupling unification can be achieved at $M_{GUT}=10^7~{\rm GeV}$ with a viable proton lifetime. Therefore the highly predictive extra-dimensional GUT model can be within reach of near-future experiments.
The Standard Model provides no mechanism for generating the left-handed neutrino masses. Our strategy to address this is to exploit the seesaw mechanism. To obtain the right-handed Majorana neutrino masses for the seesaw mechanism, we utilized and studied the D-brane instanton effect in magnetized orbifold models on the torus. Many models can be constructed by changing the magnetic fluxes. We list all models consistent with the Standard Model and point out that our calculations are consistent with the modular transformation on the torus.
Parity solutions to the strong CP problem are a compelling alternative to approaches based on Peccei-Quinn symmetry, particularly given the expected violation of global symmetries in a theory of quantum gravity. The most natural of these solutions break parity at a low scale, giving rise to a host of experimentally accessible signals. In this talk, we give an overview of this class of solutions and assess the simplest parity-based solution in light of LHC and flavor constraints. We further highlight prospects for near-future tests at colliders, tabletop experiments, and gravitational wave observatories. The origin of parity breaking and associated gravitational effects provide new avenues for discovery through EDMs and gravity waves, establishing generalized parity as a promising and testable solution to the strong CP problem.
Despite the successful predictions of the Standard Model, some of the parameters of the flavor sector, e.g. the mixing angles and CP phases, do not have an origin within the model. It has been proposed to use modular flavor symmetries to address this issue, either by imposing them or by deriving them from an underlying theory. In this work, we derive the modular symmetries from the Yukawa couplings given by a magnetized compactified torus. We show that modular transformations of the torus give rise to finite metaplectic groups, whose order is determined by the least common multiple of the number of flavors involved. We also comment on the role of supersymmetry in these constructions and outline a path towards non-supersymmetric models with modular flavor symmetries.
We propose a model for the QCD axion which is realized through a coupling of the Peccei-Quinn scalar field to magnetically charged fermions at high energies. We show that the axion of this model solves the strong CP problem and then integrate out heavy magnetic monopoles using the Schwinger proper time method. We find that the model discussed yields axion couplings to the Standard Model which are drastically different from the ones calculated within the KSVZ/DFSZ-type models, so that a large part of the corresponding parameter space can be probed by various projected experiments. Moreover, the axion we introduce is consistent with the astrophysical hints suggested both by anomalous TeV-transparency of the Universe and by excessive cooling of horizontal branch stars in globular clusters. We argue that the leading term for the cosmic axion abundance is not changed compared to the conventional pre-inflationary QCD axion case for axion decay constant $f_a > 10^{12}~\text{GeV}$.
One of the current problems of the Standard Model is that it does not predict the parameters of the flavor sector, e.g. mixing angles and CP phases need to be adjusted by hand. Recently, a new approach to address this problem has been to assume that Yukawa couplings are modular forms which give rise to a modular flavor symmetry in the Lagrangian. The two main ways to proceed have been to either impose the modular symmetry, or to derive it from e.g. a compactified torus. In this work, using the latter approach, we obtain a simplified version of Yukawa couplings, which are given by the overlap integral of the Dirac zero-mode wavefunctions. Using Euler’s Theorem, we derive closed form analytic expressions for these Yukawa couplings that are valid for arbitrary magnetic flux parameters. This form is not only simple, but also has the advantage of making the modular transformations of Yukawa coupling more transparent.
We consider an explicit effective field theory example based on the Bousso-Polchinski framework with a large number N of hidden sectors contributing to supersymmetry breaking. Each contribution comes from four form quantized fluxes, multiplied by random couplings. The soft terms in the observable sector in this case become random variables, with mean values and standard deviations which are computable. We show that this setup naturally leads to a solution of the flavor problem in low-energy supersymmetry if N is sufficiently large. We investigate the consequences for flavor violating processes at low-energy and for dark matter.
The R-parity violating decays of Wino chargino, Wino neutralino and Bino neutralino LSPs are analyzed within the context of the B − L MSSM “heterotic standard model”. These LSPs correspond to statistically determined initial soft supersymmetry breaking parameters which, when evolved using the renormalization group equations, lead to an effective theory satisfying all phenomenological requirements, including the observed electroweak vector boson and Higgs masses. The explicit decay channels of these LSPs into standard model particles, the analytic and numerical decay rates and the associated branching ratios are presented. The decay lengths of these RPV interactions are discussed. It is shown that the vast majority of these decays are “prompt”, although a small, but calculable, number correspond to “displaced vertices” of various lengths. The relation of these results to the neutrino hierarchy, either normal or inverted, is discussed in detail.
The strongly coupled heterotic M-theory vacuum for both the observable and hidden sectors of the B − L MSSM theory is reviewed, including a discussion of the “bundle” constraints that both the observable sector SU(4) vector bundle and the hidden sector bundle induced from a single line bundle must satisfy. Gaugino condensation is then introduced within this context, and the hidden sector bundles that exhibit gaugino condensation are presented. The condensation scale is computed and found to be low enough to be compatible with the energy scales available at the LHC.
The general U$(1)_X$ extension of the Standard Model (SM) is a well-motivated scenario with plenty of new physics options. Anomaly freedom requires the addition of three generations of SM-singlet right-handed neutrinos (RHNs), which naturally generate the light neutrino masses through the seesaw mechanism and lead to interesting phenomenological aspects of the model. In addition, the model contains a beyond-the-SM (BSM) neutral gauge boson, $Z^\prime$, which interacts with the SM and BSM particles and produces a variety of new-physics-driven signatures. After anomaly cancellation, the U$(1)_X$ charges of the particles are expressed in terms of those of the SM Higgs doublet and the SM-singlet scalar, which allows us to study the interactions of the fermions with the $Z^\prime$. In this work we investigate the pair production of the different charged fermions through photon, $Z$ and $Z^\prime$ exchange at an electron-positron ($e^-e^+$) collider. The angular distributions and the forward-backward ($\mathcal{A}_{FB}$), left-right ($\mathcal{A}_{LR}$) and left-right forward-backward ($\mathcal{A}_{LR,FB}$) asymmetries of the different charged fermion pair production processes show substantial deviations from the SM results.
In certain extensions of the Standard Model (SM), the interactions between the new scalars and the SM Higgs can cause the electroweak (EW) symmetry to remain broken at temperatures well above the electroweak scale. Fermion-induced EW symmetry non-restoration (EWSNR) has also been studied in the context of effective field theories, where EWSNR is linked to non-renormalizable interactions and therefore occurs only below a specific cutoff temperature. In this talk, I will introduce UV-complete models with new fermions in which the EW symmetry remains unrestored at high temperatures. In these models, fermion-induced EWSNR is not limited by a cutoff temperature because some of the heavy fermions are always decoupled from thermal equilibrium at high temperatures as a consequence of their mass mechanisms. I will then identify the parameter space that satisfies the theoretical (stability of the effective potential, perturbative unitarity, thermal equilibrium conditions) and experimental constraints. Within this parameter space, I will examine the novel thermal histories of these models and their phenomenological implications.
The possibility that tiny violations of Lorentz invariance may occur in nature and be detectable with existing technology has been intensely pursued for over two decades. Despite there being no indication for Lorentz violation, many potential signatures, particularly in the QCD and electroweak sectors, remain critically unexamined. Recent theoretical work on Lorentz violation grounded in effective theory has produced an abundance of novel collider observables amenable to sidereal-time analyses. In this talk, I discuss the prospects for studying quark-sector operators contributing to deep inelastic scattering and the Drell-Yan process at existing and future colliders.
Lepton number violation (LNV) is a very attractive research topic for theoretical and experimental physicists due to its implications beyond the Standard Model. It provides feasible theoretical explanations for several open questions in particle physics (e.g., the origin of neutrino mass) and also has a rich phenomenology at different energy scales. We explore the underlying connections between neutrinoless double $\beta$-decay ($0\nu\beta\beta$) experiments, hadron colliders, and cosmology observations. In the context of simplified models, we show that future collider and $0\nu\beta\beta$ experimental results may complement each other.
Although very successful, the Standard Model (SM) of particle physics fails to explain several observations and puzzles. In particular, neutrino data cannot be accommodated within the SM. Also, the origin of the observed hierarchies between the charged fermion masses and the CKM matrix elements remains unexplained within the SM. We consider a simple extension by a non-anomalous U(1) flavor symmetry, which gives a natural explanation of fermion flavor and, upon including right-handed neutrinos, yields a successful (and very specific) neutrino oscillation scenario. Other interesting properties and phenomenological implications of the proposed model will also be discussed.
We study a model which generates Majorana neutrino masses at tree level via a low-energy effective operator of mass dimension 9. The introduction of such a higher-dimensional operator brings the lepton number violating mass scale down to the TeV range, making such a model potentially testable at present or near-future colliders. This model possesses several new $SU(2)_L$ fermionic multiplets, in particular three generations of triplets, quadruplets and quintuplets, and thus a rich phenomenology at the LHC. Noting that lepton flavour violation arises very naturally in such a setup, we put constraints on the Yukawa couplings and heavy fermion masses using the current experimental bounds on lepton flavour violating processes. We also obtain 95% CL lower bounds on the masses of the triplets, quadruplets and quintuplets using a recent CMS search for multilepton final states with 137 fb$^{-1}$ of integrated luminosity at 13 TeV centre-of-mass energy. The possibility that the heavy fermions could be long-lived, leaving disappearing charged-track signatures or displaced vertices at future colliders such as the LHeC, FCC-he, MATHUSLA, etc., is also discussed.
We revisit a discussion of one possible way to search for lepton flavor violation (LFV), muonium-antimuonium oscillations. This process violates muon lepton number by two units and could be sensitive to the types of beyond the standard model physics that are not probed by other types of LFV processes. Using techniques of effective field theory, we calculate the mass and width differences of the mass eigenstates of muonium. We argue that its invisible decays give the parametrically leading contribution to the lifetime difference and put constraints on the scales of new physics probed by effective operators in muonium oscillations.
Lepton-flavor violating transitions provide excellent tools to probe physics beyond the Standard Model (BSM). Processes such as radiative muon decays or muon conversion on nuclei probe a variety of different operators. We point out that Rayleigh operators that contribute to muon conversion can also be probed in the much simpler environment of e+e- collisions. We report on the computation of short- and long-distance contributions to those processes.
We show that assuming flavour violation in the first two generations of sfermions in the decoupling limit leads to interesting consequences for proton decay. Assuming the decoupled sfermions lie within 30 TeV, the decay mode $p \to e^+ \pi^0$, which would otherwise lie beyond the sensitivity of DUNE and Hyper-Kamiokande, is brought within the reach of those experiments. The dominant decay mode, $p \to K^+ \bar \nu_e$, which would essentially rule this model out for this range of masses, is now able to survive and, interestingly, can be further explored at DUNE and Hyper-Kamiokande. Finally, partial decoupling has interesting consequences for the mode $p \to K^+ \bar \nu_\tau$.
The Mu2e experiment at Fermilab is looking for the neutrinoless conversion of a muon to an electron. The experiment requires an extremely efficient Cosmic Ray Veto (CRV) to detect cosmic muons so that they cannot be confused with a genuine conversion signal. Noise generated by neutrons and gamma rays from muon beam production and transport can also challenge the operation of the CRV and create experimental dead time, making it harder to study actual muon events. To help maintain adequate efficiency and minimize dead time, machine learning is being used to improve the rejection efficiency and noise event classification. The model is a neural network, using Monte Carlo dropout for error analysis and better prediction, built with the Keras library in Python, and is trained on a simulated dataset of noise and cosmic muon events generated using Geant4.
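As a minimal, non-authoritative Python sketch of the Monte Carlo dropout approach mentioned above: the layer sizes, dropout rate, and number of input features are illustrative assumptions, not the Mu2e configuration, and the classifier simply separates cosmic-muon events from noise.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_features = 16  # assumed number of input variables per CRV event

inputs = tf.keras.Input(shape=(n_features,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dropout(0.2)(x)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # P(cosmic muon) vs. noise

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def mc_dropout_predict(model, x, n_samples=50):
    # Keep dropout active at inference by calling the model with training=True,
    # then average the stochastic passes for a prediction and its spread.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)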
The IceCube Neutrino Observatory is designed to observe neutrinos interacting deep within the South Pole ice. It consists of 5,160 digital optical modules, which are arrayed over a cubic kilometer from 1,450 m to 2,450 m depth. At the lower center of the array is the DeepCore subdetector, which has a denser configuration that lowers the observable energy threshold to about 5 GeV and creates the opportunity to study neutrino oscillations with low energy atmospheric neutrinos. A precise reconstruction of neutrino direction is critical in the measurements of oscillation parameters. In this presentation, I will discuss the direction reconstruction of GeV-scale neutrinos in IceCube by using a convolutional neural network (CNN) and compare the result to that of the current likelihood-based reconstruction algorithm.
The IceCube Neutrino Observatory detects atmospheric and astrophysical neutrinos using a cubic kilometer of ice instrumented with optical sensors at the South Pole. Neutrinos are detected using these sensors which record the cone of light from Cherenkov radiation, emitted by charged particles moving faster than the speed of light in ice, allowing the event vertex of neutrino interactions to be reconstructed. Low energy events are difficult to detect in IceCube because the detector is sparse and there is less Cherenkov radiation emitted, so optimized reconstruction methods are required. Reconstructing the event vertex in particular is important to ensure that the events are contained in the detector, and to allow us to remove background atmospheric muons and other noise. Current and past reconstruction methods have been likelihood-based, however, these methods are computationally intensive. We utilized a Convolutional Neural Network, which has proven to be faster than the likelihood-based methods, and has comparable resolution for vertex reconstruction.
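As a rough illustration of a CNN-based vertex regression in the spirit of the study above (not the collaboration's network): the input shape, which assumes the detector response has been binned into an image-like tensor of per-sensor pulse summaries, and all layer choices are invented for this sketch.

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(10, 60, 5))           # assumed strings x sensors x pulse summaries
x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
vertex = layers.Dense(3)(x)                          # regress (x, y, z) in detector coordinates

model = tf.keras.Model(inputs, vertex)
model.compile(optimizer="adam", loss="mse")          # train against the true simulated vertex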
Neutrinos offer a variety of insights into Standard Model physics that are not yet understood, including flavor oscillations and the neutrino mass ordering. One instrument being used to study neutrinos is the IceCube South Pole Neutrino Observatory, a cubic kilometer-scale Cherenkov detector over 1.5 km below the South Pole. An extension, the IceCube-Upgrade, is currently under development and is designed to enhance the detector’s low-energy performance. The DOMs detect Cherenkov radiation from neutrino interactions within the ice. Using features of the recorded light, such as arrival time and intensity, we can reconstruct neutrino properties such as energy and direction. Reconstructing neutrino events in IceCube is difficult at lower energies (below 100 GeV) due to both the lower number of Cherenkov photons produced during interactions, as well as the large spacing between DOMs, which is optimized for higher-energy events. One way to reconstruct these events is with neural networks, specifically Recurrent Neural Networks (RNNs). RNNs excel at handling data with a sequential relationship such as time, which makes them a great candidate for reconstructing particle interactions. This study highlights the results of an RNN trained to reconstruct the energy and direction of low-energy neutrinos using IceCube-Upgrade simulation; we also provide a comparison to an existing likelihood-based reconstruction.
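A minimal sketch of an RNN regressor of the kind described above, under assumed inputs: each event is a zero-padded sequence of recorded pulses (time, charge, sensor position), and the padded length and feature layout are illustrative, not the IceCube-Upgrade data format.

import tensorflow as tf
from tensorflow.keras import layers

max_pulses, n_pulse_features = 150, 5   # assumed padded sequence length and per-pulse features

inputs = tf.keras.Input(shape=(max_pulses, n_pulse_features))
x = layers.Masking(mask_value=0.0)(inputs)          # ignore zero-padded entries
x = layers.LSTM(64, return_sequences=True)(x)
x = layers.LSTM(64)(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(4)(x)                        # energy plus a 3-vector for direction

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")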
In recent years, deep learning has played an emerging role in event reconstruction for neutrino experiments using liquid argon TPCs (LArTPCs), a high-precision particle imaging technology. Several algorithms have been developed to infer the 3D location of charge depositions in the detector. Furthermore, the development of 2D pixel-readouts naturally provides 3D positions. Therefore, there is a growing need for reconstruction algorithms that work on 3D image data. We report on our effort to develop a 3D particle instance identifier based on an extension of the Mask Region-Convolutional Neural Network (Mask-RCNN). Mask-RCNN, originally an algorithm for 2D images, is widely used in computer vision problems and has three main goals: to identify the location of each object in an image using a bounding box, to classify an object in each bounding box, and to cluster each object by determining its pixel boundaries using a mask. Inspired by the conversion to 3D and our sparse dataset, we introduce a sparse bounding box proposal method that greatly reduces inefficiencies associated with box predictions in 3D. We also describe our future plans to continue development on the masking network to explore using this architecture for particle clustering.
In high energy physics, Machine Learning (ML) has been applied to a broad range of problems: from jet tagging to particle identification, from the separation of signal from background to fast simulation of event data, to name a few. In this presentation, ML algorithms and techniques are explored to form lepton pairs (di-leptons) in a dark fermionic model.
In this model, the final-state leptons are the products of off-shell dark $Z$ ($Z_{D}$) bosons. This presents a unique challenge since the identification of their parent particle cannot be done by a simple invariant mass comparison calculation. Machine learning can greatly simplify this imposing task of matching final-state muons with their associated parent particles by exposure to correctly matched events and invalid permutations of these muons.
Monte Carlo-simulated proton-proton collision data at $\sqrt{s}$ = 13 TeV, corresponding to an integrated luminosity of 35.9 $fb^{-1}$, are preprocessed, and the resulting muon observables such as $\phi$, $\eta$, $p_{T}$, and charge, as well as higher-level, calculated observables, such as $\Delta R$ and the invariant mass of the formed dimuon pair, are used as input features to ML algorithms, including $\texttt{XGBoost}$, deep neural networks, boosted decision trees, support vector machines, as well as ensemble methods of the aforementioned models.
These models are trained and hyperparameter tuning is performed to achieve the highest classification accuracy. The performance of these models is compared and discussed. The methods presented here can be applied to both MC-generated and real data from the LHC, and to a wide range of problems in which efficiently matching final-state particles to a decaying off-shell parent particle is necessary for proper event reconstruction.
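As a hedged sketch of one of the approaches listed above, the following Python snippet trains an XGBoost classifier to tag correct versus wrong muon pairings. The feature layout and the helper load_pairings() are hypothetical placeholders for the preprocessed Monte Carlo dimuon candidates; hyperparameter values are illustrative only.

import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: per-pair features (e.g. phi, eta, pT, charge of each muon, dR, dimuon mass)
# y: 1 if the two muons come from the same Z_D, 0 for a wrong permutation
X, y = load_pairings()   # hypothetical helper returning numpy arrays

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = xgb.XGBClassifier(
    n_estimators=500,
    max_depth=4,
    learning_rate=0.05,
    eval_metric="logloss",
)
clf.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)

print("pairing accuracy:", accuracy_score(y_test, clf.predict(X_test)))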
We study the simplest viable dark matter model with an additional neutral real singlet scalar, together with vectorlike singlet and doublet fermions. We find a considerable enhancement in the allowed region of the scalar dark matter parameter space in the presence of these fermions. This model can also accommodate tiny neutrino masses and mixing at the one-loop level through the radiative seesaw mechanism. A dilepton plus missing transverse energy signature arising from the new fermionic sector can be observed at the Large Hadron Collider (LHC) while satisfying the relic density and other theoretical and experimental bounds. We perform such an analysis for a benchmark point in the context of the 14 TeV LHC with a future integrated luminosity of 3000 ${\rm fb^{-1}}$.
A search is presented for new particles in proton-proton collisions at $\sqrt{s}$ = 13 TeV at the LHC, using events with energetic jets and large missing transverse momentum. The analysis is based on a data sample corresponding to an integrated luminosity of 101 $\mathrm{fb}^{-1}$, collected in 2017–2018 with the CMS detector. Separate categories are defined for events with narrow jets from initial-state radiation and with large-radius jets consistent with a hadronic decay of a W or a Z boson. Novel machine learning techniques are used to identify hadronic W and Z boson decays, allowing for a significant improvement of the analysis sensitivity compared with earlier results. The analysis is combined with an earlier search based on a data sample corresponding to an integrated luminosity of 36 $\mathrm{fb}^{-1}$, collected in 2016. No significant excess of events is observed with respect to the standard model background expectation, as determined from control samples in data. The results are interpreted in terms of limits on the branching fraction of an invisible decay of the Higgs boson, as well as constraints on simplified models of dark matter, on first-generation scalar leptoquarks decaying to quarks and neutrinos, and on gravitons in models with large extra dimensions. Several of the new limits are the most restrictive to date.
The visible content of the Universe is made up of baryons, with almost no antibaryons, so a baryogenesis mechanism is required to generate the baryon asymmetry, and it is widely believed that successful baryogenesis requires extending the Standard Model. There is also strong evidence from astrophysical observations, such as the rotation curves of galaxies, gravitational lensing, and the Bullet Cluster, for an invisible component of the universe, dark matter (DM). This analysis searches for dark matter production in a baryon number violating (BNV) process in proton-proton collisions. The data sample, collected by the CMS experiment during the 2016-2018 data taking of the LHC, corresponds to an integrated luminosity of 137 fb$^{-1}$ at a center-of-mass energy of 13 TeV. The events are required to contain missing transverse momentum and one jet, with an additional b-tagged jet arising from initial-state gluon splitting. The results are interpreted in the context of a simple TeV-scale model of BNV in which a heavy colored scalar mediator is produced in a down-type quark interaction (b+s or b+d) and decays into DM and one up-type quark (u or c).
Pair production of dark photons is predicted in models of supersymmetry. When both dark photons decay into muon pairs, a trigger selection with three muons can be highly efficient for GeV-scale dark photons. We report the results of a simulation study of the CMS detector for p-p collisions at 14 TeV with an average pile-up (interactions per bunch crossing) of 200. In this study, the dark photons have mass 1 GeV and are promptly produced (originating from the primary vertex). An efficiency of 90% is obtained for events in which the muon with the third largest pT is in the range 5-10 GeV. The efficiency approaches 100% when the third largest muon pT exceeds 40 GeV. The tri-muon trigger provides large increases in efficiency over single and double muon triggers for the third-largest muon pT in the range 5 GeV to 40 GeV.
We present a search for dark matter production in events with missing transverse momentum and a Higgs boson decaying to a photon pair using 139 fb$^{-1}$ of $pp$ collisions recorded by the ATLAS experiment at a center-of-mass energy of 13 TeV. The search considers three simplified dark matter models which include either vector or pseudo-scalar mediators and predict final states with a pair of dark matter candidates and a Higgs boson. Events are selected using a combination of missing transverse momentum cuts and a boosted decision tree (BDT) trained to separate dark matter signals from background. This talk will focus on the optimization of the BDT training and categorization procedure, resulting in a final selection which provides improved sensitivity to all considered signal models. No significant excess is observed and limits are set on various model parameters such as the mass of the dark matter candidate.
Some new physics extensions of the Standard Model predict that the 125 GeV Higgs boson can be a portal to invisible dark matter candidates through its decay. Direct searches for Higgs boson decay to invisible particles are a convenient way to explore this scenario. I present the results of a search for invisible decays of the Higgs boson produced through the vector boson fusion channel (+ low pT photon) in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector.
Gravitational-wave (GW) detections are rapidly increasing in number, enabling precise statistical analyses of the population of compact binaries. In this talk I will show how these population analyses can serve not only to constrain the astrophysical formation channels, but also to learn about cosmology. The three key observables are the number of events as a function of luminosity distance, the stochastic GW background of unresolved binaries, and the location of any feature in the source mass distribution, such as the expected pair instability supernova (PISN) gap. Given data from LIGO-Virgo observations, I will present constraints on cosmological modifications of gravity. I will also discuss future prospects for measuring $H_0$ given a possible population of black holes above the PISN gap. These novel tests of the standard cosmological model require GW data only and will become increasingly relevant as GW catalogs grow, especially if multi-messenger events remain elusive.
Cosmic string networks appear generically in many natural extensions of the Standard Model. Cosmic strings are one-dimensional topological defects which can be formed in grand-unified-scale phase transitions in the early universe and are also predicted to form in the context of string theory. The main mechanism for a network of Nambu-Goto cosmic strings to lose energy is the production of loops and the subsequent emission of GWs, thus offering an experimental signature for the existence of cosmic strings. The unresolvable GW bursts produced by cosmic string loops of different sizes and at different cosmic times overlap with each other and form a stochastic GW background (SGWB). We performed parameter estimation in three cosmic string models using the isotropic stochastic search results from the third Advanced LIGO-Virgo observing run. We also consider a new source component in the model, namely kink-kink collisions, using more realistic model parameters.
I will discuss gravitational wave signals sourced by hydrodynamic and hydromagnetic turbulent sources that might have been present in the early universe at epochs such as the electroweak and quantum chromodynamic (QCD) phase transitions. I will consider various models of primordial turbulence: purely hydrodynamical turbulence induced by fluid motions, and magnetohydrodynamic (MHD) turbulence dominated either by kinetic or magnetic energy, both with and without helicity. I will also address the generation of circularly polarized gravitational waves by parity violating turbulent sources. I will present our results of numerical modeling of the early-universe turbulence and the resulting gravitational waves, and I will review the signal detection prospects through space-based laser interferometers such as the Laser Interferometer Space Antenna (LISA) and Pulsar Timing Arrays (PTAs). In particular, I will discuss the possibility of explaining the recent observational evidence from the NANOGrav collaboration for a stochastic gravitational wave background in the nanohertz frequency range through hydrodynamic and hydromagnetic turbulence at the QCD energy scale.
Supermassive black hole binary mergers generate a stochastic gravitational wave background detectable by pulsar timing arrays. While the amplitude of this background is subject to significant uncertainties, the frequency dependence is a robust prediction of general relativity. We show that the effects of new forces beyond the Standard Model can modify this prediction and introduce unique features into the spectral shape. In particular, we consider the possibility that black holes in binaries are charged under a new long-range force, and we find that pulsar timing arrays are capable of robustly detecting such forces. Supermassive black holes and their environments can acquire charge due to high-energy particle production or dark sector interactions, making the measurement of the spectral shape a powerful test of fundamental physics.
Triple gauge boson production is an important class of processes at the LHC. It allows measurements that test the quartic gauge couplings in the Standard Model and constrain non-standard gauge couplings in the Standard Model effective field theory (SMEFT). We perform computations of the NLO EW and QCD corrections to $W^{+}Z\gamma$ production with leptonic decays in the SM at the LHC. The considered process is $p\;p\rightarrow e^{+}\;\nu_{e}\;\mu^{+}\;\mu^{-}\;\gamma$. We study the impact of the corrections on the total and differential cross sections. We also study the tree-level effects of individual dimension-eight operators in SMEFT. The corresponding unitarity bounds are derived from partial-wave expansions. By showing the interplay between the NLO corrections in the SM and the effects of dimension-eight operators in SMEFT, we conclude that the NLO EW corrections are indispensable for testing the gauge couplings in the SM and for setting precise limits on the dimension-eight operators in SMEFT.
Precision measurements and searches for new phenomena in the Higgs sector are among the most important goals in particle physics. Experiments at the Future Circular Collider (FCC) are ideal for studying these questions. Electron-positron collisions (FCC-ee) at energies up to 365 GeV provide the ultimate precision through studies of the Higgs boson couplings, mass, total width, and CP parameters, as well as searches for exotic and invisible decays.
After the triumph of the Higgs boson discovery at the CERN Large Hadron Collider, there is growing interest in studying the Higgs properties in detail and in searching for physics beyond the Standard Model (SM). A multi-TeV lepton collider provides a clean experimental environment for both Higgs precision measurements and the discovery of new particles. In high-energy leptonic collisions, the collinear splittings of the leptons and electroweak (EW) gauge bosons are the dominant phenomena, and they can be well described by the parton picture. In this picture, all SM particles are treated as partons radiated off the beam particles, and electroweak parton distribution functions (EW PDFs) provide the proper description of partonic collisions in the initial state. In our work, both the EW and the QCD sectors are included in the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) formalism to perturbatively resum the potentially large logarithms arising from initial-state radiation (ISR). I will show results for QCD jet production as well as other typical SM processes at a possible high-energy electron-positron collider and a possible high-energy muon collider, obtained using these PDFs.
The recently proposed MUonE experiment at CERN aims to provide a novel determination of the leading-order hadronic contribution to the muon anomalous magnetic moment through the study of elastic muon-electron scattering at relatively small momentum transfer. The anticipated accuracy of the order of 10 ppm requires high-precision predictions, including all the relevant radiative corrections. To aid this effort, the theoretical formulation of the fixed-order NNLO QED corrections is described with complete mass effects.
We study the electroweak phase transition and the resulting GWs in a CP-conserving 2HDM with a softly broken $Z_2$ symmetry. We analyse the parameter space of both the type I and type II 2HDM without relying on any decoupling limit. We observe that $M_{H^\pm} \approx M_H$ or $M_{H^\pm} \approx M_A$ favours a strongly first-order electroweak phase transition (SFOEWPT) in the 2HDM. In addition to di-Higgs production, the scalar-to-fermion decay channels are also important for probing the phase transition behaviour in the 2HDM. We also comment on the shape of the potential leading to an SFOEWPT in the 2HDM.
A common assumption about the early universe is that it underwent an electroweak phase transition (EWPT). Although in the Standard Model (SM) the electroweak symmetry is restored through a smooth crossover, electroweak baryogenesis requires a strongly first-order PT, pointing to new physics beyond the SM. The simplest extension of the SM is to add a real singlet field, which allows a first-order EWPT (FOEPT) to occur.
Starting with the most general Higgs+singlet Lagrangian, we fixed four of its coupling constants as functions of the three quartic couplings, the singlet and Higgs masses, and the vacuum expectation value, whose ranges of values are better motivated experimentally. We ran a Monte Carlo scan over these five free parameters, requiring a FOEPT with a strength of $\frac{v_c}{T_c}>1.3$. These points were then passed through the FindBounce package to calculate the nucleation temperature. The resulting parameter space was studied; most notably, we observed the ratio of the triple Higgs coupling to its SM value $\left(\kappa=\lambda_3/\lambda^{SM}_3\right)$ take on values between 0.5 and 2.7. The possible values of $\lambda_3$ could serve as motivation for future collider experiments to improve sensitivity in this range when examining the cross section of $pp\rightarrow hh$ versus $\lambda_3$.
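A schematic of the scan logic described above, assuming hypothetical parameter ranges and a placeholder phase_transition_strength function standing in for the finite-temperature potential calculation; the actual analysis uses the FindBounce package, not this sketch.

    # Schematic Monte Carlo scan over the five free parameters of the
    # Higgs+singlet potential; the physics function below is a hypothetical
    # placeholder, not the actual effective potential or the FindBounce package.
    import numpy as np

    rng = np.random.default_rng(1)

    def phase_transition_strength(params):
        # Placeholder: in the real analysis this would be v_c / T_c computed
        # from the finite-temperature effective potential for these parameters.
        return rng.uniform(0.0, 3.0)

    accepted = []
    for _ in range(100000):
        params = {
            "lam_h":  rng.uniform(0.1, 0.2),     # Higgs quartic (illustrative range)
            "lam_s":  rng.uniform(0.0, 4.0),     # singlet quartic
            "lam_hs": rng.uniform(-4.0, 4.0),    # portal coupling
            "m_s":    rng.uniform(100., 1000.),  # singlet mass [GeV]
            "v_s":    rng.uniform(0., 500.),     # singlet vev [GeV]
        }
        if phase_transition_strength(params) > 1.3:   # require a strong FOEPT
            accepted.append(params)
            # Accepted points would then be passed to FindBounce to compute
            # the nucleation temperature.

    print(f"{len(accepted)} points pass the v_c/T_c > 1.3 requirement")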
The goal of the Mu2e experiment is to test the conservation of charged lepton flavor with a search for neutrinoless muon to electron conversion in the Coulomb field of a nucleus. To extend the sensitivity of this measurement by four orders of magnitude beyond present limits, the Mu2e design incorporates several innovations to produce an intense muon beam, detect signal electrons, and reduce sources of background. This talk will give an overview of the theory and physical significance of the conversion measurement, and will discuss the physics concepts implemented in the beam sequence and primary detector components. Particle progression through the Mu2e apparatus begins with a pulsed proton beam that interacts with nuclei to produce pions, which decay to muons as they travel through the production and transport solenoids, guided by strong magnetic fields. When particles from the resulting muon beam are stopped by a target in the detector solenoid and trapped in atomic orbitals, decay via neutrinoless muon-to-electron conversion would violate charged lepton flavor and indicate the involvement of physics beyond the Standard Model. The Mu2e tracker, a low-mass array of straw drift cells, will identify high-energy signal electrons against a background of electrons from muon decay in orbit by precisely measuring their trajectories through a magnetic field, with additional energy and timing measurements made by the electromagnetic calorimeter. Besides decay in orbit electrons, the other dominant source of background comes from cosmic ray particle interactions, identified and vetoed by layers of active shielding covering the detector. Components of the Mu2e experiment are at various stages of construction and testing, and this talk will conclude with an overview of the current status.
The Mu2e experiment is designed to search for New Physics in the extremely rare process of neutrinoless muon-to-electron conversion. The Mu2e sensitivity to New Physics relies heavily on suppressing all background sources to a fraction of an event. The dominant background at Mu2e originates from cosmic ray (CR) muons that interact or decay in the detector solenoid and produce a signal-like electron. Mu2e expects to observe one CR-induced background event per day. In order to reach the proposed sensitivity, Mu2e is designed to suppress the CR background by 4 orders of magnitude using the Cosmic Ray Veto detector, which covers over 300 $m^2$. A precise CR background prediction is an essential component of Mu2e's success. We will report on CR background estimates at Mu2e modeled with the CRY cosmic-ray generator and a detector response simulated with the Geant4 framework.
The Mu2e experiment at Fermilab seeks to observe the ultra-rare conversion of a muon to an electron in a nuclear field, which produces a monoenergetic electron with an energy close to the muon rest mass. This process violates charged lepton flavor conservation in the Standard Model and is a clear signal for New Physics if observed. Mu2e's most dangerous source of background is conversion-like events produced by cosmic rays, which can interact with the apparatus and generate fake signal candidates. Cosmic rays are mitigated by an active cosmic ray veto (CRV) detector that surrounds the apparatus. We will briefly summarize the backgrounds to the conversion signal and then dive into how Mu2e detects, reconstructs, and rejects cosmic ray events, including a discussion of some recent work done to improve the timing reconstruction of the cosmic rays. In this discussion, we study the impact of the signal propagation time through the long (up to 6 m) scintillating CRV counter bars and of the time-of-flight corrections on the cosmic rejection efficiency. These newly introduced corrections will be used to improve the cosmic ray vetoing and optimize the event selection.
The Mu2e experiment will search for neutrinoless, coherent conversion of a muon into an electron in the presence of an aluminum nucleus. This conversion process is an example of charged lepton flavor violation (CLFV), which has never been observed experimentally. Mu2e is designed to precisely measure the 105 MeV/c conversion electron (CE) momentum in a uniform 1 T magnetic field. We investigate calibrating the Mu2e absolute momentum scale using dedicated calibration runs that detect stopped positive pions decaying to a positron and a neutrino inside a reduced magnetic field of 0.7 T. Such events produce monoenergetic positrons at 69.8 MeV, allowing for a potentially ideal calibration signal.
The Mu2e experiment at Fermilab will search for charged lepton flavor violation (CLFV) via muon to electron conversion, with a goal of improving the previous upper limit by four orders of magnitude and reaching unprecedented single-event sensitivities. The signal of CLFV conversion is a ~105 MeV electron, which is detected using a high-precision straw tracker. Protons produced by muon capture in the stopping target can create highly ionizing straw hits, and these hits constitute a background that can impact reconstruction efficiency. In this talk, I will discuss improving the rejection of this background by replacing a simple cut on the energy deposited in the straw with a TMVA-based machine learning algorithm. In particular, it is found that a neural network using the ADC waveform shape and Time-Over-Threshold significantly improves both the signal electron acceptance and proton rejection efficiency.
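As a toy illustration of replacing a single energy-deposition cut with a small neural-network classifier on straw-hit features, here is a minimal sketch; the features, distributions, and classifier choice (scikit-learn rather than TMVA) are assumptions for illustration only.

    # Toy sketch: classify straw hits as electron-like vs proton-like using a
    # small neural network instead of a single cut on deposited energy.
    # Features and distributions are invented for illustration only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 10000
    # Electron hits: low energy deposition, short time-over-threshold.
    ele = np.column_stack([rng.normal(1.5, 0.5, n), rng.normal(30, 8, n)])
    # Proton hits: highly ionizing, larger deposition and longer ToT.
    pro = np.column_stack([rng.normal(6.0, 2.0, n), rng.normal(60, 15, n)])
    X = np.vstack([ele, pro])
    y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = electron-like (keep)

    clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
    clf.fit(X, y)

    # Compare with a simple cut on deposited energy alone:
    auc_nn  = roc_auc_score(y, clf.predict_proba(X)[:, 1])
    auc_cut = roc_auc_score(y, -X[:, 0])
    print(f"toy AUC, NN on (E_dep, ToT): {auc_nn:.3f}  vs  E_dep cut only: {auc_cut:.3f}")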
One major puzzle in particle physics is the replication of lepton families. There appears to be a 'family symmetry' which prevents charged lepton flavor violating (CLFV) processes. If a continuous global lepton family symmetry exists, spontaneous symmetry breaking leads to associated Goldstone bosons, 'familons'. The familon can acquire mass if this symmetry is also explicitly broken. Such a family symmetry can be tested by searching for the beyond-the-Standard-Model (BSM) muon decay
$\mu^{\pm} \to f + e^{\pm}$
For $\mu$ decay at rest, the signal would be a mono-energetic electron with its energy determined by the familon mass. Various experiments have established branching-ratio limits of $10^{-5}$-$10^{-6}$ for $\mu^{+} \to f + e^{+}$ over the familon mass range.
Mu2e has an opportunity to search for familon production using data from a short momentum calibration run at the experiment's start. The goal of the run is a high-precision measurement of the (positron) Michel spectrum, to study both the momentum scale and the momentum resolution of the detector near the decay positron's kinematic endpoint ($m_{\mu}c/2 \sim 53$ MeV/c). This same study will have acceptance for familon decays in the momentum range 35-53 MeV/c.
A familon signal would appear as a line in the reconstructed Michel spectrum. A number of changes to the standard Mu2e running conditions need to be implemented to carry out the familon search. Assuming the proton beam intensity is reduced to 1/10 of nominal, an average luminosity of $2.5\times10^{8}$ $\mu^{+}$ stops per second would allow existing branching-ratio limits to be surpassed by $\sim$2 orders of magnitude in one day of continuous data-taking. However, such a beam intensity at 50% magnetic field requires the charged-particle tracker to operate outside its Mu2e parameter set, which is now under study. Nonetheless, even if the proton beam intensity is reduced to 1/1000 of nominal, the existing branching-ratio limits would be surpassed by an order of magnitude in one day of data collection. In both cases, limits are set over the familon mass range 0-40 MeV/$c^2$.
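A back-of-the-envelope check of the statistics quoted above; the signal acceptance used below is a placeholder assumption, not a Mu2e number.

    # Rough single-event-sensitivity arithmetic for the familon search,
    # using the stop rate quoted above; the acceptance is a placeholder.
    stop_rate  = 2.5e8     # mu+ stops per second at 1/10 nominal intensity
    seconds    = 86400     # one day of continuous data-taking
    acceptance = 0.1       # hypothetical signal acceptance (assumption)

    n_stops = stop_rate * seconds
    single_event_br = 1.0 / (n_stops * acceptance)
    print(f"mu+ stops in one day: {n_stops:.2e}")
    print(f"single-event branching-ratio sensitivity ~ {single_event_br:.1e}")
    # This raw statistical reach is far below the quoted goal; the achievable
    # limit is presumably set by acceptance, momentum resolution, and the
    # Michel-spectrum background rather than by the number of stops.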
The DUNE physics program primarily focuses on signals in the GeV energy range. In recent years, DUNE's potential as a low-energy experiment has been fruitfully explored, specifically regarding its sensitivity to signals as low as 5-10 MeV, such as those associated with supernova-burst and solar neutrinos. In this presentation I discuss the requirements and modifications that could extend DUNE's sensitivity to energies as low as 2 MeV and would enable us to further expand DUNE's physics program to searches for neutrinoless double-beta decay in xenon-doped liquid argon at the multi-ton scale. I will present the modifications we propose with corresponding sensitivity estimates for $m_{\beta\beta}$ measurements beyond the inverted-hierarchy region, and describe the rich and diverse R&D program that this research avenue would open for DUNE.
The Deep Underground Neutrino Experiment (DUNE) is an international project for precision neutrino physics. DUNE will consist of two detector complexes exposed to the world’s most intense neutrino beam. The Near Detector complex will sample the beam near the neutrino production target, at Fermilab. The Far Detector, comprised of four LArTPC modules each with 17-kton LAr mass, will be located 1300 km away in the Sanford Underground Research Facility in South Dakota. The high-intensity neutrino beam combined with DUNE’s highly capable multi-component Near Detector and massive high-resolution LArTPC Far Detector enable a variety of Beyond the Standard Model physics probes. These include discovery of new particles (sterile neutrinos, dark matter, heavy neutral leptons, etc.), precision tests of the neutrino mixing matrix including non-standard neutrino interactions, and the detailed study of rare processes (e.g. neutrino trident production). This talk will review these Beyond the Standard Model physics scenarios and discuss their prospects at DUNE.
Reactor experiments provide an excellent platform to investigate the atomic ionization effects induced by unexplored neutrino interaction channels. Including the atomic effects in our calculations, we study neutrino-electron scattering by reactor anti-neutrinos in low-energy electron recoil detectors such as Si/Ge in light of neutrino non-standard interactions with leptons. We find that the crystal structure in Si/Ge yields a sizable suppression of the neutrino-electron scattering cross section when compared to the free-electron approximation. We present our sensitivity results for the light vector and scalar mediator cases. The explanation of the excess in the recent XENON1T result can also be investigated at reactor experiments, since reactors have an energy flux profile similar to solar neutrinos, with characteristic neutrino energies <1 MeV.
We have analyzed new contributions to the muon anomalous magnetic moment in a class of models that generates a naturally large transition magnetic moment for the neutrino (needed to explain the XENON1T electron recoil excess). These models are based on an approximate $SU(2)_H$ symmetry that suppresses the neutrino mass while allowing for a large neutrino transition magnetic moment. We have shown that the new scalars present in the theory, with masses around $100$ GeV, can yield the right sign and magnitude for the muon $g-2$, which was recently confirmed by the Fermilab Muon $g-2$ experiment. Such a correlation between the muon $g-2$ and the neutrino magnetic moment is generic in models employing a leptonic family symmetry to explain a naturally large neutrino magnetic moment. We have also outlined various other experimental tests of these models at colliders. Results will be presented.
This talk presents a model of the electron-like excess observed by the MiniBooNE experiment, consisting of oscillations involving a new mass state, $\nu_4$, at $\mathcal{O}(1)$ eV and a high mass state, $\mathcal{N}$, at $\mathcal{O}(100)$ MeV that decays to $\nu+\gamma$ via a dipole interaction.
Short baseline oscillation data sets (omitting MiniBooNE appearance data) are used to predict the oscillation parameters. We simulate the production of $\mathcal{N}$ along the Booster Neutrino Beamline via both Primakoff upscattering ($\nu A \to \mathcal{N} A$) and Dalitz-like neutral pion decays ($\pi^0 \to \mathcal{N} \nu \gamma$).
The simulated events are fit to the MiniBooNE neutrino energy and visible scattering angle data separately to find a joint allowed region at 95% CL.
An example point in this region with coupling of $3.6 \times 10^{-7}$ GeV$^{-1}$, $\mathcal{N}$ mass of 394 MeV, oscillation mixing angle of $6\times 10^{-4}$ and mass splitting of $1.3$ eV$^2$ has $\Delta \chi^2/dof$ for the energy and angular fit of 15.23/2 and 37.80/2, respectively.
We show that one of the simplest extensions of the Standard Model, the addition of a second Higgs doublet, when combined with a dark sector singlet scalar, allows us to: i) explain the long-standing anomalies in the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE (MB) while maintaining compatibility with the null result from KARMEN, ii) obtain, in the process, a portal to the dark sector, and iii) comfortably account for the observed value of the muon $g-2$. Three singlet neutrinos allow for an understanding of observed neutrino mass-squared differences via a Type I seesaw, with two of the lighter states participating in the interaction in both LSND and MB. We obtain very good fits to energy and angular distributions in both experiments. We explain features of the solution presented here and discuss the constraints that our model must satisfy. We also mention prospects for future tests of its particle content.
T2K (Tokai to Kamioka) is a Japan-based long-baseline neutrino oscillation experiment designed to measure (anti-)neutrino flavor oscillations. A neutrino beam peaked around 0.6 GeV is produced in Tokai and directed toward the water Cherenkov detector Super-Kamiokande, which is located 295 km away. A complex of near detectors is located at 280 m and is used to constrain the flux and cross-section uncertainties. In 2014, T2K started a campaign to measure the phase $\delta_{CP}$, an unknown element of the Pontecorvo-Maki-Nakagawa-Sakata matrix, that can provide a test of the violation or conservation of the CP symmetry in the lepton sector. To achieve this goal, T2K is taking data with neutrino- and antineutrino-enhanced beams, investigating asymmetries in the electron neutrino and antineutrino appearance probabilities. The most recent results showed that the CP-conserving cases are excluded at 90% confidence level. One of the largest systematic uncertainties affecting neutrino oscillation measurements comes from limited knowledge of (anti-)neutrino-nucleus interactions. The T2K experiment has a wide range of programs measuring neutrino interaction cross sections using detectors in its near detector complex. In this talk an overview of the latest T2K neutrino oscillation and cross-section measurements is presented. An intense program of upgrades is ongoing and promises to improve the sensitivities of the experiment; it will be discussed in some detail, along with the future prospects of the experiment.
T2K has been accumulating data corresponding to $3.6\times10^{21}$ POT over the past 10 years. It has been studying neutrino oscillations by observing a disappearance of muon flavored (anti)neutrinos and the appearance of electron flavored (anti)neutrinos in an accelerator-generated neutrino beam sent across Japan. In particular, the collaboration has recently published the first substantial 3-sigma constraints on the CP-violating phase, $\delta_{CP}$, in an April 2020 Nature article. The results from this analysis have since been updated to include 34% more neutrino mode data and significant improvements to the neutrino interaction and flux models. This talk will present the analysis that led to these new results and discuss some future prospects for joint analyses between T2K and other experiments (NO$\nu$A and Super-Kamiokande) measuring neutrino oscillation parameters.
T2K (Tokai to Kamioka) is a long-baseline neutrino oscillation experiment situated in Japan with a baseline of 295 km and a neutrino beam with energy peaked at 600 MeV. The experiment can record data using either a mainly neutrino or mainly anti-neutrino beam, allowing studies of the difference between the oscillations of neutrinos and anti-neutrinos. In T2K, one powerful method to test the 3-flavour neutrino oscillation framework (known as the PMNS formalism) is to compare the disappearance of muon neutrinos and anti-neutrinos. In order to test the compatibility, we measure the oscillation parameters describing their disappearance ($\theta_{23}$, $\Delta m^{2}_{32}$) while allowing them to vary separately for neutrinos and antineutrinos. The compatibility with the PMNS framework is tested by comparing the fitted parameter values between the neutrinos and antineutrinos. For this study, we use the T2K run 1-10 data, corresponding to an exposure of 1.9664$\times 10^{21}$ POT in neutrino mode and 1.6345$\times 10^{21}$ POT in anti-neutrino mode.
We use the selected one-ring muon candidate events in both neutrino and anti-neutrino mode to perform the joint fit which constrains the wrong-sign background. The best-fit values for $\Delta m^{2}$ ($\Delta \bar{m}^{2}$) are $2.48\times 10^{-3}$ eV$^{2}$ ($2.53\times 10^{-3}$ eV$^{2}$), and for $\sin^{2}\theta_{23}$ are 0.468 (0.449). The analysis results are in agreement with the PMNS formalism.
The T2K long-baseline neutrino oscillation experiment has measured a first indication of leptonic CP violation. Reducing the systematic uncertainty on predicted events at the far site is an urgent priority for the collaboration. In 2022, the T2K near detector will be upgraded to reduce systematic uncertainties to enable higher precision measurements of neutrino oscillation phenomena. The primary neutrino target of the upgraded detector is the Super Fine Grained Detector (SFGD) - a two-ton solid scintillator detector comprised of optically isolated cubes 1 cm on a side. The SFGD is flanked by high-angle time-projection chambers and time-of-flight panels that provide for full polar angle coverage of outgoing muons produced in charged current interactions. These detector systems have improved timing resolution, 3D tracking capability and the ability to measure the energy of neutrons emerging from neutrino interactions with time-of-flight techniques. In the talk, we discuss the motivation and capabilities of this new detector system as well as the current status of its design and construction.
Long-baseline neutrino oscillation experiments such as T2K (Tokai-to-Kamioka) and DUNE (Deep Underground Neutrino Experiment) rely on models of neutrino interactions on nuclei. A major systematic uncertainty in these models comes from the blindness of the detectors to neutrino-induced neutrons in the final state. The 3D-projection scintillator tracker, which consists of a large number of 1 cm x 1 cm x 1 cm plastic scintillator cubes with three orthogonal wavelength-shifting fibers crossing each cube, is proposed as part of the near detectors of the T2K upgrade and of DUNE. Nanosecond timing resolution and fine granularity will allow the 3D-projection scintillator tracker to measure the neutron kinetic energy in neutrino interactions on an event-by-event basis. Two prototypes have been assembled and exposed to a neutron beam at Los Alamos National Laboratory in December 2019 and in 2020 to fully demonstrate the neutron detection capability and optimize the tracker design. In this presentation, the prototype detector assembly, the beamline setup, and the detector calibration using LED pulses and cosmic muons will be detailed.
Long-baseline neutrino oscillation experiments depend on detailed models of neutrino interactions on nuclei. However, these models constitute an important source of systematic uncertainty, partly due to the missing information on final-state neutrons in current detectors. As such, neutron information is desired in the near detectors of upcoming long-baseline neutrino experiments. Here, we propose a three-dimensional projection scintillator tracker to be used as a near detector component in next-generation long-baseline neutrino experiments such as the T2K upgrade and DUNE. Owing to its good timing resolution and fine granularity, this technology is capable of measuring the neutron kinetic energy in neutrino interactions and can provide valuable data for refining neutrino interaction models and for better reconstructing the neutrino energy. Neutron beam data were taken at Los Alamos National Laboratory (LANL) in both 2019 and 2020, with neutron energies ranging from 0 to 800 MeV, using two such prototype detectors. In order to demonstrate the neutron detection capability, a total neutron-scintillator cross section is measured with one of the prototypes and compared to external measurements. In this presentation, the details of the cross-section measurement and the treatment of the systematic uncertainties will be presented.
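For orientation, here is a minimal sketch of the two quantities central to such a beam test: the relativistic neutron kinetic energy from time-of-flight, and a total cross section extracted from beam attenuation. The flight path, timing, and target numbers are placeholders, not the LANL setup values.

    # Sketch: neutron kinetic energy from time-of-flight, and a total
    # cross section from transmission; all numbers are illustrative.
    import numpy as np

    M_N = 939.565   # neutron mass [MeV/c^2]
    C   = 0.299792  # speed of light [m/ns]

    def neutron_ke_from_tof(flight_path_m, tof_ns):
        """Relativistic kinetic energy [MeV] from flight path and time of flight."""
        beta = flight_path_m / (C * tof_ns)
        gamma = 1.0 / np.sqrt(1.0 - beta**2)
        return M_N * (gamma - 1.0)

    print(neutron_ke_from_tof(90.0, 400.0))   # ~480 MeV for these toy numbers

    def total_cross_section(n_in, n_out, number_density_cm3, thickness_cm):
        """sigma_tot [barn] from transmission T = n_out/n_in = exp(-n*sigma*x)."""
        sigma_cm2 = -np.log(n_out / n_in) / (number_density_cm3 * thickness_cm)
        return sigma_cm2 / 1e-24

    # Hypothetical transmission through a plastic-scintillator prototype:
    print(total_cross_section(1.0e5, 8.0e4, 5.0e22, 10.0))  # [barn]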
The MIP Timing Detector (MTD) is a new sub-detector planned for the Compact Muon Solenoid (CMS) experiment at CERN, aimed at maintaining the excellent particle identification and reconstruction efficiency of the CMS detector during the High Luminosity LHC (HL-LHC) era. The MTD will provide new and unique capabilities to CMS by measuring the time-of-arrival of minimum ionizing particles with a resolution of 30-40 ps at the beginning of HL-LHC operation. The information provided by the MTD will help disentangle the ~200 nearly simultaneous pileup interactions occurring in each bunch crossing at the HL-LHC by enabling the use of 4D reconstruction algorithms. The central Barrel Timing Layer (BTL) of the MTD uses a sensor technology consisting of LYSO:Ce crystal bars read out by SiPMs, one at each end of the bar. In this talk, we present an overview of the MTD BTL design and recent test beam results demonstrating the achievement of the target time resolution of about 30 ps.
The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive Phase II upgrade program to prepare for the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector in CMS will measure minimum ionizing particles (MIPs) with a time resolution of ~30-40 ps and hermetic coverage up to a pseudorapidity of |η|=3. The Endcap Timing Layer (ETL) will be based on the Endcap Timing Readout Chip (ETROC) and a two-disk system of MIP-sensitive LGAD silicon devices. We will review the ETL design and the prototype testing results.
We report on new results and simulations from the Askaryan Calorimeter Experiment (ACE) which uses the coherent microwave Cherenkov emission from high energy particle showers in dielectric-loaded waveguides as calorimetric timing layers with ~1 ps resolution. Above ACE's energy threshold, a single 5 cm thick (1.4 $X_0$) layer of ACE waveguides would provide ~1 ps timing resolution, 3D spatial constraints on the scale of ~300 μm - 5 mm, and an additional energy measurement, making ACE a true 5D detector. When embedded inside another calorimeter technology, ACE timing layers could provide a powerful additional measurement for particle-flow reconstruction algorithms as well as unique vertexing capabilities to significantly reduce pileup. Due to thermal noise limits, ACE elements have a relatively high energy threshold so they are currently limited to ion colliders like the EIC or future high CoM colliders like the proposed FCC-hh. ACE elements are also exceptionally radiation-hard and can provide exquisite timing precision even when deployed in the damaging far-forward region of these future experiments. We report on new simulation results from deploying ACE timing layers in the barrel and forward calorimeters at these future colliders and discuss ongoing research to further develop and improve the ACE detector concept.
FASER (ForwArd Search ExpeRiment) fills the axial blind spot of the other, radially arranged LHC experiments. It is installed 480 meters from the ATLAS interaction point, along the collision axis. FASER will search for dark matter and other new, long-lived particles that may be hidden in the collimated reaction products exiting ATLAS. FASER comprises: a magnetic spectrometer built with ATLAS silicon tracker modules; four LHCb outer ECAL modules; an emulsion neutrino detector; and plastic scintillators for veto, trigger, and timing. The experiment is currently in its final commissioning stages. I report on successful preliminary tests of hardware and software performance with cosmic rays on the surface and after installation in situ. FASER will begin taking pp collision data from the start of LHC Run 3, in 2022.
The use of precision timing to measure time-of-flight or to distinguish events from the same bunch crossing in collider detectors has become a common feature of many modern experiments. Currently, achieving a precision of 30 picoseconds is seen as an attainable goal. Moving to a precision close to one picosecond will require further advances in our time measurement technology. One central component of any time measurement is a precisely aligned reference clock distributed to all of the detector elements. When the required precision of the measurement is of the order of a picosecond, environmental changes need to be tracked and corrected for to maintain the precision of the reference clock. In this talk we will present the design and testing of a system capable of measuring the drift in the clock phase (wander) and correcting for it in real time with sub-picosecond precision. For this we have developed an ASIC, using the TSMC 65 nm process, that is capable of adjusting with sub-picosecond precision the phase delay of a digital clock signal, and a simple digital dual mixer time difference (DDMTD) circuit that can be used for measuring wander with sub-picosecond precision. Using this system, we will demonstrate the feasibility of distributing reference clocks and of detecting and correcting for changes in the phase delay to a precision of ~200 fs.
The muon campus program at Fermilab includes the Mu2e experiment, which will search for a charged-lepton flavor violating process in which a negative muon converts into an electron in the field of an aluminum nucleus, improving the search sensitivity reached so far by four orders of magnitude.
Mu2e’s Trigger and Data Acquisition System (TDAQ) uses {\it otsdaq} as its solution. Developed at Fermilab, {\it otsdaq} uses the {\it artdaq} DAQ framework and {\it art} analysis framework, under-the-hood, for event transfer, filtering, and processing.
{\it otsdaq} is an online DAQ software suite with a focus on flexibility and scalability, while providing a multi-user, web-based, interface accessible through a web browser.
A Detector Control System (DCS) for monitoring, controlling, alarming, and archiving has been developed using the Experimental Physics and Industrial Control System (EPICS) open-source platform. The DCS has also been integrated into {\it otsdaq}, providing a multi-user, web-based control and monitoring dashboard.
The cross sections of Z boson production in association with at least two b jets are measured as functions of various kinematic variables in pp collisions at $\sqrt{s} = 13$ TeV using 137 fb$^{−1}$ of data collected by the CMS experiment at the LHC. Z boson decays to electrons or muons are considered, with leading (sub-leading) lepton transverse momentum $p_{T} >$ 35 (25) GeV, pseudorapidity $|\eta|<$ 2.4, and dilepton invariant mass between 71 and 111 GeV. Jets are selected with $p_{T} >$ 30 GeV and $|\eta|<$ 2.4. The results are compared to various QCD calculations.
Measurements of the production rate of Z bosons in association with heavy quarks provide sensitive tests of perturbative quantum chromodynamics (pQCD) predictions, which are made at next-to-leading-order (NLO) accuracy using either a 4-flavor number scheme (4FNS) or 5-flavor number scheme (5FNS). In the 4FNS, b-quarks are not present in the parton distribution functions (PDFs) and only appear as a product of gluon splitting (g → bb). In the 5FNS, on the other hand, a (massless) b-quark PDF is included. A previous analysis studying Z + b-jet events using 2015 & 2016 data showed that the 5FNS predictions match the data well, while the 4FNS predictions underestimate the data. The uncertainties are substantial, however. In our analysis we are attempting to further investigate these results by also including Z + c-jet events and looking at the combined “heavy-flavor” (b+c) region to reduce uncertainties. We are also updating the 2015-2016 results with 140 fb$^{−1}$ of ATLAS Run-2 data at $\sqrt{s}=$ 13 TeV. This is still a work in progress, but important milestones will be presented.
At the EIC, semi-inclusive production of hadrons and jets in deep-inelastic scattering (DIS) is crucial for obtaining information about the polarized TMD PDFs of the proton. Notably, it was recently proposed that the coupling of the proton PDFs to the T-odd part of the TMD jet function in semi-inclusive jet production in DIS can provide important information on the proton PDFs, such as the proton transversity, with well-controlled theoretical uncertainties. In this talk, we report our study of the phenomenology of semi-inclusive jet production in DIS, with special focus on the implications of the T-odd part of the TMD jet function.
Since the first positive measurement of the Λ-hyperon global spin polarization in heavy-ion collisions by STAR in 2017, understanding the nature of this phenomenon has been one of the most intriguing challenges for the community. As relativistic fluid dynamics celebrates multiple successes in describing the collective dynamics of QCD matter in such reactions, the natural question arises whether spin dynamics can also be modelled in such a framework. In this talk, the motivation for, and recent outcomes of, the experimental hunt for macroscopic footprints of quantum spin in relativistic heavy-ion collisions will be presented, and the theoretical challenges connected with formulating its collective description will be discussed.
Building upon the most recent CT18 global fit, we present a new calculation of the photon content of the proton based on an application of the LUXqed formalism. In this work, we explore two principal variations of the LUXqed ansatz. In one approach, which we designate CT18lux, the photon PDF is calculated directly using the LUXqed formula at all scales, $Q$. In an alternative realization, CT18qed, we instead initialize the photon PDF in terms of the LUXqed formulation at a lower scale, $Q\!\sim\!Q_0$, and evolve to higher scales with a combined QED+QCD kernel at $\mathcal{O}(\alpha)$, $\mathcal{O}(\alpha\alpha_s)$ and $\mathcal{O}(\alpha^2)$. While we find these two approaches generally agree, especially at intermediate $x$ ($10^{-3}
I’ll discuss precision calculations of dark radiation in the form of gravitons coming from Hawking evaporation of spinning primordial black holes (PBHs) in the early Universe. This calculation incorporates a careful treatment of extended spin distributions of a population of PBHs, the PBH reheating temperature, and the number of relativistic degrees of freedom. Results are compared to constraints on dark radiation from BBN and the CMB, as well as the projected sensitivity of CMB Stage 4 experiments, which will be sensitive to some well-motivated PBH spin distributions.
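For orientation, the standard order-of-magnitude Hawking-evaporation formulas for a non-spinning black hole are sketched below, ignoring greybody factors and the detailed particle content; the calculation described in the talk treats spin distributions and the number of relativistic degrees of freedom far more carefully than this sketch.

    # Order-of-magnitude Hawking-evaporation quantities for a non-spinning PBH,
    # ignoring greybody factors and the emitted particle content (illustration only).
    import numpy as np

    hbar = 1.055e-34   # J s
    c    = 2.998e8     # m/s
    G    = 6.674e-11   # m^3 kg^-1 s^-2
    kB   = 1.381e-23   # J/K

    def hawking_temperature_K(mass_kg):
        return hbar * c**3 / (8 * np.pi * G * mass_kg * kB)

    def lifetime_s(mass_kg):
        # Page-type estimate, tau ~ 5120 pi G^2 M^3 / (hbar c^4), up to O(1)
        # factors from the number of emitted species.
        return 5120 * np.pi * G**2 * mass_kg**3 / (hbar * c**4)

    m = 1.0e5   # kg: evaporates within a fraction of a second, well before BBN
    print(f"T_H ~ {hawking_temperature_K(m):.2e} K, lifetime ~ {lifetime_s(m):.2e} s")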
GRAMS (Gamma-Ray and AntiMatter Survey) is a next-generation proposed balloon/satellite mission that will be the first to target both MeV gamma-ray observations and antimatter-based indirect dark matter searches with a LArTPC (Liquid Argon Time Projection Chamber) detector. Astrophysical observations at MeV energies have been poorly explored and long-neglected. With a cost-effective, large-scale LArTPC, a single LDB (Long-Duration Balloon) flight could provide an order of magnitude improved sensitivity compared to previous experiments. We can uniquely measure gamma rays from annihilating dark matter and evaporating primordial black holes. Additionally, GRAMS can extensively explore dark matter parameter space via antimatter measurements. In particular, low-energy antideuterons can be background-free dark matter signatures. In this talk, I will give an overview and the current status of the GRAMS project.
Any measurably large elementary particle electric dipole moment (EDM) would constitute physics beyond the standard model. Based on frozen-spin polarized-beam control technology developed at the COSY laboratory in Juelich, Germany, a conceptual design is presented for a storage ring (PTR) capable of measuring proton (p) and deuteron (d) EDMs. Superimposed electric and magnetic bending make it possible to freeze the spins of simultaneously counter-circulating polarized beams. This permits the monotonic accumulation of "out-of-plane" (meaning "out of the horizontal ring plane") precession of the polarization direction of one of the two beams, as caused by the bend fields acting on its particle EDMs, thereby supporting the EDM determination. Bunch accumulation and polarization preparation would be accomplished using an "arcs-only bunch accumulator" reconfiguration of COSY, now side-by-side with PTR in the existing COSY beam hall. Using doubly frozen (p,p), (p,d), and (d,d) proton and deuteron pairings, the EDMs and their differences can be obtained with unprecedented precision. The required spin control technology has already been demonstrated using deuterons in COSY. Existing polarized beam sources, injection, extraction, electron cooling, and other existing beam-handling apparatus would be re-deployed. The main new construction would be the PTR ring described in a recent CPEDM report, "Storage ring to search for EDMs of charged particles".
Rare nuclear isotope accelerator facilities require high-intensity proton beams to produce various nuclear isotopes copiously. Such requirements provide an excellent opportunity to search for dark-sector particles such as axion-like particles (ALPs). This presentation will introduce an experimental proposal called DAMSA (Dump-produced Aboriginal Matter Searches at Accelerator) at the RAON (Rare isotope Accelerator complex for ONline experiment) facility, which is under construction in Korea. One of the main features of DAMSA is the proximity of the detector to the target, which enables the exploration of the high-mass region of ALP parameter space. The proximity, however, requires a method to effectively control beam-related neutron backgrounds. We performed a Geant4 Monte Carlo simulation of the neutron production at the target and the proton beam dump, and of their interactions inside the detector system. In this talk, we will discuss the current status of the study and its results.
Rare nuclear isotope accelerator facilities provide high-flux proton beams to produce large numbers of rare nuclear isotopes. The high-intensity nature of their beams enables investigation of dark-sector particles, including axion-like particles (ALPs). In this talk, we will discuss detection prospects for ALPs, using their coupling to Standard Model photons, at DAMSA (Dump-produced Aboriginal Matter Searches at Accelerator), a proposed experiment at RAON (Rare isotope Accelerator complex for ONline experiment), under construction in Korea. DAMSA features close proximity of its detector to the target (i.e., the ALP production dump) and a high-intensity proton beam; as a result, DAMSA is capable of probing a high-mass region of ALP parameter space that has never been explored by existing experiments, as well as the region below the so-called "cosmological triangle". While the neutrino-induced backgrounds produced in the target and subsequently entering the detector are greatly suppressed thanks to the low 600-MeV proton beam energy, the mitigation of the large beam-related neutron backgrounds is rather challenging. We argue that they can be significantly suppressed with a high-capability detector system and hence that the proposed ALP searches are feasible, inspiring other nuclear isotope accelerator facilities to pursue similar physics opportunities.
The search for new physics at the energy frontier has a strong model-based foundation, with well-motivated theories informing the phase space that is subsequently investigated in data. This strategy has been effective for decades in establishing the Standard Model, culminating in the 2012 discovery of the Higgs boson with the Large Hadron Collider (LHC). Recent developments in machine learning techniques motivate complementing current model-driven analysis programs with generic searches for unexpected new physics signals. Anomaly detection lies at the heart of this pursuit, with the goal of identifying features of the data solely based on their inconsistency with a background-only model. In this talk, a novel application of a Variational Recurrent Neural Network (VRNN) to the task of anomalous jet detection is presented. This method is fully unsupervised, in that it trains directly on data and does not use a signal hypothesis. Results are shown using the LHC Olympics simulated datasets, where a VRNN-based selection is shown to enhance sensitivity to both two- and three-prong large-R jet signal excesses. Future prospects for integrating this and other unsupervised learning methods into LHC analyses are also discussed.
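The VRNN itself is beyond the scope of a short example, but the underlying idea of unsupervised anomaly scoring, training a model on data alone and flagging jets it reconstructs poorly, can be sketched with a plain autoencoder; the toy inputs and architecture below are assumptions and do not represent the analysis code.

    # Sketch of unsupervised anomaly scoring with a plain autoencoder as a
    # stand-in for the VRNN: train on (assumed background-dominated) data,
    # then use the reconstruction error as the anomaly score. Toy data only.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_feat = 16                                      # e.g. jet-substructure features
    data   = torch.randn(50000, n_feat)              # "data" (mostly background)
    signal = torch.randn(1000, n_feat) * 0.5 + 2.0   # toy anomalous jets

    model = nn.Sequential(
        nn.Linear(n_feat, 8), nn.ReLU(),
        nn.Linear(8, 3), nn.ReLU(),          # bottleneck
        nn.Linear(3, 8), nn.ReLU(),
        nn.Linear(8, n_feat),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(10):
        for batch in data.split(512):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(batch), batch)
            loss.backward()
            opt.step()

    # Anomaly score = per-jet reconstruction error; anomalous jets score higher.
    with torch.no_grad():
        score_bkg = ((model(data) - data) ** 2).mean(dim=1)
        score_sig = ((model(signal) - signal) ** 2).mean(dim=1)
    print(score_bkg.mean().item(), score_sig.mean().item())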
Sophisticated machine learning techniques have promising potential in searches for physics beyond the Standard Model at the Large Hadron Collider (LHC). Convolutional neural networks (CNNs) can provide powerful tools for differentiating between patterns of calorimeter energy deposits from prompt Standard Model particles and from displaced particles produced in decays of long-lived particles predicted in various models beyond the Standard Model. We demonstrate the usefulness of CNNs using a couple of physics examples from well-motivated BSM scenarios predicting long-lived particles that give rise to displaced jets. Our work suggests that modern machine-learning techniques have the potential to discriminate between the energy deposition patterns of prompt and displaced particles, and thus can be useful tools in such searches.
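A minimal sketch of the approach, assuming invented toy calorimeter images in which displaced decays produce broader, off-centre deposits; the image construction, network, and training details are illustrative assumptions, not the simulation or architecture used in this work.

    # Toy sketch of a CNN separating "prompt-like" from "displaced-like"
    # calorimeter images; the image construction is invented for illustration.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def toy_images(n, displaced):
        img = torch.zeros(n, 1, 16, 16)
        # Prompt showers: compact deposit near the centre; displaced decays:
        # broader, off-centre deposit (purely schematic).
        centre = 8 + (3 if displaced else 0)
        width  = 3 if displaced else 1
        x = torch.randn(n, 50) * width + centre
        y = torch.randn(n, 50) * width + centre
        for i in range(n):
            xi = x[i].long().clamp(0, 15)
            yi = y[i].long().clamp(0, 15)
            img[i, 0].index_put_((yi, xi), torch.ones(50), accumulate=True)
        return img

    X = torch.cat([toy_images(2000, False), toy_images(2000, True)])
    y = torch.cat([torch.zeros(2000), torch.ones(2000)]).long()

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 4 * 4, 2),
    )
    opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
    perm = torch.randperm(len(X))
    for epoch in range(5):
        for idx in perm.split(128):
            opt.zero_grad()
            loss = nn.functional.cross_entropy(cnn(X[idx]), y[idx])
            loss.backward()
            opt.step()

    acc = (cnn(X).argmax(dim=1) == y).float().mean().item()
    print(f"toy training accuracy: {acc:.3f}")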
The ATLAS detector was designed to detect prompt particles from LHC collisions. A pair of long-lived particles, as part of a new Hidden Sector added to the Standard Model, would be challenging to reconstruct and to separate from background in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector. The two main backgrounds to a search for such long-lived particles are QCD multijet production and Beam-Induced Background (BIB), the latter being muons arising from proton-bunch interactions with LHC collimators or beam gas, which then deposit energy in the calorimeter. Beam-induced background is non-standard and mimics signal very well, so an algorithm is used to isolate a sample of BIB jets in ATLAS data, which is used to train a neural network. This neural network was designed to take low-level variables from the ATLAS tracker, calorimeters, and muon system through 1D convolutions and an LSTM, taking advantage of the natural ordering and correlations of those subsystems' constituents, and uses this to discriminate between signal, QCD, and BIB jets.
Because the BIB background requires the use of data in training, an adversarial network was added to reduce the propagation of simulation/data differences into the final NN score. The adversary is instrumental in controlling systematic uncertainties, using a novel technique which trains on both signal and background to reduce the effects of simulation/data mis-modelling. To do this, a control region is constructed containing a multijet selection and a signal-trigger veto, in both data and simulation. This way, the same population of jets in both data and simulation can be studied, such that the only difference is mis-modelling in the input variables. The adversary uses this control region to make the network ignore the mis-modelling while discriminating between signal, QCD, and BIB.
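A schematic of the adversarial idea, decorrelating the classifier from data/simulation differences with a gradient-reversal layer; the toy data, architecture, and loss weighting below are assumptions and do not represent the ATLAS training code.

    # Schematic adversarial setup with a gradient-reversal layer: the adversary
    # tries to tell data from simulation in the control region, while the
    # reversed gradient pushes the shared features to be domain-invariant.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.clone()
        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    torch.manual_seed(0)
    n_feat = 10
    features   = nn.Sequential(nn.Linear(n_feat, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
    classifier = nn.Linear(16, 3)     # signal / QCD / BIB
    adversary  = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))

    opt = torch.optim.Adam(list(features.parameters()) + list(classifier.parameters())
                           + list(adversary.parameters()), lr=1e-3)

    x     = torch.randn(4096, n_feat)             # toy jets
    y_cls = torch.randint(0, 3, (4096,))          # toy class labels
    y_dom = torch.randint(0, 2, (4096,)).float()  # 1 = data, 0 = simulation (control region)

    for step in range(200):
        opt.zero_grad()
        h = features(x)
        loss_cls = nn.functional.cross_entropy(classifier(h), y_cls)
        dom_logit = adversary(GradReverse.apply(h, 1.0)).squeeze(1)
        loss_dom = nn.functional.binary_cross_entropy_with_logits(dom_logit, y_dom)
        (loss_cls + loss_dom).backward()
        opt.step()
    print(float(loss_cls), float(loss_dom))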
The talk presents a search for a new leptophilic vector boson Z' decaying into the four-muon final state, using the data collected by the ATLAS detector in the years 2015-2018. A moderate excess of $4\mu$ events containing a $\mu^{+}\mu^{-}$ pair is the experimental signature for this study. The gauge boson Z' is predicted by the well-motivated gauged $L_{\mu}-L_{\tau}$ model, which is one of the simplest extensions of the Standard Model (SM). The model addresses the observed anomaly in the muon anomalous magnetic moment (g-2) and the B-physics anomalies. At the same time, the model probes outstanding questions in physics and cosmology related to dark matter and neutrino masses.
With the discovery of the Higgs boson in 2012 by the CMS and ATLAS experiments, searches for new heavy particles (such as vector-like quarks) have ensued in the hope of solving the hierarchy problem. In this talk, I will discuss the search for the X5/3, a strongly interacting fermionic partner of the top quark with charge +5/3. Left-handed and right-handed couplings of the X5/3 to W bosons are considered separately. The search is conducted using the CMS datasets collected in 2017 and 2018. The data were collected at a center-of-mass energy of $\sqrt{s} = 13$ TeV with the CMS detector, corresponding to integrated luminosities of 41.5 fb$^{-1}$ (60 fb$^{-1}$) in 2017 (2018). The search looks for events with pair production of an X5/3 and its antiparticle, which subsequently decay to a top quark and a W boson. To enhance signal separation, the search only considers events where one W boson decays to a lepton and a neutrino, while the other three W bosons decay hadronically. Limits on the cross section will be presented and compared to previous results.
The total decay width of the Higgs boson has not yet been constrained precisely, which allows up to 11% of the branching fraction to come from beyond-the-Standard-Model decays; Higgs decays therefore represent one possible avenue for dark matter (DM) searches. This talk will discuss the search for an invisibly decaying Higgs boson or DM particles produced in association with a Z boson that decays into an electron or muon pair, in pp collisions at $\sqrt{s} = 13$ TeV corresponding to an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector during 2015-2018. Z+Higgs boson (ZH), mono-Z spin-1 simplified models, and 2HDM+a models are considered in this analysis, and the observed number of events is consistent with the Standard Model prediction. The upper limit on the branching ratio for Higgs decays to invisible particles is also updated.
MicroBooNE is an 85-ton active mass liquid argon time projection chamber (LArTPC) at Fermilab. Its excellent calorimetry and resolution (both spatial and energy), along with its exposure to two neutrino beamlines make it a powerful detector not just for neutrino physics, but also for Beyond the Standard Model (BSM) physics and astroparticle physics. The experiment has competitive sensitivity to heavy neutral leptons possibly present in the leptonic decay modes of kaons, and also to scalar bosons that could be produced in kaon decays in association with pions. In addition, MicroBooNE serves as a platform for prototyping searches for rare events in the future Deep Underground Neutrino Experiment (DUNE). This talk will explore the capabilities of LArTPCs for BSM physics and astroparticle physics and highlight some recent results from MicroBooNE.
The two-Higgs-doublet model with an additional pseudoscalar (2HDM+a) belongs to a generic class of mediator-based dark matter models which have garnered considerable theoretical and experimental interest over the past few years.
This presentation discusses the constraints on this model from 13 TeV collision data collected by the ATLAS detector at the LHC, with multiple analyses and their combination presented, exploiting the rich phenomenology of the collider signatures of this model.
The THDMa is a new-physics model that extends the scalar sector of the Standard Model by an additional doublet as well as a pseudoscalar singlet, and allows for mixing between all possible scalar states. In the gauge eigenbasis, the additional pseudoscalar serves as a portal to the dark sector, with a priori any dark matter spin state. The option where dark matter is fermionic is currently one of the standard benchmarks for the experimental collaborations, and several searches at the LHC constrain the corresponding parameter space. However, most current studies constrain regions in parameter space by setting all but 2 of the 12 free parameters to fixed values.
I will discuss a generic scan of this model, allowing all parameters to float. All current theoretical and experimental constraints are applied. I identify regions in the parameter space which are still allowed after these constraints have been applied and which might be interesting for investigation at a future e+e- collider.
Rather than view time as one of four dimensions in space-time, we start with the assumption that time is best described in a three dimensional domain of its own, defined by spherical coordinates, and is “linked to” the spatial domain, defined by orthogonal Cartesian coordinates. The result is a six-dimensional structure that is simple to visualize and define geometrically. We will refer to it as Space-Time Conjunction, as opposed to the more common term – Space-Time Continuum.
The spatial domain is the structure of mass and the time domain is the structure of energy. In theory they can exist separately but whenever energy is associated with mass the two domains are functionally joined – Space-Time Conjunction.
Transitions between the two domains occur in accordance with the laws of physics and several paradoxes can be understood. One in particular is the behavior of electrons. Under non-stressed conditions electrons can be described by Schrodinger functions in the spatial domain but when the stress of an electric field is applied they transition to the time domain until the stress is removed.
Relativistic behavior can be properly analyzed, but only if we recognize that the term “reference frame” is not accurate. It cannot mathematically exist as a three-dimensional concept. Reference can only refer to a single point, which represents the common point of origin of a unique conjunction of a spatial domain reference and time domain reference, in order to serve this purpose. The consequence is that we must then use “reference points” to analyze relativistic motion. Relativity paradoxes vanish as if by magic.
Besides the six-dimensional structure, one, and only one, empirical assumption is included in this model. Previously, electric charge has been defined only by the measurement of the force it exerts. That does not quite define the basic nature of electric charge. We can remedy that problem if we modify one constraint traditionally placed on solutions to Schrodinger’s equations.
We are removing the conventional constraint that the Schrodinger wave function must establish continuity across the particle boundary by reducing to zero magnitude at that boundary.
Instead, we assume or postulate a single-magnitude, discrete, wave function magnitude at the particle boundary. Mathematically speaking the wave function changes its form but still satisfies the continuity requirement. It simply transitions to an evanescent wave function outside the boundary of the particle.
We can refer to this as a time discontinuity that, by nature of the exponential decay of the evanescent wave, exists in a small fringe region about the particle. If we establish this discontinuity in the wave function at multiples of one-third pi, we produce single-valued positive and negative charge. By consequence, it also incorporates the definition of spin. It is also compatible with the quark structure of particle physics.
In summary, we may now have a physical definition of electric charge. Furthermore, we show that analyses using this space-time geometry and the definition of charge even allow us to characterize magnetic and nuclear binding forces into a unified theory.
Everything in the Universe has its own structure, and every structure is in harmony with the others: the solar system, the Milky Way, black holes, and other systems of stars. The electron, one of the main subatomic particles, is no exception. We can find an electron in two different states: the ground state or an excited state. Regardless of how it gets excited, we observe that it wants to return to the ground state by the emission of a photon. Indeed, the excited electron is the birthplace of the photon. Photons are generated by electrons, and if we show that an electron is also made of photons, it would be evident that the electron and the photon have a common nature. In this paper, we show that an electron is made of photons and we explain how they gather together.
By explaining the structure of the electron, we calculate the energy of the electron. The energy of an electron is the sum of stored energy and kinetic energy, each of which manifests itself in different applications of electrons. Sometimes we perceive the stored energy, and sometimes the kinetic energy.
Clifford algebra is the math language of quantum mechanics, known to most physicists in the matrix representations of Pauli and Dirac. Less familiar (but far more intuitive) is the original geometric intent of Clifford, the algebra of interactions of fundamental geometric objects - point, line, plane, and volume elements. In geometric representation, the 3D vacuum wavefunction is comprised of one scalar point, three vector line elements (orientational degrees of freedom), three bivector area elements, and one trivector volume element. Various combinations of the four fundamental constants that define the dimensionless electromagnetic coupling constant alpha (speed of light, permittivity of space, electric charge quantum and angular momentum quantum) permit assigning geometrically and topologically appropriate electric and magnetic flux quanta to the eight wavefunction components, increasing `dimensionality' of the model to the ten degrees of freedom of string theory. Time (quantum phase) emerges from wavefunction interactions, in the dimension-increasing property of Clifford algebra wedge products. Such a 6D Yang-Mills model is naturally gauge invariant, finite, confined, asymptotically free, background independent, and contains the four forces [1,2].
[1] https://www.researchgate.net/publication/335240613_Naturalness_begets_Naturalness_An_Emergent_Definition
[2] https://www.researchgate.net/publication/335976209_Naturalness_Revisited_Spacetime_Spacephase
All experimental data is consistent with massless neutrinos. There exist possibilities other than rest mass differences to explain oscillation. The two-component photon wavefunction is comprised of electric and magnetic flux quanta, coupled by Maxwell's equations. In the basic photon-electron interaction of QED, opposing phase shifts of the electron's inductive and capacitive impedances decouple the photon's flux quanta, breaking Maxwell's equations, transferring energy and momentum. Extending the two-component Dirac wavefunction (scalar charge and bivector magnetic moment) to the full eight-component vacuum wavefunction in the geometric representation of Clifford algebra permits assigning topological magnetic charge to the spin 1 3D pseudoscalar. A simple three-component neutrino wavefunction model might then be comprised of the two photon components, topologically protected by magnetic charge. Curiously, in SI units 1D vector magnetic flux quantum and 3D trivector magnetic charge quantum are numerically identical yet geometrically and topologically distinct. We discuss the mixing matrix that results from such a model.
An exercise was performed using the value $2.5549\times10^{59}$, which is the common value of the expressions $G/l_p^2$ and $2\pi c^3/h$, substituting it for the gravitational constant $G$ in the derived Planck units. The new values were then placed into a matrix chart that compares the newly derived values, dimensions, and magnitudes for both expressions as they are used within each derived unit. Calculated results from the matrix chart appear to show that both expressions ($G/l_p^2$ and $2\pi c^3/h$) unify gravity and electromagnetism, that the fine structure constant exists as the result of entropy, and that it is quite possible we live within a universe which is tunable through entropy.
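As a quick numerical cross-check of the quoted value (an illustration added here, not part of the original exercise), both expressions can be evaluated with CODATA constants; they agree by construction, since the Planck length is defined by $l_p^2 = \hbar G/c^3$, so $G/l_p^2 = c^3/\hbar = 2\pi c^3/h$ identically.

import math

# CODATA values (SI units); l_p is the Planck length, defined as sqrt(hbar*G/c^3)
G    = 6.67430e-11      # m^3 kg^-1 s^-2
c    = 2.99792458e8     # m s^-1
h    = 6.62607015e-34   # J s
hbar = h / (2 * math.pi)
l_p  = math.sqrt(hbar * G / c**3)          # ~1.616e-35 m

print(G / l_p**2)                          # ~2.5549e59
print(2 * math.pi * c**3 / h)              # ~2.5549e59, identical by definition of l_p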
We investigate the effects of producing dark matter by Hawking evaporation of primordial black holes (PBHs) in scenarios that may have a second well-motivated dark matter production mechanism, such as freeze-out, freeze-in, or gravitational production. We show that the interplay between PBHs and the alternative sources of dark matter can give rise to model-independent modifications to the required dark matter abundance from each production mechanism, which in turn affect the prospects for dark matter detection. In particular, we demonstrate that for the freeze-out mechanism, accounting for evaporation of PBHs after freeze-out demands a larger annihilation cross section of dark matter particles than its canonical value for a thermal dark matter. For mechanisms lacking thermalization due to a feeble coupling to the thermal bath, we show that the PBH contribution to the dark matter abundance leads to the requirement of an even feebler coupling. Moreover, we show that when a large initial abundance of PBHs causes an early matter-dominated epoch, PBH evaporation alone cannot explain the whole abundance of dark matter today. In this case, an additional production mechanism is required, in contrast to the case when PBHs are formed and evaporate during a radiation-dominated epoch.
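Schematically (a minimal sketch of the bookkeeping, not the authors' full calculation), the need for a larger annihilation cross section follows because the freeze-out abundance scales inversely with $\langle\sigma v\rangle$:
\[
\Omega_{\rm DM}^{\rm obs} = \Omega_{\rm FO} + \Omega_{\rm PBH}, \qquad \Omega_{\rm FO} \propto \frac{1}{\langle\sigma v\rangle}
\;\;\Rightarrow\;\; \Omega_{\rm PBH} > 0 \text{ requires } \Omega_{\rm FO} < \Omega_{\rm DM}^{\rm obs},
\text{ i.e. } \langle\sigma v\rangle > \langle\sigma v\rangle_{\rm thermal}.
\]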
We investigate Hawking evaporation of a population of primordial black holes (PBHs) as a novel mechanism to populate a dark sector consisting of self-interacting scalar dark matter with purely gravitational coupling to the visible sector. We demonstrate that, depending on the initial abundance of PBHs and the dark matter mass, the dark sector can reach chemical equilibrium with a temperature above, below, or equal to that of the visible sector. Due to the absence of non-gravitational mediators between the two sectors, any temperature asymmetry between them will persist and evolve so as to keep the entropy of each sector conserved during the expansion of the Universe. We show that an equilibrated dark sector populated by Hawking evaporation of PBHs can explain the dark matter relic abundance today for dark matter in the MeV-TeV mass range.
Primordial black holes (PBHs), possibly formed via gravitational collapse of large density perturbations in the very early universe, are one of the earliest proposed and still viable dark matter (DM) candidates. Recent studies indicate that PBHs can make up a large fraction, or even all, of the present-day DM density for a wide range of masses. Ultralight PBHs in the mass range of $10^{15} - 10^{17}$ g emit particles through Hawking radiation and can be probed via observations of those emitted particles in various detectors. In this talk, I will discuss how observations of the 511 keV gamma-ray line and continuum gamma rays set some of the most stringent exclusion limits on the DM fraction of ultralight PBHs. I will also demonstrate how measurements of low-energy photons from the Galactic Center by upcoming telescopes such as AMEGO can probe the DM fraction of PBHs in a completely unexplored mass window.
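For orientation (an order-of-magnitude illustration added here, not taken from the talk), the Hawking temperature $T_H = \hbar c^3/(8\pi G M k_B)$ of PBHs in this mass window lies in the $\sim$0.1-10 MeV range, which is why the 511 keV line and MeV continuum gamma rays are sensitive probes:

import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34     # SI units
MeV = 1.602e-13                                # joules per MeV

for M_grams in (1e15, 1e16, 1e17):
    M = M_grams * 1e-3                         # grams -> kg
    kT = hbar * c**3 / (8 * math.pi * G * M)   # k_B * T_H in joules
    print(f"M = {M_grams:.0e} g  ->  k_B*T_H ~ {kT / MeV:.2f} MeV")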
The extended excess towards the Galactic Center (GC) in gamma rays inferred from Fermi-LAT observations has been interpreted as being due to dark matter (DM) annihilation. In a recent paper, my collaborators and I performed new likelihood analyses of the GC and showed that, when templates for the stellar galactic and nuclear bulges are included, the GC shows no significant detection of a DM annihilation template, even after generous variations in the Galactic diffuse emission (GDE) models and a wide range of DM halo profiles. We include GDE models with combinations of 3D inverse Compton maps, variations of interstellar gas maps, and a central source of electrons. For the DM profile, we include both spherical and ellipsoidal DM morphologies and a range of radial profiles from steep cusps to kiloparsec-sized cores, motivated in part by hydrodynamical simulations. Our derived upper limits on the DM annihilation flux place strong constraints on DM properties. For the pure b-quark annihilation channel, our limits on the annihilation cross section are more stringent than those from the Milky Way dwarfs up to DM masses of ~TeV, and rule out the thermal relic cross section up to ~300 GeV. A better understanding of the DM profile, as well as of the Fermi-LAT data at its highest energies, would further improve the sensitivity to DM properties.
The Cherenkov Telescope Array (CTA) is the next-generation ground-based observatory for very-high-energy (VHE, E>100 GeV) gamma-rays. It will consist of more than 100 imaging atmospheric Cherenkov telescopes (IACTs) divided between two arrays in the Northern and the Southern hemispheres. Featuring telescopes with different sizes, it will provide coverage of the whole sky over a wide energy range, between ~20 GeV and ~300 TeV.
The science topics that CTA will address can be divided into three main themes: understanding the origin and role of relativistic cosmic particles, probing extreme environments such as neutron stars and black holes, and exploring frontiers in physics. The frontier topics to be studied with CTA include the particle nature and constituents of dark matter in indirect searches through gamma rays, tests of Lorentz invariance using gamma-ray propagation, and probes of cosmology. U.S. scientists have led an international collaboration to build and operate the prototype Schwarzschild-Couder Telescope (pSCT), a 9.7-m IACT with an innovative dual-mirror design proposed as a telescope candidate for CTA, featuring a camera with state-of-the-art silicon photomultiplier detectors. The pSCT recently detected the Crab Nebula at a significance of 8.6 standard deviations utilizing a partially-equipped camera. A funded upgrade of the pSCT focal-plane sensors and electronics is currently ongoing, which will bring the total number of channels from 1600 to 11328 and the telescope field of view from about 2.7° to 8°.
In this talk, I will introduce the CTA project with particular focus on its potential to explore frontiers in physics, and describe the proposed U.S. participation.
The discovery of diffuse sub-PeV gamma rays by the Tibet ASγ collaboration promises to revolutionize our understanding of the high-energy astrophysical universe. It has been shown that these data broadly agree with prior theoretical expectations. In this talk, we will discuss the impact of this discovery on a well-motivated new physics scenario: PeV-scale decaying dark matter (DM). Considering a wide range of final states in DM decay, a number of DM density profiles, and numerous astrophysical background models, we find that these data provide the most stringent limits on the DM lifetime for various Standard Model final states. In particular, the strongest constraints are obtained for DM masses between a few PeV and a few tens of PeV. Near-future data on these high-energy gamma rays can be used to discover PeV-scale decaying DM.
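For context, the standard decaying-DM flux formula (not specific to this analysis) shows why a gamma-ray flux bound translates directly into a lower limit on the DM lifetime:
\[
\frac{d\Phi}{dE\,d\Omega} = \frac{1}{4\pi\, m_{\rm DM}\, \tau_{\rm DM}}\, \frac{dN_\gamma}{dE} \int_{\rm l.o.s.} \rho_{\rm DM}(\ell)\, d\ell ,
\]
so for a fixed density profile and decay spectrum, the maximum allowed flux sets a minimum $\tau_{\rm DM}$ at each DM mass.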
The construction of a mathematically rigorous relativistic quantum theory has so far remained elusive. Within the 'axiomatic quantum field theory in curved spacetime' research program it has been acknowledged that such a theory needs to be compatible with the general-relativistic conception of a spacetime. That is, one may not rely on the symmetries of Minkowski spacetime in formulating the axioms of such a theory, even if one is merely interested in the special-relativistic case. In the aforementioned approach mathematical physicists thus attempted to generalize the Wightman axioms to the general-relativistic setting.
In this work, we pursue a more direct and arguably more physical ansatz by generalizing the $N$-body Born rule from non-relativistic quantum mechanics to curved spacetime. We first review the one-body case, closely tied to the mathematical theory of the relativistic continuity equation, and then generalize the general-relativistic spacetime concept to the case of $N$-bodies (with 'fixed background'). We show how the conservation of probability therein is mathematically tied to the validity of a scalar many-body continuity equation---as one would a priori expect. If it holds, the integrand for calculating the respective detection probability turns out to be an absolute invariant in the sense of Poincaré-Cartan, so that a 'preferred spacetime splitting' is not required.
The general-relativistic $N$-body Born rule presented here allows one to infer important, structural aspects of a rigorous relativistic quantum theory, and it provides an essential step towards the more general case in which $N$ is allowed to vary. The formalism overcomes some problems of related approaches in the literature, including the use of non-canonical geometric structures and overly restrictive causal/topological conditions (see Reddiger & Poirier, arXiv:2012.05212 [Math-Ph] (2020)).
We will present our recent effort in computing the dynamics of binary black hole systems at the 3rd post-Minkowskian order. Our approach is based on the numerical unitarity method for the computation of multi-loop scattering amplitudes for massive particles in Einstein-Hilbert Gravity.
Lorentz violation has been a popular topic in recent years in the search for experimental signals beyond known physics. We construct the general Lorentz-violating terms in the context of effective field theory and analyze measurements in different gravity potentials, comparisons of gravitational accelerations, interferometer experiments, and studies of neutron gravitational bound states to extract first constraints on certain coefficients for Lorentz violation and spin-gravity couplings.
The LIGO/Virgo collaboration is making astonishing discoveries at a fantastic pace, including a heavy binary black hole merger with component masses in the “black hole mass gap,” which cannot be explained by standard stellar structure theory. In this talk, I will discuss how new light particles that couple to the Standard Model can act as an additional source of energy loss in the cores of population-III stars, dramatically altering their evolution and potentially explaining mass-gap objects. I will also demonstrate how new population catalogs can help distinguish different scenarios for the origin of these objects.
The LIGO-Virgo Collaboration has so far detected around 90 black holes, some of which have masses larger than expected from the collapse of stars. The mass distribution of LIGO-Virgo black holes appears to have a peak at $\sim 30\,M_\odot$ and two tails at the ends. Assuming that they all have a primordial origin, we analyze the GWTC-1 (O1&O2) and GWTC-2 (O3a) datasets by performing maximum likelihood estimation on a broken power-law mass function $f(m)$, with the result $f \propto m^{1.2}$ for $m < 35 M_\odot$ and $f \propto m^{-4}$ for $m > 35 M_\odot$. This appears to behave better than the popular log-normal mass function. Surprisingly, such a simple and unique distribution can be realized in our previously proposed mechanism of PBH formation, where the black holes are formed by vacuum bubbles that nucleate during inflation via quantum tunneling. Moreover, this mass distribution can also provide an explanation for supermassive black holes formed at high redshifts.
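A toy implementation of the quoted best-fit mass function (a sketch using only the slopes and break given above, matched to be continuous at the break, not the authors' likelihood code) is:

import numpy as np

M_BREAK = 35.0                      # solar masses, break point quoted above
A_LOW, A_HIGH = 1.2, -4.0           # best-fit slopes below/above the break

def f(m):
    """Unnormalized broken power-law PBH mass function, continuous at M_BREAK."""
    m = np.asarray(m, dtype=float)
    return np.where(m < M_BREAK, (m / M_BREAK) ** A_LOW, (m / M_BREAK) ** A_HIGH)

print(f([10.0, 30.0, 35.0, 60.0, 90.0]))   # rises toward ~35 Msun, falls steeply above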
Clouds of ultralight bosons (such as axions) can form around a rapidly spinning black hole if the black hole radius is comparable to the boson's wavelength. The cloud rapidly extracts angular momentum from the black hole and reduces it to a characteristic value that depends on the boson's mass as well as on the black hole mass and spin. Therefore, a measurement of a black hole's mass and spin can be used to reveal or exclude the existence of such bosons. Using hierarchical Bayesian inference, we simultaneously measure the black hole spin distribution at formation and the mass of the scalar boson. Based on the black holes released by LIGO and Virgo in GWTC-2, the data strongly disfavor the existence of scalar bosons in the mass range between $1.3\times10^{-13}\,\mathrm{eV}$ and $2.7\times10^{-13}\,\mathrm{eV}$. Our mass constraint is valid for bosons with negligible self-interaction, that is, with a decay constant $10^{14}~\mathrm{GeV}$. The statistical evidence is mostly driven by the two binary black hole systems GW190412 and GW190517, which host rapidly spinning black holes. The region where bosons are excluded narrows if these two systems merged shortly ($\sim 10^5$ years) after the black holes formed. If time permits, we will also discuss the prospects of this search in the coming decade, as well as a multiband technique for precise measurement of the boson mass.
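The mass-spin argument can be made concrete through the dimensionless gravitational coupling $\alpha = (GM/c^2)/(\hbar/\mu c) = GM\mu/\hbar c$, which compares the black-hole gravitational radius to the boson's reduced Compton wavelength (an illustrative evaluation added here, not the analysis code):

G, c, hbar, e = 6.674e-11, 2.998e8, 1.055e-34, 1.602e-19   # SI units
M_SUN = 1.989e30                                            # kg

def alpha(M_solar, mu_eV):
    """Gravitational coupling for a BH of M_solar solar masses and boson mass mu_eV (eV/c^2)."""
    mu_kg = mu_eV * e / c**2
    return G * (M_solar * M_SUN) * mu_kg / (hbar * c)

for mu in (1.3e-13, 2.7e-13):        # edges of the quoted exclusion window
    print(mu, [round(alpha(M, mu), 3) for M in (10, 30, 60)])

For LIGO-mass black holes this gives alpha in the roughly 0.01-0.1 range, i.e. the regime where the boson wavelength is within an order of magnitude or so of the horizon scale.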
What happens when we collide light at the highest laboratory energies? LHC beams source energetic photons that can collide to create new particles. Recently, ATLAS reported the landmark observation of photon-induced W boson pairs in the electron-muon channel using 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collision data. A hallmark of photon-fusion production is the forward scattering of protons, which was recently measured using the ATLAS Forward Proton spectrometer in association with electron and muon pairs using 14.6 fb$^{-1}$ of data. Moreover, 2.2 nb$^{-1}$ of $\sqrt{s_{NN}} = 5.02$ TeV lead-lead collision data enabled the observation of light-by-light scattering and a search for axion-like particles in diphoton final states. This talk summarizes these remarkable experimental advances using the LHC as a photon collider, opening novel probes of the Standard Model and beyond in uncharted regimes.
A search for the electroweak VBS production of a VW pair plus two jets in the semi-leptonic final state is presented. The full CMS dataset (137.1 fb$^{-1}$) of LHC Run II proton-proton collisions at a center-of-mass energy of 13 TeV is analyzed. In the final state, we expect two well-separated jets with a high invariant mass, one lepton from the W boson decay, and the decay products of the hadronically decaying W/Z boson. Jets arising from the hadronic decay of vector bosons can be reconstructed either as two additional jets with invariant mass near the W/Z mass, or as one large-radius jet in the case of a boosted decay. The main background arises from single W production plus jets, which is measured in dedicated control regions with a data-driven strategy. The implementation of sophisticated machine learning techniques enhances the discrimination of the signal from these overwhelming backgrounds.
Identifying $WW^{(\ast)}\rightarrow\ell\nu qq$ from heavy particle decays at the LHC is an important but challenging problem due to overlapping lepton and jet signatures. We have developed a deep learning-based $WW^{(\ast)}$ tagger which learns from simulated calorimeter features to identify boosted $WW^{(\ast)}$ decays to semileptonic final states from $t\bar{t}$ and di-jet backgrounds in ATLAS. In this talk, we present the methods applied to the tagger development in the electron channel and some preliminary performance results on simulated ATLAS events at $\sqrt{s}=13$ TeV.
The Drell-Yan process is studied in the framework of TMD factorization in the Sudakov region $s\gg Q^2\gg q_\perp^2$, corresponding to recent LHC experiments with $Q$ of the order of the Z-boson mass and transverse momentum of the DY pair of a few tens of GeV. The DY hadronic tensors are expressed in terms of quark and quark-gluon TMDs with ${1\over Q^2}$ and ${1\over N_c^2}$ accuracy. It is demonstrated that at leading order in $N_c$ the higher-twist quark-quark-gluon TMDs reduce to leading-twist TMDs due to the QCD equations of motion. The resulting hadronic tensors depend on two leading-twist TMDs: $f_1$, responsible for the total DY cross section, and the Boer-Mulders function $h_1^\perp$. The corresponding qualitative and semi-quantitative predictions seem to agree with LHC data on the five angular coefficients $A_0-A_4$ of DY pair production. The remaining three coefficients $A_5-A_7$ are determined by quark-quark-gluon TMDs multiplied by an extra ${1\over N_c}$, so they appear to be relatively small, in accordance with LHC results.
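For reference, the coefficients $A_0$-$A_7$ refer to the standard decomposition of the DY lepton angular distribution (e.g. in the Collins-Soper frame):
\[
\frac{d\sigma}{dq_\perp^2\, dy\, d\Omega} \propto (1+\cos^2\theta) + \tfrac{1}{2}A_0(1-3\cos^2\theta) + A_1\sin 2\theta\cos\phi + \tfrac{1}{2}A_2\sin^2\theta\cos 2\phi
+ A_3\sin\theta\cos\phi + A_4\cos\theta + A_5\sin^2\theta\sin 2\phi + A_6\sin 2\theta\sin\phi + A_7\sin\theta\sin\phi ,
\]
so the statement above is that $A_0$-$A_4$ are governed by $f_1$ and $h_1^\perp$ at leading order in $1/N_c$, while $A_5$-$A_7$ are $1/N_c$-suppressed.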
We propose that the dynamics of a scalar $\phi$ of mass $O(10)$ MeV, weakly coupled to the Higgs, can give rise to a first order electroweak phase transition. Vacuum stability close to the weak scale requires a suppressed (maybe vanishing) top Yukawa coupling before the transition, rising to the Standard Model (SM) value later. All SM flavor could appear similarly, after the electroweak phase transition, through dimension-5 interactions of $\phi$ suppressed by scales from $O(10^3)$ TeV to near Planck mass. The scalar $\phi$ is long-lived and can yield missing energy signals in rare kaon decays.
The production of a $W$ boson in association with a $c$ quark (the "$W + c$" analysis) is studied with the ATLAS detector using the full Run 2 dataset of $pp$ collisions at $\sqrt{s} = 13 \; \mathrm{TeV}$. This measurement is crucial for obtaining increasingly precise values of the $s$ and $\bar{s}$ parton distribution functions (PDFs) of the proton, as well as for studying the physics of the charm quark. One of the decay modes through which the $W + c$ analysis can be performed is the so-called "Satellite" mode, where a $D^{*}$ decays according to $D^* \rightarrow \pi D^0 \rightarrow \pi(K \pi \pi^0)$ and the W boson decays leptonically. We present the measurement of the cross sections of $W^+ + D^{*-}$ and $W^- + D^{*+}$, and the ratio of these cross sections $R^\pm _c (WD^*)$, using the Satellite mode in $pp$ collisions at $\sqrt{s} = 13 \; \mathrm{TeV}$. Specifically, we measure the scaling factor $\mu$ (which scales Standard Model predictions to observed data) for each charged lepton channel to be: $\mu(e^-) : 1.15 \pm 0.07$, $\mu(e^+) : 1.08 \pm 0.07$, $\mu(\mu^-) : 1.11 \pm 0.07$, and $\mu(\mu^+) : 1.06 \pm 0.09$. Additionally, we calculate a first-pass value of $R^\pm _c (WD^*)$ of 0.92, and we bound the uncertainty on this ratio to be $\sigma( R^\pm _c (WD^*)) < 0.06$.
The Mu2e experiment will search for Beyond-the-Standard-Model Charged Lepton Flavor Violation (CLFV) in the process $\mu^- + \text{Al}\rightarrow e^- +\text{Al}$, with a single-event sensitivity surpassing the current world's best limit by a factor of 10,000. To report a reliable result, the number of stopped muons will be normalized to 10\% precision utilizing a combination of two $\gamma$-ray transitions and one x-ray transition. The first, directly proportional to the CLFV signal, is the 1808.7 keV $\gamma$-ray emitted promptly in the muon capture process,
$\mu^{-} + {}^{27}\textrm{Al} \rightarrow \nu_{\mu} + {}^{26}\textrm{Mg}^{*} + n$
${}^{26}\textrm{Mg}^{*} \rightarrow {}^{26}\textrm{Mg} + \gamma$
The second is the 346.8 keV x-ray emitted promptly from the 2p$\rightarrow$1s muonic atomic transition in Al, from muon stops in the target. The third is the 844 keV $\gamma$-ray from the $\beta$-decay
${}^{27}\textrm{Mg} \rightarrow {}^{27}\textrm{Al} + \beta^{-} + \bar{\nu}_{e} + \gamma$
where $^{27}$Mg is produced in the muon capture process, and decays with a lifetime of 9.5 minutes.
The stopped-muon rate measurement will use two complementary photon-counting detectors. One, the LaBr$_3$ detector, is capable of high-rate operation up to and above 800 kcps, with 0.7% energy resolution. The other, the HPGe detector, offers an energy resolution of 0.1%; however, its rate capability is limited to an estimated $\sim$100 kcps.
The AlCap experiment, conducted at PSI (Switzerland), studies the products of muon capture on aluminum and titanium. These materials are candidates for stopping targets in the next generation of charged lepton flavor violation experiments, namely Mu2e at Fermilab and COMET at J-PARC, which will search for the neutrinoless conversion of muons to electrons in the field of a nucleus. The muonic X-rays emitted during atomic capture and the gamma rays from nuclear muon capture are important in determining the number of stopped muons in the target. I will describe the AlCap experiment and present results.
The AlCap experiment has measured the emission rate and energy spectra of protons, deuterons, tritons and alpha particles associated with the nuclear capture of muons stopped in Al, Si, and Ti at PSI. These measurements quantify an important nuclear physics hit background to the Mu2e and COMET experiments, which will search for charged lepton flavor violation at an unprecedented level of sensitivity. Detailed information on the rates and energy spectra of the emitted heavy particles in the capture process is important to the design of the background-reducing aspects of these experiments. The results are also relevant for understanding the nuclear physics of these rare reaction branches. In this talk, I will describe the experiment and present the results.
Searches for charged lepton flavor violation (CLFV) are a probe of new physics beyond the Standard Model. We used existing data to set the first limits on the branching ratio of the CLFV decays $\tau \to \ell \gamma \gamma$ where $\ell=e, \mu$. The decays $\tau \to \ell X$, where $X$ is a weakly interacting boson, were also examined and improved upper bounds were obtained. The results and future prospects will be presented.
We present a search for the lepton-flavor-violating decays $B^0\to\tau^\pm\ell^\mp$, where $\ell = (e,\,\mu)$, using the full data sample of $772\times10^6$ $B\overline{B}$ pairs recorded by the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. We use events in which one $B$ meson is fully reconstructed in a hadronic decay mode. The $\tau^\pm$ lepton is reconstructed indirectly using the momentum of the reconstructed $B$ and that of the $\ell^\mp$ from the signal decay. We find no evidence for $B^0\to\tau^\pm \ell^\mp$ decays and set upper limits on their branching fractions at 90% confidence level of $\mathcal{B}(B^0\to\tau^\pm\mu^\mp) < 1.5 \times 10^{-5}$ and $\mathcal{B}(B^0\to\tau^\pm e^\mp) < 1.6 \times 10^{-5}$.
Neutrino oscillations in matter provide a unique probe of new physics. Leveraging the advent of neutrino appearance data from NOvA and T2K in recent years, we investigate the presence of CP-violating neutrino non-standard interactions in the oscillation data. We first show how to very simply approximate the expected NSI parameters to resolve differences between two long-baseline appearance experiments analytically. Then, by combining recent NOvA and T2K data, we find a tantalizing hint of CP-violating NSI preferring a new complex phase that is close to maximal: $\phi_{e\mu}$ or $\phi_{e\tau}=3\pi/2$ with $|\epsilon_{e\mu}|$ or $|\epsilon_{e\tau}|\sim0.2$. We then compare the results from long-baseline data to constraints from IceCube and COHERENT.
Charged-current quasielastic scattering is the signal process in modern neutrino oscillation experiments. It also serves as the main tool for the reconstruction of the incoming neutrino energy. Exploiting effective field theory, we factorize neutrino-nucleon quasielastic cross sections into soft, collinear, and hard contributions. We evaluate soft and collinear functions from QED and provide a model for the hard contribution. Performing resummation, we account for logarithmically-enhanced higher-order corrections and evaluate cross sections and cross-section ratios quantifying the resulting error. We discuss the relevance of radiative corrections depending on conditions of modern and future accelerator-based neutrino experiments.
We study inelastic neutrino-nucleus scattering, primarily targeting $^{40}$Ar, $^{133}$Cs, and $^{127}$I nuclei. The nuclear shell model provides a clear understanding of these nuclei; in practice we use Bigstick, a nuclear shell-model code, to generate the numerical results for the nuclear structure. We also include spin-independent and spin-dependent neutrino-nucleus currents based on chiral effective field theory (EFT) in our calculation, so the scattering amplitude can be expressed as a linear combination of the nuclear response functions given by chiral EFT. We consider two scattering processes: charged-current (${\nu} + N \rightarrow {l^-} + N^*$) and neutral-current ($\nu + N \rightarrow {\nu} + N^*$). We predict the photon-production cross sections from the inelastic scattering, and we also calculate the light dark matter-nucleus inelastic scattering cross section and event rate.
The standard three-active neutrino oscillation picture would be modified in the presence of neutrino non-standard interactions (NSIs). In a model-independent manner, I shall first discuss dimension-6 SMEFT operators that can induce such NSIs. Then in the second half of my talk, focusing on terrestrial neutrino oscillation experiments Daya Bay, Double Chooz, RENO, T2K, NOvA, as well as T2HK, DUNE and JUNO in the near future, I will discuss their sensitivity to new physics. Time permitting, constraints from COHERENT and precision cosmology on neutral current NSIs will also be discussed.
We explore the implications of theoretical constraints on the observable neutrino mixing parameters for predictions of the leptonic Dirac CP-violating phase within a class of models that include a single source of CP violation due to charged-lepton corrections. As a means to enforce unitarity of the lepton mixing matrix, we assume specific ansatzes for the probability distributions of the continuous input parameters and calculate the distributions of the observable lepton mixing parameters. The approach guarantees that a physically meaningful prediction for the most likely values of the leptonic Dirac CP-violating phase within these simple scenarios is automatically obtained.
We explore the implications of recent nucleon axial form factor lattice calculations for neutrino scattering experiments.
The MicroBooNE detector has an active mass of 85 tons of liquid argon and is located along the Booster Neutrino Beam (BNB) at Fermilab. It has a rich physics program, including the search for the low-energy excess observed at MiniBooNE and measurements of neutrino-argon interaction cross sections. In this talk, we present a procedure, using the Wiener-SVD unfolding method, to extract the nominal neutrino flux-averaged total and differential cross sections of the inclusive muon neutrino charged-current interaction on argon. This procedure relies on a minimal set of assumptions while maximizing the power in comparing data results with predictions from theory and event generators. Taking advantage of the power of a Liquid Argon Time Projection Chamber (LArTPC) and the Wire-Cell tomographic event reconstruction paradigm, this procedure enables a new round of cross section measurements at MicroBooNE.
In collider experiments, very light new particles are produced in the far-forward direction, at small angles relative to the beam axis. The ForwArd Search ExpeRiment (FASER) is aptly located 480 m downstream from the ATLAS interaction point, where backgrounds are minimal. The FASERnu emulsion detector, positioned just upstream of FASER, will detect collider-produced neutrinos for the very first time. The average cross sections of neutrinos and antineutrinos will be measured in the unexplored energy region 350 GeV - 3 TeV. In addition, an interface detector enables track matching between the FASER spectrometer and the FASERnu emulsion detector, allowing separate cross-section measurements for muon neutrinos and antineutrinos. I will present the resolving power of the FASER spectrometer and the sensitivity of FASERnu to measuring neutrino-nucleon charged-current (CC) cross sections.
The T2K experiment is a long-baseline neutrino oscillation experiment designed to measure $\nu_\mu$ disappearance and $\nu_e$ appearance in the $\nu_\mu$ beam produced by the 30 GeV proton beam at J-PARC (Japan Proton Accelerator Research Complex). It consists of the J-PARC accelerator, a near detector complex (ND280), and a far detector (Super-Kamiokande). In order to achieve more precise $\nu_e$ appearance measurements and to explore CP violation in the neutrino sector, we need to improve our knowledge of $\nu_e$ interactions and better determine the contamination of $\nu_e$ in the $\nu_\mu$ beam. The intrinsic $\nu_e$ component in the beam is the main background in the $\nu_e$ appearance measurement. In addition, a large systematic uncertainty in the T2K $\nu_e$ appearance observation comes from uncertainties related to the neutrino cross-section modeling. Since the far detector is a water Cherenkov detector, neutrino interaction measurements on water are important to constrain the neutrino cross-section systematic uncertainties. The design of the $\pi^0$ Detector (P0D), a component of ND280 which includes fillable water targets, allows us to measure neutrino interaction cross sections on water. We developed a cross-section measurement method utilizing Markov-Chain Monte Carlo. In this talk, I will present the method and fake-data study results for the charged-current $\nu_e$ interaction cross section on water.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton gadolinium-loaded water Cherenkov detector located on the Booster Neutrino Beam at Fermilab. The experiment has a two-fold motivation: to perform a physics measurement and to advance new detector technologies. The measurement of the final-state neutron multiplicity from neutrino interactions in water as a function of momentum transfer will lower systematic uncertainties for future long-baseline neutrino experiments. The experiment is currently commissioning large-area picosecond photodetectors that will improve time and spatial resolution. I will present the current status of the ANNIE experiment along with future plans.
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline neutrino experiment using liquid argon detectors to study neutrino oscillations, proton decay, and other phenomena. The single-phase ProtoDUNE detector is a prototype of the DUNE far detector and is located in a charged-particle test beam at CERN. Accurate momentum estimation of charged particles is critical for calibration and testing of the ProtoDUNE detector performance, as well as for proper analysis of DUNE data. Charged particles passing through matter undergo multiple Coulomb scattering (MCS). Because MCS is momentum-dependent, it can be used to estimate muon momentum, including for muons that exit the detector, a key benefit of MCS over various other methods. We will present the status of the MCS analysis, which was developed and evaluated using Monte Carlo simulations, and discuss the bias and resolution of our momentum estimation method, as well as its dependence on the detector resolution.
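The momentum dependence enters through the Highland formula for the RMS scattering angle per track segment (a standard expression; the snippet below is an illustrative sketch assuming $X_0 \approx 14$ cm for liquid argon, not the ProtoDUNE analysis code):

import math

X0_LAR_CM = 14.0          # radiation length of liquid argon, ~14 cm

def highland_theta0(p_MeV, seg_cm, beta=1.0, z=1):
    """RMS projected scattering angle (rad) for a track segment of length seg_cm."""
    x_over_X0 = seg_cm / X0_LAR_CM
    return (13.6 / (beta * p_MeV)) * z * math.sqrt(x_over_X0) * (1 + 0.038 * math.log(x_over_X0))

# A 1 GeV/c muon scatters by ~13.6 mrad RMS per 14 cm segment; lower momentum -> larger angle.
for p in (500.0, 1000.0, 2000.0):   # MeV/c
    print(p, highland_theta0(p, 14.0))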
Detectors based on Chemical Vapor Deposition (CVD) diamond have been used successfully in beam conditions monitors in the highest radiation areas of the LHC. Future experiments at CERN will accumulate an order of magnitude larger fluence. As a result, an enormous effort is underway to identify detector materials that can operate after fluences of 10^{16}/cm^2 and 10^{17}/cm^2. Diamond is one candidate due to its large displacement energy, which enhances its radiation tolerance. Over the last two years the RD42 collaboration has constructed 3D CVD diamond detectors that use laser-fabricated columns to enhance radiation tolerance. The cells in these detectors had a size of 50 µm x 50 µm with columns 2.6 µm in diameter. Beam test results for both un-irradiated and irradiated detectors will be presented.
As nuclear and particle physics facilities move to higher intensities, the detectors used there must be more radiation tolerant. Diamond is in use at many facilities due to its inherent radiation tolerance and ease of use. We will present radiation tolerance measurements of the highest quality poly-crystalline Chemical Vapor Deposition (pCVD) diamond material for irradiations from a range of proton energies, pions and neutrons up to a fluence of 2 x 10^16 particles/cm^2. We have measured the damage constants as a function of energy and particle species and compare with theoretical models. We also present measurements of the rate dependence of pulse height for non-irradiated and irradiated pCVD diamond pad and pixel detectors, including detectors tested over a range of particle fluxes up to 20 MHz/cm^2 with both pad and pixel readout electronics. Our results indicate the pulse height of unirradiated and neutron-irradiated pCVD diamond detectors is not dependent on the particle flux.
The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) will increase the LHC luminosity by an order of magnitude, increasing with it the density of particles on the detector by an order of magnitude. To protect the inner detectors of the experiments and to monitor the delivered luminosity, a radiation-hard beam monitor is being developed. For ATLAS we are developing a set of detectors based on poly-crystalline Chemical Vapor Deposition (pCVD) diamonds and a dedicated rad-hard ASIC. Due to the large range of particle flux through the detector, flexibility is very important. To satisfy the constraints imposed by the HL-LHC, our solution is based on segmenting each single diamond sensor into multiple devices of varying size and reading them out with a new multichannel readout chip. In this talk we describe the proposed system design, including detectors, electronics, mechanics and services, and present preliminary results from the prototype ASIC and first devices fabricated using the ASIC.
Future operation of the LHC and HL-LHC will record a higher number of proton-proton collisions and therefore yield larger data rates and sample sizes. This will further stress real-time triggering systems and offline event reconstruction. Therefore, heterogeneous computing systems utilizing both CPU and GPU hardware are being developed at CMS to deal with these tasks. Specifically, the precise reconstruction of silicon pixel hits is an important aspect of tracking at the HLT and offline. However, the current reconstruction algorithms, the generic and template algorithms, are not optimal for a GPU implementation. In recent years, fast implementations of neural networks have been built on GPU hardware for deep learning, and neural networks have shown promising results in various ATLAS and CMS tasks. We therefore investigate the use of hybrid convolutional neural networks and deep neural networks in local hit reconstruction. We train and test the networks on data from a detailed silicon sensor simulation, Pixelav, tuned to simulate all sensors, including heavily radiation-damaged detectors. We find that the resulting reconstruction algorithm equals, if not outperforms, the present reconstruction algorithms in the predicted resolutions.
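As a rough illustration of the approach (a hypothetical sketch with assumed cluster dimensions and layer sizes, not the CMS implementation), a hybrid network can combine convolutional features of the pixel charge cluster with the incident track angles to regress the local hit position:

import torch
import torch.nn as nn

class PixelHitCNN(nn.Module):
    """Toy hybrid CNN + dense network: charge cluster + track angles -> local (x, y) hit position."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 2, 64), nn.ReLU(),   # +2 for the local track angles
            nn.Linear(64, 2),                           # predicted local (x, y) position
        )

    def forward(self, cluster, angles):
        features = self.conv(cluster).flatten(start_dim=1)
        return self.head(torch.cat([features, angles], dim=1))

model = PixelHitCNN()
print(model(torch.zeros(8, 1, 13, 21), torch.zeros(8, 2)).shape)   # torch.Size([8, 2])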
Reconstruction of charged particle trajectories (tracks) in the tracking detector surrounding the interaction region is a key component of event reconstruction in the ATLAS experiment at the Large Hadron Collider. The ATLAS Inner Detector (ID) records up to 1500 individual signals (hits) per proton-proton collision, and between 20 and 60 collisions happen simultaneously at each bunch crossing. As a result, about 30000 to 90000 individual hits need to be combined into track candidates. This represents a significant combinatorial challenge, and track reconstruction was by far the largest single user of processing time in the reconstruction of the LHC Run 2 data. In preparation for LHC Run 3, the ATLAS collaboration undertook a major effort to optimize and speed up track reconstruction for the coming data-taking campaign. This talk will summarize the track reconstruction strategy in ATLAS, outline the improvements that were made, and show their impact on the computational and physics performance. If time permits, an outlook on track reconstruction with the new ITk tracking detector to be installed for the High-Luminosity LHC campaign will be given.
A rate of 60 or more inelastic collisions per beam crossing was observed during LHC Run 2, and an even higher vertex density, or pile-up, is expected in Run 3 and Run 4. Efficient and precise reconstruction of the primary vertex in proton-proton collisions is essential for determining the full kinematic properties of the hard-scatter event and of soft interactions. Increasing instantaneous luminosity poses a challenge for primary vertex reconstruction in ATLAS. To meet this challenge, ATLAS has developed a global approach to vertex finding and fitting, allowing vertices to compete for the association of nearby tracks. This talk will summarize the strategy and performance of this new vertex reconstruction software for Run 3, and the expected performance with the Inner Tracker (ITk) upgrade for Run 4.
The ATLAS experiment is currently preparing for an upgrade of the inner tracking detector for High-Luminosity Large Hadron Collider (LHC) operation, scheduled to start in 2027. The new detector, known as the Inner Tracker or ITk, employs an all-silicon design with five inner Pixel layers and four outer Strip layers. The staves are the building blocks of the ITk Strip barrel layers. Each stave consists of a low-mass support structure which hosts the common electrical, optical and cooling services as well as 28 silicon modules, 14 on each side. The first pre-production electrical stave was assembled at Brookhaven National Laboratory in December 2019. To characterize the stave, a set of electrical and functional measurements has been performed at both room and cold temperatures. In this talk I will present the methods used to characterize this stave with particular focus on noise studies.
Data acquisition (DAQ) tests of the RD53a single chip cards (SCC) using the Yet Another Rapid Readout (YARR), Front-End Link eXchange (FELIX) and Reconfigurable Cluster Element (RCE) readout systems are performed. A test stand for the DAQ tests of the RD53a SCC was assembled at the SLAC National Accelerator Laboratory. YARR is a system developed for the readout of up to 4 SCCs or one quad module and is widely used by universities and labs. The FELIX system is designed to read out multiple modules; it has been selected as the baseline readout system for the prototypes after system integration. RCE is a System-on-Chip based readout system developed at SLAC; it is the principal data-transmission qualification platform and also serves early module tests and large-structure readout. DAQ tests of the three readout systems are required to proceed further with the ITk Pixel System upgrade toward the High Luminosity LHC upgrade of the ATLAS detector. A comparison of results obtained with the YARR, FELIX and RCE readout systems is presented.
A new silicon-strip charged-particle tracking detector (ITk strips) is a major component of the future upgrade of the ATLAS experiment for the high-luminosity LHC. The Autonomous Monitoring and Control (AMAC) chip is an application-specific integrated circuit designed to monitor voltages, currents and temperatures on each ITk module, and to control power to the front-end electronics. The ASIC design has been tested by both simulation and in-situ testing of prototype chips. The high fluence of charged particles moving through the read-out electronics during operation in the HL-LHC presents a set of inevitable radiation hazards. Python-interfaced simulation sequences were developed to challenge the chip’s response to both single-event upsets (SEU) and single-event transients (SET). I will present how we use the simulation framework to verify that the AMAC performs its required functions and to study the response to single-event errors. I will also show the results from probing prototype chips and describe the database used to store this information.
A new silicon strip charged particle detector (ITk strips) is a major component of the future upgrade of the ATLAS experiment for the high luminosity LHC. The Autonomous Monitoring and Control (AMAC) chip is an application specific integrated circuit designed to monitor voltages and currents on each ITk module, and to control power to the front-end readout electronics. To guarantee the reliability of the AMAC, a comprehensive probe station testing procedure has been developed, which allows for the testing of the digital and analog functionality of every AMAC to be installed in an ITk module. To date, the probe station has successfully tested over one thousand prototype AMAC chips, validating most of the functionality while also identifying non-optimal features that will be adjusted in the newer chip version. I will present the probe-station setup, the functionality of the AMAC that was tested, and the results of probing the wafers with prototypes.
The Hybrid Controller Chip (HCC) is an application specific integrated circuit designed as part of the silicon strip detector for the ATLAS Inner Tracker (ITk), which will be installed as part of the High Luminosity LHC upgrade program. A prototype of the HCC was produced and tested in 2018 and 2019, and the production version is currently being prepared. The HCC must read out clustered hit data from the strip tracker at a high rate while simultaneously surviving exposure to radiation. Ionizing radiation has the potential to interfere with the digital logic and memory of the HCC, disrupting normal operation and jeopardizing the accuracy of read out results. This talk will discuss the measurement of the effect of heavy ion irradiation on physical prototype HCCs as well as improvements made to the production design of the HCC informed by the results of irradiation.
The Hybrid Controller Chip (HCC) is an application specific integrated circuit that is part of the front-end electronics for the new ATLAS Inner Tracker Strip detector, which will be installed as part of the High Luminosity LHC upgrade program. A prototype of the HCC was produced and tested in 2018 and 2019, and the production version is currently being prepared. The HCC must read out clustered hit data from the strip tracker at a high rate while simultaneously surviving exposure to large amounts of ionizing radiation. This radiation can interfere with the normal operation of the circuit by causing logic and memory errors known as single event effects, which can cause bits to invert and data to become corrupted. This talk will discuss work to verify the correctness of the digital logic of the production HCC, with a particular focus on simulation of these single event effects to assess how well the design is protected against radiation before submitting the chip.
The monitored drift tube (MDT) chambers are the main component of the precision tracking system in the ATLAS muon spectrometer, capable of measuring the sagitta of muon tracks to an accuracy of 60 μm, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. To cope with the large amount of data and the high event rate at the HL-LHC, the present MDT readout electronics will be replaced, and the MDT detector will be used in the first-level trigger with an output event rate of 1 MHz and a latency of ~6 μs. Prototypes of two frontend ASICs, a frontend mezzanine card, and a data transmission board have been realized and tested. The design of a mobile mini-Data Acquisition (miniDAQ) system is ongoing and will be crucial for testing newly built small-diameter MDT chambers with new frontend electronics prototypes and for future integration and commissioning. I will present the overall design of the MDT frontend electronics system, results from ASIC and board prototypes, and tests using the miniDAQ system.