The 2024 edition of the APS Division of Particles & Fields (DPF) Meeting will be hosted collaboratively by the University of Pittsburgh and Carnegie Mellon University in Pittsburgh. This meeting is combined with the annual Phenomenology Symposium (Pheno) for a single year, and the joint event will take place from May 13 to 17, 2024. It will cover the latest topics in particle physics theory and experiment, plus related issues in astrophysics and cosmology. We encourage you to register and submit an abstract for a parallel talk. All talks at the symposium are expected to be in person. There will be no poster sessions at this conference. The conference adjourns at 1:00 PM on Friday, May 17. We hope to see you in May!
Early registration ends April 8, 2024
Parallel talk abstract submission deadline has been extended to April 8, 2024
Registration closes April 19 (Friday), 2024
Student travel award application deadline April 1, 2024
--- News ---
TOPICS TO BE COVERED:
PLENARY PROGRAM SPEAKERS:
Zeeshan Ahmed, Aram Apyan, Ketevi Assamagan, Carlos Arguelles, Elke Aschenauer, Christian Bauer, Tulika Bose, Andrew Brinkerhoff, Gabriella Carini, Valentina Dutta, Peter Elmer, Mark Elsesser, Jonathan Feng, Carter Hall, Erin Hansen, Roni Harnik, David Hertzog, Kevin Kelly, Kiley Kennedy, Peter Lewis, Elliot Lipeles, Kendall Mahn, Sudhir Malik, Julie Managan, Rachel Mandelbaum, Ethan Neil, Tim Nelson, Laura Reina, David Saltzberg, Mayly Sanchez, Kate Scholberg, Vladimir Shiltsev, Jesse Thaler, Jaroslav Trnka, Sven Vahsen, Daniel Whiteson, Jure Zupan, Kathryn Zurek, ...
SPECIAL EVENTS:
Conference Reception: Monday May 13
Early Career Forum: Tuesday May 14 at noon
Public Lecture by Prof. Hitoshi Murayama: Tuesday May 14
Conference Banquet: Thursday May 16
Student Travel Awards: With support from DPF, a number of awards (up to $300 each) are available to graduate students from US institutions for travel and accommodation at DPF-Pheno 24. A student applicant should send an updated CV, a statement of financial need, and an indication of their talk submission to DPF-Pheno 24 by email to dpfpheno24@pitt.edu with the subject line "DPF-Pheno 24 travel assistance", and arrange for a short recommendation letter to be sent by their thesis advisor. Decisions will be based on academic qualifications and financial need. The application deadline is April 1 (the same as the abstract submission deadline), and winners will be notified by April 19. Winners' names and institutions will be announced at the conference banquet.
DPF - PHENO 2024 PROGRAM COMMITTEE: Todd Adams (Florida State U.), Andrea Albert (LANL), Timothy Andeen (U. Texas), Emanuela Barberis (Northeastern U.), Robert Bernstein (FNAL), Sapta Bhattacharya (Wayne State), Tom Browder (U. Hawaii), Stephen Butalla (Florida Tech), Joel Butler (FNAL), Mu-Chun Chen (UC Irvine), Sekhar Chivukula (UC San Diego), Sarah Demers (Yale U.), Dmitri Denisov (BNL), Bertrand Echenard (Caltech), Sarah Eno (U. Maryland), Andre de Gouvea (Northwestern U.), Tao Han (U. Pittsburgh), Mike Kordosky (William and Mary), Mark Messier (Indiana U.), Marco Muzio (Penn State U.), Jason Nielsen (UC Santa Cruz), Vaia Papadimitriou (FNAL), Manfred Paulini (CMU), Heidi Schellman (Oregon State Univ.), Gary Shiu (U. Wisconsin-Madison), Tom Shutt (SLAC), Mayda Velasco (Northwestern U.), Gordon Watts (U. Washington), Peter Winter (ANL).
DPF - PHENO 2024 LOCAL ORGANIZING COMMITTEE: John Alison (CMU), Brian Batell (U. Pitt), Amit Bhoonah (U. Pitt), Matteo Cremonesi (CMU), Arnab Dasgupta (U. Pitt), Valentina Dutta (CMU), Ayres Freitas (U. Pitt), Akshay Ghalsasi (U. Pitt), Joni George (U. Pitt), Grace Gollinger (U. Pitt), Tao Han (U. Pitt), Tae Min Hong (U. Pitt), Arthur Kosowsky (U. Pitt), Da Liu (U. Pitt), Matthew Low (U. Pitt), James Mueller (U. Pitt), Donna Naples (U. Pitt), Vittorio Paolone (U. Pitt), Diana Parno (CMU), Manfred Paulini (CMU), Andrew Zentner (U. Pitt).
The lightest supersymmetric particles could be higgsinos that have a small mixing with gauginos. If the lightest higgsino-like state makes up some or all of the dark matter with a thermal freezeout density, then its mass must be between about 100 and 1150 GeV, and dark matter searches put bounds on the amount of gaugino contamination that it can have. Motivated by the generally good agreement of flavor- and CP-violating observables with Standard Model predictions, I consider models in which the scalar particles of minimal supersymmetry are heavy enough to be essentially decoupled, except for the 125 GeV Higgs boson. I survey the resulting purity constraints as lower bounds on the gaugino masses and upper bounds on the higgsino mass splittings. I also discuss the mild excesses in recent soft lepton searches for charginos and neutralinos at the LHC, and show that they can be accommodated in these models if $\tan\beta$ is small and $\mu$ is negative.
A search for ``emerging jets'' produced in proton-proton collisions at a center-of-mass energy of 13 TeV is performed using data collected by the CMS experiment corresponding to an integrated luminosity of 138 fb$^{-1}$. This search examines a hypothetical dark quantum chromodynamics (QCD) sector that couples to the standard model (SM) through a scalar mediator. The scalar mediator decays into an SM quark and a dark sector quark. As the dark sector quark showers and hadronizes, it produces long-lived dark mesons that subsequently decay into SM particles, resulting in a jet, known as an emerging jet, with multiple displaced vertices. This search looks for pair production of the scalar mediator at the LHC, which yields events with two SM jets and two emerging jets at leading order. The results are interpreted using two dark sector models with different flavor structures, and exclude mediator masses up to 1950 (1850) GeV for an unflavored (flavor-aligned) dark QCD model. The unflavored results surpass a previous search for emerging jets by setting the most stringent mediator mass exclusion limits to date, while the flavor-aligned results provide the first direct mediator mass exclusion limits for this class of models.
Minimal Dark Matter models extend the Standard Model by incorporating a single electroweak multiplet, with its neutral component serving as a candidate for the thermal relic dark matter in the Universe. These models predict TeV-scale particles with sub-GeV mass splittings $\Delta$. Collider searches aim at producing the charged member of the electroweak multiplet which then decays into dark matter and a charged particle. Traditionally, these searches involve signatures of missing energy and disappearing tracks. Due to the small size of $\Delta$, the transverse momentum of this charged particle is too soft to be resolved at hadron colliders. In this talk, I show that a Muon Collider is capable of detecting these soft charged decay products, providing a means to discover TeV thermal relics with an almost degenerate charged companion. Our technique also facilitates the determination of $\Delta$, allowing for a comprehensive characterization of the dark sector. Our results indicate that a 3 TeV muon collider will have the capability to discover the highly motivated thermal Higgsino-like dark matter candidate as well as other scenarios of Minimal Dark Matter featuring larger multiplets whose neutral component corresponds to a fraction of the total dark matter in the Universe. This study highlights the potential of a muon collider to make significant discoveries even at its early stages of operation.
Dark portals like the gauge, Higgs, and neutrino portals are well-motivated extensions of the standard model (SM). These portals may lead to interactions between dark matter and the SM. In some scenarios, the mediator predominantly decays invisibly, making it challenging to constrain. The prospect of a future muon collider has triggered growing interest in the particle physics community. We show how a clean environment and high luminosity can lead to the best bounds for mediator masses of O(10-100) GeV, even though the proposed collider will have a very high center-of-mass energy of a few TeV.
The search for dark matter (DM) continues, with increasingly sensitive detectors at the WIMP scale, and novel detection techniques for discovering sub-GeV DM. In this talk I highlight two types of directionally sensitive experiments, in which the DM signal can be distinguished from the low-energy backgrounds. A new, highly efficient computational method can streamline the theory predictions, reducing the evaluation time by up to seven orders of magnitude.
Cosmic ray (CR) upscattering of dark matter is one of the most straightforward mechanisms to accelerate ambient dark matter, making it detectable at high threshold, large volume experiments. In this work, we revisit CR upscattered dark matter signals at the IceCube detector, considering both proton and electron scattering, in the former case including both quasielastic and deep inelastic scattering. We consider both scalar and vector mediators over a wide range of mediator masses, and use lower energy IceCube data than has previously been used to constrain such models. We show that our analysis sets the strongest existing constraints on cosmic ray boosted dark matter over much of the eV - MeV mass range.
We study the physics of the intermediate scattering regime for boosted dark matter (BDM) interacting with standard model (SM) target nucleons. The phenomenon of BDM, which is consistent with many possible DM models, occurs when DM particles receive a Lorentz boost from some process. BDM would then exhibit behavior similar to neutrinos as it potentially interacts, at relativistic speeds, in terrestrial neutrino detectors, producing (in)direct DM signatures in these experiments; this contrasts with recoil experiments, which probe the interactions of the non-relativistic halo of DM in our solar system. We investigate the intermediate scattering regime, between elastic and inelastic events, of such processes involving BDM at energies of order 1-2 GeV where resonant scattering processes occur. This research will be implemented as an event generator in the GENIE code for use in future experiments such as the LArTPCs at DUNE.
We perform a global fit of dark matter interactions with nucleons using a non-relativistic effective operator description, considering both direct detection and neutrino data. We examine the impact of combining the direct detection experiments CDMSlite, CRESST-II, CRESST-III, DarkSide-50, LUX, LZ, PandaX-II, PandaX-4T, PICO-60, SIMPLE, SuperCDMS, XENON100, and XENON1T along with neutrino data from IceCube and DeepCore, ANTARES, and Super-Kamiokande. While current neutrino telescope data lead to increased sensitivity compared to underground nuclear scattering experiments for dark matter masses above 100 GeV, our future projections show that the next generation of underground experiments will significantly outpace solar searches for most dark matter-nucleon elastic scattering interactions.
A sub-component of dark matter with a short collision length compared to a planetary size leads to efficient accumulation of dark matter in astrophysical bodies. Such particles represent an interesting physics target since they can evade existing bounds from direct detection due to their rapid thermalization in high-density environments. In this talk, I will demonstrate that terrestrial probes, such as large-volume neutrino telescopes and commercial/research nuclear reactors, can provide novel ways to constrain or discover such particles.
We propose anti-ferromagnets as optimal targets to hunt for sub-MeV dark matter with spin-dependent interactions. These materials allow for multi-magnon emission even for very small momentum transfers, and are therefore sensitive to dark matter particles as light as the keV scale. We use an effective theory to compute the event rates in a simple way. Among the materials studied here, we identify nickel oxide (a well-assessed anti-ferromagnet) as an ideal candidate target. Indeed, the propagation speed of its gapless magnons is very close to the typical dark matter velocity, allowing the absorption of all its kinetic energy, even through the emission of just a single magnon.
In this study, we present the development of a portable cosmic muon tracker tailored for both on-site measurements of cosmic muon flux and outreach activities. The tracker comprises two 7 cm x 7 cm plastic scintillators, wavelength-shifting (WLS) fibers, and Hamamatsu SiPMs (S13360-2050VE). The detector utilizes plastic scintillator panels optically coupled to WLS fibers, transmitting scintillation light to the SiPMs. SiPM outputs are routed to a PCB equipped with op-amp amplifiers and a peak-hold circuit, connected to an ESP32 microcontroller module. When muons traverse both scintillators, the emitted light triggers the SiPMs, generating signals proportional to the light intensity. These signals are then amplified, and the pulse peak is held for 500 microseconds. The peak analog voltage is subsequently digitized using the onboard ADC in the ESP32. Continuously measuring and recording peak values, the ESP32 triggers muon detection when both peaks surpass a set threshold. The SiPMs are powered by a high-voltage bias supply module, while a BMP180 module measures temperature and pressure. For real-time event tagging, a GPS module is interfaced with the ESP32. Housed within an acrylic box measuring 10 x 10 x 10 cm, the detector can be powered using a 5 V 1 A USB power bank. Additionally, a mobile app allows for real-time monitoring. This versatile and cost-effective portable detector facilitates cosmic muon research in various experimental settings. Its portability and low power requirements enable on-site measurements in environments such as tunnels, caves, and high altitudes.
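To make the coincidence logic concrete, here is a minimal MicroPython sketch of the two-fold trigger described above; the pin assignments, ADC threshold, and reset timing are illustrative assumptions, not the actual firmware.

```python
# Minimal MicroPython sketch of the two-fold coincidence trigger.
# Pin numbers, threshold, and timing are illustrative assumptions.
import time
from machine import ADC, Pin

THRESHOLD = 600          # ADC counts; set above the SiPM noise pedestal
adc_top = ADC(Pin(34))   # peak-hold output, top scintillator
adc_bot = ADC(Pin(35))   # peak-hold output, bottom scintillator
for adc in (adc_top, adc_bot):
    adc.atten(ADC.ATTN_11DB)   # full 0-3.3 V input range

n_events = 0
while True:
    p1, p2 = adc_top.read(), adc_bot.read()
    # The peak-hold circuit keeps the pulse maximum for ~500 us,
    # so a simple polling loop is fast enough to catch both channels.
    if p1 > THRESHOLD and p2 > THRESHOLD:
        n_events += 1
        print(n_events, time.ticks_ms(), p1, p2)  # timestamp + amplitudes
        time.sleep_us(500)      # wait for the peak-hold to reset
```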
The Mu2e experiment at Fermilab will conduct a world-leading search for Charged Lepton Flavour Violation (CLFV) in neutrino-less muon-to-electron conversion in the field of a nucleus. In doing so, it will provide a powerful probe into physics beyond the Standard Model, which can greatly enhance the rates of CLFV processes. To accomplish this measurement, which will constitute an $\mathcal{O}(10^{4})$ improvement as compared to previous measurements, Mu2e must have excellent control over potential backgrounds: requiring less than one background event for $\mathcal{O}(10^{18})$ muons stopped over the lifetime of the experiment. One such background arises from cosmic muons, which are expected to result in approximately one background event per day. Mu2e will defeat these cosmic ray background events with an active shielding system: a large-area cosmic ray veto (CRV) detector enclosing the apparatus, with the ability to identify and veto cosmic ray muons with an average efficiency of 99.99$\%$. This talk will briefly describe the Mu2e apparatus, the design of the CRV, its expected performance, and its present status in preparation for physics data-taking in 2026.
LYSO crystals are radiation-hard, non-hygroscopic, have a light yield of $\sim30,000\,\gamma$/MeV, a 40-ns decay time, and a radiation length of just 1.14 cm. Conventional photosensors work naturally at the LYSO peak wavelength of 420 nm. These properties suggest that an electromagnetic calorimeter made from LYSO should be ideal for high-rate, low-energy precision experiments where high resolution is imperative at energies below 100\,MeV. Yet, few examples exist, and the performance of previous prototypes did not achieve the energy resolution that the light-yield specifications might suggest. We have been designing a large-solid-angle, approximately spherical calorimeter made of tapered LYSO crystals for possible use in a new measurement of the branching ratio $R_{e/\mu} = \Gamma(\pi^+\rightarrow e^+\nu(\gamma))/\Gamma(\pi^+\rightarrow \mu^+\nu(\gamma))$. The $\pi$-to-$e$ decay emits a 69 MeV positron, to be measured against the continuum of $<53\,$MeV Michel positrons from muon decay. I will present our studies obtained with an array of recently optimized LYSO crystals made by SICCAS. We have obtained excellent results in bench tests with various sources, an array test with a 17.6 MeV $\gamma$ source from a $p$-Li reaction, and a test-beam run at the Paul Scherrer Institute using a positron beam from 30 to 100 MeV with excellent momentum resolution.
We present a calculation of QED radiative corrections to low-energy electron-proton scattering at next-to-leading order. This work builds on that performed previously by Maximon and Tjon, which relied on the soft-photon approximation for the two-photon exchange diagram. The calculations account for the finite size of the proton through electromagnetic dipole form factors and relax the approximation made in this earlier work. Comparisons are provided over the same kinematic ranges as those used by Maximon and Tjon. In addition, we will discuss the impact of these corrections on several kinematic distributions.
Electron-positron pair production and hadron photoproduction are the most important beam-induced backgrounds at linear electron-positron colliders. Predicting them accurately governs the design and optimization of detectors at these machines, and ultimately their physics reach. With the proposal, adoption, and first specification of the C3 collider concept, it is of primary importance to estimate these backgrounds and begin the process of tuning existing linear collider detector designs to fully exploit the parameters of the machine. We will report on the status of estimating both of these backgrounds at C3 using the SiD detector concept, and discuss the effects of the machine parameters on the preliminary detector and electronics design.
We present a decision tree-based implementation of autoencoder anomaly detection. A novel algorithm is presented in which a forest of decision trees is trained only on background and used as an anomaly detector. The fwX platform is used to deploy the trained autoencoder on FPGAs within the latency and resource constraints demanded by level 1 trigger systems. Results are presented with two datasets: a BSM Higgs decay to pseudoscalars with a $2\gamma 2b$ final state, and the LHC physics dataset for unsupervised New Physics detection. Finally, the effects of signal contamination on the training set are presented, demonstrating the possibility of training on data.
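As a rough illustration of the idea (a scikit-learn stand-in, not the fwX implementation), a forest can be trained on background only to reproduce its own inputs, with the reconstruction error serving as the anomaly score; the dataset and hyperparameters below are placeholders.

```python
# Sketch of a decision-tree "autoencoder": a forest trained on background
# to map inputs to themselves; anomalies reconstruct poorly.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_bkg = rng.normal(0.0, 1.0, size=(5000, 4))     # stand-in background features
X_test = np.vstack([rng.normal(0, 1, (100, 4)),  # background-like
                    rng.normal(3, 1, (100, 4))]) # anomalous

# Multi-output regression: inputs -> inputs, learned on background only
forest = RandomForestRegressor(n_estimators=100, max_depth=6, random_state=0)
forest.fit(X_bkg, X_bkg)

score = np.mean((forest.predict(X_test) - X_test) ** 2, axis=1)
print("median score, bkg-like:", np.median(score[:100]))
print("median score, anomalous:", np.median(score[100:]))
```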
This work is detailed in arXiv:2304.03836. New physics studies are shown, updated with respect to last year's presentation at Pheno 2023.
The Compact Muon Solenoid (CMS) detector at the CERN LHC produces a large quantity of data that requires rapid and in-depth quality monitoring to ensure its validity for use in physics analysis. These assessments are often done by visual inspection which can be time consuming and prone to human error. In this talk, we introduce the “AutoDQM” system for Automated Data Quality Monitoring in CMS to enable prompt and accurate data assessment. AutoDQM uses a beta-binomial probability function, principal component analysis, and autoencoders for anomaly detection. These algorithms were tested on already-validated data collected by CMS in 2022. The algorithms were able to identify anomalous “bad” data-taking runs at a rate 5-6 times higher than “good” runs suitable for physics analysis, demonstrating AutoDQM’s effectiveness in improving data quality monitoring.
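For intuition, a bin-by-bin beta-binomial comparison can be sketched as follows: the reference histogram defines a posterior for each bin's occupancy, and each data bin is scored against it. The prior choice and pull definition here are illustrative assumptions, not the AutoDQM code.

```python
# Sketch of a beta-binomial comparison of a run histogram to a reference.
import numpy as np
from scipy import stats

ref = np.array([120, 250, 400, 260, 130])   # reference-run histogram
data = np.array([55, 130, 195, 260, 60])    # run under test (note bin 4)

R, D = ref.sum(), data.sum()
# Beta(ref_i + 1, R - ref_i + 1) posterior for each bin fraction,
# marginalized into a beta-binomial for the data counts.
pvals = np.array([
    min(stats.betabinom(D, r + 1, R - r + 1).sf(d - 1),
        stats.betabinom(D, r + 1, R - r + 1).cdf(d))
    for r, d in zip(ref, data)
])
pulls = stats.norm.isf(pvals)   # convert tail probability to a pull
print(np.round(pulls, 2))       # a large pull flags the anomalous bin
```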
We present R-Anode, a new method for data-driven, model-agnostic resonant anomaly detection that raises the bar for both performance and interpretability. The key to R-Anode is to enhance the inductive bias of the anomaly detection task by fitting a normalizing flow directly to the small and unknown signal component, while holding fixed a background model (also a normalizing flow) learned from sidebands. In doing so, R-Anode is able to outperform all classifier-based, weakly-supervised approaches, as well as the previous Anode method which fit a density estimator to all of the data in the signal region instead of just the signal. We show that the method works equally well whether the unknown signal fraction is learned or fixed, and is even robust to signal fraction misspecification. Finally, with the learned signal model we can sample and gain qualitative insights into the underlying anomaly, which greatly enhances the interpretability of resonant anomaly detection and offers the possibility of simultaneously discovering and characterizing the new physics that could be hiding in the data.
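Schematically, the fit can be stated as a mixture likelihood over events in the signal region (SR), where only the signal density $p_{\rm sig}$ (a normalizing flow) is trained, $p_{\rm bg}$ is the fixed background model learned from sidebands, and the signal fraction $w$ is either learned or held fixed:
$$\log\mathcal{L} = \sum_{i\in{\rm SR}} \log\left[\, w\, p_{\rm sig}(x_i) + (1-w)\, p_{\rm bg}(x_i)\,\right].$$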
Anomaly detection is a promising, model-agnostic strategy to find physics beyond the Standard Model. State-of-the-art machine learning methods offer impressive performance on anomaly detection tasks, but interpretability, resource, and memory concerns motivate considering a wide range of alternatives. We explore using the 2-Wasserstein distance from optimal transport theory, both as an anomaly score and as input to interpretable machine learning methods, for event-level anomaly detection at the Large Hadron Collider. The choice of ground space plays a key role in optimizing performance. We comment on the feasibility of implementing these methods in the L1 trigger system.
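A minimal sketch with the Python Optimal Transport (POT) library shows how such an event-level score could be computed; treating events as $p_T$-weighted particle clouds in an $(\eta,\phi)$ ground space is an assumption for illustration, since the abstract stresses that the ground-space choice matters.

```python
# Sketch: 2-Wasserstein distance between two events as weighted point clouds.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(1)

def event(n):
    pts = rng.uniform(0.5, 2.0, n)        # particle pT (weights)
    ang = rng.normal(0.0, 1.0, (n, 2))    # (eta, phi) positions
    return pts / pts.sum(), ang           # normalized weights, coordinates

(w1, x1), (w2, x2) = event(20), event(25)
M = ot.dist(x1, x2, metric="sqeuclidean")      # squared ground distance
w2_dist = np.sqrt(ot.emd2(w1, w2, M))          # 2-Wasserstein distance
print(f"W2(event1, event2) = {w2_dist:.3f}")   # usable as an anomaly score
```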
Understanding the Higgs boson, both in the context of Standard Model physics and beyond-the-Standard Model hypotheses, is a key problem in modern particle physics. An increased understanding could come from the detection and analysis of pairs of Higgs bosons produced at hadron colliders. While such Higgs pairs have not yet been observed at the Large Hadron Collider (LHC), it is likely that they will be detected within the next few years at the High-Luminosity LHC. In this study, we show how a machine-learning-based Higgs pair analysis can constrain several dimension-6 SMEFT Wilson coefficients in the Higgs sector. We find that including shape-level information, e.g. in the form of the distributions of kinematic observables, in such analyses is likely to place tighter constraints on the coefficients than a rate-only analysis.
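As a schematic contrast (our notation, not taken from the study): a rate-only analysis fits a single Poisson count, while a shape-level analysis bins a kinematic observable so the Wilson coefficients $c$ also move the distribution rather than just the total rate:
$$\mathcal{L}_{\rm rate}(c) = {\rm Pois}(N\,|\,\mu(c)), \qquad \mathcal{L}_{\rm shape}(c) = \prod_{i=1}^{n_{\rm bins}} {\rm Pois}(N_i\,|\,\mu_i(c)).$$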
A model based on a $U(1)_{T 3R}$ extension of the Standard Model can address the mass hierarchy between the third and the first two generations of fermions, explain the thermal dark matter abundance, and accommodate the muon $g - 2$ and $R_{K^{(*)}}$ anomalies. The model contains a light scalar boson $\phi'$ and a heavy vector-like quark $\chi_u$ that can be probed at CERN's Large Hadron Collider (LHC). We perform a phenomenology study on the production of $\phi'$ and ${\chi}_u$ particles from proton-proton $(pp)$ collisions at the LHC at $\sqrt{s}=13$ TeV, primarily through $g$-$g$ and $t$-$\chi_u$ fusion. We adopt a phenomenological framework, an effective field theory approach, in which the $\chi_u$ and $\phi'$ masses are free parameters, and consider the final states of the $\chi_u$ decaying to $b$-quarks, muons, and MET from neutrinos and the $\phi'$ decaying to $\mu^+\mu^-$. The analysis is performed using machine learning algorithms, rather than traditional methods, to maximize the signal sensitivity with integrated luminosities of $150$, $300$, and $3000$ fb$^{-1}$. Further, we note the proposed methodology can be a key mode for discovery over a large mass range, including low masses traditionally considered difficult due to experimental constraints.
Charged Lepton Flavor Violation (cLFV) stands as a compelling frontier in the realm of particle physics, offering a unique window into the mysteries of flavor physics beyond the Standard Model. I will provide a comprehensive overview of the current experimental landscape and future prospects.
A survey of ongoing experimental efforts will be presented, highlighting recent breakthroughs and advancements in the field. Various experiments, ranging from high-energy accelerators to precision low-energy experiments, will be discussed, shedding light on the diverse strategies employed to detect elusive cLFV signals.
Furthermore, the talk will delve into the challenges faced by experimentalists and the ingenious techniques developed to overcome these obstacles. Emphasis will be placed on the interplay between theory and experiment, underscoring the importance of a collaborative approach in pushing the boundaries of our understanding.
In anticipation of the future, the presentation will explore upcoming experiments and their potential to provide crucial insights into cLFV. Novel technologies, experimental designs, and anticipated sensitivities will be discussed, offering a glimpse into the promising avenues that lie ahead.
By the end of the talk, attendees will gain a thorough appreciation of the dynamic landscape of experimental efforts in charged lepton flavor violation.
Neutrino oscillations have shown that lepton flavor is not a conserved quantity. Charged lepton flavor violation (CLFV) is suppressed by the small neutrino masses well below what is experimentally observable, while lepton number violation (LNV) is forbidden in the SM extended to include neutrino masses. New physics models predict higher rates of CLFV and allow for LNV. The CLFV $\mu^- \rightarrow e^-$ conversion and CLFV and LNV $\mu^- \rightarrow e^+$ conversion processes are sensitive to a wide range of new physics models.
$\mu^- \rightarrow e^+$ conversion is complementary to $0\nu\beta\beta$ decay and may be sensitive to flavor effects that $0\nu\beta\beta$ decay is insensitive to. A key background to the search for $\mu^- \rightarrow e^+$ conversion is radiative muon capture (RMC). Previous muon conversion experiments have had difficulty describing the RMC background when searching for $\mu^- \rightarrow e^+$ conversion. The Mu2e experiment at FNAL aims to improve the sensitivity to $\mu^- \rightarrow e^-$ conversion by a factor of 10,000. In order to make a similar improvement in the sensitivity to $\mu^- \rightarrow e^+$ conversion, the RMC background will need to be well understood. I will discuss RMC and previous $\mu^- \rightarrow e^+$ conversion searches, and then the upcoming $\mu^- \rightarrow e^+$ conversion search at the Mu2e experiment.
Charged lepton flavor violation is an unambiguous signature of New Physics. Current experimental status and future prospects from the electron-positron colliders are discussed. Discovery potential of New Physics models with charged lepton flavor violation as its experimental signature are also presented.
Lepton flavor universality (LFU) is an assumed symmetry in the Standard Model (SM). Violation of lepton flavor universality (LFUV) would be a clear sign of physics beyond the Standard Model and has been actively searched for in both small- and large-scale experiments. One of the most stringent tests of LFU comes from precision measurements of rare decays of light mesons. In particular, the ratio of branching fractions for charged pion decays, $R^{\pi}_{e/\mu}=\Gamma(\pi\rightarrow e\nu(\gamma))/\Gamma(\pi\rightarrow \mu\nu (\gamma))$, has tested LFU at the 0.1% level. However, while the value of $R^{\pi}_{e/\mu}$ is predicted to a precision of $10^{-4}$ in the SM, there is an opportunity to improve the experimental probing of LFU by another order of magnitude. In this talk, I will introduce the PIONEER experiment, recently approved at the Paul Scherrer Institute (PSI) in Switzerland, which aims to bridge the gap between the precision of SM predictions and that of experimental measurements. Besides leveraging the intense charged pion beam at PSI, the PIONEER experiment adopts several cutting-edge detector technologies, including a fully active 4D silicon target stopping the pions, a high-performance trigger and data acquisition system, and a liquid xenon calorimeter with excellent energy resolution and fast response. In addition to the precision measurement of $R^{\pi}_{e/\mu}$, the PIONEER experiment will also improve search sensitivities to new physics beyond the standard model through searches for exotic pion decays, such as those involving sterile neutrinos. Future phases of PIONEER with higher intensity will contribute to the test of the unitarity of the CKM matrix through a precision measurement of pion beta decay, leading to a precise determination of $V_{ud}$.
We show how the Mu3e experiment can improve sensitivity to light new physics by taking advantage of the angular distribution of the decay products. We also propose a new search at Mu3e for flavor-violating axions through the decay $\mu \to 3e + a$, which circumvents the calibration challenges that plague the $\mu \to e\, a$ search.
Like the weak interaction itself, the Higgs coupling to the left-chiral components of the Dirac bispinors for quarks "knows" which up goes with which down in the universal coupling. However, the simple conjecture that the right-chiral components of each are not so distinguished provides for a consistent determination of the quark mass spectra and of the CKM matrix relating their mass eigenstates (flavors) in terms of general, but perturbative, BSM corrections. The extension to charged leptons follows the same pattern, but the absence of right-chiral components of Dirac bispinors for neutrinos in the SM, and the corresponding mass-independent definition of the flavors of the left-chiral Weyl neutrinos, leads naturally to the PMNS matrix being almost tri-bi-maximal, due to the definition of the charged lepton flavors by their mass. However, a very different structure for the origin of neutrino mass is then required, which we conjecture is related to the dark matter nature of the right-chiral components, whether they complete neutrinos to Dirac bispinors or form Majorana neutrinos via the see-saw mechanism.
Upcoming cosmological surveys will probe the impact of a non-zero sum of neutrino masses on the growth of structures. These measurements are sensitive to the behavior of neutrinos at cosmic distances, making them a perfect testbed for neutrino physics beyond the standard model at long ranges. In this talk, I will introduce a novel signal from long-range self-interactions between neutrinos. In the late-time universe, this interaction triggers the Jeans instability in the cosmic neutrino background. As a result, the cosmic neutrino background forms macroscopic bound states and induces large isocurvature perturbations in addition to the cold dark matter density perturbations. This enhancement of matter perturbation is uniquely probed by late-time cosmological observables. We find that with the minimum sum of neutrino masses measured by neutrino oscillation experiments, the current SDSS data already place strong constraints on the long-range neutrino self-interactions for interaction range greater than kpc.
The talk will still be about the same generalization of QM but more focused on the difference in the interference pattern of two paths in canonical QM vs. this generalization of QM. The reason for this change is that I have made much more progress in this aspect than the topic of my current abstract. As such, I believe it would be more fruitful to talk about this work as opposed to higher-order interference, a work still in progress.
In this talk, we discuss the cosmological effects of a tower of neutrino states (equivalently, a tower of warm dark matter) on the cosmic microwave background (CMB) and large-scale structure. For concreteness, we consider the $N$-naturalness model, a proposed mechanism to solve the electroweak hierarchy problem. The model predicts a tower of neutrino states, which act as warm dark matter, with increasing mass and decreasing temperature compared to the standard model neutrino. Compared to a single neutrino state, such a neutrino tower induces a more gradual suppression of the matter power spectrum. The suppression increases with the total number of states in the neutrino tower.
We explore these effects quantitatively in the scalar $N$-naturalness model and show the parameter space allowed by the CMB, weak lensing, and Lyman-$\alpha$ datasets. We find that the neutrino-induced suppression of the power spectrum at small scales puts stringent constraints on the model. We emphasize the need for a faster Boltzmann solver to study the effects of the tower of neutrino states on smaller scales.
Natural anomaly-mediated supersymmetry breaking (nAMSB) models arise from modifications to anomaly-mediated SUSY breaking models to avoid conflicts with bounds from the Higgs mass, constraints from searches for wino-like WIMPs, and bounds from naturalness. nAMSB models still feature the wino as the lightest gaugino, but the higgsinos become the lightest EWinos. In nAMSB models with soft SUSY breaking in a sequestered sector, the Higgs mass is maintained at $m_h\sim125$ GeV, and sparticle masses fall within the LHC bounds. We explore model lines in the gravitino mass $m_{3/2}$ and find that the lower range is excluded by gluino pair searches, while the upper parameter space is excluded by gaugino pair searches. The middle range of $m_{3/2}\sim90 - 200$ TeV is expected to be fully testable at the HL-LHC with the following discovery channels: soft dilepton and trilepton from higgsino pair production, same-sign diboson production, trilepton from wino pair production, and top squark pair production.
Supersymmetric models with low electroweak fine-tuning are more prevalent on the string landscape than fine-tuned models. We assume a fertile patch of landscape vacua containing the minimal supersymmetric standard model (MSSM) as a low-energy EFT. Such models are characterized by light higgsinos in the mass range of a few hundred GeV whilst top squarks are in the 1-2.5 TeV range. Other sparticles are generally beyond current LHC reach. We evaluate prospects for top squark searches of the expected natural SUSY at HL-LHC.
Supersymmetry is an appealing theoretical extension of the Standard Model because this framework presents a viable dark matter candidate. Several CMS analyses have searched for evidence of supersymmetry at the electroweak scale in the compressed region, where the parent sparticle mass is close to that of the child, leading to soft Standard Model decay products that can be difficult to reconstruct. The latest results from several Run 2 CMS analyses are presented, with data from proton-proton collisions at a 13 TeV center-of-mass energy with luminosity up to 138 fb$^{-1}$. These analyses target a variety of final states and employ a suite of methods to set stringent limits on several types of supersymmetric models.
A search is presented for the pair-production of charginos and the production of a chargino and neutralino in a supersymmetric model where the near mass-degenerate chargino and neutralino each decay via $R$-parity-violating couplings to a Higgs boson and a charged lepton or neutrino. This analysis searches for a Higgs-lepton resonance in data corresponding to an integrated luminosity of 139 fb${}^{-1}$ recorded in proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector at the Large Hadron Collider at CERN.
A search is presented for the direct pair production of scalar tops, which each decay through an $R$-parity violating coupling to a charged lepton and a $b$-quark. The final state has two resonances formed by the lepton-jet pairs. Expected sensitivity will be shown for the dataset consisting of an integrated luminosity of 140 $fb^{-1}$ of proton-proton collisions at a center-of-mass energy of $\sqrt{s}= 13 $ TeV, collected between 2015 and 2018 by the ATLAS detector at the LHC. Supersymmetry is able to resolve many questions left unanswered by the Standard Model, such as the hierarchy problem. This search is inspired by the minimal supersymmetric B-L extension of the Standard Model, which has spontaneous $R$-parity violation that allows violation of lepton number.
In natural supersymmetric models defined by no worse than a part in thirty electroweak fine-tuning, winos and binos are generically expected to be much heavier than higgsinos. Moreover, the splitting between the higgsinos is expected to be small, so that the visible decay products of the heavier higgsinos are soft, rendering the higgsinos quasi-invisible at the LHC. Within the natural SUSY framework, heavy electroweak gauginos decay to $W$, $Z$ or $h$ bosons plus higgsinos in the ratio $\sim 2:1:1$, respectively. This is in sharp contrast to models with a bino-like lightest superpartner and very heavy higgsinos, where the charged (neutral) wino essentially always decays to a $W$ ($h$) boson and an invisible bino. Wino pair production at the LHC, in natural SUSY, thus leads to $VV$, $Vh$ and $hh+\not\!\!\!{E_T}$ final states ($V=W,Z$) where, for TeV scale winos, the vector bosons and $h$ daughters are considerably boosted. We identify eight different channels arising from the leptonic and hadronic decays of the vector bosons and the decay $h\to b\bar{b}$, each of which offers an avenue for wino discovery at the high luminosity LHC (HL-LHC). By combining the signal in all eight channels we find, assuming $\sqrt{s}=14$ TeV and an integrated luminosity of 3000 fb$^{-1}$, that the discovery reach for winos extends to $m({\rm wino})\sim 1.1$ TeV, while the 95\% CL exclusion range extends to a wino mass of almost 1.4 TeV. We also identify ``higgsino specific channels'' which could serve to provide $3\sigma$ evidence that winos lighter than 1.2 TeV decay to light higgsinos rather than to a bino-like LSP, should a wino signal appear at the HL-LHC.
The Peccei-Quinn (PQ) symmetry that solves the strong CP problem, being a global symmetry, suffers from a potential quality problem in that the symmetry is not respected by quantum gravity. In this talk I will present results from ongoing work (with B. Dutta and R.N. Mohapatra) where we successfully address this problem using a gauged U(1) symmetry. The PQ symmetry arises accidentally in a family of models and is protected by the gauged U(1) against quantum gravitational corrections. A unified theory based on SO(10) x U(1) gauge symmetry will also be presented, and the resulting axion phenomenology will be discussed.
A heavy axion avoids the quality problem and has been shown to produce interesting experimental signatures. A mirror sector has been invoked to explain how such axions can occur, often with a large hierarchy between the visible and mirror Higgs masses. I discuss a novel realization of the Twin Higgs framework that produces a heavy axion without this large hierarchy, addressing both the strong CP and electroweak hierarchy problems. I discuss the experimental constraints and discovery opportunities associated with this model.
We identify the QCD axion and right-handed (sterile) neutrinos as bound states of an SU(5) chiral gauge theory with Peccei-Quinn (PQ) symmetry arising as a global symmetry of the strong dynamics. The strong dynamics is assumed to spontaneously break the PQ symmetry, producing a high-quality axion and naturally generating Majorana masses for the right-handed neutrinos at the PQ scale. The composite sterile neutrinos can directly couple to the left-handed (active) neutrinos, realizing a standard see-saw mechanism. Alternatively, the sterile neutrinos can couple to the active neutrinos via a naturally small mass mixing with additional elementary states, leading to light sterile neutrino eigenstates. The SU(5) strong dynamics therefore provides a common origin for a high-quality QCD axion and sterile neutrinos.
The axion or axion like particle (ALP), as a leading dark matter candidate, is the target of many on-going and proposed experimental searches based on its coupling to photons. However, indirect searches for axions have not been as competitive as direct searches that can probe a large range of parameter space. In this talk, I will introduce the idea that axion stars will inevitably form in the vicinity of supermassive black holes due to Bose-Einstein condensation, enhancing the axion birefringence effect and opening up more windows for axion indirect searches. The oscillating axion field around black holes induces polarization rotation on the black hole image, which is detectable and distinguishable from astrophysical effects on the polarization angle, as it exhibits distinctive temporal variability and frequency invariability. We show that the polarization measurement from Event Horizon Telescope can set the most competitive limit on axions in the mass range of $10^{-21}$-$10^{-16}$ eV.
Proto-neutron stars, formed in the centers of Type-II supernovae, represent promising science targets for probing axions. The hypothetical particles are emitted via e.g. the Primakoff process and can modify the cooling rate of the proto-neutron stars and also convert to observable gamma rays while propagating through astrophysical magnetic fields. Observations of Supernova 1987A (SN 1987A) from the Solar Maximum Mission (SMM) gamma-ray telescope have previously been used to set bounds on the axion-photon coupling. In this work, we present updated limits with SMM data by including nucleon-nucleon bremsstrahlung as an additional mechanism of axion production. We also consider a novel axion conversion mechanism in the progenitor magnetic field of SN 1987A. This allows constraining larger axion masses and smaller axion-photon couplings due to the stronger magnetic field of the progenitor star compared to the magnetic field of the Milky Way. We use these results to project the sensitivity of gamma-ray searches towards a future Galactic supernova with a proposed full-sky gamma-ray telescope network.
Ultra-light axions with weak couplings to photons are motivated extensions of the Standard Model. We perform one of the most sensitive searches to-date for the existence of these particles with the NuSTAR telescope by searching for axion production in stars in the M82 starburst galaxy and the M87 central galaxy of the Virgo cluster. This involves a sum over the full stellar populations in these galaxies when computing the axion luminosity, as well as accounting for the conversion of axions to hard X-rays via magnetic field profiles from simulated IllustrisTNG analogue galaxies. We find no evidence for axions, and instead set robust constraints on the axion-photon coupling at the level of $|g_{a\gamma\gamma}| < 6.44 \times 10^{-13}$ GeV$^{-1}$ for $m_a \lesssim 10^{-10}$ eV at 95% confidence.
In this talk, I will introduce ARCANE reweighting, a new Monte Carlo technique to solve the negative weights problem in collider event generation. We will see a demonstration of the technique in the generation of $(e^+ e^- \longrightarrow q\bar{q} + 1~\mathrm{jet})$ events under the MC@NLO formalism.
In this scenario, ARCANE can reduce the fraction of negative weights by redistributing the contributions of $\mathbb{H}$- and $\mathbb{S}$-type events a) without introducing any biases in the distribution of physical observables and b) without requiring any changes to the matching and merging prescriptions used.
I believe that the technique can be applied to other processes of interest like $(q\bar{q}\longrightarrow W + \mathrm{jets})$ and $(q\bar{q}\longrightarrow t\bar{t}+\mathrm{jets})$ as well.
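Schematically (in our notation, not the precise ARCANE construction), the redistribution can be pictured as moving a compensating function $g(x)$ between the two channel contributions so that their sum, and hence every physical expectation value, is unchanged:
$$f_{\mathbb{H}}'(x) = f_{\mathbb{H}}(x) + g(x), \qquad f_{\mathbb{S}}'(x) = f_{\mathbb{S}}(x) - g(x), \qquad f_{\mathbb{H}}' + f_{\mathbb{S}}' = f_{\mathbb{H}} + f_{\mathbb{S}},$$
with $g$ chosen to reduce the fraction of events carrying negative weight.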
The Large Hadron Collider was developed, in part, to produce and study heavy particles such as the top quark. The lifetime of the top quark is on the order of $10^{-24}$ seconds. Due to its short lifetime, the top quark is observed indirectly by particle detectors through the particles it decays into. A key part of reconstructing heavy particles for observation is to properly assign the decay products to their respective top quarks or other parent particles. One common approach involves summing the momenta and energies of various particle combinations in different permutations to compute the masses of the expected parent particles in a specific decay process. Those masses are then compared to expected masses in order to select the best set of particle assignments for the full collision event. Here we demonstrate that a matrix-based approach, which incorporates additional terms related to the expected transverse momenta associated with both correct and incorrect particle pairings, leads to improvements in reconstruction. For the benchmark task, where two top quarks decay to six quarks (fully hadronic decay), this method improves reconstruction efficiency by approximately $10-13\%$ in events containing six to fourteen jets, compared to a mass-only approach.
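For reference, a minimal Python sketch of the mass-only baseline described above follows: every split of six jets into two triplets is scored by how close both triplet invariant masses come to the top mass, and the lowest-cost split is kept. The mass window and resolution are illustrative assumptions; the matrix-based method adds pairing-dependent transverse-momentum terms on top of this.

```python
# Mass-only combinatoric assignment of six jets to two top candidates.
import itertools
import numpy as np

M_TOP, SIGMA = 172.5, 25.0          # GeV; resolution is an assumption

def inv_mass(p4s):
    """Invariant mass of a list of (E, px, py, pz) four-vectors."""
    e, px, py, pz = np.sum(p4s, axis=0)
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def best_assignment(jets):          # jets: list of 6 four-vectors
    best = None
    for combo in itertools.combinations(range(6), 3):
        if 0 not in combo:          # avoid double-counting mirrored splits
            continue
        rest = [i for i in range(6) if i not in combo]
        m1 = inv_mass([jets[i] for i in combo])
        m2 = inv_mass([jets[i] for i in rest])
        chi2 = ((m1 - M_TOP) / SIGMA) ** 2 + ((m2 - M_TOP) / SIGMA) ** 2
        if best is None or chi2 < best[0]:
            best = (chi2, combo, tuple(rest))
    return best                      # (cost, triplet 1, triplet 2)
```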
One key problem in collider physics is that of binary classification to fully reconstruct final states. Considering top quark pair production in the fully hadronic channel as an example, we explore the effectiveness of multiple variational quantum algorithms (VQAs), including the quantum approximate optimization algorithm (QAOA) and its derivatives. Compared against other approaches, such as quantum annealing and kinematic methods, i.e., the hemisphere method, we demonstrate comparable or better efficiencies for selecting the correct pairing, depending on the particular invariant mass threshold.
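One common way to cast the pairing as a binary optimization suitable for annealers or QAOA (assumed here for illustration; the talk's exact objective may differ) is to let a spin $s_i=\pm1$ assign each reconstructed object $i$, with four-momentum $p_i$, to one of the two top candidates and minimize an invariant-mass-based cost such as
$$H(s) = \left[\, M^2\Big(\sum_{i:\,s_i=+1} p_i\Big) - M^2\Big(\sum_{i:\,s_i=-1} p_i\Big) \right]^2.$$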
The Large Hadron Collider (LHC) will undergo a major improvement from 2026 to 2028, becoming the High-Luminosity LHC (HL-LHC). The number of collisions per proton bunch crossing will increase from ~60 to ~200. This will stress the current event selection (trigger) system, and the efficiency of specialized jet triggers in particular. An important challenge lies in classifying jets coming from a single vertex or from multiple ones, and the difficulty of distinguishing these is exacerbated by the increased pile-up interactions and high-energy background jets at high luminosity. Therefore, as part of the ongoing ATLAS detector upgrade, we are developing a multi-vertex jet trigger for Level 0 (the hardware-based level) at the HL-LHC, using machine learning techniques such as Boosted Decision Trees (BDTs) to perform the classification. Building on recent advancements, such as the development of the fwXmachina package at the University of Pittsburgh (used to implement BDTs in Level-1 hardware), the project spans describing the HL-LHC multi-jet background, creating BDTs to classify single- and multi-vertex events, and implementing them on Field Programmable Gate Arrays (FPGAs). This trigger will benefit the identification of specific di-Higgs decays like HH $\rightarrow$ 4b, but also any interesting physics with 4 jets in the final state.
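As a toy illustration of the classification step (stand-in features and data, not the HL-LHC simulation), a boosted decision tree separating single-vertex from multi-vertex events could be trained as below before porting the trees to firmware.

```python
# Toy BDT for single- vs multi-vertex event classification.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 4000
# Stand-in features, e.g. jet-vertex z spread and leading pT fraction.
X_single = rng.normal([0.5, 0.8], [0.2, 0.1], (n, 2))
X_multi = rng.normal([1.5, 0.5], [0.4, 0.2], (n, 2))
X = np.vstack([X_single, X_multi])
y = np.r_[np.zeros(n), np.ones(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=50, max_depth=3)
bdt.fit(X_tr, y_tr)
print(f"test accuracy: {bdt.score(X_te, y_te):.3f}")
```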
The CMS detector will upgrade its tracking detector in preparation for the High-Luminosity Large Hadron Collider (HL-LHC). The Phase-2 outer tracker layout will consist of 6 barrel layers in the center and 5 endcap layers. These will be composed of two different types of double-sensor modules, capable of reading out hits compatible with charged particles with transverse momentum above 2 GeV (“stubs”). Stubs are used in the back-end Level-1 track-finding system to form tracks that will be considered by the Level-1 trigger to select interesting events. An important part of this upgrade is ensuring the tracker and the stub-building step work correctly, which is where Data Quality Monitoring (DQM) comes in. Currently, there is no automated system to measure the performance of stub reconstruction. This talk focuses on the software development to ensure that we can monitor the performance of stub reconstruction, making use of Monte Carlo truth information.
The possibility of a dark sector photon that couples to standard model lepton pairs has received much theoretical interest. Dark photons with GeV-scale masses could have decays with substantial branching fractions to simple decay modes such as opposite-sign muon pairs. If the dark photon originates from a heavy particle, for example a BSM Higgs, the dark photon is boosted in the lab frame (CMS detector), resulting in decay products in a narrow angular cone containing a lepton pair, referred to as a “lepton jet”. If the dark photon is short-lived, it appears to originate directly from the primary interaction vertex. In several production models, the dark photons are produced in pairs, resulting in events with two lepton jets. Such a distinctive signature is rarely produced by SM processes. We present the status of an analysis for Run 2 (139 fb$^{-1}$) for the dimuon decay channel. Selection criteria are based on simulated signals for a Higgs portal model with prompt production and simulated standard model backgrounds. Run 2 data is compared with simulated backgrounds for a control sample of like-sign muon pairs. A multivariate classifier method shows good separation of signal and background. Expected sensitivity to the production cross section is discussed.
A search for dark matter (DM) produced in association with a resonant b$\bar{b}$ pair is performed in proton-proton collisions at a center-of-mass energy of 13 TeV collected with the CMS detector during Run 2 of the Large Hadron Collider. The analyzed data sample corresponds to an integrated luminosity of 137 fb$^{-1}$.
Results are interpreted in terms of a novel theoretical model of DM production at the LHC that predicts the presence of a Higgs-boson-like particle in the dark sector, motivated simultaneously by the need to generate the masses of the particles in the dark sector and the possibility of relaxing constraints from the DM relic abundance by opening up a new annihilation channel. If such a dark Higgs boson decays into standard model (SM) states via a small mixing with the SM Higgs boson, one obtains characteristic large-radius jets in association with missing transverse momentum that can be used to efficiently discriminate signal from backgrounds. Limits on the signal strength of different dark Higgs boson mass hypotheses below 160 GeV are set for the first time with CMS data.
We unveil blind spot regions in dark matter (DM) direct detection (DMDD), for weakly interacting massive particles with a mass around a few hundred~GeV that may reveal interesting photon signals at the LHC. We explore a scenario where the DM primarily originates from the singlet sector within the $Z_3$-symmetric Next-to-Minimal Supersymmetric Standard Model (NMSSM). A novel DMDD spin-independent blind spot condition is revealed for singlino-dominated DM, in cases where the mass parameters of the higgsino and the singlino-dominated lightest supersymmetric particle (LSP) exhibit opposite relative signs (i.e., $\kappa < 0$), emphasizing the role of nearby bino and higgsino-like states in tempering the singlino-dominated LSP. Additionally, proximate bino and/or higgsino states can act as co-annihilation partner(s) for singlino-dominated DM, ensuring agreement with the observed relic abundance of DM. Remarkably, in scenarios involving singlino-higgsino co-annihilation, higgsino-like neutralinos can distinctly favor radiative decay modes into the singlino-dominated LSP and a photon, as opposed to decays into leptons/hadrons. In exploring this region of parameter space within the singlino-higgsino compressed scenario, we study the signal associated with at least one relatively soft photon alongside a lepton, accompanied by substantial missing transverse energy and a hard initial state radiation jet at the LHC. In the context of singlino-bino co-annihilation, the bino state, as the next-to-LSP, exhibits significant radiative decay into a soft photon and the LSP, enabling the possible exploration at the LHC through the triggering of this soft photon alongside large missing transverse energy and relatively hard leptons/jets resulting from the decay of heavier higgsino-like states.
We will present the operational status of the LHC Run 3 milliQan detector, whose installation began last year and was completed during the 2023-24 YETS, and which is being commissioned at the time of submission. We will also show any available initial results from data obtained with Run 3 LHC collisions.
FASER, the ForwArd Search ExpeRiment, has successfully taken data at the LHC since the start of Run 3 in 2022. From its unique location along the beam collision axis 480 m from the ATLAS IP, FASER has set leading bounds on dark photon parameter space in the thermal target region and has world-leading sensitivity to many other models of long-lived particles. In this talk, we will give a full status update of the FASER experiment and its latest results, with a particular focus on our very first search for axion-like particles and other multi-photon signatures.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus on a narrower range of masses: the natural scenario where dark matter originates from thermal contact with familiar matter in the early Universe requires the DM mass to lie within about an MeV to 100 TeV. Considerable experimental attention has been given to exploring Weakly Interacting Massive Particles in the upper end of this range (few GeV to ~TeV), while the region ~MeV to ~GeV is largely unexplored. Most of the stable constituents of known matter have masses in this lower range, tantalizing hints for physics beyond the Standard Model have been found here, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. It is therefore an exploration priority. If there is an interaction between light DM and ordinary matter, as there must be in the case of a thermal origin, then there necessarily is a production mechanism in accelerator-based experiments. The most sensitive way (if the interaction is not electron-phobic) to search for this production is to use a primary-electron beam to produce DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment that has unique sensitivity to light DM in the sub-GeV range. This contribution will give an overview of the theoretical motivation, the main experimental challenges and how they are addressed, as well as projected sensitivities in comparison to other experiments.
The mystery of dark matter is one of the greatest puzzles in modern science. What is 85% of the matter, or 25% of the mass/energy, of the universe made up of? No human knows for certain. Despite mountains of evidence from astrophysics and cosmology, direct laboratory detection eludes physicists. A leading candidate to explain dark matter is the WIMP, or Weakly Interacting Massive Particle, a thermal relic left over after the Big Bang. I will be presenting the first search results from the LZ experiment, as well as some subsequent analyses in different channels, such as low-energy electron recoils, high-energy nuclear recoils (EFT), and multiple scattering. LZ, deployed in South Dakota, is one of the flagship US DOE dark matter projects, and is currently world-leading from 10 GeV up to the TeV scale in mass-energy in terms of setting limits on the WIMP interaction strength following non-discovery. I will also showcase the unprecedented degree of agreement between LZ data and the simulation software (NEST+FlameNEST, LZLAMA, Geant4, and BACCARAT) utilized to model signal and background interactions in a detector like LZ.
Dark matter, estimated to make up 85% of the total mass of the Universe, remains a mystery in physics. Despite accumulating evidence supporting its existence, the true nature of dark matter is still elusive. One candidate hypothesis is the Weakly Interacting Massive Particle (WIMP). The search for WIMPs represents a real experimental challenge; it has been running for more than a decade and keeps pushing the limits further. The DarkSide program is part of this direct detection search and will continue with its next-generation experiment, DarkSide-20k.
The DarkSide-20k detector will consist of a dual-phase liquid argon time projection chamber (LArTPC) surrounded by two veto detectors inside a cryostat of 8 x 8 x 8 m³. It will be located in the Gran Sasso underground laboratory, which provides natural shielding from cosmic rays. The design minimizes background and aims at background-free operation, aided by strategies to suppress unwanted signals (such as neutrons, betas, and gammas). This is made possible by leveraging the exceptional background rejection power of liquid argon through pulse shape discrimination. The Photon Detection Units (PDUs) constitute a critical component of this design and will soon enter production. Cryogenic, low-background silicon photomultipliers (SiPMs) will be employed for the project, undergoing rigorous testing before being assembled into PDUs at the Nuova Officina Assergi (NOA) cleanroom. This facility is located at the external laboratory adjacent to the underground site. All of this will lead to very good sensitivity for the WIMP-nucleon cross section in an as-yet unexplored area of the parameter space.
We have further developed the dark matter (DM) Migdal effect within semiconductors beyond the standard spin-independent interaction. Ten additional non-relativistic operators are examined, which motivate five unique nuclear responses within the crystal. We derive the generalized effective DM-lattice Migdal Hamiltonian and present new limits for the full list of interactions.
In the context of a U(1)$_X$ extension of the Standard Model (SM), we consider a (super)heavy Dirac fermion dark matter (DM) which interacts with the SM sector through U(1)$_X$ gauge interaction with a sizable gauge coupling. Although its mass exceeds the unitarity bound for the thermal DM scenario, its observed relic density is reproduced through incomplete thermalization with the reheating temperature after inflation being lower than the DM mass. We investigate this DM scenario from the viewpoint of complementarity between direct DM detection experiments and LHC searches for the mediator $Z'$ boson.
As nuclear recoil direct detection experiments carve out more and more dark matter parameter space in the WIMP mass range, the need for searches probing lower masses has become evident. Since lower dark matter masses lead to smaller momentum transfers, we can look to the low momentum limit of nuclear recoils: phonon excitations in crystals. Single-phonon experiments promise to eventually probe dark matter masses below 1 MeV. However, the slightly higher mass range of 10-100 MeV can be probed via multiphonon interactions, which, importantly, do not require experimental thresholds as low to make a detection. In this work, we analyze dark matter interacting via a pseudoscalar mediator, which leads to spin-dependent scattering into multiphonon excitations. We consider several likely EFT operators and describe the future prospects of experiments for finding dark matter via this method. Our results are implemented in the Python package DarkELF and can be straightforwardly generalized to other spin-dependent EFT operators.
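As a rough illustration of the kinematics described above, the following minimal Python sketch integrates a toy dynamic structure factor over the kinematically allowed (q, ω) region; all names and the structure factor are illustrative stand-ins, not DarkELF's actual interface.

```python
import numpy as np

# Toy multiphonon scattering-rate integral (hypothetical, not DarkELF's API):
# integrate a stand-in dynamic structure factor S(q, w) over the region
# allowed by free-particle kinematics, w <= q*v - q^2/(2 m_chi).

M_CHI = 50e6   # DM mass in eV (50 MeV, inside the 10-100 MeV window above)
V0 = 7.3e-4    # typical halo velocity in units of c

def toy_structure_factor(q, w):
    """Stand-in for the material's spin-dependent multiphonon response."""
    w0, width = 0.03, 0.01  # optical-phonon scale and width in eV (illustrative)
    return np.exp(-((w - w0) ** 2) / (2 * width**2))

def relative_rate(threshold=1e-3):
    qs = np.linspace(1e2, 1e5, 400)        # momentum transfer, eV
    ws = np.linspace(threshold, 0.1, 400)  # energy deposit, eV
    Q, W = np.meshgrid(qs, ws)
    allowed = W <= Q * V0 - Q**2 / (2 * M_CHI)
    integrand = np.where(allowed, toy_structure_factor(Q, W), 0.0)
    return np.trapz(np.trapz(integrand, qs, axis=1), ws)

print(f"relative rate above a 1 meV threshold: {relative_rate():.3e}")
```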
We develop benchmarks for resonant di-scalar production in the generic complex singlet scalar extension of the Standard Model (SM), which contains two new scalars. These benchmarks maximize di-scalar resonant production: $pp\rightarrow h_2 \rightarrow h_1 h_1/h_1h_3/h_3h_3$, where $h_1$ is the observed SM-like Higgs boson and $h_{2,3}$ are new scalars. The decays $h_2\rightarrow h_1h_3$ and $h_2\rightarrow h_3h_3$ may be the only way to discover $h_3$, leading to a discovery of two new scalars at once. Current LHC and projected future collider (HL-LHC, FCC-ee, ILC500) constraints are used to produce benchmarks at the HL-LHC for $h_2$ masses between 250 GeV and 1 TeV and at a future $pp$ collider for $h_2$ masses between 250 GeV and 12 TeV. We update the current LHC bounds on the singlet-Higgs boson mixing angle. As the mass of $h_2$ increases, certain limiting behaviors of the maximum rates are uncovered due to theoretical constraints on the parameters. These limits, which can be derived analytically, are ${\rm BR}(h_2\rightarrow h_1h_1)\rightarrow 0.25$, ${\rm BR}(h_2\rightarrow h_3h_3)\rightarrow 0.5$, and ${\rm BR}(h_2\rightarrow h_1h_3) \rightarrow 0$. It can also be shown that the maximum rates of $pp\rightarrow h_2\rightarrow h_1h_1/h_3h_3$ approach the same value. Hence, all three $h_2\rightarrow h_ih_j$ decays are promising discovery modes for $h_2$ masses below $\mathcal{O}(1 {\rm TeV})$, while above $\mathcal{O}(1 {\rm TeV})$ the decays $h_2\rightarrow h_1h_1/h_3h_3$ are more encouraging. Masses for $h_3$ are chosen to produce a large range of signatures including multi-b, multi-vector boson, and multi-$h_1$ production. The behavior of the maximum rates implies that in the multi-TeV region this model may be discovered in the Higgs quartet production mode before Higgs triple production is observed. The maximum di-Higgs and four-Higgs production rates are similar in the multi-TeV range.
The knowledge of the Higgs potential is crucial for understanding the origin of mass and the thermal history of our Universe. We show how collider measurements and observations of stochastic gravitational wave signals can complement each other to explore the multiform scalar potential in the two-Higgs-doublet model. In our investigation, we analyze critical elements of the Higgs potential to understand the phase transition pattern. Specifically, we examine the formation of the barrier and the uplifting of the true vacuum state, which play crucial roles in facilitating a strong first-order phase transition. Furthermore, we explore the potential gravitational wave signals associated with this phase transition pattern and investigate the parameter space points that can be probed with LISA. Finally, we compare the impact of different approaches to describing the bubble profile on the calculation of the baryon asymmetry.
We study the conditions under which the CP violation in the quark mixing matrix can leak into the scalar potential of the real two-Higgs-doublet model (2HDM) via divergent radiative corrections, thereby spoiling the renormalizability of the model. We show that any contributing diagram must involve 12 Yukawa-coupling insertions and a factor of the hard $U(1)_{PQ}$-breaking scalar potential parameter $\lambda_5$, thereby requiring at least six loops; this also implies that the 2HDM with only softly-broken $U(1)_{PQ}$ is safe from divergent leaks of CP violation to all orders. In both the type-I and -II 2HDMs, we demonstrate that additional symmetries of the six-loop diagrams guarantee that all of the divergent CP-violating contributions cancel. We also show that the CP leak can occur at seven loops and enumerate the classes of diagrams that can contribute, providing evidence that the real 2HDM is theoretically inconsistent.
Exploring additional CP violation sources at the Large Hadron Collider (LHC) is vital to the Higgs physics programme beyond the Standard Model. An unexplored avenue at the LHC is a significant non-linear realization of CP violation, naturally described in non-linear Higgs Effective Field Theory (HEFT). In this talk, we will discuss constraining such interactions across a broad spectrum of single and double Higgs production processes, incorporating differential information where feasible statistically and theoretically. We will focus on discerning anticipated correlations in the Standard Model Effective Field Theory (SMEFT) from those achievable in HEFT in top-Higgs and gauge-Higgs interactions. We will discuss the LHC sensitivity, particularly when discriminating between SMEFTy and HEFTy CP violations in these sectors.
Field space geometry has been fruitful in understanding many aspects of EFT, including basis-independent criteria for distinguishing HEFT vs. SMEFT, reorganization of scattering amplitudes in covariant form, derivation of renormalization group equations and geometric soft theorem. We incorporate field space geometry in functional matching by dividing the field space into light and heavy subspaces. A modified covariant derivative expansion method is proposed to calculate the functional traces while accommodating the covariance of the light subspace geometry. We apply this formalism to the non-linear sigma model and reproduce the effective theory more efficiently compared to other matching methods.
We explore the connection between the Higgs hierarchy problem and the metastability of the electroweak vacuum. Previous work has shown that metastability bounds the magnitude of the Higgs mass squared parameter in the $m_H^2 < 0$ case, realized in our universe. We argue for the first time that metastability also bounds the Higgs mass in the counterfactual $m_H^2 > 0$ case; that is, metastability windows $m_H^2$. In the Standard Model, these bounds are orders of magnitude larger than the Higgs mass, but new physics can lower these scales. As an illustration, we consider vacuum stability in the presence of additional TeV scale fermions with Yukawa couplings to the Higgs and a dimension-$6$ term required to prevent complete instability of the vacuum. We find that the requirement of metastability imposes stringent bounds on the values of $m_H^2$ and the parameters characterizing the new physics.
The discovery of neutrino oscillation has ushered in a number of questions: Why are neutrino masses small? Are they different from other fermion masses? Are neutrinos the solution to the baryon asymmetry? Are there really only 3 neutrinos? Is there a relation between neutrino and quark mixing? And many more. In order to get to the bottom of these questions, a massive experimental program in particle, nuclear, and astrophysics is under way. In this talk I will try to highlight how interconnected these endeavors are.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino oscillation experiment in the US. It will have four far detector modules, each holding 17 kilotons of liquid argon. These modules sit 1500 meters underground and 1300 kilometers from the near detector complex. In this talk, I will give an overview of the DUNE experiment, including the status of the DUNE far site, the far and near detector prototypes, physics reach and recent results, and construction progress. Prospects for the DUNE Phase II program will also be introduced.
DUNE is the flagship of the next generation of neutrino experiments in the United States. It is designed to decisively measure neutrino CP violation and the mass hierarchy. It utilizes the Liquid Argon Time Projection Chamber (LArTPC) technology, which provides exceptional spatial resolution and the potential to accurately identify final state particles and neutrino events. At the same time, the high resolution of DUNE's LArTPCs makes reconstructing and identifying neutrino events challenging. Deep learning techniques offer a promising solution to this problem. At DUNE, convolutional neural networks, graph neural networks, and transformers are being developed and have already shown promising results in kinematic reconstruction, clustering, and event/particle identification. Deep learning methods have also been preliminarily tested on data from the DUNE prototype detector ProtoDUNE at CERN. In this talk, I will discuss the development of these deep-learning-based reconstruction methods at DUNE.
I will introduce the general concepts of DUNE (Deep Underground Neutrino Experiment), as well as the current status of protoDUNE-VD, one of the two large-scale LArTPC-based DUNE Far Detector prototypes located at CERN. I will then focus on my neural network module, aimed at speeding up the photon propagation step in the optical simulation of protoDUNE-VD. This module is 50-100 times faster than the traditional Geant4 method, making the photon detection simulation much more efficient.
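To illustrate the surrogate idea, the sketch below maps a scintillation vertex to per-detector visibilities with a small multilayer perceptron in a single forward pass, replacing per-photon ray tracing; the architecture and detector count are assumptions for illustration, not the module used in protoDUNE-VD.

```python
import torch
import torch.nn as nn

# Illustrative photon-library surrogate: map a scintillation vertex (x, y, z)
# to expected visibility per photodetector, so one forward pass replaces
# tracking each photon through the detector geometry.

N_PDS = 168  # number of photon detectors (illustrative)

class PhotonVisibilityNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_PDS), nn.Softplus(),  # visibilities >= 0
        )

    def forward(self, xyz):
        return self.net(xyz)

model = PhotonVisibilityNet()
vertices = torch.rand(1024, 3)   # normalized detector coordinates
visibilities = model(vertices)   # (1024, N_PDS) expected photon yields
```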
The Deep Underground Neutrino Experiment (DUNE) is one of two major next-generation neutrino experiments aimed at measuring neutrino properties, including the mass hierarchy and the CP-violating phase. The DUNE Far Detector will consist of four 17-kt modules, two of which have been prototyped at the ProtoDUNE experiment at CERN. The ProtoDUNE experiment consists of two liquid argon time projection chambers, which took hadron beam data in 2018-2020 and are preparing for a second run in the summer of 2024. This talk summarizes the ProtoDUNE experiment, its past results, and future plans.
We present theoretical results at approximate NNLO in QCD for top-quark pair-production total cross sections and top-quark differential distributions at the LHC in the SMEFT. These approximate results are obtained by adding higher-order soft gluon corrections to the complete NLO calculations. The higher-order corrections are large, and they reduce the scale uncertainties. These improved theoretical predictions can be used to set stronger bounds on top-quark QCD anomalous couplings.
We study the implications of precise gauge coupling unification on supersymmetric particle masses. We argue that precise unification favors the superpartner masses that are in the range of several TeV and well beyond. We demonstrate this in the minimal supersymmetric theory with a common sparticle mass threshold, and two simple high-scale scenarios: minimal supergravity and minimal anomaly-mediated supersymmetry. We also identify candidate models with a Higgsino or a wino dark matter candidate. Finally, the analysis shows unambiguously that unless one takes foggy naturalness notions too seriously, the lack of direct superpartner discoveries at the LHC has not diminished the viability of supersymmetric unified theories in general nor even precision unification in particular.
We present the basis of dimension-eight operators associated with universal theories. We first derive a complete list of independent dimension-eight operators formed with the Standard Model bosonic fields characteristic of such universal new physics scenarios. Without imposing C or P, the basis contains 175 operators; that is, the assumption of universality reduces the number of independent SMEFT coefficients at dimension eight from 44807 to 175. Of the 175 universal operators, 89 are included in the general dimension-eight operator basis in the literature. The 86 additional operators involve higher derivatives of the Standard Model bosonic fields and can be rotated in favor of operators involving fermions using the Standard Model equations of motion for the bosonic fields. By doing so we obtain the allowed fermionic operators generated in this class of models, which we map into the corresponding 86 independent combinations of operators in the dimension-eight basis of arXiv:2005.00059.
We are investigating the effects of dimension-six dipole moment operators on dipole moment measurements, namely the electric dipole moment (EDM) and the magnetic dipole moment (MDM).
Baryon number violation is our most sensitive probe of physics beyond the Standard Model. Its realization through heavy new particles can be conveniently encoded in higher-dimensional operators that allow for a model-agnostic analysis. The unparalleled sensitivity of nuclear decays to baryon number violation makes it possible to probe effective operators of very high mass dimension, far beyond the commonly discussed dimension-six operators. To facilitate studies of this enormous and scarcely explored testable operator landscape, we provide the exhaustive set of UV completions for baryon-number-violating operators up to mass dimension 15, which corresponds roughly to the border of sensitivity. In addition to the known Standard Model fields we also include right-handed neutrinos in our operators.
As in arXiv:2307.04255, we consider a radically modified form of supersymmetry (called susy here to avoid confusion), which initially combines standard Weyl fermion fields and primitive (unphysical) boson fields. A stable vacuum then requires that the initial boson fields, whose excitations would have negative energy, be transformed into three kinds of scalar-boson fields: the usual complex fields $\phi$, auxiliary fields $F$, and real fields $\varphi$ of a new kind (with degrees of freedom and gauge invariance preserved under the transformation). The requirement of a stable vacuum thus imposes Lorentz invariance, and also immediately breaks the initial susy -- whereas the breaking of conventional SUSY has long been a formidable difficulty. Even more importantly, for future experimental success, the present formulation may explain why no superpartners have yet been identified: Embedded in an $SO(10)$ grand-unified description, most of the conventional processes for production, decay, and detection of sfermions are excluded, and the same is true for many processes involving gauginos and higgsinos. This implies that superpartners with masses $\sim 1$ TeV may exist, but with reduced cross-sections and modified experimental signatures. For example, a top squark (as redefined here) will not decay at all, but can radiate pairs of gauge bosons and will also leave straight tracks through second-order (electromagnetic, weak, strong, and Higgs) interactions with detectors. The predictions of the present theory include (1) the dark matter candidate of our previous papers, (2) many new fermions with masses not far above 1 TeV, and (3) the full range of superpartners with a modified phenomenology.
Baryon Acoustic Oscillations are considered one of the most powerful cosmological probes. They are assumed to provide distance measures independent of a specific cosmological model, and at the same time the obtained distances are considered agnostic with respect to other cosmological observations. However, in current measurements, the inference is done assuming parameter values of a fiducial LCDM model and employing prescriptions tested to be unbiased only within some LCDM fiducial cosmologies. Moreover, the procedure must face the ambiguity of choosing a specific correlation-function model template to measure cosmological distances.
Does this comply with the requirement of model and parameter independent distances useful, for instance, to select cosmological models, detect Dark Energy and characterize cosmological tensions?
In this talk I will review the subject, answer compelling questions and explore new promising research directions.
Models of cosmology including dark radiation (DR) have garnered recent interest, due in part to their versatility in modifying the $\Lambda$CDM concordance model in hopes of resolving observational tensions. Equally interesting is the capacity for DR models to be constrained or detected with current and near-term cosmological data. Finally, DR models have the potential to be embedded into specific microphysical models of BSM physics with clear particle physics origins. With these three features of DR in mind, we explore the detailed dynamics for a class of DR models that thermalize after big-bang nucleosynthesis by mixing with the standard model (SM) neutrinos. Such models were proposed in previous work (2301.10792), where only background quantities were studied, and the main focus was on the large viable parameter space. Concentrating on a sub-class of these models with a mass threshold within the dark sector, motivated by the successes of such models for resolving the Hubble tension, we perform a detailed MCMC analysis to derive constraints from CMB, BAO, and Supernovae data. In this talk, I will comment on (i) the degree to which interactions/mixing of DR with SM neutrinos is constrained by current data, (ii) the prospect of the model to resolve the Hubble tension, and (iii) the relevance of this type of self-interacting dark neutrino for explaining anomalies in neutrino experiments.
Cosmological first order phase transitions are typically associated with physics beyond the Standard Model, and thus of great theoretical and observational interest. Models of phase transitions where the energy is mostly converted to dark radiation can be constrained through limits on the dark radiation energy density (parameterized by $\Delta N_{\rm eff}$). However, the current constraint ($\Delta N_{\rm eff} < 0.3$) assumes the perturbations are adiabatic. We point out that a broad class of non-thermal first order phase transitions that start during inflation but do not complete until after reheating leave a distinct imprint in the scalar field from bubble nucleation. Dark radiation inherits the perturbation from the scalar field when the phase transition completes, leading to large-scale isocurvature that would be observable in the CMB. We perform a detailed calculation of the isocurvature power spectrum and derive constraints on $\Delta N_{\rm eff}$ based on CMB+BAO data. For a reheating temperature of $T_{\rm rh}$ and a nucleation temperature $T_*$, the constraint is approximately $\Delta N_{\rm eff}\lesssim 10^{-5} (T_*/T_{\rm rh})^{-4}$, which can be much stronger than the adiabatic result. We also point out that since perturbations of dark radiation have a non-Gaussian origin, searches for non-Gaussianity in the CMB could place a stringent bound on $\Delta N_{\rm eff}$ as well.
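To make the quoted scaling concrete (our arithmetic, using the bound above): a transition completing near reheating, $T_* \simeq T_{\rm rh}$, is constrained at the level $\Delta N_{\rm eff} \lesssim 10^{-5}$, while $T_* = 0.1\,T_{\rm rh}$ relaxes the bound to $\Delta N_{\rm eff} \lesssim 10^{-5} \times (0.1)^{-4} = 0.1$, already comparable to the adiabatic limit of 0.3.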
We demonstrate that searches for dark sector particles can provide probes of reheating scenarios, focusing on the cosmic millicharge background produced in the early universe. We discuss two types of millicharged particles (mCPs): either with, or without, an accompanying dark photon. These two types of mCPs have distinct theoretical motivations and cosmological signatures. We discuss constraints from the overproduction and mCP-baryon interactions of the mCP without an accompanying dark photon, for different reheating temperatures. We also consider the $\Delta N_{\rm eff}$ constraints on the mCPs from kinetic mixing, varying the reheating temperature. The regions of interest in which accelerator and other experiments can probe the reheating scenarios are identified for both cases. These probes can potentially allow us to set an upper bound on the reheating temperature down to $\sim 10$ MeV, much lower than the previously considered upper bound from inflationary cosmology at around $\sim 10^{16}$ GeV. In addition, we derive a new "distinguishability condition", in which the two mCP scenarios may be differentiated by combining cosmological and theoretical considerations.
The decay of asymmetric dark matter (ADM) leads to possible neutrino signatures with an asymmetry of neutrinos and antineutrinos. In the high-energy regime, the Glashow resonant interaction $\bar{\nu}_e+e^- \rightarrow W^-$ is the only way to differentiate the antineutrino contribution in the diffuse astrophysical high-energy neutrino flux experimentally, which provides a possibility to probe heavy ADM. In this talk, I will discuss the neutrino signal from ADM decay, the constraints with the current IceCube observation of Glashow resonance, and the projected sensitivities with the next-generation neutrino telescopes.
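For orientation, the resonance condition for an electron at rest, $s = 2 m_e E_{\bar\nu} + m_e^2 \simeq m_W^2$, fixes the antineutrino energy at $E_{\bar\nu} \approx m_W^2/(2 m_e) \approx (80.4\ {\rm GeV})^2/(2 \times 0.511\ {\rm MeV}) \approx 6.3\ {\rm PeV}$, the energy scale at which the $\bar\nu_e$ component can be isolated in this way.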
We study the cosmological phase transition in the Conformal Freeze-In (COFI) dark matter model. The dark sector is a 4D conformal field theory (CFT) at high energy scales, but its conformal symmetry is broken in the IR through a small coupling of a relevant CFT operator $\mathcal{O}_\mathrm{CFT}$ to a Standard Model (SM) portal operator. The dark sector confines below a gap scale $M_{\mathrm{gap}}$ of order keV-MeV, forming bound states amongst which is the dark matter candidate. We consider the holographic dual in 5D given by a Randall-Sundrum-like model, where the SM fields and the dark matter candidate are placed on the UV and IR branes respectively. The separation between the UV and IR branes is stabilized by a bulk scalar field dual to $\mathcal{O}_\mathrm{CFT}$, naturally generating a hierarchy between the electroweak scale and $M_{\mathrm{gap}}$. The confinement of the CFT is then dual to the spontaneous symmetry breaking by the 5D radion potential. We find the viable parameter space of the theory which allows the phase transition to complete promptly without significant supercooling.
Dark glueballs, bound states of dark gluons in an $SU(N)$ dark sector (DS), have been considered as a dark matter (DM) candidate. We study a scenario where the DS consists only of dark gluons and dominates the Universe after primordial inflation. As the Universe expands and cools down, dark gluons get confined into a set of dark glueball states; they undergo freeze-out, leaving the Universe glueball-dominated. To recover the visible sector and standard cosmology, connectors between the sectors are needed. The heavy connectors induce decays of most glueball states, which populates the visible sector; however, some of the glueballs could remain long-lived on a cosmological time scale because of an (approximately) conserved charge, and hence they are a potential DM candidate. We study in detail the cosmological evolution of the DS, and show resulting constraints and observational prospects.
We introduce a model of dark matter (DM) where the DM is a composite of a spontaneously broken conformal field theory. We find that if the DM relic abundance is determined by freeze-out of annihilations to dilatons, where the dilatons are heavier than the DM, then the model is compatible with theoretical and experimental constraints for DM masses in the 0.1-10 GeV range. The conformal phase transition is supercooled and strongly first-order, and can thus source large stochastic gravitational wave signals consistent with those recently observed by NANOGrav. Future experiments are projected to probe a majority of the viable parameter space in our model.
We outline a new production mechanism for dark matter that we dub "recycling": dark sector particles are kinematically trapped in the false vacuum during a dark phase transition; the false pockets collapse into primordial black holes (PBHs), which ultimately evaporate before Big Bang Nucleosynthesis (BBN) to reproduce the dark sector particles. The requirement that all PBHs evaporate prior to BBN necessitates high-scale phase transitions and hence high-scale masses for the dark sector particles in the true vacuum. Our mechanism is therefore particularly suited for the production of ultra-heavy dark matter (UHDM) with masses above $\sim 10^{12}$ GeV. The correct relic density of UHDM is obtained because of the exponential suppression of the false pocket number density. Recycled UHDM has several novel features: the dark sector today consists of multiple decoupled species that were once in thermal equilibrium, and the PBH formation stage has extended mass functions whose shape can be controlled by IR operators coupling the dark and visible sectors.
White dwarfs have long been considered as large-scale dark matter (DM) detectors. Owing to their high density and relatively large size, these objects can capture large amounts of DM, potentially producing detectable signals. In this talk, I will show how we can probe for the first time the elusive higgsino, one of the remaining supersymmetric DM candidates that is largely unconstrained, using the white dwarf population within the Milky Way’s central parsec combined with existing gamma-ray observations of this region.
This study demonstrates how magnetically levitated (MagLev) superconductors can detect dark-photon and axion dark matter via electromagnetic interactions, focusing on the underexplored low-frequency range below a kHz. Unlike traditional sensors that primarily detect inertial forces, MagLev systems are sensitive to electromagnetic forces, enabling them to respond to oscillating magnetic fields induced by dark matter. The research highlights the superconductors' capacity to probe dark matter when its Compton frequency matches the superconductor's trapping frequency and details the adjustments necessary for detection. This approach could significantly enhance sensitivity in the Hz to kHz frequency range for dark matter detection.
We explore the possibility of probing (ultra)light dark matter (DM) using the Mössbauer spectroscopy technique. Due to the time-oscillating DM background, a small shift in the emitted photon energy is produced, which in turn can be tested with the absorption spectrum. As the DM-induced effect (signal) depends on the distance between the Mössbauer emitter and the absorber, this allows us to probe inverse DM masses of the order of macroscopic distance scales. By using the existing synchrotron-based Mössbauer setup, we can probe DM parameter space on par with the bounds from various fifth-force experiments. We show that our method can improve the existing limits from experiments looking for the oscillating nature of DM by several orders of magnitude. An advancement of the synchrotron facilities would enable us to probe DM parameter space beyond the fifth-force limit by several orders of magnitude.
Detecting axion and dark photon dark matter in the milli-eV mass range has been considered being a significant challenge due to its frequency being too high for high-Q cavity resonators and too low for single-photon detectors to register. I will present a method that overcomes this difficulty (based on recent work arXiv:2208.06519) by using trapped electrons as high-Q resonators to detect axion and dark photon dark matter, and set a new limit on dark photon dark matter at 148 GHz (~0.6meV) that is around 75 times better than previous constraints by a 7 days proof-of-principle measurement. I will also propose some updates to this work that improve the result a lot by optimizing some of the experimental parameters and techniques.
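As a consistency check of the quoted numbers (our arithmetic): a Compton frequency of 148 GHz corresponds to a mass $m = h\nu \approx 4.14\times10^{-15}\ {\rm eV\,s} \times 1.48\times10^{11}\ {\rm Hz} \approx 6.1\times10^{-4}$ eV, i.e. the ~0.6 meV quoted above.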
Atom interferometers and gradiometers have unique advantages in searching for various kinds of dark matter (DM). Our work focuses on light-DM scattering and the gravitational effect of macroscopic DM in such experiments.
First, we discuss sensitivities of atom interferometers to a light DM subcomponent at sub-GeV masses through decoherence and phase shifts from spin-independent scatterings. Benefiting from their sensitivity to extremely low momentum deposition and from coherent scattering, atom interferometers will be highly competitive and complementary to other direct detection experiments, in particular for a DM subcomponent with mass $m_\chi \leq 10$ keV.
As excellent accelerometers, atom gradiometers can also be sensitive to macroscopic DM through gravitational interactions. We present a general framework for calculating phase shifts in atom interferometers and gradiometers with metric perturbations sourced by time-dependent weak Newtonian potentials. We derive signals from gravitationally interacting macroscopic DM and find that future space missions like AEDGE could constrain macroscopic DM fractions to less than unity for DM masses around $m_\text{DM}\sim 10^7$ kg.
Beyond-the-Standard-Model (BSM) Higgs bosons with a same-flavor, opposite-charge dilepton plus missing transverse energy (MET) final state are predicted by many models, including extensions of supersymmetry with an additional scalar. Such models are motivated by phenomenological issues with the Standard Model, such as the hierarchy problem, and by astrophysical observations such as the excess of gamma-ray radiation in the Milky Way galactic center. Existing searches have sensitivity over the ranges 1-4 GeV and >20 GeV for the mass of this scalar; we are now targeting the 4-20 GeV range. Conveniently, the proposed signal decay has the same final state as the signal region of a published ATLAS search for gauginos in a compressed-mass scenario at the LHC. Due to this signal region overlap, we can take advantage of the analysis preservation and reinterpretation framework (RECAST) to calculate limits on the branching ratio for this decay mode instead of building a dedicated analysis in this range.
We present a search for the γ+H production mode with data from the CMS experiment at the LHC, using 138 fb$^{-1}$ of data at $\sqrt{s} = 13$ TeV. In this analysis we target a signature of a boosted Higgs boson recoiling against a high-energy photon in the $H \to 4\ell$ and $H \to b\bar{b}$ final states. Effective HZγ and Hγγ anomalous couplings are considered within the framework of Effective Field Theory. Within this model, constraints on the γH production cross section are presented, along with simultaneous constraints on four anomalous HZγ and Hγγ couplings.
The Standard Model (SM) predicts the couplings of the Higgs boson for a given Higgs boson mass, and experimental values different from these predictions would be strong indicators of physics beyond the SM. While Higgs decays to vector bosons and third-generation charged fermions have been established with good agreement with SM couplings, the Higgs boson coupling to charm quarks has yet to be experimentally determined with statistical significance. The production mechanism in which the Higgs boson is produced in association with a vector boson and subsequently decays to a pair of charm quarks (VH, H->cc) is a promising process for studying the Higgs-charm Yukawa coupling due to its high signal-to-background ratio. We discuss the planned SM search for VH, H->cc in the resolved-jet regime, where the Higgs boson has low to moderate transverse momentum (<~ 300 GeV), using CMS Run-3 proton-proton data. Previous analyses with CMS Run-2 data reconstructed the Higgs decay in the resolved-jet regime with two small-radius jets that were flavor-tagged independently, which underperformed because information from the radiation between the decay products of the Higgs was excluded. At LHC Run 3, we intend to employ the novel "PAIReD" jet reconstruction technique: elliptical clusters of particles defined by pairs of small-radius jets with arbitrary separations between them. Modern flavor-tagging algorithms trained on such novel jets allow us to increase tagging performance by exploiting correlations between hadronization products, extending the capabilities of merged-jet flavor-tagging techniques to small-radius jets. Flavor tagging and simultaneously predicting the mass of PAIReD jets via machine learning provide greater leverage for separating the signal from the background. The overall analysis will be extended to include Higgs decays to bottom quarks, resulting in a simultaneous measurement of the Higgs-charm and Higgs-bottom Yukawa couplings. We expect to improve the rejection of major backgrounds by a factor of around 2.
The Higgs boson gives masses to all massive particles in the Standard Model (SM) and plays a crucial role in the theory. Studying different production and decay modes of the Higgs at the Large Hadron Collider is essential. Vector boson fusion (VBF) is the second-largest production mechanism of the Higgs. The Higgs boson has the largest probability of decaying into a pair of bottom quarks, whereas the Higgs interaction with charm quarks has never been directly observed. Thus, I led a sensitivity study, conducted in the summer of 2023, to give insight into the best optimizations for the Run-3 VBF Higgs to bb and Higgs to cc analysis. A new VBF trigger introduced in late 2018, at the end of Run 2, allows us to search for the Higgs boson decay to charm quarks in the VBF production mode. The sensitivity study utilized the new trigger and began with determining the best working points for the flavor tagging of b and c quarks. I optimized hyperparameters and input variables of the Boosted Decision Trees (BDTs) and introduced cuts on the BDT score to increase the significance of the invariant mass of the two signal b-quarks and c-quarks. The sensitivity study demonstrated the feasibility of searching for VBF Higgs to bb and Higgs to cc using a partial Run 2 and Run 3 ATLAS dataset, leading to a full ATLAS analysis in September 2023. This talk will summarize the sensitivity study and my current involvement in further optimizing the analysis to enhance signal sensitivity.
In this talk, we present the two-loop order $\mathcal{O}(\alpha\alpha_s)$ correction to the bottom quark on-shell wavefunction renormalization constant, and we update the $\overline{\rm MS}$ mass and Yukawa coupling corrections at the same order, considering the full dependence on the top quark mass and on the bottom mass itself.
The Georgi-Machacek (GM) model is a motivated extension of the Standard Model (SM) that predicts the existence of singly and doubly charged Higgs bosons (denoted H± and H±±). Searches for these types of particles were conducted by the ATLAS collaboration at CERN with 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV $pp$ collision data (Run 2, collected between 2015 and 2018, see arXiv:2312.00420 and arXiv:2207.03925). Slight excesses were observed in searches utilizing events with vector boson-fusion (VBF) topologies. To further study these excesses, a new combined search for the H± and H±± is underway using additional $pp$ data collected by ATLAS during 2022-2024 (Run 3) at a collision energy of $\sqrt{s} = 13.6$ TeV. The VBF production of the H± and H±± is once again utilized, where the H± decays to a $W$ and $Z$ boson and the H±± decays into two same-sign $W$ bosons. Only the fully leptonic decays of the vector bosons are considered. Improvements over the Run 2 H± and H±± searches are discussed and some preliminary results are presented.
We present work in progress on using the timing information of jet constituents to determine the production vertex of highly displaced jets formed in the decay of a long-lived particle. We also demonstrate that the same network can output a much more consistent jet time that is less sensitive to geometric effects, allowing for better exclusionary power compared to $p_T$-weighted time.
Hadronization, a crucial component of event generation, is traditionally simulated using finely-tuned empirical models. While current phenomenological models have achieved significant success in simulating this process, there remain areas where they fall short of accurately describing the underlying physics. An alternative approach is to use machine-learning-based models.
In this talk, I will present recent developments in MLHAD – a machine learning-based model for hadronization. We introduce a new training method for normalizing flows, which improves the agreement between simulated and experimental distributions of high-level observables by adjusting single-emission dynamics. Our results constitute an important step toward realizing a machine-learning-based model of hadronization that utilizes experimental data during training.
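For readers unfamiliar with the method, the sketch below trains a single affine-coupling layer by maximum likelihood, the same objective used to fit flow-based models to emission kinematics; the architecture and the stand-in data are illustrative assumptions, not the MLHAD implementation.

```python
import torch
import torch.nn as nn

# Minimal affine-coupling normalizing flow trained by maximum likelihood on
# two-dimensional stand-in "emission kinematics" (e.g., (z, pT) pairs).

class AffineCoupling(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # outputs scale and shift
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t           # transform half the coordinates
        log_det = s.sum(dim=-1)              # log|det| of the Jacobian
        return torch.cat([x1, y2], dim=-1), log_det

flow = AffineCoupling()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
base = torch.distributions.Normal(0.0, 1.0)

for step in range(100):
    x = torch.randn(512, 2)  # stand-in for observed emission kinematics
    z, log_det = flow(x)
    # maximize log p(x) = log p_base(z) + log|det dz/dx|
    loss = -(base.log_prob(z).sum(dim=-1) + log_det).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```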
New physics at the LHC may be hiding in non-standard final state configurations, particularly in cases where stringent particle identification could obscure the signal. Here we present a search for resonances in the three-photon final state where two photons are highly merged. We target the case where a heavy vector-like particle decays to a photon and a new spin-0 particle $\phi$, where $\phi$ is light and decays to two photons, resulting in a merged diphoton signature. To classify and obtain the relevant kinematic properties of these merged photons, we use a convolutional neural network that takes individual crystal deposits in the CMS electromagnetic calorimeter as input. This method performs remarkably well for these highly merged decays where standard particle identification fails.
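A minimal sketch of such a classifier is shown below: a small convolutional network over a window of crystal energy deposits; the input size, depth, and two-class head are illustrative assumptions, not the analysis architecture.

```python
import torch
import torch.nn as nn

# Illustrative merged-photon classifier: a small CNN over calorimeter
# "images" of per-crystal energy deposits, distinguishing merged-diphoton
# showers from single-photon showers.

class MergedPhotonCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: merged diphoton vs single photon
        )

    def forward(self, x):
        return self.head(self.features(x))

# 32x32 crystal windows centered on the photon candidate (illustrative size)
images = torch.rand(64, 1, 32, 32)
logits = MergedPhotonCNN()(images)
```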
Normalizing flows have proven to be state-of-the-art for fast calorimeter simulation. With access to the likelihood, these flow-based fast calorimeter surrogate models can be used for other tasks such as unsupervised anomaly detection and incident energy regression without any additional training costs.
The invariant mass of particle resonances is a key analysis variable for LHC physics. For analyses with di-tau final states, the direct calculation of the invariant mass is impossible because tau decays always include neutrinos, which escape detection in LHC detectors. The Missing Mass Calculator (MMC) is an algorithm used by the ATLAS Experiment to calculate the invariant mass of resonances decaying to two tau leptons. The MMC solves the system of kinematic equations involving the tau visible decay products by minimizing a likelihood function, making use of the tau mass constraint and probability distributions from Z → ττ decays. Because the algorithm uses Z decays, it is most accurate in the Z mass range. This presentation will show that for high-mass BSM resonances the MMC mass increasingly deviates from the true value, warranting further studies and a search for solutions to this discrepancy. We will show studies of machine learning approaches to di-tau mass reconstruction, aimed at providing improved accuracy for high-mass resonances. The specific use case is the search for X → SH → bbττ, sensitive to the two-real-scalar-singlet extension of the Standard Model (TRSM), in which the Standard Model scalar sector is extended by two scalar singlets, labeled X and S.
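The sketch below illustrates the general MMC-style strategy under simplifying assumptions (a toy penalty in place of the full likelihood, hypothetical kinematic values): scan neutrino momenta consistent with the tau mass constraints and the measured MET, then form the di-tau invariant mass.

```python
import numpy as np
from scipy.optimize import minimize

# Toy missing-mass-style scan (illustrative, not the ATLAS MMC): for fixed
# visible tau decay products and measured MET, find neutrino momenta that
# satisfy the tau mass constraints, then compute the di-tau mass.

M_TAU = 1.777  # GeV

def four_vec(pt, eta, phi, m):
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    e = np.sqrt(px**2 + py**2 + pz**2 + m**2)
    return np.array([e, px, py, pz])

def inv_mass(p):
    return np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))

vis1 = four_vec(45.0, 0.3, 0.1, 0.8)   # visible products of tau 1 (toy values)
vis2 = four_vec(38.0, -0.5, 2.9, 0.9)  # visible products of tau 2 (toy values)
met = np.array([20.0, -15.0])          # measured (MEx, MEy)

def penalty(params):
    nu1 = four_vec(params[0], params[1], params[2], 0.0)
    nu2 = four_vec(params[3], params[4], params[5], 0.0)
    # tau mass constraints plus MET consistency (toy likelihood surrogate)
    pen = (inv_mass(vis1 + nu1) - M_TAU)**2 + (inv_mass(vis2 + nu2) - M_TAU)**2
    pen += np.sum((nu1[1:3] + nu2[1:3] - met)**2) / 25.0
    return pen

res = minimize(penalty, x0=[10, 0.3, 0.1, 10, -0.5, 2.9], method="Nelder-Mead")
nu1 = four_vec(res.x[0], res.x[1], res.x[2], 0.0)
nu2 = four_vec(res.x[3], res.x[4], res.x[5], 0.0)
print(f"reconstructed di-tau mass: {inv_mass(vis1 + vis2 + nu1 + nu2):.1f} GeV")
```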
In a search for an exotic Higgs boson decay, a novel signature with highly collimated photons is studied, where the Higgs boson decays into hypothetical light pseudoscalar particles in the form $H \to AA$. In the highly boosted scenario, the two collimated photons from each A decay are reconstructed as a single photon object, an artificially merged photon shower. A deep-learning-based tagger is developed to identify this merged photon signature, utilizing images of the electromagnetic shower shape and track structure. In this talk, we present the merged photon tagger, which uses low-level detector information, and its excellent performance across different boosts of the A, compared with the standard CMS photon identification algorithm.
Quantum sensing employs a rich arsenal of techniques, such as squeezing, photon counting, and entanglement assistance, to achieve unprecedented levels of sensitivity in various tasks, with wide-reaching applications in fields of fundamental physics. For instance, squeezing has been utilized to enhance the sensitivity of gravitational wave detection and expedite the hunt for exotic dark matter candidates. In this talk, I will dive into the various quantum strategies aimed at accelerating the search for weak signals and explore initial approaches to transcend Standard Quantum Limits en route to achieving the ultimate limits of measurement sensitivity set by quantum mechanics. Along the way, I will underscore the important roles that distributed quantum sensing and entanglement can have in pushing the limits of our sensing capabilities.
Superconducting transmon qubits play a pivotal role in contemporary superconducting quantum computing systems. These nonlinear devices are typically composed of a Josephson junction shunted by a large capacitor, and the two lowest energy eigenstates serve as the qubit. When a qubit is placed in its excited state, it decays to its ground state on a relaxation timescale $T_1$. However, recent studies have suggested that cosmic rays or ambient gamma radiation could significantly degrade the relaxation times of transmon qubits, leading to detrimental correlated errors that impede quantum error correction processes [1,2]. In this study, we explore the potential of utilizing transmon qubits as radiation detectors by investigating the impact of radioactivity on transmons fabricated at the Superconducting Quantum Materials and Systems (SQMS) center, Fermilab. We develop a fast detection protocol based on rapid projective measurements and active reset to perform detection with millisecond time resolution. We utilize the underground facility at INFN Gran Sasso and controlled radioactive sources (such as thorium) to validate our scheme. Additionally, we investigate the possibility of enhancing detection efficiency by evaluating transmons fabricated with various superconducting materials and improved signal analysis schemes.
[1] Matt McEwen et al., Nature Physics 18, 107–111 (2022)
[2] C.D. Wilen et al., Nature 594, 369–373 (2021)
*The work was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems (SQMS) Center under the contract No. DE-AC02-07CH11359, by the Italian Ministry of Foreign Affairs and International Cooperation, grant number US23GR09, and by the Italian Ministry of Research under the PRIN Contract No. 2020h5l338.
The QCD axion, originally motivated as a solution to the strong CP problem, is a compelling candidate for dark matter, and accordingly, the last decade has seen an explosion in new ideas to search for axions. Simultaneously, we have witnessed a revolution in quantum sensing and metrology, with the emergence of platforms enabling ever-greater measurement sensitivity. These platforms are now being brought to bear on axion dark matter searches, with the aim of a comprehensive probe of the phase space for QCD axions. In this talk, I briefly overview efforts to apply techniques evading the Standard Quantum Limit of amplification, such as squeezing, photon counting, and backaction evasion, to axion dark matter searches. I then focus on techniques well-suited to resonant electromagnetic probes of pre-inflationary sub-μeV axions, for which photon counting of the thermal state in the resonator is not advantageous relative to quantum-limited amplification. I describe, in particular, the RF Quantum Upconverter (RQU), a superconducting lithographed device containing a Josephson junction interferometer that upconverts kHz-MHz electromagnetic signals (corresponding to the sub-μeV mass window) to GHz signals. By leveraging mature microwave techniques as well as adapting sensitive measurement schemes utilized in cavity optomechanical systems (e.g., LIGO), the RQU can evade the Standard Quantum Limit. Recent experimental results for the RQU are discussed. I describe plans to integrate the RQU into DMRadio, an experimental campaign for sub-μeV dark matter with the ultimate goal of probing GUT-scale axions, and the Princeton Axion Search, which will probe QCD axions in the 0.8-2 μeV mass range.
Recent advancements in quantum computing have introduced new opportunities alongside classical computing, offering unique capabilities that complement traditional methods. As quantum computers operate on fundamentally different principles from classical systems, there is a growing imperative to explore their distinct computational paradigms. In this context, our research aims to explore the potential applications of quantum machine learning in the field of high-energy physics. Specifically, we seek to assess the feasibility of employing supervised quantum machine learning for searches conducted at the Large Hadron Collider. Additionally, we aim to investigate the potential of generative quantum machine learning for simulating tasks relevant to high-energy physics. By leveraging quantum computing technologies, we aim to advance the capabilities of computational approaches in addressing complex challenges within the field of particle physics.
ProtoDUNE-SP was a large-scale prototype of the single-phase DUNE far detector, which took test beam data in Fall 2018. The beam consisted of positive pions, kaons, muons, and protons, and these data are being used to measure various hadron-Ar interaction cross sections. These measurements will provide important constraints for the nuclear ground state, final state interaction, and secondary interaction models of argon-based neutrino-oscillation and proton-decay experiments such as DUNE. This talk will focus on the measurement of the pion-argon inelastic interaction cross sections.
The SPS Heavy Ion and Neutrino Experiment (NA61/SHINE) is a fixed-target hadron spectrometer at CERN’s Super Proton Synchrotron. It has a dedicated program to measure hadron-nucleus interactions with the goal of constraining the accelerator-based neutrino flux, which mainly originates from the not precisely known primary and secondary hadron production. NA61/SHINE’s previous measurements of protons colliding on thin carbon targets and a replica T2K target have significantly reduced the flux uncertainty in the T2K experiment. This contribution will present the recent results and ongoing hadron production measurements in NA61/SHINE, the upcoming data-taking with a replica LBNF/DUNE target, as well as the plan after the Long Shutdown 3 of the accelerator complex at CERN.
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline oscillation experiment that, among its many physics goals, seeks to measure the charge-parity (CP) violating phase, $\delta_{\mathrm{CP}}$. Doing so requires precise knowledge of both the neutrino and antineutrino fluxes. DUNE will achieve this via the use of both near and far detection systems. The leading source of systematic uncertainty in predicting the DUNE flux comes from the production of hadrons, closely followed by the uncertainties associated with beam focusing effects.
The DUNE flux was simulated within a custom Geant4 framework in parallel with the Package to Predict the Flux (PPFX). The total systematic uncertainty associated with hadron production and beam focusing effects within DUNE's region of interest, [0.5, 8] GeV, was found to be on average 8-10% across all modes, detector locations, and neutrino species. Construction of the correlation matrix indicated that the systematic uncertainties are highly correlated, while the far-to-near flux ratio allows for the cancellation of many systematic effects, effectively reducing the total systematic uncertainties to the order of 1.5-5%.
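The cancellation in the far-to-near ratio can be illustrated with a short multi-universe toy (assumed numbers, not the DUNE flux inputs): a shift common to both detectors dominates each flux's uncertainty but largely divides out of the ratio.

```python
import numpy as np

# Multi-universe toy: throw correlated flux variations at near and far
# detectors, then compare fractional uncertainties of the near flux and
# of the far/near ratio.

rng = np.random.default_rng(0)
n_univ, n_bins = 1000, 20

nominal_near = np.full(n_bins, 1.0)
nominal_far = np.full(n_bins, 0.5)

# hadron-production-like throw: one shared shift per universe (8%),
# plus small uncorrelated bin-by-bin noise (1%)
shared = rng.normal(0.0, 0.08, size=(n_univ, 1))
near = nominal_near * (1 + shared + rng.normal(0, 0.01, (n_univ, n_bins)))
far = nominal_far * (1 + shared + rng.normal(0, 0.01, (n_univ, n_bins)))

ratio = far / near
frac_near = near.std(axis=0) / near.mean(axis=0)
frac_ratio = ratio.std(axis=0) / ratio.mean(axis=0)
print(f"near flux uncertainty:      {frac_near.mean():.1%}")  # ~8%
print(f"far/near ratio uncertainty: {frac_ratio.mean():.1%}")  # ~1.4%
```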
Identification of high-energy neutrino point sources by IceCube is exciting for particle phenomenology, as propagation of neutrinos over large distances allows us to test properties that are hard to access. However, beyond-Standard Model effects would often show up as distortions of the energy spectrum, which makes it difficult to distinguish new physics from uncertainties in the source modeling. In this talk, I will present ongoing work to determine how well a future dataset containing multiple point-source observations could simultaneously distinguish some of these effects, and how the analysis can account for this.
Charged leptons produced by high-energy and ultrahigh-energy neutrinos have a substantial probability of emitting prompt internal bremsstrahlung $\nu_\ell + N \rightarrow \ell + X + \gamma$. This can have important consequences for neutrino detection. We discuss observable consequences at high- and ultrahigh-energy neutrino telescopes and LHC's Forward Physics Facility. Logarithmic enhancements can be substantial (e.g., $\sim 20\%$) when either the charged lepton's energy, or the rest of the cascade, is measured. We comment on applications involving the inelasticity distribution, including measurements of the $\nu/\bar{\nu}$ flux ratio, throughgoing muons, and double-bang signatures for high-energy neutrino observation. Furthermore, for ultrahigh-energy neutrino observation, we find that final state radiation affects flavor measurements and decreases the energy of both Earth-emergent tau leptons and regenerated tau neutrinos. Finally, for LHC's Forward Physics Facility, we find that final state radiation will impact future extractions of strange quark parton distribution functions. Final state radiation should be included in future analyses at neutrino telescopes and the Forward Physics Facility.
We use publicly available data to perform a search for correlations of high-energy neutrino candidate events detected by IceCube and high-energy photons seen by the HAWC collaboration. Our search is focused on unveiling such correlations outside of the Galactic plane. This search is sensitive to correlations in the neutrino candidate and photon skymaps which would arise from a population of unidentified point sources.
The scenario of neutrino self-interactions is an interesting beyond-Standard-Model possibility that is difficult to test. High-energy neutrinos measured by the IceCube neutrino detector, having traveled long distances, present an opportunity to constrain the parameters governing neutrino self-interactions: the mediator mass and coupling constant. We have modeled neutrino production, propagation, and detection by IceCube to predict the detected flux of neutrinos with self-interactions at a given value of the coupling constant and mediator mass. Using this model we can perform a joint analysis of several neutrino sources (the TXS 0506+056 blazar and the NGC 1068 AGN) whose different inherent assumptions make the joint analysis beneficial. Prior works have only examined sources individually, so our study of data from multiple sources provides a statistically novel approach to this problem. We present our ongoing work on this analysis.
The ForwArd Search ExpeRiment (FASER) has been successfully acquiring data at the Large Hadron Collider (LHC) since the inception of Run 3 in 2022. FASER opened the window on the new subfield of collider neutrino physics by conducting the first direct detection of muon and electron neutrinos at the LHC. In this talk, we discuss the latest neutrino physics results from FASER. A review of the first neutrino results from the electronic detectors of FASER will be given, and the rest of the talk will focus on the first measurements of neutrino cross sections in the TeV-energy range with the FASERν sub-detector.
Proton-proton collisions at the LHC generate a high-intensity collimated beam of neutrinos in the forward direction, characterized by energies of up to several TeV. The recent observation of LHC neutrinos by FASERν and SND@LHC signals that this hitherto ignored particle beam is now available for scientific inquiry. Here we quantify the impact that neutrino deep-inelastic scattering (DIS) measurements at the LHC would have on the parton distributions (PDFs) of protons and heavy nuclei. We generate projections for DIS structure functions for FASERν and SND@LHC at Run III, as well as for the FASERν2, AdvSND, and FLArE experiments to be hosted at the proposed Forward Physics Facility (FPF) operating concurrently with the High-Luminosity LHC (HL-LHC). We determine that up to one million electron- and muon-neutrino DIS interactions within detector acceptance can be expected by the end of the HL-LHC, covering a kinematic region in x and Q2 overlapping with that of the Electron-Ion Collider. Including these DIS projections into global (n)PDF analyses reveals a significant reduction of PDF uncertainties, in particular for strangeness and the up and down valence PDFs. We show that LHC neutrino data enables improved theoretical predictions for core processes at the HL-LHC, such as Higgs and weak gauge boson production. Our analysis demonstrates that exploiting the LHC neutrino beam effectively provides CERN with a “Neutrino-Ion Collider” without requiring modifications in its accelerator infrastructure.
A search for a massive resonance $X$ decaying to a pair of spin-0 bosons $\phi$ that themselves decay to pairs of photons ($\gamma$) is presented. The search is based on CERN LHC proton-proton collision data at $\sqrt{s} = 13$ TeV, collected with the CMS detector, corresponding to an integrated luminosity of 138 $\textrm{fb}^{-1}$. The analysis considers masses $m_X$ between 0.3 and 3 TeV, and is restricted to values of $m_\phi$ for which the ratio $m_\phi/m_X$ is between 0.5 and 2.5%. In these ranges, the two photons from each $\phi$ boson are expected to spatially overlap significantly in the detector. Two neural networks are created, based on computer vision techniques, to first classify events containing such merged diphotons and then to reconstruct the mass of the diphoton object. The mass spectra are analyzed for the presence of new resonances, and are found to be consistent with standard model expectations. Model-specific limits are set at 95% confidence level on the production cross section for $X \to \phi\phi \to (\gamma\gamma)(\gamma\gamma)$ as a function of the resonances' masses, where both the $X \to \phi\phi$ and $\phi \to \gamma\gamma$ branching fractions are assumed to be 100%. Observed (expected) limits range from 0.03 - 1.06 fb (0.03 - 0.79 fb) for the masses considered, representing the most sensitive search of its kind at the LHC.
We present the first search for "soft unclustered energy patterns" (SUEPs) described by an isotropic production of many soft particles. SUEPs are a potential signature of some Hidden Valley models invoked to explain dark matter, and which can be produced at the LHC via a heavy scalar mediator. It was previously expected that such events would be rejected by conventional collider triggers and reconstruction; however, using custom data samples augmented by storing track-level information, and by targeting events where the scalar mediator recoils against initial-state radiation, this search is uniquely able to reconstruct large track clusters that are associated with the SUEP signature. The large QCD background is estimated utilizing a novel data-driven background prediction method which is shown to accurately describe data. This search achieves sensitivity across a broad range of mediator masses for the first time, where the track multiplicity is high.
We present a search for low-mass narrow quark-antiquark resonances. This search uses data from the LHC in proton-proton collisions at a center-of-mass energy of 13 TeV, collected by the CMS detector in Run 2 and corresponding to an integrated luminosity of 136 fb$^{-1}$. The analysis strategy makes use of an initial-state photon recoiling against the narrow resonance. The resulting large transverse momentum (pT) of the resonance leads to its decay products being collimated into a single jet with internal two-pronged substructure. The new physics signal is searched for as a narrowly peaking excess above the standard model backgrounds in the jet mass spectrum. During the 2018 data-taking period, a trigger with a lower photon pT threshold was implemented and is used in this analysis, allowing us to better probe the lower mass region. The variable N2DDT, which is decorrelated from the jet's mass and pT, is used to identify jets with two-pronged substructure. An alternative method of selecting jets with two-pronged substructure using a machine learning algorithm called ParticleNet is also in development. A mostly data-driven method is used to determine the backgrounds in the analysis. A leptophobic Z' decaying to quarks is the benchmark model, and the analysis is further motivated by a simplified model of dark matter involving a mediator particle interacting between quarks and dark matter.
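A schematic of the DDT-style decorrelation underlying such a variable, under illustrative assumptions (toy distributions, a 5% working point): subtract from N2 its background quantile measured in bins of $\rho = 2\ln(m/p_T)$ and $p_T$, so that a fixed cut keeps a mass-flat background efficiency.

```python
import numpy as np

# Toy DDT-style decorrelation: map the 5th-percentile contour of N2 in
# (rho, pT) bins, then subtract it so the cut n2_ddt < 0 keeps ~5% of
# background in every bin, independent of jet mass.

rng = np.random.default_rng(1)
n2 = rng.normal(0.25, 0.05, 100_000)      # toy substructure variable
rho = rng.uniform(-6.0, -2.0, 100_000)    # rho = 2 ln(m / pT), toy values
pt = rng.uniform(450.0, 1200.0, 100_000)  # GeV, toy values

rho_bins = np.linspace(-6, -2, 15)
pt_bins = np.linspace(450, 1200, 8)
quantile_map = np.zeros((len(rho_bins) - 1, len(pt_bins) - 1))

for i in range(len(rho_bins) - 1):
    for j in range(len(pt_bins) - 1):
        sel = ((rho >= rho_bins[i]) & (rho < rho_bins[i + 1])
               & (pt >= pt_bins[j]) & (pt < pt_bins[j + 1]))
        if sel.any():
            quantile_map[i, j] = np.quantile(n2[sel], 0.05)  # 5% efficiency

i_idx = np.clip(np.digitize(rho, rho_bins) - 1, 0, len(rho_bins) - 2)
j_idx = np.clip(np.digitize(pt, pt_bins) - 1, 0, len(pt_bins) - 2)
n2_ddt = n2 - quantile_map[i_idx, j_idx]  # selection: n2_ddt < 0
```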
A search for Drell-Yan production of leptoquarks is performed using proton-proton collision data collected at √s = 13 TeV with the CMS detector at the LHC, CERN, using the full Run 2 dataset. The data correspond to an integrated luminosity of approximately 137 fb−1. The search covers scalar and vector leptoquarks that couple up and down quarks to electrons and muons. Dielectron and dimuon final states are considered, with dilepton invariant masses above 500 GeV. Since the Drell-Yan production of leptoquarks is non-resonant, we fit the dilepton angular distribution to templates built from reweighted Monte Carlo samples. This allows us to probe higher leptoquark masses than previous searches. Exclusion limits at 95% confidence level on leptoquark Yukawa couplings are presented for leptoquark masses up to 5 TeV.
Long-lived, charged particles are included in many beyond the standard model theories. It is possible to observe massive charged particles through unusual signatures within the CMS detector. We use data recorded during 2017-18 operations to search for signals involving anomalous ionization in the silicon tracker. Two new, enhanced methods are presented. The results are interpreted within several models including those with staus, stops, gluinos, and multiply charged particles as well as a new model with decays from a Z' boson.
Long-lived particles (LLPs) arise in many promising theories beyond the Standard Model. At the LHC, LLPs typically decay away from their initial production vertex, producing displaced and possibly delayed final state objects that give rise to non-conventional detector signatures. The development of custom reconstruction algorithms and dedicated background estimation strategies significantly enhances sensitivity to various LLP topologies at CMS. We present recent results of tracking- and calorimeter-based searches for LLPs and other non-conventional signatures obtained using data recorded by the CMS experiment during Run 2 and Run 3 of the LHC.
Since the landmark discovery in 2012 of the h(125) Higgs boson at the LHC, it should be a no-brainer to pursue the existence of a second Higgs doublet. We advocate, however, the general 2HDM (g2HDM) that possesses a second set of Yukawa couplings. The extra top Yukawa coupling $\rho_{tt}$ drives electroweak baryogenesis (EWBG), i.e. generating the Baryon Asymmetry of the Universe (B.A.U.) with physics at the electroweak scale, hence relevant at the LHC! At the same time, the extra electron Yukawa coupling $\rho_{ee}$ keeps the balance towards the stringent ACME2018 and JILA2023 bounds on the electron electric dipole moment (eEDM), spectacularly via the fermion mass and mixing hierarchies observed in the Standard Model; discovery could be imminent (possibly followed by an nEDM echo)! EWBG suggests that the exotic Higgs bosons H, A, $H^+$ in g2HDM ought to be sub-TeV in mass, with effects naturally well-hidden so far by 1) flavor structure, i.e. the aforementioned fermion mass-mixing hierarchies; and 2) the emergent alignment phenomenon (i.e. small $h$-$H$ mixing) that suppresses processes such as $t \to ch$, with equivalent best limits from CMS and ATLAS. It is then natural to pursue direct search modes such as $cg \to tH/A \to tt\bar{c}$ with extra top Yukawa couplings $\rho_{tc}$ and $\rho_{tt}$ that are not alignment-suppressed; the results were published by CMS in March 2024, preceded by ATLAS. CMS would now pursue $cg \to bH^+ \to bt\bar{b}$, as well as continue to study $t \to ch$ and $tt\bar{c}$ by adding Run 3 data, all with discovery potential. CMS also continues to pursue $B_{s,d} \to \mu\mu$, where the result published in 2023 has changed the world view.
Belle II would probe g2HDM with precision flavor measurements such as $B \to \mu\nu, \tau\nu$; a ratio deviating from 0.0045 would provide a smoking gun. The $\tau \to \mu\gamma$ process would need a large dataset.
With H, A, $H^+$ expected at 300-600 GeV and hence ripe for LHC search, we pursue lattice simulation studies of the first-order electroweak phase transition, a prerequisite for EWBG in the early Universe and the main motivation for our program. We also investigate the Landau pole phenomenon of the g2HDM Higgs sector for a new strong-interaction scale, which could prove crucial for the future of collider physics.
Thus, our Decadal Mission:
"Find the extra H, A, $H^+$ bosons; Crack the Flavor code; Solve the Mysterious B.A.U.!"
Baryon number violation is our most sensitive probe of physics beyond the Standard Model, especially through the study of nucleon decays. Angular momentum conservation requires a lepton in the final state of such decays, kinematically restricted to electrons, muons, or neutrinos. We show that operators involving the tauon, which is at first sight too heavy to play a role in nucleon decays, still lead to clean nucleon decay channels with tau neutrinos. While many of them are already constrained by existing two-body searches such as $p\to \pi^+\nu$, other operators induce many-body decays such as $p \to \eta \pi^{+} \bar\nu_\tau$ and $n\to K^+ \pi^-\nu_\tau$ that have never been searched for.
The fermion mass hierarchy of the Standard Model (SM) spans many orders of magnitude and begs for further explanation. The Froggatt-Nielsen (FN) mechanism is a popular solution, introducing an additional $U(1)$ symmetry under which the SM fermions are charged. We study the general class of FN solutions to the lepton flavor puzzle, including multiple scenarios for neutrino masses. In this talk, we present preliminary results for the phenomenologically viable set of leptonic FN solutions. We calculate the magnitude of the resulting flavor-changing observables for both low-energy decays and collider signatures, with emphasis on the observational potential of a future muon collider. We also discuss the potential for distinguishing between different FN scenarios based on the patterns observed in flavor-violating observables.
Charged lepton flavor violation arises in the Standard Model Effective Field Theory at mass dimension six. The operators that induce neutrinoless muon and tauon decays are among the best constrained and are sensitive to new-physics scales up to $10^7$ GeV. An entirely different class of lepton-flavor-violating operators violates lepton flavors by two units rather than one and does not lead to such clean signatures. Even the well-known case of muonium-antimuonium conversion that falls into this category is only sensitive to two out of the three $\Delta L_\mu = -\Delta L_e = 2$ dimension-six operators. We derive constraints on many of these operators from lepton flavor universality and show how to make further progress with future searches at Belle II and future experiments such as Z factories or muon colliders.
Non-abelian symmetries are strong contenders as solutions to the flavour puzzle, i.e. explaining the mass and mixing matrices of the SM fermions. The Universal Texture Zero (UTZ) model charges all quark and lepton families as triplets under the $\Delta(27)$ symmetry group, while simultaneously exploiting the seesaw mechanism to generate light neutrino masses. Together with BSM triplet scalars, called flavons, the fermions generate a Yukawa structure that agrees with current measurements and makes predictions for poorly constrained leptonic CP-violation parameters and other observables such as $0\nu\beta\beta$ rates. In this talk, we present the inclusion of a non-renormalizable potential in the flavon sector and illustrate how the additional dimension-six scalar potential modifies the vacuum alignment. We investigate the possible symmetry contractions of terms of arbitrary dimension using the Hilbert-series-based DECO algorithm and classify the terms that could contribute non-trivial changes to the vacuum alignment and, hence, to the flavour measurements. We are also looking into the possibility of classifying a general number of flavons using neural networks. The perturbation to the vacuum alignment from the non-renormalizable scalar potential can affect the effective couplings in the Yukawa sector after family symmetry breaking, and we further outline the possible phenomenological effects in the neutrino sector.
I will discuss effective field theory tools and model-building efforts focused on describing signals of charged lepton flavor violation that can be probed at current and future muon-to-electron conversion experiments.
The “Hubble tension” refers to a disagreement between the present expansion rate of the universe as measured directly and the rate projected by applying our current model (“Lambda Cold Dark Matter”, or Lambda-CDM) to early-universe measurements; the two differ by more than five standard deviations. We describe the model, in particular the meaning of Lambda, whose equation-of-state parameter is w = -1. We find that if instead w = -1.73, the projected expansion rate comes out right; however, any w < -1 causes the universe to end in a finite time. We present the mathematics and some conclusions.
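For orientation, the projection in question follows from the Friedmann equation for a flat universe with a constant dark energy equation of state $w$ (radiation neglected at late times):
\[
H^2(z) = H_0^2\left[\Omega_m (1+z)^3 + \Omega_\Lambda (1+z)^{3(1+w)}\right],
\]
so $w = -1$ gives a constant dark-energy term, while $w < -1$ makes that term grow with time, raising the projected late-time expansion rate; a constant $w < -1$ also drives the scale factor to diverge in finite time (a "big rip").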
Cosmological observables are particularly sensitive to key ratios of energy densities and rates, both today and at earlier epochs of the Universe. Well-known examples include the photon-to-baryon and the matter-to-radiation ratios. Equally important, though less publicized, are the ratios of pressure-supported to pressureless matter and of the Thomson scattering rate to the Hubble rate around recombination, both of which observations tightly constrain. Preserving these key ratios in theories beyond the $\Lambda$ Cold-Dark-Matter ($\Lambda$CDM) model ensures broad concordance with a large swath of datasets when addressing cosmological tensions. We demonstrate that a mirror dark sector, reflecting a partial $\mathbb{Z}_2$ symmetry with the Standard Model, in conjunction with percent-level changes to the visible fine-structure constant and electron mass, which represent a $\textit{phenomenological}$ change to the Thomson scattering rate, maintains essential cosmological ratios. Incorporating this ratio-preserving approach into a cosmological framework significantly improves agreement with observational data ($\Delta\chi^2=-35.72$) and completely eliminates the Hubble tension, with a cosmologically inferred $H_0 = 73.80 \pm 1.02$ km/s/Mpc when including the S$H_0$ES calibration in our analysis. While our approach is certainly nonminimal, it emphasizes the importance of keeping key ratios constant when exploring models beyond $\Lambda$CDM.
With the growing precision of cosmological measurements, tensions in the determination of cosmological parameters have arisen that might be the first manifestations of physics beyond $\Lambda$CDM. We propose a new class of interacting dark sector models that lead to a qualitatively distinct cosmological feature, dark acoustic oscillations, and can potentially address the two most important tensions in cosmological data, the H0 and S8 tensions, simultaneously. The main ingredients in this class of models are self-interacting dark radiation and the dark acoustic oscillations it undergoes, induced by strong interactions with a fraction of the dark matter. I will also present the latest results from applying this model across various combinations of cosmological data, illustrating the improvement it provides over $\Lambda$CDM.
Phase transitions provide a useful mechanism to produce both electroweak baryogenesis (EWBG) and gravitational waves (GW). We propose a left-right symmetric model with two Higgs doublets, a left-handed doublet $H_L$ and a right-handed doublet $H_R$, and a scalar singlet $\sigma$ under a $H_L \leftrightarrow H_R$ and $\sigma \leftrightarrow -\sigma$ symmetry as discussed by Gu. We utilize a multistep phase transition to produce EWBG and GW. At the first transition, $\sigma$ acquires a vev which results in GW being produced. At the second transition at a lower temperature, $H_R$ acquires a vev providing $W_R$ with a mass. This also produces a baryon asymmetry in the right-handed sector, which eventually is transferred to the left-handed sector. Finally, at an even lower temperature, the electroweak phase transition occurs and $H_L$ acquires a vev.
We propose a novel framework where the baryon asymmetry of the universe arises from the forbidden decay of dark matter (DM), enabled by finite-temperature effects in the vicinity of a first-order phase transition (FOPT). To implement this cogenesis mechanism, we consider the extension of the standard model by one scalar doublet $\eta$ and three right-handed neutrinos (RHN), all odd under an unbroken $Z_2$ symmetry, popularly referred to as the scotogenic model of radiative neutrino mass. While the lightest RHN $N_1$ is the DM candidate and is stable at zero temperature, a temperature window arises prior to the nucleation temperature of the FOPT assisted by $\eta$ in which $N_1$ can decay into $\eta$ and leptons, generating a non-zero lepton asymmetry that is subsequently converted into a baryon asymmetry by sphalerons. The requirement of successful cogenesis together with a first-order electroweak phase transition not only keeps the mass spectrum of the new particles in the sub-TeV ballpark, within reach of collider experiments, but also leads to an observable stochastic gravitational wave spectrum which can be discovered in planned experiments like LISA.
We calculate the effects of a light, very weakly-coupled boson $X$ arising from a spontaneously broken $U(1)_{B-L}$ symmetry on $\Delta N_{\rm eff}$ as measured by the CMB and $Y_p$ from BBN. Our focus is the mass range $1 \; {\rm eV} \, \lesssim m_X \lesssim 100 \; {\rm MeV}$. We find $U(1)_{B-L}$ is more strongly constrained by $\Delta N_{\rm eff}$ than previously considered. While some of the parameter space has complementary constraints from stellar cooling, supernova emission, and terrestrial experiments, we find future CMB observatories including Simons Observatory and CMB-S4 can access regions of mass and coupling space not probed by any other method.
A larger Planck scale during an early epoch leads to a smaller Hubble rate, which sets the efficiency of primordial processes. The resulting slower cosmic tempo can accommodate alternative cosmological histories. We consider this possibility in the context of extra-dimensional theories, which can provide a natural setting for the scenario. If the fundamental scale of the theory is not too far above the weak scale, to alleviate the ``hierarchy problem," cosmological constraints imply that thermal relic dark matter would be at the GeV scale, which may be disfavored by cosmic microwave background measurements. Such dark matter becomes viable again in our proposal, due to the smaller requisite annihilation cross section, further motivating ongoing low-energy accelerator-based searches. Quantum gravity signatures associated with the extra-dimensional setting can be probed at high energy colliders -- up to $\sim 13$ TeV at the LHC or $\sim 100$ TeV at FCC-hh. Searches for missing energy signals of dark sector states, with masses $\gtrsim 10$ GeV, can be pursued at a future circular lepton collider.
We describe a simple dark sector structure which, if present, has implications for the direct detection of dark matter (DM): the Dark Sink. A Dark Sink transports energy density from the DM into light dark-sector states that do not appreciably contribute to the DM density. As an example, we consider a light, neutral fermion $\psi$ which interacts solely with DM $\chi$ via the exchange of a heavy scalar $\Phi$. We illustrate the impact of a Dark Sink by adding one to a DM freeze-in model in which $\chi$ couples to a light dark photon $\gamma'$ which kinetically mixes with the Standard Model (SM) photon. This freeze-in model (absent the sink) is itself a benchmark for ongoing experiments. In some cases, the literature for this benchmark has contained errors; we correct the predictions and provide them as a public code. We then analyze how the Dark Sink modifies this benchmark, solving coupled Boltzmann equations for the dark-sector energy density and DM yield. We check the contribution of the Dark Sink $\psi$'s to dark radiation; consistency with existing data limits the maximum attainable cross section. For DM with a mass between 1 MeV and $\mathcal{O}(10)$ GeV, adding the Dark Sink can increase predictions for the direct detection cross section all the way up to the current limits.
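The structure of such a calculation can be illustrated with a toy pair of coupled Boltzmann equations, one for the DM yield and one tracking the energy transferred into the sink. The rate functions and all numbers below are invented for illustration; they are not the paper's collision terms, which set the actual relic and direct detection predictions.

```python
# Schematic only: toy coupled freeze-in equations for a DM yield Y and a
# sink-energy tracker r, evolved in ln(x) with x = m_chi / T. The rate
# functions are hypothetical placeholders, not the paper's collision terms.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(lnx, u):
    x = np.exp(lnx)
    Y, r = u
    prod = 1e-4 * x**2 * np.exp(-x)   # schematic freeze-in source from the SM bath
    drain = 5e-2 * Y * np.exp(-x)     # schematic chi chi -> psi psi drain into the sink
    return [prod - drain, drain]

sol = solve_ivp(rhs, [np.log(0.1), np.log(50.0)], [0.0, 0.0], rtol=1e-8)
Y_final, r_final = sol.y[:, -1]
print(f"toy relic yield ~ {Y_final:.3e}, toy sink energy ~ {r_final:.3e}")
```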
Collisions between large fermionic dark matter bound states may produce characteristic photon bursts that are highly intense but rare in occurrence and short in duration. We discuss strategies and prospects for discovering this less-explored class of indirect detection signals with nontrivial temporal structure. We also provide a concrete dark matter model that yields burst-like gamma-ray signals.
Axion-like particles (ALPs) offer a pathway for dark matter (DM) to interact with the Standard Model (SM) through a pseudoscalar mediator, addressing the absence of signals in direct detection experiments. This makes ALPs a compelling candidate for connecting DM to the SM. Our model assumes a Dirac fermion DM particle that couples through an ALP. The freeze-out mechanism suggests that the ALP effective field theory (EFT) may not suffice, motivating us to explore a KSVZ-like UV completion. We extend the ALP effective theory by considering interactions with scalar and pseudoscalar particles, including couplings to various SM vector bosons. Our calculations reveal that these interactions may be more important than previously anticipated. Our study will shed light on where the correct relic density can arise, given direct bounds on the DM and the ALP, providing insights into the UV completion of the model.
A QCD axion with a decay constant below $ 10 ^{ 11} ~{\rm GeV} $ is a strongly-motivated extension to the Standard Model, though its relic abundance from the misalignment mechanism or decay of cosmic defects is insufficient to explain the origin of dark matter. Nevertheless, such an axion may still play an important role in setting the dark matter density if it mediates a force between the SM and the dark sector. In this work, we explore QCD axion-mediated freeze-out and freeze-in scenarios, finding that the axion can play a critical role for setting the dark matter density. Assuming the axion solves the strong CP problem makes this framework highly predictive, and we comment on experimental targets.
We present calculations of higher-order QCD corrections for the production of a heavy charged-Higgs pair ($H^+ H^-$) in the two-Higgs-doublet model at LHC energies. We calculate the NNLO soft-plus-virtual QCD corrections and the N$^3$LO soft-gluon corrections to the total and differential cross sections in single-particle-inclusive kinematics.
This talk discusses a new method to overcome common limitations in data-driven background predictions by validating the background model with synthetic data samples obtained using hemisphere mixing. These synthetic data samples allow for the validation of the extrapolation of the background model to the relevant signal region and avoid the problem of low statistical power in the most signal-like phase space. This technique also provides a way to determine the expected variance of the background prediction, resulting from the finite size of the data sample used to fit the model.
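As a cartoon of the hemisphere-mixing construction: each event is split into two hemispheres, a library of hemisphere kinematic summaries is built, and a synthetic event pairs each hemisphere with its closest match taken from a different event. The summary variables, matching metric, and sample sizes below are hypothetical stand-ins, not the analysis's actual choices.

```python
# Minimal illustration of hemisphere mixing (schematic; real analyses define
# hemispheres with the thrust axis and match on several kinematic variables).
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical hemisphere summaries: columns = (mass, pT, n_jets)
hemis = rng.normal([60., 150., 3.], [15., 40., 1.], size=(10000, 3))
event_id = np.repeat(np.arange(5000), 2)   # two hemispheres per event

def nearest_other_event(i):
    """Index of the most kinematically similar hemisphere from another event."""
    d2 = ((hemis - hemis[i]) ** 2).sum(axis=1)
    d2[event_id == event_id[i]] = np.inf   # never re-use the same event
    return int(np.argmin(d2))

# A synthetic event = original hemisphere + best-matching hemisphere elsewhere.
synthetic = [(i, nearest_other_event(i)) for i in range(0, 100, 2)]
print(synthetic[:5])
```

Because many distinct pairings are possible, an ensemble of such synthetic samples also yields the spread (variance) of the background prediction mentioned above.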
The results of a search for Higgs boson pair (HH) production in the decay channel to two bottom quarks and two W bosons using CMS data will be presented. The search is based on proton-proton collision data recorded at √s = 13 TeV center-of-mass energy during the period 2016 to 2018, corresponding to an integrated luminosity of 138 fb−1, and includes both resonant and non-resonant production as well as single-lepton and double-lepton channels. The Run 2 results show no excess in the resonant channel, and in the non-resonant channel the upper limit on the cross section times branching ratio is 14-18 times the standard model prediction. In addition to the Run 2 results, this talk will discuss expected improvements for the Run 3 analysis, specifically improvements to the Heavy Mass Estimator and its inclusion in the single-lepton channel.
The simplest extension of the SM is the addition of a real singlet scalar S, which can give rise to double Higgs boson production if the new singlet is sufficiently heavy. New benchmark points are found by maximizing the production rate, allowing comparison with experimental results as these searches are carried out. The maximum values are shown for different values of the mixing angle and of the mass of the new eigenstate.
A search is presented for pair production of higgsinos in scenarios with gauge-mediated supersymmetry breaking. Each higgsino is assumed to decay into a Higgs boson and a nearly-massless gravitino. The search targets the $b\bar{b}$ decay channel of the Higgs bosons, leading to a reconstructed final state with at least three energetic $b$-jets and missing transverse momentum. Two complementary analysis channels are used to target higgsino masses below and above 250 GeV. The low (high) mass channel uses 126 (139) fb$^{-1}$ of $pp$ collision data collected at $\sqrt{s}$=13 TeV by the ATLAS detector during Run 2 of the Large Hadron Collider, extending previous ATLAS results with 24.3 (36.1) fb$^{-1}$. No significant excess above the Standard Model prediction is observed. At 95% confidence level, higgsino masses below 940 GeV are excluded. Exclusion limits as a function of the higgsino decay branching ratio to a $Z$ or a Higgs boson are also presented.
Neutrino physics is advancing into a precision era with the construction of new experiments, particularly in the few GeV energy range. Within this energy range, neutrinos exhibit diverse interactions with nucleons and nuclei. In this talk I will delve in particular into neutrino–nucleus quasi-elastic cross sections, taking into account both standard and, for the first time, non-standard interactions, all within the framework of effective field theory (EFT). The main uncertainties in these cross sections stem from uncertainties in the nucleon-level form factors, and from the approximations necessary to solve the nuclear many-body problem. I will explain how these uncertainties influence the potential of neutrino experiments to probe new physics introduced by left-handed, right-handed, scalar, pseudoscalar, and tensor interactions. For some of these interactions the cross section is enhanced, making long-baseline experiments an excellent place to search for them.
MicroBooNE is a Liquid Argon Time Projection Chamber (LArTPC) able to image neutrino interactions with excellent spatial resolution, enabling the identification of complex final states resulting from neutrino-nucleus interactions. MicroBooNE currently possesses the world's largest neutrino-argon scattering data set, with a number of published cross section measurements and more than thirty ongoing analyses studying a wide variety of interaction modes. This talk provides an overview of MicroBooNE's neutrino cross-section physics program, focusing on the latest results.
The study of neutrino-nucleus scattering processes is important for the success of a new generation of neutrino experiments such as DUNE and T2K. Quasielastic neutrino-nucleus scattering, which yields a final state consisting of a nucleon and charged lepton, makes up a large part of the total neutrino cross-section in neutrino experiments. A significant source of uncertainty in the cross-section comes from limitations in our knowledge of nuclear effects in the scattering process.
The observations of short-range correlated proton-neutron pairs in exclusive electron scattering experiments led to the proposal of the Correlated Fermi Gas nuclear model, characterized by a depleted Fermi gas region and a correlated high-momentum tail. We present an analytic implementation of this model for electron-nucleus and neutrino-nucleus quasi-elastic scattering, and we compare separately the effects of nuclear models and of electromagnetic and axial form factors on electron and neutrino scattering cross-section data.
NOvA, a long-baseline neutrino oscillation experiment, is primarily designed to measure muon (anti)neutrino disappearance and electron (anti)neutrino appearance. It does so using two functionally identical liquid scintillator detectors separated by 810 km, positioned off-axis in the Fermilab NuMI beam, a narrow-band beam centered around 2 GeV. Energetic neutral pions, originating from Δ-resonance production, deep-inelastic interactions, or final-state interactions, pose a significant challenge to the measurement of electron (anti)neutrino appearance, since photons from neutral pion decay can be misidentified as electrons or positrons. Leveraging high-statistics antineutrino-mode data from the near detector, we perform a measurement of the differential cross section for muon antineutrino charged-current neutral pion production. In this talk, we will present a detailed analysis of our approach and findings.
Neutrino-nucleus cross section measurements are needed to improve interaction modeling to enable upcoming precision oscillation measurements and searches for physics beyond the standard model. There are two methods for extracting cross sections, which rely on using either the real or nominal flux prediction for the measurement. We examine the different challenges faced by these methods, and how they must be treated when comparing to a theoretical prediction. Furthermore, the necessity for model validation in both procedures is addressed, and differences between “traditional” fake-data based validation and data-driven validation are discussed. Data-driven model validation leverages goodness-of-fit tests enhanced by the conditional constraint procedure. This procedure aims to validate a model for a specific measurement so that any bias introduced in unfolding will be within the quoted uncertainties of the measurement. Results are shown for the first measurement of the differential cross section $d^{2}\sigma(E_{\nu})/d\cos(\theta_{\mu})dP_{\mu}$ for inclusive muon-neutrino charged-current scattering on argon, which uses data from MicroBooNE, a nominal-flux-prediction unfolding, and data-driven model validation.
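In essence, the conditional constraint procedure is multivariate Gaussian conditioning: if $x_1$ denotes the prediction in the channel being validated and $x_2$ a measurement in a constraining channel, with joint covariance blocks $\Sigma_{ij}$, then
\[
\mu_{1|2} = \mu_1 + \Sigma_{12}\,\Sigma_{22}^{-1}\,(x_2 - \mu_2), \qquad
\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\,\Sigma_{22}^{-1}\,\Sigma_{21},
\]
and goodness-of-fit tests performed against $(\mu_{1|2}, \Sigma_{1|2})$ are more stringent than tests against the unconstrained prediction. (This is the generic form of the procedure; the specific channel choices of the analysis are not reproduced here.)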
We report on a global extraction of the $^{12}$C longitudinal ($R_L$) and transverse ($R_T$) nuclear electromagnetic response functions from an analysis of all available electron scattering data on carbon. The response functions are extracted for a large range of energy transfer $\nu$, spanning the nuclear excitation, quasielastic, and $\Delta(1232)$ regions, over a large range of the square of the four-momentum transfer $Q^2$. We extract $R_L$ and $R_T$ as a function of $\nu$ both for fixed values of $Q^2$ ($0 \le Q^2 \le 1.5$ GeV$^2$) and for fixed values of the momentum transfer $q$. The data sample consists of more than 10,000 $^{12}$C differential electron scattering and photo-absorption cross section measurements. Since the extracted response functions cover a large range of $Q^2$ and $\nu$, they can be readily used to validate both nuclear models and Monte Carlo (MC) generators for electron and neutrino scattering experiments. The extracted response functions are compared to the predictions of several theoretical models and to predictions of the electron-mode versions of the NuWro and GENIE neutrino MC generators.
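The extraction rests on the standard decomposition of the inclusive electron scattering cross section into longitudinal and transverse pieces,
\[
\frac{d^2\sigma}{d\Omega\,d\nu} = \sigma_{\mathrm{Mott}}
\left[\frac{Q^4}{q^4}\,R_L(q,\nu)
+ \left(\frac{Q^2}{2q^2} + \tan^2\frac{\theta}{2}\right) R_T(q,\nu)\right],
\]
so measurements at the same $(q,\nu)$ but different scattering angles $\theta$ (i.e. different beam energies) separate $R_L$ and $R_T$ through a linear, Rosenbluth-type fit.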
Project 8 is an experiment that seeks to determine the electron-weighted neutrino mass via the precise measurement of the electron energy in beta decays, with a sensitivity goal of $40\,\mathrm{meV/c}^2$. We have developed a technique called Cyclotron Radiation Emission Spectroscopy (CRES), which allows single electron detection and characterization through the measurement of cyclotron radiation emitted by magnetically-trapped electrons produced by a gaseous radioactive source. The technique has been successfully demonstrated on a small scale in waveguides to detect radiation from single electrons, and to measure the continuous spectrum from tritium. In order to achieve the projected sensitivity, the experiment will require novel technologies for performing CRES using tritium atoms in a magneto-gravitational trap in a multi-cubic-meter volume. In this talk, I will present a brief overview of the Project 8 experimental program, highlighting the latest results including our first tritium endpoint measurement and neutrino mass limit.
We focus on the potential of neutrino-$^{13}$C neutral-current interactions in clarifying the reactor antineutrino flux around the 6 MeV region. These interactions produce a 3.685 MeV photon line via the de-excitation of $^{13}$C in organic liquid scintillators, which can be observed in reactor neutrino experiments. We expect that future measurements of the neutrino-$^{13}$C cross section in JUNO and IsoDAR@Yemilab at low energies may help test the reactor flux models with the assistance of excellent particle identification.
The COHERENT collaboration made the first measurement of coherent elastic neutrino-nucleus scattering (CEvNS), employing neutrinos produced by the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). The uncertainty of the neutrino flux generated by the SNS is on the order of 10%, making it one of COHERENT's most dominant systematic uncertainties. To address this issue, a heavy water (D2O) detector has been designed to measure the neutrino flux through the well-understood electron neutrino-deuterium interaction. The D2O detector is composed of two identical modules designed to detect Cherenkov photons generated inside the target tank, with Module 1 containing D2O as the target and Module 2 initially containing H2O for comparison and background subtraction. We also aim to measure the cross section of the charged-current interaction between the electron neutrino and oxygen, providing valuable insight for supernova detection in existing and future large water Cherenkov detectors. In this talk, we present construction and commissioning updates for Module 2 along with some preliminary results from Module 1.
The Karlsruhe Tritium Neutrino (KATRIN) Experiment directly measures the neutrino mass scale, with a target sensitivity of 0.3 eV/c$^2$, by determining the shape change of the molecular tritium beta spectrum near the endpoint. KATRIN makes this measurement by employing its Magnetic Adiabatic Collimation with Electrostatic (MAC-E) filter to measure the integrated energy spectrum of the betas from molecular tritium decay. KATRIN is currently operating and has published an electron neutrino mass limit of 0.8 eV/c$^2$ (90% C.L.) from its first two neutrino mass campaigns. The results from its first five neutrino mass campaigns are on track to be released later this year. In this talk, I will explain the operation of KATRIN and the analysis being done to understand the systematics that impact the KATRIN neutrino mass results.
Neutrino-nucleus scatterings in the detector could induce electron ionization signatures due to the Migdal effect. We derive prospects for a future detection of the Migdal effect via coherent elastic solar neutrino-nucleus scatterings in liquid xenon detectors, and discuss the irreducible background that it constitutes for the Migdal effect caused by light dark matter-nucleus scatterings. Furthermore, we explore the ionization signal induced by some neutrino electromagnetic and non-standard interactions on nuclei. In certain scenarios, we find a distinct peak on the ionization spectrum of xenon around 0.1 keV, in clear contrast to the Standard Model expectation.
The Coherent CAPTAIN-Mills (CCM) experiment is a 10-ton liquid argon scintillation detector located at Los Alamos National Lab studying neutrino and beyond Standard Model physics. The detector is located 23 m downstream from the Lujan Facility's stopped-pion source, which will receive $2.25 \times 10^{22}$ POT in the ongoing 3-year run cycle. CCM is instrumented with 200 8-inch PMTs, 80% of which are coated in the wavelength shifter tetraphenyl-butadiene, and 40 optically isolated 1-inch veto PMTs. The combination of coated and uncoated PMTs allows CCM to resolve both scintillation and Cherenkov light. Argon scintillation light peaks at 128 nm, which requires the use of wavelength shifters into the visible spectrum for detection by the PMTs. The uncoated PMTs, by contrast, are more sensitive to the broad-spectrum Cherenkov light and less sensitive to the UV scintillation light produced in argon. This combination of coated and uncoated PMTs, along with our 2 ns timing resolution, enables event-by-event identification of Cherenkov light, a powerful tool for rejecting neutron backgrounds that improves sensitivity to dark sector and beyond Standard Model physics searches.
HEP experiments are operated by thousands of international collaborators and are major drivers of frontier science and human knowledge. They provide fertile ground for training the next generation of scientists. While we invest in science, it is equally imperative to integrate into our scientific mission opportunities for participation and contribution by underrepresented and marginalized populations of our society. One of the most powerful enablers is to address professional-development needs, advancing the skills required to succeed in HEP and STEM areas. The NSF-funded IRIS-HEP "Training, Education & Outreach" program is uniquely placed to implement this. Its experiment-agnostic, collaborative approach has trained over two thousand users with sustainability as its centerpiece. Its open-source training modules allow technical continuity and collaboration. Beyond HEP users, this software material is used to train students in HEP-based internship programs, imparting an enriched experience. Its broader impacts include software training for high school teachers, with the goal of tapping, growing, and diversifying the talent pipeline for future cyber-infrastructure, starting with K-12 students. These efforts are lowering barriers and building a path toward greater participation in STEM areas by underrepresented populations. This contribution describes the aforementioned efforts.
If science outreach is about connecting with new audiences, music remains a uniquely accessible form of outreach. However, physics music needn't be limited to campy parodies. A project for creating music that is accessible at multiple technical levels will be presented. Using a form of 2D wavetable synthesis, stereo audio signals are mapped onto an oscilloscope's X-Y mode to create images of LHC experiments from the music itself. On the dance floor, ColliderScope allows the physics community to reach truly new audiences. This talk will describe the project and its philosophy.
Aside from specialized skills, physicists also have quantitative skills useful in a wide variety of contexts. Among these are the abilities to quantify uncertainty and make useful approximations. These skills, if practiced by members of the general public, can help in understanding scientific results, in understanding the progress of science, in evaluating claims from non-scientific sources, and in taking into account the limitations of approximate models. In this talk, I discuss some efforts to make these skills intelligible to a more mainstream audience.
nEXO is a planned next-generation neutrinoless double beta decay experiment, designed to be built at SNOLAB in Ontario, Canada. Within the international nuclear and astroparticle physics communities, we strive to be a leader and role model in the areas of Diversity, Equity, and Inclusion (DEI) while drawing inspiration from the trailblazers who came before us. In 2020 nEXO founded its Diversity, Equity, and Inclusion Committee which has since developed a series of programming efforts such as a mentorship program, an internal DEI lecture series, and an internal newsletter and information hub. With recent funding from a DOE RENEW grant, nEXO plans to further its reach in the realm of DEI by starting several new initiatives including the creation of a DEI workshop for collaborations to be held in the summer of 2025. It is our hope that through this workshop, ideas can be shared and best practices for the community can be developed. This talk outlines the work of the nEXO DEI committee and a vision for the integration of DEI into physics collaborations.
I created a presentation, Building Inclusive Communities, and workshopped it in my classes over five years. I have now conducted a physics education research study to measure its impact on students' sense of belonging, scientific identity, and course performance. I will share these results, as well as EDI resources for teachers, mentors, and students.
Models of freeze-in dark matter (DM) can incorporate baryogenesis through a straightforward extension to two or more DM particles with different masses. We study a novel realization of freeze-in baryogenesis, in which a new SU(2)-doublet vector-like fermion (VLF) couples feebly to the SM Higgs and multiple fermionic DM mass eigenstates, leading to out-of-equilibrium DM production in the early universe via the decays of the VLF. An asymmetry is first generated in the Higgs and VLF sectors through the coherent production, propagation, and rescattering of the DM. This asymmetry is subsequently converted into a baryon asymmetry by SM processes and, potentially, by additional VLF interactions. We find that the asymmetry in this Higgs-coupled scenario has a different parametric dependence relative to previously considered models of freeze-in baryogenesis. We characterize the viable DM and VLF parameter spaces and find that the VLF is a promising target for current and future collider searches.
In this work, we explore baryon-number-violating (BNV) interactions within a specific model framework involving a charged iso-singlet, color-triplet scalar and a Majorana fermion with interactions in the quark sector. This model has been useful for explaining baryogenesis, neutron-antineutron oscillations, and other puzzles such as the DM-baryon coincidence puzzle. We revisit this model, with chiral perturbation theory as a guide, at the level of baryons and mesons in the dense environments of neutron stars. BNV neutron decays become accessible in this environment where in vacuum they would be kinematically forbidden. By considering several equations of state in binary pulsar candidates, we establish strong constraints on the model parameter space from these decays and the subsequent scattering of the Majorana fermions, in total amounting to a $\Delta B=2$ loss in the star. These limits are highly complementary to laboratory bounds from rare dinucleon decay searches and collider probes.
Extending the Standard Model (SM) with right-handed neutrinos (RHN) provides a minimal explanation for both the origin of the SM neutrino masses through the type-I seesaw and of the present imbalance between matter and antimatter in our universe through leptogenesis. Even though the mass of these RHNs is in principle unbounded from above, an attractive possibility would be for the RHN masses to lie at a relatively low scale, i.e. MeV to TeV, such that these new particles can be searched for at present-day experiments. In this talk, I will discuss how the testability of the model changes compared to the minimal case with 2 RHNs when one considers a scenario with 3 RHN generations instead. Moreover, I will also look into the effects of flavour and CP symmetries on the parameter space of the model.
Based on arXiv:2106.16226, arXiv:2203.08538 and other upcoming works
Heavy neutral leptons (HNLs) are an extension of the Standard Model that are well-motivated by neutrino masses, dark matter, and baryogenesis via leptogenesis. We present a comprehensive analysis of all significant HNL production and decay mechanisms. This work has been incorporated into a new module that generates events for HNLs with arbitrary couplings to the $e$, $\mu$, and $\tau$ neutrinos within the FORESEE simulation package. We apply this new framework to simulate results for the well known benchmarks $U_e^2:U_\mu^2:U_\tau^2 =$ 1:0:0, 0:1:0, 0:0:1, as well as the recently proposed benchmarks 0:1:1, and 1:1:1. The simulations are performed for FASER and proposed experiments at the Forward Physics Facility. We find projected sensitivities that extend into currently unexplored regions of parameter space with HNL masses in the 2 to 3.5 GeV range.
In the ongoing Short-Baseline Neutrino facilities such as the Short-Baseline Near Detector (SBND), MicroBooNE and ICARUS, there exists an iron dump positioned $\sim$ 45.79 m from the Fermilab Booster Neutrino Beam (BNB)'s beryllium target. The neutrinos produced from charged pion and kaon decays can up-scatter off iron nuclei, resulting in the production of MeV-scale heavy neutral leptons (HNLs). These HNLs then travel to the respective detectors and decay into Standard Model neutrinos, photons, $e^+e^-$, etc. While previous studies have predominantly focused on HNL production without considering the iron dump, including it significantly enhances the sensitivity, allowing us to probe more of the unconstrained parameter space of HNL coupling versus mass. Additionally, distinctive signatures indicating the production origin of the HNLs are observed in the energy and angular spectra of the final states. Furthermore, we also investigate the effects of the dump in the case of inelastic dark matter, thereby probing unexplored regions of that parameter space.
As we push to high precision measurements, the PDF uncertainty is often a limiting factor. To achieve improved precision, our goal is to not only ‘fit’ the PDFs, but to better understand the underlying process at the precision level. Toward this goal, we extend the QCD Parton Model analysis using a factorized nuclear structure model incorporating individual nucleons, and pairs of correlated nucleons. Our analysis simultaneously extracts the universal effective distribution of quarks and gluons inside correlated nucleon pairs, and the nucleus-specific fractions of such correlated pairs. These results fit data from lepton Deep-Inelastic Scattering, Drell-Yan processes, and high-mass boson production. This successful extraction of nuclear structure properties marks a significant advancement in our understanding of the fundamental structure of nuclei.
We present a very simple method for calculating the mixed Coulomb-nuclear effects in the $pp$ and $\bar{p}p$ scattering amplitudes, and illustrate the method using simple models frequently used to describe their differential cross sections at small momentum transfers. Combined with the pure Coulomb and form-factor contributions to the scattering amplitude which are known analytically from prior work, and the unmixed nuclear or strong-interaction scattering amplitude, the results give a much simpler approach to fitting the measured $pp$ and $\bar{p}p$ cross sections and extracting information on the real part of the forward scattering amplitudes than methods now in use.
In this work, we complete our CT18qed study with the neutron's photon parton distribution function (PDF), which is essential for the nucleus scattering phenomenology. Two methods, CT18lux and CT18qed, based on the LUXqed formalism and the DGLAP evolution, respectively, to determine the neutron's photon PDF have been presented. Various low-$Q^2$ non-perturbative variations have been carefully examined, which are treated as additional uncertainties on top of those induced by quark and gluon PDFs. The impacts of the momentum sum rule as well as isospin symmetry violation have been explored and turned out to be negligible. A detailed comparison with other neutron's photon PDF sets has been performed, which shows a great improvement in the precision and a reasonable uncertainty estimation. Finally, two phenomenological implications are demonstrated with photon-initiated processes: neutrino-nucleus W-boson production, which is important for the near-future TeV-PeV neutrino observations, and the axion-like particle production at a high-energy muon beam-dump experiment.
The X(6900) resonance, originally discovered by the LHCb collaboration and later confirmed by both the ATLAS and CMS experiments, has sparked broad interest in fully-charmed tetraquark states. Compared with the mass spectra and decay properties of fully-heavy tetraquarks, our knowledge of their production mechanism is still rather limited. In this talk, I will discuss the production of S-wave fully-heavy tetraquarks at the LHC and at an electron-ion collider within the nonrelativistic QCD (NRQCD) framework. We predict the differential pT spectra of various fully-charmed S-wave tetraquarks at the LHC and compare with the results predicted from the fragmentation mechanism at the large-pT end. We also examine the production prospects at various electron-proton colliders.
Until recently, it was widely believed that every hadron is a composite state of either three quarks or one quark and one antiquark. In the last 20 years, dozens of exotic heavy hadrons have been discovered, and yet no theoretical scheme has unveiled the general pattern. For hadrons that contain more than one heavy quark or antiquark, the Born-Oppenheimer approximation for QCD provides a rigorous approach to the problem. In this approximation, a double-heavy hadron corresponds to an energy level in a potential that increases linearly at large interquark distances. Pairs of heavy hadrons, on the other hand, correspond to energy levels in potentials that approach a constant at large interquark distances. In this talk, I will discuss decays of double-heavy hadrons into pairs of heavy hadrons, which are mediated by couplings between the respective Born-Oppenheimer potentials. I will show that conventional and exotic double-heavy hadrons follow different decay patterns dictated by the symmetries of QCD with two static color sources. As case studies, I will compare selection rules and branching ratios for the decays of quarkonium and quarkonium-hybrid mesons into the lightest pairs of heavy mesons. I will also discuss the corresponding decays of double-heavy tetraquarks.
Double-heavy hadrons can be identified as bound states in the Born-Oppenheimer potentials for QCD. We present parameterizations of the 5 lowest Born-Oppenheimer potentials from pure $SU(3)$ lattice gauge theory as functions of the separation $r$ of the static quark and antiquark sources. The parametrizations have the correct limiting behavior at small $r$, where the potentials form multiplets associated with gluelumps. They also have the correct limiting behavior at large $r$, where the potentials form multiplets associated with excitations of a relativistic string. These Born-Oppenheimer potentials can be used to develop models based on QCD for the many exotic heavy hadrons that have been discovered since 2003.
Non-perturbative dynamics of gauge theories is notoriously difficult to study. I discuss how supersymmetry slightly broken by anomaly mediation allows us to derive many features of the dynamics, including explicit demonstrations of chiral symmetry breaking and monopole condensation, calculations of non-perturbative condensates, the correct large-$N_c$ behavior, and some of the low-lying spectra.
We find a complete set of 4-point vertices in the Constructive Standard Model (CSM) by satisfying perturbative unitarity. We use these and the 3-point vertices to calculate a comprehensive set of 4-point amplitudes in the CSM. We also introduce a package to numerically calculate phase-space points for constructive amplitudes and use it to validate the 4-point amplitudes against Feynman diagrams.
This talk is based on the following preprints: arXiv:2403.07977, arXiv:2403.07978, and arXiv:2403.07981.
It is well known that in QFT, perturbative series expansions in powers of the coupling constant yield asymptotic series. At weak coupling this is not an issue, since the series is accurate at low orders and one can use it to make reliable predictions. However, the series fails completely at strong coupling. I will show that one can develop two different types of series expansions that are absolutely convergent and valid at both strong and weak coupling. The first series is the usual one, in powers of the coupling constant, but where we pay special attention to the order of two asymptotic limits. In the second series, we expand the quadratic/kinetic part but not the interaction part containing the coupling; this yields a series in inverse powers of the coupling. The first series converges quickly at weak coupling and slowly at strong coupling, whereas the reverse holds for the second series. We apply this to a basic one-dimensional integral and to a path integral in quantum mechanics, both of which contain a quadratic term and a quartic interaction term containing the coupling.
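To see why the ordinary weak-coupling expansion is only asymptotic, consider the toy partition function $Z(g)=\int_{-\infty}^{\infty} dx\, e^{-x^2/2 - g x^4}$ (a "basic one-dimensional integral" of the kind mentioned above; the snippet below illustrates the asymptotic behavior, not the talk's convergent constructions). Expanding $e^{-gx^4}$ term by term gives $Z(g) \approx \sqrt{2\pi}\,\sum_n (-g)^n (4n-1)!!/n!$, whose partial sums first approach the exact answer and then diverge:

```python
# Compare the weak-coupling asymptotic series of Z(g) with the exact
# (numerical) value. Term n is sqrt(2*pi) * (-g)^n * (4n-1)!! / n!,
# from the Gaussian moments of x^{4n}.
import numpy as np
from scipy.integrate import quad

def Z_exact(g):
    return quad(lambda x: np.exp(-0.5 * x**2 - g * x**4), -np.inf, np.inf)[0]

def partial_sum(g, N):
    s, term = 0.0, np.sqrt(2 * np.pi)  # n = 0 term
    for n in range(N + 1):
        s += term
        # ratio of consecutive terms: term_{n+1}/term_n = -g (4n+3)(4n+1)/(n+1)
        term *= -g * (4 * n + 3) * (4 * n + 1) / (n + 1)
    return s

g = 0.01
print(f"exact: {Z_exact(g):.6f}")
for N in (1, 3, 6, 10, 14):
    print(f"N={N:2d}: {partial_sum(g, N):.6f}")  # improves, then diverges
```

The best achievable accuracy occurs at an optimal truncation order that shrinks as $g$ grows, which is exactly the failure at strong coupling that the convergent expansions are designed to cure.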
I will present some recent progress at the intersection between machine learning and field theories, highlighting Feynman diagram methods for neural network correlators and neural scaling laws.
First, building on a correspondence between neural network ensembles and statistical field theories, I will introduce a diagrammatic framework to calculate neural network correlators in the large-width expansion and study RG flow and criticality. Then, I will show how large-N field theory methods can be used to solve a model of neural scaling laws.
Based on arXiv:2305.02334 and work to appear.
The constructive method of determining amplitudes from on-shell pole structure has been shown to be promising for calculating amplitudes in a more efficient way. However, challenges have been encountered when a massless internal photon is involved in the gluing of three-point amplitudes with massive external particles. In this talk, I will describe how to use the original on-shell method, old-fashioned perturbation theory, to shed light on the constructive method, and show that one can derive the Feynman amplitude by correctly identifying the residue even when there is an internal photon involved.
We demonstrate how the scattering amplitudes of some scalar theories, scaffolded general relativity, multi-flavor DBI, and the special Galileon, vanish at multiple loci in momentum space that include and extend their soft-limit behaviors. We elucidate the factorization of the amplitudes near the zero loci into lower point amplitudes. We explain how the occurrence of the zero loci in these theories can be understood in terms of the double copy formalism.
In the presence of axion dark matter, electrons experience an "axion wind" spin torque and an "axioelectric" force, which give rise to magnetization and polarization currents in common ferrite materials. The radiation produced by these currents can be amplified in multilayer setups, which are potentially sensitive to the QCD axion without requiring a large external magnetic field.
The future Electron-Ion Collider (EIC) will have the capability to collide various particle beams with large luminosities in a relatively clean environment, providing access to untouched parameter space for new physics. In this study, we look at the EIC's sensitivity to axion-like particles (ALPs) that are created via photon fusion and promptly decay to photons. Proton-electron collisions mildly improve the parameter space reach in the 2-6 GeV ALP mass range, while collisions between lead ions and electrons improve the reach by a factor of ~10² in the same region, along with mild improvement from 6-30 GeV. This large improvement is due to the coherent scattering of electrons with lead ions, which benefits from a Z² enhancement of the cross section. A brief look at the same ALP production methods at a future Muon-Ion Collider yields a similar improvement of ~10² in sensitivity in the 30-300 GeV ALP mass range, owing to larger beam energies.
Owing to the high temperatures in the plasma of massive stars in the later stages of their evolution, a copious number of heavy axion-like particles (ALPs) coupled to the photon field can be produced via the Primakoff and photon coalescence processes. These heavy axions produced inside stars spontaneously decay into two photons, yielding a potentially detectable photon signal for current and future X-ray and gamma-ray telescopes. We discuss the observability of this signal using stellar models constructed with the 1D stellar evolution code MESA.
We identify a new resonance, axion magnetic resonance (AMR), that can greatly enhance the conversion rate between axions and photons. A series of axion search experiments rely on converting axions into photons inside a constant magnetic field background. A common bottleneck of such experiments is that the conversion amplitude is suppressed by the axion mass when $m_a \gtrsim 10^{-4}~$eV. We point out that a spatial or temporal variation in the magnetic field can cancel the difference between the photon dispersion relation and that of the axion, greatly enhancing the conversion probability. We demonstrate that the enhancement can be achieved both by a helical magnetic field profile and by a harmonic oscillation of the field magnitude. Our approach can extend the projected ALPS II reach in the axion-photon coupling ($g_{a\gamma}$) by two orders of magnitude at $m_a = 10^{-3}\;\mathrm{eV}$ with moderate assumptions.
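Schematically, the axion-to-photon conversion probability in an external transverse field $B_\perp(z)$ over a length $L$ is
\[
P_{a\to\gamma} \propto \left|\int_0^L dz\, \frac{g_{a\gamma} B_\perp(z)}{2}\, e^{i\,\Delta k\, z}\right|^2,
\qquad \Delta k \simeq \frac{m_a^2 - \omega_{\rm pl}^2}{2\omega},
\]
so for constant $B_\perp$ the integrand's phase oscillates and the probability is suppressed once $\Delta k\, L \gg 1$; modulating $B_\perp(z)$ at wavenumber $\Delta k$ (e.g. with a helical profile) cancels the phase and restores coherent growth, in direct analogy with driving a magnetic resonance on resonance. (The conventions here are the standard relativistic-mixing ones, not necessarily those of the talk.)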
I will discuss a recently proposed novel experimental setup for axion-like particle (ALP) searches. Nuclear reactors produce a copious number of photons, a fraction of which could convert into ALPs via the Primakoff process in the reactor core. The generated flux of ALPs leaves the nuclear power plant, and its passage through a region with a strong magnetic field results in efficient conversion to photons, which can be detected. Such a magnetic field is the key component of axion haloscope experiments. I will discuss existing setups featuring an adjacent nuclear reactor and axion haloscope and I will demonstrate that the obtained sensitivity projections complement constraints from existing laboratory experiments, e.g., light-shining-through-walls.
Primordial black holes (PBHs) remain a viable dark matter candidate in the asteroid-mass range. We point out that in this scenario, the PBH abundance would be large enough for at least one object to cross through the inner Solar System per decade. Since Solar System ephemerides are modeled and measured to extremely high precision, such close encounters could produce detectable perturbations to orbital trajectories with characteristic features. We evaluate this possibility with a suite of simple Solar System simulations, and we argue that the abundance of asteroid-mass PBHs can plausibly be probed by existing and near-future data.
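The quoted rate is easy to sanity-check with round numbers (all inputs below are generic assumed values, not the paper's): dividing the local DM density by a PBH mass gives a number density, and the flux through a disk of inner-Solar-System size then gives roughly one crossing per decade for asteroid-mass objects.

```python
# Back-of-envelope PBH crossing rate through the inner Solar System.
# All input values are round numbers assumed for illustration.
AU   = 1.496e13          # cm
YEAR = 3.156e7           # s

rho_dm = 0.4 * 1.783e-24 # local DM density: 0.4 GeV/cm^3 in g/cm^3
m_pbh  = 1e19            # an asteroid-mass PBH, in g
v      = 2.5e7           # typical halo velocity ~250 km/s, in cm/s
r_in   = 2 * AU          # "inner Solar System" radius, in cm

n_pbh = rho_dm / m_pbh                # PBH number density, cm^-3
rate  = n_pbh * 3.1416 * r_in**2 * v  # flux through a disk of radius r_in, s^-1
print(f"{rate * 10 * YEAR:.1f} crossings per decade")  # ~1.6 for these inputs
```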
If present in the early universe, primordial black holes (PBHs) will accrete matter and emit high-energy photons, altering the statistical properties of the Cosmic Microwave Background (CMB). This mechanism has been used to constrain the fraction of dark matter that is in the form of PBHs to be much smaller than unity for PBH masses well above one solar mass. Moreover, the presence of dense dark matter mini-halos around the PBHs has been used to set even more stringent constraints, as these would boost the accretion rates. In this work, we critically revisit CMB constraints on PBHs taking into account the role of the local ionization of the gas around them. We discuss how the local increase in temperature around PBHs can prevent the dark matter mini-halos from strongly enhancing the accretion process, in some cases significantly weakening previously derived CMB constraints. We explore in detail the key ingredients of the CMB bound and derive a conservative limit on the cosmological abundance of massive PBHs.
We demonstrate a novel mechanism for forming dark compact objects and black holes through a dissipative dark sector. Heavy dark sector particles can be responsible for an early matter dominated era before Big Bang Nucleosynthesis (BBN). Density perturbations in this epoch can grow and collapse into tiny dissipative dark matter halos, which can cool via self-interactions. Once these halos have formed, a thermal phase transition can then shift the Universe back into radiation domination and standard cosmology. These halos can continue to collapse after BBN, resulting in the late-time formation of fragmented compact MACHOs and sub-solar mass primordial black holes.
Atomic dark matter is a dark sector model including two fermionic states oppositely charged under a dark U(1) gauge symmetry, which can result in rich cosmological signatures. I discuss recent work using cosmological N-body simulations to investigate the impact of an atomic dark matter sector on observables such as the galactic UV luminosity function at redshifts >10, and consider the constraining power of recent JWST observations for this model.
Atomic Dark Matter (aDM) is a well-motivated class of models with the potential to be discovered at ground-based direct detection experiments. The class of models we consider contains a massless dark photon and two Dirac fermions with different masses and opposite dark charge (dark protons and dark electrons), which generally interact with the Standard Model through a kinetic mixing portal with our photon. The dark fermions have the potential to be captured in the Earth. Due to the mass difference, evaporation efficiencies are lower for dark protons than for dark electrons, leading to a net dark charge in the Earth. This has the potential to alter the incoming flux of aDM in complex ways, due to interactions between the ambient dark plasma and the dark-charged Earth, modifying event rates in ground-based direct detection experiments compared to the standard DM expectation. In this talk I will describe our ongoing effort to calculate aDM's interaction with and subsequent capture in the Earth through the dark photon portal. We identify regions of the aDM parameter space where there may be significant accumulation of aDM in the Earth, taking into account cosmological constraints on the kinetic mixing of the massless dark photon.
Atomic dark matter (ADM) models, with a minimal content of a dark proton, dark electron, and a massless dark photon, are motivated by theories such as the Mirror Twin Higgs. ADM models might address the seeming tension between cold dark matter (CDM) and observations at small scales: the excessive predicted number of dwarf galaxies in the Milky Way, and the cuspiness of galactic cores. ADM has been shown to suppress matter perturbations on small scales, and N-body simulations with a percent-level ADM subcomponent predict interesting sub-galactic structures. We use similar N-body simulations and Lyman-alpha forest data, which are sensitive to small-scale ADM effects, to produce robust constraints on the ADM parameter space. We use machine learning methods to optimize computational efficiency when scanning over the parameter space.
Primordial black holes (PBHs) can be formed from the collapse of large-density inhomogeneities in the early Universe through various mechanisms. One such mechanism is a strong first-order phase transition, where PBH formation arises due to the delayed vacuum transition. The probabilistic nature of bubble nucleation implies that there is a possibility that large regions are filled by the false vacuum, where nucleation is delayed. When the vacuum energy density inside those regions decays into other components, overdensity reaches a threshold, and the whole mass inside the region could gravitationally collapse into PBHs. In this scenario, PBHs can serve as both dark matter candidates and probes for models featuring first-order phase transitions, making it phenomenologically appealing. This mechanism can be tested through a multi-pronged approach, encompassing gravitational wave detectors, microlensing studies, and collider experiments.
It has been demonstrated that "optimized partial dressing" (OPD) thermal mass resummation, which inserts gap equation solutions into the tadpole, efficiently tames finite-temperature perturbation theory calculations of the thermal effective potential without necessitating the high-temperature approximation. Although OPD was shown to have a scale dependence similar to 3D EFT approaches in the high-T limit, the calculated scale dependence of observables, in particular the strength of the gravitational wave signal from a phase transition, is sizeable. In this talk we will show a self-consistent way to RG-improve the scalar potential at finite temperature in the OPD formalism and demonstrate a large reduction in the scale dependence of physical observables compared to current techniques.
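As a minimal sketch of the idea (written here for a single scalar with quartic coupling $\lambda$; the full OPD prescription is more elaborate), the thermal mass is obtained from a gap equation of the schematic form
\[
m^{2}(T) \;=\; m_0^{2} \;+\; \frac{\lambda T^{2}}{24} \;-\; \frac{\lambda\, T\, m(T)}{8\pi} \;+\; \dots,
\]
solved self-consistently for $m(T)$ and then inserted into the tadpole contribution to the effective potential, rather than truncated at a fixed perturbative order.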
We consider a classically conformal $U(1)$ extension of the Standard Model (SM) in which the new $U(1)$ symmetry is radiatively broken via the Coleman-Weinberg mechanism, after which the $U(1)$ Higgs field $\phi$ drives electroweak symmetry breaking through a mixed quartic coupling with the SM Higgs doublet via coupling constant $\lambda_{mix}$. For $m_{\phi}$ < $\frac{m_{h}}{2}$, the coupling governing the decay $h \rightarrow \phi \phi$ is strongly suppressed, and experimental signals lie in the domain of future experiments such as the ILC. Additional probes of this conformal model are future gravitational wave (GW) observatories, capable of detecting primordial GW generated from a strong first-order phase transition (FOPT). We perform a numerical analysis to investigate possible characteristic GW signals and detection prospects for a conformal model with such a phase transition, specifically for parameter regions which would simultaneously reproduce observed Dark Matter relic density and for which the anomalous Higgs properties can be measured at the ILC.
Our study presents a comprehensive analysis of baryon number violation during the electroweak phase transition (EWPT) within the framework of an extended scalar electroweak multiplet. We perform a topological classification of the scalar multiplet's representation during the EWPT, identifying the conditions under which monopole or sphaleron field solutions emerge, contingent on whether the hypercharge is zero; this indicates that only a monopole scalar multiplet can contribute to the dark matter relic density. We also conduct a systematic study of other formal aspects, such as the construction of the higher-dimensional sphaleron matrix, the computation of the sphaleron energy and monopole mass, and the analysis of boundary conditions for the field equations of motion. We then scrutinize the computation of the sphaleron energy and monopole mass within the context of a multi-step EWPT, employing the SU(2) septuplet scalar extension of the Standard Model (SM) as a case study. In the scenario of a single-step EWPT leading to a mixed phase, we find that the additional multiplet's contribution to the sphaleron energy is negligible, primarily due to the prevailing constraint imposed by the $\rho$ parameter. Conversely, in a two-step EWPT scenario, the monopole mass can reach significantly high values during the initial phase, thereby markedly constraining the monopole density and preserving the baryon asymmetry if the universe undergoes a first-order phase transition. In the two-step case, we delineate the relationship between the monopole mass and the parameters relevant to dark matter phenomenology.
The Minimal Supersymmetric Standard Model (MSSM) falls short both in inducing a strong first-order phase transition and in providing sufficient CP violation to explain the observed baryon asymmetry of the universe (BAU). In this talk, I will discuss how the BAU could be generated in the context of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), and how strongly the CP-violating ingredients of the NMSSM are constrained by ongoing experiments, especially searches for permanent EDMs of fundamental particles.
We employ a derivative expansion method to analyze the effective action within the SU(2)-Higgs model at finite temperature. By utilizing a specific power counting scheme, we compute gauge-invariant constraints on primordial gravitational waves arising from a thermal first-order electroweak phase transition. We then compare these results with findings from a pre-existing nonperturbative analysis, effectively benchmarking the framework's validity and assessing its implications for the detectability of a stochastic gravitational wave background by forthcoming experiments such as LISA.
Recently, the NANOGrav collaboration, based on 12.5 years of observation, reported strong evidence [Arzoumanian et al. (2020)], and the subsequent analysis of 15 years of data confirmed the detection of a stochastic gravitational wave background [Agazie et al. (2023)]. Alongside the possibility of astrophysically sourced gravitational waves (such as from supermassive black holes), this signal can be understood as possibly originating in the early universe [Figueroa et al. (2023)]. Note that the detection of the stochastic gravitational wave background has been confirmed by several pulsar timing array (PTA) missions, including the European PTA (EPTA) and the Indian PTA (InPTA) [EPTA collaboration; InPTA collaboration (2023)]. I will report the results of direct numerical simulations of gravitational waves induced by hydrodynamic and hydromagnetic turbulent sources that might have been present at the quantum chromodynamic (QCD) phase transition. Based on the existing data, I will discuss constraints on cosmological models.
We analyse sound waves arising from a cosmic phase transition, taking the full velocity profile into account, as an explanation for the gravitational wave spectrum observed by multiple pulsar timing array groups. Unlike the broken power law used in the literature, in this scenario the power law after the peak depends on the macroscopic properties of the phase transition, allowing for a better fit with pulsar timing array (PTA) data. We compare the best fit with that obtained using the usual broken power law and, unsurprisingly, find a better fit with the gravitational wave (GW) spectrum that utilizes the full velocity profile. Even more importantly, the thermal parameters that produce the best fit are quite different. We then discuss models that can produce the best-fit point, as well as complementary probes using CMB experiments and searches for light particles in DUNE, IceCube-Gen2, neutrinoless double $\beta$-decay, and forward physics facilities (FPF) at the LHC such as FASER$\nu$.
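For reference, one broken power law commonly fit to PTA data is the sound-wave template
\[
\Omega_{\rm GW}(f) \;\propto\; \left(\frac{f}{f_p}\right)^{3}\left[\frac{7}{4 + 3\,(f/f_p)^{2}}\right]^{7/2},
\]
which rises as $f^{3}$ below the peak frequency $f_p$ and falls as $f^{-4}$ above it with fixed indices; the analysis above replaces the fixed post-peak index with one determined by the velocity profile of the transition.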
We show that observations of primordial gravitational waves of inflationary origin can shed light on the scale of flavor violation in a flavon model. The mass hierarchy of fermions can be explained by a flavon field. If it exists, the energy density stored in oscillations of the flavon field around the minimum of its potential redshifts as matter and is expected to dominate over radiation in the early universe. The evolution of primordial gravitational waves acts as a bookkeeping method for the expansion history of the universe. Importantly, the gravitational wave spectrum differs if there is an early matter-dominated era, compared to the radiation domination expected in the standard cosmological model, and it is damped by the entropy released in flavon decays, determined by the flavon mass $m_S$ and the new scale of flavor violation $\Lambda_{\rm FV}$. Furthermore, the flavon decays can source the baryon asymmetry of the universe. We show that the $m_S-\Lambda_{\rm FV}$ parameter space in which the correct baryon asymmetry is produced can also be probed by gravitational wave observatories such as BBO, DECIGO, U-DECIGO, ARES, LISA, ET, and CE for a blue-tilted gravitational wave spectrum. Our results are compatible with a primordial origin of the NANOGrav observations.
Different inflation models make testable predictions that are often close to each other, and breaking this degeneracy (i.e. distinguishing different models) may then require additional observables. In this talk, we explore the minimal production of gravitational waves during reheating after inflation, arising from the minimal coupling of the inflaton to gravity. The subsequent signal shows a strong distinction between different inflaton potentials. If detected, such signal can also be used to probe the reheating process and would serve as a direct measurement of the inflaton mass.
Gravitational-wave (GW) signals offer unique probes into the early universe dynamics, particularly those from topological defects. We investigate a scenario involving a two-step phase transition resulting in a network of domain walls bound by cosmic strings. By introducing a period of inflation between the two phase transitions, we show that the stochastic GW signal can be greatly enhanced. The generality of the mechanism also allows the resulting signal to appear in a broad range of frequencies and can be discovered by a multitude of future probes, such as Pulsar Timing Arrays, and space- and ground-based observatories. We also offer a concrete model realization that relates the second phase transition to the epoch of inflation. In this model, the successful detection of the GW spectrum peak pinpoints the soft supersymmetry breaking scale and the domain wall tension.
The dynamical generation of right-handed-neutrino (RHN) masses $M_N$ in the early Universe naturally entails a heavy scalar $\phi$ responsible for B-L symmetry breaking. Its decay in the early universe leads to novel gravitational wave (GW) spectral shapes, arising from primordial tensor modes generated during inflation that re-enter the horizon before or during an epoch in which the energy budget of the universe is dominated by $\phi$ or $N$, whose out-of-equilibrium decay releases entropy and participates in low- and intermediate-scale leptogenesis. We show that characteristic simultaneous damping and knee-like features in the GW spectrum across multi-band frequency ranges would provide evidence for low-scale thermal and non-thermal leptogenesis via $\phi$ decays and subsequent RHN decays. We identify the regions of the microphysics BSM parameter space, involving the $\phi$-$N$ Yukawa coupling $y_N$ and $M_N$, that upcoming GW detectors will be able to probe. The detection of such a spectral feature would thus represent a novel and unique possibility to probe the physics of RHN mass generation in regions of parameter space that allow for low- and intermediate-scale leptogenesis in accord with electroweak naturalness.
The large-scale Water-based Liquid Scintillator (WbLS) detector is a new opportunity for the neutrino community to perform competitive long-baseline neutrino oscillation measurements and unprecedented low-energy neutrino measurements. Several table-top WbLS detection systems have been implemented at BNL and LBNL.
It is critical to advance further with a mid-scale demonstrator to understand and tune the WbLS property and stability.
A 1-ton detector, located in the BNL instrumentation building, with WbLS contained in an acrylic tank coupled to 2'' and 3'' PMTs outside, was built in 2022. In addition, a 30-ton detector at BNL, equipped with Hamamatsu 10'' PMTs submerged in the WbLS, is being built by the same team. Various liquid materials were developed and filled sequentially into the 1-ton detector, and the performance and stability of the WbLS were measured with cosmic muons and an alpha source. In this presentation, the latest status of the experiments and the physics results will be shown.
Liquid Xenon Time Projection Chambers have dominated the search for dark matter in the form of Weakly Interacting Massive Particles. The current generation of detectors (LZ, XENONnT, and PandaX) is becoming sensitive to coherent elastic neutrino-nucleus scattering from the Boron-8 solar neutrino component of the neutrino fog. However, current limits from these detectors are still two orders of magnitude above the fundamental limit set by the neutrino fog, which would be reachable with a kilotonne-year exposure. Work is ongoing to realize such a Liquid Xenon Observatory, with significant developments underway for DARWIN and within the XLZD consortium. In this talk, I present the requirements and status of liquid xenon time projection technology for a kilotonne-year exposure, with regard to backgrounds and detector subsystems.
Liquid xenon (LXe) detectors are used in many experiments, including the proposed dark matter and neutrinoless double-beta decay searches DARWIN and nEXO. LXe scintillates in the vacuum ultraviolet (VUV) region, and understanding the optical properties of materials and photosensors in this region is important for maximizing the sensitivity of these experiments. LIXO is a setup dedicated to such measurements, constructed at the University of Alabama. It has provided the first measurement of the angular-resolved reflectivity and PDE of a SiPM in LXe. LIXO has been upgraded to improve measurement speed, reduce uncertainties, and allow transparency measurements in LXe. This talk will present the upgraded system, LIXO2, and report measurement results for the VUV-reflective coating that was developed to improve the sensitivity of LXe experiments. Our results confirm that the coating satisfies the needs of next-generation LXe detectors. The talk will conclude by discussing future LIXO2 measurement plans.
NuDot serves as a significant testbed for liquid scintillator research and development, with a primary objective of reducing one of the major challenges encountered in large-scale liquid scintillator neutrinoless double beta decay (0νββ) investigations: the solar neutrino background. Utilizing machine learning techniques and high-speed electronics, NuDot aims to showcase its capability in acquiring directional information by isolating prompt Cherenkov radiation from the overall isotropic scintillation emission. This precision separation is facilitated by employing low transit-time-spread photomultiplier tubes, with future aims to also include new system-on-a-chip technologies (RFSoCs). The discussion will delve into the NuDot initiative and the utilization of machine learning for signal extraction.
Detecting the detailed 3D topology of ionization in detectors is broadly desirable for enabling new techniques in nuclear and particle physics. One example is the directional detection of nuclear recoils from neutrinos or dark matter, which may prove critical for probing dark matter beneath the neutrino fog and affirming its galactic origin. Gaseous time projection chambers (TPCs) can enable the required low-energy directionality and x/y strip charge readout of such detectors has been proposed as the optimal balance between cost-efficiency and performance. We present an experimental study of nine distinct x/y strip configurations coupled to Micromegas amplification stages. The VMM3a ASIC is used with the Scalable Readout System (SRS) of the RD51 collaboration to read out individual strips, while the Micromegas avalanche charge is recorded with a pulse height analyzer system. These two complementary charge readout techniques are used with radioactive sources to characterize the gain, gain resolution, x/y charge sharing, and point resolution of each setup, in order to identify the optimal charge readout configuration.
Plastic scintillators are common materials in sampling calorimeters. At proton-proton colliders such as the LHC, the intense radiation environment can alter their optical properties, including the index of refraction. We present measurements of the change in the index of refraction for doses between 12 and 70 kGy and show that the size of the change depends on the presence of oxygen. We do this using a new, simple method to measure refractive index based on a consumer-grade camera. The proposed method has a precision within 0.10-0.15%, making it comparable to and more cost-effective than other methods.
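As an illustration of how a camera-based measurement can work (a hedged sketch of one possible geometry, not necessarily the method used above): a ray crossing a plane-parallel tile of thickness $t$ at incidence angle $\theta$ is laterally displaced by an amount that depends only on $t$, $\theta$, and $n$, so the index can be inferred by inverting the measured displacement.

import math
from scipy.optimize import brentq

def displacement(n, t, theta):
    # Lateral shift of a ray crossing a plane-parallel slab (Snell's law).
    s = math.sin(theta)
    return t * s * (1.0 - math.cos(theta) / math.sqrt(n**2 - s**2))

def infer_n(d_meas, t, theta):
    # Solve displacement(n) = d_meas for n; the shift grows monotonically with n.
    return brentq(lambda n: displacement(n, t, theta) - d_meas, 1.01, 2.5)

# Hypothetical example: a 10 mm tile at 45 degrees shifting the image by 3.5 mm
print(round(infer_n(3.5, 10.0, math.radians(45)), 3))  # ~1.57, typical of plastic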
A multi-TeV muon collider has the unique potential to provide both precision measurements and the highest energy reach in one machine that cannot be paralleled by any currently available technology. There has been a strong physics interest in Muon Colliders recently, as indicated by the number of publications, workshops, Snowmass activities, and the 2023 P5 report which referred to it as "...our Muon Shot". Significant progress on the fundamental R&D and design concepts for such a machine has led to a new international effort to assemble a conceptual design within the next several years. This effort will assess the viability of such a machine as a successor to the LHC program. In this talk, I will introduce the concept of a high-energy muon collider, provide brief physics motivation, and review recent technological progress. The remaining challenges and the R&D required to deliver a complete machine description will be described.
The muon collider has been identified as potentially the fastest, cheapest, and most sustainable route to push back the energy frontier. One of the key challenges for the muon collider is to deliver a beam with unprecedented muon beam brightness, so that extremely high luminosity can be reached. Beam brightness has two components: the current of muons that is accelerated, and the size of the muon beam, characterised by the beam emittance. In this talk, I will explain how we can generate a large muon beam current through use of a high-power proton beam impinging on a target immersed in a very high field solenoid, and I will go on to explain how this beam can be squeezed to a tiny emittance through the ionisation cooling technique. I will describe the experimental verification of the ionisation cooling concept and explain the further technology demonstrations that will yield a full demonstration of a chain of ionisation cooling equipment, in order to demonstrate practical execution of the muon collider.
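The competition at the heart of ionisation cooling is captured by the standard emittance evolution equation,
\[
\frac{d\epsilon_n}{ds} \;\simeq\; -\,\frac{\epsilon_n}{\beta^{2} E}\left\langle \frac{dE}{ds} \right\rangle \;+\; \frac{\beta_\perp \,(13.6\ \mathrm{MeV})^{2}}{2\,\beta^{3} E\, m_\mu c^{2}\, X_0},
\]
where the first term is cooling from ionisation energy loss in an absorber and the second is heating from multiple Coulomb scattering; strong solenoid focusing (small $\beta_\perp$) and a large radiation length $X_0$ (e.g. liquid hydrogen) minimise the equilibrium emittance.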
The advantage of muons over electrons for a lepton collider is that one can accelerate and collide them in circular machines. Unfortunately muons are difficult to produce and have a short lifetime, and these basic issues drive most design choices for a muon collider. In particular, unlike most colliders, all the muons of a given sign in each pulse are combined into a single intense bunch. To keep decays reasonable, muons must be accelerated to their final energies in a small number of milliseconds. Accomplishing this with machines that are reasonably sized and cost effective requires unique solutions that maintain a high average bend field while allowing rapid energy variation. I will present two methods to accomplish this, hybrid pulsed synchrotrons and fixed field alternating gradient accelerators, describe recent work on their design, and outline challenges, including issues related to a machine that would fit on the Fermilab site. Once the beams are accelerated, the beams are collided in a ring. Since all the remaining muons eventually decay in this ring, management of the radiation load and the offsite neutrino beam intensity are essential to the collider ring design. Higher magnetic fields have a direct relationship to the luminosity in the collider ring, and thus magnet capabilities have a particularly direct impact on machine performance here. I describe important issues for the collider design and recent work on the collider lattice.
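The few-millisecond requirement can be made quantitative with a one-line survival estimate (a sketch with assumed ramp parameters, not a machine design): for a linear energy ramp $E(t)$, the surviving fraction is $\exp[-\int dt\, m_\mu c^2/(\tau_0 E(t))]$.

import math

M_MU = 0.1057    # muon mass [GeV]
TAU0 = 2.197e-6  # muon rest-frame lifetime [s]

def survival(e0, e1, t_ramp):
    """Surviving muon fraction for a linear ramp e0 -> e1 [GeV] over t_ramp [s]."""
    # exp(-(m/tau0) * integral dt / E(t)), with the integral done analytically
    return math.exp(-(M_MU / TAU0) * t_ramp * math.log(e1 / e0) / (e1 - e0))

# Assumed example: 0.3 GeV -> 5 TeV in 5 ms keeps ~63% of the muons
print(f"{survival(0.3, 5000.0, 5e-3):.2f}")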
Muon colliders offer an exciting opportunity for high energy exploration, but the rapidly decaying beam causes challenges throughout the system. This talk will focus on detector design and machine-detector interface optimization, presenting recent developments targeting a 10 TeV collider as well as outlook for the future.
Unravelling the mystery of neutrino masses is one of the top priorities in particle physics, and tremendous model-building efforts have been devoted to exploring new physics beyond the Standard Model (BSM) to address this puzzle. In this work, we consider a simple extension of the Standard Model (SM): a class of models called the Two-Higgs Doublet Model with Lepton number (2HDML), where a second Higgs doublet and a right-handed neutrino/heavy neutral lepton, both carrying lepton number, are introduced. With the breaking of lepton number at the electroweak scale, the right-handed neutrino/heavy neutral lepton is naturally light, and nonzero neutrino masses are generated via the Type-1 seesaw mechanism. This talk presents the construction and phenomenology of the 2HDML and discusses constraints on the 2HDM parameter space, such as those derived from CP-even Higgs exotic decays.
We investigate the possibility of neutrinoless double beta decay $(0\nu\beta\beta)$ and leptogenesis within a low-scale seesaw mechanism with additional sterile neutrinos. General effective field theory (EFT) considerations suggest that if there are experimentally observable signatures in $0\nu\beta\beta$ decay and the lepton asymmetry is generated by the right-handed neutrinos, low-scale leptogenesis is likely to be unviable. However, in this work we show that in the context of low-scale resonant leptogenesis, one can obtain the observed BAU together with observable signatures in $0\nu\beta\beta$ decay in the presence of additional sterile neutrinos. In this framework, the neutrino masses are naturally suppressed by the extended seesaw parameter $\mu$, rather than by the small Yukawa couplings introduced in other leptogenesis scenarios. This can lead to observable experimental signatures in $0\nu\beta\beta$ decay and charged lepton flavor violation (cLFV) as well as large washout effects. The resonant leptogenesis mechanism with light neutrino masses can overcome the latter, even in the presence of experimentally accessible $0\nu\beta\beta$-decay and cLFV signatures. We show that the KamLAND-Zen experiment is sensitive to keV-MeV scale sterile neutrino masses, and that future ton-scale experiments offer potential signals while maintaining viable leptogenesis.
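Schematically, the suppression at work is that of an inverse/extended seesaw: with Dirac mass $m_D$, heavy scale $M$, and lepton-number-violating parameter $\mu$, the light-neutrino mass matrix takes the form
\[
m_\nu \;\simeq\; m_D\, M^{-1}\, \mu \,\left(M^{T}\right)^{-1} m_D^{T},
\]
so $m_\nu$ is controlled by a small $\mu$ rather than by tiny Yukawa couplings, leaving $m_D$ large enough for observable $0\nu\beta\beta$ and cLFV signatures (the detailed structure in the work above may differ).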
We derive limits on the decay and annihilation of very heavy dark matter (VHDM) particles in the mass range $10^{9}-10^{16}$ GeV using the projected neutrino flux sensitivities of future generations of neutrino telescopes, such as GRAND and the IceCube-Gen2 radio upgrade. Particularly interesting constraints are obtained from a future lunar ultralong wavelength (ULW) radio telescope, which aims to detect the radio pulse originating in the interaction of ultrahigh-energy neutrinos (UHE$\nu$) with the lunar regolith. Limits from terrestrial detectors provide constraints up to a few times $10^{13}$ GeV, beyond which measurements by the ULW telescope become important. The ULW energy range at $\gtrsim10^{13}$ GeV is free from any astrophysical background, providing the best limits on VHDM decay and annihilation.
Using the s-wave unitarity constraint on a general Type-1 Seesaw Model, we investigate the effects that unitarity bounds place on a general massive neutrino mixing angle, compare these new constraints against the typical analytical Type-1 mixing, and comment on how these constraints affect the available phase space for massive neutrino searches.
In the present work, I will discuss the so-called non-unitary effects in the neutrino mixing matrix that appear when we add more massive neutrino states. In the context of the first detection at FASER$\nu$, I studied the sensitivity to non-unitary parameters in FASER$\nu$ and FASER$\nu$2. Other phenomenology related to non-unitarity will also be discussed. This work is based on arXiv:2309.00116.
In the context of the left-right symmetric model, we study the interplay of neutrinoless double beta ($0\nu\beta\beta$) decay, parity-violating Møller scattering, and high-energy colliders, resulting from the Yukawa interaction of the right-handed doubly-charged scalar with electrons, which could evade the severe constraints from charged lepton flavor violation. The $0\nu\beta\beta$-decay half-life, calculated in the effective field theory (EFT) framework, allows for an improved description of the contributions with non-zero left-right mixing and a light right-handed neutrino.
We find that the sensitivities of the low-energy (or high-precision) and high-energy experiments are complementary to each other. The reach of parity-violating Møller scattering in the MOLLER experiment is stronger than those of future ton-scale $0\nu\beta\beta$-decay for TeV scale right-handed neutrino if the left-right mixing is negligible. On the other hand, for a non-zero left-right mixing, the constraints set by the MOLLER experiment become complementary to future ton-scale $0\nu\beta\beta$-decay experiments as well as direct searches and precision measurements at high-energy colliders.
The discovery of the neutral Higgs boson of mass 125 GeV by the ATLAS and CMS experiments in 2012 has prompted further discussions on whether extensions of the Standard Model (SM) scalar sector exist beyond the observed SM doublet. The two-Higgs-doublet model (2HDM) is one such extension predicting additional doublets. This model is supported by Supersymmetry and could provide the CP violation needed to explain the observed baryon asymmetry of the Universe. It predicts a pair of charged Higgs bosons that decay predominantly to top and bottom quarks for large charged Higgs masses. Similar final states were addressed by a recent ATLAS search for hypothetical dark mesons using the full Run 2 $pp$ collision dataset at $\sqrt{s}=$ 13 TeV. These dark mesons could emerge from an SU(2) dark-flavour-symmetry-conserving model analogous to SM Quantum Chromodynamics, and may decay back to SM fermions or gauge bosons. That analysis studied decays of dark mesons to top and bottom quarks, which subsequently decay to fully hadronic or single-lepton final states. The similarity of the decay products of the dark mesons and the charged Higgs makes the dark meson search potentially sensitive to a charged Higgs signal. This talk will explore the feasibility of searching for charged Higgs signals using the existing dark meson analysis.
The Standard Model provides the best description of the known fundamental particles and their interactions to date. However, observed excesses of tau leptons show a tension between Standard Model predictions and data. This tension can be understood in the context of a 2HDM. In this talk, I will show recent results of the model-independent search for charged Higgs bosons via $H^{\pm}\rightarrow \tau\nu$ in ATLAS using the full Run 2 dataset.
Energy correlators, jet-substructure observables that measure correlations between energy detectors (calorimeters) in a collider experiment, have received significant attention over the last few years in both the theory/phenomenology and experimental communities. This success has prompted investigations into how energy correlators can be further used, such as in the study of both hot and cold nuclear matter, as well as to gain access to particles with particular quantum numbers. This requires “building” new detectors which are sensitive to more than just particle energy. In this talk, we will discuss this larger space of detectors, including specific examples such as detectors sensitive to arbitrary powers of energy, as well as ones sensitive to a global U(1) charge. Beyond their construction, we will also discuss the renormalization of these objects and highlight some ongoing experimental efforts which utilize these observables.
Understanding the behaviour of heavy quarks is important for painting a coherent picture of QCD, both formally and phenomenologically, and the upcoming runs at the LHC will provide unprecedented statistics for precision measurements related to heavy flavor. Natural objects for initiating these studies are Energy and Charge Correlators, which measure correlations of energy flow, along with the flow of other intrinsic quantum numbers, at collider experiments. These observables fall into a broader class of so-called “jet substructure” observables which have been successful in broadening our understanding of fundamental physics and QCD. The aforementioned correlators are distinguished by their ability to resolve the scales associated with heavy quarks along with those of confinement. In this talk, I will introduce a variety of new correlator-based observables, specifically the two- and three-point heavy energy correlators, along with their charged analogs. These observables provide new insights into jet substructure, specifically allowing direct access to hadronization and intrinsic mass effects before confinement. This opens the door to a new class of precision heavy-flavor measurements at the LHC and beyond.
We introduce “power jets,” a scheme that uses the fully correlated information of the QCD power spectrum to go beyond conventional, sequential jet clustering algorithms. This affords a kinematic reconstruction that can accurately probe the underlying hard physics of an event, even in the presence of high pileup and subject to finite sampling.
In collider physics, the properties of hadronic jets are often measured as a function of their lab-frame momenta. However, jet fragmentation must occur in the particular rest frame of all color-connected particles. Since this frame need not be the lab frame, the fragmentation of a jet depends on the properties of its sibling objects. This non-factorizability of jets has consequences for jet techniques such as jet tagging, boosted boson measurements, and searches for physics Beyond the Standard Model. In this talk, we will describe the effect and show its impact as predicted by simulation.
Based largely on https://arxiv.org/abs/2308.10951
The calibration of the energy scale and resolution of jets, the collimated sprays of particles initiated by quarks and gluons, is important for many precision measurements and searches for physics beyond the standard model at the Large Hadron Collider (LHC). Currently within ATLAS, a series of calibrations is required to correct jets for effects of pileup and detector response. This results in several (often large) corrections with a loss of correlations between the steps and artificial constraints. ATLAS is exploring new approaches for jet calibration based on machine learning (ML) that can, in principle, perform many of the corrections in one step and address the limitations of the current approach. This is particularly relevant for developing jet calibrations for physics performance studies at the future High Luminosity LHC, where there will be 3-4 times more pileup from additional pp collisions. In this talk, a ML-based approach to jet calibration will be presented using simulated samples of the upgraded ATLAS detector at the HL-LHC. Data formatting procedures, network structure/modifications, metrics for performance, and future extensions will be discussed.
In October 2022, gamma-ray telescopes observed an extremely bright gamma-ray burst, GRB221009A. This event was quickly heralded as the brightest GRB of all time (BOAT) by several metrics. Followup searches for neutrino emission were also performed with the IceCube detector. In this talk, I will present the results of an analysis searching for low-energy antineutrino emission from GRB221009A in the KamLAND neutrino detector. KamLAND provides unique sensitivity to electron antineutrinos between 1.8 and 500 MeV, enabling multimessenger searches at these energies. For various time windows surrounding GRB221009A, we search for antineutrinos coincident with the GRB. No significant antineutrino excesses were observed, but assuming different source emission spectra, we place upper limits on the neutrino flux from this unique astrophysical event, and compare these results with IceCube’s analysis.
Sterile neutrinos constitute one of the simplest solutions to explain the origin of neutrino masses. They can easily be produced in the hot and dense core of a core-collapse supernova (SN). First, I'll revisit the SN1987A cooling bounds for the dipole portal using the integrated-luminosity method, which yields more reliable results than the emissivity-loss criterion. I'll then discuss a novel bound on the sterile neutrino parameter space arising from energy deposition and the observed population of underluminous SNe IIP.
We constrain the interaction cross-section between neutrinos and dark matter using the inferred dark matter density profiles of Milky Way dwarf spheroidal galaxies. Assuming $\Lambda$CDM (DM is cold, collisionless, no self-interactions), energy injection into the dark matter sub-halo is needed to transform an initially cusped profile into a cored profile. Using estimates of the core sizes from stellar kinematics, we find an upper limit to the energy injection such that the core sizes do not become too large in comparison to observations assuming there are no other sources of feedback. Under assumptions of the interaction kinematics, this energy injection limit can be turned into a cross-section upper limit. Consideration of other sources of energy injection, e.g., baryonic feedback or host galaxy effects, on the dark matter profile can strengthen this constraint.
The exploration of dark sector mediators by gravitational waves from binary inspirals has been a subject of recent interest. Dark mediators typically generate a Yukawa-like potential that either directly impacts the orbital decay through dipole radiation or indirectly through altering the effective gravitational constant. However, with a rescaling of the binary component masses, the additional Yukawa term becomes indistinguishable from pure gravity for light mediators. Although probing ultralight mediators with binary inspirals is challenging, Extreme Mass Ratio Inspirals (EMRIs) provide proof of principle that advancements may be achieved in this area through multimessenger astronomy. The mass of the supermassive black hole (SMBH) can be precisely determined with Spectroscopic Reverberation Mapping or extremely large mass-ratio inspirals (XMRIs), where the interaction is purely gravitational. Once the mass of the supermassive black hole is determined, an EMRI signal can be used to study the dark forces between two black holes. We find that such a system would be sensitive to mediators with masses between $10^{-16}$ and $10^{-18}$ eV.
Exploring the nexus between macroscopic planetary science and Beyond Standard Model (BSM) physics offers avenues to search for novel particle signatures. One such connection involves investigating deviations of celestial object motion from well-established theories of gravity. These deviations are attributed to the influence of long-range forces mediated by new ultralight particles, referred to as a fifth force, which can be modeled phenomenologically as a Yukawa correction to the gravitational potential. Previously, fifth forces have been constrained via the timing of pulsars and the orbital precession of various astronomical objects. In this talk, we will discuss the potential to constrain fifth-force parameters using orbital trajectory data from Juno, a NASA space probe orbiting Jupiter.
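Both of the preceding abstracts parametrize the new interaction in the same standard way, as a Yukawa correction to the Newtonian potential,
\[
V(r) \;=\; -\,\frac{G\, m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right), \qquad \lambda = \frac{\hbar}{m_\phi c},
\]
with dimensionless strength $\alpha$ and range $\lambda$ set by the mediator mass $m_\phi$; orbital data then constrain the $(\alpha, \lambda)$ plane.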
Hawking radiation emitted by a black hole is typically modified in the presence of new degrees of freedom beyond the Standard Model. In this talk I will discuss the characteristics of a hypothetical observation of a black hole in its final minutes of evaporation by current and upcoming Very/Ultra High Energy Gamma Ray telescopes, such as HAWC, LHAASO, and CTA. I will then discuss the potential for multi-messenger signals by the first and second generations of IceCube, and KM3NET. We typically predict sensitivity to dark sectors with order 10 new Dirac fermions up to mass scales of hundreds of TeV.
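For orientation, the Standard Model scalings for the temperature and remaining lifetime of an evaporating black hole can be sketched in a few lines (fiducial normalizations only; the point of the talk is that extra dark-sector states raise the emitted power and shorten the lifetime):

def hawking_temp_gev(m_grams):
    # Standard result: T_H ~ 1 GeV for a 1e13 g black hole, scaling as 1/M.
    return 1.06 * 1e13 / m_grams

def lifetime_years(m_grams):
    # Lifetime scales as M^3, normalized so ~5e14 g evaporates in ~13.8 Gyr
    # (Standard Model degrees of freedom only).
    return 13.8e9 * (m_grams / 5e14) ** 3

print(f"T_H  = {hawking_temp_gev(1e10):.0f} GeV")          # ~1000 GeV emission
print(f"left = {lifetime_years(1e10) * 8760:.1f} hours")   # ~1 hour to go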
It has recently been realized that many extensions of the Standard Model give rise to cosmological histories exhibiting extended epochs of cosmological stasis — epochs wherein the abundances of multiple energy components (such as matter, radiation, or vacuum energy) remain effectively constant despite cosmological expansion. In this talk, I shall discuss a novel realization of stasis involving a collection of scalar fields, each of which dynamically transitions from a period of slow roll to a period of rapid oscillation around its potential minimum as the universe expands. As I shall demonstrate, not only does cosmological stasis arise in such scenarios, but unlike in previous model realizations of this phenomenon, one finds that many properties of the stasis depend non-trivially on the initial conditions. For example, in the presence of an additional cosmological energy component, the system exhibits a tracking behavior wherein the effective equation of state for the universe as a whole evolves toward the equation of state of this energy component. The emergence of such tracking behavior has potential model-building implications in the context of dark-energy and cosmic-inflation scenarios.
Cosmological stasis is a phenomenon in which multiple energy components in the universe (such as matter, radiation, or vacuum energy) maintain constant abundances despite cosmological expansion. Such epochs have recently been shown to arise naturally in cosmologies associated with numerous extensions of the Standard Model, and can persist across many e-folds of expansion. In this talk, I describe how the evolution of perturbations in the matter and radiation densities is affected by the presence of a matter/radiation stasis epoch within the cosmological timeline. I also discuss the resulting implications for structure on small scales.
Recently, it has been shown that there can exist a type of cosmological epoch in which the abundances of different energy components remain essentially fixed for an extended period. This phenomenon, which is known as cosmic stasis, has been shown to arise in a variety of BSM contexts. In all previous realizations of stasis, however, the sustained transfer of energy density between energy components which underpins stasis has always hinged on the presence of a tower of states. In this talk, by contrast, I shall present a realization of stasis in which this sustained transfer of energy density arises not from the presence of such a tower, but rather from thermal effects stemming from the annihilation of a single particle species. I then present a QFT Lagrangian in which this stasis can be realized and show that a significant number of e-folds of stasis can be achieved in this model.
Cosmological collider physics, a mechanism in which heavy particles produced during inflation leave an observable footprint in primordial non-Gaussianities, carries the prospect of probing physics at scales far higher than any terrestrial collider. Supersymmetric grand unified theories (SUSY GUTs) are a highly motivated target, but the high unification scale is orders of magnitude above the reach of cosmological collider physics. We focus on the extra-dimensional “orbifold” SUSY GUTs because they solve the doublet-triplet splitting problem and suppress proton decay. Utilizing the “chemical potential” generalisation of cosmological collider physics to extend its reach, we show that the heavy gauge bosons from the broken GUT generators can produce signals potentially observable in large-scale structure surveys and 21-cm cosmology.
String theory axions can naturally form a stable string-domain wall network. The later collapse of the domain walls produces more than one type of axion mass eigenstate, in addition to gravitational waves.
We propose a solution to the strong CP problem that specifically relies on massless quarks and has no light axion. The QCD color group $SU(3)_c$ is embedded into a larger, simple gauge group (grand-color) where one of the massless, colored fermions enjoys an anomalous chiral symmetry, rendering the strong CP phase unphysical. The grand-color gauge group $G_{\rm GC}$ is Higgsed down to $SU(3)_c\times G_{c'}$, after which $G_{c'}$ eventually confines at lower energy, spontaneously breaking the chiral symmetry and generating a real, positive mass to the massless, colored fermion. Since the chiral symmetry has a $G_{c'}$ anomaly, there is no corresponding light Nambu-Goldstone boson. Potential experimental signals of our mechanism include vector-like quarks, pseudo Nambu-Goldstone bosons, light dark matter decay, and primordial gravitational waves from the new strong dynamics.
Over ten years ago, Fermi observed an excess of GeV gamma rays from the Galactic Center whose origin is still under debate. One explanation for this excess involves annihilating dark matter; another requires an unresolved population of millisecond pulsars concentrated at the Galactic Center. We use the results from LIGO/Virgo's most recent all-sky search for quasi-monochromatic, persistent gravitational-wave signals from isolated neutron stars to determine whether unresolved millisecond pulsars could actually explain this excess. Based on null results from the O3 Frequency-Hough all-sky search for continuous gravitational waves, we find that a large set of the parameter space in the pulsar luminosity function can be excluded.
A wide variety of celestial bodies have been considered as dark matter detectors. Which stands the best chance of delivering the discovery of dark matter? Which is the most powerful dark matter detector? We investigate a range of objects, including the Sun, Earth, Jupiter, Brown Dwarfs, White Dwarfs, Neutron Stars, Stellar populations, and Exoplanets. We quantify how different objects are optimal dark matter detectors in different regimes by deconstructing some of the in-built assumptions in these sensitivities, including observation potential and particle model assumptions. We show how different objects can be expected to deliver corroborating signals. We discuss different search strategies, their opportunities and limitations, and the interplay of regimes where different celestial objects are optimal dark matter detectors.
Indirect dark matter experiments probe dark matter properties by searching for the products or other observables that result from interactions, rather than measuring dark matter directly. Here we consider a two-component dark matter model where observable indirect signals are produced from lightly boosted dark matter particles produced from a more traditional dark matter candidate. In this model, additional signal dependencies arise based on galactic size which can help alleviate the developing tension between the galactic center excess and dwarf galaxy measurements.
Dark kinetic heating of neutron stars has been previously studied as a promising dark matter detection avenue. Kinetic heating can occur when dark matter is sped up to relativistic speeds in the strong gravitational well of high escape velocity objects, and deposits this kinetic energy after becoming captured by the object, thereby increasing its temperature. We show that dark kinetic heating can occur even in objects with low escape velocities, such as exoplanets and brown dwarfs, increasing the discovery potential of such searches. This can occur if there are long-range forces present in the dark sector, which increase the escape velocity of these objects, and can lead to heating rates substantially larger than those expected from neutron stars. We demonstrate existing sensitivity to this scenario using Wide-field Infrared Survey Explorer data on the local brown dwarf WISE 0855-0714, and map out future sensitivity to the dark matter scattering cross section below $10^{-40}$ cm$^2$. We compare dark kinetic heating rates of other lower escape velocity objects such as the Earth, Sun, and white dwarfs, finding complementary kinetic heating signals are possible depending on particle physics parameters.
We show that Milky Way white dwarfs are excellent targets for dark matter (DM) detection. Using Fermi and H.E.S.S. Galactic center gamma-ray data, we investigate sensitivity to DM annihilating within white dwarfs into long-lived or boosted mediators and producing detectable gamma rays. Depending on the Galactic DM distribution, we set new constraints on the spin-independent scattering cross section down to $10^{-45}-10^{-41}$ cm$^2$ in the sub-GeV DM mass range, which is multiple orders of magnitude stronger than existing limits. For a generalized NFW DM profile, we find that our white dwarf constraints exceed spin-independent direct detection limits across most of the sub-GeV to multi-TeV DM mass range, achieving sensitivities as low as about $10^{-46}$ cm$^2$. In addition, we improve earlier versions of the DM capture calculation in white dwarfs, by including the low-temperature distribution of nuclei when the white dwarf approaches crystallization. This yields smaller capture rates than previously calculated by a factor of a few up to two orders of magnitude, depending on white dwarf size and the astrophysical system.
The presence of asymmetric dark matter (ADM) in neutron star interiors has been shown to affect the global properties of neutron stars, namely their masses and radii. Since the neutron star interior is poorly understood, the most conservative approach to a Bayesian analysis of their interiors is to allow all equation of state (EoS) parameters to vary. In this work, we use synthetic neutron star mass-radius measurements to infer the possible constraints on bosonic ADM cores, i.e., the spatial regime where bosonic ADM has accumulated in the interior of neutron stars. We find that ADM cannot be excluded, and the inclusion of bosonic ADM in neutron star cores relaxes the constraints on the baryonic EoS. If the baryonic EoS were more tightly constrained independent of ADM, we find that statements about the ADM EoS parameter space could be made.
The forward-backward asymmetry in Drell–Yan production and the effective leptonic weak mixing angle are measured using a sample of proton-proton collisions at $\sqrt{s}$ = 13 TeV collected by the CMS experiment, corresponding to an integrated luminosity of 137 fb$^{-1}$. The measurement uses both dimuon and dielectron events, and is performed as a function of the dilepton mass and rapidity. The measured value agrees with the standard model prediction. The total uncertainty using the CT18 PDF is 0.00031. This is the most precise $\sin^2\theta^{\ell}_{\mathrm{eff}}$ measurement at a hadron collider, with a precision comparable to the results obtained at LEP and SLD.
In the standard model of particle physics, the spontaneous symmetry breaking of the complex Higgs field gives rise to the massive Higgs boson and three Goldstone bosons. These Goldstone bosons give the longitudinal degree of freedom to the W and Z bosons. This analysis studies diboson polarization states, in a phase space where the longitudinal-longitudinal contribution is enhanced, with $WZ$ production from proton-proton collision in the ATLAS experiment of the Large Hadron Collider at $\sqrt{s} = 13$ TeV. The dominant contribution of both bosons being transversely polarized nearly vanishes at tree-level if the bosons are produced centrally, which effectively enhances the longitudinal-longitudinal $WZ$ contribution. As high jet multiplicity skews this Radiation Amplitude Zero (RAZ) effect, only events with lower $p_T^{WZ}$ ($<20, 40, 70$ GeV) are selected. We measure RAZ as the depth in the central region of the distributions of the rapidity differences between the $W$ lepton and the $Z$ boson and between the $W$ boson and the $Z$ boson. A high $p_T^Z$ cut also enhances the $W_0Z_0$ contribution. A BDT variable is trained to distinguish different diboson polarization states in two high $p_T^Z$ exclusive regions: $100 < p_T^Z \leq 200$ GeV and $p_T^Z > 200$ GeV. A maximum log-likelihood fit is then executed, yielding an observation of a non-zero longitudinal-longitudinal polarization fraction ($f_{00}$). Notably, this analysis marks the first observations of the Radiation Amplitude Zero Effect and of the longitudinal-longitudinal $WZ$ production in the high-$p_T^Z$ phase space.
We propose an analysis to measure the branching fraction of the Z boson decaying to $b\bar{b}b\bar{b}$ at the CMS detector. This quantity was previously measured by the LEP experiments to an uncertainty of about $36\%$ but has not yet been measured at the LHC; such a measurement would be a high-precision test of QCD involving $b$-quarks. The rarity of this decay, about $4\times10^{-4}$, and the multiplicity of decay products make this measurement difficult. We show that the best prospect for this analysis selects events with a boosted Z boson which produces two jets, one of which contains multiple tagged $b$-quarks. Requiring this multi-$b$-tagged jet strongly decreases the background events due to QCD interactions, though such events are still the largest background. We propose several ways in which these backgrounds can be further reduced, outline a proposed analysis strategy, and present expected sensitivities for $\mathrm{pp}$ collisions at $\sqrt{s} = 13 ~\mathrm{TeV}$ using an integrated luminosity of $138 ~\mathrm{fb}^{-1}$.
Diboson production in association with jets is studied in the fully leptonic final states, $pp \to (Z/\gamma^*)(Z/\gamma^*) \to 2\ell 2\ell'$ ($\ell, \ell' = e$ or $\mu$), in proton-proton collisions at a center-of-mass energy of 13 TeV. The data sample corresponds to an integrated luminosity of 138 fb$^{-1}$ collected with the CMS detector at the LHC. Differential distributions and normalized differential cross sections are measured as a function of jet multiplicity, transverse momentum $p_\mathrm{T}$, pseudorapidity $\eta$, invariant mass, and $\Delta\eta$ of the highest-$p_\mathrm{T}$ and second-highest-$p_\mathrm{T}$ jets, and as a function of the invariant mass of the four-lepton system for events with various jet multiplicities. These differential cross sections are compared with theoretical predictions that mostly agree with the experimental data. However, in a few regions we observe discrepancies between the predicted and measured values. These measurements demonstrate the necessity of better Monte Carlo modeling of events with complex multiboson final states and extra jets. Further improvement of the predictions is required to describe ZZ+jets production in the whole phase space.
We investigate the exotic $W$ boson decay channel $W \rightarrow \ell\ell\ell\nu$ at the LHC. Although the branching ratio is suppressed by the four-body final state, the large abundance of produced $W$ bosons compensates. After enumerating the signal and all classes of background, a Deep Neural Network (DNN) is exploited for optimization. The results indicate that this tiny branching ratio can be measured with sub-percent precision. This decay channel can also be applied to constrain BSM models. With $L_\mu - L_\tau$ as the benchmark model, we find that the current bound on the gauge coupling for a $Z'$ mass in the range $[5,75]$ GeV can be improved by around one order of magnitude.
We present results for the NLO QED correction to the neutral-current Drell-Yan process using the jettiness subtraction method.
The jettiness subtraction method utilizes Soft-Collinear Effective Theory (SCET) to construct the factorization theorem and the ingredients relevant for precision calculations of various processes. While jettiness subtraction was originally developed for QCD corrections, here we apply the method to QED calculations. We will discuss the adjustments needed and the challenges in using this method for QED calculations.
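In outline, the method slices the cross section on the $N$-jettiness resolution variable $\tau_N$:
\[
\sigma \;=\; \int_{0}^{\tau^{\rm cut}} d\tau_N\, \frac{d\sigma}{d\tau_N} \;+\; \int_{\tau^{\rm cut}} d\tau_N\, \frac{d\sigma}{d\tau_N},
\qquad
\left.\frac{d\sigma}{d\tau_N}\right|_{\tau_N \to 0} \;\sim\; H \otimes B \otimes B \otimes S \otimes \prod_i J_i .
\]
Below the cut, the singular behavior is computed from the SCET factorization theorem (hard, beam, soft, and jet functions); above the cut, one uses an ordinary lower-order calculation with one additional resolved emission. The QED application requires these ingredients to be recomputed for photon rather than gluon radiation.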
We explore a variety of composite topological structures that arise from the spontaneous breaking of $SO(10)$ to $SU(3)_c \times U(1)_{\rm em}$ via one of its maximal subgroups $SU(5) \times U(1)_\chi$, $SU(4)_c \times SU(2)_L \times SU(2)_R$, and $SU(5) \times U(1)_X$ (also known as flipped $SU(5)$). They include i) a network of $\mathbb{Z}$ strings which develop monopoles and turn into necklaces with the structure of $\mathbb{Z}_2$ strings, ii) dumbbells connecting two different types of monopoles, or monopoles and antimonopoles, iii) starfish-like configurations, iv) polypole configurations, and v) walls bounded by a necklace. We display these structures both before and after electroweak breaking. The appearance of these composite structures in the early universe and their astrophysical implications, including gravitational wave emission, depend on the symmetry breaking patterns and scales, and on the nature of the associated phase transitions.
In my recent publication, Quasiclassical solutions for static quantum black holes, we derive novel nonlocal effects near the horizon of a quantum-corrected black hole. In this talk, I will outline two follow-up papers nearing submission. The first reinterprets this model as a quantum superposition of classical black hole spacetimes with a Gaussian distribution of varying mass, broadening the applicability of the model. The second investigates the volume inside the horizon of a related quantum-corrected black hole, with implications for information content and storage. Future work includes potential quasinormal-mode gravitational wave predictions. These canonical-gravity approaches to constructing and exploring black hole models with quantum corrections are moving in a promising direction towards a theory of quantum gravity, and present a fertile area for further study.
Primordial black holes (PBHs) are plausible dark matter candidates that formed from the gravitational collapse of primordial density fluctuations. Current observational constraints allow asteroid-mass PBHs to account for all of the cosmological dark matter. We show that, through elastic scattering of a cosmological gravitational wave background, these black holes generate spectral distortions of the background of 0.3% at cosmologically relevant frequencies without coherent scattering, and 5% when the coherent enhancement is included. Scattering from stellar objects induces much smaller distortions. The detectability of this signal depends on our ultimate understanding of the unperturbed background spectrum.
A new form of quasiclassical space-time dynamics for constrained systems reveals how quantum effects can be derived systematically from canonical quantization of gravitational systems. These quasiclassical methods lead to additional fields, representing quantum fluctuations and higher moments, that are coupled to the classical metric components. The new fields describe nonadiabatic quantum dynamics and can be interpreted as implicit formulations of nonlocal quantum corrections in a field theory. This field-theory aspect is studied here for the first time, applied to a gravitational system for which a tractable model is constructed. Static solutions for the relevant fields can be obtained in almost closed form. They reveal new properties of potential near-horizon and asymptotic effects in canonical quantum gravity and demonstrate the overall consistency of the formalism.
We investigate the effects of a non-adiabatic change in the temperature of the thermal bath on the phase transition and its gravitational wave signature. Our preliminary results show that it is possible to obtain gravitational waves with shifted frequencies due to the thermal kick to the bath in the early universe. This is a novel result and is ubiquitous in scenarios with non-instantaneous reheating, perturbation reentry leading to PBH formation, and so on.
In this talk we explore a new paradigm in which we establish the complementarity between stochastic gravitational waves and the discovery of heavy neutral leptons at colliders.
The ATLAS detector will be upgraded to cope with the challenging new conditions at the HL-LHC. The upgrades will include extended geometric coverage and finer detector resolution. The success of the research programs at the HL-LHC will rely strongly on tracking performance. Reconstructing individual particles in the HL-LHC collision environment, with thousands of charged particles produced within a few cm, will be very challenging. The entire tracking system, presently consisting of pixel and strip detectors and the transition radiation tracker, will be replaced by a new all-silicon pixel and strip tracker. This excellent tracking detector will enable full exploitation of the physics potential of the LHC dataset in the HL-LHC era. Argonne National Laboratory (ANL) is tasked with the testing and assembly of the inner tracker (ITk) pixel Layer-1 quad modules for the ATLAS detector upgrade. In the next 2-3 years, approximately 1200 silicon pixel modules will be assembled and tested at the laboratory. This presentation discusses the readiness of the preproduction and production of the ITk pixel Layer-1 quad modules, highlighting the innovative test setup at ANL designed to handle the fast production rate. Finally, a comprehensive and meticulously planned assembly and testing procedure is presented.
The ATLAS experiment is currently preparing for the High Luminosity LHC era, scheduled to begin in a few years' time with the start of Run 4. ATLAS will be upgraded to support at least 200 simultaneous proton-proton interactions per bunch crossing. As part of these upgrades, the trigger system is being upgraded to support a tenfold increase in readout rate and, for the first time, a dedicated tracking subsystem as part of the second-stage Event Filter trigger. The Event Filter will receive data from the entire detector at 1 MHz and need to output events at a rate of 10 kHz; given the high pileup conditions, efficiently reconstructing tracks and vertices can provide a major improvement in determining whether to accept or reject events of potential interest. A number of possible Event Filter tracking designs are currently under study, with a final decision on the system architecture expected by next year. Due to power and latency concerns, there is significant interest in "accelerator" options, in which an FPGA (or GPU) serves as a tracking co-processor for the CPU-based Event Filter cluster. In this talk, I will discuss some of the ongoing studies towards FPGA-based EF tracking solutions, with a particular focus on ways to efficiently reject fake tracks on the FPGA itself, such as fast linearized fits or neural-network-based methods.
An upgraded all-silicon Inner Tracker (ITk) is under construction for the HL-LHC upgrade of the ATLAS detector. This new detector system will be required to maintain and improve tracking performance and vertex reconstruction in the high-pileup environment and to handle the increased radiation expected at the HL-LHC. The ITk comprises silicon strip and silicon pixel detectors. The US is responsible for building the two innermost layers of the silicon pixel detector, called the Inner System. This system consists of a barrel section and two endcaps, which together have approximately 1 billion pixels. We present an overview of the Inner System and the challenges involved in its assembly. This presentation will discuss specific solutions that have been developed to enable successful production. It will also discuss the ongoing quality assurance tests being performed during preproduction and the development of rigorous quality control tests that will be set up for production. The successful production and assembly of the Inner System are essential to facilitate the tracking and vertex reconstruction required for the physics studies of the ATLAS experiment at the HL-LHC.
The Large Hadron Collider (LHC) collides bunches of protons spaced 25 ns apart at a center-of-mass energy of 13.6 TeV, producing a bunch-crossing rate of 40 MHz. This generates about a petabyte of information every second, far too much data to feasibly save for offline analysis. To increase the chances of saving interesting physics events, the ATLAS detector implements a two-tiered trigger system designed to reduce the rate of accepted events that are stored for later analysis. The first tier of this trigger system is the Level 1 hardware trigger, which reduces the initial 40 MHz rate down to 100 kHz at a latency of 2.5 μs. For Run 3 of the LHC, upgrades were made to the Level 1 trigger, including new feature extractors (FEXs) designed to increase sensitivity to key physics channels. In particular, the global feature extractor (gFEX) takes advantage of its single-board architecture to implement algorithms which cover the entire range of the calorimeters. This makes it ideal for identifying large-radius jets, indicative of Lorentz-boosted objects, as well as global quantities of interest such as missing transverse energy and total transverse energy, all of which are key signatures in many beyond-Standard-Model physics scenarios. This talk will cover the algorithms used by gFEX, the work done to validate these algorithms, and gFEX performance so far in ATLAS data-taking during Run 3 of the LHC.
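For orientation, the quoted rates imply the following back-of-the-envelope figures (my arithmetic, not from the talk):

# Rough consistency check of the quoted ATLAS trigger numbers (illustrative only).
bunch_rate_hz = 40e6                 # 25 ns bunch spacing -> 40 MHz
data_rate_bytes = 1e15               # "about a petabyte ... every second"
l1_output_hz = 100e3                 # Level 1 output rate

print(f"implied raw event size ~ {data_rate_bytes / bunch_rate_hz / 1e6:.0f} MB")
print(f"Level 1 rejection factor ~ {bunch_rate_hz / l1_output_hz:.0f}x")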
The CMS Collaboration is preparing to build a new high-granularity end-cap calorimeter, the HGCAL, to be installed as a replacement end-cap calorimeter for the High Luminosity LHC era. We discuss silicon modules that will make up the electromagnetic compartment and a large fraction of the hadronic compartment of HGCAL and delve into the operations involved in assembling and testing these modules at a Module Assembly Center.
The recent detection of neutrinos at the LHC has ushered in a new era of multi-messenger collider physics. The Forward Physics Facility is an underground cavern that will allow the LHC to fully exploit this new capability in the HL-LHC era. The FPF will house several experiments, which will detect thousands of TeV-energy neutrinos each day, with far-reaching implications for neutrino physics, QCD, and astroparticle physics. In addition, the FPF will enhance the LHC's potential to detect new, weakly-interacting particles. In this talk, I will introduce the physics motivations for the FPF and present the latest updates to the FPF's plans and timeline.
FASER is a new experiment at the LHC for Run 3. Located 480 meters from the ATLAS collision point along the beam axis, it is designed to search for neutral, weakly-interacting, long-lived particles beyond the Standard Model and to study high-energy neutrinos of all flavors. FASER's most recent physics results will be discussed, as well as the experiment's future prospects. Studies of FASER2's physics potential have highlighted the fascinating and extensive physics program in the very forward region of LHC collisions and have inspired the proposed Forward Physics Facility (FPF), a brand-new dedicated facility for the HL-LHC era that would host multiple novel experiments, including FASER2. The physics potential of FASER2, together with the current state of technical studies of the detector design, will be presented.
The Forward Physics Facility (FPF) is a proposed program to build an underground cavern with the space and infrastructure to support a suite of far-forward experiments at the Large Hadron Collider during the High Luminosity era (HL-LHC). The Forward Liquid Argon Experiment (FLArE) is a proposed Liquid Argon Time Projection Chamber (LArTPC) experiment designed to detect very high-energy neutrinos and search for dark matter in the FPF, 620 m from the ATLAS interaction point in the far-forward direction, and will collect data during the HL-LHC. With a fiducial mass of 10 tonnes, FLArE will detect millions of neutrinos at the highest energies ever recorded from a human-made source and will also search for dark matter particles with world-leading sensitivity in the MeV to GeV mass range. The LArTPC technology used in FLArE is well studied for neutrino and dark matter experiments; it offers excellent spatial resolution and allows excellent identification of individual particles. In this talk, I will give an overview of the physics reach, preliminary design, and status of the FPF and FLArE.
SND@LHC is a compact stand-alone experiment to perform measurements with neutrinos produced at the LHC in a hitherto unexplored pseudo-rapidity region of $7.2 < \eta < 8.6$, complementary to all the other experiments at the LHC. The experiment is located 480 m downstream of IP1 in the unused TI18 tunnel. The detector is composed of a hybrid system based on an 800 kg target mass of tungsten plates, interleaved with emulsion and electronic trackers, followed downstream by a calorimeter and a muon system. The configuration allows efficiently distinguishing between all three neutrino flavours, opening a unique opportunity to probe the physics of heavy flavour production at the LHC in a region that is not accessible to ATLAS, CMS and LHCb. This region is of particular interest also for future circular colliders and for predictions of very high-energy atmospheric neutrinos. The detector concept is also well suited to searching for Feebly Interacting Particles via signatures of scattering in the detector target. The first phase aims at operating the detector throughout LHC Run 3 to collect a total of $290\;\mathrm{fb}^{-1}$. The experiment has been running successfully during 2022 and 2023 and has published several results.
This talk will highlight recent results and discuss prospects for the High-Luminosity LHC era, at the proposed Forward Physics Facility or in the current location.
The FORMOSA detector at the proposed Forward Physics Facility is a scintillator-based experiment designed to search for signatures of "millicharged particles" produced in the forward region of the LHC. This talk will cover the challenges and impressive sensitivity of the FORMOSA detector, which is expected to extend current limits by over an order of magnitude. A pathfinder experiment, the FORMOSA demonstrator, was installed in the FASER cavern at the LHC in early 2024 and has been collecting collision data. Results from this demonstrator and important implications for the full detector design will be shown.
The IceCube DeepCore detector at the South Pole has been collecting GeV-scale atmospheric neutrino data for the past decade. DeepCore measures atmospheric neutrino oscillations with precision comparable to accelerator-based experiments, while also complementing accelerator measurements by probing longer distance scales and higher energies, peaking above the tau lepton production threshold. In recent years, DeepCore’s measurement of neutrino oscillations has improved significantly due to improvements in background rejection, reconstruction techniques, particle identification, and modeling of systematic uncertainties, in addition to extra years of data.
The IceCube Upgrade, to be deployed in the 2025-2026 Antarctic season, will further improve IceCube’s sensitivity to these parameters. The Upgrade will consist of 7 additional densely-instrumented strings within the DeepCore region, greatly enhancing detector performance for GeV-scale neutrinos. In combination with the existing decade of DeepCore data, the IceCube Upgrade will provide highly competitive sensitivity to atmospheric muon neutrino disappearance, tau neutrino appearance, and the neutrino mass ordering.
T2K (Tokai to Kamioka) is a Japan-based long-baseline neutrino oscillation experiment designed to measure (anti)neutrino flavor oscillations. A neutrino beam peaked around 0.6 GeV is produced in Tokai and directed toward the water Cherenkov detector Super-Kamiokande, which is located 295 km away. A complex of near detectors is located at 280 m and is used to constrain the flux and cross-section uncertainties by measuring the neutrinos before oscillations. Along with improved measurements of the oscillation parameters to which it is most sensitive, T2K has started a campaign to measure the phase $\delta_{CP}$, which can provide a test of the violation or conservation of the CP symmetry in the neutrino sector. The most recent results will be discussed in this talk, along with the future prospects of the experiment.
Decaying sterile neutrinos can mimic $\nu_\mu \to \nu_e$ oscillation signals at neutrino experiments. We revisit this possibility as a solution to the MiniBooNE and LSND puzzles in view of new data from MicroBooNE. Using MicroBooNE's search for an excess of $\nu_e$ in the Booster beam, we derive new limits on the parameter space of models where the sterile neutrino decays via mixing or higher-dimensional operators. To contextualize these limits, we also provide a comprehensive fit to the MiniBooNE neutrino and antineutrino data, including appearance and disappearance channels. We find that MicroBooNE excludes a large portion of the MiniBooNE-preferred parameter space at more than 95% C.L.
The existence of sterile neutrinos can lead to a matter-enhanced resonance that results in a unique disappearance signature for Earth-crossing neutrinos, providing an alternative method for probing the short baseline anomalies. In order to reconcile the tension between appearance and disappearance experiments, decay mechanisms for the heavy sterile mass state have been proposed. In this talk, I will present the results of an improved search for eV-scale unstable sterile neutrinos with a high purity sample of up-going muon neutrinos from 500 GeV to 100 TeV using eleven years of data from the IceCube Neutrino Observatory. This work utilizes an updated event selection compared to previous results, with major improvements to reconstructions, systematic uncertainties, and the inclusion of a DNN-based classifier to separate starting and through-going events. The implications of these results in the context of the global fits will also be discussed.
The Earth acts as a matter potential for relic neutrinos which modifies their index of refraction from vacuum by $\delta\sim10^{-8}$. It has been argued that the refractive effects from this potential should lead to a large $\mathcal O(\sqrt\delta)$ neutrino-antineutrino asymmetry at the surface of the Earth. This result was computed by treating the Earth as flat. In this talk, I revisit this calculation in the context of a perfectly spherical Earth. I demonstrate, both numerically and through analytic arguments, that the flat-Earth result is only recovered under the condition $\delta^{3/2}kR\gg1$, where $k$ is the typical momentum of the relic neutrinos and $R$ is the radius of the Earth. This condition is required to prevent antineutrinos from tunneling into classically inaccessible trajectories below the Earth's surface and washing away the large asymmetry. As the physical parameters of the Earth do not satisfy this condition, I find that the asymmetry at the surface should only be $\mathcal O(\delta)$. While the asphericity of the Earth may serve as a loophole to my conclusions, I argue that it is still difficult to generate a large asymmetry even in the presence of local terrain.
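As an illustrative numerical check of this condition, with round numbers I have assumed ($k \sim T_\nu$, $R$ the Earth's radius, and $\hbar c \approx 1.97\times10^{-7}\,\mathrm{eV\,m}$ to convert units):

\begin{align*}
k &\sim T_\nu \approx 1.7\times10^{-4}\,\mathrm{eV}, \qquad
R \approx \frac{6.4\times10^{6}\,\mathrm{m}}{1.97\times10^{-7}\,\mathrm{eV\,m}} \approx 3.2\times10^{13}\,\mathrm{eV}^{-1},\\
\delta^{3/2}kR &\approx \left(10^{-8}\right)^{3/2}\left(1.7\times10^{-4}\,\mathrm{eV}\right)\left(3.2\times10^{13}\,\mathrm{eV}^{-1}\right) \approx 5\times10^{-3} \ll 1,
\end{align*}

consistent with the statement that the Earth's physical parameters do not satisfy the flat-Earth condition.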
Nelson-Barr models, which assume that CP is a spontaneously broken symmetry of nature, are a well-known solution to the strong CP problem with no new light degrees of freedom. Nevertheless, the spontaneous breaking of CP has dramatic implications in cosmology. It was recently shown that domain walls which form from this spontaneous breaking are exactly stable and must therefore be inflated away. Combined with the "quality problem", which sets an upper bound on the CP breaking scale to avoid the effects of dangerous irrelevant operators, this puts an upper bound on the scale of inflation and the subsequent reheating temperature. In this talk, I will briefly review the Nelson-Barr solution to the strong CP problem and the quality issue, and demonstrate that minimal Nelson-Barr models are in tension with simple models of inflation and thermal leptogenesis. I will work out one way of ameliorating this tension via the introduction of a new chiral symmetry, and discuss other possibilities and avenues for future work.
Our objective is to address the strong CP problem by leveraging softly broken parity invariance within the framework of the quark-lepton unified (Pati-Salam) model, where fermions undergo a "see-saw" mass generation mechanism. The incorporation of vector-like fermions facilitates the realization of this mechanism. The smallness of the physical theta parameter ($\bar{\theta}$) is required by the experimental constraint on the electric dipole moment (EDM) of the neutron, $d_n$. In our model, this constraint is satisfied without fine-tuning, since $\bar{\theta}$ can only arise at loop level.
We show that a naturally light scalar can arise in the Nelson-Barr solution to the strong CP problem. The dependence of the CKM matrix elements on this new scalar is its predominant coupling. If this field constitutes dark matter, it gives rise to a completely new phenomenology, as the CKM elements vary periodically in time.
We discuss dark shower signals at the LHC from a dark QCD sector, containing GeV-scale dark pions. The portal with the Standard Model is given by the mixing of the $Z$ boson with a dark $Z^\prime$ coupled to the dark quarks. Both mass and kinetic mixings are included, but the mass mixing is the essential ingredient, as it is the only one mediating visible decays of the long-lived dark pions on collider scales. We focus especially on the possibility that the dark $Z'$ is lighter than the $Z$. Indirect constraints are dominated by electroweak precision tests, which we thoroughly discuss, showing that both $Z$-pole and low-energy observables are important. We then recast CMS and LHCb searches for displaced dimuon resonances to dark shower signals initiated by the production of on-shell $Z$ or $Z^\prime$, where the visible signature is left by a dark pion decaying to $\mu^+ \mu^-$. We demonstrate how dark shower topologies have already tested new parameter space in Run 2, reaching better sensitivity on a light dark $Z'$ compared to the flavor-changing decays of $B$ mesons, which can produce a single dark pion at a time, and the electroweak precision tests.
Dark showers offer a compelling collider signature for Hidden Valley models featuring a confining dark sector. Our work extends the investigation of these models to near-conformal theories where the running coupling, governed by renormalization group equations (RGE), flows near an infrared fixed point. We establish a framework of two classes of RGE solutions which cover much of the parameter space of confining theories, allowing us to present the first phenomenological results for such near-conformal dark sector theories.
We consider a Hidden Valley model which generates showering from strong dynamics within the dark sector, followed by decays back into Standard Model states. Our interest is in the limit of smaller dark pion masses, which creates a high multiplicity of final states. The reconstruction of dark sector masses in such a setting is obscured by a thick combinatorial background. We apply the new SIFT (Scale-Invariant Filtered Tree) jet clustering algorithm to the reconstruction of simulated events of this type. By cutting an ordered slice through possible recombinations, the SIFT algorithm may help lift backgrounds of the described variety.
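For readers unfamiliar with sequential-recombination clustering, the sketch below implements the generic tree-building paradigm that SIFT shares with the kT family of algorithms. The anti-kT distance measure is used as a stand-in, since the actual scale-invariant SIFT measure is not reproduced in the abstract.

# Generic sequential-recombination jet clustering (anti-kT measure as a
# stand-in; SIFT's own measure differs): repeatedly merge the closest pair.
import numpy as np

def four_vec(pt, y, phi):
    """Massless four-momentum (E, px, py, pz) from (pt, rapidity, phi)."""
    return np.array([pt * np.cosh(y), pt * np.cos(phi),
                     pt * np.sin(phi), pt * np.sinh(y)])

def pt_y_phi(v):
    e, px, py, pz = v
    return np.hypot(px, py), 0.5 * np.log((e + pz) / (e - pz)), np.arctan2(py, px)

def cluster(vectors, R=0.4, p=-1):
    """kT-family clustering; p = -1 reproduces the anti-kT distance measure."""
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    jets = []
    while vecs:
        kins = [pt_y_phi(v) for v in vecs]
        # beam distance d_iB = pt^(2p); start from the smallest of these
        i_b = min(range(len(vecs)), key=lambda i: kins[i][0] ** (2 * p))
        dmin, best = kins[i_b][0] ** (2 * p), ("beam", i_b)
        for i in range(len(vecs)):
            for j in range(i + 1, len(vecs)):
                (pti, yi, fi), (ptj, yj, fj) = kins[i], kins[j]
                dphi = np.pi - abs(abs(fi - fj) - np.pi)
                dij = (min(pti ** (2 * p), ptj ** (2 * p))
                       * ((yi - yj) ** 2 + dphi ** 2) / R ** 2)
                if dij < dmin:
                    dmin, best = dij, (i, j)
        if best[0] == "beam":
            jets.append(vecs.pop(best[1]))       # promote to a final jet
        else:
            i, j = best
            merged = vecs[i] + vecs[j]           # E-scheme recombination
            vecs = [v for k, v in enumerate(vecs) if k not in (i, j)]
            vecs.append(merged)
    return jets

# toy event: two collimated sprays of ten particles each
rng = np.random.default_rng(1)
event = [four_vec(rng.uniform(5, 50), cy + rng.normal(0, 0.1),
                  cphi + rng.normal(0, 0.1))
         for cy, cphi in [(0.0, 0.0), (0.5, 2.0)] for _ in range(10)]
print(len(cluster(event)), "jets reconstructed")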
Belle II is considering upgrading SuperKEKB with a polarized electron beam. The introduction of beam polarization would significantly expand the physics program of Belle II in the electroweak, dark-sector, and lepton-flavor-universality sectors. For all of these future measurements, a robust method of determining the average beam polarization is required to maximize the precision. The $BABAR$ experiment has developed a new beam polarimetry technique, tau polarimetry, capable of measuring the average beam polarization to better than half a percent. Tau polarimetry strongly motivates the addition of beam polarization to SuperKEKB and could also be used at future $e^+e^-$ colliders such as the ILC.
We present results obtained with the $BABAR$ detector using the full data set of about 470 $\text{fb}^{-1}$ collected at the PEP-II $e^+e^-$ collider.
The Fermilab Muon g-2 experiment aims to measure the anomalous magnetic moment of the muon, $a_\mu$, to a precision of 140 parts per billion. Continuing to improve the precision of this measurement permits a more detailed comparison between the experimental value and the theoretical prediction. The value of $a_\mu$ is extracted by measuring the anomalous precession frequency of the muon, $\omega_a$, along with a precise determination of the magnetic field in which the muons precess. This talk will focus on the techniques utilized to measure $\omega_a$, which all originate from the time distribution of positrons emitted from muon decays. We will also discuss different scans for checking data consistency and some systematic uncertainty studies.
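For context, the canonical starting point for an $\omega_a$ extraction is the five-parameter "wiggle" fit to the positron time spectrum, $N(t) = N_0 e^{-t/\tau}[1 + A\cos(\omega_a t + \phi)]$. The toy fit below illustrates this with typical values I have assumed (boosted muon lifetime near 64.4 μs, $\omega_a \approx 1.44$ rad/μs); the experiment's production fit includes many additional terms (beam dynamics, pileup, gain corrections) not shown here.

# Toy five-parameter "wiggle plot" fit; illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def wiggle(t, n0, tau, amp, omega, phi):
    return n0 * np.exp(-t / tau) * (1 + amp * np.cos(omega * t + phi))

t = np.linspace(0, 300, 3000)                       # microseconds
truth = (1e5, 64.4, 0.37, 1.439, 2.1)               # assumed toy parameters
counts = np.random.default_rng(2).poisson(wiggle(t, *truth))
popt, pcov = curve_fit(wiggle, t, counts, p0=(9e4, 60, 0.3, 1.44, 2.0),
                       sigma=np.sqrt(np.maximum(counts, 1)))
print(f"omega_a = {popt[3]:.5f} +/- {np.sqrt(pcov[3, 3]):.5f} rad/us")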
The Mu2e experiment at Fermilab will enable the search for neutrinoless muon-to-electron conversion in the field of an Al nucleus, a charged-lepton-flavor-violating process. If observed, it would be a clear indication of physics beyond the Standard Model. Mu2e aims to reach a single-event sensitivity of $3\times10^{-17}$, improving on the previous limit by four orders of magnitude. This improvement relies on the development of trigger selection systems designed to discard data from background-induced events by placing kinematic and topological cuts on a particle's reconstructed track. One of the largest sources of background for Mu2e is proton-antiproton annihilation, with an expected yield of 0.010 ± 0.010 events. These annihilations produce a 2 GeV shower of particles, among which there can be an electron that mimics the conversion-electron signal. The uncertainty on this background is dominated by the systematic uncertainty associated with the theoretical model.
The goal of my research is to reduce this systematic uncertainty by enabling the measurement of the antiproton production cross section in a control region. To do this, I will develop an antiproton trigger selection system that takes advantage of the track multiplicity and topology of these events. After development, I will carry out the first performance study of this trigger. The development of this trigger is essential to enable a data-driven analysis targeting the reduction of the systematic uncertainty of the antiproton-induced background.
Over the years, the Lorentz-boosted regime has become an attractive area for measurements and searches at the LHC, leading to the increasing importance of boosted-jet tagging algorithms. This talk presents the algorithms used in CMS Run 2 analyses to identify jets originating from a massive particle decaying to a b or c quark-antiquark pair. The talk summarises their performance and highlights three methods used to calibrate them in data. The calibration results and their comparison to simulation are presented.
Spin correlations in top-quark pair production have recently been used to measure entanglement at high energy. In this context, the semileptonic channel may play an important role due to its large cross section. However, unambiguously identifying the hadronic top decay products that correlate most strongly with the top quark polarization is challenging. In this talk, we introduce jet flavor tagging to significantly improve the spin analyzing power in hadronic decays beyond the exclusive kinematic information employed in previous studies.
Top quark polarization measurements provide observables that are sensitive to new physics. The down-type fermion from the W decay is the most powerful spin analyzer of the top quark, but it is not straightforward to identify in hadronic decays. Most applications measure the top quark spin via an optimal hadronic spin analyzer built from kinematics. In this talk, we discuss how to improve on optimal hadronic polarimetry by utilizing machine learning with information beyond simple kinematics.
In this talk I will present forecasts on cosmological parameters for a CMB-HD survey. These forecasts include residual foregrounds, delensing of the acoustic peaks, and DESI BAO. We find that CMB-HD can improve constraints on the scalar spectral index, $n_s$, by a factor of two compared to precursor surveys. We also find that the CMB-HD constraint on $N_{\rm eff}$ can rule out light thermal particles back to the end of inflation with 95% confidence. As an application, this can rule out the QCD axion in a model-independent way, assuming the Universe's reheating temperature was high enough that the axion thermalized. We also find that baryonic effects can bias parameters if not marginalized over, and can increase parameter error bars; however, this can be mitigated by including information about baryonic effects from kinetic and thermal Sunyaev-Zel'dovich measurements by CMB-HD. I will also discuss details of the publicly available CMB-HD likelihood and Fisher estimation codes.
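The sketch below shows the bare mechanics of a Fisher forecast of the kind mentioned, on an invented toy spectrum; the real CMB-HD forecast uses Boltzmann-code derivatives, foreground residuals, and delensing, none of which are modeled here.

# Minimal Fisher-forecast sketch: F_ij = sum_l (dC_l/dth_i)(dC_l/dth_j)/sigma_l^2,
# with parameter errors sigma(th_i) = sqrt((F^-1)_ii). Toy spectrum only.
import numpy as np

ell = np.arange(30, 5000)

def toy_cl(amp, ns):
    # toy damped power law standing in for a real C_l from a Boltzmann code
    return amp * (ell / 1000.0) ** (ns - 1.0) * np.exp(-(ell / 3000.0) ** 2)

theta0 = np.array([1.0, 0.965])                     # fiducial (amplitude, n_s)
fsky = 0.5
sigma_l = np.sqrt(2.0 / ((2 * ell + 1) * fsky)) * toy_cl(*theta0)  # cosmic variance

derivs = []
for i in range(len(theta0)):                        # two-sided numerical derivatives
    step = np.zeros_like(theta0); step[i] = 1e-4
    derivs.append((toy_cl(*(theta0 + step)) - toy_cl(*(theta0 - step))) / 2e-4)

F = np.array([[np.sum(di * dj / sigma_l**2) for dj in derivs] for di in derivs])
errs = np.sqrt(np.diag(np.linalg.inv(F)))
print(f"forecast sigma(amp) = {errs[0]:.2e}, sigma(n_s) = {errs[1]:.2e}")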
The two primary methods for measuring the Hubble constant, forward modeling of CMB fluctuations and distance-ladder measurements, disagree at a statistically significant level. A third method, known as the bright standard siren method, could add another competitive measurement to the fray, potentially pointing to a resolution of the aforementioned tension. This direct method calibrates a value of the Hubble constant by measuring distances to neutron star mergers via their gravitational wave emission and their corresponding recessional velocities from their bright electromagnetic emission. Yet these recessional velocities include peculiar components that stem from local dynamics rather than the expansion of the universe. When this peculiar component is not well constrained, its uncertainty can dominate the standard siren uncertainty. In this context, we propose a procedure for dedicated follow-up measurements of merger peculiar velocities with the Dark Energy Spectroscopic Instrument (DESI). We then demonstrate this procedure with a number of galaxies in the vicinity of NGC 4993, the host of the only observed bright standard siren event, as a test case. Implementing this procedure on future standard siren events will play a significant role in making the standard siren measurement of the Hubble constant more competitive.
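A rough sketch of the error budget in question, with illustrative numbers loosely inspired by GW170817/NGC 4993 (assumed by me, not taken from the talk): the siren gives $H_0 = (v_{\rm rec} - v_{\rm pec})/d$, so the peculiar-velocity and distance uncertainties propagate directly into $\sigma_{H_0}$.

# Illustrative bright-siren H0 error budget; all numbers are assumptions.
import numpy as np

d, sigma_d = 43.8, 6.9             # GW luminosity distance [Mpc]
v_rec, v_pec = 3327.0, 310.0       # recession and peculiar velocity [km/s]
v_hubble = v_rec - v_pec

for sigma_vpec in (250.0, 70.0):   # before / after a dedicated DESI-like survey
    term_v = sigma_vpec / d                     # peculiar-velocity contribution
    term_d = v_hubble * sigma_d / d**2          # distance contribution
    print(f"sigma_vpec={sigma_vpec:>5.0f} km/s: H0 = {v_hubble/d:.1f}, "
          f"vel term {term_v:.1f}, dist term {term_d:.1f}, "
          f"total {np.hypot(term_v, term_d):.1f} km/s/Mpc")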
The peaks of the CMB spectra provide a direct cosmological probe for studying dark sector physics. Specifically, a shift in the peak positions corresponds to a phase shift in the acoustic oscillations of the photon-baryon plasma before recombination, which is sensitive to the propagation behavior of non-photon radiation. It has been established that the CMB spectra shift to higher $\ell$-modes if the non-photon radiation is self-interacting rather than free-streaming. In this talk, I will show that this phase shift can be further amplified if the non-photon radiation, which includes neutrinos or dark radiation, interacts with dark matter. Using neutrino-dark matter scattering as an example, we numerically calculate the amplified phase shift and offer an analytical interpretation of the result by modelling photon and neutrino perturbations with coupled harmonic oscillators. When the energy density of the interacting radiation exceeds that of the interacting dark matter at matter-radiation equality, we find that the phase shift enhancement is proportional to the interacting dark matter abundance but rather insensitive to the abundance of interacting radiation. This additional phase shift emerges as a generic signature of models featuring neutrino-dark matter scattering, or a dark sector with dark matter-radiation interactions.
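As a cartoon of the coupled-oscillator modeling mentioned above (not the actual perturbation equations), the toy below shows how a weak coupling between two oscillators shifts the phase of the first relative to its uncoupled evolution:

# Toy demonstration: coupling shifts the phase of a driven "photon" oscillator.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u, eps):
    x1, v1, x2, v2 = u
    # two oscillators (frequencies 1 and sqrt(1.5)) with weak linear coupling eps
    return [v1, -x1 + eps * (x2 - x1), v2, -1.5 * x2 + eps * (x1 - x2)]

t = np.linspace(0, 50, 5000)
for eps in (0.0, 0.1):
    sol = solve_ivp(rhs, (0, 50), [1, 0, 0.5, 0], t_eval=t, args=(eps,), rtol=1e-8)
    x1 = sol.y[0]
    # track the last upward zero crossing of the first oscillator
    idx = np.where((x1[:-1] < 0) & (x1[1:] >= 0))[0][-1]
    print(f"eps={eps}: last upward zero crossing near t = {t[idx]:.3f}")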
Gravitational lensing of the cosmic microwave background (CMB) encodes information from the low-redshift universe. Therefore, its measurement is useful for constraining cosmological parameters that describe structure formation, e.g. matter density ($\Omega_m$), the amplitude of clustering ($\sigma_8$), and the sum of neutrino masses. In this talk, I will first present cosmological results from the CMB lensing potential power spectrum measurement using data collected in 2018 from the third-generation camera on the South Pole Telescope (SPT-3G). Then I will give an update on the current status of the lensing measurement using the SPT-3G 2019+2020 data set.
Even if they do not comprise the dark matter, light axion-like particles may be sourced by bulk Standard Model matter through a coupling that violates CP. When considered in combination with the usual axion-photon coupling, the resulting 'monopole-dipole' scenario possesses a rich phenomenology, as has previously been studied in the context of terrestrial detection. In this talk, I discuss the possible cosmological consequences of such interactions. Standard Model nucleons contribute to a homogeneous vacuum expectation value for the axion field, which evolves between recombination and the present day as matter redshifts. This means that regardless of the field's initial conditions, the photon coupling will cause the plane of linear polarization of the CMB to globally rotate, manifesting as a cosmic birefringence signal. Recent analyses of Planck and WMAP data place strong limits on this scenario, and may even favour a non-zero value for the couplings.
Dark matter (DM) with masses of order an electronvolt or below can have a non-zero coupling to electromagnetism. In these models, the ambient DM behaves as a new classical source in Maxwell's equations, which can excite potentially detectable electromagnetic (EM) fields in the laboratory. We describe a new proposal for using integrated photonics to search for such DM candidates with masses in the 0.1 eV to a few eV range. This approach offers a wide range of wavelength-scale devices like resonators and waveguides that can enable a novel and exciting experimental program. In particular, we show how refractive index-modulated resonators, such as grooved or periodically-poled microrings, or patterned slabs, support EM modes with efficient coupling to DM. When excited by the DM, these modes are read out by coupling the resonators to a waveguide that terminates on a micron-scale single-photon detector, such as a single pixel of an ultra-quiet charge-coupled device or a superconducting nanowire. We then estimate the sensitivity of this experimental concept in the context of axion-like particle and dark photon models of DM, showing that the scaling and confinement advantages of nanophotonics may enable exploration of new DM parameter space.
We search for indirect signals of O(keV) dark matter annihilating or decaying into O(eV) dark photons. These dark photons are highly boosted, have decay lengths larger than the Milky Way, and can be absorbed by neutrino or dark matter experiments at a rate that depends on the photon-dark photon kinetic mixing parameter and the optical properties of the experiment. We show that current experiments cannot probe new parameter space, but future large-scale gaseous detectors with low backgrounds (e.g. CYGNUS, NEXT, PANDAX-III) may be sensitive to this signal when the annihilation cross section is especially large.
I will present a detailed study of the production of dark matter in the form of a sterile neutrino via freeze-in from decays of heavy right-handed neutrinos. Our treatment accounts for thermal effects in the effective couplings, generated via neutrino mixing, of the new heavy neutrinos with the Standard Model gauge and Higgs bosons, and can be applied to several low-energy fermion seesaw scenarios featuring heavy neutrinos in thermal equilibrium with the primordial plasma. We find that the production of dark matter is not as suppressed as is found when considering only Standard Model gauge interactions. Our study shows that freeze-in dark matter production could be efficient.
We present a novel perspective on the role of inflation in the production of Dark Matter (DM). Specifically, we explore DM production during Warm Inflation via ultraviolet Freeze-In (WIFI). We demonstrate that in a Warm Inflation (WI) setting the persistent thermal bath, sustained by dissipative interactions with the inflaton field, can source a sizable DM abundance via the non-renormalizable interactions that connect the DM with the bath. Compared to the conventional radiation-dominated UV freeze-in scenario with the same reheat temperature (after inflation), the resulting DM yield in WIFI is always enhanced, showing a strongly positive dependence on the mass dimension of the non-renormalizable operator. Notably, for a sufficiently large mass dimension of the operator, the entirety of the DM abundance of the Universe can be created during the inflationary phase.
Most scenarios of Majorana leptogenesis require on-shell production of heavy Majorana neutrinos, $N$, whose CP-violating decays give rise to a lepton asymmetry. This lepton asymmetry is then converted into the observed baryon asymmetry by sphalerons. In this talk, I will discuss the possibility of simultaneously generating dark and Standard Model lepton asymmetries when the universe reheats to a temperature $T_{\rm RH}\ll m_N$. Since the universe does not reach sufficiently high temperatures to produce $N$ on-shell, the dark and visible asymmetries are frozen in via $N$-mediated scattering processes. I discuss dark sector thermalization, lepton number violation and transfer, and how CP can be violated by these scattering processes. In particular, I point out how the interplay between wash-out processes and thermalization between the dark and visible sectors allows the asymmetric dark matter abundance to be suppressed relative to the lepton asymmetry. This suppression gives rise to dark matter masses that can be much larger than the usual GeV scale found in most models of asymmetric dark matter.
In this talk, we examine some consequences of Majorons in the singlet Majoron model. We explore a scenario where the Majoron acts as dark matter, while the baryon asymmetry is generated through leptogenesis and neutrino masses are generated through the type I seesaw mechanism. We explore the consequences of Majoron freeze-in production through a relatively unexplored production channel. We also consider scenarios where leptogenesis occurs and neutrino masses are generated, but the Majorons are unstable on cosmological time scales, giving rise to different phenomenology.
The proposed next-generation e+e- colliders provide an excellent opportunity for precision measurements of the electroweak and Higgs sectors that offer both direct and indirect probes of new physics beyond the Standard Model. These opportunities can be enabled by deploying low-mass, high-granularity detectors, utilizing the latest state-of-the-art technological developments, which can offer unprecedented spatial, time, and energy resolution to meet the physics objectives. Ongoing detector R&D efforts toward meeting these objectives will be discussed.
The International Linear Collider is a proposed e+e- collider with a staged approach to reach high energies. The accelerator is based on a mature design that can deliver high luminosity and uses polarized beams. The first stage will be a Higgs factory, with collisions at 250 GeV that allow measurements of Higgs boson couplings comparable to those achievable at circular e+e- colliders.
A second stage will include a brief program at 350 GeV to measure the top quark mass at threshold, and then move on to a center-of-mass energy of 500 GeV or somewhat above. This stage offers an independent set of measurements of the Higgs boson couplings, together with new capabilities: measurement of the top quark Yukawa coupling, measurement of Higgs pair production, and measurement of the top quark form factors to a precision at which beyond-Standard-Model effects would be expected. This talk will describe the physics program at energies higher than 250 GeV.
Prospects for constraining CP-odd contributions to the Higgs-strahlung process e+e- => ZH at a future electron-positron collider are presented. A realistic study is performed in the framework of the FCC-ee collider at a center-of-mass energy of 240 GeV, with the IDEA detector response simulated using the DELPHES framework. The matrix-element package MELA is employed, applying event weights to Standard Model samples in order to optimally constrain the CP-odd contributions based on kinematic observables.
We show that the FCC-ee will have sensitivity to the MSSM electroweak sector that is complementary to the LHC through precision Z-boson measurements. Our results provide added motivation and quantitative targets for the desired systematic uncertainty on this measurement.
The existence of the relic neutrino background is a firm prediction of big bang cosmology, but because of their extremely small kinetic energy today, the direct detection of relic neutrinos remains elusive. On the other hand, we know very little about the nature of dark matter. In this work, we show that heavy dark matter (with mass in the range of $10^9$ to $10^{15}$ GeV) decaying into neutrinos provides a new probe of relic neutrinos via resonant neutrino scattering. We find that the distinct resonant absorption feature is potentially observable in next-generation ultra-high-energy neutrino telescopes (such as IceCube-Gen2) for a relic neutrino overdensity comparable to current laboratory limits.
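For orientation, if the resonance in question is the usual $Z$-resonance on the relic background (an assumption on my part; the abstract does not name the mediator), the resonant neutrino energy is

$$E_\nu^{\rm res} = \frac{m_Z^2}{2m_\nu} \approx \frac{(91.2\,\mathrm{GeV})^2}{2\times(0.1\,\mathrm{eV})} \approx 4\times10^{13}\,\mathrm{GeV},$$

which, with $E_\nu \approx m_{\rm DM}/2$, sits comfortably inside the quoted $10^9$ to $10^{15}$ GeV mass window as $m_\nu$ varies over allowed values.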
The proposed Muon Collider Facility, when realized, will offer great opportunities for discovering new physics. At high energies, muon collisions can produce heavy neutral leptons (HNLs), well-motivated beyond-the-Standard-Model (SM) particles which can potentially explain neutrino masses via the seesaw mechanism. HNLs can interact with the SM sector via a transition magnetic moment, and in this talk I will present their production and decay channels in the context of a muon collider. Finally, I will present the sensitivity of a muon collider for probing the dipole couplings to the SM gauge bosons and the HNL mass.
We investigate the decay of MeV-scale sterile Dirac neutrinos into e+e- pairs, a signature that can be exploited by current solar neutrino experiments (e.g. Borexino) and future dark matter experiments (e.g. ISODAR@Yemilab) to set limits in the mass-mixing parameter space. We present a closed-form decay width correctly accounting for the neutral-current/charged-current interference. We rederive the Borexino bounds and derive ISODAR@Yemilab bounds based on this result. Our Borexino bound is slightly more optimistic than that of the collaboration, likely as a result of our revised decay width. Our ISODAR@Yemilab projection shows improved sensitivity to the new particle compared to past experiments.
The initiation of a novel neutrino physics program at the Large Hadron Collider (LHC) motivates studying the discovery potential of existing and proposed forward neutrino experiments. This requires resolving degeneracies between new predictions and uncertainties in modeling neutrino production in the forward kinematic region. Based on a broad selection of predictions for the parent hadron spectra, we parametrize the expected correlations in the spectra of neutrinos produced in their decays, and determine the highest achievable precision for their observation. This allows constraining various processes both within and beyond the Standard Model. In particular, we illustrate how combining multiple neutrino observables could lead to an experimental confirmation of the enhanced-strangeness scenario proposed to resolve the cosmic-ray muon puzzle during LHC Run 3, as well as constrain neutrino non-standard interactions. Moreover, we assess the possibility of observing neutrino trident scattering off a nucleus $N$, $\nu N\to\nu^{(\prime)} \ell^-\ell^{(\prime)+} N$, which has previously proven to be a notoriously difficult task with few reported experimental investigations and few conclusive results. We show that even a $\mathcal{O}(10~\textrm{ton})$ forward detector yields tens of di-muon trident events, while the relevant backgrounds can be reduced to negligible levels without affecting the signal.
The neutrino trident process is one in which a neutrino scatters off a nucleus and produces a lepton pair. Most trident studies have focused on electron and muon production, as they represent the most likely source of trident events in the Standard Model (SM). We analyze the possibility of detecting tau leptons from SM trident processes at the DUNE near detector. The detection of tau leptons at the DUNE near detector would be considered anomalous, so we must take into account all relevant SM backgrounds to avoid falsely attributing such a detection to signs of new physics. We include both coherent and incoherent neutrino scattering off argon nuclei in the detector and estimate the event rate for both single and pair production of tau leptons for the DUNE standard flux, as well as the tau-optimized flux.
In this talk, I investigate the excellent potential of future tau neutrino experiments for probing non-standard interactions and secret interactions of neutrinos. Due to its ability to identify tau leptons, the DUNE far detector could have superior sensitivity for probing secret neutrino interactions by observing downward-going atmospheric neutrinos, compared to the short-baseline experiments at the Forward Physics Facility (FPF) at CERN. For probing non-standard interactions, large-volume experiments such as HK, KNO, or ORCA would provide the dominant sensitivities; however, including the tau neutrino observations of DUNE raises its sensitivity to a level comparable to those larger-volume experiments. We therefore point out the importance of increasing tau lepton identification efficiencies in future experiments.
We derive the EFT amplitudes relevant for vector-boson pair production at the LHC in the dimension-8 SMEFT using on-shell methods. Since they are directly related to physical observables, the results allow for the identification of phenomenologically interesting amplitudes, and can furthermore distinguish between the SMEFT and generic EFTs.
We perform a comprehensive analysis of the scattering of matter and gravitational Kaluza-Klein (KK) modes in five-dimensional gravity theories. We consider matter localized on a brane as well as in the bulk of the extra dimension for scalars, fermions, and vectors, and consider an arbitrary warped background. While naive power-counting suggests that there are amplitudes which grow as fast as $\mathcal{O}(s^3)$ [where $s$ is the center-of-mass scattering energy-squared], we demonstrate that cancellations between the various contributions result in a total amplitude which grows no faster than $\mathcal{O}(s)$. Extending previous work on the self-interactions of the gravitational KK modes, we show that these cancellations occur due to sum-rule relations between the couplings and the masses of the modes that can be proven from the properties of the mode equations describing the gravity and matter wavefunctions. We demonstrate that these properties are tied to the underlying diffeomorphism invariance of the five-dimensional theory. We discuss how our results generalize when the size of the extra dimension is stabilized via the Goldberger-Wise mechanism. Our conclusions are of particular relevance for freeze-out and freeze-in relic abundance calculations for dark matter models including a spin-2 portal arising from an underlying five-dimensional theory.
In this talk, I will discuss how the residual five-dimensional diffeomorphism symmetries of compactified gravitational theories with a warped extra dimension imply equivalence theorems which ensure that the scattering amplitudes of helicity-0 and helicity-1 spin-2 Kaluza-Klein states equal (to leading order in scattering energy) those of the corresponding Goldstone bosons present in 't Hooft-Feynman gauge. We derive a set of Ward identities that lead to a transparent power-counting of the scattering amplitudes involving spin-2 Kaluza-Klein states. Power-counting for the Goldstone boson interactions establishes that the scattering amplitudes grow no faster than ${\mathcal O}(s)$, explaining the origin of the behavior previously shown to arise from intricate cancellations between different contributions to these scattering amplitudes in unitary gauge. Enabled by the Ward identities, I will also describe a robust method for computing the scattering amplitudes without large cancellations among the different diagrammatic contributions. I will also discuss how our results apply to more general warped geometries, including models with a stabilized extra dimension.
The most general massless particles allowed by Poincaré invariance are "continuous spin" particles (CSPs), a term coined by Wigner. Such particles are notable for their integer-spaced infinite tower of spin polarizations, with states of different integer (or half-integer) helicities mixing under boosts, much like the spin-states of a massive particle. The mixing under boosts is controlled by a spin-scale $\rho$ with units of momentum. Normally, we assume $\rho=0$, but this misses the most general behavior compatible with Lorentz symmetry. The interactions of CSPs are known to satisfy certain simple properties, one of which is that the $\rho\rightarrow 0$ limit generically recovers familiar interactions of massless scalars, photons, or gravitons, with all polarizations of helicity $|h|\geq 3$ decoupling in this limit. Thus, one can ask if the photon of the Standard Model is a CSP with a small, but non-zero $\rho$. One concern about this possibility, originally raised by Wigner, is that the infinite tower of polarizations could pose problems for thermodynamics.
In this talk, I discuss aspects of CSP thermodynamics, and show that the structure of CSP interactions implies that it is in fact thermodynamically well behaved. In a bath of charged particles coupled to CSP photons, the primary $h=\pm 1$ helicity modes thermalize quickly, while the other modes require increasingly long time scales to thermalize. In familiar thermodynamic systems, the CSP photon behaves like the familiar photon, but with small, time- and $\rho$-dependent corrections to its effective number of degrees of freedom. Departures from familiar thermal behavior arise at energy scales comparable to $\rho$, which could have interesting and testable experimental consequences.
Energy correlators are useful observables for studying quantum chromodynamics (QCD). In particular, the two-point energy correlator offers a clean visualization of the confinement transition. I will present a calculation of the two-point correlator in the simplest holographic model of confinement, based on a warped extra dimension with an IR brane. This is the first AdS/CFT computation of energy correlators in a confining theory.
The results capture some, but not all, of the qualitative features of QCD energy correlators. I expect I can soon apply these techniques to more realistic models of confinement, which should fix some of the deviations from QCD.
The ForwArd Search ExpeRiment (FASER) searches for dark photons that are produced in the decays of neutral pions and eta mesons and decay into fermion-antifermion pairs. Dark photons are massive gauge bosons of a broken $U(1)_D$ symmetry, presenting a remarkably simple extension of the Standard Model. Previous analyses have neglected spin correlations in signal event generation; however, because the dark photon has non-zero spin, such spin correlations exist. We analytically calculate the cross section for these decays within the narrow-width approximation and compare the results with and without spin correlations. We find that spin correlations do not significantly affect existing dark photon searches.
Many well-motivated beyond-the-standard-model (BSM) scenarios naturally predict the production of hadronically decaying long-lived particles (LLPs) at the LHC, which leads to displaced-jet signatures. A displaced-jet search is therefore a powerful tool to address numerous long-standing puzzles in particle physics. With LHC Run 3, which started in 2022, we have developed and deployed a set of new techniques for displaced-jet searches at CMS, including new displaced-jet triggers, a new reconstruction algorithm, and new machine-learning-based LLP taggers, leading to significant improvements in sensitivity to challenging LLP signatures. We present a recent result using data collected in 2022, which outperforms previous results by a factor of up to 10 despite analyzing a much smaller data set. Many more developments and applications can be pursued in Run 3 and at the HL-LHC, significantly expanding the discovery potential for BSM physics at CMS.
Many scenarios beyond the standard model hypothesize the existence of new particles with long lifetimes. These long-lived particles (LLPs) decay significantly displaced from their initial production vertex, leading to unconventional signatures within the detector. This presentation focuses on searches for LLP decays within the CMS muon system. An innovative usage of the CMS muon detectors is exploited in this context to significantly boost the sensitivity of such searches. We present results obtained using data recorded by the CMS experiment during Run 2 of the LHC.
A search for long-lived particles decaying into an oppositely charged lepton pair ($\mu\mu$, $ee$, or $e\mu$) is presented, with the requirement that the candidate leptons form a vertex within the inner tracking volume of ATLAS, displaced from the primary $pp$ interaction region. The analysis uses the 140 fb$^{-1}$ of Run 2 data collected at 13 TeV by the ATLAS experiment in 2015-2018. The results are interpreted in the context of two models, together providing generic detection efficiencies for resonances with decay lengths ($c\tau$) of 10-1000 mm decaying into a dilepton pair with masses between 0.1 and 2.2 TeV. The first model is a generic pair-produced $Z'$ from a new heavy scalar ($S$), with the $Z'$ decaying to lepton pairs or pairs of fermionic dark matter. The second is an R-parity-violating supersymmetric model in which the lightest neutralino decays into $\ell^+\ell'^-\nu$ ($\ell,\ell' = e, \mu$) with a finite lifetime. The neutralinos can be produced via the decay of pairs of gluinos or a variety of electroweak modes with heavier neutralinos and/or charginos.
As the field examines a future muon collider as a possible successor to the LHC, we must consider how to fully utilize not only the high-energy particle collisions, but also any lower-energy staging facilities necessary in the R&D process. An economical and efficient possibility is to use the accelerated muon beam, from either the full experiment or from cooling and acceleration tests, in beam-dump experiments. Beam-dump experiments are complementary to the main collider, as they achieve sensitivity to very small couplings with minimal instrumentation. We demonstrate the utility of muon beam-dump experiments for new physics searches at energies from 10 GeV to 5 TeV. We find that, even at low energies like those accessible at staging or demonstrator facilities, it is possible to probe new regions of parameter space for a variety of generic BSM models, including muonphilic, leptophilic, $L_\mu - L_\tau$, and dark photon scenarios. Such experiments could therefore provide opportunities for the discovery of new physics well before the completion of the full multi-TeV collider.
We consider non-minimal quartic inflation driven by the U(1)$_X$ Higgs field $\phi$ in the classically conformal U(1)$_X$-extended Standard Model (SM). Since the conformal symmetry is broken radiatively, the U(1)$_X$ gauge boson mass $m_{Z^\prime}$, the U(1)$_X$ gauge coupling $g_X$, and the inflationary prediction for the tensor-to-scalar ratio $r$ are determined by only two free parameters: the inflaton mass $m_\phi$ and its mixing angle $\theta$ with the SM Higgs field. We show that the new FASER experiment at the High-Luminosity LHC (HL-LHC) can detect the inflaton in both formulations of gravity considered if its mass is in the range 0.1 ≲ $m_{\phi}$ [GeV] ≲ 4. We show that searches for primordial gravitational waves, collider searches for the $Z^\prime$ at the LHC, and long-lived particle searches at experiments like FASER are complementary in the hunt for inflation. By performing a comparative study of the metric and Palatini formulations of gravity, we demonstrate that the two formulations are distinguishable.
Color-sextet scalars have well-known renormalizable couplings to quark pairs, but they could have an array of other possible couplings to the Standard Model. This talk will focus on proposed LHC searches for two dimension-six operators involving these sextet scalars. The first operator produces color-sextet scalars in a channel with jets and a hard opposite-sign lepton pair. The other generates the counterpart processes with neutrinos, producing jets in association with missing transverse energy and possibly leptons. Single production of the sextet scalars will be examined, along with tailored searches that would allow discovery or exclusion of these particles at a higher sensitivity than current ATLAS/CMS searches over the majority of the parameter space.
In the quantum simulation of lattice gauge theories, gauge symmetry can be either fixed or encoded as a redundancy of the Hilbert space. While gauge-fixing reduces the number of qubits, keeping the gauge redundancy can provide code space to mitigate and correct quantum errors by checking and restoring Gauss's law. In this work, we consider the correctable errors for generic finite gauge groups and design the quantum circuits to detect and correct them. We calculate the error thresholds below which the gauge-redundant digitization with Gauss's law error correction has better fidelity than the gauge-fixed digitization. The results provide guidance for fault-tolerant quantum simulations of lattice gauge theories.
The Sachdev-Ye-Kitaev (SYK) model is a fermionic model with $N$ flavors in $(0+1)$ dimensions that has holographic properties and saturates the chaos bound in the large-$N$ and low-temperature limit, where the model acquires an approximate conformal symmetry. We propose an improved resource scaling of $\mathcal{O}(N^5J^2t^2/\epsilon)$ for its quantum simulation, and show results from noisy quantum hardware for $N=6,8$. In another upcoming paper, we study the SYK model at finite temperature using variational methods and prepare thermal states for up to $N=12$ on simulators and $N=8$ on hardware.
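To make the object of study concrete, the sketch below builds the dense SYK Hamiltonian for small $N$ from Jordan-Wigner Majoranas. The normalization $\langle J_{ijkl}^2\rangle = 3!\,J^2/N^3$ is a common convention I have assumed; this is not the resource-optimized construction discussed in the talk.

# Dense SYK Hamiltonian for small N via Jordan-Wigner Majoranas (illustrative).
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def majoranas(N):
    """N Majorana operators chi_i with {chi_i, chi_j} = 2 delta_ij (N even)."""
    nq = N // 2
    chis = []
    for k in range(nq):
        pre = [Z] * k
        chis.append(kron_all(pre + [X] + [I2] * (nq - k - 1)))
        chis.append(kron_all(pre + [Y] + [I2] * (nq - k - 1)))
    return chis

def syk_hamiltonian(N, J=1.0, seed=0):
    rng = np.random.default_rng(seed)
    chi = majoranas(N)
    H = np.zeros_like(chi[0])
    var = 6.0 * J**2 / N**3                     # <J_ijkl^2> = 3! J^2 / N^3
    for i, j, k, l in itertools.combinations(range(N), 4):
        H += rng.normal(0.0, np.sqrt(var)) * chi[i] @ chi[j] @ chi[k] @ chi[l]
    return H

H = syk_hamiltonian(6)
print("hermitian:", np.allclose(H, H.conj().T))
print("ground-state energy:", np.linalg.eigvalsh(H)[0])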
A significant challenge in the detection of meV-scale rare events is demonstrating sufficiently low energy detection thresholds to detect recoils from light dark matter particles. Many detector concepts have been proposed to achieve this goal, often including novel detector target media or sensor technology. A universal challenge in understanding the signals from these new detectors is the characterization of the detector response near the detection threshold, as the calibration methods available at low energies are very limited. We have developed a method of cryogenic optical beam steering that can be used to generate O(μs) pulses of small numbers of photons over the energy range of 0.1-5 eV and deliver them to any location on the surface of a superconducting device, with time and energy features comparable to expected signals. This allows for robust calibration of any photon-sensitive detector, enabling exploration of a variety of science targets including position sensitivity of detector configurations, phonon transport in materials, and the effect of quasiparticle poisoning on detector operation. In this talk, I will present the operating principles and results of this optical beam steering and pulse delivery system, and discuss the implementation of this technology for various novel sensor technologies such as HVeV detectors, MKIDs, and SQUATs (superconducting quasiparticle-amplifying transmons).
We study flavor-changing neutral current decays of $B$ and $K$ mesons in the dark $U(1)_D$ model, with the dark photon/dark $Z$ mass between 10 MeV and 2 GeV. Although the model provides an improved fit (compared to the standard model) to the differential decay distributions of $B \to K^{(*)}\ell^+\ell^-$, with $\ell = \mu, e$, and $B_s \to \phi\mu^+\mu^-$, the allowed parameter space is ruled out by measurements of atomic parity violation, $K^+ \to \mu^+ +$ invisible decay, and $B_s$-$\bar{B}_s$ mixing, among others. To evade constraints from low-energy data, we extend the model to allow for (1) additional invisible $Z_D$ decays, (2) a direct vector coupling of $Z_D$ to muons, and (3) a direct coupling of $Z_D$ to both muons and electrons, with the electron coupling fine-tuned to cancel the $Z_D$ coupling to electrons induced via mixing. We find that only the last case survives all constraints.
In light of the recent branching-fraction measurement of the $B^{+}\to K^{+}\nu\bar{\nu}$ decay and its deviation from the SM expectation, we analyze the prospect of an axion-like particle (ALP) as the cause of such a departure. We assume a long-lived ALP with a mass of the order of the pion mass that predominantly decays to two photons. We focus on the scenario where the ALP decay length is several meters, so that it has a non-negligible probability of decaying outside the Belle II detector volume, mimicking the $B^{+}\to K^{+}\nu\bar{\nu}$ signal. Remarkably, such an arrangement provides a simple explanation for the long-standing $B\to \pi K$ puzzle, by noting that the measured $B^{0}\to \pi^{0}K^{0}$ and $B^{+} \to \pi^{0} K^{+}$ decays have $B^{0}\to a K^{0}$ and $B^{+} \to a K^{+}$ components, respectively. We also argue, based on our results, that the required axion-photon effective coupling lies in a region of parameter space that is still allowed after considering all known experimental constraints.
Highly suppressed (rare) $b$-quark processes provide an excellent probe of heavy New Physics (NP) scenarios in conjunction with stringent tests of the Standard Model (SM). Rare decays of the form $b \rightarrow s \nu \bar{\nu}$ appear in the $\Lambda_b \rightarrow \Lambda \nu \bar{\nu}$ channel, which has not yet been observed but is a promising avenue of exploration at future $e^+e^-$ colliders, given the current status of the $b$-quark anomalies. We provide an analysis of such decays in the SMEFT framework, accounting for the missing-energy final states. Experimental deviations from the SM predictions would signal possible footprints of heavy NP or of dark-sector final states that masquerade as undetectable neutrinos. To further probe the chiral structure of BSM contributions, we calculate the decay rate for polarized initial states, obtaining predictions for spin-angular correlations.
The amplitudes of $B\rightarrow PP$ decays, where $P$ is a pion or a kaon, are related by flavour $SU(3)$ ($SU(3)_F$). This allows us to describe all observables for these decays in terms of $SU(3)_F$ reduced matrix elements parametrized by diagrams. Using these parameters, we performed a fit to the experimental data and found a discrepancy at the level of 3.6$\sigma$. This discrepancy can be resolved by adding $SU(3)_F$-breaking effects, but these effects are required to be very large, of the order of 1000%. When we add an assumption based on QCD factorization to the fit, the discrepancy rises to 4.4$\sigma$. These anomalies in hadronic B decays strongly hint at the presence of new physics.
Evidence for an excess of $b \to c \tau \nu$ decays, indicative of a violation of Lepton Flavor Universality (LFU), was first observed in a 2012 BaBar analysis measuring the ratios $R(D^{(*)}) = BF(B \to D^{(*)} \tau \nu) / BF(B \to D^{(*)} \ell \nu)$ ($\ell = \mu, e$). More results from the B factories followed in support of this anomaly, later joined by LHCb, which boasts a larger production rate of $b$ hadrons (including $\Lambda_b$ and $B_c$), along with a unique set of challenges. Representing one of the most compelling standing tensions with the SM, with $R(D^{(*)})$ currently sitting above $3\sigma$, the program of LFU measurements has only continued to expand. In this talk, I'll review recent LHCb results, focusing on $R(D^{(*)})$, including three results from the past year: a Run 2 simultaneous measurement of $R(D^{(*,+)})$ with the tau decaying to a muon, a Run 2 measurement of $R(D^{*})$ with the tau decaying hadronically, and a Run 1 simultaneous measurement of $R(D^{(*,0)})$ with the muonic tau decay. I'll also briefly mention other LFU measurements, including the combined Run 1+2 measurement of the longitudinal polarization of the $D^{*}$ in $B \to D^{*} \tau \nu$, and future Run 2 measurements of the complementary $R(J/\psi)$ and of angular observables in $B \to D^{*} \tau \nu$ that can further constrain new physics models.
The 21-cm signal provides a novel avenue to measure the thermal state of the universe during cosmic dawn and reionization, and thus a probe of exotic energy injection, such as that from decaying or annihilating dark matter (DM). These DM processes are inherently inhomogeneous: both decay and annihilation are density dependent, and, furthermore, the fraction of injected energy that is deposited at each point depends on the local gas ionization and density, leading to further inhomogeneities in absorption and propagation.
In this talk, I will present a new framework for modeling the impact of spatially inhomogeneous energy injection and deposition, accounting for the ionization and baryon-density dependence as well as the attenuation of propagating photons. Our simulation code, DM21cm, provides the first complete inhomogeneous treatment of the effects of exotic energy injection on the 21-cm power spectrum. Using this pipeline, I will present the sensitivity forecast of the upcoming HERA 21-cm power-spectrum measurements for DM decays to photons and electron/positron pairs.
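For orientation, the homogeneous ("global average") injection rates that this framework makes spatially dependent take a simple form; a back-of-envelope sketch (my own numbers and conventions, not DM21cm):

import numpy as np

G_TO_GEV = 5.61e23                         # grams -> GeV (c = 1)
rho_crit0 = 8.5e-30 * G_TO_GEV             # GeV / cm^3, assuming h ~ 0.67
omega_dm = 0.26

def rho_dm(z):
    """Mean dark matter energy density in GeV / cm^3 at redshift z."""
    return omega_dm * rho_crit0 * (1 + z) ** 3

def inj_decay(z, tau_s):
    """dE/dV/dt [GeV cm^-3 s^-1] for DM decaying with lifetime tau (s)."""
    return rho_dm(z) / tau_s

def inj_annihilation(z, sigv_cm3s, m_gev):
    """Same for s-wave annihilation (self-conjugate DM convention)."""
    return rho_dm(z) ** 2 * sigv_cm3s / m_gev

# Example: z = 20, a long lifetime vs. a thermal-relic-like cross section.
print(inj_decay(20, 1e28))
print(inj_annihilation(20, 3e-26, 100.0))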
The polarization of light from various astrophysical sources could serve as a probe of new physics, including axion-like particles (ALPs). Previously, most observational and theoretical studies of such polarization signals have focused on photon energies below the MeV scale, although there are studies of the effect of ALPs on photon intensity at the GeV scale. Extending polarization studies to the GeV region requires the measurement of electron-positron pair production. To date, no polarization data have been released for photons with energies above the pair-creation threshold. The Alpha Magnetic Spectrometer (AMS-02) provides the opportunity for such a GeV-scale gamma-ray measurement of linear polarization. In this talk, I will discuss the detectability of polarized gamma-ray emission from bright sources with the AMS-02 detector, and the prospects for using this channel to probe ALP parameter space by searching for energy-dependent variations in the polarization degree and angle. I will also provide forecasts for proposed future spectrometers such as AMS-100.
Superradiance provides a unique opportunity for investigating dark sectors as well as primordial black holes (PBHs), which themselves are candidates for dark matter (DM) over a wide mass range. Using axion-like particles (ALPs) as an example, we show that line signals from a superradiant ALP cloud, combined with Hawking radiation from PBHs and with microlensing observations, lead to complementary constraints on parameter-space combinations including the ALP-photon coupling, ALP mass, PBH mass, and PBH DM fraction. For the asteroid-mass range $\sim10^{16}-10^{22}~\textnormal{g}$, where PBHs can provide the totality of DM, we demonstrate that ongoing and upcoming observations such as SXI, JWST, and AMEGO-X will be sensitive to the possible line and continuum signals, providing probes of previously inaccessible parameter space.
The Jovian magnetic field, the strongest and largest planetary magnetic field in the solar system, could offer new insights into possible microscopic-scale new physics, such as a non-zero mass for the Standard Model (SM) photon or a light dark photon kinetically mixing with the SM photon. We employ the immense data set of the latest Juno mission, which provides unprecedented information about the gas giant's magnetic field, together with a more rigorous statistical approach than in the previous literature, to set strong constraints on the dark photon mass and kinetic mixing parameter, as well as on the SM photon mass. The constraint on the dark photon parameters is independent of whether the dark photon is (part of) dark matter, and is the most stringent to date in a certain regime of the parameter space.
A significant excess of gamma-rays has been detected by the Fermi-LAT space telescope in the direction of the Galactic center, yet its origin remains uncertain. The Galactic center excess (GCE) can be explained as a signal of annihilating dark matter or as emission from point sources such as unresolved millisecond pulsars. In principle, these hypotheses can be distinguished with likelihood-based inference techniques that characterize dim point sources. A previous study suggested that the standard approach, the Non-Poissonian Template Fit (NPTF), suffers from shortcomings and biases. In this work, we study the impact both issues can have on inferences of the GCE, by testing the impact of the assumed priors on the dark matter and point-source models, and by moving to the new Compound Poisson Generator (CPG), which resolves many of the shortcomings of the NPTF.
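To illustrate what such point-source templates model (a toy generative sketch of my own, not the CPG implementation): the number of sources per pixel is Poisson, each source draws a flux from a power law, and the observed counts are Poisson in the summed flux. A compound-Poisson pixel can match the mean of a smooth template while having a much larger variance, which is the handle the fits exploit. All slopes, exposures, and rates below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def pixel_counts(n_pix, mu_src, f_min, f_max, alpha, exposure, diffuse):
    counts = np.empty(n_pix, dtype=int)
    for i in range(n_pix):
        n_src = rng.poisson(mu_src)                 # sources in this pixel
        # power-law flux p(f) ~ f^-alpha on [f_min, f_max], via inverse CDF
        u = rng.random(n_src)
        f = (f_min**(1 - alpha)
             + u * (f_max**(1 - alpha) - f_min**(1 - alpha)))**(1 / (1 - alpha))
        counts[i] = rng.poisson(exposure * (f.sum() + diffuse))
    return counts

smooth = rng.poisson(5.0, size=10000)               # DM-like, purely Poisson
clumpy = pixel_counts(10000, mu_src=0.05, f_min=1.0, f_max=50.0,
                      alpha=1.9, exposure=0.1, diffuse=49.5)
# Nearly identical means, very different variances ("clumpiness"):
print(smooth.mean(), smooth.var())
print(clumpy.mean(), clumpy.var())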
We study the impact of an early dark energy (EDE) component present during big bang nucleosynthesis (BBN) on the elemental abundances of deuterium (D/H) and helium ($Y_p$), as well as on the effective relativistic degrees of freedom $N_{\rm eff}$. We consider a simple model of EDE that is constant up to a critical temperature. After the critical temperature, the EDE either decays into standard-model photons that mix with the plasma, decays into uncoupled dark photons, or redshifts away as kination. We use measured values of the abundances and $N_{\rm eff}$ to establish limits on the input parameters of this EDE model, namely the amount of EDE initially present ($\rho_{\Lambda}$) and the critical temperature ($T_{\rm crit}$). In addition, we explore how those parameters are correlated with the BBN inputs: the baryon-to-photon ratio $\eta_b$, the neutron lifetime $\tau_n$, and the number of neutrinos $N_\nu$.
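For the uncoupled dark-photon branch, the effect on $N_{\rm eff}$ follows from the standard bookkeeping for decoupled dark radiation; a quick sketch (a textbook relation, not the paper's code):

# If the decayed EDE ends up as decoupled dark radiation carrying a
# fraction r = rho_DR / rho_gamma of the photon energy density after
# e+e- annihilation, its contribution to N_eff is
#   dN_eff = (8/7) * (11/4)**(4/3) * r .
def delta_neff(r):
    return (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0) * r

# e.g. a dark-radiation component at 2% of the photon density:
print(delta_neff(0.02))   # ~0.09, comparable to current N_eff error bars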
The scalar and tensor fluctuations produced during inflation can be correlated, if arising from the same underlying mechanism. We investigate such correlation in the model of axion inflation, where the rolling inflaton produces quanta of a $U(1)$ gauge field which, in turn, source scalar and tensor fluctuations. We compute the primordial correlator of the curvature perturbation, $\zeta$, with the amplitude of the gravitational waves squared, $h_{ij}h_{ij}$, at frequencies probed by gravitational wave detectors. This two-point function receives two contributions: one arising from the correlation of gravitational waves with the scalar perturbations generated by the standard mechanism of amplification of vacuum fluctuations, and the other coming from the correlation of gravitational waves with the scalar perturbations sourced by the gauge field. Our analysis shows that the latter effect is generally dominant. The correlator, normalized by the amplitude of $\zeta$ and of $h_{ij}h_{ij}$, turns out to be of the order of $ 10^{-2}\,\times (f_{\rm NL}^{\rm equil})^{1/3}$, where $f_{\rm NL}^{\rm equil}$ measures the scalar bispectrum sourced by the gauge modes.
A thermal interpretation of the stochastic formalism of a slow-rolling scalar field in a de Sitter (dS) universe is given. We construct a correspondence between causal patches in the 3-dimensional space of a dS universe and particles living in an abstract space. By assuming a dual description of scalar fields and classical mechanics in the abstract space, we show that the stochastic evolution of the infrared part of the field is equivalent to Brownian motion in the abstract space filled with a heat bath of massless particles. The first slow-roll condition and the Hubble expansion are also reinterpreted in the abstract space, as the speed of light and a transfer of conserved energy, respectively. Inspired by this, we sketch how emergent quantum particles may realize the Hubble expansion through exponential particle production. This gives another meaning to dS entropy, as the entropy per Hubble volume in the global dS universe.
The interplay between cosmology and strongly coupled dynamics can yield transient features that vanish at late times of cosmic evolution, but which may leave behind phenomenological signatures in the spectrum of primordial fluctuations and cosmological observables. Of particular interest are strongly coupled extensions of the standard model featuring approximate conformal invariance. In flat space, the spectral density for a scalar operator in a conformal field theory is characterized by a continuum with a scaling law governed by the dimension of the operator, and is otherwise featureless. AdS/CFT arguments suggest that at large $N$, in an inflationary background with Hubble rate $H$, this continuum is gapped at the scale $\mu = (3/2)H$. We demonstrate that in an RS setup with a certain UV boundary condition, there can be additional peak structures that become sharp and particle-like when the dimensionless Hubble rate is within an appropriate range, and we estimate their contribution to cosmological observables. These quasi-particles can be either fundamental, and localized to a UV brane, or composite at the Hubble scale $H$, and thus bound to the horizon in the bulk of the 5D geometry. We comment on how the stabilization of conformal-symmetry-breaking vacua can be correlated with these spectral features.
If enough primordial black holes (PBHs) are produced in the early Universe, they can come to dominate its energy density. This is usually considered viable as long as the PBHs evaporate and reheat the universe above the temperature needed for Big Bang nucleosynthesis, which requires $m_\mathrm{BH} \lesssim 10^9$ g. However, during this period of early matter domination, perturbations can grow and PBH clusters can form, leading to greatly enhanced and even runaway PBH mergers that can dramatically alter the PBH mass distribution. Using the Press-Schechter formalism to model PBH cluster formation, we find not only that this runaway merger phenomenon excludes parameter space previously thought to be viable, but also that in some cases the mergers can generate a population of cosmologically stable PBHs with the right abundance to be dark matter.
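As a reference point for the cluster-formation step, the Press-Schechter collapsed fraction is a one-line formula; a minimal sketch (illustrative only, not the paper's pipeline):

import math

def collapsed_fraction(sigma, delta_c=1.686):
    """Press-Schechter fraction of matter in collapsed objects,
    F(>M) = erfc(delta_c / (sqrt(2) sigma(M)))."""
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma))

# During an early PBH-dominated era, perturbations grow linearly with the
# scale factor, so sigma(M) grows and the clustered fraction rises quickly:
for sigma in (0.1, 0.5, 1.0, 2.0):
    print(sigma, collapsed_fraction(sigma))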
I examine one-loop corrections from small-scale curvature perturbations to the superhorizon ones in single-field inflation models, which have recently caused controversy. I consider the case where the Universe undergoes the transitions slow-roll (SR) → intermediate period → SR. The intermediate period can be an ultra-slow-roll period or a resonant-amplification period, either of which enhances small-scale curvature perturbations. I assume that the superhorizon curvature perturbations are conserved at least during each of the SR periods. Within this framework, I show that the superhorizon curvature perturbations during the first and second SR periods coincide at one-loop level in the slow-roll limit.
For many years, models of weakly interacting massive particles (WIMPs) have been a useful target for direct detection experiments and other probes of dark matter. However, increasingly precise experimental probes have severely constrained the viable parameter space for these models. In this talk, I will review a paradigmatic WIMP model, Singlet-Doublet dark matter. I will introduce the model and discuss the remaining parameter space in light of contemporary experiments. In order to evade constraints, the model must live in special regions of parameter space. I will discuss these special regions, the prospects for probing them in the future, and explain how one might arrange for the model parameters to naturally inhabit them.
Understanding the dark matter distribution within a few kpc of the Galactic center of the Milky Way is essential for estimating the dark matter content of the Galaxy for indirect detection experiments, as well as for understanding the particle nature of dark matter through the density profile in the Milky Way's core. Although it is difficult to accurately measure the inner stellar distribution in order to infer the dark matter distribution close to the Galactic center, we can gain insight from cosmological simulations. However, the implementation of baryonic physics varies between simulation suites, making it more challenging to draw conclusions about our own Galaxy; these implementations are often opaque, and for some suites not publicly available. In this talk, I will discuss how we characterized the dark matter density profiles in FIRE-2, Auriga, Vintergatan, and Illustris TNG50, using the adiabatic contraction algorithm of Gnedin et al. (2004) to predict the dark matter density profile in the hydrodynamic simulations. I will show that Auriga, Vintergatan, and Illustris TNG50 are well described by adiabatic contraction, while in FIRE-2 stellar feedback dominates over the effects of baryonic contraction. I will close by showing the dark matter annihilation/decay rates in the simulations, as well as predictions for the Milky Way's inner dark matter density profile and annihilation flux based on observations of the stellar density profile.
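For readers unfamiliar with the method: the classic adiabatic-contraction calculation conserves $r\,M(r)$ for orbiting particles as a compact baryonic component grows. The sketch below implements the simpler Blumenthal-style version (Gnedin et al. 2004 instead conserve an orbit-averaged quantity, omitted here), with all profiles and parameters chosen arbitrarily.

import numpy as np

def M_initial(r, M0=1.0, rs=1.0):
    # Hernquist-like initial total mass profile (arbitrary choice)
    return M0 * r**2 / (r + rs)**2

def M_baryon_final(r, Mb=0.05, rb=0.1):
    # compact final baryonic profile (arbitrary choice)
    return Mb * r**2 / (r + rb)**2

def contracted_radius(r_i, f_b=0.16, n_iter=200):
    """Solve r_f [M_b(r_f) + (1 - f_b) M_i(r_i)] = r_i M_i(r_i) iteratively."""
    Mi = M_initial(r_i)
    r_f = r_i
    for _ in range(n_iter):
        r_new = r_i * Mi / (M_baryon_final(r_f) + (1.0 - f_b) * Mi)
        r_f = 0.5 * (r_f + r_new)      # damped update for stability
    return r_f

for r in (0.05, 0.1, 0.5, 1.0):
    print(r, contracted_radius(r))     # r_f < r_i: the halo contracts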
Historically, dark matter searches have primarily focused on hunting for effects from two-to-two scattering. However, given that the visible universe is primarily composed of plasmas governed by collective effects, there is great potential to explore similar effects in the dark sector. Recent semi-analytic work has shown that new areas of parameter space for dark U(1) models can be probed through the observation of collisionless shock formation in astrophysical dark plasmas, a nonlinear process that requires simulation. Here, I will show initial results from simulating such warm, non-relativistic pair plasmas within the Smilei framework, a fully-kinetic particle-in-cell plasma physics simulation suite.
Observations of stellar populations are biased by extinction from foreground dust. By solving the equilibrium collisionless Boltzmann equation using machine-learning techniques, one can estimate the unbiased phase-space density of an equilibrated stellar population and the underlying gravitational potential. Using a normalizing-flow-based estimate of the phase-space density of stars measured by the Gaia space telescope, we estimate the local gravitational potential of the Milky Way as well as the unbiased phase-space density corrected for dust extinction. We find that this novel and completely data-driven estimate of these quantities is compatible with recent three-dimensional dust maps and analytic models of the Milky Way's potential. We anticipate that this measurement of the potential will probe the detailed structure (and substructure) of the Milky Way's dark matter halo.
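As a generic sketch of the density-estimation ingredient (my own toy example, not the analysis code): an affine-coupling normalizing flow provides an exact, differentiable log-density, which is what allows the Boltzmann equation to be imposed on the fitted phase-space density.

import torch
import torch.nn as nn

class Coupling(nn.Module):
    """Affine coupling layer: transforms x1 conditioned on x0."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))
    def forward(self, x0, x1):
        s, t = self.net(x0).chunk(2, dim=1)
        s = torch.tanh(s)                        # bounded log-scale for stability
        return x1 * torch.exp(s) + t, s.sum(dim=1)

class Flow(nn.Module):
    def __init__(self, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(Coupling() for _ in range(n_layers))
        self.base = torch.distributions.Normal(0.0, 1.0)
    def log_prob(self, x):
        x0, x1 = x[:, :1], x[:, 1:]
        logdet = torch.zeros(x.shape[0])
        for layer in self.layers:
            x1, ld = layer(x0, x1)
            logdet = logdet + ld
            x0, x1 = x1, x0                      # alternate the conditioned coordinate
        z = torch.cat([x0, x1], dim=1)
        return self.base.log_prob(z).sum(dim=1) + logdet

# Fit a toy correlated 2-D "stellar" sample by maximum likelihood.
torch.manual_seed(0)
data = torch.randn(4096, 2) @ torch.tensor([[1.0, 0.6], [0.0, 0.8]])
flow = Flow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(500):
    loss = -flow.log_prob(data).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final NLL per star:", loss.item())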
We propose a novel method using the ZZ-fusion channel and forward muon detection at high-energy muon colliders to address the challenge of the Higgs coupling-width degeneracy. Our approach enables an inclusive Higgs rate measurement to 0.75% at a 10 TeV muon collider, breaking the coupling-width degeneracy. The results indicate the potential to refine Higgs couplings to sub-percent levels and to constrain the total width within (-0.41%, +2.1%). Key insights include the effectiveness of forward muon tagging in signal-background separation, despite the broad recoil-mass distribution caused by muon energy reconstruction and beam energy spread. The study emphasizes the significance of muon rapidity coverage up to $|\eta(\mu)| < 6$ in enhancing measurement precision. Our findings highlight the unique capabilities of high-energy lepton colliders for model-independent Higgs coupling determination and lay the groundwork for future advances in muon collider technology and Higgs physics research.
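The measurement hinges on the recoil mass built from the two tagged forward muons; a toy sketch of that observable (my own illustration, with made-up kinematics, not a simulated Higgs event):

import numpy as np

def m_recoil(sqrt_s, p_mu_plus, p_mu_minus):
    """Four-vectors (E, px, py, pz) in GeV; mass recoiling against the muons."""
    q = np.array([sqrt_s, 0.0, 0.0, 0.0]) - p_mu_plus - p_mu_minus
    m2 = q[0]**2 - np.dot(q[1:], q[1:])
    return np.sqrt(max(m2, 0.0))

# Toy forward muons at a 10 TeV collider; smearing these energies is what
# broadens the recoil-mass peak, as discussed above.
mu_p = np.array([4800.0, 40.0, 0.0, np.sqrt(4800.0**2 - 40.0**2)])
mu_m = np.array([4700.0, -45.0, 0.0, -np.sqrt(4700.0**2 - 45.0**2)])
print(m_recoil(10000.0, mu_p, mu_m))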
In this talk we detail how combining recent developments in flavor tagging with novel statistical analysis techniques will allow future high-energy, high-statistics electron-positron colliders, such as the FCC-ee, to place phenomenologically relevant bounds on flavor-violating Higgs and Z decays to quarks. As a proof of principle, we assess the FCC-ee reach for Z/h → bs, cu decays as a function of jet-tagging performance and compare this reach against updated SM predictions for the corresponding branching ratios, as well as against the indirect constraints on the flavor-violating Higgs and Z couplings to quarks. Additionally, we show that searches for h → bs, cu decays at the FCC-ee can probe parameter space not excluded by indirect searches, using the type-III two-Higgs-doublet model as an example of beyond-the-Standard-Model physics, and we reinterpret the FCC-ee reach for Z → bs, cu in terms of constraints on models with vector-like quarks.
Electroweak precision measurements are stringent tests of the Standard Model and sensitive probes of new physics. Accurate studies of the Z-boson couplings to the first-generation quarks could reveal potential discrepancies between the fundamental theory and experimental data. Future e+e- colliders running at the Z pole and around the ZH threshold would be an excellent tool for such a measurement, unlike the LHC, where hadronic Z decays are accessible only in boosted topologies. The measurement is based on a comparison of radiative and non-radiative hadronic decays. Because of the difference in quark charges, the relative contribution of events with final-state radiation (FSR) directly reflects the ratio of decays involving up- and down-type quarks. Such an analysis requires proper modeling and statistical discrimination between photons from different sources, including initial-state radiation (ISR), FSR, parton showers, and hadronisation. In our contribution, we show how to extract the values of the Z couplings to light quarks and present the estimated uncertainties of the measurement.
SMEFT is an efficient tool for parametrizing the effects of BSM physics in a model-independent way. We study di-Higgs and tri-Higgs production at a muon collider, parametrized by the dimension-6 mass operator. We also study di-boson and tri-boson processes, which include the production of Goldstone bosons. We discuss the possible model dependence of multi-boson processes resulting from other dimension-6 operators and identify multi-Higgs processes as a golden channel for studying deviations in the muon Yukawa coupling. Finally, we extend the study to the type-II two-Higgs-doublet model and show that the cross sections for multi-Higgs production involving heavy Higgs bosons can be enhanced by up to a large factor, providing a very sensitive probe of deviations in the muon Yukawa coupling.
The study of Higgs boson production at large transverse momentum is one of the new frontiers of the LHC Higgs physics program. This talk will present the first measurement of Higgs boson production in association with a vector boson in the fully hadronic qqbb final state, using data recorded by the ATLAS detector at the LHC in pp collisions at 13 TeV, corresponding to an integrated luminosity of 137 fb^-1. Novel jet substructure and b-tagging techniques enable the H→bb measurement despite the large irreducible QCD background. The dominant multijet background is determined directly from the data, and the extraction of the Z→bb signal is used to validate the method. The VH production cross section is measured inclusively and differentially in several ranges of Higgs boson transverse momentum: 250-450 GeV, 450-650 GeV, and greater than 650 GeV. The inclusive signal yield relative to the Standard Model expectation is observed to be $\mu = 1.4^{+1.0}_{-0.9}$.
We examine the possibility of using muon colliders to make complementary measurements if a non-zero electron EDM is observed in future experiments. All particles that couple to electroweak gauge bosons and the Higgs will contribute to leptonic EDMs through two-loop Barr-Zee diagrams. These diagrams have analogous contributions to vector boson fusion at muon colliders. We consider two minimal BSM models, the singlet-doublet and doublet-triplet fermion models, and examine how the presence of these BSM particles in Barr-Zee-like diagrams would manifest in the $W^+W^- \to hh$ process at a 10 TeV muon collider.
The newly upgraded near detector of the T2K experiment includes a novel 3D-projection tracker, the Super Fine-Grained Detector (SFGD), sandwiched between two time projection chambers equipped with resistive Micromegas. The primary goal of the upgraded near detector is to reduce systematic uncertainties associated with neutrino flux and cross-section models for future studies of neutrino oscillations. To address this, the SFGD has excellent timing resolution, full angular coverage, high light yield, and fine granularity. Among other benefits, these features provide the capability of reconstructing the kinematics of neutrons in neutrino and antineutrino beam interactions on an event-by-event basis. This capability will help uncover previously hidden details at the heart of the interaction and enable precise reconstruction of the (anti)neutrino kinematics. In this talk, I will present the ongoing effort to develop neutron-kinematics reconstruction in this novel detector, which is currently taking neutrino data.
The Deep Underground Neutrino Experiment (DUNE), hosted by the U.S. Department of Energy's Fermilab, is expected to begin operations in the late 2020s. The primary physics goals of the experiment include studying neutrino oscillations, detecting and measuring the νe flux from supernova bursts, and searching for physics beyond the Standard Model. In preparation for DUNE, prototype detectors are being built, such as ProtoDUNE Horizontal Drift (HD) and ProtoDUNE Vertical Drift (VD). These experiments have recently begun using the Hierarchical Data Format (HDF5) for some of their data-storage applications, and DUNE will use HDF5 to record raw data from ProtoDUNE HD and ProtoDUNE VD. Dedicated I/O modules have been developed to read the HDF5 data from these detectors directly into the offline framework for reconstruction. HDF5 files recently produced by the DAQ from the HD coldbox are being tested with the ProtoDUNE HD reconstruction chain in preparation for ProtoDUNE-II data taking, processing, and analysis. The ProtoDUNE reconstruction strategy runs the Wire-Cell module for pedestal evaluation, charge calibration, mitigation of readout issues, tail removal, noise suppression, and signal processing. The ProtoDUNE reconstruction software also contains modules that export data from an offline job in HDF5 format, so that they can be processed by external AI/ML software. The collaboration is also developing strategies for efficient processing of DUNE data, which requires careful attention to data formats and a redesign of the processing framework to allow sequential processing of chunks of data.
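The I/O pattern itself is generic; a minimal h5py sketch (the file and dataset names here are hypothetical, not DUNE's actual HDF5 schema):

import h5py

with h5py.File("raw_run.hdf5", "r") as f:        # hypothetical file name
    f.visit(print)                               # inspect the group hierarchy
    def read_waveforms(name, obj):
        if isinstance(obj, h5py.Dataset):
            adc = obj[...]                       # numpy array of raw ADC words
            print(name, adc.shape, adc.dtype)    # hand off to reco / AI-ML here
    f.visititems(read_waveforms)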
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton water Cherenkov experiment instrumented with Large Area Picosecond Photodetectors (LAPPDs), operating on the Booster Neutrino Beamline at Fermilab. ANNIE aims to measure the neutron yield from neutrino-nucleus interactions as a function of lepton kinematics, in order to reduce systematic uncertainties in future long-baseline neutrino oscillation experiments. ANNIE is also a test bed for novel detector technologies, including LAPPDs, whose precision timing and imaging capabilities are expected to improve the reconstruction of the lepton and of the neutrino interaction vertex. ANNIE has achieved the first successful detection of muon neutrino interactions using an LAPPD, and we show early results from these data. In particular, by studying the photon arrival-time gradient across an LAPPD for selected neutrino events, we illustrate the LAPPD's imaging and track reconstruction capabilities.
The T2K collaboration is currently upgrading the near detector of the experiment. The upgraded near detector includes the Super Fine-Grained Detector (SuperFGD), a 3D scintillator tracker that serves as the primary target for neutrino interactions. The SuperFGD is sandwiched between two high-angle time projection chambers (HA-TPCs), and the three detectors are enclosed by time-of-flight (ToF) detectors. With this configuration, the upgraded near detector provides full polar-angle acceptance for charged particles emitted from neutrino interactions, as in the far detector, Super-Kamiokande (SK). The SuperFGD also allows a lower momentum threshold for protons and an enhanced detection capability for neutrons. In this presentation, an overview of the T2K near detector upgrade will be given.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment currently under construction in the US. The experiment consists of a broadband neutrino beam from Fermilab to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, a high-precision near detector, and a large liquid argon time-projection chamber (LArTPC) far detector. The ProtoDUNE experiment, located at CERN, serves as the prototype for DUNE to validate the technology. The Trigger and Data Acquisition (TDAQ) systems are responsible for the acquisition and selection of data produced by the DUNE detectors and for their synchronization and recording. The main challenge for the DUNE TDAQ lies in developing effective, resilient software and firmware that optimize the performance of the underlying hardware. The TDAQ comprises several hardware components, interconnected by a high-performance Ethernet network that allows them to operate as a single, distributed system. At the output, a high-bandwidth Wide Area Network allows the transfer of data.
Long-baseline neutrino oscillation experiments rely on detailed models of neutrino interactions on nuclei. These models constitute an important source of systematic uncertainty, partially because detectors to date have been blind to final-state neutrons. Three-dimensional projection scintillator trackers will form components of the near detectors of next-generation long-baseline neutrino experiments. Thanks to their good timing resolution and fine granularity, this technology is capable of measuring neutrons in neutrino interactions on an event-by-event basis and will provide valuable data for refining neutrino interaction models and methods of reconstructing neutrino energy. Two prototypes were exposed to the neutron beamline at Los Alamos National Laboratory (LANL) in 2019 and 2020, with neutron energies between 0 and 800 MeV. To demonstrate the neutron-detection capability, the total neutron-scintillator cross section was measured with one of the prototypes using the 2019 data and compared to external measurements. Ongoing work includes updating this measurement with reduced systematic uncertainties using the 2020 data. The results and future prospects are presented in this talk.
This study investigates a detector designed for a 10 TeV muon collider, a proposed next-generation facility. With a target luminosity of 10 ab$^{-1}$, this facility would enable direct searches for compelling Beyond the Standard Model (BSM) scenarios as well as precision measurements of Standard Model properties. We study the impact of beam-induced background (BIB), a unique aspect of muon colliders, where the muon beam decay products affect detector performance. Distinguishing between background particles and those of interest poses challenges for detector design and event reconstruction. We concentrate on optimizing the tracker's performance through detailed studies of its resolution, efficiency, and fake rates for a variety of prompt and long-lived track signatures.
Vector-like quarks (VLQs) are hypothetical particles that appear in many new physics scenarios addressing the hierarchy problem. This talk presents a search for vector-like B quarks decaying into a top quark and a W boson, using the full CMS Run 2 proton-proton collision dataset at √s = 13 TeV. The search targets single-lepton final states containing one well-reconstructed muon or electron. The mass of the vector-like quark candidate is reconstructed from the lepton, hadronic jets, and the missing transverse momentum. This talk will highlight the expected improvements from optimizing the event selection and object identification, and from exploring background estimation with machine-learning techniques based on neural autoregressive flows (NAFs).
The recent observations of collider neutrinos and the BSM searches by the FASER collaboration highlight the potential of the forward region at the LHC for neutrino and BSM physics. In these studies, however, the dominant background comes from muons, and significant effort goes into suppressing them. In this work, we describe efforts to use these "background" muons to study muon-philic particles. In a simple model consisting of a scalar coupled to muons, we show how the FASER (FASER2) detector at the LHC (HL-LHC) can probe unconstrained regions of this model's parameter space that can solve the g-2 anomaly.
Novel heavy vector resonances are a common prediction of theories beyond the Standard Model, and the framework of simplified models provides a phenomenological bridge between these theories and the experimental limits obtained at colliders. In this talk I will introduce a simplified model for two colorless heavy vector resonances in the singlet representation of $SU(2)_L$, with zero and unit hypercharge, and discuss their phenomenology at proton colliders. I describe the semi-analytic production and decay of the charged and neutral vectors under the narrow width approximation, and show current LHC constraints, as well as sensitivity projections for the HL-LHC, HE-LHC, SPPC, and FCC-hh. The use of this simplified model is shown by matching onto three explicit models: one weakly coupled abelian and one weakly coupled non-abelian extension of the Standard Model gauge group, and a strongly coupled minimal composite Higgs model. Limits are given on the coupling and the physical resonance mass under these models, and I will use these to motivate future efforts at colliders of higher energy and luminosity.
Quirks are particles with interesting dynamics that appear in several motivated extensions of the Standard Model. Quirky bound states associated with Higgs naturalness may be copiously produced at the LHC. So far, however, collider bounds may be as weak as a few hundred GeV. I show how bound states of this type can be found using the displaced decays of hidden sector glueballs, significantly increasing their discovery potential at the LHC.
Double Higgs production plays a crucial role in assessing the trilinear Higgs self-coupling, which shapes the Higgs potential responsible for endowing elementary particles with mass. Measuring the trilinear Higgs coupling at proton colliders requires high luminosity, owing to the rarity of the Standard Model processes that involve it. Muon colliders offer distinct advantages over proton colliders and could mitigate some of the measurement challenges associated with the trilinear Higgs coupling. This talk is based on my recent publication (arXiv:2312.12594), which investigates the production of two Higgs bosons through the interaction of collinear photons emitted from high-energy muon beams, analyzed within the framework of the Higgs Triplet Model. The cross sections depend on the mass hierarchy between the doubly charged and singly charged Higgs bosons of the model. I employ both the Effective Photon Approximation (EPA) and electroweak PDFs to establish the parton distribution functions and determine the total cross sections of these processes, examining the decoupling and weak-coupling decoupling limits throughout the analysis.
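The photon-flux ingredient here is the standard leading-log Weizsäcker-Williams/EPA spectrum; a quick sketch (a textbook expression, with illustrative scale choices of my own):

#   f_gamma(x) = (alpha / 2 pi) * (1 + (1-x)^2) / x * log(Q2max / Q2min),
#   with Q2min ~ m_mu^2 x^2 / (1 - x) for a photon carrying a fraction x
#   of the muon's energy.
import math

ALPHA, M_MU = 1.0 / 137.035999, 0.1056584   # fine-structure constant; GeV

def epa_flux(x, q2max):
    q2min = M_MU**2 * x**2 / (1.0 - x)
    return ALPHA / (2.0 * math.pi) * (1.0 + (1.0 - x)**2) / x * math.log(q2max / q2min)

# e.g. photons carrying 10% of the muon energy, probed at Q ~ m_W:
print(epa_flux(0.1, 80.4**2))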
Any particle physics model exhibiting symmetry breaking is necessarily accompanied by a phase transition, taking the particle content of the universe from its initially symmetric phase to one where the underlying gauge symmetry is "broken". First-order phase transitions (FOPTs) are characterized by the rapid expansion of bubbles containing the new broken phase, which nucleate stochastically throughout space and eventually overtake the old symmetric phase. This violent transport of matter and energy on cosmological scales invariably produces a stochastic background of gravitational waves (GWs). If strong enough, these GW signals may be detectable by upcoming experiments, offering a probe of a so-far unobserved epoch of the early universe. Near the end of a FOPT, matter may be trapped within contracting pockets of the old phase, potentially leading to primordial black hole (PBH) formation. This provides additional probes, as the PBHs may be evaporating and releasing detectable Hawking radiation. Furthermore, if the PBHs have not completely evaporated, they are expected to make up some fraction of the dark matter and are subject to abundance constraints.
This talk studies these multimessenger probes of the early universe in the context of conformal $B-L$ models. The underlying $B-L$ gauge symmetry is broken with a Higgs mechanism wherein a scalar field develops a nonzero vacuum expectation value, inducing a FOPT. Right-handed neutrinos are included in the model and become trapped in the old symmetric phase, possibly leading to PBHs. We find that not only can these models be simultaneously probed with GW signals and PBH constraints, but also different experiments can probe different energy scales within $B-L$ models as each energy scale exhibits a unique detection signature.
We develop a grand unified theory of matter and forces based on the gauge symmetry $SU(5)_L\times SU(5)_R$ with parity interchanging the two factor groups. Our main motivation for such a construction is to realize a minimal GUT embedding of left-right symmetric models that provide a parity solution to the strong CP problem without the axion. We show how the gauge couplings unify with an intermediate gauge symmetry $SU(3)_{cL}\times SU(2)_{L}\times U(1)_{L}\times SU(5)_R$, and establish its consistency with proton decay constraints. The model correctly reproduces the observed fermion masses and mixings and leads to naturally light Dirac neutrinos, with their Yukawa couplings suppressed by a factor $M_I/M_G$, the ratio of the intermediate scale to the GUT scale. The model predicts $\delta_{CP} = \pm (130.4 \pm 1.2)^\circ$ and $m_{\nu_1} = (4.8-8.4)$ meV for the Dirac CP phase and the lightest neutrino mass.
We explore the possibility of probing at DarkQuest new physics particles that scatter into visible particles, through processes such as neutrino tridents and Bethe-Heitler scattering. The DarkQuest setup consists of a 120 GeV proton beam impinging on a 5 m iron block, with the detector placed 25 m from the proton source. We find that the proximity of the detector to this high-energy proton source is advantageous for probing new physics that appears through scattering in the large iron dump. We take the $L_{\mu} - L_{\tau}$ gauge boson as an example, looking at muon-antimuon signals produced through neutrino tridents mediated by the gauge boson. We find that DarkQuest can probe a major region of the parameter space that explains the $g-2$ anomaly.
Generalized global symmetries are present in theories of particle physics, and understanding their structure can give insight into these theories and their UV completions. We identify non-invertible chiral symmetries in certain flavorful Z' extensions of the Standard Model, which lead to interesting nonperturbative effects in theories of gauged non-Abelian flavor. For the leptons, we find naturally exponentially small Dirac neutrino masses. In the quark sector, a certain symmetry exists precisely because the number of colors equals the number of generations, and it leads to a massless down-type quark solution to strong CP in color-flavor unification.
We critically examine the applicability of the effective potential in dynamical situations, where it is often used in phenomenological models, and find, in short, that the answer is negative. An important caveat of using an effective potential in dynamical equations of motion is an explicit violation of energy conservation.
We introduce an adiabatic effective potential in a consistent quasi-static approximation, and its narrow regime of validity is discussed. Two ubiquitous instances in which even the adiabatic effective potential is not valid in dynamics are studied in detail: parametric amplification in the case of oscillating mean fields, and spinodal instabilities associated with spontaneous symmetry breaking. In both cases profuse particle production is directly linked to the failure of the effective potential to describe the dynamics.
We subsequently propose a consistent, renormalized, energy conserving dynamical framework that is amenable to numerical implementation. Energy conservation leads to the emergence of asymptotic highly excited, entangled stationary states from the dynamical evolution.
We study the dynamics of particle mixing induced by coupling to a common intermediate state or decay channel, which is of broad fundamental interest in the context of CP violation and/or baryogenesis. Field mixing may also be a consequence of "portals" connecting standard model degrees of freedom to hypothetical ones via mediator particles beyond the standard model. An effective equation of motion for the reduced density matrix of the two particle fields is derived to study the evolution of one- and two-point correlation functions, in which a generalized fluctuation-dissipation relation is uncovered. When the two fields are nearly degenerate, we find strong mixing of the two fields and prominent oscillations and quantum beats in their Stokes parameters. In the long-time limit, the Stokes parameters show a non-zero correlation between the two fields.
We construct tree-level amplitudes for massive particles using on-shell recursion relations based on two classes of momentum shifts: an all-line transverse shift that deforms momenta by their transverse polarization vectors, and a massive BCFW-type shift. We illustrate that these shifts allow us to correctly calculate four-point and five-point amplitudes in massive QED, without the ambiguity associated with contact terms that may arise from a simple "gluing" of lower-point on-shell amplitudes. We discuss various aspects and the applicability of the two shifts, including the large-$z$ behavior and complexity scaling. We show that there exists a "good" all-line transverse shift for all possible little-group configurations of the external particles, which can be extended to a broader class of theories with massive particles, such as massive QCD and theories with massive spin-1 particles. The massive BCFW-type shift enjoys more simplicity, but a "good" shift does not exist for all spin states, owing to the specific choice of spin axis.
We show, in a very general setup, that the linear entropy for the entanglement of a final state, resulting from a quantum 2-to-2 scattering of unentangled initial states in the plane-wave limit, is twice the scattering probability for certain outcomes. In particular, the entropy can be expressed as proportional to a scattering cross section divided by an area that characterizes the spread of the initial wave function in the transverse directions of position space. The result does not require the weak-coupling limit and holds to all orders in the coupling strength; the computation requires a careful wave-packet formulation of the initial states, though the results are independent of the details of the wave packets as long as the initial states are sufficiently close to momentum eigenstates. Furthermore, different ways of bipartitioning the final-state system result in entropies that depend on the cross sections of different scattering outcomes.
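A minimal two-outcome illustration of the stated relation (my example, not the paper's general derivation): for $|\psi\rangle = \sqrt{1-P}\,|a_0\rangle|b_0\rangle + \sqrt{P}\,|a_1\rangle|b_1\rangle$ with orthogonal branches on each factor,
\[
  \rho_A = \operatorname{Tr}_B\,|\psi\rangle\langle\psi| = (1-P)\,|a_0\rangle\langle a_0| + P\,|a_1\rangle\langle a_1|,
\]
\[
  S_{\rm lin} = 1 - \operatorname{Tr}\rho_A^{\,2} = 1 - (1-P)^2 - P^2 = 2P(1-P) \approx 2P \quad (P \ll 1),
\]
so the linear entropy is twice the scattering probability at leading order.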
Experiments conducted in the late 1950s and early 1960s provided compelling evidence that pions and kaons possess directional properties, challenging their traditional classification as pseudoscalar particles. In particular, four of these experiments, performed by four distinct research groups, each reported deviations exceeding five standard deviations from the expected result for pseudoscalar pions. During the 1950s and 1960s, the prevailing doctrine associated vector particles with spin-1 characteristics, inadvertently sidelining the pi-mu asymmetry observations because of the established spin-0 nature of pions. Recently, it has been shown that a spin-0 particle can indeed be a vector [1]. We therefore propose new pion experiments based on the spin-0 vector pion theory. Unlike earlier studies that determined only a single value, these experiments will determine how the muon distribution changes as a parameter is varied. Specifically, by varying the angle between the pion's polarization vector and its momentum vector using a magnetic field (which does not affect the polarization direction because of the pion's zero spin and zero magnetic moment), one measures the variation of the muon distribution with pion angle. (The earlier experiments indicated a muon distribution peaked in the backward direction relative to the proton beam that created the pions.) This method, coupled with an investigation of the directional properties of kaons, specifically through the decay K+ -> mu+ + neutrino, promises to shed new light on the vectorial nature of these particles.
[1] W. A. Perkins, "Massive vector particles with spin zero," EPL (Europhysics Letters) 114 (2016) 41002.
We present higher-order QCD corrections for the associated production of a top-antitop quark pair and a $W$ boson ($t{\bar t}W$ production). We calculate approximate NNLO (aNNLO) and approximate N$^3$LO (aN$^3$LO) cross sections, with second-order and third-order soft-gluon corrections added to the exact NLO QCD result, and we also include electroweak (EW) corrections through NLO. We compare our results to recent measurements from the LHC, and we find that the aN$^3$LO QCD + NLO EW predictions provide improved agreement with the data. We also calculate differential distributions in top-quark transverse momentum and rapidity and find significant enhancements from the higher-order corrections.
Entanglement is an intrinsic property of quantum mechanics and its measurement probes the current understanding of the underlying quantum nature of elementary particles at a fundamental level. A measurement of the extent of entanglement in top quark and top antiquark events produced in proton-proton collisions at a center-of-mass energy of 13 TeV is presented. The events are selected based on the presence of two oppositely charged high transverse momentum leptons and the data recorded by the CMS experiment at the CERN LHC in 2016 correspond to an integrated luminosity of 35.9 fb−1. This measurement provides a new quantum probe of the inner workings of the standard model and is sensitive to new physics contributions.
It is standard to treat the up, down, and strange quarks as "light" (non-perturbative), while the charm, bottom, and top quarks are considered "heavy" (perturbative). However, this is a somewhat simplistic picture. As I will argue in my talk, charm exhibits hints of significant rescattering effects, which is a sign of the importance of non-perturbative QCD. To make my point, I propose a parameter, a combination of hadronic matrix elements, that serves as a clean probe of rescattering effects in charm through $D \rightarrow \pi\pi$ decays. Currently, this parameter cannot be calculated from first principles. In the isospin limit, however, it can be extracted from existing experimental data. I will argue that the current data suggests the presence of significant rescattering effects in charm. A dedicated analysis with current and future data will enable us to significantly reduce the uncertainty of the determination of this parameter and allow us to verify whether there is indeed substantial rescattering in $D \rightarrow \pi\pi$ decays.
The measurement of the charge asymmetry in top quark-antiquark pairs is presented, using data collected by the CMS detector in proton-proton collisions at a center-of-mass energy of 13 TeV. The full Run 2 dataset is used, corresponding to an integrated luminosity of 138 $fb^{-1}$. Events with exactly one lepton (an electron or muon), at least two jets, and missing transverse energy are considered. The latest top-tagging techniques are used to identify hadronically decaying top quarks. Our final phase space targets both low- and high-mass regions, where the final-state objects can be isolated, semi-resolved, or highly collimated. The highly boosted events in our sample are enhanced in valence-quark production and are thus expected to be more sensitive to deviations in top quark properties caused by BSM processes. We aim to leverage precise measurements of the top quark's charge asymmetry, cross section, and spin-correlation properties to interpret any deviation from the Standard Model prediction in the framework of Effective Field Theory (EFT).
It is known that kaon CP violation could manifest itself in tau decays into neutral kaons. In particular, the CP asymmetry in $\tau\rightarrow \pi K_S \nu$ has been searched for. In this work we discuss how the measured time-integrated CP asymmetry depends on the experimental detection efficiency as a function of the energy and decay time of the kaon. We show that such dependencies of the experimental efficiency lead to a non-vanishing CP asymmetry for the decay channel into two kaons, $\tau\rightarrow\pi K_S K_L \nu$, which is a background to the decay mode into a single $K_S$. We derive a theoretical prediction for this background asymmetry and discuss its experimental relevance.
We revisit the behavior of neutron interpolating currents under singlet chiral rotations and show that not all interpolating currents are suitable for calculating chirality-sensitive quantities. In particular, for the $\theta$-induced neutron EDM, we show that the $\beta=1$ and $\beta=-1$ currents give physical answers that depend only on $\bar{\theta}=\theta_m+\theta_G$ after removing an overall phase, while the $\beta=0$ current leads to an unphysical dependence on the chiral rotation angle.