The 2024 edition of the APS Division of Particles & Fields (DPF) Meeting will be hosted collaboratively by the University of Pittsburgh and Carnegie Mellon University in Pittsburgh. This meeting will be combined with the annual Phenomenology Symposium (Pheno) for a single year, and the joint event is set to take place from May 13 to 17, 2024. It will cover the latest topics in particle physics theory and experiment, plus related issues in astrophysics and cosmology. We would like to encourage you to register and submit an abstract for a parallel talk. All talks at the symposium are expected to be in person. There will be no poster sessions at this conference. The conference adjourns at 1:00 PM, Friday May 17. We hope to see you in May!
Early registration ends April 8, 2024
Parallel talk abstract submission deadline has been extended to April 8, 2024
Registration closes April 19 (Friday), 2024
Student travel award application deadline April 1, 2024
--- News ---
TOPICS TO BE COVERED:
PLENARY PROGRAM SPEAKERS:
Zeeshan Ahmed, Aram Apyan, Ketevi Assamagan, Carlos Arguelles, Elke Aschenauer, Christian Bauer, Tulika Bose, Andrew Brinkerhoff, Gabriella Carini, Valentina Dutta, Peter Elmer, Mark Elsesse, Jonathan Feng, Carter Hall, Erin Hansen, Roni Harnik, David Hertzog, Kevin Kelly, Kiley Kennedy, Peter Lewis, Elliot Lipeles, Kendall Mahn, Sudhir Malik, Julie Managan, Rachel Mandelbaum, Ethan Neil, Tim Nelson, Laura Reina, David Saltzberg, Mayly Sanchez, Kate Scholberg, Vladimir Shiltsev, Jesse Thaler, Jaroslav Trnka, Sven Vahsen, Daniel Whiteson, Jure Zupan, Kathryn Zurek ...
SPECIAL EVENTS:
Conference Reception: Monday May 13
Early Career Forum: Tuesday May 14 at noon
Public Lecture by Prof. Hitoshi Murayama: Tuesday May 14
Conference Banquet: Thursday May 16
Student Travel Awards: With support from DPF, a number of awards (up to $300 each) are available to graduate students from US institutions for travel and accommodation at DPF-Pheno 24. Applicants should email an updated CV, a statement of financial need, and an indication of their talk submission to DPF-Pheno 24 to dpfpheno24@pitt.edu with the subject line "DPF-Pheno 24 travel assistance", and should arrange for a short recommendation letter to be sent by their thesis advisor. Decisions will be based on academic qualifications and financial need. The application deadline is April 1 (the same as the abstract submission deadline), and winners will be notified by April 19. Winners' names and institutions will be announced at the conference banquet.
DPF - PHENO 2024 PROGRAM COMMITTEE: Todd Adams (Florida State U.), Andrea Albert (LANL), Timothy Andeen (U. Texas), Emanuela Barberis (Northeastern U.), Robert Bernstein (FNAL), Sapta Bhattacharya (Wayne State), Tom Browder (U. Hawaii), Stephen Butalla (Florida Tech), Joel Butler (FNAL), Mu-Chun Chen (UC Irvine), Sekhar Chivukula (UC San Diego), Sarah Demers (Yale U.), Dmitri Denisov (BNL), Bertrand Echenard (Caltech), Sarah Eno (U. Maryland), Andre de Gouvea (Northwestern U.), Tao Han (U. Pittsburgh), Mike Kordosky (William and Mary), Mark Messier (Indiana U.), Marco Muzio (Penn State U.), Jason Nielsen (UC Santa Cruz), Vaia Papadimitriou (FNAL), Manfred Paulini (CMU), Heidi Schellman (Oregon State Univ.), Gary Shiu (U. Wisconsin-Madison), Tom Shutt (SLAC), Mayda Velasco (Northwestern U.), Gordon Watts (U. Washington), Peter Winter (ANL).
DPF - PHENO 2024 LOCAL ORGANIZING COMMITTEE: John Alison (CMU), Brian Batell (U. Pitt), Amit Bhoonah (U. Pitt), Matteo Cremonesi (CMU), Arnab Dasgupta (U. Pitt), Valentina Dutta (CMU), Ayres Freitas (U. Pitt), Akshay Ghalsasi (U. Pitt), Joni George (U. Pitt), Grace Gollinger (U. Pitt), Tao Han (U. Pitt), Tae Min Hong (U. Pitt), Arthur Kosowsky (U. Pitt), Da Liu (U. Pitt), Matthew Low (U. Pitt), James Mueller (U. Pitt), Donna Naples (U. Pitt), Vittorio Paolone (U. Pitt), Diana Parno (CMU), Manfred Paulini (CMU), Andrew Zentner (U. Pitt).
The lightest supersymmetric particles could be higgsinos that have a small mixing with gauginos. If the lightest higgsino-like state makes up some or all of the dark matter with a thermal freezeout density, then its mass must be between about 100 and 1150 GeV, and dark matter searches put bounds on the amount of gaugino contamination that it can have. Motivated by the generally good agreement of flavor- and CP-violating observables with Standard Model predictions, I consider models in which the scalar particles of minimal supersymmetry are heavy enough to be essentially decoupled, except for the 125 GeV Higgs boson. I survey the resulting purity constraints as lower bounds on the gaugino masses and upper bounds on the higgsino mass splittings. I also discuss the mild excesses in recent soft lepton searches for charginos and neutralinos at the LHC, and show that they can be accommodated in these models if $\tan\beta$ is small and $\mu$ is negative.
A search for ``emerging jets'' produced in proton-proton collisions at a center-of-mass energy of 13 TeV is performed using data collected by the CMS experiment corresponding to an integrated luminosity of 138 fb$^{-1}$. This search examines a hypothetical dark quantum chromodynamics (QCD) sector that couples to the standard model (SM) through a scalar mediator. The scalar mediator decays into an SM quark and a dark sector quark. As the dark sector quark showers and hadronizes, it produces long-lived dark mesons that subsequently decay into SM particles, resulting in a jet, known as an emerging jet, with multiple displaced vertices. This search looks for pair production of the scalar mediator at the LHC, which yields events with two SM jets and two emerging jets at leading order. The results are interpreted using two dark sector models with different flavor structures, and exclude mediator masses up to 1950 (1850) GeV for an unflavored (flavor-aligned) dark QCD model. The unflavored results surpass a previous search for emerging jets by setting the most stringent mediator mass exclusion limits to date, while the flavor-aligned results provide the first direct mediator mass exclusion limits for this scenario.
Minimal Dark Matter models extend the Standard Model by incorporating a single electroweak multiplet, with its neutral component serving as a candidate for the thermal relic dark matter in the Universe. These models predict TeV-scale particles with sub-GeV mass splittings $\Delta$. Collider searches aim at producing the charged member of the electroweak multiplet which then decays into dark matter and a charged particle. Traditionally, these searches involve signatures of missing energy and disappearing tracks. Due to the small size of $\Delta$, the transverse momentum of this charged particle is too soft to be resolved at hadron colliders. In this talk, I show that a Muon Collider is capable of detecting these soft charged decay products, providing a means to discover TeV thermal relics with an almost degenerate charged companion. Our technique also facilitates the determination of $\Delta$, allowing for a comprehensive characterization of the dark sector. Our results indicate that a 3 TeV muon collider will have the capability to discover the highly motivated thermal Higgsino-like dark matter candidate as well as other scenarios of Minimal Dark Matter featuring larger multiplets whose neutral component corresponds to a fraction of the total dark matter in the Universe. This study highlights the potential of a muon collider to make significant discoveries even at its early stages of operation.
Dark portals like the gauge, Higgs, and neutrino portals are well-motivated extensions of the standard model (SM). These portals may lead to interactions between dark matter and the SM. In some scenarios, the mediator predominantly decays invisibly, making it challenging to constrain. The prospect of a future muon collider has triggered growing interest in the particle physics community. We show how its clean environment and high luminosity can lead to the strongest bounds for mediator masses of O(10-100) GeV, even though the proposed collider will have a very high center-of-mass energy of a few TeV.
The search for dark matter (DM) continues, with increasingly sensitive detectors at the WIMP scale, and novel detection techniques for discovering sub-GeV DM. In this talk I highlight two types of directionally sensitive experiments, in which the DM signal can be distinguished from the low-energy backgrounds. A new, highly efficient computational method can streamline the theory predictions, reducing the evaluation time by up to seven orders of magnitude.
Cosmic ray (CR) upscattering of dark matter is one of the most straightforward mechanisms to accelerate ambient dark matter, making it detectable at high threshold, large volume experiments. In this work, we revisit CR upscattered dark matter signals at the IceCube detector, considering both proton and electron scattering, in the former case including both quasielastic and deep inelastic scattering. We consider both scalar and vector mediators over a wide range of mediator masses, and use lower energy IceCube data than has previously been used to constrain such models. We show that our analysis sets the strongest existing constraints on cosmic ray boosted dark matter over much of the eV - MeV mass range.
We study the physics of the intermediate scattering regime for boosted dark matter (BDM) interacting with standard model (SM) target nucleons. The phenomenon of BDM, which is consistent with many possible DM models, occurs when DM particles receive a Lorentz boost from some process. BDM would then exhibit similar behavior to neutrinos as it potentially interacts, at relativistic speeds, in terrestrial neutrino detectors, producing (in)direct DM signatures in these experiments; this is in contrast to recoil experiments, which probe the interactions of the non-relativistic DM halo in our solar system. We investigate the intermediate scattering regime, between elastic and inelastic events, of such processes involving BDM at energies of order 1-2 GeV where resonant scattering processes occur. This work will be implemented as an event generator in the GENIE code for use in future experiments such as the LArTPCs at DUNE.
We perform a global fit of dark matter interactions with nucleons using a non-relativistic effective operator description, considering both direct detection and neutrino data. We examine the impact of combining the direct detection experiments CDMSlite, CRESST-II, CRESST-III, DarkSide-50, LUX, LZ, PandaX-II, PandaX-4T, PICO-60, SIMPLE, SuperCDMS, XENON100, and XENON1T along with neutrino data from IceCube and Deepcore, ANTARES, and Super-Kamiokande. While current neutrino telescope data lead to increased sensitivity compared to underground nuclear scattering experiments for dark matter masses above 100 GeV, our future projections show that the next generation of underground experiments will significantly outpace solar searches for most dark matter-nucleon elastic scattering interactions.
A sub-component of dark matter with a short collision length compared to a planetary size leads to efficient accumulation of dark matter in astrophysical bodies. Such particles represent an interesting physics target since they can evade existing bounds from direct detection due to their rapid thermalization in high-density environments. In this talk, I will demonstrate that terrestrial probes, such as, large-volume neutrino telescopes as well as commercial/research nuclear reactors, can provide novel ways to constrain or discover such particles.
We propose anti-ferromagnets as optimal targets to hunt for sub-MeV dark matter with spin-dependent interactions. These materials allow for multi-magnon emission even for very small momentum transfers, and are therefore sensitive to dark matter particles as light as the keV. We use an effective theory to compute the event rates in a simple way. Among the materials studied here, we identify nickel oxide (a well-assessed anti-ferromagnet) as an ideal candidate target. Indeed, the propagation speed of its gapless magnons is very close to the typical dark matter velocity, allowing the absorption of all its kinetic energy, even through the emission of just a single magnon.
In this study, we present the development of a portable cosmic muon tracker tailored for both on-site measurements of cosmic muon flux and outreach activities. The tracker comprises two 7cm x 7cm plastic scintillators, wavelength shifting (WLS) fibers, and Hamamatsu SiPMs (S13360-2050VE). The detector utilizes plastic scintillator panels optically coupled to WLS fibers, transmitting scintillation light to the SiPMs. SiPM outputs are routed to a PCB equipped with op-amp amplifiers and a peak hold circuit, connected to an ESP32 microcontroller module. When muons traverse both scintillators, the light emitted triggers the SiPMs, generating signals proportional to the light intensity. These signals are then amplified, and the pulse peak is held for 500 microseconds. The peak analog voltage is subsequently digitized using the onboard ADC in the ESP32. Continuously measuring and recording peak values, the ESP32 triggers muon detection when both peaks surpass a set threshold. The SiPMs are powered by a high-voltage bias supply module, while a BMP180 module measures temperature and pressure. For real-time event tagging, a GPS module is interfaced with the ESP32. Housed within an acrylic box measuring 10 x 10 x 10 cm, the detector can be powered using a 5V 1A USB power bank. Additionally, a mobile app allows for real-time monitoring. This versatile and cost-effective portable detector facilitates cosmic muon research in various experimental settings. Its portability and low power requirements enable on-site measurements in environments such as tunnels, caves, and high altitudes.
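The two-channel trigger decision described above can be sketched in a few lines. This is a toy illustration, not the experiment's firmware: the threshold value, function names, and the (top, bottom) peak-voltage format are all assumptions.

```python
THRESHOLD_V = 0.5  # assumed trigger threshold in volts (illustrative only)

def is_coincidence(peak_top_v, peak_bottom_v, threshold_v=THRESHOLD_V):
    # Flag a muon only when BOTH scintillator peaks exceed the threshold;
    # requiring coincidence suppresses single-channel SiPM dark counts.
    return peak_top_v > threshold_v and peak_bottom_v > threshold_v

def count_muons(peak_pairs):
    # peak_pairs: iterable of (top, bottom) digitized peak voltages,
    # standing in for the values read from the ESP32 ADC.
    return sum(1 for top, bottom in peak_pairs if is_coincidence(top, bottom))
```

Only the second event below fails the coincidence requirement on the top channel, so two of three candidate events are counted.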
The Mu2e experiment at Fermilab will conduct a world-leading search for Charged Lepton Flavour Violation (CLFV) in neutrino-less muon-to-electron conversion in the field of a nucleus. In doing so, it will provide a powerful probe into physics beyond the Standard Model, which can greatly enhance the rates of CLFV processes. To accomplish this measurement, which will constitute an $\mathcal{O}(10^{4})$ improvement as compared to previous measurements, Mu2e must have excellent control over potential backgrounds: requiring less than one background event for $\mathcal{O}(10^{18})$ muons stopped over the lifetime of the experiment. One such background arises from cosmic muons, which are expected to result in approximately one background event per day. Mu2e will defeat these cosmic ray background events with an active shielding system: a large-area cosmic ray veto (CRV) detector enclosing the apparatus, with the ability to identify and veto cosmic ray muons with an average efficiency of 99.99$\%$. This talk will briefly describe the Mu2e apparatus, the design of the CRV, its expected performance, and its present status in preparation for physics data-taking in 2026.
LYSO crystals are radiation-hard, non-hygroscopic, have a light yield of $\sim30,000\,\gamma$/MeV, a 40-ns decay time, and a radiation length of just 1.14 cm. Conventional photosensors work naturally at the LYSO peak wavelength of 420 nm. These properties suggest that an electromagnetic calorimeter made from LYSO should be ideal for high-rate, low-energy precision experiments where high resolution is imperative at energies below 100\,MeV. Yet, few examples exist, and the performance of previous prototypes did not achieve the energy resolution that the light-yield specifications might suggest. We have been designing a large-solid-angle, approximately spherical calorimeter made of tapered LYSO crystals for possible use in a new measurement of the branching ratio $R_{e/\mu} = \Gamma(\pi^+\rightarrow e^+\nu(\gamma))/\Gamma(\pi^+\rightarrow \mu^+\nu(\gamma))$. The $\pi$-to-$e$ decay emits a 69 MeV positron, to be measured against the continuum of $<53\,$MeV Michel positrons from muon decay. I will present our studies obtained with an array of recently optimized LYSO crystals made by SICCAS. We have obtained excellent results in bench tests with various sources, an array test with a 17.6 MeV $\gamma$ source from a $p$-Li reaction, and a test-beam run at the Paul Scherrer Institute using a positron beam of excellent momentum resolution spanning 30 to 100 MeV.
We present a calculation of QED radiative corrections to low-energy electron-proton scattering at next-to-leading order. This work builds on that performed previously by Maximon and Tjon, which relied on the soft photon approximation for the two-photon exchange diagram. Our calculations account for the finite size of the proton through electromagnetic dipole form factors and relax the approximation made in this earlier work. Comparisons are provided over the same kinematic ranges as those used in Maximon and Tjon. In addition, we will discuss the impact of these corrections on several kinematic distributions.
Electron-positron pair production and hadron photoproduction are the most important beam-induced backgrounds at linear electron-positron colliders. Predicting them accurately governs the design and optimization of detectors at these machines, and ultimately their physics reach. With the proposal, adoption, and first specification of the C3 collider concept, it is of primary importance to estimate these backgrounds and begin the process of tuning existing linear collider detector designs to fully exploit the parameters of the machine. We will report on the status of estimating both of these backgrounds at C3 using the SiD detector concept, and discuss the effects of the machine parameters on the preliminary detector and electronics design.
We present a decision tree-based implementation of autoencoder anomaly detection, a novel algorithm in which a forest of decision trees is trained only on background and used as an anomaly detector. The fwX platform is used to deploy the trained autoencoder on FPGAs within the latency and resource constraints demanded by Level-1 trigger systems. Results are presented with two datasets: a BSM Higgs decay to pseudoscalars with a $2\gamma 2b$ final state, and the LHC physics dataset for unsupervised New Physics detection. Finally, the effects of signal contamination on the training set are presented, demonstrating the possibility of training on data.
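The forest deployed on fwX is far more sophisticated, but the core idea, that the reconstruction error of a model trained only on background serves as an anomaly score, can be sketched with a depth-1 "tree autoencoder" on two toy features. All names and the toy data below are illustrative assumptions, not the talk's actual algorithm.

```python
import statistics

def fit_stump(x, y):
    # Depth-1 regression tree: split x at its median and predict the
    # mean of y on each side of the split.
    threshold = statistics.median(x)
    left = [yi for xi, yi in zip(x, y) if xi <= threshold]
    right = [yi for xi, yi in zip(x, y) if xi > threshold]
    left_mean = statistics.mean(left) if left else statistics.mean(y)
    right_mean = statistics.mean(right) if right else statistics.mean(y)
    return threshold, left_mean, right_mean

def predict_stump(stump, xi):
    threshold, left_mean, right_mean = stump
    return left_mean if xi <= threshold else right_mean

def fit_tree_autoencoder(background):
    # background: list of (f0, f1) events with NO signal labels; each
    # feature is "reconstructed" from the other, autoencoder-style.
    f0 = [event[0] for event in background]
    f1 = [event[1] for event in background]
    return fit_stump(f1, f0), fit_stump(f0, f1)

def anomaly_score(model, event):
    # Squared reconstruction error: large for events unlike the background.
    stump_f0, stump_f1 = model
    r0 = predict_stump(stump_f0, event[1])
    r1 = predict_stump(stump_f1, event[0])
    return (event[0] - r0) ** 2 + (event[1] - r1) ** 2
```

On background where the two features are correlated, a background-like event reconstructs well and scores low, while an event breaking the correlation scores high.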
This work is detailed in 2304.03836. New physics studies are shown with respect to last year's presentation at Pheno 2023.
The Compact Muon Solenoid (CMS) detector at the CERN LHC produces a large quantity of data that requires rapid and in-depth quality monitoring to ensure its validity for use in physics analysis. These assessments are often done by visual inspection which can be time consuming and prone to human error. In this talk, we introduce the “AutoDQM” system for Automated Data Quality Monitoring in CMS to enable prompt and accurate data assessment. AutoDQM uses a beta-binomial probability function, principal component analysis, and autoencoders for anomaly detection. These algorithms were tested on already-validated data collected by CMS in 2022. The algorithms were able to identify anomalous “bad” data-taking runs at a rate 5-6 times higher than “good” runs suitable for physics analysis, demonstrating AutoDQM’s effectiveness in improving data quality monitoring.
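As a minimal sketch of the PCA ingredient of AutoDQM (the beta-binomial test and autoencoders are not shown), one can score a run's monitoring histogram by its reconstruction error against principal components learned from already-validated "good" runs. The data shapes, function names, and thresholding here are assumptions.

```python
import numpy as np

def fit_pca(good_runs, n_components=1):
    # good_runs: (n_runs, n_bins) array of monitoring histograms from
    # runs already validated as good.
    X = np.asarray(good_runs, dtype=float)
    mean = X.mean(axis=0)
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(model, hist):
    # Project a new run's histogram onto the learned components; a large
    # residual flags the run as a "bad" (anomalous) candidate.
    mean, components = model
    centered = np.asarray(hist, dtype=float) - mean
    projected = components.T @ (components @ centered)
    return float(np.sum((centered - projected) ** 2))
```

A histogram that is just a rescaling of the good-run template reconstructs almost perfectly, while a shape distortion leaves a large residual.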
We present R-Anode, a new method for data-driven, model-agnostic resonant anomaly detection that raises the bar for both performance and interpretability. The key to R-Anode is to enhance the inductive bias of the anomaly detection task by fitting a normalizing flow directly to the small and unknown signal component, while holding fixed a background model (also a normalizing flow) learned from sidebands. In doing so, R-Anode is able to outperform all classifier-based, weakly-supervised approaches, as well as the previous Anode method which fit a density estimator to all of the data in the signal region instead of just the signal. We show that the method works equally well whether the unknown signal fraction is learned or fixed, and is even robust to signal fraction misspecification. Finally, with the learned signal model we can sample and gain qualitative insights into the underlying anomaly, which greatly enhances the interpretability of resonant anomaly detection and offers the possibility of simultaneously discovering and characterizing the new physics that could be hiding in the data.
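R-Anode itself fits normalizing flows; as a hedged toy of the underlying split, holding the background density fixed while fitting only the small signal component, one can maximize a mixture likelihood over the signal fraction with simple analytic stand-in densities. Every shape and name below is illustrative, not the paper's implementation.

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Analytic stand-in for a learned density model.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_signal_fraction(data, p_background, p_signal, grid_points=101):
    # Maximize sum_i log[ w * p_signal(x_i) + (1 - w) * p_background(x_i) ]
    # over the signal fraction w, with the background density held FIXED,
    # mirroring the fixed-background / fitted-signal structure of R-Anode.
    best_w, best_loglike = 0.0, float("-inf")
    for i in range(grid_points):
        w = i / (grid_points - 1)
        loglike = sum(
            math.log(w * p_signal(x) + (1 - w) * p_background(x)) for x in data
        )
        if loglike > best_loglike:
            best_w, best_loglike = w, loglike
    return best_w
```

With five points drawn near a unit-Gaussian "background" and three near a narrow "signal" at 3, the fitted fraction lands near the true 3/8.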
Anomaly detection is a promising, model-agnostic strategy to find physics beyond the Standard Model. State-of-the-art machine learning methods offer impressive performance on anomaly detection tasks, but interpretability, resource, and memory concerns motivate considering a wide range of alternatives. We explore using the 2-Wasserstein distance from optimal transport theory, both as an anomaly score and as input to interpretable machine learning methods, for event-level anomaly detection at the Large Hadron Collider. The choice of ground space plays a key role in optimizing performance. We comment on the feasibility of implementing these methods in the L1 trigger system.
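The event-level construction in the talk depends on the choice of ground space; as a self-contained one-dimensional stand-in, the 2-Wasserstein distance between two equal-size empirical samples reduces to matching sorted order statistics. Function names here are assumptions for illustration.

```python
def wasserstein2(sample_a, sample_b):
    # Exact 1-D 2-Wasserstein distance between two equal-size empirical
    # distributions: the optimal transport plan pairs sorted order statistics.
    if len(sample_a) != len(sample_b):
        raise ValueError("samples must have equal size")
    a, b = sorted(sample_a), sorted(sample_b)
    mean_sq = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return mean_sq ** 0.5

def wasserstein_anomaly_score(event_sample, reference_sample):
    # Distance of an event's constituent distribution from a background-like
    # reference distribution; larger means more anomalous.
    return wasserstein2(event_sample, reference_sample)
```

Shifting every constituent of a sample by a constant shifts the distance by exactly that constant, which makes the score easy to sanity-check.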
Understanding the Higgs boson, both in the context of Standard Model physics and beyond-the-Standard Model hypotheses, is a key problem in modern particle physics. An increased understanding could come from the detection and analysis of pairs of Higgs bosons produced at hadron colliders. While such Higgs pairs have not yet been observed at the Large Hadron Collider (LHC), it is likely that they will be detected within the next few years at the High-Luminosity LHC. In this study, we show how a machine-learning-based Higgs pair analysis can constrain several dimension-6 SMEFT Wilson coefficients in the Higgs sector. We find that including shape-level information, e.g. in the form of the distributions of kinematic observables, in such analyses is likely to place tighter constraints on the coefficients than a rate-only analysis.
A model based on a $U(1)_{T 3R}$ extension of the Standard Model can address the mass hierarchy between the third and the first two generations of fermions, explain the thermal dark matter abundance, and accommodate the muon $g - 2$ and $R_{K^{(*)}}$ anomalies. The model contains a light scalar boson $\phi'$ and a heavy vector-like quark $\chi_u$ that can be probed at CERN's Large Hadron Collider (LHC). We perform a phenomenology study of the production of $\phi'$ and ${\chi}_u$ particles from proton-proton $(pp)$ collisions at the LHC at $\sqrt{s}=13$ TeV, primarily through $g{-g}$ and $t{-\chi_u}$ fusion. We adopt a phenomenological effective field theory framework in which the $\chi_u$ and $\phi'$ masses are free parameters, and consider final states with the $\chi_u$ decaying to $b$-quarks, muons, and MET from neutrinos and the $\phi'$ decaying to $\mu^+\mu^-$. The analysis is performed using machine learning algorithms, rather than traditional methods, to maximize the signal sensitivity with integrated luminosities of $150$, $300$, and $3000$ fb$^{-1}$. Further, we note the proposed methodology can be a key mode for discovery over a large mass range, including low masses, traditionally considered difficult due to experimental constraints.
Charged Lepton Flavor Violation (cLFV) stands as a compelling frontier in the realm of particle physics, offering a unique window into the mysteries of flavor physics beyond the Standard Model. I will provide a comprehensive overview of the current experimental landscape and future prospects.
A survey of ongoing experimental efforts will be presented, highlighting recent breakthroughs and advancements in the field. Various experiments, ranging from high-energy accelerators to precision low-energy experiments, will be discussed, shedding light on the diverse strategies employed to detect elusive cLFV signals.
Furthermore, the talk will delve into the challenges faced by experimentalists and the ingenious techniques developed to overcome these obstacles. Emphasis will be placed on the interplay between theory and experiment, underscoring the importance of a collaborative approach in pushing the boundaries of our understanding.
In anticipation of the future, the presentation will explore upcoming experiments and their potential to provide crucial insights into cLFV. Novel technologies, experimental designs, and anticipated sensitivities will be discussed, offering a glimpse into the promising avenues that lie ahead.
By the end of the talk, attendees will gain a thorough appreciation of the dynamic landscape of experimental efforts in charged lepton flavor violation.
Neutrino oscillations have shown that lepton flavor is not a conserved quantity. Charged lepton flavor violation (CLFV) is suppressed by the small neutrino masses to rates well below what is experimentally observable, while lepton number violation (LNV) is forbidden in the SM extended to include neutrino masses. New physics models predict higher rates of CLFV and allow for LNV. The CLFV $\mu^- \rightarrow e^-$ conversion process and the CLFV and LNV $\mu^- \rightarrow e^+$ conversion process are sensitive to a wide range of new physics models.
$\mu^- \rightarrow e^+$ conversion is complementary to $0\nu\beta\beta$ decay and may be sensitive to flavor effects that $0\nu\beta\beta$ decay is insensitive to. A key background to the search for $\mu^- \rightarrow e^+$ conversion is radiative muon capture (RMC). Previous muon conversion experiments have had difficulty describing the RMC background when searching for $\mu^- \rightarrow e^+$ conversion. The Mu2e experiment at FNAL aims to improve the sensitivity to $\mu^- \rightarrow e^-$ conversion by a factor of 10,000. In order to make a similar improvement in the sensitivity to $\mu^- \rightarrow e^+$ conversion, the RMC background will need to be well understood. I will discuss RMC and previous $\mu^- \rightarrow e^+$ conversion searches, and then the upcoming $\mu^- \rightarrow e^+$ conversion search at the Mu2e experiment.
Charged lepton flavor violation is an unambiguous signature of New Physics. Current experimental status and future prospects from the electron-positron colliders are discussed. Discovery potential of New Physics models with charged lepton flavor violation as its experimental signature are also presented.
Lepton flavor universality (LFU) is an assumed symmetry in the Standard Model (SM). The violation of lepton flavor universality (LFUV) would be a clear sign of physics beyond the Standard Model and has been actively searched for in both small- and large-scale experiments. One of the most stringent tests of LFU comes from precision measurements of rare decays of light mesons. In particular, the ratio of branching fractions for charged pion decays, $R^{\pi}_{e/\mu}=\Gamma(\pi\rightarrow e\nu(\gamma))/\Gamma(\pi\rightarrow \mu\nu (\gamma))$, has tested LFU at the 0.1% level. However, while the value of $R^{\pi}_{e/\mu}$ is predicted to a precision of $10^{-4}$ in the SM, there is an opportunity to improve the experimental probing of LFU by another order of magnitude. In this talk, I will introduce the PIONEER experiment, recently approved at the Paul Scherrer Institute (PSI), Switzerland, which aims to bridge the gap between the precision of the SM predictions and that of the measurements. Besides leveraging the intense charged pion beam at PSI, the PIONEER experiment adopts several cutting-edge detector technologies, including a fully active 4-D silicon target stopping the pions, a high-performance trigger and data acquisition system, and a liquid xenon calorimeter with excellent energy resolution and fast response. In addition to the precision measurement of $R^{\pi}_{e/\mu}$, the PIONEER experiment will also improve the search sensitivity to new physics beyond the standard model through searches for exotic pion decays, such as those involving sterile neutrinos. Future phases of the PIONEER experiment with higher intensity will contribute to the test of the unitarity of the CKM matrix through a precision measurement of pion beta decay, leading to a precise determination of $V_{ud}$.
We show how the Mu3e experiment can improve sensitivity to light new physics by taking advantage of the angular distribution of the decay products. We also propose a new search at Mu3e for flavor-violating axions through the decay $\mu \rightarrow 3e + a$, which circumvents the calibration challenges that plague the $\mu \rightarrow e\,a$ search.
Like the weak interaction itself, the Higgs coupling to the left chiral components of the Dirac bispinors for quarks "knows" which up goes with which down in the universal coupling. However, the simple conjecture that the right chiral components of each are not so distinguished provides for a consistent determination of the quark mass spectra and of the CKM matrix relating their mass eigenstates (flavors) in terms of general, but perturbative, BSM corrections. The extension to charged leptons follows the same pattern. The absence of right-chiral components of Dirac bispinors for neutrinos in the SM, and the corresponding mass-independent definition of the flavors of the left-chiral Weyl neutrinos, leads naturally to the PMNS matrix being almost tri-bi-maximal, since the charged lepton flavors are defined by their mass. However, a very different structure for the origin of neutrino mass is then required, which we conjecture is related to the Dark Matter nature of the right-chiral components, whether they complete neutrinos to Dirac bispinors or form Majorana neutrinos via the see-saw mechanism.
Upcoming cosmological surveys will probe the impact of a non-zero sum of neutrino masses on the growth of structures. These measurements are sensitive to the behavior of neutrinos at cosmic distances, making them a perfect testbed for neutrino physics beyond the standard model at long ranges. In this talk, I will introduce a novel signal from long-range self-interactions between neutrinos. In the late-time universe, this interaction triggers the Jeans instability in the cosmic neutrino background. As a result, the cosmic neutrino background forms macroscopic bound states and induces large isocurvature perturbations in addition to the cold dark matter density perturbations. This enhancement of matter perturbation is uniquely probed by late-time cosmological observables. We find that with the minimum sum of neutrino masses measured by neutrino oscillation experiments, the current SDSS data already place strong constraints on the long-range neutrino self-interactions for interaction range greater than kpc.
The talk will still be about the same generalization of QM but more focused on the difference in the interference pattern of two paths in canonical QM vs. this generalization of QM. The reason for this change is that I have made much more progress in this aspect than the topic of my current abstract. As such, I believe it would be more fruitful to talk about this work as opposed to higher-order interference, a work still in progress.
In this talk, we discuss the cosmological effects of a tower of neutrino states (equivalently, a tower of warm dark matter) on the cosmic microwave background (CMB) and large-scale structure. For concreteness, we consider the $N$-naturalness model, a proposed mechanism to solve the electroweak hierarchy problem. The model predicts a tower of neutrino states, which act as warm dark matter, with increasing mass and decreasing temperature compared to the standard model neutrino. Compared to a single neutrino state, such a neutrino tower induces a more gradual suppression of the matter power spectrum. The suppression increases with the total number of states in the neutrino tower.
We explore these effects quantitatively in the scalar $N$-naturalness model and show the parameter space allowed by CMB, weak lensing, and Lyman-$\alpha$ datasets. We find that the neutrino-induced suppression of the power spectrum at small scales places stringent constraints on the model. We emphasize the need for a faster Boltzmann solver to study the effects of the tower of neutrino states on smaller scales.
Natural anomaly-mediated supersymmetry breaking (nAMSB) models arise from modifications of anomaly-mediated SUSY breaking models to avoid conflicts with the measured Higgs mass, constraints from searches for wino-like WIMPs, and bounds from naturalness. nAMSB models still feature the wino as the lightest gaugino, but the higgsinos become the lightest EWinos. In nAMSB models with soft SUSY breaking in a sequestered sector, the Higgs mass is maintained at $m_h\sim125$ GeV, and sparticle masses fall within the LHC bounds. We explore model lines over the gravitino mass $m_{3/2}$ and find that the lower portion of the parameter space is excluded by gluino pair searches, while the upper portion is excluded by gaugino pair searches. The middle range of $m_{3/2}\sim90 - 200$ TeV is expected to be fully testable at the HL-LHC via the following discovery channels: soft dileptons and trileptons from higgsino pair production, same-sign diboson production, trileptons from wino pair production, and top squark pair production.
Supersymmetric models with low electroweak fine-tuning are more prevalent on the string landscape than fine-tuned models. We assume a fertile patch of landscape vacua containing the minimal supersymmetric standard model (MSSM) as a low-energy EFT. Such models are characterized by light higgsinos in the mass range of a few hundred GeV, whilst top squarks lie in the 1-2.5 TeV range. Other sparticles are generally beyond current LHC reach. We evaluate the prospects for top squark searches in this natural SUSY scenario at the HL-LHC.
Supersymmetry is an appealing theoretical extension of the Standard Model because this framework presents a viable dark matter candidate. Several CMS analyses have searched for evidence of supersymmetry at the electroweak scale in the compressed region, where the parent sparticle mass is close to that of the child, leading to soft Standard Model decay products that can be difficult to reconstruct. The latest results from several Run 2 CMS analyses are presented with data from proton-proton collisions at a 13 TeV center-of-mass energy, with an integrated luminosity of up to 138 fb$^{-1}$. These analyses target a variety of final states and employ a suite of methods to set stringent limits on several types of supersymmetric models.
A search is presented for the pair-production of charginos and the production of a chargino and neutralino in a supersymmetric model where the near mass-degenerate chargino and neutralino each decay via $R$-parity-violating couplings to a Higgs boson and a charged lepton or neutrino. This analysis searches for a Higgs-lepton resonance in data corresponding to an integrated luminosity of 139 fb${}^{-1}$ recorded in proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector at the Large Hadron Collider at CERN.
A search is presented for the direct pair production of scalar tops, each of which decays through an $R$-parity-violating coupling to a charged lepton and a $b$-quark. The final state has two resonances formed by the lepton-jet pairs. Expected sensitivity will be shown for a dataset with an integrated luminosity of 140 fb$^{-1}$ of proton-proton collisions at a center-of-mass energy of $\sqrt{s}=13$ TeV, collected between 2015 and 2018 by the ATLAS detector at the LHC. Supersymmetry is able to resolve many questions left unanswered by the Standard Model, such as the hierarchy problem. This search is inspired by the minimal supersymmetric B-L extension of the Standard Model, which has spontaneous $R$-parity violation that allows violation of lepton number.
In natural supersymmetric models defined by no worse than a part in thirty electroweak fine-tuning, winos and binos are generically expected to be much heavier than higgsinos. Moreover, the splitting between the higgsinos is expected to be small, so that the visible decay products of the heavier higgsinos are soft, rendering the higgsinos quasi-invisible at the LHC. Within the natural SUSY framework, heavy electroweak gauginos decay to $W$, $Z$ or $h$ bosons plus higgsinos in the ratio $\sim 2:1:1$, respectively. This is in sharp contrast to models with a bino-like lightest superpartner and very heavy higgsinos, where the charged (neutral) wino essentially always decays to a $W$ ($h$) boson and an invisible bino. Wino pair production at the LHC, in natural SUSY, thus leads to $VV$, $Vh$ and $hh+\not\!\!\!{E_T}$ final states ($V=W,Z$) where, for TeV-scale winos, the vector bosons and $h$ daughters are considerably boosted. We identify eight different channels arising from the leptonic and hadronic decays of the vector bosons and the decay $h\to b\bar{b}$, each of which offers an avenue for wino discovery at the high-luminosity LHC (HL-LHC). By combining the signal in all eight channels we find, assuming $\sqrt{s}=14$ TeV and an integrated luminosity of 3000 fb$^{-1}$, that the discovery reach for winos extends to $m(\mathrm{wino})\sim 1.1$~TeV, while the 95\% CL exclusion range extends to a wino mass of almost 1.4~TeV. We also identify ``higgsino-specific channels'' which could serve to provide $3\sigma$ evidence that winos lighter than 1.2~TeV decay to light higgsinos rather than to a bino-like LSP, should a wino signal appear at the HL-LHC.
The Peccei-Quinn (PQ) symmetry that solves the strong CP problem, being a global symmetry, suffers from a potential quality problem in that the symmetry is not respected by quantum gravity. In this talk I will present results from ongoing work (with B. Dutta and R.N. Mohapatra) in which we successfully address this problem using a gauged U(1) symmetry. The PQ symmetry arises accidentally in a family of models and is protected by the gauged U(1) against quantum gravitational corrections. A unified theory based on SO(10) x U(1) gauge symmetry will also be presented, and the resulting axion phenomenology will be discussed.
A heavy axion avoids the quality problem and has been shown to produce interesting experimental signatures. A mirror sector has been invoked to explain how such axions can occur, often with a large hierarchy between the visible and mirror Higgs masses. I discuss a novel realization of the Twin Higgs framework that produces a heavy axion without this large hierarchy, addressing both the strong CP and electroweak hierarchy problems. I discuss the experimental constraints and discovery opportunities associated with this model.
We identify the QCD axion and right-handed (sterile) neutrinos as bound states of an SU(5) chiral gauge theory with Peccei-Quinn (PQ) symmetry arising as a global symmetry of the strong dynamics. The strong dynamics is assumed to spontaneously break the PQ symmetry, producing a high-quality axion and naturally generating Majorana masses for the right-handed neutrinos at the PQ scale. The composite sterile neutrinos can directly couple to the left-handed (active) neutrinos, realizing a standard see-saw mechanism. Alternatively, the sterile neutrinos can couple to the active neutrinos via a naturally small mass mixing with additional elementary states, leading to light sterile neutrino eigenstates. The SU(5) strong dynamics therefore provides a common origin for a high-quality QCD axion and sterile neutrinos.
The axion, or axion-like particle (ALP), as a leading dark matter candidate, is the target of many ongoing and proposed experimental searches based on its coupling to photons. However, indirect searches for axions have not been as competitive as direct searches, which can probe a large range of parameter space. In this talk, I will introduce the idea that axion stars will inevitably form in the vicinity of supermassive black holes due to Bose-Einstein condensation, enhancing the axion birefringence effect and opening more windows for indirect axion searches. The oscillating axion field around black holes induces a polarization rotation of the black hole image, which is detectable and distinguishable from astrophysical effects on the polarization angle, as it exhibits distinctive temporal variability and frequency invariability. We show that polarization measurements from the Event Horizon Telescope can set the most competitive limits on axions in the mass range of $10^{-21}$-$10^{-16}$ eV.
Proto-neutron stars, formed at the center of Type II supernovae, are promising science targets for probing axions. These hypothetical particles are emitted via, e.g., the Primakoff process; they can modify the cooling rate of proto-neutron stars and can also convert to observable gamma rays while propagating through astrophysical magnetic fields. Observations of Supernova 1987A (SN 1987A) from the Solar Maximum Mission (SMM) gamma-ray telescope have previously been used to set bounds on the axion-photon coupling. In this work, we present updated limits with SMM data by including nucleon-nucleon bremsstrahlung as an additional mechanism of axion production. We also consider a novel axion conversion mechanism in the progenitor magnetic field of SN 1987A. This allows constraining larger axion masses and smaller axion-photon couplings, owing to the stronger magnetic field of the progenitor star compared to that of the Milky Way. We use these results to project the sensitivity of gamma-ray searches toward a future Galactic supernova with a proposed full-sky gamma-ray telescope network.
Ultra-light axions with weak couplings to photons are motivated extensions of the Standard Model. We perform one of the most sensitive searches to date for the existence of these particles with the NuSTAR telescope by searching for axion production in stars in the M82 starburst galaxy and the M87 central galaxy of the Virgo cluster. This involves a sum over the full stellar populations in these galaxies when computing the axion luminosity, as well as accounting for the conversion of axions to hard X-rays via magnetic field profiles from simulated IllustrisTNG analogue galaxies. We find no evidence for axions, and instead set robust constraints on the axion-photon coupling at the level of $|g_{a\gamma\gamma}| < 6.44 \times 10^{-13}$ GeV$^{-1}$ for $m_a \lesssim 10^{-10}$ eV at 95% confidence.
In this talk, I will introduce ARCANE reweighting, a new Monte Carlo technique to solve the negative weights problem in collider event generation. We will see a demonstration of the technique in the generation of $(e^+ e^- \longrightarrow q\bar{q} + 1~\mathrm{jet})$ events under the MC@NLO formalism.
In this scenario, ARCANE can reduce the fraction of negative weights by redistributing the contributions of $\mathbb{H}$- and $\mathbb{S}$-type events a) without introducing any biases in the distribution of physical observables and b) without requiring any changes to the matching and merging prescriptions used.
I believe that the technique can be applied to other processes of interest like $(q\bar{q}\longrightarrow W + \mathrm{jets})$ and $(q\bar{q}\longrightarrow t\bar{t}+\mathrm{jets})$ as well.
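The redistribution idea can be sketched in a toy model (my own illustration of the underlying mechanism, not the actual ARCANE code): if two event classes are defined at the same phase-space points, an arbitrary function $f$ can be added to one class's weights and subtracted from the other's without changing any physical observable, and $f$ can be chosen to cancel the negative weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two MC@NLO event classes, evaluated at the same
# phase-space points x (an illustration of the idea, not the ARCANE code).
x = rng.uniform(0.0, 1.0, 10_000)        # a physical observable
w_H = np.where(x < 0.3, -0.5, 1.0)       # H-type: negative in one region
w_S = np.ones_like(x)                    # S-type: positive everywhere

bins = np.linspace(0.0, 1.0, 11)
before, _ = np.histogram(x, bins=bins, weights=w_H + w_S)

# Redistribute: add f(x) to H-type weights, subtract it from S-type.
# The sum w_H + w_S, and hence every physical distribution, is unchanged.
f = np.where(x < 0.3, 0.5, 0.0)
w_H2, w_S2 = w_H + f, w_S - f
after, _ = np.histogram(x, bins=bins, weights=w_H2 + w_S2)

neg_before = np.mean(np.concatenate([w_H, w_S]) < 0)
neg_after = np.mean(np.concatenate([w_H2, w_S2]) < 0)
print(neg_before, neg_after)  # the negative-weight fraction drops to zero
```

In the real MC@NLO setting the two event classes live on different phase spaces, which is where the actual ARCANE machinery comes in.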
The Large Hadron Collider was developed, in part, to produce and study heavy particles such as the top quark. The lifetime of the top quark is shorter than $10^{-24}$ seconds. Due to this short lifetime, the top quark is observed indirectly by particle detectors through the particles it decays into. A key part of reconstructing heavy particles for observation is to properly assign the decay products to their respective top quarks or other parent particles. One common approach involves summing the momenta and energies of various particle combinations in different permutations to compute the masses of the expected parent particles in a specific decay process. Those masses are then compared to the expected masses in order to select the best set of particle assignments for the full collision event. Here we demonstrate that a matrix-based approach, which incorporates additional terms related to the expected transverse momenta associated with both correct and incorrect particle pairings, leads to improvements in reconstruction. For the benchmark task, where two top quarks decay to six quarks (fully hadronic decay), this method improves the reconstruction efficiency by approximately $10-13\%$ in events containing six to fourteen jets, compared to a mass-only approach.
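As a point of reference, the mass-only baseline that the matrix-based method improves on can be sketched as follows (simplified illustrative code with invented names, not the analysis implementation): enumerate the partitions of six jets into two triplets and keep the one whose trijet masses best match the expected top mass.

```python
import numpy as np
from itertools import combinations

M_TOP = 172.5  # GeV, expected parent mass

def invariant_mass(p4s):
    # p4s: array of rows (E, px, py, pz); mass of the summed four-vector
    E, px, py, pz = np.sum(p4s, axis=0)
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def best_assignment(jets):
    """Assign 6 jets to two top candidates by minimizing a mass-only chi^2."""
    best, best_chi2 = None, np.inf
    idx = set(range(len(jets)))
    for trip1 in combinations(sorted(idx), 3):
        trip2 = tuple(sorted(idx - set(trip1)))
        if trip1 > trip2:        # each partition appears twice; keep one order
            continue
        m1 = invariant_mass(jets[list(trip1)])
        m2 = invariant_mass(jets[list(trip2)])
        chi2 = (m1 - M_TOP)**2 + (m2 - M_TOP)**2
        if chi2 < best_chi2:
            best, best_chi2 = (trip1, trip2), chi2
    return best, best_chi2
```

For a synthetic event whose first three jet four-vectors sum to a mass of 172.5 GeV (and likewise the last three), the function returns the partition `((0, 1, 2), (3, 4, 5))`.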
One key problem in collider physics is the binary classification needed to fully reconstruct final states. Considering top quark pair production in the fully hadronic channel as an example, we explore the effectiveness of multiple variational quantum algorithms (VQAs), including the quantum approximate optimization algorithm (QAOA) and its derivatives. Compared against other approaches, such as quantum annealing and kinematic methods (e.g., the hemisphere method), we demonstrate comparable or better efficiencies for selecting the correct pairing, depending on the particular invariant mass threshold.
The Large Hadron Collider (LHC) will undergo a major upgrade during 2026-2028 to become the High Luminosity LHC (HL-LHC). The number of collisions per proton bunch crossing will increase from ~60 to ~200. This will stress the current event selection (trigger) system, and the efficiency of specialized jet triggers in particular. An important challenge lies in classifying jets as coming from a single vertex or from multiple ones; the difficulty is exacerbated by the increased pile-up interactions and high-energy background jets at high luminosity. Therefore, as part of the ongoing ATLAS detector upgrade, we are developing a multi-vertex jet trigger for Level 0 (the hardware-based trigger level) at the HL-LHC, using machine learning techniques such as Boosted Decision Trees (BDTs) to perform the classification. Building on recent advancements, such as the development of the fwXmachina package at the University of Pittsburgh (used for BDT implementation at Level 1), the project spans describing the HL-LHC multi-jet background, creating BDTs to classify single- and multi-vertex events, and implementing them on Field Programmable Gate Arrays (FPGAs). This trigger will benefit the identification of specific di-Higgs decays like HH $\rightarrow$ 4b, as well as any interesting physics with four jets in the final state.
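For illustration, the boosting scheme underlying a BDT can be sketched from scratch on invented toy features (this is plain AdaBoost over decision stumps in Python, not the fwXmachina firmware, and the features are hypothetical stand-ins, not the actual trigger inputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features: x0 ~ spread of jet origins along the beamline,
# x1 ~ jet pT balance. Labels: +1 single-vertex, -1 multi-vertex pile-up.
n = 2000
X = np.vstack([rng.normal([0.0, 1.0], 0.8, size=(n, 2)),
               rng.normal([1.5, 0.0], 0.8, size=(n, 2))])
y = np.hstack([np.ones(n), -np.ones(n)])

def fit_stump(X, y, w):
    """Best single-feature threshold cut (feature, threshold, sign) under weights w."""
    best = (np.inf, 0, 0.0, 1)
    for feat in range(X.shape[1]):
        for t in np.quantile(X[:, feat], np.linspace(0.05, 0.95, 19)):
            for s in (1, -1):
                pred = s * np.sign(X[:, feat] - t)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, feat, t, s)
    return best

# AdaBoost: reweight events each round so later stumps focus on mistakes.
w = np.full(len(y), 1.0 / len(y))
stumps = []
for _ in range(20):
    err, feat, t, s = fit_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = s * np.sign(X[:, feat] - t)
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    stumps.append((alpha, feat, t, s))

def bdt_score(X):
    return sum(a * s * np.sign(X[:, f] - t) for a, f, t, s in stumps)

acc = np.mean(np.sign(bdt_score(X)) == y)
print(f"training accuracy: {acc:.2f}")
```

The thresholded sign comparisons are what make BDTs attractive for FPGA triggers: inference reduces to parallel compare-and-add operations.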
The CMS detector will upgrade its tracking detector in preparation for the High Luminosity Large Hadron Collider (HL-LHC). The Phase-2 outer tracker layout will consist of 6 barrel layers in the center and 5 endcap layers on each side. These will be composed of two types of double-sensor modules, capable of reading out hits compatible with charged particles with transverse momentum above 2 GeV ("stubs"). Stubs are used in the back-end Level-1 track-finding system to form tracks that are then considered by the Level-1 trigger to select interesting events. An important part of this upgrade is ensuring that the tracker and the stub-building step work correctly, which is where Data Quality Monitoring (DQM) comes in. Currently, there is no automated system to measure the performance of stub reconstruction. This talk focuses on software development to ensure that the performance of stub reconstruction can be monitored, making use of Monte Carlo truth information.
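A truth-matched efficiency of the kind such a DQM module would report can be sketched as follows (a deliberately simplified one-coordinate matching with hypothetical names and tolerance, not the CMS DQM code; real matching would use module IDs and local coordinates):

```python
import numpy as np

def stub_efficiency(truth_pos, reco_pos, tol=0.1):
    """Fraction of truth stubs matched by a reconstructed stub within tol.

    truth_pos: positions of stubs expected from Monte Carlo truth.
    reco_pos:  positions of reconstructed stubs along one module coordinate.
    """
    if not truth_pos:
        return 0.0
    reco = sorted(reco_pos)
    matched = 0
    for t in truth_pos:
        i = np.searchsorted(reco, t)
        # Only the two neighbors around the insertion point can be closest.
        cands = [reco[j] for j in (i - 1, i) if 0 <= j < len(reco)]
        if cands and min(abs(c - t) for c in cands) <= tol:
            matched += 1
    return matched / len(truth_pos)
```

For example, `stub_efficiency([1.0, 2.0, 3.0], [1.05, 2.5])` matches only the first truth stub and returns 1/3.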
The possibility of a dark-sector photon that couples to standard model lepton pairs has received much theoretical interest. Dark photons with GeV-scale masses could have decays with substantial branching fractions to simple decay modes such as opposite-sign muon pairs. If the dark photon originates from a heavy particle, for example a BSM Higgs boson, the dark photon is boosted in the lab frame (the CMS detector), resulting in decay products in a narrow angular cone containing a lepton pair, referred to as a "lepton jet." If the dark photon is short-lived, it appears to originate directly from the primary interaction vertex. In several production models, the dark photons are produced in pairs, resulting in events with two lepton jets. Such a distinctive signature is rarely produced by SM processes. We present the status of an analysis of Run 2 data (139 fb$^{-1}$) in the dimuon decay channel. Selection criteria are based on simulated signals for a Higgs portal model with prompt production and simulated standard model backgrounds. Run 2 data are compared with simulated backgrounds for a control sample of like-sign muon pairs. A multivariate classifier shows good separation of signal and background. Expected sensitivity to the production cross section is discussed.
A search for dark matter (DM) produced in association with a resonant b$\bar{b}$ pair is performed in proton-proton collisions at a center-of-mass energy of 13 TeV collected with the CMS detector during Run 2 of the Large Hadron Collider. The analyzed data sample corresponds to an integrated luminosity of 137 fb$^{-1}$.
Results are interpreted in terms of a novel theoretical model of DM production at the LHC that predicts the presence of a Higgs-boson-like particle in the dark sector, motivated simultaneously by the need to generate the masses of the particles in the dark sector and by the possibility of relaxing constraints from the DM relic abundance by opening up a new annihilation channel. If such a dark Higgs boson decays into standard model (SM) states via a small mixing with the SM Higgs boson, one obtains characteristic large-radius jets in association with missing transverse momentum that can be used to efficiently discriminate signal from backgrounds. Limits on the signal strength of different dark Higgs boson mass hypotheses below 160 GeV are set for the first time with CMS data.
We unveil blind spot regions in dark matter (DM) direct detection (DMDD) for weakly interacting massive particles with a mass around a few hundred GeV that may reveal interesting photon signals at the LHC. We explore a scenario where the DM primarily originates from the singlet sector within the $Z_3$-symmetric Next-to-Minimal Supersymmetric Standard Model (NMSSM). A novel DMDD spin-independent blind spot condition is revealed for singlino-dominated DM, in cases where the mass parameters of the higgsino and the singlino-dominated lightest supersymmetric particle (LSP) exhibit opposite relative signs (i.e., $\kappa < 0$), emphasizing the role of nearby bino and higgsino-like states in tempering the singlino-dominated LSP. Additionally, proximate bino and/or higgsino states can act as co-annihilation partner(s) for singlino-dominated DM, ensuring agreement with the observed relic abundance of DM. Remarkably, in scenarios involving singlino-higgsino co-annihilation, higgsino-like neutralinos can distinctly favor radiative decay modes into the singlino-dominated LSP and a photon, as opposed to decays into leptons/hadrons. In exploring this region of parameter space within the singlino-higgsino compressed scenario, we study the signal associated with at least one relatively soft photon alongside a lepton, accompanied by substantial missing transverse energy and a hard initial state radiation jet at the LHC. In the context of singlino-bino co-annihilation, the bino state, as the next-to-LSP, exhibits significant radiative decay into a soft photon and the LSP, enabling possible exploration at the LHC through the triggering of this soft photon alongside large missing transverse energy and relatively hard leptons/jets resulting from the decay of heavier higgsino-like states.
We will present the operational status of the LHC Run 3 milliQan detector, whose installation began last year and was completed during the 2023-24 YETS, and which is being commissioned at the time of submission. We will also show any available initial results from data obtained with Run 3 LHC collisions.
FASER, the ForwArd Search ExpeRiment, has successfully taken data at the LHC since the start of Run 3 in 2022. From its unique location along the beam collision axis 480 m from the ATLAS IP, FASER has set leading bounds on dark photon parameter space in the thermal target region and has world-leading sensitivity to many other models of long-lived particles. In this talk, we will give a full status update of the FASER experiment and its latest results, with a particular focus on our very first search for axion-like particles and other multi-photon signatures.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus on a narrower range of masses: the natural scenario where dark matter originates from thermal contact with familiar matter in the early Universe requires the DM mass to lie between about an MeV and 100 TeV. Considerable experimental attention has been given to exploring Weakly Interacting Massive Particles in the upper end of this range (a few GeV to ~TeV), while the region from ~MeV to ~GeV is largely unexplored. Most of the stable constituents of known matter have masses in this lower range, tantalizing hints for physics beyond the Standard Model have been found here, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. It is therefore an exploration priority. If there is an interaction between light DM and ordinary matter, as there must be in the case of a thermal origin, then there necessarily is a production mechanism in accelerator-based experiments. The most sensitive way (if the interaction is not electron-phobic) to search for this production is to use a primary-electron beam to produce DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment that has unique sensitivity to light DM in the sub-GeV range. This contribution will give an overview of the theoretical motivation, the main experimental challenges and how they are addressed, as well as projected sensitivities in comparison to other experiments.
The mystery of dark matter is one of the greatest puzzles in modern science. What is 85% of the matter, or 25% of the mass/energy, of the universe made up of? No human knows for certain. Despite mountains of evidence from astrophysics and cosmology, direct laboratory detection eludes physicists. A leading candidate to explain dark matter is the WIMP, or Weakly Interacting Massive Particle, a thermal relic left over after the Big Bang. I will present the first search results from the LZ experiment, as well as subsequent analyses in different channels, such as low-energy electron recoils, high-energy nuclear recoils (EFT), and multiple scattering. LZ, deployed in South Dakota, is one of the flagship US DOE dark matter projects and currently sets world-leading limits on the WIMP interaction strength for masses from 10 GeV up to the TeV scale. I will also showcase the unprecedented degree of agreement between LZ data and the simulation software (NEST+FlameNEST, LZLAMA, Geant4, and BACCARAT) used to model signal and background interactions in a detector like LZ.
Dark matter, estimated to make up 85% of the total mass of the Universe, remains a mystery in physics. Despite accumulating evidence supporting its existence, the true nature of dark matter is still elusive. One candidate hypothesis is Weakly Interacting Massive Particles (WIMPs). The search for WIMPs represents a real experimental challenge; it has been running for more than a decade and keeps pushing the limits further. The DarkSide program is part of this direct detection effort and will continue with its next-generation experiment, DarkSide-20k.
The DarkSide-20k detector will consist of a dual-phase liquid argon time projection chamber (LArTPC) surrounded by two vetoes inside an 8×8×8 m³ cryostat. It will be located in the Gran Sasso underground laboratory, which provides natural shielding from cosmic rays. The design has been optimized to minimize backgrounds and achieve background-free operation, aided by strategies to suppress unwanted signals (such as neutrons, betas, and gammas). This is made possible by leveraging the exceptional background rejection power of liquid argon through pulse shape discrimination. The Photon Detection Units (PDUs) constitute a critical component of this design and will soon enter production. Cryogenic, low-background silicon photomultipliers (SiPMs) will be employed for the project, undergoing rigorous testing before being assembled into PDUs at the Nuova Officina Assergi (NOA) cleanroom, a facility located at the external laboratory adjacent to the underground site. All of this will yield very good sensitivity to the WIMP-nucleon cross section in as-yet-unexplored areas of the parameter space.
We have further developed the dark matter (DM) Migdal effect within semiconductors beyond the standard spin-independent interaction. Ten additional non-relativistic operators are examined, which motivate five unique nuclear responses within the crystal. We derive the generalized effective DM-lattice Migdal Hamiltonian and present new limits for the full list of interactions.
In the context of a U(1)$_X$ extension of the Standard Model (SM), we consider a (super)heavy Dirac fermion dark matter (DM) which interacts with the SM sector through U(1)$_X$ gauge interaction with a sizable gauge coupling. Although its mass exceeds the unitarity bound for the thermal DM scenario, its observed relic density is reproduced through incomplete thermalization with the reheating temperature after inflation being lower than the DM mass. We investigate this DM scenario from the viewpoint of complementarity between direct DM detection experiments and LHC searches for the mediator $Z'$ boson.
As nuclear recoil direct detection experiments carve out more and more dark matter parameter space in the WIMP mass range, the need for searches probing lower masses has become evident. Since lower dark matter masses lead to smaller momentum transfers, we can look to the low-momentum limit of nuclear recoils: phonon excitations in crystals. Single-phonon experiments promise to eventually probe dark matter masses below 1 MeV, while the slightly higher mass range of 10-100 MeV can be probed via multiphonon interactions, which, importantly, do not require experimental thresholds as low to make a detection. In this work, we analyze dark matter interacting via a pseudoscalar mediator, which leads to spin-dependent scattering into multiphonon excitations. We consider several likely EFT operators and describe the prospects for future experiments to find dark matter via this method. Our results are implemented in the python package DarkELF and can be straightforwardly generalized to other spin-dependent EFT operators.
We develop benchmarks for resonant di-scalar production in the generic complex singlet scalar extension of the Standard Model (SM), which contains two new scalars. These benchmarks maximize di-scalar resonant production: $pp\rightarrow h_2 \rightarrow h_1 h_1/h_1h_3/h_3h_3$, where $h_1$ is the observed SM-like Higgs boson and $h_{2,3}$ are new scalars. The decays $h_2\rightarrow h_1h_3$ and $h_2\rightarrow h_3h_3$ may be the only way to discover $h_3$, leading to a discovery of two new scalars at once. Current LHC and projected future collider (HL-LHC, FCC-ee, ILC500) constraints are used to produce benchmarks at the HL-LHC for $h_2$ masses between 250 GeV and 1 TeV, and at a future $pp$ collider for $h_2$ masses between 250 GeV and 12 TeV. We update the current LHC bounds on the singlet-Higgs boson mixing angle. As the mass of $h_2$ increases, certain limiting behaviors of the maximum rates are uncovered due to theoretical constraints on the parameters. These limits, which can be derived analytically, are ${\rm BR}(h_2\rightarrow h_1h_1)\rightarrow 0.25$, ${\rm BR}(h_2\rightarrow h_3h_3)\rightarrow 0.5$, and ${\rm BR}(h_2\rightarrow h_1h_3) \rightarrow 0$. It can also be shown that the maximum rates of $pp\rightarrow h_2\rightarrow h_1h_1/h_3h_3$ approach the same value. Hence, all three $h_2\rightarrow h_ih_j$ decays are promising discovery modes for $h_2$ masses below $\mathcal{O}(1~{\rm TeV})$, while above $\mathcal{O}(1~{\rm TeV})$ the decays $h_2\rightarrow h_1h_1/h_3h_3$ are more encouraging. Masses for $h_3$ are chosen to produce a large range of signatures including multi-$b$, multi-vector boson, and multi-$h_1$ production. The behavior of the maximum rates implies that in the multi-TeV region this model may be discovered in the Higgs quartet production mode before Higgs triple production is observed. The maximum di- and four-Higgs production rates are similar in the multi-TeV range.
The knowledge of the Higgs potential is crucial for understanding the origin of mass and the thermal history of our Universe. We show how collider measurements and observations of stochastic gravitational wave signals can complement each other to explore the multiform scalar potential in the two Higgs doublet model. In our investigation, we analyze critical elements of the Higgs potential to understand the phase transition pattern. Specifically, we examine the formation of the barrier and the uplifting of the true vacuum state, which play crucial roles in facilitating a strong first-order phase transition. Furthermore, we explore the potential gravitational wave signals associated with this phase transition pattern and investigate the parameter space points that can be probed with LISA. Finally, we compare the impact of different approaches to describing the bubble profile on the calculation of the baryon asymmetry.
We study the conditions under which the CP violation in the quark mixing matrix can leak into the scalar potential of the real two-Higgs-doublet model (2HDM) via divergent radiative corrections, thereby spoiling the renormalizability of the model. We show that any contributing diagram must involve 12 Yukawa-coupling insertions and a factor of the hard $U(1)_{PQ}$-breaking scalar potential parameter $\lambda_5$, thereby requiring at least six loops; this also implies that the 2HDM with only softly-broken $U(1)_{PQ}$ is safe from divergent leaks of CP violation to all orders. In both the type-I and -II 2HDMs, we demonstrate that additional symmetries of the six-loop diagrams guarantee that all of the divergent CP-violating contributions cancel. We also show that the CP leak can occur at seven loops and enumerate the classes of diagrams that can contribute, providing evidence that the real 2HDM is theoretically inconsistent.
Exploring additional CP violation sources at the Large Hadron Collider (LHC) is vital to the Higgs physics programme beyond the Standard Model. An unexplored avenue at the LHC is a significant non-linear realization of CP violation, naturally described in non-linear Higgs Effective Field Theory (HEFT). In this talk, we will discuss constraining such interactions across a broad spectrum of single and double Higgs production processes, incorporating differential information where feasible statistically and theoretically. We will focus on discerning anticipated correlations in the Standard Model Effective Field Theory (SMEFT) from those achievable in HEFT in top-Higgs and gauge-Higgs interactions. We will discuss the LHC sensitivity, particularly when discriminating between SMEFTy and HEFTy CP violations in these sectors.
Field space geometry has been fruitful in understanding many aspects of EFT, including basis-independent criteria for distinguishing HEFT vs. SMEFT, reorganization of scattering amplitudes in covariant form, derivation of renormalization group equations and geometric soft theorem. We incorporate field space geometry in functional matching by dividing the field space into light and heavy subspaces. A modified covariant derivative expansion method is proposed to calculate the functional traces while accommodating the covariance of the light subspace geometry. We apply this formalism to the non-linear sigma model and reproduce the effective theory more efficiently compared to other matching methods.
We explore the connection between the Higgs hierarchy problem and the metastability of the electroweak vacuum. Previous work has shown that metastability bounds the magnitude of the Higgs mass squared parameter in the $m_H^2 < 0$ case, realized in our universe. We argue for the first time that metastability also bounds the Higgs mass in the counterfactual $m_H^2 > 0$ case; that is, metastability confines $m_H^2$ to a window. In the Standard Model, these bounds are orders of magnitude larger than the Higgs mass, but new physics can lower these scales. As an illustration, we consider vacuum stability in the presence of additional TeV-scale fermions with Yukawa couplings to the Higgs and a dimension-$6$ term required to prevent complete instability of the vacuum. We find that the requirement of metastability imposes stringent bounds on the values of $m_H^2$ and the parameters characterizing the new physics.
The discovery of neutrino oscillation has ushered in a number of questions: Why are neutrino masses small? Are they different from other fermion masses? Are neutrinos the solution to the baryon asymmetry? Are there really only 3 neutrinos? Is there a relation between neutrino and quark mixing? And many more. To get to the bottom of these questions, a massive experimental program in particle, nuclear, and astrophysics is under way. In this talk I will try to highlight how interconnected these endeavors are.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino oscillation experiment in the US. It will have four far detector modules, each holding 17 kilotons of liquid argon, sited 1500 meters underground and 1300 kilometers from the near detector complex. In this talk, I will give an overview of the DUNE experiment, including the status of the DUNE far site, the FD and ND prototypes, physics reach and recent results, and construction progress. Prospects for the DUNE Phase II program will also be introduced.
DUNE is the flagship of the next generation of neutrino experiments in the United States. It is designed to decisively measure neutrino CP violation and the mass hierarchy. It utilizes Liquid Argon Time Projection Chamber (LArTPC) technology, which provides exceptional spatial resolution and the potential to accurately identify final-state particles and neutrino events. At the same time, the sheer detail recorded by DUNE's high-resolution LArTPCs makes reconstructing and identifying neutrino events challenging. Deep learning techniques offer a promising solution to this problem. At DUNE, convolutional neural networks, graph neural networks, and transformers are being developed and have already shown promising results in kinematic reconstruction, clustering, and event/particle identification. Deep learning methods have also been preliminarily tested on data from the DUNE prototype detector ProtoDUNE at CERN. In this talk, I will discuss the development of these deep-learning-based reconstruction methods at DUNE.
I will introduce the general concepts of DUNE (Deep Underground Neutrino Experiment), as well as the current status of protoDUNE-VD, one of the two large-scale LArTPC-based DUNE Far Detector prototypes located at CERN. I will then focus on a neural-network module that aims to speed up the photon-propagation step of the optical simulation of protoDUNE-VD. This module is 50-100 times faster than the traditional GEANT4 method, making the photon-detection simulation much more efficient.
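As a schematic of what such a surrogate looks like, here is a minimal feed-forward network in plain Python that maps a scintillation-point position to a photon-detection probability for one photodetector. The architecture, inputs, and (untrained) weights are illustrative stand-ins, not the actual protoDUNE-VD module:

```python
import math

def mlp_visibility(x, y, z, weights):
    """Tiny MLP sketch: map a scintillation-point position (x, y, z)
    to a predicted photon-detection probability for one photodetector."""
    h = [math.tanh(sum(w * v for w, v in zip(row, (x, y, z))) + b)
         for row, b in weights["hidden"]]
    out = sum(w * v for w, v in zip(weights["out"], h)) + weights["bias"]
    return 1.0 / (1.0 + math.exp(-out))  # sigmoid -> probability in (0, 1)

# Illustrative (untrained) parameters: 3 inputs -> 2 hidden units -> 1 output.
weights = {
    "hidden": [([0.1, -0.2, 0.05], 0.0), ([-0.3, 0.1, 0.2], 0.1)],
    "out": [0.5, -0.4],
    "bias": 0.0,
}
p = mlp_visibility(1.0, 2.0, 0.5, weights)
```

A trained version of such a model replaces per-photon ray tracing with a single function evaluation per (position, detector) pair, which is the kind of substitution behind the quoted speedup.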
The Deep Underground Neutrino Experiment (DUNE) is one of two big next generation neutrino experiments aimed at measuring neutrino properties, including the mass hierarchy, CP violating phase. The DUNE Far Detector will consist of four 17-kt modules, two of which have been prototypes at the ProtoDUNE experiment at CERN. The ProtoDUNE experiment consists of two liquid argon time projection chambers which took hadron beam data in 2018-2020, and are preparing for a second run in the summer of 2024. This talk summarizes the ProtoDUNE experiment, its past results and future plans.
We present theoretical results at approximate NNLO in QCD for top-quark pair-production total cross sections and top-quark differential distributions at the LHC in the SMEFT. These approximate results are obtained by adding higher-order soft gluon corrections to the complete NLO calculations. The higher-order corrections are large, and they reduce the scale uncertainties. These improved theoretical predictions can be used to set stronger bounds on top-quark QCD anomalous couplings.
We study the implications of precise gauge coupling unification for supersymmetric particle masses. We argue that precise unification favors superpartner masses in the range of several TeV and well beyond. We demonstrate this in the minimal supersymmetric theory with a common sparticle mass threshold, and in two simple high-scale scenarios: minimal supergravity and minimal anomaly-mediated supersymmetry. We also identify candidate models with a Higgsino or wino dark matter candidate. Finally, the analysis shows unambiguously that, unless one takes foggy naturalness notions too seriously, the lack of direct superpartner discoveries at the LHC has diminished neither the viability of supersymmetric unified theories in general nor precision unification in particular.
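The qualitative statement can be illustrated with the standard one-loop running of the inverse gauge couplings in the MSSM (approximate $M_Z$-scale inputs, no threshold corrections; an order-of-magnitude sketch, not the paper's analysis):

```python
import math

# One-loop running of the inverse gauge couplings,
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2 pi) ln(mu / M_Z),
# with MSSM beta coefficients b = (33/5, 1, -3) (GUT-normalized U(1)_Y).
ALPHA_INV_MZ = (59.0, 29.6, 8.45)   # approximate values at M_Z
B_MSSM = (33.0 / 5.0, 1.0, -3.0)
M_Z = 91.19  # GeV

def alpha_inv(mu):
    """Inverse couplings at scale mu (GeV), one loop, no thresholds."""
    t = math.log(mu / M_Z)
    return [a - b / (2.0 * math.pi) * t for a, b in zip(ALPHA_INV_MZ, B_MSSM)]

vals = alpha_inv(2.0e16)          # near the conventional GUT scale
spread = max(vals) - min(vals)    # small spread = near-unification
```

With these rough inputs, all three inverse couplings land near 24 at $\mu \sim 2\times 10^{16}$ GeV; the paper's point is how threshold corrections from the sparticle spectrum sharpen or spoil this agreement.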
We present the basis of dimension-eight operators associated with universal theories. We first derive a complete list of independent dimension-eight operators formed with the Standard Model bosonic fields characteristic of such universal new-physics scenarios. Without imposing C or P, the basis contains 175 operators; that is, the assumption of universality reduces the number of independent SMEFT coefficients at dimension eight from 44807 to 175. Of these 175 universal operators, 89 are included in the general dimension-eight operator basis in the literature. The 86 additional operators involve higher derivatives of the Standard Model bosonic fields and can be rotated in favor of operators involving fermions using the Standard Model equations of motion for the bosonic fields. By doing so we obtain the allowed fermionic operators generated in this class of models, which we map onto the corresponding 86 independent combinations of operators in the dimension-eight basis of arXiv:2005.00059.
We investigate the effects of dimension-six dipole operators on dipole moment measurements, namely the electric dipole moment (EDM) and the magnetic dipole moment (MDM).
Baryon number violation is our most sensitive probe of physics beyond the Standard Model. Its realization through heavy new particles can be conveniently encoded in higher-dimensional operators that allow for a model-agnostic analysis. The unparalleled sensitivity of nuclear decays to baryon number violation makes it possible to probe effective operators of very high mass dimension, far beyond the commonly discussed dimension-six operators. To facilitate studies of this vast and scarcely explored testable operator landscape, we provide the exhaustive set of UV completions for baryon-number-violating operators up to mass dimension 15, which corresponds roughly to the border of sensitivity. In addition to the known Standard Model fields, we also include right-handed neutrinos in our operators.
As in arXiv:2307.04255, we consider a radically modified form of supersymmetry (called susy here to avoid confusion), which initially combines standard Weyl fermion fields and primitive (unphysical) boson fields. A stable vacuum then requires that the initial boson fields, whose excitations would have negative energy, be transformed into three kinds of scalar-boson fields: the usual complex fields $\phi$, auxiliary fields $F$, and real fields $\varphi$ of a new kind (with degrees of freedom and gauge invariance preserved under the transformation). The requirement of a stable vacuum thus imposes Lorentz invariance, and also immediately breaks the initial susy -- whereas the breaking of conventional SUSY has long been a formidable difficulty. Even more importantly, for future experimental success, the present formulation may explain why no superpartners have yet been identified: Embedded in an $SO(10)$ grand-unified description, most of the conventional processes for production, decay, and detection of sfermions are excluded, and the same is true for many processes involving gauginos and higgsinos. This implies that superpartners with masses $\sim 1$ TeV may exist, but with reduced cross-sections and modified experimental signatures. For example, a top squark (as redefined here) will not decay at all, but can radiate pairs of gauge bosons and will also leave straight tracks through second-order (electromagnetic, weak, strong, and Higgs) interactions with detectors. The predictions of the present theory include (1) the dark matter candidate of our previous papers, (2) many new fermions with masses not far above 1 TeV, and (3) the full range of superpartners with a modified phenomenology.
Baryon Acoustic Oscillations are considered one of the most powerful cosmological probes. They are assumed to provide distance measures independent of a specific cosmological model, and the obtained distances are considered agnostic with respect to other cosmological observations. However, in current measurements the inference is done assuming parameter values of a fiducial LCDM model and employing prescriptions tested to be unbiased only within some LCDM fiducial cosmologies. Moreover, the procedure must face the ambiguity of choosing a specific correlation-function model template to measure cosmological distances.
Does this comply with the requirement of model- and parameter-independent distances, useful, for instance, for selecting cosmological models, detecting Dark Energy, and characterizing cosmological tensions?
In this talk I will review the subject, answer compelling questions and explore new promising research directions.
Models of cosmology including dark radiation (DR) have garnered recent interest, due in part to their versatility in modifying the $\Lambda$CDM concordance model in hopes of resolving observational tensions. Equally interesting is the capacity for DR models to be constrained or detected with current and near-term cosmological data. Finally, DR models have the potential to be embedded into specific microphysical models of BSM physics with clear particle physics origins. With these three features of DR in mind, we explore the detailed dynamics for a class of DR models that thermalize after big-bang nucleosynthesis by mixing with the standard model (SM) neutrinos. Such models were proposed in previous work (2301.10792), where only background quantities were studied, and the main focus was on the large viable parameter space. Concentrating on a sub-class of these models with a mass threshold within the dark sector, motivated by the successes of such models for resolving the Hubble tension, we perform a detailed MCMC analysis to derive constraints from CMB, BAO, and Supernovae data. In this talk, I will comment on (i) the degree to which interactions/mixing of DR with SM neutrinos is constrained by current data, (ii) the prospect of the model to resolve the Hubble tension, and (iii) the relevance of this type of self-interacting dark neutrino for explaining anomalies in neutrino experiments.
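As a minimal illustration of the MCMC machinery such an analysis relies on, here is a toy Metropolis sampler applied to a made-up one-parameter Gaussian posterior (a stand-in for the real CMB+BAO+supernovae likelihood; all numbers are illustrative):

```python
import math, random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Minimal Metropolis sampler sketch: Gaussian random-walk proposals,
    accepted with probability min(1, exp(delta log-posterior))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        if math.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 0.7, width 0.1 (a stand-in for a
# single dark-radiation parameter constrained by the data combination).
chain = metropolis(lambda x: -0.5 * ((x - 0.7) / 0.1) ** 2, 0.0, 20000)
mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

Real analyses use many parameters and a Boltzmann-solver likelihood, but the accept/reject core is the same.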
Cosmological first order phase transitions are typically associated with physics beyond the Standard Model, and thus of great theoretical and observational interest. Models of phase transitions where the energy is mostly converted to dark radiation can be constrained through limits on the dark radiation energy density (parameterized by $\Delta N_{\rm eff}$). However, the current constraint ($\Delta N_{\rm eff} < 0.3$) assumes the perturbations are adiabatic. We point out that a broad class of non-thermal first order phase transitions that start during inflation but do not complete until after reheating leave a distinct imprint in the scalar field from bubble nucleation. Dark radiation inherits the perturbation from the scalar field when the phase transition completes, leading to large-scale isocurvature that would be observable in the CMB. We perform a detailed calculation of the isocurvature power spectrum and derive constraints on $\Delta N_{\rm eff}$ based on CMB+BAO data. For a reheating temperature of $T_{\rm rh}$ and a nucleation temperature $T_*$, the constraint is approximately $\Delta N_{\rm eff}\lesssim 10^{-5} (T_*/T_{\rm rh})^{-4}$, which can be much stronger than the adiabatic result. We also point out that since perturbations of dark radiation have a non-Gaussian origin, searches for non-Gaussianity in the CMB could place a stringent bound on $\Delta N_{\rm eff}$ as well.
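The quoted scaling can be packaged as a one-line estimate (this is purely the abstract's approximate formula, valid only in the stated regime $T_* < T_{\rm rh}$):

```python
def delta_neff_bound(t_star_over_t_rh):
    """Approximate isocurvature-based bound quoted above:
    Delta N_eff <~ 1e-5 * (T_* / T_rh)^(-4)."""
    return 1e-5 * t_star_over_t_rh ** (-4)

# For T_* = 0.1 T_rh the bound relaxes to ~0.1, comparable to the
# adiabatic constraint of 0.3; for T_* closer to T_rh it is far stronger.
bound = delta_neff_bound(0.1)
```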
We demonstrate that the searches for dark sector particles can provide probes of reheating scenarios, focusing on the cosmic millicharge background produced in the early universe. We discuss two types of millicharge particles (mCPs): either with, or without, an accompanying dark photon. These two types of mCPs have distinct theoretical motivations and cosmological signatures. We discuss constraints from the overproduction and mCP-baryon interactions of the mCP without an accompanying dark photon, with different reheating temperatures. We also consider the $\Delta N_{\rm eff}$ constraints on the mCPs from kinetic mixing, varying the reheating temperature. The regions of interest in which the accelerator and other experiments can probe the reheating scenarios are identified for both scenarios. These probes can potentially allow us to set an upper bound on the reheating temperature down to $\sim 10$ MeV, much lower than the previously considered upper bound from inflationary cosmology at around $\sim 10^{16}$ GeV. In addition, we derive a new ``distinguishability condition'', in which the two mCP scenarios may be differentiated by combining cosmological and theoretical considerations.
The decay of asymmetric dark matter (ADM) leads to possible neutrino signatures with an asymmetry of neutrinos and antineutrinos. In the high-energy regime, the Glashow resonant interaction $\bar{\nu}_e+e^- \rightarrow W^-$ is the only way to differentiate the antineutrino contribution in the diffuse astrophysical high-energy neutrino flux experimentally, which provides a possibility to probe heavy ADM. In this talk, I will discuss the neutrino signal from ADM decay, the constraints with the current IceCube observation of Glashow resonance, and the projected sensitivities with the next-generation neutrino telescopes.
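The resonance energy underlying this technique follows from requiring the center-of-mass energy of an antineutrino hitting an electron at rest to equal the $W$ mass:

```python
# Glashow resonance: anti-nu_e + e- -> W- on an electron at rest is
# resonant when s = 2 m_e E_nu = m_W^2, i.e. E_res = m_W^2 / (2 m_e).
M_W = 80.38e9    # W boson mass in eV
M_E = 0.511e6    # electron mass in eV
E_res = M_W ** 2 / (2.0 * M_E)   # ~6.3e15 eV, i.e. ~6.3 PeV
```

This fixed ~6.3 PeV energy is why Glashow-resonance events single out the $\bar{\nu}_e$ component of the diffuse flux.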
We study the cosmological phase transition in the Conformal Freeze-In (COFI) dark matter model. The dark sector is a 4D conformal field theory (CFT) at high energy scales, but its conformal symmetry is broken in the IR through a small coupling of a relevant CFT operator $\mathcal{O}_\mathrm{CFT}$ to a Standard Model (SM) portal operator. The dark sector confines below a gap scale $M_{\mathrm{gap}}$ of order keV--MeV, forming bound states amongst which is the dark matter candidate. We consider the holographic dual in 5D given by a Randall-Sundrum-like model, where the SM fields and the dark matter candidate are placed on the UV and IR branes respectively. The separation between the UV and IR branes is stabilized by a bulk scalar field dual to $\mathcal{O}_\mathrm{CFT}$, naturally generating a hierarchy between the electroweak scale and $M_{\mathrm{gap}}$. The confinement of the CFT is then dual to the spontaneous symmetry breaking by the 5D radion potential. We find the viable parameter space of the theory which allows the phase transition to complete promptly without significant supercooling.
Dark glueballs, bound states of dark gluons in an $SU(N)$ dark sector (DS), have been considered as a dark matter (DM) candidate. We study a scenario where the DS consists only of dark gluons and dominates the Universe after primordial inflation. As the Universe expands and cools down, dark gluons get confined into a set of dark glueball states; these undergo freeze-out, leaving the Universe glueball-dominated. To recover the visible sector and standard cosmology, connectors between the sectors are needed. The heavy connectors induce decays of most glueball states, which populates the visible sector; however, some of the glueballs could remain long-lived on a cosmological time scale because of an (approximately) conserved charge, and hence they are a potential DM candidate. We study in detail the cosmological evolution of the DS, and show resulting constraints and observational prospects.
We introduce a model of dark matter (DM) where the DM is a composite of a spontaneously broken conformal field theory. We find that if the DM relic abundance is determined by freeze-out of annihilations to dilatons, where the dilatons are heavier than the DM, then the model is compatible with theoretical and experimental constraints for DM masses in the 0.1-10 GeV range. The conformal phase transition is supercooled and strongly first-order, and can thus source large stochastic gravitational wave signals consistent with those recently observed by NANOGrav. Future experiments are projected to probe a majority of the viable parameter space in our model.
We outline a new production mechanism for dark matter that we dub “recycling”: dark sector particles are kinematically trapped in the false vacuum during a dark phase transition; the false pockets collapse into primordial black holes (PBHs), which ultimately evaporate before Big Bang Nucleosynthesis (BBN) to reproduce the dark sector particles. The requirement that all PBHs evaporate prior to BBN necessitates high-scale phase transitions and hence high-scale masses for the dark sector particles in the true vacuum. Our mechanism is therefore particularly suited to the production of ultra-heavy dark matter (UHDM) with masses above $\sim 10^{12}$ GeV. The correct relic density of UHDM is obtained because of the exponential suppression of the false-pocket number density. Recycled UHDM has several novel features: the dark sector today consists of multiple decoupled species that were once in thermal equilibrium, and the PBH formation stage has extended mass functions whose shape can be controlled by IR operators coupling the dark and visible sectors.
White dwarfs have long been considered as large-scale dark matter (DM) detectors. Owing to their high density and relatively large size, these objects can capture large amounts of DM, potentially producing detectable signals. In this talk, I will show how we can probe for the first time the elusive higgsino, one of the remaining supersymmetric DM candidates that is largely unconstrained, using the white dwarf population within the Milky Way’s central parsec combined with existing gamma-ray observations of this region.
This study demonstrates how magnetically levitated (MagLev) superconductors can detect dark-photon and axion dark matter via electromagnetic interactions, focusing on the underexplored low-frequency range below a kHz. Unlike traditional sensors that primarily detect inertial forces, MagLev systems are sensitive to electromagnetic forces, enabling them to respond to oscillating magnetic fields induced by dark matter. The research highlights the superconductors' capacity to probe dark matter when its Compton frequency matches the superconductor's trapping frequency, and details the adjustments necessary for detection. This approach could significantly enhance sensitivity in the Hz to kHz frequency range for dark matter detection.
We explore the possibility of probing (ultra-)light dark matter (DM) using the Mössbauer spectroscopy technique. The time-oscillating DM background produces a small shift in the emitted photon energy, which in turn can be tested via the absorption spectrum. As the DM-induced effect (signal) depends on the distance between the Mössbauer emitter and the absorber, this allows us to probe inverse DM masses of the order of macroscopic distance scales. Using the existing synchrotron-based Mössbauer setup, we can probe DM parameter space on par with the bounds from various fifth-force experiments. We show that our method can improve the existing limits from experiments looking for the oscillating nature of DM by several orders of magnitude. Advances in synchrotron facilities would enable us to probe DM parameter space several orders of magnitude beyond the fifth-force limit.
Detecting axion and dark photon dark matter in the milli-eV mass range has been considered a significant challenge because the corresponding frequency is too high for high-Q cavity resonators and too low for single-photon detectors. I will present a method that overcomes this difficulty (based on recent work, arXiv:2208.06519) by using trapped electrons as high-Q resonators to detect axion and dark photon dark matter, and set a new limit on dark photon dark matter at 148 GHz ($\sim 0.6$ meV) that is around 75 times better than previous constraints, from a 7-day proof-of-principle measurement. I will also propose updates to this work that substantially improve the result by optimizing some of the experimental parameters and techniques.
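The frequency-to-mass conversion behind the quoted numbers is simply $E = h f$:

```python
# Convert the 148 GHz search frequency to a dark-matter particle energy,
# E = h f, reproducing the ~0.6 meV mass scale quoted above.
H_EV_S = 4.135667e-15   # Planck constant in eV*s
FREQ = 148e9            # search frequency in Hz
E_eV = H_EV_S * FREQ    # ~6.1e-4 eV, i.e. ~0.61 meV
```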
Atom interferometers and gradiometers have unique advantages in searching for various kinds of dark matter (DM). Our work focuses on light-DM scattering and the gravitational effects of macroscopic DM in such experiments.
First we discuss sensitivities of atom interferometers to a light DM subcomponent at sub-GeV masses through decoherence and phase shift from spin-independent scatterings. Benefiting from their sensitivities to extremely low momentum deposition and the coherent scattering, atom interferometers will be highly competitive and complementary to other direct detection experiments, in particular for DM subcomponent with mass $m_\chi \leq 10$ keV.
As an excellent accelerometer, atom gradiometers can also be sensitive to macroscopic DM through gravitational interactions. We present a general framework for calculating phase shifts in atom interferometers and gradiometers with metric perturbations sourced by time-dependent weak Newtonian potentials. We derive signals from gravitationally interacting macroscopic DM and find that future space missions like AEDGE could constrain macroscopic DM fractions to less than unity for DM masses around $m_\text{DM}\sim 10^7$ kg.
Beyond-the-Standard-Model (BSM) Higgs bosons with a same-flavor, opposite-charge dilepton plus missing transverse energy (MET) final state are predicted by many models, including extensions of supersymmetry with an additional scalar. Such models are motivated by phenomenological issues with the Standard Model, such as the hierarchy problem, and by astrophysical observations such as the excess of gamma-ray radiation in the Milky Way galactic center. Sensitivity has been achieved over the 1-4 GeV and >20 GeV ranges for the mass of this scalar; we now target the 4-20 GeV range. Conveniently, the proposed signal decay has the same final state as the signal region of a published ATLAS search for gauginos in a compressed-mass scenario at the LHC. Because of this signal-region overlap, we can take advantage of the analysis preservation and reinterpretation framework (RECAST) to calculate limits on the branching ratio for this decay mode instead of building a dedicated analysis in this range.
We present a search for the γ+H production mode with data from the CMS experiment at the LHC, using 138 fb$^{-1}$ of data at $\sqrt{s} = 13$ TeV. The analysis targets a signature of a boosted Higgs boson recoiling against a high-energy photon in the H->4l and H->bb final states. Effective HZγ and Hγγ anomalous couplings are considered within the framework of Effective Field Theory. Within this model, constraints on the γH production cross-section are presented, and simultaneous constraints on four anomalous HZγ and Hγγ couplings are reported.
The Standard Model (SM) predicts the couplings of the Higgs boson for a given Higgs mass, and experimental values different from these predictions would be strong indicators of physics beyond the SM. While Higgs decays to vector bosons and third-generation charged fermions have been established with good agreement with the SM couplings, the Higgs boson coupling to charm quarks has yet to be experimentally determined with statistical significance. The production mechanism in which the Higgs boson is produced in association with a vector boson and subsequently decays to a pair of charm quarks (VH, H->cc) is a promising process for studying the Higgs-charm Yukawa coupling due to its high signal-to-background ratio. We discuss the planned SM search for VH, H->cc in the resolved-jet regime, where the Higgs boson has a low to moderate transverse momentum (<~ 300 GeV), using CMS Run-3 proton-proton data. Previous analyses with CMS Run-2 data reconstructed the Higgs decay in the resolved-jet regime with two small-radius jets that were flavor-tagged independently, which underperformed because information from the radiation between the decay products of the Higgs was excluded. For LHC Run 3, we intend to employ the novel “PAIReD” jet reconstruction technique: elliptical clusters of particles defined by pairs of small-radius jets with arbitrary separations between them. Modern flavor-tagging algorithms trained on such jets increase tagging performance by exploiting correlations between hadronization products, extending the capabilities of merged-jet flavor-tagging techniques to small-radius jets. Flavor tagging and simultaneously predicting the mass of PAIReD jets via machine learning provide greater leverage for separating the signal from the background.
The overall analysis will be extended to include Higgs decays to bottom quarks, resulting in a simultaneous measurement of the Higgs-charm and Higgs-bottom Yukawa couplings. We expect to improve the rejection of major backgrounds by a factor of around 2.
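One plausible way to picture the elliptical clustering is a catchment region in $(\eta, \phi)$ with the two jet axes as foci; this is a hypothetical geometric sketch, and the exact PAIReD definition and its margin parameter may differ:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in (eta, phi), with phi wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def in_paired_ellipse(part, jet_a, jet_b, margin=1.0):
    """Sketch of an elliptical catchment area: a particle is kept if the
    sum of its distances to the two jet axes (the foci) is below the
    jet-jet separation plus a margin. The margin value is hypothetical."""
    d = delta_r(*part, *jet_a) + delta_r(*part, *jet_b)
    return d <= delta_r(*jet_a, *jet_b) + margin

# A particle between the two jet axes is inside; a far-away one is not.
inside = in_paired_ellipse((0.5, 0.0), (0.0, 0.0), (1.0, 0.0))
outside = in_paired_ellipse((4.0, 3.0), (0.0, 0.0), (1.0, 0.0))
```

The point of such a region is that radiation between the two decay products, lost by two independent small-radius cones, is retained for tagging.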
The Higgs boson gives mass to all massive particles in the Standard Model (SM) and plays a crucial role in the theory. Studying different production and decay modes of the Higgs at the Large Hadron Collider is essential. Vector Boson Fusion (VBF) is the second-largest production mechanism of the Higgs. The Higgs boson is most likely to decay into a pair of bottom quarks, whereas its interaction with charm quarks has never been observed directly. I therefore led a sensitivity study in the summer of 2023 to identify the best optimizations for the Run-3 VBF Higgs to bb and Higgs to cc analysis. A new VBF trigger introduced in late 2018, at the end of Run 2, makes it possible to probe the Higgs boson decay to charm quarks in the VBF production mode. The sensitivity study used the new trigger and began by determining the best working points for the flavor tagging of b and c quarks. I optimized hyperparameters and input variables of the Boosted Decision Trees (BDT) and introduced cuts on the BDT score to increase the significance in the invariant mass of the two signal b-quarks or c-quarks. The study demonstrated the feasibility of searching for VBF Higgs to bb and Higgs to cc using a partial Run 2 and Run 3 ATLAS dataset, leading to a full ATLAS analysis in September 2023. This talk will summarize the sensitivity study and my current work on further optimizing the analysis to enhance signal sensitivity.
In this talk, we present the two-loop order $\mathcal{O}(\alpha\alpha_s)$ correction to the bottom-quark on-shell wavefunction renormalization constant, and we update the $\overline{\text{MS}}$-mass and Yukawa-coupling corrections at the same order, considering the full dependence on the top-quark mass and on the bottom mass itself.
The Georgi-Machacek (GM) model is a motivated extension of the Standard Model (SM) that predicts the existence of singly and doubly charged Higgs bosons (denoted $H^{\pm}$ and $H^{\pm\pm}$). Searches for these particles were conducted by the ATLAS collaboration at CERN with 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV $pp$ collision data (Run 2, collected between 2015 and 2018; see arXiv:2312.00420 and arXiv:2207.03925). Slight excesses were observed in searches utilizing events with vector-boson-fusion (VBF) topologies. To further study these excesses, a new combined search for the $H^{\pm}$ and $H^{\pm\pm}$ is underway using additional $pp$ data collected by ATLAS during 2022-2024 (Run 3) at a collision energy of $\sqrt{s} = 13.6$ TeV. The VBF production of the $H^{\pm}$ and $H^{\pm\pm}$ is once again utilized, where the $H^{\pm}$ decays to a $W$ and a $Z$ boson and the $H^{\pm\pm}$ decays into two same-sign $W$ bosons. Only the fully leptonic decays of the vector bosons are considered. Improvements over the Run 2 searches are discussed and some preliminary results are presented.
We present work in progress on using a neural network that takes the timing information of jet constituents to determine the production vertex of highly displaced jets formed from the decay of a long-lived particle. We also demonstrate that the same network can output a much more consistent jet time that is less sensitive to geometric effects, allowing for better exclusionary power compared to the $p_T$-weighted time.
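For reference, the $p_T$-weighted baseline that the network is compared against is simply a momentum-weighted mean of constituent arrival times:

```python
def pt_weighted_time(constituents):
    """Baseline jet-time estimate: the pT-weighted mean of the
    constituent arrival times. constituents = [(pT, time), ...]."""
    total_pt = sum(pt for pt, _ in constituents)
    return sum(pt * t for pt, t in constituents) / total_pt

# (pT [GeV], time [ns]) pairs; illustrative values only.
jet_time = pt_weighted_time([(30.0, 0.1), (20.0, 0.3), (10.0, 0.2)])
```

Because arrival times depend on where each constituent hits the detector, this simple average inherits geometric biases that a learned combination can correct for.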
Hadronization, a crucial component of event generation, is traditionally simulated using finely tuned empirical models. While current phenomenological models have achieved significant success in simulating this process, there remain areas where they fall short of accurately describing the underlying physics. Machine-learning-based models offer an alternative approach.
In this talk, I will present recent developments in MLHAD – a machine learning-based model for hadronization. We introduce a new training method for normalizing flows, which improves the agreement between simulated and experimental distributions of high-level observables by adjusting single-emission dynamics. Our results constitute an important step toward realizing a machine-learning-based model of hadronization that utilizes experimental data during training.
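As a sketch of the flow machinery involved, here is a single affine coupling layer in plain Python: the generic textbook building block of normalizing flows, with toy conditioner functions standing in for learned networks (MLHAD's actual architecture may differ):

```python
import math

def affine_coupling_forward(x1, x2, scale_net, shift_net):
    """One affine coupling layer: x1 passes through unchanged and
    conditions an invertible affine map of x2. Returns the transformed
    pair plus log|det J| (needed for the change-of-variables likelihood)."""
    s, t = scale_net(x1), shift_net(x1)
    return x1, x2 * math.exp(s) + t, s  # log-det of the Jacobian is s

def affine_coupling_inverse(y1, y2, scale_net, shift_net):
    """Exact inverse of the forward map, recomputing s, t from y1 = x1."""
    s, t = scale_net(y1), shift_net(y1)
    return y1, (y2 - t) * math.exp(-s)

# Toy conditioner functions (stand-ins for learned neural networks).
scale = lambda x: 0.5 * math.tanh(x)
shift = lambda x: 0.1 * x

y1, y2, logdet = affine_coupling_forward(0.3, -1.2, scale, shift)
x1, x2 = affine_coupling_inverse(y1, y2, scale, shift)
```

Stacking such layers gives an expressive, exactly invertible map with a tractable likelihood, which is what allows training directly against distributions of observables.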
New physics at the LHC may be hiding in non-standard final state configurations, particularly in cases where stringent particle identification could obscure the signal. Here we present a search for resonances in the three-photon final state where two photons are highly merged. We target the case where a heavy vector-like particle decays to a photon and a new spin-0 particle $\phi$, where $\phi$ is light and decays to two photons, resulting in a merged diphoton signature. To classify and obtain the relevant kinematic properties of these merged photons, we use a convolutional neural network that takes individual crystal deposits in the CMS electromagnetic calorimeter as input. This method performs remarkably well for these highly merged decays where standard particle identification fails.
Normalizing flows have proven to be state-of-the-art for fast calorimeter simulation. With access to the likelihood, these flow-based fast calorimeter surrogate models can be used for other tasks such as unsupervised anomaly detection and incident energy regression without any additional training costs.
The invariant mass of particle resonances is a key analysis variable for LHC physics. For analyses with di-tau final states, the direct calculation of the invariant mass is impossible because tau decays always include neutrinos, which escape detection in LHC detectors. The Missing Mass Calculator (MMC) is an algorithm used by the ATLAS Experiment to calculate the invariant mass of resonances decaying to two tau particles. The MMC solves the system of kinematic equations involving the tau visible decay products by minimizing a likelihood function, making use of the tau mass constraint and probability distributions from Z → ττ decays. Because the algorithm uses Z decays it is most accurate in the Z mass range. This presentation will show that for high mass BSM resonances the MMC mass increasingly deviates from the true value, warranting further studies and the search for solutions to this discrepancy. We will show studies into machine learning solutions to di-tau mass reconstruction, aimed at providing improved accuracy for high-mass resonances. The specific use case is the search for X → SH → bbττ, sensitive to the Two-real-scalar-singlet extension to the Standard Model (TRSM), in which the Standard Model scalar sector is extended by two scalar singlets, labeled as X and S.
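For contrast with the MMC's likelihood-based approach, the simpler collinear approximation (a common baseline, not the MMC itself) already shows how the two transverse missing-momentum equations determine the visible momentum fractions:

```python
import math

def collinear_ditau_mass(vis1, vis2, met):
    """Collinear-approximation baseline: assume each neutrino is collinear
    with its tau's visible products, solve the two transverse MET equations
    for the visible momentum fractions x1, x2, then scale the visible
    mass: m_tautau = m_vis / sqrt(x1 * x2).
    vis1, vis2 = (px, py, pz, E); met = (mex, mey)."""
    (p1x, p1y, p1z, e1), (p2x, p2y, p2z, e2) = vis1, vis2
    mex, mey = met
    det = p1x * p2y - p1y * p2x
    a1 = (mex * p2y - mey * p2x) / det   # a_i = (1 - x_i) / x_i
    a2 = (p1x * mey - p1y * mex) / det
    x1, x2 = 1.0 / (1.0 + a1), 1.0 / (1.0 + a2)
    e, px, py, pz = e1 + e2, p1x + p2x, p1y + p2y, p1z + p2z
    m_vis = math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))
    return m_vis / math.sqrt(x1 * x2)

# Toy event: two massless visible systems plus MET consistent with
# x1 = 0.5, x2 = 0.6 (illustrative numbers only).
m_est = collinear_ditau_mass((40.0, 0.0, 0.0, 40.0),
                             (0.0, 30.0, 0.0, 30.0),
                             (40.0, 20.0))
```

The collinear solution is ill-behaved for back-to-back topologies; the MMC replaces the exact solve with a likelihood scan over the underconstrained kinematics, and the ML approaches discussed in the talk go further still.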
In a search for an exotic Higgs boson decay, a novel signature with highly collimated photons is studied, in which the Higgs boson decays into hypothetical light pseudoscalar particles via H → AA. In the highly boosted scenario, the two collimated photons from an A decay are reconstructed as a single photon object, an artificially merged photon shower. A deep-learning-based tagger is developed to identify this merged photon signature, utilizing images of the electromagnetic shower shape together with track structure. In this talk, we present the merged photon tagger, which uses low-level detector information, and its excellent performance across different boosts of the A, compared with the standard CMS photon identification algorithm.
Quantum sensing employs a rich arsenal of techniques, such as squeezing, photon counting, and entanglement assistance, to achieve unprecedented levels of sensitivity in various tasks, with wide-reaching applications in fields of fundamental physics. For instance, squeezing has been utilized to enhance the sensitivity of gravitational wave detection and expedite the hunt for exotic dark matter candidates. In this talk, I will dive into the various quantum strategies aimed at accelerating the search for weak signals and explore initial approaches to transcend Standard Quantum Limits en route to achieving the ultimate limits of measurement sensitivity set by quantum mechanics. Along the way, I will underscore the important roles that distributed quantum sensing and entanglement can have in pushing the limits of our sensing capabilities.
Superconducting transmon qubits play a pivotal role in contemporary superconducting quantum computing systems. These nonlinear devices are typically composed of a Josephson junction shunted by a large capacitor, with the two lowest energy eigenstates serving as the qubit. When a qubit is placed in its excited state, it decays to its ground state with a relaxation timescale $T_1$. However, recent studies have suggested that cosmic rays or ambient gamma radiation can significantly degrade the relaxation times of transmon qubits, leading to detrimental correlated errors that impede quantum error correction [1,2]. In this study, we explore the potential of utilizing transmon qubits as radiation detectors by investigating the impact of radioactivity on transmons fabricated at the Superconducting Quantum Materials and Systems (SQMS) Center, Fermilab. We develop a fast detection protocol based on rapid projective measurements and active reset to perform detection with millisecond time resolution. We utilize the underground facility at INFN-Gran Sasso and controlled radioactive sources (such as thorium) to validate our scheme. Additionally, we investigate the possibility of enhancing detection efficiency by evaluating transmons fabricated with various superconducting materials and improved signal-analysis schemes.
[1] Matt McEwen et al., Nature Physics 18, 107–111 (2022)
[2] C.D. Wilen et al., Nature 594, 369–373 (2021)
*This work was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems (SQMS) Center under Contract No. DE-AC02-07CH11359; by the Italian Ministry of Foreign Affairs and International Cooperation, grant number US23GR09; and by the Italian Ministry of Research under PRIN Contract No. 2020h5l338.
The QCD axion, originally motivated as a solution to the strong CP problem, is a compelling candidate for dark matter, and accordingly, the last decade has seen an explosion in new ideas to search for axions. Simultaneously, we have witnessed a revolution in quantum sensing and metrology, with the emergence of platforms enabling ever-greater measurement sensitivity. These platforms are now being brought to bear on axion dark matter searches, with the aim of a comprehensive probe of the phase space for QCD axions. In this talk, I briefly overview efforts to apply techniques evading the Standard Quantum Limit of amplification, such as squeezing, photon counting, and backaction evasion, to axion dark matter searches. I then focus on techniques well-suited to resonant electromagnetic probes of pre-inflationary sub-μeV axions, for which photon counting of the thermal state in the resonator is not advantageous relative to quantum-limited amplification. I describe, in particular, the RF Quantum Upconverter (RQU), a superconducting lithographed device containing a Josephson junction interferometer that upconverts kHz–MHz electromagnetic signals (corresponding to the sub-μeV mass window) to GHz signals. By leveraging mature microwave techniques as well as adapting sensitive measurement schemes utilized in cavity optomechanical systems (e.g., LIGO), the RQU can evade the Standard Quantum Limit. Recent experimental results for the RQU are discussed. I describe plans to integrate the RQU into DMRadio, an experimental campaign for sub-μeV dark matter with the ultimate goal of probing GUT-scale axions, and the Princeton Axion Search, which will probe QCD axions in the 0.8–2 μeV mass range.
Recent advancements in quantum computing have introduced new opportunities alongside classical computing, offering unique capabilities that complement traditional methods. As quantum computers operate on fundamentally different principles from classical systems, there is a growing imperative to explore their distinct computational paradigms. In this context, our research aims to explore the potential applications of quantum machine learning in the field of high-energy physics. Specifically, we seek to assess the feasibility of employing supervised quantum machine learning for searches conducted at the Large Hadron Collider. Additionally, we aim to investigate the potential of generative quantum machine learning for simulating tasks relevant to high-energy physics. By leveraging quantum computing technologies, we aim to advance the capabilities of computational approaches in addressing complex challenges within the field of particle physics.
ProtoDUNE-SP was a large-scale prototype of the single-phase DUNE far detector, which took test-beam data in Fall 2018. The beam consisted of positive pions, kaons, muons, and protons, and these data are being used to measure the various hadron–argon interaction cross sections. These measurements will provide important constraints on the nuclear ground state, final-state interaction, and secondary-interaction models of argon-based neutrino-oscillation and proton-decay experiments such as DUNE. This talk will focus on the measurement of the pion–argon inelastic interaction cross sections.
The SPS Heavy Ion and Neutrino Experiment (NA61/SHINE) is a fixed-target hadron spectrometer at CERN’s Super Proton Synchrotron. It has a dedicated program to measure hadron-nucleus interactions with the goal of constraining the accelerator-based neutrino flux, which mainly originates from the not precisely known primary and secondary hadron production. NA61/SHINE’s previous measurements of protons colliding on thin carbon targets and a replica T2K target have significantly reduced the flux uncertainty in the T2K experiment. This contribution will present the recent results and ongoing hadron production measurements in NA61/SHINE, the upcoming data-taking with a replica LBNF/DUNE target, as well as the plan after the Long Shutdown 3 of the accelerator complex at CERN.
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline oscillation experiment that, among its many physics goals, seeks to measure the charge-parity (CP) violating phase, $\delta_{\mathrm{CP}}$. Doing so requires precise knowledge of both the neutrino and antineutrino fluxes, which DUNE will achieve via the use of both near and far detector systems. The leading source of systematic uncertainty associated with predicting the DUNE flux comes from the production of hadrons, closely followed by the uncertainties associated with beam focusing effects.
The DUNE flux was simulated within a custom Geant4 framework in parallel with the Package to Predict the Flux (PPFX). The total systematic uncertainties associated with hadron production and beam focusing effects within DUNE's region of interest, [0.5, 8] GeV, were found to be on average 8–10% across all modes, detector locations, and neutrino species. Construction of the correlation matrix indicated that the systematic uncertainties are highly correlated, while the far-to-near flux ratio allows for the cancellation of many systematic effects, effectively reducing the total systematic uncertainties to the order of 1.5–5%.
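The quoted reduction is the textbook effect of correlated uncertainties cancelling in a ratio. A minimal sketch with illustrative numbers (the 9% per-detector uncertainty and the 0.95 correlation are assumptions for the example, not DUNE results):

```python
import math

def ratio_frac_unc(sig_far, sig_near, rho):
    """Fractional uncertainty on a far/near flux ratio, given fractional
    uncertainties sig_far and sig_near with correlation coefficient rho
    (standard first-order error propagation for a quotient)."""
    return math.sqrt(sig_far**2 + sig_near**2 - 2 * rho * sig_far * sig_near)

# Toy numbers: ~9% hadron-production uncertainty at each detector,
# highly correlated between near and far (rho = 0.95, assumed).
print(ratio_frac_unc(0.09, 0.09, 0.95))  # ~0.028: most of the 9% cancels
```

With fully uncorrelated uncertainties (rho = 0) the same inputs would give about 13%, which is why constructing the correlation matrix matters.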
Identification of high-energy neutrino point sources by IceCube is exciting for particle phenomenology, as propagation of neutrinos over large distances allows us to test properties that are hard to access. However, beyond-Standard Model effects would often show up as distortions of the energy spectrum, which makes it difficult to distinguish new physics from uncertainties in the source modeling. In this talk, I will present ongoing work to determine how well a future dataset containing multiple point-source observations could simultaneously distinguish some of these effects, and how the analysis can account for this.
Charged leptons produced by high-energy and ultrahigh-energy neutrinos have a substantial probability of emitting prompt internal bremsstrahlung $\nu_\ell + N \rightarrow \ell + X + \gamma$. This can have important consequences for neutrino detection. We discuss observable consequences at high- and ultrahigh-energy neutrino telescopes and LHC's Forward Physics Facility. Logarithmic enhancements can be substantial (e.g.\ $\sim 20\%$) when either the charged lepton's energy, or the rest of the cascade, is measured. We comment on applications involving the inelasticity distribution including measurements of the $\nu/\bar{\nu}$ flux ratio, throughgoing muons, and double-bang signatures for high-energy neutrino observation. Furthermore, for ultrahigh-energy neutrino observation, we find that final state radiation affects flavor measurements and decreases the energy of both Earth-emergent tau leptons and regenerated tau neutrinos. Finally, for LHC's Forward Physics Facility, we find that final state radiation will impact future extractions of strange quark parton distribution functions. Final state radiation should be included in future analyses at neutrino telescopes and the Forward Physics Facility.
We use publicly available data to perform a search for correlations of high-energy neutrino candidate events detected by IceCube and high-energy photons seen by the HAWC collaboration. Our search is focused on unveiling such correlations outside of the Galactic plane. This search is sensitive to correlations in the neutrino candidate and photon skymaps which would arise from a population of unidentified point sources.
The scenario of neutrino self-interactions is an interesting beyond-Standard Model possibility that is difficult to test. High-energy neutrinos measured by IceCube after traveling long distances present an opportunity to constrain the parameters governing neutrino self-interactions: the mediator mass and the coupling constant. We have modeled neutrino production, propagation, and detection by IceCube to predict the detected neutrino flux in the presence of self-interactions at given values of the coupling constant and mediator mass. Using this model, we can perform a joint analysis of several neutrino sources (the blazar TXS 0506+056 and the AGN NGC 1068), whose different inherent assumptions make a joint analysis beneficial. Prior works have examined sources only individually, so our study of data from multiple sources provides a statistically novel approach to this problem. We present our ongoing work on this analysis.
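The benefit of combining sources can be sketched with a toy chi-square combination over a shared coupling (the source names are real, but the per-source curves and numbers below are invented for illustration):

```python
import numpy as np

# Scan a shared self-interaction coupling g (mediator mass held fixed).
g_grid = np.linspace(0.0, 1.0, 101)

def chi2_curve(g_best, width):
    """Toy parabolic chi2 for one source, preferring coupling g_best."""
    return ((g_grid - g_best) / width) ** 2

chi2_txs = chi2_curve(0.30, 0.20)   # TXS 0506+056 (toy preference)
chi2_ngc = chi2_curve(0.40, 0.15)   # NGC 1068 (toy, better measured)

# For independent sources the chi2 (negative log-likelihood) values add,
# so the joint curve is narrower than either individual one.
chi2_joint = chi2_txs + chi2_ngc
g_hat = g_grid[np.argmin(chi2_joint)]
print(round(g_hat, 2))  # 0.36: pulled toward the better-measured source
```

In the real analysis each curve comes from the full production/propagation/detection model rather than a parabola, but the combination step is the same.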
The ForwArd Search ExpeRiment (FASER) has been successfully acquiring data at the Large Hadron Collider (LHC) since the inception of Run 3 in 2022. FASER opened the window on the new subfield of collider neutrino physics by conducting the first direct detection of muon and electron neutrinos at the LHC. In this talk, we discuss the latest neutrino physics results from FASER. A review of the first neutrino results from the electronic detectors of FASER will be given, and the rest of the talk will focus on the first measurements of neutrino cross sections in the TeV-energy range with the FASER𝜈 sub-detector.
Proton-proton collisions at the LHC generate a high-intensity collimated beam of neutrinos in the forward direction, characterized by energies of up to several TeV. The recent observation of LHC neutrinos by FASERν and SND@LHC signals that this hitherto ignored particle beam is now available for scientific inquiry. Here we quantify the impact that neutrino deep-inelastic scattering (DIS) measurements at the LHC would have on the parton distributions (PDFs) of protons and heavy nuclei. We generate projections for DIS structure functions for FASERν and SND@LHC at Run III, as well as for the FASERν2, AdvSND, and FLArE experiments to be hosted at the proposed Forward Physics Facility (FPF) operating concurrently with the High-Luminosity LHC (HL-LHC). We determine that up to one million electron- and muon-neutrino DIS interactions within detector acceptance can be expected by the end of the HL-LHC, covering a kinematic region in x and Q2 overlapping with that of the Electron-Ion Collider. Including these DIS projections into global (n)PDF analyses reveals a significant reduction of PDF uncertainties, in particular for strangeness and the up and down valence PDFs. We show that LHC neutrino data enables improved theoretical predictions for core processes at the HL-LHC, such as Higgs and weak gauge boson production. Our analysis demonstrates that exploiting the LHC neutrino beam effectively provides CERN with a “Neutrino-Ion Collider” without requiring modifications in its accelerator infrastructure.
A search for a massive resonance $X$ decaying to a pair of spin-0 bosons $\phi$ that themselves decay to pairs of photons ($\gamma$), is presented. The search is based on CERN LHC proton-proton collision data at $\sqrt{s} = 13$ TeV, collected with the CMS detector, corresponding to an integrated luminosity of 138 $\textrm{fb}^{-1}$. The analysis considers masses $m_X$ between 0.3 and 3 TeV, and is restricted to values of $m_\phi$ for which the ratio $m_\phi/m_X$ is between 0.5 and 2.5\%. In these ranges, the two photons from each $\phi$ boson are expected to spatially overlap significantly in the detector. Two neural networks are created, based on computer vision techniques, to first classify events containing such merged diphotons and then to reconstruct the mass of the diphoton object. The mass spectra are analyzed for the presence of new resonances, and are found to be consistent with standard model expectations. Model-specific limits are set at 95\% confidence level on the production cross section for $X \to \phi\phi \to (\gamma\gamma)(\gamma\gamma)$ as a function of the resonances’ masses, where both the $X \to \phi\phi$ and $\phi \to \gamma\gamma$ branching fractions are assumed to be 100\%. Observed (expected) limits range from 0.03 - 1.06 fb (0.03 - 0.79 fb) for the masses considered, representing the most sensitive search of its kind at the LHC.
We present the first search for "soft unclustered energy patterns" (SUEPs), described by an isotropic production of many soft particles. SUEPs are a potential signature of some Hidden Valley models invoked to explain dark matter and can be produced at the LHC via a heavy scalar mediator. It was previously expected that such events would be rejected by conventional collider triggers and reconstruction; however, using custom data samples augmented by storing track-level information, and by targeting events where the scalar mediator recoils against initial-state radiation, this search is uniquely able to reconstruct the large track clusters associated with the SUEP signature. The large QCD background is estimated using a novel data-driven background prediction method which is shown to describe the data accurately. This search achieves sensitivity across a broad range of mediator masses for the first time, in the regime where the track multiplicity is high.
We present a search for low-mass narrow quark-antiquark resonances. This search uses LHC proton-proton collision data at a center-of-mass energy of 13 TeV, collected by the CMS detector in Run 2 and corresponding to an integrated luminosity of 136 fb^-1. The analysis strategy makes use of an initial-state photon recoiling against the narrow resonance. The resulting large transverse momentum (pT) of the resonance leads to its decay products being collimated into a single jet with internal two-pronged substructure. The new-physics signal is searched for as a narrow peaking excess above the standard model backgrounds in the jet mass spectrum. During the 2018 data-taking period, a trigger with a lower photon pT threshold was implemented and is used in this analysis, allowing us to better probe the lower mass region. The variable N2DDT, which is decorrelated from the jet mass and pT, is used to identify jets with two-pronged substructure. An alternative method of selecting such jets, using a machine learning algorithm called ParticleNet, is also in development. A mostly data-driven method is used to determine the backgrounds in the analysis. A leptophobic Z' decaying to quarks is the benchmark model used, and the analysis is further motivated by a simplified model of dark matter involving a mediator particle interacting between quarks and dark matter.
A search for Drell-Yan production of leptoquarks is performed using proton-proton collision data collected at √s = 13 TeV with the CMS detector at the LHC, using the full Run 2 dataset, corresponding to an integrated luminosity of approximately 137 fb−1. The search spans scalar and vector leptoquarks that couple up and down quarks to electrons and muons. Dielectron and dimuon final states are considered, with dilepton invariant masses above 500 GeV. Since the Drell-Yan production of leptoquarks is non-resonant, we fit the dilepton angular distribution to templates built from reweighted Monte Carlo samples, which allows us to probe higher leptoquark masses than previous searches. Exclusion limits at 95% confidence level on leptoquark Yukawa couplings are presented for leptoquark masses up to 5 TeV.
Long-lived, charged particles are included in many beyond the standard model theories. It is possible to observe massive charged particles through unusual signatures within the CMS detector. We use data recorded during 2017-18 operations to search for signals involving anomalous ionization in the silicon tracker. Two new, enhanced methods are presented. The results are interpreted within several models including those with staus, stops, gluinos, and multiply charged particles as well as a new model with decays from a Z' boson.
Long-lived particles (LLPs) arise in many promising theories beyond the Standard Model. At the LHC, LLPs typically decay away from their initial production vertex, producing displaced and possibly delayed final state objects that give rise to non-conventional detector signatures. The development of custom reconstruction algorithms and dedicated background estimation strategies significantly enhance sensitivity to various LLP topologies at CMS. We present recent results of tracking- and calorimeter-based searches for LLPs and other non-conventional signatures obtained using data recorded by the CMS experiment during Run 2 and Run 3 of the LHC.
Since the landmark discovery in 2012 of the h(125) Higgs boson at the LHC, it should be a no-brainer to pursue the existence of a second Higgs doublet. We advocate, however, the general 2HDM (g2HDM) that possesses a second set of Yukawa couplings. The extra top Yukawa coupling ρtt drives electroweak baryogenesis (EWBG), i.e. generating the Baryon Asymmetry of the Universe (B.A.U.) with physics at the electroweak scale — hence relevant at the LHC! At the same time, the extra electron Yukawa coupling ρee keeps the balance towards the stringent ACME2018 & JILA2023 bounds on the electron electric dipole moment (eEDM), spectacularly via the fermion mass and mixing hierarchies observed in the Standard Model — discovery could be imminent (possibly followed by an nEDM echo)!

EWBG suggests that the exotic Higgs bosons H, A, H⁺ in g2HDM ought to be sub-TeV in mass, with effects naturally well-hidden so far by 1) the flavor structure, i.e. the aforementioned fermion mass-mixing hierarchies; and 2) the emergent alignment phenomenon (i.e. small h−H mixing) that suppresses processes such as t → ch, with equivalent best limits from CMS and ATLAS. It is then natural to pursue direct search modes such as cg → tH/A → ttc̄ with extra top Yukawa couplings ρtc and ρtt that are not alignment-suppressed; the results were published by CMS in March 2024, preceded by ATLAS. CMS would now pursue cg → bH⁺ → btb̄, as well as continue to study t → ch and ttc̄ by adding Run 3 data, all with discovery potential. CMS also continues to pursue Bs,d → μμ, where the result published in 2023 has changed the world view.

Belle II would probe g2HDM with precision flavor measurements such as B → μν, τν; a ratio deviating from 0.0045 would provide a smoking gun. The τ → μγ process would need a large dataset.

With H, A, H⁺ expected at 300−600 GeV and hence ripe for LHC searches, we pursue lattice simulation studies of the first-order electroweak phase transition, a prerequisite for EWBG in the early Universe and the main motivation for our program. We also investigate the Landau pole phenomenon of the g2HDM Higgs sector for a new strong-interaction scale, which could prove crucial for the future of collider physics.

Thus, our Decadal Mission: “Find the extra H, A, H⁺ bosons; Crack the Flavor code; Solve the Mysterious B.A.U.!”
Baryon number violation is our most sensitive probe of physics beyond the Standard Model, especially through the study of nucleon decays. Angular momentum conservation requires a lepton in the final state of such decays, kinematically restricted to electrons, muons, or neutrinos. We show that operators involving tauons, which are at first sight too heavy to play a role in nucleon decays, still lead to clean nucleon decay channels with tau neutrinos. While many of them are already constrained from existing two-body searches such as $p\to \pi^+\nu$, other operators induce many-body decays such as $p \to \eta \pi^{+} \bar\nu_\tau$ and $n\to K^+ \pi^-\nu_\tau$ that have never been searched for.
The fermion mass hierarchy of the Standard Model (SM) spans many orders of magnitude and begs for a further explanation. The Froggatt-Nielsen (FN) mechanism is a popular solution which introduces an additional $U(1)$ symmetry to the SM under which SM fermions are charged. We studied the general class of FN solutions to the lepton flavor puzzle, including multiple different scenarios of neutrino masses. In this talk, we present preliminary results for the phenomenologically viable set of leptonic FN solutions. We calculate the magnitude of resulting flavor-changing observables for both low-energy decays and collider signatures, especially the observational potential of a future muon collider. We also discuss the potential for distinguishing between different FN scenarios based on the patterns observed in flavor-violating observables.
Charged lepton flavor violation arises in the Standard Model Effective Field Theory at mass dimension six. The operators that induce neutrinoless muon and tauon decays are among the best constrained and are sensitive to new-physics scales up to 10^7 GeV. An entirely different class of lepton-flavor-violating operators violates lepton flavors by two units rather than one and does not lead to such clean signatures. Even the well-known case of muonium–anti-muonium conversion that falls into this category is only sensitive to two out of the three ∆Lμ = −∆Le = 2 dimension-six operators. We derive constraints on many of these operators from lepton flavor universality and show how to make further progress with future searches at Belle II and future experiments such as Z factories or muon colliders.
Non-abelian symmetries are strong contenders as solutions to the flavour puzzle, i.e. explaining the mass and mixing matrices of the SM fermions. The Universal Texture Zero (UTZ) model charges all quark and lepton families as triplets under the $\Delta(27)$ symmetry group, while simultaneously exploiting the seesaw mechanism to generate light neutrino masses. Together with BSM triplet scalars, called flavons, the fermions and flavons generate a Yukawa structure that agrees with current measurements and makes predictions for poorly constrained leptonic CP-violation parameters and other observables such as $0\nu\beta\beta$ rates. In this talk, we present the inclusion of a non-renormalizable potential in the flavon sector and illustrate how the additional dimension-six scalar potential terms modify the vacuum alignment. We investigated the possible symmetry contractions of terms of arbitrary dimension using the Hilbert-series-based DECO algorithm and classified the terms that can contribute non-trivial changes to the vacuum alignment and, hence, to the flavour measurements. We are also looking into the possibility of classifying a general number of flavons using neural networks. The perturbation of the vacuum alignment by the non-renormalizable scalar potential can affect the effective couplings in the Yukawa sector after family-symmetry breaking. We further outline the possible phenomenological effects in the neutrino sector.
I will discuss effective field theory tools and model-building efforts focused on describing observable signals of charged lepton flavor violation at current and future muon-to-electron conversion experiments.
The “Hubble tension” refers to a disagreement between the measured present expansion rate of the universe and the rate projected by applying our current model (“Lambda Cold Dark Matter,” or Lambda-CDM) to early-universe measurements; the Lambda-CDM projection differs from the direct measurement by more than five standard deviations. We describe the model, and in particular the meaning of Lambda, whose equation-of-state parameter is w = -1. We find that if instead w = -1.73, the projected expansion rate comes out right; however, any w < -1 causes the universe to end in finite time. We present the mathematics and some conclusions.
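For readers unfamiliar with the role of w, the projection in question is governed by the standard wCDM Friedmann equation (the textbook relation, sketched here rather than the authors' own derivation):

```latex
H^2(a) = H_0^2 \left[ \Omega_m\, a^{-3} + \Omega_{\rm DE}\, a^{-3(1+w)} \right],
\qquad
\rho_{\rm DE}(a) \propto a^{-3(1+w)} .
```

For w = -1 the dark-energy density is constant, recovering the cosmological constant Λ; for any w < -1 the density grows with the scale factor a, so the expansion accelerates without bound and the scale factor diverges at finite time (a "big rip"), which is the finite-time end of the universe referred to above.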
Cosmological observables are particularly sensitive to key ratios of energy densities and rates, both today and at earlier epochs of the Universe. Well-known examples include the photon-to-baryon and the matter-to-radiation ratios. Equally important, though less publicized, are the ratios of pressure-supported to pressureless matter and the Thomson scattering rate to the Hubble rate around recombination, both of which observations tightly constrain. Preserving these key ratios in theories beyond the $\Lambda$ Cold-Dark-Matter ($\Lambda$CDM) model ensures broad concordance with a large swath of datasets when addressing cosmological tensions. We demonstrate that a mirror dark sector, reflecting a partial $\mathbb{Z}_2$ symmetry with the Standard Model, in conjunction with percent level changes to the visible fine-structure constant and electron mass which represent a $\textit{phenomenological}$ change to the Thomson scattering rate, maintains essential cosmological ratios. Incorporating this ratio preserving approach into a cosmological framework significantly improves agreement to observational data ($\Delta\chi^2=-35.72$) and completely eliminates the Hubble tension with a cosmologically inferred $H_0 = 73.80 \pm 1.02$ km/s/Mpc when including the S$H_0$ES calibration in our analysis. While our approach is certainly nonminimal, it emphasizes the importance of keeping key ratios constant when exploring models beyond $\Lambda$CDM.
With the growing precision of cosmological measurements, tensions in the determination of cosmological parameters have arisen that might be the first manifestations of physics beyond $\Lambda$CDM. We propose a new class of interacting dark sector models that lead to a qualitatively distinct cosmological behavior, dark acoustic oscillations, and can potentially address simultaneously the two most important tensions in cosmological data, the $H_0$ and $S_8$ tensions. The main ingredients in this class of models are self-interacting dark radiation and its dark acoustic oscillations, induced by strong interactions with a fraction of the dark matter. I will also present the latest results from applying this model to various combinations of cosmological data, illustrating the improvement it provides over $\Lambda$CDM.
Phase transitions provide a useful mechanism to produce both electroweak baryogenesis (EWBG) and gravitational waves (GW). We propose a left-right symmetric model with two Higgs doublets, a left-handed doublet $H_L$ and a right-handed doublet $H_R$, and a scalar singlet $\sigma$ under a $H_L \leftrightarrow H_R$ and $\sigma \leftrightarrow -\sigma$ symmetry as discussed by Gu. We utilize a multistep phase transition to produce EWBG and GW. At the first transition, $\sigma$ acquires a vev which results in GW being produced. At the second transition at a lower temperature, $H_R$ acquires a vev providing $W_R$ with a mass. This also produces a baryon asymmetry in the right-handed sector, which eventually is transferred to the left-handed sector. Finally, at an even lower temperature, the electroweak phase transition occurs and $H_L$ acquires a vev.
We propose a novel framework where the baryon asymmetry of the universe can arise from the forbidden decay of dark matter (DM), enabled by finite-temperature effects in the vicinity of a first-order phase transition (FOPT). In order to implement this cogenesis mechanism, we consider the extension of the Standard Model by one scalar doublet $\eta$ and three right-handed neutrinos (RHN), all odd under an unbroken $Z_2$ symmetry, popularly referred to as the scotogenic model of radiative neutrino mass. While the lightest RHN $N_1$ is the DM candidate and stable at zero temperature, there arises a temperature window prior to the nucleation temperature of the FOPT assisted by $\eta$, in which $N_1$ can decay into $\eta$ and leptons, generating a non-zero lepton asymmetry that is subsequently converted into a baryon asymmetry by sphalerons. The requirement of successful cogenesis together with a first-order electroweak phase transition not only keeps the mass spectrum of the new particles in the sub-TeV ballpark, within reach of collider experiments, but also leads to an observable stochastic gravitational wave spectrum which can be discovered in planned experiments like LISA.
We calculate the effects of a light, very weakly-coupled boson $X$ arising from a spontaneously broken $U(1)_{B-L}$ symmetry on $\Delta N_{\rm eff}$ as measured by the CMB and $Y_p$ from BBN. Our focus is the mass range $1 \; {\rm eV} \, \lesssim m_X \lesssim 100 \; {\rm MeV}$. We find $U(1)_{B-L}$ is more strongly constrained by $\Delta N_{\rm eff}$ than previously considered. While some of the parameter space has complementary constraints from stellar cooling, supernova emission, and terrestrial experiments, we find future CMB observatories including Simons Observatory and CMB-S4 can access regions of mass and coupling space not probed by any other method.
A larger Planck scale during an early epoch leads to a smaller Hubble rate, which is the measure for the efficiency of primordial processes. The resulting slower cosmic tempo can accommodate alternative cosmological histories. We consider this possibility in the context of extra-dimensional theories, which can provide a natural setting for the scenario. If the fundamental scale of the theory is not too far above the weak scale, to alleviate the ``hierarchy problem,'' cosmological constraints imply that thermal relic dark matter would be at the GeV scale, which may be disfavored by cosmic microwave background measurements. Such dark matter becomes viable again in our proposal, due to the smaller requisite annihilation cross section, further motivating ongoing low-energy accelerator-based searches. Quantum gravity signatures associated with the extra-dimensional setting can be probed at high-energy colliders -- up to $\sim 13$ TeV at the LHC or $\sim 100$ TeV at FCC-hh. Searches for missing-energy signals of dark-sector states, with masses $\gtrsim 10$ GeV, can be pursued at a future circular lepton collider.
We describe a simple dark sector structure which, if present, has implications for the direct detection of dark matter (DM): the Dark Sink. A Dark Sink transports energy density from the DM into light dark-sector states that do not appreciably contribute to the DM density. As an example, we consider a light, neutral fermion $\psi$ which interacts solely with DM $\chi$ via the exchange of a heavy scalar $\Phi$. We illustrate the impact of a Dark Sink by adding one to a DM freeze-in model in which $\chi$ couples to a light dark photon $\gamma'$ which kinetically mixes with the Standard Model (SM) photon. This freeze-in model (absent the sink) is itself a benchmark for ongoing experiments. In some cases, the literature for this benchmark has contained errors; we correct the predictions and provide them as a public code. We then analyze how the Dark Sink modifies this benchmark, solving coupled Boltzmann equations for the dark-sector energy density and DM yield. We check the contribution of the Dark Sink $\psi$'s to dark radiation; consistency with existing data limits the maximum attainable cross section. For DM with a mass between the MeV scale and $\mathcal{O}(10\text{ GeV})$, adding the Dark Sink can increase predictions for the direct detection cross section all the way up to the current limits.
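As a rough illustration of the type of calculation involved (not the authors' actual model), the sketch below integrates a single toy freeze-in Boltzmann equation for the comoving DM yield. The collision term $\propto x^3 e^{-x}$ and the coupling `lam` are illustrative assumptions that mimic Boltzmann suppression at low temperature; the full analysis couples this to an equation for the dark-sector (Dark Sink) energy density.

```python
# Toy freeze-in yield: dY/dx = lam * x^3 * exp(-x), with x = m_chi / T.
# The rate shape and the coupling `lam` are illustrative assumptions,
# not taken from the paper; the real analysis couples this equation to
# one for the Dark Sink energy density.
import math

def freeze_in_yield(lam=1e-12, x_start=0.1, x_end=50.0, steps=20000):
    """Integrate dY/dx with classic RK4; returns a list of (x, Y)."""
    h = (x_end - x_start) / steps
    rate = lambda x: lam * x**3 * math.exp(-x)
    x, y = x_start, 0.0
    history = [(x, y)]
    for _ in range(steps):
        k1 = rate(x)
        k2 = rate(x + 0.5 * h)
        k3 = k2                      # rate depends only on x here
        k4 = rate(x + h)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
        history.append((x, y))
    return history

hist = freeze_in_yield()
Y_final = hist[-1][1]
# Production shuts off at large x: the yield "freezes in" and saturates
# near lam * Gamma(4) = 6 * lam.
```

The saturation of the yield once the bath temperature drops below the mediator mass is the defining feature of freeze-in; in the paper's setup the Dark Sink then reprocesses part of this energy density.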
Collisions between large fermionic dark matter bound states may produce characteristic photon bursts that are highly intense but rare in occurrence and short in duration. We discuss strategies and prospects for discovering this less-explored class of indirect detection signals with nontrivial temporal structures. We also provide a concrete dark matter model that yields burst-like gamma-ray signals.
Axion-like particles (ALPs) offer a pathway for dark matter (DM) to interact with the Standard Model (SM) through a pseudoscalar mediator, addressing the absence of signals in direct detection experiments. This makes ALPs a compelling candidate for connecting DM to the SM. Our model assumes a Dirac fermion DM particle that couples to the SM through an ALP. The freeze-out mechanism suggests that the ALP effective field theory (EFT) may not suffice, motivating us to explore a KSVZ-like UV completion. We extend the ALP effective theory by considering interactions with scalar and pseudoscalar particles, including couplings to various SM vector bosons. Our calculations reveal that these interactions may be more important than previously anticipated. The outcome of our study sheds light on where the correct relic density can arise, given direct bounds on the DM and the ALP, providing insights into the UV completion of the model.
A QCD axion with a decay constant below $ 10 ^{ 11} ~{\rm GeV} $ is a strongly-motivated extension to the Standard Model, though its relic abundance from the misalignment mechanism or decay of cosmic defects is insufficient to explain the origin of dark matter. Nevertheless, such an axion may still play an important role in setting the dark matter density if it mediates a force between the SM and the dark sector. In this work, we explore QCD axion-mediated freeze-out and freeze-in scenarios, finding that the axion can play a critical role for setting the dark matter density. Assuming the axion solves the strong CP problem makes this framework highly predictive, and we comment on experimental targets.
We present calculations of higher-order QCD corrections for the production of a heavy charged-Higgs pair ($H^+H^-$) in the two-Higgs-doublet model at LHC energies. We calculate the NNLO soft-plus-virtual QCD corrections and the N$^3$LO soft-gluon corrections to the total and differential cross sections in single-particle-inclusive kinematics.
This talk discusses a new method to overcome common limitations in data-driven background predictions by validating the background model with synthetic data samples obtained using hemisphere mixing. These synthetic data samples allow for the validation of the extrapolation of the background model to the relevant signal region and avoid the problem of low statistical power in the most signal-like phase space. This technique also provides a way to determine the expected variance of the background prediction, resulting from the finite size of the data sample used to fit the model.
The results of a search for Higgs boson pair (HH) production in the decay channel to two bottom quarks and two W bosons using CMS data will be presented. The search is based on proton-proton collision data recorded at a center-of-mass energy of $\sqrt{s}$ = 13 TeV during 2016 to 2018, corresponding to an integrated luminosity of 138 fb$^{-1}$, and includes both resonant and non-resonant production in the single-lepton and double-lepton channels. The Run 2 results show no excess in the resonant channel; in the non-resonant channel, the upper limit on the cross section times branching ratio is 14-18 times the Standard Model prediction. In addition to presenting the Run 2 results, this talk will also discuss expected improvements for the Run 3 analysis, specifically improvements to the Heavy Mass Estimator and its inclusion in the single-lepton channel.
The simplest extension of the SM is the addition of a real singlet scalar S, which can give rise to double Higgs boson production if the new singlet is sufficiently heavy. New benchmark points are found by maximizing the production rate, allowing comparison with experimental results as these searches are carried out. The maximum values are shown for different values of the mixing angle and of the resulting new mass eigenstate.
A search is presented for pair production of higgsinos in scenarios with gauge-mediated supersymmetry breaking. Each higgsino is assumed to decay into a Higgs boson and a nearly-massless gravitino. The search targets the $b\bar{b}$ decay channel of the Higgs bosons, leading to a reconstructed final state with at least three energetic $b$-jets and missing transverse momentum. Two complementary analysis channels are used to target higgsino masses below and above 250 GeV. The low (high) mass channel uses 126 (139) fb$^{-1}$ of $pp$ collision data collected at $\sqrt{s}$=13 TeV by the ATLAS detector during Run 2 of the Large Hadron Collider, extending previous ATLAS results with 24.3 (36.1) fb$^{-1}$. No significant excess above the Standard Model prediction is observed. At 95% confidence level, higgsino masses below 940 GeV are excluded. Exclusion limits as a function of the higgsino decay branching ratio to a $Z$ or a Higgs boson are also presented.
Neutrino physics is advancing into a precision era with the construction of new experiments, particularly in the few GeV energy range. Within this energy range, neutrinos exhibit diverse interactions with nucleons and nuclei. In this talk I will delve in particular into neutrino–nucleus quasi-elastic cross sections, taking into account both standard and, for the first time, non-standard interactions, all within the framework of effective field theory (EFT). The main uncertainties in these cross sections stem from uncertainties in the nucleon-level form factors, and from the approximations necessary to solve the nuclear many-body problem. I will explain how these uncertainties influence the potential of neutrino experiments to probe new physics introduced by left-handed, right-handed, scalar, pseudoscalar, and tensor interactions. For some of these interactions the cross section is enhanced, making long-baseline experiments an excellent place to search for them.
MicroBooNE is a Liquid Argon Time Projection Chamber (LArTPC) able to image neutrino interactions with excellent spatial resolution, enabling the identification of complex final states resulting from neutrino-nucleus interactions. MicroBooNE currently possesses the world's largest neutrino-argon scattering data set, with a number of published cross section measurements and more than thirty ongoing analyses studying a wide variety of interaction modes. This talk provides an overview of MicroBooNE's neutrino cross-section physics program, focusing on the latest results.
The study of neutrino-nucleus scattering processes is important for the success of a new generation of neutrino experiments such as DUNE and T2K. Quasielastic neutrino-nucleus scattering, which yields a final state consisting of a nucleon and charged lepton, makes up a large part of the total neutrino cross-section in neutrino experiments. A significant source of uncertainty in the cross-section comes from limitations in our knowledge of nuclear effects in the scattering process.
The observations of short-range correlated proton-neutron pairs in exclusive electron scattering experiments led to the proposal of the Correlated Fermi Gas nuclear model. This model is characterized by a depleted Fermi gas region and a correlated high-momentum tail. We present an analytic implementation of this model for electron-nucleus and neutrino-nucleus quasi-elastic scattering. We also compare separately the effects of nuclear models and of electromagnetic and axial form factors on electron and neutrino scattering cross-section data.
NOvA, a long-baseline neutrino oscillation experiment, is primarily designed to measure the muon (anti)neutrino disappearance and electron (anti)neutrino appearance. It achieves this by utilizing two functionally identical liquid scintillator detectors separated by 810 km, positioned in the off-axis Fermilab NuMI beam, with a narrow band beam centered around 2 GeV. Energetic neutral pions, originating from Δ resonance, deep-inelastic interactions, or final state interactions, pose a significant challenge to the measurement of the electron (anti)neutrino appearance. This challenge stems from the potential misidentification of photons from neutral pion decay as electrons or positrons. Leveraging high-statistics antineutrino mode data from the near detector, we perform a measurement of the differential cross section for muon antineutrino charged-current neutral pion production. In this talk, we will present a detailed analysis of our approach and findings.
Neutrino-nucleus cross section measurements are needed to improve interaction modeling to enable upcoming precision oscillation measurements and searches for physics beyond the standard model. There are two methods for extracting cross sections, which rely on using either the real or nominal flux prediction for the measurement. We examine the different challenges faced by these methods, and how they must be treated when comparing to a theoretical prediction. Furthermore, the necessity for model validation in both procedures is addressed, and differences between “traditional” fake-data based validation and data-driven validation are discussed. Data-driven model validation leverages goodness-of-fit tests enhanced by the conditional constraint procedure. This procedure aims to validate a model for a specific measurement so that any bias introduced in unfolding will be within the quoted uncertainties of the measurement. Results are shown for the first measurement of the differential cross section $d^{2}\sigma(E_{\nu})/d\cos(\theta_{\mu})dP_{\mu}$ for inclusive muon-neutrino charged-current scattering on argon, which uses data from MicroBooNE, a nominal-flux-prediction unfolding, and data-driven model validation.
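At its core, the conditional constraint procedure mentioned above is the Gaussian conditional-update formula: observing a constraining (sideband) channel shifts the signal-channel prediction and shrinks its covariance. A minimal one-signal-bin, one-sideband-bin sketch, with all numbers hypothetical rather than taken from the MicroBooNE analysis:

```python
# Gaussian conditional constraint for one signal bin s and one
# constraining (sideband) bin b.  Given the joint prediction
# (mu_s, mu_b) with covariance [[S_ss, S_sb], [S_sb, S_bb]] and an
# observation x_b in the sideband, the constrained prediction is
#   mu_s|b  = mu_s + (S_sb / S_bb) * (x_b - mu_b)
#   var_s|b = S_ss - S_sb**2 / S_bb
# All input numbers below are illustrative, not MicroBooNE values.

def conditional_constraint(mu_s, mu_b, S_ss, S_bb, S_sb, x_b):
    mu_post = mu_s + (S_sb / S_bb) * (x_b - mu_b)
    var_post = S_ss - S_sb**2 / S_bb
    return mu_post, var_post

# Sideband observed above prediction -> signal prediction pulled up,
# and its variance is reduced by the correlation.
mu_post, var_post = conditional_constraint(
    mu_s=10.0, mu_b=20.0, S_ss=4.0, S_bb=9.0, S_sb=3.0, x_b=23.0)
```

The multi-bin version replaces the scalar divisions with the block-matrix formula $\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2)$, but the logic is the same: goodness-of-fit tests on the constrained prediction are more stringent than on the unconstrained one.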
We report on a global extraction of the 12C longitudinal (RL) and transverse (RT) nuclear electromagnetic response functions from an analysis of all available electron scattering data on carbon. The response functions are extracted for a large range of energy transfer ν, spanning the nuclear excitation, quasielastic, and ∆(1232) regions, over a large range of the square of the four-momentum transfer Q². We extract RL and RT as a function of ν both for fixed values of Q² (0 ≤ Q² ≤ 1.5 GeV²) and for fixed values of momentum transfer q. The data sample consists of more than 10,000 12C differential electron scattering and photo-absorption cross section measurements. Since the extracted response functions cover a large range of Q² and ν, they can be readily used to validate both nuclear models and Monte Carlo (MC) generators for electron and neutrino scattering experiments. The extracted response functions are compared to the predictions of several theoretical models and to predictions of the electron-mode versions of the NuWro and GENIE neutrino MC generators.
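The separation at fixed (q, ν) is essentially a Rosenbluth-style linear fit: the reduced cross section can be written σ_R = (Q⁴/q⁴) R_L + (Q²/2q² + tan²(θ/2)) R_T, linear in tan²(θ/2), so measurements at several scattering angles (different beam energies at the same q, ν) determine R_L and R_T. A toy sketch with synthetic data; all kinematic and response values below are hypothetical:

```python
# Rosenbluth-style L/T separation at fixed (q, nu), hence fixed Q^2:
#   sigma_R = vL * RL + vT(theta) * RT, with
#   vL = Q^4 / q^4,   vT = Q^2 / (2 q^2) + tan^2(theta/2).
# We generate synthetic "measurements" from assumed RL, RT and recover
# them with a 2-parameter linear least squares (normal equations).
import math

def separate_LT(q2, Q2, thetas_deg, sigmas):
    vL = (Q2 / q2) ** 2
    rows = [(vL, Q2 / (2 * q2) + math.tan(math.radians(t) / 2) ** 2)
            for t in thetas_deg]
    # Normal equations A^T A x = A^T b for x = (RL, RT).
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * s for r, s in zip(rows, sigmas))
    b2 = sum(r[1] * s for r, s in zip(rows, sigmas))
    det = a11 * a22 - a12 * a12
    RL = (a22 * b1 - a12 * b2) / det
    RT = (a11 * b2 - a12 * b1) / det
    return RL, RT

# Hypothetical kinematics and response values (GeV^2 units implied).
q2, Q2 = 0.36, 0.30               # |q|^2 and Q^2 = q^2 - nu^2
RL_true, RT_true = 0.012, 0.020
thetas = [15.0, 60.0, 120.0, 160.0]
vL_fix = (Q2 / q2) ** 2
data = [vL_fix * RL_true +
        (Q2 / (2 * q2) + math.tan(math.radians(t) / 2) ** 2) * RT_true
        for t in thetas]
RL_fit, RT_fit = separate_LT(q2, Q2, thetas, data)
```

In the real extraction, each data point carries experimental uncertainties and the fit is weighted accordingly; the noiseless version above just shows why backward-angle data (large tan²(θ/2)) pin down R_T while forward-angle data constrain R_L.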
Project 8 is an experiment that seeks to determine the electron-weighted neutrino mass via the precise measurement of the electron energy in beta decays, with a sensitivity goal of $40\,\mathrm{meV/c}^2$. We have developed a technique called Cyclotron Radiation Emission Spectroscopy (CRES), which allows single electron detection and characterization through the measurement of cyclotron radiation emitted by magnetically-trapped electrons produced by a gaseous radioactive source. The technique has been successfully demonstrated on a small scale in waveguides to detect radiation from single electrons, and to measure the continuous spectrum from tritium. In order to achieve the projected sensitivity, the experiment will require novel technologies for performing CRES using tritium atoms in a magneto-gravitational trap in a multi-cubic-meter volume. In this talk, I will present a brief overview of the Project 8 experimental program, highlighting the latest results including our first tritium endpoint measurement and neutrino mass limit.
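The kinematic handle any endpoint experiment exploits can be sketched with the simplified spectrum near the endpoint, $d\Gamma/dE \propto E_\nu \sqrt{E_\nu^2 - m_\nu^2}$ with $E_\nu = E_0 - E$: a nonzero mass both moves the endpoint to $E_0 - m_\nu$ and depletes the last few eV. A toy comparison, deliberately omitting the Fermi function, final-state distribution, and detector response:

```python
# Simplified tritium beta spectrum near the endpoint E0:
#   rate(E) ∝ (E0 - E) * sqrt((E0 - E)^2 - m^2)   for E <= E0 - m,
# and zero otherwise (m = neutrino mass, everything in eV).  The Fermi
# function and molecular final-state effects are deliberately omitted.
import math

def rate(E, E0, m):
    Enu = E0 - E
    if Enu < m:
        return 0.0
    return Enu * math.sqrt(Enu * Enu - m * m)

def integral_last(E0, m, window, steps=200000):
    """Numerically integrate the rate over the last `window` eV below E0."""
    h = window / steps
    total = 0.0
    for i in range(steps):
        E = E0 - window + (i + 0.5) * h     # midpoint rule
        total += rate(E, E0, m) * h
    return total

E0 = 18575.0      # tritium endpoint in eV (approximate)
# Massless case: the integral over the last Delta is Delta^3 / 3 exactly.
massless = integral_last(E0, 0.0, window=5.0)
# A 1 eV neutrino empties the region above E0 - 1 eV and depletes the rest.
massive = integral_last(E0, 1.0, window=5.0)
```

The tiny fraction of decays in this last window (of order $10^{-13}$ of the full spectrum) is why eV-scale resolution on single electrons, as CRES provides, matters so much.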
We focus on the potential of neutrino - 13C neutral-current interactions to clarify the reactor antineutrino flux around the 6 MeV region. These interactions produce a 3.685 MeV photon line via the de-excitation of 13C in organic liquid scintillators, which can be observed in reactor neutrino experiments. We expect that future measurements of the neutrino - 13C cross section at low energies in JUNO and IsoDAR@Yemilab may help test the reactor flux models, with the assistance of excellent particle identification.
The COHERENT collaboration made the first measurement of coherent elastic neutrino-nucleus scattering (CEvNS) and did so by employing neutrinos produced by the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). The uncertainty of the neutrino flux generated from the SNS is on the order of 10% making it one of COHERENT's most dominant systematic uncertainties. To address this issue, a heavy water (D2O) detector has been designed to measure the neutrino flux through the well-understood electron neutrino-deuterium interaction. The D2O detector is composed of two identical modules designed to detect Cherenkov photons generated inside the target tank with Module 1 containing D2O as the target and Module 2 initially containing H2O for comparison and background subtraction. We also aim to make a measurement of the cross-section of the charged-current interaction between the electron neutrino and oxygen, providing valuable insight for supernova detection in existing and future large water Cherenkov detectors. In this talk, we present the construction and commissioning updates for Module 2 along with some preliminary results from Module 1.
The Karlsruhe Tritium Neutrino (KATRIN) Experiment directly measures the neutrino mass scale with a target sensitivity of 0.3 eV/c$^2$ by determining the shape change in the molecular tritium beta spectrum near the endpoint. KATRIN makes this measurement by employing its Magnetic Adiabatic Collimation with Electrostatic (MAC-E) Filter process to measure the integrated energy spectrum of the betas coming from molecular tritium decay. KATRIN is currently operating and has published an electron neutrino mass limit of 0.8 eV/c$^2$ (90% C.L.) from its first two neutrino mass campaigns. The results from its first five neutrino mass campaigns are on track to be released later this year. In this talk, I will explain the operation of KATRIN and the analysis being done to understand the systematics that impact the KATRIN neutrino mass results.
Neutrino-nucleus scatterings in the detector could induce electron ionization signatures due to the Migdal effect. We derive prospects for a future detection of the Migdal effect via coherent elastic solar neutrino-nucleus scattering in liquid xenon detectors, and discuss the irreducible background that it constitutes for the Migdal effect caused by light dark matter-nucleus scattering. Furthermore, we explore the ionization signal induced by certain neutrino electromagnetic and non-standard interactions with nuclei. In some scenarios, we find a distinct peak in the ionization spectrum of xenon around 0.1 keV, in clear contrast to the Standard Model expectation.
The Coherent CAPTAIN-Mills (CCM) experiment is a 10 ton liquid argon scintillation detector located at Los Alamos National Lab studying neutrino and beyond Standard Model physics. The detector is located 23 m downstream from the Lujan Facility's stopped-pion source, which will receive 2.25 x 10^22 POT in the ongoing 3-year run cycle. CCM is instrumented with 200 8-inch PMTs, 80% of which are coated in wavelength-shifting tetraphenyl-butadiene, and 40 optically isolated 1-inch veto PMTs. The combination of PMTs coated in wavelength shifter and uncoated PMTs allows CCM to resolve both scintillation and Cherenkov light. Argon scintillation light peaks at 128 nm, which requires the use of wavelength shifters into the visible spectrum for detection by the PMTs. The uncoated PMTs, however, are more sensitive to the broad-spectrum Cherenkov light and less sensitive to the UV scintillation light produced in argon. This combination of coated and uncoated PMTs, along with our 2 ns timing resolution, enables event-by-event identification of Cherenkov light, which is a powerful tool in rejecting neutron backgrounds, enabling improved sensitivities to dark sector and beyond Standard Model physics searches.
HEP experiments are operated by thousands of international collaborators and serve as big drivers of frontier science and human knowledge. They provide fertile ground for training the next generation of scientists. While we invest in science, it is equally imperative that we integrate into our scientific mission opportunities for participation and contribution by underrepresented and marginalized populations of our society. One of the most powerful enablers for addressing this challenge is professional development that can advance the skills needed to succeed in HEP and STEM areas. The NSF-funded IRIS-HEP "Training, Education & Outreach" program is uniquely placed to implement this. Its experiment-agnostic, collaborative approach has trained over two thousand users, with sustainability as its centerpiece. Its open-source training modules allow technical continuity and collaboration. Beyond HEP users, this software material is used to train students in HEP-based internship programs, imparting an enriched experience. Its broader impacts include software training for high school teachers, with a goal to tap, grow, and diversify the talent pipeline for future cyber-infrastructure, starting with K-12 students. These efforts are lowering barriers and building a path toward greater participation in STEM areas by underrepresented populations. This contribution describes the aforementioned efforts.
If science outreach is about connecting with new audiences, music remains a uniquely accessible form of outreach. However, physics music needn't be limited to campy parodies. A project for creating music that is accessible at multiple technical levels will be presented. Using a form of 2D wavetable synthesis, stereo audio signals mapped onto an oscilloscope's X-Y mode create images of LHC experiments from the music itself. On the dance floor, ColliderScope allows the physics community to reach truly new audiences. This talk will describe the project and its philosophy.
Aside from specialized skills, physicists also have quantitative skills useful in a wide variety of contexts. Among these are the abilities to quantify uncertainty and make useful approximations. These skills, if practiced by members of the general public, can help in understanding scientific results, in understanding the progress of science, in evaluating claims from non-scientific sources, and in taking into account the limitations of approximate models. In this talk, I discuss some efforts to make these skills intelligible to a more mainstream audience.
nEXO is a planned next-generation neutrinoless double beta decay experiment, designed to be built at SNOLAB in Ontario, Canada. Within the international nuclear and astroparticle physics communities, we strive to be a leader and role model in the areas of Diversity, Equity, and Inclusion (DEI) while drawing inspiration from the trailblazers who came before us. In 2020 nEXO founded its Diversity, Equity, and Inclusion Committee which has since developed a series of programming efforts such as a mentorship program, an internal DEI lecture series, and an internal newsletter and information hub. With recent funding from a DOE RENEW grant, nEXO plans to further its reach in the realm of DEI by starting several new initiatives including the creation of a DEI workshop for collaborations to be held in the summer of 2025. It is our hope that through this workshop, ideas can be shared and best practices for the community can be developed. This talk outlines the work of the nEXO DEI committee and a vision for the integration of DEI into physics collaborations.
I created a presentation, Building Inclusive Communities, and workshopped it in my classes over five years. I have now conducted a physics education research study to measure its impact on students' sense of belonging, scientific identity, and course performance. I will share these results, as well as EDI resources for teachers, mentors, and students.
Models of freeze-in dark matter (DM) can incorporate baryogenesis by a straightforward extension to two or more DM particles with different masses. We study a novel realization of freeze-in baryogenesis, in which a new SU(2)-doublet vector-like fermion (VLF) couples feebly to the SM Higgs and multiple fermionic DM mass eigenstates, leading to out-of-equilibrium DM production in the early universe via the decays of the VLF. An asymmetry is first generated in the Higgs and VLF sectors through the coherent production, propagation, and rescattering of the DM. This asymmetry is subsequently converted into a baryon asymmetry by SM processes, and potentially, by additional VLF interactions. We find that the asymmetry in this Higgs-coupled scenario has a different parametric dependence relative to previously considered models of freeze-in baryogenesis. We characterize the viable DM and VLF parameter spaces and find that the VLF is a promising target for current and future collider searches.
In this work, we explore baryon number violating interactions (BNV) within a specific model framework involving a charged iso-singlet, color-triplet scalar and a Majorana fermion with interactions in the quark sector. This model has been useful for explaining baryogenesis, neutron-antineutron oscillations, and other puzzles such as the DM-baryon coincidence puzzle. We revisit this model, with chiral perturbation theory as a guide, at the level of baryons and mesons in the dense environments of neutron stars. BNV neutron decays become accessible in this environment where in vacuum they would be kinematically forbidden. By considering several equations of state in binary pulsar candidates, we establish strong constraints on the model parameter space from these decays, and the subsequent scattering of the Majorana fermions, in total amounting to a $\Delta B=2$ loss in the star. These limits are highly complementary to laboratory bounds from rare dinucleon decay searches and collider probes.
Extending the Standard Model (SM) with right-handed neutrinos (RHN) provides a minimal explanation for both the origin of the SM neutrino masses through the type-I seesaw and of the present imbalance between matter and antimatter in our universe through leptogenesis. Even though the mass of these RHNs is in principle unbounded from above, an attractive possibility would be for the RHN masses to lie at a relatively low scale, i.e. MeV to TeV, such that these new particles can be searched for at present-day experiments. In this talk, I will discuss how the testability of the model changes compared to the minimal case with 2 RHNs when one considers a scenario with 3 RHN generations instead. Moreover, I will also look into the effects of flavour and CP symmetries on the parameter space of the model.
Based on arXiv:2106.16226, arXiv:2203.08538, and other upcoming work.
Heavy neutral leptons (HNLs) are an extension of the Standard Model that are well-motivated by neutrino masses, dark matter, and baryogenesis via leptogenesis. We present a comprehensive analysis of all significant HNL production and decay mechanisms. This work has been incorporated into a new module that generates events for HNLs with arbitrary couplings to the $e$, $\mu$, and $\tau$ neutrinos within the FORESEE simulation package. We apply this new framework to simulate results for the well known benchmarks $U_e^2:U_\mu^2:U_\tau^2 =$ 1:0:0, 0:1:0, 0:0:1, as well as the recently proposed benchmarks 0:1:1, and 1:1:1. The simulations are performed for FASER and proposed experiments at the Forward Physics Facility. We find projected sensitivities that extend into currently unexplored regions of parameter space with HNL masses in the 2 to 3.5 GeV range.
In the ongoing Short-Baseline Neutrino facilities, such as the Short-Baseline Near Detector (SBND), MicroBooNE, and ICARUS, there exists an iron dump positioned $\sim$ 45.79 m from the Fermilab Booster Neutrino Beam (BNB)'s beryllium target. The neutrinos produced from charged pion and kaon decays can undergo up-scattering off iron nuclei, resulting in the production of MeV-scale heavy neutral leptons (HNLs). These HNLs then travel to the respective detectors and decay into Standard Model neutrinos, photons, $e^+e^-$, etc. While previous studies have predominantly focused on the production of HNLs without considering the iron dump, its inclusion significantly enhances sensitivity, allowing us to probe more of the unconstrained parameter space of HNL coupling versus mass. Additionally, distinctive signatures indicating the production origin of HNLs are observed in the energy and angular spectra of the final states. Furthermore, we also investigate the effect of the dump in the case of inelastic dark matter, thereby probing unexplored regions of that parameter space.
As we push to high precision measurements, the PDF uncertainty is often a limiting factor. To achieve improved precision, our goal is to not only ‘fit’ the PDFs, but to better understand the underlying process at the precision level. Toward this goal, we extend the QCD Parton Model analysis using a factorized nuclear structure model incorporating individual nucleons, and pairs of correlated nucleons. Our analysis simultaneously extracts the universal effective distribution of quarks and gluons inside correlated nucleon pairs, and the nucleus-specific fractions of such correlated pairs. These results fit data from lepton Deep-Inelastic Scattering, Drell-Yan processes, and high-mass boson production. This successful extraction of nuclear structure properties marks a significant advancement in our understanding of the fundamental structure of nuclei.
We present a very simple method for calculating the mixed Coulomb-nuclear effects in the $pp$ and $\bar{p}p$ scattering amplitudes, and illustrate the method using simple models frequently used to describe their differential cross sections at small momentum transfers. Combined with the pure Coulomb and form-factor contributions to the scattering amplitude which are known analytically from prior work, and the unmixed nuclear or strong-interaction scattering amplitude, the results give a much simpler approach to fitting the measured $pp$ and $\bar{p}p$ cross sections and extracting information on the real part of the forward scattering amplitudes than methods now in use.
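For orientation, a minimal sketch of the ingredients (not the authors' improved method): a point-Coulomb amplitude, a standard exponential parameterization of the nuclear amplitude, and their coherent sum. The relative Coulomb-nuclear (Bethe/West-Yennie) phase and the form factors, which the method described above treats properly, are omitted here, and all parameter values are round illustrative numbers:

```python
# Toy Coulomb-nuclear interference for pp elastic scattering
# (hbar = c = 1 units; t in GeV^2, amplitudes in GeV^-2):
#   f_C(t) = -2*alpha / |t|                (point Coulomb, form factor ~ 1)
#   f_N(t) = (sig_tot / 4pi) * (rho + i) * exp(-B*|t|/2)
#   dsigma/dt = pi * |f_C + f_N|^2
# The relative Coulomb-nuclear phase is set to zero here; parameter
# values below are round illustrative numbers, not a fit.
import math

ALPHA = 1.0 / 137.036

def f_coulomb(abs_t, alpha=ALPHA):
    return complex(-2.0 * alpha / abs_t, 0.0)

def f_nuclear(abs_t, sig_tot, rho, B):
    return (sig_tot / (4.0 * math.pi)) * complex(rho, 1.0) \
        * math.exp(-0.5 * B * abs_t)

def dsigma_dt(abs_t, sig_tot, rho, B):
    f = f_coulomb(abs_t) + f_nuclear(abs_t, sig_tot, rho, B)
    return math.pi * abs(f) ** 2

# Illustrative parameters: sig_tot ~ 100 mb ~ 257 GeV^-2, slope B = 20 GeV^-2.
SIG, RHO, B = 257.0, 0.0, 20.0
# Classic rule of thumb: |f_C| = |f_N| near |t*| = 8*pi*alpha/sig_tot
# (exactly so for rho = 0 and unit form factor, up to the e^{-B|t|/2} factor).
t_star = 8.0 * math.pi * ALPHA / SIG
```

The crossover point $|t^*| \sim 8\pi\alpha/\sigma_{\rm tot} \sim 10^{-3}\;{\rm GeV}^2$ is where the interference term, and hence the sensitivity to the real part $\rho$, is largest.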
In this work, we complete our CT18qed study with the neutron's photon parton distribution function (PDF), which is essential for the nucleus scattering phenomenology. Two methods, CT18lux and CT18qed, based on the LUXqed formalism and the DGLAP evolution, respectively, to determine the neutron's photon PDF have been presented. Various low-$Q^2$ non-perturbative variations have been carefully examined, which are treated as additional uncertainties on top of those induced by quark and gluon PDFs. The impacts of the momentum sum rule as well as isospin symmetry violation have been explored and turned out to be negligible. A detailed comparison with other neutron's photon PDF sets has been performed, which shows a great improvement in the precision and a reasonable uncertainty estimation. Finally, two phenomenological implications are demonstrated with photon-initiated processes: neutrino-nucleus W-boson production, which is important for the near-future TeV–PeV neutrino observations, and the axion-like particle production at a high-energy muon beam-dump experiment.
The X(6900) resonance, originally discovered by the LHCb collaboration and later confirmed by both the ATLAS and CMS experiments, has sparked broad interest in fully-charmed tetraquark states. Relative to the mass spectra and decay properties of fully-heavy tetraquarks, our knowledge of their production mechanism is still rather limited. In this talk, I will discuss the production of S-wave fully-heavy tetraquarks at the LHC and at the electron-ion collider within the nonrelativistic QCD (NRQCD) framework. We predict the differential $p_T$ spectra of various fully-charmed S-wave tetraquarks at the LHC and compare them with the results predicted from the fragmentation mechanism at the large-$p_T$ end. We also examine the production prospects at various electron-proton colliders.
Until recently, it was widely believed that every hadron is a composite state of either three quarks or one quark and one antiquark. In the last 20 years, dozens of exotic heavy hadrons have been discovered, and yet no theoretical scheme has unveiled the general pattern. For hadrons that contain more than one heavy quark or antiquark, the Born-Oppenheimer approximation for QCD provides a rigorous approach to the problem. In this approximation, a double-heavy hadron corresponds to an energy level in a potential that increases linearly at large interquark distances. Pairs of heavy hadrons, on the other hand, correspond to energy levels in potentials that approach a constant at large interquark distances. In this talk, I will discuss decays of double-heavy hadrons into pairs of heavy hadrons, which are mediated by couplings between the respective Born-Oppenheimer potentials. I will show that conventional and exotic double-heavy hadrons follow different decay patterns dictated by the symmetries of QCD with two static color sources. As case studies, I will compare selection rules and branching ratios for the decays of quarkonium and quarkonium-hybrid mesons into the lightest pairs of heavy mesons. I will also discuss the corresponding decays of double-heavy tetraquarks.
Double-heavy hadrons can be identified as bound states in the Born-Oppenheimer potentials for QCD. We present parameterizations of the 5 lowest Born-Oppenheimer potentials from pure $SU(3)$ lattice gauge theory as functions of the separation $r$ of the static quark and antiquark sources. The parametrizations have the correct limiting behavior at small $r$, where the potentials form multiplets associated with gluelumps. They also have the correct limiting behavior at large $r$, where the potentials form multiplets associated with excitations of a relativistic string. These Born-Oppenheimer potentials can be used to develop models based on QCD for the many exotic heavy hadrons that have been discovered since 2003.
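The "bound states in Born-Oppenheimer potentials" step can be illustrated with a generic Cornell-type ground-state potential and round toy parameters (not the lattice parameterizations of this talk), solving the radial Schrödinger equation by shooting and bisection:

```python
# Ground-state energy in a Cornell-type potential V(r) = -a/r + b*r,
# solving u''(r) = (V(r) - E) u(r) (units with hbar^2 / 2mu = 1) by RK4
# shooting and bisecting on the node count of u.  The values a = 0.5,
# b = 1.0 are toy numbers, not the lattice Born-Oppenheimer potentials.
def nodes(E, a=0.5, b=1.0, r0=1e-4, rmax=8.0, steps=4000):
    """Integrate u outward from the origin and count its sign changes."""
    h = (rmax - r0) / steps
    def f(r, u, v):                  # (u', v') for u'' = (V - E) u
        return v, (-a / r + b * r - E) * u
    u, v, r = r0, 1.0, r0            # S-wave behavior u ~ r near origin
    count = 0
    for _ in range(steps):
        k1u, k1v = f(r, u, v)
        k2u, k2v = f(r + 0.5*h, u + 0.5*h*k1u, v + 0.5*h*k1v)
        k3u, k3v = f(r + 0.5*h, u + 0.5*h*k2u, v + 0.5*h*k2v)
        k4u, k4v = f(r + h, u + h*k3u, v + h*k3v)
        u_new = u + (h/6)*(k1u + 2*k2u + 2*k3u + k4u)
        v += (h/6)*(k1v + 2*k2v + 2*k3v + k4v)
        if u * u_new < 0:
            count += 1
        u, r = u_new, r + h
    return count

def ground_state_energy(E_lo=-2.0, E_hi=5.0, iters=40):
    """Bisect on the energy at which the first node appears."""
    for _ in range(iters):
        E_mid = 0.5 * (E_lo + E_hi)
        if nodes(E_mid) == 0:
            E_lo = E_mid
        else:
            E_hi = E_mid
    return 0.5 * (E_lo + E_hi)

E0 = ground_state_energy()
# A variational trial u ~ r*exp(-r) gives E0 <= 2.0 for these parameters,
# so the shooting result should land just below that bound.
```

With the actual lattice parameterizations, one potential per Born-Oppenheimer channel, the same machinery (with the appropriate angular-momentum and coupled-channel terms) yields the spectrum of conventional and exotic double-heavy hadrons.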
Non-perturbative dynamics of gauge theories is notoriously difficult to study. I will discuss how supersymmetry, slightly broken by anomaly mediation, allows us to derive many features of the dynamics. These include an explicit demonstration of chiral symmetry breaking as well as monopole condensation, the calculation of non-perturbative condensates, the correct large-$N_c$ behavior, and some of the low-lying spectra.
We find a complete set of 4-point vertices in the Constructive Standard Model (CSM) by satisfying perturbative unitarity. We use these and the 3-point vertices to calculate a comprehensive set of 4-point amplitudes in the CSM. We also introduce a package to numerically calculate phase-space points for constructive amplitudes and use it to validate the 4-point amplitudes against Feynman diagrams.
This talk is based on the following preprints:
arXiv:2403.07977
arXiv:2403.07978
arXiv:2403.07981
It is well known that in QFT, perturbative series expansions in powers of the coupling constant yield asymptotic series. At weak coupling this is not an issue, since the first few terms are accurate and can be used to make reliable predictions. However, the series fails completely at strong coupling. I will show that one can develop two different types of series expansions that are absolutely convergent and valid at both strong and weak coupling. The first series is the usual one, in powers of the coupling constant, but where we pay special attention to the order of two asymptotic limits. In the second series, we expand the quadratic/kinetic part but not the interaction part containing the coupling; this yields a series in inverse powers of the coupling. The first series converges quickly at weak coupling and slowly at strong coupling, and the reverse holds for the second series. We apply this to a basic one-dimensional integral and to a path integral in quantum mechanics, both of which contain a quadratic term and a quartic interaction term containing the coupling.
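The asymptotic character of the usual weak-coupling expansion can be seen in a toy example of the kind mentioned above. The sketch below (my own illustration, not the speaker's actual construction) takes the standard quartic integral $I(g)=\int_{-\infty}^{\infty} e^{-x^2/2 - g x^4}\,dx$, whose weak-coupling partial sums are $\sqrt{2\pi}\,\sum_{n=0}^{N} (-g)^n (4n-1)!!/n!$: at small $g$ the first few terms track the exact value, while at strong coupling the partial sums blow up.

```python
import math

def double_factorial(n):
    # n!! with the convention (-1)!! = 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def partial_sum(g, N):
    # Weak-coupling asymptotic series for I(g) = ∫ exp(-x^2/2 - g x^4) dx,
    # obtained by expanding exp(-g x^4) and using Gaussian moments:
    # ∫ x^{4n} exp(-x^2/2) dx = sqrt(2π) (4n-1)!!
    return math.sqrt(2 * math.pi) * sum(
        (-g) ** n * double_factorial(4 * n - 1) / math.factorial(n)
        for n in range(N + 1))

def numeric(g, half_width=8.0, steps=100001):
    # Direct Riemann-sum evaluation of I(g); the integrand is negligible
    # beyond |x| ~ 8 for the couplings used here.
    h = 2 * half_width / (steps - 1)
    return h * sum(math.exp(-x * x / 2 - g * x ** 4)
                   for x in (-half_width + i * h for i in range(steps)))

# Weak coupling: a few terms of the asymptotic series suffice.
print(partial_sum(0.01, 3), numeric(0.01))
# Strong coupling: the same partial sums diverge wildly from the integral.
print(partial_sum(2.0, 20), numeric(2.0))
```

This reproduces only the first (standard) expansion; the convergent strong-coupling series in inverse powers of $g$ described in the abstract is not implemented here.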
I will present some recent progress at the intersection between machine learning and field theories, highlighting Feynman diagram methods for neural network correlators and neural scaling laws.
First, building on a correspondence between neural network ensembles and statistical field theories, I will introduce a diagrammatic framework to calculate neural network correlators in the large-width expansion and study RG flow and criticality. Then, I will show how large-N field theory methods can be used to solve a model of neural scaling laws.
Based on 2305.02334 and work to appear.
The constructive method of determining amplitudes from on-shell pole structure has been shown to be promising for calculating amplitudes in a more efficient way. However, challenges have been encountered when a massless internal photon is involved in the gluing of three-point amplitudes with massive external particles. In this talk, I will describe how to use the original on-shell method, old-fashioned perturbation theory, to shed light on the constructive method, and show that one can derive the Feynman amplitude by correctly identifying the residue even when there is an internal photon involved.
We demonstrate how the scattering amplitudes of some scalar theories (scaffolded general relativity, multi-flavor DBI, and the special Galileon) vanish at multiple loci in momentum space that include and extend their soft-limit behaviors. We elucidate the factorization of the amplitudes near the zero loci into lower-point amplitudes. We explain how the occurrence of the zero loci in these theories can be understood in terms of the double copy formalism.
In the presence of axion dark matter, electrons experience an "axion wind" spin torque and an "axioelectric" force, which give rise to magnetization and polarization currents in common ferrite materials. The radiation produced by these currents can be amplified in multilayer setups, which are potentially sensitive to the QCD axion without requiring a large external magnetic field.
The future Electron-Ion Collider (EIC) will have the capability to collide various particle beams with large luminosities in a relatively clean environment, providing access to unexplored parameter space for new physics. In this study, we look at the EIC's sensitivity to axion-like particles (ALPs) that are created via photon fusion and promptly decay to photons. Proton-electron collisions mildly improve the parameter space reach in the 2-6 GeV ALP mass range, while collisions between lead ions and electrons improve the reach by ~10² in the same region, along with mild improvement from 6-30 GeV. This large improvement is due to the coherent scattering of electrons with lead ions, which benefits from a Z² enhancement to the cross section. A brief look at the same ALP production mechanisms at the future Muon-Ion Collider yields a similar improvement of ~10² in sensitivity in the 30-300 GeV ALP mass range due to the larger beam energies.
Owing to the high temperature in the plasma of massive stars in the later stages of their evolution, a copious number of heavy axion-like particles (ALPs) coupled to the photon field are produced by the Primakoff and photon coalescence processes. These heavy axions produced inside stars spontaneously decay into two photons, yielding a photon signal possibly detectable by current and future X-ray and gamma-ray telescopes. We discuss the observability of this photon signal using stellar models constructed with the 1D stellar evolution code MESA.
We identify a new resonance, axion magnetic resonance (AMR), that can greatly enhance the conversion rate between axions and photons. A series of axion search experiments rely on converting axions into photons inside a constant magnetic field background. A common bottleneck of such experiments is that the conversion amplitude is suppressed by the axion mass when $m_a \gtrsim 10^{-4}~$eV. We point out that a spatial or temporal variation in the magnetic field can cancel the difference between the photon dispersion relation and that of the axion, thereby greatly enhancing the conversion probability.
We demonstrate that the enhancement can be achieved by both a helical magnetic field profile and a harmonic oscillation of the magnitude. Our approach can extend the projected ALPS II reach in the axion-photon coupling ($g_{a\gamma}$) by two orders of magnitude at $m_a = 10^{-3}\;\mathrm{eV}$ with moderate assumptions.
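The mechanism can be illustrated with a minimal toy calculation (my own sketch in dimensionless units, not the authors' computation): in the perturbative limit the axion-photon conversion amplitude is proportional to $\int_0^L B(z)\,e^{i\,\Delta k\,z}\,dz$, where $\Delta k$ is the axion-photon momentum mismatch. For a constant field the phase factor caps the amplitude, while a field modulated at the mismatch frequency makes it grow linearly with $L$.

```python
import cmath
import math

def mixing_amplitude(profile, dk, length, steps=20000):
    # Perturbative mixing amplitude ∝ ∫_0^L B(z) exp(i dk z) dz,
    # evaluated with a left Riemann sum. Units are dimensionless;
    # overall couplings are dropped since only ratios matter here.
    h = length / steps
    return h * sum(profile(z) * cmath.exp(1j * dk * z)
                   for z in (i * h for i in range(steps)))

dk = 1.0       # axion-photon momentum mismatch (toy value)
length = 100.0 # propagation length in units of 1/dk

# Constant field: amplitude oscillates and stays bounded by 2/dk.
const = abs(mixing_amplitude(lambda z: 1.0, dk, length))

# Field magnitude oscillating at the mismatch frequency: the e^{i dk z}
# phase is partially cancelled and the amplitude grows ~ L/2.
matched = abs(mixing_amplitude(lambda z: math.cos(dk * z), dk, length))

print(const, matched)
```

A helical field profile achieves the same cancellation with both transverse components; the harmonic modulation above is the simplest scalar analogue.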
I will discuss a recently proposed novel experimental setup for axion-like particle (ALP) searches. Nuclear reactors produce a copious number of photons, a fraction of which could convert into ALPs via the Primakoff process in the reactor core. The generated flux of ALPs leaves the nuclear power plant, and its passage through a region with a strong magnetic field results in efficient conversion to photons, which can be detected. Such a magnetic field is the key component of axion haloscope experiments. I will discuss existing setups featuring an adjacent nuclear reactor and axion haloscope and I will demonstrate that the obtained sensitivity projections complement constraints from existing laboratory experiments, e.g., light-shining-through-walls.
Primordial black holes (PBHs) remain a viable dark matter candidate in the asteroid-mass range. We point out that in this scenario, the PBH abundance would be large enough for at least one object to cross through the inner Solar System per decade. Since Solar System ephemerides are modeled and measured to extremely high precision, such close encounters could produce detectable perturbations to orbital trajectories with characteristic features. We evaluate this possibility with a suite of simple Solar System simulations, and we argue that the abundance of asteroid-mass PBHs can plausibly be probed by existing and near-future data.
If present in the early universe, primordial black holes (PBHs) will accrete matter and emit high-energy photons, altering the statistical properties of the Cosmic Microwave Background (CMB). This mechanism has been used to constrain the fraction of dark matter that is in the form of PBHs to be much smaller than unity for PBH masses well above one solar mass. Moreover, the presence of dense dark matter mini-halos around the PBHs has been used to set even more stringent constraints, as these would boost the accretion rates. In this work, we critically revisit CMB constraints on PBHs taking into account the role of the local ionization of the gas around them. We discuss how the local increase in temperature around PBHs can prevent the dark matter mini-halos from strongly enhancing the accretion process, in some cases significantly weakening previously derived CMB constraints. We explore in detail the key ingredients of the CMB bound and derive a conservative limit on the cosmological abundance of massive PBHs.
We demonstrate a novel mechanism for forming dark compact objects and black holes through a dissipative dark sector. Heavy dark sector particles can be responsible for an early matter dominated era before Big Bang Nucleosynthesis (BBN). Density perturbations in this epoch can grow and collapse into tiny dissipative dark matter halos, which can cool via self-interactions. Once these halos have formed, a thermal phase transition can then shift the Universe back into radiation domination and standard cosmology. These halos can continue to collapse after BBN, resulting in the late-time formation of fragmented compact MACHOs and sub-solar mass primordial black holes.
Atomic dark matter is a dark sector model including two fermionic states oppositely charged under a dark U(1) gauge symmetry, which can result in rich cosmological signatures. I discuss recent work using cosmological N-body simulations to investigate the impact of an atomic dark matter sector on observables such as the galactic UV luminosity function at redshifts >10, and consider the constraining power of recent JWST observations for this model.
Atomic Dark Matter (aDM) is a well-motivated class of models with the potential to be discovered at ground-based direct detection experiments. The class of models we consider contains a massless dark photon and two Dirac fermions with different masses and opposite dark charge (dark protons and dark electrons), which generally interact with the Standard Model through a kinetic mixing portal with our photon. The dark fermions can be captured in the Earth. Due to the mass difference, evaporation is less efficient for dark protons than for dark electrons, leading to a net dark charge in the Earth. This can alter the incoming flux of aDM in complex ways, through interactions between the ambient dark plasma and the dark-charged Earth, which modifies event rates in ground-based direct detection experiments compared to the standard DM expectation. In this talk I will describe our ongoing effort to calculate aDM's interaction with, and subsequent capture in, the Earth through the dark photon portal. We identify regions of the aDM parameter space where there may be significant accumulation of aDM in the Earth, taking into account cosmological constraints on the kinetic mixing of the massless dark photon.
Atomic dark matter (ADM) models, with a minimal content of a dark proton, a dark electron, and a massless dark photon, are motivated by theories such as the Mirror Twin Higgs. ADM models might address the apparent tension between cold dark matter (CDM) and observations at small scales: the excessive predicted number of dwarf galaxies in the Milky Way and the cuspiness of galactic cores. ADM has been shown to suppress matter perturbations on small scales, and N-body simulations with a percent-level ADM subcomponent predict interesting sub-galactic structures. We use similar N-body simulations and Lyman-alpha forest data, which are sensitive to small-scale ADM effects, to produce robust constraints on the ADM parameter space. We use machine learning methods to optimize computational efficiency when scanning over the parameter space.
Primordial black holes (PBHs) can be formed from the collapse of large-density inhomogeneities in the early Universe through various mechanisms. One such mechanism is a strong first-order phase transition, where PBH formation arises due to the delayed vacuum transition. The probabilistic nature of bubble nucleation implies that there is a possibility that large regions are filled by the false vacuum, where nucleation is delayed. When the vacuum energy density inside those regions decays into other components, overdensity reaches a threshold, and the whole mass inside the region could gravitationally collapse into PBHs. In this scenario, PBHs can serve as both dark matter candidates and probes for models featuring first-order phase transitions, making it phenomenologically appealing. This mechanism can be tested through a multi-pronged approach, encompassing gravitational wave detectors, microlensing studies, and collider experiments.
It has been demonstrated that "optimized partial dressing" (OPD) thermal mass resummation, which inserts gap equation solutions into the tadpole, efficiently tames finite-temperature perturbation theory calculations of the effective thermal potential without necessitating the high-temperature approximation. Although OPD was shown to have a scale dependence similar to that of 3D EFT approaches in the high-T limit, the calculated scale dependence of observables, in particular the strength of the gravitational wave signal from a phase transition, remains sizeable. In this talk we will show a self-consistent way to RG-improve the scalar potential at finite temperature in the OPD formalism and demonstrate a large reduction in the scale dependence of physical observables in comparison to current techniques.
We consider a classically conformal $U(1)$ extension of the Standard Model (SM) in which the new $U(1)$ symmetry is radiatively broken via the Coleman-Weinberg mechanism, after which the $U(1)$ Higgs field $\phi$ drives electroweak symmetry breaking through a mixed quartic coupling with the SM Higgs doublet via coupling constant $\lambda_{mix}$. For $m_{\phi} < m_{h}/2$, the coupling governing the decay $h \rightarrow \phi \phi$ is strongly suppressed, and experimental signals lie in the domain of future experiments such as the ILC. Additional probes of this conformal model are future gravitational wave (GW) observatories, capable of detecting primordial GW generated from a strong first-order phase transition (FOPT). We perform a numerical analysis to investigate possible characteristic GW signals and detection prospects for a conformal model with such a phase transition, specifically for parameter regions which would simultaneously reproduce the observed dark matter relic density and for which the anomalous Higgs properties can be measured at the ILC.
Our study presents a comprehensive analysis of baryon number violation during the electroweak phase transition (EWPT) within the framework of an extended scalar electroweak multiplet. We perform a topological classification of the scalar multiplet's representation during the EWPT, identifying conditions under which monopole or sphaleron field solutions emerge, contingent upon whether the hypercharge is zero; this indicates that only a monopole scalar multiplet can contribute to the dark matter relic density. We also conduct a systematic study of other formal aspects, such as the construction of the higher-dimensional sphaleron matrix, the computation of the sphaleron and monopole masses, and the analysis of boundary conditions for the field equations of motion. We then scrutinize the computation of the sphaleron energy and monopole mass within the context of a multi-step EWPT, employing the SU(2) septuplet scalar extension of the Standard Model (SM) as a case study. In the scenario of a single-step EWPT leading to a mixed phase, we find that the additional multiplet's contribution to the sphaleron energy is negligible, primarily due to the prevailing constraint imposed by the $\rho$ parameter. Conversely, in a two-step EWPT scenario, the monopole mass can reach significantly high values during the initial phase, thereby markedly constraining the monopole density and preserving the baryon asymmetry if the universe undergoes a first-order phase transition. In the two-step case, we delineate the relationship between the monopole mass and the parameters relevant to dark matter phenomenology.
The Minimal Supersymmetric Standard Model (MSSM) falls short both in inducing a strong first-order phase transition and in providing sufficient CP violation to explain the observed baryon asymmetry of the universe (BAU). In this talk, I will discuss how the BAU could be generated in the context of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), and how strongly the CP-violating ingredients of the NMSSM will be constrained by ongoing experiments, especially searches for permanent EDMs of fundamental particles.
We employ a derivative expansion method to analyze the effective action within the SU(2)-Higgs model at finite temperature. By utilizing a specific power counting scheme, we compute gauge-invariant constraints on primordial gravitational waves arising from a thermal first-order electroweak phase transition. We then compare these results with findings from a pre-existing nonperturbative analysis, effectively benchmarking the framework's validity and assessing its implications for the detectability of a stochastic gravitational wave background by forthcoming experiments such as LISA.
Recently, the NANOGrav collaboration reported strong evidence for a stochastic gravitational wave background based on 12.5 years of observations [Arzoumanian et al. (2020)], and the subsequent analysis of 15 years of data confirmed the detection [Agazie et al. (2023)]. Besides gravitational waves from astrophysical sources (such as supermassive black holes), this signal can be understood as possibly originating in the early universe [Figueroa et al. (2023)]. Note that the detection has also been confirmed by several pulsar timing array (PTA) missions, including the European PTA (EPTA) and the Indian PTA (InPTA) [EPTA collaboration; InPTA collaboration (2023)]. I will report the results of direct numerical simulations of gravitational waves induced by hydrodynamic and hydromagnetic turbulent sources that might have been present at the quantum chromodynamic (QCD) phase transition, and I will discuss constraints on cosmological models based on existing data.
We analyse sound waves arising from a cosmic phase transition, taking the full velocity profile into account, as an explanation for the gravitational wave spectrum observed by multiple pulsar timing array groups. Unlike the broken power law used in the literature, in this scenario the power law after the peak depends on the macroscopic properties of the phase transition, allowing for a better fit to pulsar timing array (PTA) data. We compare the best fit with that obtained using the usual broken power law and, unsurprisingly, find a better fit with the gravitational wave (GW) spectrum that utilizes the full velocity profile. Even more importantly, the thermal parameters that produce the best fit are quite different. We then discuss models that can produce the best-fit point, as well as complementary probes using CMB experiments and searches for light particles in DUNE, IceCube-Gen2, neutrinoless double $\beta$-decay, and forward physics facilities (FPF) at the LHC such as FASER$\nu$.
We show that observations of primordial gravitational waves of inflationary origin can shed light on the scale of flavor violation in a flavon model. The mass hierarchy of fermions can be explained by a flavon field. If it exists, the energy density stored in oscillations of the flavon field around the minimum of its potential redshifts as matter and is expected to dominate over radiation in the early universe. The evolution of primordial gravitational waves acts as a bookkeeping method for the expansion history of the universe. Importantly, the gravitational wave spectrum differs if there is an early matter-dominated era, compared to the radiation domination expected from the standard cosmological model, and it is damped by the entropy released in flavon decays, determined by the mass of the flavon field $m_S$ and the new scale of flavor violation $\Lambda_{\rm FV}$. Furthermore, the flavon decays can source the baryon asymmetry of the universe. We show that the $m_S-\Lambda_{\rm FV}$ parameter space in which the correct baryon asymmetry is produced can also be probed by gravitational wave observatories such as BBO, DECIGO, U-DECIGO, ARES, LISA, ET, CE, etc. for a blue-tilted gravitational wave spectrum. Our results are compatible with a primordial origin of the NANOGrav observations.
Different inflation models make testable predictions that are often close to each other, and breaking this degeneracy (i.e., distinguishing different models) may then require additional observables. In this talk, we explore the minimal production of gravitational waves during reheating after inflation, arising from the minimal coupling of the inflaton to gravity. The resulting signal shows a strong distinction between different inflaton potentials. If detected, such a signal could also be used to probe the reheating process and would serve as a direct measurement of the inflaton mass.
Gravitational-wave (GW) signals offer unique probes into the early universe dynamics, particularly those from topological defects. We investigate a scenario involving a two-step phase transition resulting in a network of domain walls bound by cosmic strings. By introducing a period of inflation between the two phase transitions, we show that the stochastic GW signal can be greatly enhanced. The generality of the mechanism also allows the resulting signal to appear in a broad range of frequencies and can be discovered by a multitude of future probes, such as Pulsar Timing Arrays, and space- and ground-based observatories. We also offer a concrete model realization that relates the second phase transition to the epoch of inflation. In this model, the successful detection of the GW spectrum peak pinpoints the soft supersymmetry breaking scale and the domain wall tension.
The dynamical generation of right-handed-neutrino (RHN) masses $M_N$ in the early Universe naturally enta