ICHEP is a series of international conferences organized by the C11 commission of the International Union of Pure and Applied Physics (IUPAP). It has been held every two years for more than 50 years and is the reference conference of particle physics, where the most relevant results are presented.
At ICHEP, physicists from around the world gather to share the latest advancements in particle physics, astrophysics/cosmology, and accelerator science and discuss plans for major future facilities.
Main web-page of the ICHEP 2024 conference: https://ichep2024.org/
Registration is open: https://ichep2024.org/registration-guidelines/
The conference is in-person only. The plenary sessions on July 22-24 will be streamed on YouTube, see link.
The discovery of the Higgs boson marked the beginning of a new era in HEP. Precision measurements of the Higgs properties are a natural next step beyond the LHC and HL-LHC. Among the Higgs factories proposed worldwide, the Circular Electron Positron Collider (CEPC) was put forward in 2012. CEPC can produce Higgs, W, and Z bosons as well as top quarks, with the aim of measuring Higgs, electroweak, flavor physics and QCD with unprecedented precision and of probing new physics beyond the SM. With the official release of the CEPC Accelerator Technical Design Report (TDR) in December 2023, we are intensively preparing the accelerator Engineering Design Report (EDR) and the reference detector TDR. The goal is to submit the CEPC proposal to the Chinese government for approval and to start construction within the “15th five-year plan” (2026-2030). In this talk, the overview and global aspects of the CEPC project and highlights of CEPC physics, accelerator and detector R&D will be presented. International participation and contributions are warmly welcome.
The International Linear Collider (ILC) and Compact Linear Collider (CLIC) are well-developed with mature and resource-conscious designs as next-generation high-energy electron-positron colliders. With their key features of polarised beams and extendable energy reach they offer unique possibilities to explore the Higgs boson, the electroweak gauge bosons, the top quark as well as beyond Standard Model sectors. An overview and status of each collider project will be given, including the design, key technologies, accelerator systems, energy-staging strategies, and the most recent cost and power estimates. An overview of the ongoing development strategy for each project over the next 4-5 years will be presented, as well as possible long-term visions for a linear collider facility.
SuperKEKB is a high-luminosity electron-positron collider in which a “nanobeam collision scheme” is utilized to achieve unprecedented luminosity. Its luminosity performance has gradually improved, reaching a peak luminosity of 4.7e34 cm-2s-1 in June 2022. While making steady progress, SuperKEKB encountered challenges as a luminosity-frontier machine, such as short beam lifetime, severe beam instabilities including sudden beam loss, and low injection efficiency. To overcome these challenges, a long shutdown was carried out from July 2022 to January 2024 to perform many upgrades, including construction of a non-linear collimation system, modification of the injection point, and additional radiation shielding at the interaction region. Commissioning resumed in January 2024, and many fruitful results are expected from this beam operation, which will serve as a good reference for future colliders.
In response to the directives of the 2020 European Strategy for Particle Physics (ESPP), CERN, in collaboration with international partners, is exploring the feasibility of an energy-frontier, 100 TeV hadron collider, including, as an initial stage, a high-luminosity circular electron-positron collider serving as a Higgs and electroweak factory.
This effort builds upon the 2019 conceptual design reports of the Future Circular Collider (FCC) study. The ongoing five-year FCC Feasibility Study aims to provide conclusive input to the next update of the ESPP, with a focus on implementing these accelerators inside a 90.6 km tunnel in the Lake Geneva basin.
The ongoing study aims to validate tunnel construction, refine collider and injector designs, develop organization and funding models, and conduct R&D on critical machine components. This presentation will provide an overview of the study status and the latest advancements on the electron-positron collider FCC-ee.
With concerted R&D efforts under way, the Energy Recovery Linac (ERL) technique is an outstanding novel means to considerably improve the performance of particle physics colliders, providing excellent physics opportunities with the significantly reduced power required of a next generation of sustainable machines. The European R&D Roadmap for ERLs, endorsed by the CERN Council, identifies the most crucial and impactful R&D actions to build confidence in the technical feasibility of high-power ERL accelerating systems. The presentation will provide an overview of the implementation status of this roadmap and evaluate the feasibility and potential performance of a portfolio of electron-beam-based future ERL accelerators under study, especially high-luminosity electron-proton and electron-positron colliders, which at high and at the maximum considered beam energies, respectively, will be suitable for thoroughly investigating the Higgs mechanism in single as well as double Higgs boson production.
The development of Energy Recovery Linacs (ERLs) has been recognized as one of the five main pillars of accelerator R&D in support of the European Strategy for Particle Physics. Two projects for high-power ERLs, PERLE and bERLinPro, are considered key infrastructures for the development of ERLs for future HEP colliders such as the LHeC or FCC-eh. Whereas bERLinPro will demonstrate high-intensity beam creation and recovery in a single-turn ERL, PERLE focuses on demanding multi-turn ERL technology as a necessary demonstrator for the future HEP machines, with which it shares the same technology choices and beam parameters. The two facilities recently joined forces to collaborate on improving the efficiency of ERLs, in terms of both beam operation and power consumption, within the EU Horizon iSAS framework. Here we will report on the status of the projects, introduce the main ongoing achievements, and describe the staged strategy for construction and ongoing commissioning.
The ICARUS LArTPC, currently operating at Fermilab, is collecting data from the Booster Neutrino and NuMI off-axis beams within the SBN program. A light detection system, based on PMTs deployed behind the TPC wire chambers, is in place to detect the vacuum ultraviolet photons produced by ionizing particles in LAr. This system is fundamental for detector operation, providing an efficient trigger and contributing to the 3D reconstruction of events. Moreover, since the TPC is exposed to a huge flux of cosmic rays due to its operation at shallow depth, the light detection system allows for the time reconstruction of events, contributing to the identification and selection of neutrino interactions within the beam spill gates.
This contribution will primarily focus on a comparative study (data vs. MC) of the light signals of cosmic muons to validate the light emulation. An overview of the current analysis status and its first results will be reported.
The Pacific Ocean Neutrino Experiment (P-ONE) is a planned cubic-kilometer deep-sea detector targeting the study of high-energy neutrinos, their sources, and their unknown acceleration mechanisms. With low expected scattering in the deep sea, the ocean is an ideal location for high-energy neutrino detectors with the potential for sub-degree angular resolution. However, operating large-scale infrastructure in deep waters carries various challenges. With ever-changing ocean currents, detection lines will sway through the water column, effectively resulting in time-variable detector geometry, water properties, and optical backgrounds. Together with Ocean Networks Canada, P-ONE aims to install long-lived sub-sea photosensor and calibration instrumentation, to enable continuous and precise neutrino detection. In this talk, we will present the ongoing development of the first P-ONE detector line, its instrumentation, and the expected performance of the first cluster of P-ONE lines.
High-energy neutrinos propagating over cosmological distances are ideal messenger particles for astrophysical phenomena, but the neutrino landscape above 10 PeV is currently completely uncharted. At these extreme energies, and given the meagre expected flux, the dominant experimental strategy is to detect radio-frequency emission from particle cascades produced by neutrinos interacting in the vast polar ice sheets.
The Radio Neutrino Observatory in Greenland (RNO-G) is an array of radio antennas embedded in the ice near Summit Station, currently being deployed. At completion, RNO-G will consist of 35 autonomous antenna stations spaced 1.25 km apart on a rectangular grid, making it the largest and most sensitive in-ice neutrino telescope, with unique access to the northern sky.
In this talk, I will describe the design of RNO-G, outline calibration and analysis strategies developed on the way to first physics, and share a look at the data collected by the first seven operating stations.
A major open question currently being investigated in the High Energy Physics (HEP) field is the origin and nature of Ultra-High-Energy Cosmic Rays (UHECRs). Coming from deep within the Universe, they bring information from afar as well as on possible new physics. This talk reports on the development and design of DUCK (Detector system of Unusual Cosmic-ray casKades), a new cosmic-ray detector at the Clayton State University campus with ns-level detection resolution. The main scientific goals of the DUCK project are to contribute a novel approach to the general EAS event analysis methodology, using the full waveform and the width of the detector response, and to provide an independent verification of the ‘unusual’ cosmic-ray events detected by the Horizon-T detector system, which may point towards novel physics possibilities.
The Askaryan Radio Array (ARA) is an in-ice ultrahigh energy (UHE, >10 PeV) neutrino experiment at the South Pole that aims to detect radio emission from neutrino-induced particle cascades. ARA has five independent stations which together have collected nearly 30 station-years of livetime. Each station searches for UHE neutrinos with in-ice clusters of antennas buried ∼200 meters deep in a roughly cubical lattice with ~20 m side length. Additionally, the fifth ARA station (A5) has a beamforming trigger, referred to as the Phased Array, consisting of a trigger array of 7 tightly packed vertically polarized antennas. In this proceeding, we review the physics results from ARA, report on the progress of the analysis of the full ARA data set, and discuss future prospects for ARA, emphasizing the discovery potential and the benefits for the radio community and future UHE detection experiments.
We present a detailed study of the production of dark matter in the form of a sterile neutrino via freeze-in from decays of heavy right-handed neutrinos. Our treatment accounts for thermal effects in the effective couplings, generated via neutrino mixing, of the new heavy neutrinos with the Standard Model gauge and Higgs bosons, and can be applied to several low-energy fermion seesaw scenarios featuring heavy neutrinos in thermal equilibrium with the primordial plasma. We find that the production of dark matter is not as suppressed as what is found when considering only Standard Model gauge interactions. Our study shows that freeze-in dark matter production could be efficient.
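For orientation (a textbook scaling, not a result of this work), the freeze-in yield of a feebly coupled species produced in decays of a heavy parent $N$ in thermal equilibrium is controlled by the parent's partial decay width:

```latex
Y_{\nu_s} \;\sim\; \frac{135}{8\pi^3\,(1.66)\,g_*^{3/2}}\;
\frac{M_{\rm Pl}\,\Gamma(N \to \nu_s X)}{m_N^2}\,,
```

up to $\mathcal{O}(1)$ factors. Since the relic abundance tracks $\Gamma$, thermal corrections to the mixing-induced couplings of $N$ to the gauge and Higgs bosons directly modify the predicted dark matter yield.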
The Short-Baseline Near Detector (SBND) is a 112-ton liquid argon time projection chamber located 110 m from the Booster Neutrino Beam (BNB) target at Fermilab (Illinois, USA). In addition to its role as a near detector enabling precision searches for short-baseline neutrino oscillations, the proximity of SBND to the BNB target makes the experiment ideal for many beyond-the-Standard-Model (BSM) searches for new particles produced in the beam. The nanosecond timing resolution of the scintillation light detectors further boosts the experiment's capabilities. In this talk, we present the status and the expected sensitivity to new BSM particles such as heavy neutral leptons, using full beamline and detector simulations as well as a model-independent approach.
The MicroBooNE detector, an 85-tonne active mass liquid argon time projection chamber (LArTPC) at Fermilab, is ideally suited to search for physics beyond the standard model due to its excellent calorimetric, spatial, and energy resolution. We will present several recent results using data recorded with Fermilab’s two neutrino beams: a first search for dark-trident scattering in a neutrino beam, world-leading limits on heavy neutral lepton production, including the first limits in neutrino-neutral pion final states, and new constraints on Higgs portal scalar models. We also use off-beam data to develop tools for a neutron-antineutron oscillation search in preparation for the DUNE experiment. The talk will also discuss the opportunities for future searches using MicroBooNE data.
The BDF/SHiP experiment is a general-purpose intensity-frontier experiment to search for feebly interacting GeV-scale particles and to perform neutrino physics measurements at the HI-ECN3 (high-intensity) beam facility at the CERN SPS, operated in beam-dump mode and taking full advantage of the available $4\times10^{19}$ protons per year at 400 GeV. The CERN Research Board recently decided in favour of BDF/SHiP for the future programme of this facility.
The setup consists of two complementary detector systems downstream of an active muon shield. The former, the scattering and neutrino detector (SND), consists of a light dark matter (LDM) / neutrino target with vertexing capability. The latter, the hidden sector decay spectrometer (HSDS), consists of a 50 m long decay volume followed by a spectrometer, a timing detector, and a PID system. BDF/SHiP offers unprecedented sensitivity to decay and scattering signatures of various new physics models and to tau neutrino physics.
The unique dimension-5 effective operator, LLHH, known as the Weinberg operator, generates tiny Majorana masses for neutrinos after spontaneous electroweak symmetry breaking. If there are new scalar multiplets that acquire vacuum expectation values (VEVs), they should not be far from the electroweak scale. Consequently, they may generate new dimension-5 Weinberg-like operators which in turn also contribute to Majorana neutrino masses. In this study, we consider scenarios with one or two new scalars in representations up to SU(2) quintuplets. We analyse the scalar potentials, studying whether the new VEVs can be induced, and are therefore naturally suppressed, as well as the possible existence of pseudo-Nambu-Goldstone bosons. Additionally, we obtain general limits on the new scalar multiplets from direct searches at colliders, loop corrections to electroweak precision tests, and the W-boson mass.
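As a reminder of the standard mechanism referenced above (textbook form, in one common notation), the Weinberg operator and the Majorana mass it induces after the Higgs doublet acquires its VEV $v$ read:

```latex
\mathcal{L}_5 \;=\; \frac{c_{\alpha\beta}}{\Lambda}\,
\left(\overline{L_\alpha^{c}}\,\tilde{H}^{*}\right)\left(\tilde{H}^{\dagger} L_\beta\right) + \mathrm{h.c.}
\qquad\Longrightarrow\qquad
(m_\nu)_{\alpha\beta} \;=\; c_{\alpha\beta}\,\frac{v^{2}}{\Lambda}\,,
```

so the smallness of neutrino masses follows from the suppression by the high scale $\Lambda$; Weinberg-like operators built with additional scalar multiplets carrying small induced VEVs provide an analogous suppression.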
We explore the potential of neutrinoless double-beta ($0\nu\beta\beta$) decays to probe scalar leptoquark models that dynamically generate Majorana masses at the one-loop level. Relying on Effective Field Theories, we perform a detailed study of the correlation between neutrino masses and the $0\nu\beta\beta$ half-life in these models. We describe the additional tree-level leptoquark contributions to the $0\nu\beta\beta$ amplitude with higher-dimensional operators, which can overcome the ones from the standard dimension-five Weinberg operator for leptoquark masses as large as $\mathcal{O}(10^3~\mathrm{TeV})$. In particular, we highlight a possible ambiguity in the determination of the neutrino mass ordering using only $0\nu\beta\beta$ decays in this type of model. The interplay between $0\nu\beta\beta$ and other flavor measurements is also explored, and we discuss the importance of properly accounting for the neutrino and charged-lepton mixing matrices in our predictions.
The axion represents a well-motivated dark matter candidate with a relatively unexplored range of viable masses. Recent calculations argue for post-inflation axion mass ranges corresponding to frequencies of roughly 10-100 GHz. These frequency ranges pose challenges for the traditional cavity haloscope, which can be overcome through the use of metamaterial resonators that fill large volumes. The ALPHA (Axion Longitudinal Plasma HAloscope) experiment, located at Yale University, is an axion dark matter detector probing the 10-45 GHz frequency range. Axions can convert into photons in the tunable, cryogenically cooled resonator within the experiment's 16 T magnet and be detected with quantum-limited amplification and readout. In this talk, we will describe the general design parameters of the experiment and the expected sensitivity.
The MAgnetized Disk and Mirror Axion eXperiment (MADMAX) is a future experiment aiming to detect dark matter axions from the galactic halo by resonant conversion to photons in a strong magnetic field. It uses a stack of dielectric disks, called the booster, to enhance the axion-photon conversion probability over a significant mass range. Several smaller-scale prototype systems have been developed and used to verify the experimental principles. This talk will present the current status of the experiment and its prototypes, including the ongoing research and development and the remaining challenges.
The Haloscope At Yale Sensitive To Axion CDM (HAYSTAC) experiment is a microwave cavity used to search for cold dark matter (CDM) axions with masses above 10 $\mu$eV. HAYSTAC searches for axion conversion into a resonant photon signal in an 8 T magnetic field, via the Primakoff effect. In typical cavity experiments the output signal power is exceedingly small, so quantum amplifiers are required. As a result, quantum uncertainty manifests as a fundamental noise source, limiting the measurement of the quadrature observables. Data taking for HAYSTAC was divided into two parts: Phase I achieved near-quantum-limited sensitivity using a single Josephson parametric amplifier (JPA) and covered the mass range 23.15 < $m_a$ < 24.0 $\mu$eV, while Phase II used vacuum squeezing to circumvent the quantum limit, making HAYSTAC the first axion experiment to surpass it. In this talk, we will present an overview of the HAYSTAC experiment and discuss the latest results from Phase II.
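For context (a standard result, not specific to HAYSTAC), the quantum limit mentioned above follows from the Caves bound for phase-insensitive linear amplifiers: the zero-point fluctuations and the minimum added noise each contribute half a photon per mode,

```latex
N_{\rm sys} \;=\; \underbrace{\tfrac{1}{2}}_{\text{zero-point}}
\;+\; \underbrace{N_{\rm added}}_{\;\geq\,1/2\;}
\;\geq\; 1 \quad \text{photon}
\qquad\Longleftrightarrow\qquad
k_B T_{\rm sys} \;\gtrsim\; \hbar\omega\,,
```

which squeezed-vacuum receivers can circumvent by de-amplifying the vacuum fluctuations in the measured quadrature.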
DEAP-3600, with its 3.3-tonne liquid argon target, is a dark matter direct detection experiment located at SNOLAB in Sudbury, Canada. Since 2019 the experiment has held the most stringent exclusion limit in argon for Weakly Interacting Massive Particles (WIMPs) above 20 GeV/c^2.
Since the end of the second fill run in 2020, the detector has been upgraded to reduce the backgrounds from shadowed alphas and from dust dissolved in the detector, in preparation for the third fill run scheduled for this year.
In parallel, the physics reach of the experiment has been widened, with unique contributions to $^{39}$Ar activity measurements and to ultra-heavy dark matter candidates, while a detailed Profile Likelihood Ratio WIMP search on the full second run is being developed, which will push the experiment to unprecedented sensitivity.
Dark matter candidates with masses below 10 GeV/c² show considerable potential. Our last-generation detector, DarkSide-50, has achieved world-leading results in this mass range using an ionization-only analysis with 46 kg of active mass. Building upon the advancements of DarkSide-50 for low-mass dark matter searches, and in line with the ongoing progress towards the next-generation high-mass dark matter detector, DarkSide-20k, a dedicated detector named DarkSide-LowMass has been proposed. DarkSide-LowMass is optimized for low-threshold electron-counting measurements, and sensitivity to light dark matter is explored across various potential energy thresholds and background rates. Our studies indicate that DarkSide-LowMass can achieve sensitivity to light dark matter down to the level of the solar neutrino fog for GeV-scale masses, and significant sensitivity down to 10 MeV/c² when considering the Migdal effect or interactions with electrons.
DarkSide-20k is a direct dark matter search experiment located at Laboratori Nazionali del Gran Sasso (LNGS). It is designed to reach an exposure of 200 tonne-years free from instrumental backgrounds. The core of the detector is a dual-phase Time Projection Chamber (TPC) filled with 50 tonnes of low-radioactivity liquid argon. The TPC is surrounded by a gadolinium-loaded polymethylmethacrylate (Gd-PMMA) shell, which acts as a neutron veto, immersed in a low-radioactivity liquid argon bath enclosed in a stainless steel vessel, itself placed inside a ProtoDUNE-like cryostat. The readout systems consist of large-area Silicon Photomultiplier (SiPM) arrays. DarkSide-20k aims to reach a dark matter-nucleon cross-section sensitivity of $7.4 \times 10^{-48}\ cm^{2}$ at 90% confidence level for a dark matter mass of 1 TeV/c$^{2}$ in a 200 tonne-year exposure. This talk will give an overview of the status of construction and the physics program of the project.
We shall introduce the novel LiquidO technology, which relies for the first time on light detection in “opaque” media. LiquidO thereby enables event-wise imaging of sub-atomic particles, i.e. event topology, which, combined with fast timing, provides powerful particle identification even at MeV energies. The development is led by the homonymous international academic collaboration, with institutions from over 11 countries. LiquidO appears capable of offering several detection features with breakthrough potential in neutrino physics, rare decay physics, and high-energy physics in general. The performance of LiquidO improves with energy, starting from a fraction of an MeV when scintillation is used. Its preliminary physics potential will also be highlighted. LiquidO opens a test-bed for further detection R&D, where innovation is ongoing, including pioneering new technology elements such as opaque scintillators.
Traditionally used for photon detection, superconducting Transition-edge Sensors (TESs) take on a new role in the PTOLEMY project as we investigate their application for electron detection to establish the existence of relic neutrinos. PTOLEMY requires TESs with 50 meV energy resolution for discerning electrons in the tens of eV range. Our focus is on exploring TES detectors' response to low-energy electrons—an unexplored area. For electron generation at low temperatures, we are exploiting both field emission from carbon nanotubes and photoemission from a thin aluminium foil exposed to UV photons. The study provides insights into TES device design and integration with the low-energy cryogenic electron source, marking a significant advancement for PTOLEMY and applications requiring electron detectors capable of discriminating single low-energy electrons with excellent energy resolution and low dark count rates.
The Jiangmen Underground Neutrino Observatory (JUNO), a 20-kiloton liquid scintillator detector equipped with more than 43 thousand photomultiplier tubes, is currently under construction, aiming primarily to determine the neutrino mass ordering by detecting reactor electron anti-neutrinos. To achieve this physics goal, the detector energy resolution should be better than 3% at 1 MeV, and the uncertainty on the absolute energy scale is required to be better than 1%. To meet these stringent requirements, a comprehensive calibration system comprising the Automatic Calibration Unit, the Cable Loop System, the Guide Tube Calibration System and the Remotely Operated Vehicle is under development to calibrate the energy nonlinearity and energy non-uniformity of the central detector. This talk will present the status of the JUNO calibration system and the analysis strategy, including the hardware progress of the calibration subsystems as well as the simulation of the JUNO detector response calibration.
Charged particles in Liquid Argon (LAr) produce light in the Vacuum Ultraviolet range, challenging traditional optics. Current LAr particle detectors rely on drift electron signals for readout, but this method is not efficient in high event-rate scenarios. New readout methods are needed for scintillation light detection in LAr. The Near Detector complex (ND) of DUNE (Deep Underground Neutrino Experiment) will be installed at Fermilab, with the main goal of monitoring the neutrino beam and probing several neutrino properties. DUNE ND will instrument advanced detectors, including a LAr detector (GRAIN, GRanular Argon for Interactions of Neutrinos) that will perform optical readout of images to identify neutrino interaction vertices and reconstruct particle tracks. This will complement the event reconstruction made by other DUNE subdetectors while also enhancing our understanding of LAr-neutrino interactions. The GRAIN design, goals, and ongoing activities will be described in this talk.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino oscillation experiment whose primary physics goal is to observe neutrino and antineutrino oscillation patterns, to precisely measure the parameters governing long-baseline neutrino oscillation in a single experiment, and to test the three-flavor paradigm. DUNE is being built with the exquisite imaging capability of massive LArTPC far detector modules and an argon-based near detector. Fermilab and DUNE have built ICEBERG for R&D on the DUNE Cold Electronics (CE) for both horizontal- and vertical-drift TPCs, including the Photon Detector (PD), DAQ, trigger, and online and offline software development. ICEBERG has a 1280-channel DUNE APA with a 30 cm dual-drift LAr volume, along with an X-ARAPUCA Photon Detector. We are working to implement on-edge AI/ML in the DUNE-DAQ. The status of this R&D will be discussed.
The T2K neutrino experiment in Japan obtained a first indication of CP violation in neutrino oscillations. To achieve better sensitivity, T2K has upgraded its near detector. A novel highly granular 3D scintillator detector called SuperFGD, with a mass of about 2 tons, will function as a fully active neutrino target and a 4\pi detector of charged particles from neutrino interactions. It consists of about two million small optically isolated plastic scintillator cubes with 1 cm sides. Each cube is read out in three orthogonal directions with wavelength-shifting fibers coupled to compact photosensors, multi-pixel photon counters (MPPCs). SuperFGD was installed into the ND280 magnet and has been receiving the neutrino beam since October 2023. In this talk, the main detector parameters, the detection and reconstruction of the first neutrino events, and the detector's performance in the neutrino beam will be reported.
The main mission of IPPOG, the International Particle Physics Outreach Group, is to bring the excitement of particle physics to the public and especially to the young generation. In recent years, IPPOG has also undertaken to emphasize the benefits of fundamental research to society. A tangible example is the particle therapy masterclass, an integral part of the masterclasses programme, which introduces high-school students to applications of accelerators in the fight against cancer. Another example is the effort of the working group “Outreach of applications for society”, whose objective is to create a collection of short stories covering a wide spectrum of spin-offs from our field. The ultimate goal is to connect fundamental research to everyday life and to provide a practical communication tool for the science outreach community.
Showcasing the ATLAS detector and its enormous facilities to local audiences often proves challenging, as it is difficult to convey the sheer size of the project. In a project together with the National Videogame Museum (NVM) in Sheffield, we have developed a virtual tour through ATLAS and the CERN site. It can be used in a web browser but is also available for Google Cardboard, a cheap but effective VR headset based on mobile phones. The virtual tour has been successfully used in a number of outreach events and is also integrated into a workshop devised with the NVM that incorporates the physics of video games and particle physics and asks participants to design a CERN-related videogame. This contribution presents the developed tours and their implementation, and gives an overview of their current use cases and the feedback received.
Exographer is a video game based on particle physics, coming out in 2024. It will put our field of research in brand new (gamer) hands. In Exographer, players use gluoboots or a photosphere to overcome obstacles while discovering, one by one, all the particles of the Standard Model. The levels are inspired by real laboratories (giant colliders and detectors, underground neutrino facilities, cosmic-ray observatories…). A lost civilization, inspired by real physicists such as Pauli or Curie, transposes the timeline of the real history of discoveries.
It was imagined by Raphael Granier de Cassagnac, a particle physicist and member of the CMS collaboration, who brought together a team of videogame professionals in the research center of Ecole Polytechnique, France.
I will show the main aspects of Exographer, explain how it was conceived, show how it can be used for outreach, and discuss how it could be extended with new levels.
Exographer on Steam: https://store.steampowered.com/app/2834320/Exographer/
Virtual reality (VR) is emerging as a transformative tool across various disciplines, revolutionising the way we perceive and interact with objects, data, and their visualisation. In this talk, we present a novel CMS project wherein we use VR, utilising Meta Quest headsets, to create an immersive virtual experience. The virtual world features 3D models of the CMS detector and the underground hallways that lead to the cavern where the detector is located.
The experience is particularly useful for visitors during the data-taking phase of the LHC, when access underground is possible but entry to the detector cavern is forbidden. The concept is thus that visitors can "see through the walls" to the detector whilst underground. Through this presentation we will demonstrate our VR project with the help of captured images and videos, and describe the general workflow of how it is set up. After the talk, attendees will also get a chance to experience the project themselves using our VR headsets.
CMS Virtual Visits allow thousands of people each year to experience CMS from the comfort of their own homes or schools. These visits are hosted online where people interact with CMS scientists as they are shown the experimental areas in Cessy, France, often in their own language! Not everybody can visit the site in person, but this should not be a barrier to experiencing everything CMS has to offer and creating excitement for our audiences. Since its inception in 2006, we have created a system for running the virtual visits for many different audiences and languages. We will take you through the history of this initiative, how they are run today, the feedback we have received, and the exciting possibilities this online visit format can have in the coming years for CMS and other outreach teams.
In the board game Sci-me!, you build up your own scientific laboratory, hire people to do research for you, and try to obtain as many grants as possible to fund your research. Since publications are the scientific currency, your main goal is to publish your scientific findings.
Our game is a concept game designed for educational purposes. All actions in the game and their meta-level meaning are explained in the interpretation book. Sci-me offers different complexity modes to engage non-scientists, budding scientists, and established scientists alike. The game is available as a digital prototype, a board game, and a simplified travel version. Sci-me can also be extended with add-ons to address specific areas of research or to emphasize different mechanics such as grant applications or open science.
Heavy ion collisions allow access to novel QCD and QED studies in a laboratory setting. This talk will present recent CMS highlights on precision measurements of the properties of quark-gluon plasma and the strong electromagnetic fields produced in high-energy heavy ion collisions.
The research conducted by the NA61/SHINE experiment spans a broad spectrum of hadronic physics within the CERN SPS energy range. This presentation will delve into the energy-dependent characteristics derived from the SMES model (the horn and step phenomena), along with the latest findings concerning particle production properties observed in p+p collisions and in Be+Be, Ar+Sc, and Xe+La collisions at SPS energies. Furthermore, recent observations by the experiment have unveiled an unexpected surplus of charged meson production compared to neutral mesons in central Ar+Sc collisions. This contribution will provide an analysis of these results. A second pivotal aspect of the physics program is the quest for the critical point of nuclear matter. This presentation will highlight the outcomes of fluctuation, HBT and intermittency analyses, offering insights directly relevant to the search for the critical point. The current achievements and future plans for measuring open charm production will be outlined.
The LHCb detector is a unique tool for studying high-energy heavy-ion collisions. Its forward geometry, along with its excellent vertex reconstruction and particle identification capabilities, allows the LHCb detector to study a wide variety of observables in pPb and PbPb collisions in previously unexplored kinematic territory. Recent results from the LHCb heavy-ion program will be discussed, along with prospects for heavy-ion physics with the newly upgraded LHCb detector.
Owing to the injection of gas into the LHC beampipe while multi-TeV proton or ion beams are circulating, the LHCb spectrometer has the unique capability to function as the highest-energy fixed-target experiment to date. The resulting beam-gas collisions cover an unexplored energy range above previous fixed-target experiments but below RHIC or LHC collider energies. In this contribution, recent results for hadron production and polarization from beam-gas fixed-target collisions at LHCb are presented. The upgrade of the fixed-target system, named SMOG2, and preliminary results from the first collected data will also be discussed.
sPHENIX is a next-generation, state-of-the-art particle detector at the Relativistic Heavy-Ion Collider (RHIC) that has recently taken its first dataset of 200 GeV Au+Au collisions during a commissioning run in 2023. sPHENIX features a variety of subsystems capable of detailed studies of bulk particle production in heavy-ion collisions, including the first barrel hadronic calorimeter at RHIC. This talk presents the first measurements by sPHENIX of bulk QGP properties in the 2023 commissioning data, including the charged particle pseudorapidity density, the total transverse energy, neutral pion production, and azimuthal anisotropies. These measurements are compared to previous results at RHIC, as well as to the latest models of bulk particle production. In addition, we highlight that these first sPHENIX measurements serve as an important benchmark of the detector performance and reconstruction for the measurements that follow.
The pseudorapidity dependence of charged particle production provides information on the partonic structure of the colliding hadrons and is, in particular at LHC energies, sensitive to non-linear QCD evolution in the initial state. For Run 3, ALICE has increased its pseudorapidity coverage to track charged particles over a wider range of −3.6 < $\eta$ < 2, combining measurements from the upgraded Inner Tracking System (ITS) and the newly installed Muon Forward Tracker (MFT).
Particle production mechanisms are explored by addressing the charged-particle pseudorapidity densities measured in pp and Pb−Pb collisions, presenting new final results from Run 3. These studies allow us to investigate the evolution of particle production with energy and system size and to compare models based on various particle-production mechanisms and different initial conditions both at mid and forward rapidities.
The Higgs boson decay to two bosons can be used to perform some of the most precise measurements of the Higgs boson production cross sections. This talk presents the most recent Higgs boson cross section measurements by the ATLAS experiment in the bosonic decay channels. Interpretations of these results in the context of Standard Model effective field theories will be presented. The results are based on pp collision data collected at 13 TeV during Run 2 of the LHC.
In this presentation we will discuss the most recent measurements of the couplings of the Higgs boson, as well as its inclusive and fiducial production cross sections, with data collected by the CMS experiment. Data collected at centre of mass energies of 13 and 13.6 TeV are analyzed.
This talk presents precise measurements of the Higgs boson mass, obtained using the full dataset collected in pp collisions at 13 TeV during Run 2 of the LHC. The measurements are performed exploiting the Higgs boson decays into two photons or four leptons, as well as their combination. The talk will describe the adopted analysis strategies and will stress the impact of the experimental techniques on these measurements.
An important aspect of the Higgs boson physics programme at the LHC is to determine all the properties of this particle, including its mass, which is a free parameter in the SM, and its width. This presentation will discuss the latest developments in measurements of the Higgs boson mass and width, with data collected by the CMS experiment at a centre of mass energy of 13 TeV. Both direct and indirect constraints on the Higgs boson width will be shown.
MiNNLOPS is a method that uses different jet multiplicities to perform QCD simulations at next-to-next-to-leading order (NNLO) accuracy, naturally combined with parton showers (PS) for a realistic description of LHC events. In this talk I summarise the method and our recent implementation for Higgs production via bottom-quark annihilation (bbH). Although the bbH signal is extremely challenging at the LHC, it is relevant in BSM theories with an enhanced bottom-Yukawa coupling and in background studies for Higgs-pair production.
Different schemes can be adopted for the calculation, since the bottom quark can be treated either as massless (in the five-flavour scheme, 5FS) or as massive (with four massless flavours, 4FS). I present our NNLO+PS results in the 5FS, compared against fixed-order predictions as well as resummed calculations. I also show our recent studies in the 4FS setup, capturing the massive effects at NNLO+PS accuracy for the first time.
The Daya Bay reactor neutrino experiment, pioneering in its measurement of a non-zero value for the neutrino mixing angle $\theta_{13}$ in 2012, operated for about nine years from Nov. 24, 2011 to Dec. 12, 2020. Antineutrinos emanating from six reactors with a thermal power of 2.9 GW$_{\mathrm{th}}$ were detected by eight identically designed detectors, which were positioned in two near and one far underground experimental halls. This spatial configuration, spanning kilometer-scale baselines between detectors and reactors, facilitates a precise examination of the three-neutrino mixing framework. This talk will show the measurements of $\theta_{13}$ and the mass-squared difference by utilizing the Gd-capture tagged sample. Updates on the results derived from the H-capture tagged sample and the search for light sterile neutrinos will also be included.
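As a reminder of how these parameters enter, the electron-antineutrino survival probability used in such reactor analyses is, to good approximation,

```latex
P(\bar{\nu}_e \to \bar{\nu}_e) \approx 1
 - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{\Delta m^2_{ee}\,L}{4E}\right)
 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\!\left(\frac{\Delta m^2_{21}\,L}{4E}\right),
```

so the relative near/far rate deficit constrains $\sin^2 2\theta_{13}$, while the spectral distortion constrains the effective mass-squared difference $\Delta m^2_{ee}$; the published analyses use the full three-flavour expression.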
The RENO experiment has precisely measured the amplitude and frequency of reactor antineutrino oscillation at the Hanbit Nuclear Power Plant since Aug. 2011. The 2018 publication reported the measured oscillation parameters based on 2200 days of data. Before the RENO far detector was shut down in March 2023, an additional 1600 days of data had been acquired. This presentation reports the updated and final result on the reactor antineutrino oscillation amplitude (frequency), with the statistical and systematic uncertainties improved by 10% (14%) and 13% (23%), respectively.
This talk will present a reactor flux and spectrum measurement with the Daya Bay full dataset, a 34% increase in statistics compared to the previous results. Using detector data spanning effective $\mathrm{^{239}Pu}$ fission fractions $F_{239}$ from 0.25 to 0.35, Daya Bay measures an average IBD yield and a fuel-dependent variation in IBD yield, $d\sigma_f/dF_{239}$. In addition, the yields and prompt spectra of the two dominant isotopes, $\mathrm{^{235}U}$ and $\mathrm{^{239}Pu}$, are extracted. Using SVD unfolding techniques, the $\bar{\nu}_e$ spectra are estimated from the prompt spectra of $\mathrm{^{235}U}$, $\mathrm{^{239}Pu}$, and the total measurement, thereby providing a model-independent reactor $\bar{\nu}_e$ spectrum prediction for other reactor antineutrino experiments. Among them, the $\bar{\nu}_e$ spectrum for $\mathrm{^{235}U}$ is one of the most precise measurements, and the $\bar{\nu}_e$ spectra for $\mathrm{^{239}Pu}$ and the total are the most precise measurements.
New DANSS results on searches for sterile neutrinos, based on 8.5M $\nu$ events, exclude an important part of the $\nu_s$ parameter space: the obtained limits cover practically all sterile-neutrino parameters preferred by the BEST results for $\Delta m^2$ < 5 $eV^2$, while an analysis relying on absolute $\nu$-flux predictions excludes practically all $\nu_s$ parameters preferred by BEST. The dependence of the neutrino spectrum on the $^{239}Pu$ fission fraction agrees with predictions of the Huber-Mueller model. The ratio of cross sections for $^{235}U$ and $^{239}Pu$ also agrees with the Huber-Mueller model and is somewhat larger than in other experiments. The reactor power was measured using the $\nu$ event rate during 7.5 years, with a statistical accuracy of 1.5$\%$ in 2 days and a relative systematic uncertainty of less than 0.5$\%$. The fraction of the antineutrino yield with energies above 8 MeV is measured. A new method of calibration using the Bragg curve for stopping muons is presented.
The ICARUS collaboration employed the 760-ton T600 detector in a successful three-year physics run at the underground LNGS laboratory, performing a sensitive search for LSND-like anomalous $\nu_e$ appearance in the CNGS beam. After a significant overhaul at CERN, the T600 detector was installed at Fermilab where, in June 2022, data taking for neutrino oscillation physics began, collecting events from the BNB and NuMI off-axis beams. ICARUS aims first to confirm or refute the claim by the Neutrino-4 short-baseline reactor experiment. It will also perform measurements of neutrino cross sections in LAr with the NuMI beam and several BSM searches. ICARUS will soon jointly search for evidence of sterile neutrinos with the Short-Baseline Near Detector (SBND). In this presentation, preliminary results from the ICARUS data with the BNB and NuMI beams are shown, both in terms of the performance of all ICARUS subsystems and of the capability to select and reconstruct neutrino events.
The MicroBooNE experiment utilizes a liquid argon time projection chamber to detect neutrinos from Fermilab's Booster Neutrino Beam (BNB) and the Neutrinos at the Main Injector (NuMI) beam. MicroBooNE is investigating the observed low-energy excess (LEE) of electron neutrino and antineutrino charged-current quasielastic events reported by the MiniBooNE experiment. This presentation will report on the search for an electron-neutrino excess compatible with the MiniBooNE LEE, utilizing the full five-year MicroBooNE dataset of $1.1\times10^{21}$ POT collected with the BNB. Additionally, we present the status of searches for short-baseline neutrino oscillations within the framework of a 3+1 eV-scale sterile-neutrino model. This work combines data from both the BNB and NuMI beams, leveraging their substantially different $\nu_e/\nu_\mu$ ratios to mitigate the degeneracy resulting from the cancellation of $\nu_e$ appearance and disappearance, greatly enhancing the experiment's sensitivity.
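In the 3+1 framework referenced above, the short-baseline appearance and disappearance probabilities take the standard two-flavour-like forms (the usual approximation for $\Delta m^2_{41}$ much larger than the other mass splittings):

```latex
P(\nu_\mu \to \nu_e) \approx \sin^2 2\theta_{\mu e}\,\sin^2\!\left(\frac{\Delta m^2_{41}\,L}{4E}\right),
\qquad
P(\nu_e \to \nu_e) \approx 1 - \sin^2 2\theta_{ee}\,\sin^2\!\left(\frac{\Delta m^2_{41}\,L}{4E}\right),
```

with $\sin^2 2\theta_{\mu e} = 4|U_{e4}|^2|U_{\mu 4}|^2$ and $\sin^2 2\theta_{ee} = 4|U_{e4}|^2(1-|U_{e4}|^2)$. Because $\nu_e$ appearance in a $\nu_\mu$-dominated beam can partially cancel against intrinsic $\nu_e$ disappearance, combining beams with different $\nu_e/\nu_\mu$ compositions helps disentangle the two effects.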
The Short-Baseline Near Detector (SBND) is one of three Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors positioned along the axis of the Booster Neutrino Beam (BNB) at Fermilab, as part of the Short-Baseline Neutrino (SBN) Program. The detector is currently being commissioned and is expected to take neutrino data this year. SBND is characterized by superb imaging capabilities and will record over a million neutrino interactions per year. Thanks to its unique combination of measurement resolution and statistics, SBND will carry out a rich program of neutrino interaction measurements and novel searches for physics beyond the Standard Model (BSM). It will unlock the potential of the overall SBN sterile-neutrino program by performing a precise characterization of the unoscillated event rate and constraining BNB flux and neutrino-argon cross-section systematic uncertainties. In this talk, the physics reach, current status, and future prospects of SBND are discussed.
The LHC will undergo an upgrade program to deliver an instantaneous luminosity of $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$ and collect more than 3 ab$^{-1}$ of data at $\sqrt{s}=$13.6 (14) TeV. To benefit from such a rich data sample, it is fundamental to upgrade the detector to cope with the challenging experimental conditions. The ATLAS upgrade comprises a new all-silicon tracker with extended rapidity coverage and a redesigned TDAQ system for the calorimeters and muon systems, allowing the implementation of a free-running readout system. In addition, a new High Granularity Timing Detector will aid track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. An important ingredient is a precise determination of the delivered luminosity, with systematic uncertainties below the percent level. This presentation will describe the status of the ongoing ATLAS detector upgrade and the main results obtained with prototypes.
The Belle II experiment at the SuperKEKB $e^+e^-$ collider started recording collision data in 2019, with the ultimate goal of collecting $50~\mathrm{ab}^{-1}$. The wealth of physics results obtained with the current data sample of $424~\mathrm{fb}^{-1}$ demonstrates excellent detector performance. The first years of running, however, also reveal novel challenges and opportunities for reliable and efficient detector operations with machine backgrounds extrapolated to full luminosity. In order to make Belle II more robust and performant at the target luminosity of $6\times 10^{35}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$, a Belle II upgrade is being planned for a 2027-2028 SuperKEKB shutdown. This talk will cover the full range of proposed upgrade ideas, which include the replacement of select readout electronics, upgrades of detector elements, and the possibility of substituting entire detector sub-systems such as the vertex detector.
The Upgrade II of the LHCb experiment is proposed for the long shutdown 4 of the LHC. The upgraded detector will operate at a maximum luminosity of $1.5\times10^{34}$ cm$^{-2}$ s$^{-1}$, with the aim of reaching a total integrated luminosity of $\sim$300 fb$^{-1}$ over the lifetime of the HL-LHC. The collected data will probe a wide range of physics observables with unprecedented accuracy, with unique sensitivities for the measurement of CKM phases, charm CP violation, and rare heavy-quark decays.
To achieve this, the current detector performance must be maintained at the expected maximum pile-up of $\sim$40, and even improved in certain specific areas. It is planned to replace all existing spectrometer components to increase the granularity, reduce the amount of material in the detector and exploit the use of new technologies, including precision timing on the order of tens of picoseconds.
The presentation will review the key points of the physics programme and the main options of the detector design.
The LHCb detector underwent a major upgrade after Run 2 of the LHC, which ended in 2018. To fully profit from an increased instantaneous luminosity of $2\times10^{33}$ cm$^{-2}$ s$^{-1}$, the lowest-level hardware trigger was removed, and the full event information is shipped to a software trigger at 40 MHz. As a result, all detector readout electronics were replaced. In addition, the tracking detectors (consisting of a pixelated vertex locator, a silicon-strip detector and a state-of-the-art scintillating-fibre tracker) and the photodetectors of the two RICH detectors were all newly constructed. In this presentation the latest results on LHCb detector performance in Run 3 will be presented.
During LHC LS3 (2026-28) ALICE will replace its innermost three tracking layers with a new detector, "ITS3", based on newly developed wafer-scale monolithic active pixel sensors, bent into cylindrical layers and held in place by light carbon-foam edge ribs. Unprecedentedly low values of the material budget (0.07% per layer) and of the distance to the interaction point (19 mm) lead to a factor-of-two improvement in pointing resolution down to very low $p_{\rm T}$, achieving, for example, 18 $\mu$m in the transverse plane at 1 GeV/c.
After a successful R&D phase in 2019-2023, which demonstrated the feasibility of this innovative detector, the final sensor and mechanics are now being developed.
This contribution will briefly review the conceptual design, the main R&D achievements, and the road to completion and installation. It concludes with a projection of the improved physics performance, for heavy-flavour mesons and baryons as well as for thermal dielectrons, that will come into reach with ITS3.
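The scaling behind these improvements is the standard multiple-scattering contribution to the pointing (impact-parameter) resolution, which at low momentum is driven by the first-layer radius $r_1$ and its material thickness $x/X_0$ (Highland formula):

```latex
\sigma_{d_0}^{\rm MS} \approx r_1\,\theta_0, \qquad
\theta_0 \simeq \frac{13.6\ \mathrm{MeV}}{\beta c p}\,\sqrt{\frac{x}{X_0}}\left[1 + 0.038\,\ln\frac{x}{X_0}\right],
```

so reducing both the inner radius and the per-layer material budget translates directly into the quoted factor-of-two gain in pointing resolution at low $p_{\rm T}$.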
The High Luminosity Large Hadron Collider at CERN is expected to produce proton collisions at a center-of-mass energy of 14 TeV, aiming to achieve an unprecedented peak instantaneous luminosity of $7\times10^{34}$ cm$^{-2}$ s$^{-1}$, implying an average pileup of 200. To cope with these running conditions, the CMS detector will undergo an extensive upgrade: Phase-2. This upgrade includes the complete replacement of the CMS silicon pixel detector, introducing improvements such as increased radiation resilience, finer granularity, and the capability to manage increased data rates, among other changes. This is, however, the second time CMS has replaced its pixel detector. We will outline the differences and similarities between the Phase-1 and Phase-2 upgrades of the CMS inner tracker. We will highlight specific lessons learned from operating the Phase-1 detector and how this experience has informed our approach to the design and assembly of the Phase-2 inner tracker as we approach preproduction of modules.
Rare kaon decays are among the most sensitive probes of both heavy and light new physics beyond the Standard Model, thanks to the high precision of the Standard Model predictions, the availability of very large datasets, and the relatively simple decay topologies. The $K^{+} \rightarrow \pi^{+}\nu\bar{\nu}$ decay is a “golden mode” for searches for New Physics in the flavour sector. The Standard Model provides a high-precision prediction of its branching ratio of less than $10^{-10}$, and this decay mode is highly sensitive to indirect effects of New Physics up to the highest mass scales. The NA62 experiment at the CERN SPS is designed to study the $K^{+} \rightarrow \pi^{+}\nu\bar{\nu}$ decay, and provided the world’s most precise investigation of this decay using 2016--18 data. Building on this success, the status of the analysis of data collected in 2021--2022, after beam-line and detector upgrades, is presented. NA62 is a multi-purpose high-intensity kaon-decay experiment and carries out a broad rare-decay and hidden-sector physics programme with dedicated trigger lines. New results on searches for hidden-sector mediators and for violation of lepton-number and lepton-flavour conservation in kaon decays, based on the NA62 2016--2018 dataset, are presented. Future prospects of these searches are discussed.
The NA62 experiment at CERN reports new results from the analyses of rare kaon and pion decays, using data samples collected in 2017-2018. A sample of $K^+ \rightarrow \pi^+ \gamma \gamma$ decays was collected using a minimum-bias trigger, and the results include measurement of the branching ratio, study of the di-photon mass spectrum, and the first search for production and prompt decay of an axion-like particle with gluon coupling in the process $K^+ \rightarrow \pi^+ A$, $A \rightarrow \gamma \gamma$. A sample of $\pi^0 \rightarrow e^+ e^-$ decay candidates was collected using a dedicated scaled down di-electron trigger, and a preliminary result of the branching fraction measurement is presented. New searches for lepton flavour violating kaon decays, and for production of a weakly-coupled particle X in the $K^+ \rightarrow \mu^+ \nu X$, $X \rightarrow \gamma \gamma$ decay chain, are also presented.
The KOTO experiment at J-PARC searches for the rare decay $K_L \to \pi^0\nu\overline{\nu}$. The mode is CP-violating, with a theoretical branching ratio highly suppressed in the Standard Model at $(2.94 \pm 0.15) \times 10^{-11}$. With a small theoretical uncertainty, this search is sensitive to new physics. In the analysis of 2016-2018 data, there were three observed events within the signal region, consistent with the background estimation. An upper limit on the branching ratio was set at $< 4.8 \times 10^{-9}$ (90% CL). Since that analysis, new hardware and analysis methods have been implemented to reduce the background level. The 2021 search had a single-event sensitivity of $8.66 \times 10^{-10}$, comparable to that of the 2016-2018 data taking. No events were observed in the signal region, allowing KOTO to set the best upper limit on $BR(K_L \to \pi^0\nu\overline{\nu})$ to date at $< 1.99 \times 10^{-9}$ (90% CL). I will report on the latest result of the $K_L \to \pi^0\nu\overline{\nu}$ search from data taken in 2021.
KOTO II is a next-generation experiment to measure the branching ratio of $K_L\to \pi^0\nu\overline{\nu}$ with the 30-GeV proton beam at J-PARC, as the successor of the currently running KOTO experiment. We plan to expand the hadron experimental facility at J-PARC and construct a new KOTO II beamline there. The $K_L$ extraction angle is 5 degrees, smaller than in KOTO, to obtain more $K_L$ with a higher momentum spectrum. The KOTO II detector is being designed with a 12-m signal decay region and a 3-m diameter calorimeter to increase the signal acceptance. The expected numbers of signal and background events are 35 and 40, respectively, assuming the Standard Model value of the branching ratio and a $3\times 10^7$-s running time. The signal can be observed with $5.6\sigma$ significance. The design, current developments, and the expected sensitivity of KOTO II will be reported.
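As a back-of-the-envelope check of the quoted sensitivity (a naive counting estimate of our own, not the collaboration's statistical treatment), the expected 35 signal events over 40 background events correspond to:

```python
# Naive counting-experiment significance for the KOTO II expectation of
# s = 35 signal and b = 40 background events. This simple s/sqrt(b) estimate
# ignores systematic uncertainties and the full statistical treatment used
# in the actual sensitivity study, but lands close to the quoted ~5.6 sigma.
import math

s, b = 35.0, 40.0
z_naive = s / math.sqrt(b)
print(f"naive significance: {z_naive:.2f} sigma")  # → 5.53 sigma
```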
Recent progress on the standard B unitarity triangle and on the kaon unitarity triangle is discussed. In particular, we outline how further inroads into the kaon UT can be made via $K^0 \to \pi^0 \ell^+\ell^-$, both in theory and in experiment. In the current precision-flavour era, with large fluxes of B mesons and kaons from LHCb, Belle II, NA62, KOTO and proposed experiments such as HIKE, we point out new ways to utilize these precious resources to extract vital information on CP violation, in order to refine our tests of the CKM paradigm of CP violation and improve searches for new physics.
With the large datasets of $e^+e^-$ annihilation at the $J/\psi$ and $\psi(3686)$ resonances collected by the BESIII experiment, multi-dimensional analyses making use of polarisation and entanglement can shed new light on the production and decay properties of hyperon-antihyperon pairs. In a series of recent studies performed at BESIII, significant transverse polarisation of the (anti)hyperons has been observed in $J/\psi$ or $\psi(3686)$ decays to $\Lambda\bar{\Sigma}$, $\Sigma\bar{\Sigma}$, $\Xi\bar{\Xi}$. The decay parameters for the most common hadronic weak decay modes were measured, and due to the non-zero polarization, the parameters of hyperon and antihyperon decays could be determined independently of each other for the first time. Comparing the hyperon and antihyperon decay parameters yields precise tests of direct, $\Delta S = 1$ CP violation that complement studies performed in the kaon sector.
In any relativistic quantum field theory, such as Quantum Chromodynamics or Electroweak theory, the interactions are invariant under the combined operation of Charge conjugation (C), Parity transformation (P) and Time reversal (T). One of the consequences of this (CPT) symmetry is that particles and their corresponding antiparticles must have exactly the same mass. While the mass difference between proton and antiproton has been measured with very high precision, the extension to the (multi-)strange baryon domain still lacks precise measurements.
In this contribution, the most precise measurement of the mass differences between $\Xi^{-}$ and $\overline{\Xi}^{+}$ and between $\Omega^{-}$ and $\overline{\Omega}^{+}$ using the ALICE detector will be presented, appreciably improving on the precision obtained by averaging the results from previous experiments.
In this talk, recent ATLAS measurements of distributions sensitive to the underlying event, the hadronic activity observed in relation to the hard scattering in the event, are presented. Underlying-event observables such as the average particle multiplicity and the transverse-momentum sum are measured for kaons and Lambda baryons as a function of the leading track-jet and are compared to MC predictions, which in general fail to describe the data. In addition, a recent measurement of charged-particle multiplicities in diffractive pp collisions is presented, with events classified using the ATLAS forward proton tagging. An analysis of the momentum differences between charged hadrons in proton-proton, proton-lead and lead-lead collisions is also presented. The difference in the yield of hadron pairs with like-sign and opposite-sign charge is used to extract the spectra of pairs adjacent in colour flow, a measurement sensitive to the dynamics of hadronization.
Inclusive event-shape distributions, as well as event shapes as a function of charged-particle multiplicity, are extracted from CMS low-pileup data and compared with predictions from various generators. Multi-dimensional unfolded distributions are provided, along with their correlations, using state-of-the-art machine-learning unfolding methods.
We will present results on exclusive production processes in CMS, including the production of charged hadron or lepton pairs. To select these signatures, some analyses use intact protons tagged in the TOTEM roman pot detectors.
The production of W/Z bosons in association with light or heavy flavor jets or hadrons at the LHC is sensitive to the flavor content of the proton and provides an important test of perturbative QCD. In this talk, measurements by the ATLAS experiment probing the charm and beauty content of the proton are presented. Inclusive and differential cross-sections of Z boson production with at least one c-jet, or one or two b-jets are measured for events in which the Z boson decays into a pair of electrons or muons. Moreover, the production of W boson in association with D+ and D*+ mesons will be discussed. This precision measurement provides information about the strange content of the proton. Finally, measurements of inclusive, differential cross sections for the production of missing transverse momentum plus jets are presented.
The study of the associated production of vector bosons and jets constitutes an excellent environment to check numerous QCD predictions. Total and differential cross sections of vector bosons produced in association with jets have been studied in pp collisions using CMS data. Differential distributions as a function of a broad range of kinematical observables are measured and compared with theoretical predictions.
Jet substructure measurements, using the distribution of final-state hadrons, provide insight into the parton shower and hadronisation. Observables for such measurements include the transverse momentum ($j_\mathrm{T}$) and longitudinal momentum fraction ($z$) of jet constituent particles. ALICE has recently measured the $j_\mathrm{T}$ distributions of jet fragments in proton-proton and proton-lead collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV, which are well described by parton-shower models. This talk will present a new ALICE measurement of jet fragmentation in pp collisions, which extends to multiple dimensions in $j_\mathrm{T}$ and $z$ to provide a more detailed picture of the parton-shower and fragmentation processes. The measured $j_\mathrm{T}$ distributions are characterized by a fit that separately constrains the hadronization and perturbative components of the shower. The final results and their fitted distributions are compared with theoretical predictions.
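For reference, these constituent-level observables are commonly defined with respect to the jet axis as

```latex
z = \frac{\vec{p}_{\mathrm{particle}}\cdot\vec{p}_{\mathrm{jet}}}{|\vec{p}_{\mathrm{jet}}|^{2}},
\qquad
j_{\mathrm{T}} = \frac{|\vec{p}_{\mathrm{particle}}\times\vec{p}_{\mathrm{jet}}|}{|\vec{p}_{\mathrm{jet}}|},
```

i.e. the longitudinal momentum fraction carried by the constituent and its momentum component transverse to the jet axis.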
Hadronic object reconstruction is one of the most promising settings for cutting-edge machine learning and artificial intelligence algorithms at the LHC. In this contribution, highlights of ML/AI applications by ATLAS to particle and boosted-object identification, MET reconstruction and other tasks will be presented.
The latest CMS measurements of W and Z boson production, decays and properties, obtained with proton collision data at 13 and 13.6 TeV, are presented. Some of these measurements lead to constraints on SM parameters and on new physics models.
The study of single W and Z boson production at the LHC provides stringent tests of the electroweak theory and perturbative QCD. The ATLAS experiment has measured the W boson production cross section in the LHC data collected in 2022 at 13.6 TeV. By forming ratios of Z, W, and ttbar production cross sections, this measurement becomes a sensitive probe of the quark and gluon content of the proton. Measurements of the transverse momentum of the W and Z boson at 5 and 13 TeV from dedicated LHC runs with reduced instantaneous luminosity are also presented. A search for exclusive hadronic decays of the W boson to single pions, kaons or rho mesons in association with a photon is highlighted, providing a test bench for the quantum chromodynamics factorization formalism. Differential cross sections as functions of mass and rapidity are presented for the neutral-current Drell-Yan process in the invariant-mass regions below and above the Z-boson peak.
The GENEVA method provides a means to combine resummed and fixed order calculations at state-of-the-art accuracy with a parton shower program. GENEVA NNLO+PS generators have now been constructed for a range of colour-singlet production processes and using a range of different resolution variables. I will review the GENEVA framework and then describe several recent advancements, such as the use of jet veto resummation at NNLL' accuracy and the ongoing extension to processes including jets in the final state.
In this talk, we discuss the main features of the combined QCD and QED resummation formalism for weak vector boson production at hadron colliders. Specifically, resummation is realized at NNLL+NNLO in QCD, with the inclusion of mixed QCD-QED effects at LL and pure QED ones at NLL, matched to the fixed-order full-EW NLO contribution (i.e. at one loop). Since the naive Abelianization of the QCD formalism is not suitable when considering an electrically charged final state, we exploited the heavy-quark resummation formalism, thereby properly incorporating QED final-state soft radiation. Numerical results at hadron colliders are presented for relevant kinematic distributions: the on-shell Z and W boson transverse-momentum distributions and their ratio. We find that QED effects can reach the percent level, which is potentially important for SM parameter extraction.
The LHCb experiment covers the forward region of proton-proton collisions, and it can improve the current electroweak landscape by studying the production of electroweak bosons in this phase space, complementary to ATLAS and CMS. Precision measurements of the properties of single W and Z bosons at LHCb not only provide stringent tests of the Standard Model, but are also essential inputs for global PDF fits. In this talk the latest results on single W and Z boson property measurements using the LHCb Run 2 data sets will be presented, including the single Z boson production measurement at 5.02 TeV, the weak mixing angle measurement, and searches for rare W/Z decays.
At hadron colliders, charged and neutral Drell-Yan processes can be used for a high precision determination of the W-boson mass and the weak mixing angle through template fits. Since these measurements rely on Monte Carlo templates, it is crucial to have both flexible and accurate event generators.
In this contribution, we present the latest updates of the Z_ew-BMNNPV package for the simulation of the neutral-current Drell-Yan process at NLO QCD plus NLO EW accuracy, with exact matching to QCD and QED parton showers in the POWHEG-BOX framework. The updates range from the development of new electroweak input-parameter/renormalization schemes, such as the MSbar one, useful for measuring the MSbar running of the weak mixing angle, to the implementation of higher-order fermionic corrections. We perform a detailed comparison of the predictions obtained in the different input-parameter/renormalization schemes, discussing their main features and the related theory uncertainties.
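To make the template-fit idea concrete, here is a minimal, hedged sketch (with entirely invented template shapes and parameter values, not the BMNNPV/POWHEG machinery itself) of the core step: pick the parameter value whose Monte Carlo template best matches a binned distribution via a chi-square comparison.

```python
import numpy as np

def chi2(data, template, sigma):
    """Chi-square between a binned data distribution and one MC template."""
    return np.sum(((data - template) / sigma) ** 2)

def template_fit(data, templates, param_values, sigma):
    """Return the parameter value whose template best matches the data."""
    chi2s = [chi2(data, t, sigma) for t in templates]
    return param_values[int(np.argmin(chi2s))], chi2s

# Toy example: Gaussian-shaped spectra whose peak position shifts with the
# fitted parameter (an invented stand-in for a W-mass-like observable).
bins = np.linspace(0.0, 1.0, 50)
param_values = np.array([80.30, 80.35, 80.40, 80.45])
templates = [np.exp(-((bins - (p - 80.0)) ** 2) / 0.01) for p in param_values]
data = templates[2].copy()  # pretend the data follow the p = 80.40 template
best, _ = template_fit(data, templates, param_values, sigma=0.05)
print(best)  # 80.4
```

In a real analysis the templates come from the event generator, the uncertainties are per-bin, and the chi-square (or likelihood) is interpolated between template points rather than minimized over a discrete grid.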
ATLAS has used the W and Z boson production processes to perform a range of precision measurements of SM parameters. The production rate of Z+jet events with large missing transverse momentum is used to measure the decay width of the Z boson decaying to neutrinos. Differential measurements of this topology with minimal assumptions on theoretical calculations are discussed and allow comparisons to the Standard Model as well as the interpretation in beyond-the-Standard-Model scenarios. Finally, the LHC pp collision data collected by the ATLAS experiment at sqrt(s)=7 TeV is revisited to measure the W boson mass and its width.
The proposed Super Tau-Charm Facility (STCF) is a symmetric electron-positron collider designed to provide e+e− interactions at a center-of-mass energy from 2.0 to 7.0 GeV. The peak luminosity is expected to be 0.5×10^35 cm−2 s−1, and STCF is expected to deliver more than 1 ab−1 of integrated luminosity per year. These huge samples could be used to make precision measurements of the properties of XYZ particles; search for new sources of CP violation in the strange-hyperon and tau-lepton sectors; make precise independent measurements of the Cabibbo angle ($\theta_c$) to test the unitarity of the CKM matrix; and search for anomalous decays with sensitivities extending down to the level of SM expectations. In this talk, the physics interests will be introduced, as well as the recent progress on the project R&D.
The Super Tau-Charm Facility (STCF) was proposed as a third-generation circular electron-positron collider with a center-of-mass energy of 2-7 GeV and a luminosity of 5×10^34 cm^-2 s^-1, aiming to explore tau-charm physics in the coming decades. This presentation will introduce the accelerator design and R&D efforts for STCF. With the financial support of local provincial and national funding agencies, the STCF accelerator team is working on the conceptual design of the accelerator. The accelerator consists of an injector and two collider rings. The injector will provide full-energy electron and positron beams for top-up injection. The collider rings, with typical third-generation features, are designed to have an extremely low beta (<1 mm), a large Piwinski angle (>10) and a high beam current (2 A) with the Crab-Waist collision scheme. Several challenges have been identified for intense study and R&D efforts, e.g. a very short Touschek lifetime of less than 300 s and twin-aperture superconducting magnets in the interaction region.
The machine-detector interface (MDI) is one of the most complicated and challenging topics at the Circular Electron Positron Collider (CEPC). A comprehensive understanding of the MDI issues is decisive for achieving the optimal overall performance of the accelerator and detector. The machine will operate at different beam energies; a flexible interaction-region design is therefore needed to accommodate the large beam-energy range. The design has to provide the high luminosity desirable for physics studies while keeping the radiation backgrounds tolerable to the detectors, which requires a careful balance of the requirements from the accelerator and detector sides.
In this talk, the latest design of the CEPC MDI, based on the CEPC Technical Design Report (TDR), will be presented, covering the design of the beam pipe and the whole interaction region, the estimation of beam-induced backgrounds, the mitigation schemes, and our plan towards the Ref-TDR of the CEPC detector.
The HALHF concept utilises beam-driven plasma-wakefield acceleration to accelerate electrons to very high energy and collide them with much lower-energy positrons accelerated in a conventional RF linac. This idea, which avoids difficulties in the plasma acceleration of positrons, has been used to design a Higgs factory that is much smaller, cheaper and greener than any other so far conceived. The talk will outline the original design, discuss the challenges of doing physics with a significantly boosted final state and describe a number of possible energy and facility upgrades. Finally, the current status of the design will be given, including possible evolution in several parameters and next steps towards a more optimised design that can form the basis for a pre-Conceptual Design Report.
The CERN Future Circular electron-positron Collider (FCC-ee) will enable extreme-precision physics experiments from the Z pole up to above the top-pair production threshold. Very precise beam-energy measurements will be performed by resonant depolarization (RD) of e+ and e- pilot bunches, using novel 3D polarimeters. Additional measurements will be needed to reduce the center-of-mass energy uncertainty to the level of the statistical precision of 4 keV ($m_Z$, $\Gamma_Z$) and 250 keV ($m_W$) expected for key Standard Model parameters. In addition, monochromatization of the beams, down to a few MeV, is necessary to observe resonant s-channel e+e- → H(125) production; a first optics implementation of this scheme has been achieved.
Positron-source yield is crucial for achieving the required luminosity in future lepton colliders. The conventional approach involves an e-beam impinging on a high-density solid target to initiate an electromagnetic shower, with positrons captured afterwards. However, this scheme is limited by the Peak Energy Deposited Density (PEDD) the target can sustain before structural failure.
We can utilize the large photon emission in axial channeling within a high-Z crystal to increase positron yield and/or decrease target thickness, thus lowering the PEDD[^]. Together with the conventional scheme, the crystal-based one is under study for the FCC-ee injector design[*].
We carried out experiments at DESY and the CERN PS with high-Z crystals and e-beams at energies relevant for FCC-ee. The results were used to validate a new simulation model, implemented in Geant4, that will be included in the injector design[@].
[^] DOI: 10.1140/epjc/s10052-022-10666-6
[*] DOI: 10.18429/JACoW-IPAC2019-MOPMP003
[@] DOI: 10.1007/s40042-023-00834-6
Positron sources for high-luminosity, high-energy colliders are a challenge for all future lepton colliders, such as the International Linear Collider (ILC), as well as for new concepts such as the HALHF collider design. In this talk, new R&D developments for the undulator-based positron source are discussed, including current prototypes for optic-matching devices such as pulsed solenoids and plasma lenses. The applicability of the positron source to the ILC as well as to the HALHF concept is discussed.
The HERD (High Energy cosmic-Radiation Detection facility) experiment is a future experiment for the direct detection of high-energy cosmic rays that will be installed on the Chinese space station in 2027. It consists of an innovative calorimeter made of about 7500 LYSO scintillating crystals assembled in a spheroidal shape, surrounded on five faces by multiple sub-detectors in order to detect particles entering from five sides.
It will extend direct measurements of cosmic rays by more than one order of magnitude in energy, measuring proton and nuclei fluxes up to the PeV/nucleon energy region and performing the first direct measurement of the cosmic proton and helium knees. HERD will also measure the high-energy electron+positron flux and the high-energy photon flux to search for possible indirect signals of dark matter, and will perform multi-messenger astronomy.
In this talk the HERD experiment, its scientific goals and its detector design will be introduced.
The High Energy cosmic-Radiation Detection facility (HERD) will be the largest calorimetric experiment dedicated to the direct detection of cosmic rays. HERD aims at probing potential dark matter signatures by detecting electrons from 10 GeV and photons from 500 MeV, up to 100 TeV. It will also measure the flux of cosmic protons and heavier nuclei up to a few PeV, shedding light on the origin and propagation mechanisms of high-energy cosmic rays. HERD will be equipped with a scintillating-fiber tracker (FIT) read out by silicon photomultipliers that will enable the reconstruction of charged-particle trajectories, the measurement of their absolute electric charge, and the enhancement of photon conversion into electron-positron pairs. A miniature version of a FIT sector, called MiniFIT, was designed, built, and tested with particle beams at CERN. This presentation will delve into the design and physics performance of MiniFIT, particularly focusing on its spatial and charge resolution.
The Dark Matter Particle Explorer (DAMPE) is an ongoing space-borne experiment for the direct detection of cosmic rays (CRs). Thanks to its large geometric acceptance and thick calorimeter, DAMPE is able to detect CR ions up to unprecedented energies of hundreds of TeV. Following more than 8 years of successful operation, DAMPE has amassed a large dataset of high-energy hadronic interactions in a regime that is often difficult to probe with accelerator experiments. In this contribution, we show how DAMPE data can be used to measure inelastic ion-nucleon cross sections, and present cross-section measurements of both proton and helium on the BGO calorimeter. The phenomenological A^(2/3) nuclear-radius scaling is then used to compare our measurements with existing accelerator data and other experimental results.
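The scaling invoked above can be written explicitly: with a nuclear radius $R_A = r_0 A^{1/3}$, the geometric inelastic cross section on a nucleus of mass number $A$ grows with its cross-sectional area,

```latex
\sigma_{\mathrm{inel}}^{hA} \;\propto\; \pi R_A^2 \;=\; \pi r_0^2\, A^{2/3},
\qquad\text{so}\qquad
\sigma_{\mathrm{inel}}^{hA} \;\approx\; \sigma_0\, A^{2/3},
```

which is what allows cross sections measured on one target material to be compared with accelerator data taken on nuclei of different mass number.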
The Calorimetric Electron Telescope (CALET) is a cosmic-ray observatory operating since October 2015 on the International Space Station. The primary scientific goals of the CALET mission include the investigation of the mechanism of cosmic-ray acceleration and propagation in the Galaxy and the detection of potential nearby sources of high-energy electrons and potential dark matter signatures. The CALET instrument can measure the inclusive spectrum of cosmic electrons and positrons up to about 20 TeV. In addition, it can measure the energy spectra and elemental composition of cosmic-ray nuclei from H to Fe and the abundance of trans-iron elements up to about 1 PeV. Finally, it can monitor the gamma-ray sky up to about 10 TeV, search for signals from gravitational-wave event candidates, and observe gamma-ray burst events. In this contribution the on-orbit performance of the instrument and the main results obtained during the first 8 years of operation will be reported and discussed.
In half a century of predictions on the potential of X-ray polarimetry, we have encountered ideas—sparse yet not infrequent—on how it could provide insights into several fundamental physics problems. These include birefringence or strong-gravity effects as evidence of photon propagation in extreme magnetic or gravitational fields, anomalies in propagation over large distances due to Lorentz invariance violations, or signs of the existence of axion-like particles. Some measurements were proposed for individual objects, while others pointed to modifications of a distribution. Nowadays, we can benefit from two years in orbit of IXPE, the first space observatory entirely dedicated to polarimetry of celestial X-ray sources in the 2-8 keV energy band, resolved in time, energy, and angle. IXPE has observed about 60 sources across almost all classes. In this talk, we will review some of the proposed measurements of fundamental physics and how they align with the world unveiled by IXPE.
The LHCb detector at the LHC offers unique coverage of forward rapidities. The detector also has a flexible trigger that enables low-mass states to be recorded with high efficiency, and a precision vertex detector that enables excellent separation of primary interactions from secondary decays. This allows LHCb to make significant (and world-leading) contributions in these regions of phase space in the search for long-lived particles predicted by dark sectors that accommodate dark-matter candidates. A selection of results from searches for heavy neutral leptons, dark photons, axions, hidden-sector particles, and dark-matter candidates produced in heavy-flavour decays, among others, will be presented, alongside the potential for future measurements in some of these final states.
Several astrophysical observations indicate that the majority of the mass of the Universe is made of a new type of matter, called Dark Matter (DM), not interacting with light. DM may be composed of a dark sector (DS) of new particles, charged under a new U(1) gauge boson kinetically mixed with the ordinary photon, called the dark photon (A'). The NA64 experiment at CERN aims to produce and detect DS particles using the 100 GeV SPS electron beam impinging on a thick active target (an electromagnetic calorimeter). In the framework of the ERC-funded project POKER, since 2022 NA64 has also been collecting data with positron beams, in order to exploit the enhancement of the DS production yield due to positron resonant annihilation. This talk will present the latest NA64 results and the future outlook, with a special focus on the progress and perspectives of the positron-beam measurement, reporting the sensitivity of the experiment to several beyond-SM scenarios.
We present the study of the massless dark photon ($\bar\gamma$) in the $K_{L}^{0}\rightarrow\gamma\bar\gamma$ decay at the J-PARC KOTO experiment. Distinguished from the massive dark photon, the massless one does not directly mix with the ordinary photon but could interact with Standard Model (SM) particles through direct coupling to quarks. Some theoretical models propose that the branching ratio ($\mathcal{BR}$) of the $K_{L}^{0}\rightarrow\gamma\bar\gamma$ decay could reach up to $\mathcal{O}(10^{-3})$, well within KOTO's sensitivity for this study. Although a challenge is posed by the lack of kinematic constraints, the KOTO hermetic veto system provides a unique opportunity to probe this decay. In this presentation, we will present the open-box result of the $K_{L}^{0}\rightarrow\gamma\bar\gamma$ search based on the data collected in 2020.
Flavour violation in axion models can be generated by choosing flavour non-universal Peccei-Quinn (PQ) charges. Such an axion is easily implemented in a UV completion with a DFSZ model containing two Higgs doublets (PQ-2HDM) and the PQ scalar. This charge arrangement also produces flavour violation at tree level in the PQ-2HDM, which, as we will show, is directly correlated with the flavour violation of the axion. This general relation allows us to link flavour-violating observables across the scales of the axion and the 2HDM, in such a way that information on one sector is directly related to the other. We will show in two examples how this can be done using flavour-violating observables in the quark and lepton sectors, finding an interesting interplay between astrophysical and LHC searches.
We discuss dark matter phenomenology, neutrino magnetic moments and neutrino masses in a Type-III radiative scenario. The Standard Model is enriched with three vector-like fermion triplets and two inert scalar doublets to provide a suitable platform for the above phenomenological aspects. The inert scalars contribute to the total relic density of dark matter in the Universe. Neutrino aspects are realised at one loop, with the magnetic moment obtained through charged scalars, while the neutrino mass gets contributions from charged and neutral scalars. Taking the inert scalars up to $2$ TeV and the triplet fermions in the few-hundred-TeV range, we obtain a common parameter space compatible with the experimental limits associated with both the neutrino and dark matter sectors. Finally, we demonstrate that the model is able to provide neutrino magnetic moments in a wide range from $10^{-12}\mu_B$ to $10^{-10}\mu_B$, meeting the bounds of various experiments such as Super-K, TEXONO, Borexino and XENONnT.
The vector $U$-bosons, or so-called 'dark photons', are among the possible candidates for dark matter mediators. We present a procedure to set theoretical constraints on the upper limit of $\epsilon^2(M_U)$ from heavy-ion as well as $p+p$ and $p+A$ dilepton data from SIS to LHC energies. We used the microscopic Parton-Hadron-String Dynamics (PHSD) transport approach, which reproduces well the measured dilepton spectra in $p+p$, $p+A$ and $A+A$ collisions. In addition to the different dilepton channels originating from interactions and decays of ordinary (Standard Model) matter particles (mesons and baryons), we incorporate in the PHSD the decay of hypothetical $U$-bosons to dileptons, $U\to e^+e^-$, where the $U$-bosons themselves are produced by the Dalitz decays of pions, $\eta$-mesons and Delta resonances as well as by vector meson decays. This analysis can help to estimate the accuracy required for future experimental searches of 'light' dark photons by dilepton experiments.
XENONnT is the current detector of the XENON dark matter (DM) project, in data acquisition at the INFN Laboratori Nazionali del Gran Sasso (Italy). The detector employs a LXe dual-phase TPC with an active target mass of 5.9 t. The TPC is surrounded by two water Cherenkov detectors, which serve as active muon and neutron veto systems.
XENONnT completed its first science run (SR0) with a collected exposure of 1.1 tonne-years. The lowest background level ever achieved with this kind of detector allowed for the most sensitive searches for solar axions, bosonic DM and WIMPs.
With the subsequent, longer science run (SR1), XENONnT improves upon those results and opens the possibility of directly observing, for the first time, CE$\nu$NS interactions of solar ($^8$B) neutrinos.
Recently, the neutron-veto performance has been boosted by doping the water with gadolinium, gaining more sensitivity to ultra-rare processes involving DM and neutrino physics.
LUX-ZEPLIN (LZ) is an experiment built for the direct detection of dark matter, with world-leading sensitivity over a diverse science program. LZ has been operating at the Sanford Underground Research Facility (SURF) in South Dakota since 2021. The experiment employs three nested detectors: a central dual-phase TPC with 7 tonnes of xenon in its active region, an instrumented liquid xenon skin, and an outer detector featuring tanks of gadolinium-loaded liquid scintillator. This talk will provide an overview of the LZ experiment and report on the most recent status of its operation and searches.
LUX-ZEPLIN (LZ) is a dark matter experiment located at the Sanford Underground Research Facility in South Dakota, USA employing a 7 tonne active volume of liquid xenon in a dual-phase time projection chamber (TPC). It is surrounded by two veto detectors to reject and characterize backgrounds. A comprehensive material assay and selection campaign for detector components, along with a xenon purification campaign, have ensured an ultra-low background environment. In its first science run (SR1) LZ attained a background rate of (6.3 ± 0.5) x 10$^{−5}$ events/kg/day/keVee in the < 15 keVee region, enabling it to achieve world-leading limits for the spin-independent elastic scattering of nuclear recoils of WIMPs with masses above 10 GeV/c$^2$. This talk will provide an overview of how LZ has reached even lower background rates and improved its background modeling in its current science run. The impacts of these improvements on LZ’s WIMP sensitivity and science results will also be discussed.
Dark Matter (DM) still eludes detection by modern experiments and its nature puzzles the minds of physicists. Weakly Interacting Massive Particles (WIMPs) are commonly seen as one of the prime candidates for the role of DM. The DARk matter WImp search with liquid xenoN (DARWIN) detector is envisioned to be the ultimate multi-tonne xenon-based direct detection astroparticle observatory. Hosting a time projection chamber with 40 tonnes of liquid xenon at its core, with a keV-range threshold and an ultra-low radioactive background it will aim to probe the entire parameter space for WIMP DM down to the so-called neutrino fog. Moreover, DARWIN's scientific research program also includes searches for solar axions, axion-like particles, as well as measurements of the solar neutrino flux and a probe of the Majorana nature of neutrinos. This talk outlines the key technological and physics challenges associated with DARWIN, and the recent progress of the collaboration to address them.
CYGNO is developing a high-precision gaseous Time Projection Chamber, to be installed at the Gran Sasso National Laboratories (LNGS), for directional studies of rare low-energy events such as dark matter interactions. The detector consists of a TPC filled with a He:CF4 gas mixture operating at atmospheric pressure, with a triple-GEM amplification stage. The scintillating properties of the gas allow an optical readout comprising photomultiplier tubes and extremely low-noise granular sCMOS camera sensors. This technology provides a rich set of information on the recoil tracks, such as released energy, 3D topology and position, down to energy deposits of a few keV, granting the advantages of a directional detector.
We will present the latest results of the underground operation at LNGS of a 50 l, 50 cm drift prototype, focusing on the Monte Carlo-data comparison. In addition, we will show the design and features of the CYGNO demonstrator, a 0.4 m3 detector whose installation at LNGS is foreseen for 2025.
The PandaX-4T detector is a dual-phase xenon time projection chamber. In 2021, its commissioning run set the most stringent limit on spin-independent dark matter-nucleon interactions in the mass range from GeV to TeV. However, for sub-GeV light dark matter, the nuclear-recoil energy falls below the detection threshold of approximately 5 keV of the traditional search window, which requires the selection of paired scintillation and ionization signals. To search for light dark matter, we selected ionization-only events and lowered the detection threshold to approximately 0.8 keV. In such a low-energy window, two types of dominant backgrounds were identified in the PandaX-4T commissioning run, namely micro-discharges and cathode activity. In this talk, the latest search result using ionization-only events in the data of the first science run of PandaX-4T will be presented, together with further studies on the origin and discrimination of the main backgrounds in this energy window.
We have studied a saturated LiCl water solution for neutrino detection at the Jinping Neutrino Experiment. The solution takes advantage of the high electron-neutrino charged-current interaction cross section on Li-7, the high natural abundance of Li-7, and the high solubility of LiCl. We have achieved a 50-m attenuation length at 430 nm. The solution is well suited to studying energy-dependent solar neutrino physics, including the solar neutrino upturn effect and light sterile neutrinos. The sensitivity of a hundred-ton-scale Jinping detector is comparable with that of other multi-thousand-ton detectors. The contained Cl-35 and Li-6 also make delayed-coincidence detection of electron antineutrinos possible. The Jinping Neutrino Experiment can measure the crust geoneutrinos of Tibet. In addition to being a pure Cherenkov detector medium, the LiCl aqueous solution can be doped with a wavelength shifter, carbostyril 124, enabling the development of a water-based Cherenkov-enhanced lithium-rich detector.
Large-scale noble-element time projection chambers (TPCs) play a central role in many HEP experiments. Future experimental programs using noble-element TPCs aim to construct very large detectors, up to the multi-kiloton scale. Pixel-based 3D readout offers the opportunity to realize such robust large-scale noble-element TPCs by recording the information from ionization events in a natively 3D way, but poses a new set of challenges for the detection of the scintillation light. In particular, the search for photoconductive materials capable of converting VUV light to charge could open the door to a potentially game-changing solution: an integrated charge-and-light (Q+L) sensor for large-area pixel-based noble-element detectors. In this presentation we will explore a novel photodetector design based on single-layer graphene and amorphous selenium (aSe) as a potential integrated Q+L sensor, and show preliminary results from the first manufactured devices.
Microchannel-plate photomultiplier tubes working in photon-counting mode to detect extremely low numbers of photons are being adopted by future large liquid-based neutrino detectors. By coating the end face of the microchannel plates with high-secondary-electron-yield materials via atomic layer deposition, the collection efficiency of photo-electrons is pushed to 100%. That, however, produces a single-electron charge spectrum that departs from a Gaussian distribution. Based on laboratory measurements, we present the mechanism of electron amplification at the end face and formulate a probabilistic model of the single-electron charge spectrum. Our simplified model, a Gamma-Tweedie mixture, is straightforwardly deployable in the future neutrino experiments now under commissioning.
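As a hedged illustration of why such a spectrum departs from a Gaussian, the sketch below samples a compound Poisson-Gamma ("Tweedie-like") model of the single-electron charge: the number of secondaries released at the end face fluctuates, and each secondary's amplified charge is Gamma distributed. All parameter values are invented for the example, not the measured ones.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def single_electron_charge(n_events, mean_secondaries=2.0,
                           gain_shape=4.0, gain_scale=1.0):
    """Compound Poisson-Gamma ('Tweedie-like') charge-spectrum model.

    Each photo-electron releases a Poisson number of secondaries at the
    MCP end face; each secondary's amplified charge is Gamma distributed.
    """
    n_sec = rng.poisson(mean_secondaries, size=n_events)
    # A sum of k iid Gamma(shape, scale) variables is Gamma(k*shape, scale),
    # so the total charge per event can be drawn in one call (the tiny
    # offset keeps the shape parameter strictly positive when n_sec = 0).
    return rng.gamma(n_sec * gain_shape + 1e-12, gain_scale)

q = single_electron_charge(100_000)
# The spectrum is right-skewed (non-Gaussian): its mean exceeds its median,
# and events with zero secondaries pile up near zero charge (the "pedestal").
print(q.mean() > np.median(q))  # True
```

Fitting such a mixture to measured spectra, rather than a single Gaussian, is what allows a realistic photo-electron counting response in the detector simulation.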
The Water Cherenkov Test Experiment (WCTE) will be installed in CERN's recently upgraded T9 "Test Beam" Area in Summer 2024. It has three goals: to prototype photosensor and calibration systems for Hyper-Kamiokande, to develop new calibration and reconstruction methods for water Cherenkov detectors, and to measure lepton and hadron scattering on oxygen.
The collaboration performed a 3-week-long beam test in July 2023. It used newly developed aerogel Cherenkov threshold counters (ACTs) to achieve efficient separation of pions from muons in the sub-GeV range, which had not been done before. Additionally, a new compact tagged-photon beamline was developed, composed of a neodymium (N52) Halbach-array permanent magnet and a hodoscope array placed downstream of the magnet. The combination of the ACTs and the tagged-photon beamline provides sub-GeV p, e, pi, mu and gamma test beams. Using this setup, the collaboration was able to estimate the beam flux of CERN's T9 beam.
The Q-Pix concept is a continuously integrating, low-power charge-sensitive amplifier (CSA) viewed by a Schmitt trigger. When the trigger threshold is met, the comparator initiates a 'reset' transition and returns the CSA circuitry to a stable baseline. The reset time is captured in a 32-bit clock-value register and buffered, and the cycle begins again. The time difference between one clock capture and the next sequential capture, called the Reset Time Difference (RTD), measures the time taken to integrate a fixed quantum of charge (Q). Waveforms are reconstructed without differentiation, and an event is characterized by its sequence of RTDs. Q-Pix offers the ability to extract full track information, providing very detailed track profiles, and also utilizes a dynamically established DAQ network for exceptional resilience against single-point failures. This talk will present the first results of the Q-Pix 180 nm ASICs, introduce novel light-based prototypes, and discuss future tests.
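The reconstruction idea can be sketched as follows (a hedged toy, not the ASIC firmware; all numerical values are invented): since each reset corresponds to the same integrated charge quantum, the average input current over an RTD interval is simply that quantum divided by the RTD.

```python
# Toy sketch of Q-Pix waveform reconstruction from Reset Time Differences
# (RTDs). All numbers are illustrative, not actual ASIC parameters.

DELTA_Q = 1.0  # charge quantum per reset, arbitrary units

def resets_from_current(current, dt, threshold=DELTA_Q):
    """Simulate reset clock captures for a sampled input current."""
    q, t, resets = 0.0, 0.0, []
    for i in current:
        q += i * dt      # CSA continuously integrates the input current
        t += dt
        while q >= threshold:
            q -= threshold       # Schmitt trigger fires: reset to baseline
            resets.append(t)     # clock value captured at the reset
    return resets

def currents_from_rtds(resets, delta_q=DELTA_Q):
    """Reconstruct the average current in each RTD interval: I = dQ / RTD."""
    rtds = [t1 - t0 for t0, t1 in zip(resets, resets[1:])]
    return [delta_q / r for r in rtds]

# A constant input current of 2.0 (a.u.) should reconstruct to 2.0
# in every RTD interval, with no differentiation of any waveform.
resets = resets_from_current([2.0] * 100, dt=0.5)
print(all(abs(i - 2.0) < 1e-9 for i in currents_from_rtds(resets)))  # True
```

Note that a large current gives short RTDs (dense resets) and a small current gives long RTDs, so the RTD sequence itself is the sampled waveform.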
A novel approach to science communication is presented, using cake to explain particle physics ideas to engage new audiences. This talk will present a public engagement strategy where baking has been used to engage the general public, both at in-person events and with online platforms such as social media and virtual science fairs. This innovative approach using the juxtaposition of cake and physics makes for a fun and memorable experience, and has been demonstrated to engage new and low science capital audiences and spark their interest in particle physics.
The BeInspired project for high school students aims to dispel the myth that individuals are inherently inclined towards either the sciences (such as mathematics and physics) or the humanities and arts. Instead, the project seeks to foster a dialogue between the artistic and technical aspects of each individual.
The project began with an initial one-day workshop, where students were introduced to particle physics and art through lectures and hands-on artistic activities with real artists. Following this workshop, students collaborated with their teachers at their schools to create artistic artifacts.
Throughout the project, we met with students several times via Zoom. We organized Master Classes on particle physics, virtual visits to CERN, and discussions on the preparation of artifacts. The culmination of the project will be an exhibition at ICHEP 2024.
In this presentation, we will discuss the main concept, the details of the realization, and the results of the entire process.
Creativity and vision are essential across disciplines, shaping both artistic and scientific endeavors. "Art & Science across Italy", a project led by the Italian National Institute for Nuclear Physics (INFN) in collaboration with CERN, cultivates a broad perspective in high-school students to disseminate scientific knowledge. Embracing the STEAM field, it integrates STEM and arts without privileging one over the other. Throughout the project, high school students attend scientific seminars and subsequently, drawing inspiration from science, create their own artworks. These artworks are then showcased in local and national art exhibitions. In the fourth edition (2022-2024), 6500 students produced 1000 artworks showcased in 20 exhibitions. This talk outlines the project, methodology, and key results.
The Cosmic Piano is designed to detect muons generated by the arrival of cosmic rays at Earth. When a muon crosses a module, the plastic scintillator produces light that is collected by a wavelength-shifting fibre and detected by two avalanche photodiodes (APDs) placed at the ends of the fibre, which convert it into electrical pulses. Every time a cosmic ray is detected, a sound and a flash of light are produced, and the number of events detected by each channel is shown on a display. The system is composed of five modules and a control and data-processing unit where the operation of the detector is configured; different activities can be carried out with the detector through its configuration options. In this way we can show what particle detection is like, with sounds and lights that attract the attention of many people, since each module plays a particular musical note.
For the first time ever, the CERN community has collaborated with established (non-science) writers to produce an anthology of fictional short stories. The stories, based on submissions of ideas from the worldwide CERN community, were put together in a book entitled Collisions: Stories from the Science of CERN, co-edited by Rob Appleby of Manchester University and Connie Potter of CERN. The book has sold thousands of copies and has been a huge success. We talk about the idea, the process and the marketing of such a unique public outreach project, which has left the public wanting more.
I present a new method of teaching that blends a science fiction narrative into an intermediate
level astronomy course. “The Salvation of the Yggdrasil” is a sci-fi scenario where students must
solve a series of challenges to guide the people of an intergenerational spaceship through a
catastrophe and set them safely back on their journey to a new home amongst the stars. Each
challenge requires the students to conceptually understand and apply the astronomy and
physics concepts presented in the class, while also building proficiency in the skills of problem
solving, critical thinking, collaboration, and interdisciplinary work. Based on the principles of
Self-determination Theory, the curriculum is designed to give students opportunities to explore
their own interests in an environment that strongly instills a sense of intrinsic motivation. This
presentation focuses on the methods the class uses and initial observations and results from
the first run of the class.
This talk presents a comprehensive overview of recent ATLAS measurements of collective flow phenomena in a variety of collision systems. Measurements of the mean, variance, and skewness of the distribution of event-by-event per-particle average transverse momentum, [pT], are reported for Pb+Pb collisions at 5.02 TeV and Xe+Xe collisions at 5.44 TeV. These measurements give insight into the nature of the spatial energy fluctuations in the QGP produced in heavy-ion collisions. Measurements of the azimuthal anisotropy of high-pT particles in Pb+Pb collisions, using two- and four-particle cumulants, are presented. The high-pT vn measurements provide information on the path-length dependence of parton energy loss in the QGP. Two sets of measurements that investigate whether the presence of jets affects the flow-like behavior observed in pp collisions are also presented.
We investigate the possibility of a partonic phase in small systems with the elliptic flow of mesons ($\pi^{\pm}$, K$^{\pm}$, K$^{0}$) and baryons (p+$\bar{\rm p}$, $\Lambda$+$\bar{\Lambda}$) in high-multiplicity p--Pb collisions at $\sqrt{s_{{\rm NN}}}$ = 5.02 TeV and pp collisions at $\sqrt{s}$ = 13 TeV measured by ALICE. The results show a grouping (with 1$\sigma$ significance) and splitting (with 5$\sigma$ significance) behavior of $v_2$ at intermediate $p_{\rm T}$. This phenomenon, reminiscent of partonic flow in heavy-ion collisions, has been observed with such high precision for the first time in small collision systems. Comparison with a hydrodynamic model with hadronization via quark coalescence indicates the formation of a deconfined partonic medium in small systems. We further extend these measurements down to low multiplicity in pp collisions, employing a large pseudorapidity separation (5.0 < |$\Delta\eta$| < 6.0) to explore the limits of the formation of a collective medium and of the presence of partonic degrees of freedom.
Studies have yielded strong evidence that a deconfined state of quarks and gluons, the quark-gluon plasma, is created in heavy-ion collisions. This hot and dense matter exhibits almost zero friction and a strong collective behavior. An unexpected collective behavior has also been observed in small collision systems. In this talk, the origin of collectivity in small collision systems, which is still not understood, is addressed by confronting different tunes of PYTHIA8 and EPOS4 event generators using measurements of azimuthal correlations for inclusive and identified particles. In particular, anisotropic flow coefficients measured using two- and four-particle correlations with various pseudorapidity gaps and balance functions are reported in different multiplicity classes of pp collisions at $\sqrt{s}=13.6$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV. Comparisons with available experimental data are also presented.
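For orientation, the two- and four-particle correlations referred to in these flow measurements are conventionally turned into cumulant-based flow estimates; a standard sketch of the definitions (not specific to any one of the above analyses) is:

```latex
% Two- and four-particle azimuthal correlators, averaged over particles and events:
\langle\langle 2 \rangle\rangle = \big\langle\big\langle e^{\,i2(\varphi_1-\varphi_2)} \big\rangle\big\rangle,
\qquad
\langle\langle 4 \rangle\rangle = \big\langle\big\langle e^{\,i2(\varphi_1+\varphi_2-\varphi_3-\varphi_4)} \big\rangle\big\rangle
% Cumulants and the corresponding elliptic-flow estimates:
c_2\{2\} = \langle\langle 2 \rangle\rangle, \qquad
c_2\{4\} = \langle\langle 4 \rangle\rangle - 2\,\langle\langle 2 \rangle\rangle^2,
v_2\{2\} = \sqrt{c_2\{2\}}, \qquad
v_2\{4\} = \left(-\,c_2\{4\}\right)^{1/4}
```

The four-particle cumulant suppresses few-particle ("non-flow") correlations, which is why pseudorapidity gaps and multi-particle cumulants are used together when probing collectivity in small systems.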
Precision measurements of the transverse-momentum-differential elliptic flow, $v_{2}(p_{\rm T})$, of identified particles have been performed in proton--lead (p--Pb) collisions. The characteristic mass ordering of $v_{2}(p_{\rm T})$ at low $p_{\rm T}$ and the grouping/splitting of $v_{2}(p_{\rm T})$ for mesons and baryons at intermediate $p_{\rm T}$, which have been regarded as smoking-gun signals of the QGP, are observed in p--Pb collisions. However, the exact physics mechanism is not entirely clear. A multi-phase transport (AMPT) model incorporating a partonic phase followed by quark-coalescence hadronization can reproduce the flow measurements. The mass ordering in p--Pb collisions can be reproduced by the standard AMPT, while the grouping/splitting remains challenging. This talk presents a significant improvement of the coalescence in AMPT, implementing a precise quark phase-space distribution, which reproduces the measured grouping/splitting of $v_{2}(p_{\rm T})$ in p--Pb collisions for the first time.
Balance functions have been used extensively to elucidate the time evolution of quark production in heavy-ion collisions. Early models predicted two stages of quark production, one for light quarks and one for the heavier strange quark, separated by a period of isentropic expansion. This led to the notion of clocking particle production and tracking radial flow effects, which drive the expansion of the system. In this talk, balance functions of identified particles in different multiplicity classes of pp Run 3 collisions at $\sqrt{s} = 13.6\;\text{TeV}$ recorded by ALICE are reported. The results are compared with different models as well as with previously published results on pp and Pb-Pb collisions at different energies. The results enable tracking the balancing of electric charge and strangeness by measuring how the widths and integrals of the charge and strangeness balance functions evolve across the collision energies.
Measurements of light-flavour particle production in small collision systems at LHC energies have shown the onset of features that resemble what is typically observed in nucleus-nucleus collisions. New results on (multi-)strange hadron production in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 5.36 TeV will be presented. These results are discussed in the context of recent measurements of light-flavour hadron production in pp collisions at $\sqrt{s}$ = 0.9 and 13.6 TeV collected by the ALICE experiment. In order to understand the strangeness production mechanism, angular correlations between multi-strange and associated identified hadrons are measured and compared with predictions from the string-breaking model PYTHIA8, the cluster-hadronisation model HERWIG7, and the core–corona model EPOS-LHC. In addition, the connection of strange hadron production to hard scattering processes and to the underlying event is studied using di-hadron correlations triggered on the highest-$p_{\rm T}$ hadron in the event.
We will discuss the latest differential measurements of Higgs boson cross sections with the CMS detector. Both fiducial differential cross section measurements and measurements in the simplified template cross section framework will be presented. The data collected during Run 2 of the LHC by the CMS experiment are used. We also present interpretations of these measurements as constraints on anomalous interactions.
The Standard Model predicts several rare Higgs boson processes, among which are the production in association with c-quarks, the decays to a Z boson and a photon, to a low-mass lepton pair and a photon, and to a meson and a photon. The observation of some of these processes would open the possibility of studying the coupling properties of the Higgs boson in a way complementary to other analyses. In addition, lepton-flavor-violating decays of the observed Higgs boson are searched for, where an observation would be a clear sign of physics beyond the Standard Model. Several results for decays based on pp collision data collected at 13 TeV will be presented.
The couplings of the Higgs boson to fermions have been studied with third- and second-generation quarks and leptons, while no direct measurements of its interactions with the lighter u, d, s quarks have been performed to date. The search for the ultra-rare decays H → γφ, γρ, and γK*0 can probe these couplings. While the contribution to the rate of these decays from diagrams involving Yukawa couplings is negligible in the Standard Model (SM), in theories beyond the SM this contribution could be significantly enhanced, and deviations from the SM branching ratios could be observed because of the interference with the dominant diagrams, in which the meson is formed via Higgs boson decays to Z bosons or photons. Results with data collected by the CMS experiment at a centre-of-mass energy of 13 TeV will be shown.
While the Standard Model predicts that the Higgs boson is a CP-even scalar, CP-odd contributions to the Higgs boson interactions with vector bosons or fermions are presently not strongly constrained. A variety of Higgs boson production processes and decays can be used to study the CP nature of the Higgs boson interactions. This talk presents the most recent CP measurements of such analyses by the ATLAS experiment, based on pp collision data collected at 13 TeV.
To fully characterize the Higgs boson, it is important to establish whether it presents coupling properties that are not expected in the Standard Model of particle physics. These can probe BSM effects, such as CP conserving or CP violating couplings to particles with masses not directly accessible at the LHC through virtual quantum loops. In this talk we will present the most recent searches from the CMS experiment for anomalous Higgs boson interactions.
The large dataset of about 3 ab$^{-1}$ that will be collected at the High-Luminosity LHC (HL-LHC) will be used to measure Higgs boson processes in detail. Studies based on current analyses have been carried out to understand the expected precision and limitations of these measurements. The large dataset will also allow for better sensitivity to di-Higgs processes and the Higgs boson self-coupling. This talk will present the prospects for Higgs and di-Higgs results with the ATLAS detector at the HL-LHC.
The precise measurement of solar neutrino flux is essential for the Standard Solar Model (SSM) and neutrino physics. The proton-proton (pp) fusion chain dominates the neutrino production in the Sun, and pp neutrinos contribute roughly 91% of the solar neutrino flux. PandaX-4T, an experiment located in China Jinping underground Laboratory, aims to detect dark matter and astrophysical neutrinos using a large-scale dual-phase xenon TPC. In this talk, using the 0.63 tonne×year exposure of PandaX-4T, the first measurement of solar pp neutrinos below 165 keV electron recoil energy with a natural xenon detector will be presented.
T2K is a long-baseline experiment for the measurement of neutrino and antineutrino oscillations. (Anti)neutrinos are produced at the J-PARC accelerator and measured at the ND280 near detector and then at the Super-Kamiokande far detector in Kamioka.
The most recent neutrino oscillation results will be presented, featuring world-leading sensitivity in the search for charge-parity (CP) violation, obtained by comparing oscillation measurements of neutrinos and antineutrinos. Measurements of the atmospheric parameters $\sin^2\theta_{23}$ and $\Delta m^2_{23}$ are extracted from the rates of muon-neutrino disappearance and electron-neutrino appearance. The results include data collected with the first gadolinium loading of the far detector, which required a revision of the selection strategy and of the systematic uncertainties modelling the detector response.
The T2K results will be assessed in terms of their statistical interpretation and alternative parameterisations, and unitarity triangles will be presented.
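The atmospheric-parameter extraction described above is driven, at leading order, by the standard two-flavour survival probability; as a reminder (a textbook formula, not specific to the T2K fit):

```latex
P(\nu_\mu \to \nu_\mu) \;\simeq\; 1 - \sin^2 2\theta_{23}\,
\sin^2\!\left( \frac{\Delta m^2_{23}\, L}{4E_\nu} \right)
% L: baseline (295 km for T2K), E_\nu: neutrino energy.
% In practical units the oscillation phase reads
% 1.267\,\Delta m^2[\mathrm{eV}^2]\; L[\mathrm{km}] \,/\, E_\nu[\mathrm{GeV}].
```

The disappearance rate thus fixes $\sin^2 2\theta_{23}$ through the oscillation depth and $\Delta m^2_{23}$ through the position of the oscillation minimum in $E_\nu$.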
The nature of the neutrino mass ordering and whether neutrino oscillations violate CP symmetry remain among several open questions surrounding PMNS mixing. At present no single experiment has the ability to resolve these issues. Atmospheric neutrino data at Super-Kamiokande (Super-K) and accelerator neutrino data from T2K, however, offer complementary sensitivity to these puzzles. As both neutrino sources are observed at the same detector, Super-K, there is a clear benefit to analyzing the data sets together. This presentation will report results from the first such combined analysis, which utilizes unified uncertainty models of both neutrino interactions and the detector response. Combined constraints on open questions in the PMNS paradigm, including studies of the mass ordering and CP violation, using 3244.4 days of Super-K atmospheric neutrino data combined with beam neutrino data corresponding to $36\times10^{20}$ protons on target from T2K's first 10 run periods will be presented.
NOvA is a long-baseline neutrino oscillation experiment with a one megawatt beam and near detector at Fermilab and a far detector 810 km away in northern Minnesota. It features two functionally identical scintillator tracking calorimeter detectors. The near detector samples the beam before significant oscillations to allow the measurement of muon-neutrino disappearance and electron-neutrino appearance, and their antineutrino counterparts, at the far detector. These measurements are used to measure neutrino mass differences and the parameters of the PMNS mixing matrix. In this talk, results of a new analysis featuring double the neutrino-mode beam exposure are presented.
T2K and NOvA are two currently active long-baseline neutrino oscillation experiments studying $\nu_\mu$/$\bar{\nu}_\mu$ disappearance and $\nu_e$/$\bar{\nu}_e$ appearance in $\nu_\mu$/$\bar{\nu}_\mu$ accelerator neutrino beams.
This talk presents a joint T2K+NOvA neutrino oscillation analysis within the standard three active neutrino flavor paradigm, which includes each experiment’s fully detailed detector simulations and takes advantage of the experiments’ complementary oscillation baselines of 295 km and 810 km and neutrino energies around 0.6 GeV and 2 GeV for T2K and NOvA, respectively.
The combination of the experiments' differing sensitivities to neutrino oscillations means that the T2K+NOvA data can constrain the oscillation parameters better than either experiment alone. The results of this first joint T2K+NOvA neutrino oscillation measurement, obtained within a unified Bayesian inference framework, will be presented.
One of the open questions in neutrino physics is that of the mass ordering. In the three-flavour paradigm, it is unknown whether the masses of the three massive neutrinos are arranged in the normal ($m_1<m_2<m_3$) or inverted ($m_3<m_1<m_2$) ordering. Atmospheric neutrinos, which are electron and muon neutrinos produced in the atmosphere by cosmic rays, provide a window into the neutrino mass ordering. If the mass ordering is normal (inverted), we expect a resonant enhancement for electron neutrinos (antineutrinos). At Super-Kamiokande (SK), the signal from this resonance is obscured by tau neutrinos arising from the oscillation of the atmospheric neutrino flux. Consequently, the sensitivity of SK to the mass ordering depends on its ability to effectively remove the background of oscillated tau neutrinos. We present the latest measurement of tau-neutrino appearance at SK and potential enhancements of the experiment's sensitivity to the neutrino mass ordering by constraining tau neutrinos with a neural network.
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino detector under construction in China. It is located 700 m underground, 53 km away from 8 nuclear reactors. It will use 20 kt of liquid scintillator surrounded by 17,512 20" photomultipliers and 25,600 3" photomultipliers to detect neutrino interactions with a 3% energy resolution at 1 MeV. JUNO's main physics goals are the determination of the neutrino mass ordering and the high-precision measurement of $\Delta m^2_{21}$, $\sin^2\theta_{12}$, and $\Delta m^2_{31}$.
I will present how JUNO can measure reactor antineutrino oscillations to reach a $3\sigma$ sensitivity to the neutrino mass ordering with 6 years of data. JUNO can also measure atmospheric neutrino oscillations to enhance this sensitivity. After 6 years, JUNO will improve the current precision on $\Delta m^2_{21}$, $\sin^2\theta_{12}$, and $\Delta m^2_{31}$ by an order of magnitude, achieving sub-percent precision.
The ALICE detector underwent significant upgrades during the LHC Long Shutdown 2 from 2019 to 2021. A key upgrade was the installation of the new Inner Tracking System (ITS2), comprising 7 layers with 12.5 billion pixels over 10 m², significantly enhancing the tracking capabilities. It is based on ALPIDE monolithic active pixel sensor chips, capable of recording Pb-Pb collisions at an interaction rate of 50 kHz. It offers a significant improvement in impact-parameter resolution and tracking efficiency at low transverse momentum, owing to its increased granularity, the low material budget of only 0.36% $X_0$ per layer for the innermost 3 layers, and the closer positioning of the first layer to the interaction point.
ITS2 was successfully commissioned in ALICE, becoming operational with the start of LHC Run 3. This presentation will give an overview of ITS2's operational experience and first performance results, covering aspects of detector operation, calibration, alignment and tracking performance in both pp and Pb-Pb collisions.
The tracking system of the CMS experiment is the world's largest silicon tracker, with 1856 silicon pixel modules and 15,148 silicon strip modules. To accurately reconstruct the trajectories of charged particles, the position, rotation, and curvature of each module must be corrected such that the alignment resolution is smaller than, or comparable to, the hit resolution. This procedure is known as tracker alignment.
At the end of 2022 and again at the end of 2023, the alignment was optimized with the aim of improving the physics precision in the data reprocessing. A new innermost layer of the barrel pixel detector had been installed prior to Run 3, increasing the need to mitigate the effects of irradiation on the pixel modules. In addition, the tracker alignment must account for other changes in track reconstruction caused by, e.g., temperature variations and magnet cycles. The results of this effort are presented with a focus on physics performance, highlighting the strategies employed to tackle the challenges of Run 3 data-taking.
The tracking performance of the ATLAS detector relies critically on its 4-layer Pixel Detector. As the detector component closest to the interaction point, it is subjected to a significant amount of radiation over its lifetime. By the start of the 2024 Run 3 LHC collisions, the innermost layers, consisting of planar and 3D pixel sensors, will have integrated a fluence of O(10^15) 1 MeV n$_{\rm eq}$/cm$^2$. The ATLAS collaboration continually evaluates the impact of radiation on the Pixel Detector. In this talk the key status and performance metrics of the ATLAS Pixel Detector are summarised, with emphasis on the operating conditions, radiation damage, and the mitigation techniques adopted for LHC Run 3. These results provide useful input for optimising the operating conditions of the new generation of pixel trackers under construction for the HL-LHC upgrades.
The LHCb experiment is now running after its first major upgrade, designed to cope with the increased luminosity of LHC Run 3 and to improve on many world-best physics measurements. A new tracker based on scintillating fibres (SciFi) has replaced the Outer and Inner Trackers and delivers improved spatial resolution for the new trigger-less era of LHCb, with a readout capable of reading ~524k channels at 40 MHz. Fully automated calibration routines for the SciFi detector, based on dedicated software tools and operational procedures, were validated during commissioning and have been applied to further improve the detector performance since the beginning of Run 3. This presentation describes the experience gained in SciFi operations during data-taking, including solutions adopted to improve performance, and presents early results comparing the achieved performance with expectations. Foreseen challenges related to detector ageing and luminosity upgrades will also be discussed.
To cope with the increase in occupancy, bandwidth, and radiation damage at the HL-LHC, the ATLAS Inner Detector will be replaced by an all-silicon system, the Inner Tracker (ITk). Its innermost part will consist of a pixel detector with an active area of about 13 m^2, employing several silicon sensor technologies. Pixel modules assembled with RD53B readout chips have been built to evaluate their production rate, and irradiation campaigns were carried out to evaluate their thermal and electrical performance before and after irradiation. A new serial powering scheme will be employed, helping to reduce both the material budget of the detector and the power dissipation. This contribution presents the status of the ITk pixel project, focusing on the lessons learned and the biggest challenges towards production, from mechanical structures to sensors, and summarises the latest results on closest-to-real demonstrators built using prototypes of modules and of electrical and cooling services.
The HL-LHC is expected to deliver an integrated luminosity of 4000 fb$^{-1}$, which will allow precise measurements in the Higgs sector and improved searches for new physics at the TeV scale. ATLAS is currently preparing for the HL-LHC upgrade: an all-silicon Inner Tracker (ITk) will replace the current Inner Detector, with a pixel detector surrounded by a strip detector. The strip system consists of 4 barrel layers and 6 endcap disks. After the completion of final design reviews in key areas, such as sensors, modules, front-end electronics, and ASICs, a large-scale prototyping programme has been successfully completed in all areas. We present an overview of the strip system and highlight the final design choices for sensors, modules, and ASICs. We summarise the results achieved during prototyping and the current status of production of the various detector components, with an emphasis on QA and QC procedures.
The Muon g-2 experiment at Fermilab aims to measure the muon magnetic moment anomaly, aμ = (g−2)/2, with a final accuracy of 0.14 parts per million (ppm). The experiment’s first result, published in 2021 and based on Run-1 data collected in 2018, confirmed the previous result obtained at Brookhaven National Laboratory with a similar sensitivity of 0.46 ppm. In this talk, we will present the improvements in systematic and statistical uncertainties in the latest result, based on the 2019 and 2020 datasets of Run-2 and Run-3. These datasets contain a factor of four more data than in Run-1, thus entering a new sensitivity regime to g-2 which led to the unprecedented uncertainty of 0.20 ppm. We will also discuss the future prospects for the experiment, the projected uncertainties on aμ for the final publication which will include the last three datasets collected from 2021 to 2023, and an overview of the comparison with the Standard Model prediction for muon g-2.
The Muon $g-2$ Experiment at Fermilab aims to measure the muon magnetic moment anomaly, $a_{\mu} = (g-2)/2$, with a final accuracy of 0.14 parts per million (ppm). A $3.1$-GeV muon beam is injected into a storage ring of $14\,$m diameter, in the presence of a $1.45\,$T magnetic field. The anomaly $a_\mu$ can be extracted by accurately measuring the anomalous muon spin precession frequency $\omega_a$, based on the arrival time distribution of decay positrons observed by $24$ calorimeters, and the magnetic field. In 2023, the experiment published a result based on the 2019 and 2020 datasets, reaching the unprecedented sensitivity of $0.20\,$ppm. In this talk, I will outline the major systematic uncertainties on the $\omega_a$ frequency and provide an overview of the ongoing $\omega_a$ analysis for the last three datasets, collected from 2020 to 2023, along with the projected uncertainties on the final Muon $g-2$ measurement at Fermilab.
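The spin-precession principle underlying the $\omega_a$ measurement can be summarized as follows; this is a textbook sketch of the storage-ring dynamics, not the full E989 extraction chain (which also involves the shielded-proton Larmor frequency used to determine the field):

```latex
% Anomalous precession: spin frequency relative to the momentum direction
% for a muon in magnetic and electric fields (with \vec{\beta}\cdot\vec{B}=0):
\vec{\omega}_a \;=\; -\frac{e}{m_\mu}\left[\, a_\mu \vec{B}
 \;-\; \left( a_\mu - \frac{1}{\gamma^2 - 1} \right)
 \frac{\vec{\beta}\times\vec{E}}{c} \,\right]
% At the "magic" momentum, \gamma = \sqrt{1 + 1/a_\mu} \simeq 29.3
% (p \simeq 3.1 GeV/c), the electric-field term vanishes, leaving
\omega_a \;=\; a_\mu \,\frac{e B}{m_\mu}
```

Operating at the magic momentum is what allows electrostatic focusing in the ring without biasing $\omega_a$ at leading order; the anomaly then follows from the measured ratio of $\omega_a$ to the magnetic-field measurement.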
We describe the procedures that were developed to verify the consistency and combine multiple independent analyses of the muon precession measurement by the FNAL-E989 collaboration. These procedures were applied to the first (2021) and second (2023) results published by the collaboration. To properly verify the consistency of different analyses up to 20 ppb, correlations have been modeled and estimated, in several cases exploiting bootstrap techniques. A combination procedure has been designed to combine highly correlated measurements to obtain a robust final result with a small (sub-ppm) but nevertheless conservative uncertainty.
We report a measurement of the $e^+e^-\to\pi^+\pi^-\pi^0$ cross section in the energy range from 0.62$~$GeV to 3.5$~$GeV using an initial-state radiation technique. We use an $e^+e^-$ data sample corresponding to $191~\mathrm{fb}^{-1}$ of integrated luminosity, collected at a centre-of-mass energy at or near the $\Upsilon(4S)$ resonance with the Belle$~$II detector at the SuperKEKB collider. The uncertainty at the $\omega$ and $\phi$ resonances is 2.2%. The leading order hadronic vacuum polarization contribution to the muon anomalous magnetic moment using this result is $a^{3\pi}_{\mu}= (49.02\pm 0.23\pm 1.07)\times 10^{-10}$.
Lepton flavor violation (LFV) is a promising avenue to search for physics beyond the SM. The observation of neutrino oscillation has opened a new window, indicating new physics. In our work, we study the charged LFV $\mu$ decays such as $\mu\rightarrow e\gamma$, $\mu \rightarrow eee$, and $(\mu - e)_{\text{Ti}}$ conversion with a vector leptoquark ($U_3$), taking into account the constraints from the non-standard neutrino interaction (NSI) parameter $\epsilon_{e\mu}$. We consider that these NSIs are attributed to the presence of leptoquarks, to account for the difference in the experimental observations of the $\delta_{CP}$ measurement by NOvA and T2K. We obtain the branching ratios with uncertainties for the three decay modes: $(\mu \rightarrow e \gamma) \leq 10^{-18}$, $(\mu \rightarrow eee) \leq 10^{-21}$ and $(\mu \rightarrow e)_{\text{Ti}} \leq 10^{-19}$. Our results show an improvement over the current limits, which can be explored in future experiments.
The MEG II experiment searches for the lepton flavour violating decay $\mu^+\to e^+\gamma$ with the world's most intense continuous muon beam at the Paul Scherrer Institute and high-performance detectors, aiming at ten times higher sensitivity than the previous MEG experiment. The result from the first dataset, collected in 2021, has been published. MEG II took data in 2022 and 2023, corresponding to ten times the 2021 statistics, and a more than twenty-fold increase in statistics is anticipated by 2026 to reach the sensitivity goal. The latest results from the MEG II experiment will be presented.
The COMET Experiment at J-PARC aims to search for the lepton-flavour violating process of muon to electron conversion in a muonic atom, $\mu^{-}N\rightarrow\mathrm{e}^{-}N$, with a 90% confidence level branching-ratio limit of $6\times 10^{-17}$, in order to explore the parameter region predicted by most well-motivated theoretical models beyond the Standard Model. In order to realize the experiment effectively, a staged approach to deployment is employed; COMET Phase-I & II. At the Phase-I experiment, a precise muon-beam measurement will be conducted, and a search for $\mu^{-}N\rightarrow\mathrm{e}^{-}N$ will also be carried out with an intermediate sensitivity of $7\times 10^{-15}$ (90% CL upper limit).
The dedicated proton beam-line was recently completed and its commissioning run (COMET Phase-$\alpha$) was successfully conducted in 2023. In this paper, the construction status and some prospects of the experiment are presented in addition to the experimental overview.
The energy-energy correlator (EEC) is an observable that explores the angular correlations of energy depositions in detectors at high-energy collider facilities, and has been extensively studied in the context of precision QCD. In this presentation, I will discuss our recent work on the energy-energy correlator in the context of deep inelastic scattering. In the limit where the energy emissions are back-to-back, the proposed observable is sensitive to the universal transverse-momentum-dependent parton distribution functions and fragmentation functions. In the collinear limit, a definition of the nucleon energy-energy correlator (NEEC) was introduced. We revisit the NEEC definition, which involves weighting the EEC by Bjorken x, and conduct the study across the entire phase-space region.
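Schematically, the energy-energy correlator sums over pairs of final-state particles weighted by their energy fractions; the following is a generic definition (conventions for the angular variable and normalization vary between the $e^+e^-$, DIS, and jet-substructure contexts):

```latex
\frac{\mathrm{d}\Sigma_{\rm EEC}}{\mathrm{d}\chi}
\;=\; \sum_{i,j} \int \mathrm{d}\sigma\;
\frac{E_i\, E_j}{Q^2}\; \delta\!\left(\chi - \chi_{ij}\right)
% \chi_{ij}: angle between particles i and j in the chosen frame.
% Back-to-back limit \chi \to \pi: sensitivity to TMD distributions;
% collinear limit \chi \to 0: governed by collinear (DGLAP-type) dynamics.
```

The energy weighting makes the observable infrared-and-collinear safe and calculable in perturbation theory, which is what underlies its use as a precision QCD probe.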
The radiation pattern within high-energy quark and gluon jets (jet substructure) is used as a precision probe of QCD and for optimizing event generators. Compared to hadron colliders, the precision achievable in collisions involving electrons is superior, as most of the complications present at hadron colliders are absent. Jets are therefore analyzed in deep inelastic scattering events recorded by the H1 detector at HERA. The measurement is unbinned and multi-dimensional, making use of machine learning to unfold detector effects. The fiducial volume is given by momentum transfer $Q^2>150$ GeV$^2$, inelasticity $0.2< y < 0.7$, jet transverse momentum $p_{T,jet}>10$ GeV, and jet pseudorapidity $-1<\eta_{jet}<2.5$. The jet substructure is analyzed in the form of generalized angularities and is presented in bins of $Q^2$ and $y$. All of the available object information in the events is used by means of graph neural networks. The data are compared with a broad variety of predictions.
The H1 Collaboration at HERA reports the first measurement of groomed event shapes in deep inelastic ep scattering (DIS) at $\sqrt{s} = 319$ GeV, using data recorded between 2003 and 2007 with an integrated luminosity of $351.1\pm 9.5$ pb$^{-1}$. Event shapes in DIS collisions provide incisive probes of perturbative and non-perturbative QCD, and recently developed grooming techniques investigate similar physics in jet measurements of hadronic collisions. This paper presents the first application of grooming to DIS data. The analysis is carried out in the Breit frame, utilizing the novel Centauro jet clustering algorithm. Events are selected with squared momentum transfer $Q^2 > 150$ GeV$^2$ and inelasticity $0.2 < y < 0.7$. Cross sections of groomed event 1-jettiness and groomed invariant jet mass are measured for several choices of grooming parameter. The measurements are compared to Monte Carlo models and to analytic calculations based on Soft Collinear Effective Theory (SCET).
The H1 Collaboration reports the first measurement of the 1-jettiness event shape observable $\tau_{1}^{b}$ in neutral-current deep-inelastic electron-proton scattering. The analysis is based on data recorded in 2003-2007 by the H1 detector at the HERA collider for ep collisions at sqrt(s)=319 GeV, with integrated luminosity of 351.1 pb$^{-1}$. The observable $\tau_{1}^{b}$ is equivalent to a thrust observable defined in the Breit frame. The triple differential cross section is presented as a function of $\tau_{1}^{b}$, event virtuality $Q^2$, and inelasticity y, in the kinematic region $Q^2 > 150$ GeV$^2$. The data are compared to predictions from Monte Carlo event generators and NNLO pQCD calculations. These comparisons reveal sensitivity of this observable to QCD parton shower and resummation effects, the magnitude of the strong coupling constant, and proton parton distribution functions, as well as the modeling of hadronization and fragmentation.
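For orientation, 1-jettiness is conventionally defined by partitioning the final-state hadrons between a beam reference axis and a jet reference axis; a generic sketch of the definition is given below (the precise axes and normalization for $\tau_1^b$ follow the H1 paper, with the beam and jet reference momenta built from the proton momentum $P$ and the exchanged boson momentum $q$):

```latex
\tau_1 \;=\; \frac{2}{Q^2} \sum_{k \,\in\, \mathrm{hadrons}}
\min\!\left( q_B \cdot p_k,\; q_J \cdot p_k \right)
% q_B: beam reference momentum, q_J: jet reference momentum
% (commonly q_B = x P and q_J = q + x P in DIS 1-jettiness).
% \tau_1 \to 0 for a single narrow jet recoiling against the beam
% (Born-like configuration in the Breit frame); larger \tau_1 signals
% additional wide-angle radiation.
```

This structure is what makes $\tau_1^b$ sensitive simultaneously to parton showers and resummation, the strong coupling, parton distributions, and hadronization, as listed above.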
Measurements of the substructure of jets are presented using 140 fb$^{-1}$ of proton-proton collisions at a center-of-mass energy of sqrt(s) = 13 TeV recorded with the ATLAS detector at the CERN Large Hadron Collider. Various results are presented, including the measurement of non-perturbative track functions, i.e. the fraction of a jet's transverse momentum carried by its charged constituents. The first differential cross-section measurement of Lund sub-jet multiplicities using dijet events and the measurement of the Lund Jet Plane in ttbar events are also shown in this contribution. Moreover, measurements of the substructure of top-quark jets are presented, using top quarks reconstructed with the anti-kt algorithm with a radius parameter R = 1.0.
This talk presents the ALICE measurements of $\pi^{0}$, $\eta$, and $\omega$ meson production in pp collisions at $\sqrt{s}=13$ TeV. The results are given for several multiplicity classes, each with unprecedented $p_{\rm T}$ coverage. Furthermore, the measurement of $\pi^{0}$ and $\eta$ mesons inside jets will be shown.
ALICE measurements of neutral meson production in pp, p+Pb and Pb+Pb collisions constrain parton distribution functions (PDFs) and fragmentation functions (FFs), and provide essential background corrections for direct photon and dilepton analyses.
Observables previously attributed to the formation of a QGP in Pb–Pb collisions have been measured in high-multiplicity pp and p–Pb collisions by ALICE and CMS, suggesting a continuous evolution from small to large collision systems. In addition, the correlation of neutral mesons and jets measured in pp collisions provides constraints on the meson fragmentation functions.
Jet substructure measurements of the primary Lund jet plane and of energy-energy correlators are presented. These observables are motivated by their sensitivity to the strong coupling and also exhibit attractive experimental properties.
High-precision measurements of top quark pair production are crucial to advance our understanding of perturbative and soft QCD and provide a deeper understanding of the partons inside the proton. In this talk, recent highlights of top quark cross-section measurements at CMS, at centre-of-mass energies ranging from 5 TeV up to 13.6 TeV, will be presented. Moreover, new results of differential cross-section measurements in challenging regions of phase space will be discussed.
The LHC produces a vast sample of top quark pairs and single top quarks. Measurements of the inclusive top quark production rates at the LHC have reached a precision of several percent and test advanced next-to-next-to-leading-order predictions in QCD. In this contribution, comprehensive measurements of top-quark-antiquark pair and single-top-quark production are presented that use data recorded by the ATLAS experiment in Run 2 and Run 3 of the LHC. A recent result on top-quark pair production in proton-lead collisions is also included.
I present theoretical results for top-pair production as well as for the associated production of top quarks with $W$ bosons. Soft-gluon corrections from resummation are calculated through approximate N$^3$LO and added to fixed-order QCD results, and electroweak corrections are included at NLO. Top-quark transverse-momentum and rapidity distributions are also presented. In all cases the higher-order corrections are large: they reduce the scale dependence and improve agreement with recent data.
We investigate the impact of recent LHC measurements of differential top-quark pair production cross sections on the proton parton distribution functions (PDFs) using the ABMP16 methodology. The theoretical predictions are computed at NNLO QCD using the state-of-the-art MATRIX framework. The top-quark mass and the strong coupling constant are free parameters of the fit, and we pay particular attention to the values of these parameters and their correlation as obtained from variants of the fit using different input data sets. We discuss the compatibility of different datasets and the compatibility of the fitted PDFs with those extracted from other datasets in the global ABMP16 fit, as well as with other modern global PDF sets. In addition, we compare the fit results with those obtained using the open-source xFitter framework.
The high center-of-mass energy of the LHC opens the window to precise measurements of electroweak top quark production, as well as of vector-boson- and quark-associated production of top quark pairs and single top quarks. In this talk, recent inclusive and differential measurements of single-top and rare top quark production will be discussed.
Being the heaviest fermion, with a Yukawa coupling close to unity, the top quark represents one of the most interesting portals to New Physics (NP). If NP is light or belongs to a secluded sector, it can be difficult to detect at colliders with traditional methods. An alternative way, at least for setting bounds, is to study virtual corrections to SM processes. Kinematical distributions of top-quark pairs produced at the LHC are studied by both the CMS and ATLAS experiments. A wealth of data for both the threshold region and the tails of the distributions is already available. In addition, SM theoretical predictions are known to very high accuracy (NNLO QCD and beyond). In this talk I will discuss the effect of virtual corrections to top-pair production coming from different types of NP: axion-like particles, and CP-even and CP-odd scalars. I will also discuss the opportunity that top-pair production offers to bound new-particle interactions or probe their existence.
The High-Luminosity LHC project aims to increase the integrated luminosity of the LHC by an order of magnitude and to enable its operation until the early 2040s. This presentation will give an overview of the current status of the project, for which several achievements can be reported, from the completion of the civil engineering to the successful demonstration of new key technologies such as Nb$_3$Sn magnets and MgB$_2$-based superconducting links.
Preparing the LHC machine for a targeted integrated luminosity of 3000 fb$^{-1}$ requires not only new, more radiation-tolerant and larger-aperture triplet magnets but also developments in several other key areas of accelerator technology, including crab cavities, beam optics, collimation, beam instrumentation, magnet protection systems and high-accuracy, high-current power converters. The HL-LHC project is therefore not only an upgrade of the LHC machine, but also a technology driver that develops technologies that will impact future accelerator projects like the FCC and EIC.
In the High Luminosity Large Hadron Collider (HL-LHC) and in most future colliders, crab crossing is required to recover the significant geometric luminosity loss due to the finite crossing angle at the collision point. In the framework of the HL-LHC, a decade-long R&D program on ultra-compact superconducting crab cavities led to the first successful demonstration of crabbing with high-energy proton beams in the CERN SPS. This contribution will cover the main highlights of the development of superconducting crab cavities, including the global effort to realize the final crab cavity system for the HL-LHC. The implications of these developments for future colliders such as the FCC and EIC will be discussed.
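The geometric loss referred to above is quantified by the textbook luminosity reduction factor for a crossing angle (a standard accelerator-physics result, not specific to this contribution):

```latex
F \;=\; \frac{1}{\sqrt{1 + \phi^2}}\,, \qquad \phi \;=\; \frac{\theta_c\, \sigma_z}{2\, \sigma_x^*}
```

where $\theta_c$ is the full crossing angle, $\sigma_z$ the bunch length and $\sigma_x^*$ the transverse beam size at the interaction point; crab cavities tilt the bunches so that they still collide effectively head-on, recovering $F \to 1$.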
The build-up of electron clouds in accelerator beam chambers can lead to detrimental effects, such as transverse instabilities, emittance growth, beam loss, vacuum degradation, and heat load. Such effects are systematically observed in the Large Hadron Collider (LHC) during operation with proton beams, limiting the total intensity achievable in the collider. The High Luminosity LHC (HL-LHC) project aims at an order of magnitude increase of the integrated luminosity of the LHC. With the associated increase in bunch intensity, as well as an observed increase in electron cloud effects after each long shutdown of the LHC, electron cloud poses a significant risk to the performance of the HL-LHC. In this contribution, we discuss the related limitations and proposed mitigation measures to ensure the best possible performance of the HL-LHC.
The HL-LHC performance relies on safely and reliably handling high-intensity beams of unprecedented stored energy. The 7 TeV design target is compatible with a beam current a factor of 2 larger than in the LHC, and with levelled peak luminosities 5 times, and ultimately 7.5 times, larger. This goal requires a massive upgrade of the collimation system, both for the betatron halo collimation that must sustain beam losses up to 1 MW, and for the collimation systems around the experiments. This paper describes the solutions elaborated for the HL-LHC and the operational experience from a first collimation upgrade deployed in the 2019-2021 long shutdown. These upgrades include new low-impedance collimators, crystal collimation for ion beams and local dispersion-suppressor collimation. The present and future collimator designs are presented. This effort paves the way for beam collimation solutions that are being studied for future projects like the lepton and hadron Future Circular Colliders (FCC) presently pursued at CERN.
The ongoing feasibility study of the Future Circular Collider (FCC) comprises two distinct accelerators: a high-luminosity circular electron-positron collider known as FCC-ee and an energy-frontier hadron collider named FCC-hh. These two facilities are designed to take advantage of a common tunnel infrastructure. We present the new baseline design of FCC-hh, underlining the most recent updates. These include studies of the corrector systems, optimisation of the arc cell, increasing the dipole filling factor, and subsequent updates to the layouts of the different technical and experimental insertions.
Magnet technology is a key enabler for the Future Circular Collider (FCC) and its hadron collider variant (FCC-hh). The European High-Field Magnet Programme (HFM), hosted at CERN, implements a European research network for high-field accelerator magnets that is geared towards FCC-hh. The research network includes four national laboratories and CERN for magnet design and construction, as well as institutes and universities for conductor research and other enabling technologies. In this contribution we present the status of the Programme, medium-term plans for technology demonstrations, as well as the strategy for integration and coordination with the FCC integrated program. The authors present on behalf of the entire HFM Programme network.
Supersymmetric models with anomaly-mediated SUSY breaking (AMSB) have run into serious conflicts with 1. LHC sparticle and Higgs mass constraints, 2. constraints from wino-like WIMP dark matter searches and 3. bounds from naturalness. These conflicts may be avoided by introducing changes to the underlying phenomenological models, providing a setting for natural anomaly-mediation (nAMSB). We examine spectra of nAMSB models arising from the string landscape. Here, we investigated LHC constraints on nAMSB models, which allow $m_{3/2}$ to lie within 90-200 TeV and which may soon be discovered or falsified by a combination of 1. soft opposite-sign dilepton plus jet + MET (OSDLJMET) searches, which arise from higgsino pair production, 2. searches for non-boosted, hadronically decaying wino pair production and 3. same-sign diboson + MET searches arising from wino pair production followed by wino decay to W + higgsino. Some excess above the SM background in the OSDLJMET channel already seems to be present in both ATLAS and CMS data.
Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model, and searches for SUSY particles are an important component of the LHC physics program. With increasing mass bounds on MSSM scenarios, other non-minimal variations of supersymmetry become increasingly interesting. This talk will present the latest results of searches conducted by the ATLAS experiment targeting strong and electroweak production in R-parity-violating models, as well as non-minimal-flavour-violating models.
Since the classic searches for supersymmetry under R-parity conserving scenarios have not given any strong indication for new physics yet, more and more supersymmetry searches are carried out on a wider range of supersymmetric scenarios. This talk focuses on searches looking for signatures of stealth and R-parity-violating supersymmetry. The results are based on proton-proton collisions recorded at sqrt(s) = 13 TeV with the CMS detector.
Supersymmetry (SUSY) models featuring small mass splittings between one or more particles and the lightest neutralino could solve the hierarchy problem as well as offer a suitable dark matter candidate consistent with the observed thermal-relic dark matter density. However, the detection of SUSY higgsinos at the LHC remains challenging, especially if their mass splitting is O(1 GeV) or lower. To overcome this challenge, searches are developed using 140 fb$^{-1}$ of proton-proton collision data collected by the ATLAS detector at a center-of-mass energy of $\sqrt{s}=13$ TeV. Novel approaches are developed that exploit machine-learning techniques, low-momentum tracks with large transverse impact parameters, or topologies consistent with VBF production of the supersymmetric particles. Results are interpreted in terms of SUSY simplified models and, for the first time since the LEP era, several gaps in different ranges of mass splittings are excluded.
A wide variety of searches for Supersymmetry have been performed by experiments at the Large Hadron Collider. In this talk, we focus on searches for electroweak production of Supersymmetric particles as well as for third-generation Supersymmetric particles. Some analyses are optimized for Supersymmetric particles in compressed spectra. The results are obtained from proton-proton collision data with an integrated luminosity of up to 138 fb$^{-1}$ at a center-of-mass energy of 13 TeV, collected during LHC Run 2.
The ANAIS experiment aims to independently verify or refute the longstanding positive annual modulation signal observed by DAMA/LIBRA using the same target and technique. While other experiments have ruled out the parameter region highlighted by DAMA/LIBRA, their results rely on assumptions on the dark matter particle and its velocity distribution, as they utilize different target materials. ANAIS-112, comprising nine 12.5 kg NaI(Tl) modules arranged in a 3×3 matrix configuration, has been continuously collecting data at the Canfranc Underground Laboratory in Spain since August 2017, demonstrating outstanding performance. Results based on a three-year exposure were consistent with the absence of modulation and incompatible with DAMA/LIBRA at almost 3σ confidence level. We will discuss the current state of the experiment and its most recent data analysis. Updated sensitivity projections will be provided, foreseeing a 5σ exclusion of the DAMA/LIBRA signal by late 2025.
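For context, the annual-modulation signal that ANAIS-112 tests is conventionally parameterised as a cosine on top of a constant rate:

```latex
R(t) \;=\; R_0 + S_m \cos\!\left( \frac{2\pi}{T} \left( t - t_0 \right) \right)
```

with period $T = 1$ yr and phase $t_0 \approx$ June 2nd, as expected from the Earth's motion through the dark matter halo; the fitted modulation amplitude $S_m$ in the DAMA/LIBRA energy windows is then compared both to zero and to the DAMA/LIBRA value.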
The COSINE-100 experiment aims to detect dark matter-induced recoil interactions in NaI(Tl) crystals to test the DAMA/LIBRA collaboration's claim.
Data taking ran from September 2016 to March 2023 at the Yangyang underground laboratory in Korea, utilizing 106 kg of low-background NaI(Tl) detectors.
The COSINE-100 experimental setup has been moved to a newly built underground laboratory, Yemilab. There, the detector design will be upgraded to enhance light collection, and the COSINE-100U experiment will be operated. Furthermore, the COSINE collaboration is in the process of developing a high-purity NaI(Tl) detector for the upcoming COSINE-200 experiment.
In this talk, we will report on the overall status and the latest outcomes of the dark matter search in COSINE-100, provide an update on the upgrade at Yemilab, and discuss prospects for COSINE-200.
SABRE aims to provide a model-independent test of the signal observed by DAMA/LIBRA through two separate detectors that rely on joint ultra-high-purity NaI(Tl) crystal R&D activities: SABRE South at SUPL (Australia) and SABRE North at LNGS (Italy). SABRE South is designed to disentangle seasonal and site-related effects from the dark matter-like modulated signal. Ultra-high-purity crystals are immersed in a liquid scintillator veto, further surrounded by passive shielding and a plastic scintillator muon veto. Significant work has been undertaken to assess and mitigate background from the detector materials, and to understand the performance of both the crystal and veto systems. SUPL is a newly built facility located 1024 m underground in Australia. SABRE South is currently being assembled and will be completed in 2025, with the first subsystems already installed in SUPL. This talk will report on the general status of the SABRE South assembly, its expected performance, and the design of SUPL.
SABRE aims to deploy arrays of ultra-low-background NaI(Tl) crystals to carry out a model-independent search for dark matter through the annual modulation signature. SABRE will be a double-site experiment, made up of two separate detectors relying on a joint crystal R&D activity, located in the Northern (LNGS) and Southern (SUPL) hemispheres. For more than 10 years, SABRE has carried out an extensive R&D program on ultra-radio-pure NaI(Tl) crystals. Several crystals have been grown and tested in active and passive shields at LNGS. Based on these results, SABRE North is proceeding to a full-scale design with purely passive shielding. To reach an unprecedented level of radiopurity for NaI(Tl) crystals, SABRE North is exploiting zone-refining purification of the NaI powder prior to growth. We will present the first results from the zone-refining activities and predictions on the ultimate radiopurity achievable for the crystals. The status of the SABRE North installation at LNGS will also be discussed.
Indirect dark matter detection experiments aim to observe the annihilation or decay products of dark matter. The flux of neutrinos produced by such processes in nearby dark matter reservoirs, such as the Sun and the Galactic Centre, could be observed in neutrino telescopes. The KM3NeT observatory is composed of two undersea Čerenkov neutrino telescopes (KM3NeT-ORCA and ARCA) located offshore of France and Italy, respectively. In this work, searches for WIMP annihilations in the Galactic Centre and the Sun with KM3NeT are presented. An unbinned likelihood method is used to discriminate the signal originating from the Galactic Centre and the Sun from the background in the data samples of the first configurations of both detectors, ORCA6 and ARCA6/8/19/21. No significant excess over the expected background was found in either of the two analyses, resulting in limits on the velocity-averaged pair-annihilation cross section of WIMPs and the WIMP-nucleon scattering cross section.
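The unbinned likelihood discrimination mentioned above can be illustrated with a deliberately simplified toy sketch (all pdfs, variables and numbers below are hypothetical, not KM3NeT code): the signal fraction is fitted by maximizing a per-event mixture likelihood of a source-peaked signal pdf and a flat background pdf.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, uniform

# Toy model: "signal" events cluster near the source direction (Gaussian in
# an angular variable x), "background" is flat in x on [0, 1].
rng = np.random.default_rng(42)
n_sig_true, n_bkg_true = 50, 950
x = np.concatenate([
    norm.rvs(loc=0.1, scale=0.05, size=n_sig_true, random_state=rng),
    uniform.rvs(size=n_bkg_true, random_state=rng),
])

def neg_log_likelihood(f_sig):
    """Unbinned negative log-likelihood for a signal fraction f_sig."""
    pdf = f_sig * norm.pdf(x, loc=0.1, scale=0.05) \
        + (1.0 - f_sig) * uniform.pdf(x)
    return -np.sum(np.log(np.clip(pdf, 1e-300, None)))

# Maximize the likelihood (minimize the NLL) over the physical range.
res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
f_hat = res.x  # fitted signal fraction, close to the injected 5%
```

In the real analyses the signal pdf is built from the detector angular resolution around the Galactic Centre or the Sun, energy information is typically included, and the absence of an excess is converted into cross-section limits via a likelihood-ratio test statistic.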
The Super Tau-Charm Facility (STCF) is a high-luminosity electron-positron collider proposed in China. It will operate in an energy range of 2-7 GeV with a peak luminosity higher than $0.5\times10^{35}$ cm$^{-2}$s$^{-1}$. The STCF physics goals require efficient and precise reconstruction of exclusive final states produced in the e+e- collisions. This places stringent demands on the performance of the STCF detector. It must provide maximal solid-angle coverage, high efficiency and good resolution for both charged and neutral particles of low momentum or energy, excellent hadron identification over a large momentum range, and powerful muon identification capability. The STCF detector conceptual design has been published (available at arXiv:2303.15790). A full detector R&D program has been established and funded, and is going full steam ahead. This report presents the conceptual design and R&D progress of the STCF detector.
The FORMOSA detector at the proposed Forward Physics Facility is a scintillator-based experiment designed to search for signatures of "millicharged particles" produced in the forward region of the LHC. This talk will cover the challenges and impressive sensitivity of the FORMOSA detector, expected to extend current limits by over an order of magnitude. A pathfinder experiment, the FORMOSA demonstrator, was installed in the FASER cavern at the LHC in early 2024 and has been collecting collision data. Results from this demonstrator and important implications for the full detector design will be shown.
A storage ring proton electric dipole moment (EDM) experiment (pEDM) would be the first direct search for a proton EDM and would improve on the current (indirect) limit by 5 orders of magnitude. It would surpass the current sensitivity (set by neutron EDM experiments) to QCD CP-violation by 3 orders of magnitude, making it potentially the most promising effort to solve the strong CP problem, and one of the most important probes for the existence of axions, CP-violation and the source of the universe's matter-antimatter asymmetry. These features, coupled with a new-physics reach of $\mathcal{O}(10^3)$ TeV and a construction cost of $\mathcal{O}$(£100M), make it one of the low-cost/high-return proposals in particle physics today. The experiment will build upon the highly successful techniques of the Muon g-2 Experiment at Fermilab. In this talk, I will motivate and describe the pEDM experiment, and detail its path to success by building upon previous recent achievements.
The proposed LHeC and the FCC in electron-hadron mode will make possible the study of DIS in the TeV regime. These facilities will provide electron-proton (nucleus) collisions with per-nucleon instantaneous luminosities around $10^{34}$ ($10^{33}$) cm$^{-2}$s$^{-1}$ by colliding a 50-60 GeV electron beam from a highly innovative energy-recovery linac system with the LHC/FCC hadron beams, concurrently with other experiments for hadron-hadron collisions. The detector design was updated in the 2020 CDR. Ongoing developments since then include an improved IR design together with a more detailed study of an all-silicon central tracking detector. Additional capabilities for PID, enabling improved semi-inclusive DIS and eA studies, are also under study. In this talk, we describe the current detector design and ongoing discussion in the framework of a new ep/eA study, highlighting areas of common interest with other future collider experiments and the new Detector R&D Collaborations in Europe.
The SUB-Millicharge ExperimenT (SUBMET) searches for sub-millicharged particles produced in proton fixed-target collisions at J-PARC. The detector, installed 280 m from the target, is composed of two layers of stacked scintillator bars and PMTs. The main background is expected to be random coincidences between the two layers due to dark counts in the PMTs and radiation from the surrounding materials, which can be reduced significantly using the timing of the proton beam. With $\rm{N}_{\rm{POT}}=5\times 10^{21}$, the experiment provides sensitivity to $\chi$s with charges down to $8\times 10^{-5}e$ for $m_\chi<0.2$ GeV/$c^2$ and $10^{-3}e$ for $m_\chi>1.6$ GeV/$c^2$. This is a regime largely uncovered by previous experiments. This talk will address the assembly, construction, and installation of the detector as well as the future outlook of the experiment.
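The random-coincidence background described above follows the standard two-counter accidental rate (a generic counting-experiment formula, with symbols introduced here only for illustration):

```latex
R_{\mathrm{acc}} \;\simeq\; 2\, R_1 R_2\, \Delta t
```

where $R_1$ and $R_2$ are the singles rates of the two layers and $\Delta t$ is the coincidence window; gating on the J-PARC beam timing shortens the effective live window and suppresses $R_{\mathrm{acc}}$ proportionally.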
The COmpact DEtector for EXotics at LHCb (CODEX-b) is a particle physics detector dedicated to displaced decays of exotic long-lived particles (LLPs), compelling signatures of dark sectors beyond the Standard Model, which arise in theories containing a hierarchy of scales and small parameters. CODEX-b is planned to be installed near the LHCb interaction point and makes use of fast RPCs, which provide good spatial and temporal resolution as well as a zero-background environment, hence complementing the new-physics search programs of other detectors such as ATLAS and CMS. A demonstrator detector, CODEX-beta, is currently being assembled to take data in the second half of 2024 and in 2025. It will validate the design and physics case for the future CODEX-b. CODEX-beta will be responsible for validating the background estimations for CODEX-b, demonstrating the seamless integration in the LHCb readout system, and showing the suitability of the baseline tracking and its mechanical support.
The CMS at DESY outreach Instagram account provides science communication and outreach for a large experimental particle physics group. It aims to promote science, engage young scientists in outreach and showcase their work. The Instagram platform was selected for its demographic alignment with the target stakeholders and broad user base in Germany and abroad.
The communication focuses on highlighting young scientists, offering insights into the scientific journey, and sharing particle physics outreach content. Multiple contributors collaborate on the content, fostering training opportunities in science communication at a manageable time investment for early career researchers. This talk will cover the project's evolution, initial objectives, target audiences, and experiences in content creation and engagement on social media.
In 2021, CERN’s social media audience was no longer growing. Since then, the Organization’s follower base has grown to 4.73M today; its social media presence grew by 16% in reach (292M impressions) and 22% in engagement (11.2M reactions), despite the increasingly competitive and ever-evolving digital landscape. We review CERN’s main social media activities (content creation, digital partnerships, and community engagement) and present our learnings about how we managed to break this plateau. We will focus on how we worked to align our activities with other communications and outreach teams at CERN, to build our digital networks, and to engage the many different actors of the digital landscape. We will examine our monitoring, measurement, and evaluation activities and how we conduct the analyses described above, also addressing the increasing mistrust in organisations and the polarisation of the digital landscape. We will share our perspective about what comes next, both for CERN and for the broader digital landscape.
Launched in 2016 and confirmed by the Update of the European Strategy of Particle Physics, the Physics Beyond Colliders Initiative aims to exploit the scientific potential of CERN's accelerator complex and technical infrastructure, as well as its expertise in accelerator and detector science and technology. The diverse PBC projects, ranging from QCD to BSM searches and, in particular, searches for feebly interacting particles, complement the goals of the Laboratory’s main collider experiments by targeting fundamental physics questions. Flagship projects emerging from PBC include the NA64 experiment, which is searching for light dark matter, and the ECN3 high-intensity facility in CERN’s North Area.
This presentation will highlight the new communications strategy that supports the PBC Initiative's general outreach, and will also discuss a few use cases of how this strategy supports specific projects that need to increase their global visibility.
We use particle physics as a prime example to engage young students to get involved in this subject area and gain a new, everyday perspective on STEM topics. Our strategy is designed to demystify physics, making it more accessible and attractive early in school.
In Germany, students usually decide whether or not to continue physics education around the age of 15. That's why our project is aimed specifically at students aged 10 to 15 to give them a real insight into particle physics research. Our efforts have focused on creating educational and engaging workshops for young learners. We have reached over 620 students across these age groups through more than 25 interventions in the last two years. The initial results are promising, indicating that our efforts are successfully igniting a motivation for physics, especially among girls. We aim to inspire the next generation of (particle) physicists. In this talk, we will present our workshops, the methodologies we use and initial data.
The European Researchers’ Night stands as a beacon of scientific outreach and engagement. It unfolds as a platform for dialogue, enabling researchers to share their passion and latest breakthroughs with a diverse audience. In this talk, we delve into the journey of the Italian National Institute for Nuclear Physics (INFN) within this prestigious event. The INFN has so far obtained a large number of funded projects, which has contributed to the diffusion of the initiative throughout the national territory. With a focus on fostering the interaction between researchers and the public, our participation offers a kaleidoscope of engaging activities and enlightening discussions. Through interactive exhibits and hands-on experiments, the INFN showcases the intricacies of particle physics, inviting attendees to embark on a journey of scientific discovery. We describe our experience, including suggestions for improving the success of involvement in Researchers’ Night in other countries as well.
The ATLAS Collaboration has recently, for the first time, released a large volume of data for use in research publications. The entire 2015 and 2016 proton collision dataset has been made public, along with a large quantity of matching simulated data, in a light format, PHYSLITE, which is also used internally for ATLAS analysis. In order to allow detailed analyses of these data, all the corresponding software has been made public, along with extensive documentation targeting several different levels of users, from those who are new to particle physics to experienced researchers that need only an introduction to the ATLAS-specific details of the data. This contribution describes the data, the corresponding metadata and software, and the documentation of the open data, along with the first interactions with non-ATLAS researchers.
In the Standard Model, the ground state of the Higgs field is not found at zero but instead corresponds to one of the degenerate solutions minimising the Higgs potential. In turn, this spontaneous electroweak symmetry breaking provides a mechanism for the mass generation of nearly all fundamental particles. The Standard Model makes a definite prediction for the Higgs boson self-coupling and thereby the shape of the Higgs potential. Experimentally, both can be probed through the production of Higgs boson pairs (HH), a rare process that presently receives a lot of attention at the LHC. In this talk, the latest non-resonant HH searches by the ATLAS experiment are reported. Results are interpreted both in terms of sensitivity to the Standard Model and as limits on the Higgs boson self-coupling. A combined measurement of single and double Higgs boson production is also presented, based on pp collision data collected at a centre-of-mass energy of 13 TeV with the ATLAS detector.
The measurement of Higgs boson pair (HH) production at the LHC probes the Higgs boson's interaction with itself; it is thus a fundamental test of the Standard Model and plays a key role in determining the nature of the Higgs boson. The most recent results from the CMS collaboration on measurements of non-resonant HH production in different final states, and their combination, using the data set collected by the CMS experiment at a centre-of-mass energy of 13 TeV, will be presented.
We consider next-to-leading order electroweak corrections to Higgs boson pair production and to Higgs plus jet production in gluon fusion. This requires the computation of two-loop four-point amplitudes with massive internal particles such as top quarks, Higgs and gauge bosons. We perform analytic calculations in various kinematical limits and show that their combination covers the whole phase space, thus circumventing time-consuming numerical approaches.
We present a new simulation for Higgs boson production in association with bottom quarks ($bbH$) at next-to-leading order (NLO) matched to parton showers. The contributions proportional to the bottom-quark Yukawa coupling and top-quark Yukawa coupling (from gluon fusion) are both taken into account in a scheme with massive bottom quarks. The $bbH$ process constitutes a crucial background to measurements of Higgs-boson pair ($HH$) production at the LHC when at least one of the Higgs bosons decays to bottom quarks. So far, the modeling of $bbH$ induced one of the dominant theoretical uncertainties on $HH$ measurements, as the gluon-fusion component was described only at leading order, with uncertainties of O(100%). Including NLO corrections reduces the scale dependence to O(50%). We provide an in-depth analysis of the $bbH$ background to $HH$ measurements, and we propagate the effect of the new $bbH$ simulation to $HH$ searches in the $2b2\gamma$ and $2b2\tau$ final states.
Higgs boson pair production plays an important role in the determination of the Higgs boson self-coupling, a major element of the LHC physics program. The predictions based on next-to-leading order corrections show a large dependence on the renormalization scheme of the top quark mass, which calls for a next-to-next-to-leading order calculation. We show first results for the three-loop virtual corrections, expanded around the forward-scattering kinematics, an expansion that covers a large part of the phase space.
We analyse the sensitivity to beyond-the-Standard-Model effects of hadron-collider processes involving the interaction of two electroweak (V) and two Higgs (H) bosons, VVHH, with V being either a W or a Z boson.
We examine current experimental results by the CMS collaboration in the context of a dimension-8 extension of the Standard Model in an effective-field-theory formalism. We show that constraints from vector-boson-fusion Higgs-pair production on operators that modify the Standard Model VVHH interactions are already comparable with or more stringent than those quoted in the analysis of vector-boson-scattering final states. We study the modifications of such constraints when introducing unitarity bounds, and investigate the potential of new experimental final states, such as ZHH associated production. Finally, we show perspectives for the high-luminosity phase of the LHC.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino oscillation experiment aimed at determining the neutrino mass hierarchy and the CP-violating phase. The DUNE physics program also includes the detection of astrophysical neutrinos and the search for signatures beyond the Standard Model, such as nucleon decays. DUNE consists of a near detector complex located at Fermilab and four 17-kton Liquid Argon Time Projection Chamber (LArTPC) far detector modules to be built 1.5 km underground at SURF, approximately 1300 km away. The detectors are exposed to a wideband neutrino beam generated by a 1.2 MW proton beam with a planned upgrade to 2.4 MW. Two 700 ton LArTPCs (ProtoDUNEs) have been operated at CERN for over 2 years as a testbed for DUNE far detectors and have been optimized to take new cosmic and test-beam data in 2024-2025. This talk will present the DUNE and ProtoDUNE experiments and physics goals, as well as recent progress and results.
The IceCube Neutrino Observatory is a cubic-kilometer detector in the ice of the South Pole for the detection of neutrinos with energies from GeV to PeV, which has been fully operational since 2010. In the 2025/2026 Antarctic summer season, the detector will receive a low-energy upgrade by adding about 700 new optical modules. Several new digital optical modules with multiple PMTs and calibration devices will be deployed in a high-density configuration in the center of the IceCube array. The Upgrade will improve the detection of GeV-scale neutrinos, which in turn will lead to more precise measurements of fundamental physics phenomena such as neutrino oscillations and beyond-the-Standard-Model physics. This talk will give an overview of the new designs and the status of their testing, the planned installation, and the prospects for the exciting measurements expected with the Upgrade.
Next-generation experiments aim at delivering high-precision measurements of the oscillation parameters to reveal the main unknowns in neutrino physics. Among these goals, validating the three-flavour paradigm remains one of the most compelling, because it allows for exploring new physics.
KM3NeT/ORCA is a water Cherenkov neutrino telescope, under construction in the Mediterranean Sea, whose primary physics goal is an early measurement of the neutrino mass ordering from the oscillation of atmospheric neutrinos traversing the Earth. Thanks to its huge fiducial mass, KM3NeT/ORCA will have unprecedented statistics to exploit the tau neutrino appearance channels as an indirect test of the PMNS matrix unitarity and, thus, of the three-neutrino flavors paradigm. With a focus on the event reconstruction and selection methods, the results from the first blind measurement of the tau neutrino normalization performed by exploiting data collected with a partially instrumented volume will be presented.
KM3NeT/ORCA is a water-Cherenkov neutrino telescope currently under construction in the Mediterranean Sea, with the goal of measuring atmospheric neutrino oscillations and determining the neutrino mass ordering. The detector is located 40 km off-shore Toulon, France, and consists of a three-dimensional grid of detection units equipped with 18 digital optical modules, hosting 31 photo-multiplier tubes each. By inspecting the arrival direction of GeV neutrinos crossing the Earth, ORCA can effectively constrain the oscillation parameters $\Delta m^{2}_{31}$ and $\theta_{23}$, and can additionally be used to search for deviations from the Standard Model in neutrino interactions, the so-called Non-Standard neutrino Interactions (NSI).
This presentation covers the most up-to-date results from the ORCA detector on neutrino oscillations and NSI, which improve on previous ORCA measurements and benefit from increased exposure, refined event selections and optimised reconstruction algorithms.
The Hyper-Kamiokande experiment aims to discover CP violation in the lepton sector through precise measurements of $\nu_{\mu} \to \nu_{e}$ and $\bar{\nu}_{\mu}\to\bar{\nu}_{e}$ oscillations. This will be achieved with high statistics from the new 260 kiloton far detector and the intense neutrino beam from J-PARC, together with a precise understanding of the neutrino-nucleus interaction using the new Intermediate Water Cherenkov Detector (IWCD). The J-PARC accelerator and neutrino beam facility are being upgraded from the original design beam power of 750 kW to 1.3 MW, and a new experimental facility for the IWCD will be constructed at a site about 900 m from the neutrino production target. The role and prospects of the IWCD measurements, the IWCD facility design, and the latest progress of the J-PARC upgrade and the IWCD project, which started in 2020 towards the start of data taking in 2027, are described.
It was the best of methods, it was the worst of methods... This talk will introduce and discuss the low-ν method for constraining the neutrino flux shape by isolating neutrino interactions with low energy transfer to the nucleus, in two different contexts. Firstly, at few-GeV accelerator neutrino energies relevant for precision oscillation experiments, where the method is well known, but we find that model dependence limits its utility for the precision era. Secondly, at the TeV neutrino energies relevant for planned searches at the Forward Physics Facility (FPF), using neutrinos produced in LHC collisions. We show that the low-ν method would be effective for extracting the muon-neutrino flux shape at the FPF, in a model-independent way, for a variety of detector options, and that the precision would be sufficient to discriminate between various realistic flux models.
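As a toy illustration of the method's core idea (a sketch with assumed flux, cross-section, and energy-transfer shapes, not the analysis of the talk): if the cross section for events with energy transfer ν below a fixed cut ν0 is nearly energy-independent, the energy spectrum of the low-ν sample tracks the flux shape. In the toy below, a flux ∝ E⁻², a total cross section ∝ E, and a uniform ν distribution are chosen so that the cancellation is exact and visible numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
E_MIN, E_MAX, NU_CUT = 100.0, 1000.0, 50.0   # GeV, toy values
N_EVENTS = 2_000_000

# Toy truth: flux(E) ∝ E^-2 and total cross section σ(E) ∝ E, so the
# spectrum of interacting events is ∝ flux·σ ∝ 1/E (inverse-CDF sample).
E = E_MIN * (E_MAX / E_MIN) ** rng.random(N_EVENTS)

# Toy energy transfer to the nucleus: ν uniform in [0, E].
nu = rng.random(N_EVENTS) * E

# Low-ν selection: P(ν < ν0 | E) = ν0/E cancels the σ ∝ E growth,
# leaving a sample whose energy spectrum follows the flux shape alone.
E_low_nu = E[nu < NU_CUT]

# Compare the selected spectrum with the true flux shape, bin by bin.
bins = np.geomspace(E_MIN, E_MAX, 11)
counts, _ = np.histogram(E_low_nu, bins=bins)
flux_integral = 1.0 / bins[:-1] - 1.0 / bins[1:]   # ∫ E^-2 dE per bin

measured_shape = counts / counts.sum()
true_shape = flux_integral / flux_integral.sum()
max_dev = np.max(np.abs(measured_shape / true_shape - 1.0))
print(f"max relative deviation from flux shape: {max_dev:.3f}")
```

In a real analysis the residual energy dependence of the low-ν cross section is the model-dependent correction whose size the talk quantifies.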
The neutrino research program in the coming decades will require improved precision. A major source of uncertainty is the interaction of neutrinos with the nuclei that serve as targets in many such experiments. Broadly speaking, this interaction often depends, e.g., for Charged-Current Quasi-Elastic (CCQE) scattering, on the combination of "nucleon physics" expressed by form factors and "nuclear physics" expressed by a nuclear model. It is important to get a good handle on both.
This talk presents a fully analytic implementation of the Correlated Fermi Gas (CFG) Model for CCQE electron-nuclei and neutrino-nuclei scattering. The implementation is used to compare separately form factors and nuclear model effects for both electron-carbon and neutrino-carbon scattering data.
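For context (an assumption of this summary, not necessarily the parameterization used in the talk), the CCQE "nucleon physics" is often encoded in form factors such as the dipole axial form factor:

```latex
F_A(Q^2) \;=\; \frac{g_A}{\left(1 + Q^2/M_A^2\right)^2},
\qquad g_A \simeq 1.27, \quad M_A \simeq 1~\mathrm{GeV},
```

so that the nucleon-level inputs ($F_A$) and the nuclear model (here, CFG) can be varied independently when confronting electron- and neutrino-scattering data.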
The increased instantaneous luminosity levels expected to be delivered by the High-Luminosity LHC (HL-LHC) will present new challenges to High-Energy Physics experiments, both in terms of detector technologies and software capabilities. The current ATLAS inner detector will be unable to cope with an average number of 200 simultaneous proton-proton interactions resulting from HL-LHC collisions. As such, the ATLAS collaboration is carrying out an upgrade campaign, known as Phase-II upgrade, that foresees the installation of a new all-silicon tracking detector, the Inner Tracker (ITk), designed for the expected occupancy and fluence of charged particles. The new detector will provide a wider pseudorapidity coverage and an increased granularity. In this contribution the expected performance of the ITk detector will be presented, with emphasis on the improvements on track reconstruction resulting from the new detector design.
The VELO is the detector surrounding the interaction region of the LHCb experiment, responsible for reconstructing the proton-proton collision vertices as well as the decay vertices of long-lived particles. It consists of 52 modules with hybrid pixel technology, with the first sensitive pixel at 5.1 mm from the beam line. It operates in an extreme environment, which poses significant challenges to its operation. The detector performance in the first two years of operation will be presented.
In order to fully exploit the High-Luminosity LHC potential in flavour physics, a Phase-II Upgrade of the detector is proposed. Due to the extreme environment of the HL-LHC, the design of the upgraded detector is particularly challenging: assuming the same hybrid pixel design and detector geometry, the front-end electronics of the VELO Upgrade-II will have to cope with rates as high as 8 Ghits/s, with the hottest pixels reaching up to 500 khits/s. The status of the Upgrade-II project will be discussed.
The LHCb detector has undergone a major upgrade, enabling the experiment to acquire data with an all-software trigger, made possible by real-time front-end readout and fast, efficient online reconstruction. At the heart of the real-time analysis is a fast and efficient track reconstruction, without spurious tracks composed of segments associated with hits from different charged particles. The Upstream Tracker (UT), a 4-plane silicon microstrip detector in front of the dipole magnet, is crucial to the charged-particle trajectory reconstruction. The UT also aids the reconstruction of long-lived particles. The UT was installed in LHCb in early 2023. The first year of commissioning was challenging due to data-synchronization issues related to the GBTx properties. We report the lessons learned during the early commissioning phase and look ahead to the upcoming run, when the UT performance with beams will be studied.
At the beginning of 2024, data taking of the Belle II experiment resumed after Long Shutdown 1, primarily required to install a new two-layer DEPFET detector (PXD) and to upgrade accelerator components. The whole silicon tracker (VXD) was extracted, and the two halves of the outer strip detector (SVD) were split for the PXD insertion and then reconnected. The new VXD was commissioned for the start of the new run.
We will describe the challenges of this VXD upgrade and report on the operational experience and the SVD performance obtained with the first year of data. We then introduce various improvements in the reconstruction procedure, which exploit the excellent SVD hit-time resolution to enhance beam-induced-background rejection and reduce the track fake rate, aspects crucial for the higher-luminosity regime.
ALICE 3 is the next generation heavy-ion experiment proposed for the LHC Runs 5-6. Its tracking system includes a vertex detector, on a retractable structure inside the beam pipe to achieve a pointing resolution of better than 10 microns for $p_{\rm T}$>200 MeV/c, and a large-area tracker covering 8 units of pseudorapidity (|$\eta$|<4). The tracking system will be based on Monolithic Active Pixel Sensor (MAPS) technology.
An intensive R&D program has started, to meet the challenging detector requirements: the innermost vertex detector layer, placed at 5 mm from the interaction point, must withstand an integrated radiation load of 9x10$^{15}$ 1 MeV neq/cm$^2$ NIEL; the tracker will cover 50 m$^2$, extending to a radius of 0.8 m and a total longitudinal length of 8 m.
This contribution will discuss the detector requirements and target sensor specifications, the ideas for mechanics and integration, and the R&D challenges expected for the implementation of the ALICE 3 tracking system.
The Belle II experiment is considering an upgrade of its vertex detector with new pixel sensors to prepare for the target luminosity of 6×10^35 cm^-2 s^-1. The 5 layers of the new VTX detector are equipped with the same depleted monolithic active CMOS pixel sensor, featuring a 33 µm pitch, a 100 ns integration time, and trigger logic matching a 30 kHz average rate and 10 µs trigger latency for a maximum hit rate of 120 MHz/cm2.
The two innermost layers are based on an all-silicon ladder concept with air cooling, aiming for a material budget below 0.2 % X0/layer. The three outer layers follow a more traditional approach while still targeting aggressive material budgets, from 0.3 % to 0.8 % X0 depending on the radius.
The VTX could be the first MAPS-based vertex detector running at an e+e- collider, facing high rates while featuring low mass. This contribution will give an overview of the VTX concept, detail critical aspects, and discuss the various on-going tests with prototypes to validate the technical choices.
Explaining the matter-antimatter asymmetry in the Universe requires new sources of CP violation beyond the predictions of the Standard Model (SM). Electric dipole moments (EDMs) of particles, being zero if CP is exactly conserved and extremely small in the SM, are a very clean and sensitive probe for new physics. We will present the status of the muEDM experiment, a search for a muon EDM at PSI (Switzerland) pioneering the frozen-spin technique. Muons will be stored in a solenoid, with a radial electric field tuned to eliminate the spin precession generated by the anomalous magnetic moment. Measuring a residual, longitudinal precession would indicate a non-zero EDM. The first phase of the experiment will demonstrate, by 2026, the feasibility and unique potential of the technique, while reaching a sensitivity competitive with the parasitic measurements performed in the muon g-2 experiments. The ultimate goal of the muEDM experiment is to improve this sensitivity by a factor of 100 by the early 2030s.
Although unobservable in the Standard Model, charged lepton flavour violating (LFV) processes are predicted to be enhanced in new-physics extensions. We present the final results of a search for electron-muon flavour violation in 𝛶(3S) → e±μ∓ decays using data collected with the BaBar detector at the SLAC PEP-II e+e− collider operating at a 10.36 GeV centre-of-mass energy. The search was conducted using a data sample of 118 million 𝛶(3S) mesons from 27 fb−1 of data and is the first search for electron-muon LFV decays of a b quark and b antiquark bound state. No evidence for a signal is found, and we set a limit on the branching fraction of 𝛶(3S) → e±μ∓ and interpret it as a limit on the energy scale divided by the coupling squared of relevant LFV new physics (NP): Λ_NP/g²_NP > 80 TeV.
The Belle and Belle$~$II experiments have collected a $1.4~\mathrm{ab}^{-1}$ sample of $e^+e^-$ collision data at centre-of-mass energies near the $\Upsilon(nS)$ resonances. This sample contains approximately 1.3 billion $e^+e^-\to \tau^+\tau^{-}$ events, which we use to search for lepton-flavour violating decays. We present searches for tau decay to three charged leptons, $\tau^-\to K_{\rm S}^0\ell^{-}$, $\tau^-\to\Lambda\pi^-$, $\tau^-\to \bar{\Lambda}\pi^-$ and $\tau^-\to \ell^-\alpha$, where $\alpha$ is an invisible scalar particle.
The Belle$~$II experiment has collected a $424~\mathrm{fb}^{-1}$ sample of $e^+e^-$ collision data at centre-of-mass energies near the $\Upsilon(nS)$ resonances. This sample contains 389 million $e^+e^-\to \tau^+\tau^{-}$ events, which we use for precision tests of the standard model. We present measurements of leptonic branching fractions, lepton-flavour universality between electrons and muons, the tau mass and the Cabibbo-Kobayashi-Maskawa matrix element $V_{us}$.
The International Linear Collider (ILC) offers a favorable low-background environment as well as the high energy reach needed to measure the properties of heavy quarks, and of the top quark in particular. As these particles are likely messengers of new physics, precision measurements of their properties can be interpreted in the context of searches for beyond-the-Standard-Model (BSM) realizations. The latest results from ILC studies will be discussed in this respect.
The unparalleled production of beauty and charm hadrons and taus in the $6\cdot 10^{12}$ Z-boson decays expected at FCC-ee offers outstanding opportunities in flavour physics. A wide range of measurements will be possible in heavy-flavour spectroscopy, rare decays and CP violation, benefitting from a low-background environment, initial-state energy-momentum constraints, high Lorentz boost, and availability of the full hadron spectrum. The huge data sample also offers improved determinations of tau properties (lifetime, leptonic/hadronic widths, mass), allowing for key tests of lepton universality. Via the measurement of the tau polarisation and of the partial widths and forward-backward asymmetries of heavy quarks, FCC-ee can precisely determine the neutral-current couplings of e$^\pm$, taus and heavy quarks. Such measurements present strong challenges to match the $O(10^{-5})$ statistical uncertainties, imposing strict detector requirements and requiring novel experimental methods to limit systematic effects.
Recent R&D work associated with upgrading the SuperKEKB $e^+e^−$ collider with polarized electron beams and Chiral Belle's program of unique precision measurements using Belle II will be described. These include five measurements of $\sin^2\theta_W$ via left-right asymmetries ($A_{LR}$) in $e^+e^- \rightarrow e^+e^-, \mu^+\mu^-, \tau^+\tau^-, c\bar{c}, b\bar{b}$. $A_{LR}$ yields values of the neutral-current (NC) coupling constant of each fermion species that will match (e, $\tau$) or greatly exceed (b, c, $\mu$) the precision of existing $Z^0$ world averages, but at 10 GeV, thereby providing unique probes of the running of $\theta_W$. The program also probes new physics via the highest-precision measurements, by large factors, of NC universality and tau-lepton properties, including the tau g-2. After providing an update on Chiral Belle's physics potential, we will report on recent R&D related to the provision of the required hardware, including modest upgrades to the SuperKEKB electron ring.
The coupling constant of the strong force is determined from the transverse-momentum distribution of Z bosons produced in 8 TeV proton-proton collisions. The Z-boson cross sections are measured in the full phase space of the decay leptons. The analysis is based on predictions evaluated at third order in perturbative QCD, supplemented by the resummation of logarithmically enhanced contributions in the low transverse-momentum region of the lepton pairs.
A new measurement of inclusive-jet cross sections in the Breit frame in neutral current deep inelastic scattering using the ZEUS detector at the HERA collider is presented. The data were taken at a centre-of-mass energy of 318 GeV and correspond to an integrated luminosity of 347 pb-1. Massless jets, reconstructed using the kt-algorithm in the Breit reference frame, have been measured as a function of the squared momentum transfer, and the transverse momentum of the jets in the Breit frame. The measurement has been used in a next-to-next-to-leading-order QCD analysis to perform a simultaneous determination of parton distribution functions of the proton and the strong coupling, resulting in a value of $\alpha_s(M^2_Z) = 0.1142 \pm 0.0017$ (exp/fit) $^{+0.0006}_{-0.0007}$ (model/param) $^{+0.0006}_{-0.0004}$ (scale), whose accuracy is improved compared to similar measurements. In addition, the running of the strong coupling is demonstrated using data obtained at different scales.
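The scale dependence demonstrated in the analysis can be illustrated with the textbook one-loop running of the coupling (a simplification: the actual fit uses higher-order evolution; the fixed $n_f = 5$ and the quoted central value $\alpha_s(M_Z^2) = 0.1142$ are the only inputs here):

```python
import math

ALPHA_S_MZ = 0.1142    # central value quoted by the analysis
M_Z = 91.1876          # GeV
N_F = 5                # active quark flavours (assumed fixed)

def alpha_s_one_loop(q: float) -> float:
    """Evolve alpha_s from M_Z to scale q (GeV) at one loop."""
    b0 = (33 - 2 * N_F) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 * math.log(q**2 / M_Z**2))

for q in (10.0, M_Z, 500.0):
    print(f"alpha_s({q:8.2f} GeV) = {alpha_s_one_loop(q):.4f}")
```

The coupling decreases logarithmically with the scale, which is the asymptotic-freedom behaviour the jet data at different scales map out.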
The production of jets at hadron colliders provides stringent tests of perturbative QCD. The latest measurements by the ATLAS experiment are presented in this talk, using multijet events produced in the proton-proton collision data at sqrt(s) = 13 TeV delivered by the LHC. Jet cross-section ratios between inclusive bins of jet multiplicity are measured differentially in variables that are sensitive to either the energy-scale or angular distribution of hadronic energy flow in the final state. Several improvements to the jet energy scale uncertainties are described, which result in significant improvements of the overall ATLAS jet energy scale uncertainty. The measurements are compared to state-of-the-art NLO and NNLO predictions, and used to determine the strong-coupling constant. A measurement of new event-shape jet observables defined in terms of reference geometries with cylindrical and circular symmetries using the Energy Mover's Distance is highlighted.
A vast program of measurements of the strong coupling constant alpha_S is being undertaken by CMS. These measurements exploit several QCD-dominated processes that are sensitive to alpha_S, and present different theoretical and experimental challenges. A review of the current public results and perspectives is given.
The jet cross sections and azimuthal correlations among jets with large transverse momentum at CMS are measured, the results are compared to theory predictions, and the strong coupling constant is extracted.
We extend the existing NNPDF4.0 sets of parton distributions (PDFs) to approximate next-to-next-to-next-to-leading order (aN3LO).
We construct an approximation to the N3LO splitting functions that includes all available partial information from both fixed-order computations and from small- and large-x resummation, and estimate the uncertainty on this approximation. We include known N3LO corrections to DIS structure functions.
The determined PDFs will account for uncertainties due both to incomplete knowledge of N3LO terms and to missing higher-order corrections.
We compare our results to the existing aN3LO PDFs from the MSHT group.
Finally, we examine the phenomenological impact of aN3LO PDFs at the LHC, giving a first assessment of the impact on the Higgs and Drell-Yan total production cross sections. We find that aN3LO corrections to NNPDF4.0 PDFs are in agreement with their NNLO counterparts, and that they improve the description of the global dataset and the perturbative convergence.
An ambitious project of the CzechInvest agency, implemented with financial support from the state budget through the Ministry of Industry and Trade under the programme The Country for the Future, supporting highly disruptive start-ups in the Czech Republic. Our goal is to seek out and help create companies/projects that are exceptionally innovative, feasible and scalable.
The low- and high-energy radiation resistance of synthetic compounds for immobilizing high-level wastes (HLWs) is investigated in zirconolites, which offer radiation and thermal stability as well as a high loading capacity for incorporating lanthanides and actinides while maintaining the crystallinity of the host. Nuclear energy contributes significantly to global energy needs with low carbon emissions, providing a clean environment, but spent nuclear fuel poses a threat to ecological and environmental safety. Over time, novel nuclear waste forms have been developed to immobilize high-level wastes. Swift-heavy-ion-induced effects on Nd-doped and Ce- & Y-doped zirconolites are examined for structural changes as a function of temperature, using irradiation at a 15 UD tandem pelletron accelerator beam facility. The doped zirconolites have been found to remain stable after swift-heavy-ion irradiation, making them potential candidates for the immobilization of radioactive wastes and useful in nuclear reactor engineering.
Muon tomography has emerged as a powerful technique for non-invasive imaging in various fields, including nuclear security, geology, and archaeology. For ten years, genetic multiplexed resistive Micromegas (MultiGen) detectors, invented at CEA/Irfu, have been developed for muon tomography, aiming to enhance imaging resolution and efficiency. MultiGen detectors provide telescopes with high spatial resolution, and a low number of electronic channels, making them suitable for deployment in various experimental environments, including those encountered in projects like ScanPyramids and nuclear dismantling.
After describing our effort to optimize the MultiGen-based telescopes, our contribution to the ScanPyramids project and the first three-dimensional muon tomography of a nuclear reactor will be presented. A sustained effort was also made to produce MultiGen detectors in a French PCB company.
Future projects on nuclear dismantling for non-destructive inspection and imaging will be presented.
The ``Laser-hybrid Accelerator for Radiobiological Applications'', LhARA, is being developed to serve the Ion Therapy Research Facility (ITRF). ITRF/LhARA will be a novel, uniquely-flexible facility dedicated to the study of the biological impact of proton and ion beams. The technologies that will be demonstrated can be developed to transform the clinical practice of proton and ion beam therapy (PBT) by creating a fully automated, highly flexible laser-driven system to:
* Deliver multi-ion PBT in completely new regimens at ultra-high dose rate in novel temporal-, spatial- and spectral fractionation schemes; and
* Make PBT widely available by integrating dose-deposition imaging with real-time treatment planning in an automatic, triggerable system.
The status of the ITRF/LhARA project will be described along with the collaboration’s vision for the development of a transformative proton- and ion-beam system.
We discuss the use of Low Gain Avalanche Detector (LGAD) silicon sensors for two specific applications: measuring cosmic rays in space, in collaboration with NASA, and measuring beam properties and patient doses in flash beam therapy for cancer treatment. For the first time, LGADs and fast sampling electronics will be used in space to identify the type of particles in cosmic rays and measure their energies. Similar techniques allow the doses received by patients in flash beam therapy to be measured instantaneously and with high precision.
Since the 1960s, nuclear polarised targets have been essential tools for the study of the spin structure of nucleons. Solid-state polarised targets make use of Dynamic Nuclear Polarisation (DNP). Spin-physics observables strongly depend on the degree of nuclear polarisation. A similar issue arises in Nuclear Magnetic Resonance (NMR) and NMR Imaging, where the sensitivity also strongly depends on the degree of nuclear polarisation. Additionally, one special NMR technique, radiation-detected NMR (RD-NMR), also requires a high degree of polarisation. RD-NMR has been predominantly performed using beams of polarised radioactive nuclei. With the widespread availability of isotopes for medical use, DNP could allow RD-NMR to be used outside of beam facilities.
In this contribution we will illustrate the rich history of polarised targets and present the current project for the first ever use of DNP for polarisation of unstable nuclei to be used for potential medical applications.
Scintillation materials can convert high-energy rays into visible light. Compared with crystal scintillators, glass scintillators have many advantages, such as a simple preparation process, low cost and continuously adjustable composition. Glass scintillators have therefore long been envisaged for nuclear-detection applications such as hadronic calorimetry. Given the deficiencies of crystal and plastic scintillators, a new concept, the Glass Scintillator Hadronic Calorimeter, was proposed. In 2021, researchers at the Institute of High Energy Physics (IHEP) set up the Large Area Glass Scintillator Collaboration (GS group) to study new glass scintillators with high density and high light yield. Currently, a series of high-density, high-light-yield scintillation glasses have been successfully developed. The density of Ce3+-doped borosilicate and silicate glasses exceeds 6 g/cm3, with a light yield of 1000 ph/MeV.
The millions of top quarks already produced at the LHC are ideal for searching for rare top-quark decays. Besides flavor-changing neutral currents, which are highly suppressed in the Standard Model, baryon- and lepton-number conservation can be probed in top-quark events. In this talk, recent searches for rare and beyond-the-Standard-Model top-quark production and decay, with significantly increased sensitivity, will be discussed. Several of the measurements are the first of their kind.
The LHC is a top factory, and Run 2 has delivered billions of top quarks to the experiments. In this contribution, results are presented of searches by the ATLAS experiment for charged Lepton Flavour Violation (cLFV), and of tests of lepton flavour universality, in which the ratio of the branching ratios of the W boson to muons and electrons is measured.
The LHC is a top quark factory and provides a unique opportunity to look for top quark production and decay processes that are highly suppressed or forbidden in the SM. In this contribution results are presented of searches for Flavour Changing Neutral Currents (FCNC) interactions of the top quark. These processes are beyond the experimental sensitivity in the SM, but can receive enhanced contributions in many extensions of the SM. Any measurable sign of such interactions is an indication of new physics. An overview is presented of this search programme, with emphasis on recent searches for FCNC tqX vertices, where X is a Z-boson, a photon, or a Higgs boson, with several Higgs decay channels. A combination for the Higgs-decay related searches is also shown. All searches find good agreement with the background expectation and exclusion bounds are derived that improve very significantly on previous results.
The top quark loop gives the major quantum correction to the Higgs mass squared, playing the dominant role in the well-known Hierarchy Problem. Traditional models address the issue by introducing TeV-scale top partners. However, the absence of these new particles calls for an alternative solution. In this talk, I will present a new scenario where the top Yukawa coupling is modified to tackle the hierarchy problem. In the model, the top Yukawa coupling is strongly suppressed at high scales due to new interactions and degrees of freedom, which will have direct impacts on top physics. I will discuss both the possible UV completions and the relevant phenomenology.
Extrapolations of sensitivity to new interactions and standard model parameters inform the particle physics community about the potential of future upgrade programmes and colliders. Statistical considerations based on inclusive quantities and established analysis strategies typically give rise to a sensitivity scaling with the square root of the luminosity, $\sqrt{L}$. This suggests only a mild sensitivity improvement for the LHC's high-luminosity phase (HL-LHC), compared to the presently available LHC data. We provide clear evidence that the $\sqrt{L}$ scaling for the HL-LHC is overly conservative and unrealistic, using representative analyses in top quark, Higgs boson and electroweak gauge boson phenomenology.
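The $\sqrt{L}$ scaling referred to above follows from simple event counting with fixed selections: for signal and background yields $S = \sigma_S L$ and $B = \sigma_B L$, the Gaussian significance is

```latex
Z \;=\; \frac{S}{\sqrt{B}}
  \;=\; \frac{\sigma_S\, L}{\sqrt{\sigma_B\, L}}
  \;=\; \frac{\sigma_S}{\sqrt{\sigma_B}}\,\sqrt{L}\,,
```

so any faster-than-$\sqrt{L}$ gain must come from the analyses themselves evolving (new channels, improved reconstruction, reduced systematic uncertainties), which is the point this contribution makes quantitatively.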
The electroweak couplings of the top quark are directly accessible in rare "top+X" production processes at the LHC, where top quark pairs or single top quarks are produced in association with bosons. We present a new analysis of the top sector of the Standard Model EFT. The fit is based on a fully NLO parameterization and includes the most recent (differential) results from ATLAS and CMS. We show that Run 2 of the LHC allows, for the first time, the qqttbar and two-fermion operator coefficients to be overconstrained, and yields competitive bounds. We compare the current bounds to projections for the HL-LHC and future lepton colliders, which can yield powerful constraints.
For the first time, correlations between higher-order moments of two and three Fourier flow harmonics (up to orders 8 or 10) are measured in Run 2 XeXe (deformed nuclei) and Run 3 PbPb (spherical nuclei) collision data as a function of collision centrality. The measurements are performed with multiparticle mixed-harmonic cumulants using charged particles in the pseudorapidity region |$\mathrm{\eta}$| < 2.4 and transverse-momentum range 0.5 < $p_\mathrm{T}$ < 3.0 GeV/c. The results are compared to calculations using the IP-Glasma+MUSIC+UrQMD model to constrain the initial-state deformation parameters of Xe nuclei. The higher-order moments of cumulants, skewness, kurtosis, and superskewness (5th moment) are expressed through the $v_2\{2k\}$ (k = 1, ..., 5) harmonics and are measured against centrality. These moments probe the dependence of flow harmonics on the size and initial geometry of the system as well as the transport properties of the quark-gluon plasma.
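A minimal sketch of the Q-vector cumulant technique underlying such measurements (a toy with a single harmonic, fixed multiplicity, and no nonflow or detector effects; all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
V2_TRUE, N_EVENTS, MULT = 0.08, 4000, 300   # toy flow, event count, multiplicity

def sample_event(v2, m):
    """Azimuthal angles from dN/dφ ∝ 1 + 2 v2 cos(2(φ − Ψ)), via rejection."""
    psi = rng.uniform(0, 2 * np.pi)          # random event-plane angle
    phis = []
    while len(phis) < m:
        phi = rng.uniform(0, 2 * np.pi, m)
        keep = rng.uniform(0, 1 + 2 * v2, m) < 1 + 2 * v2 * np.cos(2 * (phi - psi))
        phis.extend(phi[keep])
    return np.array(phis[:m])

# Two-particle cumulant from Q-vectors: <2> = (|Q2|^2 − M) / (M(M−1)),
# averaged over events with pair-count weights.
num = den = 0.0
for _ in range(N_EVENTS):
    phi = sample_event(V2_TRUE, MULT)
    q2 = np.sum(np.exp(2j * phi))
    m = len(phi)
    num += abs(q2) ** 2 - m
    den += m * (m - 1)

v2_2 = np.sqrt(num / den)
print(f"v2{{2}} = {v2_2:.4f} (input {V2_TRUE})")
```

The measurements in the abstract generalize this single-harmonic two-particle example to mixed harmonics and multiparticle correlators, which suppress nonflow and give access to the higher moments.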
The speed of sound squared, $c_s^2$, a property of the quark-gluon plasma (QGP) connected to the QCD equation of state, can be extracted from ultra-central heavy-ion collisions, where the medium has an approximately fixed size and initial-state and thermal fluctuations dominate. We present the first ALICE measurements of the event-by-event mean transverse momentum, $\langle[p_\mathrm{T}]\rangle$, its average and higher-order fluctuations as a function of multiplicity, using particle spectra and multi-particle $\langle[p_\mathrm{T}]\rangle$ cumulant techniques, in ultra-central Pb--Pb collisions at $\sqrt{s_\mathrm{NN}} = 5.02$ TeV. The pronounced rise in $\langle[p_\mathrm{T}]\rangle$ and the sudden transition in higher-order fluctuations at high multiplicities are used to extract $c_s^2$ and to probe the thermalization of the QGP. This approach yields valuable insights into the thermalized nature of the QGP, contributing to a deeper understanding of the QCD equation of state.
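For context, a widely used estimator (following Gardim et al.; not necessarily the exact prescription adopted in this analysis) relates $c_s^2$ to the logarithmic slope of the mean transverse momentum versus multiplicity in ultra-central collisions, where the effective temperature tracks $\langle[p_\mathrm{T}]\rangle$:

```latex
c_s^2\!\left(T_{\mathrm{eff}}\right) \simeq
\frac{\mathrm{d}\ln \langle[p_\mathrm{T}]\rangle}{\mathrm{d}\ln \left(\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta\right)} ,
\qquad T_{\mathrm{eff}} \propto \langle[p_\mathrm{T}]\rangle .
```

At fixed system size, an increase in multiplicity corresponds to an increase in entropy density, so the slope directly probes the equation of state.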
The elliptic flow ($v_2$) of identified hadrons is an observable sensitive to the early dynamics of heavy-ion collisions and to the equation of state (EoS) of the medium. In particular, strange and multi-strange baryons have small hadronic cross-sections and are therefore clean probes of the early stages of the collision system's evolution. Strange and multi-strange baryons are also sensitive to the vorticity of the produced medium and to the magnetic field it experiences at collision time; both effects can be examined experimentally by studying the polarization of these baryons. This talk will present the $v_{2}$ and the polarization of $\Lambda$, $\Xi^\pm$ and $\Omega^\pm$ measured with the high-statistics Pb--Pb data sample collected by the ALICE collaboration during Run 3 of the LHC.
Intense electromagnetic fields from ultrarelativistic heavy ions can trigger photonuclear reactions, which can be used to probe the nuclear gluon distribution at low Bjorken-$x$ and to target gluonic fluctuations. Our study examines ultra-peripheral and nuclear-overlap collisions, covering measurements of the $y$-differential cross section in peripheral Pb--Pb collisions and of the polarization of coherent J/$\psi$ photoproduction. We present new Run 2 measurements, including $p_T$ spectra of incoherent J/$\psi$ production in Pb--Pb UPCs at both forward rapidity and midrapidity, revealing the substructure of the lead nucleus. Additionally, we observe J/$\psi$ photoproduction with proton dissociation in p--Pb collisions, offering fresh insights into proton sub-nucleonic fluctuations. Combining forward and midrapidity data offers a robust test of theoretical models.
We discuss exclusive heavy-vector-meson photoproduction in ultraperipheral collisions at the LHC in a tamed collinear factorisation approach at Next-to-Leading Order (NLO). By employing the Shuvaev transform as a reliable means to relate Generalised Parton Distributions (GPDs) to Parton Distribution Functions (PDFs) at small values of the skewness parameter $\xi$, we perform a parton analysis within the public PDF fitting tool xFitter to determine the gluon PDF at moderate-to-low values of $x$ using recent measurements from the LHC. We comment on the prospects of this approach to ascertain the nuclear gluon PDF in heavy-ion collisions. Additionally, we emphasise that a combined fit to exclusive heavy-quarkonium production data from multiple collision systems will increase our understanding of the underlying theoretical mechanisms at play in these interactions and, importantly, lead to an improved understanding of the behaviour of the gluon distribution in the proton and nuclei at small $x$.
Realizing a high-intensity neutrino beam with over 1 MW of beam power is crucial for the search for CP violation in the lepton sector. The J-PARC accelerator and neutrino beamline are being upgraded towards 1.3 MW beam power for the Hyper-Kamiokande experiment. Magnetic horns are used to focus the secondary particles produced in the neutrino production target and can intensify the neutrino beam by more than an order of magnitude. Significant upgrades have been made in recent years. By upgrading almost all the electrical components of the system (power supplies, transformers, etc.), the rated current has been increased from 250 kA to 320 kA, which raises the neutrino intensity by 10%. Cooling capability has also been improved by developing a new cooling scheme, and work to reinforce the removal of hydrogen gas produced by water radiolysis under intense beams is in progress. Details of the upgrades and operational experience, as well as prospects for 1.3 MW operation, are described.
A plethora of ideas for exploiting the full scientific potential of the fixed-target complex has been brought forward within the Physics Beyond Colliders (PBC) initiative at CERN, seeking to exploit the full intensity the Super Proton Synchrotron (SPS) can provide. Following the findings of a PBC task force, a new project has been mandated to prepare the technical design for a new high-intensity user facility in the ECN3 cavern in the CERN North Area for beam-dump and/or kaon physics. In addition, several experiments wish to have higher intensities of secondary beams to address searches for BSM physics, among them NA64, employing high-energy electron and muon beams, and MUonE, aiming to measure the hadronic vacuum polarisation as an input to explain the $(g-2)_\mu$ puzzle. Also in the QCD sector, several high-intensity experiments are proposed, such as AMBER, with a rich physics programme ranging from determining the proton radius with muon beams to meson-structure investigations.
The Physics Beyond Colliders (PBC) study at CERN explores, among other topics, the potential of extending the Large Hadron Collider (LHC) physics program by Fixed-Target (FT) experiments. One option is to use two bent crystals (double-crystal setup): the first crystal deflects particles from the beam halo onto an in-vacuum target. Another crystal deflects short-lived particles created in the target, thus inducing spin precession. This setup has the potential to measure the electric and magnetic dipole moments of these particles, well beyond what can be done with magnets. The second crystal must induce a deflection of several mrad over a few cm. A proof-of-principle setup, TWOCRYST, is foreseen to be installed in the LHC and operated in 2025. It aims to validate the operational feasibility, assess the crystal properties at TeV energies, and gather data on achievable statistics. This contribution outlines the principle and objectives of the TWOCRYST project and the studies planned.
We review the current plans for the EIC electron injector chain. These include an overview of the accelerator chain necessary to deliver 5, 10 and 18 GeV polarized electrons to the Electron Storage Ring (ESR), as well as the charge-accumulation and polarized-electron-transport approach.
Leveraging the novel concept of energy-recovery linacs (ERLs), we present the LHeC and FCC-eh, which allow the exploration of electron-hadron interactions above the TeV scale. The presented design of the electron accelerator is based on two superconducting linear accelerators in a racetrack configuration that can produce lepton beam energies in excess of 50 GeV. In energy-recovery mode, the accelerator is capable of reaching luminosities in excess of 10$^{34}$ cm$^{-2}$s$^{-1}$ with an energy footprint of around 100 MW for the electron accelerator. The proposed collider concept enables luminosity values high enough for a general-purpose experimental program. While the envisaged physics results have the potential to empower the HL-LHC or FCC-hh physics programs, they also include flagship EW, Higgs, QCD and top quark measurements beyond current precision, and complementary BSM searches. New thematic ep/eA@CERN working groups are being pursued in coordination with the HL-LHC and EIC programs.
The Observing Run 4 (O4) is the most recent period of data taking for the LIGO-Virgo-KAGRA (LVK) network of ground-based gravitational-wave (GW) interferometric detectors. Its first half, O4a, started in May 2023 and ended in January 2024 while its second part, O4b, is scheduled to start in April 2024 after a two-month commissioning break, and to end in January 2025. After an introduction summarizing the evolution of the different detectors since the end of Observing Run 3 (O3) in March 2020 and their current status, this talk will review the performance of the network and of the individual instruments since the beginning of O4. Some emphasis will be put on the improved alert system that allows astronomers to be notified in low latency when a promising transient GW candidate is identified. To conclude, the plans of the three collaborations for the coming years will be discussed.
The success of gravitational wave astronomy hinges on precise data quality assessment and the meticulous validation of detected events. This presentation emphasizes the critical role of these processes, focusing on their importance within the ongoing O4 joint observational campaign of the LIGO, Virgo, and KAGRA detectors. We begin by introducing the concepts of detector sensitivity and data quality, with a particular emphasis on data quality issues. We then examine how these issues impact the search for astrophysical signals, affecting their confidence levels and the reliability of astrophysical parameter estimation results. We emphasize the importance of robust statistical tests in distinguishing genuine signals from noise. Additionally, we delve into the process of event validation, which involves scrutinizing candidate signals to support their astrophysical origin. Our discussion includes the presentation of the framework used in O4 to assess these properties effectively.
So far, high frequency gravitational waves (GWs) remain unexplored messengers of new physics. Proposed sources in the MHz - GHz band include primordial black hole (PBH) mergers, PBH superradiance and several stochastic backgrounds.
Our collaboration is working on tapping into these sources by employing superconducting radio-frequency cavities for high-precision measurements.
The detection principle is to load an EM mode of a cavity so that a GW-induced vibration of the cavity walls up-converts some power into another, unloaded EM mode. The power in the unloaded mode is then taken to be the GW signal.
This talk will outline the ongoing work and future plans of our project in Hamburg. Specifically, the projected sensitivity to potential sources, the cavity design, and the signal readout mechanism will be discussed.
The Euclid mission satellite was launched on July 1st, 2023 from Cape Canaveral, Florida, on a SpaceX Falcon 9 rocket. After a one-month journey it settled into its orbit around the Sun-Earth L2 point and has already finished its commissioning period. The Euclid survey started in February 2024 and will map 15000 deg$^2$ of the sky over the following six years, observing more than 1 billion galaxies with unprecedented image quality. The survey will provide a 3D map of the universe and will improve our knowledge of the cosmological model by an order of magnitude with respect to current constraints. This talk will describe the Euclid mission, its instruments, its current status and first images, and the expected schedule of data releases, and will describe the cosmological probes that will be measured and how they will contribute to our understanding of the dark content of the Universe.
The Large Hadron Collider forward (LHCf) experiment, located at the LHC, plays a crucial role in high-energy particle physics research, specifically in measuring neutral-particle production in the forward pseudorapidity region, to improve the understanding of the interactions of ultra-high-energy cosmic rays with the Earth's atmosphere. Our presentation will summarize the latest advancements from LHCf, focusing on the significant findings from Run 2 in 13 TeV proton-proton collisions. We will show the measured spectra for key particles such as photons, neutrons, $\pi^0$ and $\eta$. Additionally, we will highlight the combined analysis with the ATLAS experiment, in particular emphasizing the energy spectra of very-forward photons in diffractive collisions. Finally, we will discuss the successful data-taking in 13.6 TeV proton-proton collisions in Run 3, preliminary results from the corresponding ongoing analyses, and the motivation for the next operation in proton-oxygen collisions at the LHC.
Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model, and searches for SUSY particles are an important component of the LHC physics program. Naturalness arguments favour supersymmetric partners of the gluons and third-generation quarks with masses light enough to be produced at the LHC. This talk will present the latest results of searches conducted by the ATLAS experiment which target gluino and squark production, including stop and sbottom, in a variety of decay modes within R-parity-conserving (RPC) SUSY scenarios.
Electroweak-inos, superpartners of the electroweak gauge and Higgs bosons, play a special role in supersymmetric theories. Their intricate mixing into chargino and neutralino mass eigenstates leads to a rich phenomenology, which makes it difficult to derive generic limits from LHC data. We present a global analysis of LHC constraints for promptly decaying electroweak-inos in the context of the minimal supersymmetric standard model, exploiting the SModelS software package. Combining up to 16 ATLAS and CMS searches, we study which combinations maximise the sensitivity in different regions of the parameter space, how fluctuations in the data in individual analyses influence the global likelihood, and what is the resulting exclusion power of the combination compared to the analysis-by-analysis approach. Coupled with a bottom-up procedure, we also highlight parameter space regions that maximally violate the standard model hypothesis while remaining compatible with the LHC constraints.
In the quest for physics beyond the Standard Model, TeV-scale New Physics (NP) remains a very attractive possibility. However, this is challenged by constraints across different energy scales, from flavour observables to high-$p_T$ searches at the LHC, going through electroweak precision tests. The emerging picture is that TeV-scale NP cannot have a generic flavour structure. In particular, the idea of new states coupled mainly to the third generation has recently received a lot of attention.
We present a model-independent analysis of this scenario within the SMEFT, with a $U(2)^5$ symmetry imposed on the effective operators. This reduces the number of parameters to 124, which we analyse one-by-one, taking into account RGE effects and flavour violation from the leading $U(2)$ breaking term, and confronting them against current data. We then show how under non-tuned hypotheses NP coupled mainly to the third generation can still be compatible with an effective scale as low as 1.5 TeV.
The interpretation of LHC data, and the assessment of possible hints of new physics (NP), require precise knowledge of the proton structure in terms of parton distribution functions (PDFs). These are usually extracted with a data-driven approach, assuming that the underlying theory is the SM, and later used as inputs for theoretical predictions in searches for NP. The evident inconsistency of the procedure demands an investigation as to whether NP could inadvertently be absorbed in the proton parametrisation and hinder the discovery of subtle deviations from the SM. In order to tackle this problem, we devise two strategies. First, we develop a robust framework to perform simultaneous fits of SMEFT Wilson coefficients and PDFs, enabling us to disentangle the different sources of information coming from the data. Second, we present a systematic methodology designed to determine whether global PDF fits can inadvertently fit away signs of NP in the high-energy tails of distributions.
Drell-Yan (DY) scattering is a highly sensitive probe of new physics. Indeed, since it is a well-measured phenomenon, any deviation between experimental and theoretical results could point to new physics beyond the Standard Model. To enable precise comparisons between theory and experimental data, extensive calculations have been performed in both the electroweak and QCD sectors of the Standard Model. Following this line of reasoning, DY scattering has also been investigated in the Standard Model Effective Field Theory (SMEFT) framework, both at LO and at NLO. Nevertheless, existing results do not include four-fermion operators at NLO in the SMEFT. In this talk we extend these calculations to include all dimension-6 operators with an arbitrary flavor structure, providing NLO QCD and electroweak corrections for the neutral Drell-Yan process.
We study the possibility for large-volume underground neutrino experiments to detect the neutrino flux from captured inelastic dark matter in the Sun. The neutrino spectrum has two components: a mono-energetic "spike" from pion and kaon decays at rest and a broad-spectrum "shoulder" from prompt primary meson decays. We focus on detecting the shoulder neutrinos from annihilation of hadrophilic inelastic dark matter with masses in the range 4-100 GeV. We find the region of parameter space in which these neutrino experiments are more sensitive than direct-detection experiments. For dark matter annihilation to heavy quarks, the projected sensitivity of DUNE is weaker than that of the current (future) Super-Kamiokande (Hyper-Kamiokande) experiment, while for the light-quark channel, only the spike is observable and DUNE will be the most sensitive experiment.
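For reference, the mono-energetic "spike" follows from two-body meson decay at rest, $M^+ \to \mu^+ \nu_\mu$; the neutrino energy is fixed by the meson and muon masses (numerical values from the standard particle masses):

```latex
E_\nu = \frac{m_M^2 - m_\mu^2}{2\, m_M}, \qquad
E_\nu\big(\pi^+ \to \mu^+\nu_\mu\big) \approx 29.8~\mathrm{MeV}, \qquad
E_\nu\big(K^+ \to \mu^+\nu_\mu\big) \approx 235.5~\mathrm{MeV}.
```

The broad "shoulder" arises instead from mesons that decay before stopping, so their decay neutrinos are boosted over a range of energies.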
BESIII is an experiment at a symmetric $e^+e^-$ collider operating at c.m. energies from 2.0 to 4.95 GeV. With the world's largest data sets of $J/\psi$ (10 billion) and $\psi(3686)$ (2.6 billion) decays, and about 25 fb$^{-1}$ of energy-scan data from 3.77 to 4.95 GeV, various dark-sector particles produced in $e^+e^-$ annihilation and meson decay processes can be searched for at BESIII. Axion-like particles (ALPs) are pseudo-Goldstone bosons arising from some spontaneously broken global symmetry, addressing the strong CP or hierarchy problems. In this talk, we report the search for invisible dark-photon decays using initial-state radiation, the search for invisible muonic Z' boson decays, and the search for axion-like particles with a light scalar or vector particle in the muonic decay of $J/\psi$.
The Scintillating Bubble Chamber (SBC) experiment is a novel low-background technique aimed at detecting low-mass WIMP interactions and coherent elastic scattering of reactor neutrinos (CEvNS). The detector consists of a quartz jar filled with liquid argon, spiked with 100 ppm of xenon to act as a wavelength shifter. The target fluid is de-pressurized into a superheated state by a mechanically controlled piston. Particles interacting with the fluid can generate heat (bubbles) and scintillation light, depending on the deposited energy and its density. The detector is equipped with cameras, SiPMs, and piezo-acoustic sensors to detect events. In this talk, I will present the design of SBC and provide an update on the ongoing commissioning and calibration of an SBC device at Fermilab. Finally, I will discuss the collaboration's plans for operation at SNOLAB and at a reactor for physics searches.
Diagrammatic approaches to perturbation theory transformed the practicability of calculations in particle physics. In the case of extended theories of gravity, however, obtaining the relevant diagrammatic rules is non-trivial: we must expand in metric perturbations and around (local) minima of the scalar field potentials, make multiple field redefinitions, and diagonalise kinetic and mass mixings. In this talk, I will motivate these models, introduce the package FeynMG — a Mathematica extension of FeynRules that automates the process described above — and describe an application to a model with unique collider phenomenology.
Sterile neutrinos are well-motivated and simple dark matter (DM) candidates. However, sterile neutrino DM produced through oscillations by the Dodelson-Widrow mechanism is excluded by current X-ray observations and bounds from structure formation. One minimal extension that preserves the attractive features of this scenario is self-interactions among sterile neutrinos. In this work, we analyze how sterile neutrino self-interactions mediated by a scalar affect the production of keV sterile neutrinos for a wide range of mediator masses. We find four distinct regimes of production characterized by different phenomena, including partial thermalization for low and intermediate masses and resonant production for heavier mediators. We show that significant new regions of parameter space become available which provide a target for future observations.
It is well known that inside an oriented crystal a strong acceleration of the e.m. shower development is observed, if a high energy ($> 10$ GeV) e$^\pm$ or photon impinges within 0.1$^\circ$ from one of its crystallographic axes. This phenomenon can be exploited to develop novel ultra-compact calorimeters, capable of containing the energy of the incident particles as efficiently as much thicker non-oriented detectors, with an improved particle identification capability. Such a calorimeter has never been developed before, but the INFN ORiEnted calOrimeter (OREO) project is now aiming at assembling a first prototype composed of oriented PbWO$_4$ crystals. This contribution will present the status of the OREO project and the results of both numerical simulations and beamtests, performed at the CERN PS and SPS with a 3x1 and a 2x2 matrix of oriented PbWO$_4$ crystals. We will also discuss the potential application of such a detector in fixed target experiments and $\gamma$-ray telescopes.
In the LUXE experiment, a tracker and an electromagnetic calorimeter are foreseen for the measurement of positrons. Since the expected number of positrons varies over five orders of magnitude and has to be measured on top of a widely spread low-energy background, the calorimeter must be compact and finely segmented. A sandwich-calorimeter concept is being developed, made of tungsten absorber plates interspersed with thin sensor planes. The sensor planes comprise a silicon pad sensor and flexible Kapton printed-circuit planes for bias-voltage supply and signal transport to the sensor edge, all embedded in a carbon-fibre support. A dedicated readout is being developed, comprising front-end ASICs in 130 nm technology and FPGAs to orchestrate the ASICs. As an alternative, GaAs sensors with readout strips integrated on the sensor are considered. Both prototypes have been studied in an electron beam of 5 GeV. Results will be presented on the homogeneity of the response, edge effects and cross-talk between channels.
A muon collider is being proposed as a next-generation facility. Its incredible physics potential comes at the cost of technological challenges due to the short muon lifetime. The beam-induced background, produced by muon decays in the beams and subsequent interactions, may limit the detector performance. A diffuse flux of photons and neutrons passes through the calorimeter, which therefore requires a design that mitigates this substantial background. The Crilin calorimeter is being studied as a valuable option for the electromagnetic calorimeter: it is a semi-homogeneous calorimeter with lead fluoride crystals interfaced with SiPMs.
In this talk the simulation studies informing the Crilin design are presented, and the Crilin performance and the impact of the background are discussed.
The experimental tests on a prototype are also presented. These tests are fundamental to demonstrate that the requirements established with the muon collider simulations are met by the Crilin technology.
As one of the future collider experiments, CEPC aims to achieve extremely precise measurements of Standard Model particles. This necessitates a high granularity imaging calorimeter system and a dedicated Particle Flow reconstruction. In CEPC’s reference detector, a homogeneous crystal ECAL is proposed, offering optimal EM resolution, a low photon energy threshold and a promising jet energy response.
This report covers our latest R&D efforts on this ECAL. We have studied the optical properties of BGO crystals and SiPM responses through simulations, comparing the results with measurements obtained in the lab. A small-scale module has been developed and tested under beam conditions. At the full-detector level, a novel particle-flow algorithm (PFA) has been developed and its performance has been validated in various scenarios, including the full simulation of 2-jet events in CEPC. The pattern-recognition concepts introduced in this PFA could potentially be considered for the reconstruction of other homogeneous ECALs.
The Mu2e experiment will search for the charged-lepton flavor violating conversion of muons into electrons in the field of a nucleus, planning to reach a single-event sensitivity of 3x10$^{−17}$. The conversion electron has a monoenergetic signature at ~105 MeV and is identified by a high-resolution tracker and an electromagnetic calorimeter (EMC). The EMC is composed of 1348 CsI crystals, each read out by two custom SiPMs, arranged in two annular disks. It will achieve $<$10% energy resolution and 500 ps timing resolution for 100 MeV electrons while maintaining high levels of reliability in a harsh operating environment with high vacuum. The production phase of all EMC components is almost complete. The two disks are assembled, with full integration and testing of all the analog sensors and electronics. This talk summarises the construction and assembly phases, the QC and calibration tests performed, and the installation and commissioning plans as Mu2e nears data-taking.
The commissioning of a Cosmic Muon Veto Detector (CMVD) on top of the mini-ICAL detector at Madurai, India is in progress, using extruded plastic scintillators, embedded WLS fibers and SiPMs as photo-transducers. The CMVD is being built to study the feasibility of a cosmic muon veto for a shallow-depth neutrino experiment. An experimental setup was designed to characterise all the SiPMs in a temperature- and humidity-controlled environment. The readout electronics involve trans-impedance amplifiers followed by an op-amp buffer stage with a combined gain of 1.245 mV/$\mu$A, and a digital storage oscilloscope for storing the data with minimal distortion of the SiPM signal. Various characteristics of the Hamamatsu SiPM (S13360-2050VE), e.g. signal shape, optically correlated and uncorrelated noise, recovery time, etc., were studied as a function of the overvoltage $V_{ov}$, the number of photoelectrons, the ambient temperature and the humidity. This paper will cover the details of those results.
Relativistic heavy-ion beams at the LHC are accompanied by a large flux of nearly-real photons, leading to a variety of photon-induced processes. This talk presents a series of measurements of dilepton production from photon fusion performed by the ATLAS Collaboration. Recent measurements of exclusive dielectron production in ultra-peripheral collisions (UPCs) are presented. These processes provide strong constraints on the nuclear photon flux and its dependence on the impact parameter and photon energy. Comparisons of the measured cross-sections to QED predictions from the Starlight and SuperChic models are also presented. Tau-pair production measurements can constrain the tau lepton's anomalous magnetic dipole moment (g-2), and a recent ATLAS measurement using muonic decays of tau leptons in association with electrons and tracks provides one of the most stringent limits available to date.
We will present the latest measurements of charmonia photoproduction and two-photon processes in ultra-peripheral Pb-Pb collisions at the LHC, using the ALICE detector. These processes probe the nuclear gluon distribution at low Bjorken-x and QED effects in strong fields. ALICE has an active program on UPC physics, which is benefiting from the Run 3 detector upgrades because of a continuous and trigger-less data acquisition mode. This greatly improves the sensitivity and efficiency for these rare phenomena. We will compare the data with theoretical predictions, highlighting the new insights and directions for the field of ultra-peripheral heavy-ion collisions. In addition, we will present the prospects of measuring $a_{\tau}$ with ALICE in Run~3 using tau pair production, and compare them with theoretical predictions and recent measurements.
We will present the latest measurements of the anomalous magnetic moment (g-2) of the tau lepton at CMS. These are obtained from photon-induced processes in heavy-ion or proton-proton collisions.
In ultraperipheral Pb+Pb collisions, intense electromagnetic fields enable the generation of magnetic-monopole pairs via the Schwinger mechanism. Due to their high ionization and unique trajectories in a solenoidal magnetic field, monopoles are expected to leave a large number of clusters in the innermost ATLAS pixel detector without associated reconstructed charged-particle tracks or calorimeter activity. This talk presents a search for monopole-pair production in ultraperipheral Pb+Pb collisions in the monopole mass range of 2-100 GeV, based on 5.02 TeV data recorded in 2015. The results are compared with a leading-order spin-1/2 photon-photon fusion model and a recently developed semiclassical model that includes non-perturbative cross-section calculations, as well as with recent search limits obtained by the MoEDAL Collaboration using complementary techniques.
Heavy quarks, i.e. charm and beauty, are produced at the initial stage of heavy-ion collisions. In the presence of a large angular momentum and initial magnetic field, they can be polarised. The quark polarisation is expected to be transferred to the hadron during the hadronisation process, and it can be probed by measuring the $\rho_{00}$ parameter of the spin density matrix element of spin-1 hadrons.
We will present the first measurement of the $\rho_{00}$ parameter of the D$^{*+}$ meson in Pb--Pb collisions at $\sqrt{s_{\rm NN}}=5.02~$TeV, collected by ALICE during LHC Run 2, and a comparison with the ${\rm J}/\psi$ polarisation measurement.
The $\rho_{00}$ parameter of D$^{*+}$ mesons measured in high-energy pp collisions will also be presented, including the first studies with Run 3 data. In this case, the measurement is performed also for D$^{*+}$ mesons originating from B-meson decays, expected to be longitudinally polarised due to the helicity conservation in weak decays.
The Future Circular Collider physics programme is based on the sequence of a 90-365 GeV high-luminosity e+e- collider (FCC-ee) followed by a 100 TeV hadron collider (FCC-hh). A main goal of the FCC is to fully study the properties of the Higgs boson. The FCC-ee makes use of the well-known c.m. energy by using Z tagging to perform a model-independent determination of the ZH cross-section at 240 GeV, and thereby measure the coupling to Z bosons and the total width. The couplings to W, b, c, g (and, partially, strange) will be measured at the FCC-ee, and the rarer γγ, Zγ, μμ and invisible decays at the FCC-hh. The precise top-quark measurements at the FCC-ee will be instrumental in determining the top Yukawa coupling at the FCC-hh. The Higgs self-coupling will be determined from higher-order corrections to σ(ZH) at the FCC-ee, and at percent level from HH production at the FCC-hh. Finally, the FCC-ee offers a unique opportunity to determine the electron Yukawa coupling via resonant s-channel Higgs production.
At the future e+e- circular collider FCC-ee, a long data-taking period is also foreseen at the ttbar production threshold and slightly above, up to $\sqrt{s}$=365 GeV, with more than 300 000 ZH events expected at these energies. We study the precision on the Higgs mass which can be reached with this dataset, and combine it with the measurement obtained with the same recoil-mass technique in the e+e- and mu+mu- final states at $\sqrt{s}$=240 GeV, which is also presented in detail in this report. We also present the precision which can be obtained on the total ZH cross-section measurement at $\sqrt{s}$=365 GeV, and the test which can be performed on the evolution of the total ZH cross section as a function of $\sqrt{s}$. Expected precisions on the measurements at the energy points of the ttbar production threshold scan are also presented.
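The recoil-mass technique mentioned above uses only the measured lepton pair and the known centre-of-mass energy; for $e^+e^- \to ZH$ with $Z \to \ell^+\ell^-$,

```latex
m_{\mathrm{recoil}}^2 \;=\; s + m_{\ell^+\ell^-}^2 - 2\sqrt{s}\,\big(E_{\ell^+} + E_{\ell^-}\big) ,
```

so the Higgs mass peak is reconstructed without measuring the Higgs decay products, which is what makes the ZH cross-section determination model-independent.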
Subtle field-theoretical effects suggest the presence of additional Higgs contributions in Standard Model processes. This has been supported by electroweak lattice calculations, e.g. for vector boson scattering. These effects can be included in perturbation theory by a suitable augmentation.
We use such augmented perturbation theory to determine the impact at next-to-leading order at lepton colliders, from LEP to future machines such as the FCC, in collisions with fermion-antifermion final states. After providing the formal background, we give first approximate results for differential and total cross sections. We will discuss what would be necessary to detect these effects.
At a center-of-mass energy of 10 TeV, muon collisions copiously produce Higgs bosons, enabling the measurement of their couplings to bosons and fermions with unprecedented accuracy, achievable with just 10 ab$^{−1}$ of data. Additionally, pairs of Higgs bosons are produced with a significant cross section, enabling the determination of the second term of the Higgs potential through measurements of the double-Higgs production cross section and the trilinear self-coupling. These collisions also offer the possibility to study triple-Higgs production, allowing the determination of the quadrilinear coupling and thus a deeper investigation of the Higgs potential. The muon collider enables Higgs physics studies already at a 3 TeV center-of-mass energy, laying the groundwork for the higher-energy experiment. This contribution discusses the expected accuracy of Higgs measurements using detailed detector simulations, which include physics and machine backgrounds at both center-of-mass energies.
Operating in the Higgs-factory mode and beyond, at center-of-mass energies up to 1 TeV, the ILC offers a plethora of measurements in the Higgs sector to address open questions of the Standard Model of particle physics and cosmology. This will be discussed from the perspective of global fits and individual measurements of the Higgs properties, including its exotic and CP-violating interactions as well as the trilinear self-coupling.
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode will make possible the study of DIS in the TeV regime, providing electron-proton collisions with instantaneous luminosities of $10^{34}$ cm$^{−2}$s$^{−1}$. With a charged-current cross section around 200 (1000) fb at the LHeC (FCC-eh), Higgs bosons will be produced abundantly. We examine the opportunities for studying several of its couplings, particularly $H\to b\bar b$, $H\to c\bar c$, $H\to WW$, and Higgs decays to invisible particles. We also discuss the possibilities to measure anomalous Higgs couplings, and the implications of precise parton densities measured in DIS on Higgs physics. We finally address the complementarity in measuring Higgs couplings between the LHeC and the FCC-eh, the respective hadronic colliders (the HL-LHC and the FCC-hh), and $e^+e^-$ Higgs factories, and also emphasise the gain in accuracy achievable by combining results between those colliders.
The MicroBooNE experiment is a Liquid Argon Time Projection Chamber (LArTPC) detector located at Fermilab. MicroBooNE is part of the Short Baseline Neutrino (SBN) Program and detects neutrinos from the on-axis Fermilab Booster Neutrino Beam (BNB) and from the off-axis Neutrinos at the Main Injector (NuMI) beam. Understanding electron-neutrino interactions on argon with precision is crucial for the neutrino oscillation physics measurements of the SBN program and the DUNE experiment, yet data constraints to date are scarce. This talk will focus on MicroBooNE's electron-neutrino cross-section measurements, consisting of three published measurements and ongoing analyses using neutrinos from the BNB and NuMI beamlines.
The MicroBooNE liquid argon time projection chamber (LArTPC) experiment operated in the Fermilab Booster Neutrino Beam (BNB) and Neutrinos at the Main Injector (NuMI) beam from 2015 to 2021. Among the major physics goals of the experiment is a detailed investigation of neutrino-nucleus interactions. MicroBooNE currently possesses the world's largest neutrino-argon scattering data set, with a number of published cross-section measurements and more than thirty ongoing analyses studying a wide variety of interaction modes. This talk provides an overview of MicroBooNE's measurements of muon-neutrino inclusive and pionless cross sections.
MicroBooNE is a Liquid Argon Time Projection Chamber (LArTPC), able to image neutrino interactions with excellent spatial resolution, enabling the identification of complex final states resulting from neutrino-nucleus interactions. MicroBooNE currently possesses the world's largest neutrino-argon scattering data set, with a number of published cross section measurements and more than thirty ongoing analyses studying a wide variety of interaction modes. This talk provides an overview of MicroBooNE's measurements of topologies with pions in the final state, as well as the first cross section measurements of eta and Lambda production, and studies of neutron detection in argon.
NOvA is a long-baseline accelerator-based neutrino experiment based in the USA. For its physics goals, NOvA uses two functionally identical detectors. The Near Detector (ND) is situated at Fermilab, 1 km from the neutrino target, and the Far Detector is located at Ash River, MN, a distance of 810 km from the neutrino source. The ND sees a high-intensity neutrino beam due to its close proximity to the neutrino target, giving us a unique opportunity for high-precision neutrino cross-section measurements. In this talk, we present our latest results on the muon-antineutrino charged-current inclusive cross-section measurement in the NOvA ND. The new measurement is a triple-differential cross section in the antimuon kinematic phase space and in the total energy of all observable final-state hadrons. We also compare our data to various neutrino-generator predictions; for example, comparisons to the GENIE, NuWro, GiBUU and NEUT generators are presented.
T2K is a long-baseline experiment for the measurement of neutrino oscillations. The neutrino flux and neutrino-nucleus cross sections are measured by a suite of near detectors, including ND280, an off-axis multipurpose magnetised detector; WAGASCI, featuring a water-enriched target at a different off-axis angle; and INGRID, an on-axis detector composed of sandwiched layers of iron and scintillator.
The near detectors perform a wide variety of neutrino-nucleus cross-section measurements on different targets and for different final states. Such a program, to control systematic uncertainties for T2K and beyond, provides high-quality data to benchmark improved models of neutrino-nucleus scattering.
We will review the most relevant cross-section results, including Charged Current (CC) interactions on water, Neutral Current interactions (NC) and electron-neutrino CC interactions with pions in the final state.
ProtoDUNE-SP was a large-scale prototype of the single-phase DUNE far detector, which took test-beam data in Fall 2018. The beam consisted of positive pions, kaons, muons, and protons, and these data are being used to measure various hadron-argon interaction cross sections. Uncertainties in these cross sections are a significant systematic uncertainty in long-baseline neutrino oscillation analyses. These measurements will provide important constraints for the nuclear ground-state, final-state interaction, and secondary-interaction models of argon-based neutrino-oscillation and proton-decay experiments such as DUNE. This talk will report the results of the cross-section measurements of pions, protons and kaons that interact inelastically with argon.
FASER, the ForwArd Search ExpeRiment, has successfully taken data at the LHC since the start of Run 3 in 2022. With 2022 data alone, FASER directly detected the first muon and electron neutrinos at the LHC, opening the window on the new subfield of collider neutrino physics. In this talk, we will give a full status update of the FASER and FASERnu experiments and their latest results, with a particular focus on our very first measurements of neutrino cross sections in the TeV energy range, along with their implications for far-forward hadron production.
The upgraded LHCb detector is taking data at a five times higher instantaneous luminosity than in Run 2. To cope with the harsher data-taking conditions, LHCb deployed a purely software-based trigger composed of two stages: in the first stage the selection is based on a fast and simplified event reconstruction, while in the second stage a full event reconstruction is used. This leaves room to perform a real-time alignment and calibration after the first trigger stage, allowing offline-quality detector performance in the second stage of the trigger. In this talk we will present the framework and procedure for real-time alignment of the LHCb detector and show key figures such as tracking and PID performance on Run 3 data.
The increased particle flux expected at the HL-LHC poses a serious challenge for the ATLAS detector performance, especially in the forward region, where the detector granularity is reduced. The High-Granularity Timing Detector (HGTD), featuring novel Low-Gain Avalanche Detector (LGAD) silicon technology, will provide pile-up mitigation and luminosity measurement capabilities, and augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0. Two double-sided layers will provide a timing resolution better than 50 ps per track for MIPs throughout the HL-LHC running period, providing a new timing-based handle to assign particles to the correct vertex. The LGAD technology provides suitable gain to reach the required signal-to-noise ratio, with a granularity of 1.3 × 1.3 mm² (3.7M channels in total). Requirements, specifications, technical designs, recent updates, and the project status will be presented, including the ongoing R&D efforts on sensors, the readout ASIC, etc.
The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive Phase 2 upgrade program to prepare for the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector for CMS will measure minimum ionizing particles (MIPs) with a time resolution of ~30-40 ps. The precise timing information from the MIP timing detector (MTD) will reduce the effects of the high levels of pileup expected at the HL-LHC, bringing new capabilities to CMS. The MTD will be composed of an endcap timing layer (ETL), instrumented with low-gain avalanche diodes and read out with the ETROC chip, and a barrel timing layer (BTL), based on LYSO:Ce crystals coupled to SiPMs and read out with the TOFHIR2 chip. This contribution will provide an overview of the MTD design and its expanded physics capabilities, describe the latest progress towards prototyping and production, and show the latest results demonstrating that the target time resolution has been achieved.
The ALICE Collaboration proposed a completely new apparatus, ALICE 3, for the LHC Runs 5 and 6, which will enable novel studies of the QGP focusing on low-pT heavy-flavour production and on precise multi-differential measurements of dielectron emission. The detector consists of a large pixel tracker covering eight units of pseudorapidity and a comprehensive particle identification (PID) system, implementing silicon time-of-flight (TOF), featuring 20 ps resolution, ring-imaging Cherenkov (RICH), muon identification, and electromagnetic calorimetry. High-purity separation of electrons with pT as low as 60 MeV/c and up to about 3 GeV/c at midrapidity, and of hadrons up to a pT of 20 GeV/c, is achieved by the TOF and RICH, which are arranged in barrel and end-caps for full rapidity coverage. This contribution will present the PID subsystems conceptual design and technology options, as well as expected performance from simulation studies and first results achieved in ongoing R&D activities.
During the second LHC long shutdown, the LHCb experiment underwent a major upgrade in order to be able to operate at an instantaneous luminosity of $2\times10^{33}$ cm$^{-2}$s$^{-1}$, reading data at the full LHC bunch-crossing rate. The RICH system of LHCb has been completely refurbished, installing new photon detectors (Multi-anode Photomultiplier Tubes) equipped with a custom-developed read-out chain. In order to reduce the unprecedented peak occupancy, the full optics and mechanics of the RICH1 detector have been re-designed to distribute the Cherenkov photons over a larger surface of the photon-detector planes. An overview of the RICH upgrade programme is given, covering the design, installation, commissioning and early operations phase. The validation of the newly installed detectors and performance studies employing datasets collected during the 2022 and 2023 data-taking with pp, pAr, PbPb and PbAr collisions are presented.
TORCH is a novel particle-identification detector for the high-luminosity Upgrade II of LHCb. This research also contributes to CERN's DRD4 programme. TORCH is designed to provide 15 ps timing resolution for charged particles, resulting in K/pi (p/K) particle identification up to 10 (15) GeV/c momentum over a 10 m flight path. Cherenkov photons radiated from a 1 cm thick quartz plate are focussed onto micro-channel-plate photomultipliers (MCP-PMTs) with fast timing and high spatial resolution. Test-beam results from the CERN PS in 2022 will be presented, as well as TORCH's recently developed light-weight carbon-fibre support structure and novel exo-skeleton jigging system. The development of a 16 x 96 pixelated MCP-PMT will also be described. Finally, we present progress on the computationally challenging TORCH pattern recognition, which has been implemented on IPUs, a novel highly parallel processor.
The CEPC is a proposed electron-positron Higgs factory. It is expected to deliver millions of Higgs bosons, Teras ($10^{12}$) of Z bosons, and Gigas ($10^9$) of W bosons. On top of the precise Higgs property measurements, it could also conduct an intriguing flavor physics program that is highly complementary to other flavor physics facilities, as well as to the other physics measurements at CEPC.
We explore the flavor physics landscape and summarize the physics reach from 40 different physics benchmarks. These studies lead to the conclusion that flavor physics measurements could provide access to New Physics at an energy scale of 10 TeV or even higher. We will introduce this work and discuss future action items accordingly.
PIONEER is a next-generation precision experiment proposed at PSI to perform high precision measurements of rare pion decays. By improving the precision on the experimental result of the charged pion branching ratio to electrons vs. muons and the pion beta decay by an order of magnitude, PIONEER will provide a pristine test of Lepton Flavour Universality and the Cabibbo angle anomaly. In addition, various exotic rare decays involving sterile neutrinos and axions will be searched for with unprecedented sensitivity.
This presentation will cover the theoretical motivations for PIONEER, as well as the ongoing simulations efforts to precisely determine the detector performance and inform decisions on the experiment design. It will show results from recent beam test campaigns on the pion beamline itself and various sensor candidates. In addition, new developments on the path to a multi-layer prototype Active Target detector system with sensor and readout electronics will be presented.
The Belle and Belle II experiments have collected a 1.1 ab$^{-1}$ sample of $e^+ e^- \to B\bar{B}$ collisions at the $\Upsilon(4S)$ resonance. These data, with low particle multiplicity and constrained initial-state kinematics, are an ideal environment to study semileptonic and leptonic decays of the $B$ meson. Combined with theoretical inputs, measurements of both inclusive and exclusive decays yield information about the Cabibbo-Kobayashi-Maskawa matrix elements $V_{cb}$ and $V_{ub}$. We review our latest results, which include the first Belle II results with fully leptonic and inclusive semileptonic decays.
We investigate the role of different schemes in deciding at what order to truncate form factor expansions for semileptonic decays and how to determine the appropriate combination of truncations when multiple form factors are involved. The specific choice of truncation orders can significantly impact the reported values of exclusive $|V_{cb}|$. Additionally, we explore whether and how unitarity bounds, which provide a straightforward method for regularizing overfitting of form factors, may introduce bias in measurements. We formulate a set of suggestions on how LHCb and Belle II should treat this subject in future measurements.
It has long been known that interference effects play an important role in understanding the shape of the $\pi^+\pi^-$ spectrum of resonances near threshold. In this manuscript, we investigate the role of the $\rho-\omega$ interference in the study of semileptonic $B \to \pi^+ \pi^- \ell \bar \nu_\ell$ decays. We determine for the first time the strong phase difference between $B \to \rho \ell \bar \nu_\ell$ and $B \to \omega \ell \bar \nu_\ell$ from a recent Belle measurement of the $m_{\pi\pi}$ spectrum of $B \to \pi^+ \pi^- \ell \bar \nu_\ell$. In addition, we investigate different ways of modelling the $S$-wave component within an $m_{\pi\pi}$ window ranging from $2 m_\pi$ to $1.02 \, \mathrm{GeV}$. We also determine the absolute value of the Cabibbo-Kobayashi-Maskawa matrix element, $|V_{ub}|_{\rho-\omega} = ( 3.03^{+0.49}_{-0.44}) \times 10^{-3}$, which takes into account the $\rho-\omega$ interference.
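To illustrate the kind of lineshape distortion at play, a toy coherent sum of two Breit-Wigner amplitudes can be written down; the masses and widths are PDG-like values, while the relative magnitude and phase are arbitrary assumptions, not the fit model used in the paper:

```python
import cmath

def bw(m, m0, gamma):
    """Simple relativistic Breit-Wigner amplitude (all quantities in GeV)."""
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma)

def intensity(m, phase, frac=0.1):
    """|A_rho + frac * e^{i*phase} * A_omega|^2: toy coherent sum with an
    assumed relative magnitude `frac` and strong phase `phase`."""
    amp = bw(m, 0.77526, 0.1491) + frac * cmath.exp(1j * phase) * bw(m, 0.78266, 0.00868)
    return abs(amp) ** 2

# Near m = m_omega the narrow omega interferes with the broad rho tail;
# the distortion of the dipion spectrum flips with the relative phase.
m = 0.7815
print(intensity(m, 0.0) > intensity(m, cmath.pi))
```

Scanning `phase` shows why the $m_{\pi\pi}$ spectrum near 0.78 GeV is sensitive to the strong phase difference between the two amplitudes.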
Observed anomalies in the flavor sector, such as those displayed by the LFU ratios $R_{D^{(*)}}$ in tree-level $b\rightarrow c \tau\nu_\tau$ transitions, motivate the search for new physics beyond the Standard Model. The semileptonic tree-level $b\rightarrow u$ sector may hide similar unexplored new physics. Considering a model-dependent approach, we explore the decay channel $B_c\rightarrow D \tau\nu_\tau$ within the framework of the $U_1$ and $S_1$ leptoquark models. As there are fewer experimentally measured observables in the $b\rightarrow u$ sector than in the $b\rightarrow c$ sector, we correlate the new physics couplings of the two sectors within these leptoquark models. The parameter space of the new couplings is obtained using currently available experimental data. We then make predictions for some $B_c\rightarrow D\tau \nu_\tau$ observables, such as the branching fraction, the LFU ratio and the forward-backward asymmetry, within the two leptoquark models.
The CERN Future Circular Collider (FCC) is a post-LHC project aiming at direct and indirect searches for physics beyond the SM in a new 91 km tunnel. In addition, the FCC-ee offers unique possibilities for high-precision studies of the strong interaction in the clean e+e- environment, thanks to its broad span of c.m. energies from the Z pole to the top-pair threshold, and its huge integrated luminosities yielding $10^{12}$ and $10^8$ jets from Z and W decays, respectively, as well as $10^5$ pure gluon jets from Higgs decays. Selected studies of the impact of FCC-ee on improving our understanding of QCD will be summarized including: (i) $\alpha_s$ extractions with permil uncertainties, (ii) parton showers and jet properties (udsg discrimination, event shapes, multijet rates, jet substructure,...), (iii) heavy-quark jets (dead cone, charm-bottom separation, gluon-to-QQbar splitting,...); and (iv) nonperturbative QCD phenomena (color reconnection, baryon and strangeness production,..).
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime providing electron-proton (nucleus) collisions with per nucleon instantaneous luminosities around $10^{34}$ ($10^{33}$) cm$^{−2}$s$^{−1}$. Following the renewal of the CERN mandate, in this talk we present the status of the studies on proton and nuclear structure at the LHeC and FCC-eh, in light of the findings at HERA and LHC and the perspectives in future LHC runs and those at the EIC. We examine the possibilities and plans for future activities on proton and nuclear PDFs in the collinear limit and beyond, high-energy QCD and unravelling the saturation regime in QCD, diffraction and extraction of alphas, considering the synergies and complementarities with the LHC and the EIC.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
We evaluate the unintegrated gluon distribution of the proton starting from a parametrization of the color dipole cross section including Dokshitzer--Gribov--Lipatov--Altarelli--Parisi (DGLAP) evolution and saturation effects. To this end, we perform the Fourier-Bessel transform of $\sigma(x,r)/\alpha(r)$. At large transverse momentum of gluons we match the so-obtained distribution to the logarithmic derivative of the collinear gluon distribution. We check our approach by calculating the proton structure function $F_L(x,Q^2)$ finding good agreement with HERA data.
Reference: Phys. Lett. B 835 (2022) 137582.
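The Fourier-Bessel step described above can be illustrated with a toy Gaussian dipole profile, for which the transform is known in closed form; the slope parameter below is arbitrary and this is not the parametrization used in the paper:

```python
import math

def j0(x, n=400):
    """Bessel function J0 via its integral representation,
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, midpoint rule."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

def fourier_bessel(k, a=0.25, rmax=12.0, n=1000):
    """Order-0 Fourier-Bessel transform of a Gaussian profile,
    integral_0^inf dr r J0(k r) exp(-a r^2), truncated at rmax
    (the Gaussian tail is negligible there), midpoint rule."""
    h = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += r * j0(k * r) * math.exp(-a * r * r)
    return total * h

# Closed-form check: the transform of exp(-a r^2) is exp(-k^2/(4a)) / (2a).
k = 1.0
print(abs(fourier_bessel(k) - math.exp(-k * k) / 0.5) < 1e-3)
```

The same numerical machinery, with $\sigma(x,r)/\alpha(r)$ in place of the Gaussian, is the kind of transform the abstract refers to; the closed-form case simply validates the quadrature.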
The need for percent-level precision in high-energy physics requires the inclusion of QED effects in theoretical predictions, such as the contributions coming from photon-initiated processes. It is therefore crucial to correctly determine the photon content of the proton.
In this work, we extend the NNPDF4.0 NNLO determination of parton distribution functions (PDFs) with a photon PDF, determined within the LuxQED formalism, which evolves with the gluon and quark PDFs via DGLAP equations that contain NLO QED corrections.
We study the impact of the QED effects on the NNPDF4.0 methodology, compare our results with NNPDF3.1QED and other recent QED PDF fits, and assess the impact of the photon PDF on photon-initiated processes at the LHC.
We present recent results based on the IR-improvement of unintegrable singularities in the infrared regime via amplitude-based resummation in $\mathrm{QED}\times\mathrm{QCD} \subset SU(2)_L \times U(1) \times SU(3)_c$. In the context of precision LHC/FCC physics, we focus on specific examples, such as the removal of QED contamination in PDFs evolved from data at $Q_0^2\sim 2\,\mathrm{GeV}^2$ and used in evaluating precision observables in $pp\rightarrow Z + X \rightarrow \ell\bar\ell + X'$, for which we discuss new results and new issues.
We measure proton structure parameters sensitive primarily to valence quarks using $8.6~{\rm fb}^{−1}$ of data collected by the D0 detector in $\sqrt{s} = 1.96~{\rm TeV}$ ${\rm p\bar{p}}$ collisions at the Fermilab Tevatron. We exploit the property of the forward-backward asymmetry in dilepton events to be factorized into distinct structure parameters and electroweak quark-level asymmetries. Contributions to the asymmetry from s, c and b quarks, as well as from u and d sea quarks, are suppressed allowing valence u and d quarks to be separately determined. We find a u to d quark ratio near the peak values in the quark density distributions that is smaller than predictions from modern parton distribution functions.
The internal motion of partons has been studied through its impact on the very low transverse-momentum spectra of Drell-Yan pairs created in hadron-hadron collisions at NLO, using the Parton Branching (PB) Method, which describes the evolution of transverse-momentum-dependent (TMD) parton distributions. The main focus is on the dependence of the intrinsic transverse momentum of partons in the initial state (intrinsic kT) on the collision centre-of-mass energy, $\sqrt{s}$. While standard Monte Carlo event generators require parton intrinsic transverse-momentum distributions that depend strongly on $\sqrt{s}$, in the PB Method there is no such dependence. In addition, it will be shown that requiring a minimal transverse momentum of the radiated parton at a branching of the order of 1-2 GeV introduces a $\sqrt{s}$ dependence of the intrinsic kT in the PB Method, which becomes steeper as the minimal value increases.
Photographic films are still used in a number of medical and industrial X-ray imaging applications that need to reconstruct an image on a flexible surface. We will present the FleX-RAY project, which aims to create an electronic X-ray detector with the flexibility of photographic film, suitable for a variety of applications.
FleX-RAY uses a sheet of flexible scintillating fibers to detect X-rays and guide the scintillation light to arrays of silicon photomultipliers. The detector also self-reports its curved shape using optical waveguides with Bragg gratings on a flexible glass substrate, which act as strain sensors.
In this contribution, we present the detector concept, simulations of the expected detector performance for pipe-inspections and results of the initial tests on the FleX-RAY prototype.
This study explores the possibility of employing pure cesium iodide (CsI) crystals for a total-body positron emission tomography (TB-PET) device. When operated at cryogenic temperatures, these crystals exhibit an excellent light yield, up to 120 photons/keV, which is approximately four times larger than LYSO. Although CsI has a slightly smaller stopping power and a slower decay time compared with BGO and LYSO, its significantly lower price (3 to 5 times cheaper than its counterparts) could enable the realization of accessible TB-PET devices.
In this project we also investigate the feasibility of using larger, monolithic crystals read out by an array of solid-state photosensors. This approach significantly simplifies the device's design and assembly, further reducing costs. We show that modern machine-learning algorithms for image processing can potentially enable the realization of a monolithic PET with performance comparable to or better than that of a pixelated one.
The CREMA project investigates channeling of low-energy carbon ions interacting with bent crystals in the hundreds-of-MeV/u energy range. The experimental setup to assess the process efficiency will be operated in the experimental area (XPR) of the CNAO accelerator complex in Pavia (Italy). The project aims to optimise a bent crystal that could later be installed in a medical synchrotron to complement or replace the electrostatic extraction septum for beam extraction. The proposed layout and calibration measurements on the CNAO beam will be presented, as well as simulations that confirm the feasibility of the channeling tests.
Next-generation high-energy physics experiments will feature high-granularity detectors with thousands of readout channels, thus requiring low-power, compact ASICs.
CAEN FrontEnd Readout System (FERS) integrates ASICs on small, synchronizable and distributable systems with Front and Back Ends. The A5203 FERS houses the recently released CERN picoTDC ASIC and provides high-resolution time measurements of ToA and ToT.
In this talk we will analyze the performance of the A5203 unit: a 3.125 ps LSB, ToA measurements down to ~7 ps RMS over a single board, and ~20 ps RMS for input signals of variable amplitude. The walk effect introduced by different amplitudes is corrected using the ToT. Besides walk correction, the ToT is used for signal-amplitude reconstruction and background reduction.
The A5203 has been used in various applications, both experimental and industrial. These will be briefly illustrated, as will the upcoming units in which the picoTDC will be combined with new Weeroc ASICs.
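The principle of a ToT-based walk correction can be sketched with toy data; the 1/ToT walk model, the jitter, and all numbers below are assumptions for illustration, not the FERS calibration procedure:

```python
import random
import statistics

random.seed(1)

# Toy data: the true arrival time is 0; the measured ToA carries an
# amplitude-dependent walk, proxied here by an assumed 1/ToT model
# (all times in ns).
events = []
for _ in range(5000):
    tot = random.uniform(5.0, 50.0)               # time over threshold
    toa = 4.0 / tot + random.gauss(0.0, 0.02)     # walk + 20 ps jitter
    events.append((tot, toa))

# Calibrate: least-squares fit of ToA vs 1/ToT, then subtract the fitted walk.
xs = [1.0 / tot for tot, _ in events]
ys = [toa for _, toa in events]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
offset = my - slope * mx
corrected = [toa - (slope / tot + offset) for tot, toa in events]

# The spread of the corrected ToA shrinks to the intrinsic jitter level.
print(statistics.pstdev(ys) > statistics.pstdev(corrected))
```

After the correction, the residual spread is set by the intrinsic jitter rather than by the amplitude-dependent walk, which is why the ToT information recovers the timing resolution for signals of variable amplitude.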
The ATLAS experiment is gearing up for the HL-LHC upgrade, with an all-silicon Inner Tracker (ITk). The ITk will feature a pixel detector surrounded by a strip detector, with the strip system consisting of 4 barrel layers and 6 endcap disks. The strip tracker will consist of 11,000 silicon sensor modules in the central region and 7,000 modules in the end-cap region, which are mounted onto larger carbon-fibre support structures called ‘petals’ for the end-cap and ‘staves’ for the barrel. To facilitate the assembly of these larger detector structures, an automated system has been developed for mounting modules on petals and staves. The automated procedure streamlines and simplifies the production process and ensures uniformity across the international production clusters. This contribution presents the latest results from the assembly of the first ATLAS ITk pre-production petals and staves, alongside electrical test results and performance measures.
Water distribution systems can experience high levels of leakage, causing financial losses and supply problems, as well as posing a risk to public health.
In this talk we present a non-invasive water-leakage detection technique based on cosmic-ray neutrons, which exploits the difference in the above-ground thermal-neutron flux between dry and wet soil conditions. The potential of the technique has been assessed by means of an extensive set of GEANT4-based Monte Carlo simulations, involving realistic scenarios based on the Italian aqueduct design guidelines.
The simulation studies focused on sandy soils, and the results suggest that a significant signal associated with a leakage could be detected with a data-taking period lasting from a few minutes to half an hour, depending on the environmental soil moisture, the distribution of the leaking water in the soil, and the soil chemical composition.
The design of a portable and low-cost detector, suitable for this kind of application, is also presented.
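The quoted data-taking times can be connected to a simple counting-experiment estimate; the neutron rates below are hypothetical and serve only to show how the significance scales with measurement time:

```python
import math

def significance(rate_dry, rate_wet, minutes):
    """Gaussian counting significance of a thermal-neutron count deficit
    between a dry reference and a wet (leak) measurement of equal length.
    Rates are in counts/minute and purely illustrative."""
    n_dry = rate_dry * minutes
    n_wet = rate_wet * minutes
    return (n_dry - n_wet) / math.sqrt(n_dry + n_wet)

# With hypothetical rates of 100 (dry) vs 80 (wet) counts/min, the
# significance grows like sqrt(minutes) and passes 5 sigma within
# about a quarter of an hour.
print(round(significance(100.0, 80.0, 15.0), 2))
```

Larger flux deficits (wetter soil, shallower leaks) or higher detector rates shorten the required data-taking time accordingly.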
The high center-of-mass energy of proton-proton collisions and the large available datasets at the CERN Large Hadron Collider allow us to study rare processes of the Standard Model (SM) with unprecedented precision. The observation of the four-top-quark process is presented. This final state is combined with the $H\to\gamma\gamma$ final state, and limits on the Higgs boson width are set.
The high center-of-mass energy of proton-proton collisions and the large available datasets at the CERN Large Hadron Collider allow the study of rare processes of the Standard Model with unprecedented precision. Measurements of rare SM processes provide new tests of the SM predictions, with the potential to unveil discrepancies with those predictions or provide important input for the improvement of theoretical calculations. In this contribution, total and differential measurements of top-quark production in association with a photon, Z or W boson are shown using data taken with the ATLAS experiment at a center-of-mass energy of 13 TeV. These measurements provide important bounds on the electroweak couplings of the top quark and constrain backgrounds that are important in searches for Higgs production and for new phenomena beyond the SM.
Top-quark pair production in association with heavy-flavour jets (b/c) is a difficult process to calculate and model, and is one of the leading sources of background to ttH and four-top production in the 1-lepton and 2-lepton opposite-sign channels. To improve our understanding of this process, a new inclusive and differential measurement was performed. Results from ATLAS using the full Run 2 dataset will be presented.
We will present the state-of-the-art full off-shell NLO QCD results for the $pp \to t\bar{t}W^+\, j+X$ process. The multi-lepton top-quark decay channel at the LHC with $\sqrt{s}= 13$ TeV will be analysed. In our calculation off-shell top quarks and gauge bosons are described by Breit-Wigner propagators. Furthermore, double-, single- as well as non-resonant top-quark contributions along with all interference effects are consistently incorporated already at the matrix element level. We will present the results at the integrated and differential cross-section level for various renormalisation and factorisation scale settings as well as different PDF sets. Lastly, we will investigate the effects of the additional jet activity in the $pp \to t\bar{t}W^+ +X$ process by comparing the normalised differential cross-section distributions for $pp \to e^+\nu_e\, \mu^-\bar{\nu}_\mu\, \tau^+\nu_\tau\, b\bar{b}\,j+X$ and $pp \to e^+\nu_e\, \mu^-\bar{\nu}_\mu\, \tau^+\nu_\tau\, b\bar{b} +X$.
We will report on the calculation of the next-to-leading order QCD corrections to the Standard Model process $pp \to t\bar{t}t\bar{t}$ in the $4\ell$ top-quark decay channel. Higher-order QCD effects in both the production and decays of the four top quarks are taken into account. The latter effects are treated in the narrow-width approximation, which preserves top-quark spin correlations. We will present results for two renormalisation and factorisation scale settings and three different PDF sets. Furthermore, the main theoretical uncertainties, associated with the neglected higher-order terms in the perturbative expansion and with the parameterisation of the PDF sets, will be presented. Results at the integrated and differential fiducial cross-section level will be shown for the LHC Run III center-of-mass energy of $\sqrt{s} = 13.6$ TeV. Our findings are relevant for precise measurements of the four top-quark fiducial cross sections and the modelling of top-quark decays at the LHC.
Top-quark pair production in association with a photon, $t\bar{t}\gamma$, represents an important process for further tests of the Standard Model. Among other things, it allows a direct probe of the electric charge of the top quark, as well as of the top-photon coupling. Therefore, precise predictions are of utmost importance.
In this talk, I will discuss the application of QCD resummation techniques to improve the precision of the theoretical predictions for the total cross section for this process at the LHC. The reported calculations are carried out in the framework of invariant mass threshold resummation and the results are matched to the complete next-to-leading order calculation, including QCD and electroweak effects.
The poster collects measures adopted by the ERC over the years to facilitate the participation of diverse groups, and presents some of the main results of these measures, with a focus on gender and the physical sciences.
The Jiangmen Underground Neutrino Observatory (JUNO) is a next-generation large liquid-scintillator neutrino detector designed to determine the neutrino mass ordering. Moreover, high-energy atmospheric neutrino measurements could also improve its sensitivity to the mass ordering via matter effects on oscillations, which depends on the capability to identify neutrino flavors. However, this task has never been attempted in large homogeneous liquid scintillator detectors like JUNO.
This poster presents a machine learning approach for the flavor identification of atmospheric neutrinos in JUNO. In this method, several features relevant to event topology are extracted from PMT waveforms and used as inputs to machine learning models. Moreover, the features from captured neutrons provide additional capability of neutrinos versus anti-neutrinos identification. Preliminary results based on Monte Carlo simulations show promising potential for this approach.
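As a purely illustrative toy (not the JUNO pipeline, and with entirely hypothetical waveforms and feature definitions), the idea of reducing PMT waveforms to topology-sensitive features and then separating event classes can be sketched as follows; a real analysis would feed such features to the machine learning models described above rather than a simple cut:

```python
# Illustrative toy (NOT the JUNO pipeline): extract a topology-sensitive
# feature from simulated PMT-like waveforms and separate two event classes
# with a cut on the time spread. Real analyses use ML models on many features.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)                       # time bins of the toy waveform

def time_spread(wf):
    """RMS of the charge-weighted hit-time distribution."""
    q = wf.sum()
    mean_t = (t * wf).sum() / q
    return np.sqrt(((t - mean_t) ** 2 * wf).sum() / q)

def toy_waveform(broad):
    """Hypothetical waveforms: one class is broader in time than the other."""
    width = 9.0 if broad else 5.0
    t0 = rng.uniform(30, 50)
    wf = np.exp(-0.5 * ((t - t0) / width) ** 2) + rng.normal(0, 0.002, t.size)
    return np.clip(wf, 1e-9, None)

spreads = {c: [time_spread(toy_waveform(c)) for _ in range(200)] for c in (0, 1)}
cut = 7.0                                   # midpoint between the two widths
eff = np.mean(np.array(spreads[1]) > cut)   # "broad" class efficiency
rej = np.mean(np.array(spreads[0]) <= cut)  # "narrow" class rejection
print(f"efficiency {eff:.2f}, rejection {rej:.2f}")
```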
The current ATLAS Inner Detector will be replaced with the all-silicon Inner Tracker (ITk) to cope with the high pile-up and harsh radiation environment expected at the HL-LHC. During the prototyping and early production phases of the ITk project, the performance of all types of ITk strip modules has been extensively evaluated using the high-energy electron and hadron beams available at the DESY II and CERN SPS test beam facilities. Complementary to the test beam measurements, full computer simulations of the experimental setup have been carried out using the Allpix-Squared framework. This contribution focuses on results obtained from the reconstruction and analysis of test beam measurements with an R2 ITk strip endcap module at the DESY II test beam facility. Comparisons of the experimental results with computer simulations for key performance metrics are presented. Additionally, the effects of the particle beam impacting the tested module at non-perpendicular angles are explored and discussed.
We present an updated set of SKMHS diffractive parton distribution functions (PDFs). In addition to the diffractive deep-inelastic scattering (diffractive DIS) datasets, the recent diffractive dijet cross-section measurement by the H1 experiment at the HERA collider is added to the data sample. The new sets of diffractive PDFs, entitled SKMHS23 and SKMHS23-dijet, are presented at NLO and NNLO accuracy in pQCD. Since gluons contribute directly to jet production through the boson-gluon fusion process, the data on diffractive dijet production in inclusive DIS help constrain the gluon density, allowing both the quark and gluon densities to be determined with better accuracy. The NLO and NNLO theory predictions are compared to the analyzed data, showing excellent agreement. The effects of including the diffractive dijet data and the higher-order QCD corrections on the extracted diffractive PDFs and on the data/theory agreement are examined and discussed.
Since 1983 the Italian groups collaborating with Fermilab have been running a 2-month summer training program for students in physics and engineering. Many students have extended their collaboration with Fermilab for their Master Thesis and PhD.
The program has involved more than 600 students from more than 20 universities. Each intern is supervised by a Fermilab mentor. Training programs have spanned Tevatron, CMS, Muon g-2, Mu2e, SBN and DUNE design and data analysis, the development of detectors, electronics and accelerator components, infrastructure and software for tera-scale data handling, quantum computing, and superconducting magnets.
In 2015 the University of Pisa included the program within its own educational programs. Students are enrolled at the University of Pisa and at the end of the internship write reports on their achievements. In 2020 and 2021 the program was canceled, but in 2022 and 2023 it allowed a total of 48 students to be trained for nine weeks at Fermilab.
We will report on our study of the development of a logical circuit for the Level-0 (L0) Endcap Muon Trigger in the HL-LHC ATLAS experiment. We aim to achieve systematic and efficient firmware validation through a comprehensive study across hardware, software, and databases. Specific approaches include systematic tests using benchmark artificial track data, high-statistics full-simulation data and, further, actual collision data. Our design of the validation system enables systematic tests, coherently injecting identical data into the software simulation environment and the actual hardware testbench. Along with the system design, we have developed a relational database to centrally manage the cabling and data-format information, as well as a bit-wise simulator of the trigger logic circuit. This presentation will discuss the concepts of the validation system design, specific implementation methods, and experience gained from test results.
SAND, the System for on-Axis Neutrino Detection, will be one of the three components of the DUNE Near Detector complex and will be placed permanently on the axis of the neutrino beam. It consists of a solenoidal magnet, an electromagnetic calorimeter, an inner Straw Tube Tracker and, finally, GRAIN (GRanular Argon for Interaction of Neutrinos), a 1-ton liquid argon target placed in the upstream part of the inner magnetized volume.
In the current design, GRAIN will be instrumented with innovative lens-based optical detectors to focus the fast LAr scintillation light onto a high-granularity 32x32 Silicon Photo-Multiplier (SiPM) matrix.
In this talk, the preliminary design of the lens-based optical detector will be discussed, and the first results achieved with a prototype, tested with an artificial point-like light source in a large liquid argon volume at the ARTIC facility of the University of Genova, will be shown.
We propose a set of new methods to directly detect light dark matter (DM) through its scattering with abundant atmospheric muons or accelerator beams. First, we plan to use free cosmic muons interacting with dark matter in a volume surrounded by tracking detectors to trace possible interactions between dark matter and muons. Second, we will interface our device with domestic or international muon beams; owing to the much larger muon intensity and the focused beam, we anticipate that the detector can be made more compact and that the resulting sensitivity of dark matter searches will be improved. In line with the above projects, we will develop muon tomography methods. Furthermore, we will precisely measure the directional distributions of cosmic muons, at mountain or sea level, whose differences may reveal information about dark matter distributed near the Earth. In the future, we may also extend our study to muon-on-target experiments.
We present an innovative charge detector designed with high resolution and a wide dynamic range to fulfill ion-beam monitoring requirements. The detector prototype, constructed using HERD Si photodiodes and Calo PD readout electronics, underwent rigorous testing during the HERD and AMS beam tests at the CERN SPS facilities. Initial testing demonstrated the detector's excellent performance, with both high resolution and a dynamic range capable of measuring nuclei with atomic numbers from 1 to 80. The prototype's compatibility with fast, practically real-time data analysis makes it an ideal candidate for online applications. This presentation will show the results from the prototype's testing phase, highlighting its capabilities and performance metrics. It will also cover ongoing detector development and potential applications, and discuss future development pathways and refinements aimed at enhancing the detector's functionality and versatility.
Monolithic water Cherenkov neutrino detectors are crucial for understanding neutrino astrophysics and oscillations. Traditional calibration analyzes calibration data sequentially, which may overlook parameter correlations and necessitates frequent retuning of reconstruction algorithms. This leads to duplicated effort and increased detector-related uncertainties in next-generation experiments like Hyper-Kamiokande.
To address this, we propose a machine learning-based approach using a differentiable model of a water Cherenkov simulation for calibration and event reconstruction. We demonstrate how this method allows simultaneous optimization of calibration and reconstruction parameters through gradient descent within a unified framework. Furthermore, we discuss its potential to surpass existing calibration and event reconstruction methods in the near future.
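The core idea of jointly optimizing calibration and reconstruction parameters through one differentiable objective can be sketched in miniature. The toy "detector" below is entirely hypothetical (a 1-D array of PMTs with an invented optical model), and numerical gradients stand in for the automatic differentiation a real differentiable simulation would provide:

```python
# Minimal sketch of joint calibration + reconstruction by gradient descent
# (hypothetical detector model, NOT the Hyper-K software). A real
# implementation would differentiate the simulation with autodiff; here we
# use central-difference numerical gradients for self-containment.
import numpy as np

pmt_pos = np.linspace(-1.0, 1.0, 12)      # toy 1-D "detector"
g_true, x_true = 1.3, 0.25                # true PMT gain and source position

def predict(g, x):
    """Toy optical model: charge falls off with distance to each PMT."""
    return g / (1.0 + (x - pmt_pos) ** 2)

observed = predict(g_true, x_true)        # noiseless "calibration data"

def loss(p):
    return ((predict(*p) - observed) ** 2).sum()

def num_grad(p, eps=1e-5):
    grad = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p); dp[i] = eps
        grad[i] = (loss(p + dp) - loss(p - dp)) / (2 * eps)
    return grad

p = np.array([1.0, 0.0])                  # initial guess [gain, position]
for _ in range(2000):
    p -= 0.05 * num_grad(p)               # one shared gradient-descent loop
print(f"fitted gain {p[0]:.3f}, position {p[1]:.3f}")
```

The point is that the calibration parameter (gain) and the event parameter (position) are fitted in the same descent, so their correlation is handled automatically.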
The ATLAS experiment, located at the LHC at CERN, requires a flexible and comprehensive Level-1 Trigger configuration to meet its diverse scientific goals. A robust framework validates this configuration throughout every year of data-taking. The Level-1 Central Trigger system (L1CT), integral to detector readout, is software-programmable and relies on machine-readable files for Level-1 Trigger item mapping. Testing involves simulation and a hardware replica of the L1CT at ATLAS. With the advanced monitoring capabilities of the system, we can compare the results from the hardware (including intermediate stages) with those from the simulation to identify any discrepancies or errors. This presentation covers testing methods and challenges due to the ATLAS detector's complexity, and introduces a new user interface for the testing framework.
The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. This sampling device is made of plastic scintillating tiles alternated with iron plates, and its response is calibrated to the electromagnetic scale by means of several dedicated systems. Accurate time calibration is important for energy reconstruction and non-collision background removal, as well as for specific physics analyses. Every year, the time calibration is performed with the first physics collisions and fine-tuned with subsequent data. The stability of the time calibration is monitored with the laser system and physics collision data. Recent developments in the monitoring tools are shown, and the corrections for various observed problems are discussed. Finally, the time resolution as measured with jets is presented separately for the individual radial layers of the calorimeter.
The ATLAS experiment in LHC Run 3 records up to 3 kHz of fully-built physics collision events out of an LHC bunch-crossing rate of up to 40 MHz, with additional rate dedicated to partial readout. A two-level trigger system selects events of interest to cover a wide variety of physics while rejecting a high rate of background events. The selection targets both generic physics signatures, such as high-pT leptons, jets and missing energy, and more specific ones, such as long-lived particles or di-Higgs events. We will present an overview of the ATLAS trigger menu system, highlighting the new developments and changes for Run 3, as well as some of the performance improvements achieved in Run 3.
Neutrinoless double beta decay experiments are pushing their sensitivities to reach half-lives on the order of $10^{28}$ years. A promising approach involves detecting the daughter ion generated in the decay. The NEXT collaboration is testing chemical sensors to identify the Ba$^{2+}$ ion produced in the double beta decay of $^{136}$Xe, coinciding with the emission of two electrons. This entails a challenge, since only a few ions per year would be produced in the NEXT chamber. Furthermore, the chemosensors must be compatible with the ultra-dry conditions of xenon. The R&D effort in NEXT towards barium tagging is twofold: first, to build a Single-Molecule Fluorescence Imaging (SMFI) system with enough sensitivity to detect the signal from the chemosensors and capable of operating in a dry environment; second, to develop a barium source reproducing the conditions of the NEXT detector. These two systems will be integrated to compose a reliable barium-tagging sensor.
The Fermilab Muon g-2 experiment has measured the positive muon magnetic anomaly to an unprecedented precision of 0.2 ppm, based on the data taken in the first three years. The magnetic anomaly is derived from the ratio between the muon anomalous spin precession frequency in a magnetic storage ring and the magnetic field experienced by the muon ensemble. In addition, systematic effects on the measurement of spin precession frequency due to the muon beam dynamics have to be included for an unbiased result. We considered two types of systematic effects in the analysis: i) reduction in the spin precession frequency due to electric fields and vertical motions, and ii) precession phase changes over the measurement time. In this presentation, we discuss these beam dynamics corrections applied in the Run-1 to Run-3 data analyses and provide an update since the Aug 2023 result announcement.
The demands of HL-LHC data processing and the challenges of future colliders are pushing to re-think High Energy Physics (HEP) computing models.
This talk presents an effort to provide transparent resources for users and experiments, with suitable tools and environments, coupled with flexible and cloud-independent deployment, in the framework of the ICSC project (Italian National Centre on HPC, Big Data and Quantum Computing).
The resources will be experiment-agnostic and applicable across HEP experiments, exploited to benchmark the proposed workflows.
Seamless interactive or quasi-interactive analyses are extremely promising: starting from container technology and Kubernetes, we provide analysis tools via a Jupyter interface and the Dask scheduling system, masking complexity from front-end users and flexibly provisioning cloud resources.
An overview of the technologies involved and the results of a benchmark use case will be provided, with suitable metrics to evaluate preliminary performance of the workflow.
The ATLAS Tile Calorimeter (TileCal) is a sampling hadronic calorimeter covering the central region of the ATLAS detector at the Large Hadron Collider. It performs precise measurements of hadrons, jets, hadronically decaying tau-leptons and missing transverse momentum, and provides input signals to the Level-1 Calo Trigger. The calorimeter consists of thin steel plates and about 460,000 scintillating tiles configured into more than 4900 cells, each viewed by two photomultipliers. The calorimeter response is monitored using radioactive-source, laser and charge-injection systems. This poster presents the TileCal calibration systems as well as the latest results on their performance in terms of calibration factors, linearity and stability.
The DANSS detector is placed under the reactor core of the Kalinin NPP and collects up to 5000 ν events per day. The experiment aims to scrutinize the sterile-ν hypothesis, and the obtained limits exclude practically all sterile-neutrino parameters preferred by the BEST experiment. The main goal of the energy calibration is the determination of the energy scale coefficient $K_E$; the Birks and Cherenkov effects are also investigated.
The report covers calibration with atmospheric muons (µ) stopped inside the sensitive volume of the detector, including their decays. Muons were selected by applying geometrical constraints and searching for a subsequent e- or e+. The spectrum of the Michel e-/e+ is used for the $K_E$ determination. The Bragg curve, built from the µ energy release along its track, is sensitive not only to $K_E$ but also to the nonlinear effects: the Birks effect and Cherenkov radiation. This calibration complements the results of the calibrations via radioactive sources and 12B β-decays.
A Higgs factory is a special kind of energy consumer, and its environmental impact for the given scientific outcome must be optimized carefully. The carbon footprint of CEPC was estimated with a simplified model including both the construction and operation phases. The environmental impact of CEPC with different circumferences, energy sources, SR power levels and Higgs yields was studied. The carbon intensity of the Chinese electric grid will be reduced rapidly by 2040 due to the development of renewable energies. Some results comparing future colliders, both linear and circular, are given. Assuming all colliders use the same clean energy (20 t CO2e/GWh), CEPC has the lowest carbon emission per Higgs boson produced.
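The comparison metric used above is simple arithmetic: CO2-equivalent per Higgs is the total electricity use times the grid carbon intensity, divided by the Higgs yield. The sketch below uses the 20 t CO2e/GWh clean-energy intensity quoted in the abstract, but the electricity and Higgs-yield inputs are placeholder numbers, not the CEPC TDR values:

```python
# Back-of-the-envelope sketch of the "CO2 per Higgs" metric.
# Only the 20 t CO2e/GWh intensity comes from the abstract; the other
# inputs are placeholders, NOT CEPC TDR numbers.
intensity = 20.0        # t CO2e per GWh, common clean-energy assumption
energy_gwh = 2_000.0    # hypothetical total electricity over the run, GWh
n_higgs = 4.0e6         # hypothetical Higgs yield

co2_per_higgs = energy_gwh * intensity / n_higgs   # t CO2e per Higgs
print(f"{co2_per_higgs * 1000:.1f} kg CO2e per Higgs")
```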
A new front-end ASIC named "PIST" (pico-second timing) has been successfully developed in 55 nm CMOS technology for single-channel silicon photomultiplier (SiPM) readout, with the major aim of fast timing. We performed extensive tests to evaluate the timing performance on a dedicated test stand. The results show that the system timing resolution can reach below 10 ps, while the PIST intrinsic timing resolution is better than 5 ps. The PIST dynamic range has been further extended using the time-over-threshold (ToT) technique.
Meanwhile, we fully characterised a new 32-channel SiPM-readout ASIC for the development of a future high-granularity homogeneous calorimeter. Comprehensive measurements were made with a laser beam and with high-energy particle beams using crystals and SiPMs. First results show that this chip has an excellent signal-to-noise ratio for single-photon calibration and a large dynamic range.
This contribution will introduce the two ASICs and present highlighted results.
The ATLAS Inner Detector will be completely replaced with an all-silicon tracking detector (ITk) to cope with the challenging new conditions arising at the HL-LHC. The pixel detector will be located in the innermost part of the ITk and consists of five layers with different sensor thicknesses and technologies. Planar n-in-p hybrid modules, 150 μm and 100 μm thick, will instrument the three outer layers, while 3D sensor technology was chosen for the innermost layers due to its radiation hardness. Additionally, the production of the ITk pixel detectors is distributed among four different vendors. As pre-production modules and sensors of different types and from different vendors become available, they are tested in beam tests before and after irradiation to assess their performance at the fluence expected at the end of their lifetime at the HL-LHC. An overview of the current test beam results will be given.
In the context of the CMS improved Resistive Plate Chambers (iRPC) upgrade, a strategy has been developed that leverages cosmic muon triggers along with web-based automation for the Quality Control (QC) steps. A key aspect of this approach was bridging slow- and fast-control parameters, a crucial step towards full automation. This integration not only enhances the efficiency and accuracy of the QC process for the iRPC system but also streamlines the workflow and significantly reduces the likelihood of human error. This development is a valuable improvement in the CMS experiment's upgrade efforts, contributing to more reliable and efficient operations in high-energy physics research.
This poster presents the efforts to boost the performance and reliability of the Resistive Plate Chambers (RPC) of the muon system of the Compact Muon Solenoid (CMS) experiment. The focus is on both the maintenance of the existing RPC chambers and the installation of the improved RPC detectors (iRPC) for the Phase-2 upgrade. The RPC system consolidation is based on the cooling-system upgrade and gas-leak repairs, which allow leaking chambers to be powered back on. The installation of special mounting brackets, in addition to an upgrade of the power system, was completed as part of the new iRPC Phase-2 upgrade programme. At the end of 2023, two iRPC chambers were installed and are already being commissioned in the CMS experiment. This poster offers valuable insights into the maintenance procedures of the RPC detector and the activities crucial for the success of the ongoing and Phase-2 data taking of CMS in the HL-LHC era.
Improving the identification of jets initiated by gluons or quarks will impact the precision of several analyses in the ATLAS physics program. Using jet constituents as inputs for quark/gluon taggers gives the models access to a superset of the information available to taggers based on high-level variables. A Transformer architecture is used to learn long-range dependencies between the kinematic variables of jet constituents in order to predict the jet flavor. Several variations of Transformers are studied, and their performance is compared to high-level-variable taggers and to older jet-constituent taggers. We propose a new Transformer-based architecture (DeParT) that outperforms all other taggers. The models are also evaluated on events generated with multiple Monte Carlo generators to study their generalization capabilities.
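The mechanism that lets a Transformer relate every jet constituent to every other one is self-attention. The following NumPy sketch of a single-head forward pass uses random toy weights and features; it is a generic illustration of attention over constituents, not the DeParT architecture:

```python
# Minimal single-head self-attention forward pass over a "jet" of
# constituents (toy weights and features; NOT the DeParT architecture).
import numpy as np

rng = np.random.default_rng(1)
n_const, d = 8, 4                      # constituents per jet, feature dim
X = rng.normal(size=(n_const, d))      # e.g. embedded (pT, eta, phi, E)

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax(Q @ K.T / np.sqrt(d))      # attention weights: rows sum to 1
out = A @ V                            # each constituent mixes all others
print(out.shape)
```

Each output row is a weighted mixture of all constituents, which is how long-range dependencies within the jet are captured in a single layer.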
Dark matter candidates in different models with an extended Higgs sector, such as the THDM, NTHDM and THDMS, are discussed and compared. The phenomenological prospects of these models at different lepton colliders, electron-positron colliders as well as a muon collider, are analyzed, and the relevance of the various collider features, with or without polarized beams, is discussed.
LHC prospects as well as experimental, flavour and cosmological constraints are included.
DarkSide-20k is an underground direct dark matter search experiment designed to reach a total exposure of 200 tonne-years nearly free from instrumental backgrounds. The detector's core is a dual-phase Time Projection Chamber (TPC) filled with 50 tonnes of low-radioactivity liquid argon. The TPC wall is surrounded by PMMA acting as a neutron veto, immersed in an argon bath.
The key technological innovation is instrumenting the TPC and the veto with silicon photomultiplier (SiPM) arrays, for a total area of 27 m². In particular, the neutron veto is equipped with array detectors arranged in a compact design, the Veto PhotoDetector Units (vPDUs), each containing 384 SiPMs. The neutron veto will be equipped with 120 vPDUs.
The poster will focus on vPDU production. Tests have been performed in liquid-nitrogen baths to assign a "quality passport" to each component, underlining the rigorous QA/QC procedures, up to the final characterization of the first batch of completed units ready for integration into the detector.
As experiments at a fourth-generation synchrotron light source, those carried out at HEPS will shift to high-throughput, multi-modal, ultra-fast and cross-scale forms. The annual data flux produced by the experiments is expected to enter the exascale era. Faced with such high-throughput experimental data, the computing power of a single node can hardly meet the needs of data analysis, which calls for a high-performance, robust and user-friendly distributed parallel computing engine for the data processing software. Because data rates differ greatly between beamline stations, supporting heterogeneous resources (GPUs) in a flexible and fine-grained manner becomes a challenge. To accelerate the computation and the processing of large datasets, we designed a distributed parallel computing engine capable of invoking scalable, distributed, heterogeneous computing resources to provide computational analysis services at different scales.
To study the feasibility of a shallow-depth neutrino detector, a Cosmic Muon Veto Detector (CMVD) is being built around the mini-ICAL detector at IICHEP in Madurai, India. The CMVD will use extruded plastic scintillators for muon detection and wavelength-shifting fibres coupled to silicon photomultipliers (SiPMs) for signal readout. A power supply is needed for biasing the SiPMs, whose accuracy, precision and stability are crucial to ensure consistent gain characteristics. We developed a biasing power-supply circuit capable of sourcing 50-58 V in 50 mV steps and up to 1 mA of current. It features digital voltage adjustment and stabilization as well as current monitoring, controlled by an external controller such as a microcontroller or FPGA. Besides providing better flexibility, the controller enables possibilities such as temperature compensation. Designed to power multiple SiPMs, the circuit can be easily integrated with SiPM front-end electronics.
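The quoted granularity (50-58 V in 50 mV steps) can be sanity-checked with a few lines; the register mapping below is a hypothetical illustration, not the actual controller firmware:

```python
# Sanity check of the bias-supply granularity quoted above: 50-58 V in
# 50 mV steps. The DAC-code mapping is a hypothetical example only.
v_min, v_max, step = 50.0, 58.0, 0.050   # volts
n_steps = round((v_max - v_min) / step)  # settable points above v_min

def code_for(voltage):
    """Hypothetical integer code for a target bias voltage."""
    return round((voltage - v_min) / step)

print(n_steps, code_for(54.0))           # 160 steps -> fits in 8 bits
```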
Core-collapse supernova bursts are among the most energetic phenomena known in the universe. PandaX-4T, a dark matter and neutrino experiment that employs a dual-phase xenon TPC as the detector, has the ability to detect neutrinos from supernova bursts via coherent elastic neutrino-nucleus scattering. In this study, the total number of supernova neutrino events in PandaX-4T is estimated to be between 6.6 and 13.7 at 10 kpc over a 10-second duration with negligible backgrounds, depending on the properties of supernova progenitors of different masses. Two specialized triggering alarms, golden and silver, for monitoring supernova burst neutrinos have been built, with false alert rates of around one per month and one per week, respectively. These alarms will soon be implemented in the real-time supernova monitoring system of PandaX-4T to provide early supernova warnings to the astronomy community.
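Schematically, a false-alert rate like "one per month" is set by choosing a coincidence-multiplicity threshold against Poisson-distributed background in a sliding window. The sketch below is a generic illustration of that logic with invented rates, not the actual PandaX-4T trigger tuning:

```python
# Schematic: choose the smallest multiplicity threshold whose accidental
# (Poisson background) rate is below a target false-alert rate.
# The rates below are illustrative, NOT the PandaX-4T tuning.
import math

def poisson_tail(lam, k):
    """P(N >= k) for N ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

lam = 0.5                      # hypothetical background mean per window
windows_per_month = 2_600_000  # hypothetical number of sliding windows/month

k = 1
while poisson_tail(lam, k) * windows_per_month > 1.0:  # <= 1 alert/month
    k += 1
print(f"multiplicity threshold: {k}")
```

A looser target (e.g. one per week for a "silver" alarm) would simply admit a lower threshold and hence more sensitivity.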
The Jiangmen Underground Neutrino Observatory (JUNO) is a neutrino detector currently under construction in China. It will use 20 ktons of liquid scintillator as the target medium, which will be surrounded by 45,000 photomultiplier tubes to collect the scintillation light produced by the interacting particles. The JUNO physics program encompasses a comprehensive range of measurements, including neutrino fluxes from various natural sources such as solar, atmospheric, geo-, and supernova neutrinos (core-collapse and diffuse background). The primary challenges are posed by radioactive and cosmogenic backgrounds, which can be effectively mitigated through the use of highly radiopure materials and advanced identification techniques.
This talk reviews the potential of JUNO to improve measurements of neutrinos from natural sources and focuses on factors and conditions necessary for achieving the corresponding physical targets.
We study the detection possibilities of the Odderon interaction in elastic meson-nucleon scattering by measuring $K^0_S$ regeneration at CERN, using the planned HIKE (Phase II) and existing LHCf infrastructures. Basic geometrical requirements and kinematic constraints of such experimental efforts at CERN are considered, and the published predictions of the Odderon signatures in $K^0_S$ regeneration are reviewed. We estimate the expected Odderon influence on the $K^0_L \to K^0_S \to \pi\pi$ decay probability after a distant Pb regenerator exposed to 1 TeV neutral $K^0_L$ mesons originating from 13.6 TeV $pp$ collisions at the ATLAS interaction point. The possibility of Odderon detection in diffractive $K^0_S$ regeneration events, using a 50 GeV $K^0_L$ beam within the HIKE project at CERN, is also discussed.
Supersymmetry (SUSY) is one of the most interesting theories of physics beyond the Standard Model, and the LHC experiments have searched for its evidence during Run 1 and Run 2. A search for the direct production of top-squark pairs, in which each stop decays into two, three or four bodies depending on the hypotheses on its mass, was performed on data collected during Run 2, in final states with two opposite-sign leptons (electrons or muons), jets and missing transverse momentum. The search placed constraints at 95% confidence level on the minimum top-squark and neutralino masses of up to 1 TeV and 500 GeV, respectively. This contribution describes the discovery prospects for a top squark in events with two leptons in the final state in the High-Luminosity LHC phase, when the accelerator is expected to reach a center-of-mass energy of 14 TeV and an integrated luminosity of up to 3000 fb-1, reporting recently published results from the ATLAS experiment.
The LUX-ZEPLIN (LZ) dark matter search experiment, centered on a dual-phase xenon time projection chamber operating at the Sanford Underground Research Facility in Lead, South Dakota, USA, has the world’s leading sensitivity in searches for Weakly Interacting Massive Particles (WIMPs). It comprises a 10-tonne target mass (7 tonnes active) outfitted with photomultiplier tubes in both the central and self-shielding regions of the liquid xenon, which is enclosed within an active gadolinium-loaded liquid-scintillator veto, all submerged in an ultra-pure water-tank veto system. LZ has completed its first science run, collecting data from an exposure of 60 live-days. This talk will provide an overview of LZ’s search and sensitivity goals within an Effective Field Theory (EFT) framework that describes several possible dark matter interactions with nucleons. We will highlight the key backgrounds, data analysis techniques, and signal models relevant to this study.
Domain walls are a type of topological defect that can arise in the early universe after the spontaneous breaking of a discrete symmetry. This occurs in several beyond-Standard-Model theories with an extended Higgs sector, such as the Next-to-Two-Higgs-Doublet Model (N2HDM). In this talk I will discuss the domain wall solution related to the singlet scalar of the N2HDM, as well as demonstrate the possibility of restoring the electroweak symmetry in the vicinity of the domain wall. Such symmetry restoration can have profound implications for early-universe cosmology, as the sphaleron rate inside the domain wall would, in principle, be unsuppressed compared with the rate outside the wall.
Tau leptons serve as an important tool for analyzing the production of Higgs and electroweak bosons in the context of the Standard Model, as well as for physics beyond the Standard Model. Accurate reconstruction and identification of hadronically decaying tau leptons is therefore crucial for contemporary and future high-energy physics experiments. Building on the results of novel tau-tagging algorithms, we show the tau energy and decay-mode reconstruction performance of end-to-end Transformer-based machine learning methods in comparison with the algorithms currently used at various experiments. The algorithms are evaluated on simulations of electron-positron collisions with realistic detector effects and ParticleFlow-based event reconstruction. The results are expected to be applicable also to other future electron-positron and proton-proton colliders.
The ATLAS hadronic Tile Calorimeter (TileCal) is one of the sub-systems of the ATLAS detector installed at the LHC. The calorimeter is composed of alternating iron plates and plastic scintillating tiles. Our study aims to determine the azimuthal uniformity of the energy response and the intercalibration of the TileCal longitudinal layers using isolated muons. Muons from W boson decays are selected; this particular decay is chosen because of its high cross-section and clean signature. The response of the individual TileCal cells is quantified by measuring the ratio of the energy deposited by a muon in a given cell (dE) to the corresponding path length (dx). The distribution of dE/dx follows the well-known Landau distribution. To cancel out various systematic effects, our analysis uses the truncated mean of the dE/dx distribution obtained from the data divided by the truncated mean from the MC simulation samples. Results using 2022 and 2023 data will be shown.
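The truncated-mean double ratio described above can be sketched numerically as follows. This is a minimal illustration, not the ATLAS analysis code: the truncation quantiles and the lognormal toy standing in for the Landau-shaped dE/dx sample are assumptions.

```python
import numpy as np

def truncated_mean(samples, lower=0.0, upper=0.9):
    """Mean of the samples between the given quantiles.

    Truncating the upper tail suppresses the long Landau tail so the
    estimator is dominated by the most probable energy loss.
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    n = len(samples)
    lo, hi = int(lower * n), int(upper * n)
    return samples[lo:hi].mean()

def cell_response(dedx_data, dedx_mc, upper=0.9):
    """Data/MC ratio of truncated means, used as a per-cell response estimator."""
    return truncated_mean(dedx_data, upper=upper) / truncated_mean(dedx_mc, upper=upper)

# Toy example: a skewed lognormal sample mimics the long right tail of dE/dx.
rng = np.random.default_rng(1)
mc = np.exp(0.6 * rng.standard_normal(100_000))
data = 1.02 * rng.permutation(mc)  # inject a 2% higher response in "data"
r = cell_response(data, mc)        # recovers the injected 1.02
```

Because the truncated mean sorts the samples first, the estimator is insensitive to event ordering, and the data/MC ratio cancels common modelling effects in the tails.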
The ATLAS Collaboration has developed a variety of Education and Outreach activities designed to engage young minds at home and in the classroom. This material ranges from an original particle physics baby book to colouring books, online printable information sheets and a challenging Masterclass program using real data from LHC proton collisions. Here we present our most recent developments designed to spark interest in children and students of all ages, teaching the methodology of scientific research, sharing the excitement of exploration and discovery, and inviting them to experience the challenges of being an international scientist today. We will also present efforts to make our material more inclusive and available to a wider and more diverse audience around the world.
Communicating the science and achievements of the ATLAS Experiment is a core objective of the ATLAS Collaboration. This contribution will explore the range of communication strategies adopted, provide an overview of ATLAS’ digital communication platforms - including its website and social media - and evaluate their impact on target audiences. Lessons learned and best practices will be shared, drawing from measured effects on audience engagement.
The ATLAS Collaboration hosts several popular programmes bringing visitors to our detector at CERN or via video conference from remote locations. ATLAS physicists take advantage of technical stops and shutdowns to show off the world’s largest collider detector to local audiences via guided visits and to remote audiences via virtual visits. Throughout the year, local visitors join guided tours through the ATLAS Visitor Centre and remote guests speak to hosts using a new virtual visit system that features a view of the ATLAS Control Room. These programmes are popular not only for classrooms and groups around the world, reaching tens of thousands of visitors, but also for the guides, who have a chance to hone their skills at describing the detector, our research, and the value of international collaboration. We present these programmes, recent developments, and current efforts to make them available to a wider and more diverse audience around the world.
Several leptogenesis models predict that the CP violation (CPV) necessary for the generation of the observed baryon asymmetry is driven exclusively by the CP-violating phase in the PMNS leptonic mixing matrix, δCP. The value of δCP must be measured with the highest possible precision in order to verify or reject some of these models and the various lepton-flavour models, each of which predicts a specific δCP value. ESSνSB+ aims at measuring the neutrino-nucleus cross-section in the low energy range of the ESSνSB, ca. 0.2–0.6 GeV, for the precise determination of the δCP value, using the LE-nuSTORM and LE-Monitored Neutrino Beam facilities.
Several technological challenges must be studied before the design of the ESSνSB+ experiment can be addressed. Among these, the design of the special ESSνSB+ target station and the physical characteristics of the pion beam are considered the highest priorities.
This talk will shed more light on the design study currently running for this important part of the experiment.
The ESSnuSB project aims to measure the leptonic CP violation at the second neutrino oscillation maximum using an intense neutrino beam.
ESSnuSB+ is a continuation of this study which focuses on neutrino interaction cross-section measurements in the low neutrino energy region as well as on the sensitivity of the experimental set-up to additional physics scenarios. Among these, it proposes to search for atmospheric, supernova and solar neutrinos at the Far Detector and to study sterile neutrinos at the Near Detectors.
In this talk, we summarize the expected ESSnuSB+ physics reach in the study of non-beam neutrino oscillation physics. Moreover, we describe the capabilities of the experiment in constraining the 3+1 sterile neutrino model using neutrinos reaching the near detectors from two different neutrino beams: a monitored beam produced by pion decays and a beam produced by muons circulating in a muon storage ring.
Event-by-event fluctuations of the mean transverse momentum, $\langle p_{\rm{T}}\rangle$, help to characterize the properties of the system created in heavy-ion collisions and are linked to the dynamics of the phase transition from the quark-gluon plasma (QGP) to a hadron gas. In this contribution, $\langle p_{\rm{T}}\rangle$ fluctuations of charged particles produced in pp collisions at $\sqrt{s}= 5.02$ TeV and in Xe-Xe and Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}=$ 5.44 and 5.02 TeV, respectively, are studied as a function of charged-particle multiplicity. In Xe-Xe and Pb-Pb collisions, the multiplicity dependence deviates significantly from a simple power-law scaling, which is qualitatively expected from radial flow in mid-central to central heavy-ion collisions. In pp collisions, the correlation strength is studied as a function of transverse spherocity to examine the effects of jets. The strength and multiplicity dependence of jetty and isotropic events are compared with QCD-inspired MC models.
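The kind of two-particle $\langle p_{\rm T}\rangle$-fluctuation observable used in such analyses can be illustrated with a small numerical sketch. This is not the ALICE implementation; the function name and the toy event model (a common per-event $p_{\rm T}$ shift plus track-by-track noise) are invented for illustration.

```python
import numpy as np

def two_particle_correlator(events):
    """Event-averaged two-particle pT correlator C_m and its
    normalisation sqrt(C_m)/M(pT), in the standard form used for
    mean-pT fluctuation studies.

    `events` is a list of 1-D arrays of per-track pT values.
    """
    all_pt = np.concatenate(events)
    m_pt = all_pt.mean()                      # inclusive mean pT, M(pT)
    num, den = 0.0, 0.0
    for pts in events:
        n = len(pts)
        if n < 2:
            continue
        d = pts - m_pt
        # sum over distinct pairs i != j via (sum d)^2 - sum d^2
        num += d.sum() ** 2 - (d ** 2).sum()
        den += n * (n - 1)
    c_m = num / den
    return c_m, np.sqrt(max(c_m, 0.0)) / m_pt
```

Purely statistically independent tracks give $C_m \approx 0$; a genuine event-by-event shift of the mean $p_{\rm T}$ shows up as a positive correlator equal to the variance of that shift.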
The INO-ICAL collaboration has built a prototype detector called miniICAL at IICHEP, Madurai, India. A Cosmic Muon Veto Detector (CMVD) based on extruded plastic scintillators (EPS) is being built on top of the miniICAL detector to investigate the feasibility of constructing a large-scale neutrino experiment at shallow depths. All the individual components of the veto walls, e.g., the SiPMs and their readout and the WLS fibres, as well as the reconstruction of muon trajectories in the miniICAL, have been well established. Using these developments, this work examines the performance of such a large veto system around the miniICAL detector using the GEANT4 toolkit, incorporating all the known detector parameters in the simulation. The algorithm used to study the performance of the CMVD is optimized both with and without the magnetic field. The talk will describe the detailed performance of the hardware components and the expected performance of the CMVD around the miniICAL.
Within the framework of the Standard Model, the Higgs sector is minimally composed of one doublet of complex scalar fields, essential for achieving spontaneous electroweak symmetry breaking. Nevertheless, many theories beyond the Standard Model envision more intricate Higgs sectors, leading to the prediction of charged Higgs bosons. Notably, the Georgi-Machacek (GM) model postulates the production of charged Higgs bosons through the distinct Vector Boson Fusion topology, marking a unique signature for charged Higgs searches. In light of this, the ATLAS experiment has rigorously pursued searches for these charged Higgs bosons. In this presentation, a summary of the latest experimental results obtained in searches for both singly- and doubly-charged Higgs bosons, as well as the combined results within the GM model, is presented and discussed.
Resonances play a crucial role in probing the characteristics of the hadronic phase created in ultra-relativistic heavy-ion collisions. Rescattering and regeneration processes influence the measurable resonance yields and the shapes of the $p_{\rm T}$ spectra. Measurements of resonance production in high-multiplicity pp collisions could provide insight into the possible presence of a hadronic phase in small collision systems. The $\Lambda(1520)$ resonance, with a lifetime of approximately 13 fm/$\it{c}$, provides additional insights into the hadronic phase compared to the $\rm K^{*0}$ (4 fm/$\it{c}$) and $\rm \phi$ (46 fm/$\it{c}$) resonances. This contribution presents recent measurements of $\Lambda(1520)$ resonance production in high-multiplicity pp collisions, including the $p_{\rm T}$-integrated yield ($\frac{dN}{dy}$), the mean transverse momentum ($\langle p_{\rm T} \rangle$), and particle yield ratios as a function of charged-particle multiplicity.
Monte-Carlo (MC) simulations play a key role in high energy physics, for example at the ATLAS experiment. MC generators evolve continuously, so a periodic validation is indispensable for obtaining reliable and reproducible physics simulations. For that purpose, an automated and central validation system was developed: PMG Architecture for Validating Evgen with Rivet (PAVER). It provides an MC event generator validation procedure that allows a regular evaluation of new revisions and updates for commonly used MC generators in ATLAS as well as comparisons to measured data. The result is a robust, fast, and easily accessible MC validation setup that is constantly developed further. This way, issues in simulated samples can be detected before generating large samples for the collaboration, which is crucial for a sustainable and low-cost MC production procedure in ATLAS.
We studied the CP-violating phases in neutral kaon oscillations and decays in the effective field theory of kaons, without going to the quark level, and connected the CP-violating parameters to the Bargmann invariants and hence to geometrical phases. We extended this approach to demonstrate how the CP-violating parameters appearing in the processes of baryogenesis and leptogenesis are related to the Bargmann invariants, and hence to the geometric phases of those systems. We then concluded by discussing applications of such a generalized treatment of CP-violating phases.
We discuss far-forward production of $D$ mesons and neutrinos in $pp$ collisions at the LHC. We include the gluon-gluon fusion, the intrinsic charm (IC) and the recombination mechanisms. We show that IC and recombination give negligible contributions at LHCb kinematics, i.e. in the interval $2 < y < 4.5$, and start to be crucial at larger rapidities, i.e. for $y > 6$. We present energy distributions for electron, muon and tau neutrinos to be measured in Forward Physics Facilities at the LHC. For all kinds of neutrinos the subleading contributions, i.e. IC and/or recombination, dominate over the light-meson components and the standard charm production contribution for neutrino energies $E_{\nu} > 300$ GeV. For electron and muon neutrinos both mechanisms lead to similar production rates and their separation seems rather impossible. For the tau neutrino flux the recombination is reduced further, making the measurement of the IC contribution very attractive.
The study of momentum correlations of nucleon pairs can provide input for describing the formation of light nuclei, such as deuterons, through the coalescence of protons and neutrons into bound states. The femtoscopy technique is applied to measure the correlation in momentum among protons emitted after the hadronization phase of a hadronic collision. The spatial properties of the proton-emitting source are extracted, and the measured source size can be used as an input parameter for coalescence modelling. This contribution shows new results on proton-proton correlations measured in pp collisions at √s = 900 GeV, using data collected by the upgraded ALICE experiment during Run 3 of the LHC. These measurements contribute to the characterization of the proton-emitting source size in small collision systems at different collision energies, providing insight into the microscopic description of the strong nuclear force and of the physical processes occurring in hadronic collisions.
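The core of such a femtoscopic measurement is a correlation function built from same-event and mixed-event pair distributions. The sketch below is a simplified illustration, not the ALICE analysis: the function name and binning are invented, and detector effects and pair weights are ignored.

```python
import numpy as np

def correlation_function(same_event_kstar, mixed_event_kstar, bins):
    """Femtoscopic correlation function C(k*): the ratio of the
    same-event to the mixed-event pair k* distributions, with the
    mixed-event sample normalised to the same number of pairs.

    An uncorrelated reference (event mixing) in the denominator
    removes phase-space and acceptance effects, so deviations of
    C(k*) from unity carry the source-size information.
    """
    a, _ = np.histogram(same_event_kstar, bins=bins)
    b, _ = np.histogram(mixed_event_kstar, bins=bins)
    norm = b.sum() / a.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        return norm * a / b
```

If the same-event pairs carry no correlation (as in the test below, where both samples are drawn from the same distribution), C(k*) is flat at unity within statistics.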
Performing a precision measurement of the tritium $\beta$-decay spectrum, the Karlsruhe Tritium Neutrino (KATRIN) experiment aims at measuring the neutrino mass with a sensitivity better than $0.3$ eV/c${}^2$ (90% C.L.) after 1000 measurement days. The current world-leading upper limit of $m_\nu \leq 0.8$ eV/c${}^2$ (90% C.L.) was determined from a combined analysis of the first two measurement campaigns (6 million electrons collected until 2019), and a publication including the three subsequent measurement campaigns is in preparation (36 million electrons collected until 2021).
In this poster we present the most recent measurement phases which feature a significant increase of statistics to more than 125 million collected electrons in the region of interest. Following KATRIN’s model blinding strategy, studies on simulated Asimov data using the KaFit/SSC model within the Kasper framework will be presented to provide an initial overview of this dataset.
The DeepTau tau identification algorithm, based on Deep Neural Network techniques, has been developed to reduce the fraction of jets, muons and electrons misidentified as hadronically decaying tau leptons by the Hadron-plus-strip algorithm. Its recently deployed version 2.5 for Run3 has brought several improvements to the existing algorithm, e.g. the addition of domain adaptation to reduce data-MC discrepancies in the high-confidence region of the tagger. The resulting model delivers a reduced mis-identification rate at a given efficiency by 10-50% across the regions of interest and, thus, sets a new improved baseline for the tau identification task. The talk will focus on DeepTau v2.5, the comparison with the previous version (v2.1) and its calibration using pp collisions. Corrections to improve data modeling are also shown.
A fundamental aspect of CMS research concerns the identification and characterisation of jets originating from quarks and gluons produced in high-energy pp collisions. Electroweak-scale resonances (Z/W bosons), Higgs bosons and top quarks are often produced with high Lorentz boosts, in which case their decay products become highly collimated, large and massive jets, usually reconstructed as AK8 jets. Therefore, the identification of the particle initiating the jet plays a crucial role in distinguishing boosted top quarks and bosons from the QCD background. In this talk, an overview of the usage of boosted-jet taggers within CMS will be given. It will highlight the most recent AK8 tagging algorithms, which make use of sophisticated machine learning techniques, optimised for performance and efficiency. Furthermore, the presentation will show the validation of ML-based taggers developed for AK8 jets originating from boosted resonances decaying to $\mathrm{b\bar{b}}$, comparing CMS data and simulation.
The T2K long-baseline neutrino experiment in Japan harnesses its sensitivity to search for CP violation in the neutrino sector by observing the appearance of electron (anti-)neutrinos from a beam of muon (anti-)neutrinos at its far detector, Super-Kamiokande (SK). For the next iteration of T2K's oscillation analysis, a new $\nu_e$ appearance sample was developed, targeting charged-current single-pion production. This neutrino interaction produces an event with more than one Cherenkov ring at SK, specifically a visible $e$-like and a $\pi^+$-like ring. This sample increases T2K's total $\nu_e$ CC statistics by $\sim$6%, thus enhancing its $\delta_{\text{CP}}$ measurements. Additionally, multiple improvements were implemented in the treatment of the SK detector's systematic uncertainties. The poster will showcase the features of the new sample along with the changes in SK's systematic treatment and some preliminary oscillation parameter sensitivity studies with the inclusion of this sample.
Mu2e will search for the neutrinoless coherent $\mu^- \to e^-$ conversion in the field of an Al nucleus and improve the current limit by 4 orders of magnitude. Mu2e consists of a straw-tube tracker and a crystal calorimeter in a 1 T magnetic field, complemented by a plastic scintillation counter veto to suppress cosmic-ray backgrounds. The tracker geometry makes track reconstruction a quite unique problem. The first step of track reconstruction is hit clustering, in space and time. Pattern recognition is performed for each time cluster to identify hit combinations compatible with a 3D helix and remove background hits. Track fitting acts on the hit combinations to determine precision track parameters. The existing algorithms are robust and efficient for the topologies of interest for the principal physics analyses. However, we are developing pattern recognition algorithms to improve track reconstruction of multi-particle events, which could be important for background estimates through data-driven procedures.
We integrated the detector and the readout electronics for a new inner-station TGC system at the ATLAS experiment and evaluated the performance. The TGC detectors installed in the endcap inner stations of the ATLAS detector will be upgraded from the doublet to triplet chambers for an improved selectivity of the first-level muon trigger at the HL-LHC. The challenging structure of fitting a triplet within the same envelope as the doublet makes integration tests with the readout electronics crucial. The detector and readout electronics from early production were integrated, and the measurements showed an acceptable noise level and a detection efficiency of 94% for each layer. Additionally, the trigger firmware was developed requiring coincidences in two or more layers for this detector and confirmed with the functional simulation.
The flagship activity of the International Particle Physics Outreach Group (IPPOG) is the International Masterclasses (IMC) in particle physics. This very successful programme brings cutting-edge science to high-school students. Invited to a university or laboratory, the students spend a day of immersion in particle physics, learning about the Standard Model and beyond, about experimental methods, detectors and accelerators. The hands-on part involves analysis of data from an experiment. The IMC programme started in Europe, was later joined by America and has now expanded to all continents. The data used were initially from LEP; with the turn-on of the LHC, measurements with data from ALICE, ATLAS, CMS and LHCb were introduced. In recent years there has been a spectacular broadening of the physics content of the IMC, with data from Belle II at KEK, from neutrino experiments (MINERvA at Fermilab), from astroparticle physics (Pierre Auger) and medical applications (particle therapy).
Light sterile neutrinos with a mass at the eV-scale could explain several anomalies observed in short-baseline neutrino oscillation experiments. The Karlsruhe Tritium Neutrino (KATRIN) experiment is designed to determine the effective electron anti-neutrino mass via the kinematics of tritium $\beta$–decay. The precisely measured $\beta$-spectrum can also be used to search for the signature of light sterile neutrinos. In this poster we present the status of the light sterile neutrino analysis of the KATRIN experiment. The analysis contains data from the first five measurement campaigns and the obtained sensitivity is compared to current results and anomalies in the field of light sterile neutrinos.
This work received funding from the European Research Council under the European Union Horizon 2020 research and innovation programme, and is supported by the Max Planck Computing and Data Facility, the Excellence Cluster ORIGINS, the ORIGINS Data Science Laboratory and the SFB1258.
In this talk, we report recent progress on the development of a local renormalisation formalism based on Causal Loop-Tree Duality. By performing an expansion around the UV propagator in a Euclidean space, we manage to build counter-terms to cancel the non-integrable terms in the UV limit. This procedure is then combined with the so-called causal representation, and the UV expansion is performed at the level of on-shell energies. The resulting expressions are more compact, and they retain the nice properties of the original causal representation. The proposed formalism is tested up to three loops, with relevant families of topologies. In all cases, we successfully cancel the UV divergences and achieve a smooth numerical implementation. These results constitute a first step towards a new renormalisation program in four space-time dimensions (bypassing DREG), perfectly suitable for fully numerical simulations.
We investigate the effects of the parameters of the Bestest Little Higgs Model (BLHM) on rare flavor-changing decays of the top quark. In this study, we incorporate new flavor-mixing terms between the light quarks of the Standard Model (SM) and the fermions and bosons of the BLHM. We compute the one-loop contributions from the heavy quark $(B)$ and the heavy bosons $(W^{\prime\pm}, \phi^{\pm}, \eta^{\pm}, H^{\pm})$. We observe that the processes with the highest sensitivity are $Br(t\to cZ)\sim 10^{-5}$, $Br(t\to c\gamma)\sim 10^{-6}$ and $Br(t \to ch^0) \sim 10^{-8}$ within the appropriate parameter space.
The ATLAS physics program at the HL-LHC calls for a precision in the luminosity measurement of 1%. To fulfill such a requirement in an environment characterized by up to 140 simultaneous interactions per crossing (200 in the ultimate scenario), ATLAS will feature several luminosity detectors. LUCID-3, the upgrade of the present ATLAS luminometer (LUCID-2), will fulfill such a condition. In this presentation, two options for LUCID-3 under study are presented: the first is based on photomultipliers (PMTs) as for LUCID-2, while the second is based on optical fibers. In the first case, PMTs with a reduced active area are foreseen, placed at a larger distance from the beam pipe with respect to LUCID-2 or in a region with low particle flux, behind the forward ATLAS absorber. In the second option, optical fibers act as both Cherenkov radiators and light guides to route the produced light to the readout PMTs. The prototypes installed in ATLAS in Run 3 are discussed together with the first results obtained.
Precise luminosity determination is of paramount importance for the ATLAS physics program. A set of complementary luminometers is crucial to ensure high stability and precision of the luminosity measurement. In 2018, two Timepix3 detector setups were installed to study their capabilities for measuring luminosity. The detectors benefit from a fine segmentation and a narrow per-pixel time resolution, allowing for high-quality track reconstruction and particle identification. The installed system was used to study luminosity over different time frames: long term (run-by-run), short term (within a single run) and instantaneous (for each bunch crossing). In this presentation, we discuss the methodology for using Timepix3 sensors for luminosity measurement and show first performance results based on data taken at 13 TeV. We demonstrate that different algorithms provide different signal-to-background ratios and indicate the potential to use Timepix3 in the ongoing Run 3 for luminosity measurement.
Searches for beyond-the-SM physics can involve heavy resonances identified by multi-prong jets. Calibration techniques rely on SM candles, which makes it challenging to calibrate jets with more than three prongs. This talk will highlight a new method for calibrating the tagging of multi-prong jets using the Lund Jet Plane to correct the substructure of simulated jets. The technique is based on reclustering a multi-prong jet such that each prong is contained in a separate subjet. The substructure of each prong is then corrected via data-driven reweighting of splittings in the Lund Jet Plane. The method is shown to significantly improve the data-simulation agreement of substructure observables. This will enable future searches utilizing jets with high prong multiplicities.
Tau leptons are very important objects for testing the predictions of the standard model, such as the characterization of the Higgs boson. Tau leptons are also vital in the search for beyond-the-standard-model physics, as many models predict new particles which decay into final states with tau leptons. An efficient tau lepton trigger is therefore essential to maximize the physics reach of the CMS experiment. This talk will describe the latest online reconstruction algorithms used to trigger on tau leptons with the CMS detector, which utilize machine learning based methods for the first time. The performance of the algorithms is validated using proton-proton data collected by the CMS detector at a center-of-mass energy of 13.6 TeV.
The second MoEDAL Apparatus for Penetrating Particles (MAPP-2) is proposed for deployment at the High Luminosity LHC (HL-LHC): a large instrumented tunnel decay volume adjacent to IP8, with a volume of 1200 m$^3$. The detector utilizes large-area scintillator panels with x-y WLS fibres read out by SiPMs, arranged in a “Russian Doll” configuration to measure the vertices of very Long-Lived Particles (LLPs) emanating from IP8. The detector incorporates a radiator layer to also allow the registration of photons in the final state. The sensitivity of MAPP-2 is complementary to other planned LLP detectors and the existing LHC general-purpose detectors. We shall discuss a few physics benchmarks to illustrate this sensitivity. The initial plans for deploying the MAPP-2 detector at the HL-LHC were endorsed by the LHCC, and a Letter of Intent (LoI) to the committee is under preparation.
An analysis of the high-multiplicity-triggered pp data at $\sqrt{s} =$ 13 TeV recorded by the ALICE detector is carried out to study the event-by-event fluctuations of the mean transverse momentum ($p_{\rm T}$) using the two-particle correlator $\sqrt{C_m}/M(p_{\rm T})_m$. The driving force behind these studies is the search for dynamical fluctuations that may be associated with the formation of QGP droplets in small collision systems, such as pp, traces of which have been observed in previous studies. The values of the correlator are observed to decrease with increasing charged-particle density and exhibit a power-law behavior similar to the one observed for pp and Pb--Pb collisions at lower energies. The findings also reveal that the Monte Carlo model PYTHIA reproduces the observed dependence of the correlator on charged-particle density.
The ATLAS measurement of differential cross-sections for the production of four charged leptons and two jets with the full Run 2 pp collision dataset will be presented. The cross-sections were measured in two distinctive signal regions characterised by an enhanced contribution from events arising from strong and electroweak interactions, respectively. An iterative unfolding procedure was used to correct the data for the detector inefficiency and resolution, allowing for a direct comparison to predictions from state-of-the-art Monte Carlo simulations. Vector Boson Scattering processes can be used to probe the weak-boson self-interactions and to search for anomalous couplings. In this context, the unfolded cross-sections were interpreted within the Standard Model Effective Field Theory, providing limits on the couplings of dimension-6 and dimension-8 operators. No significant deviations from the Standard Model predictions were observed.
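Iterative unfolding of the kind mentioned above can be sketched compactly in a D'Agostini-style form. This is illustrative only, not the ATLAS implementation; the response-matrix convention and function names are assumptions.

```python
import numpy as np

def iterative_unfold(measured, response, prior, n_iter=300):
    """D'Agostini-style iterative (Bayesian) unfolding.

    response[i, j] = P(reco bin i | truth bin j); each column sums to
    the reconstruction efficiency of that truth bin. `prior` is the
    starting truth-level spectrum, refined at every iteration.
    """
    truth = prior.astype(float).copy()
    eff = response.sum(axis=0)                    # efficiency per truth bin
    for _ in range(n_iter):
        folded = response @ truth                 # expected reco spectrum
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(folded > 0, measured / folded, 0.0)
        # Bayes' theorem distributes the data/prediction ratio
        # back to truth bins, weighted by the response.
        truth = truth * (response.T @ ratio) / eff
    return truth
```

The true spectrum is a fixed point of the iteration (when the folded prediction matches the data, the update ratio is unity), so for a well-conditioned response the estimate converges toward the truth from an arbitrary smooth prior; in practice the iteration count regularises the result.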
KATRIN aims to measure the electron neutrino mass $m_\nu$ with a sensitivity of <0.3 eV/$c^2$ (90% C.L.) by measuring the $^3$H β spectrum near its endpoint $E_0$. In the fit yielding the target quantity $m^2_\nu$, the parameter $E_0$ is also fitted. Since both parameters are highly correlated in the fit, any systematic effect influencing $m^2_\nu$ will also manifest in $E_0$. After absolute calibration of $E_0$ with $^{83m}$Kr conversion electron lines, a comparison with measurements of the $^3$He-$^3$H mass difference is valuable as a cross check of our experimental procedure. This is limited to 0.3 eV by the knowledge of the $^{83m}$Kr nuclear transitions in the literature. Using a gaseous Kr source at KATRIN, a new measurement was performed in 2023. Following the method described in ref. EPJ C 82 (2022) 700, the nuclear transition energies can be determined, which can allow for a reduction of the $E_0$ uncertainty to below 0.1 eV. In this poster the status of this analysis is presented.
The associated production of the Higgs boson with top quarks allows a direct probe of the top Yukawa coupling, a key parameter of the Standard Model. The presented ttH(bb) analysis exploits the distinctive signature of the large H→bb branching ratio and the leptonic decays of the top quarks, and uses the full Run 2 dataset collected with the ATLAS detector at a centre-of-mass energy of 13 TeV. Improved reconstruction and machine learning techniques are deployed to optimise the signal-background separation. Differential measurements are explored within the STXS formalism, as a function of the Higgs boson transverse momentum. The results are compared with the predictions of the Standard Model.
The first inclusive cross section measurements for the diboson production of a W and a Z boson (WZ) in proton-proton collisions at a centre-of-mass energy of 13.6 TeV are presented. The data used were recorded with the CMS detector at the LHC during 2022. Events containing three electrically charged leptons (electrons or muons) in the final state are analysed. The selection is optimized to minimize the number of background events thanks to a very efficient prompt-lepton discrimination strategy and a tagging algorithm that efficiently associates the three leptons to their corresponding parent bosons. After selection, the cross section is extracted separately for each lepton-flavour multiplicity category, as well as in a simultaneous likelihood fit to all the categories.
SuperCDMS SNOLAB is a 4th-generation direct detection experiment that will employ Si and Ge crystals equipped with Transition Edge Sensors (TESs) to search for low-mass dark matter particles (<10 GeV/$c^2$). These detectors use larger crystals than their predecessors and feature 12 phonon readout channels each. The position dependence of the detector response broadens the energy resolution, in addition to a broadening from correlated noise between channels. The NxM filter is an advanced event reconstruction algorithm meant to address these issues by fitting digitized waveforms from N channels with M signal shapes simultaneously. This algorithm allows us to mathematically account for correlated noise. Position information is encoded in the shapes and amplitudes of the pulses. We present results from combining machine learning methods with the NxM filter to capture position information in the output amplitudes, correct for the position dependence of the energy estimators, and more.
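At its core, such a filter is a noise-weighted simultaneous amplitude fit of M signal shapes across N channels. The sketch below shows the idea in a simplified time-domain form; the real filter works on measured noise covariances of digitized waveforms, and the names here are invented for illustration.

```python
import numpy as np

def fit_amplitudes(waveforms, templates, noise_cov):
    """Simultaneous least-squares amplitude fit of M templates to N
    channel waveforms, weighted by the inverse of the (possibly
    channel-correlated) noise covariance.

    waveforms : (N*T,) samples from all channels, stacked
    templates : (N*T, M) stacked signal shapes, one column per template
    noise_cov : (N*T, N*T) noise covariance matrix

    Minimising chi^2 = (d - S a)^T C^-1 (d - S a) gives the normal
    equations (S^T C^-1 S) a = S^T C^-1 d, solved for the amplitudes a.
    """
    cinv = np.linalg.inv(noise_cov)
    lhs = templates.T @ cinv @ templates     # M x M normal matrix
    rhs = templates.T @ cinv @ waveforms
    return np.linalg.solve(lhs, rhs)
```

Because the weighting matrix couples samples across channels, correlated noise enters the fit naturally; with uncorrelated unit noise the fit reduces to an ordinary least-squares template decomposition.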
The production of quarkonia in vacuum is not fully understood. Theoretical models offer different predictions and experimental measurements are needed to help in distinguishing and improving them. Furthermore, understanding the quarkonium production offers an insight into the quark-gluon plasma properties in heavy-ion collisions.
This poster presents results of the latest $\Upsilon$ measurements in $p~+~p$ collisions at $\sqrt{s}~=~510$ GeV using data collected in 2017 by the STAR detector. The measurement uses the dielectron channel to reconstruct $\Upsilon$ mesons with $2~<~p_\mathrm{T}~<~15~$GeV/c and $|\eta|~<~$1. The analysis studies the dependence of the self-normalised $\Upsilon$ yield on the self-normalised event multiplicity to elucidate the connection between the hard and soft processes involved in quarkonium production. The data offer an increase in statistics compared to previous measurements at RHIC, allowing for improved precision and an extended multiplicity reach.
The Short-Baseline Near Detector (SBND) is a 100-ton-scale Liquid Argon Time Projection Chamber (LArTPC) neutrino detector positioned in the Booster Neutrino Beam at Fermilab, as part of the Short-Baseline Neutrino (SBN) program. The detector is currently being commissioned and is expected to take neutrino data this year. Located only 110 m from the neutrino production target, it will be exposed to a very high flux of neutrinos and will collect millions of neutrino interactions each year. This huge number of neutrino interactions, combined with the precise tracking and calorimetric capabilities of the LArTPC, will enable a wealth of cross section measurements with unprecedented precision. SBND is also remarkably close to the neutrino source and not perfectly aligned with the neutrino beamline, which allows sampling of multiple neutrino fluxes, a feature known as SBND-PRISM. This talk will present the current status of the experiment along with expectations for the rich cross section program ahead.
Top quarks, the heaviest elementary particles carrying colour charge, are considered attractive candidates for probing the quark-gluon plasma produced in relativistic heavy-ion collisions. In proton-lead collisions, top-quark production is expected to be sensitive to nuclear modifications of parton distribution functions at high Bjorken-x values, which are difficult to access experimentally with other available probes. In 2016, the ATLAS experiment recorded proton-lead collisions at a centre-of-mass energy of 8.16 TeV per nucleon pair, corresponding to an integrated luminosity of 165 nb-1. In this poster, we present the final measurement of top-quark pair production in the dilepton and lepton+jets decay modes in the proton-lead system with the ATLAS detector. The inclusive cross section is extracted using a profile-likelihood fit to data distributions in six signal regions. The nuclear modification factor is also measured.
Upgrades to the CMS Muon system for the High-Luminosity LHC (HL-LHC) include the new GEM detectors GE1/1, GE2/1 and ME0. The development of the GEM-Online Monitoring System (OMS) is crucial for their successful operation. The GEM-OMS provides real-time monitoring of key parameters, enabling the detection of anomalies by filtering data directly through different controllers. With a focus on enhancing efficiency and stability, the GEM-OMS offers a data visualization framework that facilitates easy interpretation of database data through tables, graphs, and charts. This ensures the effective functioning of the upgraded muon system, contributing to the success of the HL-LHC physics program. This presentation offers insights into the current status of the GEM-OMS, highlighting its most recent developments and advancements.
This contribution addresses the need for reliable and efficient data storage in the AMBER high-energy physics experiment. The experiment generates sustained data rates of up to 10 GB/s, requiring optimization of data storage. The study investigates single-disk performance, including random and sequential disk operations, highlighting the impact of parallel access and disk geometry. A comparison with SSD drives reveals important differences. Various RAID configurations are assessed, considering their reliability, data rates, and capacity. Probability analysis is used to evaluate the RAID rebuilding procedure in the event of disk failure. In addition, an innovative approach of alternating disk access is proposed to ensure uninterrupted performance. Finally, the study identifies the most suitable RAID configuration for the AMBER experiment. The results of this study contribute to the design of high-performance storage solutions for data-intensive scientific experiments.
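The kind of rebuild-probability analysis mentioned above can be sketched minimally, assuming independent, exponentially distributed disk lifetimes; the disk count, MTBF, and rebuild window below are illustrative placeholders, not AMBER parameters:

```python
import math

def raid5_rebuild_loss_probability(n_disks, mtbf_hours, rebuild_hours):
    """Probability that a RAID-5 rebuild fails because a second disk
    dies before the rebuild completes, assuming independent exponential
    disk lifetimes (single-parity array tolerates only one failure)."""
    p_disk = 1.0 - math.exp(-rebuild_hours / mtbf_hours)  # one survivor fails
    return 1.0 - (1.0 - p_disk) ** (n_disks - 1)          # any of n-1 survivors

# Example: 12-disk array, 1.2e6 h MTBF, 24 h rebuild window.
p = raid5_rebuild_loss_probability(12, 1.2e6, 24.0)
print(f"loss probability during rebuild: {p:.2e}")
```

Doubling the rebuild time roughly doubles the loss probability in this regime, which is why sustained-rate designs care about how quickly a degraded array can be rebuilt.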
The LEP precision physics requirements on the theoretical precision tag for the respective luminosity were 0.054% (0.061%) at $M_Z$, where the former (latter) LEP result includes (does not include) the pairs correction. For the contemplated FCC-ee, ILC, and CEPC Higgs/EW factories, the theoretical precision tag at $M_Z$ needs to be improved to at least 0.01%. We discuss the paths one may take to even exceed this goal and present an update on the current expectations for both $M_Z$ and proposed higher-energy scenarios.
Photomultiplier tubes (PMTs) are widely used in high-energy physics experiments to detect single photons. The PMT single-photoelectron (PE) response (SER) is a template function describing the pulse shape of a single PE. In PMT waveform simulation and analysis, the SER shape is usually kept fixed among different pulses from the same PMT. This work proposes a linear model built from multiple Gaussian basis functions, which allows the SER to adjust its shape without introducing the complexity of non-linear, empirical SER formulas. The model extends easily to PMT electronics simulation and waveform analysis. A corresponding calibration algorithm is also developed and applied to real data, demonstrating the shape variation of PMT PE pulses.
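A minimal sketch of such a linear SER model, assuming fixed Gaussian basis functions whose coefficients are fitted by linear least squares (the pulse shape and basis placement below are illustrative, not calibration data):

```python
import numpy as np

# Time samples (ns) and a toy single-PE pulse built from two Gaussians.
t = np.arange(0, 40, 1.0)
true_pulse = 0.8 * np.exp(-0.5 * ((t - 14.0) / 2.5) ** 2) \
           + 0.3 * np.exp(-0.5 * ((t - 19.0) / 4.0) ** 2)

# Linear SER model: the pulse is a linear combination of fixed Gaussian
# bases with different centres; only the coefficients are fitted, so the
# shape can vary from pulse to pulse while the fit stays linear.
centres = np.arange(8.0, 28.0, 2.0)
width = 2.5
basis = np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)

coeffs, *_ = np.linalg.lstsq(basis, true_pulse, rcond=None)
residual = np.linalg.norm(basis @ coeffs - true_pulse)
print(f"fit residual: {residual:.3e}")
```

Because the fit is linear in the coefficients, it stays cheap enough to run per pulse, unlike refitting a non-linear empirical SER formula.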
The Super Tau-Charm Facility (STCF) is a new-generation $e^+ e^-$ collider designed for various physics topics in the $\tau$-charm energy region. Particle identification (PID), as one of the most fundamental tools in physics analysis, is crucial for achieving excellent physics performance. In this work, we present a powerful PID software package based on ML techniques, including a global PID algorithm for charged particles combining information from all sub-detectors, a deep CNN taking Cherenkov detector inputs to discriminate charged hadrons, as well as a deep CNN discriminating neutral particles based on calorimeter responses. Preliminary results show that the PID models achieve excellent performance, greatly boosting the physics potential of STCF.
The SuperNEMO experiment aims to search for neutrinoless double beta decay. Whilst the standard approach relies on detecting the summed kinetic energy of the two emitted electrons, SuperNEMO has an additional tracking detector, enabling investigation of kinematic parameters of the decay and further background suppression through post-processing. Comprising 2034 drift cells operating in Geiger mode, the tracking detector measures both the vertical position of passing particles and the horizontal distance of their trajectories from the central anode wire. Consequently, each tracker hit is represented by a horizontally aligned circle centred at the anode wire with measured radius and vertical position. When viewed from above, the reconstructed trajectory should be tangent to these circles. With no magnetic field applied, all particles are expected to follow straight paths. The primary challenge therefore lies in reconstructing the trajectory in the horizontal plane, for which a method based on the Legendre transform is utilised.
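The Legendre-transform line finding can be sketched with toy hits (not SuperNEMO data): a line $x\cos\theta + y\sin\theta = r$ is tangent to a drift circle with centre $(x_0, y_0)$ and radius $R$ iff $r = x_0\cos\theta + y_0\sin\theta \pm R$, so each hit votes for two $r$ values per angle and the accumulator peak identifies the track:

```python
import math
from collections import Counter

# Toy drift-cell hits: anode (x, y) and drift radius R.  These four circles
# share the common tangent line y = x (illustrative values only).
d = 0.5 * math.sqrt(2.0)
hits = [(0.0, d, 0.5), (1.0, 1.0 + d, 0.5),
        (2.0, 2.0 + d, 0.5), (3.0, 3.0 + d, 0.5)]

# Vote in a discretized (theta, r) accumulator: two r values per hit and angle.
n_theta, r_step = 360, 0.05
acc = Counter()
for k in range(n_theta):
    theta = math.pi * k / n_theta
    c, s = math.cos(theta), math.sin(theta)
    for x0, y0, r0 in hits:
        for sign in (+1.0, -1.0):
            r = x0 * c + y0 * s + sign * r0
            acc[(k, round(r / r_step))] += 1

(k_best, rbin_best), votes = acc.most_common(1)[0]
theta_best = math.pi * k_best / n_theta
print(f"theta = {math.degrees(theta_best):.1f} deg, "
      f"r = {rbin_best * r_step:.2f}, votes = {votes}")
```

The peak collects one vote per hit, so a clean straight track produces a bin whose count equals the number of hits; a real implementation would refine the peak rather than rely on a fixed binning.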
A scenario with a lepton-flavor-violating (LFV) interaction, due either to an LFV coupling of a scalar or of a vector boson, is an intriguing BSM phenomenon. This LFV coupling, in the presence of muons, leads to a rich phenomenology, including an extra contribution to the muon anomalous magnetic moment. With the low-energy effective coupling ${\cal L}_{\phi e\mu}=\phi\bar e(g_{e\mu}+h_{e\mu}\gamma^5)\mu$+h.c., which turns $e$ into $\mu$ or vice versa through a scalar $\phi$, we first derive the $(h_{e\mu},M_\phi)$ parameter space that can account for experimental measurements of ${g_\mu}-2$. We propose to probe this parameter space, or that with an even smaller $h_{e\mu}$, by searching for background-free processes with same-sign, same-flavor final-state lepton pairs, $e^+e^-\to e^\pm\mu^\mp\phi\to e^\pm e^\pm\mu^\mp\mu^\mp$, at Belle II. Assuming such final states are detected by Belle II, we propose an effective method to further discriminate between the scalar- and vector-boson-mediated LFV interaction scenarios.
Long-range angular correlations between particles could potentially reveal physics beyond the Standard Model, such as Hidden Valley (HV) scenarios. Our emphasis is on a hidden QCD-like sector, where the emergence of HV matter alongside QCD partonic cascades could amplify and extend azimuthal correlations among final-state particles.
Our study at the detector level focuses on the detectability of these signals at future $e^+ e^-$ colliders, which offer a cleaner experimental environment than the Large Hadron Collider (LHC). Notably, the identification of ridge structures in the two-particle correlation function may hint at the presence of new physics.
We investigate the effect of photon-axionlike particle (ALP) oscillations in the gamma-ray spectra of the fourth most distant blazar, QSO B1420+326, measured by Fermi-LAT and MAGIC around the flaring activity in January 2020. We set a 95% CL upper limit on the photon-ALP coupling constant $g_{a\gamma} < 2 \times 10^{-11}$ GeV$^{-1}$ for ALP masses $m_{a} \sim 10^{-10} - 10^{-9}$ eV. Assuming a hadronic origin of the very-high-energy photons, we also estimate the expected neutrino flux and the cumulative flux from QSO B1420+326-like FSRQs at sub-PeV energies. Furthermore, we study the implications of photon-ALP oscillations for the counterpart gamma-rays of the sub-PeV neutrinos. Finally, we investigate a viable scenario of invisible neutrino decay to ALPs and its effect on the gamma-ray spectra and the diffuse gamma-ray flux at sub-PeV energies. Interestingly, we find that for the choice of neutrino lifetime $\tau_2/m_2 = 10^{3}$ s eV$^{-1}$, the resulting gamma-ray flux is within the observational sensitivity of LHAASO-KM2A.
Science students face multiple challenges, whether in employment or in upper-level courses; teaching them solid laboratory skills and analysis methods provides a needed foundation. The labs for introductory Physics I and II must provide students with practical experience and laboratory skills that can be further developed in upper-level courses.
A new approach was instituted to provide a meaningful lab experience to students. This presentation will give an overview of the experiments and the innovative methodology using the open-source Tracker software (physlets.org/tracker). This new approach expanded the number of possible experiments, provided students with the ability to conduct some simple experiments at home using common household items, and gives a backup option in case of possible restrictions on students' access to lab facilities. The lab topics form a progression that introduces additional skills, such as the use of software for data analysis, error-analysis methods, and giving a presentation.
The MIP Timing Detector (MTD) is a new sub-detector planned for the Phase 2 upgrade of the CMS experiment at the CERN LHC, designed to measure the time-of-arrival of charged particles with a resolution of 30-60 ps. The barrel region of the MTD (Barrel Timing Layer, BTL) is made of arrays of Cerium-doped Lutetium-Yttrium Oxyorthosilicate (LYSO:Ce) scintillating bars, read out by silicon photomultiplier arrays. The quality of the BTL LYSO crystals in the production phase is being monitored in a dedicated laboratory at INFN-Rome1 for a sample of arrays and single crystals. To ensure mechanical compatibility in the detector assembly, arrays undergo dimensional measurements. The performance of LYSO arrays and single crystals is checked by measuring light output, decay time, optical cross-talk, time resolution, and transmittance. The radiation hardness of the samples against ionizing radiation from gamma rays is studied at the Calliope facility of the ENEA-Casaccia Research Centre.
We simulate deuteron production in Pb+Pb collisions at 2.76 TeV, focusing particularly on the elliptic flow. In coalescence, the deuteron yield depends on the size of the region producing the coalescing nucleons. The elliptic flow likewise depends on how the size of the effective emitting region varies with the azimuthal angle. The elliptic flow of deuterons from coalescence is therefore expected to be sensitive to the azimuthal anisotropy of the fireball more strongly, or at least differently, than in the case of statistical thermal production. We test this idea by using the blast-wave model to parametrise the emission of hadrons, and also of deuterons in the case of thermal production. We tune the model very carefully on proton and pion data for spectra and differential elliptic flow, which constrains it quite precisely. It turns out that coalescence leads to a higher elliptic flow than thermal production and is in better agreement with current data.
The presentation is based on https://arxiv.org/abs/2402.06327.
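As a commonly quoted baseline for the effect discussed above (not the blast-wave calculation itself): in the simplest momentum-space coalescence picture, a deuteron formed from two nucleons with nearly equal momenta inherits their anisotropy, giving
$$v_2^{d}(p_T) \;\simeq\; \frac{2\,v_2^{N}(p_T/2)}{1 + 2\left[v_2^{N}(p_T/2)\right]^2},$$
i.e. roughly twice the nucleon elliptic flow at half the transverse momentum. The blast-wave study quantifies how the azimuthal variation of the emission-region size modifies this naive scaling.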
We investigate the potentially observable consequences at the LHC of resonant production of a vectorlike quark pair through an ultraheavy diquark scalar. For this study, we performed comprehensive Monte Carlo simulations for a diquark mass of 7 TeV or 8.5 TeV, and a vectorlike quark mass of 2 TeV. We assume that each vectorlike quark decays into a W boson and a b quark, and given the very large invariant mass of the events, we focus on the 6-jet final state. To evaluate the signal selection efficiency, we employed several machine learning models trained to discriminate against a multitude of relevant background sources. We have found that even a diquark as heavy as 8.5 TeV can be discovered or ruled out by the end of the HL-LHC runs.
The ATLAS Online Luminosity Calculator (OLC) is a standalone component of the luminosity-related software, responsible for the calibration of online luminosity measurements from the various ATLAS luminometers, as well as for providing an interface between ATLAS and the LHC. It also provides the infrastructure for synchronizing the LHC beam movements with the ATLAS DAQ during beam-separation scans. In Run-3, the system has seen several developments aimed at making the OLC a standalone system able to provide data from a wider range of luminometers. This involves a larger degree of integration between the LUCID-2 software and the OLC. The presentation will focus on recent developments, including the ability to switch between ATLAS and LUCID-2 as the source for time measurements, and the possibility to switch between several different sources to retrieve or calculate the current bunch pattern in the LHC. The presentation will also discuss developments related to the beam-scan software.
Though the Standard Model has been a very successful theory, many questions remain unanswered, such as the incorporation of gravity into the SM, neutrino masses, and the matter-antimatter asymmetry. One possible way to address these challenges is to extend the SM with an additional Higgs doublet. This search explores the presence of a scalar or pseudoscalar heavy Higgs boson, as predicted by the Two-Higgs-Doublet Model (2HDM), produced in association with a pair of top quarks and decaying further into a pair of top quarks. The analysis uses proton-proton collisions at sqrt(s)=13 TeV recorded by the ATLAS detector. In order to improve the modelling of the dominant background, top-antitop production in association with jets, data-driven corrections are applied. A powerful multivariate classifier, a graph neural network, is used for signal-background discrimination.
The Daya Bay experiment has studied antineutrino emission at low-enriched uranium reactors, with detectors spanning a large range of baselines from the reactor cores (up to $\sim$2 km). This poster presents results of a search for the mixing of a sub-eV sterile neutrino based on Daya Bay's full data sample. The result is obtained in the minimally extended 3+1 neutrino mixing model. The analysis benefits from a doubling of the statistics ($5.55\times10^{6}$ candidates) of our previous result and from improvements in several important systematic uncertainties. With these updates, the sensitivity to $\sin^22\theta_{14}$ reaches $5\times10^{-3}$ at $95\%$ confidence level, which represents the world-leading constraint in the region $2\times10^{-4}$ eV$^2$ $\lesssim\Delta m^{2}_{41}\lesssim2\times10^{-1}$ eV$^2$.
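The sterile-driven disappearance being probed follows the standard short-baseline survival probability; a minimal sketch with illustrative parameter values (not Daya Bay's fit):

```python
import math

def survival_probability(sin2_2theta14, dm2_41_eV2, L_m, E_MeV):
    """Short-baseline electron-antineutrino survival probability in the
    3+1 model, keeping only the sterile-driven oscillation term.
    The factor 1.267 applies for dm2 in eV^2, L in m, E in MeV."""
    phase = 1.267 * dm2_41_eV2 * L_m / E_MeV
    return 1.0 - sin2_2theta14 * math.sin(phase) ** 2

# Example near the quoted sensitivity: sin^2(2theta14) = 5e-3,
# dm2_41 = 5e-2 eV^2, 500 m baseline, 4 MeV antineutrino.
p = survival_probability(5e-3, 5e-2, 500.0, 4.0)
print(f"P(nue -> nue) = {p:.5f}")
```

For such a small mixing amplitude the deficit is at the sub-percent level, which is why the improved statistics and systematic uncertainties matter.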
An integrated luminosity of 138 fb-1 collected by CMS during Run 2 allows searches for new particles with unprecedented sensitivity. A search for a scalar particle heavier than the Higgs boson is performed by investigating resonances that decay into two W bosons. Results are interpreted in a model-independent way as well as in various extensions of the Standard Model, such as the Two-Higgs-Doublet Model (2HDM) and a specific subset of the Minimal Supersymmetric Standard Model (MSSM). In this poster the latest updates in the H->WW channel will be presented.
The $U(1)_{B-L}$ model contains three heavy right-handed (RH) neutrinos, essential for anomaly cancellation and preserving gauge invariance. The model is attractive due to its relatively simple theoretical structure, and the crucial test of the model is the detection of the new heavy neutral $Z'$ gauge boson, the heavy neutrinos $\nu_R$, and the new Higgs boson $H$. With these motivations, we carried out a study of the $Z'$ resonance and heavy-neutrino pair production at the future muon collider through the process $\mu^+\mu^- \to Z' \to \nu_R \nu_R \to l^{\pm}W^{\mp}l^{\mp}W^{\pm}$, with the subsequent decay of each $\nu_R$ to an $l^{\pm}W^{\mp}$ pair with $l=e, \mu$. The study is performed at the $Z'$ resonance for the energies and luminosities of the future muon collider, $\sqrt {s}=4, 5, 6, 7$ TeV and ${\cal L}=2, 3, 4, 10$ $\rm ab^{-1}$.
A search for HH or $X\to SH$ production in final states with one or two light leptons and a pair of $\tau$-leptons is presented. The search uses a $pp$ collision data sample with an integrated luminosity of 140 fb$^{-1}$, recorded at a center-of-mass energy of $\sqrt{s} = 13$ TeV, with the ATLAS detector at the Large Hadron Collider. The search selects events with two hadronically decaying $\tau$-lepton candidates from $H\to \tau^+\tau^-$ decays and one or two light leptons ($\ell=e,\,\mu$) from $S(H)\to VV$ ($V = W,\,Z$) decays, while the remaining $V$ boson decays hadronically or to neutrinos. A multivariate discriminant based on event kinematics is used to separate the signal from the background. No excess is observed beyond the expected SM background, and 95\% confidence-level upper limits will be reported.
This poster presents results from a search for exotic decays of the 125 GeV Higgs boson (H) to a pair of light pseudoscalars a, where one pseudoscalar decays to two b quarks and the other to a pair of muons or tau leptons (H ->aa->2b2mu/2b2tau). The analysis is performed on the full CMS Run-2 dataset of proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb^-1. Upper limits are set at 95% confidence level (CL) on the branching fraction of the Higgs boson to aa, to 2b2mu and to 2b2tau. In models with two Higgs doublets extended by a complex scalar singlet (2HDM+S), the results from the two final states are combined to determine model-independent upper limits on the branching fraction B(H -> aa -> llbb) at 95% CL, where l is a muon or tau lepton. Upper bounds on the combined branching fraction B(H->aa) are also set for different types of 2HDM+S.
Supersymmetry (SUSY) models with nearly mass-degenerate higgsinos could solve the hierarchy problem as well as offer a suitable dark matter candidate consistent with the observed thermal-relic dark matter density. However, the detection of SUSY higgsinos at the LHC remains challenging, especially if their mass-splitting is O(1 GeV) or lower. To address this challenge, a novel search is developed using 140 fb^{-1} of proton-proton collision data collected by the ATLAS detector at a center-of-mass energy \sqrt{s}=13 TeV, targeting final states with an energetic jet, missing transverse momentum, and a low-momentum track with a large transverse impact parameter. Results are interpreted in terms of SUSY simplified models and, for the first time since the LEP era, a range of mass-splittings between the lightest charged and neutral higgsinos from 0.3 GeV to 0.9 GeV is excluded for higgsino masses up to 170 GeV.
We present searches for Lepton Flavor Violation (LFV) in the top quark sector using 138 fb^{-1} of proton-proton collision data collected by the CMS experiment at a center-of-mass energy of 13 TeV. The analysis focuses on events containing a single muon and an additional lepton, either a tau or an electron. Modern deep learning techniques are employed to distinguish between signal and background events. The signal involves the decay of top quarks in both single top production and top quark pair production, while the background includes Standard Model processes. The results are consistent with Standard Model expectations, and new constraints on LFV in the top quark sector are established. These searches advance our understanding of nature, particularly highlighting the significance of LFV processes involving tau leptons.
The "4321" renormalizable model proposes a mechanism that accommodates the experimental anomalies found in B-meson decays while remaining consistent with all other indirect flavor and electroweak precision measurements. Among the fundamental particles introduced by the 4321 model are three families of Vector-Like Leptons (VLLs), with masses predicted to be around 1 TeV. Using the full dataset corresponding to 140 fb-1 of integrated luminosity collected by the ATLAS detector in 13 TeV pp collisions at the LHC, a search is presented for VLL pairs as predicted by the "4321" model in regions containing hadronically decaying tau-leptons and no light leptons in the final state. Signal-like events are selected with exactly one or two hadronically decaying tau-lepton candidates, no light leptons, and at least three b-tagged jets. The expected sensitivity of the search is reported as a 95% CL limit on the VLL production cross-section as a function of the VLL mass.
This contribution presents a search for rare decays of the Z and Higgs bosons to a photon and a charmonium state, J/$\Psi$ or $\Psi'$, which subsequently decays to a pair of muons. The employed data set corresponds to an integrated luminosity of 123 fb$^{-1}$ of proton-proton collisions at a center-of-mass energy of $\sqrt{s} = 13$ TeV, collected with the CMS detector during LHC Run-2. The analysis strategy relies on the presence of two resonances in the signal, unlike the other standard model background contributions. No significant discrepancies with respect to the standard model expectation are observed, and upper limits at 95% confidence level are set on the branching fractions of these rare decay channels. The search significantly improves on the existing limits, thanks to innovative signal selection techniques. In addition, the limits are interpreted in the $\kappa$-framework to constrain the coupling of the Higgs boson to the charm quark.
After the discovery of the Higgs boson at the Large Hadron Collider (LHC) at CERN, we live in a phase characterized by a lack of discoveries of Beyond Standard Model physics at particle accelerators. Anomaly Detection is a novel machine learning approach that could help resolve this stalemate, as it allows searches to remain very general about the targeted signatures without losing sensitivity to possible signals. ATLAS analyses are taking the first steps in this direction, following the results obtained from CWoLa-based resonant searches. The poster shows the results obtained with Anomaly Detection approaches in ATLAS, where events are selected solely because of their incompatibility with a learned background-only model. In particular, the focus is on the search for a heavy resonance Y decaying into a Standard Model Higgs boson H and a new particle X in a fully hadronic final state, which represents the first application of fully unsupervised machine learning to an ATLAS analysis.
Many new physics models, such as the Sequential Standard Model, Grand Unified Theories, models of extra dimensions, or models with e.g. leptoquarks or vector-like leptons, predict heavy mediators at the TeV energy scale. We present recent results of such searches in leptonic final states obtained using data recorded by the CMS experiment during Run-II of the LHC.
Measurements of two-neutrino double beta decay ($2\nu\beta\beta$) have played a key role in advancing the understanding of neutrino properties. Further exploration of $2\nu\beta\beta$ and its possible exotic decay modes (decays involving right-handed or sterile neutrinos) may provide further insight. The recently published improved description of the shape of the $2\nu\beta\beta$ spectrum provides a methodology for precise calculations of the axial-vector coupling constant $g_A$.
The signature of these processes should be most evident by examining the single-electron energy spectra and the distribution of the decay angle. These variables can uniquely be obtained with the SuperNEMO Demonstrator.
The detector is currently taking data for background studies. Once passive shielding is installed in the second half of 2024, the $2\nu\beta\beta$ data-taking campaign will begin. In this contribution we present how SuperNEMO can be utilized in the search for exotic $2\nu\beta\beta$.
The high-luminosity upgrade of the LHC (HL-LHC) will lead to a factor of five increase in instantaneous luminosity, making it possible for experiments such as CMS and ATLAS to collect ten times more data. This proton-proton collision rate will result in higher data complexity, making more sophisticated trigger algorithms unavoidable during the HL-LHC phase. The availability of information on the individual jet constituents at the Level-1 trigger makes it possible to design more precise jet identification algorithms, provided they meet the strict latency and resource requirements. In this work, we construct, deploy, and compare fast machine-learning algorithms based on graph and deep-sets neural networks on field-programmable gate arrays (FPGAs) to perform jet classification. The latencies and resource consumption of the studied models are reported. Through quantization-aware training and efficient FPGA implementations, we show that O(100) ns inference is feasible at low resource cost.
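A deep-sets architecture of the kind mentioned above can be sketched as follows; the weights are random placeholders, not a trained trigger model, and the point is the permutation-invariant sum over jet constituents:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny deep-sets jet classifier: a per-constituent network phi, a
# permutation-invariant sum pooling, and a jet-level network rho.
n_feat, n_hidden, n_classes = 3, 8, 2
W_phi = rng.normal(size=(n_feat, n_hidden))
W_rho = rng.normal(size=(n_hidden, n_classes))

def classify(constituents):
    """constituents: (n_particles, n_feat) array, e.g. (pT, eta, phi)."""
    h = np.maximum(constituents @ W_phi, 0.0)  # phi network (ReLU)
    pooled = h.sum(axis=0)                     # permutation-invariant pooling
    logits = pooled @ W_rho                    # rho network
    e = np.exp(logits - logits.max())
    return e / e.sum()                         # softmax class probabilities

jet = rng.normal(size=(16, n_feat))
probs = classify(jet)
shuffled = classify(jet[rng.permutation(16)])  # same jet, reordered constituents
print(probs)
```

The sum pooling is what makes the output independent of constituent ordering; on an FPGA the same structure maps to parallel per-constituent units followed by an adder tree, and quantization-aware training replaces the floating-point weights used here.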
The Inert Triplet Model (ITM) is a well-studied scenario that contains a neutral scalar Dark Matter (DM) candidate, along with an inert charged scalar in a compressed mass spectrum. The DM constraints corner the ITM into a narrow TeV-scale mass range, whose production is inefficient at the present and future iterations of the LHC. However, Vector Boson Fusion (VBF) at a future Muon Collider promises a high production rate for the inert triplet scalars. The compressed mass spectrum leads to disappearing tracks for the charged scalars, which can be efficiently reconstructed over the beam-induced background (BIB). Exploiting the high-momentum forward muons from the VBF processes along with these disappearing tracks, we present a detailed analysis of the signatures of the model, as well as luminosity projections for a $5\sigma$ discovery.
The axion provides a solution for the strong CP problem and is one of the promising candidates for dark matter. The leading approach is probing gamma-ray emission from the nuclear transitions associated with the axion-nucleon coupling. Monochromatic 14.4 keV axions would be produced by de-excitation of the thermally excited isotope of iron-57 in the Sun and could be detected as a 14.4 keV gamma-ray via the inverted production process on the Earth. We developed a Transition-Edge-Sensor (TES) microcalorimeter capable of high energy resolution with an iron absorber and conducted a commissioning run using a one-pixel TES microcalorimeter. In this talk, we highlight scientific objectives, the experimental design, and the latest status, including the development of a microwave multiplexer based on microstrip SQUID for scalability.
The Neutrino Experiment with a Xe TPC (NEXT) is searching for the neutrinoless double beta decay (0nubb) of Xe-136 using high-pressure xenon gas time projection chambers (HPXeTPCs). The power of electroluminescent HPXeTPCs for 0nubb derives from their excellent energy resolution (FWHM <1%) and their topological classification of signal events. The NEXT-100 detector was successfully constructed and assembled in 2023. Commissioning of the detector is underway, and data taking will start in the summer of 2024. Holding ~80 kg of xenon at about 15 bar, this detector has a projected sensitivity of 6e25 yr after 3 effective years of data taking. In this talk, we will review the advantages of this type of HPXeTPC, describe the NEXT-100 detector and its scientific aims in detail, and discuss the current status of the experiment, including commissioning results.
Coherent elastic neutrino-nucleus scattering (CEvNS) can provide interesting physics, such as measuring neutrino properties and probing non-standard interactions. CEvNS was observed in 2017 with neutrinos from a stopped-pion source, but detecting CEvNS from lower-energy reactor neutrinos is still challenging. The Neutrino Elastic-scattering Observation with NaI(Tl) experiment (NEON) is designed to address this challenge by aiming to detect CEvNS in NaI(Tl) crystals using reactor electron antineutrinos at the Hanbit nuclear power plant. Since April 2022, physics data taking has been smoothly underway using a 16.7 kg NaI(Tl) crystal array positioned in the tendon gallery, located 23.7 m from the reactor core. To date, about 399 days of reactor-on and 144 days of reactor-off data have been collected. In this talk, we will provide an overview of the experiment, including the detector design and operation, progress in data analysis, and the detector sensitivity.
Future e+e- colliders offer excellent facilities for SUSY searches. The stau, the superpartner of the tau-lepton, is one of the most interesting particles for these searches: it is likely the lightest of the sfermions and thus the first one that could be observed, and it can be regarded as the worst-case and therefore most general scenario for such searches.
The prospects for discovering stau-pair production at future e+e- factories and the resulting detector requirements will be discussed. The study takes the ILD detector concept and ILC parameters at 500 GeV as an example. It includes all SM as well as beam-induced backgrounds. It shows that, with the chosen accelerator and detector conditions, SUSY will be discovered if the NLSP mass is up to just a few GeV below the kinematic limit of the collider.
Expectations for other accelerator and detector conditions are derived. In particular, the role of the hermeticity of the detector and of the ability to operate trigger-less will be discussed.
We will discuss possible future studies of the $\gamma\gamma\to\gamma\gamma$ process using two future detectors. We include different $\gamma\gamma\to\gamma\gamma$ scattering mechanisms, such as double-hadronic photon fluctuations, t/u-channel neutral pion exchange, and resonance excitation and deexcitation. Low-mass resonant contributions are included here. The resonance contributions give intermediate photon transverse momenta. We study and quantify the individual leptonic and quark contributions.
We predict cross sections in the mb-b range for typical ALICE3 cuts, a few orders of magnitude larger than for the current ATLAS/CMS experiments. We have also explored the option of using the planned FoCal detector. When used simultaneously with the ALICE main detector, it may allow the study of $\gamma\gamma\to\gamma\gamma$ scattering for $M_{\gamma\gamma} < 1$ GeV, a new, unexplored region of subsystem energies.
The talk will be based on research published in Phys. Rev. D 109, 014004 (2024).
The goal of the SuperNEMO experiment is the search for neutrinoless double-beta decay ($0\nu\beta\beta$), the observation of which would prove that the neutrino is a Majorana particle. As $0\nu\beta\beta$ is a hypothetical and extremely rare process, it is essential to have the lowest possible level of background. 222Rn is a gaseous isotope which can emanate from the detector materials or diffuse from the air of the laboratory into the detector, and its daughter isotope 214Bi, with $Q_\beta = 3.27$ MeV, can contribute to the double-beta background. The 222Rn activity inside the SuperNEMO tracker demonstrator module must be significantly reduced, down to 0.15 mBq/m$^3$. This poster will detail the anti-radon strategies used in SuperNEMO and present the status of the 222Rn analysis based on first data compared to simulation, using the topology of the 214Bi-214Po decay event, i.e. one electron followed by a delayed alpha.
Recent measurements in small collision systems at the LHC show striking similarities between high-multiplicity pp, p–Pb, and Pb–Pb collisions. In particular, studies of hadronic resonances provide valuable information about final-state hadronic interactions. Owing to their short lifetimes, resonances decay inside the hadronic medium after chemical freeze-out, and their decay daughters interact elastically with other hadrons. As a consequence, the measured resonance yields are modified. The ALICE experiment is well suited to measuring hadronic resonances thanks to its excellent tracking and particle identification capabilities over a broad momentum range. In this contribution, new measurements of K*(892)$^{0}$, ϕ(1020), and Λ(1520) resonance production using high-statistics pp collisions at $\sqrt{s}$ = 0.9 and 13.6 TeV collected by the ALICE Collaboration during Run 3 data taking will be presented.
A 20-kiloton liquid scintillator detector is being built at the Jiangmen Underground Neutrino Observatory (JUNO) for multiple physics purposes, including the determination of the neutrino mass ordering through reactor neutrinos, as well as measurements of supernova, solar, and atmospheric neutrinos to explore different physics topics. Efficient reconstruction algorithms are needed to achieve these physics goals over a wide energy range from MeV to GeV. In this poster, we present a novel method for reconstructing the energy of sub-GeV events using hit information from the 25600 3-inch photomultiplier tubes (PMTs) together with the occupancy method. Our algorithm achieves accurate energy reconstruction, validated with electron Monte Carlo samples spanning kinetic energies from 10 MeV to 1 GeV.
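The occupancy idea can be illustrated with generic Poisson counting. The snippet below is a textbook sketch, not the JUNO algorithm: if each small PMT mostly registers 0 or 1 hits, the mean number of photoelectrons per tube can be inverted from the fraction of fired tubes.

```python
import math

# Generic Poisson-occupancy sketch (an illustration, not the JUNO code):
# if hits on a small PMT are Poisson-distributed with mean mu, the
# probability of firing at all is 1 - exp(-mu), so from the measured
# occupancy f = n_fired / n_total one can invert mu = -ln(1 - f).
def mean_npe_from_occupancy(n_fired, n_total):
    """Estimate the mean photoelectrons per PMT from the fired fraction."""
    occupancy = n_fired / n_total
    if occupancy >= 1.0:
        raise ValueError("occupancy saturated; estimator undefined")
    return -math.log(1.0 - occupancy)

# Example: half of the 25600 3-inch PMTs fired -> mu = ln 2 ~ 0.693 p.e.
mu = mean_npe_from_occupancy(12800, 25600)
```

This simple estimator saturates as the occupancy approaches one, which is one reason such methods are restricted to regimes where individual tubes see few photoelectrons.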
The current ATLAS Inner Detector will be replaced by the new Inner Tracker (ITk) to cope with the increased track density and the corresponding radiation levels at the HL-LHC. The ITk is designed as an all-silicon tracking detector, with a strip detector surrounding the inner pixel detector. The strip tracker will consist of a central barrel detector with four layers and two end-caps. The 19,000 modules required for the full detector will be assembled across 31 module-assembly institutes worldwide over 50-54 months. Each module is assembled from a silicon strip sensor and between one and three flexes holding the readout electronics, in a series of precision assembly and quality control steps.
In preparation for the module production phase, 5% of the module production volume was assembled during module pre-production. This contribution provides an overview of the results from the ATLAS ITk strip tracker pre-production phase and selected issues discovered in the process.
A Hermitian matrix can be parametrized by a set of variables consisting of its determinant and the eigenvalues of its sub-matrices. Along this line, correlations between these parameters and the physical mixing observables are investigated. The relations may be simplified by considering their symmetry properties. We establish a group of equations which connect these variables with the mixing parameters of diagonalization. These equations are simple in structure and manifestly invariant in form under the symmetry operations of dilatation, translation, rephasing and permutation. When applied to the problem of neutrino oscillation in matter, these relations lead to two new ``matter invariants'' which are confirmed by available data.
The CMS experiment will undergo several upgrades in view of the HL-LHC phase of the LHC. A key feature is the complete replacement of the Inner Tracker (IT), which will be equipped with detectors offering improved radiation hardness, enhanced granularity, and the ability to handle higher data rates. A pioneering serial powering strategy will be deployed for biasing the pixel modules, accompanied by the adoption of new technologies for a high-bandwidth readout system. The endcap disks (TEPX) of the IT detector will feature four large double disks on each side. This work focuses on the design and performance of the TEPX detector, particularly highlighting the disk prototyping. The functionality of quad modules built with the first version of the CROC chip, in both digital and planar sensor forms, will be discussed in terms of noise and threshold uniformity. The final design of the TEPX disk will also be evaluated, especially in terms of optical data transmission quality.
The determination of the detector efficiency is a critical ingredient in any physics measurement. It can in general be estimated using simulations, but the simulations need to be calibrated with data. The tag-and-probe method provides a useful and elegant mechanism for extracting efficiencies directly from data. In this work, we present the tracking performance measured in data with the tag-and-probe technique applied to di-muon resonances, for all reconstructed muon trajectories and for the subset of trajectories in which the CMS Tracker is used to seed the measurement. The performance is assessed using LHC Run 3 data collected in 2022 and 2023 at 13.6 TeV.
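The core of the tag-and-probe logic can be sketched in a few lines. The example below is an invented toy, not CMS software: probe candidates paired with a tight tag are restricted to a resonance mass window (here an assumed Z window), and the efficiency is the fraction of those probes passing the criterion under study.

```python
# Toy sketch of the tag-and-probe logic (invented example, not CMS code):
# a tightly identified "tag" muon is paired with a loosely selected
# "probe"; probes whose pair mass falls in a resonance window are almost
# always real muons, so the fraction of them passing the criterion under
# study measures its efficiency directly from data.
def tag_and_probe_efficiency(probes, window=(80.0, 100.0)):
    """probes: list of (pair_mass_gev, passes_criterion) tuples."""
    in_window = [ok for mass, ok in probes if window[0] < mass < window[1]]
    if not in_window:
        return None
    return sum(in_window) / len(in_window)

# Three probes fall in the window, two of which pass -> efficiency 2/3.
toy_probes = [(91.0, True), (90.2, True), (89.5, False), (40.0, True)]
eff = tag_and_probe_efficiency(toy_probes)
```

A real analysis fits the resonance peak over background rather than counting in a fixed window, but the counting version captures the idea.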
The Minimum Ionizing Particle (MIP) Timing Detector (MTD) will be installed during the Phase II Upgrade of the Compact Muon Solenoid (CMS) experiment at the CERN LHC. The MTD will provide time information for tracks with a time resolution of about 30-60 ps, helping manage the increased pileup level to preserve the CMS detector's reconstruction performance. The MTD's barrel part (BTL) is instrumented with sensor modules made of 16 bars of LYSO:Ce scintillating crystals coupled at each end to Silicon Photomultipliers (SiPMs). The SiPMs will be exposed to an unprecedented radiation level by the end of the High-Luminosity LHC operations, up to a neutron fluence of $2\times10^{14}$ 1 MeV n$_{\rm eq}$/cm$^2$. In this poster, the latest results of test beam campaigns conducted at CERN and FNAL during 2023 will be presented. The final performance of both non-irradiated and irradiated sensor modules will be discussed, demonstrating their compliance with the detector design requirements.
For the upgrade of the LHC to the High-Luminosity LHC, the ATLAS inner detector will be replaced with an all-silicon detector, the Inner Tracker (ITk). The innermost part of the ITk will be a pixel detector with five layers, composed of modules combined into serially powered chains and loaded onto ring- and stave-shaped low-mass carbon-fiber local supports (LLS).
During 2024, the ITk Pixel project will be in its preproduction period, and multiple loading sites will integrate modules onto LLSs. Testing these large, feature-complete detector units is a major challenge and essential to ensure their electrical and thermal performance. This contribution will describe the developed large-scale detector test setup, detailing the general infrastructure, the control and monitoring system, and the readout system. The testing routines and the first test results from the preproduction LLSs will be summarized.
The Time-of-Flight (ToF) detectors in the ATLAS Forward Proton (AFP) system are used to measure the primary vertex z-position of pp -> pXp processes using the arrival times of the two intact final-state protons. Detection efficiencies and timing resolutions obtained with low- and moderate-pile-up data are presented. While efficiencies of only a few percent are observed in Run 2, time resolutions of 21 ps and 28 ps are measured for the two ToF detectors. This corresponds to an expected vertex reconstruction precision of 5.3 ± 0.6 mm. The subsequent analysis confirms that the vertex position obtained with the ToF agrees with that from the ATLAS central detector at the level of 6.0 ± 2.0 mm. During Long Shutdown 2, the ToF detectors underwent major upgrades in electronics, optics, and mechanics, expected to provide a substantial improvement in detection efficiency. Preliminary results of efficiency and resolution studies based on Run 3 data will be presented.
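The quoted precision can be cross-checked with the standard time-difference relation. The snippet below is a back-of-the-envelope sketch assuming ToF stations placed symmetrically about the interaction point, not AFP software.

```python
# Back-of-the-envelope sketch (not AFP software): with ToF stations on
# sides A and B placed symmetrically about the interaction point, the
# vertex z-position follows from the proton arrival-time difference,
# z = c * (t_A - t_B) / 2, and independent per-station resolutions add
# in quadrature.
C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def vertex_z_mm(t_a_ps, t_b_ps):
    """Vertex z (mm) from proton arrival times (ps) on the two sides."""
    return 0.5 * C_MM_PER_PS * (t_a_ps - t_b_ps)

def vertex_z_resolution_mm(sigma_a_ps, sigma_b_ps):
    """Expected z resolution (mm) from the two station resolutions."""
    return 0.5 * C_MM_PER_PS * (sigma_a_ps**2 + sigma_b_ps**2) ** 0.5
```

Plugging in the measured 21 ps and 28 ps resolutions gives about 5.2 mm, consistent with the quoted expected precision of 5.3 ± 0.6 mm.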
The new ATLAS Inner Tracker (ITk), consisting of pixel and microstrip detectors, will replace the current tracking system of the ATLAS detector to cope with the challenging conditions of the high-luminosity LHC. System tests of the strip sub-detector are being developed which serve as a testbed for evaluating the performance of several close-to-final detector components before production. System tests for the barrel and end-cap regions are being developed and operated using pre-production staves and petals, the building blocks of the detector. This contribution presents the development of a FELIX-based DAQ system, the foreseen system for Phase II, operating staves and petals of the ITk strip detector. As a benchmark, the FELIX results are compared to those obtained with the collaboration-internal DAQ systems used by the assembly sites. Moreover, several DCS tools for control and monitoring of the detector, developed and validated at the system tests, will be presented.
To face the heightened requirements of real-time, precise bunch-by-bunch luminosity determination and beam-induced background monitoring at the High-Luminosity LHC, the CMS BRIL project is constructing a stand-alone luminometer, the Fast Beam Condition Monitor (FBCM). It will be fully independent of the CMS central timing, trigger, and data acquisition services and will be able to operate at all times with a fast triggerless readout. The CO2-cooled silicon-pad sensors will be connected to a dedicated front-end ASIC that amplifies the signals and provides a timing resolution of a few ns. The FBCM is based on a modular design, adapting several electronics components from the CMS Tracker for power, control, and readout functionalities. The 6-channel FBCM23 ASIC outputs a single binary high-speed asynchronous signal carrying the time-of-arrival and time-over-threshold information. The prototype chip is undergoing extensive tests. The detector design and the results of the first validation tests are reported.
Next-generation neutrinoless double-beta decay searches seek to elucidate the Majorana nature of neutrinos and the existence of a lepton number violating process. The LEGEND-1000 experiment represents the ton-scale phase of the LEGEND program's search for neutrinoless double-beta decay of $^{76}$Ge, following the current intermediate-stage LEGEND-200 experiment at LNGS in Italy. The LEGEND-1000 design is based on a 1000-kg mass of p-type, inverted-coaxial, point-contact germanium detectors operated within a liquid argon active shield. The LEGEND-1000 experiment's technical design, energy resolution, material selection, and background suppression techniques combine to project a quasi-background-free search for neutrinoless double-beta decay in $^{76}$Ge at a half-life beyond 10$^{28}$ yr and a discovery sensitivity spanning the inverted-ordering neutrino mass scale. The innovation behind the LEGEND-1000 design, its technical readiness, and discovery potential is presented.
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment under construction in China with the main goal of measuring the neutrino mass ordering with reactor antineutrinos. The Top Tracker constitutes part of the veto system of JUNO. Its main task is to track the muons crossing the Central Detector and evaluate the cosmogenic background contribution to the signal. The Top Tracker will be installed on top of JUNO's water Cherenkov and Central detectors and can precisely track about 30% of the total muon flux at JUNO. It is composed of three layers of plastic scintillator modules repurposed from OPERA, with each module having two planes of scintillator strips arranged in perpendicular directions for tracking. New electronics were developed to trigger and digitize the signal while coping with the high background rates at JUNO. This poster will present the JUNO Top Tracker and discuss its capabilities based on recent detector simulations.
The MIP Timing Detector (MTD) of the CMS experiment, currently under construction for the High Luminosity phase of LHC, emerges as a key player in the pursuit of unrivaled temporal precision in particle physics.
The precise measurement of the time of arrival of charged particles provided by the MTD enables 4D vertex reconstruction and helps discriminate interaction vertices within the same bunch crossing, aiming to keep vertex cross-contamination at the levels of current LHC conditions.
In this contribution, we explore the impact of the measured track momentum uncertainty in the time-of-flight determination and its use in the vertex reconstruction and mass hypothesis assignment, shedding light on its potential impact on event reconstruction and classification.
The results presented in this contribution open new avenues for an effective usage of precision timing in pileup mitigation and as a tool to probe new physics with characteristic time structures.
The Fluorescence detector Array of Single-pixel Telescopes (FAST) project proposes a simplified Schmidt telescope designed for the detection of ultra-high-energy cosmic rays, maintaining optical excellence while featuring cost-effective components. The FAST prototype utilizes a segmented mirror of 1.6 m diameter and four 200 mm photomultipliers at its focal plane. Currently, first-generation prototypes operate at the Telescope Array Experiment and the Pierre Auger Observatory. Based on the experience with the first-generation prototype, the second generation is under development. Understanding the temperature and magnetic field dependence of the photomultipliers is essential for accurate event reconstruction; therefore, thorough testing of the individual detector components provides insight into the behavior of the FAST prototype.
We introduce the second-generation FAST telescope, compare it to the first prototype, and present results of the photomultiplier characterization in the laboratory.
The NEXT collaboration uses a high-pressure gaseous time projection chamber with electroluminescent amplification to search for neutrinoless double-beta decay in Xe-136. The experimental program is built on solid and successful R&D showing excellent energy resolution (<1%) and remarkable topological discrimination, which motivates tonne-scale proposals for the technology in a phased approach. As a first stage, NEXT-HD, with 1 tonne of enriched xenon, would reach a sensitivity to the half-life of the process better than $10^{27}$ yr within 5 years of operation. As a second phase, the implementation of an innovative barium-tagging technique would drastically reduce backgrounds and extend the sensitivity beyond the region allowed by the inverted mass ordering. This talk will cover the progress toward the large-scale phases of the NEXT project: prototyping plans and R&D, such as tracking readout technology, S1 detection, in-vessel electronics, and the current state of single-ion barium tagging.
We report on a novel application of computer vision techniques to extract beyond the Standard Model (BSM) parameters directly from high energy physics (HEP) flavor data. We develop a method of transforming angular and kinematic distributions into "quasi-images" that can be used to train a convolutional neural network to perform regression tasks, similar to fitting. This contrasts with the usual classification functions performed using ML/AI in HEP. As a proof-of-concept, we train a 34-layer Residual Neural Network to regress on these images and determine the Wilson Coefficient $C_{9}$ in MC (Monte Carlo) simulations of $B \rightarrow K^{*}\mu^{+}\mu^{-}$ decays. The technique described here can be generalized and may find applicability across various HEP experiments and elsewhere.
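The first step of such a pipeline, turning a sample of angular and kinematic variables into a fixed-size "quasi-image", can be sketched as below. The choice of variables and the 32x32 binning are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Illustrative sketch (not the authors' code): histogram per-event angular
# variables into a fixed-size, normalized 2D "quasi-image" that a CNN can
# regress on. The variable pair (cos(theta), phi) and the 32x32 binning
# are assumptions made for this example.
def to_quasi_image(cos_theta, phi, bins=32):
    """Histogram (cos_theta, phi) samples into a normalized 2D image."""
    img, _, _ = np.histogram2d(
        cos_theta, phi, bins=bins,
        range=[[-1.0, 1.0], [-np.pi, np.pi]],
    )
    total = img.sum()
    return img / total if total > 0 else img

rng = np.random.default_rng(0)
image = to_quasi_image(rng.uniform(-1, 1, 10_000),
                       rng.uniform(-np.pi, np.pi, 10_000))
# image is a 32x32 nonnegative array summing to 1, ready for a CNN input.
```

A CNN (such as the 34-layer residual network mentioned above) would then be trained to map such images to the parameter of interest as a regression target.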
We report progress in using transformer models to generate particle theory Lagrangians. By treating Lagrangians as complex, rule-based constructs similar to linguistic expressions, we employ transformer architectures, proven in language processing tasks, to model and predict Lagrangians. A dedicated dataset, which includes the Standard Model and a variety of its extensions featuring additional scalar and fermionic fields, was used to train our transformer model from scratch. The resulting model aims to demonstrate initial capabilities reminiscent of large language models, including pattern recognition and the generation of consistent physical theories. The ultimate goal of this initiative is to establish an AI system capable of formulating theoretical explanations for experimental observations, a significant step towards integrating artificial intelligence into the iterative process of theoretical physics.
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the Standard Model as well as searches for physics beyond it. The Compact Muon Solenoid (CMS) experiment plans to entirely replace its trigger and data acquisition system to achieve this ambitious physics program. Efficiently collecting these datasets will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. The new Level-1 trigger architecture for the HL-LHC will improve performance with respect to Phase I through the addition of tracking information and subdetector upgrades providing higher granularity and precision timing information. In this poster, we present a large panel of trigger algorithms for the upgraded Phase II trigger system, which exploit this finer information to optimally reconstruct the physics objects. The expected performance will be presented.
In view of the HL-LHC, the Phase-2 CMS upgrade will replace the entire trigger and data acquisition system. The readout electronics will be upgraded to allow a maximum L1-accept rate of 750 kHz and a latency of 12.5 µs. The muon trigger is a multi-layer system designed to reconstruct and measure the momenta of muons by correlating information across muon chambers in the so-called muon track finders, which run sophisticated pattern recognition algorithms on FPGA processors. The Layer-1 Barrel Muon Filter, the second layer of this system, concentrates the stubs and hits from the barrel muon stations and runs dedicated algorithms to refine and correlate the information of multiple chambers before sending it to the track finders. We describe the first version of an algorithm designed to detect and identify muon showers. The algorithm has been demonstrated in firmware, and its physics performance is also assessed.
The precision measurement of daily helium fluxes with AMS during twelve years of operation in the rigidity interval from 1.71 to 100 GV is presented. The helium flux and the helium-to-proton flux ratio exhibit variations on multiple timescales. In nearly all the time intervals from 2014 to 2018, we observed recurrent helium flux variations with a period of 27 days. Shorter periods of 9 days and 13.5 days are observed in 2016. The strength of all three periodicities changes with time and rigidity. Over the entire time period we find that below ~7 GV the helium flux exhibits larger time variations than the proton flux, whereas above ~7 GV the helium-to-proton flux ratio is time independent. Remarkably, below 2.4 GV a hysteresis between the helium-to-proton flux ratio and the helium flux is observed at greater than the 6σ level. This shows that at low rigidity the modulation of the helium-to-proton flux ratio differs before and after the solar maximum of 2014.
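Recurrent periodicities of this kind are typically extracted with spectral methods. As a toy illustration (not the AMS analysis, which uses far more sophisticated techniques on real daily fluxes), a simple FFT periodogram recovers a 27-day modulation from a synthetic daily flux series:

```python
import numpy as np

# Toy sketch (not the AMS analysis): recover the dominant periodicity of
# a daily flux time series with a simple FFT periodogram.
def dominant_period(flux):
    """Return the period (days) of the strongest nonzero-frequency mode."""
    flux = np.asarray(flux, dtype=float)
    power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
    freqs = np.fft.rfftfreq(len(flux), d=1.0)  # cycles per day
    k = 1 + np.argmax(power[1:])               # skip the zero-frequency bin
    return 1.0 / freqs[k]

days = np.arange(1080)  # roughly three years of daily measurements
toy_flux = 1.0 + 0.05 * np.sin(2 * np.pi * days / 27.0)
# dominant_period(toy_flux) recovers a period of 27 days.
```

Real solar-modulation data are noisy and the periodicity strength drifts with time and rigidity, so windowed or wavelet analyses are usually preferred over a single global periodogram.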
Cosmic Nitrogen, Sodium, and Aluminum nuclei are a combination of primaries, produced at cosmic-ray sources, and secondaries resulting from collisions of heavier primary cosmic rays with the interstellar medium. We present high statistics measurements of the N, Na and Al rigidity spectra. We discuss the properties and composition of their spectra and present a model-independent determination of their primary and secondary components, together with the mostly primary cosmic rays C, Ne, Mg, and S. This allows us to determine the C/O, N/O, Ne/Si, Na/Si, Mg/Si, Al/Si, and S/Si abundance ratios at the source independent of cosmic ray propagation.
To cope with the large data volume and high event rate expected after the planned High-Luminosity LHC (HL-LHC) upgrade, the ATLAS monitored drift tube (MDT) readout electronics will be replaced. In addition, the MDT detector will be used in the first-level trigger to improve the muon transverse momentum resolution and reduce the trigger rate. About 100 small-radius MDT (sMDT) chambers have been built to replace the current MDT chambers in the innermost barrel region. A new trigger and readout system will be used. Designs for two front-end ASICs and a data transmission board have been completed, and detailed standalone and joint tests have been performed. We will present the construction of the sMDT chambers and the latest studies of the new trigger and readout system.
Generative deep learning models have attracted interest in the high-energy physics community as a faster alternative to compute-intensive Monte Carlo simulations. This work focuses on evaluating an ensemble of GANs on the task of electromagnetic calorimeter simulation. We demonstrate that the diversity of samples produced by a GAN model can be significantly improved by expanding the model into a multi-generator ensemble. We present a systematic study comparing the single-GAN model and the ensemble model using both physics-inspired and artificial features.
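The ensemble sampling step can be sketched with toy stand-ins for the trained generators. This is a minimal illustration of the multi-generator idea, not the paper's model or training procedure.

```python
import numpy as np

# Minimal sketch of the multi-generator ensemble idea (not the paper's
# model): each sample is drawn from one of several independently trained
# generators chosen uniformly at random, so the ensemble covers modes
# that any single generator might miss.
def ensemble_sample(generators, n, rng):
    """Draw n samples, picking a generator uniformly for each one."""
    choices = rng.integers(len(generators), size=n)
    return np.array([generators[i](rng) for i in choices])

# Toy stand-ins for trained generators, each covering a different mode
# of the target distribution.
gens = [lambda r: r.normal(-2.0, 0.5), lambda r: r.normal(+2.0, 0.5)]
samples = ensemble_sample(gens, 5_000, np.random.default_rng(1))
# The ensemble covers both modes; either generator alone covers only one.
```

In the GAN setting the same mixing applies to full calorimeter showers rather than scalars, and sample diversity is then quantified with the physics-inspired and artificial features mentioned above.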
The CMS upgrade for the High Luminosity phase of the LHC involves the installation of three GEM stations: GE1/1, GE2/1, and ME0. While GE1/1 has been operational since Run-3's onset, only two GE2/1 chambers are in place as of early 2024. ME0's installation is slated for LHC Long Shutdown 3, with GE2/1 chamber installation resuming post ME0 completion.
These GEM stations, together with improved Resistive Plate Chamber (iRPC) detectors, are pivotal for sustaining optimal muon triggering and reconstruction. This presentation covers the production, validation, and performance of the first two GE2/1 detectors at the CERN GEM lab, showcasing average efficiencies of 98.5% and 99% measured with a six-layer cosmic stand.
The predicted neutrinoless double-β (0νββ) decay is the crucial phenomenon for proving the existence of the Majorana neutrino, which underpins the Fukugita-Yanagida theory explaining the matter dominance of the universe. The nuclear matrix element (NME) of 0νββ decay is an important theoretical quantity for the detector design of next-generation 0νββ decay searches. Reliable calculation of this NME is a long-standing problem because of the diversity of the predicted values. A particular difficulty is that the effective strength of the Gamow-Teller transition operator for this decay is not established. I will discuss which vertex corrections are necessary for the 0νββ NME using a hybrid model that applies quantum field theory to the leptons and Rayleigh-Schrödinger perturbation theory to the nucleons. At first order, the two-body nucleon current and the exchange vertex corrections are important.
The study of the hot and dense matter formed in heavy-ion collisions at the LHC allows the characterization of the quark--gluon plasma (QGP), the deconfined state of quarks and gluons. The measurement of the ratio of yields of charged particles in central to peripheral heavy-ion collisions ($I_{\rm{CP}}$) provides strong constraints on the quenching mechanism in the QGP. In this talk, the modification of the per-trigger yield of associated particles, $I_{\rm{CP}}$, extracted from dihadron correlations in Run 3 Pb--Pb collisions at 5.36 TeV will be presented. Such measurements have the potential to show two distinct effects: suppression on the away side due to strong in-medium energy loss, as well as enhancement on the near side due to the reappearance of the quenched energy. A detailed study of the collision energy and system size dependence of $I_{\rm{CP}}$ will be shown by comparing the results with Pb--Pb collisions at 2.76 TeV at the LHC and Au--Au collisions at RHIC.
KKMChh is a precision Monte Carlo program for photonic and electroweak radiative corrections to hadron scattering. It implements, at the quark level, the amplitude-level exponentiation originally developed for electron-positron scattering, modeling initial- and final-state QED radiation as well as initial-final interference to all orders in a soft-photon approximation, and adds hard-photon corrections through second-order next-to-leading logarithm. A previous ICHEP talk introduced a matching procedure, NISR (negative initial-state radiation), to match the exponentiated photon radiation from the quarks to a QED-corrected parton shower. Here, we describe further details of the NISR method and its effect on forward-backward asymmetry calculations of interest for a precise determination of the electroweak mixing angle.
The MUonE experiment proposes a novel approach to determining the hadronic contribution to the muon anomalous magnetic moment, by measuring the running of the QED coupling through the analysis of $\mu e$ elastic scattering events. The experiment will be carried out in the CERN North Area, scattering the high-intensity 160 GeV muon beam available there on a low-Z target. The detector will comprise 40 stations, each consisting of a low-Z target followed by a tracking system that measures the scattering angles with high precision; further downstream lie an electromagnetic calorimeter and a muon detector. To validate the basic concepts, a run was performed in 2023 with two stations followed by a calorimeter. This demonstrated, for the first time, the ability of the detector to measure elastic events with the high-rate 160 GeV muon beam at 40 MHz, and is considered a milestone toward the Technical Proposal of the experiment. The results from the test run will be presented.
The discovery of neutrino oscillations has provided experimental evidence that neutrinos have nonzero masses. Cosmological constraints as well as direct measurements indicate that the neutrino masses are orders of magnitude smaller than the masses of the other SM fermions. The introduction of new heavy states, N, with right-handed chirality, known as heavy neutral leptons (HNLs), is a possible beyond-the-Standard-Model (BSM) mechanism for providing nonzero masses to neutrinos. HNLs can generate a gauge-invariant mass term for the SM neutrinos through a see-saw mechanism. Additionally, HNLs can provide explanations for other open problems in high energy physics, such as the nature of dark matter or the matter-antimatter asymmetry of the universe. This poster presents a search for long-lived HNLs in both Dirac and Majorana scenarios. The search is conducted using final states that contain two charged leptons (electrons or muons) and a jet from a displaced vertex.
The Micro-Channel Plate (MCP) is a specially crafted microporous plate with millions of independent channels, each capable of secondary electron emission. The MCP can be used as the electron-multiplier stage in PMTs. There are two types of MCP photomultiplier tube (MCP-PMT). One is the large-area electrostatic-focusing PMT (LPMT), used in large-scale neutrino detectors for its large-area, high-efficiency photocathode. The other, the small-size proximity-focusing PMT (FPMT), is widely used in high energy physics for its fast time response. The MCP-PMT Collaboration Group in China successfully developed the LPMT for JUNO in 2019, and plans to develop a new type of FPMT with multi-anode readout (4x4, 8x8). FPMT prototypes have been produced with 30 ps time resolution, as well as an 8x8 readout anode for position resolution. We will introduce both types of PMTs used in high energy physics detectors.
The ALICE data-taking concept for LHC Run 3 and Run 4 allows the collection of minimum-bias collisions in continuous readout mode, their subsequent asynchronous reconstruction, and a final offline selection of events for permanent storage. This design enables the implementation of dedicated event selection schemes tailored for a given observable and avoids the need for dedicated hardware triggers, which in many cases would be difficult to develop. The poster will discuss the strategy for selecting events with high transverse momentum ($p_{\rm T}$) charged-particle jets. The offline jet trigger searches for the presence of a large-radius ($R=0.6$) anti-$k_{\rm T}$ jet, unconstrained in pseudorapidity, with transverse momentum greater than a given $p_{\rm T}$ threshold. This design provides unbiased inclusive $p_{\rm T}$ spectra of charged-particle anti-$k_{\rm T}$ jets with radii less than or equal to $R$, and of inclusive tracks above the chosen $p_{\rm T}$ threshold.
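The offline selection logic itself is simple once jets are clustered. The sketch below is an illustration, not ALICE software: jet clustering (anti-$k_{\rm T}$, $R=0.6$, e.g. with FastJet) is assumed to have run already, so each event carries a list of jet $p_{\rm T}$ values, and the threshold value is invented for the example.

```python
# Sketch of the offline jet-trigger logic (illustration, not ALICE code):
# an event is kept if its large-radius (R = 0.6) anti-kT jet collection
# contains at least one jet above the pT threshold. Clustering is assumed
# done upstream; events here are lists of jet pT in GeV/c, and the
# threshold below is an invented example value.
PT_THRESHOLD = 20.0  # GeV/c (assumed for illustration)

def passes_jet_trigger(jet_pts, threshold=PT_THRESHOLD):
    """True if any clustered R=0.6 jet exceeds the pT threshold."""
    return any(pt > threshold for pt in jet_pts)

events = [[5.2, 12.1], [33.7, 8.0], []]
selected = [ev for ev in events if passes_jet_trigger(ev)]
# Only the event containing the 33.7 GeV/c jet survives the selection.
```

Because the decision depends only on the leading large-$R$ jet, spectra of smaller-radius jets and of tracks above the threshold in the selected events remain unbiased, as noted above.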
Multipacting in particle accelerator elements is a major challenge. It depends strongly on the surface's total electron emission yield (TEEY), so developing thin coatings that reduce the TEEY is of critical importance. The surface dissipation induced by RF fields is also a critical parameter, and the thin-film electrical conductivity has to be tuned accordingly. For each application, an optimal pair of TEEY and conductance values is required. In order to control both independently, one solution is to develop a heterostructure mixing a low-TEEY, electrically conducting material with a high-TEEY dielectric material to obtain, for instance, a low-TEEY dielectric coating that prevents both multipacting and an increase in surface losses. We will present results obtained with atomic-layer-deposited coatings made of layers of ZnO and MgO to verify that this solution is relevant. Measurements show that the TEEY and the conductivity vary with the coating structure and composition.
Ultra-short and intense electron beams are now routinely generated by the laser wakefield acceleration (LWFA) mechanism. However, the achieved beams remain unstable compared to conventional beams, even at state-of-the-art laser facilities, because of the inherent nature of the laser systems and the gaseous targets involved. An online, accurate, and non-perturbative beam diagnostic system is required for irradiation experiments. LWFA beams are challenging to characterize because of their high intensity, high divergence, short time structure, and sensitivity to the laser. We present a scintillation detector in a differential-stack configuration that can perform shot-by-shot measurements of e/γ radiation from LWFA sources. Using an unfolding algorithm relying on Monte Carlo simulations, the detector provides direct feedback on the beam quality, including spectral information. Proof-of-concept measurements carried out at ELI Beamlines at laser repetition rates up to 1 kHz are presented.
The CLIC study has developed compact, high gradient, and energy efficient acceleration units as building blocks for a future high-energy, electron-based linear collider. The components to construct such units are now generally available in industry and their properties promise cost effective solutions for making electron-based linacs (already a crucial technology in many research, medical, and industrial facilities) more efficient and more compact.
The CLIC study has actively promoted and supported spin-off developments since its beginning. Examples include beam manipulation and diagnostics in research linacs, including FEL light sources; compact inverse Compton scattering X-ray sources; medical linacs, including FLASH radiotherapy; and compact neutron sources for materials investigations. This presentation will introduce the X-band technologies developed as part of the CLIC study and discuss examples of compact linacs utilising such technology for different applications.
Current artificial muon beam sources require conventional radiofrequency (RF) accelerators that can be hundreds to thousands of meters in size. Laser wakefield acceleration, by contrast, can achieve acceleration gradients of up to 100 GeV/m, 1000 times greater than in RF accelerators. By using a metre-scale plasma combined with next-generation laser driver technology, the system could therefore potentially be shrunk down to a portable size.
The Intense and Compact Muon Sources for Science and Security (ICMuS2) project aims at the production of a high-intensity, high-energy (100 GeV) muon beam using a 10 PW-class laser to accelerate electrons. Muons will then be generated in a high-Z target via the Bethe-Heitler process.
The ICMuS2 project and its latest results are presented, including the design and development of a 10-100 GeV laser-driven electron accelerator and the production, detection, and characterization of muons within the background electromagnetic cascade.
The Mu2e experiment at Fermilab investigates rare muon-to-electron conversion using a muon beam generated by an 8 GeV proton beam. To achieve the required high muon flux, minimizing extraction losses is crucial. An important source of such losses is particles impacting the electric septum anode. A promising solution to the problem lies in the beam-shadowing scheme tested at the CERN SPS. In this approach, a bent crystal is strategically placed upstream of the septum, deflecting particles at a precise angle via the phenomenon of channeling. As a result, a zone with reduced particle flux is created downstream of the crystal, safeguarding the septum anode by minimizing interactions with the beam. This contribution outlines the investigation conducted to optimize the design of the beam shadowing and the manufacturing process of the bent crystal sample. It underscores the significant potential of channeling in bent crystals to assist the Mu2e experiment.
Meson factories are powerful drivers of diverse physics programs and play a major role in particle physics at the intensity frontiers.
Currently, PSI delivers the most intense continuous muon beam in the world, up to 10^8 μ+/s. The High-Intensity Muon Beam (HiMB) project at PSI aims to develop new muon beamlines delivering up to 10^10 μ+/s, with a major impact on low-energy muon-based searches.
While the next generation of proton drivers with beam powers over the current limit of 1.4 MW still requires significant research, HiMB focuses on optimizing existing target stations and beamlines.
We will present results obtained after the installation of the new production target, which confirm the Monte Carlo predictions; put in perspective, the gain is equivalent to raising the proton beam power to almost 2 MW. We will also report on the design of beamline optics based on pure solenoid elements for the secondary beamlines, together with new high-brightness tertiary beamlines.
The nuSTORM facility enables innovative neutrino physics studies through the decay of muons circulating in a storage ring. The well-defined composition and energy spectra of the neutrino beam from muon decays, combined with precise muon flux measurements, facilitate a diverse research program probing fundamental neutrino properties.
nuSTORM has been optimized to store muons with momentum tunable from 1 to 6 GeV/c, enabling precise measurements of νμA and νeA scattering over energy ranges relevant for long-baseline experiments. It also allows for highly sensitive searches for exotic processes and studies of short-baseline flavor transitions beyond the reach of already planned experiments. As a technology testbed for high-brightness muon beams, nuSTORM is on the path towards a multi-TeV muon collider and could be part of a test facility serving a muon-cooling demonstrator.
nuSTORM’s status, physics capabilities and potential as a muon collider test-facility will be presented.
Precision measurements by AMS reveal unique properties of cosmic charged elementary particles. In the absolute rigidity range from ~60 to ~500 GV, the antiproton flux and proton flux have nearly identical rigidity dependence. This behavior indicates an excess of high-energy antiprotons compared with the secondary antiprotons produced in collisions of cosmic rays. More importantly, from ~60 to ~500 GV the antiproton flux and positron flux show identical rigidity dependence. The positron-to-antiproton flux ratio is independent of energy, and its value is determined to be 2 with percent-level accuracy. This unexpected observation indicates a common origin of high-energy antiprotons and positrons in the cosmos.
The positron flux measured by the Alpha Magnetic Spectrometer in the TeV region exhibits complex energy dependence. It is described by the sum of a term associated with positrons produced in collisions of cosmic rays, which dominates at low energies, and a new source term, which dominates at high energies and is associated with either dark matter or an astrophysical origin. The positron source term also manifests itself in the measured electron spectrum. This is the first indication of the existence of an identical, charge-symmetric source term in both the positron and electron spectra and, as a consequence, of the existence of new physics.
Analysis of the anisotropy of the arrival directions of galactic positrons, electrons, and protons has been performed with the Alpha Magnetic Spectrometer on the International Space Station. This measurement makes it possible to differentiate between point-like and diffuse sources of cosmic rays and thus to understand the origin of high-energy positrons. The AMS results on the dipole anisotropy are presented, along with a discussion of their implications.
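As a schematic illustration of a dipole-anisotropy measurement (a toy sketch, not the AMS analysis), the snippet below injects a dipole of amplitude 0.05 into otherwise isotropic arrival directions and recovers it with the standard estimator δ_i = 3⟨n_i⟩, where n_i are the direction cosines of the events. All numbers are invented for the example.

```python
import math, random

def dipole_estimate(directions):
    """For a flux proportional to 1 + delta.n, each dipole component is
    three times the mean of the corresponding direction cosine."""
    n = len(directions)
    return [3.0 * sum(d[i] for d in directions) / n for i in range(3)]

random.seed(42)
delta_true = 0.05                     # injected dipole amplitude along z
events = []
while len(events) < 200000:
    z = random.uniform(-1.0, 1.0)     # isotropic in cos(theta)
    phi = random.uniform(0.0, 2.0 * math.pi)
    # accept-reject against the dipole-modulated flux 1 + delta*z
    if random.uniform(0.0, 1.0 + delta_true) < 1.0 + delta_true * z:
        s = math.sqrt(1.0 - z * z)
        events.append((s * math.cos(phi), s * math.sin(phi), z))

dx, dy, dz = dipole_estimate(events)
amplitude = math.sqrt(dx * dx + dy * dy + dz * dz)
```

With 2×10^5 events the statistical uncertainty on each component is about 0.004, so the recovered amplitude scatters around the injected 0.05; this is the basic reason large event samples are needed to distinguish point-like from diffuse source scenarios.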
We present continuous daily electron and positron spectra over twelve years, from 1 to 42 GeV. These unique data provide critical information for understanding the propagation in the heliosphere of particles with the same mass but opposite charge. The characteristics of the data cannot be explained by current theoretical models.
One of the main goals of the next generation of space experiments is to extend the measurement of cosmic positrons into the TeV region: this will provide unique information for indirect dark matter searches and cosmic-ray physics. The detection techniques currently in use are not suited to reaching this energy region on a relatively short time scale.
An alternative method relies on the detection of the synchrotron photons emitted by electrons and positrons as they travel through the geomagnetic field. Investigating this technique is the main goal of the Electron Positron Space Instrument (EPSI) project, an R&D effort approved and financed in Italy as a "Research Project of Relevant National Interest" in September 2023. The detector consists of a large-acceptance electromagnetic calorimeter and a large-area, low-detection-threshold X-ray detector array.
In this contribution we will present the current status of the EPSI project and next steps for the fulfillment of the project goals.
The production of antihelium in pp collisions at √s = 13 TeV is studied with the LHCb experiment. The dataset used corresponds to 5.1 fb⁻¹. The helium nuclei are identified mainly via ionisation losses in the silicon detectors, resulting in a nearly background-free sample of more than 10^5 candidates. Recent improvements, exploiting information from the RICH, calorimeter, and OT, further suppress the residual background from photon conversions. Using the excellent vertex reconstruction capabilities of the VELO subdetector, these helium nuclei are combined with pions to study hypertriton production, and with proton and deuteron candidate tracks to form Λb decay vertices. In this contribution, first results and prospects for a rich programme of LHCb measurements of QCD and astrophysics interest involving light nuclei are discussed. In particular, the results will ultimately impact estimates of the antihelium flux in cosmic rays from dark matter annihilation.
There is a convincing case for some form of supersymmetry, but not a single superpartner has yet been observed. Here we consider a radically different form of supersymmetry, which initially combines standard Weyl fermion fields and primitive (unphysical) boson fields. A stable vacuum then requires that the initial boson fields be transformed into the usual complex fields $\phi$, auxiliary fields $F$, and real fields $\varphi$ of a new kind. A stable vacuum thus imposes Lorentz invariance and breaks the initial susy with no additional assumptions or fields. The present formulation may explain why no superpartners have yet been identified: superpartners with masses $\lesssim 1$ TeV may exist, but with reduced cross-sections and modified experimental signatures. Predictions include (1) the dark matter candidate of our previous papers, (2) many new fermions with masses not far above 1 TeV, and (3) the full range of superpartners with a modified phenomenology.
The Muon g-2 Experiment at Fermilab, whose second result was published in August 2023, conducts the world’s most precise measurement of the anomalous magnetic moment of the muon. Muon g-2 data can be used to search for a sidereal variation of the anomalous spin precession of the muon, one of the important signatures of CPT and Lorentz Invariance Violation (LIV) in the muon sector. The BNL Muon g-2 experiment was the first to conduct this search at the sidereal frequency. The Fermilab Muon g-2 experiment searched for a variation of the anomalous spin precession of the muon at the sidereal frequency and at its harmonics. This represents the first search for CPT and LIV signatures in the muon sector at sidereal harmonics. The main focus of this talk is to discuss the result of the CPT and LIV search with Fermilab Muon g-2 Run-2/3 data and the details of the analysis framework used. The projected sensitivity of this search will approach O(10^-25) GeV, well surpassing the Planck scale.
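The underlying idea of a sidereal-harmonic search can be sketched with a toy time series: one looks for modulation of an otherwise constant observable at the sidereal frequency and its integer harmonics. The sampling scheme, observable, and amplitudes below are invented for illustration and are not the experiment's analysis code.

```python
import math

SIDEREAL_DAY = 86164.1  # seconds

def harmonic_amplitudes(times, values, n_harmonics):
    """Fourier estimators of the modulation amplitude at the sidereal
    frequency and its harmonics. For evenly spaced samples spanning an
    integer number of sidereal days these estimators are exact."""
    omega = 2.0 * math.pi / SIDEREAL_DAY
    n = len(values)
    amps = []
    for m in range(1, n_harmonics + 1):
        a = 2.0 / n * sum(v * math.cos(m * omega * t)
                          for t, v in zip(times, values))
        b = 2.0 / n * sum(v * math.sin(m * omega * t)
                          for t, v in zip(times, values))
        amps.append(math.hypot(a, b))
    return amps

# Toy series: a constant plus a first-harmonic modulation of amplitude 3e-3
# (arbitrary units), sampled evenly over exactly three sidereal days.
n = 4320
ts = [i * 3 * SIDEREAL_DAY / n for i in range(n)]
ys = [1.0 + 3e-3 * math.cos(2 * math.pi * t / SIDEREAL_DAY + 0.7)
      for t in ts]
amps = harmonic_amplitudes(ts, ys, 3)
# amps[0] recovers 3e-3; amps[1] and amps[2] are consistent with zero.
```

In a real search the series carries statistical noise and gaps, so the harmonic amplitudes are extracted by a fit and compared against the null-hypothesis distribution rather than read off directly.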
The observed matter-antimatter asymmetry in the universe is a serious challenge to our understanding of nature. Baryon-number-violating (BNV) and lepton-number-violating (LNV) decays have been searched for in many experiments to shed light on this observation. In this talk, we present recent results on searches for BNV and LNV decays of charmed mesons, hyperons, and light hadrons at the BESIII experiment.
Charged Lepton Flavor Violation (cLFV) is highly suppressed in the Standard Model (SM) by the finite but tiny neutrino masses. Its branching fractions are calculated to be extremely small in the SM, and so far no charged-lepton-flavor-violating process has been found in experiments, including searches performed in lepton ($\mu$, $\tau$) decays, pseudoscalar meson ($K$, $\pi$) decays, vector meson ($\phi$, $J/\psi$, $\Upsilon$) decays, Higgs boson decays, etc. This talk presents searches for charged lepton flavor violation at the BESIII experiment. The process $J/\psi \rightarrow e\tau/e\mu$ is searched for using 10 billion $J/\psi$ events collected by BESIII, and the result improves the previously published limit by two orders of magnitude.
Lorentz and CPT symmetry in the quark sector of the Standard Model are studied in the context of an effective field theory using ZEUS $e^{\pm}p$ data. Symmetry-violating effects can lead to time-dependent oscillations of otherwise time-independent observables, including scattering cross sections. An analysis using five years of inclusive neutral-current deep inelastic scattering events corresponding to an integrated HERA luminosity of 372 pb$^{−1}$ at $\sqrt{s} = 318$ GeV has been performed. No evidence for oscillations in sidereal time has been observed within statistical and systematic uncertainties. Constraints, most for the first time, are placed on 42 coefficients parameterising dominant CPT-even dimension-four and CPT-odd dimension-five spin-independent modifications to the propagation and interaction of light quarks.
The HIBEAM/NNBAR experiment is a two-stage experiment at the European Spallation Source to search for baryon number violation. The experiment would make high-sensitivity searches for the baryon-number-violating processes n → nbar and n → n′ (neutron to sterile neutron), corresponding to selection rules of ΔB = 2 and ΔB = 1, respectively. The experiment addresses open questions such as baryogenesis and dark matter, and is sensitive to scales of new physics in excess of those available at colliders. The collaboration comprises physicists from large collider experiments and low-energy nuclear physics experiments, together with scientists specialising in neutronics and magnetics; the European, US, and South American communities are represented. The experiment can improve the sensitivity to neutron conversion probabilities by three orders of magnitude compared with previous searches. The opportunity to make such a leap in sensitivity in tests of a global symmetry is a rare one.
The CMS experiment has recently established a new Common Analysis Tools (CAT) group. The CAT group provides a forum for the discussion, dissemination, organization, and development of analysis tools, broadly bridging the gap between the CMS data and simulation datasets and publication-grade plots and results. In this talk we discuss some of the recent developments carried out in the group, including its structure, the facilities and services provided, the communication channels, ongoing developments on frameworks for data processing, strategies for the management and preservation of analysis workflows, and tools for the statistical interpretation of analysis results.
The ATLAS experiment at CERN comprises almost 6000 members. Significant effort is required to develop and maintain a system that allows them to analyze the experiment's data. Such a system consists of millions of lines of code, hundreds of thousands of computer cores, and hundreds of petabytes of data. Even a system of this size, while sufficient for current needs, will need significant upgrades in the coming years to prepare for the High-Luminosity LHC. In this talk, I will summarize the status of ATLAS Computing and the improvements that are implemented or planned.
The Auger Offline Framework is a general-purpose C++-based software framework for the reconstruction of the events detected by the Pierre Auger Observatory. Thanks to its modular structure, collaborators can contribute to the code development with the algorithms and sequencing instructions required for their analyses. The framework can also be fed with different Monte Carlo codes describing the longitudinal and lateral development of air showers, and it simulates the detectors' response using Geant4, embedded in the Offline code. Thanks to this high modularity and robustness, several modules of the Offline Framework have been shared with other experiments covering a wide range of cosmic-ray, gamma-ray, and particle detectors, from JEM-EUSO and HAWC to NA61. In this talk, we describe the Auger Offline Framework and its applications and discuss the challenges for the future.
ALICE records Pb-Pb collisions in Run 3 at an unprecedented rate of 50 kHz, storing all data in continuous-readout (triggerless) mode. The main purpose of the ALICE online computing farm is the calibration of the detectors and the compression of the recorded data. The detector with by far the largest data volume is the TPC, and the online farm is thus optimized for fast and efficient processing of TPC data during data taking. For this, ALICE heavily leverages the compute power of GPUs. When there is no beam in the LHC, the GPU-equipped farm performs the offline reconstruction of the recorded data, in addition to the GRID. Since the majority of the compute capacity of the farm is in the GPUs, and some GRID sites have meanwhile begun to offer GPU resources, ALICE has started to offload other parts of the offline reconstruction to GPUs as well.
The talk will present the experience and processing performance with GPUs in the Run 3 Pb-Pb and pp online and offline processing in ALICE.
Recent advances in X-ray beamline technologies, including the advent of very high-brilliance beamlines at next-generation synchrotron sources and advanced detector instrumentation, have led to an exponential increase in the speed of data collection. As a consequence, there is an increasing need for a data analysis platform that can refine and optimize data collection strategies in real-time and effectively analyze data in large volumes after the data collection. The increased data volume and rate push the demand for computing resources to the edge of current workstation capabilities. Advanced data analysis methods are required to keep up with the anticipated data rates and volumes.
We propose a data analysis software framework to address the data challenges at the High Energy Photon Source. In this talk, we will introduce the design and development of the framework and the scientific software built on it. Future plans will also be presented.
The High Energy Photon Source (HEPS) will produce an enormous amount of data. Efficiently storing, analyzing, and sharing these data presents a significant challenge for HEPS.
The HEPS Computing and Communication System (HEPSCC) has designed and established a network and computing system. A dedicated machine room and a high-speed network are ready for production. The computing architecture comprises three types of resources: OpenStack, Kubernetes, and Slurm. Additionally, HEPSCC has developed two software packages for data management and analysis, DOMAS and Daisy. DOMAS aims to automate the organization, transfer, storage, distribution, and sharing of the scientific data of HEPS experiments. Daisy is a data analysis software framework with a highly modular C++/Python architecture. Several data analysis algorithms have been successfully integrated into Daisy, most of which were validated for real-time data processing at the beamlines of BSRF (Beijing Synchrotron Radiation Facility).
Experiments using positron beams impinging on fixed targets offer unique capabilities for probing new light dark particles feebly coupled to e^+ e^- pairs, which can be resonantly produced through positron annihilation on the target's atomic electrons. In this talk, I will discuss the impact of correctly accounting for the momentum distribution of the atomic electrons, which shifts the center-of-mass energy of the annihilating e^+ e^- pairs and must be taken into account when determining the number of signal events. After discussing how to reliably compute the cross section for the process, I will show how to obtain the bound-electron momentum distribution for different target materials from theoretical computations or experimental data. Finally, I will apply these results to the search for the hypothetical X17 particle, focusing on the expected reach of the PADME experiment.
We investigate DM with hypercharge anapole moments, focusing on the scenario of spin-1/2 or spin-1 Majorana DM interacting with SM particles through U(1) hypercharge anapole terms. We construct general, hypercharge-gauge-invariant 3-point vertices for the interactions of a virtual $\gamma/Z$ with two identical massive Majorana particles of any spin. We calculate the relic abundance, analyze current constraints and future sensitivities from XENON$n$T direct detection and the LHC (HL-LHC), and apply the naive perturbativity bound. The spin-1 DM scenario is more tightly constrained than the spin-1/2 one, due to the reduced annihilation cross-section and/or the enhanced rate of LHC mono-jet events. The spin-1 scenario will be almost entirely tested after the full run of the HL-LHC, with the exception of a small parameter region. Our estimates anticipate even stronger bounds for Majorana dark matter with higher spins.
The PADME experiment was originally designed to test dark matter theories predicting the existence of a "Dark Sector" composed of particles that interact with Standard Model ones exclusively through the exchange of a new, massive mediator.
The X17 anomaly, observed in nuclear decays at ATOMKI in Debrecen, has sparked considerable interest in the particle physics community. If the anomaly arises from the decay of a new state into an $e^+ e^-$ pair, time-reversal symmetry implies that the state must also be producible through $e^+ e^-$ annihilation. The PADME experiment can rely on the world's only $e^+$ beam with the appropriate energy for resonant production of X17. The collaboration dedicated the 2022 data taking to investigating the X17 anomaly via the $e^+ e^- \rightarrow X17 \rightarrow e^+ e^-$ reaction, aiming to confirm the particle hypothesis.
The talk gives an overview of the scientific program of the experiment and presents preliminary results on the X17 search.
Dark SHINE is a fixed-target experiment initiative at SHINE (Shanghai high-repetition-rate XFEL and extreme light facility, the first hard X-ray FEL in China), which is under construction with completion targeted for 2026. Dark SHINE aims to search for a new mediator, the dark photon, bridging the dark sector and ordinary matter. In this presentation, we introduce the project and a first prospective study of the search for a dark photon decaying into light dark matter. The facility also provides the opportunity to incorporate a broader scope of BSM search ideas exploiting fixed-target experiments of this type.
References:
Sci. China-Phys. Mech. Astron., 66(1): 211062 (2023), DOI:10.1007/s11433-022-1983-8
arXiv:2311.01780
arXiv:2310.13926
arXiv:2401.15477
DOI:10.5281/zenodo.8373963
To achieve the physics goals of the Circular Electron Positron Collider (CEPC), a tracking system combining a silicon tracker and a drift chamber is proposed. The drift chamber can provide excellent particle identification (PID) performance with the cluster-counting (dN/dx) technique. By measuring the number of primary ionizations along the particle trajectory, dN/dx significantly improves the PID performance owing to its low sensitivity to Landau tails.
A detailed PID study of the drift chamber will be presented. A simulation study, including the detector and electronics responses as well as the reconstruction algorithm, has been performed to optimize the detector design and performance. The results show that the kaon/pion separation power for a 1.2 m track at 20 GeV/c momentum can reach 3σ. Fast readout electronics were developed, and a detector prototype was tested with an electron beam. The test results validate the performance of the electronics and the feasibility of the dN/dx method.
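The statistical advantage of cluster counting can be illustrated with a Poisson back-of-the-envelope estimate: the primary-cluster count over a track of length L is approximately Poisson distributed, so its relative resolution falls as 1/√N. The cluster densities and track length below are hypothetical placeholders, not CEPC design numbers.

```python
import math

def separation_power(dndx_a, dndx_b, track_length_m):
    """K/pi separation in units of sigma for Poisson-limited cluster
    counting; dN/dx values are given in clusters per cm."""
    n_a = dndx_a * track_length_m * 100.0   # mean cluster count, species a
    n_b = dndx_b * track_length_m * 100.0   # mean cluster count, species b
    sigma = math.sqrt((n_a + n_b) / 2.0)    # average Poisson width
    return abs(n_a - n_b) / sigma

# Hypothetical mean cluster densities for two particle species at high
# momentum, over a 1.2 m track:
sep = separation_power(dndx_a=12.5, dndx_b=11.5, track_length_m=1.2)
# A ~8% difference in dN/dx over ~1500 clusters yields roughly 3 sigma.
```

The same arithmetic shows why dN/dx beats truncated-mean dE/dx: the cluster count is immune to the Landau tail of the energy-loss distribution, so the Poisson limit is actually reachable.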
A large, worldwide community is working to realize the physics program of the International Linear Collider (ILC). The International Large Detector (ILD) is one of the detector concepts. The ILD tracking system consists of a silicon vertex detector and a large-volume Time Projection Chamber (TPC), all embedded in a 3.5 T solenoidal field. An extensive research and development program has been carried out within the framework of the LCTPC collaboration. A Large Prototype TPC in a 1 T magnetic field, which can accommodate up to seven identical Micropattern Gaseous Detector (MPGD) readout modules of the TPC design, has been built as a demonstrator at the 5 GeV electron test beam at DESY. Three TPC-MPGD concepts are being developed: GEM, Micromegas, and pixel readout (also known as GridPix). Successful test-beam campaigns have been carried out during the last decade. In this talk, we summarize recent test-beam results and the next steps towards the TPC construction for the ILD detector.
A 10 TeV muon collider is a promising machine for the high-energy frontier. However, the beam-induced background (BIB), originating from the interaction with the machine of leptons produced in muon decays, represents a significant challenge. To deal with its high occupancy, new reconstruction algorithms and high-performance detectors are required.
In this context, studies of the muon spectrometer are presented. A new geometry has been designed with seven (five) layers of Gas Electron Multipliers in the barrel (endcap) as track-sensitive chambers. In the endcaps, a layer of Picosec detectors is added to provide time information to reject BIB hits. Picosec achieves resolutions of tens of picoseconds by amplifying, via a Micromegas, electrons from the Cherenkov light produced by an incident particle in a radiator crystal. Results on muon reconstruction efficiency and BIB mitigation are presented, together with R&D outcomes with different radiators, photocathodes, and new-generation gas mixtures.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation international experiment that aims to make high-precision measurements of neutrino mixing parameters. DUNE will include a multiple-component near detector (ND) complex that will be located on the LBNF beamline at Fermilab in Batavia, IL. During DUNE Phase II, the ND complex will include a high-pressure gaseous-argon TPC surrounded by a calorimeter and a magnet, referred to as ND-GAr. As a fine-grained tracker with a low detection threshold, it will be capable of measuring one of the most crucial sources of systematic uncertainty for neutrino oscillation measurements: nuclear effects in argon at the neutrino interaction vertex. This makes ND-GAr essential for DUNE to reach its precision physics goals. This talk will present an overview of the ND-GAr design, its expected performance, and the R&D efforts on various readout systems. These readout systems include technologies such as wire chambers and Gas Electron Multipliers (GEMs).
The NEWS-G experiment, located at SNOLAB, aims for the direct detection of WIMPs via nuclear recoils using a Spherical Proportional Counter (SPC). Accurate measurement of the recoil energy requires knowledge of the quenching factor (QF). Our past measurements were performed in a Ne + CH4 gas mixture at 2 bar. Next, we intend to measure the QF for different gas mixtures and detector parameters. To facilitate these in-beam QF measurements, we recently developed a novel technique to study SPC detector characteristics as a function of detector parameters for neutron-scattering experiments. We are also exploring the possibility of using the tandem accelerator at UdeM, which can reach neutron beam energies as low as 5 keV.
In this talk, highlights of our new technique for studying neutron scattering with an SPC will be presented. Following that, the past measurements, current status, and future plans of the NEWS-G collaboration in measuring the QF will be summarized.
The IDEA detector, designed for future e+e- colliders such as FCC-ee or CEPC, features an innovative design with a central tracker enclosed in a superconducting solenoidal magnet, a preshower system, and a dual-readout calorimeter. Three muon detector stations are positioned within the iron yoke. The preshower and muon detectors employ μ-RWELL technology, which inherits the best characteristics of GEMs and Micromegas, being spark-robust and simple to assemble. Both detectors follow a modular design based on 50x50 cm² μ-RWELL tiles. The preshower targets a 100 μm spatial resolution, while the muon detector prioritizes channel count. Vigorous R&D on μ-RWELL technology has yielded various prototypes characterized in the lab and at test beams, and the technology has been transferred to industry for mass production. This contribution highlights insights and outcomes from these efforts.
The Zubarev approach of the non-equilibrium statistical operator [1] is used to account for the enhancement of the low-$p_T$ part of pion spectra by introducing an effective pion chemical potential [2], an alternative to explaining the low-$p_T$ enhancement by resonance decays. We report first results obtained with a newly developed thermal particle generator that implements both mechanisms of low-$p_T$ enhancement and applies Bayesian inference to find, for each scenario, the most probable sets of thermodynamic parameters on the freeze-out hypersurface, using the transverse-momentum spectra of identified particles measured by the ALICE Collaboration. The Bayes factor is determined for these scenarios. The advantages and limitations of the Zubarev approach are discussed.
References:
[1] D.N. Zubarev et al., Statistical Mechanics of Nonequilibrium Processes, Akademie Verlag Berlin (1996), vol. I
[2] D. Blaschke et al., Particles 3, 380–393 (2020)
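As a self-contained illustration of how a Bayes factor discriminates between two scenarios (with toy Gaussian data, entirely unrelated to the freeze-out analysis above), the marginal likelihoods of a model with a free parameter and a model with that parameter fixed can be compared by brute-force grid integration:

```python
import math

# Toy model comparison: the Bayes factor is the ratio of marginal
# likelihoods (evidences) of two models. Data, models, and priors here are
# invented for illustration.

def log_likelihood(data, mu):
    # Independent Gaussian measurements of unit width
    return sum(-0.5 * (y - mu) ** 2 - 0.5 * math.log(2.0 * math.pi)
               for y in data)

def evidence_free_mu(data, lo=-5.0, hi=5.0, steps=2000):
    """Evidence under a uniform prior on mu over [lo, hi]: the prior
    density 1/(hi-lo) times the cell width (hi-lo)/steps reduces to the
    mean likelihood over the grid."""
    width = hi - lo
    total = sum(math.exp(log_likelihood(data, lo + (i + 0.5) * width / steps))
                for i in range(steps))
    return total / steps

data = [1.8, 2.3, 1.9, 2.2, 2.1]             # illustrative measurements near 2
z_free = evidence_free_mu(data)               # model A: mu free
z_null = math.exp(log_likelihood(data, 0.0))  # model B: mu fixed at 0
bayes_factor = z_free / z_null                # >> 1: data strongly favor A
```

The same logic, with the grid replaced by a sampler over the thermodynamic parameter space, is how competing freeze-out scenarios can be ranked against the measured spectra.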
Well-established measurements of high-multiplicity proton-proton (pp) and proton-lead (p-Pb) collisions at the LHC have revealed that small collision systems show the onset of phenomena (e.g. strangeness enhancement, collective flow) typical of heavy-ion collisions, suggesting that light-flavor hadron production arises from a set of complex mechanisms whose relative contributions evolve smoothly from low- to high-multiplicity collisions. This talk presents multi-differential results from ALICE on light-flavor particle production as a function of the transverse spherocity $(S_{\rm{O}}^{{p_{\rm T}}=1})$ in pp collisions at $\sqrt{s}$ = 13 TeV. Spherocity allows a topological selection of events that are either "isotropic" (dominated by multiple soft processes) or "jet-like" (dominated by one or a few hard scatterings). The experimental results are compared with predictions from various Monte Carlo generators.
Strangeness production in heavy-ion collisions is a longstanding and actively researched topic, offering crucial insights into the properties of strongly interacting matter. The NA61/SHINE experiment at CERN SPS North Area is one of the leading experiments in this field, focusing on measuring hadron production in a wide range of collision energies and system sizes.
This talk emphasizes the significance of measuring strangeness production with respect to the onset of deconfinement. The first results on Lambda baryon production in medium-size systems, such as Ar+Sc, will be presented, with a focus on the methodology employed in the analysis. They will be compared with available world data from proton-proton and nucleus-nucleus collisions and with selected theoretical models.
We use the Boltzmann Equation in Diffusion Approximation (BEDA) as a tool to explore the time evolution of an initially out-of-equilibrium, highly occupied, expanding system of gluons. We study the hydrodynamization of this system as well as quark production until chemical equilibration is established. A comprehensive study of these processes will be presented: parametric estimates in the weak-coupling limit, similar to those employed for bottom-up thermalization in pure-gluon systems, together with complementary numerical solutions of the BEDA, provide a better understanding of the underlying processes at the different stages of the evolution.
In this talk, we present our recent studies of thermal field theories using quantum algorithms. We first delve into representations of quantum fields via qubits on general digital quantum computers, alongside the quantum algorithms employed to evaluate thermal properties of generic quantum field theories. We then show numerical results for thermal field theories in 1+1 dimensions obtained with quantum simulators. Both fermion and scalar fields will be discussed. These studies aim at understanding thermal fixed points, in preparation for our forthcoming work on thermalization in quantum field theories via real-time quantum simulation.
We study whether in-medium showers of high-energy quarks and gluons can be treated as a sequence of individual splitting processes or whether there is significant quantum overlap between where one splitting ends and the next begins. Accounting for the Landau-Pomeranchuk-Migdal (LPM) effect, we calculate such overlap effects to leading order in high-energy $\alpha_s(\mu)$ for the simplest theoretical situation. We investigate a measure of overlap effects that is independent of physics that can be absorbed into an effective value $\hat{q}_{eff}$ of the jet-quenching parameter $\hat{q}$.
The DsTau (NA65) experiment at CERN was proposed to measure the inclusive differential cross-section of $D_s$ production in p-A interactions. The DsTau detector is based on the nuclear emulsion technique, providing excellent spatial resolution for detecting short-lived particles like charmed hadrons. The first results of the analysis of the pilot-run data are presented. The high precision in vertex reconstruction allows one to measure the proton interaction length and charged-particle multiplicities accurately in a high-track-density environment. The measured data have been compared with several Monte Carlo event generators in terms of multiplicity and angular distribution of charged particles. The proton interaction length in tungsten is measured. The predictions of KNO-G scaling are tested on the multiplicity distribution in p-A interactions. The results presented in this study can be used to validate event generators of p-A interactions.
The neutrino flux for accelerator-based neutrino experiments originates from the decay of mesons, which are produced via hadron-nucleus interactions in extended targets. Since the cross sections of hadronic processes are not well known, neutrino flux uncertainties are typically a leading uncertainty in present-day measurements of neutrino oscillation parameters with these experiments. However, the flux uncertainties can be significantly constrained with precise measurements of the hadronic production processes occurring in the production of neutrino beams. The NA61/SHINE experiment at the CERN SPS has a dedicated program to precisely measure these processes; the T2K experiment has already incorporated previous NA61/SHINE measurements for a substantial flux-uncertainty reduction. This talk will review the NA61/SHINE experiment, including the newest, ongoing, and future measurements of charged hadrons with thin and replica targets important for the Fermilab and J-PARC neutrino programs.
This talk highlights the contributions and recent milestones of the Accelerator Neutrino Neutron Interaction Experiment (ANNIE) to neutrino detection technology and our understanding of neutrino interaction physics. Located on the BNB at Fermilab and serving as an R&D platform, ANNIE stands out as the first near detector experiment to deploy gadolinium (Gd)-loaded water, a Large Area Picosecond Photodetector (LAPPD), multi-LAPPDs, and Water-based Liquid Scintillator (WbLS) on a neutrino beam. The physics mission focuses on studying nuclear final states, thus helping control a key systematic on long-baseline experiments. Gd loading makes ANNIE especially efficient in measuring the neutron yield from neutrino-nucleus interactions. WbLS further increases these capabilities by allowing more efficient reconstruction of nuclear recoil energies, including protons. ANNIE’s location on the same beam line as multiple LArTPC experiments will enable important water-Ar cross-section comparisons.
The ICARUS experiment combines a 760-ton LArTPC with the Fermilab BNB and NuMI neutrino sources to search for sterile neutrinos. While the main goal of ICARUS is to serve as the far detector of the FNAL SBN Program, there is a broader set of physics goals that includes searches for BSM physics and $\nu$-Ar cross-section measurements. ICARUS is situated 5.7 degrees off-axis of the NuMI beam, where a significant flux of muon and electron neutrinos in the hundred-MeV to few-GeV range is incident on the detector. This flux enables a cross-section measurement program that will test models relevant to the SBN sterile search and DUNE oscillation physics measurements. ICARUS has collected $3\times10^{20}$ POT of NuMI beam exposure and is finalizing its first round of cross-section measurements and BSM search results. This talk will review the characterization of the NuMI beam (and related uncertainties) at the ICARUS detector position, along with the latest news on the forthcoming analyses.
With the CONUS reactor antineutrino experiment, coherent elastic neutrino nucleus scattering (CEνNS) on germanium nuclei was studied at a nuclear power plant in Brokdorf, Germany. Very low energy thresholds of about 210 eV were achieved in four 1 kg point-contact germanium detectors equipped with electric cryocooling. Strong constraints on the CEνNS rate, less than a factor of two above the signal predicted by the Standard Model, were achieved. Last year, the CONUS setup was moved to a new site, a power plant in Leibstadt, Switzerland. There the CONUS+ experiment continues data taking with improved detectors at even lower energy thresholds and an optimised shield design. The setup was positioned at a distance of about 20 m from the center of the reactor core. The detector performance after the first few months of data taking will be described, and future perspectives will be discussed based on the recently collected new data.
The CONNIE experiment uses high-resistivity silicon CCDs with the aim of detecting the coherent elastic scattering (CEνNS) of reactor antineutrinos off silicon nuclei at the Angra-2 reactor. It was recently upgraded with two Skipper-CCDs, lowering the detection threshold to a record 15 eV and making it the first experiment to employ Skipper-CCDs for reactor neutrino detection. We report new results from 300 days of 2021-2022 data with an exposure of 18.4 g-days. The difference between the reactor-on and reactor-off rates shows no excess and yields upper limits at 95% CL for CEνNS. We also present the results of three BSM searches to illustrate the potential of Skipper-CCDs: a limit on new neutrino interactions in simplified models with light vector mediators, a dark matter search via diurnal modulation yielding limits on DM-electron scattering, and a search for relativistic millicharged particles produced by reactors. We present the prospects for increasing the detector mass.
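The reactor-on/off comparison described above can be illustrated with a minimal counting-experiment sketch. All numbers and the Gaussian one-sided construction below are illustrative assumptions, not the collaboration's actual statistical treatment:

```python
import math

def upper_limit_95(n_on, n_off, t_on, t_off):
    """Gaussian-approximation one-sided 95% CL upper limit on a signal
    rate from reactor-on / reactor-off counting data (toy example)."""
    r_on, r_off = n_on / t_on, n_off / t_off
    excess = r_on - r_off                       # on-off rate difference
    sigma = math.sqrt(n_on / t_on**2 + n_off / t_off**2)  # Poisson errors
    return max(excess, 0.0) + 1.645 * sigma     # 1.645 = one-sided 95%

# Hypothetical counts and live times (days); no significant excess case.
limit = upper_limit_95(n_on=1050, n_off=1000, t_on=100.0, t_off=100.0)
```

A real analysis would account for backgrounds, detector response, and use a proper frequentist or Bayesian construction; this sketch only shows why a null on-off difference translates into an upper limit.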
Nuclear power reactors offer an intense source of antineutrinos (ν̄e) for investigating Coherent Neutrino Nucleus Elastic Scattering (CνAel, a Standard Model process) at low energy in the complete coherency regime [1, 2]. Furthermore, they offer avenues for probing beyond Standard Model (BSM) aspects of CνAel, including various low-mass light mediators and non-standard interactions. The TEXONO experiment employs state-of-the-art point-contact high-purity germanium detectors at O(100 eV) threshold [3] to study neutrino physics at the Kuo-Sheng nuclear power plant. In this presentation, we will give an overview of our research activities and present the latest results in probing SM and BSM physics with CνAel.
References
[1] S. Kerman et al. (TEXONO Collaboration), Phys. Rev. D 93, 113006 (2016).
[2] V. Sharma et al. (TEXONO Collaboration), Phys. Rev. D 103, 092002 (2021).
[3] A.K. Soma et al. (TEXONO Collaboration), Nucl. Instrum. Methods Phys. Res. A 836, 67 (2016).
The ATLAS experiment in the LHC Run 3 uses a two-level trigger system to select events of interest, reducing the 40 MHz bunch crossing rate to a recorded rate of up to 3 kHz of fully-built physics events. The trigger system is composed of a hardware-based Level-1 trigger and a software-based High Level Trigger.
The selection of events by the High Level Trigger is based on a wide variety of reconstructed objects, including leptons, photons, jets, b-jets, missing transverse energy, and B-hadrons in order to cover the full range of the ATLAS physics programme.
We will present an overview of improvements in the reconstruction, calibration, and performance of the different trigger objects, as well as computational performance of the High Level Trigger system.
The ALICE Fast Interaction Trigger (FIT) has been operating since the beginning of LHC Run 3, demonstrating excellent performance. FIT serves as an interaction trigger, online luminometer, initial indicator of the vertex position, and forward multiplicity counter. In offline mode, it provides the collision time, collision centrality and interaction plane. It also selects diffractive and ultra-peripheral heavy-ion collisions.
FIT comprises three detectors, FT0, FV0 and FDD, based on different technologies. The sensors are assembled into five groups placed at forward ($2.2 < \eta < 6.3$) and backward ($-7.0 < \eta < -2.1$) pseudorapidities. Detector readout is realized with fast front-end electronics (FEE), which allow recording events every 25 ns. The FIT online trigger generation takes only 200 ns.
This talk will present the FIT performance in Run 3, including trigger and event selection studies. Moreover, the FEE upgrade program for high luminosity measurements in Run 4 will be discussed.
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) features a sophisticated two-level triggering system composed of the Level 1 (L1) trigger, instrumented by custom-design hardware boards, and the High-Level Trigger (HLT), a software-based trigger. The CMS L1 Trigger relies on separate calorimeter and muon trigger systems that provide jet, e/γ, τ, and muon candidates along with energy sums to the Global Trigger, where the final selection is made. The L1 trigger hardware was upgraded to handle proton-proton collisions at a center-of-mass energy of 13 TeV with a peak instantaneous luminosity of $2.2 \cdot 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, more than double the design value. For Run 3 of the LHC, an optimized and expanded L1 and HLT trigger physics menu has been developed to meet the requirements of the ambitious CMS physics program. A wide range of measurements and searches will profit from the new features and strategies implemented in the trigger system.
Since 2022, the LHCb experiment has been using a triggerless readout system, collecting data at an event rate of 30 MHz and a data rate of 4 TB/s. The trigger system, implemented as a high-level trigger (HLT), is split into two stages. In the first stage (HLT1), implemented on GPGPUs, track reconstruction and vertex fitting for charged particles are performed to reduce the event rate to 1 MHz, at which point the events are buffered to disk. In the second stage (HLT2), deployed on a CPU server farm, a full offline-quality reconstruction of charged and neutral particles and their selection is performed, aided by detector alignment and calibration run in quasi-real time on the buffered events. This allows the output of the trigger to be used directly for offline analysis. In this talk we will give a detailed review of the implementation and challenges of the heterogeneous LHCb trigger system, discuss the operational experience of the last two years, and show first results from the 2024 data-taking period.
The ATLAS Level-1 Trigger, crucial for the selection of LHC events at CERN, has been upgraded for Run-3 with advanced processors and FPGAs, extensively using optical links to enhance performance. The software has adopted modern continuous integration tools and advanced monitoring. The Calorimeter Trigger system, utilizing the detector's full granularity, enhances object identification alongside a redesigned Topological Processor for detailed spatial analysis. Notably, the Muon Trigger system, with input from the newly installed muon sub-detector (NSW), improves Level-1 Trigger background rejection. Trigger Decision and Clock distribution were enhanced with new single-board modules. The Central Trigger system includes a System-on-chip for the new Muon Interface, running control and monitoring applications directly on hardware. This overview highlights the upgrades' impact on system performance, particularly under Run-3 high-intensity conditions, paving the way for future physics.
We present a novel long-lived particle (LLP) trigger that exploits the Run 3 upgrade of the Compact Muon Solenoid (CMS) Hadron Calorimeter (HCAL), which introduced a precision timing ASIC, programmable front-end electronics, and depth segmentation to the CMS HCAL barrel. The hardware- and firmware-based trigger algorithm identifies delayed jets resulting from the decay of massive LLPs, and displaced jets resulting from LLPs that decay inside the HCAL. This approach significantly increases sensitivity to LLP signatures with soft hadronic final states, including exotic decays of the Higgs boson. Recent HCAL timing scans produce artificially delayed jets in collision data and are crucial to understanding the detector and trigger performance. Data collected in Run 3 with the new LLP triggers provides a first look at the capabilities to capture softer events and expand the phase space accessible in LLP searches, which are a compelling direction to probe physics beyond the Standard Model.
Semileptonic $b$-hadron decays proceed via charged-current interactions and provide powerful probes for testing the Standard Model and for searching for New Physics effects. The advantages of studying such decays include the large branching fractions and reliable calculations of the hadronic matrix elements. In this contribution, LHCb measurements of CKM parameters and tests of New Physics will be presented.
We review recent higher-order calculations of properties of $B$ mesons. This includes next-to-next-to-leading order corrections to hadronic and next-to-next-to-next-to-leading order corrections to semi-leptonic $B$ meson decays. The latter is important in connection with the determination of the CKM matrix element $V_{cb}$. We also discuss next-to-next-to-leading order corrections to the width difference in the $B$-$\bar{B}$ system.
The Heavy Quark Expansion (HQE) has become the major tool to perform precision calculations for inclusive heavy hadron decays. With this method, $V_{cb}$ has been extracted with percent-level precision from moments of $B\to X_c\ell\bar{\nu}$. The HQE is an expansion in $1/m_b$ and introduces nonperturbative HQE matrix elements which can be extracted from data.
To further increase the theoretical precision, we recently pushed the expansion to $1/m_b^5$. We focused on reparametrization invariant (RPI) observables, which depend on a reduced set of HQE parameters. Specifically, at $1/m_b^5$, "intrinsic charm" (IC) contributions proportional to $1/(m_b^3m_c^3)$ enter, which are numerically expected to be sizeable.
I will show how the $1/m_b^5$ terms contribute to the $q^2$ moments of $B\to X_c\ell\bar{\nu}$ decays. We find that the total $1/m_b^5$ contributions may not be as sizeable as expected. I will discuss how this may impact a future inclusive $V_{cb}$ determination.
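For orientation, the structure of the expansion discussed above can be written schematically (the coefficients $a_i$ are perturbative series in $\alpha_s$; operator definitions and numerical factors are omitted):

$$
\Gamma(B\to X_c\ell\bar\nu)\;\propto\; m_b^5\,|V_{cb}|^2\left(a_0 \;+\; a_\pi\,\frac{\mu_\pi^2}{m_b^2} \;+\; a_G\,\frac{\mu_G^2}{m_b^2} \;+\; a_D\,\frac{\rho_D^3}{m_b^3} \;+\; \cdots\right),
$$

with no $1/m_b$ term. Each new order introduces additional nonperturbative HQE matrix elements ($\mu_\pi^2$, $\mu_G^2$, $\rho_D^3$, ...) that must be fitted from moments, which is why reparametrization-invariant observables with a reduced parameter set become valuable at $1/m_b^5$.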
Employing the full $BABAR$ dataset with hadronic tagging, we present a model-independent form-factor analysis for $B\to D^{(*)}\ell\nu$ and a moments analysis for $B\to D^{*}\ell\nu$. We also perform a combined lattice+$BABAR$ joint $B\to D^{(*)}\ell\nu$ form-factor analysis employing an HQET parameterization. We report updated $|V_{cb}|$ and SM theory predictions for $R(D^{(*)})$ from this analysis.
We present a new global fit for inclusive $|V_{cb}|$ based on the Kolya open-source library, utilizing the full available set of spectral moments of semileptonic $B \to X_c \ell \bar \nu_\ell$ decays with state-of-the-art precision. Our approach includes a novel prescription to estimate the uncertainty arising from missing higher-order contributions of order $1/m^4$ in the heavy quark expansion (HQE). We review various approaches on how to incorporate theoretical uncertainties and correlations, studying their impact on the value of inclusive $|V_{cb}|$ and HQE parameters. Additionally, we conduct a detailed investigation into the compatibility of $|V_{cb}|$ using different sets of experimental inputs.
We systematically compute the $\Lambda_b(p,s_b)\to\Lambda_c(2595)^+$ and $\Lambda_b(p,s_b)\to\Lambda_c(2625)^+$ form factors within the heavy quark effective theory (HQET) framework, including $\mathcal{O}(1/m_c^2)$ contributions. Besides taking into account the Standard Model-like vector and axial contributions, we further determine tensor and pseudotensor form factors. Our work constitutes a step forward with respect to previous analyses, allowing for a comprehensive study of the matrix-element parametrization stemming from the HQET formalism. Finally, we demonstrate that the resulting form factors are in good agreement with lattice quantum chromodynamics (LQCD) determinations. We show that the newly derived $1/m_c^2$ corrections are necessary to reconcile LQCD results with HQET computations.
The Mu2e experiment at Fermilab will search for the coherent, neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus, an example of Charged Lepton Flavor Violation (CLFV). Observation of CLFV at Mu2e would be an unambiguous signal of physics beyond the Standard Model (BSM). Mu2e aims to improve the previous sensitivity on the conversion rate by four orders of magnitude, reaching a single-event sensitivity of $3\times10^{-17}$, exploring a wide range of BSM models and probing mass scales up to $10^4$ TeV/$c^2$. To achieve its goal, Mu2e utilizes a system of solenoids to create an intense pulsed muon beam. The background will be kept at the sub-event level through a high-performing detector. The experiment is approaching a very important phase. Construction is almost complete. Commissioning will begin shortly, and physics data-taking is scheduled to begin in 2027. This talk will explore the theoretical motivations, design, and current status of the Mu2e experiment.
Event classifiers based on the charged-particle multiplicity have been extensively used in pp collisions at the LHC. However, one drawback of the multiplicity-based event classifiers is that requiring a high charged-particle multiplicity biases the sample towards hard processes. These biases blur the effects of multi-parton interactions (MPI) and make it difficult to pinpoint the origins of fluid-like effects in small systems.
This contribution exploits a new event classifier, the charged-particle flattenicity, defined in ALICE using the charged-particle multiplicity estimated in the $2.8 < \eta < 5.1$ and $-3.7 < \eta < -1.7$ intervals. New final results on the production of identified and unidentified charged particles as a function of flattenicity in pp collisions at $\sqrt{s}$ = 13 TeV will be discussed. It will be shown how flattenicity can be used to select events more sensitive to MPI. All the results are compared with predictions from QCD-inspired Monte Carlo event generators.
Recent CMS results on the production of open heavy-flavor hadrons and quarkonia in pp collisions are discussed. The measurements are performed with data collected in pp collisions at $\sqrt{s} = 13$ TeV between 2016 and 2018.
Recent ATLAS results on heavy-flavour hadron production are presented, including production of open charm and beauty, charmonia, and associated production of $J/\psi$ with $t\bar t$.
LHCb functions as a spectrometer targeting the forward region of proton-proton collisions, covering a pseudorapidity range between 2 and 5. Thanks to the scarcity of background events in the high-mass region, its precise reconstruction capabilities, and an optimized trigger system, LHCb offers an optimal environment for probing (exotic) Higgs decays. In this talk, we discuss the latest investigations of Beyond the Standard Model (BSM) Higgs decays at LHCb, and potential avenues for future data collection. The searches for $H\to b\bar{b}$ and $H\to c\bar{c}$ decays will be presented, with a focus on the latest results obtained using the full Run 2 dataset. Finally, prospects for Standard Model Higgs searches are presented, with an eye toward future LHCb experiment upgrades. This talk will also present published results for measurements of unidentified hadrons within light-quark-initiated jets, as well as the status of other ongoing hadronization measurements at LHCb.
Jet substructure observables are sensitive to the effects arising from the mass of quarks produced by QCD hard-scattering interactions. In particular, QCD predicts the suppression of collinear emission around a massive quark, the so-called dead-cone effect, recently observed by the ALICE collaboration at the LHC.
In this talk we discuss how the quark mass affects the theoretical calculations of an event shape observable such as energy-energy correlation functions. In particular, we consider and resum the large logarithms involving the quark mass up to next-to-leading logarithmic accuracy, and investigate the differences between parton shower approaches to QCD radiation by massive quarks as implemented in Pythia and Herwig Monte Carlo event generators.
In order to obtain total charm cross sections in hadron-hadron collisions, measured fiducial cross sections need to be extrapolated including the treatment of charm fragmentation non-universality effects which have recently been reported by the LHC experiments. For this, a novel phenomenological approach [1] was introduced with a theory-inspired extrapolation function which is constrained by various published measurements without the need to assume any particular non-universal fragmentation model. The total charm cross section measurements obtained can then be used for a direct comparison with NNLO theory, which is the highest order available for charm to date. At the total cross section level, the theory is free from fragmentation inputs, such that its $\sqrt{s}$ dependence can be directly used to constrain other QCD parameters. A first evaluation of constraints on parton density functions and the charm quark mass is also presented.
[1] PoS EPS-HEP2023 (2024) 367, arXiv:2311.07523
Measurements of beauty-hadron production in pp collisions provide a fundamental tool for testing perturbative QCD calculations. Studies in p--Pb collisions allow us to shed light on the role of cold nuclear matter effects on beauty production and their impact on beauty-quark hadronisation.
In this presentation, the final results on the production of charm mesons and baryons from beauty-hadron decays (non-prompt) in pp collisions are shown. They are compared to pQCD predictions and to models with modified hadronisation mechanisms with respect to in-vacuum fragmentation. The $b\overline{b}$ production cross section at midrapidity extrapolated from these measurements is presented. The final results of non-prompt charm-hadron production and nuclear modification factor in p--Pb collisions are also discussed. Lastly, the first studies of non-prompt/prompt production yield ratios of charm hadrons in pp collisions at $\sqrt{s} = 13.6$ TeV from the LHC Run 3 data taking are reported.
The "Workshop on Sustainable High Energy Physics" was initiated as an international grassroots initiative by early and mid-career researchers in 2021. It was organized as a virtual workshop and featured a three-day program with keynote lectures, panel discussions, and contributed talks. The 2nd edition took place in 2022, and the 3rd edition in 2024. The workshop series focuses on all aspects of sustainability in the broad context of high-energy physics and discusses the environmental impact and mitigation strategies of research facilities, energy needs for computing and infrastructures, challenges for experimental collaborations, researchers, and institutions, and it provides a platform for various smaller initiatives. Through virtual rooms and generous time slots for discussion, the workshop provides an international platform for networking and coordination on sustainability subjects. In this talk, the history of the workshop series will be reviewed and outcomes will be presented.
The environmental impact of large-scale scientific infrastructure such as accelerators, observatories and big data centres cannot be denied. This presentation is based on the recently published reflection document covering the HECAP+ communities (High Energy Physics, Cosmology, Astroparticle Physics, and Hadron and Nuclear Physics). It reflects on the environmental impacts of work practices and research infrastructure, highlights best practice, and identifies the opportunities and challenges that such changes present for wider aspects of social responsibility.
As computing becomes substantial for achieving scientific and social progress, its environmental implications often remain underestimated. While the value of scientific computing is witnessed by its ubiquitous achievements, its growing demands have led, in turn, to increased energy costs and carbon footprint.
With the goal of describing this computational trace in subnuclear physics (SNP), this work estimates the energy consumption of benchmark SNP workloads with purpose-built containerized monitoring software. The benchmark workloads used in this work are the GEN-SIM, DIGI and RECO containerized jobs deployed by the HEPScore project. The monitoring software extracts the CPU and RAM usage of these jobs in real time via process IDs and uses this information to estimate their energy consumption (kWh) and carbon footprint (gCO2e).
The results can be used as a starting point towards a "greener" approach to computing methods and towards integrating current benchmarking scores with energy-efficiency metrics.
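The core of such an estimate can be sketched in a few lines: sample a job's busy-core time, convert it to energy via an assumed per-core power draw, then to carbon via a grid carbon-intensity factor. All names and constants below are illustrative assumptions, not the values used by the work described above:

```python
CPU_WATTS_PER_CORE = 10.0   # assumed average power draw per busy core (W)
CARBON_INTENSITY = 300.0    # assumed grid carbon intensity (gCO2e per kWh)

def energy_kwh(cpu_core_seconds: float) -> float:
    """Energy for the given busy-core time, converted W*s -> kWh."""
    return cpu_core_seconds * CPU_WATTS_PER_CORE / 3.6e6

def carbon_gco2e(kwh: float) -> float:
    """Carbon footprint for the given energy consumption."""
    return kwh * CARBON_INTENSITY

# Example: a job that keeps 8 cores busy for 2 hours.
e = energy_kwh(8 * 2 * 3600)
c = carbon_gco2e(e)
print(round(e, 3), round(c, 1))  # 0.16 48.0
```

A real monitor would sample per-process CPU/RAM usage over time rather than assume a flat per-core draw, but the unit conversion is the same.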
With the LHCb experiment at CERN upgraded to handle 14 TeV proton-proton collisions, the demand for data processing surged, requiring a redesigned acquisition chain. The LHCb Datacenter, drawing 3 MW and comprising 4000 nodes and 200 Data Acquisition machines, is pivotal to this effort. This paper explores sustainability and performance optimization in the Datacenter, emphasizing eco-friendly cooling solutions such as free cooling that minimize energy use and costs. Additionally, we examine how liquid cooling can further reduce the environmental impact by reusing waste heat to heat nearby areas, enhancing overall sustainability and energy efficiency.
The ATLAS Collaboration operates a large, distributed computing infrastructure: almost 1M cores of computing and almost 1 EB of data are distributed over about 100 computing sites worldwide. These resources contribute significantly to the total carbon footprint of the experiment, and they are expected to grow by a large factor as a part of the experimental upgrades for the HL-LHC at the end of the decade. This contribution describes the efforts underway to understand and monitor the true carbon footprint of computing (beyond only its power consumption), identify opportunities for savings, and establish recommendations for the sites to reduce their carbon footprint.
Monte Carlo simulations of scattering processes with many particles require enormous computing power. Particularly in view of the HL-LHC, an improvement in efficiency is necessary in order to be able to carry out the desired investigations in an economically sensible way. We show that employing a sophisticated neural network emulation of QCD multijet matrix elements based on dipole factorisation can lead to a drastic acceleration of unweighted event generation in high-multiplicity LHC production processes. We incorporate these emulations as fast and accurate surrogates in a two-stage rejection sampling algorithm within the SHERPA Monte Carlo event generator. The approach reduces the computational cost of unweighted events by factors between 16 and 350 for the considered channels.
SciPost Phys. 15, 107 (2023), arXiv:2301.13562
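The surrogate-based unweighting described above can be sketched as a two-stage rejection sampler: candidates are first unweighted cheaply against the surrogate, and only survivors pay for an exact matrix-element evaluation, corrected by the exact-to-surrogate ratio. The weight functions and bounds below are illustrative stand-ins, not the actual SHERPA/NN implementation:

```python
import random

random.seed(1)

def surrogate_weight(x):
    # cheap emulation of the matrix element (stand-in for the NN surrogate)
    return 1.0 + 0.5 * x

def exact_weight(x):
    # expensive exact matrix element (illustrative)
    return 1.0 + 0.5 * x + 0.05 * x * x

W_S_MAX = 1.5     # assumed maximum of the surrogate weight on [0, 1]
RATIO_MAX = 1.05  # assumed maximum of the exact/surrogate ratio

def sample_event():
    """Two-stage rejection sampling: unweight on the surrogate first,
    then correct accepted candidates with the exact/surrogate ratio."""
    while True:
        x = random.random()
        # stage 1: cheap surrogate unweighting (no exact evaluation yet)
        if random.random() * W_S_MAX > surrogate_weight(x):
            continue
        # stage 2: exact weight evaluated only for surviving candidates
        if random.random() * RATIO_MAX <= exact_weight(x) / surrogate_weight(x):
            return x

events = [sample_event() for _ in range(1000)]
print(len(events))  # 1000 unit-weight events
```

The speed-up comes from the fact that the expensive `exact_weight` call is made only for the small fraction of candidates that pass the first stage.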
High-precision calculations are crucial for the success of the LHC physics programme. However, the soaring computational complexity for high-multiplicity final states is threatening to become a debilitating bottleneck in the coming years. At the same time, the rapid proliferation of non-traditional GPU-based computing hardware in data centres around the world demands an overhaul of the event generator design.
We propose a flexible and efficient approach for simulating collider events with multi-jet final states, based on the first portable leading-order parton-level event generation framework, along with an improved parton-level event file format with efficient, scalable data handling. Our approach lends itself neatly to most modern GPU-accelerated hardware, allowing better exploitation of computing resources in large-scale production campaigns, and paving the way for economically and ecologically sustainable event generation in the high-luminosity era.
The MUonE experiment at CERN aims to determine the leading-order hadronic contribution to the muon $g-2$, $a_\mu^{\rm HLO}$, by an innovative approach, using elastic scattering of 160 GeV muons on atomic electrons in a low-Z target. $a_\mu^{\rm HLO}$ is extracted from the precision measurement of the shape of the differential cross section of the muon-electron elastic process. The target precision is a few per mille, competitive with the use of data from experiments at $\rm e^+e^-$ colliders or lattice QCD, whose tensions currently limit the comparison between the theoretical and experimental values of the muon $g-2$. The M2 beamline at CERN provides the intensity needed to reach the statistical goal in a few years of data taking. The experimental challenge lies in the precise control of systematic effects. We will present the progress achieved by the experiment in the last years, its current status, and its future plans.
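The space-like extraction described above rests on the well-known master formula relating $a_\mu^{\rm HLO}$ to the hadronic running of the electromagnetic coupling at negative momentum transfer:

$$
a_\mu^{\rm HLO} \;=\; \frac{\alpha}{\pi}\int_0^1 dx\,(1-x)\,\Delta\alpha_{\rm had}\!\left[t(x)\right],
\qquad t(x) \;=\; -\frac{x^2 m_\mu^2}{1-x} \;<\; 0,
$$

so that measuring $\Delta\alpha_{\rm had}(t)$ from the shape of the elastic $\mu e$ differential cross section determines $a_\mu^{\rm HLO}$ via a one-dimensional integral over space-like momenta.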
The FCC-ee offers unparalleled opportunities for direct and indirect evidence of physics beyond the Standard Model (SM), via a combination of high-precision measurements and searches for forbidden and rare processes. The precision measurement program benefits from an extraordinary conjunction of (i) very clean experimental conditions and excellent c.m. energy determination from the Z up to top-quark pair production, and (ii) unprecedented statistics, with $6\cdot 10^{12}$ Z bosons, $10^8$ WW events, and $1.5\cdot10^6$ Higgs and $t\bar{t}$ events. This will allow a huge leap in precision for the Electroweak Precision Observables in both neutral and charged currents, as well as for direct measurements of key SM parameters such as $\alpha(m_Z)$, $\alpha_s(m_Z)$, $\sin\theta_W$, $m_{top}$, etc. Examples will be shown of the steady work that is ongoing to understand how to improve the detector, analysis, and theory calculations in order to reduce systematic errors towards the statistical ones.
We discuss the experimental prospects for measuring differential observables in b-quark and c-quark production at the International Linear Collider (ILC) baseline energies, 250 and 500 GeV.
The study is based on full simulation and reconstruction of the International Large Detector (ILD) concept.
Two gauge-Higgs unification models predicting new high-mass resonances beyond the Standard Model are discussed.
These models predict sizable deviations of the forward-backward observables at the ILC running above the $Z$ mass and with longitudinally polarized electron and positron beams.
The ability of the ILC to probe these models via high-precision measurements of the forward-backward asymmetry is discussed.
Alternative scenarios at other energies and beam polarization schemes are also discussed, extrapolating the estimated uncertainties from the two baseline scenarios.
Electroweak Precision Measurements are stringent tests of the Standard Model and sensitive probes to New Physics. Accurate studies of the Z-boson couplings to the first-generation quarks could reveal potential discrepancies between the fundamental theory and experimental data. Future e+e- colliders offering high statistics of Z bosons would be an excellent tool to perform such a measurement based on comparison of radiative and non-radiative hadronic decays. Due to the difference in quark charge, the relative contribution of the events with final-state radiation (FSR) directly reflects the ratio of decays involving up- and down-type quarks. Such an analysis requires proper modeling and statistical discrimination between photons coming from different sources, including initial-state radiation (ISR), FSR, parton showers and hadronisation. In our contribution, we show how to extract the values of the Z couplings to light quarks and present the estimated uncertainties of the measurement.
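The underlying idea can be stated compactly: since the FSR photon rate off a quark scales with its squared electric charge, the radiative fraction of hadronic $Z$ decays is, schematically and to leading order,

$$
\frac{\Gamma(Z\to q\bar q\,\gamma)}{\Gamma(Z\to \mathrm{hadrons})}
\;\propto\;
\frac{\sum_q Q_q^2\,\Gamma(Z\to q\bar q)}{\sum_q \Gamma(Z\to q\bar q)},
$$

so up-type quarks ($Q_q^2 = 4/9$) radiate four times more than down-type quarks ($Q_q^2 = 1/9$), making the measured FSR fraction directly sensitive to the ratio of up- and down-type couplings.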
In the last 15 years the Radio MontecarLow ("Radiative Corrections and Monte Carlo Generators for Low Energies") Working Group (WG), see www.lnf.infn.it/wg/sighad/, has been providing valuable support to the development of radiative corrections and Monte Carlo (MC) tools for low-energy e+e- data. By bringing together experts working in the field of e+e- physics in more than 20 meetings, the WG produced the highly cited report "Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data", S. Actis et al., Eur. Phys. J. C 66, 585-686 (2010) (https://arxiv.org/abs/0912.0749). All this effort has recently been incorporated into the STRONG2020 project for the realization of an MC event generator with full NNLO corrections for low-energy e+e- annihilation into hadrons and leptons, which is of relevance for precise tests of the Standard Model such as the determination of the leading hadronic contribution to the muon g-2. We will report on this initiative.
Monte Carlo (MC) generators are at the core of LHC data analyses and will also play a paramount role at future lepton colliders offering high energies and luminosities. With an $e^+e^-$ Higgs factory on the horizon and studies of the physics potential of a muon collider ongoing, MC generators have to be improved continuously to match the projected experimental precision.
We give a status report on new developments within the WHIZARD event generator, an efficient tool for simulating exclusive and inclusive multi-particle processes, which is one of the major codes used within the lepton-collider community. Important new features comprise NLO electroweak automation (incl. extension to BSM processes like SMEFT), loop-induced processes and new developments in the UFO interface. We highlight work in progress and further plans, such as the implementation of electroweak PDFs, photon radiation, the exclusive top threshold and features for exotic new physics searches.
We present an overview of the ${\tt KKMCee\; 5.00.2}$ Monte Carlo event generator for lepton and quark pair production for the high energy electron-positron annihilation process. We note that it is still the most sophisticated event generator for such processes. Its entire source code is re-written in the modern C++ language. We checked that it reproduces all features of the older ${\bf KKMC}$ code in Fortran 77. We discuss a number of improvements both in the MC algorithm and in its various interfaces, such as those to parton showers and detector simulation.
Circular muon colliders offer the prospect of colliding particles at unprecedented center-of-mass energies. However, the stored muons decay along their trajectory, inducing several technological challenges for the collider and detector design. In particular, secondary $e^{\pm}$ from muon decays are a source of background and induce radiation damage in the machine and detector components, requiring a sophisticated interaction-region design. This contribution presents design studies for the machine-detector interface (MDI) and quantifies the resulting beam-induced background for a 10 TeV muon collider with the latest optics design elaborated within the International Muon Collider Collaboration. Starting from the shielding design developed by the MAP collaboration for 3 TeV, we devise a customized MDI for the 10 TeV collider. In particular, we highlight the shielding requirements for the final-focus magnets and a tentative nozzle optimization for minimizing the beam-induced background from muon decay.
The International Muon Collider Collaboration is working toward a staged implementation of a 10 TeV muon collider. The talk will summarise the key challenges and the progress the collaboration is making in addressing them.
A 10 TeV muon collider has the potential to directly search for new physics and uniquely probe electroweak SM properties. An important component of such a collider is cooling, in which a cloud of muons is converted into a beam. In the last stage of this process called final cooling, emittance decreases in the transverse axes while increasing in the longitudinal axis. This step is critical to deliver the needed luminosity for physics goals. Previously, final cooling designs assumed absorbers within high field solenoids. Simulations with realistic magnets did not reach the desired cooling goals. We show a different design for final cooling based on single thick wedges, which has the potential to achieve cooling goals with existing magnet technology. We investigate both machine learning techniques and classical methods to optimize the parameters and achieve lower emittances than published results. We show the feasibility of wedge-based cooling and motivate future studies.
The International Muon Collider Collaboration (IMCC) is investigating the key challenges of a 10 TeV center-of-mass muon collider ring, along with its injector complex and an intermediate 3 TeV collider stage. Muon and anti-muon bunches are produced via a proton driver complex and then undergo 6D cooling. The bunches are then accelerated by a series of linacs, recirculating linacs and Rapid Cycling Synchrotrons (RCS) before entering the collider ring.
Collective effects are a concern due to the high charge of the muon bunches. The RCSs require a significant number of RF cavities to accelerate the beams rapidly while keeping a 90% survival rate in each ring. The effect of the cavity higher-order modes was evaluated using start-to-end simulations that included collective effects. The collider would be an isochronous ring to preserve a short bunch length. The study also examined the impact of this mode of operation on transverse coherent stability, and potential methods for mitigating instabilities.
A TeV muon-ion collider could be established if a high energy muon beam that is appropriately cooled and accelerated to the TeV scale is brought into collision with a high energy hadron beam at facilities such as Brookhaven National Lab, Fermilab, or CERN. Such a collider opens up a new regime for deep inelastic scattering studies as well as facilitates precision QCD and electroweak measurements and searches for beyond Standard Model physics, in an alternative and complementary way to the proposed LHC-electron collider. We discuss the potential physics program of a muon-ion collider and summarize some accelerator options. We also explore some of the associated experimental challenges to be addressed and the requisite detector performance, including initial studies of a forward muon spectrometer design applicable for a muon-ion or muon-muon collider experiment.
KM3NeT is a deep-sea neutrino observatory currently under construction in the Mediterranean Sea. Its main goals are the search for sources of high energy cosmic neutrinos and the study of neutrino oscillation phenomena with atmospheric neutrinos.
KM3NeT comprises 3D arrays of multi-PMT optical modules optimised to detect the Cherenkov light emitted by charged particles resulting from neutrino interactions in the vicinity of the detectors. With its two sites, ARCA (a 'sparse' km³-scale detector offshore from Sicily) and ORCA (a 'dense' 7 Mton detector offshore from the south of France), KM3NeT is sensitive to neutrino energies ranging from MeV to PeV.
In this talk the status of the KM3NeT detector is presented and the latest results obtained with partial detector configurations are reported. Results will include searches for diffuse, steady and transient cosmic sources, measurements of the atmospheric neutrino oscillation parameters, tau appearance, the neutrino mass ordering, and BSM effects.
Multi-messenger astronomy studies transient phenomena by combining the information provided by different cosmic messengers, such as neutrinos, photons, charged particles or gravitational waves. A coincident detection enhances the chances for the identification of new astrophysical sources, which motivates the distribution of external alerts and their follow-ups by multiple observatories worldwide.
The KM3NeT neutrino telescope is a Cherenkov deep-sea infrastructure currently taking data with partial configurations in the Mediterranean Sea. Two different arrays are being constructed: ORCA, off the shore of Toulon (France), and ARCA, off the shore of Sicily (Italy). In this contribution, the latest results of the neutrino searches conducted with the real-time multi-messenger analysis platform of the KM3NeT detectors will be summarised, including statistical significances and flux upper limits. These searches cover a wide neutrino energy range, from MeV up to a few PeV.
Neutrino telescopes play a fundamental role in revealing the hadronic component of cosmic-ray accelerators in the Universe. The ANTARES underwater neutrino telescope operated for more than 15 years in the Mediterranean Sea off the coast of Toulon, France. The KM3NeT/ARCA detector, designed for the observation of high-energy cosmic neutrinos, is under construction at the KM3NeT site offshore from Portopalo di Capo Passero, Sicily, Italy.
In this contribution, the first combined analysis of the full ANTARES dataset and the datasets from the partially completed KM3NeT/ARCA neutrino detector is presented. Point-like and extended sources are tested for neutrino emission. The list of sources includes bright γ-ray emitters, galactic γ-ray sources with hints of hadronic components, extragalactic AGN with high flux observed in radio bands. The diffuse flux search with both detectors is also presented, enhanced by an excellent Galactic centre visibility.
A decade after IceCube's discovery of astrophysical neutrinos, high-energy neutrino astronomy thrives. The blazar TXS 0506+056 and the Seyfert galaxy NGC 1068 emerged as the first source candidates amid an otherwise isotropic extragalactic neutrino flux. Their differing energy spectra hint at multiple source populations, revealing the complexity of the extragalactic neutrino sky. In addition, IceCube recently detected the long-sought neutrinos from the Galactic Plane. Despite these advances, many sources remain unknown. IceCube continues to accumulate data, and advances in neutrino reconstruction and analysis methodology improve the search for neutrino sources. I will present recent IceCube results on the origin of cosmic neutrinos. While we prepare further instrumentation with the IceCube Upgrade and IceCube-Gen2, new neutrino telescopes (e.g. P-ONE and KM3NeT) are planned or already under construction to increase the sensitivity of future neutrino astronomy.
We will present an analysis of IceCube public data from its IC86 configuration, namely the PSTracks event selection, to search for pseudo-Dirac signatures in high-energy neutrinos from the astrophysical sources NGC 1068, TXS 0506+056, PKS 1424+240 and GB6 1542+6129, which have been detected with high significance. In the pseudo-Dirac scenario, the neutrino flux from astrophysical sources is reduced by active-to-sterile conversion over astrophysical distances, compared to the standard oscillation scenario with only three active neutrinos. We fit IceCube data using astrophysical flux models for these sources in both scenarios and constrain the active-sterile mass-squared difference through a stacking analysis. We also predict the ratios of astrophysical neutrinos of different flavors from these sources in the pseudo-Dirac scenario, which can be probed in future neutrino detectors such as KM3NeT and IceCube-Gen2.
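In the pseudo-Dirac scenario, each mass eigenstate splits into a quasi-degenerate active-sterile pair with a tiny splitting $\delta m_j^2$. The standard averaged oscillation probability used in such analyses takes the schematic form (conventions ours):

```latex
P_{\alpha\beta}(L,E) \;=\; \sum_{j} \left|U_{\alpha j}\right|^2 \left|U_{\beta j}\right|^2
\cos^2\!\left(\frac{\delta m_j^2\, L}{4E}\right),
```

where the $\cos^2$ factors suppress the active flux relative to the three-flavour case once $\delta m_j^2 L / 4E$ becomes of order unity, which is why astrophysical baselines probe extremely small splittings.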
We introduce a modification to the standard expression for tree-level CP violation in scattering processes at the LHC, which is important when the initial state is not self-conjugate. Based on that, we propose a generic and model-independent search strategy for probing tree-level CP violation in inclusive multi-lepton signals. Then, as an illustrative example, we show that higher-dimension TeV-scale 4-fermion operators of the form tuℓℓ and tcℓℓ with complex Wilson coefficients can generate CP asymmetries of O(10%), which should be accessible at the LHC with an integrated luminosity of O(1000) fb$^{−1}$.
Modular invariance is a relatively new approach to the flavour problem: in special cases, only one flavon is needed to reproduce the neutrino masses and mixing parameters, with just a small number of free parameters. By combining this framework with generalised CP symmetry, one can determine that the flavon vacuum expectation value also dictates the CP violation of the lepton sector. Hence, one can construct models that accommodate not only the recent data on CP violation, but also the matter-antimatter asymmetry of the Universe through leptogenesis. However, it is particularly challenging to build a model that is also minimalistic. Using a set of guiding principles, we demonstrate how this can be achieved using the smallest modular finite group, $S_3$, which was rather underutilized before our work. Both the strengths and weaknesses of this promising approach are discussed.
Flavor deconstruction refers to ultraviolet completions of the Standard Model where the gauge group is split into multiple factors under which fermions transform non-universally. We propose a mechanism for charging same-family fermions under different factors of a deconstructed gauge theory in a way that avoids gauge anomalies. The mechanism relies on the inclusion of a strongly-coupled sector, responsible for both anomaly cancellation and the breaking of the non-universal gauge symmetry. As an application, we propose different flavor deconstructions of the Standard Model that, instead of complete families, uniquely identify specific third-family fermions. All these deconstructions allow for a new-physics scale as low as a few TeV and provide an excellent starting point for explaining the Standard Model flavor hierarchies.
Lepton flavor violation in tau decays, an unambiguous signature of New Physics, has been searched for in many channels by multiple collaborations, including BaBar, Belle, Belle II, LHCb, ATLAS and CMS. Combined upper limits, as compiled by the Tau subgroup of the Heavy Flavor Averaging Group, are presented for channels where multiple searches provide significant contributions.
I will describe a Left-Right symmetric model that provides an explanation for the mass hierarchy of the charged fermions within the framework of the Standard Model. This explanation is achieved through the utilization of both tree-level and radiative seesaw mechanisms. In this model, the tiny masses of the light active neutrinos are generated via a three-loop radiative inverse seesaw mechanism, with Dirac and Majorana submatrices arising at one-loop level. To the best of my knowledge, this is the first example of the inverse seesaw mechanism being implemented with both submatrices generated at one-loop level. The model contains a global U(1)X symmetry which, after its spontaneous breaking, allows for the stabilization of the Dark Matter (DM) candidates. The model is consistent with electroweak precision observables, the electron and muon anomalous magnetic moments as well as with the constraints arising from charged lepton flavor violation, dark matter and the 95 GeV diphoton excess.
$E_6$ Grand Unified Theories introduce novel symmetry-breaking patterns compared to the more common $SU(5)$ and $SO(10)$ GUTs. We explore in this talk how $SU(3)^3$ (trinification), $SU(6)\times SU(2)$ and $SO(10)\times U(1)$ symmetries can explicitly arise from $E_6$ at an intermediate breaking stage.
Due to perturbative limitations associated with very large $E_{6}$ representations, the $650$ emerges as the unique candidate for breaking into one of the novel symmetries. We find suitable minima of its scalar potential and subsequently construct a complete model with the scalar sector $27+351'+650$.
The model facilitates a two-stage breaking to the Standard Model alongside a realistic Yukawa sector. We determine for each novel breaking pattern the intermediate effective theory consistent with the extended survival hypothesis (and a $\mathbb{Z}_2$ parity that gives a dark matter candidate), and analyze unification constraints and proton decay lifetime for these minimal scenarios.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment currently under construction in the US. The experiment consists of a broadband neutrino beam from Fermilab to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, a high-precision near detector, and a large liquid argon time-projection chamber (LArTPC) far detector. The Trigger and Data Acquisition (TDAQ) systems are responsible for the acquisition and selection of data produced by the DUNE detectors and for their synchronization and recording. The main challenge for the DUNE-TDAQ lies in developing effective, resilient software and firmware that optimize the performance of the underlying hardware. The TDAQ is composed of several hardware components. A high-performance Ethernet network interconnects all the elements, allowing them to operate as a single, distributed system. At the output, the high-bandwidth Wide Area Network allows the transfer of data.
The HL-LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model (SM) as well as searches for new physics beyond the SM. Collecting the information-rich datasets required by such measurements and searches will be a challenging task, given the harsh environment of 200 proton-proton interactions per bunch crossing. For this purpose, CMS is designing an efficient data-processing hardware trigger including tracking and high-granularity calorimeter information. Trigger data analysis will be performed through sophisticated algorithms including widespread use of Machine Learning. The system design is expected to take advantage of advances in FPGA and link technologies, providing a high-performance, low-latency computing platform for large throughput and sophisticated data correlation across diverse sources. The expected impact on the physics reach of the experiment will be summarised in this presentation.
This R&D project, initiated by the DOE Nuclear Physics AI-Machine Learning initiative in 2022, leverages AI to address data-processing challenges in high-energy nuclear experiments (RHIC, LHC, and the future EIC). Our focus is on developing a demonstrator for real-time processing of high-rate data streams from the sPHENIX experiment's tracking detectors. Integrating streaming readout and intelligent control with FPGAs, the approach efficiently identifies rare heavy-flavor events in high-rate p+p collisions (3 MHz) within the limited DAQ bandwidth (15 kHz), using GNNs and hls4ml. Success at sPHENIX promises immediate benefits, minimizing resources and accelerating heavy-flavor measurements. The approach is transferable to other fields. For the EIC, we are developing a DIS-electron tagger using AI/ML algorithms for real-time identification, showcasing the transformative potential of AI and FPGA technologies in the real-time data-processing pipelines of high-energy nuclear and particle experiments.
The KOTO experiment's main aim is to measure the branching ratio of the CP-violating $K_L\rightarrow\pi^0\nu\bar{\nu}$ decay. However, data targeting other physics studies can also be recorded at KOTO. Events are rejected or tagged at the L1 stage of KOTO's DAQ based on the total energy deposition in different detectors, and trigger modes with high rates are prescaled. The L2 has recently been expanded, increasing the bandwidth from L1 to L3 by a factor of 2 to more than 25 kEvents per second while keeping the event loss below 1%. Each L3 node captures data at 40 Gbps. Events are offloaded to a GPU where reconstruction, selection, and compression are performed in parallel, reducing the data rate by a factor of 20. The increase in bandwidth at L2, together with the capabilities of the new L3, allows prescale factors to be reduced in favor of sophisticated online event selection. The DAQ system of the KOTO experiment and the design and performance of the new L3 are described in this talk.
New readout electronics for the ATLAS LAr Calorimeters are being developed, within the framework of the experimental upgrades for the HL-LHC, to be able to operate with a pile-up of up to 200 simultaneous pp interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which increases the difficulty of energy reconstruction. The energy computation will be performed in real time using dedicated electronic boards based on FPGAs. To cope with the signal pile-up, new machine-learning approaches are explored: convolutional and recurrent neural networks outperform, in energy resolution, the optimal signal filter currently used for the energy reconstruction performed at each bunch crossing. Very good agreement between the neural-network implementations in FPGA and the software calculations is observed. The FPGA resource usage, latency and operating frequency are analysed. The latest performance results and experience with prototype implementations will be reported.
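As a toy illustration (not the ATLAS implementation), the baseline "optimal filter" is a fixed linear estimator applied to the digitized samples at each bunch crossing; the neural networks replace this fixed linear map with a learned nonlinear one that is more robust to overlapping pulses. The pulse shape, noise level, and energy below are invented for the sketch:

```python
import numpy as np

# Assumed normalized pulse shape sampled at the ADC rate (hypothetical values)
pulse = np.array([0.0, 0.6, 1.0, 0.7, 0.3, 0.1])

def optimal_filter_coeffs(g):
    # Least-squares amplitude estimator for a known pulse shape g,
    # assuming white noise: a = g / (g . g)
    return g / np.dot(g, g)

a = optimal_filter_coeffs(pulse)

# One pulse of true energy 5.0 plus small white noise
rng = np.random.default_rng(0)
samples = 5.0 * pulse + rng.normal(0.0, 0.01, size=pulse.size)
e_hat = float(np.dot(a, samples))  # amplitude (energy) estimate
```

When pulses from neighbouring bunch crossings overlap, this linear estimate becomes biased, which is the regime where the learned networks are reported to do better.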
We present the preparation, deployment, and testing of an autoencoder trained for unbiased detection of new physics signatures in the CMS experiment Global Trigger (GT) test crate FPGAs during LHC Run 3. The GT makes the final decision whether to read out or discard the data from each LHC collision, which occur at a rate of 40 MHz, within a 50 ns latency. The neural network makes a prediction for each event within these constraints, which can be used to select anomalous events for further analysis. The GT test crate is a copy of the main GT system, receiving the same input data, but whose output is not used to trigger the readout of CMS, providing a platform for thorough testing of new trigger algorithms on live data without interrupting data taking. We describe the methodology for achieving ultra-low-latency anomaly detection, and present the integration of the DNN into the GT test crate, as well as the monitoring, testing, and validation of the algorithm during proton collisions.
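As a minimal sketch (not the deployed CMS network), an autoencoder anomaly trigger scores each event by its reconstruction error. Here a linear autoencoder, obtained in closed form via PCA, stands in for the trained DNN; all dimensions and data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 16, 4  # input features and bottleneck size (assumed)

# "Normal" events lie near a low-dimensional subspace plus small noise
basis = rng.normal(size=(k, d))
normal = rng.normal(size=(5000, k)) @ basis + 0.05 * rng.normal(size=(5000, d))

# Closed-form "training": PCA gives the optimal linear autoencoder
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
enc = vt[:k]  # encoder matrix: d -> k

def anomaly_score(x):
    z = (x - mean) @ enc.T   # encode to the bottleneck
    xr = z @ enc + mean      # decode back to input space
    return np.mean((x - xr) ** 2, axis=-1)  # per-event reconstruction MSE

# Events drawn off the learned subspace score much higher
anomalous = rng.normal(size=(1000, d))
```

The trigger decision then reduces to a threshold cut on `anomaly_score`, which is what makes the scheme implementable within a fixed FPGA latency budget.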
SuperCDMS SNOLAB is a direct dark matter search experiment currently under construction at SNOLAB in Sudbury, Canada. SuperCDMS will deploy 24 cryogenic Si and Ge detectors, arranged in 4 towers with 6 detectors each. Although all 4 towers have been previously tested at SLAC, the extent of testing was limited by the large cosmogenic background at this surface facility. Two of the towers contain High Voltage (HV) detectors, which utilize the Neganov-Trofimov-Luke (NTL) effect to obtain phonon signals amplified linearly with the applied voltage, thus allowing lower thresholds and searches for low-mass dark matter candidates. Tower 3, containing 4 Ge and 2 Si HV detectors, was tested at the CUTE facility at SNOLAB from Oct 2023 to Feb 2024. This marks the first operation of these detectors in a low-background environment, which closely resembles conditions in the main experiment. This talk will present a summary of the testing effort, the major results and prospects for dark matter searches with this data.
The Super Cryogenic Dark Matter Search (SuperCDMS) SNOLAB experiment is currently under construction 2 km underground at the SNOLAB facility near Sudbury, Canada. Utilizing 24 state-of-the-art cryogenic germanium (Ge) and silicon (Si) detectors, the experiment aims to achieve world-leading sensitivity in the direct search for dark matter (DM) particles interacting with nuclei, spanning masses from 0.5 to 5 GeV/c$^2$. Additionally, the experiment will investigate electron scattering of MeV-scale DM particles and the absorption of eV-scale dark photons and axion-like particles (ALPs). This presentation first provides a detailed examination of the experimental setup. The talk proceeds to highlight recent progress and achievements in the experiment installation, and discusses the expected world-leading sensitivity.
The DAMIC-M (DArk Matter In CCDs at Modane) experiment will use skipper CCDs to search for low-mass (sub-GeV) dark matter underground at the Laboratoire Souterrain de Modane (LSM). With about 1 kg of silicon target mass and sub-electron energy resolution, the detector will surpass the exposure and threshold (eV-scale) of previous experiments. Thus, DAMIC-M will have world-leading sensitivity to a variety of "hidden sector" candidates. In this talk, we will report on science results from a prototype detector, test performance of CCD modules, and the status of the detector construction at LSM.
Dark matter is a hypothetical new form of matter that does not interact with the electromagnetic field and has a very weak interaction with ordinary matter. WIMPs are prime dark matter candidates, but most experiments are constrained to spin-independent interactions in the 10-100 GeV/$c^2$ mass range.
QUEST-DMC (Quantum Enhanced Superfluid Technologies for Dark Matter and Cosmology) is a collaboration between Lancaster, Oxford, Royal Holloway University of London, and Sussex universities, supported through the UK Quantum Technologies for Fundamental Physics programme.
QUEST-DMC will use superfluid He-3 as a dark matter collision target, aiming to reach world-leading sensitivity to spin-dependent interactions of dark matter candidates with masses of 0.1-1 GeV/$c^2$.
Here we discuss the development of superfluid He-3 bolometers, arguing that recoil energies below 10 eV can be detected using nanomechanical resonators, by controlling the dominant background sources and employing quantum sensors.
DarkSide-20k, a double-phase liquid argon time projection chamber, is being constructed as a direct dark matter detection experiment with 50 tonnes of fiducial target mass. A key component of the experiment is low-radioactivity underground argon (UAr), depleted in the isotope 39Ar.
The UAr journey starts at the Urania plant in Colorado, where it is extracted at 250 kg/day from a CO2 stream sourced from a deep well, with a purity of 99.99%. The argon then arrives at the Aria plant in Sardinia, Italy, which can perform both cryogenic and isotopic distillation, reaching a purity of 99.9999%. To complement this unique procurement chain, the DArT in ArDM experiment at Canfranc has been designed to measure and monitor the radiopurity of the UAr. The importance of this supply chain and of the associated techniques has gained global attention well beyond DarkSide-20k.
The accelerator TDR of the Circular Electron Positron Collider (CEPC), a Higgs and high-luminosity Z factory, was released in 2023. The baseline detector concept features a large 3D tracking system: a Time Projection Chamber (TPC) with high spatial resolution (about 100 μm) as the main tracker, embedded in a 3.0 T solenoid field, suitable in particular for accelerator operation at Tera-Z. The TPC requires a longitudinal time resolution below 100 ns, and the physics goals require a PID resolution better than 3%.
In this talk, we will present the feasibility and progress of the high-precision TPC technology for CEPC, including operation at Tera-Z. Fundamental parameters such as the spatial resolution, the PID separation power, and the drift velocity were studied by simulation and by measurements with a TPC prototype with a 500 mm drift length. We will review the track reconstruction performance results and summarize the next steps towards TPC construction for the CEPC physics and detector TDR.
The extension of the BESIII experiment (IHEP, Beijing) until 2030 has triggered a program to improve the accelerator and the detector. In particular, it is proposed to replace the current inner drift chamber with a cylindrical GEM detector.
The inner CGEM tracker consists of three coaxial layers of triple GEM and is expected to restore efficiency, improve z-determination and secondary vertex position reconstruction with a resolution of 130 μm in the xy-plane and better than 500 μm along the beam direction.
A special readout system was developed. The signals are processed by TIGER, a custom 64-channel 110 nm ASIC that provides an analog charge readout via a fully digital output to the GEM Read Out Card.
The three layers were assembled in October 2023 and a cosmic ray data collection campaign is underway to evaluate performance prior to installation in BESIII.
The general status of the CGEM-IT project will be presented, with a particular focus on the first cosmic ray detection results.
A new detector concept optimizes MPGD geometry for low-cost and large-area applications while keeping the same performance. The base element, a µRtube, is a cylindrically shaped µRWELL of 0.9 cm radius, which works as an amplification stage and readout. The external sleeve is 18 cm in diameter and accommodates the cathode, completing a radial tubular TPC with a small internal surface used for the readout. This geometry significantly reduces the number of electronic channels per unit area and brings a new technological achievement, with an unprecedented curvature radius of an MPGD, for imaging and particle identification applications. The detection technique of the µRtube is based on the TPC approach, where time information is used to reconstruct the ionizing-particle path inside the drift volume. A report on the detector concept, a full simulation of the detector, and a validation with a test beam will be presented.
The near detector of the T2K experiment is undergoing a major upgrade. New Time Projection Chambers have been constructed, based on the innovative resistive Micromegas technology. A resistive layer is deposited onto the segmented anode in order to spread the charge over several adjacent pads, improving the spatial resolution for a given segmentation. The results of the first detailed characterization of the charge spreading in resistive Micromegas detectors will be presented. A detailed physical model has been developed to describe the charge-dispersion phenomena in the resistive Micromegas anode. The description includes the initial ionization, electron drift, diffusion effects and the readout electronics, including the description and simulation of noise. The model provides an excellent characterization of the charge spreading in the experimental measurements and allowed the simultaneous extraction of the gain and RC information of the modules.
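The charge spreading on a resistive anode is commonly modeled as two-dimensional diffusion with a time constant set by the surface resistivity R and the capacitance per unit area C. For a point charge Q deposited at t = 0 (standard result, notation ours):

```latex
\frac{\partial \rho}{\partial t} = h\, \nabla^2 \rho, \qquad h = \frac{1}{RC},
\qquad \rho(r,t) = \frac{Q}{4\pi h t}\, \exp\!\left(-\frac{r^2}{4 h t}\right),
```

so the time evolution of the pad signals constrains the product RC, which is what allows the gain and RC of the modules to be extracted simultaneously from data.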
Recent advancements in High Energy Physics experiments demand innovative particle detectors capable of operating efficiently in high-background and high-radiation environments. This necessitates R&D in MPGD technology, targeting particle fluxes up to 10 MHz/cm². Our project focuses on single-stage amplification resistive Micromegas, addressing challenges such as the miniaturization of readout elements and the optimization of spark protection. We have explored various resistive layouts, utilizing embedded resistors or double-layer DLC foils. A comparative analysis highlights the efficacy of the different configurations under high irradiation. Our findings showcase promising results, particularly with the double DLC layer solution. We present comprehensive results from medium-sized detectors and preliminary measurements from large-area modules, underscoring readiness for further development towards large-scale high-rate detectors.
While the ionization process of charged particles (dE/dx) is commonly used for particle identification, uncertainties in the total energy deposition limit particle-separation capabilities. To overcome this limitation, the cluster counting technique (dN/dx) takes advantage of the Poisson nature of primary ionization, providing a statistically robust method for inferring mass information. This presentation introduces state-of-the-art algorithms and modern computing tools for electron-peak identification and ionization-cluster recognition in experimental data. Three beam tests were conducted at CERN, involving different helium gas mixtures, varying gas gains, and various wire orientations relative to the ionizing tracks. The tests employ a muon beam ranging from 1 GeV/c to 180 GeV/c, with drift tubes of different sizes and sense wires of different diameters. The discussion will include the data-analysis results confirming the Poisson nature of the cluster counting technique.
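The statistical advantage of cluster counting can be illustrated with a toy simulation (all numbers invented): counting primary clusters inherits the Poisson relative width of roughly $1/\sqrt{N}$, while the total energy deposition suffers from the long, Landau-like tails of the cluster-energy spectrum:

```python
import numpy as np

rng = np.random.default_rng(42)

n_events = 20000
mean_clusters = 30  # assumed mean number of primary clusters per cell

# dN/dx: count primary ionization clusters -> Poisson statistics
n_clusters = rng.poisson(mean_clusters, size=n_events)
dndx_res = n_clusters.std() / n_clusters.mean()  # close to 1/sqrt(30)

# dE/dx: total deposited energy, with cluster energies drawn from a
# heavy-tailed spectrum as a toy stand-in for Landau fluctuations
def event_energy(n):
    e = rng.pareto(1.5, size=n) + 1.0  # heavy-tailed cluster energies
    return e.sum()

energies = np.array([event_energy(n) for n in n_clusters])
dedx_res = energies.std() / energies.mean()

print(dndx_res, dedx_res)  # dN/dx resolution is markedly better
```

In practice a truncated mean tames the dE/dx tails somewhat, but the Poisson-limited dN/dx estimator remains the statistically cleaner observable, which is the motivation behind the peak-finding algorithms discussed in the talk.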
Resonance production is one of the key observables for studying the dynamics of high-energy collisions. The analysis of the $K^*(892)^0$ meson allows a better understanding of the time evolution of high-energy nucleus-nucleus collisions. Namely, the ratio of $K^*(892)^0$ to charged kaons is used to determine the time between the chemical and kinetic freeze-outs.
In this talk, the first NA61/SHINE results on $K^*(892)^0$ production in central Ar+Sc collisions at three SPS energies ($\sqrt{s_{NN}}$ = 8.8, 11.9, 16.8 GeV) will be presented. The $K^*(892)^0/K^\pm$ yield ratios will be compared with the corresponding results in p+p collisions, allowing an estimate of the time between the chemical and kinetic freeze-outs in Ar+Sc collisions. These first results for an intermediate-mass nucleus-nucleus system will be compared with results for heavier systems in a similar energy range.
The correlations between net-conserved quantities such as net-baryon, net-charge and net-strangeness play a crucial role in the study of the QCD phase structure, as they are closely related to the ratios of thermodynamic susceptibilities in lattice QCD (LQCD) calculations. This presentation introduces new results on the correlations between net-kaon and net-proton, as well as net-kaon and net-charge, in Pb--Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV using data recorded by the ALICE detector. Here, net-proton and net-kaon serve as proxies for net-baryon and net-strangeness, respectively. A comparative analysis is presented, drawing connections with corresponding results at lower collision energies from the STAR experiment at RHIC. Theoretical predictions from the hadron resonance gas model and the HIJING and EPOS event generators are also compared with the experimental results, providing insights into the effects of resonance decays and charge conservation laws.
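For reference, the lattice susceptibilities alluded to above are Taylor coefficients of the QCD pressure, and the measured correlation ratios act as their experimental proxies; schematically (standard notation, with $X, Y \in \{B, Q, S\}$ and $\delta N \equiv N - \langle N \rangle$):

```latex
\chi^{XY}_{11} \;=\; \frac{\partial^{2}\,(P/T^{4})}{\partial(\mu_X/T)\,\partial(\mu_Y/T)}\,,
\qquad
\frac{\chi^{XY}_{11}}{\chi^{Y}_{2}} \;\longleftrightarrow\;
\frac{\langle \delta N_X\,\delta N_Y\rangle}{\langle (\delta N_Y)^{2}\rangle}\,.
```

In the measurement above, the event-by-event net-proton and net-kaon numbers stand in for the conserved charges $B$ and $S$ on the right-hand side.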
The FASTSUM Collaboration has developed a comprehensive research programme in thermal lattice QCD using 2+1 flavour ensembles. We will review our recent hadron spectrum results, including analyses of open charm mesons and charm baryons at non-zero temperature. We also detail our determination of the interquark potential in the bottomonium system using NRQCD quarks. Finally, we summarise our work comparing various spectral reconstruction methods in this system. All of our work uses anisotropic lattices, where the temporal lattice spacing is considerably finer than the spatial one, allowing better resolution of temporal correlation functions.
Direct photons are emitted throughout the development of a relativistic heavy-ion collision; their observation therefore provides a snapshot of the evolution of the collision. This talk will present the latest results of the PHENIX experiment at RHIC, obtained from the high-statistics Au+Au data set taken at 200 GeV. The results expand on earlier measurements and isolate the non-prompt direct-photon component. The data show high yields that exhibit a power-law scaling behavior with system size, with no apparent dependence on transverse momentum ($p_T$). In contrast, the inverse slope of the $p_T$ spectra varies from 250 MeV at 1 GeV/c to close to 400 MeV by 3 GeV/c. Direct photons also exhibit a strong anisotropy with respect to the reaction plane for $p_T$ of up to 5 GeV/c. These features are qualitatively consistent with calculations of thermal radiation from the collision, but reconciling them quantitatively remains a challenge.
Electromagnetic probes are a unique tool to study the space-time evolution of the hot and dense matter created in ultra-relativistic heavy-ion collisions. More specifically, dielectron pairs are emitted as thermal radiation during all stages of the collision, allowing the extraction of the real direct photon fraction at vanishing mass. Measurements in pp collisions both serve as a baseline for heavy-ion studies and allow one to search for interesting phenomena in events with high charged-particle multiplicities.
This talk will present the final LHC Run 2 ALICE results on dielectron and direct-photon production in central Pb--Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV and in minimum-bias and high-multiplicity pp collisions at $\sqrt{s}$ = 13 TeV. Finally, first results from the Run 3 data using the upgraded ALICE detector will be reported.
LHCb functions as a spectrometer targeting the forward region of proton-proton collisions, covering a pseudo-rapidity range between 2 and 5. Due to the scarcity of background events in the high-mass region, along with its precise reconstruction capabilities and a trigger system featuring low energy thresholds, LHCb offers an optimal environment for probing (exotic) Higgs decays, complementing the efforts of ATLAS and CMS. This presentation will delve into the latest investigations into Beyond the Standard Model (BSM) Higgs decays conducted at LHCb, along with outlining potential avenues for future data-collection periods at the LHC. Additionally, searches for H->bb and H->cc decays will be presented, with a focus on the latest results obtained using the full Run 2 dataset. Finally, prospects for Standard Model Higgs searches are presented, with an eye toward the future LHCb experiment upgrades.
The study of Higgs boson production in association with one or two top quarks provides a key window into the properties of the two heaviest fundamental particles in the Standard Model, and in particular into their couplings. This talk presents property measurements of the Higgs boson, in particular of its cross section and CP nature, in tH and ttH production in pp collisions collected at 13 TeV with the ATLAS detector, using the full Run 2 dataset of the LHC.
The full set of data collected by the CMS experiment at a centre-of-mass energy of 13 TeV allows searches for rare production modes of the Higgs boson, subdominant with respect to the ones already observed at the LHC, using a variety of decay modes and profiting from the ones with the largest expected branching fractions. These include associated production of the Higgs boson with two b quarks, with a c quark, or vector-boson-scattering production with two associated W bosons. Double Higgs boson production in association with a pair of top quarks is also considered. While the expected rates are still limited with the collected data, these modes are enhanced in several BSM theories and can be used to constrain such models.
Detailed measurements of Higgs boson properties can be performed using its decays into fermions, providing in particular a key window into the nature of the Yukawa interactions. This talk presents the latest measurements by the ATLAS experiment of Higgs boson properties in its decays into pairs of tau leptons, using the full Run 2 pp collision dataset collected at 13 TeV.
Testing the Yukawa couplings of the Higgs boson with fermions is essential to understanding the origin of fermion masses. Higgs boson decays to quark pairs are an important probe of these couplings, and of the properties of the Higgs boson more generally. This talk presents various measurements of Higgs boson decays into two bottom quarks, as well as searches for Higgs boson decays into two charm quarks, by the ATLAS experiment, using the full Run 2 dataset of pp collisions collected at 13 TeV at the LHC, as well as their combination and interpretation. The results of the search for Higgs boson production in association with a charm quark are also reported.
The impact of finite bottom-quark mass effects at next-to-next-to-leading order constitutes one of the leading theory uncertainties of the Higgs production cross section.
In this talk, I will present our evaluation of this contribution. We computed the relevant two-loop master integrals that enter the real-virtual contribution numerically using the method of differential equations. In addition, the Higgs-gluon form factor at three loops in QCD with two different massive quark flavours has been included. Furthermore, I will discuss the impact of the choice of renormalisation scheme.
The NEXT collaboration seeks to discover the neutrinoless double beta decay (ββ0ν) of Xe-136 using a high-pressure gas time projection chamber with electroluminescence gain and optical readout. An initial medium-scale prototype, NEXT-White, with 5 kg of xenon, was operational at the Laboratorio Subterraneo de Canfranc (LSC) from 2016 to 2021. This prototype has proven the outstanding performance of the NEXT technology in terms of energy resolution (<1% FWHM at 2.6 MeV) and precise event-topology reconstruction, crucial for distinguishing signal from background events. In this talk, I will review the performance of the NEXT-White detector and present the measurement of the half-life of the two-neutrino double beta decay (ββ2ν) and the derived limits on the half-life of the ββ0ν decay. The results were obtained with both a background-model-dependent approach and a novel direct background-subtraction technique, using a combination of 271.6 days of Xe-enriched data and 208.9 days of Xe-depleted data.
The AMoRE-II experiment is the next phase of the AMoRE project, which aims to search for the neutrinoless double beta decay of 100Mo isotopes. The experiment will use 100 kg of 100Mo target nuclei, enriched to more than 95%, mainly contained in hundreds of scintillating lithium molybdate crystal absorbers read out by MMC (metallic magnetic calorimeter) sensors operated as cryogenic calorimeters. The detectors' performance has significantly improved compared to the previous phases. We anticipate a background level of approximately 10^-4 counts/keV/kg/year in the region of interest (ROI) by utilizing low-background detector materials and an optimized shielding structure at Yemilab, the new underground laboratory with a 1000 m overburden. We will present the overall effort to move towards the AMoRE-II phase.
Neutrinoless double-beta decay plays a central role in addressing crucial questions in particle physics, including lepton number conservation and the Majorana nature of neutrinos. CUPID is a next-generation experiment to search for the 0νββ decay of 100Mo using scintillating bolometers. CUPID profits from the experience acquired with CUORE, the first ton-scale bolometric array, currently in operation, and will be hosted in its cryogenic infrastructure. With 1596 scintillating 100Mo-enriched Li2MoO4 crystals coupled to 1710 light detectors, CUPID enables the simultaneous readout of heat and light, allowing for particle identification and a robust rejection of the alpha background, reaching a sensitivity greater than 10^27 yr. Ongoing coordinated efforts and R&D projects aim to finalize the detector design and assess its performance and physics capabilities. In this presentation, we will provide an overview of the current status of CUPID and highlight the upcoming milestones in the construction of the experiment.
SuperNEMO is searching for the hypothesised lepton-number-violating neutrinoless double-beta decay (0νββ) process. Our unique NEMO-3-style tracker-calorimeter detector tracks individual particle trajectories and energies. This enables powerful background rejection and detailed studies of Standard Model (2νββ) decay. By studying electron and photon energies and relative trajectories, SuperNEMO will investigate nuclear processes hidden to other technologies, such as decays to excited nuclear states, and will constrain the axial coupling constant, gA. By precisely measuring 2νββ observables we will seek beyond-the-Standard-Model effects like exotic 0νββ modes, Lorentz-violating decays and bosonic neutrino processes.
The SuperNEMO Demonstrator at LSM, France, has a 6.1 kg Se-82 ββ source and is taking background data vital to isolating future signals. It is calibrated with a Bi-207 source-deployment system. Multi-layer shielding, now under construction, will allow ββ data-taking in 2024.
The search for neutrinoless double beta decay could cast light on one critical missing piece in our knowledge: the nature of the neutrino mass. The observation of such a potentially rare process demands a detector with an excellent energy resolution, an extremely low radioactivity, and a large mass of emitter isotope. Many techniques are currently being pursued, but none of them meets all the requirements at the same time. The goal of R2D2 is to prove that a cylindrical high-pressure TPC filled with xenon gas could meet all the requirements and provide an ideal detector for the 0νββ decay search. The prototype has demonstrated an excellent resolution with argon and xenon up to the maximal possible operation pressure of 10 bar. The resolution is constant in the pressure range studied and almost independent of the gas used in the TPC. In the proposed talk, the R2D2 results obtained with the current prototype will be discussed, as well as the project roadmap and future developments.
Observation of the neutrinoless double-beta ($0\nu\beta\beta$) decay would demonstrate lepton number violation and provide insights into the matter-antimatter asymmetry and the Majorana nature of the neutrino. It is a challenging quest that requires experimental conditions ensuring little to no background and superb energy resolution. The Large Enriched Germanium Experiment for $0\nu\beta\beta$ decay (LEGEND) is designed to provide such conditions, aiming at an unambiguous discovery of the $0\nu\beta\beta$ decay of ${}^{76}$Ge.
Its first and current stage, LEGEND-200, utilizes up to 200 kg of high purity ${}^{76}$Ge-enriched detectors, and will be operating for 5 years as a natural step towards the final 1000-kg phase. LEGEND-200 is located at LNGS, Italy. Its commissioning was completed in Oct 2022, and physics data taking started in Mar 2023. In this talk I will summarize the status of LEGEND-200 and its current and future milestones. The talk is presented on behalf of the LEGEND collaboration.
We propose a model for leptons based on the smallest modular finite group $\Gamma_2\simeq S_3$, incorporating two right-handed sterile neutrinos $N_{1,2}$ and a single modulus $\tau$ into the Standard Model (SM) particle spectrum. In addition to offering an excellent fit to low-energy neutrino observables, we investigate the potential for explaining the baryon asymmetry of the Universe (BAU) through thermal leptogenesis. We numerically solve the unflavored Boltzmann Equations for lepton asymmetry, considering both the decays of $N_1$ and $N_2$. Our analysis leads to the conclusion that the $N_1-$dominated scenario is successful and it represents the most natural choice for the model.
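The unflavored Boltzmann equations referred to above take the standard form of the leptogenesis literature (with $z = M_1/T$; $D$, $S$ and $W$ the decay, scattering and washout terms; and $\varepsilon_1$ the CP asymmetry of $N_1$ decays; the inclusion of $N_2$ adds analogous source and washout terms):

```latex
\frac{dN_{N_1}}{dz} \;=\; -(D + S)\left(N_{N_1} - N_{N_1}^{\mathrm{eq}}\right),
\qquad
\frac{dN_{B-L}}{dz} \;=\; -\,\varepsilon_1\, D \left(N_{N_1} - N_{N_1}^{\mathrm{eq}}\right) \;-\; W\, N_{B-L}\,.
```

The final $B-L$ asymmetry is then converted into the baryon asymmetry by sphaleron processes.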
To fully exploit the extended capability of its upgraded L1 trigger at the High-Luminosity LHC, CMS is pioneering a novel L1 Data Scouting (L1DS) system, capable of acquiring and processing the quasi-offline-quality trigger primitives produced by the upgraded L1 at the accelerator bunch-crossing rate of 40 MHz. The goal of the system is to give full access to potential physics signatures otherwise constrained by the L1 latency and accept rate limitations, or whose selection strategy diverges from that of the standard CMS physics program. To validate the concept and provide a development platform for the firmware and software required for the final system, a demonstrator system was assembled to operate with the current L1 trigger in the LHC Run 3. We discuss the L1DS demonstrator system architecture, performance, and some preliminary results, as well as the design of the final L1DS system and a summary of ongoing studies of its potential and competitiveness in selected physics channels.
As a hadron collider, the LHC produces a large number of hadronic jets. Properties of these jets are good tests of QCD; however, hadronic decays of Standard Model particles, as well as signs of new physics, can be hidden in events containing jets too. The ATLAS jet trigger system is an important element of the event selection process, providing data to study a wide range of physics processes at the LHC. To this end, proper jet reconstruction and calibration are crucial to ensure good trigger performance across the ATLAS physics program, and understanding this performance is necessary for the correct interpretation of recorded data. In this contribution, we are going to provide an overview of the ATLAS jet trigger system and its general performance at the beginning of Run-3 at the LHC.
The ATLAS Trigger, upgraded for the increased instantaneous luminosity of the LHC in Run 3, includes a topological trigger system (L1Topo) that performs complex multi-object trigger calculations within a very small processing time of 75 ns. L1Topo is based on 6 Xilinx Ultrascale+ 9P FPGAs for massively parallel and fully synchronous computation using 2.5M LUTs per FPGA. Its firmware is composed of a large number of sort/select, decision, and multiplicity algorithms, automatically assembled and configured based on the trigger menu. An overview of the L1Topo hardware, firmware, commissioning challenges and performance results is presented.
For the HL-LHC, L1Topo will be replaced by a Global Trigger, a time-multiplexed system, concentrating the data of a full event into a single FPGA. An overview of the new topological firmware, redesigned from the Run 3 building blocks to match the larger available processing time (1.2 us) and a much tighter resource budget (100k LUTs), is presented.
The IceCube Neutrino Observatory is a cubic-kilometer Cherenkov light detector that also searches for signatures of particles beyond the Standard Model, including fractionally charged particles. These are predicted to carry a fraction of the elementary charge, resulting in faint tracks in the detector.
To enhance the efficiency of detecting these faint signatures, we developed the novel Faint Particle Trigger (FPT). The FPT involves the analysis of isolated single hits which are not included in any other IceCube trigger. In the case of simulated faint exotic signatures and GeV-range leptons, these isolated single hits become the predominant hit type. The FPT significantly improves trigger efficiencies for signal simulations, concurrently increasing the event rate by approximately 0.5%.
The FPT commenced operations at the South Pole in November 2023. The development, processing chain for the new events and implications for analyses will be presented.
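A back-of-the-envelope sketch of why such particles appear faint: the Cherenkov light yield scales with the square of the particle's electric charge (the standard Frank-Tamm $z^2$ dependence). The helper function below is ours, for illustration only:

```python
def relative_light_yield(charge_fraction):
    """Cherenkov light yield relative to a unit-charge particle at the
    same velocity; Frank-Tamm gives a yield proportional to charge^2."""
    return charge_fraction ** 2

# A charge of e/3 gives only 1/9 of the usual Cherenkov light, producing
# the sparse, isolated hits that the Faint Particle Trigger collects.
yield_e_third = relative_light_yield(1.0 / 3.0)
```

This quadratic suppression is why standard triggers, tuned to bright minimum-ionizing tracks, miss these signatures and a dedicated single-hit trigger is needed.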
During the third data-taking period, the Large Hadron Collider provided record-breaking integrated and instantaneous luminosities, resulting in huge amounts of data with numbers of interactions per bunch crossing significantly beyond initial projections. In spite of these challenging conditions, the ATLAS Inner Detector (ID) track reconstruction continued to perform excellently. In this contribution, the algorithms used to reconstruct charged particles and primary vertices will be described. The software configuration used for the Run 3 data-taking period and its performance will be presented using data and simulated events. Additional track-reconstruction passes, developed to improve the tracking capabilities in dedicated physics scenarios, will be discussed as well.
The efficient and precise reconstruction of charged-particle tracks is crucial for the overall performance of the CMS experiment. Prior to the beginning of Run 3 at the LHC in 2022, the first layer of the Tracker Barrel Pixel subdetector was replaced in order to cope with the high-pileup environment, and significant upgrades were made to the track-reconstruction algorithms. Performance measurements of the track reconstruction, both in simulation and in data, will be presented from the collisions recorded in 2022 and 2023. Finally, we will discuss the ongoing developments to improve track reconstruction for the remainder of Run 3 and for the future.
Unstable long-lived particles with lifetimes above 100 ps occur in the Standard Model (SM) and show up in many of its extensions. They are, however, challenging to reconstruct and trigger on at the LHC due to their very displaced decay vertices. The new software-based trigger system of the LHCb experiment for Run 3 onwards consists of two stages, HLT1 and HLT2, the first enabling the detector reconstruction to be performed in real time with high performance on GPUs, the second providing, also in real time, offline-quality resolution of the reconstructed objects. This trigger opens the possibility to develop new algorithms, which can be decisive for enhancing the reconstruction of Λ and K0s hadrons and finding new particles with lifetimes ranging from about 100 ps to tens of ns. This talk discusses the efforts and challenges of these developments, detector performance studies using Run 2 and Run 3 data, and the opportunities opened for the LHCb physics program within and beyond the SM.
The Mu2e experiment at Fermilab will search for the neutrinoless muon-to-electron conversion in the nuclear field by stopping $\mu^{-}$ on an Al target. The experimental signature of $\mu^{-}$ to $e^{-}$ conversion on Al is the 104.97 MeV mono-energetic conversion $e^{-}$s. Rejection of one of the most important experimental backgrounds, coming from muon decays-in-orbit, requires a momentum resolution $<1\%$ FWHM and a momentum scale calibrated to an accuracy of better than 0.1% or 0.1 MeV. Among other momentum-scale calibration techniques, the collaboration is considering using the 69.8 MeV $e^{+}$s from $\pi^{+}\rightarrow e^{+}\nu_{e}$ decays of stopped $\pi^{+}$s. This calibration measurement has a significant background, dominated by muon decays in flight, affecting the calibration accuracy. The background can be reduced by placing a thin Ti degrader in front of the stopping target and by timing selection. We discuss the optimization of this momentum calibration measurement.
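The mono-energetic positron line used for this calibration follows directly from two-body decay kinematics for a pion at rest; a small sketch (PDG masses, neutrino taken as massless):

```python
import math

M_PI = 139.570  # charged pion mass [MeV/c^2], PDG value
M_E = 0.511     # electron mass [MeV/c^2], PDG value

def two_body_momentum(m_parent, m1, m2=0.0):
    """Momentum of either daughter in a two-body decay of a parent at rest:
    p = sqrt([M^2 - (m1+m2)^2][M^2 - (m1-m2)^2]) / (2M)."""
    term1 = m_parent**2 - (m1 + m2)**2
    term2 = m_parent**2 - (m1 - m2)**2
    return math.sqrt(term1 * term2) / (2.0 * m_parent)

# Stopped pi+ -> e+ nu: with a massless neutrino this reduces to
# p = (M_pi^2 - m_e^2) / (2 M_pi), about 69.8 MeV/c.
p_e = two_body_momentum(M_PI, M_E)
```

The sharpness of this line is what makes it attractive as an absolute momentum-scale reference near the 104.97 MeV conversion-electron signal.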
The Belle and Belle II experiments have collected a 1.1 ab$^{-1}$ sample of $e^+ e^-\to B\bar{B}$ collisions at the $\Upsilon(4S)$ resonance. These data, with low particle multiplicity, constrained initial-state kinematics and excellent lepton identification, are ideal to study lepton-flavour universality in semileptonic decays of the $B$ meson.
We present results on the ratios of semitauonic decay rates to semileptonic decays with light leptons, in both exclusive and inclusive $B$ decays. These include new measurements of the ratios for exclusive $B\to D^{(*)}\ell \nu$ decays, $R(D^{(*)})$. We also present measurements of angular observables that test universality between electrons and muons.
New sources of CP violation, beyond those encoded in the CKM matrix of the Standard Model (SM) weak sector, are required to explain the baryon asymmetry of the universe. An additional mechanism within the SM that can generate CP-violating electric dipole moments (EDMs) is the QCD $\theta_s$ parameter in the strong sector. Significant recent advances from new theoretical calculations of (i) the electron EDM via hadronic contributions, (ii) the CP-violating semi-leptonic scalar (and tensor) interaction parameters $C_S$ (and $C_T$), and (iii) the nuclear magnetic quadrupole and Schiff moment contributions to molecular systems have allowed us to update our lower estimates of the EDMs of charged leptons, certain baryons, and select atoms and molecules within the CKM$\oplus\theta_s$ framework, in light of the current experimental upper limits. We were able to impose a new constraint of $\theta_s<9.5\times10^{-11}$ (95% C.L.) using an analysis of the $^{199}$Hg EDM.
In the SM, the electroweak bosons couple to the three lepton families with the same strength, the only difference in their behaviour being due to the difference in mass. In recent years, some deviations have been found in measurements of the ratios of branching fractions for $b$-hadrons decaying into final states with different lepton flavours. This talk presents recent results of lepton flavour universality tests in $b \to c\ell\nu$ decays, using hadronic or muonic $\tau$ decays, performed at LHCb.
In arXiv:2312.07758 and arXiv:2206.11281, we applied the recently developed Residual Chiral Expansion (RCE) to significantly reduce the set of unknown subsubleading hadronic functions to a set of highly constrained functions at second order in Heavy Quark Effective Theory (HQET). In this talk, we present updated predictions for $R(D)$ and $R(D^*)$ using the RCE and the recent new experimental inputs from Belle and Belle II. We further discuss the compatibility with new lattice information for $B \to D^* \ell \nu$. We explore the applicability of the RCE using $\Lambda_b \to \Lambda_c \ell \bar \nu_\ell$ decays: intriguingly, in this decay the RCE reduces the set of six unknown subsubleading hadronic functions to a single function. We fit a form-factor parametrization based on these results to all available Lattice QCD (LQCD) predictions and experimental data, and find excellent agreement with the pure HQET prediction.
We derive the complete four-body angular distribution for the decay $\Lambda_b^0 \to \Lambda_c^{+}(\to \Lambda \pi^{+})\tau \bar{\nu}_{\tau}$. We provide form factors in the BGL parametrization and update the SM prediction for the angular observables along with the LFUV ratio $R(\Lambda_{c})$, which we find consistent with the lattice. For the first time, the CP-violating (CPV) observables are analyzed. Using the recent LHCb results for $R(\Lambda_{c})$ and $F_L(D^*)$, along with the current HFLAV averages of $R(D)$ and $R(D^*)$, we have performed fits with one and two parameters. We find that the scenario $\mathcal{Re}[C_{V_2}]$ can explain $R(D^*)$ and $R(\Lambda_{c})$ but not $R(D)$, though all one-operator scenarios can explain all the observables within $2\sigma$. Among the two-parameter scenarios, $C_{S_1}-C_T$ is the best-fit scenario, explaining all the observables within $1\sigma$. We extensively study the correlations between observables in the presence of both one- and two-operator NP scenarios.
Results are presented on LF(U)V tests through precise measurements of decays involving heavy mesons and leptons, which are compared to the standard model predictions. The measurements use 13 TeV pp collision data collected by the CMS experiment at the LHC.
Measurements of charm-strange meson and charm-baryon production in pp and heavy-ion collisions at the LHC are fundamental to investigate the charm-quark hadronisation across collision systems.
In this contribution, the final results of the ALICE Collaboration on the production of strange ($\mathrm{D_s}^+$ , $\Xi_\mathrm{c}^{0,+}$, $\Omega_\mathrm{c}^0$) and non-strange ($\mathrm{D}^0$ , $\mathrm{D}^+$, $\mathrm{D}^{*+}$ , $\Lambda_\mathrm{c}^+$, $\Sigma_\mathrm{c}^{0,+,++}$) charm hadrons in pp, p–Pb and Pb–Pb collisions collected in Run 2 by the ALICE experiment are shown. The production measurements of $\mathrm{D_s}^+$ mesons are compared to those of non-strange mesons, and the comparison between the measured baryon-to-meson ratios and novel theoretical calculations will be discussed. To conclude, the first studies of charm-hadron reconstruction using the large data sample of pp collisions at $\sqrt{s} = 13.6$ TeV harvested since the start of LHC Run 3 are presented.
Fragmentation functions (FFs) are typically parametrised exploiting measurements performed in $\mathrm{e^+e^-}$ and $\mathrm{e^-p}$ collisions, under the assumption of universality across collision systems. Measurements of charmed-hadron yields in pp collisions at the LHC have proved that the fragmentation of heavy quarks differs between hadronic and leptonic collisions.
In this talk, we present measurements of differential observables that allow for a closer connection to the charm FFs and put stronger constraints on the properties of hadronisation in hadronic collisions. We report the results of angular correlations between D mesons and charged particles in pp collisions, including the first studies with Run 3 data. The latter are also compared to the correlations of $\Lambda_{c}^{+}$ and charged particles in pp collisions. We also present the final measurement of the fraction of longitudinal momentum of jets carried by $\Lambda_{c}^{+}$ baryons in pp collisions at $\sqrt{s} = 13$ TeV.
Polarization and spin correlations have been explored very little for quarks other than the top. Utilizing the partial preservation of the quark's spin information in baryons in the jet produced by the quark, we examine possible analysis strategies for ATLAS and CMS to measure the quark polarization and spin correlations in $pp\to q\bar{q}$ processes. We find polarization measurements for the $b$ and $c$ quarks to be feasible, even with the currently available datasets. Spin correlation measurements for $b\bar{b}$ are possible using the CMS Run 2 parked data, while such measurements for $c\bar{c}$ will become possible with higher integrated luminosity. We also provide LO QCD predictions for the polarization and spin correlations expected in the $b\bar{b}$ and $c\bar{c}$ samples with the relevant cuts. These proposed measurements can provide new information on the polarization transfer from quarks to baryons and might even be sensitive to physics beyond the SM.
We discuss the production of $D$ mesons in $p$-He and $p$-Ne collisions at LHCb in the fixed-target mode. We explain how the LHCb data may put constraints on the intrinsic charm (IC) component of the nucleon. We show that there is a possible scenario in which the traditional components are insufficient to describe the LHCb data, especially for backward rapidities and large meson $p_{T}$. An IC component with probability $P_{\mathrm{IC}} \lesssim 1.0\%$ improves the description of the data. We also discuss the production asymmetry for $D$ and $\bar D$ mesons and whether it can be explained by a possible asymmetry between the intrinsic $c$ and $\bar c$ quarks. We also include a recombination mechanism that may generate an asymmetry for $D$ and $\bar D$. We show that the scenario with the recombination and IC components included simultaneously is not excluded by the LHCb data, and that our calculations for the asymmetry agree with the experiment.
The azimuthal correlation angle, $\Delta\phi$, between the scattered lepton and the leading jet in deep inelastic $ep$ scattering at HERA has been studied using HERA II data collected with the ZEUS detector. Differential cross sections, $d\sigma/d\Delta\phi$, are presented for the first time as a function of the azimuthal correlation angle in various ranges of the jet transverse momentum $p_\mathrm{T,jet}$, photon virtuality $Q^2$ and jet multiplicity. Perturbative calculations at $\mathcal{O}(\alpha_{s}^2)$ accuracy successfully describe the data within the defined fiducial region, while a lower level of agreement is observed near $\Delta\phi \rightarrow \pi$ for events with high jet multiplicity due to limitations of the perturbative approach in describing soft QCD phenomena. Monte Carlo predictions that supplement leading-order matrix elements with parton showering describe the data as well as the $\mathcal{O}(\alpha_{s}^2)$ calculations do.
The study of charmonium production in proton-proton collisions provides an excellent probe of QCD, as it involves both the perturbative and non-perturbative regimes. At the LHC, charmonia are produced via hadroproduction at the proton-proton collision vertex or in b-hadron decays. In both cases, they can also originate from an intermediate excited charmonium state, which must be understood in order to compare the measured charmonium production cross-sections to theory. The associated production of charmonium states provides another way to probe the quarkonium production mechanism. Associated quarkonium production is also considered an ideal way to probe the transverse-momentum-dependent parton distribution functions of gluons inside the proton, leading towards a more comprehensive knowledge of the proton structure. In this talk, the latest LHCb results on charmonium and associated charmonium production will be presented.
We investigate the challenges posed by non-global logarithms in analyzing the jet-mass observable in Z+jet production, employing jet-grooming techniques. Their presence remains evident even though jet-clustering effects tend to reduce their contribution. Our approach involves both an analytical fixed-order calculation, extending up to second order in the coupling, and an all-orders estimation of the invariant-mass distribution of the leading-$p_T$ jet after application of the trimming technique. We then perform a matching procedure between the resummed distribution and next-to-leading-order results obtained from MCFM, and compare our findings with those generated by the Monte Carlo event generator Pythia 8.
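The matching mentioned above can be sketched, schematically, as the usual additive combination of the resummed and fixed-order results, with the double-counted expansion subtracted (the precise prescription used in the analysis may differ in detail):

```latex
\frac{d\sigma^{\mathrm{matched}}}{dm}
\;=\;
\frac{d\sigma^{\mathrm{resum}}}{dm}
\;+\;
\frac{d\sigma^{\mathrm{NLO}}}{dm}
\;-\;
\left.\frac{d\sigma^{\mathrm{resum}}}{dm}\right|_{\mathrm{expanded\ to\ NLO}}.
```

This preserves the all-orders logarithmic structure at small jet mass while reproducing the fixed-order result away from the resummation region.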
Different families of gaseous detectors used in particle physics experiments are operated with gas mixtures containing greenhouse gases (GHGs), like C2H2F4, CF4, C4F10 and SF6. Given their high Global Warming Potential (GWP) and the increasingly stringent European regulations regarding the use and trade of these gases, different strategies have been implemented by the EP-DT Gas Team to reduce GHG emissions at the CERN LHC experiments.
The first strategy is based on the use of gas recirculation plants, which have allowed a reduction of GHG emissions of between 90% and almost 100% during the LHC runs.
The second approach is based on the separation and recuperation of the GHGs from the exhaust gas mixtures of the detectors. Nowadays, four GHG recovery systems, based on different separation techniques, are operational at the LHC experiments for different GHGs.
Finally, studies of new eco-friendly gas mixtures for long-term operation are ongoing.
The three strategies will be discussed in this contribution.
ATLAS RPC detectors have been operated with a gas mixture selected after extensive R&D work, consisting of 94.7% C2H2F4, 5.0% i-C4H10, and 0.3% SF6. This mixture has a high environmental impact, with a Global Warming Potential (GWP) of about 1400, so all possible measures to reduce its dispersion into the atmosphere should be put in place.
The contribution of RPC detectors to global warming has become more evident due to gas leakage issues experienced in ATLAS. Almost 4000 RPC chambers located in the ATLAS cavern have suffered damage due to the high sensitivity of the materials used for the gas inlets.
The proposed solutions and mitigations of the problem, ranging from the repair and prevention of detector leaks to the replacement of the current gas with environmentally friendly mixtures, will be presented. The measures already implemented and the ongoing studies with new mixtures will also be shown.
In High Energy Physics, Resistive Plate Chamber (RPC) detectors are typically operated in avalanche mode, making use of a high-performance gas mixture whose main component, Tetrafluoroethane (C2H2F4), is classified as a fluorinated greenhouse gas with high Global Warming Potential.
The RPC EcoGas@GIF++ Collaboration is pursuing an intensive R&D programme on new gas mixtures for RPC detectors to explore environmentally friendly alternatives complying with recent European regulations. During the last few years, the performance of RPCs with different layouts and read-out electronics has been studied with Tetrafluoropropene (C3H2F4)-CO2 based gas mixtures at the CERN Gamma Irradiation Facility. A long-term ageing test campaign was launched in 2022, and in 2023 all detector systems underwent evaluation by means of dedicated beam tests.
In this talk, preliminary results on these studies will be presented together with their future perspectives.
Saturated fluorocarbons (CnF(2n+2)) are chosen for their optical properties as Cherenkov radiators, with C4F10 and CF4 used in COMPASS and LHCb RICH1&2. Non-conductivity, non-flammability and radiation resistance also make them ideal coolants: C6F14 is used in all LHC experiments, while C3F8 evaporatively cools the ATLAS silicon tracker. These fluids, however, have high GWPs (>5000 times that of CO2).
While not yet industrialised over the full CnF2nO range, fluoro-ketones can offer similar performance at very low or zero GWP. The radiation tolerance and thermal performance of 3M NOVEC 649 (C6F12O) are sufficiently promising for CERN to choose it as a replacement for C6F14. Subject to optical testing, NOVEC 5110 (C5F10O) - blended with N2 and monitored in real time by sound-velocity gas analysis - could replace C4F10 and CF4 in RICH detectors. Lighter molecules (e.g. C2F4O, with thermodynamics similar to C2F6) would allow lower-temperature, zero-GWP operation compared to evaporative CO2 in Si trackers operated at high luminosity.
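The sound-velocity monitoring mentioned above exploits the fact that the speed of sound in an ideal gas depends on its mean molar mass and heat-capacity ratio, so a measured sound speed can be inverted to the blend fraction. A minimal Python sketch follows; the molar masses and heat-capacity ratios are illustrative assumptions (not measured values), and real mixtures deviate from ideal-gas behaviour.

```python
import math

# Ideal-gas sound speed: c = sqrt(gamma * R * T / M).
R = 8.314  # J/(mol K)

# Illustrative parameters (assumed, not measured): N2 and C5F10O (NOVEC 5110).
M_N2, GAMMA_N2 = 28.0e-3, 1.40
M_C5, GAMMA_C5 = 266.0e-3, 1.06   # rough gamma for a large polyatomic molecule

def mixture_sound_speed(x_c5, T=293.0):
    """Sound speed (m/s) of an x_c5 : (1 - x_c5) molar blend of C5F10O in N2."""
    M = x_c5 * M_C5 + (1 - x_c5) * M_N2
    # Mole-fraction-weighted Cp/R and Cv/R (ideal mixing assumption).
    cp = x_c5 * GAMMA_C5 / (GAMMA_C5 - 1) + (1 - x_c5) * GAMMA_N2 / (GAMMA_N2 - 1)
    cv = x_c5 / (GAMMA_C5 - 1) + (1 - x_c5) / (GAMMA_N2 - 1)
    return math.sqrt((cp / cv) * R * T / M)

def blend_fraction_from_speed(c_meas, T=293.0, tol=1e-9):
    """Invert the model by bisection: sound speed falls monotonically
    as the heavy-gas fraction increases."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mixture_sound_speed(mid, T) > c_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the heavy component both raises the molar mass and lowers the effective gamma, the speed-to-fraction mapping is monotonic and the inversion is well defined.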
Fluorinated fluids used for particle detection and detector cooling in the LHC experiments are the dominant contribution to CERN’s direct (scope 1) greenhouse gas emissions. For the first major upgrade of LHCb, installed during the Long Shutdown 2 of the LHC, significant efforts were made to identify and validate environmentally friendly alternatives to perfluorocarbon coolants (in particular C6F14). In this contribution, we present the results of the qualification studies for C6 perfluoroketone (Novec 649) and hydrofluoroether (Novec 7100) fluids, summarise the lessons learned during the construction, commissioning and operation of the LHCb SciFi and RICH detectors, and discuss short- and long-term strategies for the future use of fluorinated coolants.
With the ambition to maintain the competitiveness of European accelerator-based research infrastructures, the Horizon Europe project Innovate for Sustainable Accelerating Systems (iSAS) has been approved. With a total of 17 academic and industrial partners, the objective of iSAS is to develop, prototype and validate new impactful energy-saving technologies so that SRF accelerators use significantly less energy while providing the same, or even improved, performance. Aligned with the European accelerator R&D roadmap, the project focuses on three key technology areas connected to SRF cryomodules: the generation and efficient use of RF power, improved cryogenic efficiency for operating superconducting cavities, and optimal beam-energy recovery. The most promising and impactful technologies will be further developed to increase their TRL and facilitate their integration into cryomodules at existing research infrastructures and/or into the design of future accelerators.
In this work, we revisit the experimental constraints on multipolar dark matter with derivative couplings to the visible sector mediated by the Standard Model photon. The momentum-dependent interaction enables such particles to be captured efficiently within massive celestial bodies, boosted by their steep gravitational potential. This phenomenon makes compact celestial bodies efficient targets to probe this type of dark matter candidate. We demonstrate that a synergy of the updated direct-detection results from DarkSide-50 and LUX-ZEPLIN, together with IceCube bounds on high-energy solar neutrinos from dark matter capture, disfavours the viable parameter space of the dipolar dark matter scenario. For the anapole dark matter scenario, by contrast, a narrow window survives that lies within the reach of prospective heating signals due to the capture of dark matter in cold neutron stars.
Taking axion inflation as an example, we consider a scenario where the inflaton is coupled solely to a pure SU(3) Yang-Mills sector. In the low-energy phase of this sector, glueball states are formed. If non-renormalizable operators are considered, these glueballs may become unstable and reheat the standard model fields. Yet, for a certain parameter range, C-parity can protect part of the glueball species from decay and the C-odd glueballs can provide a viable dark matter candidate. We study the constraints related to dark matter stability and minimal reheating temperature of the standard model and conclude that this scenario is very predictive.
I will discuss cosmological domain walls whose tension red-shifts with the expansion of the Universe, so that the network eventually fades away completely. These melting domain walls emit gravitational waves with a low-frequency spectral shape corresponding to the spectral index γ=3 favoured by the recent NANOGrav 15-year data. This scenario involves a feebly coupled scalar field, which can serve as a promising dark matter candidate. This ultra-light dark matter has a mass below 0.01 neV, which is accessible through planned observations thanks to the superradiance of rotating black holes. This talk is based on the recent works arXiv:2104.13722, arXiv:2112.12608 and arXiv:2307.04582.
The initial densities of both the Dark Matter (DM) and the Standard Model (SM) particles may be produced via perturbative decay of the inflaton with different decay rates, creating an initial temperature ratio $\xi_i$ = T$_{DM,i}$/T$_{SM,i}$. This scenario implies inflaton-mediated scatterings between the DM and the SM, which can modify the temperature ratio even for a high inflaton mass. The effect of these scatterings is studied in a gauge-invariant model of inflaton interactions up to dimension 5 with all the SM particles, including the Higgs. It is observed that an initially lower (higher) DM temperature will rapidly increase (decrease), even with very small couplings to the inflaton. There is a sharp lower bound on the DM mass for satisfying the relic density, due to faster back-scatterings depleting DM into the SM. For low DM masses, the CMB constraints become stronger for $\xi_i$ < 1, probing values as small as $10^{-4}$. The BBN constraints become stronger for lower DM masses, probing $\xi_i$ as small as 0.1.
We study under which conditions a first-order phase transition in a composite dark sector can yield an observable stochastic gravitational-wave signal. To this end, we employ the Linear-Sigma model featuring Nf = 3, 4, 5 flavours and perform a Cornwall-Jackiw-Tomboulis computation, also accounting for the effects of the Polyakov loop. The model allows us to investigate the chiral phase transition in regimes that can mimic QCD-like theories, incorporating in addition composite dynamics associated with the confinement-deconfinement phase transition. A further benefit of this approach is that it allows us to study the limit in which the effective interactions are weak. We show that strong first-order phase transitions occur for weak effective couplings of the composite sector, leading to gravitational-wave signals potentially detectable at future experimental facilities.
Left-Right Models (LRMs) are among the most relevant extensions of the Standard Model (SM) of particle physics. They introduce a new gauge sector and can restore parity (P) or charge-conjugation (C) symmetry at high enough energies. These theories can be embedded in more fundamental ones with larger gauge groups. Consequently, the restoration of the C or P symmetries can be pushed towards energy scales higher than that of the Spontaneous Symmetry Breaking (SSB) of the LRM gauge group. We study three LRMs with different specific realizations of the scalar sector, without imposing any additional discrete symmetry on the theory. We present bounds on the masses of the new gauge bosons using data from LHC Run 2 and rare meson decays. We discuss the structure of the right-handed quark mixing matrix and the impact of the neutrino and scalar sectors. Naive collider bounds are alleviated, bringing New Physics effects in flavour observables closer to an observable level.
Many new physics models such as compositeness, extra dimensions, extended Higgs sectors, supersymmetry, and dark sectors are expected to manifest themselves in the final states with photons. This talk presents searches in CMS for new phenomena in the final states that include photons, focusing on the recent results obtained using the full Run-II data-set collected by the CMS Experiment at the LHC.
Although the LHC experiments have searched for and excluded many proposed new particles up to masses close to 1 TeV, there are many scenarios that are difficult to address at a hadron collider. This talk will review a number of these scenarios and present the expectations for searches at an electron-positron collider such as the International Linear Collider.
The Circular Electron Positron Collider (CEPC) is a large-scale collider facility that can serve as a factory of the Higgs, Z, and W bosons and is upgradable to run at the ttbar threshold. It also has tremendous potential to search for the direct production of new physics states, including Supersymmetry, Dark Matter and Dark Sectors, Long-Lived Particles, and more. This talk will summarise and highlight this new physics search potential at the CEPC.
High-Electric-Charge compact Objects (HECOs) appear in several theoretical particle physics models beyond the Standard Model, and are actively searched for in current colliders, such as the LHC. In such searches, mass bounds of these objects have been placed, using Drell-Yan and photon-fusion processes at tree level so far. However, such estimates are not reliable, given that, as a result of the large values of the electric charge of the HECO, perturbative QED calculations break down. We present a Dyson-Schwinger resummation scheme, which allows for a large gauge coupling and thus makes the computation of the pertinent HECO-production cross sections reliable, thus allowing us to extract improved mass bounds for such objects from ATLAS and MoEDAL searches.
The Run 3 data-taking conditions pose unprecedented challenges for the DAQ systems of the LHCb experiment at the LHC. Consequently, the LHCb collaboration is pioneering a fully software trigger to cope with the expected increase in event rate. The upgraded trigger has required advances in hardware architectures, expert systems and machine learning solutions. Among the latter, LHCb has explored the adoption of Lipschitz monotonic neural networks (NNs) to enact trigger decisions. Such architectures are appealing owing to their robustness under varying detector conditions and capacity to certify domain-specific inductive biases. This contribution presents the application of Lipschitz monotonic NNs within the LHCb Run 3 trigger. Emphasis is placed on the topological triggers, devoted to inclusively selecting b-hadron candidates, where Lipschitz NNs enable retention of the beauty candidates whilst enhancing sensitivity to feebly interacting BSM states produced within the LHCb acceptance.
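The monotonicity guarantee of Lipschitz-constrained networks can be illustrated with a minimal numpy sketch (toy random weights, not the LHCb trigger model): each layer's weights are rescaled so the network is 1-Lipschitz, and a residual term λ·Σx then forces the output to be non-decreasing in every input feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def lipschitz_normalize(W):
    """Rescale W so its infinity-norm (max absolute row sum) is at most 1,
    making the layer 1-Lipschitz with respect to the max norm."""
    norm = np.abs(W).sum(axis=1).max()
    return W / max(norm, 1.0)

# Toy 2-layer network with Lipschitz-constrained random weights.
W1 = lipschitz_normalize(rng.normal(size=(8, 4)))
W2 = lipschitz_normalize(rng.normal(size=(1, 8)))

def f(x, lam=1.0):
    """1-Lipschitz network g(x) plus a lam * sum(x) residual term.
    Since |g(x + d*e_i) - g(x)| <= d for any feature bump d > 0,
    the residual makes f non-decreasing in every feature for lam >= 1."""
    g = W2 @ np.maximum(W1 @ x, 0.0)   # ReLU is 1-Lipschitz
    return float(g[0] + lam * x.sum())
```

In the trigger context the constrained features would be, e.g., displacement or momentum variables where the physics dictates a monotonic response; the construction certifies that inductive bias by design rather than by training.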
Tree Tensor Networks (TTNs) are hierarchical tensor structures commonly used for representing many-body quantum systems, but they can also be applied to ML tasks such as classification or optimization. We study the implementation of TTNs in high-frequency real-time applications such as the online trigger systems of HEP experiments. The algorithmic nature of TTNs makes them easily deployable on FPGAs, which are naturally suitable for concurrent tasks like matrix multiplications. Moreover, the limited hardware resources can be optimally exploited by measuring quantum correlations and entanglement entropy, which can be used for the optimal pruning of the TTN models. We show different TTN classifier implementations on FPGA, performing inference on synthetic ML datasets for benchmarking. A projection of the resources needed for the hardware implementation of a classifier for HEP applications will also be provided by comparing how different degrees of parallelism affect physical resources and latency.
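To see why TTN inference maps well onto FPGA multiply-accumulate units, here is a minimal numpy sketch of a binary-tree TTN classifier over four features (random weights and an assumed small bond dimension, purely for illustration): classification reduces to a bottom-up cascade of small tensor contractions.

```python
import numpy as np

rng = np.random.default_rng(1)
BOND = 3          # bond dimension (kept small, as hardware resources favour)
N_CLASSES = 2

def feature_map(x):
    """Embed each scalar feature as a 2-component local vector."""
    x = np.asarray(x, dtype=float)
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

# Binary tree over 4 features: two bottom nodes and one top node.
bottom = [rng.normal(size=(BOND, 2, 2)) for _ in range(2)]
top = rng.normal(size=(N_CLASSES, BOND, BOND))

def ttn_classify(x):
    """Contract the tree bottom-up: each node performs a small
    tensor-vector-vector contraction, i.e. a handful of fixed-size
    multiply-accumulates that can run concurrently on an FPGA."""
    phi = feature_map(x)                              # shape (4, 2)
    mid = [np.einsum('bij,i,j->b', bottom[k], phi[2 * k], phi[2 * k + 1])
           for k in range(2)]
    scores = np.einsum('cab,a,b->c', top, mid[0], mid[1])
    return int(np.argmax(scores))
```

Pruning in this picture corresponds to reducing BOND on links that carry little correlation, directly shrinking the multiplier count per node.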
The LHCb detector generates vast amounts of data (5 TB/s), necessitating efficient algorithms to select data of interest and reduce the bandwidth to acceptable levels in real time. Deploying machine learning (ML) models for inference at all trigger stages is challenging, as the models need to fulfill strict throughput requirements.
To meet these throughput targets, optimized batched evaluation has been developed for both GPU and CPU architectures, used at the first and second trigger stages, respectively. Furthermore, inference is factorized from the training software, reducing the maintenance burden and the turnaround time of retraining models while allowing flexibility in the choice of training platform.
This talk provides an overview of the real-time ML inference framework integrated into the software of the LHCb experiment, covering training and testing pipelines, alongside throughput evaluations of typical ML models at both stages of the trigger.
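The throughput gain from batched evaluation can be sketched in a few lines of numpy (a toy two-layer model with random weights, not the LHCb framework): per-candidate inference is a series of small matrix-vector products, while batching turns the same arithmetic into two large matrix-matrix products that vectorised CPU libraries and GPUs execute far more efficiently.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-layer MLP weights, stand-ins for a trained trigger model.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def infer_one(x):
    """Per-candidate inference: two small matrix-vector products."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return float(W2 @ h + b2)

def infer_batched(X):
    """Batched inference: identical network applied to all candidates
    at once as two matrix-matrix products, which is what sustains
    high throughput on both CPU and GPU back-ends."""
    H = np.maximum(X @ W1.T + b1, 0.0)
    return (H @ W2.T + b2).ravel()
```

Both paths compute the same numbers; only the memory layout and kernel granularity differ, which is exactly the property that lets the same trained model be deployed at both trigger stages.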
Identification of hadronic jets originating from heavy-flavour quarks is extremely important to several physics analyses in High Energy Physics, such as studies of the properties of the top quark and the Higgs boson, and searches for new physics. Recent algorithms used in the CMS experiment were developed using state-of-the-art machine-learning techniques to distinguish jets emerging from the decay of heavy-flavour (charm and bottom) quarks from those arising from light-flavour (udsg) partons. Increasingly complex deep neural network architectures, such as graphs and transformers, have helped achieve unprecedented accuracies in jet tagging. New advances in tagging algorithms, along with new calibration methods using flavour-enriched selections of proton-proton collision events, allow us to estimate flavour-tagging performance with the CMS detector during early Run 3 of the LHC.
Flavour-tagging is a critical component of the ATLAS experiment physics programme. Existing flavour-tagging algorithms rely on several low-level taggers, which are a combination of physically informed algorithms and machine learning models. A novel approach presented here instead uses a single machine learning model based on reconstructed tracks, avoiding the need for low-level taggers based on secondary vertexing algorithms. This new approach reduces complexity and improves tagging performance. The model employs a transformer architecture to process information from a variable number of tracks and other objects in the jet in order to simultaneously predict the jet's flavour, the partitioning of tracks into vertices, and the physical origin of each track. The new approach significantly improves jet flavour identification performance compared to existing methods in both Monte Carlo simulation and collision data.
We propose a differentiable vertex fitting algorithm that can be used for secondary vertex fitting, and that can be seamlessly integrated into neural networks for jet flavour tagging. Vertex fitting is formulated as an optimization problem where gradients of the optimized solution vertex are defined through implicit differentiation and can be passed to upstream or downstream neural network components for network training. More broadly, this is an application of differentiable programming to integrate physics knowledge into neural network models in high energy physics. We demonstrate how differentiable secondary vertex fitting can be integrated into larger transformer-based models for flavour tagging and improve heavy flavour jet classification.
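For a purely quadratic fit, the implicit-differentiation step described above can be written in closed form. The numpy sketch below (a simplified model: straight-line tracks with no covariances, not the proposed algorithm itself) fits the vertex minimising the summed squared distances to the tracks, then obtains the Jacobian of the fitted vertex with respect to a track's reference point by differentiating the stationarity condition grad_v f(v*) = 0.

```python
import numpy as np

def proj(t):
    """Projector onto the plane perpendicular to the unit direction t."""
    t = t / np.linalg.norm(t)
    return np.eye(3) - np.outer(t, t)

def fit_vertex(points, dirs):
    """Least-squares vertex: argmin_v sum_i |P_i (v - p_i)|^2,
    which for straight tracks reduces to a single 3x3 linear solve."""
    A = sum(proj(t) for t in dirs)
    b = sum(proj(t) @ p for p, t in zip(points, dirs))
    return np.linalg.solve(A, b)

def dvertex_dpoint(points, dirs, i):
    """Jacobian dv*/dp_i via implicit differentiation of the stationarity
    condition grad_v f(v*) = 0, giving dv*/dp_i = A^{-1} P_i with
    A = sum_j P_j.  This is the gradient a network upstream of the
    fit would receive during training."""
    A = sum(proj(t) for t in dirs)
    return np.linalg.solve(A, proj(dirs[i]))
```

In a full tagger the same pattern applies with weighted residuals and an iterative solver; implicit differentiation means only the converged solution, not the solver iterations, enters the backward pass.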
Determination of the nature of dark matter is one of the most fundamental problems of particle physics and cosmology. This talk presents recent searches for dark matter particles in mono-X final states from the CMS experiment at the Large Hadron Collider. The results are based on proton-proton collisions recorded at sqrt(s) = 13 TeV with the CMS detector.
The Belle and Belle II experiments have collected samples of $e^+e^-$ collision data at centre-of-mass energies near the $\Upsilon(nS)$ resonances. These data have constrained kinematics and low multiplicity, which allow searches for dark-sector particles in the mass range from a few MeV to 10 GeV. Using a 426 fb$^{-1}$ sample collected by Belle II, we search for a light dark photon that could explain the ATOMKI anomaly and for a $Z^{\prime}$ boson that decays invisibly. Using a 711 fb$^{-1}$ sample collected by Belle, we search for $B\to h + \mathrm{invisible}$ decays, where $h$ is a $\pi$, $K$, $D$, $D_{s}$ or $p$, and for $B\to Ka$, where $a$ is an axion-like particle.
We present the most recent $BABAR$ searches for reactions that could simultaneously explain the presence of dark matter and the matter-antimatter asymmetry in the Universe. This scenario predicts exotic $B$-meson decays of the kind $B\to\psi_{D} {\cal B}$, where $\cal{B}$ is an ordinary-matter baryon (proton, $\Lambda$, or $\Lambda_c$) and $\psi_D$ is a dark-sector anti-baryon, with branching fractions accessible at the $B$ factories. The hadronic recoil method is applied, with one of the $B$ mesons from the $\Upsilon(4S)$ decay fully reconstructed while only one baryon is present on the signal $B$-meson side. The missing mass of the signal $B$ meson is taken as the mass of the dark particle $\psi_{D}$. Stringent upper limits on the decay branching fraction are derived for $\psi_D$ masses between 0.5 and 4.3 GeV/c$^2$. The results are based on the full data set of about 430 fb$^{-1}$ collected at the $\Upsilon(4S)$ resonance by the $BABAR$ detector at the PEP-II collider.
The natural scenario where dark matter originates from thermal contact with familiar matter in the early universe requires the DM mass to lie roughly between an MeV and 100 TeV. Considerable experimental attention has been given to exploring WIMPs in the upper end of this range, while the sub-GeV region is largely unexplored, even though a thermal origin for dark matter works in a predictive manner in this mass range as well. It is therefore an exploration priority. If there is such an interaction between light DM and ordinary matter, then there necessarily is a production mechanism in accelerator-based experiments. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment with unique sensitivity to sub-GeV light DM. This contribution will discuss the background-rejection capabilities and the projected sensitivities of the experiment after an accelerator upgrade from 4 to 8 GeV beam energy, at which most of LDMX's data will be collected.
One of the indirect detection methods for dark matter (DM) is based on the search for the products of DM annihilation or decay. These should appear as distortions in gamma-ray spectra and in rare Cosmic Ray (CR) components, such as antiprotons, positrons and antideuterons, on top of the standard astrophysical production. In particular, antiprotons in the Galaxy are mainly of secondary origin, produced by the scattering of cosmic proton and helium nuclei off the hydrogen and helium in the interstellar medium (ISM). In order to obtain a significant sensitivity to DM signals, accurate measurements of the $\bar{p}$ production cross section in $p$-$p$ and $p$-He collisions are crucial. In 2023 the AMBER experiment at CERN collected the first-ever data on $p$-He collisions at centre-of-mass energies from 10 to 21 GeV. Preliminary results will be shown for the first time in this talk. The 2024 AMBER programme with a proton beam on liquid hydrogen and deuterium targets will also be presented.
The presence of a non-baryonic Dark Matter (DM) component in the Universe is inferred from the observation of its gravitational interaction. If Dark Matter interacts weakly with the Standard Model (SM) it could be produced at the LHC. The ATLAS experiment has developed a broad search program for DM candidates, including resonance searches for the mediator which would couple DM to the SM, searches with large missing transverse momentum produced in association with other particles (light and heavy quarks, photons, Z and H bosons) called mono-X searches and searches where the Higgs boson provides a portal to Dark Matter, leading to invisible Higgs decays. The results of recent searches on 13 TeV pp data, their interplay and interpretation will be presented.
The Forward Physics Facility (FPF) is a proposed underground cavern that will greatly expand the LHC's physics potential in the HL-LHC era. The FPF will house several experiments, including FASER2, FASERnu2, Advanced SND, FORMOSA, and FLArE. These experiments will detect thousands of TeV-energy neutrinos per day, with far-reaching implications for detecting BSM physics in neutrinos, QCD studies, and astroparticle connections. In addition, the FPF will transform the LHC's potential to detect new, weakly-interacting particles. In this talk, we will review the physics motivations for the FPF, give a status update on the FPF and its experiments, and present the latest updates to the FPF's plans and timescale.
The CERN test beam lines and experimental areas serve over 200 test beams and experiments per year with more than 2000 users and are considered one of the most important facilities for detector R&D worldwide. Both the East and North Areas host several experimental areas and are in the process of extensive renovation to ensure the availability of test beams for the coming decades. We present the consolidation efforts and the available beams, including their versatility in momenta, spanning from a few hundred MeV/c to 400 GeV/c, and in particle species, extending over protons, mixed hadrons, electrons, muons, and several ion species within a wide range of intensities. Test beam users have access to beam instrumentation capable of particle identification, among other functionalities. With projects leading to intense detector R&D work on the horizon, such as FCC-ee, initial ideas to provide better electron test beams at CERN along with extensions of user space will also be reported.
We present the current status of the R&D performed for the ePIC dual-radiator RICH (dRICH) detector at the future Electron-Ion Collider (EIC). The dRICH will be equipped with silicon photomultipliers (SiPM), the first large-scale application of SiPM for single-photon detection in HEP. Special focus will be given to the beam test performed with the prototype SiPM optical readout, consisting of a total of 1280 3x3 mm² SiPM sensors and related electronics tested at CERN-PS in October 2023. The photodetector surface is modular and based on a novel prototype photodetection unit (PDU) which integrates 256 SiPM pixel sensors, cooling and TDC electronics in a volume of ~ 5x5x14 cm³. The data have been collected with a complete chain of front-end and readout electronics based on the ALCOR chip. This presentation will highlight the features and details of the PDU and the performance of the full dRICH SiPM prototype system that successfully recorded Cherenkov photon rings.
The dMu/DT collaboration plans to measure the $\mu$CF rate and sticking fraction at temperatures of ~10$^3$ K and pressures of ~10$^5$ bar using a diamond anvil cell with a D-T mixture. In parallel, physics processes related to the formation, transport, transfer, and other de-excitations of muonic atoms, as well as $\mu$CF and the reactivation of muons into the fusion cycle, are being modeled in GEANT4. Although atomic capture and muonic-atom formation exist in recent source distributions, they are not part of a standard physics list. The authors are working towards adding the above processes, along with modified EM processes acting on muonic atoms. These models are being validated against archival data from DD and DT $\mu$CF experiments and against theoretical calculations from different groups. The proposed package was presented at the Geant4 Hadronic Physics group meeting, and we are submitting it for inclusion in GEANT4 as part of the muonic-atom package in the 2024 release.
A major obstacle for detection of meV-scale rare events is demonstrating sufficiently low energy detection thresholds in order to detect recoils from light dark matter particles. We have developed a method of cryogenic optical beam steering that can be used to generate O(μs) pulses of small numbers of photons over the energy range of 0.1 - 5eV and deliver them to any location on the surface of a superconducting device with time and energy features comparable to expected signals. This allows for robust calibration of any photon-sensitive detector, enabling exploration of a variety of science targets including position sensitivity, phonon transport in materials, and the effect of quasiparticle poisoning on detector operation. In this talk, I will review the operating principles of optical beam steering, present current results from our pulse delivery system, and discuss the implementation of this technology for various novel sensor technologies such as HVeV detectors, MKIDs, and qubits.
Organic scintillators detect ionizing radiation and are crucial in particle and nuclear physics research. This study aims to enhance scintillator properties for next-generation experiments, focusing on Polyethylene Terephthalate (PET) and Polyethylene Naphthalate (PEN) as promising alternatives that emit blue light when exposed to radiation. We manufacture PET, PEN, and PET:PEN blend scintillator samples via injection molding and investigate the impact of dopants. Comparative analysis shows that PEN samples have a higher light response than PET, with specific dopants doubling PET's light yield. A positive correlation exists between the light response and the PEN proportion in PET:PEN blends.
Outreach and communication with the public is an integral part of our work as researchers. A wide range of activities and platforms allow ALICE members to share, especially with the young generation, the excitement of our field. ALICE Masterclasses for high-school students, both in-person and online, are expanding, reaching a higher number of students every year. Visits to the experiment site, especially to the underground installations when the LHC schedule allows, are very popular; the large demand also serves to motivate ALICE members to get involved as guides. The surface exhibition offers a glimpse of both the physics and the variety of detectors of ALICE. Virtual visits are also popular, and the growing use of social media platforms like Instagram brings the excitement of the physics of the quark-gluon plasma to new audiences of different ages and interests.
The Pierre Auger Collaboration has a long tradition of outreach that engages a wide range of people of all ages worldwide. In Malargue, Argentina, the heart of the Pierre Auger Observatory, the Visitor Centre offers a permanent interactive exhibition. Every November Collaboration meeting, we organize a Science Fair where Argentinian students from across the country can present their works and talk with the scientists at the site, motivating youngsters to pursue a career in Science. We also participate in the local parade, commemorating the foundation of Malargue. We have developed numerous activities and interactive tools, including a 3-D event display. We have an open data policy and share them according to FAIR principles. Recently, we joined the International Masterclasses within the International Particle Physics Outreach Group, using a framework similar to the ATLAS and CMS Collaborations. In this contribution, we summarize all of our Outreach activities.
LHAASO is the world's highest-altitude, largest-scale, and most sensitive cosmic-ray detection facility, and it has achieved a number of significant results.
Modern Physics (MP) is a popular-science magazine that focuses on popularizing and promoting modern physics and advanced scientific and technological developments.
The Campus Cosmic-ray Observation Collaboration (CCOC) is a non-profit collaboration of voluntary members, based on LHAASO and MP.
This report introduces cosmic-ray observation activities in middle schools, cosmic-ray observation training in summer schools, and participation in the International Cosmic Day (ICD). It also discusses issues related to the integrated development of popular-science journals, popular-science activities, and science education.
Through this exchange, we hope to share experiences of popular science and science education based on large scientific facilities, and to inspire more young people to love nature and science.
The Deep Underground Neutrino Experiment (DUNE) is embarking on an ambitious quest to unravel the mysteries surrounding neutrinos and their intricate interactions. At the heart of our collaboration is a dedication to not only probe the depths of neutrino physics but also to engage the public, students and policy makers with all aspects of the DUNE experiment. We are committed to communicating our physics goals, improving understanding of our groundbreaking detectors, and highlighting the efforts of the people behind these endeavors.
This talk will present the social media strategy adopted by DUNE. We'll explore the results of our efforts and discuss our approach to engage new audiences.
Today we can design infrastructure the size of a metropolis and space missions millions of miles away, thanks to Galileo's and Newton's classical mechanics and to Maxwell's electromagnetism, which together represent a solid edifice of undisputed cause-and-effect laws. However, at the beginning of the twentieth century, the exploration of the microscopic world provoked a crisis and shook this seemingly unwavering order, revealing that reality is unpredictable and, even today, counterintuitive.
Inaugurated in December at the Science Museum of Trento (MUSE), Italy, "Quantum. The revolution in a leap" is a 400-square-metre exhibition of interactive multimedia installations, texts, videos, animations, and historical and scientific objects. Designed for the general public, "Quantum" is based on a rigorous and experiential narrative focused on the ideas and intuitions that led scientists to build the theory that changed the paradigm we use to look at atoms, particles, and the entire universe.
Outreach to non-HEP audiences is becoming increasingly important and cannot be done only by the few trained professionals in our field. Indeed, the authenticity of good outreach relies on active physicists and engineers speaking to external audiences, including students, educators, the general public and the media. Although some people are interested in taking part, they are often apprehensive, fearing that talking to non-expert audiences effectively is very different from talking to peers. In this brief talk we give hints and tips that can form the basis of good public outreach, including clear messaging, anecdotes and stories, simplified plots and customizing the talk for the particular audience. It is crucial to note that many of these skills are also beneficial when talking to audiences within our field!
Measurements of jets in heavy-ion collisions provide detailed information about the dynamics of the hot, dense plasma formed in these collisions at the LHC. This talk gives an overview of the latest jet measurements with the ATLAS detector at the LHC, utilizing the high-statistics 5.02 TeV Pb+Pb and 8.16 TeV p+Pb data collected in 2015, 2016 and 2018. Multiple new results will be featured. Novel measurements of photon-plus-two-jet production and photon+jet+hadron correlations, which are sensitive to the parton-QGP interaction and medium-response effects, are presented. Additionally, we will present a new measurement of the centrality dependence of dijet production in proton-lead collisions at 8.16 TeV, offering a unique possibility to investigate initial-state effects in nuclear collisions.
The measurement of jets recoiling from a trigger hadron provides a unique probe of medium-induced modification of jet production. Jet deflection via multiple soft scatterings off the medium constituents may broaden the azimuthal correlation between the trigger hadron and the recoiling jets. The R-dependence of the recoil jet yield probes jet energy loss and intra-jet broadening. The hadron+jet results may also be sensitive to wake effects from jet-medium energy transfer at low $p_\mathrm{T}$.
This talk presents measurements of the semi-inclusive distribution of charged-particle jets recoiling from a trigger hadron in pp and Pb-Pb collisions. We observe a marked medium-induced jet yield enhancement at low $p_\mathrm{T}$ and at large azimuthal deviation from $\Delta\phi \sim \pi$ for large jet resolution parameter R. Comparisons to model calculations incorporating different formulations of in-medium jet scattering and medium response are also reported.
We use a parametric approach to analyze jet suppression measured via the nuclear modification factor of inclusive jets and of jets from gamma-jet events. With minimal model assumptions, we quantify the magnitude of the average energy loss, its $p_\mathrm{T}$ dependence, and its flavor dependence. Further, we quantify the impact of fluctuations in the energy loss and of nuclear PDFs on the measured jet suppression. Employing the Glauber model to infer information about the collision geometry, we quantify the path-length dependence of the average energy loss. Comparing the magnitude of the energy loss in 2.76 TeV and 5.02 TeV Pb+Pb collisions, together with Glauber modeling, enables extrapolation of the energy loss expected to be measured in upcoming O-O collisions. The work presented in this talk extends the modeling published in PLB 767 (2017) 10 and EPJC 76 (2016) 2, 50, and should help shed light on the basic properties of parton energy loss measured at the LHC.
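As a rough illustration of the parametric idea (a sketch with assumed values, not the analysis code behind this talk): for a steeply falling power-law spectrum $dN/dp_\mathrm{T} \propto p_\mathrm{T}^{-n}$, an average downward shift $S(p_\mathrm{T})$ of each jet's momentum translates into a suppression factor, which is how an $R_{AA}$ measurement constrains the magnitude and $p_\mathrm{T}$ dependence of the energy loss. The spectral index and loss parametrization below are assumptions:

```python
import numpy as np

# Illustrative sketch: for dN/dpT ~ pT^(-n), shifting each jet's pT down
# by an average energy loss S(pT) gives, for slowly varying S,
#   R_AA(pT) ~ (1 - S(pT)/pT)^(n-1)
# The index n and the parametrization S(pT) = s0*(pT/pt0)^alpha are assumed.

def raa_from_shift(pt, n=6.0, s0=8.0, alpha=0.5, pt0=100.0):
    """Approximate R_AA for an average loss S(pT) = s0 * (pT/pt0)**alpha."""
    shift = s0 * (pt / pt0) ** alpha
    return np.clip(1.0 - shift / pt, 0.0, None) ** (n - 1.0)

pts = np.array([50.0, 100.0, 200.0, 400.0])
print(raa_from_shift(pts))  # suppression weakens with pT for alpha < 1
```

With $\alpha < 1$ the fractional shift decreases with $p_\mathrm{T}$, so $R_{AA}$ rises towards unity at high momentum, the qualitative trend seen in LHC data.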
Based on a data-driven approach and a scaling analysis, we demonstrate that the quenching of hadron spectra at RHIC and the LHC allows for a precise determination of the path-length dependence of parton energy loss in the quark-gluon plasma. We find that the average energy loss is proportional to a power of the path length, $\langle \epsilon \rangle \propto L^\beta$ with $\beta=1.02^{+0.09}_{-0.06}$, consistent with the pQCD expectation for parton energy loss in a longitudinally expanding QGP. We also show that the azimuthal anisotropy coefficient divided by the collision eccentricity, $v_2/\mathrm{e}$, follows the same scaling property as the $p_\perp$ dependence of $R_{\mathrm{AA}}$. This scaling is observed in data, which are reproduced by the model at large $p_\perp$. Finally, a linear relationship between $v_2/\mathrm{e}$ and the derivative of $R_{\mathrm{AA}}$ is found and confirmed in data, offering an additional way to probe the $L$ dependence of parton energy loss using measurements from LHC Run 3.
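The quoted scaling extraction can be illustrated with a toy fit (synthetic data with an assumed normalization and scatter, not the actual RHIC/LHC spectra): if $\langle\epsilon\rangle \propto L^\beta$, then $\beta$ is simply the slope of a straight-line fit in log-log space:

```python
import numpy as np

# Toy sketch of the power-law scaling extraction (synthetic data only):
# if <eps> ~ L^beta, beta is the slope of log<eps> vs log L.
rng = np.random.default_rng(1)
beta_true = 1.02
L = np.linspace(2.0, 10.0, 20)                 # path lengths in fm (assumed)
eps = 1.5 * L ** beta_true * rng.normal(1.0, 0.02, L.size)  # 2% scatter

slope, intercept = np.polyfit(np.log(L), np.log(eps), 1)
print(f"fitted beta = {slope:.2f}")            # recovers a value near 1.02
```

In the actual analysis the inputs are the measured $R_{AA}$ and $v_2$ with Glauber-model path lengths, so the fit is considerably more involved; this only shows the log-log slope idea.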
In this talk we discuss the factorization of jet cross sections in heavy-ion collisions based on fixed-order calculations. First, using Glauber modelling of heavy nuclei, a factorized formula for jet cross sections is derived, which involves defining jet functions in a QCD medium. Then, we present our result for the jet function for producing a heavy quark-antiquark pair, denoted $Q\bar{Q}$, at leading order in a static medium. The jet function is found to depend on the virtuality of the hard parton that initiates the jet, showing that the presence of QCD matter allows the production of $Q\bar{Q}$ at virtualities where it is kinematically forbidden in vacuum jets.
Two-dimensional (2D) jet tomography is a promising tool to study jet medium modification in high-energy heavy-ion collisions. It combines gradient (transverse) and longitudinal jet tomography for selection of events with localized initial jet production positions. It exploits the transverse asymmetry and energy loss that depend, respectively, on the transverse gradient and jet path length inside the quark-gluon plasma (QGP). In this study, we employ the 2D jet tomography to study medium modification of the jet shape of γ-triggered jets and the effect of medium responses within the linear Boltzmann transport (LBT) model for jet propagation in heavy-ion collisions.
Several physics scenarios beyond the Standard Model predict the existence of new particles that can subsequently decay into a pair of Higgs bosons. This talk summarises ATLAS searches for resonant HH production with LHC Run 2 data. Several final states are considered, arising from various combinations of Higgs boson decays.
If, as so many believe, there are "BSM" Higgs bosons, three questions arise.
Their answers lie in a forgotten treasure by Eldad Gildener and Steven Weinberg (GW), Phys. Rev. D 13, 3333 (1976). GW assume a scale-invariant electroweak theory with multiple Higgs fields. The scale symmetry is spontaneously broken and H is its massless dilaton. In the one-loop approximation, scale symmetry is explicitly broken at a scale identified with the electroweak decay constant $v = 246\,{\rm GeV}$. Then H acquires the low mass $M_H = 125$ GeV, and a sum rule relates its mass to those of the BSM Higgses. Thus, the mass scale of the BSM Higgses is set by $v$ or, equivalently, $M_H$, and the BSM Higgses are all relatively light, of order 500 GeV in a two-Higgs-doublet model of the GW scheme.
Various extensions of the Standard Model predict the existence of additional Higgs bosons. If these additional Higgs bosons are sufficiently heavy, an important search channel is the di-top final state. In this channel, interference effects between the signal and the corresponding QCD background process are important. If more than one heavy scalar is present, then in addition to the signal-background interference associated with each Higgs boson, important signal-signal interference effects are also possible. We perform a model-independent analysis of the various interference contributions within a simplified-model framework with two heavy scalars that can mix with each other, taking into account large resonance-type effects arising from loop-level mixing between the scalars. The interference effects are studied with Monte Carlo simulations of di-top production at the LHC. We demonstrate that these searches can yield signatures that may be unexpected or difficult to interpret.
The properties of the Higgs boson (H) at current and future particle colliders are crucial for exploring new physics beyond the Standard Model. In particular, experimental and theoretical outlooks at future colliders drive interest in Higgs couplings to gauge bosons. Single Higgs production via vector-boson fusion allows probing the Higgs couplings to massive vector bosons (V = W, Z). We consider an electron-proton (ep) collider to study these couplings because of its low background. In a recent study, we considered the most general anomalous Higgs-vector-boson (HVV) couplings and explored the potential of an ep collider to constrain the parameters of HVV couplings. Those results were based on leading-order predictions in perturbation theory. Here we include next-to-leading-order (NLO) Quantum Chromodynamics (QCD) corrections to the Standard Model signal to make precise predictions. In this talk, I will present the effect of NLO QCD corrections on the Standard Model and anomalous HVV couplings.
With the current precision of measurements by the ATLAS and CMS experiments, it cannot be excluded that the SM-like Higgs boson is a CP-violating mixture of CP-even and CP-odd states. We explore this possibility here, assuming Higgs boson production in ZZ-fusion at the 1 TeV ILC with unpolarized beams. Full reconstruction of the SM background and fast reconstruction of the signal are performed, simulating 8 ab$^{-1}$ of data collected with the ILD detector. We demonstrate that the CP mixing angle $\Psi_{\mathrm{CP}}$ between scalar and pseudoscalar states can be measured with a statistical uncertainty of 4 mrad at 68% CL, corresponding to 1.6 $\cdot$ 10$^{-5}$ for the CP parameter $f_{CP}$. This is the first result on the sensitivity of an $e^+e^-$ collider to $f_{CP}$ in vector-boson fusion.
We study possible CP-violation effects in the Higgs to Z-boson coupling at a future $e^+e^-$ collider, e.g. the International Linear Collider (ILC). We find that the azimuthal angular distribution of the muon pair produced in $e^+e^- \to HZ \to H\mu^+\mu^-$ can be sensitive to such a CP-violation effect when transversely polarized initial beams are applied. Based on this angular distribution, we construct a CP-sensitive asymmetry and obtain it from Whizard simulation. By comparing the SM prediction with the 2$\sigma$ range of this asymmetry, we estimate the limit on the CP-odd coupling in the HZZ interaction, including as well studies with unpolarized and longitudinally polarized beams.
The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment searching for 0νββ decay that has successfully reached the one-tonne mass scale. The detector, located at the LNGS in Italy, consists of an array of 988 TeO2 crystals arranged in a compact cylindrical structure of 19 towers. CUORE began its first physics data run in 2017 at a base temperature of about 10 mK and has been collecting data continuously since 2019, reaching a TeO2 exposure of 2 tonne-year in spring 2023. This is the largest amount of data ever acquired with a solid state cryogenic detector, which allows for further improvement in the CUORE sensitivity to 0νββ decay in 130Te. In this talk, we will present the latest results of CUORE, based on the full available statistics and on new, significant enhancements of the data processing chain and high-level analysis.
The Karlsruhe Tritium Neutrino (KATRIN) experiment is probing the effective electron anti-neutrino mass by a precise measurement of the tritium beta-decay spectrum near its kinematic endpoint. Based on the first two measurement campaigns, a world-leading upper limit of 0.8 eV (90% CL) was set. New operational conditions with an improved signal-to-background ratio, reduced systematic uncertainties and a substantial increase in statistics allow us to extend this reach. In this talk, I will present the latest results of the KATRIN experiment, based on the first five measurement campaigns.
Neutrinos produced in an early stage of the Big Bang are believed to pervade the Universe.
The Ptolemy project is studying novel experimental techniques to observe these relic cosmological background neutrinos and, eventually, to study their flux and compare it with cosmological models.
This requires facing challenges in materials technology, such as tritium storage on nanostructures, and in radio-frequency radiation detection, combined in a novel type of electromagnetic spectrometer. The spectrometer will be used to observe the electrons emerging from a tritium target, which absorbs the relic neutrinos.
Ptolemy is entering the construction phase of the first complete high-precision measurement module, with the first physics goal of measuring the neutrino mass from the beta-decay endpoint.
The current status and outlook of the project are presented.
Neutrinos are regarded as a unique tool to reveal the interiors of astronomical objects. KamLAND, a 1 kt liquid-scintillator detector located in the Kamioka mine, detects electron antineutrinos through inverse beta decay. Thanks to its significant sensitivity in the few-MeV energy region, a supernova neutrino (SN$\nu$) search has been conducted. Neutrinos emitted a few hours before a supernova (pre-SN$\nu$) are also detectable, and we have developed a combined pre-SN$\nu$ alert system with the Super-Kamiokande group. In addition, we are developing a new background-reduction scheme using a neural network to suppress atmospheric-neutrino backgrounds in the supernova relic neutrino search. Beyond SN$\nu$, KamLAND is also sensitive to neutrinos from primordial black holes, one of the candidates for dark matter. In this presentation, we show the progress of searches for neutrinos from supernovae and primordial black holes.
The surface detector array of the Pierre Auger Observatory is sensitive to neutrinos of all flavors for primary neutrino energies above 0.1 EeV and zenith angles above $60^{\circ}$. During the 20 years of Auger operation, we have placed stringent limits on the existence of a diffuse flux of ultra-high-energy neutrinos and on neutrino fluxes from point-like steady sources, including neutrinos detected in coincidence with gravitational-wave events. We further severely constrain the secondary by-product fluxes expected from the decay of super-heavy dark-matter particles in the Galactic halo. Finally, we have analyzed monocular data from our Fluorescence Detector to search for upward-going tau-neutrino events consistent with the two "anomalous" radio pulses observed during ANITA flights I and III. In this talk, we review our neutrino searches and present our prospects for new neutrino triggers with AugerPrime, the major upgrade of the Pierre Auger Observatory.
SND@LHC is a stand-alone experiment to measure neutrinos produced at the LHC in an unexplored pseudorapidity region ($7.2 < \eta < 8.6$). It is located 480 m from IP1 in the TI18 tunnel. Its hybrid detector is composed of 800 kg of tungsten target plates, interleaved with emulsion and electronic trackers, followed by a calorimeter and a muon system. This allows the identification of all three neutrino flavours, opening a unique opportunity to probe heavy-flavour production at the LHC in a pseudorapidity region not accessible to ATLAS, CMS and LHCb. This region is of particular interest also for future circular colliders and for studies of very high-energy atmospheric neutrinos. The detector is also well suited to searches for Feebly Interacting Particles in scattering signatures. The experiment ran successfully during 2022 and 2023 and has published several results. This talk will focus on the experience gained from the first measurements and on the overall physics goals of SND@LHC.
Every bunch crossing at the LHC causes not just one proton-proton interaction but several, collectively called "pileup". With the increasing luminosity of the LHC, the number of pileup interactions per bunch crossing increases and will reach up to 200 during high-luminosity LHC operation. Removing pileup from an event is essential, because it affects not only the jet energy but also other event observables, for example the missing transverse energy, jet substructure, jet counting and lepton isolation. In addition, jets, as the experimental signature of energetic quarks and gluons, need to be calibrated to have the correct energy scale. A detailed understanding of both the energy scale and the transverse-momentum resolution of jets at CMS is of crucial importance for many physics analyses. In this talk we present recent developments in jet energy scale and resolution, substructure techniques and pileup mitigation.
Experimental uncertainties related to hadronic object reconstruction can limit the precision of physics analyses at the LHC, and so improvements in performance have the potential to broadly increase the impact of results. Recent refinements to reconstruction and calibration procedures for ATLAS jets and MET result in reduced uncertainties, improved pileup stability and other performance gains. In this contribution, highlights of these developments will be presented.
The electromagnetic calorimeter (ECAL) of the CMS experiment at the LHC is crucial for many physics analyses, from Higgs measurements to new physics searches. A precise calibration of the detector and its individual channels is essential to achieve the best possible resolution for electron and photon energy measurements, as well as for the measurement of the electromagnetic component of jets and the contribution to energy sums used to infer particles leaving no signal in the detectors. To ensure the stability of the energy response over time, a laser monitoring system measures radiation-induced changes in the detector and compensates for them in the reconstruction. In addition, each channel is calibrated with physics events. This talk will summarize the techniques used for ECAL energy and time calibration and present a new system developed to automatically execute the calibration workflows. The ECAL performance achieved in 2022 and 2023 will be discussed.
The High Luminosity upgrade of the CERN LHC (HL-LHC) will deliver unprecedented instantaneous and integrated luminosities to the detectors and an average of up to 200 simultaneous interactions per bunch crossing is expected. The CMS detector is undergoing an extensive Phase-2 upgrade program to prepare for these severe conditions and a major upgrade of the electromagnetic calorimeter (ECAL) is foreseen. While a new detector will be installed in the endcap regions, the ECAL barrel crystals and photodetectors are expected to sustain the new conditions. However, the entire readout and trigger electronics system will be replaced to cope with the challenging HL-LHC environment and increased trigger latency requirements. This talk will present the design and status of the individual components of the upgraded ECAL barrel detector, and the results of energy and time resolution measurements with a full readout chain prototype system in recent test beam campaigns.
The Compact Muon Solenoid (CMS) is one of the two multi-purpose experiments at the Large Hadron Collider (LHC) and has a broad physics program. Many aspects of this program depend on the ability to trigger, reconstruct, and identify events with final state electrons, positrons, and photons with high efficiency and excellent resolution.
In this talk we present the characteristics and performance of electron and photon reconstruction at CMS, both at trigger and offline levels, and the techniques used to identify these particles with high purity and to achieve the ultimate precision in energy measurements. Recent Run 3 developments will be presented, together with improvements foreseen in the near future.
The FAMU experiment (Fisica degli Atomi MUonici), led by INFN at the Rutherford Appleton Laboratory (UK), is designed to measure the hyperfine splitting of the muonic hydrogen ground state. This measurement, aiming to give accurate insight into the proton's magnetic structure, plays a key role in verifying the most accurate QED calculations and tests the interaction between proton and muon. A 55 MeV/c pulsed negative muon beam is produced by the ISIS synchrotron at the RIKEN-RAL muon facility. The beam is directed onto a gaseous hydrogen-oxygen target, into which a pulsed mid-infrared laser with a tunable wavelength around 6.8 μm is injected. The aim is to determine the laser wavelength stimulating the resonant spin-flip in μH atoms, which is a function of the proton Zemach radius. The experiment started data taking in 2023, and a new set of data is being taken in 2024-25. In this presentation, the status of the FAMU experiment, its performance and its future development are presented.
We will present the operational status of the LHC Run 3 milliQan detector, whose installation began last year and was completed during the 2023-24 YETS, and which is being commissioned at the time of submission. We will also show any available initial results from data obtained with Run 3 LHC collisions.
Charged lepton flavor violation (CLFV) is a compelling indicator of potential physics beyond the Standard Model, as it violates the conservation of lepton flavor. We utilize a model featuring an additional Z$^{\prime}$ gauge boson to conduct an extensive comparative analysis of CLFV searches at future lepton colliders, including a 240 GeV electron-positron collider and a TeV-scale muon collider. Employing fast Monte Carlo simulations and data analyses, we evaluate the detection prospects for Z$^{\prime}$-induced CLFV interactions, specifically targeting the $e\mu$, $e\tau$, and $\mu\tau$ couplings. The results are compared with existing and anticipated limits from low-energy experiments and from high-energy searches at the LHC. The sensitivity to the $\tau$-related CLFV coupling strengths can be significantly improved with respect to the current best constraints and projected constraints.
The tension of $B\to K^{(*)}\bar\ell\ell$ decays with the Standard Model (SM) can be attributed to a short-distance (SD) $b s\bar\ell\ell$ interaction.
We present two methods to disentangle this effect from long-distance (LD) dynamics. First, we compare the inclusive $b\to s\bar\ell\ell$ rate at high $q^2=m^2_{\ell\ell}\geq 15~\rm GeV^2$ with a determination based on data for the leading exclusive modes, finding a $\sim 2\sigma$ discrepancy. Second, we perform a data-driven analysis of the exclusive $B\to K^{(*)}\bar\ell\ell$ spectrum over the entire $q^2$ region. With a dispersive parametrization of the charmonium resonances, we extract the non-SM contribution to the Wilson coefficient $C_9$ for every bin in $q^2$. The result is compatible with the SD hypothesis and with the inclusive determination. Finally, aiming at better control over LD effects that mimic the $C_9$ contribution, we estimate the size of charm-rescattering processes in $B\to K\bar\ell\ell$.
The $B^0 \to K^{*0}\mu^+\mu^-$ decay is mediated by the rare flavour-changing neutral-current transition $b \to s\ell^+\ell^-$ and constitutes a sensitive probe of New Physics (NP), as such transitions are forbidden at tree level in the SM. Virtual NP contributions can therefore have a large impact, and previous LHCb measurements of this decay have shown interesting tensions with SM predictions at the level of $\sim 3\sigma$. The theoretical interpretation of the anomalies is complicated by uncertainties in non-local SM contributions, such as charm loops $b \to s c \bar{c}(\to \gamma)$, which could mimic NP effects. This talk discusses the results of a data-driven approach to constraining the size of the charm loops and other non-local contributions to the $B^0 \to K^{*0}\mu^+\mu^-$ amplitude, in the first measurement to parameterise the full dimuon invariant-mass spectrum. The results are obtained using an integrated luminosity of $8.4\;\mathrm{fb}^{-1}$ collected by the LHCb experiment.
Rare B-hadron decays mediated by $b\to s\ell^+\ell^-$ transitions provide a sensitive test of Lepton Flavour Universality (LFU), a symmetry of the Standard Model by which the coupling of the electroweak gauge bosons to leptons is flavour universal. Extensions of the SM do not necessarily preserve this symmetry and may give sizable contributions to these processes. Precise measurements of LFU ratios are, therefore, an extremely sensitive probe for New Physics, in particular given how clean the predictions for LFU observables are. Likewise, breaking of LFU can result in lepton-flavour-violating decays of the form $b\to s\ell\ell'$. This talk summarizes recent measurements of Lepton Flavour Universality at LHCb, as well as searches for lepton-flavour-violating decays.
Charged lepton flavour violation (cLFV) is a flavour-changing short-range interaction among charged leptons. Although allowed by neutrino oscillations, cLFV processes are highly suppressed in the Standard Model, hence below any experimental sensitivity. A search for cLFV therefore constitutes a clean probe of Beyond the Standard Model (BSM) physics. The LHCb collaboration has recently conducted searches involving different lepton-flavour couplings, setting stringent limits on the most relevant cLFV observables and bounds in the parameter space of many NP models. The world's most stringent limits have been set on the branching fractions of $b\to s\ell\ell'$ transitions, where leptons of different flavours are direct products of the decay of b-quark mesons.
We present recent CMS results on rare decays proceeding via FCNC transitions. The analyses are based on proton-proton collision data collected at $\sqrt{s}=13$ TeV.
At the LHCb experiment, we search for very rare decays of heavy hadrons containing b or c quarks. Most of these decays occur in the Standard Model (SM) through heavily suppressed Flavour-Changing Neutral Currents (FCNC), leading to decay rates expected to be as tiny as $10^{-9}$ or considerably below. These decays therefore offer various possibilities to search for deviations from SM predictions. We have exploited proton-proton collision data collected between 2011 and 2018 with the LHCb detector and have established some of the world's most stringent upper limits on many of these decay rates. In this talk we will report the results of recent searches and the prospects for 2024 data.
We present for the first time a revised study of charmonium production in nuclear ultra-peripheral collisions (UPCs) based on a rigorous Green's function formalism. This formalism allows us to properly incorporate the effects of color transparency, as well as the quantum coherence inherent in the higher-twist quark shadowing related to the $Q\bar Q$ Fock component of the photon. The significance of this effect gradually decreases towards forward and/or backward rapidities. In the LHC kinematic region we additionally incorporate, within the same formalism, the leading-twist gluon shadowing corrections related to higher multi-gluon photon fluctuations, which represent the dominant source of nuclear effects in the mid-rapidity region. Model predictions for the rapidity distributions $d\sigma/dy$ are in good agreement with available UPC data on coherent and incoherent charmonium production at RHIC and the LHC. They can also be tested by future measurements at the LHC, as well as at the EIC.
The exclusive photoproduction reactions $\gamma p \to J/\psi(1S)p$ and $\gamma p \to \psi(2S)p$ have been measured at an $ep$ centre-of-mass energy of 318 GeV with the ZEUS detector at HERA using an integrated luminosity of 373 pb$^{-1}$. The measurement was made in the kinematic range 30 < W < 180 GeV, $Q^2$ < 1 GeV$^2$ and |t| < 1 GeV$^2$, where W is the photon-proton centre-of-mass energy, $Q^2$ is the photon virtuality and t is the squared four-momentum transfer at the proton vertex. The decay channels used were $J/\psi(1S) \to \mu^+\mu^-$, $\psi(2S) \to \mu^+\mu^-$ and $\psi(2S) \to J/\psi(1S)\pi^+\pi^-$ with the subsequent decay $J/\psi(1S) \to \mu^+\mu^-$. The ratio of the production cross sections, $R = \sigma_{\psi(2S)}/\sigma_{J/\psi(1S)}$, has been measured as a function of W and |t| and compared to previous data in photoproduction and deep inelastic scattering and with predictions of QCD-inspired models of exclusive vector-meson production, which are in reasonable agreement with the data.
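For illustration, a cross-section ratio of this kind, with simple uncorrelated error propagation, can be sketched as follows (the input cross-section values below are hypothetical placeholders, not the ZEUS measurements):

```python
import math

# Toy sketch: R = sigma_psi(2S) / sigma_J/psi(1S) with uncorrelated errors,
#   dR/R = sqrt((ds2/s2)^2 + (ds1/s1)^2)
# All numerical inputs are assumed, purely for illustration.
def ratio_with_error(s_psi2s, ds_psi2s, s_jpsi, ds_jpsi):
    r = s_psi2s / s_jpsi
    dr = r * math.sqrt((ds_psi2s / s_psi2s) ** 2 + (ds_jpsi / s_jpsi) ** 2)
    return r, dr

r, dr = ratio_with_error(3.2, 0.4, 20.0, 1.0)   # nb, hypothetical values
print(f"R = {r:.3f} +/- {dr:.3f}")
```

In practice correlated systematic uncertainties (e.g. luminosity) cancel in such a ratio, which is one reason R is a robust observable.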
We investigate the exclusive photoproduction of $J/\psi$ mesons in ultraperipheral heavy-ion collisions in the color dipole approach. We use the color dipole formulation of Glauber-Gribov theory to calculate the diffractive amplitude on the nuclear target. We discuss the role of $c \bar c g$ Fock states, which can be understood in terms of the shadowing of the nuclear gluon distribution. We compare the results of our calculations to recent data on the photoproduction of $J/\psi$ by the ALICE, LHCb and CMS collaborations. In particular, the description of the $\gamma A \to J/\psi A$ cross section and the putative gluon shadowing ratio $R(x)$ is improved at small $x$ (high energies) after including the $c \bar c g$ state.
We present a study of both inclusive and diffractive neutrino-nucleus scattering in the framework of the QCD dipole model and the Color Glass Condensate effective field theory. This study fills a gap in the topic, as diffractive production in such processes is investigated for the first time. We show that although the effect of gluon saturation is small, some of its signatures could be seen in the ultra-high-energy regime. Our calculations provide predictions for neutrino measurements at facilities such as IceCube and its next generation, IceCube-Gen2.
In this study, we revisit the extraction of parton-to-$K^0_S$ fragmentation functions (FFs), focusing on both next-to-leading-order (NLO) and next-to-next-to-leading-order (NNLO) accuracy.
Our approach involves the analysis of single-inclusive electron-positron annihilation (SIA) data, marking the first incorporation of the most recent experimental data from BESIII. Employing the analytic derivative of a neural network, we fit the FFs within the framework of perturbative QCD, concurrently considering hadron-mass corrections. To comprehensively address experimental uncertainties, the Monte Carlo method is employed. The resulting estimates of $K^0_S$ production rates show close agreement with the data, providing robust descriptions well within their respective uncertainties.
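The Monte Carlo treatment of experimental uncertainties can be sketched as follows (toy data and a polynomial stand-in for the neural-network fit; all numbers are assumptions): each pseudo-data replica, smeared within the quoted uncertainties, is fitted independently, and the spread of the fitted curves provides the uncertainty band on the extracted FF.

```python
import numpy as np

# Minimal sketch of Monte Carlo error propagation in a fit. The data,
# uncertainties and functional form below are toy assumptions, not BESIII data.
rng = np.random.default_rng(0)
z = np.linspace(0.2, 0.8, 10)              # momentum fraction grid (toy)
true = 2.0 * (1 - z) ** 3                  # toy "fragmentation function"
sigma = 0.05 * np.ones_like(z)             # assumed point-by-point errors
data = true + rng.normal(0.0, sigma)

n_rep = 200
fits = []
for _ in range(n_rep):
    replica = data + rng.normal(0.0, sigma)    # pseudo-data replica
    coeffs = np.polyfit(z, replica, deg=3)     # stand-in for the NN fit
    fits.append(np.polyval(coeffs, z))
fits = np.array(fits)

central, band = fits.mean(axis=0), fits.std(axis=0)
print(central[0], band[0])                 # central value and its MC error
```

The replica spread directly encodes how the experimental errors propagate through the (generally nonlinear) fit, without assuming Gaussian errors on the fit parameters.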
Fragmentation Functions (FF) play a crucial role in the description of the hadronization process. We report the measurements of normalized differential cross sections of inclusive hadron production as a function of hadron momentum at six energy points with $q^2$ transfer from 5 to 13 GeV$^2$ at BESIII.
The causal view of hadron formation makes it possible to establish a simple quantization scheme describing the mass spectra of light hadrons. The resulting model predicts a multitude of observable effects. The talk contains a short introduction to the model, followed by a comparison of its predictions with recent LHC measurements.
With the ever-increasing requirement for sustainability in the modern age, it is crucial to understand the environmental impact of High Energy Physics (HEP) and related fields, especially considering the field's high resource consumption. This talk attempts to quantify the carbon footprints associated with four categories: Experiment, corresponding to the large infrastructure within HEP collaborations; Institute, accounting for the emissions from research institutes and universities; Computing, covering the resource consumption for data analysis and simulations; and Travel, accounting for business trips for conferences, workshops, and meetings. A self-evaluation survey was devised based on these studies, enabling colleagues to estimate their professional footprint. The "Know your footprint" campaign aims to raise awareness, identify the dominant contributors to the HEP-related footprint, and motivate the community to move towards more sustainable research practices.
Scientists are becoming more aware of the impact of their activities on the environment. They also want to base their analysis of the situation on measurements, leading to decisions that minimise their contribution to climate change and pollution. With this in mind, scientists in French labs started the Labos 1point5 collective in 2019 to survey ongoing efforts, study how research is carried out, think about changes to promote in our teaching and research approaches, and provide tools to measure our labs' greenhouse gas emissions in a standardised way. With GES 1point5, hundreds of labs are reporting their yearly emissions from buildings, electricity, duty travel, commuting and procurement. The latest addition includes emissions linked to the usage of large infrastructures such as CERN, large computing centres and astronomical observatories. The kit 1point5 and scenario 1point5 tools suggest measures and model their impact by 2030, aiming at a 55% reduction in GHG emissions.
As the world's largest particle physics research laboratory, CERN strives to deliver world-class scientific results and knowledge while embedding environmental responsibility and sustainability in its activities. This contribution will present CERN's approach to environmentally responsible research, outlining the present footprint of the Organization and the current projects aimed at minimising the laboratory's impact on the environment across its accelerators, experiments, and site and campus facilities. Insights into environmental objectives on the 2030 horizon and beyond will also be discussed.
The ISIS-II Neutron and Muon source is the proposed next generation of, and successor to, the ISIS Neutron and Muon Source based at the Rutherford Appleton Laboratory in the United Kingdom. Anticipated to start construction in 2032, the ISIS-II project presents a unique opportunity to incorporate environmental sustainability practices from its inception.
A (Simplified) Life Cycle Assessment (LCA) will examine the environmental impacts associated with each of the ISIS-II design options across the stages of the ISIS-II lifecycle, encompassing construction, operation, and eventual decommissioning. This proactive approach will assess all potential areas of environmental impact and seek to identify strategies for minimizing and mitigating negative impacts, wherever feasible. This talk will provide insights into the motivation, methodology, and first results of the environmental impact and LCA of the entirety of the ISIS-II project.
In this talk, we will discuss the studies presented in PRX ENERGY 2, 047001, where the carbon impact of the Cool Copper Collider (C$^3$), a proposed e$^{+}$e$^{-}$ linear collider operated at 250 and 550 GeV center-of-mass energy, is evaluated. We introduce several strategies to reduce the power needs for C$^3$ without modifications in the ultimate physics reach. We also propose a metric to compare the carbon costs of Higgs factories, balancing physics reach, energy needs, and carbon footprint for both construction and operation, and compare C$^3$ with other Higgs factory proposals – ILC, CLIC, FCC-ee and CEPC – within this framework. We conclude that the compact 8 km footprint and the possibility for cut-and-cover construction make C$^3$ a compelling option for a sustainable Higgs factory. More broadly, the developed methodology serves as a starting point for evaluating and minimizing the environmental impact of future colliders without compromising their physics potential.
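A figure of merit of the kind proposed in the talk, balancing carbon cost against physics reach, can be sketched as follows. The facility names and every number below are placeholders for illustration, not the values published in PRX ENERGY 2, 047001.

```python
# Hedged sketch of a sustainability figure of merit: total carbon cost
# (construction + operation) divided by a physics-reach proxy. All values
# are invented placeholders, not published Higgs-factory numbers.
facilities = {
    "collider_A": {"construction_ktCO2": 200.0, "operation_ktCO2": 150.0, "reach": 1.0},
    "collider_B": {"construction_ktCO2": 400.0, "operation_ktCO2": 100.0, "reach": 1.2},
}

def carbon_per_reach(f):
    """Carbon cost per unit of physics reach (lower is better)."""
    total = f["construction_ktCO2"] + f["operation_ktCO2"]
    return total / f["reach"]

# Rank facilities by the figure of merit.
ranking = sorted(facilities, key=lambda k: carbon_per_reach(facilities[k]))
```

With these made-up inputs, the cheaper-to-build facility wins despite its higher operating emissions; the point is only that the metric folds construction, operation, and reach into a single comparable number.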
The European Laboratory Directors Group (LDG), which coordinates the European programme of accelerator R&D, recently decided to establish a working group on the sustainability assessment of future accelerators. The working group's mandate is to develop guidelines and a minimum set of key indicators pertaining to the methodology and scope of the reporting of sustainability aspects for future HEP projects. A panel of 14 people has been endorsed by the LDG and is currently preparing an input document for the Update of the European Strategy for Particle Physics, due by spring 2025. The working group includes representatives of future accelerator projects and experts on the sustainability of large research infrastructures, involved in initiatives such as the CERN Sustainability Panel, IFAST, EAJADE, iSAS, the STFC Sustainability Task Force, and ESS work on Green Facilities. The talk is intended to summarize the current status and to gather feedback on the initiative from the HEP community.
Sustainability has become a priority in the design, planning and implementation of future accelerators; approaches to improved sustainability include overall system design, optimization of subsystems, and operational concepts. A direct quantification of the ecological footprint is currently performed only sporadically, with Life Cycle Assessments (LCA) emerging as a more comprehensive approach.
Two large electron-positron linear colliders are currently being studied as potential future Higgs factories: CLIC at CERN and the ILC in Japan. These projects are closely collaborating on methods to reduce the power consumption of accelerator components and systems, and on smart integration of future accelerator infrastructure with the surrounding site and society. In a recent joint study, an LCA of the construction of the tunnels, caverns and shafts of both accelerators was conducted. This contribution will present this and other current results, as well as future activities.
Gravitational waves can be produced by first-order cosmological phase transitions that occur early in the Universe. Exciting recent results, including a possible signal of a stochastic gravitational wave background from pulsar timing array experiments, mean that we have now entered an era where robust predictions of the gravitational wave spectra from first-order phase transitions are vital. Based on a recent invited review, Prog. Part. Nucl. Phys. 135 (2024) 104094, and several related works, I will discuss various subtle issues that can significantly impact the predicted gravitational wave spectra from first-order phase transitions, and review the current status of these predictions. In particular, I will discuss criteria for determining whether a phase transition completes, and the dependence of gravitational wave predictions on the transition temperature and on a variety of standard approximations.
We perform a phenomenological comparison of the gravitational wave (GW) spectrum expected from cosmic gauge string networks and superstring networks comprised of multiple string types. We show how violations of scaling behavior and the evolution of the number of relativistic degrees of freedom in the early Universe affect the GW spectrum. We derive simple analytical expressions for the GW spectrum from superstrings and gauge strings that are valid for all frequencies relevant to pulsar timing arrays (PTAs) and laser interferometers. We analyze the latest data from PTAs, and study correlations between GW signals at PTAs and laser interferometers.
We explore how quantum gravity effects, manifested through the breaking of the discrete symmetries responsible for both Dark Matter and Domain Walls, can have observational consequences in indirect Dark Matter detection and gravitational waves. To illustrate the idea we consider a simple model with two scalar fields, or one fermion field plus one scalar field, together with two $Z_2$ symmetries: one responsible for Dark Matter stability, the other spontaneously broken and responsible for Domain Walls, with both symmetries assumed to be explicitly broken by quantum gravity effects. We show that the recent gravitational wave spectrum observed by several pulsar timing array projects can help constrain such effects.
The 2016 discovery of gravitational waves by the LIGO-Virgo collaboration is a watershed moment in cosmology. Now, with the approval of the space-based LISA experiment, the hunt is on: a search for gravitational-wave remnants of the electroweak phase transition, to probe the Higgs potential and perhaps even explain the baryon asymmetry problem. Yet the theoretical hurdles are great: theoretical predictions can misjudge the peak of the gravitational-wave spectrum by as much as ten orders of magnitude. There is therefore a great push in the theoretical community to catch up with our experimental colleagues, so that theoretical predictions stand on solid ground as LISA looms.
In this talk I will illustrate the progress of using perturbation theory, together with Lattice simulations, to give robust predictions of the gravitational-wave spectrum from phase transitions. I will present the state-of-the-art of theoretical calculations and go into what challenges remain to be tackled.
It is often speculated that supermassive black holes (SMBHs) located at the centers of many galaxies can serve as possible sources of ultra-high-energy cosmic rays (UHECR). This is also supported by numerous observations of high-energy neutrinos and gamma-rays from the direction of blazars and other SMBH candidates. In this talk, I will present a novel scenario of particle acceleration involving electromagnetic extraction of rotational energy from the central black hole. I will show that for typical SMBH of a billion solar masses, the energy of an accelerated proton can reach ZeVs, while applied to the Galactic center SMBH, the proton energy reaches a few PeV, coinciding with the knee of the observed cosmic ray spectrum. I will also discuss the expected energy spectrum and particle composition of the presented model.
Liquid argon, widely used as an active target in neutrino and dark matter experiments, is a scintillator with a light yield of about 40 photons/keV, an attenuation length of the order of meters, and a scintillation peak at 128 nm. Adding small amounts of xenon (around 10 ppb) shifts this peak to 178 nm without spoiling the light yield; the longer wavelength simplifies the development of imaging systems. Precise knowledge of the optical properties of liquid argon in the VUV range is essential to improve the performance of liquid argon based experiments, and the refractive index in particular is a crucial parameter for imaging systems. LArRI (Liquid Argon Refractive Index) aims at a direct measurement of the liquid argon refractive index in the VUV using an interferometric technique that compares two interference patterns, created in vacuum and in liquid. Here we present our first results in liquid argon and liquid nitrogen, obtained with a mercury lamp emitting at 254 and 184 nm.
The Belle Collaboration recently measured the complete set of angular coefficient functions for the exclusive decays $\bar{B} \to D^*(\to D\pi)\,\ell \bar{\nu}_\ell$, where $\ell = e, \mu$, in four bins of the parameter $w = \frac{m_B^2 + m_{D^*}^2 - q^2}{2 m_B m_{D^*}}$, with $q$ the four-momentum of the lepton pair. In the SM these measurements are instrumental in determining the hadronic form factors governing the $B \to D^*$ matrix elements of the SM weak current, thereby refining the determination of $|V_{cb}|$. On the other hand, they can be used to assess the impact of possible new physics contributions. We extend the SM effective Hamiltonian that governs this mode by incorporating the complete set of Lorentz-invariant $d=6$ operators compatible with the gauge symmetry of the theory. The measured angular coefficient functions play a pivotal role in constraining the couplings within the generalized Hamiltonian.
Motivated by the remarkable Belle II experimental result on $B\to K\, E_{\rm miss}$, and by the phenomenological difficulties in accommodating it exclusively in terms of processes with SM neutrino final states, we systematically investigate the possibility that $E_{\rm miss}$ comes not only from SM neutrinos but also from other light undetected particles. We consider both single scalar or vector particle final states, as well as pairs of scalars, spin-$1/2$ or spin-$3/2$ fermions, and vectors, following the approach of Kamenik and Smith [JHEP 03 (2012) 090]. Since several of these possibilities significantly alter the phase space and kinematical distributions of events in the experiments, we consider not only the branching fractions ${\cal B}(B\to K^{(*)} E_{\rm miss})$ but also all available event distributions presented in the Belle II and BaBar analyses, and construct our own likelihood for different NP scenarios using the data from both processes.
The first search for $K_L^0 \rightarrow \pi^0 e^+e^-e^+e^-$ is performed with a dataset collected by the J-PARC KOTO experiment in 2021. In dark sector models, this final state can arise through $K_L^0 \rightarrow \pi^0 X X,\ X\rightarrow e^+e^-$. In the Standard Model, the branching ratio of this decay mode is predicted to be $\mathcal{O}(10^{-10})$, dominated by the virtual-photon vertex through $K_L^0\rightarrow \pi^0 \gamma^* \gamma^*$. With the data collected in 2021, we can achieve a sensitivity close to this level, which can improve the constraints on dark sector models. Special analysis methods are used to reject backgrounds related to the $\pi^0$ Dalitz decay. Results will be presented at the conference.
A resonant structure has been observed at ATOMKI in the invariant mass of electron-positron pairs, interpreted as the production of a hypothetical particle (X17).
The MEG II apparatus at PSI, designed to search for the μ+ → e+γ decay, can also perform the X17 search.
Protons from a CW accelerator, with energies up to 1 MeV, were delivered onto dedicated Li-based targets (with thicknesses up to several μm), and the 7Li(p,e+e−)8Be process was studied using the MEG II spectrometer. The aim is to reach a better invariant-mass resolution than previous experiments and to study the production of the X17 with a larger acceptance, and therefore to shed more light on the nature of this observation.
After a 2022 engineering run, a physics run took place in 2023, during which the first data sample was collected. We report on the status of this search with the MEG II apparatus, presenting the collected data, the analysis strategy, and the current results.
The recent ATOMKI experiments provided evidence pointing towards the existence of an X17 boson in anomalous nuclear transitions of Beryllium-8, Helium-4, and Carbon-12. The favored ranges for the X17 boson couplings to u and d quarks are determined through fits to these nuclear transitions. In this work, we consider X17 boson contributions to previously measured $D$ meson decays, including $D_s^{*+} \rightarrow D_s^+ e^+ e^-$ and $D^{*0} \rightarrow D^0 e^+ e^-$, as well as the measured decays $\psi(2S) \rightarrow \eta_c e^+ e^-$ and $\phi \rightarrow \eta e^+ e^-$. Using these data, we perform an independent fit of the couplings between the X17 boson and the various quark flavors. This fit requires a very large X17 coupling to the u quark, creating a serious tension with the ATOMKI data. Our analysis models X17 as a vector boson and allows its quark couplings to be generation dependent. The implications of our findings are discussed.
Many extensions of the standard model can give rise to tau leptons produced in non-conventional signatures in the detector. For example, certain long-lived particles can decay to produce taus that are displaced from the primary proton-proton interaction vertex. The standard tau reconstruction and identification techniques are suboptimal for displaced tau leptons, which require specialized approaches. Recent advances in machine learning (ML) have demonstrated the advantages of graph convolutional neural networks for tagging or identifying different kinds of jets. However, the application of such ML models for displaced tau signatures has not yet been extensively explored at CMS. This talk will present the use of such state-of-the-art ML techniques for identifying hadronically decaying displaced tau leptons.
The upcoming HL-LHC represents a steep increase in the average number of pp interactions and hence in the computing resources required for offline track reconstruction in the ATLAS Inner Tracker (ITk). Track pattern recognition algorithms based on Graph Neural Networks (GNNs) have been demonstrated to be a promising approach to these challenges. We present in this contribution a novel algorithm for track reconstruction in silicon detectors based on a number of deep learning techniques, including GNN architectures. Using simulated ttbar events on the latest ITk geometry, we demonstrate the performance of our algorithm and compare it to that of the tracking algorithm currently in use. In addition, the tracking performance on single charged particles is studied in detail. Finally, we discuss the algorithm's computational performance and optimisations that reduce computing costs, as well as our effort to integrate it into the ATLAS analysis software for full-chain testing and production.
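As a rough illustration of GNN-based track pattern recognition, the sketch below builds a graph from hits, runs one round of message passing, and scores edges. The hit coordinates are invented and the edge classifier is randomly initialised and untrained; a production pipeline uses learned graph construction and trained networks, and is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hits: (layer, r, phi, z) -- stand-ins for silicon-detector hits.
hits = np.array([
    [0, 30.0, 0.10, 5.0],
    [0, 30.0, 1.50, -8.0],
    [1, 60.0, 0.12, 9.0],
    [1, 60.0, 1.55, -15.0],
    [2, 90.0, 0.15, 13.0],
])

# Graph construction: connect hits on adjacent layers with a small phi
# difference (a crude stand-in for geometric or module-map edge building).
edges = [(i, j) for i in range(len(hits)) for j in range(len(hits))
         if hits[j, 0] == hits[i, 0] + 1 and abs(hits[j, 2] - hits[i, 2]) < 0.3]

# One round of message passing: each node averages its own and its
# incoming neighbours' (r, phi, z) features.
feat = hits[:, 1:].copy()
msg = feat.copy()
deg = np.ones(len(hits))
for i, j in edges:
    msg[j] += feat[i]
    deg[j] += 1
node_emb = msg / deg[:, None]

# Edge classifier: a tiny randomly initialised linear model with a sigmoid,
# scoring each edge (illustrative only; real models are trained on simulation).
W = rng.normal(size=6)
def score(i, j):
    x = np.concatenate([node_emb[i], node_emb[j]])
    return float(1.0 / (1.0 + np.exp(-(x @ W))))

edge_scores = {e: score(*e) for e in edges}
```

Track candidates would then be extracted by keeping high-score edges and walking the resulting connected components, which is where the linear scaling with hit count comes from.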
In this presentation we describe the performance obtained running machine learning models studied for the ATLAS Muon High Level Trigger. These models are designed for hit-position reconstruction and track pattern recognition in a tracking detector, and are deployed on different models of commercially available Xilinx FPGA cards: Alveo U50, Alveo U250, and Versal VCK5000. We compare the inference times obtained on a CPU, on a GPU, and on the FPGA cards. These tests use TensorFlow libraries, the TensorRT framework, and software frameworks for accelerating AI-based applications. The inference times and other performance benchmark parameters are compared to the needs of present and future experiments at the LHC.
Increases in instantaneous luminosity and detector granularity will increase the amount of data that has to be analyzed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. In this context, Graph Neural Networks have received a great deal of attention in the community for the reconstruction of charged particles, because their computational complexity scales linearly with the number of hits in the detector. We present a GNN reconstruction of LHCb's vertex detector and benchmark its computational performance on both GPU and CPU architectures. A unique aspect of our work is the integration into LHCb's fully GPU-based first-level trigger system, Allen, which runs at rates of up to 30 MHz in the ongoing Run 3. Our work is the first attempt to operate GNN charged-particle reconstruction in such a high-throughput environment using GPUs, and we discuss the pros and cons of the GNN and classical algorithms in a detailed like-for-like comparison.
Tracking charged particles in high-energy physics experiments is a computationally intensive task. With the advent of the High Luminosity LHC era, which is expected to significantly increase the number of proton-proton interactions per beam collision, the amount of data to be analysed will increase dramatically.
Traditional algorithms suffer from scaling problems. We are investigating the possibility of using machine learning techniques in combination with quantum computing.
In our research, we represent particle trajectories as a graph data structure and train a hybrid graph neural network with classical and quantum layers. We present recent results on the application of these methods, focusing on the computational aspects of code development in different programming frameworks such as Jax, Pennylane and IBM Qiskit.
We also provide insights into expected performance and explore the role of GPUs as computational accelerators in the simulation of quantum computing resources.
Tracking is one of the most crucial components of reconstruction in collider experiments. It is known for its high consumption of computing resources, and various investigations are ongoing to cope with this challenge. Track reconstruction can be cast as a quadratic unconstrained binary optimization (QUBO) problem. Recent progress with two complementary approaches will be presented: (1) the Quantum Approximate Optimization Algorithm (QAOA) implemented on Origin Quantum hardware, and (2) quantum-annealing-inspired algorithms, in particular simulated bifurcation algorithms. Both approaches show promising performance on track reconstruction, and the latter can handle significantly larger data at high speed, as much as four orders of magnitude faster than simulated annealing.
[1] H. Okawa, Springer Communications in Computer and Information Science, 2036 (2024) 272–283, arXiv:2310.10255.
[2] H. Okawa, Q.-G. Zeng, X.-Z. Tao, M.-H. Yung, arXiv:2402.14718 (2024).
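The QUBO formulation mentioned above can be sketched on a toy instance: one binary variable per candidate track segment (e.g. a hit triplet), with diagonal terms rewarding good segments and off-diagonal terms penalising segments that share hits. All coefficients below are invented for illustration, and brute-force minimisation stands in for QAOA or simulated bifurcation, which tackle the same objective on much larger instances.

```python
import itertools
import numpy as np

# Toy QUBO for track finding: one binary variable per candidate triplet.
# Values are illustrative placeholders, not derived from a real detector.
quality = np.array([-1.0, -0.9, -0.2])      # diagonal: lower = better triplet
conflict = {(0, 2): 2.0}                    # triplets 0 and 2 share a hit
compatible = {(0, 1): -0.5}                 # triplets 0 and 1 chain into a track

n = len(quality)
Q = np.diag(quality)
for (i, j), w in {**conflict, **compatible}.items():
    Q[i, j] = w

def energy(bits):
    """QUBO objective x^T Q x for a binary assignment."""
    x = np.asarray(bits)
    return float(x @ Q @ x)

# Exhaustive search over the 2^n assignments (feasible only for tiny n).
best = min(itertools.product([0, 1], repeat=n), key=energy)
```

Here the minimiser keeps the two compatible triplets and drops the one that conflicts with them, which is exactly the selection behaviour the quantum and annealing-inspired solvers reproduce at scale.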
The nature of dark matter is one of the most relevant open problems in both cosmology and particle physics. The NEWSdm experiment, located in the Gran Sasso underground laboratory in Italy, is based on a novel nuclear emulsion technology with nanometric resolution and new emulsion scanning microscopy that can detect recoil track lengths down to one hundred nanometers. It is therefore the most promising nanometric-resolution technique for disentangling a dark matter signal from the neutrino background through a directional approach. The experiment has carried out measurements of neutrons, and a run with an equatorial telescope is in progress. In this talk we discuss the status of the experiment and report the first analysis of data taken at Gran Sasso. We also discuss its sensitivity to boosted dark matter, achievable with a 10 kg emulsion module exposed for one year at the Gran Sasso surface laboratory.
The KNU Advanced Positronium Annihilation Experiment (KAPAE) has developed a phase II detector to search for invisible positronium decays, which could signal milli-charged particles, a mirror world, axions, a new light X-boson, or extra dimensions. The KAPAE phase II detector is optimized to detect the gamma rays emitted during positronium annihilation and to identify missing energy. It consists of a 5 × 5 array of BGO scintillation crystals of size 3 × 3 × 15 cm3, with a 16-channel 4 × 4 SiPM array on one side of each BGO, and a custom DAQ system for collecting scintillation signals. In this presentation, we present the fabrication process of the KAPAE phase II detector, including the surface treatment of the BGO scintillation crystals, the trigger system, data calibration techniques, and Geant4 Monte Carlo simulation results on detector optimization and the sensitivity to invisible decays.
The PICO collaboration employs bubble chambers filled with fluorocarbon fluids as targets in its active programme of direct-detection dark matter searches. The detectors are situated 2 km underground at SNOLAB in Canada. Exploiting their insensitivity to electron recoil backgrounds, these detectors exhibit exceptional capability in background rejection.
This talk will present the results from the PICO-60 detector, including studies of interesting and well-motivated dark matter models, such as inelastic dark matter and photon-mediated dark matter-nucleus interactions. Additionally, the status of the PICO-40L detector, currently operating at SNOLAB, will be presented. Lastly, the status of the forthcoming ton-scale detector, PICO-500, will be discussed.
Anti-nuclei heavier than anti-D are unlikely to be formed during cosmic-ray (CR) propagation, as confirmed by the PHENIX and ALICE collaborations; anti-He observations could therefore be related to Dark Matter interactions. Dedicated experiments must possess high charge-sign discrimination to observe anti-He, given the abundance of He in CRs. Detector effects, such as finite rigidity resolution and internal interactions, may lead to misidentifying matter as antimatter, producing a dominant background over rare signal candidates. In this work, we developed a Monte Carlo simulation to mimic the response of an AMS-02-like detector, identifying several phenomena that cause He to be misidentified as anti-He. We then implemented a fully connected neural network, trained over diverse sources of charge-sign confusion, to quantify the event reconstruction quality. This tool could reduce the He background in searches for anti-He in CRs, improving the current capability to search for heavy antimatter in space.
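The idea of a network scoring reconstruction quality against charge-sign confusion can be sketched minimally as below. The two features (imagined here as a track-fit chi-squared-like quantity and a rigidity-resolution estimate) and the synthetic labels are assumptions for illustration; the actual analysis trains a deeper fully connected network on simulated detector response.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for charge-sign-confusion tagging: each event has two
# hypothetical reconstruction-quality features. Label 1 = well-reconstructed
# charge sign. The labels are generated from a simple linear rule.
n = 400
X = rng.normal(size=(n, 2))
labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# A minimal one-layer network (logistic regression) trained by gradient
# descent; the real analysis uses a deeper fully connected network.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - labels) / n)       # cross-entropy gradient step
    b -= lr * float(np.mean(p - labels))

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = float(np.mean(pred == labels))
```

Cutting on the network output then suppresses events likely to suffer charge-sign flips, at some cost in signal efficiency.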
The NEWS-G collaboration searches for light dark matter candidates using a novel gaseous detector concept, the spherical proportional counter. Access to the mass range from 0.05 to 10 GeV is enabled by the combination of a low energy threshold, light gaseous targets (H, He, Ne), and highly radio-pure detector construction. First physics results using commissioning data from S140, a 140 cm diameter spherical proportional counter constructed at LSM from 4N copper with a 500 um electroplated inner layer, will be presented, along with new developments in read-out technologies using resistive materials and multi-anode read-out. The first physics campaign with the detector at SNOLAB was recently completed. The potential to reach sensitivity down to the neutrino floor in light dark matter searches with a next-generation, fully electroformed detector, DarkSPHERE, will also be presented.
We present a compact scintillating fiber timing detector developed for the Mu3e experiment. Mu3e is a novel experiment searching for the charged-lepton-flavor-violating neutrinoless muon decay mu -> eee. Mu3e is about to start taking data at PSI using the world's most intense continuous surface muon beam. The scintillating fiber detector is formed by staggering three layers of 250 um scintillating fibers. The fiber ribbons are coupled at both ends to multi-channel silicon photomultiplier arrays, which are read out with the MuTRiG ASIC developed especially for this experiment.
In particular, we will focus on the performance of this very thin (thickness < 0.2% of a radiation length) fiber detector in terms of the achieved timing resolution of ~250 ps, efficiency of ~97%, and spatial resolution of ~100 um, including the time calibration of the detector. We will also discuss the operation and performance of the MuTRiG ASIC used for reading out the ~3000 channels of the fiber detector.
In this report, we present the development of a new type of particle identification (PID) detector, the DIRC-like time-of-flight (DTOF) detector. The DTOF uses the arrival time of Cherenkov photons to achieve better PID performance than a classic TOF detector with the same time resolution. It features fast response, a wide momentum range for PID, a compact structure, and ease of operation and maintenance. We have developed a DTOF prototype as well as its readout electronics. The prototype has a fused-silica radiator of 0.56 m^2 and 672 MCP-PMT readout channels. We will describe the detector design, reconstruction algorithm, radiator production, MCP-PMT base circuit, and readout electronics of the prototype. We will also present the results of cosmic ray tests of the prototype, which show a 22 ps time resolution for MIPs, corresponding to a pi/K separation power better than 4 standard deviations for momenta up to 2 GeV/c, with a flight distance of 1.5 m and a collision time jitter of 40 ps.
Low-Gain Avalanche Detectors (LGADs) could be used as the time-of-flight detector for the Circular Electron Positron Collider (CEPC). As suggested by the CEPC board, time-of-flight capability is pressing for flavor physics at the CEPC, especially for K/p and K/pi separation at low momentum. Two designs based on LGAD technology have been studied for the CEPC. One is a pure ToF with 50 ps time resolution, to be realized with large-area LGADs. The other is a ToF with additional track information, expected to achieve both 50 ps time resolution and 10 um spatial resolution.
The ePIC detector is specifically designed to address the entire physics program at the Electron-Ion Collider (EIC). It consists of several sub-detectors, each tailored to address specific physics channels. One of the key sub-systems of ePIC is the dual-radiator Ring Imaging Cherenkov (dRICH) detector, a high-momentum particle-identification system located in the hadronic end-cap. For this purpose, silica aerogel has been chosen as a solid radiator. The optical and geometrical characteristics of the aerogel tiles play a critical role in enhancing the particle-identification performance, and intensive R&D efforts are currently underway to optimize these properties. Ongoing studies focus on defining and refining the aerogel tiles to ensure optimal performance. The measurement of the transmittance of several aerogel tiles with different refractive indices, including the setup and the measurement method, will be presented.
3D-granularity plastic scintillator detectors combine particle tracking, calorimetry, and sub-ns time resolution. Future detectors will aim at larger volumes and finer segmentation, making conventional manufacturing and assembly prohibitive. The 3DET collaboration is developing additive manufacturing of plastic scintillator, opening the door to large-scale production of 3D-segmented detectors. A monolithic geometry was produced, consisting of a 5x5x5 matrix of optically isolated scintillating voxels made of transparent polystyrene and white reflector, with orthogonal 1 mm diameter holes to accommodate wavelength-shifting fibers. We report on the manufacturing process of the prototype and its characterisation with cosmic rays and test beams at CERN. This work paves the way towards a new feasible, time- and cost-effective process for the production of future scintillator detectors, regardless of their size and geometrical complexity.
Axions in the mass range of tens to hundreds of micro-electron volts represent a promising dark matter candidate. Traditional cavity haloscopes represent a sensitive experimental configuration for probing low axion mass ranges, corresponding to frequencies up to a few GHz. However, the scaling of these cavity detectors to higher frequencies proves impractical due to limitations imposed by the axion Compton wavelength. Recently, a plasma haloscope concept inspired by metamaterials was proposed, offering a solution where the artificial plasma frequency can be tuned independently of the physical size of the detector. Building on this idea, the ALPHA Collaboration is developing resonator prototypes for axion searches in the frequency range of 10-45 GHz. In this presentation, we will describe various optimization strategies and parametric studies employed for the development of these resonator designs.
The physical sciences can struggle to engage teenagers, sometimes being perceived as complex or daunting. To combat this, Badminton School developed its ‘Science Outreach’ programme enabling students to deliver liquid nitrogen based shows in 25 schools each year and at national events such as WOMAD, the UK Big Bang and the Northern Ireland Science Festival, appearing on these events’ main programmes. Science Outreach encouraged practical work through participation in exciting demonstrations, kept students involved in science as they became more able to deal with the abstractions of Physics, and developed valuable communication skills. Meanwhile their primary school audiences were inspired by engaging presenters who were just the right age to be perfect role models.
This session describes how the programme developed, gives insight into how similar models can be adopted, and features Science Outreach graduates discussing the positive effect the programme had on their further education.
Since 2013, the University of Michigan has hosted semester research programs for undergraduates at CERN. Students are selected from a diverse mix of universities and embedded in active research programs at the laboratory. Mentors are selected for their leadership skills and their ability to educate and inspire the students. Projects include detector R&D, software development, trigger design and physics analysis on a variety of CERN experiments.
Funding for the students, which covers travel, per diem and a stipend, has come from the University, the Richard Lounsbery Foundation and the United States Mission to the International Organizations in Geneva. An emphasis has been placed on providing opportunities to women and students from other under-served communities and, since 2022, the program has supported placements for students from Ukraine. We present the growing success of the program and describe our current efforts to expand and improve its diverse reach.
The Czech Particle Physics Project (CPPP) is introduced. It consists of two types of modules: learning modules for masterclasses aimed at high-school students (aged 15 to 18), and modules providing educational aids and sources of expert information, such as web portals dedicated to Higgs boson research and searches for Supersymmetry. The modules are accessible at http://cern.ch/cppp. The modular structure of the CPPP allows new modules covering further educational aspects to be added.
Dealing with modern physics, particularly in a cutting-edge laboratory with hands-on experiments, serves as a valuable tool for inspiring high school students to pursue STEM careers. In this setting, students can immerse themselves in and become part of the research environment, interact with researchers, become aware of the numerous applications of research in everyday life, and conduct experimental activities firsthand. In this spirit, since 2011, the INFN Frascati National Laboratory has been organizing INSPYRE, the International School on Modern Physics and Research, aimed at high school students from all over the world. In this talk we will introduce the 2024 edition of INSPYRE, drawing upon the collective experience and expertise of researchers from INFN and other institutes. Collaboration with key members of the educational and outreach communities at CERN and GIREP also made the programme richer and more innovative, and fostered a constructive dialogue among the communities involved.
The OCRA (Outreach Cosmic Ray Activities) INFN programme involves 24 divisions all over Italy and offers activities for both students and teachers: participation in the International Cosmic Day - ICD organised by DESY, science camps for students, and training courses for teachers, structured as three-day events organised in the framework of the PNRR Cherenkov Telescope Array Plus project. In addition to the national efforts, local initiatives of the OCRA member groups enrich the variety of activities: workshops, participation in science festivals, and the development of new detectors for outreach activities offer a variety of opportunities for students and teachers to interact with researchers and explore the world of cosmic rays.
We will give an overview of the OCRA activities at large, with a special focus on the course “Discovering cosmic rays” at the Gran Sasso National Laboratories - INFN for in-service high school physics teachers.
We invite you to this discussion session, hosted jointly by the conveners of the EDI and Sustainability tracks, which will focus on topics raised by ICHEP participants. If you would like to propose a topic for discussion, please complete this form https://tinyurl.com/sustainEDI (linked below) as soon as possible, or if you have any questions, please contact the session conveners at ichep2024-pgm-16-edi@cern.ch and ichep2024-pgm-18-sustainability@cern.ch. Please note that the form can be submitted anonymously.
In this work, we propose a Green’s basis and a new physical basis for dimension-seven (dim-7) operators, which are suitable for the matching of ultraviolet models onto the Standard Model effective field theory (SMEFT) and the derivation of renormalization group equations (RGEs) for the SMEFT dim-7 operators. The reduction relations to convert operators in the Green’s basis to those in the physical basis are derived as well, where some redundant dim-6 operators in the Green’s basis are involved if the dim-5 operator exists. Working in these two bases, we work out for the first time the one-loop RGEs resulting from the mixing among operators of different dimensions for the SMEFT dim-7 operators, and revisit those from the mixing among dim-7 operators. These results complete the full RGEs of dim-7 operators and can be used for a consistent one-loop analysis of the SMEFT. Some applications of these results to lepton- and baryon-number-violating processes are also discussed.
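As a schematic illustration only (the operator labels and anomalous-dimension symbols below are generic placeholders, not the paper's notation), the cross-dimensional mixing described above enters the one-loop running in the form:

```latex
% Schematic one-loop RGE for a dim-7 Wilson coefficient C_i^{(7)}:
% the first term is the mixing among dim-7 operators; the second
% illustrates how products of lower-dimensional coefficients
% (dim-5 with dim-6) can feed into the running of dim-7 operators.
\mu \frac{\mathrm{d} C_i^{(7)}}{\mathrm{d}\mu}
  = \frac{1}{16\pi^{2}} \Big( \gamma^{(7)}_{ij}\, C_j^{(7)}
  + \hat{\gamma}_{i,ab}\, C_a^{(5)} C_b^{(6)} \Big)
```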
We present a general family of effective $SU(2)$ models with an adjoint scalar. We construct the Bogomol'nyi-Prasad-Sommerfield (BPS) limit and derive monopole solutions in analytic form. In contrast to the 't Hooft-Polyakov monopole, included here as a special case, these solutions tend to exhibit more complex energy density profiles. Typically, we obtain monopoles with a hollow cavity at their core where virtually no energy is concentrated; instead, most of the monopole's energy is stored in a spherical shell (in some cases with several "sub-shells") around its core. Most interestingly, however, we show that some of the analytic monopole solutions contain an unconstrained constant of integration that controls the monopole's energy density profile, while keeping its total energy (i.e., the mass) constant. Thus, it can be interpreted as an internal degree of freedom or as a new moduli space parameter.
The theory of an independent Higgs field is given by an $\textrm{O}(N)$ model with an $N$-component scalar $\vec{\phi}$ and a quartic $\lambda(\vec{\phi}\cdot\vec{\phi})^2$ potential when $N=4$. The phase structure of the theory can be studied analytically for all values of the coupling $\lambda$ using the large-$N$ limit, both at zero and finite temperature. However, authors in the 70s and 80s argued the theory at large $N$ was "sick" and "futile", and dismissed the theory. This was based on two points: (1) a failure to identify the stable phases and vacuum of the theory and (2) the issue of a negative bare coupling $\lambda<0$ in the UV. We demonstrate that the theory is not, in fact, "sick". Issue (2) is dealt with through the modern understanding of $PT$-symmetric non-Hermitian theories with "wrong-sign" couplings. Issue (1) is resolved by realizing that the true vacuum has no spontaneous symmetry breaking (SSB) and that the SSB phase only becomes preferred at high temperatures.
The electroweak hierarchy problem and the naturalness framework have been a
driving theme for model building beyond the Standard Model of particle physics.
In the case of the Higgs boson, the problem lies in the difficulty of
producing a model where the Higgs mass is insensitive to parameters in the
ultraviolet (UV) completed theory. With time, more traditional solutions to
the hierarchy problem, like supersymmetry or extra dimensions, have given way
to more daring approaches. We describe a model where
the one-loop corrections to the mass of a scalar field cancel exactly, making its UV sensitivity less severe. This is achieved by introducing a symmetry that generalizes SU(N) by adding fermionic generators. The price to pay, however, is the introduction of degrees of freedom with the wrong spin-statistics, with severe implications for unitarity.
The ultimate dream of unification models consists in combining both gauge and Yukawa couplings into one unified coupling. This is achieved by using a supersymmetric exceptional E6 gauge symmetry together with asymptotic unification in compact five-dimensional space-time. The ultraviolet fixed point requires exactly three fermion generations: one in the bulk, and the two light ones localised on the SO(10) boundary in order to cancel gauge anomalies. A second option preserves baryon number and allows the compactification scale to be lowered down to the typical scales of the intermediate Pati-Salam gauge theory.
Multi-gluon exchanges between quark loops constitute a contribution to the confining force inside hadronic states at vanishing transferred momenta and finite strong coupling. In this talk, we present calculations of QCD corrections to the Coulomb potential in the configuration of multi-gluon exchanges between two quark loops (four-quark scattering amplitude) in the limit of vanishing transferred momenta. These contributions are shown to induce an attractive interaction, possibly measurable as a macroscopic force. We summarize the primary experimentally observable consequences of this attractive interaction and concentrate on the next steps that should shed more light on its magnitude and possible presence at macroscopic scales. The talk is largely based on arXiv:2212.11667, which was recently finalized and submitted to EPJ.
From black hole quasinormal frequencies (QNFs), we can extract characterising information about their perturbed source. In the case of charged black holes, we can interrogate the extendability of the metric past the Cauchy horizon as well as the role of superradiance in black hole evolution. Here, we examine the QNF spectrum corresponding to a massive scalar test field carrying an electric charge, oscillating in the outer region of a Reissner-Nordström-de Sitter black hole. Our analysis provides insight into QNF behaviour, particularly in regions approaching the extremal conditions of the black hole. The implications of our findings extend from safeguarding the principles of cosmic censorship to addressing the structural stability of the black hole's interior. Our semi-classical analysis suggests that Strong Cosmic Censorship may be violated for black holes that are in close proximity to extremality within the context of Reissner-Nordström-de Sitter geometries.
Quarkonium production in ultrarelativistic heavy-ion collisions is one of the best probes of the QGP formed in these collisions. Resorting to accurate methods to describe the $Q\bar Q$ evolution in a QGP is a prerequisite for the precise interpretation of experimental data. Following [1], we present exact numerical solutions in a 1D setting of quantum master equations (QME) derived in [2]. Distinctive features of the $Q\bar Q$ evolution with the QME are presented; phenomenological consequences are addressed by considering evolutions in EPOS4 temperature profiles. Next, we investigate the accuracy of the semiclassical approximation (often used to describe charmonium production in URHIC, e.g. [1,3]) by benchmarking the corresponding evolutions against the exact solutions derived with the QME for the case of a $c\bar c$ pair.
References:
1. S. Delorme et al., arXiv:2402.04488
2. J.-P. Blaizot and M.A. Escobedo, JHEP 06 (2018) 034
3. D.Y. Arrebato Villar et al., Phys. Rev. C 107 (2023) 054913
Quarkonium is an ideal probe to explore the properties of QCD. Unlike at the Large Hadron Collider (LHC), quarkonium production at the Relativistic Heavy Ion Collider (RHIC) involves different production mechanisms, can access a different kinematic phase space and may experience different medium densities/temperatures. The PHENIX experiment has collected a large $J/\psi \to \mu^{+}\mu^{-}$ data set within its unique pseudorapidity region of $1.2<|\eta|<2.2$ in $p+p$, $p+A$ and $A+A$ collisions at $\sqrt{s}$ = 200 GeV. We will present the latest results on 1) event-multiplicity-dependent forward $J/\psi$ and $\psi(2S)$ production in $p+p$ collisions; and 2) forward $J/\psi$ flow in Au+Au collisions. Comparisons with other RHIC and LHC measurements and the latest theoretical calculations will be discussed. These PHENIX results will help improve the understanding of charmonium production mechanisms, the role of multi-parton interactions and charmonium regeneration in the QGP.
Measurements of top quarks in heavy-ion collisions are expected to provide novel probes of nuclear modifications to parton distribution functions as well as to bring unique information about the evolution of strongly interacting matter. We report the observation of top-quark pair production in proton-lead collisions at the centre-of-mass energy of 8.16 TeV in the ATLAS experiment at the LHC. Top-quark pair production is measured in the lepton+jets and the dilepton channels, with a significance well above 5 standard deviations in each channel separately. The results from the measurement of the nuclear modification factor $R_{pA}$ are also presented. If available, results from the measurement of top-quark production in Pb+Pb collisions will be presented and discussed, complemented by an overview of the most recent quarkonia measurements with ATLAS.
Charmonium is a valuable tool to investigate nuclear matter under extreme conditions, particularly in the strongly interacting medium formed during heavy-ion collisions. At LHC energies, the regeneration process has been found to significantly impact the observed charmonium characteristics. In particular, the ψ(2S) production relative to J/ψ is a physical observable with strong discriminating power between the possible regeneration scenarios in Pb–Pb collisions. Additionally, the study of quarkonium production in proton–proton collisions represents a reference for interpreting results obtained in Pb–Pb collisions and is a key measurement to distinguish among the quarkonium production models in pp and p–Pb systems. In this contribution, preliminary findings on the ψ(2S)-to-J/ψ double ratio and the inclusive J/ψ yield in pp collisions at a center-of-mass energy of √s = 13 TeV measured by the ALICE Collaboration will be presented and compared with existing model calculations.
Open heavy flavor and quarkonium have long been identified as ideal probes for understanding the quark-gluon plasma (QGP). Heavy quarks are produced in the early stage of heavy-ion collisions, therefore they experience the evolution of the medium produced, providing an important tool to investigate the properties of the QGP. In particular, the magnitude of the elliptic flow measured at the LHC is interpreted as a signature of charm-quark thermalization in the QGP. This is reflected in the azimuthal anisotropies of the final particles, such as the elliptic and triangular flows. Interestingly, the observation of collective-like effects in high-multiplicity pp and p-Pb collisions provides new insights into the evolution of QGP-related observables going from large to small collision systems. A better understanding of heavy-quark energy loss, quarkonium dissociation, and production mechanisms can therefore be obtained with these system-size-dependent observables. In this talk, I will present recent J/ψ and open heavy-flavor flow measurements in pp, p-Pb, and Pb-Pb collisions carried out by the ALICE collaboration.
Using an event-by-event Boltzmann transport approach with a hadronization via coalescence plus fragmentation, we investigate charm dynamics and the extension to bottom (b) quark dynamics, providing predictions for $R_{AA}$ and $v_{2,3}$ of B mesons compared to the data by the ALICE collaboration. A sizeable $v_{2,3}$ is found, with important implications for bottomonium $\Upsilon$ production. The extension to b quarks allows us to investigate the mass dependence of $D_s(T)$ towards the infinite-mass limit assumed in lQCD. We find a significant breaking of the scaling of the thermalization time $\tau_{th}$ with $M_Q/T$, entailing a $D_s$ for $M \to \infty$ in agreement with the recent lQCD data with dynamical quarks. Furthermore, we extend our QPM approach to a more realistic model in which the partonic propagators explicitly depend on the quark momentum (QPMp). The QPMp improves the description of lQCD quark susceptibilities and entails a $D_s$ with a stronger non-perturbative behaviour near $T_c$, which leads to a better agreement with the recent lQCD data.
Our understanding of neutrinos faces limitations from neutrino-nucleus interaction uncertainties. Constraining the uncertainties has proven challenging given the absence of a complete model. To bypass most uncertainties, a DUNE physics program named PRISM employs a data-driven approach to measure neutrino oscillations. It involves the near detector (ND) moving off the neutrino beam axis to sample various neutrino energy spectra which are then linearly combined to predict the far detector oscillated spectrum. However, interaction uncertainties still affect the oscillation sensitivity primarily through the Monte Carlo based ND efficiency correction where interaction systematics introduce large variations in the predicted spectrum. We have developed a new data-driven geometric efficiency correction technique that further eliminates the interaction model dependence. In this talk, I will present this data-driven technique and its demonstrated performance using on-axis ND and FD data.
The MicroBooNE Liquid Argon Time Projection Chamber (LArTPC) experiment was exposed to Fermilab's neutrino beamlines from 2015 to 2021. The experiment has established a rich physics program. MicroBooNE records and utilizes both the ionization charge and scintillation light produced inside the TPC to select and reconstruct its events. Crucial to the experiment's physics program is a detailed understanding of the detector's performance over time. This talk will summarize the experiment's state-of-the-art measurements of detector physics quantities such as electron lifetime, diffusion, and scintillation light yield; and describe what MicroBooNE has learned, developed and measured throughout its running. In addition, MicroBooNE has developed and demonstrated novel capabilities in sub-MeV reconstruction of wire signals and light-based nanosecond interaction timing resolution, both of which are foundational to expanding the physics reach of future LArTPC detectors.
MicroBooNE utilizes an 85-tonne active volume Liquid Argon Time Projection Chamber (LArTPC) to pursue an ambitious physics programme including the search for oscillations between active and sterile neutrinos, and a broad range of cross section measurements and searches for new physics. LArTPCs are high-precision imaging detectors that capture fine details of particle interactions, driving the need for advanced reconstruction techniques. The principal reconstruction frameworks at MicroBooNE employ multi-algorithm, deep learning and tomographic techniques to address a range of pattern-recognition problems in reconstructing 3D images of neutrino interactions in order to fully exploit the LArTPC imaging capabilities. Enhanced signal processing and energy reconstruction, a crucial observable for numerous physics goals, has been achieved via deep learning and neutron tagging. This talk presents an overview of these techniques, which enable the MicroBooNE physics programme.
The SBND experiment, a 112-ton liquid argon time projection chamber (LArTPC), functions as the near detector for the Short Baseline Neutrino (SBN) program at Fermilab. Positioned only 110 metres from the beam target, SBND anticipates capturing over a million neutrino interactions annually, surpassing the dataset sizes of other LAr experiments by more than an order of magnitude. Due to its location on the surface, the detector is also exposed to high rates of cosmic rays, and therefore the experiment necessitates a sophisticated and dependable trigger system to allow for effective downstream analysis. This talk will detail the SBND hardware trigger system and its performance with first data.
T2K is a long-baseline experiment for the measurement of neutrino and antineutrino oscillations. The ND280 near detector at J-PARC plays a crucial role in minimising the systematic uncertainties related to the neutrino flux and neutrino-nucleus cross-sections.
ND280 has been recently upgraded with a new suite of sub-detectors: a high granularity target with 2 million optically-isolated scintillating cubes read out by wavelength shifting fibres and 55000 Multi-Pixel Photon Counters; two horizontal Time-Projection Chambers instrumented with resistive Micromegas and 6 panels of scintillating bars for precise time-of-flight measurements.
The new detectors were installed in 2023-2024. Results from the first data collected with a neutrino beam will be shown, and the performance of the detector will be highlighted: large acceptance for tracks produced at large angle, low threshold for pion and proton reconstruction, full kinematic measurement of neutrons and improved particle identification.
The NOvA experiment uses the ~1 MW NuMI beam from Fermilab to study neutrino oscillations over a long distance. The experiment is focused on measuring electron neutrino appearance and muon neutrino disappearance at its Far detector situated in Ash River, Minnesota. NOvA was the first experiment in High Energy Physics to apply convolutional neural networks to the classification of neutrino interactions and the composite particles in a physics measurement. Currently, NOvA is crafting new deep-learning techniques to improve interpretability, robustness, and performance for future physics analyses. This talk will cover the advancements in deep-learning-based reconstruction methods being utilized in NOvA.
The Taishan Antineutrino Observatory (TAO) is a satellite experiment of JUNO. It consists of a ton-level liquid scintillator detector at 44 meters from a reactor core of the Taishan Nuclear Power Plant, detecting reactor antineutrinos via inverse beta decay. Silicon photomultipliers (SiPMs) with ~95% coverage and ~50% photon detection efficiency are used to collect photoelectrons, resulting in a light yield of ~4500 photoelectrons per MeV. Dark noise of the SiPMs is suppressed by cooling the detector down to -50 °C. The main goal of TAO is to obtain a precise energy spectrum of reactor antineutrinos with very high energy resolution (<2% at 1 MeV). It will deliver a reference energy spectrum for JUNO to reduce the impact of reactor antineutrino flux and spectrum model uncertainties, provide a benchmark for nuclear databases, and search for light sterile neutrinos with a mass scale around 1 eV.
This talk will show the results of TAO 1:1 prototype and the latest status of final TAO detector.
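As a rough cross-check of the numbers quoted above, a photostatistics-only estimate (a minimal sketch assuming pure Poisson fluctuations of the detected photoelectrons; a real detector adds noise, non-uniformity and non-linearity terms) already places the statistical floor below the 2% target at 1 MeV:

```python
import math

def photostat_resolution(energy_mev: float,
                         light_yield_pe_per_mev: float = 4500.0) -> float:
    """Photostatistics-limited relative energy resolution sigma_E / E.

    Assumes only Poisson fluctuations of the detected photoelectrons;
    the 4500 p.e./MeV default is the light yield quoted for TAO.
    """
    n_pe = light_yield_pe_per_mev * energy_mev
    return 1.0 / math.sqrt(n_pe)

# ~4500 p.e. at 1 MeV gives a statistical floor of about 1.5%,
# consistent with the quoted <2% at 1 MeV resolution goal.
print(f"sigma/E at 1 MeV: {photostat_resolution(1.0):.2%}")
```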
A precise measurement of the luminosity is a crucial input for many ATLAS physics analyses, and represents
the leading uncertainty for W, Z and top cross-section measurements. ATLAS luminosity determination in Run-3 of the LHC follows the procedure developed in Run-2 of the LHC. It is based on van-der-Meer scans during dedicated running periods each year to set the absolute scale, and an extrapolation to physics running conditions using complementary measurements from the ATLAS tracker and calorimeter subsystems. The presentation discusses the procedure of the ATLAS luminosity measurement, as well as the results obtained for the 2022 and 2023 pp datasets.
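For orientation, the absolute scale extracted from van-der-Meer scans follows the standard relation between luminosity and measurable beam parameters (the textbook expression, not a description of the ATLAS-specific procedure):

```latex
% Luminosity from a van-der-Meer scan:
%   n_b                  number of colliding bunch pairs
%   f_r                  revolution frequency
%   N_1, N_2             bunch populations of the two beams
%   \Sigma_x, \Sigma_y   convolved beam sizes from the scan curves
\mathcal{L} = \frac{n_b\, f_r\, N_1 N_2}{2\pi\, \Sigma_x \Sigma_y}
```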
Precision luminosity evaluation is an essential ingredient to cross section measurements at the LHC, needed to determine fundamental parameters of the standard model and to constrain or discover beyond-the-standard-model phenomena. The latest results of the CMS experiment are reported. The absolute luminosity scale is obtained with beam-separation “van der Meer” scans, and the systematic biases are studied in detail using innovative methods. Contributions to the uncertainty in the integrated luminosity due to instrumental effects, including the linearity and stability of the detectors are also discussed. Constraining the luminosity integration uncertainty via the observed rate of Z → μ+μ- events is also explored.
The LHCb detector optimised its performance in Runs 1 and 2 by stabilising the instantaneous luminosity during a fill, tuning the distance between the two colliding beams with a hardware-based trigger. In Run 3, the LHCb experiment has been upgraded to cope with the five-fold increase in luminosity and has a fully software-based trigger. A brand-new luminometer, PLUME, has been installed and successfully commissioned. Additionally, new online proxies from almost all sub-detectors are used to provide real-time measurements of the luminosity, both integrated and per bunch crossing. New offline counters are also stored via a dedicated stream running at a 30 kHz rate to allow for a precise offline calibration of the luminosity. In this talk an overview of the new luminosity measurements at LHCb is presented. The first results obtained using data collected during 2023 will also be shown, including the ghost-charge fraction measurement using the beam-gas imaging technique.
Cross section measurements are an essential part of the ALICE physics program and require precise knowledge of the luminosity delivered by the LHC. In ALICE, the luminosity determination relies on visible cross sections measured in dedicated calibration sessions, the van der Meer scans.
In this talk, the methodology and results of the luminosity measurement will be discussed. For the LHC Run 2 data samples, ALICE measured the luminosity with an uncertainty better than 2% for pp collisions and 3% for Pb-Pb collisions.
The first measurement of the inelastic hadronic cross section for Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, obtained by efficiency correction of the visible cross section, will also be presented and compared with available models. The luminosity-related upgrades to the ALICE detector for the LHC Run 3 will be presented.
The CMS Beam Radiation, Instrumentation and Luminosity (BRIL) system aims to provide high-precision bunch-by-bunch luminosity determination in the harsh conditions of the High-Luminosity LHC. Luminosity instrumentation will use diverse technologies, including a dedicated detector, the fast beam conditions monitor (FBCM) with Si-pad sensors and a fast triggerless readout. Various CMS subsystems' back ends will be adapted to provide luminosity information, including the tracker endcap pixel detector (TEPX), the outer tracker, the muon barrel, the hadron forward calorimeter, as well as the 40 MHz trigger scouting system. The BRIL Trigger Board will enable the independent operation of FBCM and the innermost layer of TEPX from the rest of CMS at all times by providing a dedicated timing and luminosity trigger infrastructure.
The physics program at the HL-LHC calls for a precision in the luminosity measurement of 1%. To fulfill this requirement in an environment characterized by up to 140 simultaneous interactions per bunch crossing (200 in the ultimate scenario), ATLAS will rely on multiple, complementary luminosity detectors, covering the full range of HL-LHC beam conditions from the low-luminosity, low-pileup regime of the van-der-Meer (vdM) calibrations to the high-luminosity environment typical of physics running. Two detector systems intended for operation at the HL-LHC, LUCID-3 and the BMA detector, have had prototypes in operation since the start of Run-3. These prototypes have demonstrated their performance as well as their limitations, and have been added to the main set of luminometers for Run-3. The presentation will discuss ATLAS luminosity determination at the HL-LHC, the available HL-LHC prototypes in Run-3, and their performance and inclusion in operations in 2022, 2023, and 2024.
The Belle and Belle II experiments have collected a 1.1$~$ab$^{-1}$ sample of $e^+ e^-\to B\bar{B}$ collisions at the $\Upsilon(4S)$ resonance. These data, with low particle multiplicity and constrained initial-state kinematics, are an ideal environment to search for rare $B$ decays proceeding via electroweak penguin processes and lepton-flavour-violating decays to final states with missing energy from neutrinos. Results include those on the decay $B\to K^+\nu\bar{\nu}$ using an inclusive tagging technique, including the first results from Belle. In addition, we present searches for the SM process $B\to K^{(*)}\tau^+\tau^-$ and for the lepton-flavour-violating decay $B^0\to K^{0}_{\rm S}\tau^+\ell^-$, where $\ell$ is an electron or muon.
The Belle-II experiment has recently measured $\mathcal{B}(B\to K\nu\nu)$, which appears to be almost $3\sigma$ larger than its Standard Model (SM) prediction. In this talk, I will critically revisit the status of the SM predictions for the $B\to K^{(\ast)}\nu\nu$ decays, and discuss the interpretation of this Belle-II measurement in terms of a general Effective Field Theory, as well as concrete models of physics beyond the SM.
The Belle and Belle$~$II experiments have collected a 1.1$~$ab$^{-1}$ sample of $e^+ e^-\to B\bar{B}$ collisions at the $\Upsilon(4S)$ resonance. We present results on the radiative decays $B^0\to \gamma\gamma$, $B\to \rho\gamma$ and $B\to K^{*}\gamma$. $C\!P$ and isospin asymmetries are presented for the latter two decays. We also present results from decays involving $b\to s\ell^+\ell^-$ and $b\to d\ell^+\ell^-$ transitions, where $\ell$ is an electron or muon.
Radiative rare b-hadron decays are sensitive probes of new physics through the study of branching fractions, angular observables, CP-violation parameters and photon polarization. The LHCb experiment is ideally suited for the analysis of these decays due to its high trigger efficiency, as well as excellent tracking and particle identification performance. Recent measurements of b-hadron radiative decays are presented and discussed.
Weak decays of beauty baryons offer an attractive laboratory to search for effects beyond the Standard Model (SM), complementary to searches in meson decays. Flavour-changing neutral currents such as $b \to s\ell^+\ell^-$ transitions are of particular interest due to their high suppression in the SM. The LHCb experiment is ideally suited for the analysis of these decays due to its high trigger efficiency, as well as excellent tracking and particle identification performance. However, the properties of weakly decaying b baryons are not yet precisely determined, due to their low production cross-sections. Therefore, recent updates on precision measurements of b-baryon properties as well as studies of baryonic $b \to s\ell^+\ell^-$ transitions are presented.
We present an update of the likelihood analysis of the general two-Higgs-doublet model, using both theoretical constraints and the latest experimental measurements of the flavour observables. We make use of the public code GAMBIT and find that the model can explain the neutral-current anomalies while respecting the latest measurement of $R_{K^{(*)}}$ by LHCb, and simultaneously fit the values of the $R_{D^{(*)}}$ charged-current ratios at $1\sigma$ once the latest LHCb measurement of $R_{D^{(*)}}$ is included. From the constrained parameter space, we make predictions for future collider observables. In particular, the model predicts values of BR($h\to bs$) and BR($h\to\tau\mu$) that are within the future sensitivity of the HL-LHC or the ILC. We also show how the Belle II $2.7\sigma$ measurement of the golden channel BR($B^{+}\to K^{+}\nu\bar{\nu}$) could be accommodated within the model. Finally, using the latest measurement of the Fermilab Muon g−2 Collaboration, we perform a simultaneous fit to $\Delta a_{\mu}$, finding solutions at the $1\sigma$ level.
Beauty baryon spectroscopy exhibits a rich phenomenology, which contributes to a deeper comprehension of fundamental interactions. The large sample of beauty baryons produced at the Large Hadron Collider offers an unprecedented opportunity to enhance our understanding of these particles through searches for new decay channels, measurements of b-baryon properties, and the exploration of new states. This presentation will discuss recent results on beauty baryon decays based on data collected by the LHCb experiment at centre-of-mass energies of 7, 8 and 13 TeV.
As the $\phi$ meson is composed of a pair of strange-antistrange quarks, it puts implicit constraints on modelling the hadronization procedure itself. Perturbative QCD inspired models, such as PYTHIA 8, describe hadronization through parton showers where strangeness is conserved on a quark-by-quark basis. In contrast, quark-gluon plasma inspired models, such as EPOS-LHC and EPOS4, model hadronization by statistical/thermal processes through microcanonical ensembles: as the $\phi$ meson is inherently neutral in strangeness, it is predicted to have similar dynamics to particles with comparable hadronic masses. Measuring the $\phi$ meson yield in association with a hard scattering can be used to test which paradigm best describes the underlying dynamics of $\phi$ meson production. This contribution will highlight new results from ALICE comparing the $\phi$ meson production in-and-out of jets from pp collisions at $\sqrt{s} = 13.6$ TeV.
Experimental data on the interaction between vector mesons and nucleons are a crucial input for understanding the pattern of in-medium chiral symmetry restoration (CSR) and the dynamically generated excited N($\Delta$) states. However, accessing these interactions is hampered by the short-lived nature of vector mesons, making conventional scattering experiments unfeasible. Leveraging the excellent PID capabilities of the ALICE experiment, coupled with the copious production of $\rho^{0}$p pairs at the LHC in small colliding systems, ALICE presents the first-ever measurement of the momentum correlation function between $\rho^{0}$ and p. This measurement provides an unprecedented opportunity to study the nature of the excited N($\Delta$), in particular N(1700) and N(1900), and possibly find out whether these states are molecular in nature, as well as shed light on possible signatures of CSR at LHC energies.
The composition of the innermost region of neutron stars is unknown, and the possible appearance of QCD axions has recently been proposed to help understand this puzzle. The properties of axions at high baryon densities can be related to the in-medium properties of pions, which are accessible in pp collisions at the LHC. Here, the emission of multiple hadrons helps to mimic high densities due to their short production distances ($\sim 1$ fm). This talk presents recent measurements of the particle-emitting source size using femtoscopy of $\pi\pi$ and p$\pi$ in pp collisions at $\sqrt{s}$ = 13 TeV by ALICE. The resulting scaling of the source size as a function of transverse mass is consistent for $\pi\pi$, p$\pi$ and previous results for baryon pairs by ALICE, demonstrating a common emission source for all hadrons. Using three-body femtoscopy, the first measurement of pp$\pi$ correlations, unveiling effects beyond pairwise interactions, is shown.
In the last decade, several resonances in the mass range 900-2000 MeV/$c^{2}$ (e.g. $f_{0}$(980) and $f_{1}$(1285)) have been proposed to have exotic quark compositions. Theory predicts they can be a linear combination of u and d quarks, or can carry hidden strangeness, forming tetra-quark hadrons or hadrons with a hybrid structure. The excellent particle identification capabilities of the ALICE detector provide an opportunity to explore these high-mass resonances. This study reports the first measurement of the production cross section of $f_{1}$ and $f_{0}$ resonances in pp and p-Pb collisions at LHC energies. The measured yields will be presented and compared to the statistical hadronization model (SHM) to shed light on the hidden strange content of these resonances. In addition, a multiplicity-dependent study of $f_{0}$ resonance production is presented, in search of a possible rescattering effect in the hadronic phase of high-multiplicity pp and p-Pb collisions.
We show that the masses of the pion and its excited states, as well as the pion decay constant, charge radius, electromagnetic form factor and photon-to-pion transition form factor, can be simultaneously described by holographic light-front QCD for the transverse dynamics, augmented by the 't Hooft equation governing the longitudinal dynamics. We point out that this formalism satisfies the GMOR constraint.
Bound state constituents move in the instantaneous potential generated by their companions. QED and QCD have instantaneous potentials when the gauge is fixed over all space at an instant of time (e.g., $A^0=0$). Thus the Schrödinger equation can be generalised to relativistic motion [1].
The QCD potential felt by a quark or gluon in a color singlet state can be non-vanishing at spatial infinity, since it is only the combined potential generated by all constituents that vanishes. Maintaining Poincaré symmetry leaves a single universal parameter: the (spatially constant) gluon field energy density.
Consequences: the gluon energy density sets the QCD hadron scale. Color singlet $q\bar{q}$ states are bound by a linear potential; $qqq$ and $q\bar{q}g$ states have corresponding confining potentials. Full Poincaré covariance of EM form factors for relativistic bound states is demonstrated for the first time in a Hamiltonian framework [2]. The hadron spectrum has promising features [1].
[1] arXiv:2101.06721
[2] arXiv:2304.11903
For fifty years, the standard model of particle physics has been hugely successful in describing subatomic phenomena. Recently, this statement appeared to be contradicted by the strong disagreement between the recent measurement of the anomalous magnetic moment of the muon, $a_\mu$, and the reference standard-model prediction for that quantity. Such a large discrepancy should signal the discovery of interactions or particles not present in the standard model. Here we present a new first-principles lattice calculation of the most uncertain contribution to $a_\mu$. We reduce uncertainties compared to earlier computations. Combined with the extensive calculations of other standard model contributions, our result leads to a prediction that differs from the measurement of $a_\mu$ by only 0.9 standard deviations. This provides an extremely precise validation of the standard model, to 0.38 ppm.
At the LHC, the vast amount of data from the experiments demands both sophisticated algorithms and substantial computational power for efficient processing. Hardware acceleration is an essential advancement for HEP data processing; this talk focuses specifically on the application of High-Level Synthesis (HLS) to bridge the gap between complex software algorithms and their hardware implementation. We will explore how HLS facilitates the direct implementation of software algorithms on hardware platforms such as FPGAs to enable real-time data analysis. We will use the case study of a track-finding algorithm for muon reconstruction in the CMS experiment, demonstrating HLS's role in translating algorithms into high-speed, low-latency hardware solutions that improve the accuracy and speed of particle detection. Key HLS techniques, including parallel processing, pipelining, and memory optimization, will be discussed, illustrating how they contribute to the efficient acceleration of algorithms.
The LHCb experiment will undergo its high-luminosity detector upgrade in 2033-2034 to operate at a maximal instantaneous luminosity of $1.5 \times 10^{34}$ cm$^{-2}$s$^{-1}$. This increase in instantaneous luminosity poses a challenge to the tracking system to achieve proper track reconstruction with a tenfold higher occupancy. Here we focus on the foreseen solutions for the new tracking stations after the magnet, called the Mighty Tracker. It is of hybrid nature, comprising silicon pixels in the inner region and scintillating fibres in the outer region. The silicon pixels provide the necessary granularity and radiation tolerance to handle the high track density expected in the central region, while the scintillating fibres are well suited for the peripheral acceptance region. New R&D activities are needed in both technologies to cope with the highest instantaneous luminosity and the drastic increase in the radiation environment. An overview of the current status of the Mighty Tracker project will be presented.
At the LHC, electrons and photons play a crucial role in precision measurements of the Higgs boson's properties as well as of Standard Model parameters such as the weak mixing angle, the W boson mass and related cross-sections, which have proven to be competitive with prior determinations at the LEP or Tevatron colliders. In addition, they are crucial for searches using electron and photon final states, such as the search for di-Higgs production or beyond-the-Standard-Model multilepton final states. These challenging measurements rely on understanding and controlling the systematic uncertainties of the object reconstruction in the detector extremely well. In this poster, the final precision on electron and photon energy calibration, reconstruction, identification and isolation efficiency measurements using 13 TeV pp collision data collected with the ATLAS detector during the LHC Run-2 will be discussed. A glance at preliminary Run-3 results will be included.
Outreach & Education is an essential part of HEP experiments, where visualisation is one of the key factors. 3D visualisation and advanced VR, AR, and MR extensions make it possible to visualise detector facilities, explain their purpose and functionalities, and visualise different physics events. Visualisation applications should be extensive, easily accessible, compatible with most hardware and operating systems, simple to use, open source, and equipped with a well-developed user framework. Browser-based applications built on gaming engines best meet these requirements, but at the cost of performance limitations. Geometry descriptions play a critical role in balancing performance against the quality of the visualisation scene. The paper describes methods of geometry simplification and AR/VR applications developed based on simplified geometry.
The ATLAS Liquid Argon Calorimeter readout electronics will be upgraded for the HL-LHC. This includes the development of custom preamplifiers and shapers with low noise and excellent linearity, a new ADC chip with two gains, and new calibration boards with minimal non-linearity and non-uniformity across all calorimeter channels. New ATCA-compliant signal processing boards, equipped with FPGAs and high-speed links, receiving the detector data and performing energy and time reconstruction, as well as a new timing and control system, are also being designed. Test results of the latest versions of the aforementioned components and the latest firmware development will be presented.
The China Jinping Underground Laboratory (CJPL) is an excellent location for studying solar, geo- and supernova neutrinos. As an early stage of the Jinping Neutrino Experiment (JNE), we have been studying the performance of a 1-ton liquid prototype neutrino detector at CJPL-I. We aim to improve its electronics system and photomultiplier tubes (PMTs) to explore its potential capabilities further. We have developed a new electronic system with higher resolution, greater bandwidth, and faster storage speed. We plan to replace the current Hamamatsu 8-inch PMTs with North Night Vision (China) 8-inch MCP-PMTs and increase the number of PMTs from 30 to 60. These new technologies will be used for the future 500-ton neutrino detector at CJPL. The poster will present the upgrade plan, equipment, progress, and physical improvements of the 1-ton neutrino detector.
The Mu2e experiment will search for the CLFV process of neutrinoless coherent conversion of a muon to an electron in the field of an Al nucleus. The experimental signature is a monochromatic conversion electron with energy $E_{CE} = 104.97$ MeV. One of the possible background processes is $\bar{p}$s produced by the proton beam at the Production Target, annihilating in the Stopping Target (ST). The background expected from $\bar{p}$ is very low but highly uncertain. It cannot be efficiently suppressed by the time window cut used to reduce the prompt background. Therefore, we have developed a method for the in-situ measurement of this background. In Mu2e, $p\bar{p}$ annihilation in the ST is the only source of events with multiple tracks coming from the ST, simultaneous in time, each with a momentum in the signal window region. We exploit this unique feature and reconstruct the multi-track events to estimate the $\bar{p}$ background by comparison.
Understanding the formation of (anti)nuclei in high-energy collisions has attracted considerable interest over the last few years. According to the coalescence model, nucleons form independently and then bind together if they are close in phase space. A recent advancement of the model is the Wigner function formalism, which allows the calculation of the coalescence probability based on the distance and relative momentum of the constituent nucleons. The interest in explaining nuclear formation processes extends beyond standard model physics, with implications for indirect Dark Matter searches, where antinuclei could be produced in Dark Matter decays. In this presentation, we provide an improved model based on the state-of-the-art coalescence formalism, not only for deuterons but also for the more intricate case of A=3 nuclei. Our approach introduces a purpose-built Monte Carlo generator that offers high adaptability and superior performance compared to traditional general-purpose event generators.
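To illustrate the Wigner-function formalism mentioned above, the following sketch evaluates the standard Gaussian ansatz for the deuteron coalescence weight of a proton-neutron pair. The Gaussian form, the size parameter value, and the function name are illustrative assumptions of this sketch, not the purpose-built generator described in the abstract.

```python
import math

HBARC = 0.1973  # hbar*c in GeV*fm, converts between momenta (GeV/c) and distances (fm)
SIGMA = 3.2     # fm; assumed deuteron size parameter (illustrative value)

def coalescence_weight(r_fm, q_gev):
    """Gaussian Wigner-function ansatz for the probability that a
    proton-neutron pair with relative distance r (fm) and relative
    momentum q (GeV/c) coalesces into a deuteron. The prefactor 3
    combines the phase-space normalisation (8) with the deuteron
    spin-isospin projection factor (3/8)."""
    return 3.0 * math.exp(-r_fm**2 / SIGMA**2 - (q_gev * SIGMA / HBARC)**2)
```

In a Monte Carlo application, this weight would be evaluated for every proton-neutron pair sampled from the event generator, and the deuteron yield obtained as the sum of weights.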
The exploration of the Higgs boson's properties and its interactions with top quarks constitutes a pivotal aspect of the post-Higgs-discovery era. Among these, the measurement of the associated production of a Higgs boson with a pair of top quarks (ttH) offers a unique window into the Yukawa coupling between the Higgs boson and the top quark, the heaviest known fundamental particle. This poster presents the latest results on ttH production with the Higgs boson decaying into a pair of bottom quarks (H→bb), performed by the ATLAS collaboration. A key focus is placed on the improved MVA strategy, which incorporates permutation-invariant architectures utilising attention mechanisms, for enhancements in both multi-class classification of signal and background events and Higgs candidate reconstruction.
Inspections and interventions in radioactive environments are often reliant on human personnel because of the complexity of infrastructures that have not been designed for robotic or remote access. This is also the case for particle and nuclear physics experimental facilities, which can become highly activated over time.
To alleviate problems with the decommissioning of the ATLAS inner detector at the Large Hadron Collider at CERN, a Virtual Reality (VR) Platform has been created. The platform provides a tool for training purposes in-situ and on mock-ups of the real detectors. Information on immediate and accumulated radiation doses can be fed back live or analysed in depth later. Applications of the system are presented together with research into extending the system towards an enhanced mode of operation in complex environments that is based on linking the virtual environment with a simultaneous localisation and mapping (SLAM) algorithm.
The Fermilab accelerator complex has been optimized to deliver a 1-megawatt proton beam for the NOvA experiment. The primary challenges involve maintaining the target system and stabilizing the proton beam operation to serve high-quality neutrino beams to the neutrino detectors. The beam transport lattice was re-optimized for sending the fine-tuned proton beam to the target. A proton beam monitoring algorithm has been developed utilizing the muon monitors. As a result, we achieved the world-record 960-kW beam for accelerator neutrino experiments in Spring 2023, with ongoing efforts to push the beam power to 1 MW in 2024. This is a significant milestone in advancing the future US neutrino facility, which includes the PIP-II proton Linac and the LBNF-DUNE experiment.
The upcoming wave of neutrinoless double beta decay (0νββ) experiments is geared towards probing the inverted mass ordering and transitioning into the normal ordering domains. We undertake a quantitative assessment of the projected experimental sensitivities, with a specific emphasis on the discovery potentials anticipated prior to the execution of the experiments. We assess the sensitivity of the counting analysis using full Poisson statistics [1] and compare it with its continuous approximation. The inclusion of additional measurable signatures such as energy can enhance sensitivity, and this is accounted for through a maximum likelihood analysis [2]. This study serves as an example of the generic problem of making sensitivity projections for proposed projects with predicted backgrounds before the experiments are performed.
References
[1] M. K. Singh et al., Phys. Rev. D 101, 013006 (2020).
[2] M. K. Singh et al., Phys. Rev. D 109, 032001 (2024).
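As a minimal illustration of why full Poisson statistics matter in a low-background counting analysis, the following sketch (our own example, not the analysis code of Refs. [1,2]) compares a median discovery significance computed with exact Poisson probabilities against the familiar continuous approximation $s/\sqrt{b}$; the function names and the simple integer-median approximation are assumptions of the sketch.

```python
import math
from statistics import NormalDist

def poisson_sf(n, mu):
    """P(N >= n) for N ~ Poisson(mu), summed exactly."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n))

def z_poisson(s, b):
    """Median discovery significance with full Poisson statistics:
    p-value of the background-only hypothesis for the (approximate)
    median observed count under signal+background, converted to a Z-score."""
    n_med = math.floor(s + b)
    p = poisson_sf(n_med, b)
    return NormalDist().inv_cdf(1.0 - p)

def z_gauss(s, b):
    """Continuous (Gaussian) approximation, s / sqrt(b)."""
    return s / math.sqrt(b)
```

For a few expected signal events over an O(1) background, the Poisson treatment yields a noticeably smaller significance than the Gaussian formula, which is the regime relevant for 0νββ projections.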
Visualization is integral to high-energy physics (HEP) experiments, spanning from detector design to data analysis. Presently, depicting detectors within HEP is an intricate challenge. Professional visualization platforms like Unity offer advanced capabilities and provide promising avenues for detector visualization. This work aims to develop an automated interface facilitating the seamless conversion of detector descriptions from HEP experiments, formatted in GDML, DD4hep, ROOT, and Geant4, directly into 3D models within Unity. The significance of this work extends to aiding detector design, HEP offline software development, physics analysis, and various aspects of HEP experiments. Moreover, it establishes a robust foundation for future research endeavors, including enhancements in event display.
The ATLAS experiment will undergo major upgrades for the high-luminosity LHC. The high pile-up interaction environment (up to 200 interactions per 40 MHz bunch crossing) requires a new radiation-hard, fast-readout tracking detector.
The Inner Tracker (ITk) upgrade design includes ~28,000 modules. It is vital to follow the complex global production flow. The ITk production database (PDB) allows monitoring of production quality and speed. After production the information will be kept for 10 years of data-taking.
Database tools for interaction and reporting are developed for collaboration users with various skill-sets. Tools include a pythonic API wrapper, upload GUIs, commandline scripts, containerised applications, and CERN hosted resources.
This presentation promotes information-exchange and collaboration tools that support detector construction in large-scale experiments. Through examples, the general themes of data management and multi-user global accessibility will be discussed.
We investigate the elastic production of top quark pairs ($t\bar{t}$) in $pp$ collisions at low and high luminosities. We extend the study of the sum of two semi-exclusive $t\bar{t}$ production modes, namely in photon--Pomeron ($\gamma-\!IP$) and Pomeron--Pomeron ($\!IP-\!IP$) interactions. We consider semi-leptonic $t\bar{t}$ decay, tagging of both forward protons, and low pile-up. We find that measuring the sum of $\!IP-\!IP$ and $\gamma-\!IP$ is feasible. Separating the individual channels is challenging at high luminosities. The $\gamma-\!IP$ signal is separable from backgrounds at low pile-up, allowing us to probe the $\gamma-\!IP$ interactions.
HYLITE is a charge-integration pixel detector readout chip designed for the Shanghai high-repetition-rate XFEL and extreme light facility. With a dynamic range of 1-10000 photons at 12 keV, each HYLITE pixel includes an ADC with an automatic gain-switching function. The initial phase of HYLITE development focuses on creating a 64×64-pixel chip with a 200-μm pixel pitch. The ultimate goal is to produce a chip with 128×128 pixels and a 100-μm pixel pitch.
HYLITE200F, the first full-scale chip of HYLITE, was manufactured using a 130 nm CMOS process. The frame rate of HYLITE200F is 6 kHz in successive readout mode, with plans to enhance it to 12 kHz in the final version. Moreover, HYLITE200F is bump-bonded with a specially designed PIN sensor for joint debugging. The test module comprises four HYLITE200F chips and one sensor and underwent preliminary testing using an X-ray tube. The test results demonstrate that the module can produce images clearly after flat-field correction.
The INO-ICAL collaboration has built a prototype detector called mini-ICAL at IICHEP, Madurai, India$\:$(9$^\circ$ 56' N, 78$^\circ$ 00' E). The mini-ICAL is being used to measure the charge-dependent cosmic muon flux at the earth's surface. Mini-ICAL is a magnetised detector, composed of 11 layers of iron plates interspaced with resistive plate chambers to track cosmic ray muons. The iron is magnetised to a maximum field of 1.5 T by applying a current of 900 A through 32 copper coils. A simulation with the Geant4 toolkit, including detector noise and efficiency, is used in an unfolding technique to obtain the muon spectrum at the earth's surface from the observed distributions. This talk presents the results for the charge ratio of $\mu^+$ to $\mu^-$ as a function of momentum, ranging from $\sim$ 1 GeV/c to 3 GeV/c, and of the azimuthal angle of the reconstructed muon for different zenith angles, compared with predictions from different hadronic models in the CORSIKA event generator.
Quantum chromodynamics (QCD) has yielded a vast literature spanning distinct phenomena. We construct a corpus of QCD papers and build a generative model on it. This model holds promise for accelerating scientists' ability to consolidate our knowledge of QCD by generating and validating scientific works within the landscape of QCD and similar problems in HEP. Furthermore, we discuss challenges and future directions of using large language models to integrate our scientific knowledge about QCD through the automated generation of explanatory scientific texts.
Particle identification (PID) is crucial for future particle physics experiments like CEPC and FCC-ee. A promising breakthrough in PID involves cluster counting, which quantifies primary ionizations along a particle’s trajectory in a drift chamber (DC), bypassing the need for dE/dx measurements. However, a major challenge lies in developing an efficient reconstruction algorithm to recover cluster signals from DC cell waveforms.
In PID, machine learning algorithms have emerged as the state-of-the-art. For simulated samples, a supervised model based on LSTM and DGCNN achieves a remarkable 10% improvement in separating K from $\pi$ compared to traditional methods. For test beam data samples collected at CERN, due to label scarcity and data/MC discrepancy, a semi-supervised domain adaptation model is developed and validated using pseudo data. When applied to real data, this model outperforms traditional methods and maintains consistent performance across varying track lengths.
Non-identical particle femtoscopy is sensitive to the two-particle pair source size ($R$) and the pair-emission asymmetry ($\mu$). Here, we study the dependence of $R$ and $\mu$ on the centrality and pair transverse velocity ($\beta_{\rm T}$). For this purpose, we modelled the femtoscopic correlations between all charged pion-kaon pairs in Pb--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV using a) (3+1)D viscous hydrodynamics coupled with THERMINATOR 2 and b) the integrated Hydro-Kinetic Model (iHKM). The dependence of $\mu$ on $\beta_{\rm T}$ is heavily modified by the interaction and rescattering of pions and kaons in the final state of the collisions. This phase is implemented in the iHKM, while in THERMINATOR 2 an extra delay in kaon emission is introduced to mimic it. We will show that the predicted non-monotonic dependence of $\mu$ on $\beta_{\rm T}$ can be used to better constrain the duration and impact of the hadron rescattering phase in ultra-relativistic heavy-ion collisions.
A majoron-like particle J in the mass range between 1 MeV and 10 GeV, which dominantly decays into standard model (SM) neutrinos, can be constrained from big-bang nucleosynthesis (BBN). For majoron lifetimes ($\tau_J$) smaller than 1 s, the injected neutrinos from majoron decay heat up the background plasma, resulting in a deficit of the Helium-4 abundance and an enhancement of the Deuterium (D) abundance. For $\tau_J$ larger than 1 s, the injected neutrinos enhance the conversion rate of $p\to n$, which enhances both the Helium-4 and D abundances. We find that in both cases the constraint from the measurement of D is the strongest. We also estimate the $\Delta N_{\rm eff}$ constraint on the majoron parameter space and compare it with the BBN bounds obtained from our analysis.
Sub-GeV dark matter particles evade standard direct detection limits since their typical energies in the galactic halo do not allow for detectable recoil of the heavy nuclei in the detectors. It was, however, pointed out recently that if the dark matter particles have sizable couplings to nucleons, they can be boosted by interactions with galactic cosmic rays, and so sub-GeV dark matter can also be probed. We revisit such bounds based on direct detection experiments, paying particular attention to the attenuation of the boosted dark matter flux, where we include the effect of inelastic scattering of the dark matter with nuclei in the Earth's crust. More importantly, we also study the effect of inelastic scattering on dark matter detection, and we calculate the bounds that can be placed by upcoming neutrino experiments like DUNE. We improve on previous works by considering the dark-matter-nucleus inelastic cross sections provided by numerical simulations using the GENIE code.
The initial densities of both the Dark Matter (DM) and the Standard Model (SM) particles may be produced via perturbative decay of the inflaton with different decay rates, creating an initial temperature ratio, $\xi_i$=T$_{DM,i}$/T$_{SM,i}$. This scenario implies inflaton-mediated scatterings between the DM and the SM that can modify the temperature ratio even for high inflaton mass. The effect of these scatterings is studied in a gauge-invariant model of inflaton interactions up to dimension 5 with all the SM particles, including the Higgs. It is observed that an initially lower (higher) DM temperature will rapidly increase (decrease), even with very small couplings to the inflaton. There is a sharp lower bound on the DM mass for satisfying the relic density, due to faster back-scatterings depleting DM to SM. For low DM masses, the CMB constraints become stronger for $\xi_i$<1, probing values as small as $10^{-4}$. The BBN constraints become stronger for lower DM masses, probing $\xi_i$ as small as 0.1.
The contribution provides an overview of the Data Quality Control System (QC) of the ALICE Inner Tracking System (ITS2). QC is a software framework developed during the ITS commissioning before the beginning of LHC Run 3. It is used to validate the detector performance and guarantee efficient data taking. QC is capable of synchronous data-flow monitoring at different levels: data integrity, data rate, hit rate (and fake-hit rate), cluster and track reconstruction level, and even secondary vertex reconstruction for strange particles. QC is run during online reconstruction and offline reconstruction passes, as well as during Monte Carlo productions.
The Jiangmen Underground Neutrino Observatory (JUNO) is located in southern China, in an underground laboratory with a 650 m rock overburden. The primary scientific goal of JUNO is to determine the neutrino mass hierarchy.
A Data Quality Monitoring (DQM) system is crucial to ensure the correct and smooth operation of the experimental apparatus during data taking. The DQM system at JUNO will reconstruct raw data directly from the JUNO Data Acquisition (DAQ) system, produce performance plots, and use visualization tools on a website to display the detector performance, guaranteeing high-quality data taking. The strategy of JUNO DQM, including its design, current development and performance, will be presented.
This poster will present the Drell-Yan differential cross-section measurement over the wide dilepton mass range of 40-3000 GeV. The measurement was performed using CMS data collected in 2016-2018. A special emphasis will be placed on the background estimation procedures in the dielectron and dimuon measurements.
The precision measurements of the Drell-Yan process are important inputs to parton distribution function fits, perturbative QCD tests, searches for new physics and so on. Therefore, it is important to achieve the highest possible accuracy and precision.
Data-driven techniques were employed to estimate both prompt and non-prompt lepton backgrounds to keep the statistical and systematic errors as low as possible. We use an eμ event sample and same-sign dilepton samples for the prompt and non-prompt lepton background estimations, respectively, with some improvements over the typical implementations. We will present these techniques in detail during the conference.
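The eμ control-sample method rests on the flavour symmetry of backgrounds such as $t\bar{t}$ and WW, which populate the ee, μμ and eμ channels in proportion to the lepton selection efficiencies. The following minimal sketch shows the basic arithmetic of that extrapolation; the function name and the single efficiency-ratio treatment are simplifying assumptions of this sketch, not the CMS implementation.

```python
def flavor_symmetric_bkg(n_emu, eff_ratio):
    """Estimate the flavour-symmetric background (e.g. ttbar, WW) in the
    ee and mumu channels from the observed emu control-sample yield.
    With per-lepton efficiency ratio r = eff_e / eff_mu, the expected
    yields scale as N_ee : N_mumu : N_emu = r : 1/r : 2, so
    N_ee = (N_emu / 2) * r and N_mumu = (N_emu / 2) / r."""
    n_ee = 0.5 * n_emu * eff_ratio
    n_mumu = 0.5 * n_emu / eff_ratio
    return n_ee, n_mumu
```

Note that the product of the two predictions is independent of the efficiency ratio, which is one reason the method is robust against lepton-efficiency mismodelling.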
The influence of exploiting Deep Neural Networks (DNNs) for signal-over-background classification in High Energy Physics (HEP) analysis is often underrated. In this research, we investigated the effect of a DNN classifier on the Vector Boson Fusion (VBF) production mode of the Higgs boson decaying into b-quark pairs. The DNN improves the identification of signal events overwhelmed by the QCD background. However, the selection of the signal-efficiency working point has a sculpting effect on the background invariant-mass distribution of the Higgs boson candidate, due to the correlation of the algorithm with that variable. We studied the impact of this correlation, tested different decorrelation methods, and compared the relative sensitivities to the signal on self-produced Monte Carlo simulated datasets. The analysis of the Higgs boson in the VBF production mode decaying in the hadronic channels is ongoing at the major HEP experiments. Hence, this work will have a strong impact on future measurements of this channel.
Large neutrino liquid argon time projection chamber (LArTPC) experiments can broaden their physics reach by incorporating isolated MeV-scale features present in their data. We use data from MicroBooNE, an 85 tonne LArTPC exposed to Fermilab neutrino beams from 2015 until 2021, to demonstrate new calorimetric and particle discrimination capabilities for isolated ~O(1 MeV) energy depositions referred to as "blips". We observe concentrations of blips near fiberglass support struts along the TPC edge, with an energy spectrum indicative of specific gamma-ray decays. These and other blip sources are being used to validate calibrations in MicroBooNE's data by leveraging spectral features. This work further reports on the progress towards distinguishing between low-energy protons and electrons in large LArTPCs using cosmogenic data. The composition of proton-like blips selected using this new technique is being studied to evaluate the accuracy of cosmic ray flux models used in LArTPCs.
Hardware random number generators (HRNGs) are widely used in the computing world for security purposes, as well as in science as a source of high-quality randomness for models and simulations. Currently existing HRNGs are either costly or very slow and of questionable quality. This work proposes a simple HRNG design based on low-number photon absorption by a detector (a photomultiplier tube or a silicon-based one, i.e. SiPM, MPPC, etc.) that can provide a large volume of high-quality random numbers. The prototype design, different processing options, and quality testing of the generator output are presented.
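Raw bits derived from photon detections are typically biased, so HRNG designs include a post-processing (whitening) stage. The following sketch shows one classic option, the von Neumann extractor; the detector stand-in and function names are hypothetical, and the abstract does not specify which processing option the prototype uses.

```python
import random  # stands in for the photon detector in this sketch

def raw_bits(n, p_one=0.7):
    """Placeholder for the detector front end: n biased raw bits, e.g.
    from comparing successive photon inter-arrival times (hypothetical)."""
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def von_neumann(bits):
    """Von Neumann extractor: consume non-overlapping bit pairs,
    map 01 -> 0 and 10 -> 1, and discard 00/11. The output is unbiased
    provided the raw bits are independent, whatever their bias."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

The price of the debiasing is throughput: for bias p, only 2p(1-p) of the bit pairs yield an output bit, which is one reason fast raw sources matter for high-volume generators.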
In the offline software of the JUNO experiment, detector identifier (ID) and geometry management are indispensable parts. The detector identifier provides a unique ID number for every detector unit with readout, which is used by different applications in the offline software. An ID mapping service is under development to provide associations between different sets of ID systems, including offline software, data acquisition, online event classification, electronics, detector testing and commissioning, etc. The geometry management system is developed based on Geant4 and GDML to precisely describe the detector details, such as geometrical structure, detector shapes and positions. In the offline software, the geometry management system provides consistent detector description information to different applications through interfaces between them.
The Upstream Tracker (UT) is a crucial component of the LHCb tracking system installed during Upgrade I. The UT is a silicon microstrip detector that speeds up track reconstruction, reduces the rate of ghost tracks, and improves the reconstruction of long-lived particles. LHCb is planning Upgrade II during Long Shutdown 4, aiming to increase the peak luminosity by a factor of 7.5. The event pile-up and occupancies will be far beyond the design of the current UT, while radiation damage and pattern recognition will also be challenging. A plan for a new UT using MAPS sensors is proposed. The major sensor technology options will be discussed. The digitization and simulation of the MAPS-based UT will be introduced. Based on the simulation, optimization of the system design is performed, and the impact of various operation scenarios is studied.
The generation of relativistic electron beams using laser wakefield acceleration (LWFA) has recently reached a technology readiness level (TRL) sufficient to deliver MeV-level electron beams for user experiments. Recently built LWFA accelerators can be operated at a 1 kHz pulse repetition rate. LWFA enables the production of electron beams with an ultra-short time structure and ultra-high peak beam current unachievable with conventional acceleration methods. However, instabilities in beam pointing, divergence, and energy spectrum are the major trade-offs of the technique.
The proposed beam diagnostic method is based on one-dimensional visualization of the beam cross-section on a phosphor screen, using a kHz-frame-rate high-resolution linear camera and a sub-nanosecond photodiode to read out the visible-spectrum luminance and hence the electron beam intensity.
The presented experimental data are required for the analysis of the performance and stability of the laser and the gas target.
The water Cherenkov detector is a cornerstone of numerous physics programs, such as precise neutrino measurements. In a conventional physics analysis pipeline, the understanding of detector responses often relies on empirically derived assumptions, leading to separate calibrations targeting various effects. The time-consuming nature of this approach can delay analysis upgrades. Moreover, it lacks the adaptability to accommodate discrepancies arising from asymptotic inputs and factorized physics processes.
Our work on the differentiable physics emulator enhances the estimations of systematic uncertainties and advances physics inference across all the aforementioned aspects. We construct a physics-based AI/ML model that is optimizable with data. We can infer convoluted detector effects using a single differentiable model, informed by robust physics knowledge inputs. Furthermore, our model is a robust solution for experiments employing similar detection principles.
We present a comprehensive differential study of $\Lambda$ hyperon polarization in (ultra-)central Au+Au collisions at low and intermediate energies, employing the microscopic transport model UrQMD in conjunction with the statistical hadron-resonance gas model. This study entails a complex analysis of the fireball dynamics and the evolution of the thermal vorticity field. The resulting thermal vorticity configuration effectively manifests as the formation of two vortex rings in the forward and backward rapidity regions. We demonstrate that the polarization of $\Lambda$ hyperons exhibits oscillatory behaviour as a function of the azimuthal angle, offering a novel means to probe the structure of the fireball in central heavy-ion collisions.
The upcoming long-baseline (LBL) neutrino experiments will be sensitive to non-standard interactions (NSI) and can provide information on the unknown oscillation parameter values. The shift in the $\delta_{CP}$ value observed for NOvA when, in addition to the standard model (SM), NSIs arise simultaneously from two different off-diagonal sectors, i.e., $e-\mu$ and $e-\tau$, could be attributed to the presence of new physics effects. We extend the study to the upcoming long-baseline experiments DUNE and T2HK, and derive constraints on the NSI sectors using the combined datasets of NO$\nu$A and T2K. Our analysis reveals the significant impact that NSIs may have on the sensitivity to the standard CP phase $\delta_{CP}$ and the atmospheric mixing angle $\theta_{23}$. Furthermore, when NSIs from the $e-\mu$ and $e-\tau$ sectors are included, we see significant changes in the CP sensitivity, and the CP asymmetry exhibits an appreciable difference between DUNE and T2HK.
A cosmic muon veto detector (CMVD), using extruded plastic scintillator (EPS) strips, is being built around the mini-ICAL detector, which is operational at IICHEP, Madurai. The CMVD will study the feasibility of building a shallow-depth neutrino detector. Muon interactions in the EPS are detected by SiPMs mounted at both ends of two wavelength-shifting fibres inserted in the EPS strips.
The muon detection efficiency of the CMVD is required to be more than 99.99%. Faithful detection of muons also requires charge measurement. The current signals of the SiPMs are converted into voltage pulses using trans-impedance amplifiers. A DRS4-based readout system is being designed to sample the signals at 1 GS/s. The samples are digitised on receiving a mini-ICAL trigger, and the zero-suppressed data are transmitted to the back-end server. An FPGA-based DAQ board consisting of five DRS4 ASICs and a network interface is being designed. The paper will discuss the prototype design of the SiPM readout system.
DUNE is a long-baseline neutrino experiment that will precisely measure neutrino oscillation parameters, observe astrophysical neutrinos, and search for processes beyond the standard model. DUNE will build four LAr-TPC far detectors with a total mass of ~70 kt of LAr, located at the Sanford Underground Research Facility (SURF), 1.5 km below the earth's surface. A near-site complex hosting different detectors will measure the neutrino flux from an accelerator beam produced at Fermilab, 1300 km away from SURF. DUNE will be sensitive to processes beyond the standard model such as nucleon decays, heavy neutral leptons, and dark matter. It will additionally detect neutrinos of astrophysical origin, most notably supernova and solar neutrinos. This few-MeV low-energy regime is of particular interest for the detection of the burst of neutrinos from a galactic core-collapse supernova and of solar neutrinos, whose most energetic component (the hep chain) has never been measured.
We are interested in thermal corrections to dark matter (DM) annihilation cross sections in an MSSM-inspired BSM theory with bino-like Majorana DM ($\chi$) annihilating to SM fermions through Yukawa interactions via a charged-scalar channel in a freeze-out scenario. We apply the real-time formalism of thermal field theory (TFT) to investigate thermal-fluctuation corrections to the DM annihilation cross section at NLO. We utilize the generalized Grammer and Yennie approach in TFT to ensure IR-divergence cancellation, with the $K$-polarization sum of real and virtual corrections at NLO, for the DM annihilation processes $(\chi \bar{\chi} \rightarrow f \bar{f})$. We calculate the thermal correction to the finite remainder in TFT. Our calculations show a quadratic thermal dependence of the DM annihilation cross section $( \sigma_T \propto {\cal{O}}(T^2))$ when the scalar is heavy compared to the DM and the SM fermions.
To reconstruct the energy and time of events in a liquid scintillator (LS) detector, in a neutrino or dark matter experiment, we need to analyze the waveforms from photomultiplier tubes (PMTs). Fast Stochastic Matching Pursuit (FSMP) is a reversible-jump Markov chain Monte Carlo method that samples the posterior of the PE time sequence for each waveform. Besides waveforms from dynode PMTs, FSMP is suitable for analyzing waveforms from ALD-coated microchannel-plate PMTs with long-tailed charge spectra. The method is accelerated on GPUs and improves the energy and time resolution of LS detectors: the relative energy resolution is improved by 12%.
Within the ATLAS experiment, the Prompt Lepton Isolation Tagger (PLIT) has served as an essential tool to distinguish between prompt muons originating from the decays of W and Z bosons and non-prompt muons generated in the semi-leptonic decays of b- and c-hadrons. Its central role was to effectively mitigate the presence of fake and non-prompt leptons in various multi-lepton final-state analyses, and it was used extensively in Run 2. The poster will present the ongoing efforts to develop and optimize this tagger for Run-3 data analyses. Through the integration of new features and the exploration of novel machine learning algorithms, the tagger's discrimination power can be enhanced, allowing for more precise identification of prompt leptons originating from electroweak boson decays.
In the realm of high-energy physics experiments, the ability of software to visualize data plays a pivotal role. It supports the design of detectors, aids in data processing, and enhances the potential to refine physics analysis. The integration of complex detector geometry and structures, using formats such as GDML or ROOT, into systems like Unity for 3D modeling is a key aspect of this process. This research employs Unity to render the BESIII spectrometer and events in a three-dimensional animated format. Such visual representations of events effectively demonstrate the particle collisions and the trajectory interactions with the detector. The development of the visualization system for event displays through software not only improves physics analysis, but also encourages cross-disciplinary applications and contributes to educational initiatives.
In high-energy collisions, by measuring the two-particle Bose–Einstein correlation function and considering its relationship with the phase-space density of the particle-emitting source, we can obtain information about the source function. While a Gaussian shape is commonly assumed, anomalous diffusion suggests Lévy-stable distributions, as observed in the PHENIX experiment for kaon-kaon pair-source functions. Event generators like EPOS allow direct investigation of freeze-out coordinates, facilitating the analysis of the source function. EPOS, a Monte Carlo-based model, simulates high-energy nuclear and particle collisions, integrating Parton-Based Gribov-Regge theory for initial evolution, subsequent hydrodynamic evolution, and hadronization. In this talk, I will present an event-by-event analysis of the kaon-kaon source function in $\sqrt{s_\text{NN}}$ = 200 GeV Au+Au collisions using the EPOS model.
Jefferson Lab is considering an energy increase from the current 12 GeV to 22 GeV for its CEBAF accelerator. This will be accomplished by recirculating 5-6 additional turns through the two parallel CEBAF linacs using an FFA arc at each end of the racetrack. The total number of recirculation turns would be 10: the first four turns use the present conventional arcs to make the 180-degree bends from one linac to the other, while the last 5-6 turns all share a single beam line inside the two FFA arcs. This significantly reduces the footprint and the cost of the project. On the other hand, having the trajectories of the last 5-6 recirculating beams close to each other makes it challenging to extract beams from different passes with different energies. In this paper we explain the present extraction system for 12 GeV, our challenges and limitations, and a possible extraction solution for the 22 GeV upgrade, with the goal of extracting beams at different turns/energies to different experimental halls.
Pixelated semiconductor tracking detectors have become a standard tool in high-energy physics experiments. The increasing demand for high-resolution data requires highly granular detectors. Small pixel size and low-noise electronics allow more data to be recorded for each event (cluster of pixels). Every pixel of modern detectors (e.g. Timepix3/4) can record the deposited energy and the time of interaction.
An ionizing particle interacting with the sensor creates a charge cloud which is driven by the electric field to the pixel electrodes. In this work we present an invertible model of signal formation (charge-collection dynamics). Its inversion, applied to measured data, yields the interaction parameters: 3D coordinates with subpixel resolution (6-10x), deposited energy (corrected for charge losses), and time (corrected for drift). A very fast algorithm has been developed and tested. It uses a technique similar to convolutional neural networks and can be hardwired into the electronics.
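The invertible signal-formation model itself is not given in the abstract; as a much simpler stand-in, the sketch below shows how charge sharing between neighbouring pixels already yields subpixel coordinates through a charge-weighted centroid (all names and numbers are illustrative):

```python
def cluster_centroid(cluster):
    """Charge-weighted centroid of a pixel cluster.

    cluster: list of (col, row, charge) tuples, one per fired pixel.
    Returns (x, y, total_charge), with x and y in pixel units; the
    subpixel precision comes from the charge shared between pixels.
    """
    q_tot = sum(q for _, _, q in cluster)
    x = sum(c * q for c, _, q in cluster) / q_tot
    y = sum(r * q for _, r, q in cluster) / q_tot
    return x, y, q_tot

# A 2x2 cluster with most of the charge in the lower-left pixel:
x, y, q = cluster_centroid([(10, 5, 60), (11, 5, 20), (10, 6, 15), (11, 6, 5)])
# x ≈ 10.25, y ≈ 5.2
```

The model described in the abstract goes well beyond this: it inverts the full charge-collection dynamics, which is what allows depth (the third coordinate) and drift-time corrections to be recovered as well.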
Future detector studies rely on advanced software tools for performance estimation and design optimization. Particle-flow reconstruction is a key ingredient for optimal jet energy resolution. While Pandora stands out as a well-established algorithm for particle-flow analysis, its application has primarily been confined to high-granularity CALICE calorimeters. This limitation prompted an exploration of its compatibility with other detector types. Key4hep, a turnkey solution for experiment lifecycles, offers a flexible framework that allows different experiments to benefit from its synergies. Leveraging Key4hep, PandoraPFA was successfully adapted to study particle flow in a liquid-argon calorimeter for the first time. This presentation examines the integration of PandoraPFA into the Key4hep framework and its application in a LAr calorimeter. Furthermore, it assesses PandoraPFA's ability to distinguish between particle showers and discusses the implications for the jet energy resolution.
We present SKMHS22, a new set of diffractive PDFs and their uncertainties at NLO and NNLO accuracy in pQCD, determined within the xFitter framework. We describe all diffractive DIS datasets from HERA and the most recent H1/ZEUS combined measurements. Three scenarios are considered: standard twist-2, twist-4 (including longitudinal virtual photons), and Reggeon exchange. For the contribution of heavy flavors, we utilize the Thorne-Roberts general-mass variable-flavour-number scheme. We show that these corrections, in particular the twist-4 contribution, allow us to include the high-$\beta$ region and lead to a better description of the datasets. We find that the inclusion of Reggeon exchange improves the description of diffractive DIS. The resulting sets are in good agreement with all datasets, which cover a wider kinematical range than in previous fits. The SKMHS22 diffractive PDF sets presented in this work are available via the LHAPDF interface.
The KOTO experiment at J-PARC is dedicated to searching for the rare decay $K_L \rightarrow \pi^0 \nu \bar{\nu}$. This decay violates CP symmetry and is sensitive to new physics beyond the Standard Model (SM) because its branching ratio is predicted in the SM to be $3 \times 10^{-11}$ with a small theoretical uncertainty. One of the main backgrounds is caused by a small contamination of charged kaons in the neutral beam. We installed a new charged-particle detector in the beam to reject these background events by detecting the charged kaons directly. The detector consists of a 0.2-mm-thick plastic scintillator film and 12-$\mathrm{\mu m}$-thick aluminized mylar. The scintillation photons escaping from the scintillator surface are reflected by the mylar and detected with multiple photomultiplier tubes on the sides. In the data taken in 2023, the light yield was 18.9 photoelectrons and the inefficiency was less than 0.1%. In this presentation, I will report on the performance in detail.
The ALICE Collaboration has proposed a next-generation heavy-ion experiment, ALICE 3, to be installed at LHC Interaction Point 2 during Long Shutdown 4, in preparation for Runs 5 (2035) and 6. ALICE 3 will be equipped with a Time-Of-Flight (TOF) detector for the identification of charged particles, which should reach a time resolution of about 20 ps with novel silicon sensors. In this poster, the R&D behind the three technologies being considered for the construction of the ALICE 3 TOF will be presented: innovative Low Gain Avalanche Detectors (LGADs) integrated in the design of fully depleted 110 nm MAPS (as in the INFN ARCADIA project), the novel concept of thin double LGADs, and Single Photon Avalanche Diodes (SPADs), the latter only for the outermost layer of the TOF detector.
In view of the High-Luminosity LHC era the ATLAS experiment is carrying out an upgrade campaign which foresees the installation of a new all-silicon Inner Tracker (ITk) and the modernization of the reconstruction software. Track reconstruction will be pushed to its limits by the increased number of proton-proton collisions per bunch-crossing and the granularity of the ITk detector. In order to remain within CPU budgets while retaining high physics performance, the ATLAS Collaboration plans to use ACTS, an experiment-independent toolkit for track reconstruction. The migration to ACTS involves the redesign of the track reconstruction components as well as the ATLAS Event Data Model (EDM), resulting in a thread-safe and maintainable software. In this contribution, the current status of the ACTS integration for the ATLAS ITk track reconstruction is presented, with emphasis on the improvements of the track reconstruction software and the implementation of the ATLAS EDM.
The experimental and theoretical research on the physics of massive neutrinos is based on the standard paradigm of three-neutrino mixing, which describes the oscillations of neutrino flavours measured in solar, atmospheric, and long-baseline experiments. However, several anomalies corresponding to an L/E of 1 m/MeV, such as the reactor antineutrino anomaly (RAA) and the gallium anomaly, could be interpreted by invoking a sterile neutrino.
STEREO was designed to investigate this conjecture, which would potentially extend the SM. The detector provides a complete study of the anomalies for a pure 235U antineutrino spectrum, using a highly enriched uranium (HEU) core.
We will describe an analysis of the full set of data generated by STEREO together with an accurate prediction of the reactor. The measured antineutrino energy spectrum suggests that the anomalies originate from biases in the nuclear experimental data used for the predictions, while rejecting the hypothesis of a light sterile neutrino. Our result supports the neutrino content of the SM and establishes a new reference for the 235U antineutrino energy spectrum.
Successful reconstruction of hadronic events is critical for physics measurements at the high-energy frontier, where precise measurement of the Higgs boson properties is essential, as it provides excellent opportunities to discover new physics.
We propose a new methodology called jet origin identification, which identifies the jets stemming from 11 different colored SM particles (u, d, s, c, b, their anti-particles, and the gluon). We show that these 11 jet categories can be efficiently separated using state-of-the-art simulation and deep learning tools. Using jet origin identification, the precision of Higgs measurements could be significantly improved: for example, the Higgs-to-cc coupling measurement improves by a factor of two, and the limits on Higgs exotic decays into s and b quarks improve by two orders of magnitude. It can also be applied to other critical measurements such as time-dependent CP measurements and EW measurements.
The HL-LHC phase will be a challenge for the CMS RPC system, since the expected operating conditions are much harsher than those for which the detectors were designed and could introduce non-recoverable aging effects that alter the detector properties. A longevity test is therefore needed to estimate the impact of HL-LHC conditions on RPC detector performance, allowing us to confirm that the RPC system will survive the harsher background conditions expected at the HL-LHC. A dedicated long-term irradiation program was launched in 2016 at the CERN Gamma Irradiation Facility (GIF++), where a few RPC detectors are exposed to intense gamma radiation to mimic HL-LHC operational conditions. The main detector parameters (currents, rate, resistivity) are continuously monitored as a function of the collected integrated charge, and the detector performance has been studied with muon beams. The latest results of the irradiation test are presented.
We update the lepton universality tests and the Vus determination using measurements of tau decays. The tau lepton branching fraction global fit has been improved taking into account uncertainties on external nuisance parameters in its constraints. It will be included in the Heavy Flavour Averaging Group (HFLAV) as-of-2023 report and in the updated Tau Branching Fractions review in the PDG 2024 edition. Lepton universality tests are improved by the Belle II measurement of the tau mass (2023) and the preliminary Belle II measurement of the ratio of the leptonic branching fractions (December 2023). Vus has been determined using the recently published radiative corrections for tau decays and a novel determination of the inclusive hadronic decay rate of the tau to us hadrons with lattice QCD (March 2024).
In view of the HL-LHC, the Phase-2 CMS upgrade will replace the entire trigger and data acquisition system. The detector readout electronics will be upgraded to allow a maximum L1A rate of 750 kHz and a latency of 12.5 µs. The upgraded system will run entirely on commercial FPGA processors and should greatly extend the capabilities of the current system, maintaining trigger thresholds despite the harsh environment and triggering on more exotic signatures, such as long-lived particles, to extend the physics coverage. The muon trigger should identify muon tracks in the experiment and measure their momenta and other parameters for use in the global trigger menu. The L1 track finder in CMS will bring some of the offline muon reconstruction capability to the L1 trigger, delivering unprecedented reconstruction and identification performance. We review the design of the muon trigger, its architecture, and the muon reconstruction and identification algorithms.
The performance of the Level-1 Trigger (L1T) is pivotal for the data-taking endeavor of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). The custom hardware-based L1T system reduces the event rate from the collision frequency of 40 MHz to around 115 kHz as input to the High Level Trigger (HLT). The effective operation and monitoring of the L1T are critical for selecting important physics events.
The L1T monitoring uses an end-to-end approach, from in-situ monitoring during data-taking to cumulative analysis of the offline-reconstructed data, with quality control tests for performance metrics such as efficiencies and rates. This poster provides an overview of the tools and workflows used for trigger monitoring, enabling fast identification and diagnosis of potential problems. It highlights new data tiers and modern software frameworks recently introduced for monitoring, paving the way for efficient trigger monitoring in the High-Luminosity LHC era.
The results from short-baseline experiments such as LSND and MiniBooNE hint at the potential existence of an additional neutrino state characterized by a mass-squared difference of approximately 1 eV². In addition, a sterile neutrino with a mass-squared difference of $10^{-2}$ eV² has been proposed to ease the tension between the results of the T2K and NOνA experiments. The light sterile neutrino hypothesis introduces four distinct spectra of mass states. We discuss the implications of these scenarios for the observables that depend on the absolute masses of the neutrinos: the sum of neutrino masses (∑) from cosmology, the effective electron-neutrino mass from beta decay (m$_β$), and the effective Majorana mass (m$_{ββ}$) from neutrinoless double beta decay. We show that the current constraints on these observables can disfavor some scenarios.
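For reference, the three absolute-mass observables mentioned above have the standard definitions (with the sums running over all mass states, i.e. four states in a 3+1 scenario):

$$\sum = \sum_i m_i, \qquad m_\beta = \sqrt{\sum_i |U_{ei}|^2 m_i^2}, \qquad m_{\beta\beta} = \Big| \sum_i U_{ei}^2 m_i \Big|,$$

where $U_{ei}$ are the elements of the first row of the (extended) lepton mixing matrix. The sterile state contributes to each observable with a weight set by its mass and its mixing with the electron flavour, which is why the different mass orderings can be probed by combining the three constraints.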
Hyper-K is a next-generation long-baseline neutrino experiment. One of its primary physics goals is to measure the neutrino oscillation parameters precisely, including the CP asymmetry. As the conventional νµ beam from J-PARC contains only a 1.5% νe fraction of the total interactions, it is challenging to measure the νe/anti-νe scattering cross-section on nuclei. To reduce the systematic uncertainty, the IWCD will be built to study the neutrino interaction rate with higher accuracy. The presented simulated data comprise νeCC0π as the main signal, with NCπ0 and νµCC as the major background events. To reduce the backgrounds, a log-likelihood-based reconstruction algorithm was initially used to select candidate events; however, it sometimes struggles to properly distinguish π0 events from electron-like events. Thus, an ML-based framework has been developed to enhance the purity and signal efficiency of νe events. Implementing it notably enhances both the efficiency and the purity of the νe signal over the conventional approach.
Heterogeneous computing solutions for real-time event reconstruction are an emerging trend for future designs of trigger and data-acquisition systems, especially in view of the upcoming high-luminosity program of the LHC. FPGA devices offer significant improvements on latency when highly-parallelised algorithms, also based on machine-learning solutions, are coded and deployed on such devices. In this context, standard air-based cooling is not adequate and new solutions are needed to effectively exploit the full computing potential of these high-density devices. In this abstract we present our work in simulating, designing, constructing and testing a liquid-based micro-channeling solution that demonstrates efficient cooling. Our solution enables the deployment of more complex and powerful algorithms on FPGA devices, thus enhancing the performance and reliability of real-time event reconstruction.
Future e$^+$e$^-$ colliders provide a unique opportunity for long-lived particle (LLP) searches. This study focuses on LLP searches using the International Large Detector (ILD), a detector concept for a future Higgs factory. The signature considered is a displaced vertex inside the ILD's Time Projection Chamber. We study challenging scenarios involving small mass splittings between the heavy LLP and the dark matter particle, resulting in soft displaced tracks. As an opposite case, we explore light pseudo-scalar LLPs decaying to boosted, nearly collinear tracks. Backgrounds from beam-induced processes and physical events are considered. Various tracking system designs and their impact on LLP reconstruction are discussed. Assuming a single displaced-vertex signature, model-independent limits on the signal production cross-section are presented for a range of LLP lifetimes, masses, and mass splittings. The limits can be used to constrain specific models with more complex displaced-vertex signatures.
The European Strategy for Particle Physics identifies an e+e- Higgs factory as its top priority and the first step towards an ultra-high energy future hadron collider. The Future Circular Collider (FCC) is being proposed at CERN to address these goals. The FCC includes an electron-positron collider (FCC-ee), which will be followed by an energy-frontier hadron collider (FCC-hh).
New long-lived particles (LLPs) appear in many new physics models and could be the key to new physics discoveries at FCC-ee.
This contribution presents ongoing sensitivity analysis for exotic Higgs boson decays to LLPs at FCC-ee within the FCCAnalyses framework.
The study targets the production of a Higgs boson in association with a Z boson in e+e- collisions at 240 GeV, with the Higgs boson decaying into two long-lived scalars. This builds upon previous work with improved statistics and a refined analysis strategy.
This poster will illustrate the key aspects covered in the upcoming LHC EFT WG Note: SMEFT predictions, event reweighting, and simulation.
Emphasising the challenges associated with the generation of SMEFT predictions using the event reweighting technique, we illustrate the subtleties behind operators that introduce helicity configurations not allowed in the SM. Furthermore, we introduce a novel reweighting tool able to significantly reduce the computational time of SMEFT prediction generation.
The poster focuses on processes from the top sector, serving as benchmarks to highlight these topics.
The CEPC booster has been designed to provide electron and positron beams at different energies for the collider. The latest booster design aligns with the TDR's higher luminosity objectives for four energy modes. The booster's optics have transitioned from FODO in the CDR to TME structure, resulting in a significant reduction in emittance to match the lower emittance of the collider in the TDR. Extensive efforts have been invested to address the challenge of error sensitivity for the booster, ensuring that the dynamic aperture with errors meets the requirements across all energy modes. Additionally, a combined magnets scheme (B+S) has been proposed to minimize the magnet construction costs and reduce the operation costs through lower power consumption. This poster will show the design status of the CEPC booster in the TDR, encompassing parameters, optics, dynamic aperture, ramping scheme, and injection scheme.
Hyper-Kamiokande (HK) is a next-generation international neutrino experiment currently under construction in Japan. HK will explore proton decay and have the capability to detect Earth-crossing, atmospheric, solar, cosmic, and accelerator neutrinos. Expected to start data collection in 2027, HK will require periodic calibration for optimal performance.
The calibration at lower energies will involve utilizing a Deuterium Tritium neutron Generator (DTG), entailing the generation of a radioactive N16 cloud strategically positioned in various locations within the water tank.
A simulation of N16 cloud generation and decay was conducted, and the daughter particles were propagated through the detector volume using a dedicated HK simulation software. Energy and vertex reconstructions were performed using multiple reconstruction algorithms.
The DTG will be deployed using a computer-controlled crane to achieve accurate positioning while prioritizing the cleanliness and safety of the detector.
The measurement of low-mass e+e− pairs is a powerful tool to study the properties of the quark-gluon plasma created in ultra-relativistic heavy-ion collisions. Since such pairs do not interact strongly and are emitted during all stages of the collisions, they allow us to investigate the full space-time evolution and dynamics of the medium created. Thermal radiation emitted by the colliding system contributes to the dielectron yield over a broad mass range and gives insight into the temperature of the medium. The upgraded ALICE detector for LHC Run 3 gives unprecedented possibilities to measure the dielectron spectrum in pp and Pb-Pb collisions. Machine learning tools are nowadays widely spread in the field of particle physics and can help to improve the separation of signal and background events. In this poster I will present a machine learning approach based on boosted decision trees to identify electrons and discriminate contributions from different dielectron sources.
Transformer models currently dominate generative modeling, most notably in the natural language processing domain. The attention mechanism in these models does not suffer from implicit bias, and it enables the processing of large amounts of data thanks to the parallelization of computations during training. This study presents experiments with transformer blocks in an image-completion model trained on Monte Carlo simulations of an electromagnetic calorimeter. We test different setups of the model to observe and analyze the behavior of the transformer-based masked model on calorimeter data, which is characterized by high granularity and a high dynamic range of values.
We present results using an optimized jet clustering with variable R, where the jet distance parameter R depends on the mass and transverse momentum of the jet. The jet size decreases with increasing $p_{T}$ and increases with increasing mass. This choice is motivated by the kinematics of hadronic decays of highly Lorentz-boosted top quarks and W, Z, and H bosons. The jet clustering features inherent grooming with soft drop and the reconstruction of subjets in a single sequence. These features have been implemented in the Heavy Object Tagger with Variable R (HOTVR) algorithm, which we use to study the performance of jet substructure tagging for different choices of grooming parameters and functional forms of R.
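The exact functional forms studied are not quoted in the abstract, but the basic variable-R idea can be sketched as an effective radius that falls off with $p_T$ and is clamped to a working range. The parameter values below are purely illustrative, not the values used in the study:

```python
def effective_radius(pt, rho=600.0, r_min=0.1, r_max=1.5):
    """Variable jet radius R_eff = rho / pT, clamped to [r_min, r_max].

    pt and rho in GeV (illustrative defaults). The radius shrinks for
    high-pT (boosted) jets, so the collimated decay products of a
    boosted top/W/Z/H are contained in a single jet without sweeping
    in extra soft radiation.
    """
    return max(r_min, min(r_max, rho / pt))
```

A mass dependence, as studied in the abstract, would enter by letting `rho` (or the clamping bounds) grow with the jet mass, so that heavier objects are clustered with a larger radius at the same $p_T$.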
The China Jinping Underground Laboratory (CJPL) is an underground laboratory with a 2800-meter rock overburden and is ideal for rare-event searches. Cosmic muons and muon-induced neutrons present an irreducible background to neutrino and dark matter experiments at CJPL. A precise measurement of the cosmic-ray background at CJPL will play an important role in future experiments. Using a 1-ton liquid scintillator detector of the Jinping Neutrino Experiment (JNE), we measure the cosmic muon flux and the cosmogenic neutron production in the liquid scintillator detector at CJPL. This study provides a clear understanding of the cosmic-ray background in a deep underground laboratory.
ProtoDUNE Single-Phase was DUNE's first full-scale engineering prototype and operated from 2018 to 2020. It took test-beam data of charged hadrons in 2018, including data with positively charged kaons at GeV-scale momenta. A total inelastic cross section was measured from these test-beam kaons with the thin-slice method, which artificially divides the detector into slices in which the particle either interacts or passes through. The cross-section data can help inform modeling uncertainties for final-state and secondary interactions used in neutrino and nucleon-decay analyses. This poster shows the event selection, analysis methods, and the final extracted cross section.
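The logic of the thin-slice method can be sketched as follows; this is a schematic illustration of the survival-probability relation, not the actual ProtoDUNE analysis code, and the helper name is hypothetical.

```python
import math

def thin_slice_xsec(n_incident, n_interacting, number_density, slice_thickness):
    """Extract a total cross section from per-slice counts.
    Survival probability across one slice: (N_inc - N_int) / N_inc = exp(-n * sigma * dz),
    so sigma = -ln(survival) / (n * dz).
    Units: number_density in targets/cm^3, slice_thickness in cm -> sigma in cm^2."""
    survival = (n_incident - n_interacting) / n_incident
    return -math.log(survival) / (number_density * slice_thickness)
```

Summing such per-slice measurements over the detector yields the energy dependence of the cross section as the beam particles lose energy.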
Muon reconstruction performance plays a crucial role in the precision and sensitivity of LHC data analyses in the ATLAS experiment. Di-muon J/Psi and Z resonances are used to calibrate the detector response for muons to per-mil accuracy. This poster provides an overview and the current status of the muon momentum calibration within the ATLAS detector, i.e., the procedure used to derive the corrections to the simulated muon transverse momenta that precisely describe the measurement of the same quantities in data. The results achieved are fundamental for improving the reach of measurements and searches involving leptons, such as Higgs decays to dimuons and ZZ, low/high-mass searches in the beyond-the-Standard-Model sector, and high-precision physics analyses, such as the measurement of the W boson mass.
The Covid-19 pandemic has exposed certain societal weaknesses, including the lack of scientists in the media and the public's readiness to believe fake news. "Neutralina" is a character conceived on Instagram (@neutralina.lu) in response to the observed need for scientific outreach by women in Peruvian and Latin American society. The objectives of this project include normalizing the presence of women in science, fighting stereotypes and fake news, and disseminating scientific knowledge. The character has gained a sizable young audience and is expanding beyond Instagram into other formats such as podcasts, in-person conferences, and roundtable discussions. Statistics detailing its growth will be presented, alongside the strategies employed to engage the young audience.
The precise knowledge of the neutrino flux and related uncertainties at the near and far detectors of the T2K experiment is crucial for extracting neutrino oscillation parameters and neutrino cross-section measurements. The current Monte Carlo beam simulation framework, JNUBEAM, relies on the GEANT3 toolkit, which is no longer maintained. Additionally, it utilizes the FLUKA software to simulate hadron production from interactions of the proton beam with the target. We aim to create a replacement framework based solely on the well-established GEANT4 toolkit. Our new framework, G4JNUBEAM, uses the physics processes available in GEANT4 to simulate primary proton interactions in the T2K target through to the decays of muons and hadrons and the subsequent production of neutrinos. We present simulation results validated against NA61/SHINE data, neutrino flux predictions using G4JNUBEAM, and comparisons with the results obtained from the FLUKA+JNUBEAM simulations.
We describe fits of the top-quark mass at NNLO, using as input the double-differential distributions in rapidity and invariant mass of t-tbar pairs obtained by the ATLAS and CMS collaborations by unfolding their experimental data to the parton level, compared to NNLO theory predictions.
We consider different state-of-the-art PDF sets, finding the results of the fits compatible with one another within uncertainties.
On the other hand, we observe some tension between the fits to different datasets.
Contribution partly based on [arXiv:2311.05509], plus updates.
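Schematically, such a determination reduces to a chi-square scan over candidate top-mass values, summarized by a parabolic fit around the minimum. The sketch below is a generic illustration with toy numbers and a hypothetical helper name, not the analysis code behind this contribution.

```python
import numpy as np

def mt_from_chi2_scan(masses, chi2):
    """Fit a parabola to a chi2(m_t) scan and return the best-fit mass and its
    1-sigma uncertainty from the delta-chi2 = 1 criterion."""
    a, b, _c = np.polyfit(masses, chi2, 2)
    m_best = -b / (2.0 * a)
    sigma = 1.0 / np.sqrt(a)    # delta chi2 = 1  <=>  a * (m - m_best)^2 = 1
    return m_best, sigma

# Toy scan: chi2 = (m - 172.5)^2 / 0.5^2 + 3  (illustrative numbers only)
masses = np.linspace(170.0, 175.0, 11)
chi2 = (masses - 172.5) ** 2 / 0.25 + 3.0
m, s = mt_from_chi2_scan(masses, chi2)
```

In practice the chi-square would be built from the unfolded double-differential distributions, their covariance, and the NNLO predictions at each mass point.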
Antiproton annihilation at rest can provide a unique probe of the intra-nuclear structure of nuclei. This process was first observed in the 1950s using photographic emulsion and has since been observed and studied on a variety of nuclei. We present here the first observation and reconstruction of antiproton annihilation-at-rest interactions on argon nuclei using data from the LArIAT experiment, a liquid argon time projection chamber (LArTPC). LArIAT was exposed to a charged-particle test beam at Fermilab from 2015 to 2017. Antiprotons tagged using LArIAT's beamline instrumentation were reconstructed in the LArTPC, and the multiplicities of final-state particles emerging from the identified annihilation-at-rest vertex were measured. These results will inform searches for neutron-antineutron oscillation events in future LArTPCs such as DUNE, owing to their similar topological signature.
The Jiangmen Underground Neutrino Observatory (JUNO) consists of the Central Detector (CD), Water Cherenkov Detector (WCD), and Top Tracker (TT), each utilizing thousands of Photomultiplier Tubes (PMTs) for signal detection. These signals are processed by front-end readout electronics and converted into digital ADC waveforms. Real-time waveform processing using FPGAs is used for charge reconstruction and timestamp tagging. Processed signals are transmitted to the data acquisition (DAQ), while raw waveforms are sent to the DAQ once verified by the global trigger electronics. JUNO is interested in an energy range spanning from tens of keV to tens of GeV. The high event rate and massive raw data generated by the PMT waveforms necessitate an online event classification (OEC) system to identify events based on physical characteristics, compress the data volume, and handle unusual data-acquisition situations. This presentation will discuss the implementation of the OEC system in JUNO.
ATLAS Open Data for Education delivers proton-proton collision data from the ATLAS experiment at CERN to the public, along with open-access resources for education and outreach. To date, ATLAS has released a substantial amount of data from 8 TeV and 13 TeV collisions in an easily accessible format, supported by dedicated documentation, software, and tutorials to ensure that everyone can access and exploit the data for different educational objectives. Along with the datasets, ATLAS also provides data visualisation tools and interactive web-based applications for studying the data, as well as Jupyter Notebooks and downloadable code to search for known and unknown particles and make measurements. The Open Data educational platform which hosts the data and tools is used by tens of thousands of students worldwide; we present the project development, lessons learnt, impacts, and future goals.
We present a novel readout circuit tailored primarily for PbWO4 scintillation detectors in high-energy experiments. The design integrates a 4x4 SiPM array directly coupled to a preamplification stage, housed within a compact electronics module. The readout circuit is designed to work with varying numbers of SiPMs without affecting the timing output. The module incorporates bias control for the SiPMs and adjustable gain and offset controls via USB/RS485 interfaces. Optimization efforts focused on achieving good spectral resolution, rapid response, compactness, and low energy consumption. Key features include a fixed bias voltage, externally adjustable preamplifier settings stored in EEPROM, and output-signal compatibility up to 1 V into 50 Ω. We fabricated a prototype with a 3x3 array of 20 mm x 20 mm x 200 mm PbWO4 crystals, coupled to individual sensor arrays and readouts, and subjected it to thorough testing in the energy range 50 MeV to 5 GeV. Comprehensive characterization measurements will be presented.
The CMS Level-1 Trigger Data Scouting (L1DS) defines a new approach within the CMS Level-1 Trigger (L1T), enabling the acquisition and processing of L1T primitives at the 40 MHz bunch-crossing (BX) rate. The L1DS will reach its full potential with the CMS Phase-2 Upgrade at the HL-LHC, harnessing the improved Phase-2 L1T design, featuring tracker and high-granularity calorimeter data for the first time. Since LHC Run 3, an L1DS demonstrator has been gathering muons and calorimeter objects, with ongoing data characterization to validate the system. The objective is to present the initial findings, evaluating performance using SM candles like $Z\rightarrow ee$ or $Z\rightarrow \mu\mu$ and examining object multiplicity and occupancy per BX, as well as bunch-to-bunch correlations. The studies confirm L1DS functionality and its potential for trigger diagnostics, luminosity investigations, and physics searches, enhancing the study of signatures thus far deemed constrained by L1T selections.
The RPC detectors in the CMS experiment operate with a gas mixture containing 95.2% C2H2F4, a known greenhouse gas. Several eco-friendly alternatives to C2H2F4, such as HFO, have been studied in the last few years in order to find an alternative mixture with low Global-Warming Potential (GWP) while maintaining the performance of the RPC chambers. Another way to reduce the GWP of the standard RPC gas mixture could be to replace between 30% and 40% of the C2H2F4 with CO2. Studies of eco-gas and CO2-based mixtures are carried out at the CERN Gamma Irradiation Facility (GIF++), where LHC Phase-2 conditions are mimicked by a 13.6 TBq radiation source and a muon beam. This poster presents the performance of a 1.4 mm gap RPC chamber with several alternative gas mixtures in a high gamma-background environment, as well as future perspectives for aging studies.
The Deep Underground Neutrino Experiment (DUNE) far detectors require readout of several hundred thousand charge-sensing channels immersed in the largest liquid argon time projection chambers ever built, calling for cryogenic front-end electronics in order to be able to adequately instrument the full detectors. The ProtoDUNE-II program at the CERN neutrino platform consists of 2 liquid argon time projection chambers that will serve as demonstrators of the horizontal drift (HD) and vertical drift (VD) technologies that will be used in the first 2 DUNE far detectors, including the final design of the cryogenic ASICs used for charge readout and digitization. This talk will present the design of these electronics along with evaluations of their performance from both the ProtoDUNE-II assembly experience and early commissioning data from ProtoDUNE-HD.
The high-luminosity operation of the LHC will deliver collisions with a luminosity about 10 times the original design value. This poses a big challenge for real-time trigger and data acquisition due to the nearly 200 overlapping collisions, called pileup, within a bunch crossing. The CMS experiment will revamp its trigger structure as part of the required upgrade, making tracker and more granular calorimeter data available to the first layer (Level-1, L1) of the trigger, deployed in custom hardware including high-end FPGAs and system-on-modules. The correlator units at L1 will further process the information from each sub-detector to build a global event description through the particle-flow (PF) approach. Disentangling pileup particles from those of interesting physics processes is achieved by implementing the Pileup Per Particle Identification (PUPPI) algorithm at L1. We present the strategy for the implementation of PUPPI and PF at L1, focusing on the Hadron Forward Calorimeter of CMS.
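The per-particle logic of PUPPI can be sketched as follows. This is a heavily simplified, schematic version: the α metric and the one-sided chi-square weighting follow the spirit of the published algorithm, but the cone size, neighbour selection, and the median/RMS of the pileup α distribution are placeholders rather than the tuned L1 configuration.

```python
import math

def puppi_alpha(particle, neighbours, r_cone=0.4):
    """Schematic PUPPI metric: alpha_i = log sum_j (pT_j / dR_ij)^2 over nearby
    particles within a cone. Particles are dicts with 'pt', 'eta', 'phi'."""
    s = 0.0
    for p in neighbours:
        dphi = math.atan2(math.sin(particle["phi"] - p["phi"]),
                          math.cos(particle["phi"] - p["phi"]))
        dr = math.hypot(particle["eta"] - p["eta"], dphi)
        if 0.0 < dr < r_cone:
            s += (p["pt"] / dr) ** 2
    return math.log(s) if s > 0.0 else float("-inf")

def puppi_weight(alpha, alpha_med, alpha_rms):
    """Map alpha to a [0, 1] weight via a one-sided chi-square probability:
    particles below the pileup median get weight 0, hard-scatter-like ones -> 1."""
    if alpha <= alpha_med:
        return 0.0
    chi2 = (alpha - alpha_med) ** 2 / alpha_rms ** 2
    return math.erf(math.sqrt(chi2 / 2.0))  # CDF of a 1-dof chi-square
```

The resulting weight rescales each particle's four-momentum before jet clustering and missing-energy reconstruction.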
ProtoDUNE-SP is a single-phase liquid argon time projection chamber that operated at CERN from 2018 to 2020. It is a prototype for the DUNE far detector, which is designed to contain about 70 kilotons of liquid argon for neutrino detection. In addition to R&D studies, ProtoDUNE-SP was exposed to charged-particle beams to study particle behavior in liquid argon. These particles, especially hadrons such as pions and protons, are highly involved in neutrino-argon interactions. Knowledge of these hadron-argon interactions will help constrain the systematic uncertainties of future neutrino analyses at DUNE. This work presents measurements of the pion-argon and proton-argon inclusive cross sections using 1 GeV beam data collected at ProtoDUNE-SP.
Developed within the European Project STRONG2020, PrecisionSM is an annotated database that compiles the available data on low-energy hadronic cross sections in electron-positron collisions. It is important to collect and organize these experimental measurements since they are used to perform precise tests of the Standard Model, such as for the anomalous magnetic moment of the muon. In addition to the datasets, the database also contains details of the systematic uncertainties and the treatment of radiative corrections. The database is accessible through a custom website (https://precision-sm.github.io) which lists all published measurements and links to their location on HEPData. Moreover, the website displays example tools for processing the listed data. This talk will discuss the current status of this project and its future prospects, and will provide examples of how to display the information in the database.
The anomalous magnetic moments of leptons represent excellent probes of the Standard Model and therefore also of possible new physics effects.
In particular, the persisting hint of new physics in the muon $g$-2 motivates the investigation of similar effects also in the other leptonic dipoles.
In this work, we examine the new physics sensitivity of the tau $g$-2 at future high-energy lepton colliders such as the FCC-ee or a Muon Collider.
We show that these facilities can access a number of processes like the radiative Higgs decay $h\to \tau^+\tau^-\gamma$, the Drell-Yan processes
$\ell^+\ell^- \to \tau^+\tau^-(h)$, or vector-boson-fusion processes such
as $\ell^+\ell^- \to \ell^+\ell^-\tau^+\tau^-$ which can probe the tau $g$-2
at the level of $\mathcal{O}(10^{-5}-10^{-4})$, a resolution that is orders
of magnitude better than the current bounds.
During the upcoming High-Luminosity phase of the Large Hadron Collider (HL-LHC), the integrated luminosity delivered by the accelerator will increase to 3000 fb-1. The expected experimental conditions in that period, in terms of background rates, event pileup and the probable aging of the current detectors, present a challenge for all existing experiments at the LHC, including the Compact Muon Solenoid (CMS) experiment. To ensure the high performance of the CMS muon system, several upgrades are currently being implemented. In the case of the Resistive Plate Chamber (RPC) system, an improved version of the existing RPCs (iRPC) will be installed in the forward region on the 3rd and 4th endcap disks of CMS, extending the RPC coverage in pseudorapidity up to 2.4. The iRPCs have entered a mature stage of the production stream at CERN and Ghent. In this poster, the production facilities and the selection procedures for the certified RPC gaps and chambers are presented.
The quest for proton decay is a pivotal endeavor in particle physics, offering potential validation of Grand Unification Theories. In this pursuit, DUNE employs LArTPC technology and ML to boost detection sensitivity and minimize background events. This poster presents a new multimodal ML framework to distinguish proton decay into charged kaons and muons from DUNE's atmospheric neutrino interactions. Using data processed by the CFF Algorithm, the framework integrates modified ResNet and EfficientNet models using late fusion and a gating mechanism for each LArTPC plane. The late fusion model shows promising signal discrimination compared with methods combining preselection cuts with BDT and CNN features. The key advantage of this ML framework is its ability to analyze raw detector data, avoiding track reconstruction. This method effectively addresses the incomplete kaon association from proton decay and the misidentification of protons as signal events.
Neutrino flavor oscillation, a crucial phenomenon in particle physics, explores the interplay between flavor and mass eigenstates, revealing insights beyond the standard model. Probabilistic measures traditionally study these transitions, while the quantum features of neutrinos, such as entanglement, open avenues for quantum information tasks. Quantum complexity, an evolving field, finds application in understanding neutrino oscillations, particularly through quantum spread complexity, offering insights into charge-parity symmetry violations. Our results suggest that complexity favors the maximum violation of charge-parity, which is consistent with recent experimental data. This approach enhances our grasp of neutrino behavior, connecting quantum information theory with particle physics.
The aim of the SABRE (Sodium-iodide with Active Background REjection) experiment based in Australia is to detect an annual rate modulation from dark matter interactions in ultra-high purity NaI(Tl) crystals in order to provide a model independent test of the signal observed by DAMA/LIBRA.
Radionuclides from intrinsic and cosmogenic processes including $^{40}$K, $^{210}$Pb, $^{232}$Th and $^{238}$U provide a fundamental limit to the sensitivity of SABRE. Radiation from these isotopes must be studied and quantified in order to distinguish it from dark matter events.
In this talk the chemical procedures, sample preparation, and sample measurement techniques for radio-impurities in SABRE are discussed. The focus is on the experimental challenges of measuring the dominant radio-impurities in the SABRE crystal background: $^{40}$K, measured via inductively coupled plasma mass spectrometry, and $^{210}$Pb, measured via accelerator mass spectrometry.
This work utilizes text analysis techniques to uncover connections and trends in quantum chromodynamics (QCD) research over time. Through embedding-based analysis, we are able to draw conceptual connections between disparate works across QCD subfields. Examining topic clustering and trajectories over time provides insights into new phenomena gaining momentum and experimental approaches coming to prominence in the QCD research area. Furthermore, we construct citation graphs between influential papers to reveal impactful contributions and relationships, compare them with respect to their topic, and propose intertopical and citation-related recommendations.
The aim of the LHCb Upgrade II is to operate at a luminosity of up to 1.5 x 10$^{34}$ cm$^{-2}$ s$^{-1}$. The required substantial modifications of the current LHCb ECAL due to high radiation doses in the central region and increased particle densities are referred to as PicoCal. An enhancement already during LS3 will reduce the occupancy and mitigate substantial ageing effects in the central region after Run 3.
R&D on several scintillating sampling ECAL technologies is currently being performed: SpaCal with garnet scintillating crystals and tungsten absorber, SpaCal with scintillating plastic fibres and tungsten or lead absorber, and Shashlik with polystyrene tiles, lead absorber and fast WLS fibres.
Time resolutions of better than 20 ps at high energy were observed in test beam measurements of prototype SpaCal and Shashlik modules. The presentation will also cover results from detailed simulations to optimise the design and physics performance of the PicoCal.
A search for scalar resonances decaying to four leptons is presented, using data collected by the CMS detector from 2016 to 2018 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb-1. A model-independent approach is introduced and applied. A large mass region, from 130 GeV to 3 TeV, is covered, and different production mechanisms and width assumptions are tested.
A search for low-mass narrow vector resonances decaying into quark-antiquark pairs at high transverse momentum is presented. The analysis is based on data collected in Run 2 with the CMS detector at the LHC in proton-proton collisions at $\sqrt{s} = 13~\mathrm{TeV}$. Signal candidates are reconstructed as large-radius jets and identified using the ParticleNet algorithm. This analysis presents the most sensitive limits in the boosted topology for couplings to a new vector resonance (Z') as well as couplings to a new scalar resonance ($\Phi$). The invariant jet mass spectrum is probed for a potential narrow peaking signal over a smoothly falling background. Upper limits at the 95% confidence level are set on the coupling of narrow resonances to quarks, as a function of the resonance mass. For masses between 50 and 300 GeV, these are the most sensitive limits to date.
The Karlsruhe Tritium Neutrino (KATRIN) experiment is designed to measure the effective electron antineutrino mass with a sensitivity better than $m_\nu c^2=0.3\,\mathrm{eV}$ ($90\%\,\mathrm{C.L.}$) using precision electron spectroscopy of tritium beta decay. This determination occurs in the spectral endpoint ($E_0$) region, up to some $10\,\mathrm{eV}$ below $E_0\approx 18.6\,\mathrm{keV}$.
Light neutral pseudoscalars and vector bosons arise in many theories beyond the Standard Model (BSM). High-statistics beta spectroscopy with KATRIN is a complementary probe for these new physics theories regarding coupling strengths of bosons to neutrinos or electrons.
The measured beta spectrum is characteristically distorted due to the emission of an additional boson in the decay as described in JHEP 01 (2019) 206. We present the sensitivity estimates of the second measurement campaign ($4\times 10^6$ electrons in the ROI of $[-40, +130]\,\mathrm{eV}$ around $E_0$) to such light boson couplings.
While experimental data have not ruled out the possibility of additional Higgs bosons or gauge sectors, several alternative models have been proposed to go beyond the Standard Model and tackle the hierarchy question. These models predict the existence of heavy vector-like partner quarks that exhibit vector-axial (V-A) coupling, typically at the TeV scale. In this work, we focus on the unusual decays of the heavy partner quark, called $T$, into $H^\pm b$, which may compete with $W^\pm b$ decays and create a new discovery channel at the Large Hadron Collider (LHC). Using Monte Carlo (MC) simulations, we analyse the signal-to-noise ratio of $ pp\to qg\to T^{+}b\bar{b}j\to H^{+}b\bar{b}j\to W^{+}b\bar{b}j\to 1\ell+4b+1j+ E_T$, and evaluate the sensitivity of the LHC to the masses of $T$ and $H^\pm$ in the two-Higgs-doublet model (2HDM) plus vector-like quark (VLQ) model. We take into account current and projected luminosities, as well as theoretical and experimental bounds.
Many extensions of the Standard Model with Dark Matter candidates predict new long-lived particles (LLP). The LHC provides an unprecedented possibility to search for such LLP produced at the electroweak scale and above. The ANUBIS concept foresees instrumenting the ceiling and service shafts above the ATLAS experiment with tracking stations in order to search for LLPs with decay lengths of O(10m) and above. After a brief review of the ANUBIS physics case, this contribution will discuss the first complete prototype detector module called proANUBIS, its design, installation, and commissioning, and its upgraded trigger system. A first glimpse at data taking in the ATLAS cavern in 2024 will be given, followed by a summary of long-term plans.
In this study we investigate the feasibility of detecting heavy neutral leptons ($N_d$) through exotic Higgs decays at the proposed International Linear Collider (ILC), specifically in the channel $e^+ e^-\to qq~H$ with $H\to\nu N_d\to\nu~lW\to\nu l~qq$. Analyses based on full detector simulations of the ILD are performed at a center-of-mass energy of 250 GeV for two different beam polarization schemes with a total integrated luminosity of 2 ab$^{-1}$. A range of heavy-neutral-lepton masses between the Z boson and Higgs boson masses is studied. The $2\sigma$ significance reach for the joint branching ratio $BR(H\to\nu N_d)BR(N_d\to lW)$ is about 0.1%, nearly independent of the mass, while a $5\sigma$ discovery is possible at a branching ratio of 0.3%. Interpreting these results as constraints on the mixing parameters $|\epsilon_{id}|^2$ between SM neutrinos and the heavy neutral lepton, a factor of 10 improvement over current constraints is expected.
The Karlsruhe Tritium Neutrino (KATRIN) experiment probes the absolute neutrino mass scale by precision spectroscopy of the tritium $\beta$-decay spectrum. By 2025, a final sensitivity better than 0.3$\,$eV/c$^2$ (90% C.L.) is anticipated with a total of 1000 days of measurement.
Going beyond this goal, for instance towards the regime of inverted mass ordering, requires novel technological approaches to significantly improve statistics, energy resolution, and background suppression. In this work, we explore two key strategies: (1) implementing a differential detector technique with sub-eV energy resolution (quantum-sensor detector array, time-of-flight) and (2) exploring a large-volume atomic tritium source. This allows high statistics to be acquired more quickly and with ultra-high energy resolution. In this poster presentation, we investigate the limits set by physics and the requirements on technology that constrain the achievable sensitivities on the neutrino mass with a differential measurement.
We present a detailed analysis of the transverse momentum distribution of charged particles from three different schemes. The first two arise from the color string picture described by the Schwinger mechanism convoluted with Gaussian and q-Gaussian string tension fluctuations, yielding the $p_T$-exponential and the Tricomi function, respectively. Both are compared with the QCD-based Hagedorn fitting function, usually used to describe the hard part of the $p_T$ spectrum. We determine the statistics of the charged particles' invariant yield by analyzing the experimental data of minimum-bias pp collisions reported by the RHIC and LHC experiments. Finally, we compute the Shannon entropy, finding that the heavy tail of the $p_T$ spectrum leads to a rise in the monotonically increasing behavior of the entropy as a function of the center-of-mass energy and the temperature.
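For reference, the Hagedorn fitting function mentioned above has the standard form sketched below; the parameters $A$, $p_0$ and $n$ are fitted to data, and the values used here are illustrative only.

```python
def hagedorn(pt, amplitude, p0, n):
    """Hagedorn form for the invariant yield: A * (1 + pT/p0)^(-n).
    Behaves like an exponential exp(-n*pT/p0) at low pT and like a
    power law pT^(-n) at high pT, capturing the spectrum's heavy tail."""
    return amplitude * (1.0 + pt / p0) ** (-n)
```

The heavy power-law tail of this form, relative to a pure exponential, is what drives the entropy behavior discussed in the abstract.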
The ALLEGRO detector concept is a proposal for a detector to operate at the Future Circular Collider FCC-ee. Its calorimetry system consists of a highly granular noble-liquid electromagnetic calorimeter and a hadronic calorimeter with scintillating tiles read out via wavelength-shifting fibers. The individual components of the calorimetry system in the barrel and extended-barrel regions will be introduced. The simulation chain from Geant4 energy deposits to clusters in the calorimeter will be described. Calibration of the Geant4 energy deposits to the electromagnetic scale and corrections for losses in the cryostat will be shown. Results of the reconstruction of electrons and pions will be presented.
In several models of physics beyond the Standard Model, discrete symmetries play an important role. For instance, in order to avoid flavor-changing neutral currents, a discrete Z2 symmetry is imposed on Two-Higgs-Doublet Models (2HDM). This can lead to the formation of domain walls as the Z2 symmetry gets spontaneously broken during electroweak symmetry breaking in the early universe.
Due to this simultaneous spontaneous breaking of both the discrete symmetry and the electroweak symmetry, the vacuum manifold has the structure of two disconnected 3-spheres, and the resulting domain walls can exhibit a number of special effects in contrast to standard domain walls. In this talk I will focus on some of these effects, such as CP- and electric-charge-violating vacua localized inside the domain walls.
I will also discuss the scattering of Standard Model fermions off such domain walls, for example top quarks being transmitted through or reflected off the wall as bottom quarks.
ALICE is the LHC experiment designed for the study of nucleus-nucleus collisions. Its primary goal is to characterize the quark--gluon plasma (QGP), a deconfined state of matter created at extreme temperatures and energy densities. Heavy quarks (charm and beauty) are excellent QGP probes, as they are mostly produced at the earliest collision stages and survive the entire medium evolution, thus allowing us to investigate their interactions with the medium. To do so, heavy-flavour results obtained in Pb--Pb collisions can be compared to those obtained in pp collisions, which serve as a baseline and a test of pQCD calculations.
This poster presents the status of D-meson measurements in pp and Pb--Pb collisions, for different centralities, at $\sqrt{s} = 13.6$ TeV and $\sqrt{s_\text{NN}} = 5.36$ TeV, respectively, using data from the LHC Run 3. D-mesons are reconstructed from their hadronic decays at central rapidity with the ALICE detector.
The coupling of the Higgs boson to fermions is a crucial part of the Standard Model with still much room to explore. Having measured the interaction with the heavy third generation, our focus naturally shifts to the lighter generations. As the Higgs boson coupling scales with mass, this endeavor is much more difficult. The next natural candidate, the charm quark, poses a particular challenge, as distinguishing jets originating from charm quarks from other types of jets is a difficult problem. In this poster, we summarize the current status of Higgs-charm coupling measurements. Special focus is put on the benefits of charm-jet identification with modern machine-learning methods such as transformer models.
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline neutrino-oscillation experiment aiming to measure CP-violation and the neutrino mass ordering. The far detector consists of four 17-kt modules based on Liquid Argon Time Projection Chamber (LArTPC) technology. The technologies chosen for the first and second DUNE modules are tested with large scale prototypes at the CERN Neutrino Platform. The first operation of the ProtoDUNE detectors (2018-2020) led to improvements in the design, construction and assembly procedures of the LArTPCs foreseen for DUNE modules.
The ProtoDUNE detectors have been updated and will take cosmic and beam data in 2024. ProtoDUNE-HD is equipped with the Horizontal Drift (HD) design, formerly known as ‘Single Phase’, and ProtoDUNE-VD uses the recently proposed Vertical Drift (VD) design, an evolution of the previous ‘Dual-Phase’ design. This talk will present the status of the two detectors as well as first results from the data taking.
The purification of the liquid scintillator (LS) is a crucial and complex key point for the Jiangmen Underground Neutrino Observatory (JUNO). The huge LS mass (20 kton), high transparency (> 20 m at 420 nm), high radio-purity (< 10^-15 g/g, down to 10^-17 g/g in U/Th) and extraordinary energy resolution (3% at 1 MeV) are fundamental to achieving JUNO's goals. Physics goals include the neutrino mass ordering, solar neutrinos, supernova neutrinos and atmospheric neutrinos. In this talk, the construction and commissioning of the five JUNO LS purification systems will be presented: alumina filtration, distillation, LS mixing, water extraction, and nitrogen stripping. Moreover, two essential auxiliary systems will be described: ultra-pure water and high-purity nitrogen. So far, two rounds of plant commissioning have been completed, producing the first sample of JUNO LS, whose radio-purity will soon be checked and monitored online. Detector LS filling is planned to start in November 2024.
Coherent Elastic Neutrino-Nucleus Scattering (CEvNS) is an interaction well predicted by the Standard Model. Its large cross-section allows neutrinos to be studied with relatively small detectors. Precision measurement of the CEvNS cross-section is a way to study neutrino properties and search for new physics beyond the Standard Model. The NUCLEUS experiment aims to detect and characterize CEvNS using reactor neutrinos in an ultra-low-background environment. The NUCLEUS target detector will be a 10 g array of cubic CaWO4 and Al2O3 crystals with 5 mm sides. The experiment will be installed between two 4.25 GW reactor cores at the Chooz-B nuclear power plant in France. It is currently under commissioning at the 15 m.w.e. underground lab at TUM (Munich) and will move to Chooz in 2024. NUCLEUS will provide important insights into neutrino physics and potential new physics beyond the Standard Model. In this talk, the recent results and prospects of NUCLEUS will be presented.
The Higgs boson was discovered in 2012 and most of its measured properties agree with the standard model (SM). However, several rare Higgs boson decay channels have not yet been observed, including $H→Zγ$, with a predicted branching ratio of $(1.5±0.1)×10^{−3}$. Rare Higgs decays provide probes of physics beyond the SM (BSM). A search for the $H→Zγ$ decay is therefore performed, where $Z→l^+l^-$ with $l=e,μ$. This channel has a clean final state and a loop-induced diagram sensitive to alterations in various BSM scenarios. The results, derived from samples of proton-proton collisions at $\sqrt s=$ 13 TeV recorded by the CMS experiment at the LHC, will be presented in this poster. The expected and observed significances are 1.2 and 2.7 standard deviations, respectively, for a Higgs boson mass of 125.38 GeV. Similar results have been obtained by the ATLAS experiment. Subsequently, the two collaborations performed a combined analysis, yielding the first evidence for this channel with a significance of 3.4 standard deviations.
In the Color String Percolation Model, QGP formation is associated with the emergence of a percolation cluster of color strings. Estimates of phenomenological observables in the thermodynamic limit are then suitable for heavy-ion collisions, where a large number of particles are produced. To extrapolate these estimates to small systems, such as pp collisions, finite-size effects are studied by considering the nucleon number of the projectiles. In particular, we find that the transition temperature for QGP formation is greater for small systems than for large ones. Under this scheme, we estimate $\sqrt{s}$=3.7(5) TeV, $\sqrt{s}$=185(15) GeV, and $\sqrt{s}$=182(15) GeV as the minimal center-of-mass energies required for QGP formation in pp minimum-bias, AuAu, and PbPb collisions, respectively. These estimates are consistent with the energies at which the QGP has been experimentally observed. Predictions for QGP formation in OO collisions are also reported.
We present the latest developments in the Analysis Description Language (ADL), a declarative domain-specific language describing the physics algorithm of a HEP data analysis, decoupled from software frameworks. Analyses written in ADL can be integrated into any framework for various tasks. ADL is a multipurpose construct with uses ranging from analysis design to preservation, reinterpretation, queries, visualisation, combination, etc. The most advanced infrastructure for executing ADL on events is the CutLang runtime interpreter. Recent technical developments include an automated interface with different data types, generation of the abstract syntax tree, a visualization tool that auto-converts analysis flows to graphs, incorporation of trained machine-learning models, and a Jupyter-based plotting tool. We also report physics implications, including a large-scale LHC analysis implementation and validation effort for BSM reinterpretation purposes and studies with ATLAS and CMS open data.
In my poster, I will present four sub-topics related to radiation protection for the CEPC:
1. Conceptual design of the collider dump system: This includes the parameters of the two dilution kickers and the sizes of the graphite core and iron shell. The maximum temperature rises in the collider dump for the four operation modes are calculated and lie below the graphite melting point.
2. Radiation level in collider tunnel: This will cover synchrotron radiation and beam loss simulation, shielding for magnet insulations and electronics, and the dose-equivalent rate in the tunnel.
3. Linac shielding design: The thickness of the Linac bulk shielding has been determined based on beam loss assumptions, and the sizes of Linac dumps have been optimized to ensure that the dose equivalent is within the safety limit.
4. Estimation for radioactivity production: The radioactivity in the air, cooling water, and rocks surrounding the tunnel are assessed. The results meet Chinese mandatory standards.
The T2K experiment is a long-baseline neutrino oscillation experiment in Japan. A muon (anti-)neutrino beam produced at J-PARC is detected at the near detector ND280 and the far detector Super-Kamiokande (SK). The ND280 detects neutrino interaction candidates before oscillation to predict the neutrino flux in SK and constrain neutrino-nucleus interaction models.
To reduce systematic uncertainties, a new detector, Super-FGD, was installed in ND280 in October 2023. It consists of about 2 million 1 cm^3 scintillator cubes, with fibers inserted through the cubes in three directions. Signals detected by photodetectors attached to the fiber ends are digitized and transmitted by custom electronics boards. A DAQ system was developed to read out these data in a way that minimizes system complexity while accepting data at a high rate. We report on this DAQ system, which was successfully integrated into the conventional ND280 DAQ system and detected the first neutrino candidate event in December 2023.
The Penetrating particle Analyzer (PAN) is an instrument designed to operate in space to measure and monitor the flux, composition, and direction of highly penetrating particles in the energy range from 100 MeV/n to 20 GeV/n. The demonstrator, called Mini.PAN, employs two sectors of permanent magnets arranged in a Halbach geometry. These are interleaved with silicon strip detectors with 25 µm pitch in the bending direction. They are complemented by hybrid pixel detectors (HPD), allowing for high-rate measurements, and a time-of-flight system made of scintillators. We present results of laboratory testing of the individual subdetectors and of Mini.PAN as a whole. A particle energy resolution below 20% can be achieved with Mini.PAN. We give an outlook on future development towards instrument simplification relying purely on latest-generation HPDs (Pix.PAN) and discuss possible applications in deep space or in orbits around the Moon, where such precise measurements have never been made.
The Hyper-Kamiokande experiment will study long-baseline neutrino oscillations with the primary focus of a search for leptonic CP violation, following the successful T2K experiment. Thanks to a 1.3 MW beam produced at J-PARC and a 184-kilotonne fiducial mass of the far detector, the event rates will be 20 times higher than those of T2K, and the search will be systematically limited, mainly by the uncertainties on the $\nu_{e}$/$\bar{\nu}_{e}$ cross sections on a water target. To make full use of the high statistics, an intermediate water Cherenkov detector (IWCD) will be built about 1 km from the neutrino source. The detector will have a 63-tonne fiducial mass, allowing the cross sections to be measured with the required precision. It will be able to scan mean neutrino energies by changing its vertical position, enabling measurement of the relationship between the neutrino energies and the observed energies. This talk will detail the detector design and the current status.
The Large Hadron electron Collider (LHeC) project is studying a new LHC interaction region for deep inelastic scattering collisions between electrons and hadrons at the TeV energy scale. An intense 50 GeV lepton beam is brought into collision with one of the 7 TeV hadron beams from CERN’s Large Hadron Collider, in parallel with the hadron-hadron operation.
This paper presents the status of the study, including the energy recovery linac, the optimisation of the accelerator performance for the e and p beams, and the challenging task of maintaining sufficient beam quality. The LHC lattice has been re-optimised to include an interleaved scheme for electron and proton focusing.
A flexible beam optics has been found for matched e and p beam conditions. It is fully compatible with the HL-LHC upgrade project and the ATS scheme for highest luminosity in the ATLAS and CMS interaction points, and allows concurrent e-p collisions as well as alternating e-p/hadron-hadron collisions in parallel with the standard HL-LHC p-p operation.
KM3NeT is a research infrastructure with neutrino telescopes at two sites in the Mediterranean Sea for the detection of high-energy cosmic neutrinos. The two underwater telescopes, ARCA and ORCA, are Cherenkov detectors using similar technology but with different geometrical layouts. In this way, it is possible to cover a large range of neutrino energies and address various science topics ranging from neutrino astronomy to neutrino oscillation research. The main challenge is to instrument a cubic kilometre of detection volume with optical modules for the detection of Cherenkov radiation. The KM3NeT optical module follows a multi-PMT approach: it contains 31 three-inch photomultiplier tubes, providing high resolution and good positioning and timing calibration. Its integration follows a strict protocol, as production takes place in parallel at different integration sites. In this talk, we will describe the KM3NeT optical module technology and its integration process.
ALICE 3 is a new detector proposed to operate during LHC Runs 5 and 6. The Muon IDentifier (MID) is one of the ALICE 3 subsystems, optimized to detect muons down to momenta below 1.5 GeV/c for rapidities |y|<1.3, enabling the reconstruction of J/ψ vector mesons down to zero transverse momentum at midrapidity. The large acceptance of the ALICE 3 tracker will offer access to rare charmonium and exotic states that decay to J/ψ, pions, and photons. The MID detector will be installed outside the superconducting magnet and includes an absorber of variable thickness (70 cm to 38 cm). Plastic scintillator, multi-wire proportional chamber, and resistive plate chamber technologies are considered for the construction of the MID.
This talk presents an overview of the detector and its physics goals. Emphasis is placed on recent results from a beam test with plastic scintillators and MWPCs, the status of the MID simulation, and the plans for further R&D.
The Deep Underground Neutrino Experiment (DUNE) comprises a suite of Near Detectors and Far Detectors based on liquid argon TPC technology, enhanced by a powerful Photon Detection System (PDS) that records the scintillation light emitted in argon. Besides providing the timing information for an event, the photon detectors can be used for calorimetric energy estimation.
The two observables generated by particle energy depositions in liquid argon are charge and light. The visible energy could be estimated using the charge alone; however, only electrons escaping recombination reach the wire planes, so corrections must be applied for this loss. Charge and light are anticorrelated, and their sum is directly proportional to the total energy deposited: the advantage of using both is that the correction for recombination is no longer necessary. I will present an overview of the DUNE PDS and the results obtained for calorimetric analyses in the DUNE detectors by combining charge and light.
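The anticorrelation argument can be sketched in a toy model: recombination moves quanta from the charge channel into the light channel, so the charge alone depends on the recombination fraction while the charge-plus-light sum does not. The energy-per-quantum constant and the perfect electron/photon partition below are illustrative assumptions, not DUNE calibration values.

```python
# Toy model of charge-light anticorrelation in liquid argon calorimetry.
# W_MEV (energy per quantum) and the clean electron/photon split are
# illustrative assumptions, not the real LAr microphysics constants.
W_MEV = 19.5e-6  # assumed MeV per quantum (ionization electron or photon)

def quanta(e_dep_mev, recomb_frac):
    """Split a deposited energy into ionization electrons (charge) and
    scintillation photons (light) for a given recombination fraction."""
    n_total = e_dep_mev / W_MEV
    charge = n_total * (1.0 - recomb_frac)  # electrons escaping recombination
    light = n_total * recomb_frac           # photons from recombined pairs
    return charge, light

q1, l1 = quanta(10.0, 0.3)  # same 10 MeV deposit, two different
q2, l2 = quanta(10.0, 0.6)  # recombination fractions
e1 = (q1 + l1) * W_MEV      # charge + light recovers the full deposit,
e2 = (q2 + l2) * W_MEV      # independently of the recombination fraction
```

A charge-only estimate (`q1 * W_MEV` vs `q2 * W_MEV`) would differ between the two cases and need a recombination correction; the sum does not.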
The KOTO experiment at J-PARC searches for the rare decay, $K_L \rightarrow \pi^0\nu\overline{\nu}$. This search requires a high intensity $K_L$ beam which sets KOTO in a unique position to probe sub-GeV quark coupling to dark matter. One avenue to study this is the mode $K_L \rightarrow \pi^0\pi^0X$, where $X\rightarrow\gamma\gamma$. This mode was studied in the E391a experiment at KEK in the $X$ mass region 194.3-219.3 MeV, with the best upper limit on the branching ratio set with an $X$ mass of 214.3 MeV at $< 2.4 \times 10^{-7}$. In KOTO, with an improved calorimeter and kaon flux, the single event sensitivity is improved by more than an order of magnitude in that mass range. In addition, the scope of the study is broadened to include the first search for $K_L \rightarrow \pi^0\pi^0X$ with $X$ mass in the range 155-190 MeV. I will present the results of the analysis on this mode using data collected in 2018 and 2021.
Unitarity and $CPT$ symmetry constrain the $CP$ asymmetries entering the Boltzmann equation for net particle number generation. These constraints often manifest as cancellations of the leading-order asymmetries in decays and scatterings. In this poster, we consider the asymmetries of seesaw type-I leptogenesis with top-Yukawa corrections. Even when starting with Maxwell-Boltzmann phase-space densities, some of the contributions required by unitarity and $CPT$ symmetry are interpreted as approximations of quantum statistics and thermal-mass effects. The work is based on JCAP10 (2022) 042.
This presentation concerns the application of non-extensive statistics, specifically that proposed by C. Tsallis, to the study of the transverse momentum distributions of mesons containing charm quarks produced in heavy-ion collisions at relativistic energies. Non-extensive statistics has been successful in describing the transverse momentum spectra of particles produced in hadronic collisions at high energies, which might be connected to the degree of equilibrium reached in these collisions. This question is particularly important for heavy quarks in heavy-ion collisions, given their role in the investigation of the medium formed in these collisions. We will present an update of previous results, now including the transverse expansion of the medium, which improves the interpretation of the systematic behavior of the q and T parameters with respect to the dynamics of charm quarks in these collisions.
DUNE will be a long-baseline neutrino experiment with a broad physics program, including neutrino oscillation, proton decay, and supernova studies. The far detector, located 1,500 m (4,850 ft) underground at SURF, South Dakota, will be 1,300 km (810 mi) away from the ultimate 2.4 MW proton beam source at Fermilab. Four far detector modules, each with a total liquid argon mass of 17 kt, will produce ionization data at a rate of ~1-2 TB/s per module, while in total ~30-60 PB/year can be permanently stored. This contribution presents the design and operational performance of a trigger primitive generation (TPG) algorithm implemented on FPGAs using the ATLAS FELIX readout interface in the DUNE Trigger and DAQ system. Although a software-based TPG was developed in parallel and delivered as the baseline solution, we demonstrate that the FPGA-based system was successfully integrated and put into operation.
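At its core, trigger primitive generation — whether on FPGA or in software — is a threshold-crossing hit finder over pedestal-subtracted ADC samples. A minimal software sketch of that idea follows; the field names, pedestal handling, and threshold values are illustrative, not the actual DUNE DAQ interface.

```python
def make_trigger_primitives(waveform, pedestal, threshold):
    """Summarize each contiguous above-threshold region of a single-channel
    ADC waveform as one trigger primitive: start sample, time over
    threshold, integrated charge and peak amplitude."""
    tps, start, charge, peak = [], None, 0, 0
    for t, adc in enumerate(waveform):
        val = adc - pedestal                  # pedestal subtraction
        if val > threshold:
            if start is None:                 # rising edge: open a new TP
                start, charge, peak = t, 0, 0
            charge += val
            peak = max(peak, val)
        elif start is not None:               # falling edge: close the TP
            tps.append({"start": start, "tot": t - start,
                        "charge": charge, "peak": peak})
            start = None
    if start is not None:                     # pulse still open at end of window
        tps.append({"start": start, "tot": len(waveform) - start,
                    "charge": charge, "peak": peak})
    return tps

# One small pulse sitting on a flat pedestal of 100 ADC counts:
wf = [100] * 5 + [103, 110, 120, 112, 104] + [100] * 5
tps = make_trigger_primitives(wf, pedestal=100, threshold=5)
# -> one TP starting at sample 6, 3 samples over threshold
```

On an FPGA the same logic becomes a per-channel pipeline of comparators and accumulators, which is what makes the ~1-2 TB/s per-module input rate tractable.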
Trilinear Higgs couplings are crucial quantities for precisely determining the electroweak symmetry-breaking mechanism. In the talk, both Type-2 and Type-4 THDMs are analyzed and compared with respect to accommodating the various current excesses at the LHC. Vacuum stability in the THDMs is also discussed and compared with the NTHDM. Precision requirements for measuring the trilinear couplings at the LHC as well as at lepton colliders, with and without polarized beams, are included.
Pileup, or the presence of multiple independent proton-proton collisions within the same bunch-crossing, has been critical to the success of the LHC, allowing for the production of enormous proton-proton collision datasets. However, the typical LHC physics analysis only considers a single proton-proton collision in each bunch crossing; the remaining pileup collisions are viewed as an annoyance, adding noise to the physics process under study. By independently reconstructing these pileup collisions, it is possible to access an enormous dataset of lower-energy hadronic physics processes, which we demonstrate using data recorded by the ATLAS Detector during Run 2 of the LHC. Comparisons to triggered alternatives confirm the ability to use pileup as an unbiased dataset. The potential benefits of using pileup for physics are shown through the evaluation of the jet energy resolution, derived from dijet asymmetry measurements, comparing single-jet-trigger-based and pileup-based datasets.
The ocean Mixed Layer (ML) is defined as the less dense upper region of the water column where turbulent mixing occurs. The Mixed Layer Depth (MLD) is the depth of this region; it shows diurnal and seasonal fluctuations as well as spatial variations, and is an indicator of climate change. When atmospheric muons enter the sea, a decreased muon count is observed at the bottom of the water column. The surviving muon count depends on the integrated density of the water above, so the average column density can be inferred by counting muons at the bottom. Combining this measurement with sea-surface temperature, salinity, and altimetry data from Earth-observing satellites, the MLD can be estimated. We propose a 4 m² scintillator-based underwater muon detection system that measures the average water-column density by counting surviving muons at the bottom. Using a Geant4 model, it is shown that combining this density measurement with Earth-observing satellite data enables continuous estimation of the daily mean MLD with an accuracy of 3% down to a depth of 60 m.
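The density-inference step can be illustrated with a deliberately simplified model: if the surviving muon rate falls roughly exponentially with the integrated column density, the measured bottom rate can be inverted for the mean density. The effective attenuation length below is a hypothetical illustrative value, not the Geant4-derived detector response used by the authors.

```python
import math

# Toy model: surviving muon rate falls roughly exponentially with the
# column density X = rho * depth (g/cm^2). LAMBDA is an assumed
# illustrative effective attenuation length, not a Geant4 result.
LAMBDA = 2.5e5  # g/cm^2, hypothetical

def expected_rate(rate_surface, density_g_cm3, depth_cm):
    """Muon rate surviving to the given depth in the toy model."""
    x = density_g_cm3 * depth_cm
    return rate_surface * math.exp(-x / LAMBDA)

def infer_density(rate_surface, rate_bottom, depth_cm):
    """Invert the exponential to estimate the mean water-column density."""
    x = -LAMBDA * math.log(rate_bottom / rate_surface)
    return x / depth_cm

r0 = 100.0                               # toy surface muon rate
rb = expected_rate(r0, 1.025, 6000.0)    # 60 m of seawater, ~1.025 g/cm^3
rho = infer_density(r0, rb, 6000.0)      # recovers the input density
```

In practice the inversion uses the full simulated detector response rather than a single attenuation length, but the logic — measure the surviving rate, invert a calibrated depth-intensity curve — is the same.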
The precision measurement of daily proton fluxes with AMS during twelve years of operation in the rigidity interval from 1 to 100 GV is presented. The proton fluxes exhibit variations on multiple time scales. From 2014 to 2018, we observed recurrent flux variations with a period of 27 days. Shorter periods of 9 days and 13.5 days were observed in 2016. The strength of all three periodicities changes with time and rigidity. Unexpectedly, the strength of the 9-day and 13.5-day periodicities increases with increasing rigidity up to ~10 GV and ~20 GV, respectively, and then decreases with increasing rigidity up to 100 GV.
In high energy physics experiments, visualization not only plays important roles in detector design, data quality monitoring, simulation and reconstruction, but also aids physics analysis to improve the performance.
Besides traditional physics data analysis based on statistical methods, the visualization approach is intuitive and provides unique advantages, especially in searches for rare signal events and new physics beyond the standard model.
By applying the event display tool to several physics analyses in the BESIII experiment, we demonstrate that visualization can benefit the potential physics discovery and improve the signal significance.
With the development of modern visualization techniques, visualization is expected to play an even more important role in future data processing and physics analysis for particle physics experiments.
The CDF collaboration reported an anomaly in the W boson mass in 2022. We discuss the possibility of explaining the anomaly in a gauge-Higgs unification model. We evaluate the W boson mass in the GUT-inspired SO(5) × U(1) × SU(3) gauge-Higgs unification in the Randall-Sundrum warped space. The muon decay proceeds at tree level by the exchange of not only the zero mode of the W boson but also its Kaluza-Klein excited modes. The anti-de Sitter curvature of the RS space also affects the relationships among the gauge couplings and the ratio of the W boson mass to the Z boson mass. The W couplings of leptons and quarks also change. We find that the anomaly can be explained by these effects in the gauge-Higgs unification model.
If a high school student asks ten physicists what a particle is, he/she might get ten different answers, including a) a particle is what we see in the detector, b) a point-like object with mass and various charges, c) a collapsed wave function, d) an excitation of a quantum field, or even e) an irreducible representation of the Poincaré group. I will briefly discuss the strong and weak points of the above definitions and argue for option d) as a promising path to satisfy the curiosity of motivated high school students. To introduce particles as excitations or waves in quantum fields at this level is certainly challenging, but I believe it is time we tried. I assume that the students are somewhat familiar with a classical harmonic oscillator and with classical travelling and standing waves. Using the electromagnetic field as an example, I will show how we might picture a single photon as a wave of the minimum amplitude and energy allowed by quantum mechanics - the quantum of the electromagnetic field.
An ATLAS search for axion-like particles (ALPs) that decay into two photons is presented. ALPs are hypothetical light particles that may be a component of a hidden (dark) sector. ALPs arising from Higgs decays are studied, where the Higgs is produced in association with a Z boson that is reconstructed leptonically. For prompt ALP decays, a dedicated search looking for two leptons and two collimated photons (merged or resolved) has been published. Studies focusing on the case where the ALPs are long-lived and mostly decay within the calorimeter volume are ongoing. In this case, the photons are displaced and must be identified with dedicated tools. In this poster, current and prospective results of these ALP searches will be presented.
High Energy Physics as a field is necessarily situated within the
broader societal context that surrounds it. As a result, societal biases
also shape physics research. Be it through retention of physicists who
are LGBTQ+, recruitment of young LGBTQ+ physicists, or fighting
discrimination in the lives and careers of our LGBTQ+ friends, family,
and colleagues, much can be done to fight the systemic barriers that
many LGBTQ+ people face.
The LGBTQ+ CERN group is a CERN-recognised Informal Network seeking to
provide a welcoming space for lesbian, gay, bisexual, trans*, intersex,
asexual, genderqueer and other LGBTQ+ individuals at CERN, also
welcoming friends and allies.
This talk will focus on the experiences of High Energy Physicists in the
LGBTQ+ CERN informal network in their careers and the group's efforts to
reach out to both the broader LGBTQ+ and CERN communities. It will also
highlight specific actions that can be taken to foster a more inclusive
environment.
The S-matrix for a QFT in 4D Minkowski space is an inherently holographic object, i.e. defined at the (conformal) boundary of spacetime. A section of this boundary is the celestial 2-sphere, on which the Lorentz group acts by conformal transformations. I will briefly review scattering when translated from the basis of plane waves (translation eigenstates) to the conformal basis (dilatation eigenstates). The resulting object is called a celestial amplitude, and for massless particles the change of basis is implemented by a Mellin transform. I will apply this formalism to amplitudes of Goldstone bosons with an emphasis on their soft theorems. The illustrative example will be the U(1) (non-)linear sigma model.
We improve the YFS IR resummation theory so that it includes all of the attendant collinear contributions which exponentiate. The attendant new resummed contributions are shown to agree with known results from the collinear factorization approach. We argue that they improve the corresponding precision tag for a given level of exactness in the respective YFS hard radiation residuals as the latter are realized in the YFS MC approach to precision high-energy collider physics.
We disclose a serious deficiency of the Baym-Kadanoff construction of thermodynamically consistent
conserving approximations. There are two vertices in this scheme: dynamical and conserving. The divergence of each indicates a phase instability. We show that each leads to incomplete and qualitatively different behavior at different critical points. The diagrammatically controlled dynamical
vertex from the Schwinger-Dyson equation does not obey the Ward identity and cannot be continued beyond its singularity. The divergence in the conserving vertex, obeying the conservation laws, does not invoke critical behavior of the spectral function and the specific heat. Consequently, the divergence of the conserving vertex must coincide with that of the dynamical one to yield a consistent and reliable description of criticality coming from an effective static fermion-fermion interaction taking place, for instance, in magnetism, superconductivity, and chiral symmetry breaking.
We use the toolbox of modern amplitude methods to examine a theory in which they have been mostly neglected: quantum electrodynamics. In this work we focus on maximally helicity violating (MHV) amplitudes in massless electrodynamics. Formulas for an arbitrary number of external photons are presented for some processes. We show recursively that the defining property of these amplitudes is just the soft photon theorem. We present this derivation for spinor, scalar and vector electrodynamics through recursion relations. All results are derived entirely from the symmetry and analytic structure of the underlying theory, without the need to refer to the action or the standard Feynman rules.
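For reference, the leading soft photon theorem invoked above can be written in its textbook (Weinberg) form — the normalization here is the standard one and not necessarily the convention used by the authors:

```latex
\mathcal{A}_{n+1}(p_1,\dots,p_n;\,q,\varepsilon^{\pm})
\;\xrightarrow{\;q\to 0\;}\;
\left(\sum_{i=1}^{n} e_i\,
\frac{\varepsilon^{\pm}\!\cdot p_i}{q\cdot p_i}\right)
\mathcal{A}_{n}(p_1,\dots,p_n)\;+\;\mathcal{O}(q^{0})
```

Here $e_i$ are the charges of the external hard particles, $q$ and $\varepsilon^{\pm}$ are the momentum and polarization of the soft photon, and the universal prefactor is independent of the spins of the hard particles.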
We present an interpretable implementation of the autoencoding algorithm, used as an anomaly detector, built with a forest of deep decision trees on a field-programmable gate array (FPGA). Scenarios at the Large Hadron Collider are considered, for which the autoencoder is trained on the Standard Model. The design is then deployed for anomaly detection of unknown processes. The inference is made with a latency of 30 ns at percent-level resource usage on the Xilinx Virtex UltraScale+ VU9P FPGA. The work is documented at https://arxiv.org/abs/2304.03836
The poster will present the neutron skin and the links of this topic to different areas of physics. After a theoretical introduction and examples of where neutron-skin research can be used, I will show calculations made in recent months. Pb+Pb, proton+Pb, and antiproton+Pb collisions at high momenta were studied, with simulations performed with the UrQMD program. The calculated quantities include the net charge (total and for pions), the average total charge, and the ratio of positive to negative pion multiplicities for large (peripheral) impact parameters, among others.
We present the observation of entanglement in top quark pairs using data collected with the CMS detector in 2016 during Run 2 of the LHC. Events are selected only when two high-pT leptons are present, consistent with the dileptonic decay channel. An entanglement proxy D is used to determine whether the top quark pairs are entangled at the production threshold, with D < -1/3 signaling entanglement. D is observed (expected) to be −0.480^{+0.026}_{−0.029} (−0.467^{+0.026}_{−0.029}) at the parton level. The observed significance is 5.1 standard deviations with respect to the non-entangled hypothesis. This measurement provides a new probe of quantum mechanics at the highest energies ever produced.
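The quoted significance can be sanity-checked with back-of-the-envelope arithmetic: the distance of the observed D from the entanglement boundary, in units of the uncertainty. This assumes a single Gaussian uncertainty, whereas the actual analysis uses a full likelihood, so it is only an illustration.

```python
# Rough consistency check of the quoted 5.1 sigma, assuming a Gaussian
# uncertainty (the experimental result comes from a full likelihood fit).
d_obs = -0.480
sigma = 0.029                  # uncertainty on the boundary side
boundary = -1.0 / 3.0          # D < -1/3 signals entanglement
z = (boundary - d_obs) / sigma
# z comes out close to the quoted 5.1 standard deviations
```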
The Higgs boson discovery at the Large Hadron Collider (LHC) completed the Standard Model of Particle Physics and confirmed the Higgs mechanism as a suitable description of electroweak symmetry breaking (EWSB). Nevertheless, the dynamics of EWSB is still one of the most consequential questions in particle physics and a fascinating topic due to its connection to other open questions about the structure of the early universe, the matter-antimatter asymmetry, and the fermionic mass hierarchies. A pathway to studying the EWSB mechanism is to investigate the longitudinal polarisation state of massive electroweak bosons. In this presentation I will discuss the computation and phenomenology of higher-order QCD corrections to polarised boson production cross-sections at the LHC and their impact on the extraction of the longitudinal polarisation fractions.
The Baur, Spira, and Zerwas model of composite quarks and leptons predicts the excited neutrinos to be produced in proton-proton collisions via contact interactions. Subsequently, the excited neutrinos decay via gauge interaction or contact interaction. The final states always include missing transverse energy; there can also be zero to three charged leptons and/or jets. The present study scans the possible final state scenarios, depending on the model parameter values, to identify searches by the ATLAS and CMS Collaborations that can be reinterpreted as a search for the excited neutrinos. As an example, the publicly available results of the monojet ATLAS search are used to derive rough limits on the excited tau neutrino mass and the contact interaction scale. The reinterpretation of the search can considerably improve the current 1.6 TeV mass limit and reach the 4 TeV region.
This poster is dedicated to searches for additional Higgs bosons from an extended Higgs sector in fermionic final states. Such scalar states are predicted by several beyond-the-Standard-Model theories, such as two-Higgs-doublet models (2HDM) and the Minimal Supersymmetric extension of the Standard Model (MSSM). The results are interpreted in various benchmark scenarios.
Using the most recent experimental data and lattice calculations of the $\pi\pi$ scattering lengths, and employing a dispersive representation of the amplitude based on Roy equations, we compute the subthreshold parameters of this process. We use Monte Carlo sampling to numerically model the probability distribution of the results based on all uncertainties in the inputs. In the second part of the analysis, we use the new results for the subthreshold parameters to obtain constraints on the three-flavour chiral condensate in the context of chiral perturbation theory.
Timing measurements are critical for the detectors at the future HL-LHC. The ATLAS Collaboration is building a new High Granularity Timing Detector (HGTD) for the forward region. A customized ASIC, ALTIROC, has been developed to read out fast signals from low-gain avalanche detectors (LGADs), targeting a time resolution of <= 50 ps for signals from minimum-ionising particles. Custom-designed pre-amplifier, discriminator, and TDC circuits with minimal jitter have been implemented in a series of prototype ASICs. The latest version, ALTIROC3, is designed to provide full functionality. Hybrid assemblies of ALTIROC3 ASICs and LGAD sensors have been characterized with charged-particle beams at DESY and the CERN SPS and with laser-light injection. The time-jitter contributions of the sensor, pre-amplifier, discriminator, TDC and digital readout are evaluated. The poster will introduce the HGTD project and present preliminary results from laboratory and test-beam measurements.
The Aether-Scalar-Tensor (AeST) theory is an extension of General
Relativity (GR) which allows for Modified Newtonian Dynamics (MOND) in
its static weak-field limit and a ΛCDM-like cosmological limit.
MOND successfully describes the behaviour of galaxies without the need
for dark matter. This is best summarised by the Radial Acceleration
Relation (RAR), which directly relates the observed gravitational
acceleration to the acceleration expected from the gravity of the
baryons alone. However, it is generally accepted that MOND fails to
account for the state of galaxy clusters, apparently needing missing
matter to fit the observations.
We consider static spherically symmetric weak-field solutions of AeST
and study the hydrostatic isothermal gaseous sphere as a simplified
model of a galaxy cluster in AeST. We construct the RAR of AeST for
isothermal spheres and find that the AeST RAR for isothermal spheres in
certain cases shares qualitative features also found in the
observational RAR for galaxy clusters, illustrating the potential of
AeST to address the shortcomings of MOND in galaxy clusters.
A new hadronic calorimeter (HCAL) with scintillating glass tiles has been designed for future lepton collider experiments (e.g. the Circular Electron Positron Collider). Using a sampling structure (similar to the CALICE AHCAL technology), the new HCAL design aims for better hadron and jet performance, with a higher sampling fraction achieved by using glass instead of plastic scintillator.
Full simulation studies were done on jet performance of Higgs hadronic decays using a Particle-Flow Algorithm (PFA) named "Arbor". The HCAL design was optimised in terms of longitudinal depth, transverse granularity, glass density and effective light yield.
Hardware activities focus on measurements of glass tiles developed within the Glass Scintillator Collaboration. First batches of cm-scale glass tiles were tested with beam particles at CERN and DESY. In this contribution, highlights of the R&D activities will be presented, including performance studies, design optimisations and the latest beam-test results.
The Pierre Auger Observatory is the world's largest cosmic ray detector. It employs a hybrid technique combining a 3000 km$^2$ surface detector (SD) array comprising 1660 water-Cherenkov stations with 27 fluorescence telescopes, arranged in 4 sites, that overlook the atmosphere above the SD array during clear and moonless nights. In stable operation since 2004, we have published numerous breakthrough results on the properties of the most energetic particles in the Universe with unprecedented statistics. Envisaging a deeper understanding of the highest-energy cosmic rays, AugerPrime, the major upgrade of the Pierre Auger Observatory, will allow us to improve inferences on the mass composition and acceleration mechanisms, probe hadronic interactions at the $\sqrt{s} \sim 100\:\mathrm{TeV}$ scale, and increase the search sensitivity for the sources of ultra-high-energy cosmic rays. We summarize our most significant results and prospects for the next decade of AugerPrime operations.
The mass composition of ultra-high-energy cosmic rays (UHECRs) is a key input in searches for new physics and in understanding astrophysical processes and hadronic interactions at extreme center-of-mass energies exceeding 400 TeV. At the Pierre Auger Observatory, the largest UHECR observatory ever built, accurate inferences on the UHECR mass composition were recently extended up to cosmic-ray energies of 100 EeV. This breakthrough became possible thanks to the application of machine learning to the estimation of the depth of the maximum of air-shower profiles on an event-by-event basis from Surface Detector data. Our new findings include indications of changes in the mass composition correlated with the three features of the energy spectrum (ankle, instep, steepening) and hadronic-model-independent evidence of a heavy and nearly pure primary beam for E > 50 EeV. We discuss the implications of these findings for astrophysical and hadronic-interaction models.
Ultra-high-energy cosmic rays are a unique probe for studying hadronic interactions at the $\sqrt{s} \sim 100\:\mathrm{TeV}$ scale. The Pierre Auger Observatory, the largest cosmic ray detector ever built, has gathered unprecedented statistics on the most energetic particles in the Universe. Our results point to inconsistencies in hadronic interaction models, namely a deficit in the simulated muon content of air showers. Recently, we developed a novel approach in which an overall shift in the depth of the maximum of air-shower profiles and a 15 - 25% increase of the predicted hadronic signal provide a better description of our data. In this contribution, we present our results based on the Auger Phase I data and prospects for the Phase II dataset using AugerPrime, in which enhanced measurements of the muon and electromagnetic shower content, also near the shower core, are made available by the installation of new detectors.
To unveil the origin of Galactic PeV cosmic rays, observation of sub-PeV gamma rays is crucial. Sub-PeV gamma-ray astronomy has been established in the northern hemisphere since the discovery of >100 TeV emission from the Crab Nebula by the Tibet ASγ collaboration in 2019. ALPACA is a new air shower experiment under construction in Bolivia to explore the sub-PeV gamma-ray sky in the southern hemisphere for the first time. The ALPACA array consists of 400 scintillation counters covering 82,800 m$^2$ and underground muon detectors (MDs) covering 3,600 m$^2$, and will start operation in 2025. A prototype array, ALPAQUITA, with 97 scintillation counters has been operating since 2022, and the first 900 m$^2$ MD is under construction. In this contribution, we present the performance of ALPAQUITA, including the detection of the Moon's shadow in charged cosmic rays and a search for bright gamma-ray sources. The status of the first MD construction and the plan to complete the full ALPACA array are also presented.
Unraveling the origin and nature of ultra-high-energy cosmic rays (UHECRs) stands as an essential inquiry in astroparticle physics. Motivated by unprecedented observational capabilities, the Fluorescence detector Array of Single-pixel Telescopes (FAST) emerges as a promising next-generation, ground-based UHECR observatory.
FAST employs a cost-effective array of cutting-edge fluorescence telescopes, each equipped with four 200 mm diameter photomultiplier tubes positioned near the focal plane of a segmented 1.6 m diameter mirror. Currently, three prototypes are in operation at the Telescope Array Experiment and one at the Pierre Auger Observatory. Together they enable remote observation of UHECRs in both hemispheres using the same technology.
We present recent findings from the FAST telescopes deployed in both hemispheres, including telescope calibrations, atmospheric monitoring, electronics upgrades, and the detection of ultra-high-energy cosmic rays, paving the way towards stand-alone operation.
The sky in ultra-high-energy cosmic rays (UHECRs) above a few EeV is surprisingly isotropic, which complicates the identification of the sources. The UHECR spectrum, composition and angular distributions are influenced by interactions with background photon fields and by deflections in extragalactic and Galactic magnetic fields (EGMF and GMF). Moreover, the spatial structure of the EGMF is not yet well understood. In this work we study the propagation of UHECRs with the Monte Carlo code CRPropa3 for a range of UHECR source and EGMF models. UHECR deflection in the GMF is taken into account by mapping arrival directions at the edge of the Milky Way to those at Earth. We predict the sky distributions of UHECRs at Earth for various combinations of source catalogues, injected energy and mass distributions, and EGMFs. We identify the impact of the different model ingredients on the spectrum, composition and UHECR sky. Comparison with data can then constrain scenarios for the source and EGMF models.
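The basic logic of such a propagation study (inject a source spectrum, attenuate it en route, and count what arrives at Earth) can be caricatured in a few lines of pure Python. This is a deliberately crude one-dimensional toy, not CRPropa3; the attenuation lengths, threshold and spectral bounds are invented for illustration:

```python
import math
import random

# One-dimensional toy Monte Carlo of UHECR propagation (illustrative only,
# NOT CRPropa3): protons injected with an E^-2 spectrum lose energy with an
# energy-dependent attenuation length, crudely mimicking photo-pion losses
# above a ~6e19 eV ("GZK") threshold. All numbers are invented.
random.seed(1)

def attenuation_length_mpc(E_eV):
    return 30.0 if E_eV > 6e19 else 1000.0  # toy values, in Mpc

def propagate(E0_eV, distance_mpc, step_mpc=10.0):
    E, d = E0_eV, 0.0
    while d < distance_mpc:
        E *= math.exp(-step_mpc / attenuation_length_mpc(E))
        d += step_mpc
    return E

def sample_e2(Emin, Emax):
    # inverse-CDF sampling of a dN/dE ~ E^-2 injection spectrum
    u = random.random()
    return Emin * Emax / (Emax - u * (Emax - Emin))

# inject from sources at 200 Mpc and count what survives above threshold
arrived = [propagate(sample_e2(1e19, 1e21), 200.0) for _ in range(5000)]
frac_above = sum(E > 6e19 for E in arrived) / len(arrived)
print(f"fraction arriving above 6e19 eV: {frac_above:.3f}")
```

The real analysis replaces each toy ingredient with a physical model: 3D trajectories in EGMF and GMF realizations, nuclear photo-disintegration and pair production, and source catalogues instead of a single source distance.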
The quest for new physics is a major aspect of the CMS experimental program. This includes a myriad of theoretical models involving resonances that can decay to massive bosons, photons, leptons or jets. This talk presents an overview of such analyses, with an emphasis on new results and the novel techniques developed by the CMS collaboration to boost the search sensitivity. The searches are carried out with the full Run II luminosity of the LHC in proton-proton collisions at √s = 13 TeV with the CMS detector.
Many new physics models predict the existence of new, heavy particles. This talk summarizes recent ATLAS searches for Beyond-the-Standard-Model heavy resonances decaying to quarks or leptons, using Run 2 data collected at the LHC.
We present results of searches for massive vector-like top and bottom quark partners using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV. Single and pair production of vector-like quarks are studied, with decays into a variety of final states, containing top and bottom quarks, electroweak gauge and Higgs bosons. We search using several categories of reconstructed objects, from multi-leptonic to fully hadronic final states. We set exclusion limits on both the vector-like quark mass and cross sections, for combinations of the vector-like quark branching ratios.
The Standard Model of Particle Physics explains many natural phenomena yet remains incomplete. Leptoquarks (LQs) are hypothetical particles predicted to mediate interactions between quarks and leptons, bridging the gap between the two fundamental classes of particles. Vectorlike quarks (VLQs) lie at the heart of many extensions seeking to address the Hierarchy Problem, as they can naturally cancel the mass divergence for the Higgs boson. This talk will present the new results from LQ and VLQ searches with the ATLAS detector using the Run-2 dataset.
Leptoquarks are hypothetical particles with non-zero lepton and baryon numbers, predicted by many extensions of the Standard Model, and can provide an explanation for the similarity between the quark and lepton sectors. We present searches for leptoquarks that have been carried out by the CMS Experiment with a focus on the most recent results with the full integrated luminosity of the Run-II data era of the LHC.
We present an overview of searches for new physics with top and bottom quarks in the final state, using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV. The results cover non-SUSY based extensions of the SM, including heavy gauge bosons or excited third generation quarks. Decay channels to vector-like top partner quarks, such as T', are also considered. We explore the use of jet substructure techniques to reconstruct highly boosted objects in events, enhancing the sensitivity of these searches.
Deep learning methods are becoming indispensable in the data analysis of particle physics experiments, with current neutrino studies demonstrating their superiority over traditional tools in various domains, particularly in identifying particles produced by neutrino interactions and fitting their trajectories. This talk will showcase a comprehensive reconstruction strategy of the neutrino interaction final state employing advanced deep learning within highly-segmented dense detectors. The challenges addressed range from mitigating noise from geometrical detector ambiguities to accurately decomposing images of overlapping particle signatures in the proximity of the neutrino interaction vertex and inferring their kinematic parameters. The presented strategy leverages state-of-the-art algorithms, including transformers and generative models, with the potential to significantly enhance the sensitivity of future physics measurements.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino oscillation experiment with a broad research program including measuring CP-violation in the neutrino sector, determining neutrino mass ordering and studying neutrinos from space. DUNE will employ massive, high-precision Liquid-Argon Time-Projection Chambers at the far site (70 kt total mass) to produce superb-resolution images of neutrino interactions. It is critical to reconstruct the visible particles from these complex images and extract crucial physics information. Pandora is a powerful multi-algorithm pattern recognition software that seamlessly blends traditional techniques with the latest machine-learning approaches and is the official reconstruction method for the DUNE Far Detector. This talk presents the Pandora event reconstruction for DUNE's Horizontal and Vertical Drift modules, and discusses approaches to tuning reconstruction for the specific needs of DUNE's various physics goals.
Deep learning can have a significant impact on the physics performance of electron-positron Higgs factories such as the ILC and FCC-ee. We are working on two event-reconstruction topics to which deep learning is applied. One is jet flavour tagging: we apply a particle transformer to ILD full simulation to obtain the jet flavour, including strange tagging. The other is particle flow, which clusters calorimeter hits and assigns tracks to them to improve the jet energy resolution; we modified the algorithm developed in the context of the CMS HGCAL, based on the GravNet and Object Condensation techniques, and added a track-cluster assignment function to the network. The overview and performance of these algorithms will be presented.
We believe the sophisticated full simulation developed over many years in the ILD context is essential for trying out these novel technologies in event reconstruction. Comparisons with results from other Higgs factories, as well as initial considerations of the impact on physics performance, will also be discussed.
In this work, we present a novel approach to the reconstruction of multiple calorimetric clusters within the Large Hadron Collider forward (LHCf) experiment using machine-learning techniques. The LHCf experiment is dedicated to understanding the hadronic component of cosmic rays by measuring particle production in the forward region of LHC collisions. One of the significant challenges in the LHCf experiment is the efficient and accurate reconstruction of unstable neutral particles, such as $\pi^0$, $K_s^0$ and $\Lambda^0$, within the calorimeters. These particles play a crucial role in cosmic ray physics and are pivotal for tuning hadronic interaction models. Using a comprehensive training dataset obtained from detailed Monte Carlo simulations, our method leverages advanced machine-learning algorithms to enhance the reconstruction of multiple calorimetric clusters, significantly improving both the resolution and the efficiency compared to traditional techniques.
Hyper-Kamiokande (Hyper-K) is a next generation water-Cherenkov neutrino experiment, currently under construction to build on the success of its predecessor Super-Kamiokande (Super-K). With 8 times greater fiducial volume and enhanced detection capabilities, it will have significantly reduced statistical uncertainties as compared to Super-K. For corresponding suppression of backgrounds and systematic uncertainties, advances in event reconstruction, event selection, and analysis techniques are required. Machine learning has the potential to provide these enhancements, taking full advantage of new and improved detectors and enabling new analysis techniques to meet the physics goals of Hyper-K. This talk provides an overview of some areas where machine learning is explored for event reconstruction in Hyper-K. Results and comparisons to traditional methods are presented along with discussions of the plans and challenges for applying machine learning techniques to water Cherenkov detectors.
The fidelity of detector simulation is crucial for precision experiments such as DUNE, which uses liquid argon time projection chambers (LArTPCs). The detector simulation can be improved through dedicated calibration measurements, but conventional calibration approaches typically tackle only individual detector processes per measurement, whereas the detector effects are often entangled in the measured detector output, particularly in LArTPCs. We present a differentiable simulator for a LArTPC which enables gradient-based optimization of the detector simulation by simultaneously fitting multiple relevant modeling parameters. The use of the differentiable simulator allows in-situ calibration, which provides natural consistency between the calibration measurements and the simulation application. This work also paves the way towards an "inverse detector simulation" mapping the detector output to the detector-physics quantities of interest.
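The idea of gradient-based simultaneous parameter fitting can be illustrated with a toy "simulator" that is differentiable by construction. The model, parameters and pseudo-data below are invented for illustration and bear no relation to the actual LArTPC code: a two-parameter response $Q(x) = A\,e^{-x/\tau}$ (a gain and an attenuation constant) is fitted to pseudo-data by plain gradient descent on a squared-error loss:

```python
import math

# Toy differentiable "simulator" (illustrative only, unrelated to the real
# LArTPC code): charge response Q(x) = A * exp(-x / tau) versus drift distance x.
def simulate(x, A, tau):
    return A * math.exp(-x / tau)

# pseudo-data generated from "true" detector parameters
A_true, tau_true = 1.5, 3.0
xs = [0.5 * i for i in range(1, 13)]
data = [simulate(x, A_true, tau_true) for x in xs]

# simultaneously fit both parameters by gradient descent on a squared-error
# loss, using the analytic gradients the differentiable model provides
A, tau = 1.0, 1.0  # deliberately wrong starting values
lr = 0.02
for _ in range(50000):
    gA = gtau = 0.0
    for x, q in zip(xs, data):
        r = simulate(x, A, tau) - q                            # residual
        gA += 2.0 * r * math.exp(-x / tau)                     # dLoss/dA
        gtau += 2.0 * r * A * math.exp(-x / tau) * x / tau**2  # dLoss/dtau
    A -= lr * gA
    tau -= lr * gtau

print(f"fitted A = {A:.3f} (true 1.5), tau = {tau:.3f} (true 3.0)")
```

In the real simulator the gradients come from automatic differentiation rather than hand-written derivatives, and the loss compares simulated and measured detector output over many entangled parameters at once.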
Future collider experiments represent a frontier in particle physics, and tracking and particle identification (PID) are crucial aspects of these experiments. In this contribution, innovations in detector technologies, such as high-resolution silicon detectors and ultra-light drift chambers with PID capabilities, as proposed for the IDEA detector at the Future Circular Collider (FCC), are discussed in detail, highlighting their pivotal role in achieving precise momentum measurements and efficient particle identification. Moreover, the integration of machine learning algorithms for cluster counting and energy loss measurements is discussed for its importance in PID. The contribution concludes with a perspective on the implications of these advancements for the discovery potential and scientific reach of future collider experiments, underlining the collaborative efforts of physicists and engineers in shaping the forefront of particle physics research.
In the passive CMOS Strips Project, strip sensors were designed by a collaboration of German institutes and produced at LFoundry in a 150 nm technology. Up to five individual reticles were connected by stitching at the foundry in order to obtain the typical strip lengths required for the LHC Phase-II upgrades of the ATLAS and CMS trackers. The sensors were tested in a probe station and characterised with a $^{90}$Sr source as well as laser-based edge- and top-TCT systems. Finally, detector modules were constructed from several sensors and thoroughly studied in a test beam campaign at DESY. All of these measurements were performed before and after irradiation. The sensors were also simulated using Sentaurus TCAD, and the charge collection characteristics were studied using Allpix$^2$. This presentation will provide an overview of the simulation results, summarize the laboratory measurements and present the test beam results for irradiated and unirradiated passive CMOS strip sensors.
Detectors at future high energy colliders will face enormous technical challenges. Disentangling the unprecedented numbers of particles expected in each event will require highly granular silicon pixel detectors with billions of readout channels. With event rates as high as 40 MHz, these detectors will generate petabytes of data per second. To enable discovery within strict bandwidth and latency constraints, future trackers must be capable of fast, power-efficient, and radiation-hard data reduction at the source. We are developing a radiation-hard readout integrated circuit in 28 nm CMOS with on-chip machine learning for future intelligent pixel detectors. We will show track parameter predictions using a neural network within a single layer of silicon, and hardware tests on the first tape-outs produced with TSMC. Preliminary results indicate that reading out featurized clusters from particles above a modest momentum threshold could enable using pixel information at 40 MHz.
Semiconductor hybrid pixel detectors with Timepix3 chips, developed by the Medipix collaboration at CERN, can simultaneously measure the deposited energy and time of arrival of individual particle hits in all 256 x 256 pixels with 55 µm pitch. Their nanosecond temporal resolution was exploited to characterize the ultra-high dose-per-pulse electron beam from a linear accelerator with varying pulse lengths (few-µs range), as well as the proton beam produced in a cyclotron with varying beam current. Since Timepix3 has single-particle detection sensitivity, the AdvaPIX TPX3 detector was positioned out of the primary beam, studying the primary beam through the induced secondary radiation. Investigated quantities included irradiation time, delivered dose rate, pulse count, pulse frequency, and beam stability. The results and methods can be utilized for both online and offline beam monitoring and characterization.
In this study, high-energy carbon-ion and proton beams produced in an accelerator were used. The Minipix Sprinter, a hybrid semiconductor pixel detector with a novel operation mode and configuration customized for highly energetic particles, was positioned in the primary beam for spectral and component characterization of individual particles. This detector has demonstrated quantum-imaging sensitivity and enables real-time visualisation of particle tracks along with a full spectral and tracking response.
Particles were sorted using a trained machine-learning algorithm into: i) ions, ii) protons, iii) electrons and photons, and iv) thermal neutrons.
Results were processed for in-beam and mixed radiation fields from accelerators and space environment in terms of particle flux with particle type identification, dose rate, directional maps, and energy spectra. The outcomes provide complete beam characterization with particle identification, serving as valuable data for benchmarking MC codes.
Future collider experiments, operating at exceptionally high instantaneous luminosities, will require tracking detectors with space and time resolutions of the order of ten microns and ten picoseconds to properly perform track and vertex reconstruction. Several technologies have been explored, with the 3D-trench silicon pixel developed by the INFN TimeSPOT collaboration emerging as one of the most promising. Beam tests were conducted at SPS/H8 in 2022 and 2023, using both discrete-component low-noise electronics and an integrated ASIC developed in 28-nm CMOS. An overview of these results will be presented. Since the performance of the front-end electronics is a critical bottleneck for future 4D tracking systems, the INFN IGNITE project is exploring innovative solutions for integrated micro-systems, which will also be discussed at the conference. Finally, the latest results on the performance of highly irradiated 3D silicon sensors, up to 10$^{17}$ 1-MeV n$_{eq}$/cm$^2$, will also be presented.
The ATLAS Collaboration consists of more than 5000 members from over 100 different countries. Regional, age and gender demographics of the collaboration are presented, including their time evolution over the lifetime of the experiment. In particular, the relative fraction of women is discussed, including their share of contributions, recognition and positions of responsibility, and how these depend on other demographic measures.
Many world-renowned scientists and engineers like G. Breit, G. Budker, G. Charpak, G. Gamow, M. Goldhaber, A. Ioffe, S. Korolyov, E. Lifshitz, M. Ostrogradsky, S. Timoshenko, V. Veksler were born in Ukraine, while some, like L. Landau and M. Bogolyubov, started their careers there. Reclaiming their scientific legacy as well as that of many others helps to promote Ukrainian contributions to particle physics both inside and outside of Ukraine and to motivate the next generation of Ukrainian scientists in this time of war. We will present the status of Ukrainian scientific infrastructure two years after the start of the full-scale invasion and past, present and expected future contributions of Ukrainian scientists to CERN.
The LUX-ZEPLIN (LZ) collaboration has recently approved a new Community Agreement (CA), replacing our previous Code of Conduct. The new CA was created in response to a growing awareness that a purely aspirational Code of Conduct was not sufficient to handle all the interpersonal problems that can arise inside a large collaboration. After discussions with members of several other collaborations during the Snowmass process, members of the DPF Ethics Advisory Committee, and participation in a Code of Conduct workshop at Fermilab in November 2022, an LZ task force produced a new CA. The new document has a focus on restorative justice but also includes a formal process to respond to reported violations. In this talk, I will summarize some of the discussions leading up to the new CA and provide some thoughts on its implementation so far.
This session will present European Research Council (ERC) funding opportunities available to both early-career researchers and senior research leaders. The ERC operates according to a "bottom-up" approach, allowing researchers to identify new opportunities in any field of research. It encourages competition for funding among the most creative and competent researchers of any nationality and age. An update will be given on the calls, deadlines and budgets, suitable applicant profiles and other relevant information.
The ERC closely monitors the outcome of every call and has taken actions to tackle imbalances and potential unconscious biases. Efforts made to ensure equal treatment of all candidates, with a particular focus on gender balance, as well as data and statistics collected in running ERC schemes, with specific attention to the Physical Sciences, will be presented.
Promoting gender equality in physics, particularly through educational initiatives, is crucial given the low representation of women in the field, as evidenced by enrollment statistics. To address this, we initiated the Physics Project Days (PPD), a four-day workshop tailored for schoolgirls. This program aims to foster interest in physics and establish cross-school networks. Through hands-on experimentation in various physics disciplines, including particle physics, laser physics, and nanoscience, participants engage with cutting-edge research topics. The PPD undergoes rigorous evaluation to ensure its effectiveness and has been recognized as a valuable tool for gender-equality work by the German Research Foundation since 2015.
LHCb is a collaboration of about 1600 members from 98 institutions based in 22 countries, representing many more nationalities. We aim to work together on experimental high energy physics in the best and most collaborative conditions. The Early Career, Gender & Diversity (ECGD) office supports this goal, in particular by working towards gender equality and supporting diversity in the collaboration. The ECGD officers advise the LHCb management and act as LHCb contacts for all matters related to ECGD. They are available to listen to and advise, in a confidential manner, colleagues who have witnessed or have been subject to harassment, discrimination or other inappropriate behaviour. In this talk we briefly introduce the ECGD office, discuss what we have learnt from analysis of the collaboration's demographics, share the conclusions from discussions on different topics debated in dedicated collaboration-meeting sessions, and present the early-career initiatives that are being carried out.
I will describe recent results on the double copy for amplitudes in (A)dS4 and their soft limits, which are relevant for holography and cosmology.
The duality between color and kinematics and the associated double-copy construction have proven remarkably useful as computational tools, first in integrand construction at the multi-loop level and, more recently, in efficiently constructing the contributions of higher-derivative operators. Intriguingly, double-copy consistency relates information measured in the IR to behavior in the UV. I will present recent insights into this emergent UV structure encoded in the Wilson coefficients of gauge and gravity effective field theories in the double-copy web of theories.
We show how to construct the moduli-space integrands for one-loop superstring amplitudes from the knowledge of 10D field-theory loop integrands in the BCJ form. Our map provides an alternative to intricate computations involving worldsheet supersymmetry. This construction is a one-loop higher-point analogue of a recent conjecture for the three-loop four-point superstring amplitude.
The effective field theory (EFT) approach to new physics has risen to a prominent role given the current status of the LHC. However, the large number of operators in the Standard Model EFT (SMEFT) calls for a new organizing principle. In this talk, I will introduce the geometric construction for EFT, which manifests invariance under field redefinitions and neatly organizes physical quantities. I will then generalize this approach to the fermionic sector of EFT. As applications, I will show that both tree amplitudes and one-loop renormalization group equations can be compactly written as geometric objects. This talk is based on 2310.02490.
Black holes, neutron stars and other compact gravitating objects can be described at long distances by a point-particle effective field theory. In such an effective theory, tidal effects and/or the dynamics of the horizon are captured in a series of Wilson coefficients, the so-called Love numbers, which can be determined by matching with a complete description of the compact object in general relativity. Surprisingly, even in the classical theory, these coefficients undergo a renormalization group flow, which arises from an ambiguity in separating the compact object from its environment. In this talk I will explain the basic setup of this EFT and a systematic procedure to carry out the matching and running of the Love numbers using scattering amplitudes.
Recently, Arkani-Hamed et al. proposed the existence of zeros in scattering amplitudes in certain quantum field theories including the cubic adjoint scalar theory Tr($\phi^3$), the $SU(N)$ non-linear sigma model (NLSM) and Yang-Mills (YM) theory. These hidden zeros are special kinematic points where the amplitude vanishes and factorizes into a product of lower-point amplitudes, similar to factorization near poles. In our work, we show a close connection between the existence of such zeros and color-kinematics duality. In fact, the hidden zeros can be derived from the Bern-Carrasco-Johansson (BCJ) relations. We also show that these zeros extend via the Kawai-Lewellen-Tye (KLT) relations to special Galileon amplitudes and their corrections, evincing that these hidden zeros are also present in permutation-invariant amplitudes.
Effective Field Theories provide an interesting way to parameterize indirect BSM physics for a large class of models when its characteristic scale is larger than the one directly accessible at the LHC. Even if the Higgs boson is SM-like, BSM effects can manifest themselves through higher-dimension effective interactions between SM fields, providing indirect sensitivity through distortions of kinematic distributions. Constraints on such effects, derived from measurements of several production and decay modes of the Higgs boson and their combination using the data set collected by the CMS experiment at a centre-of-mass energy of 13 TeV, will be presented.
We present SMEFiT3.0, an updated global SMEFT analysis of Higgs, top quark, and diboson data from the LHC complemented by electroweak precision observables (EWPOs) from LEP and SLD. We consider the most recent inclusive and differential measurements from LHC Run II, together with a novel implementation of the EWPOs. We assess the impact on the SMEFT parameter space of HL-LHC measurements when added on top of SMEFiT3.0 by means of dedicated projections that extrapolate Run II data. Subsequently, we quantify the unprecedented impact that measurements from future electron-positron colliders would have on both the SMEFT parameter space and on UV-complete models. We present projections for both the FCC-ee and the CEPC based on the most recent running scenarios and include $Z$-pole EWPOs, fermion-pair, Higgs, diboson, and top quark production, using optimal observables for both $W^+W^-$ and $t\bar{t}$.
Experiments have confirmed the presence of a mass gap between the Standard Model and potential New Physics. Consequently, the exploration of effective field theories to detect signals indicative of physics beyond the Standard Model is of great interest. In this study, we examine a non-linear realization of electroweak symmetry breaking, wherein the Higgs is a singlet with independent couplings and the Standard Model fields are additionally coupled to heavy bosonic resonances. We present a next-to-leading-order determination of the oblique S and T parameters. Comparing our predictions with the experimental values allows us to impose constraints on the resonance masses, requiring them to exceed the TeV scale ($M_R$ > 3 TeV). This finding aligns with our earlier analysis, employing a less general approach, in which we computed these observables.
We study the capabilities of a muon collider, at 3 and 10 TeV center-of-mass energy, of probing the interactions of the Higgs boson with the muon. We consider all the possible processes involving the direct production of EW bosons ($W,Z$ and $H$) with up to five particles in the final state. We study these processes both in the HEFT and SMEFT frameworks, assuming that the dominant BSM effects originate from the muon Yukawa sector. Our study shows that a muon collider has sensitivity beyond the high-luminosity LHC, especially as it does not rely on the Higgs-decay branching fraction to muons. A 10 TeV muon collider provides a unique sensitivity to muon and (multi-)Higgs interactions, significantly better than the 3 TeV option. In particular, we find searches based purely on multi-Higgs production to be especially effective in probing these couplings.
In this talk we study the phenomenological implications of multiple Higgs boson production from longitudinal vector boson scattering in the context of effective field theories. We find compact representations for effective tree-level amplitudes with up to four final state Higgs bosons. Total cross sections are then computed for scenarios relevant at the LHC in which we find the general Higgs Effective Theory (HEFT) prediction avoids the heavy suppression observed in the Standard Model Effective Field Theory (SMEFT).
In this study, we analyze the possible existence of neutrino secret interactions (νSI) mediated by a new massive vector boson. We provide a recipe for setting limits on this BSM scenario via the detection of one or more neutrino events from high-energy neutrino (HEν) scattering on the non-relativistic and ultra-relativistic cosmic neutrino background (CνB), for the full mediator mass range. In particular, we present an analysis of the effect of the angular cut parameter on the coupling-constant limits in the light-mediator mass range. We illustrate the calculations with constraints on the νSI coupling constant from SN1987A and the blazar TXS 0506+056.
We consider the standard three-generation framework augmented by an extra sterile neutrino in the mass range $\Delta m^2_{41} = 10^{-4}$--$1~\mathrm{eV}^2$. In this picture, four mass spectra are possible due to the unknown signs of the atmospheric ($\Delta m^2_{31}$) and sterile ($\Delta m^2_{41}$) mass-squared differences. We study how the sensitivity to the ordering of the active states and the octant of the mixing angle $\theta_{23}$ are affected in this scenario. We also discuss the possible determination of the sign of $\Delta m^2_{41}$. This analysis is done in the context of a liquid argon detector using beam neutrinos traveling a distance of 1300 km and atmospheric neutrinos propagating through distances of $10-10^4$ km, allowing resonant matter effects. We present separate results from these sources, perform a combined study, and probe the synergy between the two to give an enhanced sensitivity. We also discuss the implications for cosmology, beta decay, and neutrinoless double beta decay in the presence of a sterile neutrino.
A new Quantum Field Theory (QFT) formalism for neutrino oscillations in vacuum is proposed. The neutrino emission and detection are identified with the charged-current vertices of a single second-order Feynman diagram for the underlying process, enclosing the neutrino propagation between these two points. The L-dependent master formula for the charged lepton production rate is derived, which provides the QFT basis for analyzing neutrino oscillations. Further, techniques are developed for constructing amplitudes of neutrino-related processes in terms of the neutrino mass matrix, with no reference to the neutrino mixing matrix. The proposed approach extensively uses Frobenius covariants within the framework of Sylvester's theorem on matrix functions. It is maintained that fitting experimental data directly in terms of the neutrino mass matrix can provide better statistical accuracy than methods that use the neutrino mixing matrix at intermediate stages.
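As an illustrative numerical sketch (not the authors' formalism or code), Sylvester's theorem on matrix functions, which underlies the Frobenius-covariant construction mentioned above, can be demonstrated for a small diagonalizable matrix with distinct eigenvalues; the matrix and function below are arbitrary examples:

```python
import numpy as np

def frobenius_covariants(A):
    """Frobenius covariants A_i of a diagonalizable matrix A with
    distinct eigenvalues lam_i, built via Sylvester's formula:
        A_i = prod_{j != i} (A - lam_j I) / (lam_i - lam_j)."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    covs = []
    for i in range(n):
        C = np.eye(n, dtype=complex)
        for j in range(n):
            if j != i:
                C = C @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
        covs.append(C)
    return lam, covs

def matrix_function(A, f):
    """Sylvester's theorem: f(A) = sum_i f(lam_i) A_i."""
    lam, covs = frobenius_covariants(A)
    return sum(f(l) * C for l, C in zip(lam, covs))

# Example: exp(A) for a 2x2 matrix with eigenvalues 2 and 3.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
expA = matrix_function(A, np.exp)
```

The covariants are projectors that sum to the identity, so any analytic function of the matrix follows from its eigenvalues alone, without diagonalizing explicitly.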
We propose a dynamical scoto-seesaw mechanism using a gauged $B-L$ symmetry. Dark matter is reconciled with neutrino mass generation, in such a way that the atmospheric scale arises through the standard seesaw,
while the solar scale is scotogenic, arising radiatively from the exchange of dark sector particles. In this way we explain the solar-to-atmospheric scale ratio. The TeV-scale seesaw mediator and the two dark fermions carry different $B-L$ charges. Dark matter stability follows from the residual matter parity that survives $B-L$ breaking. Besides being testable at colliders, the model implies sizeable charged lepton flavour violating (cLFV) phenomena, including Goldstone boson emission processes.
Radiative seesaw models are examples of testable extensions of the SM to explain the light neutrino masses. In radiative seesaw models at the 1-loop level, such as the popular scotogenic model, in order to successfully reproduce neutrino masses and mixing one has to rely either on unnaturally small Yukawa couplings or on a very small mass splitting between the CP-even and CP-odd components of the neutral scalar mediators. We discuss here scotogenic-like models where light active neutrino masses arise at the three-loop level, providing a more natural explanation for their smallness. The proposed models are in general consistent with the neutrino oscillation data and allow the measured dark matter relic abundance to be accommodated. Specific realizations also allow one to explain the W-mass anomaly and the baryon asymmetry of the Universe via leptogenesis. We explore the rich phenomenology of these models, in particular in near-future lepton flavor violation experiments.
Lepton number violation is present in models of leptogenesis and Majorana neutrino masses, but has so far not been observed experimentally. Beyond the conventional seesaw mechanisms, lepton number violation may be induced at a higher mass dimension, e.g. from effective operators at mass dimension 7. Just like the seesaw mechanisms generate the dimension-5 Weinberg operator, these dimension-7 operators have a finite number of possible tree-level UV-completions. In this work we explore the phenomenology of the full range of such dimension-7 UV-completions, including the generation of Majorana neutrino masses in 1- and 2-loop diagrams. We find that there are several regions of parameter space close to the discovery reach of both colliders and neutrinoless double beta decay experiments that can lead to the observed value of neutrino masses.
The CMS Collaboration is preparing to replace its current endcap calorimeters for the HL-LHC era with a high-granularity calorimeter (HGCAL), featuring previously unrealized transverse and longitudinal segmentation for both the electromagnetic and hadronic compartments, with 5D information (space-time-energy) read out. The proposed design uses silicon sensors for the electromagnetic section and the high-irradiation regions of the hadronic section, while in the low-irradiation regions of the hadronic section plastic scintillator tiles equipped with on-tile silicon photomultipliers (SiPMs) are used. The full HGCAL will have approximately 6 million silicon sensor channels and about 240 thousand channels of scintillator tiles. In this talk we present the ideas behind the HGCAL, the current status of the project, the lessons learnt, in particular from beam tests and from the design and operation of vertical test systems, as well as the challenges that lie ahead.
The FoCal is a high-granularity forward calorimeter to be installed as an ALICE upgrade during the LHC Long Shutdown 3 and to take data in Run 4.
It will cover a pseudorapidity interval of $3.4 < \eta < 5.8$, allowing exploration of QCD at unprecedentedly low Bjorken-$x$, down to $\approx 10^{-6}$ -- a regime where non-linear QCD dynamics are expected to be sizable.
It consists of a compact silicon-tungsten sampling electromagnetic calorimeter with pad and pixel readout to achieve the high spatial resolution needed to discriminate between isolated photons and decay photon pairs. Its hadronic component is constructed from copper capillary tubes with scintillating fibers.
The detector design allows measuring a multitude of probes, including direct photons, jets, as well as photo-production of vector mesons in ultra-peripheral collisions and angular correlations of different probes.
We will give an overview of the FoCal performance using results from recent test beams of small-scale prototypes.
The FASER experiment at the Large Hadron Collider (LHC) aims to detect new, long-lived fundamental particles and to study neutrino interactions. To enhance its discovery potential, a new W-Si preshower detector is being built, which will enable the identification and reconstruction of electromagnetic showers produced by high-energy photon pairs with separations as fine as 200 µm. The detector incorporates a cutting-edge monolithic ASIC with hexagonal pixels measuring 100 µm in pitch, designed to achieve an extended dynamic range for charge measurement and capable of storing charge information for thousands of pixels per event. The ASIC integrates fast front-end electronics based on SiGe heterojunction bipolar transistor technology, providing O(100) ps time resolution. Analog memories embedded within the pixel array facilitate frame-based event readout, minimizing dead areas. In this presentation, we detail the design and expected performance of the preshower detector.
To cope with the increase of the LHC instantaneous luminosity, new trigger readout electronics were installed on the ATLAS Liquid Argon Calorimeters. On the detector, 124 new electronic boards digitise at high speed 10 times more signals than the legacy system. Downstream, large FPGAs process up to 20 Tbps of data to compute the deposited energies. Moreover, a new control and monitoring infrastructure has been developed. This contribution will present the challenges of the commissioning, the first steps in operation, and the milestones still to be completed towards the full operation of both the legacy and the new trigger readout paths for the LHC Run 3.
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter covering the central region of the ATLAS experiment, with steel as absorber and plastic scintillators as active medium. New TileCal electronics are needed to meet the requirements of a 1 MHz trigger rate and higher ambient radiation, and to ensure better performance under the high pile-up conditions at the HL-LHC. Both the on- and off-detector TileCal electronics will be replaced. The modular front-end electronics feature radiation-tolerant commercial off-the-shelf components and a redundant design to minimise single points of failure. The results of the extensive R&D programme for on- and off-detector systems, together with the expected performance and results of beam tests with the electronics prototypes, will be discussed. A demonstrator module was inserted into the TileCal in 2019. The performance of the demonstrator will be presented.
The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the LHC. The TileCal plays an important role in the reconstruction of jets, hadronically decaying tau leptons and missing transverse energy, and provides information to the dedicated calorimeter trigger. This sampling calorimeter is composed of plastic scintillating tiles and steel absorbers. The scintillation light is read out by wavelength-shifting fibres coupled to photomultiplier tubes. Dedicated calibration systems are used to monitor and calibrate each stage of the signal production, from scintillation light to signal reconstruction. The linearity, stability in time and precision of the calibration systems will be discussed. Moreover, the energy scale and timing of the TileCal are validated using isolated muons and jets from p-p collisions. The performance of the detector using LHC Run 3 data will be shown.
The latest CMS results on spectroscopy and properties of beauty mesons and baryons are presented. The results are obtained with the data collected by the CMS experiment in proton-proton collisions at sqrt(s)=13 TeV.
The Belle and Belle$~$II experiments have collected a $1.4~\mathrm{ab}^{-1}$ sample of $e^+e^-$ collision data at centre-of-mass energies near the $\Upsilon(nS)$ resonances. These data include a 19.2$~$fb$^{-1}$ sample collected near the $\Upsilon(10753)$ resonance. We present several results related to the following processes: $e^+e^-\to \Upsilon(nS)\eta$, $e^+e^-\to \gamma X_b(\chi_{bJ}\pi^+\pi^-)$, $e^+ e^-\to h_b(1P)\eta$ and $e^+e^-\to\chi_{bJ}(1P)\omega$. The last analysis also includes data samples collected by Belle at similar centre-of-mass energies. In addition, we present Belle measurements of the $B^{0}$ and $B^+$ meson mass difference, a pentaquark search in $\Upsilon(1S)$ and $\Upsilon(2S)$ decays, as well as studies of $h_b(2P)$ decays to the $\eta \Upsilon(1S)$ and $\chi_{bJ}\gamma$ final states.
The latest studies of beauty meson decays to open charm final states from LHCb are presented. Several first observations and branching fraction measurements using Run 1 and Run 2 data samples are shown. These decay modes will provide important spectroscopy information and inputs to other analyses.
The LHCb experiment collected the world's largest sample of charmed hadrons during Runs 1 and 2 of the LHC (2011--2018). This allows some of the world's most precise measurements of production, quantum numbers and decay properties of known charmed baryons, as well as searches for new excited states. The latest results in this field are presented, including some new amplitude analyses.
The world's largest sample of $J/\psi$ events accumulated at the BESIII detector offers a unique opportunity to investigate $\eta$ and $\eta'$ physics via two-body $J/\psi$ radiative or hadronic decays. In recent years the BESIII experiment has made significant progress in $\eta/\eta'$ decays. A selection of recent highlights in light meson spectroscopy at BESIII is reviewed in this report, including the observation of the cusp effect in $\eta'\to\pi^0\pi^0\eta$, transition form factor measurements, as well as the search for rare/forbidden decays of $\eta/\eta'$.
This presentation will feature several recent results of charmonium decays, including four first-time observations: $\psi(3686) \to \Omega^- K^+ \bar{\Xi}^0$, $\eta_c(2S) \to K^+ K^- \eta$, $\eta_c(2S) \to \pi^+ \pi^- K^0_S K^{\pm} \pi^{\mp}$, and $\chi_{cJ} \to 3(K^+K^-)$. Additionally, an updated measurement of the M1 transition $\psi(3686) \to \gamma \eta_c(2S)$ with $\eta_c(2S) \to K\bar{K} \pi$ will be discussed. In the search for $\eta_c(2S) \to \pi^+ \pi^- \eta_c$, no significant signal was found, leading to the provision of an upper limit. These new measurements provide valuable insights into charmonium decay, contributing to a better understanding of the decay mechanism of charmonia and, consequently, a deeper comprehension of non-perturbative strong interactions.
The Belle and Belle$~$II experiments have collected a $1.4~\mathrm{ab}^{-1}$ sample of $e^+e^-$ collision data at centre-of-mass energies near the $\Upsilon(nS)$ resonances. These samples contain a large number of $e^+e^-\to c\bar{c}$ events that produce charmed mesons. We present measurements of charm-mixing parameters from flavour-tagged $D^0\to K^0_{\rm S}\pi^+\pi^-$ decays. Direct $C\!P$ violation is searched for in $D^0\to K^0_{\rm S}K^0_{\rm S}$ decays, $D^0\to \pi^0\pi^0$ decays and several modes where the $D$ meson decays to a four-body final state. For the four-body decays, asymmetries in the distributions of triple and quadruple moments probe for $C\!P$ violation.
The radiative decays of $J/\psi$ provide a gluon-rich environment and are therefore regarded as one of the most promising hunting grounds for glueballs. Using the world's largest samples of $J/\psi$ events produced in $e^+e^-$ annihilation, BESIII performed the first measurements of the quantum numbers of the $X(2370)$ particle, along with its mass, production, and decay properties, and found that they are consistent with the features of a glueball.
At BESIII, the lineshapes of $e^+e^- \to \phi \pi\pi$, $\omega \pi\pi$, $\phi \pi^0$, $K_S K_L \pi^0$, $\eta \pi \pi$ and $\omega \eta'$ are measured from 2.0 to 3.08 GeV, where resonant structures are observed in these processes. These results provide important information on light vector mesons (i.e. excited $\rho$, $\omega$ and $\phi$ states) in the energy region above 2 GeV.
In electron-positron annihilation, the process $e^+e^- \to \chi_{c1}$ can occur via the production of two virtual photons or through the neutral current, and is therefore suppressed with respect to the normal annihilation process via a single virtual photon. Using a dedicated scan sample around the $\chi_{c1}$ mass, the direct production of the $\chi_{c1}$ has been established for the first time. This provides a new approach to the study of the internal nature of hadrons.
This talk will present four recent measurements conducted by BESIII, focusing on the cross sections of electron-positron annihilation into open-charm and hidden-charm final states at center-of-mass energies ranging from 3.80 to 4.95 GeV. The open-charm final states include $e^+ e^- \to D \bar{D}$ and $D_s^+ D_s^-$, revealing abundant structures in their cross-section line shapes. The hidden-charm final states encompass $e^+ e^- \to \eta h_c$ and $\omega \chi_{c1/2}$, with an observed structure near 4.2 GeV in the former channel and two structures in the latter. While one of these structures can be linked to the previously discovered $\psi(4415)$, the others represent novel observations. These new cross-section measurements in the $\tau$-charm energy region provide crucial insights into the spectrum of vector charmonium and charmonium-like states.
Studying the properties and behavior of pentaquarks deepens our understanding of quantum chromodynamics (QCD) and the strong interactions. The LHCb experiment, with a large heavy-flavor dataset and detector performance optimized for beauty and charm hadron studies, is uniquely positioned to explore the properties of heavy-flavor pentaquark states. This talk highlights the latest advancements in the study of pentaquark states within the LHCb experiment, including studies of pentaquark states in both prompt and non-prompt production. These results hold important reference value for understanding the formation of pentaquark states.
Since the observation of the X(3872), a large number of exotic tetraquark candidates have been observed over the past 20 years. Moreover, some of these hadrons suggest an explicitly exotic internal structure: charged, open-flavour, doubly heavy-flavour and fully heavy-flavour states have enriched the field of exotic spectroscopy, along with an increasing interest from the theory community. These states provide a unique benchmark to study QCD binding rules. In this talk, the latest results on exotic tetraquark studies from LHCb are presented.
Recent ATLAS results on exotic hadron spectroscopy will be presented, including studies of exotic tetraquarks using various final states and searches for exotics in the $\Upsilon+2\mu$ channel.
Since the start of the LHC, pinning down the properties of top quarks has been a vital point of the LHC research program. Only recently was it understood how top-quark properties can even be used to probe quantum entanglement and thereby study foundational problems of quantum mechanics at the LHC. In this talk, recent CMS measurements of top-quark properties and their interpretation in terms of quantum measurements will be discussed.
We present the first complete high-precision results for the top-quark decay width $\Gamma_t$, $W$-helicity fractions and semi-inclusive distributions for the top-quark decay process to the third order in the strong coupling constant $\alpha_s$. We find, in particular, that the pure $\mathcal{O}(\alpha_s^3)$ correction decreases $\Gamma_t$ by $0.8\%$ of the previous $\mathcal{O}(\alpha_s^2)$ result, considerably exceeding the error estimated by the usual scale-variation prescription. With this critical piece of the correction incorporated, our most precise theoretical prediction to date meets the precision requirements of future colliders. This computation is achieved by a very efficient approach, applied recently also to the calculation of the $\mathcal{O}(\alpha_s^3)$ QCD correction to the lepton-pair invariant-mass spectrum in B-meson semi-leptonic decay.
In this talk, I will present the results of the first calculation of open bottom production at hadron colliders at NNLO+NNLL, i.e. a next-to-next-to-leading-order calculation that resums collinear logarithms at next-to-next-to-leading-logarithmic accuracy. This new computation achieves significantly reduced theory errors compared to previous calculations, with errors of just a few percent at high transverse momenta. These results are compared to data from several measurements performed at the Tevatron, where lower-order predictions have previously been found to underestimate the cross section. To perform such comparisons, the hadronisation and decay of the b-quark are included in the theory calculation where needed, yielding predictions for a wide range of final states.
We discuss the sensitivity to quartic $\gamma \gamma \gamma \gamma$, $\gamma \gamma WW$, $\gamma \gamma ZZ$, $\gamma \gamma t \bar{t}$ anomalous couplings at the LHC via photon-induced processes. Tagging the intact protons allows improving the sensitivities by two to three orders of magnitude with respect to standard methods. We also discuss the sensitivity to axion-like particle (ALP) production.
This talk will present the recent results from various analyses on signals containing intact protons measured by the CMS Precision Proton Spectrometer (PPS), which includes searches for New Physics in the electroweak sector. As the operation of the HL-LHC will require more stringent selection of intact protons due to the higher pileup, the PPS2 project will be presented and projections for potential physics analyses with the upcoming data from Phase-II will be discussed.
The future collider LHeC is set to operate at a center-of-mass energy of 1.2 TeV and is anticipated to provide an integrated electron-proton luminosity of about 1 ab$^{-1}$. This talk will present a comprehensive survey of possible studies of high-energy photon-photon processes at the LHeC, for the $\gamma \gamma$ center-of-mass energy of up to 1~TeV. The scientific potential of studying such photon-photon interactions is evaluated for various $\gamma \gamma$ processes, including, in particular, the exclusive production of pairs of W and Z bosons, lepton pairs, Higgs bosons as well as pairs of charged supersymmetric particles.
We present high-statistics measurements of the primary cosmic rays: protons, helium, carbon, oxygen, neon, magnesium, silicon, sulfur, iron, and nickel.
The data show that, to a high degree of accuracy, there are only two classes of primary cosmic-ray elements for nuclei with $Z \geq 2$.
Precision measurements of the cosmic-ray deuteron (D) flux are presented as a function of rigidity from 1.9 to 21 GV, based on 21 million D nuclei. We observe that over the entire rigidity range the D flux exhibits nearly identical time variations to the p, $^3$He, and $^4$He fluxes. Above 4.5 GV, the D/$^4$He flux ratio is time independent and its rigidity dependence is well described by a single power law $\propto R^\Delta$ with $\Delta_{D/^4He} = -0.108 \pm 0.005$. This is in contrast with the $^3$He/$^4$He flux ratio, for which we find $\Delta_{^3He/^4He} = -0.289 \pm 0.003$. The significance of $\Delta_{D/^4He} > \Delta_{^3He/^4He}$ exceeds $10\sigma$. In addition, we find that above $\sim 13$ GV the rigidity dependence of the D and p fluxes is identical, with a D/p flux ratio of $0.027 \pm 0.001$. These observations show that, contrary to expectations, cosmic deuterons have a sizeable primary component.
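A single power law in rigidity, as quoted for the D/$^4$He ratio above, is conventionally extracted with a fit that is linear in log-log space, since $\log(\mathrm{ratio}) = \Delta \log R + \mathrm{const}$. A minimal sketch with synthetic numbers (the normalization, binning, and noise level below are illustrative assumptions, not AMS data):

```python
import numpy as np

# Synthetic flux-ratio measurement: ratio ∝ R^Δ with 1% scatter.
rng = np.random.default_rng(0)
R = np.geomspace(4.5, 21.0, 12)        # rigidity bins [GV], illustrative
delta_true = -0.108                     # spectral index assumed for the mock data
ratio = 0.05 * R**delta_true * (1 + 0.01 * rng.standard_normal(R.size))

# A power law is a straight line in log-log space; fit the slope.
delta_fit, log_norm = np.polyfit(np.log(R), np.log(ratio), 1)
```

In a real analysis the fit would of course weight each bin by its measurement uncertainty; the sketch only shows why the spectral index is read off as a log-log slope.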
We present high-statistics measurements of the secondary cosmic rays lithium, beryllium, boron, fluorine, and phosphorus. The unexpected rigidity dependence of the secondary cosmic-ray fluxes and their ratios to the primary cosmic rays, such as Li/C, Be/C, B/C, Li/O, Be/O, B/O, F/Si, and P/Si, is discussed.
We present for the first time high-statistics precision measurements of the time structures of Li, Be, B, C, N, and O nuclei in cosmic rays over an entire solar cycle (11 years), from May 2011 to November 2022, between 2 and 60 GV. The fluxes and their ratios have been determined for 147 Bartels rotations. The fluxes are anti-correlated with solar activity, the amplitude of the time structures decreases with rigidity, and all nuclei exhibit similar time variations. The Li, Be, and B fluxes exhibit a significant difference in solar modulation with respect to the C, N, and O fluxes. This observation provides new information on the propagation of cosmic rays in the heliosphere.
Information on time variations of the antiproton spectrum is very limited. We present continuous twelve-year measurements of the cosmic-ray antiproton spectrum from 1 to 42 GV. The measured antiproton time variations are distinctly different from those of electrons, positrons, and protons. This provides unique information for the understanding of heliosphere physics.
The NUSES space mission focuses on advancing observational and technological approaches to investigate various cosmic phenomena.
This includes high-energy astrophysical neutrinos, the study of low-energy cosmic and gamma rays, the Sun-Earth environment, space weather, and the interactions within the Magnetosphere-Ionosphere-Lithosphere Coupling (MILC) system.
NUSES comprises two experiments, Terzina and Zirè. Terzina's primary objective is the detection of ultra-high-energy cosmic rays or neutrino-induced extensive air showers. Zirè, which also includes a low-energy module (LEM), is dedicated to measuring electrons, protons, and light nuclei up to a few hundred MeV, and to detecting MeV photons. The NUSES light readout system is based on the use of Silicon Photomultipliers (SiPMs).
This work explores the scientific objectives, design, and current status of the project, highlighting the mission's commitment to advancing scientific knowledge through cutting-edge sensing technology.
In this talk, a mechanism for producing a cosmologically-significant relic density of one or more sterile neutrinos will be discussed. This scheme invokes two steps: First, a population of "heavy" sterile neutrinos is created by scattering-induced decoherence of active neutrinos; Second, this population is transferred, via sterile neutrino self-interaction-mediated scatterings and decays, to one or more lighter mass (∼10 keV to ∼1 GeV) sterile neutrinos that are far more weakly (or not at all) mixed with active species and could constitute dark matter. Dark matter produced this way can evade current electromagnetic and structure-based bounds, but may nevertheless be probed by future observations.
Many extensions to the Standard Model predict new particles decaying into two bosons (W, Z, photon) making these important signatures in the search for new physics. Searches for such diboson resonances have been performed in different final states and novel analysis techniques, including unsupervised learning, are also used to extract new features from the data. This talk summarises such recent ATLAS searches with Run 2 data collected at the LHC and explains the experimental methods used, including vector-boson-tagging techniques.
A summary of searches for heavy resonances with masses exceeding 1 TeV decaying into pairs or triplets of bosons is presented, performed on data produced by LHC pp collisions at $\sqrt{s}=13$ TeV and collected with the CMS detector during 2016--2018. The common feature of these analyses is the boosted topology, namely the decay products of the considered bosons (both the electroweak W, Z bosons and the Higgs boson) are expected to be highly energetic and close in angle, leading to non-trivial identification of the quarks and leptons in the final state. The exploitation of jet substructure techniques increases the sensitivity of the searches where at least one boson decays hadronically. Various background estimation techniques are adopted, based on data-MC hybrid approaches or relying only on control regions in data. Results are interpreted in the context of the Warped Extra Dimension and Heavy Vector Triplet theoretical models, two possible scenarios beyond the standard model.
Beyond the standard model theories with extended Higgs sectors (e.g. SUSY) or extra spatial dimensions predict resonances with large branching fractions to a pair of Higgs bosons and negligible branching fractions to light fermions. We present an overview of searches for new physics containing Higgs boson pairs in the final state, using proton-proton collision data collected with the CMS detector at the CERN LHC.
Many theories beyond the Standard Model (SM) predict new physics phenomena that decay hadronically to dijet or multijet final states. This talk summarises the latest results from the ATLAS detector involving these final states, using the Run 2 dataset. A number of sensitive kinematic observables are explored, including invariant mass and angular distributions. More exclusive final states and novel techniques are also explored.
The smallness of neutrino masses, together with their observed oscillations, could be pointing to physics beyond the standard model that can be naturally accommodated by the so-called "seesaw" mechanism, in which new Heavy Neutral Leptons (HNLs) are postulated. Several models with HNLs exist that incorporate the seesaw mechanism, sometimes also providing a DM candidate or giving a possible explanation for the baryon asymmetry. This talk presents an overview of the most recent searches for HNLs interpreted in such models, using both prompt and long-lived signatures in CMS with the full Run 2 dataset collected at the LHC. A special focus is given to HNL signatures that benefit from the exploitation of dedicated data streams and innovative usage of the CMS detector.
BSM theories extending the Standard Model gauge group are well motivated by grand unification, compositeness or flavor symmetries, and naturally introduce additional gauge bosons. Existing experimental bounds from the LHC exclude an additional neutral gauge boson Z' with masses of up to about 5 TeV, depending on the model. The reach could be extended at future lepton colliders due to the cleaner collision environment. In our contribution, we show that a muon collider operating at 10 TeV could extend this reach by an order of magnitude for a vast set of BSM scenarios, far beyond the collider energy. We also present a framework to efficiently discriminate between different Z' models via their vector and axial-vector couplings using leptonic observables. We briefly discuss the impact of systematic uncertainties as well as of beam polarization, if available at a muon collider.
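For context on how leptonic observables separate vector from axial-vector couplings (a standard textbook relation, not taken from this contribution): on a narrow $Z'$ resonance in $\mu^+\mu^- \to Z' \to f\bar{f}$, the forward-backward asymmetry takes the form

```latex
A_{FB} = \frac{3}{4}\,\mathcal{A}_\mu\,\mathcal{A}_f,
\qquad
\mathcal{A}_f = \frac{2\,g_V^f\,g_A^f}{(g_V^f)^2 + (g_A^f)^2},
```

so that measuring $A_{FB}$ together with the total rate constrains the ratio $g_V^f/g_A^f$ for each fermion species, which is what allows different $Z'$ coupling hypotheses to be told apart.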
In preparation for Run 3 at the LHC, the MC Simulation performed with Geant4 within ATLAS has undergone significant improvements to enhance its computational performance and overall efficiency. This talk offers a comprehensive overview of the optimizations implemented in the ATLAS simulation for Run 3. Notable developments include the application of EM range cuts, the implementation of Neutron and Photon Russian roulette and the development of the Woodcock tracking in the EM Endcap Calorimeter, the tuning of simulation parameters, smarter and more efficient geometry descriptions, the implementation of new Geant4 core features and improvements that target the way Geant4 is linked and used within the framework. These enhancements collectively resulted in a speedup in CPU time of a factor of 2 compared to the baseline configuration used in Run 2. In addition, this contribution provides an overview of forthcoming optimizations, emphasizing both immediate and longer-term enhancements.
Simulating detector and reconstruction effects on physics quantities is of paramount importance for data analysis, but unsustainably costly for the upcoming HEP experiments.
The most radical approach to speed-up detector simulation is a Flash Simulation, as proposed by the LHCb collaboration in Lamarr, a software package implementing a novel simulation paradigm relying on deep generative models and seq2seq attention-driven techniques to deliver simulated samples. Thanks to its modular layout, Lamarr provides analysis-level quantities by applying a pipeline of machine-learning-based modules that properly transforms the information resulting from physics generators.
Good agreement is observed by comparing key reconstructed quantities obtained with Lamarr against those from the existing detailed Geant4-based simulation. With Lamarr integrated within the general LHCb Simulation software framework, we show that a two-orders-of-magnitude cost reduction can be achieved.
The simulation of MC events is a crucial task and an indispensable ingredient for every physics analysis. To reduce the CPU needs of the Geant4 simulation, ATLAS has developed a strong program to replace parts of the simulation chain with fast simulation tools. Among those tools is AtlFast3, which utilizes a combination of Generative Adversarial Networks and sophisticated parametrizations for the fast simulation of showers in the EM and hadronic calorimeters. FATRAS is a tool that approximates particle interactions with the material through physics formalisms. An integration of FATRAS with the experiment-independent common tracking software (ACTS) is also in development. Track overlay is a technique to speed up the production of MC samples with additional interactions. Machine learning techniques are used to ensure this method can be applied even in dense tracking environments. This talk will discuss the status of the development of these tools as well as their performance.
The Circular Electron Positron Collider (CEPC) is a future Higgs factory to measure the Higgs boson properties. As for other future experiments, the simulation software plays a crucial role in CEPC for detector design, algorithm optimization and physics studies. Due to similar requirements, the software stack from the Key4hep project has been adopted by CEPC. As the initial application of Key4hep, a simulation framework has been developed for CEPC based on DD4hep, EDM4hep and k4FWCore since 2020. However, the current simulation framework for CEPC lacks support for parallel computing. To benefit from multi-threading techniques, the Gaussino project from the LHCb experiment has been chosen as the next simulation framework in Key4hep. This contribution presents the application of Gaussino to CEPC. The development of the CEPC-on-Gaussino prototype will be shown and the simulation of a tracker detector will be demonstrated.
Micropattern Gaseous Detectors (MPGDs) rely heavily on simulation of the particle passage through the detector, as such studies allow scientists to greatly reduce the cost and development time of prototyping. Although Garfield++ is a very important tool for the simulation of MPGDs, it is computationally intensive, particularly when large detector volumes and high gas gains are required. In order to model the interaction of relativistic particles with gaseous detectors, the High Energy Electro-Dynamics (HEED) photo-absorption ionisation (PAI) model was added to the parallel Garfield toolkit (pGarfield). Thus, the whole track of an ionising particle through a detector can be simulated with the new pGarfield/HEED implementation. The results will illustrate how parallelization reduced the time and CPU power required to compute the full simulation. Additionally, a number of studies were conducted for further optimisation, and their findings will also be reported.
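The speedup from pGarfield rests on the fact that individual particle tracks are statistically independent and thus embarrassingly parallel. A toy Python sketch of this idea, using a trivial stand-in for a HEED/PAI ionisation model (not the actual Garfield++/pGarfield API):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_track(seed, n_clusters=100):
    """Toy stand-in for an ionisation simulation: a track deposits a random
    number of electrons per cluster. Purely illustrative physics."""
    rng = random.Random(seed)  # per-track RNG keeps tracks independent
    return sum(rng.randint(1, 4) for _ in range(n_clusters))

def simulate_tracks_parallel(seeds, workers=4):
    """Fan independent track simulations out to a worker pool; results are
    identical to a sequential loop because each track has its own seed."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_track, seeds))
```

Because each track carries its own seeded generator, the parallel result is bit-identical to the sequential one, which is the property that makes validating a parallelized simulation against its serial reference straightforward.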
Particle physics relies on Monte Carlo (MC) event generators for theory-data comparison, necessitating several samples to address theoretical systematic uncertainties at a high computational cost. The available MC statistics become a limiting factor, and the significant computational cost a bottleneck, in most physics analyses. In this talk, the Deep neural network using Classification for Tuning and Reweighting (DCTR) is used to reweight simulations to different models or model parameters by using the full event kinematic information. This methodology avoids the need to simulate the detector response multiple times by incorporating the relevant variations in a single sample. DCTR is evaluated for the reweighting of two systematic uncertainties in MC simulations of top quark pair production in the CMS experiment. Additionally, it is investigated for reweighting a next-to-leading-order generator to a next-to-next-to-leading-order generator for top quark pair production.
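At its core, DCTR-style reweighting uses the likelihood-ratio trick: a classifier trained to distinguish the nominal sample (label 0) from the target model (label 1) outputs a score f(x), and each nominal event is reweighted by f/(1-f). A minimal sketch, assuming already-calibrated classifier scores (the real method trains a deep network on full event kinematics):

```python
def dctr_weights(scores, eps=1e-6):
    """Turn calibrated classifier scores f(x) into per-event weights
    w(x) = f/(1-f), the likelihood ratio between target and nominal."""
    weights = []
    for f in scores:
        f = min(max(f, eps), 1.0 - eps)  # guard against saturated scores
        weights.append(f / (1.0 - f))
    return weights
```

For example, a score of 0.5 (classifier cannot tell the samples apart) gives weight 1, while a score of 0.75 gives weight 3, pulling the nominal sample toward the target model.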
The Circular Electron Positron Collider (CEPC) introduces new challenges for the vertex detector in terms of material budget, spatial resolution, readout speed, and power consumption. A Monolithic Active Pixel Sensor (TaichuPix) has been developed for CEPC.
The baseline vertex detector is designed with a barrel structure of three double layers. This structure aims to minimize particle scattering and enhance the impact-parameter resolution. It involves installing silicon pixel sensors and cables on both sides of the support structure. Each ladder consists of a common support structure and two layers of silicon detectors.
The vertex detector prototype has been developed and characterized at the DESY test beam facility, and the results indicate a spatial resolution better than 5 µm and a detection efficiency better than 99%.
The Circular Electron Positron Collider (CEPC) is a proposed future Higgs and Z factory. To achieve an excellent momentum resolution for the precision measurements, the tracking system has to be covered by sensors with good spatial resolution and low material budget, while remaining cost-effective over a large sensitive area. High-Voltage CMOS (HVCMOS) is a promising technology option. In this talk the development of a tracker concept based on HVCMOS sensors will be presented. The latest development of HVCMOS sensors using a 55 nm process will be introduced.
The CERN-proposed $e^+e^-$ Future Circular Collider (FCC-ee) is designed as an electroweak, flavour, Higgs and top factory with unprecedented luminosities. Many measurements at the FCC-ee will rely on the precise determination of vertices, measured by dedicated vertex detectors.
All vertex detector designs use Monolithic Active Pixel Sensors (MAPS) with a single-hit resolution of ≈3 µm and a material budget as low as 0.3% $X/X_0$ per detection layer, which is within specifications for most of the physics analyses.
This contribution presents the status of the fully engineered vertex detectors, their integration with the collider beam pipe, and discusses their predicted performance using DD4hep full simulation. A concept for an ultra-light vertex detector using curved wafer-scale MAPS is also presented, which reduces the material budget to nearly one-fifth. This improves the vertexing capabilities, especially for heavy-flavour decays such as $B^0 \to K^{*0}\tau^+\tau^-$.
The IDEA detector concept has been proposed for experiments at future high-energy electron-positron colliders, covering a rich physics program from the Z to WW, H and ttbar. The tracking system of the IDEA detector concept consists of different subsystems: a vertex detector, an inner tracker, a drift chamber and a silicon wrapper between the drift chamber and the calorimeters. In this talk the layout of the inner tracker and silicon wrapper will be described. The core of the system is multi-chip modules based on the ATLASPIX3 monolithic pixel detector. Prototypes of quad-modules and staves for the barrel region have been realized, including the cooling distribution. The performance of the individual components has been measured, and a demonstrator program for the feasibility of their integration is under way.
High-voltage CMOS pixel sensors are proposed for use in future particle physics experiments. The ATLASPIX3 chip consists of 49000 pixels of dimension 50 μm x 150 μm, realized in TSI 180 nm HVCMOS technology. It was the first full-reticle-size monolithic HVCMOS sensor suitable for the construction of multi-chip modules and supporting serial powering through shunt-LDO regulators. The readout architecture supports both triggered and triggerless readout with zero-suppression.
With the ability to be operated in a multi-chip setting, a 4-layer telescope made of ATLASPix 3.1 was developed, using the GECCO readout system as for the single-chip setup. To demonstrate the multi-chip capability and for its characterisation, a beam test was conducted at DESY using 3--6 GeV positron beams with the chips operated in triggerless readout mode with zero-suppression. The detector performance has also been tested with hadron beams, operating both with and without the built-in power regulators.
The hadron particle identification provided by the RICH system in LHCb over a momentum range of 2.6 - 100 GeV/c has been a key element of the success of the experiment and will remain equally important for Upgrade II. A substantial improvement in the precision of the measurements of the space and time coordinates of the photons detected in the RICH detectors is needed to maintain the excellent performance at instantaneous luminosities up to 7.5 times those expected for Upgrade I and 75 times those delivered in the past. This will require a readout strategy with high-resolution timing information and significant improvements in the resolution of the reconstructed Cherenkov angle, new optical schemes and very lightweight components. The reconstruction software will also need a major upgrade to benefit from these improvements. The key elements towards the realisation of this programme will be discussed, with an overview of the R&D, simulation results and performance studies.
Diversity and inclusion are vital for effective collaboration within an organisation like ATLAS, and the Early Career Scientists Board (ECSB) is an essential part of ATLAS’s efforts in this area.
The ECSB continuously organises workshops and events to provide a platform for early-career scientists to develop their skills and careers in science more effectively and works to identify and eliminate possible obstacles that may hinder the growth of early-career researchers in the ATLAS collaboration. But it's not just about career development: the ECSB believes that personal development should be allowed to happen regardless of individual background and that everyone in the collaboration should feel heard, respected, and valued. This presentation highlights the importance of supporting young scientists in creating an inclusive community.
The Muon g-2 collaboration consists of approximately 200 members with a variety of backgrounds: different scientific research disciplines, home institutions, countries of origin, and career stages. The scientific mission of the collaboration is to measure the anomalous magnetic moment of the muon with unprecedented precision. In performing such measurements, we face complex problems every day that we aim to solve efficiently with our resources. A diverse collaboration with a variety of backgrounds helps us come up with effective solutions to these complex problems. This talk will summarize the organizational structures and the ED&I activities that the collaboration has put in place to build and maintain an inclusive and equitable work environment.
The CMS Collaboration is host to thousands of members from around the world, working together on a wide variety of research topics towards a better understanding of the fundamental processes that make up our universe. CMS is formed from over 250 institutes in 58 countries, bringing a wealth of diverse perspectives which enhance our science. The CMS Diversity and Inclusion Office has grown through efforts from the Collaboration to improve the working environment of all members. Recently, the CMS D&I Office has engaged in the implementation of formal structural changes, from demographic data collection and analysis, to advising decision-making bodies on equitable selection practices, to drafting recommendations for ensuring a safe and collaborative environment for all. This poster presents selected material from recent efforts of the CMS D&I Office.
The Institute of Physics' flagship gender equality award, Project Juno, was a pivotal initiative for fostering inclusivity within UK & Ireland university physics departments. In 2023, the School of Physics & Astronomy at the University of Edinburgh achieved the highest level of recognition – Juno Champion status.
In this presentation, as the Chair of the Juno application activity in Edinburgh, I will reflect on positive changes, such as financial support for childcare expenses for conference and research travel and our Neurodiversity Network, and ongoing work such as our desires around "decolonising" the taught physics curriculum; I'll also reflect on what didn't work.
Since Project Juno concluded in 2023, the presentation will also introduce the new inclusion model that replaces it. This transition marks a new era in considering how to create diverse, equitable, and inclusive environments within physics departments.
As a publicly funded international organization, it is essential that CERN's personnel reflect the diverse nationalities and genders of our Member and Associate Member States. As such, we record and report on these dimensions.
In this context, CERN's 25 by '25 initiative is yielding better than expected progress on gender. However, retaining this diversity is the next frontier.
And what about the invisible dimensions we do not record? Our cultural, ethnic, neuro, socio-economic, religious, (dis)ability, sexual orientation, and other gender diversity?
Our "in-group" privileges can mask our parallel "out-group" experiences, even to ourselves.
Beyond gender and nationality, CERN's Diversity & Inclusion Programme Leader invites the audience to participate in an Invisible Dimensions Poll. Shedding light on - and being curious about - less visible characteristics may be key to nurturing and retaining the diversity we need.
LHC experiments are diverse environments that bring together thousands of members from various countries to achieve common goals. Thus, they offer a great opportunity for successful collaboration. However, the culturally diverse nature of these collaborations also presents unique challenges. Differences in culture can manifest in our communication style, which directly affects how we perceive interpersonal interactions. At ALICE, we run two workshops aimed at promoting wellbeing and inclusivity in the collaboration, with the hope of mitigating such conflicts. Inclusive Workspaces is arranged remotely to enable participation from people in all time zones. A new course, Collaborating in Culturally Diverse Teams, is offered in person only, during the ALICE collaboration weeks. In this talk, we will share the rationale, content, and experiences from the two courses we are currently arranging within the ALICE collaboration.
We study the (ambi-)twistor model for spinning particles interacting via electromagnetic field, as a toy model for studying classical dynamics of gravitating bodies including effects of both spins to all orders. The all-orders-in-spin effects are encoded as a dynamical implementation of the Newman-Janis shift, and we find that the expansion in both spins can be resummed to simple expressions in special kinematic configurations, at least up to one-loop order. We also observe that cutting rules associated with causality prescription for worldline propagators can be viewed as Poisson brackets of subdiagrams.
We will report on recent progress on obtaining classical observables in general relativity from the heavy-mass effective theory. As a concrete example we will discuss the NNLO corrections to the radiation produced by the scattering of two heavy scalars modelling Schwarzschild black holes. I will also describe how to extract waveforms from this result using the observable-based KMOC approach.
We will discuss the computation of classical tree-level five-point scattering amplitudes for the two-to-two scattering of spinning celestial objects with the emission of a graviton. Using this result, we will then turn to the computation of the leading-order time-domain gravitational waveform. The method we describe is suitable for arbitrary values of classical spin of Kerr black holes and does not require any expansion in powers of the spin.
Gravity is a fundamental theory of physics, but because of its weakness our understanding of it remains limited. Recent computational advancements, initially developed for the Standard Model, have provided us with new tools to explore its effects, opening up exciting opportunities to study gravitational interactions and compare them to observational data. This talk will discuss some of the latest computational advancements, focusing on higher-order perturbative effects, spin, and how such effects contribute to precision computations of observables in general relativity using amplitudes.
The double copy is a powerful tool allowing us to obtain amplitudes in gravity from simpler ones in gauge theory. It was originally derived from string theory, relating tree-level closed-string amplitudes to two copies of open-string amplitudes. In the field theory limit, this reduces to obtaining tree-level graviton amplitudes from the "square" of tree-level gluon amplitudes. At the same time, these field theory amplitudes have miraculously simple expressions coming from twistor space, for any number of external legs and any helicity configuration of gluons and gravitons. However, the double copy relation between these formulae has historically been extremely non-obvious. In this work we use concepts from graph theory to derive a double copy based in twistor space, and explore what this can teach us about the relation between gauge theory and gravity.
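For orientation, the simplest field-theory instance of the double copy is the four-point KLT relation (sign and normalisation conventions vary between references):

```latex
M_4^{\rm tree}(1,2,3,4) = -\, i\, s_{12}\, A_4^{\rm tree}(1,2,3,4)\, \tilde{A}_4^{\rm tree}(1,2,4,3),
\qquad s_{12} = (p_1 + p_2)^2 ,
```

where $M_4$ is the graviton amplitude and $A_4$, $\tilde{A}_4$ are colour-ordered gauge-theory amplitudes.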
I will review determinant operators in N=4 super Yang-Mills theory with gauge group SU(N), which are half BPS dimension N operators, also known as giant gravitons. I will discuss our recent paper on the 4-point correlation function of two dimension 2 superconformal primary operators and two determinant operators, which is dual, by AdS/CFT, to two gravitons scattering off a D3-brane that moves along the geodesic in AdS. This has been studied in the weak 't Hooft coupling regime up to 3 loops in the planar limit. By integrating over the spacetime coordinates with a certain measure, we can use supersymmetric localisation to study the correlator at arbitrary 't Hooft coupling. We obtain closed formulas for the integrated correlator for arbitrary 't Hooft coupling in the planar limit and beyond. Finally, I will discuss how we can use SL(2,Z) invariance to complete our results to include instanton contributions.
Obtaining experimental data on the trilinear Higgs boson self-coupling $\kappa_3$ and the quartic self-coupling $\kappa_4$ is crucial for understanding the structure of the Higgs potential in beyond the Standard Model theories. While Higgs pair production allows directly investigating $\kappa_3$, triple Higgs production processes offer complementary insights into $\kappa_3$ and can also provide initial experimental constraints on $\kappa_4$, despite the lower cross section rates. Our study focuses on triple Higgs production at the HL-LHC, using efficient Graph Neural Network techniques to maximise statistical significance in the $6b$ and $4b2\tau$ decay modes. We demonstrate the potential to establish limits on the values of both couplings beyond theoretical perturbative unitarity constraints. Furthermore, future high-energy lepton colliders operating at the TeV scale present opportunities for further analysis of triple Higgs production.
Triple Higgs production will allow us to directly probe the nature of the scalar potential in high-energy physics. In particular, it will permit us to probe the quartic self-coupling of the Higgs boson. In this talk we will discuss the prospects of measuring triple Higgs production at proton-proton colliders within and beyond the Standard Model (BSM), considering the final state in which each Higgs decays into a $b\bar{b}$ pair (six-b-jet final state). The BSM scenarios reviewed include extensions with one and two singlets; the effect of effective field theory operators will also be reviewed.
Measuring the Higgs self-coupling is a key target for future colliders, in particular through di-Higgs production at e+e- linear colliders with $\sqrt{s} \ge 500$ GeV, e.g. at ILC, C3 or CLIC. This contribution will discuss the roles and the interplay of di-Higgs production processes at various collider energies, including the case of non-SM values of the self-coupling. Previous studies, already based on Geant4 detector simulation, established that the Higgs self-coupling can be extracted with 10-27% precision and provided a solid understanding of the limiting factors. This provides a robust starting point for exploring the potential of more modern and sophisticated reconstruction and analysis techniques. We review the impact of advanced, often machine-learning-based algorithms, e.g. for jet clustering, kinematic fitting and matrix-element-inferred likelihoods, on the reconstruction of ZHH events, and offer an outlook on what can be expected for the self-coupling measurement.
The process of Higgs pair production at the LHC is of particular interest since it sensitively depends on the trilinear self-coupling of the detected Higgs boson and therefore provides experimental access to the Higgs potential and important information about the electroweak phase transition in the early Universe. In this talk both resonant and non-resonant di-Higgs production will be discussed. For resonant di-Higgs production, using the example of the Two-Higgs-Doublet Model (2HDM), it is demonstrated that potentially large higher-order corrections to the trilinear couplings and interference effects between the non-resonant and the resonant contributions have a strong impact on the shape of the invariant mass distribution and on the result for the total cross section. It is pointed out that neglecting the non-resonant contributions, as has been done by the experimental collaborations up to now, can lead to unreliable results for the exclusion limits. While the present bounds from non-resonant di-Higgs production are still relatively weak, it is shown that they already provide a new way of probing so far unconstrained parameter regions of extended Higgs sectors. The parameter region of extended Higgs sectors giving rise to a strong first-order electroweak phase transition in the early Universe and potentially detectable gravitational-wave signals is typically correlated with a certain mass splitting among the additional Higgs bosons, giving rise both to a significant enhancement of the trilinear Higgs self-coupling compared to the SM value and to a characteristic ``smoking gun'' signature in the search for additional Higgs bosons. The public tool anyH3, which provides predictions for the trilinear Higgs coupling to full one-loop order within arbitrary renormalisable theories, is briefly discussed in this context.
Precise experimental data from the Large Hadron Collider and the lack of any persuasive new-physics signature demand improvement in the understanding of the Standard Model. The scattering cross-sections are plagued with leading-power (LP) and next-to-leading-power (NLP) logarithms. Resummation of LP logarithms has a long history of almost three decades, and their resummation methods are well known in the present literature. However, precise predictions also require the resummation of NLP logarithms, as they have a sizeable numerical impact on the cross-section calculation. These NLP logarithms are well known in the literature for colour-singlet processes; however, results are scarce when final-state coloured particles are involved in the scattering process. In this talk, I will discuss a new method of calculating the NLP logarithms when final-state coloured particles are involved and will show its application to Higgs + jet production.
We have derived a general and explicit expression for the Jarlskog invariant of CP violation in flavor oscillations of three active neutrinos by using the 18 original parameters of the canonical seesaw mechanism (i.e., 3 heavy Majorana neutrino masses, 9 active-sterile flavor mixing angles and 6 CP-violating phases). This novel analytical result provides the first model-independent window into thermal leptogenesis for the cosmological matter-antimatter asymmetry from leptonic CP violation at low energies. A simplified result in the minimal seesaw framework with only two right-handed neutrino fields is discussed to illustrate how CP violation in the light Majorana neutrino sector is correlated with that in the heavy Majorana neutrino sector.
Based on a paper in preparation (the general seesaw) and on Phys. Lett. B 844 (2023) 138065 (the minimal seesaw) by Zhi-zhong Xing.
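For reference, the Jarlskog invariant of the PMNS matrix $U$ admits the standard rephasing-invariant definition (the result above expresses this quantity in terms of the 18 seesaw parameters instead):

```latex
J = \operatorname{Im}\!\left( U_{e1}\, U_{\mu 2}\, U_{e2}^{*}\, U_{\mu 1}^{*} \right)
  = \sin\theta_{12}\cos\theta_{12}\,\sin\theta_{23}\cos\theta_{23}\,\sin\theta_{13}\cos^{2}\theta_{13}\,\sin\delta ,
```

where the second equality holds in the PDG parameterisation of the mixing angles and the Dirac phase $\delta$.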
In this talk I will present the role of the often-neglected "mixed" scattering processes within a realistic hybrid type I + type II seesaw framework. It will be demonstrated that as the seesaw scales come close to each other, the mixed processes become numerically significant and can result in orders-of-magnitude corrections to the present-day baryon asymmetry. I will quantitatively discuss the level of degeneracy at which the traditional approximations start to become erroneous.
We show that, in a $U(1)_{R-L}$-symmetric SUSY model, the pseudo-Dirac bino and wino can give rise to three light neutrino masses through effective operators, generated at the messenger scale between a SUSY breaking hidden sector and the visible sector. The neutrino-bino/wino mixing follows a hybrid type I+III inverse seesaw pattern. The light neutrino masses are governed by the ratio of the $U(1)_{R-L}$-breaking gravitino mass, $m_{3/2}$, and the messenger scale $\Lambda_M$. The charged component of the $SU(2)_L$-triplet, here the lightest charginos, mix with the charged leptons and generate FCNC at tree level. We find that resulting LFV observables yield a lower bound on the messenger scale, $\Lambda_M\gtrsim(500-1000)~{\rm TeV}$ for a simplified hybrid mixing scenario. We identify interesting mixing structures for certain $U(1)_{R-L}$-breaking singlino/tripletino Majorana masses. We describe the rich collider phenomenology expected in this neutrino-mass generation mechanism.
In the PMNS matrix, the relation $U_{\mu i}=U_{\tau i}$ (with $i=1,2,3$) is experimentally favored at the present stage. The possible implications of this relation for some hidden flavor symmetry have attracted a lot of interest in the neutrino community. In this work, we analyze the implications of $U_{\mu i}=U_{\tau i}$ (with $i=1,2,3$) in the context of the canonical seesaw mechanism. We also show that the minimal symmetry proposed in JHEP 06 (2022) 034 is one possible but not necessary reason for the above-mentioned relation.
The MicroBooNE experiment is an 85-ton active volume liquid argon time projection chamber (LArTPC) neutrino detector situated in the Fermilab Booster Neutrino Beam (BNB). In this talk, we will present a comprehensive overview of the experiment's investigations of the MiniBooNE Low Energy Excess in the single-photon and $e^+e^-$ pair channels, which target Standard Model background interpretations as well as Beyond the Standard Model (BSM) explanations alternative to 3+1 oscillations. The photon searches include a model-independent search for inclusive single photons, as well as a targeted search for neutral-current coherent-like single-photon production. Moreover, we will introduce a suite of new searches aimed at exploring BSM scenarios, which investigate multiple exotic electron-positron pair production models that could be attributed to neutrinos acting as a portal to a potential "Dark Sector" of new physics.
Upon its completion, JUNO will be the world's largest liquid-scintillator detector, with a 20 kton target and unprecedented light yield and photo-coverage. JUNO provides an excellent instrument to search for nucleon decay in parallel to its rich neutrino program, particularly via the decay channels predicted by supersymmetric unified theories. The particle identification capability of JUNO is key to tagging the cascaded decays and de-excitations. I shall highlight the nucleon-decay search activities at JUNO in anticipation of the first data.
A key focus of the physics program at the LHC is the study of head-on proton-proton collisions. However, an important class of physics can be studied in events where the protons narrowly miss one another and remain intact. In such cases, the electromagnetic fields surrounding the protons can interact, producing high-energy photon-photon collisions. Alternatively, interactions mediated by the strong force can also result in intact forward-scattered protons. Instrumentation to detect and measure protons scattered through very small angles is installed. We describe the ATLAS Forward Proton Detectors (AFP and ALFA), including their performance to date, covering the tracking and time-of-flight detectors as well as the associated electronics, trigger, readout, detector control and data-quality monitoring. The physics interest, beam optics and detector options for the extension of the programme into the HL-LHC are also discussed. Finally, a glimpse of the newest results will be given.
The CMS Precision Proton Spectrometer is designed for studying Central Exclusive Production in pp collisions at the LHC. It consists of tracking and timing detectors to measure protons that escape along the LHC beam line after the interaction in CMS. Both tracking and timing systems underwent a substantial upgrade for Run 3. The tracking detector employs new single-sided 150 µm-thick silicon 3D pixel sensors, read out with the PROC600 chip. An innovative mechanical solution was adopted to mitigate the radiation effects caused by the non-uniform irradiation of the readout chip, allowing the detectors to be moved during beam downtimes. The time-of-flight measurement system uses 500 µm-thick single-crystal CVD diamond sensors in a double-diamond configuration and was upgraded with the aim of improving the radiation tolerance and obtaining a time resolution of less than 30 ps. In this contribution the new apparatuses installed for Run 3 and their preliminary performance will be presented.
The TOTEM experiment at the LHC has produced a large set of measurements on diffractive processes and pp cross sections. A new detector, called nT2, has been designed to measure the inelastic scattering rate during the LHC special run of 2023. Due to the high-radiation environment and the special run schedule, the detector had to be installed in 10-20 minutes at most, then commissioned and operated after only a few days. The detector, based on plastic scintillators read out by matrices of SiPMs, was designed with these constraints in mind. The front-end, DAQ and control electronics were developed with a fault-tolerant architecture, moving as many functionalities as possible onto a radiation-tolerant SoC FPGA hosting an integrated ARM controller. In this talk we will describe, for the first time, the nT2 detector and its readout and control electronics. The detector was successfully operated during the special run; we will present preliminary results on the detector performance.
The ATLAS Zero Degree Calorimeters (ZDCs) detect neutral particles emitted at very forward rapidities in nuclear collisions at the LHC. The ZDCs consist of modules of sampling hadronic calorimeters made up of alternating tungsten-fused silica rod layers that act as Cherenkov radiators. They have been upgraded for LHC Run 3 with new fused silica rods for better radiation hardness, along with low-attenuation air-core cables and new readout electronics. A new Reaction Plane Detector (RPD) was also installed. The ATLAS and CMS ZDC groups have proposed a joint project to build a next-generation HL-ZDC that will include an electromagnetic and a hadronic section, as well as an RPD, all enclosed in a monolithic mechanical design that should simplify installation and thus reduce radiation exposure. This talk will review the performance of the ATLAS ZDC in the first year of Run 3, and provide an outlook on the HL-ZDC detector, with particular attention to the upgraded EM section.
SND@LHC is a new forward experiment measuring neutrinos produced at the LHC. Its detector was installed in 2021-2022. The first physics data yielded, among other results, the first observation of neutrinos produced at a collider.
The detector currently in use is a hybrid system based on an 830 kg target with tracking capabilities, followed by a calorimeter and a muon system. Its configuration allows the identification of all three neutrino flavours, opening a unique opportunity to probe heavy-flavour production in an $\eta$ region not accessible to ATLAS, CMS and LHCb.
A thorough upgrade is foreseen for Run 4: the new detector will replace the emulsions with silicon sensors, use a magnetised HCAL, and add a muon spectrometer. A second detector with complementary coverage is also foreseen, which will substantially reduce many systematic uncertainties.
This talk will describe both the performance of the current SND@LHC detector and its foreseen upgrade, whose letter of intent is currently being finalised.
AugerPrime, the major upgrade of the Pierre Auger Observatory, has as its main objective to provide an enhanced estimation of the mass composition of the highest energy cosmic rays on an event-by-event basis. It consists of the addition of a surface scintillator detector (SSD) and a radio antenna on top of the existing water-Cherenkov detectors (WCD) of the surface detector array (SD). An additional small PMT installed inside the WCD increases the dynamic range of the SD. The new electronics board allows the connection of all the new detectors, including a higher sampling rate, increased dynamic range, and improved local data processing. An underground array of scintillator detectors will allow for direct measurement of the muon content at $10^{17} - 10^{19}$ eV, in partial overlap with the nominal energy at the LHC. In this contribution, we describe the AugerPrime upgrade and the expected physics performance for the next decade of planned operations.
LHCb has collected the world's largest sample of charmed hadrons. This sample is used to measure the $D^0 -\overline{D}^0$ mixing and to search for $C\!P$ violation in the mixing. New measurements of several decay modes are presented, along with prospects for the sensitivity at the LHCb upgrades.
The LHCb experiment published the first observation of CP violation in the decay of charmed particles in 2019, using the decay channels $D^0 \to \pi^+\pi^-$ and $D^0 \to K^+K^-$. Additional measurements in other decay channels are essential to understand whether this effect can be explained within the Standard Model, or if new sources of CP violation are needed. We present the latest searches for CP violation in the decay of charmed hadrons in complementary decay channels, and we discuss the prospects for the sensitivity at the LHCb upgrades.
We update our analysis of D meson mixing including the latest experimental results. We derive constraints on absorptive and dispersive CP violation by combining all available data, and discuss future projections. We also provide posterior distributions for observable parameters appearing in D physics.
We revisit the problem of the nonperturbative contribution to the mass difference in $D^0$--$\overline{D}{}^0$ mixing within the Standard Model.
As is well known, the GIM cancellation in the leading OPE is very effective, and the SM calculation gives a result that is orders of magnitude smaller than the experimental value for this quantity. It is therefore necessary to go beyond the leading terms to capture the effects of dimension-9 and dimension-12 operators, which appear through condensate contributions.
We investigate the size of nonlocal condensate contributions using various models. Our preliminary results show that the GIM suppression can be lifted within this approach.
The Belle and Belle~II experiments have collected a $1.4~\mathrm{ab}^{-1}$ sample of $e^+e^-$ collision data at centre-of-mass energies near the $\Upsilon(nS)$ resonances. These samples contain a large number of $e^+e^-\to c\bar{c}$ events that produce charmed mesons and baryons. We present searches for rare flavour-changing neutral current $c\to u\ell^+\ell^-$ processes in several decay modes. Further, we study several decays of the $\Lambda_c$ and $\Xi_c$ to determine branching fractions, as well as $C\!P$ asymmetries in singly Cabibbo-suppressed decays.
It is well known that the decay rates of leptonic decays of mesons, as well as the rates and angular observables of semileptonic decays of mesons and baryons, can provide a window to physics beyond the Standard Model. We point out the difficulties and prospects of such an endeavor in the case of $c\to s e\nu$ for which both the experimental and theoretical uncertainties are under control. Besides the generic EFT approach, we also discuss several specific models of physics beyond the Standard Model.
LHCb is playing a crucial role in the study of rare and forbidden semileptonic decays of charm hadrons, which might reveal interactions beyond the Standard Model. We present the latest measurements of charm decays with two leptons in the final state.
The production mechanism of (anti)nuclei in ultrarelativistic hadronic collisions is under intense debate in the scientific community. Two successful models used for the description of the experimental measurements are the statistical hadronisation model and the coalescence approach. In the latter, multi-baryon states are assumed to be formed by coalescence of baryons that are close in phase space at kinetic freeze-out. Due to the collimated emission of nucleons in jets, the available phase space is limited; hence the production of nuclear states by coalescence in jets is expected to be enhanced with respect to production in the underlying event. In this contribution, the results for the coalescence parameter $B_{2}$, which quantifies the formation probability of deuterons by coalescence, measured in and out of jets in both pp and p-Pb collisions are presented in comparison with predictions from the coalescence model.
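For reference, the coalescence parameter discussed above is conventionally defined by relating the invariant deuteron yield to the square of the proton yield evaluated at half the deuteron momentum:

```latex
B_2 = \left. E_d \frac{\mathrm{d}^3 N_d}{\mathrm{d}p_d^3} \middle/ \left( E_p \frac{\mathrm{d}^3 N_p}{\mathrm{d}p_p^3} \right)^2 \right|_{\vec{p}_p = \vec{p}_d/2},
```

so an enhanced $B_2$ in jets directly reflects the reduced phase-space distance between the coalescing nucleons.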
(Anti)hypernuclei are among the most promising probes to study the production mechanism of light nuclei in high-energy hadronic collisions. According to coalescence, the production of $\mathrm{^{3}_{\Lambda} H}$, $\mathrm{^{4}_{\Lambda} H}$, and $\mathrm{^{4}_{\Lambda} He}$ in small colliding systems (pp and p–Pb) is extremely sensitive to their internal wave function, while in the Statistical Hadronisation Models (SHMs) the nuclear structure does not enter explicitly in the prediction of the yields.
In this contribution, production measurements of $\mathrm{^{3}_{\Lambda} H}$, $\mathrm{^{4}_{\Lambda} H}$, and $\mathrm{^{4}_{\Lambda} He}$ from pp to the most central Pb--Pb collisions are presented. The results are based on the data samples collected by ALICE during the LHC Run 2 and Run 3. For the $\mathrm{^{3}_{\Lambda} H}$, an innovative method to extract its properties from the system-size dependence of its production yield will also be presented.
Experimental results on electromagnetic form factors (EMFFs) are very useful for constraining QCD-based theoretical models. Electron-positron collider experiments are powerful tools to study the EMFFs of various baryons in the time-like region via energy-scan or ISR-return methods. We will report recent progress in baryon EMFF measurements in the time-like region at BESIII, including the EMFFs of the SU(3) octet and decuplet baryons and of the charmed baryons.
We will discuss the light-front formulation of quarkonium $\gamma^* \gamma$ transition form factors for $J^{PC} = 2^{++}$ meson states. We will present $\gamma^* \gamma \to \chi_{c2}$ transition amplitudes and the pertinent helicity form factors. We show results for the two-photon decay width of the $\chi_{c2}$ as well as the three independent $\gamma^* \gamma$ transition form factors of the $\chi_{c2}$ as a function of the photon virtuality $Q^2$. We compare our results for the two-photon decay width to those recently measured by the Belle II and BESIII collaborations. Our approach explains the experimentally measured value of $\Gamma(\chi_{c2})/\Gamma(\chi_{c0})$. We also present the off-shell widths as a function of photon virtuality and compare them to the Belle data.
I. Babiarz, R. Pasechnik, W. Schäfer and A. Szczurek, $\chi_{c2}$ tensor meson transition form factors, work in progress
I. Babiarz {\it et al.}, JHEP 06 (2020) 101, doi:10.1007/JHEP06(2020)101
We present a first-principles lattice QCD calculation of the local form factors describing the $B_{s}\to \mu^{+}\mu^{-}\gamma$ decay. We focus on the region of large di-muon invariant masses $\sqrt{q^{2}} \gtrsim 4.2~{\rm GeV}$, where the contributions from the four-quark operators in the effective weak Hamiltonian (which are neglected at present) are expected to be small. We use our results for the form factors to determine the branching fraction for $B_{s} \to \mu^{+}\mu^{-}\gamma$, which has recently been measured by LHCb in the region $\sqrt{q^{2}} > 4.9~{\rm GeV}$.
The rare radiative $K^+\to\pi^+\ell^+\ell^-$ decays ($\ell=e,\mu$) provide experimental access to the $K^+\to\pi^+\gamma^*$ transition. The relevant form factor is conventionally written in terms of two hadronic parameters, $a_+$ and $b_+$, which are being measured by NA62 in both electron and muon channels. Comparing the two channels allows for a stringent test of lepton-flavour universality. However, an appropriate experimental analysis requires adequate theory inputs: although the $K^+\to\pi^+\gamma^*$ transition has been studied extensively, radiative corrections involve the $K^+\to\pi^+\gamma^*\gamma^{(*)}$ transitions (with up to two virtual photons), which are not fully addressed in the literature. At the same time, the $K^+\to\pi^+\gamma^*\gamma^*$ transition is essential for the description of the $K^+\to\pi^+e^+e^-\ell^+\ell^-$ decays, which represent a background to new-physics searches.
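For orientation (following the standard parametrisation used in the literature; the exact conventions should be checked against the NA62 analyses), the parameters $a_+$ and $b_+$ mentioned above enter the form factor through a linear expansion in $z = q^2/M_K^2$:

```latex
W(z) = G_F M_K^2 \left( a_+ + b_+ z \right) + W^{\pi\pi}(z),
```

where $W^{\pi\pi}(z)$ denotes the two-pion-loop contribution; $a_+$ and $b_+$ are the quantities compared between the electron and muon channels in the lepton-flavour-universality test.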
Two-color QCD (SU(2) gauge theory coupled to fundamental fermions) has several novel features: for instance, an enhanced Pauli-Gürsey symmetry yields degeneracies between mesons and di-/tetra-quark states. The quantum-mechanical matrix model provides a simplified platform to directly probe the properties of low-energy (spin-0 and spin-1) hadrons. Using a variational calculation, we numerically obtain the energy eigenstates and eigenvalues of the matrix model at ultra-strong coupling. In the chiral limit, the effects of the non-perturbative axial anomaly are quantified. Interestingly, in the chiral limit, gluons contribute significantly (~50%) to the spin of hadrons, and spin-0 hadrons are primarily composed of reducible connections. These effects are suppressed in the heavy-quark limit. Furthermore, at strong coupling, the system can undergo quantum phase transitions (in the presence or absence of a chemical potential). The ground state can be a spin-1 di-quark state which spontaneously breaks spatial rotational symmetry.
Vector boson scattering is a key production process for probing the electroweak symmetry breaking of the Standard Model, since it involves both the self-couplings of the vector bosons and their coupling to the Higgs boson. If the Higgs mechanism is not the sole source of electroweak symmetry breaking, the scattering amplitude deviates from the Standard Model prediction at high scattering energies. Moreover, deviations may be detectable even if the new-physics scale lies beyond the reach of direct searches. The latest measurements of production cross sections of vector boson pairs in association with two jets in proton-proton collisions at $\sqrt{s}$ = 13 and 13.6 TeV at the LHC are reported, using data recorded by the CMS detector. Differential fiducial cross sections as functions of several quantities are also measured.
Measurements of diboson production in association with two additional jets at the LHC probe interactions between electroweak vector bosons predicted by the Standard Model and test contributions from anomalous quartic gauge couplings. The ATLAS experiment has recently performed such measurements in a variety of final states, amongst them the scattering into a massive electroweak gauge boson and a photon. The scattering of massive electroweak gauge bosons is studied in leptonic final states of W boson pairs, Z boson pairs, as well as WZ pairs. All measurements include a comprehensive set of differential kinematic distributions. Also presented are measurements using semi-leptonic decays of the gauge boson pair, and Z-boson decays into neutrinos. The measured kinematic distributions are interpreted in an Effective Field Theory approach and used to constrain dimension-8 operators.
This talk reviews recent measurements of multiboson production using CMS data at $\sqrt{s}$ = 13 and 13.6 TeV. Inclusive and differential cross sections are measured using several kinematic observables.
Measurements of multiboson production at the LHC are important probes of the electroweak gauge structure of the Standard Model and can constrain anomalous gauge boson couplings. In this talk, recent measurements of diboson production by the ATLAS experiment at 13 TeV and 13.6 TeV are presented. Studies of gauge-boson polarisation and their correlation in WW, WZ and ZZ events are also presented. In WZ events, these studies have been extended to a phase space with high transverse momentum Z bosons. Finally, measurements of triboson production are discussed, and observations in the WW$\gamma$ and WZ$\gamma$ channels are reported.
We show that testing Bell inequalities in $W^\pm$ pair systems by measuring their angular correlation suffers from the ambiguity in the kinematic reconstruction of the dilepton decay mode. We further propose a new set of Bell observables based on the measurement of the linear polarization of the $W$ bosons that can be used in the semileptonic decay mode of the $W^\pm$ pair, and we analyze the prospects of testing the violation of Bell inequalities at $e^+e^-$ colliders.
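As background for the abstract above: in its simplest two-setting, two-outcome (CHSH) form, a Bell test bounds combinations of correlation measurements $\langle A_i B_j \rangle$ for detector settings $A_{1,2}$, $B_{1,2}$; any local hidden-variable theory satisfies

```latex
\left| \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle \right| \leq 2,
```

while quantum mechanics allows values up to $2\sqrt{2}$. For spin-1 systems such as $W$ pairs, suitably generalized inequalities (e.g. of CGLMP type) are typically employed instead.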
We consider diphoton production in hadronic collisions at next-to-next-to-leading order (NNLO) in perturbative QCD, taking into account the full top-quark mass dependence for the first time. We present the computation of the two-loop form factors for diphoton production in the quark-annihilation channel, which are relevant for phenomenological studies at full NNLO. The master integrals (MIs) are written in the so-called canonical logarithmic form, except for the elliptic ones. We perform a study of the maximal cut in order to exhibit the elliptic behaviour of the non-planar topology. The MIs are evaluated by means of differential equations in a semi-analytical approach through the generalised power-series technique. Finally, we combine this result with all the other contributions and show selected numerical distributions.
As shown by recent theoretical and experimental developments, the Standard Model of fundamental interactions can be tested at colliders through the lens of quantum information theory. To achieve this aim, it is essential to establish a relation between the kinematic distributions of stable, detectable particles and the spin-density matrix.
In realistic simulations it is unavoidable to impose acceptance cuts on the kinematics of final-state particles, leading to a partial loss of the information necessary to reconstruct the spin-density matrix of the underlying resonances.
In this work we study the impact of acceptance cuts and higher-order corrections on the extraction of polarisation and spin-correlation coefficients in di-boson production at the LHC with leptonic decays. We investigate a purely geometrical factor which can accurately compensate for acceptance effects. Applying this factor allows a clean interpretation of diboson LHC events to be obtained directly from the data.
Nowadays, research on beyond-Standard-Model scenarios aimed at describing the nature of dark matter is a very active field. DarkPACK is a recently released software package conceived to help study such models. It can already compute the relic density in the freeze-out scenario, and it can be extended to compute other observables. In this contribution, I introduce DarkPACK, its current capabilities, and future perspectives.
We study new approaches to exploring ultralight (axion) dark matter with gravitational-wave experiments and radio telescopes, based on the superradiance and resonant-conversion processes.
A strongly self-interacting component of dark matter can lead to the formation of compact objects. These objects (dark stars) can in principle be detected through the gravitational waves emitted in coalescences with black holes or neutron stars, or via gravitational lensing. However, if dark matter admits annihilations, these compact dark-matter objects can have a significant impact on cosmic reionization and the 21-cm signal. We demonstrate that even if dark-matter annihilation is Planck-scale suppressed, dark stars could inject a substantial amount of photons that would interact with the intergalactic medium. For dark-matter parameters compatible with current observational constraints, dark stars could modify the observed reionization signal considerably.
Magnetic monopoles are intriguing hypothetical particles and inevitable predictions of grand unified theories. They are produced during phase transitions in the early universe, but mechanisms like the Schwinger effect in strong magnetic fields could also contribute to the monopole number density. I will show how, from the detection of intergalactic magnetic fields, we can infer additional bounds on the magnetic monopole flux, and how even well-established limits, such as Parker bounds and limits from terrestrial experiments, depend strongly on the acceleration in cosmic magnetic fields. I will also discuss the implications of these bounds for minicharged monopoles and magnetic black holes as dark matter candidates.
The origin of neutrino masses, the baryon asymmetry of the universe, and the nature of dark matter remain fundamental open problems in HEP. The FCC-ee provides exciting opportunities to resolve these mysteries through the discovery of heavy neutral leptons (HNLs) via $e^+e^- \to Z \to \nu N$, exploiting a huge sample ($5\cdot 10^{12}$) of Z bosons. The expected very small mixing between light and heavy neutrinos results in very long HNL lifetimes and a spectacular signal topology. Recent work based on a parametrised simulation of the IDEA detector will be described. The sensitivity region in the HNL parameter space will be mapped for prompt and long-lived signatures, with emphasis on background reduction and detector requirements. Percent-level mass resolution can be achieved with inner-detector timing over a large part of the HNL parameter space. Results for models with HNL oscillations inside the detector will also be presented.
We reinvestigate contributions of scalar leptoquarks to the $R_D^{(*)}$ anomalies and to $B \to K^{(*)} \nu \bar \nu$ decays. We then update the constraints on the parameter space and determine which scalar leptoquarks remain viable and consistent with low-energy and high-energy flavour-physics constraints. We comment on the implications of this selection.
We study how recent experimental results constrain the
gauge sectors of U(1) extensions of the Standard Model,
using a novel representation of the parameter space.
We determine the bounds on the mixing angle between the massive
gauge bosons, or equivalently, the new gauge coupling as a function of
the mass $M_{Z'}$ of the new neutral gauge boson $Z'$ in the
approximate range $(10^{-2},10^4)$\,GeV/$c^2$.
We consider the most stringent bounds obtained from direct
searches for the $Z'$. We also exhibit the allowed parameter
space by comparing the predicted and measured values of the
$\rho$ parameter and those of the mass of the $W$ boson.
Finally, we discuss the prospects of $Z'$ searches at future colliders.
This work has been submitted for publication; the corresponding preprint can be found at arXiv:2402.14786.
Conventional searches at the LHC operate under the assumption that beyond-Standard-Model particles decay promptly upon production. However, this assumption lacks a priori justification. This talk explores displaced-decay signatures across various collider experiments. Combining insights from several studies, we show how small Yukawa couplings, compressed mass spectra, and collider boosts lead to distinctive displaced decays, observable at CMS, ATLAS and proposed future detectors. These phenomena, which arise in both Type-I and Type-III seesaw mechanisms as well as in the vector-like lepton model with non-zero hypercharge, provide unique insight into the behaviour of neutrinos and dark matter. The talk highlights the technical challenges and breakthroughs in detecting and interpreting these signatures, emphasising their significance in probing extensions of the Standard Model.
Various theories beyond the Standard Model predict new, long-lived particles decaying at a significant distance from the collision point. These unique signatures are difficult to reconstruct and face unusual and challenging backgrounds. Signatures from displaced and/or delayed decays anywhere from the inner detector to the muon spectrometer are examples of experimentally demanding signatures. The talk will focus on the most recent results using pp collision data collected by the ATLAS detector.
Many models beyond the Standard Model predict new particles with long lifetimes. These long-lived particles (LLPs) decay significantly displaced from their production vertex, giving rise to non-conventional signatures in the detector. Dedicated data streams and innovative usage of the CMS detector are exploited in this context to significantly boost the sensitivity of such searches at CMS. We present recent results of searches for long-lived particles and other non-conventional signatures obtained using data recorded by the CMS experiment during the completed Run II and the ongoing Run III of the LHC.
The CMS Collaboration has recently approved the publication of full statistical models of physics analyses. This includes the publication of the CMS data, which facilitates the construction of the full likelihood. The statistical inference tool "Combine" needed for this purpose is now available under an open-source licence. This talk highlights some features of Combine and discusses the use of these models, including those used for the discovery of the Higgs boson.
With the growing datasets of HEP experiments, statistical analysis becomes more computationally demanding, requiring improvements in existing statistical analysis software. One way forward is to use Automatic Differentiation (AD) in likelihood fitting, which is often done with RooFit, a toolkit that is part of ROOT. Recently, RooFit gained the ability to generate the gradient code for a given likelihood function with Clad, a compiler-based AD tool. At the CHEP 2023 conference, we showed how using this analytical gradient significantly speeds up the minimization of simple likelihoods. This talk will present the current state of AD in RooFit. One highlight is that it now supports more complex models, such as template-histogram stacks ("HistFactory"). It also uses a new version of Clad that contains several improvements tailored to the RooFit use case. This contribution will furthermore demonstrate complete RooFit workflows that benefit from the improved performance with AD, such as ATLAS Higgs measurements.
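The benefit of analytic gradients in likelihood fitting can be illustrated with a toy example in plain Python (this is not RooFit or Clad code; the binned-likelihood setup and all numbers are illustrative). For a binned Poisson likelihood with a single signal-strength parameter, the derivative has a closed form, so a Newton step avoids the repeated likelihood evaluations that numerical differentiation would require:

```python
import math

def nll(mu, s, b, n):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    return sum((mu * si + bi) - ni * math.log(mu * si + bi)
               for si, bi, ni in zip(s, b, n))

def grad(mu, s, b, n):
    """Analytic dNLL/dmu -- the kind of code an AD tool generates
    automatically instead of it being written by hand."""
    return sum(si * (1.0 - ni / (mu * si + bi)) for si, bi, ni in zip(s, b, n))

def hess(mu, s, b, n):
    """Analytic second derivative, used for Newton steps."""
    return sum(ni * si * si / (mu * si + bi) ** 2 for si, bi, ni in zip(s, b, n))

def fit_mu(s, b, n, mu=1.0, steps=20):
    """Minimise the NLL in mu with Newton's method on the analytic gradient."""
    for _ in range(steps):
        mu -= grad(mu, s, b, n) / hess(mu, s, b, n)
    return mu

# Toy inputs: observed counts generated with signal strength mu = 2
s, b = [5.0, 10.0, 8.0], [20.0, 15.0, 25.0]
n = [2 * si + bi for si, bi in zip(s, b)]
print(fit_mu(s, b, n))  # recovers the injected mu = 2
```

In real analyses the likelihood has hundreds of parameters, which is where machine-generated gradient code pays off most.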
A flexible and dynamic environment capable of accessing distributed data and resources efficiently is a key requirement for HEP data analysis, especially in the HL-LHC era. A quasi-interactive declarative solution like ROOT RDataFrame, with scale-up capabilities via open-source standards like Dask, can profit from the DataLake model under development at the Italian "HPC, Big Data and Quantum Computing" Center. The starting point is a prototype CMS high-throughput analysis platform, offloaded to local Tier-2 resources.
This contribution evaluates the scalability, identifies bottlenecks, and explores the interactivity of such a platform through two use cases: a CMS physics analysis with high-rate triggered events and a study of the CMS muon detector performance in phase-space regions driven by analysis needs, accessing detector datasets. The metrics used to evaluate the scaling and speed-up performance will be reported and the results discussed, emphasising the differences with respect to legacy analysis workflows.
In recent years, the data published by the Particle Data Group (PDG) in the Review of Particle Physics has primarily been consulted on the PDG web pages and in pdgLive, or downloaded in the form of PDF files. A new set of tools (PDG API) makes PDG data easily accessible in machine-readable format and includes a REST API, downloadable database files containing the PDG data, and an associated Python package. To find desired information, users either navigate to the corresponding review article or section in the particle listings or summary tables on the web, use the new API, or rely on a Google-based search of the PDG website. Large Language Models (LLMs) combined with Retrieval-Augmented Generation (RAG) are expected to enhance searches of PDG information and to provide fine-grained, accurate results. We will present the status of the new PDG API, give examples of its use, and discuss first results from RAG-based searching of PDG data.
The Key4hep project aims at providing a complete software stack to enable complete and detailed detector studies for future experiments. It was first envisaged five years ago by members of the CEPC, CLIC, ILC and FCC communities and has since attracted contributions from other communities as well, such as the EIC or the Muon Collider. Leveraging established community tools, as well as developing new solutions where necessary, the Key4hep software stack is rapidly reaching production readiness and is already used for physics studies.
This presentation will give an overview of the status of the Key4hep project and the components that are developed within its context. We will also report on some key insights and experiences that we gained along the way, e.g. integrating communities and their existing tools into a coherent approach, or on our experiences with building and releasing the stack using spack. Finally, we briefly highlight currently ongoing developments and plans.
"Data deluge" refers to the situation where the sheer volume of new data generated overwhelms the capacity of institutions to manage it and researchers to use it. This is becoming a common problem in industry and big science facilities like the MAX IV laboratory and the LHC.
As a solution to this problem, a small collaboration of researchers has developed a machine-learning-based data compression tool called "Baler". Baler allows researchers to design lossy compression algorithms tailored to their data sets via an easy-to-use pip package. This compression method yields substantial data reduction and can compress scientific data to 1% of its original size.
Baler has recently performed compression and decompression of data on FPGAs, which extends Baler's reach into the field of bandwidth compression. This contribution will give an overview of the Baler software tool and present results from particle physics, X-ray ptychography, computational fluid dynamics, and telecommunications.
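Baler's actual method is a trained autoencoder, but the core lossy-compression trade-off it exploits -- fewer bits per value in exchange for a bounded reconstruction error -- can be sketched with a simple standard-library quantiser (illustrative only; this is not Baler's API or algorithm):

```python
import struct

def compress(values):
    """Lossily map floats onto 16-bit integer codes over the data range.
    Stands in for the low-dimensional latent representation a trained
    autoencoder would produce; 64-bit floats -> 16-bit codes is a 4x reduction."""
    lo, hi = min(values), max(values)
    scale = (2 ** 16 - 1) / (hi - lo) if hi > lo else 0.0
    codes = [round((v - lo) * scale) for v in values]
    return struct.pack(f"<{len(codes)}H", *codes), (lo, hi)

def decompress(packed, value_range):
    """Invert the quantisation; the per-value error is bounded by
    (hi - lo) / (2 * (2**16 - 1))."""
    lo, hi = value_range
    codes = struct.unpack(f"<{len(packed) // 2}H", packed)
    scale = (hi - lo) / (2 ** 16 - 1)
    return [lo + c * scale for c in codes]

data = [0.0, 1.0, 2.5, 3.7]
packed, rng = compress(data)          # 8 bytes instead of 32
restored = decompress(packed, rng)    # close to data, within the error bound
```

A learned compressor replaces the fixed linear mapping with one adapted to the structure of the data set, which is how far higher ratios (down to 1% of the original size) become reachable.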
The forthcoming generation of $e^+e^-$ colliders demands excellent mass resolution for the Higgs ($H$), $W$, and $Z$ bosons in their decays into jets. Dual-readout calorimetry achieves this by making use of two independent energy readings of the hadronic shower, leveraging the distinct $e/h$ factors of the Cherenkov and scintillation light produced in a calorimeter equipped with two types of optical fibres. This allows for event-by-event compensation of the electromagnetic fraction.
In this context, we present HiDRa, a 65x65x250 cm$^3$ dual-readout fibre calorimeter prototype currently under construction. The primary objective is to assess the performance in terms of linearity and resolution when exposed to a high-energy hadron beam. The talk will focus on strategic choices made to offer scalable solutions for the mechanics and readout electronics. The insights gained from the evaluation of HiDRa will be key for the construction of a full 4$\pi$ calorimeter at a future $e^+e^-$ collider.
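The event-by-event compensation mentioned above follows from the standard dual-readout relations: with electromagnetic fraction $f_{em}$ and channel responses $(h/e)_S$ and $(h/e)_C$, the signals are $S = E[f_{em} + (h/e)_S(1-f_{em})]$ and $C = E[f_{em} + (h/e)_C(1-f_{em})]$, so the combination $E = (S - \chi C)/(1 - \chi)$ with $\chi = (1-(h/e)_S)/(1-(h/e)_C)$ cancels $f_{em}$ exactly. A small sketch (the $h/e$ values are illustrative, not HiDRa's calibration):

```python
def dual_readout_energy(S, C, h_e_S=0.7, h_e_C=0.3):
    """Recover the shower energy independently of the EM fraction."""
    chi = (1.0 - h_e_S) / (1.0 - h_e_C)
    return (S - chi * C) / (1.0 - chi)

def signals(E, fem, h_e_S=0.7, h_e_C=0.3):
    """Toy scintillation and Cherenkov signals for a shower of energy E."""
    return (E * (fem + h_e_S * (1 - fem)),
            E * (fem + h_e_C * (1 - fem)))

# The reconstructed energy is the same for any EM fraction:
for fem in (0.3, 0.55, 0.8):
    S, C = signals(100.0, fem)
    print(dual_readout_energy(S, C))  # 100 GeV each time, up to rounding
```

In a real detector $\chi$ is obtained from calibration rather than assumed, but the cancellation mechanism is the same.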
Two technological prototypes of high-granularity calorimetry based on the scintillator option have been developed within the CALICE collaboration: an electromagnetic calorimeter prototype (ScW-ECAL) and a hadron calorimeter prototype (AHCAL). The ScW-ECAL prototype is finely segmented, with 6700 readout channels in total, and consists of 32 longitudinal layers, each with scintillator strips and a copper-tungsten plate. The AHCAL prototype has been developed with 12960 SiPM-on-Tile units in total, arranged in 40 longitudinal layers, each instrumented with scintillator tiles and an iron plate. Beam-test campaigns for the two prototypes were successfully completed at CERN during 2022 and 2023 with beam particles in the momentum range of 1-350 GeV. This contribution will present the prototype developments and key performance results based on the beam-test data. Highlights of ongoing studies of electromagnetic and hadronic shower properties will also be included.
ALLEGRO is a detector concept optimized for precision measurements at the Future Circular Collider (FCC-ee), with a noble-liquid calorimeter as the electromagnetic calorimeter. An extensive R&D program on the highly granular noble-liquid calorimeter, suitable for advanced reconstruction techniques such as machine learning and particle flow, has been launched. The fine segmentation of the calorimeter is realized with straight multilayer electrodes. In this talk, we will introduce the ALLEGRO concept, discuss the design of the calorimeter, and show the expected performance. The results of simulations and of measurements with the first prototypes of the multilayer electrodes will be compared. Optimization studies of the mechanical structure of the calorimeter and the cryostat, along with the results of tests on the absorber prototype, will be shown. The status of the preparation of the beam-test prototype module will be presented.
A multi-TeV muon collider has been proposed as a powerful tool to investigate the Standard Model with unprecedented precision after the High-Luminosity LHC era. However, muons are not stable particles, and it is extremely important to develop technologies able to distinguish collisions from the background radiation induced by the beam itself. In this context, an innovative hadronic calorimeter (HCAL), based on Micro-Pattern Gas Detectors (MPGDs) as active layers, has been proposed. MPGDs represent an ideal technology, featuring high rate capability, good spatial and time resolution, and good response uniformity; moreover, they are radiation hard and allow for high granularity ($1\times1$ cm$^2$ cell size). The response of the MPGD HCAL to incoming particles is studied in Monte Carlo simulations and presented. The tests performed at the SPS with 100 GeV muons, for the MPGD characterization, and at the PS with pions of a few GeV, for an HCAL cell prototype study, are also shown.
The precision measurements planned at future lepton colliders require excellent energy resolution, especially in multi-jet events, to successfully separate Z, W, and Higgs decays.
Over the past years the dual-readout method, which exploits complementary information from scintillation and Cherenkov channels, has emerged as a candidate to fulfil these requirements. Dedicated studies in simulation as well as test-beam prototypes have investigated various detector geometries based on a fibre dual-readout calorimeter. One variation of the geometry, relying on capillary tubes, promises easy assembly with excellent geometrical accuracy at moderate cost. In this talk we present the latest results from the simulation of this newest prototype and compare them to recent test-beam results. The simulation is also used to investigate the performance of a larger prototype suitable for hadronic shower containment and of the full "4π" detector geometry using the capillary-tube design.
A pioneering fixed-target experiment is proposed for the LHC, aimed at measuring the dipole moments of charm baryons and potentially the tau lepton. Leveraging particle channeling and spin precession in bent crystals, the experiment offers a novel approach to probe these elusive properties. The detector system comprises a high-precision spectrometer for charged particle momentum measurement and a Cherenkov detector for particle identification. The tracking system features state-of-the-art silicon pixel sensors from the LHCb VELO detector, strategically positioned inside Roman Pot stations, in tandem with a dipole magnet. The R&D is quite advanced, including simulation studies to optimise the detector design and the sensitivity to dipole moments. A proof-of-principle test is planned during LHC Run 3. This presentation will highlight the latest progress, advancements, and the physics perspectives of the proposed experiment.
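For orientation, the spin precession exploited here follows, in the ultrarelativistic limit, the textbook relation between the precession angle and the crystal bending angle (a sketch, not the experiment's full formalism):

```latex
% Magnetic-dipole precession of a channeled baryon in a bent crystal
% (ultrarelativistic limit; \theta_C is the crystal bending angle):
\Phi \;\simeq\; \frac{g-2}{2}\,\gamma\,\theta_C ,
% so measuring \Phi for known \gamma and \theta_C determines the anomalous
% magnetic moment; an electric dipole moment would add a small precession
% component out of the bending plane.
```

The large Lorentz boost γ of forward-produced charm baryons at the LHC is what makes the precession angle measurable over the short lifetime of the particle.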
In this talk we will discuss how two objects of great interest to both physicists and mathematicians are connected.
On one hand, amplituhedra are the image under a linear map of the positive part of the Grassmannian -- where all Plücker coordinates are nonnegative. Introduced by physicists to encode scattering amplitudes in N=4 super Yang-Mills theory, they are semialgebraic sets which generalize polytopes inside the Grassmannian.
On the other hand, cluster algebras are a remarkable class of commutative rings with very nice combinatorics introduced by Fomin and Zelevinsky motivated by the study of total positivity. Many nice algebraic varieties are known to have a cluster algebra structure, including the Grassmannian. They also emerged in physics in the context of scattering amplitudes, where they contributed to both conceptual and computational advances.
We will show how amplituhedra possess surprisingly rich cluster structures and how these relate to their geometry and combinatorics.
We study a novel geometric expansion for scattering amplitudes in planar $\mathcal{N}=4$ super Yang-Mills theory in the context of the Amplituhedron, which reproduces the all-loop integrand as a canonical differential form on the positive geometry. By considering the integrand in terms of negative rather than positive geometries, it has previously been shown that one obtains a sum of terms that is accurate to all loop orders and relies instead on a different expansion in terms of a new type of tree and multi-cycle graphs. One can then calculate an all-loop-order result in the approximation where only tree graphs in the space of all loops are considered. Furthermore, using differential-equation methods, it is possible to calculate and resum integrated expressions and obtain strong-coupling results. In this work we extend the expansion to graphs with a single internal cycle and introduce a powerful method for determining differential forms for higher numbers of cycles as well.
The Correlahedron describes correlation functions in maximally supersymmetric Yang-Mills theory. In this talk, I present an alternative geometric formulation. This allows us to study the loop geometry using a novel idea of so-called chambers. We characterize the boundary structure of the chambers and compute their canonical form up to three loops.
I will review the application of techniques from Gröbner theory and tropical geometry to describe the singularities of massless scattering amplitudes.
I will describe recent progress on phrasing the kinematic algebra at the heart of the color-kinematics duality in terms of a quasi-shuffle Hopf algebra. First, in the heavy-mass effective field theory (HEFT) limit, the algebra is shown to generalize easily from Yang-Mills to DF^2+YM theory. This theory contains $\alpha'$ corrections to Yang-Mills and is relevant for bosonic string amplitudes. Then, after a factorization away from HEFT, and in the small-$\alpha'$ limit, a direct Hopf-algebra construction is obtained for pure Yang-Mills. This leads to closed-form expressions for Bern-Carrasco-Johansson numerators that are compact, relabelling symmetric, and local.
In recent years modern amplitude methods have been successfully applied to so-called exceptional scalar effective field theories, chief among them the non-linear sigma model (NLSM) describing the dynamics of pions. A hallmark feature of NLSM amplitudes is their vanishing soft behavior (the Adler zero), which was crucial for the formulation of on-shell recursion relations at tree level.
In this contribution we present novel off-shell recursion relations valid for tree-level amplitudes and planar loop-integrands. Still leveraging the Adler zero as a guiding principle, we formulate a Berends-Giele-type recursion for NLSM amplitudes and integrands based on a single effective cubic vertex.
The two-Higgs-doublet model (2HDM) is a very simple extension of the Standard Model (SM). It provides interesting phenomenology concerning several unsolved issues of the SM. To remove undesirable flavour-changing neutral currents (FCNCs), the 2HDM is usually endowed with an additional $Z_2$ symmetry. However, one can also circumvent the issue of FCNCs by assuming proportional Yukawa structures for the two scalar doublets. The model with this intriguing feature is termed the aligned two-Higgs-doublet model (A2HDM). The constraints on the A2HDM are much weaker than in the $Z_2$-symmetric 2HDM cases, and the A2HDM provides a generic framework in which to study different varieties of 2HDMs together. Here, we present a global fit of the A2HDM using the package HEPfit and study the possibility of new particles lighter than the SM Higgs. For this global fit we perform a Bayesian analysis including stability and perturbativity bounds, flavour and electroweak precision observables, and scalar (and pseudoscalar) searches at LEP and the LHC.
We study the phenomenology of charged Higgs bosons ($H^\pm$) and Vector-Like Quarks (VLQs), denoted as $T$, with the same electric charge as the top quark, within the Two Higgs Doublet Model Type-II (2HDM-II) framework. We examine two scenarios: one with a singlet $(T)$ (2HDM-II+$(T)$) and another with a doublet $(TB)$ (2HDM-II+$(TB)$). We find that VLQs significantly influence the 2HDM-II's (pseudo)scalar sector, notably easing the stringent mass constraints on the charged Higgs boson from $B$-physics observables like $B\to X_s\gamma$, due to altered charged Higgs couplings to SM top and bottom quarks. The impact differs between the singlet and doublet scenarios. We also explore the effects of the oblique parameters $S$ and $T$ on VLQ mixing angles and present insights into VLQ $T$ pair production leading to a $2t4b$ final state, offering guidance for extended Higgs and quark sector searches at the LHC.
The concept of reduction of couplings consists in the search for relations, invariant under the renormalization group, between seemingly independent couplings of a renormalizable theory. In this work, we demonstrate the existence of such one-loop relations among the top-Yukawa, Higgs-quartic and colour-gauge couplings of the two-Higgs-doublet model at a high-energy boundary. The phenomenological viability of the reduced theory suggests the value of tanβ and the scale at which new physics may appear.
In the Standard Model, one doublet of complex scalar fields is the minimal content of the Higgs sector needed to achieve spontaneous electroweak symmetry breaking. However, several theories beyond the Standard Model predict a non-minimal Higgs sector and introduce charged scalar fields that do not exist in the Standard Model. As a result, singly- and doubly-charged Higgs bosons would be a unique signature of new physics with a non-minimal Higgs sector. As such, they have been extensively searched for by the ATLAS experiment, using proton-proton collision data at 13 TeV collected during LHC Run 2. In this presentation, a summary of the latest experimental results obtained in searches for both singly- and doubly-charged Higgs bosons is presented.
We present searches from the CMS experiment, performed with data collected during LHC Run 2 at a centre-of-mass energy of 13 TeV, for additional Higgs bosons. A variety of states are searched for, at masses both above and below 125 GeV. A range of decay channels is covered in the searches for additional Higgs bosons, including - but not limited to - the channels used for measurements of the 125 GeV Higgs boson.
The discovery of the Higgs boson with a mass of about 125 GeV completed the particle content predicted by the Standard Model. Even though this model is well established and consistent with many measurements, it cannot by itself explain several observations. Many extensions of the Standard Model addressing such shortcomings introduce additional neutral bosons. The current status of searches for resolved, resonant signatures in the full LHC Run 2 dataset of the ATLAS experiment at 13 TeV is presented.
Sterile neutrinos have been proposed as an explanation of the tension between appearance and disappearance experiments in the vanilla 3+1 model. Their existence can lead to a matter-enhanced resonance that results in a unique disappearance signature for Earth-crossing neutrinos, providing a different probe of the short-baseline anomalies. IceCube has performed an improved search for eV-scale unstable sterile neutrinos with a high-purity sample of up-going muon neutrinos from 500 GeV to 100 TeV using eleven years of data. The results of this analysis will be presented along with those of the no-decay (stable sterile neutrino) analysis.
KM3NeT/ORCA is a water Cherenkov neutrino telescope under construction in the Mediterranean Sea. With ORCA, the KM3NeT collaboration will measure atmospheric neutrino oscillations to determine the neutrino mass ordering and constrain the oscillation parameters Δm31² and θ23. In addition, Beyond the Standard Model hypotheses can be tested. In this contribution, the sensitivity of ORCA to the presence of a light sterile neutrino in a 3+1 model is presented, as well as the first measurements of the active-sterile mixing parameters. Using 433 kton-yr of data with a partial configuration of only 5% of the final detector, ORCA is able to constrain the active-sterile mixing angles θ24 and θ34. Two sets of scenarios are explored. First, θ24 and θ34 are simultaneously constrained under the assumption of an eV-mass sterile neutrino. Then, each mixing angle is individually constrained over a broad range of mass-squared difference Δm41² to probe the hypothesis of a very light sterile neutrino.
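For orientation, muon-neutrino disappearance driven by a sterile state in a 3+1 model reduces, in a two-flavor-like vacuum approximation, to the standard oscillation formula below. This is a sketch only (the ORCA analysis includes matter effects and the full mixing structure), and the mixing value used is illustrative:

```python
import math

# Two-flavor-like vacuum approximation for muon-neutrino survival in a
# 3+1 model:  P(numu -> numu) = 1 - sin^2(2theta_mumu) * sin^2(1.27 dm2 L / E),
# with dm2 in eV^2, L in km, E in GeV.  The effective amplitude
# sin^2(2theta_mumu) = 4|U_mu4|^2 (1 - |U_mu4|^2) depends on theta_24;
# the value passed below is purely illustrative, not a KM3NeT result.

def p_mumu(dm2_eV2, L_km, E_GeV, sin2_2theta):
    arg = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(arg) ** 2

# Example: eV-scale dm2, baseline/energy tuned near the first oscillation maximum
print(p_mumu(1.0, 1.2366, 1.0, 0.1))   # close to 0.9 (maximal disappearance)
```

Atmospheric neutrinos sample a wide range of L/E, which is what lets a partial ORCA configuration already constrain the active-sterile mixing angles.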
The SoLid experiment has taken data at the 70 MW BR2 reactor (SCK·CEN, Belgium), exploring very-short-baseline antineutrino oscillations. The 1.6-tonne detector uses an innovative antineutrino detection technique based on a highly segmented target volume made of PVT cubes and LiF:ZnS screens, read out by wavelength-shifting fibres and MPPCs. The technology has a linear energy response and allows the direct measurement of the positron energy. Robust pulse-shape discrimination can distinguish neutrons from positrons, gammas, and proton recoils. The main challenge of the experiment has been operating at very low overburden and with an internal BiPo contamination in the ZnS layers. 280 days of data with the reactor in operation and 170 days with the reactor off have been recorded. The experiment has a signal-to-background ratio close to 1/3, with about 120 neutrinos detected per day. An oscillation analysis has been performed, and the results using frequentist and Bayesian statistics will be presented.
The KATRIN experiment aims to measure the neutrino mass by precision spectroscopy of tritium β-decay. Recently, KATRIN has improved the upper bound on the electron-neutrino mass to 0.8 eV/c² at 90% CL and is continuing to take data.
Beyond the neutrino mass, the ultra-precise measurement of the β-spectrum at KATRIN can reveal further distinct signatures of new physics. Current investigations involve searching for an eV-scale sterile neutrino motivated by several anomalies, specific Lorentz invariance violating parameters only accessible via interaction processes such as at KATRIN, and probing the local relic neutrino background by threshold-free neutrino capture on tritium. Additionally, searches are being conducted for general neutrino interactions, enabling a broad search for novel interactions, and for neutrino-DM interactions using the dark MSW effect.
This presentation will highlight a selection of new physics searches carried out at KATRIN and present their most recent results.
The process of Coherent Elastic Neutrino-Nucleus Scattering (CEvNS), first observed in 2017 by the COHERENT collaboration, has provided a powerful tool to study Standard and beyond the Standard Model physics within the neutrino sector. In this talk, we present the results of constraining different new physics scenarios by using data from current CEvNS measurements. We mainly focus on non-standard interactions, neutrino magnetic moments, and leptoquark scenarios. Our analysis includes the latest data from the cesium iodide and liquid argon detectors used by the COHERENT collaboration. In addition, we present the expected sensitivities to these scenarios from future CEvNS experiments.
ESSνSB is a design study for a long-baseline neutrino experiment to measure CP violation in the leptonic sector at the second neutrino-oscillation maximum, using a beam driven by the uniquely powerful ESS linear accelerator. The ESSνSB CDR showed that after 10 years, more than 70% of the possible range of the CP-violating phase, δCP, will be covered at 5σ C.L. for rejecting the no-CP-violation hypothesis. The expected precision on δCP is better than 8° for all δCP values, making it the most precise proposed experiment in the field. The recently started extension project, ESSνSB+, aims to design two new facilities, a Low Energy nuSTORM and a Low Energy Monitored Neutrino Beam, to precisely measure the neutrino-nucleus cross section in the energy range 0.2-0.6 GeV. A new water Cherenkov detector will also be designed to measure cross sections and to explore the sterile-neutrino hypothesis. The overall status of the project will be presented together with the ESSνSB+ additions.
The LHC restarted in April 2022, with the plan to run at an average instantaneous luminosity of 2.0×10^33 cm−2 s−1 at the LHCb interaction point, a factor of five higher than in the past. In order to cope with the increased luminosity and to take data at the full bunch-crossing frequency (30 MHz visible interaction rate) in trigger-less mode, the LHCb detector has undergone a major upgrade, allowing LHCb to collect approximately 50 fb−1 over the next 10 years. The upgraded muon detector, with new off-detector and control electronics able to cope with the full LHC bunch-crossing frequency in trigger-less mode, also features updated control systems and reconstruction software. Steady progress has been made in finalising the detector control system, calibration, and alignment using data collected in 2022 and 2023. In 2024, the detector is expected to reach its nominal performance. The current status and prospects of the LHCb muon detector will be discussed in detail.
The most important ATLAS upgrade for LHC Run 3 has been in the Muon Spectrometer, where the replacement of the two forward inner stations with the New Small Wheels (NSW) introduced two novel detector technologies: the small-strip Thin Gap Chambers (sTGC) and the resistive-strip Micromegas (MM). The integration of the two NSW in the ATLAS endcaps marks the culmination of an extensive construction, testing, and installation program. The NSW now actively contributes to the muon-spectrometer trigger and tracking, while the commissioning of this innovative system is being finalized and its performance optimized. This presentation will offer an overview of the strategies employed for the integration and optimization of simulation and reconstruction, followed by a detailed report on the performance of the NSW system during its initial operation with LHC Run 3 data.
The CMS experiment at the LHC has started Run 3 data taking at a pp collision energy of 13.6 TeV. A highly efficient muon spectrometer has been crucial to many of the physics results obtained by CMS. The legacy CMS muon detector system consists of Drift Tube chambers in the barrel and Cathode Strip Chambers in the endcap regions, plus Resistive Plate Chambers in both barrel and endcaps. During the LS2 period, Gas Electron Multiplier chambers were added to enhance the redundancy of the system while maintaining the precision of the muon momentum resolution at the L1 trigger. The CMS muon system has run smoothly in the first two years of Run 3, with negligible contribution to downtime and luminosity loss, while showing the same excellent detector performance as in Run 2. This talk reports the operation summary and performance studies of the CMS muon system carried out using the first dataset collected at 13.6 TeV.
The muon system of the CMS experiment at the LHC has been upgraded by the installation of the first station of Gas Electron Multiplier (GEM), GE1/1, over the Long Shutdown 2 (LS2). The High-Luminosity phase of the LHC (HL-LHC) upgrade for CMS incorporates two additional stations, GE2/1 and ME0. Three GE2/1 chambers have been installed in CMS, with two new ones added at the beginning of 2024, while ME0 is slated for installation during LS3. The aim is to enhance the muon system's capabilities for HL-LHC by extending its acceptance up to |η|=2.8 and improving the muon triggering while maintaining performance achieved during Run 2. We present operational aspects of GEM detectors during Run 3, covering detector stability, performance metrics such as muon detection efficiency, and occasional high occupancy events. Finally, we report on the effects of magnetic field variations observed during commissioning and address the correlation between GEM baseline currents and LHC beam luminosity.
With the LHC operating beyond its design parameters, CMS keeps pushing the limits of SM measurements and BSM searches. In this context, the CMS Drift Tubes community is challenged to assess detector performance with increasing accuracy while identifying issues as early as possible. Novel strategies and tools were explored for these purposes. Dedicated analysis-oriented data formats were designed to retain maximal detector information while constraining the size to about 10 kB/event, aiming for inclusion in central production. An advanced automation framework developed within CMS was used to deploy analysis pipelines triggered by filters based on external conditions or overall data quality. Finally, a quasi-interactive declarative analysis approach, relying on Dask for parallel processing, was explored to ensure prompt inspection of large data volumes. We summarise the achievements on all of the above fronts and report the experience collected over the 2024 LHC run, when these tools were first deployed.
Muon objects play a key role in the CMS physics program, as many analyses target final states with muons. The ability to trigger on, reconstruct, and identify events with prompt and non-prompt muons with high efficiency and excellent resolution is thus crucial for the success of the experiment. In this talk, muon reconstruction, identification, and isolation efficiencies, as well as momentum measurements, during the first years of Run 3 of the LHC will be discussed. The talk will also highlight novel machine-learning algorithms used in the High-Level Trigger (HLT) and for muon identification.
BESIII has accumulated 4.5 fb$^{-1}$ of $e^+e^-$ collision data at center-of-mass energies of 4.6 and 4.7 GeV, presenting a unique opportunity to investigate $\Lambda_c^+$ decays. Our presentation will include the first measurement of the decay asymmetry in the pure W-boson-exchange decay $\Lambda_c^+\to\Xi^0 K^+$, as well as the study of $\Lambda_c^+ \to \Lambda \ell^+ \nu$ and the branching-fraction measurements of the inclusive decays $\Lambda_c^+ \to X e^+ \nu$ and $\bar{\Lambda}_c \to \bar{n} X$.
Furthermore, we will present the results of the partial wave analysis of $\Lambda_c^+ \to \Lambda \pi^+ \pi^0$, and the latest branching fraction measurements of Cabibbo-suppressed and Cabibbo-favored $\Lambda_c^+$ decays, including $\Lambda_c \to p \pi^0, \Sigma^- K^+ \pi^+, p \eta (\omega)$, and more.
BESIII has collected 2.93 and 7.33 fb$^{-1}$ of $e^+e^-$ collision data at 3.773 GeV and at 4.128-4.226 GeV, the world's largest datasets of $D\bar{D}$ and $D_sD_s$ pairs, respectively.
We will present the observation of $D^+ \to K_S^0 a_0(980)$ and a new $a_0$-like state with a mass of 1.817 GeV, the determination of U-spin-breaking parameters of the decay $D^0 \to K_L^0 \pi^+ \pi^-$, and the amplitude analyses of $D^{0(+)} \to 4\pi$ and $D^+ \to K_S^0\pi^+\pi^0\pi^0$. Our presentation will also include the latest measurements of quantum-correlated $D\bar{D}$ decays, including the CP-even fractions of $D^0 \to K_S^0\pi^+\pi^-\pi^0$ and $D^0 \to KK\pi\pi$.
We will also present the study of $D_s^* \to e \nu$ and improved measurements of $|V_{cs}|$ and the $D_s$ decay constant in $D_s^+ \to \mu^+ \nu$ and $\tau^+ \nu$. Furthermore, we will present the $D_s \to \eta^{(\prime)}$, $D_s \to f_0(980)$, and $D_s \to \phi$ form-factor studies.
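The extraction of $|V_{cs}|$ and the $D_s$ decay constant mentioned above rests on the standard leptonic-decay-width formula. A back-of-the-envelope sketch with illustrative PDG-like inputs (assumed values, not the BESIII measurements):

```python
import math

# Standard-model leptonic width (helicity-suppressed by m_lep^2):
#   Gamma(D_s -> l nu) = G_F^2/(8 pi) * f_Ds^2 * |Vcs|^2
#                        * m_lep^2 * m_Ds * (1 - m_lep^2/m_Ds^2)^2
# All inputs below are illustrative, PDG-like numbers.

G_F    = 1.1663787e-5   # Fermi constant [GeV^-2]
f_Ds   = 0.250          # D_s decay constant [GeV] (assumed value)
Vcs    = 0.975          # CKM element (assumed value)
m_Ds   = 1.9685         # D_s mass [GeV]
m_mu   = 0.10566        # muon mass [GeV]
tau_Ds = 5.04e-13       # D_s lifetime [s]
hbar   = 6.582119e-25   # reduced Planck constant [GeV s]

gamma = (G_F**2 / (8 * math.pi) * f_Ds**2 * Vcs**2
         * m_mu**2 * m_Ds * (1 - m_mu**2 / m_Ds**2)**2)
bf = gamma * tau_Ds / hbar
print(f"BF(D_s -> mu nu) ~ {bf:.2e}")   # ~5e-3, near the measured value
```

Because the width scales as $f_{D_s}^2 |V_{cs}|^2$, a branching-fraction measurement determines the product $f_{D_s}|V_{cs}|$; combining with a lattice value of $f_{D_s}$ isolates $|V_{cs}|$.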
The rich structure of flavor physics provides a plethora of possibilities to test or constrain the standard model. This requires both precise experimental measurements as well as theoretical predictions. Determining nonperturbative contributions due to the strong force is the prime task of lattice QCD calculations leading e.g. to determinations of decay constants, form factors or bag parameters. Selecting a few examples we highlight recent progress in lattice QCD determinations in the light and heavy flavor sector and discuss their impact on flavor physics.
The measurements of the decay rates $\eta_c\to\gamma\gamma$ and $\eta_b\to\gamma\gamma$ are part of the BESIII and Belle II programmes, respectively, as tests of the Standard Model. Here we provide, for the first time, precise SM values for these decay rates using lattice QCD. For $\Gamma(\eta_c\to\gamma\gamma)$ we obtain 6.788(61) keV, in good agreement with, but much more accurate than, experimental results using $\gamma\gamma \to \eta_c \to K\overline{K}\pi$. Our value is, however, in 4σ tension with the PDG global-fit result. Building on this study, we have been able to predict $\Gamma(\eta_b \to \gamma\gamma)$ with an uncertainty of 2%. We also compare the ratio of the form factors to the meson decay constants with expectations from NRQCD to assess how well nonrelativistic effective theories work in these two cases.
The Belle and Belle II experiments have collected a 1.1 ab$^{-1}$ sample of $e^+ e^-\to B\bar{B}$ collisions at the $\Upsilon(4S)$. The study of hadronic $B$ decays in these data allows precise measurements of absolute branching fractions and angular distributions of the decay products. These measurements provide tests of QCD and enable the generation of more realistic simulation samples. We present measurements of the decays $B^-\to D^0\rho^-$, $\bar{B}^0\to D^+\pi^-\pi^0$, $B\to DK^{*}K$ and $\bar{B}^0\to\omega\omega$. In addition, we search for the decays $B\to D^{(*)}\eta\pi$, which can be related to poorly known $B\to X_c\ell\nu$ decays that include an $\eta$ meson in the final state.
We present an update of the analysis of decays of $B^0_d$ and $B^0_s$ mesons to charmless three-body final states that include a $K^0_S$ meson. The primary goal of the analysis is to search for the as-yet unobserved decay mode $B^0_s \to K^0_S K^+K^-$. In addition, the branching-fraction measurements for the set of decay modes $B^0_{(d,s)} \to K^0_S h^+ h^{\prime-}$ (where $h$ and $h^{\prime}$ are each a pion or kaon), measured relative to $B^0_d \to K^0_S \pi^+ \pi^-$, are updated. The analysis uses the data recorded by the LHCb experiment during Runs 1 and 2 of the LHC, corresponding to an integrated luminosity of 9 fb$^{-1}$.
The presence of charmonium in the final state of $B$ decays is a very clean experimental signature that allows the efficient collection of large samples of these decays. Such a clean signature also makes these decays suitable for precision measurements of beauty-hadron lifetimes and width differences $\Delta\Gamma_{q}$ ($q=d,s$). In this work we present the most recent LHCb results in the study of these decays, with particular attention to lifetime and branching-ratio measurements.
COMPASS is the longest-running experiment at CERN, having performed a series of data-taking campaigns from 2002 to 2022, spanning a record-breaking 20 years.
One of the objectives of the experiment's broad physics program was to perform semi-inclusive measurements of target spin-dependent asymmetries in (di-)hadron production in DIS using 160 GeV muons and polarized targets.
These measurements provide access to the spin structure of the nucleon, which is described in terms of transverse momentum-dependent parton distribution functions. The most recent measurements were performed in 2022 using a transversely polarized deuteron target.
These measurements play a crucial role in constraining the transversity and Sivers functions of the d quark.
The talk will present the recent COMPASS results from the 2022 run.
It has been proposed that at small Bjorken x, or equivalently at high energy, hadrons represent maximally entangled states of quarks and gluons. This conjecture is in accord with experimental data from the electron-proton collider HERA at the smallest accessible x. We propose to study the onset of maximal entanglement inside the proton using diffractive deep inelastic scattering. It is shown that the data collected by the H1 Collaboration at HERA allow us to probe the transition to the maximal-entanglement regime. By relating the entanglement entropy to the entropy of final-state hadrons, we find good agreement with the H1 data using both the exact entropy formula and its asymptotic expansion, which indicates the presence of a nearly maximally entangled state. Finally, future opportunities at the Electron-Ion Collider are discussed.
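The entropy relation underlying this proposal can be stated schematically (the Kharzeev-Levin conjecture, quoted here as a sketch rather than the paper's full derivation):

```latex
% Parton-level entanglement entropy at small x, with xG(x,Q^2) the gluon
% distribution counting partonic microstates (schematic):
S_{\mathrm{EE}} \;\simeq\; \ln\!\left[\,x G(x,Q^2)\,\right],
% entropy of the final-state hadron multiplicity distribution P(N):
S_{\mathrm{hadron}} \;=\; -\sum_{N} P(N)\,\ln P(N).
% The conjecture tested against the H1 data is S_EE = S_hadron; maximal
% entanglement corresponds to all partonic microstates being equiprobable.
```

Diffractive DIS is interesting in this context precisely because the rapidity gap selects configurations where the transition to the maximally entangled regime can be tracked.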
The talk is based on 10.1103/PhysRevLett.131.241901
Studies of the transverse-spin-dependent azimuthal asymmetries in the Drell-Yan process permit access to the spin-dependent structure of the nucleon and, in particular, a test of the limited universality of its transverse-momentum-dependent parton distributions, which are known from deep inelastic scattering.
In 2015 and 2018 the COMPASS Collaboration at CERN performed measurements of the $\pi^-$p $\rightarrow \mu^+\mu^-$X reaction with a 190 GeV/$c$ pion beam and a transversely polarised NH$_3$ target. Results of the analysis will be presented, including those obtained with a novel approach in which the asymmetries are weighted by powers of the transverse momentum of the dimuon system with respect to the beam. This approach overcomes the convolution over the intrinsic transverse momentum and opens easy access to certain $k_{\rm T}^2$ moments of the transverse-momentum-dependent parton distribution functions.
Chiral symmetry constrains QCD properties in large magnetic fields $e B \sim m_\pi^2$, thereby providing stringent model-independent tests for lattice QCD and hadronic models. As examples of magnetic-field dependent observables calculated with chiral perturbation theory, we exhibit the finite-volume dependence of the pressure anisotropy and magnetization, as well as detail how finite-volume effects can be exploited for lattice correlation functions of neutral particles in magnetic fields. Due to the potential relevance for magnetars, weak decays are also investigated. Chiral symmetry leads to next-to-leading order predictions for decay rates without any undetermined parameters.
A measurement of additional radiation in $e^+e^-\to\mu^+\mu^-\gamma$ and $e^+e^-\to\pi^+\pi^-\gamma$ initial-state-radiation events is presented using the full $BABAR$ data sample. For the first time, results are presented at next-to- and next-to-next-to-leading order, with one and two additional photons, respectively, for radiation from the initial and final states. The comparison with the predictions of the Phokhara and AfkQed generators reveals discrepancies for the former in the one-photon rates and angular distributions. While this disagreement has a negligible effect on the $e^+e^-\to\pi^+\pi^-(\gamma)$ cross section measured by $BABAR$, the estimated impact on the KLOE and BESIII measurements is indicative of significant systematic effects. The findings shed new light on the longstanding deviation between the muon $g-2$ measurement and the Standard Model prediction using the data-driven dispersive approach, and on the comparison with lattice-QCD calculations.
We consider extensions of the soft-gluon effective coupling that generalize the Catani-Marchesini-Webber (CMW) coupling in the context of soft-gluon resummation beyond the next-to-leading logarithmic accuracy. Starting from the probability density of correlated soft emission in d dimensions we introduce a class of soft couplings relevant for resummed QCD calculations of hard-scattering observables. We show that at the conformal point, where the d-dimensional QCD beta-function vanishes, all these effective couplings are equal to the cusp anomalous dimension. We present explicit results in d dimensions for the soft-emission probability density and the soft couplings at the second-order in the QCD coupling. Finally, we study the structure of the soft coupling in the large-nF limit and we present explicit expressions to all orders in perturbation theory. We also check that, at the conformal point, our large-nF results agree with the known result of the cusp anomalous dimension.
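The CMW coupling that these soft couplings generalize can be sketched numerically; the $K$ coefficient below is the standard one-loop result, with the usual QCD color factors ($C_A = 3$, $T_R = 1/2$) as inputs:

```python
import math

# The CMW (Catani-Marchesini-Webber) scheme redefines the soft coupling as
#   alpha_s^CMW = alpha_s * (1 + K * alpha_s / (2*pi)),
# with the standard one-loop coefficient
#   K = C_A * (67/18 - pi^2/6) - (10/9) * T_R * n_f.

def cmw_K(n_f, C_A=3.0, T_R=0.5):
    """One-loop CMW coefficient K for n_f active quark flavours."""
    return C_A * (67.0 / 18.0 - math.pi ** 2 / 6.0) - (10.0 / 9.0) * T_R * n_f

def alpha_cmw(alpha_s, n_f):
    """Soft (CMW) coupling obtained from the MSbar coupling alpha_s."""
    return alpha_s * (1.0 + cmw_K(n_f) * alpha_s / (2.0 * math.pi))

print(cmw_K(5))            # ~3.45 for five flavours
print(alpha_cmw(0.118, 5)) # the soft coupling is slightly larger than alpha_s
```

At higher logarithmic accuracy this single rescaling is no longer sufficient, which motivates the family of generalized soft couplings discussed in the talk.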
The top quark mass is a fundamental parameter of the SM. Its precise determination allows a powerful consistency check of the theory. About ten years after the completion of Run 1, ATLAS and CMS published the final combination of fifteen measurements at 7 and 8 TeV based on various final states and complementary techniques. In this talk, details of the combination, the systematic model, and the correlation of systematic uncertainties will be discussed and an outlook for future improvements will be given.
The CMS Collaboration has carried out a rich program of top quark mass measurements, providing thorough tests of the internal consistency of the Standard Model. While direct measurements suffer from ambiguities in interpreting the measured parameter, extractions from cross-section measurements provide a solution to this problem with the drawback of being less precise and relying on the picture of a stable top quark. In this talk, we focus on extractions of the top quark mass from the jet mass distribution in fully hadronic decays of boosted top quarks. These have proven to be competitive in precision compared to other measurements and have the potential to be turned into an extraction of a well-defined top quark mass. This measurement complements other top quark mass extractions as it has different uncertainties. We present the current status, explore potential improvements, and provide an outlook for such a measurement at the HL-LHC.
The top-quark mass is one of the key fundamental parameters of the Standard Model that must be determined experimentally. Its value has an important effect on many precision measurements and tests of the Standard Model. The Tevatron and LHC experiments have developed an extensive program to determine the top quark mass using a variety of methods. In this contribution, the top quark mass measurements by the ATLAS experiment are reviewed. These include measurements in two broad categories: the direct measurements, where the mass is determined from a comparison with Monte Carlo templates, and determinations that compare differential cross-section measurements to first-principle calculations. In addition, new results on top-quark properties are shown, including the first observation of quantum entanglement in top-quark pair events and a test of lepton-flavour universality in $e\mu$ final states.
We generalize and update our earlier top quark mass calibration framework for Monte Carlo event generators, based on the e+e- hadron-level 2-jettiness distribution in the resonance region for boosted top production. The updated framework adds the shape variables sum of jet masses and modified jet mass, and treats two more gap subtraction schemes to remove the leading renormalon. These generalizations entail implementing a more versatile shape-function fit procedure and accounting for a certain type of massive power correction. The theoretical description employs boosted heavy-quark effective theory at NNLL matched to soft-collinear effective theory at NNLL and to full QCD at NLO, and includes the dominant top width effects. We update the top mass calibration results by applying the new framework to PYTHIA 8.205, HERWIG 7.2 and SHERPA 2.2.11.
Precision measurements of the top quark mass at hadron colliders have been notoriously difficult. Energy-Energy Correlators (EECs) provide clean access to angular correlations in the hadronic energy flux, but their application to precision mass measurements is less direct, since they measure a dimensionless angular scale.
Inspired by the use of standard candles in cosmology, I will show that a single EEC-based observable can be constructed that reflects the characteristic angular scales of both the $W$-boson and top quark masses. This gives direct access to the dimensionless quantity $m_t/m_W$, from which $m_t$ can be extracted in a well-defined short-distance scheme as a function of the well-known $m_W$. I will demonstrate several remarkable properties of this observable as well as its statistical feasibility. This proposal provides a road map for a rich program for top mass determination at the LHC with record precision.
Based on arXiv:2311.02157 and arXiv:2201.08393
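Schematically (an illustrative sketch of the underlying idea; the precise observable is defined in the references above), the decay products of a boosted particle of mass $m$ and transverse momentum $p_T$ imprint a characteristic angular scale on the energy flux,

```latex
\theta_{\rm peak} \sim \frac{2m}{p_T},
\qquad
\frac{\theta_{\rm peak}^{(t)}}{\theta_{\rm peak}^{(W)}}
 \sim \frac{m_t}{m_W}
\;\;\Longrightarrow\;\;
m_t \simeq m_W\,\frac{\theta_{\rm peak}^{(t)}}{\theta_{\rm peak}^{(W)}},
```

so jet-energy-scale uncertainties largely cancel in the dimensionless ratio of angular scales, and $m_t$ inherits the precision of the well-measured $m_W$.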
We discuss the possibility that light new physics in the top quark sample at the LHC can be found by investigating well-known kinematic distributions with greater care, such as the invariant mass $m_{b\ell}$ of the b-jet and the charged lepton in fully leptonic $t\bar{t}$ events. We demonstrate that new physics can be probed in the rising part of the already measured $m_{b\ell}$ distribution. To this end we analyze a concrete supersymmetric scenario with a light right-handed stop, a chargino and a neutralino. The corresponding spectra are characterized by small mass differences, which leave them unexcluded by current LHC searches and give rise to a specific end-point in the shape of the $m_{b\ell}$ distribution. We argue that this sharp feature is generic for models of light new physics that have so far escaped LHC searches and can offer a precious handle for robust searches that exploit, rather than suffer from, soft bottom quarks and leptons.
Heavy-baryon production in various collision systems, from RHIC to LHC energies, is a challenge for the theoretical understanding of heavy-quark (HQ) hadronization. Hadronization via coalescence plus fragmentation predicts a large Λc/D0 ratio from AA collisions at RHIC and the LHC to pp collisions at top LHC energies. The model shows significant enhancements in Ξc/D0 and Ωc/D0 in pp collisions, in agreement with ALICE data. We extend this approach to multi-charmed baryons: Ξcc and Ωccc. Investigating the impact of system size, from PbPb to KrKr, ArAr, and OO as planned for ALICE 3, will make it possible to study the role of non-equilibrium in the charm quark distribution. We show that in PbPb collisions it resembles SHM predictions under full thermalization, while baryons like Ωccc are sensitive to HQ thermalization. Predictions for B mesons, Λb, and Ξb in PbPb and pp collisions at top LHC energies are presented. The study of heavy-hadron production could shed light on the hadronization and equilibration of HQs in the QGP.
Heavy quarks (charm and beauty) serve as useful probes for investigating the properties of the quark-gluon plasma (QGP) generated in ultrarelativistic heavy-ion collisions. Of particular interest are the characterisation of heavy-quark in-medium energy loss and of the heavy-quark diffusion process within the medium.
In this contribution, measurements of charm-hadron $R_{\rm{AA}}$ and of prompt- and non-prompt D meson $v_2$ coefficients in Pb--Pb collisions at $\sqrt{s_{\rm{NN}}} = 5.02$ TeV will be shown, along with comparisons to model predictions that incorporate various implementations of the heavy-quark interaction and hadronisation with the QGP constituents. Angular correlations of heavy-flavour decay electrons with charged particles, and their modification in the presence of the QGP, will also be presented. The latest findings from the LHC Pb--Pb Run 3 data will be featured, showcasing the performance of $v_2$ measurements for charm mesons and baryons.
In this work we present the first semi-analytical predictions of the azimuthal anisotropies of jets in heavy-ion collisions, obtaining quantitative agreement with the available experimental data. Jets are multi-partonic, extended objects, and their energy loss is sensitive to substructure fluctuations. We find that jet azimuthal anisotropies have an especially strong dependence on color coherence physics due to the marked length dependence of the critical angle θc. By combining our predictions for the collision systems and center-of-mass energies studied at RHIC and the LHC, we show that the relative size of the azimuthal anisotropies for jets with different cone sizes R follows a universal trend indicating a transition from a coherent to a decoherent regime of jet quenching. These results suggest a way forward to reveal the role played by the physics of jet color decoherence in probing deconfined QCD matter.
We investigate possible signatures of gluon saturation using forward p + A → j + j + X di-jet production processes at the Large Hadron Collider. In the forward rapidity region, this is a highly asymmetric process in which partons with large longitudinal momentum fraction x in the dilute projectile are used as a probe to resolve the small-x partonic content of the dense target. Such dilute-dense processes can be described in the factorization framework of Improved Transverse Momentum Distributions (ITMDs). We present a new model in which we explicitly introduce the impact-parameter (b) dependence of the ITMDs, to properly account for the nuclear enhancement of gluon saturation effects, and discuss the phenomenological consequences for p-Pb, p-Xe and p-O collisions at the LHC.
We present a new coherent jet energy loss model for heavy-ion collisions. It is implemented as a Monte Carlo perturbative final-state parton shower followed by elastic and radiative collisions with the medium constituents. Coherence is achieved by starting with trial gluons that act as field dressing of the initial jet parton. These are formed according to a Gunion-Bertsch seed. The QCD version of the LPM effect is attained by increasing the phase of the trial gluons through elastic scatterings with the medium.
The model has been validated by successfully reproducing the BDMPS-Z prediction for the energy spectrum of radiated gluons in a static medium. Results for a realistic LHC-energy case with minimal assumptions are also shown, together with the influence of various parameters on the energy spectrum and the transverse-momentum distribution. The model is constructed with a realistic medium description and jet-medium coupling in mind.
One of the most significant problems in modern physics is the apparent asymmetry of matter and antimatter in the Universe. Recent laser experiments in the United States have demonstrated that high intensity lasers striking high-Z targets release electron/positron pairs that can be separated magnetically, while also resulting in transmutation of the lasers' targets. With these experimental results, a composite model of hadron structure involving matter and antimatter is indicated and explained. The implications of these experimental results, including specifically the cosmology resulting from this composite matter/antimatter model (covering proton-proton chain reactions in stars, neutrino generation, beta decay, Dark Matter, Dark Energy, and Strong Force/gravity/inertia interchange) will be discussed.
Knowledge of the primordial matter density field from which the present non-linear observations formed is of fundamental importance for cosmology, as it contains an immense wealth of information about the physics, evolution, and initial conditions of the universe. Reconstructing this density field from the galaxy survey data is a notoriously difficult task, requiring sophisticated statistical methods, advanced cosmological simulators, and exploration of a multi-million-dimensional parameter space. In this talk, I will discuss how simulation-based inference implemented through sliced score matching allows us to tackle this problem and obtain data-constrained realisations of the primordial dark matter density field in a simulation-efficient way for general non-differentiable simulators. In addition, I will describe how graph neural networks can be used to get optimal data summaries for galaxy maps, and how our results compare to those obtained with classical likelihood-based methods.
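A minimal numpy illustration of the sliced score matching objective mentioned above (a toy model, not the analysis code): for a one-parameter Gaussian score model the sliced objective is quadratic in the parameter, so its minimiser is available in closed form and recovers the true score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x ~ N(0, sigma^2 I); the true score is s(x) = -x / sigma^2.
d, n, sigma = 8, 50_000, 2.0
x = rng.normal(0.0, sigma, size=(n, d))

# Sliced score matching replaces the intractable trace of the score
# Jacobian with random projections v:
#   J(theta) = E[ v^T (d s_theta/dx) v + 0.5 (v^T s_theta(x))^2 ]
# For the hypothetical one-parameter model s_theta(x) = -theta * x:
#   v^T (ds/dx) v = -theta * |v|^2,   v^T s = -theta * (v . x),
# so minimising the quadratic J(theta) gives in closed form
#   theta* = E[|v|^2] / E[(v . x)^2].
v = rng.normal(size=(n, d))
theta_hat = np.mean(np.sum(v * v, axis=1)) / np.mean(np.sum(v * x, axis=1) ** 2)

print(theta_hat)        # close to the true value 1/sigma^2 = 0.25
```

In the full problem the score model is a neural network and the Jacobian-vector products are obtained by automatic differentiation; the closed form here only exists because the model is linear.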
Mounting evidence suggests that planned and present gravitational-wave detectors may be sensitive to signatures from first-order phase transitions in the early universe. Here, we investigate the influence of heavy vector-like fermions on the phase transition. Specifically, we consider the recently-proposed "flavour transfer" model, where the SM flavour structure is augmented by a new horizontal SU(2) flavour gauge group. For such a model, the new gauge symmetry is broken far above the electroweak scale and constraints are dominated by “flavour-transfer” operators rather than flavour-changing currents. We calculate the finite-temperature corrections to the effective potential and determine the critical temperature at which we expect a phase transition. We examine the parameters for which the phase transition is strongly first order, and estimate whether the corresponding peak frequency of the gravitational-wave signal lies within the sensitivity windows of upcoming detectors.
The Two-Higgs-Doublet-Standard Model-Axion-Seesaw-Higgs-Portal inflation (2hdSMASH) model, consisting of two Higgs doublets, a Standard Model (SM) singlet complex scalar and three SM singlet right-handed neutrinos, can accommodate axion dark matter and neutrino masses and address inflation. We report on an investigation of the inflationary aspects of 2hdSMASH and their subsequent impact on low-energy phenomenology.
By analyzing the renormalization-group flow of the parameters, we identify the necessary and sufficient constraints for running all parameters perturbatively and maintaining stability from the electroweak to the Planck scale. Inflationary considerations place stringent constraints on the singlet scalar self-coupling. We show that inflation is realized in a variety of field-space directions in the effective single-field regime. Benchmark scenarios for colliders are provided as well.
The occurrence of $CP$-asymmetric processes is one of the necessary conditions for successful generation of the matter-antimatter asymmetry in the early universe. For any initial state, unitarity and $CPT$ symmetry imply that the sum of the asymmetries over all possible final states vanishes. In this contribution, we present a diagrammatic approach that simplifies asymmetry calculations and allows for systematic tracking of their cancellations at any perturbative order. It is based on Phys. Rev. D 103, L091302, and recent developments.
FASER, the ForwArd Search ExpeRiment, has successfully taken data at the LHC since the start of Run 3 in 2022. From its unique location along the beam collision axis 480 m from the ATLAS IP, FASER has set leading bounds on dark photon parameter space in the thermal target region and has world-leading sensitivity to many other models of long-lived particles. In this talk, we will give a full status update of the FASER experiment and its latest results, with a particular focus on our very first search for axion-like particles and other multi-photon signatures.
The NA62 experiment at CERN, designed to measure the highly suppressed decay $K^{+} \rightarrow \pi^{+}\nu\bar{\nu}$, has the capability to collect data in a beam-dump mode, where 400 GeV protons are dumped on an absorber. In this configuration, New Physics (NP) particles, including dark photons, dark scalars and axion-like particles, may be produced and reach a decay volume beginning 80 m downstream of the absorber. A search for NP particles decaying in flight to hadronic final states is reported, based on a blind analysis of a sample of $1.4 \times 10^{17}$ protons on dump collected in 2021.
The MoEDAL experiment at IP8 on the LHC ring is the 7th LHC experiment and the first dedicated to the search for BSM physics. It took data during LHC Runs 1 and 2. The MoEDAL detector is an unconventional, mostly passive detector dedicated to the search for Highly Ionizing Particle (HIP) avatars of new physics. An upgraded MoEDAL detector, installed for Run 3, is currently taking data, allowing us also to search for massive singly and multiply charged Long-Lived Particles (LLPs).
MoEDAL-MAPP is currently installing the MoEDAL Apparatus for Penetrating Particles (MAPP-1) in the UA83 tunnel, about 100 m from IP8, as part of MoEDAL-MAPP’s New Physics Search Facility (MNPSF) at the LHC. MAPP-1 extends MoEDAL’s reach to Feebly Ionizing Particles (FIPs), such as milli-charged particles, while retaining sensitivity to LLPs. The MoEDAL-MAPP Collaboration plans to add the MAPP-2 detector to the MNPSF for data taking at the High-Luminosity LHC, to greatly enhance sensitivity to neutral LLPs.
Many extensions of the Standard Model with Dark Matter candidates predict new long-lived particles (LLPs). The LHC provides an unprecedented possibility to search for such LLPs produced at the electroweak scale and above. The ANUBIS concept foresees instrumenting the ceiling and service shafts above the ATLAS experiment with tracking stations in order to search for LLPs with decay lengths of O(10 m) and above. In this contribution, we will present the latest findings from our recent studies of ANUBIS’ sensitivity to several BSM models predicting long-lived-particle signatures, with a particular focus on challenging Heavy Neutral Lepton scenarios.
Signatures of new physics at the LHC are varied and by nature often very different from those of Standard Model processes. Novel experimental techniques, including dedicated data streams, are exploited to boost the sensitivity of the CMS experiment to such signatures. In this talk we highlight the most recent CMS results, obtained from data collected during LHC Run 2 using the so-called “Data Scouting” and “Data Parking” strategies. These approaches have yielded some of the strongest constraints to date on low-mass resonances in prompt and long-lived signatures.
The Jiangmen Underground Neutrino Observatory (JUNO), located in southern China, will be the world’s largest liquid scintillator (LS) detector upon completion. Equipped with 20 kton of LS, about 17612 20-inch PMTs and 25600 3-inch PMTs in the central detector (CD), JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. In recent decades, machine learning has been increasingly used in neutrino experiments. If each PMT is viewed as a pixel, the JUNO CD can be regarded as a large spherical camera, a natural setting for machine learning applications. This talk will present an overview of machine learning applications in JUNO, including reconstruction and particle identification. These machine-learning-based methods not only provide alternatives complementary to the traditional approaches, but also show great potential to enhance the performance of the JUNO detector.
Advanced machine-learning (ML) methods are increasingly used to tackle the analysis of large and complex datasets. At CMS we explore the unique opportunity to exploit these new ML methods to extract information and address scientific questions in searches for physics beyond the Standard Model, with the overarching aim of discerning possible signatures of new physics. In this talk we will discuss results obtained with this novel strategy using the full Run 2 dataset collected by the CMS experiment at the LHC.
Designing the next generation colliders and detectors involves solving optimization problems in high-dimensional spaces where the optimal solutions may nest in regions that even a team of expert humans would not explore.
Resorting to artificial intelligence to assist experimental design, however, introduces significant computational challenges in generating and processing the data required for such optimizations: from the software point of view, differentiable programming makes the exploration of these spaces with gradient descent feasible; from the hardware point of view, the complexity of the resulting models and their optimization is prohibitive. To scale up to the complexity of a typical HEP collider experiment, a paradigm shift is required.
In this talk I will describe the first proofs-of-concept of gradient-based optimization of experimental design and implementations in neuromorphic hardware architectures, paving the way to more complex challenges.
Recent advances in AI have been significant, with large language models demonstrating astonishing capabilities that hold the promise of driving new scientific discoveries in high-energy physics. In this report, we will discuss two potential approaches to large models. The first is a specialized intelligent agent for BESIII experiment based on large language models, encompassing its brain, sensors, memory, actuators, and learning systems. Second, we will talk about the large model that directly processes particle physics data, referred to as a scientific large model. We discuss recent progress and considerations for developing future AI scientists.
The CMS Tracker in Run 3 is made up of thousands of silicon modules (Pixel: 1856 modules, Strip: 15148 modules). Because of detector aging and other incidents that may occur during operations, constant monitoring of the detector components is needed to guarantee the best data quality. The procedures and tools adopted by the CMS Tracker group to monitor the data, both online and offline, will be presented, together with the results of data certification in Run 3. New tools are being developed to support human shifters in the daily task of checking hundreds of histograms. To further increase the efficacy of the process, machine learning models are being trained on data to identify subtle discrepancies that may pass unnoticed by the human eye.
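A minimal sketch of the kind of automated histogram monitoring described above (names, thresholds, and numbers are hypothetical, not the CMS Tracker tools): learn the per-bin mean and spread of a monitored histogram from certified "good" reference runs, then flag bins of a new run that pull away strongly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference sample: a monitored occupancy histogram from 200 certified
# "good" runs, each bin Poisson-distributed around 1000 entries.
n_bins, n_good_runs = 50, 200
good_runs = rng.poisson(1000, size=(n_good_runs, n_bins)).astype(float)

mu = good_runs.mean(axis=0)   # per-bin reference mean
sd = good_runs.std(axis=0)    # per-bin reference spread

def flag_bins(run, n_sigma=5.0):
    """Return indices of bins whose pull |run - mu| / sd exceeds n_sigma."""
    pulls = np.abs(run - mu) / sd
    return np.flatnonzero(pulls > n_sigma)

# A new run with a dead region: bins 10-12 drop to ~10% occupancy.
new_run = rng.poisson(1000, size=n_bins).astype(float)
new_run[10:13] = rng.poisson(100, size=3)

print(flag_bins(new_run))     # -> [10 11 12]
```

The ML approaches mentioned in the abstract go beyond such per-bin statistics, e.g. autoencoders that learn correlations across bins, but the flagging logic they feed is of this form.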
This talk will summarise a method based on machine learning to play the devil's advocate and investigate the impact of unknown systematic effects in a quantitative way. This method proceeds by reversing the measurement process and using the physics results to interpret systematic effects under the Standard Model hypothesis. We explore this idea in arXiv:2303.15956 by considering the hypothetical possibility that the efficiency to reconstruct a signal is mismodelled in the simulation. Extensions of this method to include hypothetical backgrounds are also discussed, which have the potential to significantly streamline the analysis procedure in a complex experiment.
The future circular electron-positron collider (FCC-ee) is receiving much attention in the context of the FCC Feasibility Study, now at an advanced stage as described in the mid-term report, in preparation for the next EU strategy update. We present IDEA, a detector concept optimized for FCC-ee, composed of a vertex detector based on DMAPS, a very light drift chamber, a silicon wrapper, a dual-readout calorimeter outside a thin 2 T solenoid, and muon chambers inside the magnet yoke. In particular, we discuss the physics requirements and the technical solutions chosen to address them. We also present some possible upgrades under study to further extend and improve the physics capabilities of IDEA. We then describe the detector R&D currently in progress and show the expected performance on some key physics benchmarks.
The future circular electron-positron collider (FCCee) will be used to study the heaviest known particles with unprecedented precision, a goal that introduces multiple challenges in the detector design. One of the proposed experiments for FCCee is ALLEGRO, a general-purpose detector concept that is currently in its design and optimization phase. This contribution will introduce ALLEGRO’s calorimeter system, offering a comprehensive overview of the baseline technologies planned for its two calorimeter systems: a highly granular noble-liquid electromagnetic calorimeter and a hadronic calorimeter with scintillating-light readout using wavelength shifting fibers. Preliminary results from performance studies with the combined calorimeters are presented, thus shedding light on the promising capabilities of this newly introduced detector concept for FCCee. Additionally, we briefly introduce the potential use of machine-learning approaches for particle identification and detector calibration.
The International Large Detector (ILD) is a detector designed primarily for the International Linear Collider (ILC), a high-luminosity linear electron-positron collider with an initial center-of-mass energy of 250 GeV, extendable to 1 TeV or more. The ILD concept is based on particle flow for overall event reconstruction, which requires outstanding detector capabilities including superb tracking, very precise detection of secondary vertices and high-granularity calorimetry. ILD as a general-purpose detector can also serve as an excellent basis to compare the science reach and detector challenges for different collider options. ILD is actively exploring possible synergies with other Higgs/EW factory options, in particular FCC-ee. Possible updates of the detector concept by introducing new technologies are also being studied. In this talk we will report on the state of the ILD concept, report on recent results and discuss selected examples of studies of ILD at colliders other than ILC.
The Hybrid Asymmetric Linear Higgs Factory (HALHF) proposes a shorter and cheaper design for a future Higgs factory. It reaches $\sqrt{s} = 250$ GeV using a 500 GeV electron beam accelerated by an electron-driven plasma wakefield and a conventionally accelerated 31 GeV positron beam. Assuming the plasma-acceleration R&D challenges are solved in a timely manner, the asymmetry of the collisions brings additional challenges for the detector and the physics analyses, arising from forward-boosted topologies and beam backgrounds. This contribution will detail the impact of the beam parameters on beam-induced backgrounds, and provide a first look at which modifications with respect to, e.g., the ILD could improve the physics performance at such a facility. The studies will be benchmarked against some flagship Higgs-factory analyses for comparison.
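As a back-of-the-envelope consistency check of the quoted beam energies (illustrative arithmetic, not taken from the HALHF documents): for asymmetric beams of energies $E_{e^-}$ and $E_{e^+}$,

```latex
\sqrt{s} = 2\sqrt{E_{e^-} E_{e^+}}
         = 2\sqrt{500\,\text{GeV} \times 31\,\text{GeV}}
         \approx 249\,\text{GeV} \approx 250\,\text{GeV},
\qquad
\gamma_{\rm CM} = \frac{E_{e^-} + E_{e^+}}{\sqrt{s}} \approx 2.1,
```

so the collision frame is boosted along the electron direction with $\gamma \approx 2$, which is the origin of the forward-boosted event topologies and the associated detector challenges.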
A 10 TeV muon collider is the ideal machine to explore the energy frontier. In addition to producing large samples of Standard Model particles, it has the potential to create new, possibly massive states, enabling a broad physics program that includes direct and indirect searches for new physics, precise Standard Model measurements in an unexplored energy regime, and significant advancements in the Higgs sector.
The fulfillment of such physics potential lies in the detector's ability to reconstruct physics objects and measure their properties over a wide range of momenta at high levels of beam-induced background. At 10 TeV collisions, the transverse momenta of Standard Model particles are relatively low, while new heavy states are expected to decay into high-momentum central physics objects.
This contribution presents a possible detector design and the reconstruction performance of the main physics objects using a detailed detector simulation that includes the beam-induced background.
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment under construction. The JUNO detector requires an unprecedented energy resolution of 3% at 1 MeV. It is composed of the central liquid scintillator detector, the water Cherenkov detector and the top tracker. The central detector is a 35.4 m diameter acrylic vessel supported by a stainless-steel structure, containing 20 kton of liquid scintillator. Scintillation photons are detected by 17612 20-inch PMTs plus 25600 3-inch PMTs. The water Cherenkov detector is a cylindrical water pool containing 35 kton of ultra-pure water and equipped with 2400 20-inch PMTs. The top tracker reconstructs cosmic-ray muon tracks. In addition, a calibration system will calibrate the detector to achieve better than 1% energy linearity. Installation of the JUNO detector is currently underway and will be finished this year. In this talk, a detailed introduction to the JUNO detector and its installation status will be presented.
I present recent efforts to tame the algebraic complexity of two-loop five-point scattering amplitudes in the spinor-helicity formalism. These amplitudes are required, for instance, to obtain next-to-next-to-leading-order predictions for the production of three jets or of a massive vector boson with two jets at the Large Hadron Collider. I review the method of numerical generalized unitarity and the integrand decomposition technique employed to generate finite-field or p-adic samples of the amplitudes. I then focus on the study of the analytic properties of the rational coefficients and their reconstruction from said numerical samples. I show how to exploit correlations among codimension-one residues to simplify the calculation. I touch upon various interdisciplinary aspects, including elements of number theory, computational algebraic geometry, constraint programming, memoization, and GPU acceleration.
Recent developments on Feynman integrals and string amplitudes greatly benefitted from multiple polylogarithms and their elliptic analogues — iterated integrals on the sphere and the torus, respectively. In this talk, I will review the Brown-Levin construction of elliptic polylogarithms and propose a generalization to Riemann surfaces of arbitrary genus. In particular, iterated integrals on a higher-genus surface will be derived from a flat connection. The integration kernels of our flat connection consist of modular tensors, built from convolutions of Arakelov Green functions and their derivatives with holomorphic Abelian differentials. At genus one, these convolutions reproduce the Kronecker-Eisenstein kernels of elliptic polylogarithms and modular graph forms.
We consider the 5-mass kite family of self-energy Feynman integrals and present a systematic approach for putting its associated differential equation into a convenient form (also called the epsilon or canonical form).
We show that this is most easily achieved by making a change of variables from the kinematic space to the function space of two tori with punctures.
We demonstrate how the locations of relevant punctures on these tori, which are required to parametrize the full image of the kinematic space onto this moduli space, can be extracted from integrals over the solution of homogeneous differential equations (also called maximal cuts).
A boundary value is provided to systematically solve the differential equation in terms of iterated integrals over so-called Kronecker-Eisenstein coefficients -- the equivalents of rational functions on a torus.
In the context of high-energy particle physics, a reliable theory-experiment confrontation requires precise theoretical predictions. This translates into accessing higher perturbative orders, and in pursuing this objective we inevitably face complicated multiloop Feynman integrals. There are serious bottlenecks to computing them with classical tools: the time to explore novel technologies has come. In this work, we study the implementation of quantum algorithms to optimize the integrands of scattering amplitudes. Our approach relies on the manifestly causal loop-tree duality (LTD), which recasts the loop integrand into phase-space integrals and avoids spurious non-physical singularities. We then codify this information in such a way that a quantum computer can understand the problem, and build Hamiltonians whose ground states are directly related to the causal representation. Promising results for generic families of multiloop topologies are presented.
In recent years, research studies in high-energy physics have confirmed the creation of strongly interacting quark-gluon plasma (sQGP) in ultra-relativistic nucleus-nucleus collisions. NA61/SHINE at CERN SPS investigates hadronic matter properties by varying collision energy (ranging from 5 GeV to 17 GeV) and systems (such as p+p, p+Pb, Be+Be, Ar+Sc, Xe+La, Pb+Pb). Utilizing femtoscopic correlations, we can unveil the space-time structure of the hadron emitting source.
Our focus is on femtoscopic correlations in small to intermediate systems, comparing measurements with symmetric Lévy source calculations to explore the relation of the Lévy source parameters to the average pair transverse mass. Of particular significance is the Lévy exponent $\alpha$, which characterizes the source's shape and may hold connections to the critical exponent $\eta$ near the critical point. By measuring $\alpha$, we therefore aim to understand and locate the critical endpoint on the QCD phase diagram.
Femtoscopy is a unique tool to investigate the space-time geometry of the matter created in ultra-relativistic collisions. If the probability density distribution of hadron emission is parametrized, then the dependence of its parameters on particle momentum, collision energy, and collision geometry can be given. In recent years, several measurements have come to light that indicate the adequacy of assuming a Lévy-stable shape for the mentioned distribution. In parallel, several new phenomenological developments appeared, aiding the interpretation of the experimental results or providing tools for the measurements. In this talk, we discuss important aspects of femtoscopy with Lévy sources in light of some of these advances, including phenomenological and experimental ones.
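A minimal numeric sketch of the Lévy-stable parametrization underlying these analyses (an illustration of the commonly used functional form, not the NA61/SHINE or phenomenology fit code): for a symmetric Lévy source the two-particle Bose-Einstein correlation function is usually written as $C(q) = 1 + \lambda\,e^{-(qR)^{\alpha}}$, with $\alpha = 2$ recovering a Gaussian source and $\alpha = 1$ a Cauchy one.

```python
import numpy as np

def levy_correlation(q, lam, R, alpha):
    """Bose-Einstein correlation function for a symmetric Levy source:
    C(q) = 1 + lambda * exp(-(q*R)**alpha)."""
    return 1.0 + lam * np.exp(-np.abs(q * R) ** alpha)

# Illustrative parameter values (lambda, R in fm/GeV^-1 units, alpha).
q = np.linspace(0.0, 0.3, 7)                      # relative momentum in GeV/c
c_gauss = levy_correlation(q, lam=1.0, R=6.0, alpha=2.0)
c_levy = levy_correlation(q, lam=1.0, R=6.0, alpha=1.2)

# Both reach the intercept 1 + lambda at q = 0; the Levy case (alpha < 2)
# falls off more slowly at large q, i.e. it has the longer tail.
print(c_gauss[0], c_levy[0])                      # -> 2.0 2.0
```

Fits of measured correlation functions to this form yield the parameters $(\lambda, R, \alpha)$ whose transverse-mass dependence the preceding abstracts discuss.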
Space–time properties of quark--gluon plasma (QGP), a state of matter with unbound partons produced in heavy-ion collisions, can be studied using femtoscopic correlations of particle pairs emitted after the hadronization.
In this talk, results of 1D and 3D femtoscopic analyses of identical charged kaon pairs measured by ALICE are reported in p--Pb and Pb--Pb collisions at $\sqrt{s_{{\rm NN}}}$ = 5.02 TeV. The multiplicity dependences of the 1D radii, compared to other collision systems at different energies, disfavor models suggesting a similar evolution for matter created in small and large collision systems. The obtained 3D radii allowed the extraction of maximal emission times for kaons over a wide centrality range. It will be shown that a new 1D and 3D femtoscopic analysis of identical proton pairs in Pb--Pb collisions at $\sqrt{s_{{\rm NN}}}$ = 5.36 TeV, recently collected by ALICE, can provide further understanding of the physical processes occurring in heavy-ion collisions.
Heavy quarks are produced in hard partonic scatterings at the very early stage of heavy-ion collisions and experience the whole evolution of the quark-gluon plasma medium. Two-particle femtoscopic correlations at low relative momentum are sensitive to final-state interactions and to the space-time extent of the region from which the correlated particles are emitted. Correlation studies between charmed mesons and identified charged hadrons can shed light on their interactions in the hadronic phase and on the interaction of charm quarks with the medium.
We will report the measurement of femtoscopic correlations between $D^0$ and charged hadrons at mid-rapidity in Au+Au collisions at ${\sqrt{s_{NN}}}$ = 200 GeV by the STAR experiment. $D^0$ mesons are reconstructed via the $K^{-}{\pi}^{+}$ decay channel using topological criteria enabled by the Heavy Flavor Tracker. We will compare the experimental data with available theoretical models to discuss their physics implications.
The production yield of (hyper)nuclei is commonly described using two conceptually different models: statistical hadronization (SHM) or coalescence. This talk will present the elliptic flow measurements ($v_{2}$) of $\mathrm{^{3} He}$ and $\mathrm{^{3}_{\Lambda} H}$ at LHC energies using the large Pb-Pb data sample collected by ALICE during Run 3 of the LHC. Results will be compared with the flow measurements of their nucleon constituents to test the baryon number scaling expected from the coalescence production mechanism. Furthermore, in the presence of elliptic flow, $\mathrm{^{3}_{\Lambda} H}$ are expected to be polarized with respect to the beam direction. The first polarization measurement of $\mathrm{^{3}_{\Lambda} H}$ will be presented and exploited to determine the $\mathrm{^{3}_{\Lambda} H}$ spin, an unknown parameter in theory calculations.
We augment the conventional $T$-$\mu_B$ planar phase diagram for QCD matter by extending it to a multi-dimensional domain spanned by temperature $T$, baryon chemical potential $\mu_B$, external magnetic field $B$ and angular velocity $\omega$. This is relevant for peripheral heavy-ion collisions or astrophysical systems where $B$ and $\omega$ are non-zero. Using two independent approaches, one based on a rapid rise in the entropy density and another on a dip in the speed of sound, we identify deconfinement in the framework of a modified statistical hadronization model. We find that the deconfinement temperature $T_C(\mu_B,~\omega,~eB)$ decreases nearly monotonically with increasing $\mu_B,~\omega$ and $eB$, with the most prominent drop (by nearly $40$ to $50$ MeV) in $T_C$ occurring when all three quasi-control (collision energy and impact parameter dependent) parameters are tuned simultaneously to finite values that are achievable in present and upcoming heavy-ion colliders.
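For reference, the two deconfinement criteria mentioned above can be expressed in standard thermodynamic notation (a sketch; the equation of state is supplied by the modified statistical hadronization model):

$$ s=\frac{\partial p}{\partial T}\bigg|_{\mu_B}, \qquad c_s^2=\frac{\partial p}{\partial \epsilon}, $$

with $T_C$ identified from a rapid rise of the entropy density $s$ and a dip of the squared speed of sound $c_s^2$.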
Originally motivated by the generation of (Majorana) neutrino masses, the Type-II Seesaw Model also has a rich extended Higgs sector with, if accessible at the LHC, a distinctive phenomenology of neutral, charged and doubly-charged states. The goal of this work is to present an exhaustive phenomenological study of the most promising production and decay channels of pair-produced or associated scalars, decaying to gauge bosons or in cascades. These decays can be studied within the LHC energy reach by comparing cutflow results of different multi-lepton final states. The work is a collaboration between ATLAS experimentalists and theorists, in continuation of an endeavor that led to previously published ATLAS analyses searching for (doubly-)charged Higgs bosons (Eur. Phys. J. C 79 (2019) 58 [arXiv:1808.01899 [hep-ex]] and JHEP 06 (2021) 146 [arXiv:2101.11961 [hep-ex]]), aiming at proposals for future experimental searches.
We present searches from the CMS experiment, performed with data collected during LHC Run 2 at a centre-of-mass energy of 13 TeV, for rare Higgs boson decays into light pseudoscalars. A variety of final states are explored, probing both boosted and resolved topologies.
Searches for axion-like particles (ALPs) in Higgs boson decays, as well as searches for ALP production in association with two top quarks, are presented, using LHC collision data at 13 TeV collected by the ATLAS experiment in Run 2. The searches cover ALP masses below the Z boson mass. Novel reconstruction and identification techniques used in these searches are described.
Decays of Higgs bosons produce pairs of vector bosons in highly entangled states, near-perfect Bell states. In the language of quantum information theory, the pair of spin-1 W and Z bosons is a bipartite system of two qutrits. The chiral decays of the W and Z permit measurement of the full bipartite spin-density matrix, allowing the LHC experiments to perform quantum state tomography, entanglement measurements, and perhaps even Bell inequality tests.
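In a standard parametrization (sketched here; $\lambda_a$ denote the Gell-Mann matrices), the bipartite qutrit spin-density matrix takes the form

$$ \rho=\frac{1}{9}\left[\mathbb{1}\otimes\mathbb{1}+\sum_{a=1}^{8}f_a\,\lambda_a\otimes\mathbb{1}+\sum_{b=1}^{8}g_b\,\mathbb{1}\otimes\lambda_b+\sum_{a,b=1}^{8}h_{ab}\,\lambda_a\otimes\lambda_b\right], $$

whose real coefficients $f_a$, $g_b$ and $h_{ab}$ are what quantum state tomography from the chiral decay distributions aims to determine.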
In the framework of the Type-I two-Higgs-doublet model (2HDM), we investigate the scope of the LHC in accessing the process $gg\to H \to hh\to b\bar b\tau\tau$ by performing a Monte Carlo (MC) analysis aimed at extracting this signal from the SM backgrounds, in the presence of a dedicated trigger choice and kinematical selection. We show that some sensitivity to this channel exists already at Run 3 of the LHC, while the High-Luminosity LHC (HL-LHC) will be able to either confirm or disprove this theoretical scenario over sizable regions of its parameter space.
While the physics program for the future Higgs factory focuses on measurements of the 125 GeV Higgs boson, production of new exotic light scalars is still not excluded by the existing experimental data, provided their coupling to the gauge bosons is sufficiently suppressed. We present prospects for discovering an extra scalar boson produced in association with a $Z$ boson at the ILC running at 250 GeV. Based on a full simulation of the International Large Detector (ILD), a decay-mode-independent search for the new scalar is presented, exploiting the recoil of the scalar against a $Z$ boson decaying into a pair of muons. Also presented are prospects for light scalar observation in selected decay channels, where higher sensitivity can be reached with the use of hadronic $Z$ boson decays.
This work is carried out in the framework of the ILD concept group as a contribution to the focus topic of the ECFA e$^+$e$^-$ Higgs/EW/Top factory study.
https://indico.fnal.gov/event/63898/
The NP06/ENUBET experiment concluded its ERC-funded R&D program, demonstrating that the monitoring of charged leptons from meson decays in an instrumented decay tunnel can constrain the systematics on the resulting neutrino flux to 1%, opening the way to a cross-section measurement with unprecedented precision. The two milestones of this phase, the end-to-end simulation of a site-independent beamline optimized for the DUNE energy range and the test-beam characterization of a large-scale prototype of the tunnel instrumentation, will be discussed.
We will also present studies for a site-dependent implementation at CERN carried out in the framework of Physics Beyond Colliders. This work is based on a more efficient version of the beamline, able to cover the HK energy region as well, and will include radioprotection and civil engineering studies, with the goal of proposing a cross-section experiment in the North Area with the two ProtoDUNEs as neutrino detectors, to be run after CERN LS3.
The Deep Underground Neutrino Experiment (DUNE) is a future long-baseline neutrino oscillation experiment featuring a 70 kt liquid argon (LAr) far detector. The near detector complex, situated at Fermilab, includes ND-LAr, a LAr detector that is critical for constraining systematic uncertainties via in situ measurements to enable precision studies of neutrino oscillations. Challenging event pile-up from the world's most intense 1.2 MW neutrino beam will be mitigated by a combination of a modularised detector approach and state-of-the-art readout technologies. True 3D pixel-based charge readout combined with the high timing resolution of the light readout will eliminate ambiguities that would otherwise arise in conventional LAr detectors. This talk will describe the novel design of the detector and its subsystems, highlighting key performance results from single-module commissioning of the ND-LAr 2x2 Demonstrator, an ND-LAr prototype that will record the first DUNE neutrino data.
The SoLAr collaboration proposes to use the liquid argon time projection chamber (LArTPC) technology to detect MeV-scale neutrinos, specifically to search for solar neutrinos, at the Boulby Underground Laboratory in the United Kingdom. SoLAr's innovative approach combines the light and charge readout of LArTPCs onto a combined dual readout anode plane, allowing for better positional resolution in light detection and combined light and charge calorimetry. Two small-scale prototype detectors were built and operated at the University of Bern in 2022 and 2023. Furthermore, simulations have been developed on the performance of various tonne-scale dual readout geometries. The contribution will cover the SoLAr detector concept, preliminary simulations, and the results from the two prototype detectors using cosmic-ray muons.
The CLOUD collaboration is pioneering the first fundamental research reactor antineutrino experiment using the novel LiquidO technology for event-wise antimatter tagging. CLOUD’s program is the byproduct of the AntiMatter-OTech EIC/UKRI-funded project focusing on industrial reactor innovation. The experimental setup comprises an up to 10 tonne detector, filled with an opaque scintillator and crossed by a dense grid of wavelength-shifting fibres. The detector will be located at EDF-Chooz’s new “ultra-near” site, ~35 m from the core of one of the most powerful European nuclear plants, with minimal overburden. Detecting of order 10,000 antineutrinos daily with high (≥100) signal-to-background discrimination, CLOUD aims for the highest precision on the absolute flux, along with explorations beyond the Standard Model. Subsequent phases will exploit metal-doped opaque scintillators for further detection demonstrations, including the first attempt at surface detection of solar neutrinos.
The potential for a new Europe-based flagship neutrino experiment opens with the dismantling of the EDF Chooz-A nuclear reactor complex (up to 50,000 m3 of underground volume), which would host the SuperChooz experiment. The new site is ~1 km from the N4 nuclear reactors of EDF Chooz-B. This shallow location is expected to be viable thanks to the novel LiquidO technology, heralding the detection of both reactor and solar neutrinos with unprecedented active background rejection. The physics programme encompasses some of the world's most precise measurements (i.e. θ13⊕Δm2 and θ12⊕δm2) while probing, with unique discovery potential, a few of the most insightful building-block symmetries of the Standard Model. In late 2022, CNRS and EDF agreed on a technical feasibility study, called the "SuperChooz Pathfinder" era, in which the AntiMatter-OTech project, funded by the EU-EIC and UKRI, provides the technological demonstrator and physics explorations, led by the CLOUD collaboration.
Next-generation long-baseline neutrino experiments require precision measurements of neutrino interactions in near detectors. The Intermediate Water Cherenkov Detector (IWCD) will operate as a near detector for Hyper-K, and a similarly sized near detector is considered for ESSnuSB. The Water Cherenkov Test Experiment (WCTE) is a 50-ton test experiment that will operate in CERN's recently upgraded T9 test beam line. WCTE will be used to test the photon detectors, calibration techniques and event reconstruction algorithms necessary to realize precision measurements in water Cherenkov detectors with particle energies up to 1 GeV. WCTE will also allow for the measurement of physics processes important for modelling neutrino interactions in water Cherenkov detectors, such as pion absorption and scattering, lepton scattering and secondary neutron production. We present the status and plans of the WCTE and discuss the potential impact of its measurements.
The Jinping Neutrino Experiment (JNE), situated in the world's deepest underground laboratory, the China Jinping Underground Laboratory (CJPL), conducts research on solar neutrinos, geo-neutrinos, supernova neutrinos, and neutrinoless double beta decay. The Jinping Neutrino one-ton prototype, located in CJPL-I, has completed measurements of cosmic rays and background. Next, JNE plans to build a multi-hundred-ton neutrino detector in CJPL-II by the end of 2026. Using simulations, we have optimized the detector's geometry and finished the structural design. The excavation of the foundation pit in the D2 Hall of CJPL-II is completed. The detector will use new 8-inch MCP-PMTs, which are undergoing tests; a self-developed ADC has been tested on the one-ton prototype. Oil- and water-based slow liquid scintillators (SLSs) have been developed. We have also developed reconstruction algorithms for SLSs, enabling particle identification of electrons, gamma rays, and protons in the several-MeV energy range.
During Runs 1 and 2 of the LHC, the ALICE Muon Spectrometer (MS) produced many results at forward pseudorapidities (2.5<$\eta$<4) and down to $p_{\rm T}$=0, mainly on quarkonia and open heavy flavors. However, the frontal absorber of the MS prevented the separation of charm and beauty contributions because of the lack of spatial resolution in the interaction-point region. To remove this limitation, a new tracker, the Muon Forward Tracker (MFT), has been installed in front of the frontal absorber. Covering almost the full acceptance of the MS, the MFT is composed of 936 high-performance pixel sensors (ALPIDE). In addition, the front-end and readout electronics of the MS have been upgraded to cope with the increase of the event rate from about 10 kHz to 50 kHz in Pb-Pb collisions.
After an overview of the design of these upgrades, this contribution will focus on the performance of the muon detection in terms of data taking, track reconstruction and measurement of displaced vertices.
The LHCb detector, a single-arm forward spectrometer designed for the investigation of heavy-flavor physics at the Large Hadron Collider (LHC), features one of the world’s largest and most radiation-exposed muon detectors. Throughout Runs 1 and 2 of the LHC, operating at an instantaneous luminosity of 4x10^32 cm^-2 s^-1, this detector has exhibited remarkable performance, with tracking inefficiencies at the level of O(1%). Following a preliminary upgrade targeting off-detector and control electronics, a second upgrade has been proposed to maximize the flavor-physics potential during the HL-LHC period. However, the higher instantaneous luminosity in Run 5 makes it necessary to redesign the muon detector to preserve its high detection capabilities. Different sub-detector technologies are being considered to cope with the wide difference in particle rates between the innermost region and the outer one. The state of the art of the muon detector design for the LHCb Upgrade II is presented.
To withstand the challenging conditions of increased luminosity and higher pileup expected during the high-luminosity LHC (HL-LHC), the muon spectrometer of the CMS experiment will undergo specific upgrades targeting both the electronics and detectors to cope with the new challenging data-taking conditions and to improve the present tracking and triggering capabilities. The upgrade of the electronics will target the Drift Tubes (DT) in the barrel, Cathode Strip Chambers (CSC) in the end-caps and Resistive Plate Chambers (RPC) in the barrel and end-caps of the present Muon system. For the detector upgrade, the deployment of new stations in the end-caps is planned, where the background rate is expected to be higher. These upgrades are based on triple gas electron multiplier (GEM) and improved RPC (iRPC) technologies, featuring improved time and spatial resolution and enhanced rate capability. The presentation will give an overview of the Muon System upgrades, with the ongoing activities and plans.
The CMS Muon system is undergoing significant upgrades for High-Luminosity LHC operation, including the installation of the Muon Endcap 0 (ME0) detector. ME0 is a 6-layer station, scheduled for production starting in 2024, that will expand the geometrical acceptance for muons in the pseudorapidity range 2.03<|η|<2.8. Comprising 18 chambers per endcap, each housing 6 triple-GEM detectors, ME0 enhances muon measurement by adding six hits per track, crucial for robust track reconstruction at the first trigger level. Production and quality control are distributed across various sites to expedite the process. Lessons from prior upgrades inform the design improvements for ME0, aligned with the overall objectives of strengthening muon triggering and reconstruction capabilities. This presentation provides an overview of the ME0 upgrade and its current status.
In view of the challenging data-taking conditions for CMS in HL-LHC collisions, an extensive upgrade is underway for the CMS Muon System to ensure its optimal performance in muon triggering and reconstruction. The RPCs, as dedicated muon detectors, will provide relevant timing information, profiting from their time resolution to secure a sub-bunch-crossing event timestamp. To meet the requirements of LHC Phase-2, the RPC system will be expanded up to 2.4 in pseudorapidity. The forward Muon system's upcoming RE3/1 and RE4/1 stations will feature improved RPCs (iRPCs). Distinguished by a unique design and geometry, including a 2D strip readout, these iRPCs represent a significant advancement over the current RPC system. The enhancements include thinner electrodes, a narrower 1.4 mm gas gap, and an improved front-end board (FEB) allowing a 30 fC threshold. At the end of 2023, two iRPC chambers were installed in the CMS detector. This talk provides a full summary of the iRPC project.
Resistive Plate Chambers are used in the ATLAS experiment for triggering muons in the barrel region. These detectors use a Freon-based gas mixture containing C2H2F4 and SF6, greenhouse gases with high global warming potential. To reduce greenhouse gas emissions and cost, it is crucial to search for new environmentally friendly gas mixtures. In August 2023, at the end of the proton-proton data-taking campaign, the ATLAS collaboration decided to replace the standard gas mixture (94.7% C2H2F4, 5.0% i-C4H10, 0.3% SF6) with a new CO2-based gas mixture: 64% C2H2F4, 30% CO2, 5.0% i-C4H10, 1% SF6. The performance of the RPC detectors with the new gas mixture will be presented, with particular emphasis on detector efficiency, cluster size and timing performance, as well as the efficiency of the L1 Muon Barrel trigger system.
RPC detectors play a crucial role in triggering events with muons in the ATLAS central region; the system is facing a significant upgrade in view of the HL-LHC program. In the next few years, 306 triplets of new-generation RPCs with a 1 mm gas gap (instead of 2 mm) will be installed in the innermost region of the ATLAS Muon Barrel Spectrometer, increasing the number of tracking layers from 6 to 9, doubling the trigger lever arm and increasing the coverage. Innovative front-end electronics will allow the RPCs to operate with an order of magnitude lower average charge. Both sides of the RPCs are read out by strip panels; the second coordinate is reconstructed from the difference in signal drift times at the opposite ends of the detector. The expected time resolution is approximately 300 ps; the possibility of a stand-alone time-of-flight measurement will have a huge impact on ATLAS searches for long-lived particles. An overview and the present status of the ATLAS RPC Phase II project will be presented.
The current RPC system of the ATLAS Muon Spectrometer is undergoing a major upgrade, with the installation of approximately 1000 RPC detector units of new generation in the innermost barrel layer. The goal of the project is to increase the detector coverage and improve the trigger robustness and efficiency. The Italian collaboration is taking care of the construction and test of the chambers located in the large sectors of the ATLAS barrel (BIL). Here we present the state of the art of the production, certification and logistics of the BIL chambers. In particular, we describe the protocols defined and the instrumentation created for the certification of gas volumes at the Italian production factory, for the construction and certification of the read-out panels in Cosenza and for the assembly and certification with cosmic rays of the detectors at CERN. The certification results of the components produced are analyzed and discussed.
Recent breakthroughs in charge-parity violation (CPV) and lifetime measurements presented by the CMS experiment are reported. The measurements use 13 TeV pp collision data collected by the CMS experiment at the LHC.
ATLAS results on weak decays of b hadrons are presented, including studies of the rare decay $B^0_{(s)}\to\mu^+\mu^-$, precision CP violation measurements with the $B^0_s\to J/\psi\phi$ decay, as well as a $B^0$ meson lifetime measurement.
We propose a set of new optimized observables using penguin-mediated decays together with their CP-conjugate partners that are substantially cleaner than the corresponding branching ratios, which are plagued by large end-point divergences. We find that the dominant contribution to the uncertainties of these observables stems from the corresponding form factors. The Standard Model estimates of some of these observables exhibit deviations from the corresponding experimental values at greater than the 2 sigma level. The pattern of deviations with respect to these observables, as well as the individual branching ratios, suggests that a possible explanation might be new physics in both $b\to s$ and $b\to d$ transitions. We find that, taken one at a time, only the Wilson coefficients $C_{4d,s}^{NP}$ and $C_{8gd,s}^{NP}$ can potentially satisfy all the current experimental data on the branching ratios as well as the optimized observables for vector-vector and pseudoscalar-pseudoscalar final states.
The latest time-dependent CP violation measurements using beauty to open charm decays from LHCb are presented. These decays provide sensitivity to important CKM parameters such as the angles beta and gamma from the unitarity triangle.
The Belle$~$II experiment has collected a 362 fb$^{-1}$ sample of $e^+e^-\to B\bar{B}$ decays at the $\Upsilon(4S)$ resonance. The asymmetric-energy SuperKEKB collider provides a boost to the $B$ mesons in the laboratory frame, enabling measurements of time-dependent $C\!P$ violation. We present measurements of both time-dependent and direct $C\!P$ violation in hadronic $B$ decays. Among the new results, we measure $C\!P$-violating parameters related to the determination of the least well-known angle of the unitarity triangle, $\phi_2$ (also known as $\alpha$), using the decays $B^0\to\rho^+\rho^-$ and $B^0\to \pi^0\pi^0$. In addition, the penguin-sensitive $B^0\to J/\psi\pi^0$ decay is studied; the results from this mode constrain the systematic effects related to the determination of the unitarity-triangle angle $\phi_1$ (also known as $\beta$).
Measuring the mixing phases of the $B^0$ and $B_s$ mesons is very important to validate the CP violation paradigm of the Standard Model and to search for new physics beyond it. Golden modes to measure these quantities are those governed by tree-level $b\to c\bar{c}q$ transitions, which allow precise and theoretically clean determinations. Moreover, measuring the mixing phases with modes receiving major contributions from penguin diagrams opens the possibility of revealing new physics appearing in the loops. In this presentation we show the most recent measurements of the $B^0$ and $B_s$ mixing phases at LHCb.
Flavour physics represents a unique test bench for the Standard Model (SM). New analyses performed at the LHC experiments and new results coming from Belle II are bringing unprecedented insights into CKM metrology and new results for rare decays. The CKM picture provides very precise SM predictions through global analyses. We present here the results of the latest global SM analysis performed by the UTfit collaboration including all the most updated inputs from experiments, lattice QCD and phenomenological calculations for Summer 2024.
Hard exclusive meson production and deeply virtual Compton scattering are common processes used to constrain generalised parton distributions. Measurements of exclusive reactions, notably exclusive $\pi^0$ production, were conducted at COMPASS in 2016 and 2017 using the 160 GeV/$c$ muon beam scattering off a 2.5~m long liquid hydrogen target equipped with a time-of-flight detector to record the recoiling target proton.
We will report preliminary results on the exclusive $\pi^0$ production cross-section from the 2016 data as a function of the squared four-momentum transfer $|t|$ and of the azimuthal angle $\phi$ between the scattering plane and the $\pi^0$ production plane. The results will provide further input to phenomenological models for constraining flavour-dependent generalised parton distributions, in particular chiral-odd ("transversity") ones.
We present a novel strategy based on the step-scaling technique to study non-perturbatively thermal QCD up to very high temperatures. As a first concrete application, we compute the meson and baryonic screening masses with a precision of a few per mille in the temperature range from approximately 1 GeV up to the electroweak scale in the theory with three massless quarks. We observe a clear splitting between the vector and the pseudoscalar meson screening masses up to the highest temperature investigated. A comparison with the high-temperature effective theory shows that the one-loop perturbative matching with QCD does not provide a satisfactory description of the non-perturbative data up to the highest temperature considered.
We investigate the role of elastic and inelastic (radiative) processes in the strongly interacting quark-gluon plasma (sQGP) within the effective dynamical quasi-particle model (DQPM), constructed for the description of non-perturbative QCD phenomena of the sQGP in line with the lattice QCD equation of state.
We present results for:
1) the energy, temperature and $\mu_B$ dependences of the total and differential radiative cross sections, compared to the corresponding elastic cross sections;
2) the transition rate and relaxation time of radiative versus elastic scatterings;
3) jet transport coefficients, such as the transverse momentum transfer squared per unit length as well as the energy loss per unit length, investigating their dependence on the temperature $T$ and on the momentum of the jet parton, depending on the choice of the strong coupling in the thermal, jet-parton and radiative vertices.
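For reference, the jet transport coefficient discussed in point 3) is conventionally defined (a sketch in standard notation) as

$$ \hat{q}=\frac{\langle q_\perp^2\rangle}{\lambda}=\frac{\mathrm{d}\langle q_\perp^2\rangle}{\mathrm{d}L}, $$

the average transverse momentum transfer squared per mean free path $\lambda$ (equivalently, per unit path length $L$), with the energy loss per unit length $\mathrm{d}E/\mathrm{d}x$ as its companion observable.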
We calculate differential distributions for diffractive dijet production in $e p \to e' p$ $jet$ $jet$ using off-diagonal unintegrated gluon distributions (GTMDs). Several GTMD models are used.
We concentrate on the contribution of exclusive $q \bar q$ dijets.
Results of our calculations are compared to H1 and ZEUS data. In general, except for one GTMD, our results lie below the HERA data. This is in contrast to recent results where the normalization was adjusted to some selected distributions and agreement with other observables was not checked. We conclude that the calculated cross sections are only a small part of the measured ones, which probably also contain processes with a pomeron remnant.
We also present azimuthal correlations between the sum and the difference of the dijet transverse momenta. The cuts on the transverse momenta of jets generate azimuthal correlations which can be misinterpreted.
We evaluate the cross section for diffractive bremsstrahlung of a single photon in the $pp \to pp \gamma$ reaction at high energies and at forward photon rapidities. Several differential distributions, for instance, the rapidity, the absolute value of the transverse momentum, and the energy of the photon, are presented. We also discuss azimuthal correlations between the outgoing particles. We compare the results of our complete approach, based on QFT and the tensor-Pomeron model, with two versions of soft-photon approximations in which the radiative amplitudes contain only the leading terms. We also discuss the possibility of a measurement of two-photon bremsstrahlung in the $pp \to pp \gamma \gamma$ reaction. In our calculations we impose a cut on the relative energy loss of the protons where measurements by the ATLAS Forward Proton detectors are possible. Our predictions can be verified by combined ATLAS and LHCf measurements. We also discuss the role of the $p p \to p p \pi^0$ background.
We present our results for azimuthal decorrelation of a vector boson and jet in proton-proton collisions. We show that using a recoil-free jet definition reduces the sensitivity to contamination from soft radiation and simplifies our calculation by eliminating non-global logarithms. Specifically, we consider the $p_T^n$ recombination scheme, as well as the $n\to \infty$ limit, known as the winner-take-all scheme. Such jet definitions also significantly simplify the calculation for a track-based measurement, which is preferred due to its superior angular resolution. We present a detailed discussion of the factorization in Soft-Collinear Effective Theory and resummation in the back-to-back limit up to next-to-next-to-leading logarithms. We conclude with phenomenological studies, finding an enhanced matching correction for high jet $p_T$ due to the electroweak collinear enhancement of a boson emission off di-jets.
Understanding the cancellation of ultraviolet and infrared singularities in perturbative quantum field theory is of central importance for the development and automation of various theoretical tools that make accurate predictions for observables at high-energy colliders. The loop-tree duality aims to provide an efficient solution by treating loop and tree-level contributions on an equal footing to achieve a local cancellation of singularities at the integrand level, and thus avoid dimensional regularisation. In this talk, we exploit the causal properties of scattering amplitudes in the loop-tree duality representation to present different applications to physical processes at higher orders.
Effective Field Theory (EFT) provides a universal language for testing beyond-the-Standard-Model physics at LHC scales. With increasing complexity and new sophisticated techniques, the sensitivity of these analyses has been significantly improved in recent years. In this talk, recent searches for anomalous couplings in the top quark sector and their combination with other EFT results from CMS will be presented. The new results significantly exceed previously achieved precision and prepare the path to exploring the full potential of LHC data.
The production of a single top quark $t$ in association with a $W$ and a $Z$ boson receives large contributions from beyond-the-standard-model (BSM) theories, particularly through the electroweak interaction of the top quark. This talk presents a study on the sensitivity of the $tWZ$ process to such effects in the form of effective field theory (EFT) operators. The study is based on the recently published results by the CMS experiment, which provided the first evidence for this process.
Additionally, new possible analysis strategies aimed at maximizing the sensitivity to EFT operators will be highlighted in order to exploit the full potential of the LHC.
We present theoretical results at approximate NNLO in QCD for top-quark pair-production total cross sections and top-quark differential distributions at the LHC in the SMEFT. These approximate results are obtained by adding higher-order soft gluon corrections to the complete NLO calculations. The higher-order corrections are large, and they reduce the scale uncertainties. These improved theoretical predictions can be used to set stronger bounds on top-quark QCD anomalous couplings.
This study explores fully leptonic WZ and WW production within the SMEFT framework at NLO in QCD, focusing on both CP-even and CP-odd triple gauge coupling dimension-six operators. We investigate the off-shell production processes and contrast our findings with those derived under the narrow-width approximation. Alongside the conventional kinematical observables, we examine polarisation-related observables and angular coefficients. Moreover, we also assess potential SMEFT effects on asymmetry observables. Furthermore, through a sensitivity analysis, we identify critical LHC observables that are particularly sensitive to SMEFT-induced modifications, thereby shedding light on potential avenues for new physics searches in diboson production at the LHC.
Accurate predictions for Standard Model and Beyond the Standard Model phenomena are fundamental to collider experiments. In this context, electroweak corrections, enhanced by Sudakov logarithms, emerge as the dominant higher-order effect at the TeV scale and beyond. We computed Sudakov EW corrections in the high-energy limit for the dimension-6 SMEFT operators that grow maximally with energy. In this talk, I will explore the phenomenology of the illustrative process of top-quark pair production at the LHC incorporating these operators. In particular, I will present the impact of the EW corrections on the tails of differential distributions, and address the limitations of a simple $k$-factor approach in accurately representing the underlying physics.
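The limitation of a flat $k$-factor mentioned above can be illustrated with a toy numerical sketch (entirely illustrative: the falling spectrum, the coupling value, and the one-loop Sudakov form below are assumptions, not the talk's calculation). Because the Sudakov suppression grows with the logarithm of the energy, a $k$-factor fixed at one scale increasingly overestimates the rate in the tail:

```python
# Toy illustration (not the talk's calculation): a flat k-factor fails
# when EW Sudakov corrections grow with energy in distribution tails.
import math

ALPHA_W = 0.034   # illustrative weak-coupling strength
M_TOP = 172.5     # GeV, top-quark mass

def lo_spectrum(e):
    """Toy falling LO differential cross section (arbitrary units)."""
    return 1.0 / e**4

def sudakov_factor(e):
    """Toy one-loop EW Sudakov suppression, growing with log^2(E/m_t)."""
    return 1.0 - ALPHA_W * math.log(e / M_TOP) ** 2

# Flat k-factor frozen at 500 GeV vs the energy-dependent correction.
k_flat = sudakov_factor(500.0)
for e in [500.0, 1000.0, 2000.0, 4000.0]:  # GeV, tail of a distribution
    exact = lo_spectrum(e) * sudakov_factor(e)
    approx = lo_spectrum(e) * k_flat
    # The ratio drifts below 1: the flat k-factor overestimates the tail.
    print(f"E = {e:6.0f} GeV: exact/flat = {exact / approx:.3f}")
```

The drift of the ratio away from unity at high energy is the qualitative effect the talk addresses with a full differential calculation.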
The Large Hadron-electron Collider (LHeC) and the Future Circular Collider in electron-hadron mode (FCC-eh) will make possible the study of DIS in the TeV regime, providing electron-proton collisions with instantaneous luminosities of $10^{34}$ cm$^{-2}$s$^{-1}$. In this talk we will review the opportunities for measuring standard and anomalous top-quark couplings, both to lighter quarks and to gauge bosons, flavour changing and flavour conserving, through single top quark and $t\bar t$ production. We will discuss studies in inclusive DIS of EW parameters such as the effective mixing angle, the gauge-boson masses, and the weak neutral- and charged-current couplings of the gauge bosons. We will also review the possibilities in direct $W$ and $Z$ production, and analyse the implications of a precise determination of parton densities at the LHeC or FCC-eh for EW measurements at hadronic colliders. Special emphasis is given to possibilities to empower $pp$ and $e^+e^-$ physics at the LHC and FCC.
https://indico.fnal.gov/event/63898/
https://indico.cern.ch/event/1387590/
Greetings by Director General for Higher Education, Science and Research, Ministry of Education, Youth, and Sports (Radka Wildova)
Remarks by Rector of the Czech Technical University in Prague (Vojtech Petracek)
Welcome from Deputy President, Czech Academy of Sciences (Jan Ridky)
Welcome from State Secretary of the Ministry of Foreign Affairs (Radek Rubes)
More details: https://indico.cern.ch/event/1291157/page/35115-discussion-lunch-education-and-outreach
Panellists: Fabiola Gianotti (CERN), Lia Merminga (Fermilab, USA), Yifang Wang (IHEP, China), Shoji Asai (KEK, Japan) and Dmitri Denisov (BNL, USA).
Particle physics, with its well-developed international cooperation and coordination, can serve as a good example to other branches of science of how to build cooperation on a global scale. Lab directors will introduce their facilities and discuss current activities and future plans in areas where Czech scientists participate in the experiments. A panel will then explore future directions and benefits of science with representatives from the Czech political and scientific communities.
Panellists: Fabiola Gianotti (CERN), Lia Merminga (FNAL), Yifang Wang (IHEP), and Shoji Asai (KEK)
You may suggest questions here: https://indico.cern.ch/event/1291157/manage/surveys/5595/
This year's ICHEP will feature a panel discussion on Future Colliders. While large-scale future collider projects offer groundbreaking physics opportunities, they also pose daunting challenges: advancing enabling technologies, securing funding, ensuring energy sustainability, and long-term strategic planning to foster a vibrant high-energy physics community that can realize its ambitions.