This is the ninth Colombian Meeting on High Energy Physics (9th ComHEP). We hope to bring together young and senior particle physicists from Colombia and abroad to discuss recent progress in particle physics, cosmology, and related areas. The program of the meeting will address a broad range of topics, divided into dedicated sessions on:
This year the conference will take place at Universidad de Nariño, in the city of Pasto.
Follow us on Twitter!
Scientific Organizing Committee
Very Special Linear Gravity (VSLG) is an alternative model for linearized gravity, featuring massive gravitons while still retaining two physical degrees of freedom. Recently, its gravitational period-decay dynamics has been calculated through effective field theory techniques. In this work, we aim to test this new model by a complete Bayesian analysis of the dataset of the PSR B1913+16 binary. We find a $95\%$ CL upper bound on the graviton mass $m_g$ of around $10^{-19}\,\mathrm{eV}$, while also obtaining a relevant discrepancy in the predicted value of the mass of one of the two companion stars. Finally, we discuss some potential repercussions of a non-zero graviton mass at the cosmological level.
Traditionally, gamma-ray bursts (GRBs) are associated with the collapse of massive stars or the collision of compact objects. However, our systematic search in the recalibrated Epeak-Eiso plane reveals GRBs that defy conventional classification. Recently, apparently long GRBs have shown associations with compact-star mergers and exhibit extended emission, leading to confusion in their classification. Through meticulous temporal and spectral analyses, we have delved into the characteristics of these events, providing insight into their elusive progenitors. Our investigation includes a thorough examination of temporal estimators, including the emission time, the spectral lag, and the isotropic energy, derived from Swift observations. Later phases of our research will explore the properties of the host galaxies and the surrounding media to provide a complete understanding of these puzzling phenomena. By challenging established paradigms, our study contributes significantly to the ongoing evolution of GRB classification and improves our understanding of the most powerful explosions in the Universe.
In the research project titled "Study of geodesics in the spacetime of Sagittarius A*-type black holes," we determined the trajectories of motion known as geodesics that arise when considering a system composed of two supermassive black holes (SMBHs). To this end, Roy Kerr's solution of Einstein's field equations was used and decomposed as a linear superposition of the Schwarzschild metric tensor and a perturbation tensor; the superposition of these two tensors was named the general metric of the system. Since this is a perturbative model rather than an exact solution of Einstein's field equations, spacetime intervals are determined for which the general metric of the system is a solution that correctly describes the topology of spacetime. Based on the general metric of the system, the differential equation characterizing the geodesic curves is parameterized, and a conserved quantity is determined from the variation of the metric along the coordinate basis; with these two ingredients, a system of differential equations is obtained whose solution is found using a linear model.
The solution obtained describes the orbit of the massive B0-2 V star known as S2. The trajectory $r(\phi)$ is composed of the superposition of three orders of approximation: $r^{(0)}(\phi)$ describes an elliptical orbit that converges to the solution obtained from Newton's equations; from it, the angular momentum of any particle moving around the black holes can be parameterized in terms of the semi-major axis and the eccentricity of the ellipse. Next, the first order of approximation $r^{(1)}(\phi)$ is obtained, a solution describing the precession of the elliptical orbits; in this case, a precession of 75 arcsec/year is determined. Finally, $r^{(2)}(\phi)$ allows the average energy per unit mass per orbit of the star S2 to be determined, whose value oscillates around $-2.9\times 10^{13}\,\mathrm{m^2/s^2}$.
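For the reader's orientation, the expansion just described can be summarized as follows (a sketch: only the well-known Newtonian leading order is written explicitly, while the explicit forms of $r^{(1)}$ and $r^{(2)}$ are derived in the work itself):

$$r(\phi) \simeq r^{(0)}(\phi) + r^{(1)}(\phi) + r^{(2)}(\phi), \qquad r^{(0)}(\phi) = \frac{a\,(1-e^{2})}{1+e\cos\phi},$$

with $a$ the semi-major axis and $e$ the eccentricity; at this order the angular momentum per unit mass obeys the Keplerian relation $\ell^{2} = GM\,a\,(1-e^{2})$, which is the parameterization referred to above.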
The motivation for this research arises from the importance of implementing volcanic muography in Nariño, Colombia, a region located in the Andes that hosts several volcanoes, among them Galeras Volcano, known for its constant activity and proximity to the city of Pasto. This work is grounded in the need to optimize a muon detection system. Muons, being able to traverse large amounts of rock, offer the possibility of obtaining valuable information about the internal structure of volcanoes. Through the optimization of the detector design, we seek to lay the groundwork for volcanic muography studies, representing a significant advance in the understanding of volcanic dynamics and the mitigation of risks associated with natural disasters.
In this research, a detailed model of a scintillation detector was developed through simulations in GEANT4 and its GODDeSS extension, using characteristics commonly employed in muography detectors, with the goal of finding an optimal configuration that maximizes light collection, based on the detector components and their response to muon interactions. The light yield was compared in two distinct configurations: the first employing a scintillator bar with an optical coating, and the second integrating a WLS optical fiber inside the scintillator bar. This simulation-based methodology not only provides a deep understanding of the physics involved, but also allows adjustments and improvements to the detector design.
The results of this study showed that integrating the fiber inside the scintillator bar achieves a better effective attenuation length and an increase in light collection of roughly a factor of three, which justifies including this component in the detector design.
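As an illustration of the attenuation-length comparison mentioned above (a sketch only: the fit model is the standard exponential light-attenuation law, and the photon counts below are placeholders, not the simulation's output):

```python
# Sketch: fit N(x) = N0 * exp(-x / lam) to light-collection counts
# recorded at several distances x along the bar, once per configuration.
# The count arrays are fake placeholders, not GEANT4/GODDeSS results.
import numpy as np
from scipy.optimize import curve_fit

def attenuation(x, N0, lam):
    return N0 * np.exp(-x / lam)

x_cm = np.array([10., 30., 50., 70., 90.])
counts_coating = np.array([980., 610., 390., 245., 155.])   # placeholder
counts_wls = np.array([2900., 2300., 1850., 1480., 1190.])  # placeholder

for label, counts in [("coating", counts_coating), ("WLS fiber", counts_wls)]:
    (N0, lam), _ = curve_fit(attenuation, x_cm, counts, p0=(counts[0], 50.))
    print(f"{label}: attenuation length ~ {lam:.0f} cm, N0 ~ {N0:.0f}")
```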
This research focuses on studying how neutrino properties are affected in a core-collapse supernova. The study will focus on determining the variation of the neutrino flux produced during the collapse due to its propagation through the material medium, calculating the neutrino flux expected upon arrival at Earth, and analyzing the changes in the mixing angles and the effective mass differences under different density conditions. The investigation will provide a fundamental understanding of how neutrino properties change in a region of high matter density. A theoretical and computational approach will be used, making use of the software provided by SNEWS and building on various models and calculations presented in previous studies.
The experimental results obtained over the last century provide evidence that neutrino masses are nonzero, but they do not allow the scale of these masses to be determined. Because of the influence of neutrinos on various cosmological observables, such as the matter density fluctuations in the universe, it is possible to obtain complementary information about their masses from these observables. This study sets an upper bound on the total neutrino mass. To do so, we use the software CAMB, an open-source tool that generates the power spectrum from the parameters of the cosmological model, in order to find the best fit of the total neutrino mass for the values of $H_0$ and $\sigma_8$.
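As an illustration of the workflow described (a sketch using CAMB's public Python wrapper, not the study's actual pipeline; all parameter values are placeholders):

```python
# Minimal sketch: scan the total neutrino mass with CAMB's Python
# wrapper and read off sigma_8 at z = 0, with H0 held fixed and
# illustrative LCDM values for the remaining parameters.
import camb

def sigma8_for_mnu(mnu_eV, H0=67.5):
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=H0, ombh2=0.022, omch2=0.122, mnu=mnu_eV)
    pars.InitPower.set_params(As=2.1e-9, ns=0.965)
    pars.set_matter_power(redshifts=[0.0], kmax=2.0)
    results = camb.get_results(pars)
    return results.get_sigma8_0()

for mnu in [0.06, 0.12, 0.24, 0.48]:
    print(f"sum m_nu = {mnu:.2f} eV  ->  sigma8 = {sigma8_for_mnu(mnu):.4f}")
```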
We propose a new and compact realization of singlet Dirac dark matter within the WIMP framework. Our model replaces the standard Z_2 stabilizing symmetry with a Z_6, and uses spontaneous symmetry breaking to generate the dark matter mass, resulting in a much simplified scenario for Dirac dark matter. Concretely, we extend the Standard Model (SM) with just two new particles, a Dirac fermion (the dark matter) and a real scalar, both charged under the Z_6 symmetry. After acquiring a vacuum expectation value, the scalar gives mass to the dark matter and mixes with the Higgs boson, providing the link between the dark sector and the SM particles. With only four free parameters, this new model is extremely simple and predictive. We study the dark matter density as a function of the model's free parameters and use a likelihood approach to determine its viable parameter space. Our results demonstrate that the dark matter mass can be as large as 6 TeV while remaining consistent with all known theoretical and experimental bounds. In addition, a large fraction of viable models turns out to lie within the sensitivity of future direct detection experiments, furnishing a promising way to test this appealing scenario.
Relaxing the constraints on kinetic mixing is possible if the dark photon can couple directly to Dirac neutrinos.
The parameter space of freeze-in dark matter through a light dark photon (``minimal freeze-in dark matter'') is currently being probed by direct detection experiments through electron and nuclear recoils. We show that dark matter production in this scenario is sensitive to the cosmic equation of state during reheating, from matter-like to kination-like. The main result is that the low-reheating scenario with reheating temperature $T_{\rm rh} \lesssim 1$~TeV is severely constrained by current experiments and can be completely probed up to $T_{\rm rh} \lesssim 10$~TeV by future experiments, leaving only two viable dark matter mass ranges, $0.03~{\rm MeV}\lesssim m_\chi \lesssim 1$~MeV and $m_\chi\gtrsim 10$~TeV.
In this presentation I will discuss the production of dark matter through gluon fusion in the Standard Model and its minimal supersymmetric extensions. Specifically, we will discuss the calculation of the differential cross section at leading order for this process.
The Generalized SU(2) Proca theory, despite its potential in modeling various physical phenomena, fails to accurately describe dark energy. Using the Green's function formalism, approximate solutions demonstrate that dark energy solutions exist independently of the cosmic epoch. However, integrating Proca SU(2) dark energy with matter and radiation introduces singularities during the transition from the matter epoch to the dark energy epoch, undermining the consistency of the model. Consequently, this theory cannot provide a reliable framework for modeling a dark energy-dominated universe.
In this talk we will first introduce the dynamical scotogenic model, which extends the usual radiative see-saw mechanism by one Z_2-even scalar singlet that spontaneously breaks the U(1) lepton number symmetry, and explain some details of its phenomenology, with emphasis on the scalar sector. Then, we explain how this model can produce neutrino masses compatible with experimental observations, as well as two possible dark matter candidates in the scalar and fermion sectors. Afterwards, we present a brief analysis of the DM relic density for both candidates, as well as the scattering cross section in comparison with experimental data from LUX-ZEPLIN and XENON1T. Finally, we explain how the model can induce observable collider signatures in both channels, and we present a production cross section analysis in the context of the LHC.
We present single- and multi-component scalar dark matter scenarios explored via effective operators up to dimension $6$. For this, we utilize the Mathematica code Sym2Int to generate the relevant operators of the Lagrangian up to the desired energy dimension. The operators are used as interaction inputs for the Mathematica package FeynRules to produce the model files necessary for the calculation of dark matter observables with the code micrOMEGAs. We first consider the prospect of generating the observed dark matter density for a single real or complex particle connected to the Standard Model through effective operators. We then consider a two-component scenario where the complex and real dark matter fields are connected through operators introduced by some particular $Z_{2n}$ symmetry.
The mechanism of hadron mass generation through the strong interactions of quantum chromodynamics (QCD) accounts for most matter in the visible universe. The pattern of its momentum dependence is reflected in the internal structure of mesons and baryons. In this connection, we provide a selective overview of the progress in the computation of hadron electromagnetic and transition form factors and the corresponding experimental efforts at the Thomas Jefferson National Accelerator Facility, the planned Electron-Ion Collider and other hadron physics laboratories, making comparisons with observations and predictions from other theoretical tools. We also discuss the implications of these efforts for tests of the celebrated Standard Model of particle physics, in particular the anomalous magnetic moment of the muon.
Light-front wave functions (LFWFs) of mesons can be derived from the projection of their Bethe-Salpeter wave functions onto the light front. We obtain the Poincaré-covariant wave functions within a functional approach to QCD, solving first the quark gap equation within a chiral-symmetry-preserving truncation scheme and then the Bethe-Salpeter equation of the mesons. With the LFWF in hand, we can derive the meson's parton distribution amplitude (PDA), transverse momentum distribution (TMD) and parton distribution function (PDF) for the light mesons, $D$ and $B$ mesons, as well as quarkonia. Last but not least, I will present recent progress on the calculation of elementary quark-fragmentation functions and their generalization to jet functions.
Lattice Quantum Chromodynamics (QCD) provides a non-perturbative, first-principles approach to QCD which has proven successful for the study of several physical quantities of interest, for example the energy spectrum of mesons and baryons. It is a crossroads of particle physics, applied mathematics and high-performance computing: the numerical simulations performed often require solving several linear systems to compute the quark propagator, involving sparse yet extremely large matrices which depend on random variables that need to be efficiently sampled from a non-trivial distribution dictated by the action of the theory of interest. Significant research has been focused on fast solvers for such linear systems on supercomputers, efficient sampling of the distributions, and improved actions that define the theory on a lattice, among many other topics of interest. State-of-the-art methods are being used to study exotic states of great theoretical and experimental interest, such as glueballs, hybrid mesons with gluonic excitations, or states with quantum numbers not allowed in the conventional quark model. To understand how this is done, this talk will focus on a particular area of lattice QCD: hadron spectroscopy. Here we calculate the spectrum of the energy eigenstates of the theory (mesons, baryons, etc.) by means of Monte Carlo averages over random variables which represent the gluon background. Starting with a short introduction to some widely used methods, including how to build creation operators for the different states, we will see how we can use and improve them to map out the low-lying meson spectrum while at the same time looking for the predicted-but-not-yet-found scalar glueball.
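To illustrate the spectroscopy step described above, here is a minimal sketch of the standard effective-mass analysis, run on a synthetic single-exponential correlator rather than real lattice data:

```python
# Sketch: given Monte Carlo samples of a two-point correlator C(t),
# the ground-state energy appears as a plateau in
# m_eff(t) = ln[C(t) / C(t+1)].
# The ensemble here is synthetic (one exponential plus noise), just to
# show the shape of the analysis, not actual lattice output.
import numpy as np

rng = np.random.default_rng(0)
T, m0, nconf = 32, 0.45, 500

t = np.arange(T)
# fake ensemble: C_i(t) = e^{-m0 t} (1 + 2% gaussian noise per config)
ensemble = np.exp(-m0 * t) * (1.0 + 0.02 * rng.standard_normal((nconf, T)))

C = ensemble.mean(axis=0)          # Monte Carlo average
m_eff = np.log(C[:-1] / C[1:])     # effective mass
print(m_eff[5:15])                 # plateau near m0 = 0.45
```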
Available all day during the breaks
A systematic study of the neutrino mass matrix $M_\nu$ with two texture zeros, under the assumption that neutrinos are Dirac particles, is carried out in detail. Our study is done without any approximation, first analytically and then numerically. Current neutrino oscillation data are used in our analysis. Phenomenological implications are studied.
We implement a model with a new non-universal $L_\mu - L_\tau$ gauge symmetry that contains a dark matter candidate and a scotogenic realization of a Majorana operator for neutrino masses, and we analyze its phenomenological and theoretical constraints.
In this work, we present a detailed analysis of the muon anomalous magnetic moment problem in the context of an extended theoretical model that adds a $U(1)_d$ symmetry group to the Standard Model of particle physics.
The magnetic moment is a fundamental, purely quantum parameter that measures how sensitive a particle's spin is to an external electromagnetic field. The relation between the magnetic moment and the spin is given by a proportionality factor called the gyromagnetic factor ($g$). Treating the Dirac equation "classically" (that is, considering the process at tree level), one finds that $g$ must equal 2. However, once the formalism of quantum field theory is introduced, all possible processes consistent with the initial and final states must be considered. New contributions to this value therefore appear (hence the name "anomalous"). Moreover, these new contributions are sensitive to the particles that exist in nature (including as-yet-unknown particles).
In recent years, subtle discrepancies have been observed between the experimental values of this observable (Muon g-2 collaboration) and the theoretical predictions (White Paper), which could be a hint of physics beyond the Standard Model. In addition, collaborations employing computational methods such as lattice QCD (BMW collaboration) or experimental measurements of scattering processes (CMD-3 collaboration) have reached results that conflict with the Standard Model prediction itself (White Paper), creating a theoretical ambiguity.
The $U(1)_d$ model is a simple extension of the Standard Model obtained by adding a new gauge field associated with a new gauge symmetry. This symmetry group has only a single generator, so it is similar to the $U(1)_Y$ group of the Standard Model. Furthermore, a scenario with a massive dark photon is assumed in which the known quarks and leptons carry no $U(1)_d$ charge; that is, there is no direct interaction between Standard Model fermions and this dark photon.
With this in mind, this work focused on the implications of an extended $U(1)_d$ model for the muon anomalous magnetic moment. That is, we performed a detailed calculation of the contribution associated with the new field arising from the addition of the $U(1)_d$ symmetry group, taking into account all the theoretical predictions of the Standard Model (White Paper, CMD-3 and BMW), and compared these results with the experimental measurement of the Muon g-2 collaboration. From this comparison, one can analyze how constrained the parameters of the new model are (or how viable it is) in order to reconcile the theoretical predictions with the experimental measurement.
The results of this research are expected to provide a deeper understanding of possible extensions of the Standard Model and their implications for particle physics, thus contributing to the advancement of knowledge in this field.
Keywords: anomalous magnetic moment, muon, Standard Model, symmetry group, Muon g-2.
We studied the Hadronic Light-by-Light (HLbL) contribution to the muon anomalous magnetic moment. Upcoming measurements will reduce the experimental uncertainty of this observable by a factor of four; the theoretical precision must therefore improve accordingly to fully harness this experimental breakthrough. For the HLbL contribution, this implies a study of the high-energy intermediate states that are neglected in dispersive estimates. We focus on the maximally symmetric high-energy regime and the quark-loop approximation of perturbation theory. Following the method of the OPE with background fields proposed by Bijnens et al. in 2019 and 2020, we confirm their results for the contributions to the muon $g-2$. For this we use an alternative computational method based on a reduction of the full quark-loop amplitude, instead of projecting onto a supposedly complete system of tensor structures motivated by first principles. Concerning the scalar coefficients, mass corrections have been obtained via hypergeometric representations of Mellin-Barnes integrals. With our technique, the completeness of such a kinematic-singularity/zero-free tensor decomposition of the HLbL amplitude is explicitly checked.
We present a canonical quantization of non-Abelian Chern-Simons theory in null-plane coordinates, using the Dirac procedure and the Faddeev-Jackiw formalism. The constraint structure that arises in null-plane coordinates is considered, and the gauge conditions are determined.
The ATLAS Open Data project provides a wide range of tools that students, professors and institutes around the world can use for training in experimental particle physics. Over recent years, the collaboration has released a set of analyses covering SM and BSM scenarios, using data and MC samples collected during Runs 1 and 2 at CERN (at center-of-mass energies of 8 TeV and 13 TeV). Currently, the collaboration is making a big effort to prepare the next data release, corresponding to a luminosity of 36 fb^-1, which will include several improvements over the previous releases (analyses, frameworks, programming tools, webpage, virtual machines, documentation, etc.) for the scientific community, for research and educational purposes. The idea of this talk is to show some of these implementations to young and senior researchers at ComHEP.
This research centers on musical analysis and computational applications, focused on the systematization of musical structures, specifically 6-4 chords. The work addresses the identification and classification of these chords using programming and data-analysis tools. In the early stages of the project, some technical limitations appeared with pre-existing music-analysis software such as jSymbolic, which led to the decision to develop code based on the music21 library in Python, allowing more effective handling of XML files and avoiding enharmonic problems. An algorithm was then developed that facilitates the identification and classification of 6-4 chords in a variety of scores, generating statistical results that contribute to a better understanding of their use and correct interpretation. This approach not only seeks to clarify the music theory related to 6-4 chords, but also establishes a basis for future research at the intersection of music and computation, promoting a more rigorous and systematic analysis of musical structures.
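A minimal sketch of the detection step described above, using the music21 library the abstract mentions (the input file name is hypothetical; a 6-4 chord is identified here as a triad in second inversion):

```python
# Sketch: scan a MusicXML score for six-four chords with music21.
from music21 import converter, chord

score = converter.parse("score.xml")   # hypothetical MusicXML input
six_four = []
for c in score.recurse().getElementsByClass(chord.Chord):
    # music21 reports second inversion (fifth in the bass) as inversion() == 2
    if c.isTriad() and c.inversion() == 2:
        six_four.append((c.measureNumber, c.pitchedCommonName))

print(f"found {len(six_four)} six-four chords")
for measure, name in six_four[:10]:
    print(measure, name)
```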
This talk is inspired by the one given last year by professor Jose David Ruiz on Art and Science. In it, I would like to address the topic of science communication, its importance, and the role that professors, researchers and students play in it: why it matters and which outreach projects exist at the national level, both in particle physics and in physics in general. I would also like to share my experience with my outreach group, El Divulgatorio, which this very month will hold a full program on particle physics to commemorate the 70th anniversary of CERN, together with professor Jose David Ruiz.
First, we present a summary of the current state of our Observatory. Secondly, I talk about the future of our Astronomical and Space Sciences Center, looking forward to establishing international links through agreements with important astronomical observatories around the world. Through our dedicated work we have managed to participate in several international projects and in scientific meetings in different places around the world. We have received the international code "H78" from the Minor Planet Center of the USA, and our data also appear in the Near Earth Objects Dynamic Site (NEODyS). We belong to the International Asteroid Warning Network (IAWN). We have also participated in simulated asteroid collisions with Earth, and in the international study of the asteroid Apophis. We are currently building the new Center for Astronomical and Space Sciences (a project approved and financed by the government of Colombia), which will have the following components: a professional observatory equipped with the largest telescope in Colombia (one meter in diameter), an amateur observatory so that children and adolescents in the region can begin to work from an early age in the fascinating field of scientific research, and a planetarium. This science center will begin operating in October 2025.
Meeting point: Universidad de Nariño
Our work centers on the investigation and analysis of new theoretical models in particle physics, using the software packages SARAH and SPheno as the main tools. SARAH, implemented in Mathematica, is known for its capabilities in the construction and analysis of models such as the Standard Model, its extensions, and new-physics models in general. SPheno enables precise numerical calculations in particle physics models to a high degree of accuracy. Our research focused on analyzing the Standard Model, a leptoquark model, and the scotogenic model by implementing them with the aforementioned computational packages. As a first step, we carried out calculations to evaluate the models, specifically the computation of masses and vertices and the generation of Feynman diagrams, for subsequent execution with SPheno.
In this talk, we present the latest results obtained by our group regarding theoretical predictions of electroweak observables, such as the HLbL contribution to the muon anomalous magnetic moment, the Higgs boson mass up to three-loop accuracy, and additional Higgs boson properties in both the Standard Model and its main supersymmetric extensions.
The exact formulation of quantum field theories for fundamental particles with spin $\frac{3}{2}$ represents a significant challenge in theoretical physics due to the inherent complexities in describing these systems. In particular, elastic pion-nucleon scattering involves intermediate states with spin $\frac{3}{2}$, corresponding to the $\Delta(1232)$ resonance, which underscores the importance of studying these fields. In this work, we perform a systematic analysis of the description of Rarita-Schwinger fields and the different parameterizations of their propagator within an effective Lagrangian model. This model is consistently constructed to preserve the relevant symmetries, allowing for the generation of the necessary amplitudes to describe the pion-nucleon scattering process. This approach enables the accurate calculation of physical observables, such as the cross-section, which are crucial for understanding the dynamic properties of the involved hadronic resonances.
The analysis is conducted within the framework of Quantum Chromodynamics (QCD), given the relevance of pion-nucleon scattering in understanding strong interactions and the characteristics of the resulting hadrons. Special attention is given to the parameterization of the interaction vertex and the propagator of resonances with spin $\frac{3}{2}$, which are crucial aspects for ensuring the consistency and predictive capability of the model. The obtained results are compared with available experimental data, evaluating different parameterizations to identify the one that best reproduces the empirical results. This provides a robust theoretical foundation for future studies of similar hadronic processes.
In this talk, we present the implementation of the replica trick introduced by Parisi to incorporate fluctuations of external fields into the QED Lagrangian. As a first approximation, we study how magnetic and thermal fluctuations, described by white noise, induce effective interactions between fermions, exploring the consequences in various scenarios. Specifically, we demonstrate that magnetic fluctuations, from the perspective of perturbation theory and effective action, break the $U(1)$ symmetry of QED. This results in surviving vector currents and the generation of an effective magnetic mass for photons. Furthermore, we show that magnetic fluctuations break spatial symmetries in such a way that four fermion resonances, with their corresponding spectral widths, can be distinguished, leading to the propagation of four independent fermion quasi-particle modes. Additionally, we compute the effects of magnetic fluctuations on photon and dilepton production during the thermalized stages of heavy-ion collisions. Finally, we report the implications of incorporating thermal fluctuations in the Quark-Gluon Plasma (QGP) phase on the deconfinement temperature.
In this talk, I am going to discuss the computations of the decay width of the Higgs boson into a Z boson-photon pair at leading order in the Standard Model and its supersymmetric extensions.
A work that offers a dramatic journey through the history of physics, portraying key moments in the evolution of scientific knowledge. Throughout the piece, scenes illustrate the challenges and discoveries that have shaped our understanding of the universe, from the most primitive ideas to the most revolutionary advances. The characters, who embody historical figures of physics, face crises of knowledge and struggle with the unknown, reflecting the curiosity and uncertainty inherent in scientific inquiry. The work not only highlights the achievements of physics, but also invites the audience to reflect on the nature of science and its impact on our perception of the world, celebrating the eternal search for truth.
Quantum entanglement is one of the most famous and strange phenomena observed in quantum systems. Defying classical intuition, entanglement and other correlations have been studied widely by researchers, mainly in low-energy systems (eV-MeV). The Large Hadron Collider (LHC) provides a unique environment to test these quantum properties at the highest energy scales ever reached. This can be done with the top quark-antiquark $(\mathrm{t\bar{t}})$ system produced at the LHC since, thanks to the top quark's large decay width, its spin information is transferred to the angular distributions of its decay products. In this work, we present some general aspects of quantum tomography in the $\mathrm{t\bar{t}}$ system (Phys. Rev. D 100, 072002) and recent results probing quantum entanglement between the top quarks (arXiv:2406.03976) using data recorded by the Compact Muon Solenoid (CMS) detector during LHC Run II.
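For orientation (an editorial addition summarizing the standard criterion used in these analyses, not part of the abstract itself): the entanglement witness is built from the $3\times 3$ spin-correlation matrix $C$ as $D=\operatorname{tr}(C)/3$, extracted from the opening angle $\varphi$ between the two charged-lepton directions, each measured in its parent top quark's rest frame,

$$\frac{1}{\sigma}\frac{d\sigma}{d\cos\varphi}=\frac{1}{2}\left(1-D\cos\varphi\right),$$

with $D<-1/3$ signaling an entangled $\mathrm{t\bar{t}}$ spin state near the production threshold.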
This work is based on the study of the production of Higgs pairs decaying into bbtautau at high pT (the boosted regime). Boosted HH production yields highly collimated jets, which is sometimes an inconvenience, since conventional jet reconstruction methods may not be good enough to identify the content of these jets at all. Thus, we focus on the study of jet substructure and first-level kinematic variables (pT and mass of the main objects) to identify which reconstructed jets might contain boosted bb or tautau pairs, and we categorize them into different regions of study (RR, RB, BR and BB, where R and B refer to resolved and boosted jets for the possible bb and tautau combinations). This methodological approach provides a way to study the overlap region between the resolved analysis (already available in the easyjet framework in the ATLAS community) and the boosted analysis (the framework we developed in this work), which should help us understand this overlap and find a baseline to separate resolved and boosted events.
This presentation covers work done in the ATLAS experiment on the migration to release 22 of a tagger based on the Lund jet plane, which makes it possible to understand jet evolution and to discriminate the process from which jets originate, taking into account the different regions of the plane. The objective is to compare the performance of this tagger for boosted W, Z (and eventually H) bosons and top quarks against other taggers.
This talk presents a study of the structure of particle jets produced at the LHC in different interactions.
The study is carried out by means of the Lund plane, a graphical method built from clustering algorithms applied to the signals detected after proton collisions in the accelerator. The primary jets are then declustered to construct the plane.
The input for building these Lund planes is a public dataset called JetClass, which contains simulated data for around 100M jets from different interaction processes. The information in this dataset was cross-checked with our own event simulations using MadGraph (a Monte Carlo event generator interfaced with Delphes and Pythia).
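As a companion to the description above, here is a minimal pure-Python sketch of the primary Lund plane construction (not the JetClass pipeline itself; the constituent momenta and the simple pt-weighted recombination are illustrative assumptions):

```python
# Toy sketch: cluster constituents with the Cambridge/Aachen algorithm,
# then undo the clustering along the harder branch, recording the
# Lund-plane point (ln(1/Delta), ln(kT)) at each declustering step.
import math

def merge(a, b):
    # naive pt-weighted recombination in (pt, y, phi); fine for a toy
    pt = a[0] + b[0]
    return (pt, (a[0]*a[1] + b[0]*b[1]) / pt, (a[0]*a[2] + b[0]*b[2]) / pt)

def delta_R(a, b):
    dphi = abs(a[2] - b[2])
    dphi = min(dphi, 2 * math.pi - dphi)
    return math.hypot(a[1] - b[1], dphi)

def cluster_CA(parts):
    """Repeatedly merge the closest pair in DeltaR, keeping the tree."""
    nodes = [(p, None) for p in parts]          # (momentum, children)
    while len(nodes) > 1:
        i, j = min(
            ((i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))),
            key=lambda ij: delta_R(nodes[ij[0]][0], nodes[ij[1]][0]),
        )
        a, b = nodes[j], nodes[i]
        nodes.pop(j); nodes.pop(i)              # j > i, so pop j first
        nodes.append((merge(a[0], b[0]), (a, b)))
    return nodes[0]

def primary_lund(node):
    """Decluster along the harder branch, emitting Lund-plane points."""
    points = []
    mom, children = node
    while children is not None:
        a, b = children
        hard, soft = (a, b) if a[0][0] >= b[0][0] else (b, a)
        dR = delta_R(hard[0], soft[0])
        points.append((math.log(1.0 / dR), math.log(soft[0][0] * dR)))
        mom, children = hard
    return points

# hypothetical jet constituents: (pt [GeV], rapidity, phi)
constituents = [(120., 0.02, 0.01), (35., 0.15, -0.10),
                (8., -0.25, 0.30), (3., 0.40, 0.45)]
for point in primary_lund(cluster_CA(constituents)):
    print(point)
```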
This work focuses on extrapolating the constraints on di-Higgs (HH) production to the future HL-LHC, specifically targeting the boosted VBF (Vector Boson Fusion) topology. This production mode, while having a lower rate than ggF, is more sensitive to certain coupling modifiers such as κ₂V.
Previous studies using Run 2 data have shown that the boosted analysis is more sensitive to this modifier. This work aims to reproduce those results for a scenario with higher collision energy and an integrated luminosity of up to 3000 fb⁻¹. Multiple scenarios are proposed, considering variations of the groups of systematic uncertainties.
As the luminosity increases, statistical uncertainties become less relevant, while systematic uncertainties become dominant. Various scenarios are analyzed, ranging from keeping the same uncertainties as in Run 2 to removing them completely, in order to assess what new information the HL-LHC improvements can bring.
Inelastic dark matter is a topic of great interest due to its rich phenomenology, which allows the exploration of particles at the sub-GeV scale. Our study focuses on an inelastic model involving two Majorana fermions, χ1 and χ2, mediated by a dark photon and a dark Higgs boson, the latter being responsible for mass generation in the dark sector. Given the scale, we concentrate on the Deep Underground Neutrino Experiment (DUNE), taking into account both the on-axis and off-axis detectors. Specifically, by considering a small mass splitting between the fermions, we identify an interesting region of the model compatible with DUNE's constraints.
The series of Large Hadron Collider upgrades, culminating in the High-Luminosity Large Hadron Collider, will significantly expand the physics program of the Compact Muon Solenoid (CMS) experiment. However, these upgrades will also create more challenging experimental conditions, affecting detector operations, triggering, and data analysis [1]. During Run 3, the LHC is designed to achieve an integrated luminosity of approximately $150\text{--}300~\mathrm{fb}^{-1}$ per experiment. In terms of instantaneous luminosity, the LHC is expected to operate at up to $2\times 10^{34}~\mathrm{cm^{-2}\,s^{-1}}$, and at least $5\text{--}7.5\times 10^{34}~\mathrm{cm^{-2}\,s^{-1}}$ once the High-Luminosity LHC is completed for Run 4. These conditions will affect muon triggering, identification, and measurement, which are critical capabilities of the experiment.
To address these challenges, additional muon detectors based on gas electron multiplier (GEM) technology are being installed in the CMS endcaps. In 2019, 161 detectors were installed for the GE1/1 station. We are now working on the ME0 upgrade, to be installed in 2026. The assembly and quality control of the ME0 detectors are distributed among several production centers around the world (CERN, PKU and Frascati). We present the quality control procedures used to standardize detector performance and the status of ME0 production. The quality control steps (component acceptance, foil leakage current, module gas leakage, high voltage (HV), gas gain and uniformity, HV stability, electronics test/validation, and chamber cosmic-stand tests) ensure that the project achieves a high success rate.
[1] Abbas, M., et al. (2022). Quality control of mass-produced GEM detectors for the CMS GE1/1 muon upgrade. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 1034, 166716. https://doi.org/10.1016/j.nima.2022.166716
A study of low-$p_T$ jets from the process $\text{p p > b b~}$, excluding $h$ and $z$ as mediators, using MadGraph and ROOT.
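For reference, the diagram exclusion mentioned above can be written directly in MadGraph5's process syntax, where `/` removes all diagrams containing the listed particles (the output directory name is illustrative):

```
import model sm
generate p p > b b~ / h z
output bbbar_no_hz
launch
```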
Transmitting optical signals with high efficiency requires an optical fiber with a minimum of defects in its structure.
This analysis presents an artificial-intelligence-based process that takes as input an image of the transverse cross-section of an optical fiber and classifies it as apt or not apt to transmit the signal, according to the demands of the application.
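A minimal sketch of the kind of classifier described, assuming a small convolutional network in PyTorch; the architecture, input size, and decision threshold are illustrative assumptions, not the actual system:

```python
# Sketch: binary CNN classifier for grayscale fiber cross-section images.
import torch
import torch.nn as nn

class FiberClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),            # logit: >0 -> "apt", <=0 -> "not apt"
        )

    def forward(self, x):                # x: (batch, 1, 64, 64)
        return self.head(self.features(x))

model = FiberClassifier()
dummy = torch.randn(4, 1, 64, 64)        # four fake cross-section images
logits = model(dummy)
print((logits > 0).squeeze(1))           # boolean "apt" decisions
```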
Taus are third-generation leptons with a short lifetime, and their decay products offer a unique opportunity: by analyzing the kinematic distributions of these products, it is possible to reconstruct variables that reveal the tau polarization. Polarization, which is directly linked to helicity, is a crucial observable that distinguishes between right- and left-handed particles. Moreover, combined with other observables, polarization information provides valuable insight into Z boson decays to tau pairs. Because the Z boson has spin one, angular momentum conservation strongly correlates the two taus into specific helicity combinations. Therefore, studying tau decays allows us to explore the electroweak sector in greater detail.
Polarization variables take advantage of the asymmetry between left- and right-handed tau decays, and they can be used to increase sensitivity in searches for physics beyond the Standard Model, offering a window to study the possible properties of new particles.
The study of third-generation fermion channels at the LHC has gained increasing importance, particularly in light of potential excesses observed in the data. This work is a phenomenological study aimed at establishing and distinguishing the effects of different types of BSM particles that can produce possible excesses in channels with two taus in the final state. These include resonant production, where neutral bosons such as the Z boson, heavy scalars, or pseudoscalars decay into two taus via an s-channel, and non-resonant production, such as the exchange of a scalar or vector leptoquark in a t-channel.
The distinction between these production channels is achieved by analyzing tau polarization and calculating the interference effects among these channels using the MadGraph, Pythia, and Delphes frameworks. This analysis helps identify the best kinematic observables and assess their impact on the statistical significance of the observation.
Hotel v1501
In this work, we explore the potential of lepton-flavor violating (LFV) Higgs decays as a probe for new physics, specifically through the mediation of an ultralight gauge boson, denoted as $\chi$. Our study bridges the gap between a model generating the LFV interaction $\bar{\ell}_i \ell_j \chi$ at tree level and an effective field theory framework that preserves the light mass of $\chi$. We utilize stringent constraints from recent CMS and ATLAS data on $H\to\ell_i\ell_j$ decays to infer an upper bound on the exotic decay channel $H\to\ell_i\ell_j\chi$. The talk will delve into various kinematic observables, including the lepton energy spectrum, Dalitz plot distribution, and asymmetries in lepton charge and forward-backward distributions, offering a comprehensive perspective on the implications of our findings.
An open problem that the Standard Model does not solve is the origin of the mass hierarchy among fermions. Different alternatives have been proposed, either by adding extra groups to the gauge group of the Standard Model or by building hybrid models with some of them. It has been shown that the $S_3$ symmetry gives good results if, in addition, three Higgs doublets with their $S_3$-invariant potential are introduced. However, when the minimization conditions of the Higgs potential are taken into account, the resulting matrix $V_{\rm CKM}$ exhibits a residual symmetry with zeros in some entries. Following the success of $S_3$, an extension of the Standard Model is proposed by means of the same group, but obtained from modular symmetry. In doing so, certain special functions known as modular forms are taken into account, which transform in a particular way under the action of the modular group. By considering a modular symmetry, it is possible to assign to the quark fields and Higgs fields a new quantity known as the modular weight which, together with the $S_3$ symmetry, produces new constraints on the way the Yukawa-sector Lagrangian is built, and hence on the couplings, which are expressed in terms of modular forms. A proper assignment of the quark and Higgs fields in $S_3$ and of their modular weights allows a mass matrix with texture zeros to be written. When the elements of the quark mixing matrix $V_{\rm CKM}$ are calculated, it is found that the $V_{\rm CKM}$ matrix indeed does not exhibit zeros in any of its entries, and they are comparable to the data provided by the PDG.
We study the possibility of obtaining the Standard Model (SM) of particle physics as an effective theory of a more fundamental one, whose electroweak sector includes two non-universal local $U(1)$ gauge groups, with the chiral anomaly cancellation taking place through an interplay among families. As a result of the spontaneous symmetry breaking, a massive gauge boson $Z'$ arises, which couples differently to the third family of fermions (by assumption, we restrict ourselves to the scenario in which the $Z'$ couples in the same way to the first two families). Two Higgs doublets and one scalar singlet are necessary to generate the SM fermion masses and break the gauge symmetries.
We show that in our model, the flavor-changing neutral currents (FCNC) of the Higgs sector are identically zero if each right-handed SM fermion is only coupled with a single Higgs doublet. This result represents a FCNC cancellation mechanism different from the usual procedure in Two-Higgs Doublet Models~(2HDM).
The non-universal nature of our solutions requires the presence of three right-handed neutrino fields, one for each family. Our model generates all elements of the Dirac mass matrix for quarks and leptons, which is quite non-trivial for non-universal models. Thus, we can fit all the masses and mixing angles with two scalar doublets. Finally, we show the distribution of solutions for the scalar boson masses in our model by scanning well-motivated intervals for the model parameters. We consider two possibilities for the scalar potential and compare these results with the Higgs-like resonant signals recently reported by the ATLAS and CMS experiments at the LHC.
We also report collider, electroweak, and flavor constraints on the model parameters.