The Tenth Annual Large Hadron Collider Physics (LHCP2022) conference will take place fully online from 16th to 20th May 2022.
News: Bulletin 2 is available.
The LHCP conference series started in 2013 after a successful fusion of two international conferences, "Physics at Large Hadron Collider Conference" and "Hadron Collider Physics Symposium". The programme will contain a detailed review of the latest experimental and theoretical results on collider physics, with many final results of the Large Hadron Collider Run-2, potentially a first glimpse of the upgraded accelerator and detector operation in Run-3, and discussions on further research directions within the high energy particle physics community, both in theory and experiment.
The main goal of the conference is to provide intense and lively discussions between experimenters and theorists in such research areas as the Standard Model Physics and Beyond, the Higgs Boson, Heavy Quark Physics and Heavy Ion Physics as well as to share recent progress in the high luminosity upgrades and future collider developments.
MAIN DEADLINES
- deadline: 13 May 2022
- Poster abstract submission
  - submission deadline: 28 March 2022
  - acceptance notification: 8 April 2022 at the latest
- Conference dates: 16 – 20 May 2022
- Proceedings submission: TBA
LHCb+ATLAS+CMS
ATLAS+CMS
Particle physics is a field which is full of striking visuals: from Feynman diagrams to event displays, there is no shortage of colourful high-contrast shapes and designs to capture the imagination. Can these visuals be used to reach out to budding scientists from their very earliest days? This talk will describe the development of the "Particle Physics for Babies" children's book, a concept imagined by a first-time dad/physicist who wanted to find a way to communicate his physics passion to his newborn daughter. The book was co-developed with the ATLAS outreach team and the International Particle Physics Outreach Group, and has grown to include downloadable captions which allow parents to explain the images in the book to their children or grandchildren with confidence, allowing science to be part of a new child's universe from day 0.
Deep Learning (DL) is one of the most popular Machine Learning approaches in the High Energy Physics (HEP) community and has been applied to numerous problems over the past decades. Of particular interest is the ability of DL models to learn unique patterns and correlations from data in order to map highly complex non-linear functions. These features could be used to explore the hidden physics laws that govern particle production, anisotropic flow, spectra, etc., in heavy-ion collisions. This work sheds light on the possible use of DL techniques, such as a feed-forward Deep Neural Network (DNN) based estimator, to predict the elliptic flow ($v_2$) in heavy-ion collisions at RHIC and LHC energies. A novel method is used to process the track-level information as input to the DNN model. The model is trained on minimum-bias Pb-Pb collision events at $\sqrt{s_{\rm NN}} = 5.02$ TeV simulated with the AMPT event generator. The trained model is successfully applied to estimate the centrality dependence of $v_2$ at both LHC and RHIC energies, and it also predicts the transverse momentum ($p_{\rm T}$) dependence of $v_2$ quite well. A noise sensitivity test is performed to estimate the systematic uncertainty of the method. Results of the DNN estimator are compared to both simulation and experiment, demonstrating the robustness and prediction accuracy of the model.
Reference: N. Mallick, S. Prasad, A. N. Mishra, R. Sahoo and G. G. Barnaföldi, arXiv:2203.01246 [hep-ph].
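The estimator described above can be illustrated with a minimal sketch, assuming fully synthetic toy inputs; this is not the authors' code, data, or architecture. The hypothetical features (a coarse grid of track counts) merely stand in for the processed track-level information used in the study.

```python
# Schematic sketch of a feed-forward DNN regression estimator for v2.
# All inputs and the "truth" mapping below are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for simulated minimum-bias events: 32 features per event
# (e.g. an (eta, phi) grid of track counts) and one target v2 per event.
n_events, n_features = 2000, 32
X = rng.poisson(lam=5.0, size=(n_events, n_features)).astype(float)
# Hypothetical truth: v2 loosely tied to an azimuthal count asymmetry.
v2 = 0.05 + 0.01 * (X[:, :16].sum(axis=1) - X[:, 16:].sum(axis=1)) / X.sum(axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, v2, random_state=0)

# Small fully connected network trained to regress v2 from the features.
model = MLPRegressor(hidden_layer_sizes=(64, 32), activation="relu",
                     max_iter=500, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("test events:", len(pred))
```

A noise-sensitivity test of the kind mentioned in the abstract would amount to perturbing `X_test` and comparing the spread of `pred`.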
The ATLAS Visitor Centre at CERN is a guided exhibition space that has been welcoming visitors from around the world since 2009. In a recent effort, ATLAS has completely reinvented the space, replacing the original installation with a new exhibition. This contribution will highlight the basic concept behind the new exhibition, introduce its main components along with details on their implementation, and hint at the process of getting from an idea to the final implementation. This contribution will also present some of the efforts to make the exhibition more inclusive and accessible to a wider and more diverse audience.
SND@LHC is a compact and stand-alone experiment to perform measurements with neutrinos produced at the LHC in a hitherto unexplored pseudo-rapidity region of 7.2 < η < 8.6, complementary to all the other experiments at the LHC. The experiment is located 480 m downstream of IP1 in the unused TI18 tunnel. The detector is composed of a hybrid system based on an 800 kg target mass of tungsten plates, interleaved with emulsion and electronic trackers, followed downstream by a calorimeter and a muon system. The configuration allows efficiently distinguishing between all three neutrino flavours, opening a unique opportunity to probe physics of heavy flavour production at the LHC in the region that is not accessible to ATLAS, CMS and LHCb. This region is of particular interest also for future circular colliders and for predictions of very high-energy atmospheric neutrinos. The detector concept is also well suited to searching for Feebly Interacting Particles via signatures of scattering in the detector target. The first phase aims at operating the detector throughout LHC Run 3 to collect a total of 150 fb$^{-1}$. The experiment was recently approved by the Research Board at CERN and its detector is being commissioned. A new era of collider neutrino physics is just starting.
We present a reinterpretation study of existing results from the CMS Collaboration, specifically searches for light BSM Higgs pairs produced in the chain decay $pp\to H_{\rm SM}\to hh(AA)$ into a variety of final states, in the context of the CP-conserving 2-Higgs Doublet Model (2HDM) Type-I. Through this, we test the LHC sensitivity to a possible new signature, $pp\to H_{\rm SM}\to ZA\to ZZh$, with $ZZ\to jj\,\mu^+\mu^-$ and $h\to b\bar b$. We perform a systematic scan over the 2HDM Type-I parameter space, taking into account all available theoretical and experimental constraints, in order to find a region with a potentially visible signal. We investigate its significance through a full Monte Carlo simulation down to detector level. We show that such a signal is a promising alternative channel to standard four-body searches for light BSM Higgses at the LHC with an integrated luminosity of $L = 300$/fb.
Simulation is used to produce artificial events for physics analyses. In the ATLAS experiment at the LHC at CERN (Geneva, Switzerland), simulation is carried out with the Geant4 toolkit, which takes geometry descriptions as input for modelling the propagation of particles through material. Adding the CATIA (Computer-Aided Three-dimensional Interactive Application) CAD application to the simulation infrastructure opens an opportunity for early study of the detector geometry for precise simulation. This paper describes a method for calculating the radiation length $X_0$ and interaction length $\lambda$ parameters for native CATIA geometry descriptions. The core part is the so-called scanner function, which generates control points on the geometry at which the calculations are carried out. The algorithm also includes initial transformations of the geometries before scanning, as well as output interfaces to the standard applications ROOT and Excel.
The Liquid Argon Calorimeters are employed by ATLAS for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, and for hadronic and forward calorimetry in the region from |η| = 1.5 to |η| = 4.9. They also provide inputs to the first level of the ATLAS trigger. After a successful period of data taking during LHC Run-2 between 2015 and 2018, the ATLAS detector entered a long shutdown period. In 2022 the LHC will restart, and the Run-3 period should see an increase of luminosity and pile-up of up to 80 interactions per bunch crossing.
To cope with these harsher conditions, a new trigger readout path has been installed during the long shutdown. This new path should significantly improve the trigger performance on electromagnetic objects, achieved by increasing the granularity of the objects available at trigger level by up to a factor of ten.
The installation of this new trigger readout chain also required an update of the legacy system. More than 1500 boards of the precision readout have been extracted from the ATLAS pit, refurbished and re-installed. The legacy analogue trigger readout, which will remain during LHC Run-3 as a backup of the new digital trigger system, has also been updated.
For the new system, 124 new on-detector boards have been added. These boards, which operate in a radiation environment, digitize the calorimeter trigger signals at 40 MHz. The digital signals are sent to the off-detector system and processed online to provide the measured energy value for each readout unit. In total, up to 31 Tb/s are analyzed by the processing system and more than 62 Tb/s are generated for downstream reconstruction. To minimize the trigger latency, the processing system had to be installed underground, where the limited available space imposed a very compact hardware structure. To achieve this, large FPGAs with high throughput have been mounted on ATCA mezzanine cards; in total, no more than 3 ATCA shelves are used to process the signals from approximately 34000 channels. Since modern technologies have been used compared to the previous system, all the monitoring and control infrastructure is being adapted and commissioned as well.
This contribution will present the challenges of the installation, the commissioning and the milestones still to be completed towards the full operation of both the legacy and the new readout paths for the LHC Run-3.
The ATLAS Open Data project aims to deliver open-access resources for education and outreach in High Energy Physics using real data recorded by the ATLAS detector. So far, the project has released a substantial amount of data from 8 TeV and 13 TeV collisions in an easily accessible format, supported by dedicated software and documentation to allow its fruitful use by users at a range of experience levels. To maximise the value of the data, software, and documentation provided, ATLAS has developed initiatives and promotes stakeholder engagement in the creation of these materials through on-site and remote training schemes, such as high-school work experience and summer school programs, university projects, and PhD qualification tasks. We present examples of how multiple training programs inside and outside CERN have helped, and continue to help, develop the ATLAS Open Data project, along with lessons learnt, impacts, and future goals.
The ALFA and AFP detectors are being prepared to take data during Run 3. ALFA underwent refurbishment, whereas AFP, among other upgrades, was equipped with a new solution for its Time-of-Flight (ToF) system, the so-called Out-of-Vacuum solution. The AFP Silicon Tracker is equipped with new modules, and the ToF system, after various test beams, appears to have achieved the desired resolution with high efficiency.
The ATLAS Trigger in Run 3 is expected to record on average around 1.7 kHz of primary 13.6 TeV physics data, along with a substantial additional rate of delayed data (to be reconstructed at a later date) and trigger-level-analysis data, surpassing the instantaneous data volumes collected during Run 2.
Events will be selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. New in the Level 1 (L1) trigger are the New Small Wheel and BIS78 chambers, in combination with new L1Muon endcap sector logic and MUCTPI. In addition, a new L1Calo system based around eFEX, jFEX and gFEX systems for egamma, tau, jets and missing energy will be under commissioning in 2022. In the High Level Trigger, the ATLAS physics menu was re-implemented from scratch using a new multi-threaded framework.
We will present first results from the early phases of commissioning the Run 3 trigger in 2022. We will describe the ATLAS Run 3 trigger menu and how it differs from Run 2, exploring how rate, bandwidth, and CPU constraints are integrated into the menu. Improvements made during the run to react to changing LHC conditions and data-taking scenarios will be discussed, and we will conclude with an outlook on how the trigger menu will evolve with the continued commissioning of the new L1 systems.
The Virtual Visit service run by the ATLAS Collaboration has been active since 2010. The ATLAS Collaboration has used this popular and effective method to bring the excitement of scientific exploration and discovery into classrooms and other public places around the world. The programme, which uses a combination of video conferencing, webcasts, and video recording to communicate with remote audiences, has already reached tens of thousands of viewers, in a large number of languages, from tens of countries across all continents. We present a summary of the ATLAS Virtual Visit service currently in use – including a new booking system and a hand-held video conference setup from the ATLAS cavern – and present a new system being installed in the ATLAS Visitors Centre and in the ATLAS cavern. In addition, we show the reach of the programme over the last few years.
We have explored the effect of a weak magnetic field on the transport of charge and heat in hot and dense QCD matter by calculating the corresponding response functions, namely the electrical conductivity ($\sigma_{\rm el}$), Hall conductivity ($\sigma_{\rm H}$), thermal conductivity ($\kappa_0$) and Hall-type thermal conductivity ($\kappa_1$), in a kinetic theory approach. The interactions among partons have been subsumed through their thermal masses. It is found that $\sigma_{\rm el}$ and $\kappa_0$ decrease, while $\sigma_{\rm H}$ and $\kappa_1$ increase, in the presence of a weak magnetic field, whereas a finite chemical potential enhances all of these transport coefficients. The effects of the weak magnetic field and finite chemical potential on these coefficients are more noticeable at low temperatures; at high temperatures they show only a mild dependence on the magnetic field and chemical potential. We have also observed that a finite chemical potential further extends the lifetime of the magnetic field. This study sheds light on how the weak magnetic field and finite chemical potential affect local equilibrium through the Knudsen number and elliptic flow, and the interplay between charge and heat transport through the Wiedemann-Franz law. The components of the Knudsen number in the weakly magnetized hot and dense QCD matter remain much below unity, so the separation between the macroscopic and microscopic length scales is sufficient for the medium to remain in local equilibrium. Further, we find that the elliptic flow is enhanced by the weak magnetic field and reduced by the finite chemical potential. The components of the Lorenz number are more strongly affected by the finite chemical potential than by the weak magnetic field, and they increase with temperature, thus confirming the violation of the Wiedemann-Franz law for hot and dense QCD matter in an ambience of weak magnetic field.
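For reference, the Wiedemann-Franz law invoked above relates the charge and heat transport coefficients through the Lorenz number (a textbook relation, quoted here only for context):

```latex
L \;=\; \frac{\kappa}{\sigma_{\rm el}\,T}\,,
\qquad
L_{\rm free} \;=\; \frac{\pi^{2}}{3}\left(\frac{k_{B}}{e}\right)^{2}
```

A Lorenz number that varies with temperature, as found in this work, signals a violation of the law: charge and heat are then not carried in the fixed proportion expected of an ideal fermionic system.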
In this work, we derive lower mass bounds on the $Z^\prime$ gauge boson based on the dilepton data from the LHC at 13 TeV centre-of-mass energy, and forecast the sensitivity of the High-Luminosity LHC with $L=3000~\mathrm{fb}^{-1}$, the High-Energy LHC with $\sqrt{s}=27$~TeV, and the Future Circular Collider with $\sqrt{s}=100$~TeV. We take into account the presence of exotic and invisible decays of the $Z^\prime$ gauge boson to find a more conservative and robust limit, different from previous studies. We investigate the impact of these new decay channels for several benchmark models in the scope of two different 3-3-1 models. We find that in the most constraining cases, the LHC with $139~\mathrm{fb}^{-1}$ can impose $m_{Z^{\prime}}>4$~TeV. Moreover, our HL-LHC, HE-LHC, and FCC forecasts yield $m_{Z^{\prime}}>5.8$~TeV, $m_{Z^{\prime}}>9.9$~TeV, and $m_{Z^{\prime}}>27$~TeV, respectively. Lastly, we put our findings into perspective with dark matter searches to show the region of parameter space where a dark matter candidate with the right relic density is possible.
In current and future high-energy physics experiments, the sensitivity of selection-based analysis will increasingly depend on the choice of the set of high-level features determined for each collision. The complexity of event reconstruction algorithms has escalated in the last decade, and thousands of parameters are available for analysts. Deep Learning approaches are widely used to improve the selection performance in physics analysis.
In many cases, the development of the algorithm is based on a brute force approach where all the possible combinations of available neural network architectures are tested using all the available parameters. A crucial aspect is that the results from a model based on a large number of input variables are more difficult to explain and understand. This point becomes relevant for neural network models since they do not provide uncertainty estimation and are often treated as perfect tools, which they are not.
In this work, we show how using a sub-optimal set of input features can lead to higher systematic uncertainty associated with classifier predictions. We also present an approach to selecting an optimal set of features using ensemble learning algorithms. For this study, we considered the case of highly boosted di-jet resonances produced in $pp$ collisions decaying to two $b$-quarks to be selected against an overwhelming QCD background. Results from a Monte Carlo simulation with HEP pseudo-detectors are shown.
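The feature-selection idea can be sketched with a toy example, assuming a synthetic dataset and impurity-based importances from a random forest; the dataset, feature count, and top-k cut are illustrative choices, not those of the study.

```python
# Illustrative sketch: rank high-level features with an ensemble learner
# and keep only the most informative ones, as a proxy for choosing an
# optimal input set for a downstream classifier. Toy data, not HEP data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy "signal vs background" dataset: 20 features, only 5 informative.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)

# Rank features by mean impurity decrease and keep the top 5.
ranking = np.argsort(forest.feature_importances_)[::-1]
selected = ranking[:5]
print("selected feature indices:", selected.tolist())
```

Training the final classifier on `X[:, selected]` rather than all 20 columns is the analogue of the reduced feature set discussed above; discarded low-importance features are one source of the spurious complexity that inflates systematic uncertainty.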
In this study, we explore the effects of CP-violating anomalous interactions of the top quark through the semileptonic decay modes of the top quark arising from $t\bar{t}$ pair production at the Large Hadron Collider. Predictions for the LHC sensitivity to the coupling strength of such CP-violating interactions will be discussed for the 13 TeV LHC data and for a future hadron collider with 14 TeV energy.
In the Standard Model, CP violation in the Electroweak sector is parametrized by the Jarlskog Invariant. This is the order parameter of CP-violation, in the sense that it vanishes iff CP is conserved. When higher dimensional operators are allowed, and the Standard Model Effective Field Theory is constructed, numerous new sources for CP violation can appear. However, the description of CP violation as a collective effect, present in the SM, is inherited by its Effective extension.
Here, I argue that such a behaviour has to be captured, at dimension 6, by flavor-invariant, CP-violating objects, linear in the Wilson coefficients. Such a description ensures that CP violation in the SMEFT is treated in a basis-independent manner. In particular, I claim these are the objects that have to vanish, together with the SM Jarlskog Invariant, for CP to be conserved, and vice versa. Different assumptions on the flavor structure of the SMEFT operators lead to invariants with different relative importance. A consistent way to address this issue in our framework is presented.
In recent years, crowdfunding platforms have gained popularity as a way to raise funds for various endeavors. This poster discusses the use of crowdfunding as a non-traditional way to finance physics outreach projects. Such tools can provide much-needed flexibility to projects and serve as a platform to spread the word about your project. The poster is based on first-hand experience using such tools and includes a discussion of important advice and common pitfalls.
In this paper, we study the prospect of using ECAL barrel timing to develop triggers dedicated to long-lived particles (LLPs) decaying to jets at Level-1 of the HL-LHC. We construct over 20 timing-based variables and identify three of them which perform best and are robust against increasing pile-up (PU). We estimate the QCD prompt-jet background rates accurately using the "stitching" procedure for varying thresholds defining our triggers, and compute the signal efficiencies for different LLP scenarios at a permissible background rate. The trigger efficiencies can reach $\mathcal{O}(80\%)$ for the most optimal trigger for pair-produced heavy LLPs with high decay lengths, degrading with decreasing mass and decay length of the LLP. We also discuss the prospect of adding the information of displaced L1 tracks to our triggers, which further improves the results, especially for LLPs characterised by lower decay lengths.
The strong force is the least known fundamental force of nature, and the effort to precisely measure its coupling constant has a history of at least 30 years. This contribution presents a new experimental method for determining the strong-coupling constant from the Sudakov region of the transverse-momentum distribution of Z bosons produced in hadron collisions through the Drell-Yan process. The analysis is based on predictions at third order in perturbative QCD, and employs a measurement performed in proton-proton collisions with the CDF experiment. The determined value of the strong coupling at the reference scale corresponding to the $Z$-boson mass is $\alpha_S(m_Z) = 0.1185^{+0.0014}_{-0.0015}$. This is the most precise determination achieved so far at a hadron collider. The application of this methodology at the LHC has the potential to reach sub-percent precision.
The increase of the particle flux (pile-up) at luminosities of L ≃ 7.5 × 10^34 cm^-2 s^-1 is one of the main experimental challenges for the HL-LHC physics program. A powerful new way to mitigate the effects of pile-up is to use high-precision timing information to distinguish between collisions occurring close in space but well separated in time. A High-Granularity Timing Detector, based on low-gain avalanche detector technology, is therefore proposed in front of the LAr end-cap calorimeters for pile-up mitigation and for luminosity measurement. It will cover the pseudo-rapidity range from 2.4 to 4.0. Two double-sided layers of silicon sensors will provide precision timing information for MIPs with a resolution better than 30 ps per track, in order to assign each particle to the correct vertex. On the order of ten thousand silicon sensor modules will be produced. The module production, to be carried out at several sites, will involve the assembly of many components produced by various vendors. The production history of these components, along with their quality-control checks and tests, must be tracked and recorded. This poster presents the development of a production database that stores this information. The applications used for uploading data to, and retrieving it from, the database, and a user interface for interacting with the database, are also presented.
A new era of hadron collisions will start around 2028 with the High-Luminosity LHC, which will allow the collection of ten times more data than what has been collected so far at the LHC. This is made possible by a higher instantaneous luminosity and a higher number of collisions per bunch crossing.
To meet the new trigger and data acquisition requirements and to withstand the high expected radiation doses at the High-Luminosity LHC, the ATLAS Liquid Argon Calorimeter readout electronics will be upgraded. The triangular calorimeter signals are amplified and shaped by analogue electronics over a dynamic range of 16 bits, with low noise and excellent linearity. Development of low-power preamplifiers and shapers to meet these requirements is ongoing in 130 nm CMOS technology. To digitize the analogue signals on two gains after shaping, a radiation-hard, low-power 40 MHz 14-bit ADC is being developed using a pipeline+SAR architecture in 65 nm CMOS. The characterization of prototypes of these on-detector components is promising, and they will likely fulfill all the requirements.
The signals will be sent at 40 MHz to the off-detector electronics, where FPGAs connected through high-speed links will perform energy and time reconstruction through the application of corrections and digital filtering. Reduced data are then sent with low latency to the first-level trigger system, while the full data are buffered until the reception of the trigger decision signal. If an event is triggered, the full data are sent to the ATLAS readout system. The data-processing, control, and timing functions will be realized with dedicated boards using the ATCA technology.
The results of tests of prototypes of the on-detector components will be presented. The design of the off-detector boards along with the performance of the first prototypes will be discussed. In addition, the architecture of the firmware and processing algorithms will be shown.
The Low Gain Avalanche Detector (LGAD) technology is proposed for the ATLAS High Granularity Timing Detector (HGTD) for the High-Luminosity Large Hadron Collider (HL-LHC). The USTC-IME v2.0 and v2.1 LGAD sensors are designed by the University of Science and Technology of China (USTC) and fabricated by the Institute of Microelectronics of the Chinese Academy of Sciences (IME, CAS). Various designs with different peripheral structures and gain-layer implantations are realized in the production. IV/CV electrical characterization, charge collection and timing resolution measurements with a Sr-90 beta source, as well as test-beam measurements, are performed on the single-pad test structures and large arrays, both before and after neutron irradiation at JSI. The results show that the USTC-IME v2.1 sensors, in which the carbon implantation is well optimized, can provide a collected charge of more than 4 fC and a time resolution better than 70 ps at appropriate bias voltage, even at a radiation fluence of up to 2.5 × 10^15 cm^-2 1 MeV neutron equivalent, which satisfies the requirements of the HGTD.
We explore the ability of a recently proposed jet substructure technique, Dynamical Grooming, to pin down the properties of the Quark-Gluon Plasma formed in ultra-relativistic heavy-ion collisions. In particular, we compute, both analytically and via Monte-Carlo simulations, the opening angle $\theta_g$ of the hardest splitting in the jet as defined by Dynamical Grooming. Our calculation, grounded in perturbative QCD, accounts for the factorization in time between vacuum-like and medium-induced processes in the double logarithmic approximation. We observe that the dominating scale in the $\theta_g$-distribution is the decoherence angle $\theta_c$ which characterises the resolution power of the medium to propagating color probes. This feature also persists in strong coupling models for jet quenching. We further propose for potential experimental measurements a suitable combination of the Dynamical Grooming condition and the jet radius that leads to a pQCD dominated observable with a very small sensitivity (≤10%) to medium response.
References:
[1] P. Caucal, A. Soto-Ontoso, A. Takacs, arXiv:2111.14768
[2] P. Caucal, A. Soto-Ontoso, A. Takacs, JHEP 07 (2021) 020
The ATLAS Collaboration has developed a variety of printables for education and outreach activities. We present two ATLAS Colouring Books, the ATLAS Fact Sheets, the ATLAS Physics Cheat Sheets, and the ATLAS Activity Sheets. These materials are intended to cover key topics of the work done by the ATLAS Collaboration and the physics behind the experiment for a broad audience of all ages and levels of experience. In addition, there is ongoing work on translating these documents into different languages, with one of the colouring books already available in 18 languages. These printables are prepared to complement the information found in all ATLAS digital channels and are particularly useful in outreach events and in the classroom. We present these resources, our experience in creating them, their use, feedback received, and plans for the future.
The color-reconnection (CR) mechanism used in PYTHIA8 has been reported to describe collective-like effects in small systems, such as the mass-dependent growth of $\langle {\textit{p}_{\rm T}} \rangle$ as a function of multiplicity and the enhanced baryon-over-meson production at intermediate ${\textit{p}_{\rm T}}$, similar to those observed in heavy-ion collisions. The development of CR and rope hadronization (RH) in PYTHIA8 has aided a better understanding of small systems. We measure charge-independent (CI) and charge-dependent (CD) two-particle differential number correlation functions, $\rm{R_2}{\left( \Delta \eta, \Delta \varphi \right)}$, and transverse momentum correlation functions, $\rm{P_2} {\left( \Delta \eta, \Delta \varphi \right)}$, of charged particles produced in ${pp}$ collisions at the LHC centre-of-mass energies ${\sqrt{\textit{s}}}$ = 2.76 TeV, 7 TeV and 13 TeV with the PYTHIA8 model. For inclusive charged hadrons ($h^\pm$) in three distinct transverse momentum ($\textit{p}_{\rm T}$) ranges, PYTHIA8 predictions for the $\rm{R_2}$ and $\rm{P_2}$ correlation functions with full azimuthal coverage in the pseudorapidity range $|\eta| < 1.0$ are shown. The strengths and shapes of the $\rm{R_2}$ and $\rm{P_2}$ correlation functions are reported for various combinations of CR and RH to study particle-production mechanisms in small systems. Additionally, for a better understanding of the angular ordering and jet properties implemented in the PYTHIA8 model, the ${\Delta \eta}$ and ${\Delta \varphi}$ dependences of $\rm{R_2}$ and $\rm{P_2}$ are compared. The evolution of the near-side width of these correlation functions with transverse momentum and energy is shown.
This submission describes revised plans for Event Filter Tracking in the upgrade of the ATLAS Trigger and Data Acquisition system for the high pileup environment of the High-Luminosity Large Hadron Collider (HL-LHC). The new Event Filter Tracking system is a flexible, heterogeneous commercial system consisting of CPU cores and possibly accelerators (e.g., FPGAs or GPUs) to perform the compute-intensive Inner Tracker charged particle reconstruction. Demonstrators based on commodity components have been developed to support the proposed architecture: a software-based fast tracking demonstrator, an FPGA-based demonstrator, and a GPU-based demonstrator. Areas of study are highlighted in view of a final system for HL-LHC running.
Short-lived resonances can probe the strongly interacting matter produced in high-energy heavy-ion collisions. In particular, K*(892)$^{\pm}$ is important because of its very short lifetime (around 4 fm/$c$), comparable to that of the partonic plasma phase; its short lifetime can also be used to study rescattering and regeneration effects in the hadronic phase. An event-shape observable like transverse spherocity is sensitive to hard and soft processes, and can be used as a tool to categorize pp collisions into isotropic (dominated by soft QCD) and jetty (dominated by hard QCD) events. This work presents the latest developments in the K*(892)$^{\pm}$ analysis as a function of event multiplicity and transverse spherocity using pp collisions at $\sqrt{s}$ = 13 TeV collected by ALICE. The results obtained in this analysis will be compared to those obtained for other light-flavor hadrons. The ${p}_{T}$-differential ratios of K*(892)$^{\pm}$ yields to those of long-lived stable hadrons in the same multiplicity and transverse spherocity intervals will also be presented.
The ATLAS Collaboration consists of more than 5000 members, from over 100 different countries. Regional, age and gender demographics of the collaboration are presented, including the time evolution over the lifetime of the experiment. In particular, the relative fraction of women is discussed, including their share of contributions, recognition and positions of responsibility, including showing how these depend on other demographic measures.
Relativistic heavy-ion beams at the LHC are accompanied by a large flux of equivalent photons, leading to multiple photon-induced processes. One of the most basic processes, originating from the photon-photon interactions, is the exclusive production of lepton pairs. This poster presents new measurements of exclusive dielectron and dimuon production performed by the ATLAS Collaboration, using the data from ultraperipheral lead-lead collisions at $\sqrt{s_{NN}} =5.02\rm~TeV$. The differential cross-sections as a function of several dilepton variables were measured in the inclusive sample, and for dielectron pairs also under the requirement of no activity in the forward direction. The results are compared with predictions from STARlight and SuperChic MC generators.
Development of a new framework for the derivation of order-by-order hydrodynamics from the Boltzmann equation is necessary, as the widely used Anderson-Witting formalism leads to violation of fundamental conservation laws when the relaxation time depends on particle energy, or in a hydrodynamic frame other than the Landau frame. We generalize an existing framework for the consistent derivation of relativistic dissipative hydrodynamics from the Boltzmann equation with an energy-dependent relaxation time by extending the Anderson-Witting relaxation-time approximation. We argue that the present framework is compatible with conservation laws and derive first-order hydrodynamic equations in the Landau frame. Further, we show that the transport coefficients, such as the shear and bulk viscosities as well as the charge and heat diffusion currents, receive corrections due to the energy dependence of the relaxation time compared to what one obtains from the Anderson-Witting approximation of the collision term. The ratios of these transport coefficients are studied using a parametrized relaxation time, and several interesting scaling features are reported.
Heavy-quark symmetry (HQS), despite being approximate, allows one to relate dynamically many hadron systems. In the HQS limit, heavy mesons and doubly-heavy baryons are very similar, as their dynamics is determined by a light quark moving in the color field of a static source. As in the meson case, matrix elements of non-local interpolating currents between the baryon state and the vacuum are determined by light-cone distribution amplitudes (LCDAs). The first inverse moment of the leading-twist $B$-meson distribution amplitude is a hadronic parameter needed for an accurate theoretical description of $B$-meson exclusive decays. It is quite natural that a similar moment of a doubly-heavy baryon is of importance in exclusive decays of doubly-heavy baryons. We obtain HQET sum rules for the first inverse moment based on correlation functions containing a nonlocal heavy-light operator of the doubly-heavy baryon and its local interpolating current. Numerical estimates of this moment are presented.
The applicability of hydrodynamics to the space-time evolution of hadronic matter produced in relativistic heavy-ion collisions is one of the outstanding issues. The hadronic matter may be produced directly in the hadronic phase, or may appear when an initially produced quark-gluon plasma phase reverts to hadronic matter through a phase transition. The Knudsen number ($Kn$) can be used as an indicator of the degree of thermalization in the system. In this study, we obtain the variation of $Kn$ to study the degree of thermalization in an excluded-volume hadron resonance gas model. $Kn$, along with other parameters such as the Reynolds number ($Re$) and the Mach number ($Ma$), gives insight into the nature of the flow in the system. The dependence of these dimensionless parameters on system size and baryonic chemical potential ($\mu_B$) is studied. The obtained values of the parameters ($Kn \ll 1$, $Ma \sim 1$ and $Re \gg 1$) indicate the occurrence of compressible inviscid flow at high temperatures close to the QCD phase-transition region ($T \sim 150$-$170$ MeV). The estimated degree of thermalization of the hadron gas is comparable across different system sizes, indicating the applicability of hydrodynamics in interpreting results from high-multiplicity pp to heavy-ion collisions.
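As a hedged illustration of how the three dimensionless numbers quoted above diagnose the flow regime, the sketch below classifies a system from $Kn$, $Ma$ and $Re$. All characteristic values are made-up placeholders, not the excluded-volume model results:

```python
# Illustrative sketch (not the authors' calculation): how Kn, Ma and Re
# jointly diagnose the flow regime discussed in the abstract.

def flow_regime(mean_free_path, system_size, flow_velocity, sound_speed,
                viscous_diffusion_length):
    """Return (Kn, Ma, Re, hydro_applicable) for the given scales."""
    kn = mean_free_path / system_size                 # Kn << 1 -> near-equilibrium
    ma = flow_velocity / sound_speed                  # Ma ~ 1  -> compressible
    re = system_size * flow_velocity / viscous_diffusion_length  # Re >> 1 -> inviscid
    hydro_ok = kn < 0.1 and re > 10
    return kn, ma, re, hydro_ok

# Placeholder numbers (lengths in fm, velocities in units of c), chosen only
# to land in the regime reported in the abstract: Kn << 1, Ma ~ 1, Re >> 1.
kn, ma, re, ok = flow_regime(mean_free_path=0.3, system_size=10.0,
                             flow_velocity=0.6, sound_speed=0.57,
                             viscous_diffusion_length=0.2)
print(f"Kn={kn:.2f}  Ma={ma:.2f}  Re={re:.0f}  hydro applicable: {ok}")
```

With these placeholder scales the function reports a near-equilibrium, compressible, inviscid flow, mirroring the qualitative conclusion of the abstract.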
Reference: R. Scaria, D. Sahu, C. R. Singh, R. Sahoo and J. Alam, [arXiv:2201.08096 [hep-ph]].
The rich physics program at the high-luminosity LHC (HL-LHC) requires all final-state particles to be reconstructed with good accuracy. However, it also poses the formidable challenge of dealing with very high pileup. Identification algorithms need to be upgraded along with the detectors to improve the overall event reconstruction in such a hostile collision environment. The new timing device in the proposed CMS detector at the HL-LHC allows for the construction of timing observables at the track level as well as at the jet level. This information, when given as input to deep neural networks, has the potential to improve the existing algorithms used for heavy-flavor (HF) jet tagging. In this poster, the latest developments in the studies of HF jet-tagging performance at the HL-LHC are presented.
A study is performed of the possible Bose-Einstein condensation (BEC) of pions in proton-proton (pp) collisions at $\sqrt{s}$ = 7 TeV at the Large Hadron Collider. For a clearer understanding, the results for pp systems are contrasted with systems produced in Pb-Pb collisions. We study the temperature and final-state multiplicity dependence of the number of particles in the pion condensate. A wide range of multiplicities is considered, covering hadronic and heavy-ion collisions, using experimental transverse-momentum spectra as inputs. We observe a clear dominance of the non-extensive parameter $q$, which measures the degree of non-equilibrium, in determining the critical temperature and the number of particles in the pion condensate.
Femtoscopy is a technique that measures the space-time characteristics of the particle-emitting source created in heavy-ion collisions using momentum correlations between two particles. In this report, the two-pion and two-kaon femtoscopic correlations for Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV are studied within the framework of (3+1)D viscous hydrodynamics combined with the THERMINATOR 2 code for statistical hadronization. The femtoscopic radii, i.e. the source sizes for pions and kaons, are estimated as a function of pair transverse momentum and centrality in all three pair directions. The radii decrease with pair transverse momentum and transverse mass for all centralities, signalling the presence of strong collectivity in the system. Moreover, an effective scaling of the radii with pair transverse mass is observed for both pions and kaons.
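As a hedged, one-dimensional sketch of the radius extraction described above (a Gaussian-source ansatz, not the actual 3D fit of the analysis), the correlation function $C(q) = 1 + \lambda\,e^{-q^2R^2}$ can be inverted for $R$ from its log-slope. The values of $\lambda$ and $R$ below are arbitrary:

```python
import math

# Toy 1D Gaussian femtoscopic correlation function C(q) = 1 + L*exp(-(qR)^2),
# with q in GeV/c and R in fm; hbar*c converts between the two unit systems.
# Parameters are illustrative, not fit results from the hydrodynamic model.

LAM, R_TRUE = 0.8, 5.0   # assumed intercept and source radius (fm)
HBARC = 0.1973           # GeV*fm

def correlation(q_gev):
    q_invfm = q_gev / HBARC
    return 1.0 + LAM * math.exp(-(q_invfm * R_TRUE) ** 2)

# Recover R from the log-slope between two sampled momentum points:
# ln[(C(q1)-1)/(C(q2)-1)] = (q2^2 - q1^2) R^2 / (hbar c)^2
q1, q2 = 0.01, 0.03
slope = math.log((correlation(q1) - 1.0) / (correlation(q2) - 1.0))
r_fit = math.sqrt(slope * HBARC**2 / (q2**2 - q1**2))
print(f"fitted R = {r_fit:.2f} fm")
```

The extracted radius reproduces the input by construction; in the real analysis $R$ is obtained per $k_{\rm T}$ and centrality bin in the out, side and long directions.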
The implementation of a web portal dedicated to Higgs boson research is presented. A database is created with more than 1000 relevant articles using CERN Document Server API and web scraping methods. The database is automatically updated when new results on the Higgs boson become available. Using natural language processing, the articles are categorised according to properties of the Higgs boson and other criteria. The process of designing and implementing the Higgs Boson Portal (HBP) is described in detail. The components of the HBP are deployed to CERN Web Services using the OpenShift cloud platform. The web portal is operational and freely accessible on http://cern.ch/higgs.
In preparation for LHC Run 3, ATLAS completed a major effort to improve the track reconstruction performance for prompt and long-lived particles. Resource consumption was halved while expanding the charged-particle reconstruction capacity. Large-radius track (LRT) reconstruction, targeting long-lived particles (LLP), was optimized to run in all events expanding the potential phase-space of LLP searches. The detector alignment precision was improved to avoid limiting factors for precision measurements of Standard Model processes. Mixture density networks and simulating radiation damage effects improved the position estimate of charged particles overlapping in the ATLAS pixel detector, bolstering downstream algorithms' performance. The ACTS package was integrated into the ATLAS software suite and is responsible for primary vertex reconstruction. The talk will highlight the above achievements and report on the readiness of the ATLAS detector for Run 3 collisions.
We estimate the in-medium properties of the axion, i.e., its mass and self-coupling, within a three-flavor Polyakov-loop-extended Nambu–Jona-Lasinio (PNJL) model with the Kobayashi-Maskawa-'t Hooft determinant interaction. We also estimate the topological susceptibility of the strong interaction within the same model. It is observed that (statistical) confinement effects simulated by the Polyakov-loop potential play an important role in the estimation of all these quantities, particularly near the critical temperature. Both the mass and the self-coupling of the axion are correlated with the chiral and deconfinement transitions. The results obtained within the PNJL model are compared with chiral perturbation theory, the Nambu–Jona-Lasinio (NJL) model and lattice QCD simulations wherever available. Results for the properties of axions at finite baryon densities are also presented.
Despite modern particle physics being an international endeavour, the vast majority of its educational material is published only in English. By making material available in other languages, physicists can make inroads with new audiences – especially the very young or very old – in their home countries. The ATLAS Collaboration has published colouring books, a teaching guide, activity sheets, fact sheets and cheat sheets aimed at communicating science to a non-expert audience. An effort is underway to translate this content into as many languages as possible, taking advantage of the countless multilingual members of the collaboration. Currently all of this content is available in at least two languages other than English, with the ATLAS Colouring Book available in the most languages (19 so far). The reach of this multilingual content is presented.
Non-central heavy-ion collisions at ultra-relativistic energies generate the strongest magnetic fields ever produced in the laboratory. The fields produced at the early stages of the collision could affect the properties of the Quantum Chromodynamics (QCD) matter formed in heavy-ion collisions. Moreover, this transient magnetic field can also affect the thermodynamic and transport properties of the final-state dynamics of the system. In this work, we investigate thermodynamic properties such as the energy density, entropy density, pressure, and speed of sound of a hadron gas in the presence of an external static magnetic field using thermodynamically consistent non-extensive Tsallis statistics. Further, the magnetization of such a system is also studied. This analysis reveals an interplay between the diamagnetic and paramagnetic nature of the system in the presence of an external magnetic field of varying strength for non-central heavy-ion collisions as one goes from RHIC to LHC energies.
The search for the QCD critical point (CP) and the study of the quark-hadron phase transition at finite baryon density and high temperature are central tasks of contemporary relativistic heavy-ion collision experiments. Fluctuation analysis with global and local measures is the basic tool to achieve this goal. Local density fluctuations are directly related to critical behaviour in QCD, and these fluctuations in phase space are expected to scale according to a universal power law in the vicinity of the critical point. A search for such power-law fluctuations within the framework of the intermittency method is ongoing to locate the critical point of strongly interacting matter. This method probes the behaviour of these fluctuations through the measurement of normalized factorial moments (NFMs) in ($\eta$, $\phi$) phase space. Observations and results from the intermittency analysis performed for charged hadrons generated in Pb+Pb collisions at two different energies using PYTHIA8/Angantyr will be presented, for centrality as well as transverse-momentum bin-width dependence. A comparison with published EPOS3 results at 2.76 TeV is also made.
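The normalized factorial moment mentioned above can be sketched in a few lines. This is a hedged toy version of the observable only — the sample below is generated with independent binomial bin counts, not PYTHIA8/Angantyr events:

```python
import random

# Toy sketch of the second normalized factorial moment F_2 in M bins of
# (eta, phi) phase space, following the standard definition
# F_2 = M * <sum_m n_m(n_m - 1)> / <sum_m n_m>^2.

random.seed(1)

def f2(events, n_bins):
    """Second NFM averaged over events; each event is a list of bin counts."""
    num = sum(sum(n * (n - 1) for n in ev) for ev in events) / len(events)
    den = (sum(sum(ev) for ev in events) / len(events)) ** 2
    return n_bins * num / den

# Toy sample: 2000 events with 64 bins of independent counts. Uncorrelated
# emission gives F_2 near 1 (binomial counts land slightly below); genuine
# intermittency would make F_2 grow as a power law with the number of bins.
M = 64
toy = [[sum(random.random() < 0.1 for _ in range(30)) for _ in range(M)]
       for _ in range(2000)]
val = f2(toy, M)
print(f"F2 = {val:.3f}")
```

In the analysis itself the CP search looks for a power-law rise of $F_q(M)$ with the number of phase-space bins, which this uncorrelated toy by construction does not show.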
I introduce quantum mechanics on an intrinsic configuration space for baryons, the Lie group U(3), which carries the three gauge groups of the Standard Model of particle physics as subgroups SU(3), SU(2) and U(1). The strong and electroweak interactions become related via the Higgs mechanism. In particular, I set the electroweak energy scale via the neutron-to-proton decay, where both sectors are involved through quark flavour changes. Predictions of neutral pentaquark resonances reachable at LHCb follow in the baryon sector, as does an accurate expression in the electroweak sector for the Higgs mass (yielding 125.095(14) GeV), together with predictions for the couplings of the Higgs to itself and to the gauge bosons, with signal strengths deviating through the presence of the up-down quark mixing matrix element. The intrinsic view means that quantum fields are generated by the momentum form on intrinsic wavefunctions, and local gauge transformations in laboratory space equate to translations in the intrinsic configuration space, which may be likened to a generalised spin space. Further insight is gained into the Cabibbo and Weinberg angles, expressed in traces of u- and d-flavour quark generators.
Key references:
EPL 102 (2013) 42002, Int. J. Mod. Phys. A 30 (2015) 1550078, EPL 124 (2018) 31001, EPL 125 (2019) 41001, EPL 133 (2021) 31001. See also arXiv:2007.02936.
We investigate the prospects of searching for new physics via the novel signature of same-sign diboson + ${E\!\!/}_{T}$ at the current and future LHC. We study three new-physics models: (i) natural SUSY models, (ii) the type-III seesaw model and (iii) the type-II seesaw/Georgi-Machacek model. In the first two classes of models, this signature arises due to the presence of a singly charged particle with a lifetime long enough to escape detection, while in the third model it originates resonantly from a doubly charged particle produced along with two forward jets that, most likely, would escape detection. We analyze in detail the discovery prospects of the signal in these three classes of models in the current as well as upcoming runs of the LHC (such as the HL-LHC, HE-LHC and FCC-hh), showing how to distinguish among these scenarios.
We perform a sensitivity study of an unbinned angular analysis of the $B\to D^*(D\pi)\ell\nu_\ell$ decay, including contributions from the right-handed vector current. We show that the angular observables can very strongly constrain the right-handed vector current, independently of the as-yet-unresolved $V_{cb}$ puzzle.
In this work, we have modified a scenario originally proposed by Grimus and Lavoura in order to obtain maximal values of the atmospheric mixing angle and the $CP$-violating Dirac phase of the lepton sector. To achieve this, we employ $CP$ and some discrete symmetries in a type-II seesaw model. In order to make predictions about the neutrino mass ordering and the smallness of the reactor angle, we obtain conditions on the elements of the neutrino mass matrix of our model. Finally, within the framework of our model, we study quark masses and the mixing pattern.
A significant challenge in the tagging of boosted objects via machine-learning technology is the prohibitive computational cost associated with training sophisticated models. Nevertheless, the universality of QCD suggests that a large amount of the information learnt in the training is common to different physical signals and experimental setups. In this article, we explore the use of transfer learning techniques to develop fast and data-efficient jet taggers that leverage such universality. We consider the graph neural networks LundNet and ParticleNet, and introduce two prescriptions to transfer an existing tagger into a new signal based either on fine-tuning all the weights of a model or alternatively on freezing a fraction of them. In the case of W-boson and top-quark tagging, we find that one can obtain reliable taggers using an order of magnitude less data with a corresponding speed-up of the training process. Moreover, while keeping the size of the training data set fixed, we observe a speed-up of the training by up to a factor of three. This offers a promising avenue to facilitate the use of such tools in collider physics experiments.
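The two transfer prescriptions described above (fine-tuning all weights vs. freezing a fraction of them) can be sketched on a toy model. This is a hedged illustration using plain Python lists as stand-ins for network layers — not the actual LundNet/ParticleNet graph networks or their training code:

```python
import random

# Toy illustration of the two transfer-learning prescriptions: fine-tune
# every layer, or freeze the lower (shared-QCD-feature) layers and retrain
# only the head on the new signal.

random.seed(0)

def sgd_step(params, grads, frozen, lr=0.1):
    """Apply one gradient step, skipping layers flagged as frozen."""
    return [p if is_frozen else [w - lr * g for w, g in zip(p, gr)]
            for p, gr, is_frozen in zip(params, grads, frozen)]

# "Pretrained" weights for two layers (e.g. learnt on W-boson tagging).
layer1 = [random.gauss(0, 1) for _ in range(8)]
layer2 = [random.gauss(0, 1) for _ in range(4)]
grads = [[1.0] * 8, [1.0] * 4]   # dummy gradients from the new (top) sample

# Prescription A: fine-tune everything on the new signal.
tuned = sgd_step([layer1, layer2], grads, frozen=[False, False])

# Prescription B: freeze the lower layer, retrain only the head.
partial = sgd_step([layer1, layer2], grads, frozen=[True, False])

assert partial[0] == layer1        # frozen layer keeps pretrained weights
assert partial[1] != layer2        # head is updated
print("freeze-fraction transfer: lower layer kept, head retrained")
```

Freezing reduces both the number of trainable parameters and the gradient computation per step, which is the mechanism behind the training speed-up reported in the abstract.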
Light-nuclei production is a hot research topic in heavy-ion collisions at the RHIC beam energy scan. The observed non-monotonic behavior as a function of colliding energy [1,2] has been suggested to be related to the critical point of the QCD phase diagram [3,4]. In this talk, we focus on investigating light-nuclei production with and without critical fluctuations within the framework of the coalescence model.
In the first part [5], we derive the yield of light nuclei in terms of various orders of cumulants of the density distribution function by implementing the characteristic function of the phase-space density, without considering critical fluctuations. We find that the leading terms of the phase-space cumulants in the light-nuclei yields share a similar form and cancel out in light-nuclei yield ratios, whereas the higher-order ones (from a non-Gaussian density profile) remain and play an important role in interpreting the behavior of the light-nuclei yield ratio.
In the second part [6], we introduce the static critical-correlation contribution to the phase-space density and derive the light-nuclei production in terms of phase-space cumulants. Because the leading terms of the phase-space cumulants in the light-nuclei yields share a similar form, we can construct a new light-nuclei yield ratio that is directly proportional to the critical contribution. By mapping the equation of state from the three-dimensional Ising model, the new light-nuclei yield ratio can describe the experimental measurements [1,2], supporting the existence of the QCD critical point and its effect on light-nuclei production.
[1] H. Liu, D. Zhang, S. He, K.-j. Sun, N. Yu, and X. Luo, Phys. Lett. B 805, 135452 (2020).
[2] D. Zhang (STAR), JPS Conf. Proc. 32, 010069 (2020).
[3] E. Shuryak and J. M. Torres-Rincon, Eur. Phys. J. A 56, 241 (2020).
[4] K.-j. Sun, F. Li, and C. M. Ko, Phys. Lett. B 816, 136258 (2021).
[5] S. Wu, K. Murase, S. Tang, and H. Song, in preparation.
[6] S. Wu, K. Murase, S. Zhao, and H. Song, in preparation.
We analyze NMSSM scenarios with a singlino LSP as dark matter. By systematically considering several NLSP compositions, we identify and classify regions of parameter space where the NLSP exhibits a long lifetime due to suppressed couplings and leads to a displaced-vertex signature at colliders. We furthermore construct viable production and decay processes at the HL-LHC to search for such displaced vertices. We illustrate a strategy to suppress the SM background with some benchmark scenarios for this type of signal.
Long-lived particles represent a well-motivated approach to searches for beyond-Standard-Model (SM) physics. An interesting scenario is one in which light vector mediators (dark photons), weakly coupled to the SM photon, are produced in an exotic decay of the SM Higgs boson and decay back to SM particles after travelling a macroscopic distance. This study presents a search for light, neutral long-lived particles decaying into collimated jet-like structures containing pairs of leptons or quarks (dark-photon jets, DPJs). The search is performed on $139\rm~fb^{-1}$ of pp collision data at $\sqrt{s}$ = 13 TeV collected during Run 2. Both gluon-gluon fusion (ggF) and associated production with a W boson are considered for Higgs production, and dark-photon decays are identified, among the overwhelming QCD and non-collision background, thanks to a selection involving dedicated triggers and deep-learning classifiers. The results are interpreted in the context of simplified long-lived-particle models such as the Hidden Abelian Higgs Model (HAHM) and the Falkowski-Ruderman-Volansky-Zupan (FRVZ) model.
We study the pair production of the long-lived mediator particles from the decay of the SM Higgs boson and their subsequent decay into standard model particles. We compute the projected sensitivity, both model-independently and with a minimal model, of using the muon spectrometer of the CMS detector at the HL-LHC experiment for ggF, VBF, and Vh production modes of the Higgs boson and various decay modes of the mediator particle, along with dedicated detectors for LLP searches like CODEX-b and MATHUSLA. Subsequently, we study the improvement with the FCC-hh detector at the 100 TeV collider experiment for such long-lived mediators, again focusing on the muon spectrometer. We propose dedicated LLP detector designs for the 100 TeV collider experiment, DELIGHT (\textbf{De}tector for \textbf{l}ong-l\textbf{i}ved particles at hi\textbf{gh} energy of 100 \textbf{T}eV), and study their sensitivities.
The CMS experiment is a general-purpose detector installed at the Large Hadron Collider. During the High-Luminosity LHC (HL-LHC) phase, the luminosity is expected to be up to ten times higher than in the current LHC operation regime. The forward region of the CMS muon system will be equipped with three additional triple-GEM-based muon stations. ME0, the innermost of these three stations, will be installed right behind the new endcap calorimeter and will be exposed to background particle fluxes of up to 150 kHz/cm$^2$. Recent R&D on rate capability and gain drop led to a change in the high-voltage segmentation direction of the original GEM foils: the second-generation prototype is segmented in the radial direction, in contrast to the previous horizontal segmentation of GE1/1 and GE2/1. This study mainly presents the characterization of the new ME0 prototype module, including the mechanical design of the second-generation prototype, segmentation simulation results, the assembly process, gas tightness, HV stability tests, the energy spectrum, the effective gain, and the response uniformity. The fully characterized module will be installed in the GIF++ facility for high-background irradiation tests and beam-test studies; its initial experimental setup is also presented.
A measurement of the inclusive jet production in proton-proton collisions at the LHC at $\sqrt{s}=13$ TeV is presented. The double-differential cross sections are measured as a function of the jet transverse momentum $p_\mathrm{T}$ and the absolute jet rapidity $|y|$. The anti-$k_\mathrm{T}$ clustering algorithm is used with a distance parameter of 0.4 (0.7) in a phase space region with jet $p_\mathrm{T}$ from 97 GeV up to 3.1 TeV and $|y|<2.0$. Data collected with the CMS detector are used, corresponding to an integrated luminosity of 36.3 fb$^{-1}$ (33.5 fb$^{-1}$). The measurement is used in a comprehensive QCD analysis at next-to-next-to-leading order, which results in significant improvement in the accuracy of the parton distributions in the proton. Simultaneously, the value of the strong coupling constant at the Z boson mass is extracted as $\alpha_S(m_\mathrm{Z})= 0.1170 \pm 0.0019$. For the first time, these data are used in a standard model effective field theory analysis at next-to-leading order, where parton distributions and the QCD parameters are extracted simultaneously with imposed constraints on the Wilson coefficient $c_1$ of 4-quark contact interactions.
Two-particle normalized cumulants of particle number correlations ($R_{2}$) and transverse momentum correlations ($P_{2}$), measured as a function of relative pseudorapidity and azimuthal angle difference $(\Delta\eta, \Delta\varphi)$, provide key information about the particle production mechanism, diffusivity, and charge and momentum conservation in high-energy collisions. To complement the recent ALICE measurements in Pb--Pb collisions, as well as for a better understanding of the jet contribution and the nature of collectivity in small systems, we measure these observables in pp collisions at $\sqrt{s}$ = 13 TeV in a similar kinematic range, 0.2 $<$ $p_{\rm T}$ $\leq$ 2.0 $\rm{GeV}/c$. The near-side and away-side correlation structures of $R_{2}$ and $P_{2}$ are qualitatively similar, but differ quantitatively. Additionally, a significantly narrower near-side peak is observed for $P_{2}$ as compared to $R_{2}$ for both charge-independent and charge-dependent combinations, as in the recently published ALICE results in p--Pb and Pb--Pb collisions. Being sensitive to the interplay between the underlying event and mini-jets in pp collisions, these results not only establish a baseline for heavy-ion collisions but also allow a better understanding of signals that resemble collective effects in small systems.
Differential cross sections for top quark pair ($t\bar{t}$) production are measured in proton-proton collisions at a centre-of-mass energy of 13 TeV using a sample of events containing two oppositely charged leptons. The data were recorded with the CMS detector at the LHC and correspond to an integrated luminosity of 138 fb$^{-1}$. Differential cross sections are measured as functions of kinematic observables of the $t\bar{t}$ system, the top quark and antiquark and their decay products, and the number of additional jets in the event not originating from the $t\bar{t}$ decay. These cross sections are measured as functions of one, two, or three variables and are presented at the parton and particle levels. The measurements are compared to standard model predictions of Monte Carlo event generators with next-to-leading-order accuracy in quantum chromodynamics (QCD) at matrix-element level interfaced to parton showers. Some of the measurements are also confronted with predictions beyond next-to-leading-order precision in QCD. The nominal predictions from all calculations, neglecting theoretical uncertainties, do not describe well several of the measured cross sections, and the deviations are found to be largest for the multi-differential cross sections.
Heavy-quark production in nuclear collisions is an important tool to access the properties and evolution of a deconfined state of nuclear matter known as the quark-gluon plasma. Studies of these probes in pp collisions, besides serving as a reference process, represent a powerful tool for testing various aspects of QCD. An analysis technique little explored until now at LHC energies is the analysis of the high-mass region of the dilepton invariant mass spectrum ($m_{\mu\mu}>m_{J/\psi}$), which is significantly populated by semileptonic decays of hadron pairs containing charm and beauty quarks.
In the context of this analysis, studies based on Monte Carlo event generators, such as PYTHIA8, are crucial to reproduce the invariant mass and transverse momentum spectra of muon pairs from heavy-flavour decays, separately for charm and beauty hadrons.
In addition, dedicated Monte Carlo simulations are important to study the contribution from the semileptonic decays of light-flavour hadrons, in particular $\pi$ and $K$, which represent the main background source of this analysis.
The goal of the analysis presented in this poster is a first comparison between the mass spectrum measured by ALICE in pp collisions at $\sqrt{s}$ = 13 TeV, based on an integrated luminosity $\mathcal{L}_{int}$ of $\sim 25$ pb$^{-1}$, and the prediction of the PYTHIA8 calculations. The study will be carried out in the rapidity region 2.5 < $y$ < 4.0, which corresponds to the coverage of the ALICE muon spectrometer.
In further detail, this poster presents the status of the analysis regarding the simulation chain and the prospects for the extraction of the heavy-quark pair cross sections. In particular, I will discuss the fitting technique necessary to disentangle the contributions coming from charm, beauty, and light-flavour hadrons.
Electrons constitute an essential component of final states in the leptonic decay channels of W and Z bosons. Their reconstruction and identification are especially challenging in heavy-ion collisions due to the high detector occupancy. Therefore, the evaluation of electron performance is crucial for precision measurements of the properties of the quark-gluon plasma produced in heavy-ion collisions at LHC energies. The poster will present the measurement of electron reconstruction, identification, isolation, and trigger efficiencies in proton-lead collisions collected at 8.16 TeV in 2016. The tag-and-probe method will be used to derive electron efficiencies in data and MC simulation independently, and the results will be compared.
The identification of jets containing b-hadrons, b-tagging, plays an important role in many physics analyses in ATLAS. Several different machine-learning algorithms have been deployed for b-tagging. These tagging algorithms are trained on Monte Carlo simulation samples, and as such their performance in data must be measured. The b-tagging efficiency ($\varepsilon_b$) has been measured in data using $t\bar{t}$ events in the past; this work presents, for the first time, measurements in multijet events using data collected by the ATLAS detector at $\sqrt{s}=13\rm~TeV$. This offers several key advantages over the $t\bar{t}$-based calibrations, including a higher precision at low jet $p_T$ and the ability to measure $\varepsilon_b$ at significantly higher jet $p_T$. Two approaches are applied, and for both a profile-likelihood fit is performed to extract the number of b-jets in samples passing and failing a given b-tagging requirement. The b-jet yields are then used to determine $\varepsilon_b$ in data and, from that, scale factors to the efficiency measured in MC. The two approaches differ primarily in the discriminating variable used in the fit: at low jet $p_T$ the variable $p_{\mathrm{T}}^{\mathrm{rel}}$ is used, while at high jet $p_T$ the signed impact-parameter significance is used. Both calibrations give measurements of the scale factors as a function of jet $p_T$.
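The final arithmetic of such a calibration — turning fitted pass/fail b-jet yields into an efficiency and a data/MC scale factor — can be sketched as follows. The yields below are made-up numbers for illustration, not ATLAS results, and the simple binomial uncertainty stands in for the full profile-likelihood treatment:

```python
import math

# Hedged sketch: efficiency and scale factor from pass/fail b-jet yields.
# Yields are illustrative placeholders, not fit results.

def efficiency(n_pass, n_fail):
    """Tagging efficiency with a simple binomial uncertainty."""
    n = n_pass + n_fail
    eff = n_pass / n
    err = math.sqrt(eff * (1.0 - eff) / n)
    return eff, err

# Fitted b-jet yields passing/failing the tagging requirement in data,
# and the corresponding truth-labelled yields in MC (both assumed values).
eff_data, err_data = efficiency(n_pass=7000, n_fail=3000)
eff_mc, _ = efficiency(n_pass=7300, n_fail=2700)

sf = eff_data / eff_mc   # data-to-MC scale factor applied in analyses
print(f"eps_b(data) = {eff_data:.3f} +- {err_data:.3f}, SF = {sf:.3f}")
```

In the actual measurement this is repeated per jet-$p_T$ bin and per tagger working point, with the yields and their uncertainties coming from the profile-likelihood fit rather than raw counts.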
The LHC forward (LHCf) experiment is specifically designed to measure neutral-particle production spectra in the forward region, providing high-energy data for the tuning of the hadronic-interaction models used by ground-based cosmic-ray experiments. This is made possible by the excellent performance of the experimental apparatus, composed of two sampling calorimeters, Arm1 and Arm2, located about $\pm 140$ m from LHC interaction point 1 (IP1) at zero-degree angle.
In this talk we present the data-analysis strategy and preliminary results for the measurement of the $\eta$-meson differential cross section as a function of the Feynman variable $x_F$, measured in p-p collisions at $\sqrt{s}=13$ TeV with the Arm2 detector and compared with the predictions of four widely used hadronic-interaction models. The importance of this observation lies in the fact that the strange-quark contribution is one of the parameters characterizing the different models; differences in this parameter induce a large discrepancy in the expected $\eta$ production cross section.
A measurement of the top-quark pole mass in events where the top quark-antiquark pair is produced in association with one additional jet is presented. This analysis uses proton-proton collision data at 13 TeV collected by the CMS experiment at the CERN LHC in 2016, corresponding to a total integrated luminosity of 36.3 fb$^{-1}$. Events with two opposite-charge leptons in the final state (ee, $\mu\mu$, e$\mu$) are analyzed. Using multivariate analysis techniques based on machine learning, the reconstruction of the main observable and the event selection are optimized. The production cross section is measured as a function of the invariant mass of the $\text{t}\overline{\text{t}}$+jet system at parton level, using a maximum-likelihood unfolding. The top-quark pole mass is then obtained from a chi-squared fit of next-to-leading-order theory predictions to the data.
Quarkonia are bound states of heavy quark--antiquark pairs. Due to the large mass of heavy quarks, their production takes place at hard scales of QCD, while the formation of the bound states involves soft QCD scales. Quarkonia are therefore sensitive to both perturbative and non-perturbative aspects of QCD.
In addition, their measurement in p--Pb collisions provides information on cold nuclear matter effects, such as nuclear shadowing or interaction with comoving particles.
Recent measurements reveal that $J/\psi$ yields increase with charged-particle multiplicity in pp and p--Pb collisions at the LHC. Different mechanisms have been proposed to explain this observation; one of them is the influence of multiple parton interactions in the initial state of the collision. Measurements of the excited charmonium states, e.g. $\psi(2S)$, as a function of charged-particle multiplicity are essential to disentangle the impact of possible final-state effects.
This poster presents the measurement of charmonium yields in pp collisions at $\sqrt{s}$ = 13 TeV and p--Pb collisions at $\sqrt{s_{\rm NN}}$ = 8.16 TeV as a function of charged-particle multiplicity, measured at central rapidity ($|\eta|<1.0$). $J/\psi$ and $\psi(2S)$ are reconstructed in their dimuon decays within $2.5 < y < 4$.
We present a prospect study of di-Higgs production in the $HH \to b\bar{b}\gamma\gamma$ decay channel with the ATLAS experiment at the High Luminosity LHC (HL-LHC). The results are obtained by extrapolating the Run 2 measurement, based on 139 fb$^{-1}$ of data at a center-of-mass energy of 13 TeV, to the conditions expected at the HL-LHC. While there is no sign of di-Higgs production in the current LHC dataset, the much higher luminosity (3000 fb$^{-1}$) and energy (14 TeV) at the HL-LHC will enable a much better measurement of this important process. We describe the extrapolation procedure and its assumptions in detail, and consider multiple scenarios for the treatment of systematic uncertainties at the HL-LHC. Under the baseline systematic uncertainty scenario, the extrapolated precision on the Standard Model di-Higgs signal strength measurement is 50%, corresponding to a significance of 2.2 sigma. The extrapolated 1 sigma confidence interval from a measurement of $\kappa_{\lambda}$, the trilinear Higgs boson self-coupling modifier, is [0.3, 1.9].
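The statistical part of such a luminosity extrapolation can be sketched as follows. All numbers are hypothetical; the actual study also re-evaluates systematic uncertainties and detector conditions rather than applying a pure luminosity scaling.

```python
import math

# Toy sketch of the statistical part of a luminosity extrapolation
# (all numbers hypothetical): purely statistical uncertainties shrink
# like 1/sqrt(L_new / L_old) when a Run-2 result is projected to HL-LHC.
L_run2 = 139.0     # fb^-1, Run-2 integrated luminosity
L_hllhc = 3000.0   # fb^-1, expected HL-LHC integrated luminosity
stat_run2 = 4.6    # hypothetical Run-2 statistical uncertainty on the signal strength

scale = math.sqrt(L_hllhc / L_run2)
stat_hllhc = stat_run2 / scale   # projected statistical uncertainty
```

This simple scaling is why the HL-LHC dataset, roughly twenty times larger than Run 2, turns an unobservable process into a 2-sigma-level measurement; the treatment of systematics then decides how much of that statistical gain survives.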
Quantum Chromodynamics predicts the existence of dense and hot nuclear matter described as a deconfined medium of quarks and gluons, known as the quark-gluon plasma (QGP). High energy densities and temperatures can be reached by colliding heavy ions at ultra-relativistic energies, enabling the study of the QGP in the laboratory. The ALICE detector at the LHC was designed to study the properties of such a deconfined medium. Quarkonia are sensitive probes of the QGP: in particular, the study of their production in Pb–Pb collisions normalized to that in pp collisions at the same energy and scaled by the number of nucleon-nucleon collisions, known as the nuclear modification factor, can shed light on the properties of the QGP. In the presence of a QGP medium, the charmonium yield would be suppressed due to color Debye screening and dissociation. Due to its larger size and weaker binding energy, the ψ(2S) is expected to be more suppressed in the medium than the J/ψ. However, the magnitude of the J/ψ suppression at LHC energies is smaller than that observed at the lower energies of the SPS and RHIC, thereby indicating that charmonium (re)generation via the (re)combination of charm and anticharm quarks, happening either in the medium or at the phase boundary, plays an important role at LHC energies. The ψ(2S) production relative to the J/ψ represents one possible discriminator between the two different regeneration scenarios. Due to its smaller production cross section and branching ratio to the dilepton decay channel, the ψ(2S) measurement is more challenging than the J/ψ one. The combined Run 2 data sets of ALICE allow one to extract the ψ(2S) signal over the full centrality range in Pb–Pb collisions at √s_NN = 5.02 TeV at forward rapidity with the muon spectrometer.
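The nuclear modification factor described in words above is conventionally written as the Pb–Pb yield divided by the pp yield scaled by the average number of binary nucleon-nucleon collisions:

```latex
R_{\mathrm{AA}}(p_{\mathrm{T}}) \;=\;
\frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
\frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}p_{\mathrm{T}}}
     {\mathrm{d}N_{pp}/\mathrm{d}p_{\mathrm{T}}}
```

With this normalization, $R_{\mathrm{AA}} = 1$ corresponds to no medium effect, while suppression (e.g. from Debye screening) gives $R_{\mathrm{AA}} < 1$.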
In this poster, we report the ψ(2S) nuclear modification factor and ψ(2S)-to-J/ψ double ratio in Pb–Pb collisions at √s_NN = 5.02 TeV as a function of centrality and transverse momentum, using a new pp reference measured at the same energy with improved precision. All the measurements are compared with theoretical predictions.
In this work, we introduce both gluon and quark degrees of freedom for describing the partonic cascades inside the medium. We present numerical solutions of the set of coupled evolution equations, with splitting kernels calculated for static, exponential, and Bjorken-expanding media, to arrive at medium-modified parton spectra for quark- and gluon-initiated jets. We discuss novel scaling features of the partonic spectra between the different types of media. Next, we study the inclusive jet $R_{AA}$ by including phenomenologically driven combinations of quark and gluon fractions inside a jet. In addition, we study the effect of nPDFs as well as vacuum-like emissions on the jet $R_{AA}$. Differences among the estimated values of the quenching parameter for different types of medium expansion are noted. We then study in detail the impact of the expansion of the medium on the rapidity dependence of the jet $R_{AA}$ as well as the jet $v_2$. Finally, we present qualitative results comparing the sensitivity of these observables to the time of the onset of quenching for the Bjorken profile. All calculated quantities are compared with recent ATLAS data.
Measurements of jet fragmentation and jet properties in pp collisions provide a test of perturbative quantum chromodynamics (pQCD) and form a baseline for similar measurements in heavy ion (A-A) collisions. In addition, jet measurements in p-A collisions are sensitive to cold nuclear matter effects. Recent studies of high-multiplicity final states of small collision systems exhibit signatures of collective effects that could be associated with hot and dense, color-deconfined QCD matter, which is known to be formed in collisions of heavier nuclei. The modification of the jet fragmentation pattern and jet properties is expected in the presence of such QCD matter. Measurements of jet fragmentation patterns and other jet properties in p-A collisions are needed in order to establish whether deconfined QCD matter is indeed generated in such small systems. In this contribution we report recent ALICE measurements of charged-particle jet properties, including mean charged-constituent multiplicity and fragmentation distribution for leading jets, in minimum bias p-Pb collisions at $\sqrt{s}$ = 5.02 TeV and minimum bias pp collisions at $\sqrt{s}$ = 13 TeV. In addition, the multiplicity dependence of these jet properties in pp collisions at $\sqrt{s}$ = 13 TeV will also be presented. Results will be compared with theoretical model predictions.
Hadronic resonances are effective tools for studying the hadronic phase in ultra-relativistic heavy-ion collisions. Their lifetimes are comparable to that of the hadronic phase, making resonances sensitive to hadronic-phase effects such as rescattering and regeneration, which might affect the resonance yields and the shape of the transverse momentum spectra. The $\Lambda(1520)$ has a lifetime of around 13 fm/$\it{c}$, which lies between the lifetimes of the $K^*$ and $\phi$ resonances. The ratios of resonance to stable-particle yields can be used to study the properties of the hadronic phase. Recently, ALICE observed a suppression of the $\Lambda(1520)/\Lambda$ ratio in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV as a function of centrality. It is therefore interesting to perform a multiplicity-dependent study of the $\Lambda(1520)/\Lambda$ ratio in pp collisions, since this can serve as a baseline for heavy-ion collisions.
In this contribution, we present new results on the measurement of the baryonic resonance $\Lambda(1520)$ as a function of the charged-particle multiplicity in pp collisions at $\sqrt{s}$ = 5.02 and 13 TeV. The transverse momentum spectrum, the integrated yield $(\rm d \it N/ \rm d \it y )$, the mean transverse momentum $(\langle p_{\rm{T}}\rangle)$ and the $ \Lambda(1520)/\Lambda$ yield ratio will be presented as a function of the charged-particle multiplicity.
The ATLAS and CMS experiments have an ambitious search program for charged Higgs bosons. The two main searches for $H^\pm$ at the LHC have traditionally been performed in the $\tau \nu$ and $t b$ decay channels, as they probe complementary regions of the Minimal Supersymmetric Standard Model (MSSM) parameter space. Charged Higgs bosons may also decay to light quarks, $H^\pm \to cs/cb$, which represents an additional probe of the mass range below $m_t$. In this work, we focus on $H^\pm \to \mu \nu$ as an alternative channel in the context of the two Higgs doublet model type III. We explore the prospects of searching for the $pp\to tb H^\pm$, $H^\pm\to\mu \nu$ signal at the LHC. Such a scenario appears in 2HDM type III, where the coupling of the charged Higgs to $\mu\nu$ is enhanced. Almost all relevant experimental searches involving the production and decay of the charged Higgs are taken into account. We show that for such a scenario the above signal is dominant over most of the parameter space, and that $H^\pm\to \mu\nu$ can be an excellent complementary search.
The identification of jets containing b-hadrons (b-jets) is essential to many aspects of the ATLAS physics programme. Multivariate algorithms responsible for establishing the jet's flavour are developed by the ATLAS Collaboration, exploiting the distinct properties and correlations of charged-particle tracks within the jet and reconstructed secondary vertices. The higher pileup conditions and the growing interest in searches in the high transverse momentum regime necessitate the development of improved algorithms using state-of-the-art machine learning techniques. Recent developments in track-based tagging introduced the Deep Impact Parameter Sets (DIPS) tagger, a neural network based on the Deep Sets architecture. It exploits the permutation invariance of track features in the network training and makes use of correlations among the tracks. Consequently, an improved performance in the identification of b-jets compared to established approaches is observed. The performance of the novel DIPS tagger is evaluated using simulated data. This poster reviews the current state of the art of jet flavour tagging algorithms used by the ATLAS Collaboration.
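The permutation invariance exploited by a Deep Sets network comes from pooling a per-track embedding with a symmetric operation (a sum) before the final classifier. The following is a minimal NumPy sketch with random, untrained weights; the names, layer sizes, and score function are illustrative only and do not reproduce the actual DIPS architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer weights: phi maps each track's features to a latent
# vector; rho maps the pooled latent vector to a single b-jet score.
W_phi = rng.normal(size=(4, 8))   # 4 track features -> 8 latent dimensions
w_rho = rng.normal(size=8)        # 8 latent dimensions -> 1 score

def deep_sets_score(tracks):
    """tracks: (n_tracks, 4) array of per-track features."""
    latent = relu(tracks @ W_phi)    # apply phi to every track independently
    pooled = latent.sum(axis=0)      # symmetric (permutation-invariant) pooling
    return 1.0 / (1.0 + np.exp(-(pooled @ w_rho)))  # rho + sigmoid

tracks = rng.normal(size=(5, 4))
shuffled = tracks[rng.permutation(5)]
# The score is identical for any ordering of the tracks, so no arbitrary
# track ordering or truncation scheme has to be imposed on the input.
assert np.isclose(deep_sets_score(tracks), deep_sets_score(shuffled))
```

Because the sum is over tracks, the same trained weights also handle jets with any number of associated tracks.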
PYTHIA8 simulates a number of physics aspects by implementing several models alongside theory; these models have many free parameters that need to be tuned for the best description of data. In this study, we use PYTHIA8.2 to simulate multiparton interactions using different PDF sets from LHAPDF6. Altogether, five parameters were selected for the final tune, depending on their sensitivity to the selected observables at 13 TeV published by the ATLAS Collaboration. Simulated analysis data are obtained using the Rivet analysis toolkit. These tunes are substantial improvements on existing standard choices and describe the selected data reasonably well. Tuning results are also compared with the default tunes in PYTHIA8.2.
No analysis in ATLAS or CMS has so far searched for FCNC decays of top quarks into a new scalar (X) over a broad mass range probing branching ratios below $10^{-3}$. In the case of the Higgs boson, the branching ratios of $t\to Hu/c$ are predicted within the SM to be of order $O(10^{-17})/O(10^{-15})$. Several beyond-SM theoretical models predict new particles and enhanced branching ratios. In particular, simple SM extensions involve the Froggatt-Nielsen mechanism, which introduces a scalar field with flavour charge, the so-called flavon, featuring flavour-violating interactions. Using the full Run 2 data, ATLAS has performed a search for a scalar with a mass in the range between 20 and 160 GeV, decaying into a pair of bottom quarks. In order to distinguish signal from background, a feed-forward neural network that uses kinematic variables together with various invariant masses of pairs of b-jets is used in the fits for the various mass hypotheses. The method, strategy, and preliminary results for both FCNC decays $t\to cX$ and $t\to uX$ will be presented.
Heavy quarks are considered excellent probes to study the properties of the state of matter where quarks and gluons are deconfined, known as quark-gluon plasma (QGP). The QGP is expected to be formed in ultrarelativistic nuclear collisions. Non-prompt J/$\psi$ measurements are important to investigate the parton energy loss in the hot medium and its quark mass dependence, as they provide additional constraints to extract heavy-quark diffusion coefficients from experimental data. In addition, the prompt J/$\psi$ production provides a direct comparison with models that include (re-)generation, which is found to be the dominant production mechanism at low transverse momentum ($p_{\rm T}$) and in central collisions at the LHC. ALICE has unique tracking and particle identification capabilities down to very low momentum at midrapidity ($|y|<$ 0.9), enabling the separation of prompt and non-prompt J/$\psi$ down to $p_{\rm T}$ $\sim$ 1.5 GeV/$c$ in Pb$-$Pb collisions. In this contribution, recent ALICE results on nuclear modification factors ($R_{\rm AA}$) of prompt and non-prompt J/$\psi$, reconstructed at midrapidity in the dielectron decay channel, as a function of $p_{\rm T}$ and centrality will be presented and compared with theoretical predictions. Presented results are obtained by analyzing data from Pb$-$Pb collisions collected at $\sqrt{s_{\rm NN}}$ = 5.02 TeV during the LHC Run 2. Moreover, results will be compared with similar LHC measurements, available at higher $p_{\rm T}$.
We present two modules as part of the Czech Particle Physics Project (CPPP). These are intended as learning tools in masterclasses aimed at high-school students (aged 15 to 18). The first module is dedicated to the detection of an Axion-Like-Particle (ALP) using the ATLAS Forward Proton (AFP) detector. The second module focuses on the reconstruction of the Higgs boson mass using the Higgs boson golden channel with four leptons in the final state. The modules can be accessed at the following link: http://cern.ch/cppp.
Proton decay is a baryon-number-violating process and hence is forbidden in the Standard Model (SM). Baryon number violation is expected to be an important criterion for explaining the matter-antimatter asymmetry of the universe. Any detection of proton decay would serve as direct evidence of physics beyond the SM. In SMEFT, proton decay is possible via baryon-number-violating dimension-six operators.
In this work, we consider the proton decay to a positron and a photon, which is expected to be an experimentally cleaner channel because of reduced nuclear absorption. The gauge-invariant amplitude of this process involves two form factors (FFs). We present these FFs in the framework of light-cone sum rules (LCSR).
We investigate the full electroweak one-loop radiative corrections to $e^+e^- \to H^+H^-$ within the Inert Higgs Doublet Model (IHDM) at future Higgs factories such as the ILC, CLIC, and CEPC, after taking into account theoretical and experimental constraints from LEP, the LHC, and dark matter searches. The calculations are performed using FeynArts/FormCalc to compute the one-loop weak corrections and Feynman Diagram Calculation (FDC) to calculate the QED contribution to the next-to-leading-order cross section, at three collision energies: 250, 500, and 1000 GeV. Because of the large production rate and the significant size of the corrections, observing the $e^+e^- \to H^+H^-$ process allows a direct probe of the new physics of the IHDM.
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter covering the central region of the ATLAS experiment, with steel as absorber and plastic scintillators as active medium. The scintillators are read out by wavelength-shifting fibres coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped, digitized by sampling the signal every 25 ns, and stored on the detector until a trigger decision is received. The TileCal front-end electronics reads out the signals produced by about 10000 channels measuring energies ranging from about 30 MeV to about 2 TeV. Each stage of the signal production, from scintillation light to signal reconstruction, is monitored and calibrated. During LHC Run-2, high-momentum isolated muons were used to study and validate the electromagnetic scale, while the hadronic response was probed with isolated hadrons. The calorimeter time resolution has been studied with multi-jet events. A summary of the performance results, including the calibration, stability, absolute energy scale, uniformity, and time resolution, will be presented.
Since its installation in 2016, the ATLAS Forward Proton (AFP) detector has taken data during standard (low-beta, high-µ) and special (low-beta, low-µ) LHC fills. The performance of the tracking and time-of-flight systems, as well as studies of trigger performance and detector alignment, will be discussed.
The expected increase in particle flux in the high-luminosity phase of the LHC (HL-LHC), with an instantaneous luminosity that can reach L ≈ 7.5 × 10^34 cm^−2 s^−1, will have a significant impact on the pile-up, with potentially 200 interactions per bunch crossing. The performance for electrons and photons, as well as for jets and missing transverse energy, will be strongly degraded in the end-cap and forward regions of the detector, where the granularity of the electromagnetic calorimeter is coarser and the momentum resolution of the Inner Tracker (ITk) is poorer. In order to mitigate the pile-up contamination resulting from this high luminosity, a High Granularity Timing Detector (HGTD) is proposed in front of the LAr end-cap calorimeters, covering the pseudorapidity region between 2.4 and 4.0. The high granularity and high-precision timing information will improve pile-up rejection and forward-object reconstruction, complementing the performance of the upgraded ITk in the forward region of the ATLAS detector and improving jet and lepton reconstruction. The ability of the HGTD to improve pile-up jet rejection and lepton isolation efficiency in the forward region, in addition to physics and performance results, will be presented.
Reconstructing the type and energy of isolated pions from the ATLAS calorimeters is a key step in the hadronic reconstruction. The baseline methods for local hadronic calibration were optimized early in the lifetime of the ATLAS experiment. Recently, image-based deep learning techniques have demonstrated significant performance improvements over these traditional techniques. We present an extension of that work using point-cloud methods that do not require calorimeter clusters or particle tracks to be projected onto a fixed and regular grid. Instead, transformer, deep sets, and graph neural network architectures are used to process calorimeter clusters and particle tracks as point clouds. We demonstrate the performance of these new approaches as an important step towards a fully deep-learning-based low-level hadronic reconstruction.
The MIP Timing Detector (MTD) is a new sub-detector planned for the Compact Muon Solenoid (CMS) experiment at CERN, aimed at maintaining the excellent particle identification and reconstruction efficiency of the CMS detector during the High Luminosity LHC (HL-LHC) era. The MTD will provide new and unique capabilities to CMS by measuring the time-of-arrival of minimum ionizing particles with a resolution of 30-40 ps for MIP signals at a rate of 2.5 Mhit/s per channel at the beginning of HL-LHC operation. The precision time information provided by the MTD will reduce the effects of the high levels of pileup expected at the HL-LHC by enabling the use of 4D reconstruction algorithms. The central barrel timing layer (BTL) of the MTD uses a sensor technology consisting of LYSO:Ce scintillating crystal bars coupled to SiPMs, one at each end of the bar, read out with TOFHIR ASICs for the front-end. We present an overview of the MTD BTL design and show test beam results demonstrating the achievement of the target time resolution of about 30 ps.
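One benefit of the double-ended BTL readout mentioned above is that averaging the two SiPM timestamps improves on the single-end time resolution. The following toy simulation illustrates the statistical part of that gain only (numbers are hypothetical; averaging also cancels the position-dependent light-propagation delay along the bar, which is not modeled here).

```python
import numpy as np

# Toy illustration: averaging two independent end timestamps improves the
# single-end resolution by ~sqrt(2). All numbers are hypothetical.
rng = np.random.default_rng(1)
sigma_end = 0.042          # ns, hypothetical single-end time resolution
n = 100_000                # number of simulated hits

t1 = rng.normal(0.0, sigma_end, n)   # timestamp from SiPM at one bar end
t2 = rng.normal(0.0, sigma_end, n)   # timestamp from SiPM at the other end
t_avg = 0.5 * (t1 + t2)              # combined per-hit timestamp

# np.std(t_avg) is close to sigma_end / sqrt(2), i.e. ~0.030 ns here.
```

This sqrt(2) combination of two 40 ps-class ends is consistent in spirit with the ~30 ps target resolution quoted for the BTL, though the real detector resolution is set by scintillation statistics, SiPM noise, and electronics as well.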