The Ninth Annual Large Hadron Collider Physics (LHCP2021) conference is planned for 7-12 June 2021.
NEWS (03/11/2021): the LHCP2021 proceedings have been reviewed and are now available at this URL: https://pos.sissa.it/397/
NEWS (18/10/2021): the LHCP2021 proceedings are currently under review and will appear at this URL: https://pos.sissa.it/397/
NEWS (12/06/2021): the winners of the poster awards and the site selected to host LHCP 2023 have been announced in the closing plenary session
NEWS (28/04/2021): the second bulletin is now available on the conference website
NEWS (23/04/2021): poster abstracts have been reviewed and acceptance notifications have been sent by e-mail. Information about the poster and poster session formats is available on the conference website. More detailed instructions will be sent to the poster presenters by e-mail.
NEWS (22/04/2021): thanks to CERN and IUPAP sponsorships, no fees are required to participate in the LHCP2021 conference. Participants attending the online conference are required to register in order to receive by e-mail the instructions for the video connections.
The LHCP conference series started in 2013 after a successful fusion of two international conferences, the "Physics at Large Hadron Collider Conference" and the "Hadron Collider Physics Symposium". The conference programme is devoted to a detailed review of the latest experimental and theoretical results on collider physics, including recent results from LHC Run II, and to discussions of further research directions within the high-energy particle physics community, on both the theory and experiment sides. The main goal of the conference is to foster intense and lively discussions between experimentalists and theorists in research areas such as Standard Model physics and beyond, the Higgs boson, supersymmetry, heavy-quark physics and heavy-ion physics, as well as recent progress on the high-luminosity upgrade of the LHC and future collider developments.
With great regret we have concluded that the 9th LHCP conference, to be held on 7-12 June 2021, will need to be fully online, due to the Covid-19 pandemic and its uncertainties.
The conference will keep the same dates, with a timetable adjusted to improve remote participation from around the world, similar to that of the 2020 edition of LHCP.
| MAIN DEADLINES | |
| --- | --- |
| Registration opening | 12 December 2020 |
| Registration closing | 2 June 2021 |
| Poster abstract submission deadline | 19 April 2021 |
| Poster acceptance notification | 23 April 2021 at the latest |
| Start of the conference | 7 June 2021, 12:00 |
| Proceedings submission | 20 September 2021 |
Recent measurements of charm-baryon production at midrapidity by the ALICE collaboration in pp collisions show baryon-over-meson ratios significantly higher than those in $\rm e^+e^-$ collisions for different charm-hadron species. The charmed baryon-to-meson and charmed baryon-to-baryon ratios provide unique information on hadronisation mechanisms. In this poster, the first measurement of the production cross section of $\rm \Omega_{c}^{0}$ baryons, reconstructed via the hadronic decay channel $\rm \Omega_{c}^{0} \rightarrow \pi^{+} \Omega^{-}$ (and its charge conjugate), is presented.
The production cross sections of open heavy-flavour hadrons are typically described within the factorisation approach as the convolution of the parton distribution functions of the incoming protons, the perturbative QCD partonic cross section, and the fragmentation functions. The latter are typically parametrised from measurements in ${\rm e^+e^-}$ collisions. Measurements of charm-baryon production are crucial to study charm-quark hadronisation in pp and p--Pb collisions and its difference with respect to ${\rm e^+e^-}$ collisions. Furthermore, measurements of charm-baryon production in p--Pb collisions provide important information about Cold Nuclear Matter (CNM) effects, quantified in the nuclear modification factor $R_{\rm pPb}$. Measurements in p--Pb collisions also help us to understand how the possible presence of collective effects could modify the production of heavy-flavour hadrons, and to find similarities among pp, p--Pb and Pb--Pb systems.
In this poster, the latest measurements of $\Lambda^+_{\rm c}$ production performed with the ALICE detector at midrapidity in pp collisions, and the new measurement performed down to $p_{\rm T}=0$ in p--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, are presented. This allows us to show the first ALICE measurement of the $\Lambda^+_{\rm c}/{\rm D^0}$ ratio and of the $\Lambda^+_{\rm c}$ $R_{\rm pPb}$ down to $p_{\rm T}$ = 0 in p--Pb collisions. The $\Lambda^+_{\rm c}/{\rm D^0}$ ratio at midrapidity in small systems is significantly higher than the one in ${\rm e^+e^-}$ collisions, suggesting that the fragmentation of charm is not universal across different collision systems. Results are compared with theoretical calculations.
The increase of the particle flux (pile-up) at the HL-LHC, with instantaneous luminosities up to L ~ 7.5 × 10$^{34}$ cm$^{-2}$s$^{-1}$, will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement.
The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0, adding the capability to measure charged-particle trajectories in time as well as space. Two silicon-sensor double-sided layers will provide precision timing information for minimum-ionising particles, with a resolution as good as 30 ps per track, in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 mm × 1.3 mm, leading to a highly granular detector with 3.7 million channels. Low Gain Avalanche Detector (LGAD) technology has been chosen as it provides enough gain to reach the large signal-to-noise ratio needed.
The requirements and overall specifications of the HGTD will be presented as well as the technical design and the project status. The on-going R&D effort carried out to study the sensors, the readout ASIC, and the other components, supported by laboratory and test beam results, will also be presented.
In the Standard Model (SM), lepton flavour is conserved in all interactions. Hence, any observation of lepton flavour violation (LFV) would be an unambiguous sign of physics beyond the SM (BSM), and LFV processes are predicted by numerous BSM models. One way to search for LFV is in the decays of gauge bosons. In the search presented here, the decay of the Z boson to an electron-tau or muon-tau pair is investigated using the full Run 2 pp collision data set at $\sqrt{s} = 13$ TeV recorded by the ATLAS experiment at the LHC. The analysis exploits tau decays into hadrons and, for the first time in this channel in ATLAS, into leptons. A key ingredient of the search is the use of a neural network to differentiate between signal and background events in order to make optimal use of the data. Combined with about 8 billion Z decays recorded by ATLAS in Run 2 of the LHC, the strongest constraints to date are set: $\mathrm{Br}(Z\to e\tau) < 5.0\times10^{-6}$ and $\mathrm{Br}(Z\to\mu\tau) < 6.5\times10^{-6}$ at 95% confidence level, finally superseding the limits set by the LEP experiments more than two decades ago.
Since 2016, the ATLAS detector has been equipped with new devices: the ATLAS Forward Proton (AFP) detectors. AFP aims to measure protons scattered at very small angles, which are a natural signature of so-called diffractive events. Measurements of the properties of diffractive events usually require low pile-up data-taking conditions. The AFP performance in such special, low pile-up runs, including an evaluation of the detector efficiency, will be presented.
Production of beauty quarks takes place mostly in initial hard-scattering processes and can be calculated using perturbative quantum chromodynamics (pQCD). Thanks to its excellent particle-tracking capabilities, the ALICE experiment at the LHC is able to reconstruct beauty-hadron decay vertices displaced by hundreds of micrometers from the primary interaction vertex. The poster will present inclusive pT spectra of b jets measured in p–Pb and pp collisions at √sNN = 5.02 TeV, the corresponding nuclear modification factor, and the fraction of b jets among inclusive jets. The production cross section of b jets was measured down to 10 GeV/c, which is lower than in previous b-jet measurements at the LHC. Low-pT b jets are expected to be more sensitive to cold nuclear matter effects in p–Pb collisions. They are also an important reference for future Pb–Pb measurements, where b-jet production provides information on the colour and parton-mass dependence of parton energy loss.
Heavy quarks (charm, beauty), due to their large masses, originate mainly from hard partonic scattering processes in high-energy hadronic collisions. They evolve as parton showers and hadronise, producing back-to-back jets.
Two-particle azimuthal angular correlations triggered by electrons from heavy-flavour hadron decays can be used for heavy-flavour jet studies. Such correlation distributions contain a near-side peak around $\Delta\varphi = 0$, formed by particles associated with a high-$p_{\rm T}$ trigger particle, and an away-side peak around $\Delta\varphi = \pi$. By changing the momentum scales of the trigger and associated particles one can study the heavy-flavour jet structure. In pp collisions, heavy-flavour correlations can be used to study the production and fragmentation of heavy quarks. In p-Pb collisions, they can be used to test cold nuclear matter and gluon saturation effects.
In this poster, we present the current status and results of the ALICE measurement of azimuthal angular correlations of high-$p_{\rm T}$ heavy-flavour decay electrons with charged particles in pp and p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV from the LHC Run 2 data. The results from pp and p-Pb collisions will be compared with each other to investigate any modification due to cold nuclear matter effects.
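As an illustration of how such a correlation distribution is built, the following minimal Python sketch histograms the per-trigger azimuthal difference; the function name and binning are ours, and the efficiency, purity and mixed-event corrections used in the real analysis are omitted.

```python
import numpy as np

def delta_phi_correlation(phi_trig, phi_assoc, n_bins=32):
    """phi_trig, phi_assoc: 1D numpy arrays of azimuthal angles (rad)."""
    # Fold the azimuthal difference into the conventional window (-pi/2, 3*pi/2)
    # so the near-side peak sits at 0 and the away-side peak at pi.
    dphi = (phi_assoc[None, :] - phi_trig[:, None] + np.pi / 2) % (2 * np.pi) - np.pi / 2
    hist, edges = np.histogram(dphi.ravel(), bins=n_bins,
                               range=(-np.pi / 2, 3 * np.pi / 2))
    return hist / len(phi_trig), edges  # per-trigger normalised yield
```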
In this work we present the production of charged particles associated with high-$p_{\rm T}$ trigger particles ($8<p_{\rm T}^{\rm trig.}<15$ GeV/$c$) at midrapidity in proton-proton collisions at $\sqrt{s}=5.02$ TeV simulated with the PYTHIA 8 Monte Carlo model [1]. The study is performed as a function of the relative transverse activity classifier, $R_{\rm T}$, which is the relative charged-particle multiplicity in the transverse region ($\pi/3<|\phi^{\rm trig.}-\phi^{\rm assoc.}|<2\pi/3$) of the di-hadron correlations, and is sensitive to multi-parton interactions. The evolution of the yield of associated particles ($3\leq p_{\rm T}^{\rm assoc.}< 8$ GeV/$c$) in both the toward and away regions as a function of $R_{\rm T}$ is investigated. We propose a strategy which allows for the modelling and subtraction of the Underlying Event (UE) contribution from the toward and away regions in challenging environments like those characterised by large $R_{\rm T}$. We find that the signal in the away region becomes broader with increasing $R_{\rm T}$, whereas the yield in the toward region increases with $R_{\rm T}$. This effect is reminiscent of that seen in heavy-ion collisions, where an enhancement of the yield in the toward region for 0-5% central Pb$-$Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV was reported. To further understand the role of the UE and additional jet activity, the transverse region is divided into two one-sided sectors, "trans-max" and "trans-min", selected in each event according to which region has the larger or smaller charged-particle multiplicity. Based on this selection criterion, the observables are studied as a function of $R_{\rm T}^{\rm max}$ and $R_{\rm T}^{\rm min}$, respectively. Results for pp collisions simulated with PYTHIA 8.244 and Herwig 7.2 will be shown.
[1] J. Phys. G 48 (2020) 015007
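For readers unfamiliar with $R_{\rm T}$, the following toy Python sketch shows one way the region classification and the trans-max/trans-min split could be computed per event; the helper name and inputs are ours, not taken from the analysis code.

```python
import numpy as np

def transverse_activity(phi_trig, phi_assoc, mean_nch_transverse):
    """phi_trig: scalar trigger azimuth; phi_assoc: array of associated azimuths."""
    # Fold |phi_trig - phi_assoc| into [0, pi] and classify the regions.
    dphi = np.abs(np.angle(np.exp(1j * (phi_assoc - phi_trig))))
    toward = dphi < np.pi / 3
    away = dphi > 2 * np.pi / 3
    transverse = ~toward & ~away          # pi/3 < |dphi| < 2*pi/3
    # Split the transverse region into two one-sided sectors and order them
    # by charged-particle multiplicity ("trans-max" / "trans-min").
    left = np.sin(phi_assoc - phi_trig) > 0
    n_left = int(np.sum(transverse & left))
    n_right = int(np.sum(transverse & ~left))
    r_t = (n_left + n_right) / mean_nch_transverse   # R_T = N_ch^T / <N_ch^T>
    return r_t, max(n_left, n_right), min(n_left, n_right)
```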
Liquid argon (LAr) sampling calorimeters are employed by ATLAS for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, and for hadronic and forward calorimetry in the region from |η| = 1.5 to |η| = 4.9. In the first LHC run a total luminosity of 27 fb$^{−1}$ was collected at center-of-mass energies of 7-8 TeV. After detector consolidation during a long shutdown, Run-2 started in 2015 and about 150 fb$^{-1}$ of data at a center-of-mass energy of 13 TeV was recorded. With the end of Run-2 in 2018, a multi-year shutdown for the Phase-I detector upgrades began.
As part of the Phase-I upgrade, new trigger readout electronics for the ATLAS Liquid-Argon Calorimeter have been developed. Installation began at the start of the LHC shutdown in 2019 and is expected to be completed in 2020. A commissioning campaign is underway in order to realize the capabilities of the new, higher-granularity and higher-precision level-1 trigger hardware in Run-3 data taking. This contribution will give an overview of the commissioning of the new trigger readout, as well as the preparations for Run-3 detector operation and the changes in the monitoring and data quality procedures needed to cope with the increased pileup.
This poster summarises the searches for extra-dimensional models performed using the ATLAS detector at the Large Hadron Collider in the full Run 2 dielectron and dimuon datasets. These data were produced in proton-proton collisions at a centre-of-mass energy of 13 TeV. In particular, limits on the ADD model are presented, from a reinterpretation of the ATLAS Run 2 dilepton non-resonant analysis. Also highlighted is a novel search for clockwork extra dimensions.
Eight years ago, the discovery of a new fundamental particle, the Higgs boson (H), was announced by the ATLAS and CMS collaborations at CERN. While elementary particles acquire their mass through their interaction with the Higgs field, the large differences in their masses as well as the origin of the three generations of fermions remain unexplained to this day and constitute the Standard Model flavour puzzle.
Measuring the coupling of each fermion to the Higgs boson is one of the most important tasks in modern particle physics. The next most promising candidate in the quark sector is the Higgs boson decay to a charm quark-antiquark pair (cc).
This poster will focus on the analysis of the associated production of the Higgs boson with a W or Z boson performed by the ATLAS Collaboration using data collected between 2015 and 2018, and will describe the analysis strategy employed to search for the H→cc signal. In particular, recent achievements in charm-tagging technology, which enables the identification of jets containing charm hadrons, will be presented. Our current understanding of the H→cc process will be outlined, and the various results of the ATLAS, CMS and LHCb collaborations will be compared. Finally, the interpretation of this new result as a probe of the Standard Model flavour puzzle and its large constraining power on new physics scenarios will be discussed.
The second LHC long shutdown period (LS2) was a crucial opportunity for the CMS Resistive Plate Chambers (RPC) to complete their consolidation and upgrade projects. The consolidation includes detector maintenance for gas tightness, HV (high voltage), LV (low voltage) and slow-control operation. Dedicated studies were performed to understand the behaviour of the RPC currents in comparison with Run 2. This paper summarises the activities performed and the commissioning of the CMS RPC system, both on the surface (for RE4) and for the full detector in the CMS cavern, under different operating conditions.
Standard dipole parton showers are known to yield incorrect subleading-colour contributions to the leading (double) logarithmic terms for a variety of observables. In this work, concentrating on final-state showers, we present two simple, computationally efficient prescriptions to correct this problem, exploiting a Lund-diagram type classification of emission regions. We study the resulting effective multiple-emission matrix elements generated by the shower, and discuss their impact on subleading colour contributions to leading and next-to-leading logarithms (NLL) for a range of observables. In particular we show that the new schemes give the correct full colour NLL terms for global observables and multiplicities. Subleading colour issues remain at NLL (single logarithms) for non-global observables, though one of our two schemes reproduces the correct full-colour matrix-element for any number of energy-ordered commensurate-angle pairs of emissions. While we carry out our tests within the PanScales shower framework, the schemes are sufficiently simple that it should be straightforward to implement them also in other shower frameworks.
We present a combined analysis of low energy precision constraints and LHC searches for leptoquarks which couple to first generation fermions. Considering all ten leptoquark representations, five scalar and five vector ones, we study at the precision frontier the constraints from $K\to\pi\nu\nu$, $K\to\pi e^+e^-$, $K^0-\bar K^0$ and $D^0-\bar D^0$ mixing, as well as from experiments searching for parity violation (APV and QWEAK). We include LHC searches for $s$-channel single resonant production, pair production and Drell-Yan-like signatures of leptoquarks. Particular emphasis is placed on the recent CMS analysis of lepton flavour universality violation in non-resonant di-lepton pairs. The excess in electron events could be explained by $t$-channel contributions of the leptoquark representations $\tilde{S}_1, S_2, S_3, \tilde{V}_1, V_2 (\kappa_2^{RL} \neq 0)$ and $V_3$ without violating other bounds. Regarding the so-called ``Cabibbo angle anomaly'', we observe that the present constraints are too restrictive to allow for a resolution via direct leptoquark contributions to super-allowed beta decays.
Several Dark Sector models predict the existence of particles with macroscopic lifetimes and semi-visible jets (QCD-like jets which include stable Dark Sector particles). These can lead to final states with large missing transverse momentum recoiling against at least one highly energetic jet, a signature often referred to as a mono-jet.
The RECAST framework is used to reinterpret the recent ATLAS mono-jet search, based on 139 $\mathrm{fb^{-1}}$ of pp collision data at $\sqrt{s} = 13$ TeV, in terms of Dark Sector models not studied in the original work. For models involving long-lived particles, the results are complementary to those of dedicated searches. Results are also interpreted for the first time at ATLAS in terms of searches for semi-visible jets produced from a QCD-like parton shower.
In this study, a new technique for event classification using Convolutional Neural Networks (CNN) is presented. Results obtained using this technique are shown and compared to more traditional Machine Learning approaches for two different physics cases.
The new technique explores the power of visual recognition, one of the fastest-growing areas in Artificial Intelligence as a consequence of the "deep learning" evolution of CNNs. Since CNNs are fed with images, an original and intuitive way of encoding the event information in images has been developed, building a one-to-one correspondence that allows us to treat event classification as image classification.
In order to take advantage of the good performance of existing CNN architectures, transfer learning has been tested and shown to be a suitable option. VGG16, which builds on the well-known AlexNet-style architecture, has been chosen as the benchmark. Additionally, an alternative approach with a simpler CNN architecture has also been shown to give good results when trained from scratch. A comparison with a more standard technique such as a BDT (using the XGBoost library) is provided in order to confirm that the results obtained with this previously unexplored technique are satisfactory.
The two classification problems studied correspond to current challenges in particle physics. First, a New Physics example corresponding to a Dark Matter search has been performed, considering a mono-top signal and several of its main background processes. The selected events were required to have exactly one lepton and at least one b-tagged jet, together with large missing transverse momentum. Second, with the same selection criteria, a tt+X classification is carried out, in which the aim is to identify exotic processes with four top quarks in the final state among other processes such as ttH or ttW.
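A minimal sketch of the general approach (our own illustration, not the authors' code): events are encoded as 2D images, here a pT-weighted (η, φ) histogram stacked into three channels, and fed to a frozen VGG16 base with a small trainable head; the encoding, grid size and ranges are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def event_to_image(eta, phi, pt, shape=(64, 64)):
    # One illustrative encoding: pT-weighted (eta, phi) occupancy grid.
    img, _, _ = np.histogram2d(eta, phi, bins=shape,
                               range=[[-2.5, 2.5], [-np.pi, np.pi]], weights=pt)
    img = img / (img.max() + 1e-9)          # normalise to [0, 1]
    return np.stack([img] * 3, axis=-1)     # VGG16 expects 3 channels

# Transfer learning: frozen convolutional base + small trainable classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(64, 64, 3))
base.trainable = False
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # signal vs background probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```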
The momentum anisotropy ($v_{n}$) of the produced particles in relativistic nuclear collisions is considered to be a response to the initial geometry, or spatial anisotropy ($\varepsilon_{n}$), of the system formed in these collisions. The linear correlation between $\varepsilon_{n}$ and $v_{n}$ measures the efficiency with which the initial spatial eccentricity is converted to final momentum anisotropy in heavy-ion collisions. We have studied the transverse momentum, collision centrality, and beam energy dependence of this correlation for different charged particles using the hydrodynamical model framework MUSIC. The ($\varepsilon_{n}$ − $v_{n}$) correlation is found to be stronger for central collisions and, as expected, stronger for n=2 than for n=3. However, the transverse momentum ($p_{T}$) dependent correlation coefficient shows interesting behaviour which strongly depends on the mass as well as the $p_{T}$ of the produced particles. The correlation strength is found to be larger for lighter particles in the lower $p_{T}$ region. We have seen that the relative fluctuation in anisotropic flow depends strongly on the value of $\eta/s$, especially in the region $p_{T}<$ 1 GeV, unlike the correlation coefficient, which does not show a significant dependence on $\eta/s$.
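For reference, the linear correlation usually quantified in such studies is the Pearson coefficient computed over events in a centrality class (our notation):

```latex
c(\varepsilon_n, v_n) =
  \frac{\langle (\varepsilon_n - \langle \varepsilon_n \rangle)
        (v_n - \langle v_n \rangle) \rangle}
       {\sigma_{\varepsilon_n}\,\sigma_{v_n}},
\qquad n = 2, 3,
```

where the averages run over events, σ denotes the corresponding standard deviations, and c = 1 corresponds to a fully linear conversion of the initial eccentricity into flow.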
The ATLAS Muon Upgrade project is part of the Large Hadron Collider (LHC) High Luminosity (HL) upgrade project, which aims to increase the instantaneous luminosity up to 7.5×10$^{34}$ cm$^{−2}$s$^{−1}$. The present first muon station in the forward regions of ATLAS is being replaced by the so-called New Small Wheels (NSWs). The NSWs consist of resistive-strip MicroMegas (MM) detectors and small-strip Thin Gap Chambers (sTGC), both providing trigger and tracking capabilities, for a total active surface of more than 2500 m$^2$. After the R&D, design and prototyping phase, the MM and sTGC chambers are now in series production. The NSW Upgrade project, the most challenging and complex of the ATLAS Phase-I upgrade projects, is expected to be completed with the installation of the NSW in the ATLAS underground cavern during the summer of 2021. The whole NSW structure includes 128 detectors, with ~2.4 million readout channels in total. This new generation of readout electronics is built to withstand harsh radiation conditions, where the expected background rate will reach 20 kHz/cm$^2$. Eight MicroMegas detector layers are integrated into a double wedge. The mechanical integration is followed by the electronics integration and its initial validation in the data acquisition system. Each fully equipped MicroMegas double wedge is tested at a dedicated cosmic-ray facility and the high-voltage settings are defined. Then a sequence of tests follows, in which the efficiency, cluster size and resolution are measured for all the individual layers of the double wedge. These steps constitute the qualification of the MicroMegas sector for the final integration with the sTGC wedges before mounting them on the NSW structure. The electronics performance and cosmic-ray validation results of the final validation of MicroMegas double wedges will be presented.
We study new physics contributions to CP-violating anomalous couplings of the top quark in the context of top-pair production and the subsequent decays into a dilepton pair and b-jets at the Large Hadron Collider. An estimate of the sensitivities to such CP-violating interactions is also discussed for the pre-existing 13 TeV LHC data and its projections for the proposed LHC run at 14 TeV.
Next-generation collider experiments will have to cope with extremely high collision rates, making it necessary to implement real-time event processing capabilities. Alongside the standard pattern recognition algorithms designed to run on look-up tables, Machine Learning methods, and in particular Deep Neural Networks, are spreading very fast, and there is growing interest in executing such algorithms at trigger level to improve on-line selection performance. The main issue in running these algorithms in real time is the number of operations that need to be computed. Low-latency hardware solutions exist, e.g. FPGAs, but the main constraint on the implementation is often the size of the model, which has to be finely tuned not to exceed the available memory. We present an approach to reducing the size of models based on fully connected neural networks in an optimised way, while keeping the model performance under control. The number of features in input to the Deep Neural Network is reduced using a CancelOut layer, optimised through an original loss function. We compare the performance of this approach with other techniques. As a baseline study we use the selection of proton-proton collision events in which the boosted Higgs boson decays to two $b$-quarks and both decay products are contained in a large and massive jet. These events have to be selected against an overwhelming QCD background. Promising results are shown and the way for future developments is outlined.
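A minimal PyTorch sketch of a CancelOut-style input-gating layer, assuming the published CancelOut idea of one trainable sigmoid gate per input feature; the initialisation, the regularisation term and the class itself are illustrative, not the contribution's actual code.

```python
import torch
import torch.nn as nn

class CancelOut(nn.Module):
    """One trainable gate per input feature; features whose learned gates
    collapse towards zero can be pruned, shrinking the downstream
    fully connected network."""
    def __init__(self, n_features):
        super().__init__()
        self.weights = nn.Parameter(torch.full((n_features,), 4.0))  # gates start open

    def forward(self, x):
        return x * torch.sigmoid(self.weights)

# A sparsity term such as lam * torch.sigmoid(layer.weights).sum(), added to
# the training loss, pushes unneeded gates towards zero (one possible way to
# realise the "original loss function" mentioned above).
```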
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the Standard Model (SM) as well as searches for new physics beyond the SM. The CMS Collaboration is planning to replace entirely its trigger and data acquisition systems to match this ambitious physics program. Efficiently collecting datasets in Phase 2 will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. The already challenging implementation of an efficient tau lepton trigger will become, in these conditions, an even more crucial and harder task; especially interesting is the case of hadronically decaying taus. To this end, the foreseen high-granularity endcap calorimeter (HGCAL), and the astonishing amount of information it will provide, play a key role in the design of the new online level-1 (L1) triggering system. In this talk I will present the development of an L1 trigger for hadronically decaying taus based solely on information from the HGCAL detector. I will present some novel ideas for an L1 trigger based on machine learning that can be implemented in FPGA firmware. The expected performance of the new trigger algorithm will be presented, based on simulated collision data for the HL-LHC.
The increase of luminosity foreseen for the High-Luminosity LHC phase requires the substitution of the ATLAS Inner Detector with a new tracking detector, called the Inner Tracker. It will be an all-silicon system consisting of a pixel and a strip subdetector. The ATLAS-wide FELIX system will be the off-detector interface to the Inner Tracker.
In order to efficiently bring the Inner Tracker into operation, the intercommunication between the DAQ and the DCS is foreseen. Such communication is mediated by OPC servers that interface to the different hardware and software resources and to the Finite State Machine, which supervises all subdetectors. This framework is designed to be flexible, so that it can easily incorporate heterogeneous resources coming from different subsystems, including the FELIX setups.
This poster describes the current status of the implementation of OPC servers for the intercommunication between the DAQ and the DCS and their integration in the FELIX setups.
To meet new TDAQ buffering requirements and withstand the high radiation doses expected at the high-luminosity LHC, the ATLAS Liquid Argon Calorimeter readout electronics will be upgraded. The triangular calorimeter signals are amplified and shaped by analogue electronics over a dynamic range of 16 bits, with low noise and excellent linearity. Developments of low-power preamplifiers and shapers to meet these requirements are ongoing in 130 nm CMOS technology. In order to digitize the analogue signals on two gains after shaping, a radiation-hard, low-power 40 MHz 14-bit ADC is being developed using a pipeline+SAR architecture in 65 nm CMOS. Characterization of the prototypes of the front-end components shows good promise to fulfill all the requirements. The signals will be sent at 40 MHz to the off-detector electronics, where FPGAs connected through high-speed links will perform energy and time reconstruction through the application of corrections and digital filtering. Reduced data are sent with low latency to the first-level trigger, while the full data are buffered until the reception of trigger-accept signals. The data-processing, control and timing functions will be realized by dedicated boards connected through ATCA crates. Results of tests of prototypes of the front-end components will be presented, along with design studies on the performance of the off-detector readout system.
A series of upgrades are planned for the LHC accelerator to increase its instantaneous luminosity to 7.5×10$^{34}$ cm$^{-2}$s$^{-1}$. The luminosity increase drastically impacts the ATLAS trigger and readout data rates. The present ATLAS Small Wheel muon detector will be replaced with a New Small Wheel (NSW) detector, which is expected to be installed in the ATLAS underground cavern by the end of Long Shutdown 2 of the LHC. One crucial part of the integration procedure concerns the installation, testing and validation of the on-detector electronics and readout chain for a very large system with more than 2.1 million electronic channels in total. These include about 7000 front-end boards (MMFE8, SFEB, PFEB), custom printed circuit boards each housing eight 64-channel VMM Application Specific Integrated Circuits (ASICs) that interface with the ATLAS Trigger and Data Acquisition (TDAQ) system through about 1000 data-driver cards. The readout chain is based on optical link technology (GigaBit Transceiver links) connecting the backend to the front-end electronics via the Front-End LInk eXchange (FELIX), a newly developed system that will serve as the next-generation readout driver for ATLAS. For the configuration, calibration and monitoring path, the various electronics boards are supplied with the GBT-SCA ASIC (Giga-Bit Transceiver Slow Control Adapter), which is part of the GigaBit Transceiver link (GBT) chipset and whose purpose is to distribute control and monitoring signals to the electronics embedded in the detectors and in the ATLAS service areas. Experience and performance results from the first large-scale electronics integration tests performed at CERN on final NSW sectors will be presented.
Single top quark production is the subleading production process of top quarks at the LHC, after top quark pair production. The latest differential measurements of the single top quark (tW) production cross sections are presented, using data collected by the CMS detector at a center-of-mass energy of 13 TeV. The cross sections are measured as a function of various kinematic observables of the top quarks and of the jets and leptons in the final state. The results are confronted with precise theory calculations.
Although most Beyond the Standard Model (BSM) searches target specific theory models, there has always been a keen interest in the development of model-independent methods in the High Energy Physics (HEP) community. Machine Learning (ML) based anomaly detection stands among the latest up-and-coming avenues for creating model-agnostic BSM searches. The focus of this research is the design of anomalous-event taggers based on autoencoder models. Alongside signal discrimination power, a high priority is placed on both signal-model and background-model independence. To this end, the autoencoder is used in conjunction with a Normalizing Flow model tasked with latent-space density estimation. Both the event reconstruction error and the latent representation likelihood are combined in order to mitigate the bias of the resulting event anomaly score. Overall this method shows promising anomaly detection performance without losing much in terms of generalization power. On the multijet LHC Olympics data, it is consistently able to identify BSM signals, even in the challenging scenarios posed by the Black Box datasets, where the signal content is unknown.
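Schematically, the combination described above could look like the following Python sketch; `autoencoder` and `flow` are hypothetical objects exposing encode/decode and log_prob methods, and the weighting is an assumption.

```python
import numpy as np

def anomaly_score(x, autoencoder, flow, alpha=0.5):
    # Encode, reconstruct, and evaluate the latent density.
    z = autoencoder.encode(x)
    recon_error = np.mean((autoencoder.decode(z) - x) ** 2, axis=-1)
    latent_nll = -flow.log_prob(z)  # low likelihood in latent space -> anomalous
    # Large reconstruction error and low latent likelihood both raise the score;
    # combining them is meant to reduce the bias of either term alone.
    return alpha * recon_error + (1.0 - alpha) * latent_nll
```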
A search is presented for four-top-quark production using proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the Large Hadron Collider with an integrated luminosity of 139/fb. Events are selected if they contain a same-sign lepton pair or at least three leptons (electrons or muons). Jet multiplicity, jet flavour and event kinematics are used to separate signal from the background through a multivariate discriminant, and dedicated control regions are used to constrain the dominant backgrounds. The four-top-quark production cross section is measured to be 24 +7 -6 fb. This corresponds to an observed (expected) significance with respect to the background-only hypothesis of 4.3 (2.4) standard deviations and provides evidence for this process.
The formation of a partonic medium in relativistic heavy-ion collisions is usually inferred from the values of the ratios of certain observables, with $p+p$ collisions taken as a reference. But recent studies of small systems formed in $p+p$ collisions at LHC energies hint towards the possibility of the production of a medium with collective behaviour. Results from $p+p$ collisions have routinely been used as a baseline to analyse and understand the production of QCD matter expected to be produced in nuclear collisions. Results from $p+p$ collisions therefore require more careful investigation to understand whether QCD matter is produced in high-multiplicity $p+p$ collisions. With this motivation, the Glauber model traditionally used to study heavy-ion collision dynamics at high energies is applied to understand the dynamics of $p+p$ collisions. We have used an anisotropic and inhomogeneous quark/gluon-based proton density profile, a realistic picture obtained from the results of deep inelastic scattering, and found that this model explains the charged-particle multiplicity distribution of $p+p$ collisions at LHC energies very well. Collision geometric properties like the impact parameter, the mean number of binary collisions ($\langle N_{coll} \rangle$) and the mean number of participants ($\langle N_{part} \rangle$) at different multiplicities are determined for $p+p$ collisions. We further used these collision geometric properties to estimate the average charged-particle pseudorapidity density ($\langle dN_{ch}/d\eta \rangle$) and found it to be comparable with the experimental results. Knowing $\langle N_{coll} \rangle$, we have for the first time obtained a nuclear modification-like factor ($R_{HL}$) in $p+p$ collisions. We also estimated the eccentricity and elliptic flow as a function of charged-particle multiplicity using the linear response to the initial geometry and found good agreement with experimental results.
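To make the Glauber ingredients concrete, here is a toy constituent-level Monte Carlo for a single pp event; the Gaussian profile, three constituents and cross-section value are placeholders, not the anisotropic quark/gluon profile used in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pp_glauber_event(b, n_const=3, width=0.5, sigma_cc=1.0):
    """Toy pp Glauber: sample constituent transverse positions (fm) in each
    proton, shift one proton by the impact parameter b (fm), and count binary
    collisions with a black-disk constituent cross section sigma_cc (fm^2)."""
    q1 = rng.normal(0.0, width, size=(n_const, 2))
    q2 = rng.normal(0.0, width, size=(n_const, 2)) + np.array([b, 0.0])
    d2 = np.sum((q1[:, None, :] - q2[None, :, :]) ** 2, axis=-1)
    hits = d2 < sigma_cc / np.pi            # collide if d^2 < sigma/pi
    n_coll = int(hits.sum())                # N_coll for this event
    n_part = int(hits.any(axis=1).sum() + hits.any(axis=0).sum())
    return n_coll, n_part
```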
Over the last years, Machine Learning (ML) tools have been successfully applied to a wealth of problems in high-energy physics. In this talk, we will discuss the extraction of the average number of Multiparton Interactions ($\langle N_{\rm mpi}\rangle$) from minimum-bias pp data at LHC energies using ML methods. Using the available ALICE data on transverse momentum spectra as a function of multiplicity, we report the $\langle N_{\rm mpi}\rangle$ for pp collisions at √s = 7 TeV, which complements our previous results for pp collisions at √s = 5.02 and 13 TeV. The comparisons indicate a modest energy dependence of $\langle N_{\rm mpi}\rangle$. We also report the multiplicity dependence of $N_{\rm mpi}$ for the three center-of-mass energies. These results are fully consistent with the existing ALICE measurements sensitive to MPI, and therefore provide experimental evidence for the presence of MPI in pp collisions.
To achieve the challenging target of 1% precision on the luminosity determination at the high-luminosity LHC (HL-LHC), with instantaneous luminosity up to $7.5 \times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}}$, the CMS experiment will employ multiple luminometers with orthogonal systematics. A key component of the proposed system is a stand-alone luminometer, the Fast Beam Condition Monitor (FBCM), which is fully independent of the central trigger and data acquisition services and able to operate at all times at 40 MHz, providing bunch-by-bunch luminosity measurements with 1 s time granularity. FBCM is foreseen to be placed inside the cold volume of the Tracker, as it utilizes silicon-pad sensors and exploits the zero-counting algorithm of hits for the luminosity measurement. FBCM will also provide precise timing information, with a few ns precision, enabling the measurement of beam-induced background. We report on the optimization of the design and the expected performance of FBCM.
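The zero-counting principle referenced above follows from Poisson statistics; a hedged sketch (helper name ours, calibration constants omitted):

```python
import numpy as np

def mu_from_zero_counting(hits_per_bx):
    """If the number of hits per bunch crossing is Poisson distributed,
    P(0) = exp(-mu), so mu = -ln(f_zero), where f_zero is the measured
    fraction of empty crossings. Luminosity then follows as
    mu * f_rev / sigma_vis once the visible cross section is calibrated."""
    f_zero = np.mean(np.asarray(hits_per_bx) == 0)
    return -np.log(f_zero)
```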
The ATLAS Forward Proton physics program at CERN aims at studying soft and hard diffractive events from ATLAS proton collisions. A Time-of-Flight (ToF) system is used to reduce the background from multiple proton-proton collisions. In this presentation, we describe the technical details of the Fast Cherenkov model of photon generation and transportation through the optical part of the ToF detector. This Fast simulation uses the Python programming language and Numba (a high-performance JIT compiler). It is about 200 times faster than the Geant4 simulation already implemented, and provides similar results for the length and time distributions of photons. Moreover, this Fast simulation makes it easy to compute the time resolution of the different bars of the ToF detector.
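In the spirit of the Python+Numba approach, here is a drastically simplified toy (not the actual Fast Cherenkov code; geometry, refractive index and units are assumptions):

```python
import numpy as np
from numba import njit

@njit(cache=True)
def propagate_photons(n, bar_length_mm, n_refr=1.48):
    """Compiled kernel: emit n photons at random depths along a quartz bar
    and return their straight-line arrival times (ns) at the readout end."""
    c_mm_per_ns = 299.792458
    times = np.empty(n)
    for i in range(n):
        z_emit = np.random.uniform(0.0, bar_length_mm)
        path = bar_length_mm - z_emit
        times[i] = path * n_refr / c_mm_per_ns  # group velocity ~ c/n
    return times
```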
The New Small Wheel (NSW) upgrade is now in its commissioning phase. The future ATLAS detector sub-system will be one of the first to employ the Front-End Link eXchange (FELIX) as its Data Acquisition (DAQ) scheme. Currently, one of the main focus points of the community is to ensure the proper acquisition of data from the detector media, besides validating the performance of the detectors themselves. This is being conducted at the BB5 area of CERN, where the Micromegas chambers are subjected to cosmic radiation. In this work, the software and FPGA-based tools that have been developed to help satisfy the stringent two-week turnaround time per detector wedge will be described. These include applications that control the complex DAQ system and validate the data paths of thousands of readout channels and hundreds of high-speed optical links in an automated manner. Another aspect of the work is the description of the methodologies that have been developed to validate the chambers' performance under cosmic radiation. Finally, an FPGA system that performs scintillator coincidence trigger filtering will be described. This part of the system is used to emulate the deterministic nature of the muons' time-of-arrival during the actual run, in order to profile the timing resolution performance of the Micromegas detector.
The precise knowledge of the strong interaction between kaons and nucleons is a key element in describing the interaction between hadrons in the non-perturbative regime of QCD. Moreover, this interaction plays an important role in the study of the equation of state of dense baryonic matter, and hence has important implications for the modeling of neutron stars.
We present the first femtoscopy measurement of momentum correlations of $\mathrm{K^0_Sp}$ and $\mathrm{K^0_S\overline{p}}$ pairs in pp collisions at $\sqrt{s}=13$ TeV measured by the ALICE experiment at the LHC. In this study, the strong scattering parameters of $\mathrm{K^0_Sp}$ and $\mathrm{K^0_S\overline{p}}$ pairs are extracted through the Lednicky-Lyuboshitz model, which links the experimental momentum correlation to the parameters of the strong final-state interaction. The extracted scattering parameters indicate that the strong interaction in $\mathrm{K^0_Sp}$($\mathrm{\overline{p}}$) pairs is attractive, contrary to $\mathrm{K^+p}$ and $\mathrm{K^-p}$. This indicates that in the $\mathrm{K^0_Sp}$($\mathrm{\overline{p}}$) system there are no resonances below threshold of the kind responsible for the repulsive strong interaction in $\mathrm{K^-p}$.
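For reference, in its commonly used form for a Gaussian source of radius $r_0$, the Lednicky-Lyuboshitz correlation function reads (standard notation quoted from the femtoscopy literature, not from this analysis):

```latex
C(k^*) = 1
  + \frac{|f(k^*)|^2}{2 r_0^2}\left(1 - \frac{d_0}{2\sqrt{\pi}\, r_0}\right)
  + \frac{2\,\mathrm{Re}\,f(k^*)}{\sqrt{\pi}\, r_0}\, F_1(2k^* r_0)
  - \frac{\mathrm{Im}\,f(k^*)}{r_0}\, F_2(2k^* r_0),
\qquad
f(k^*) = \left(\frac{1}{f_0} + \frac{1}{2} d_0 k^{*2} - i k^*\right)^{-1},
```

where $f_0$ and $d_0$ are the scattering length and effective range extracted from the fit, and $F_1$, $F_2$ are the usual analytic functions of $2k^*r_0$.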
The design and development status of the Level-0 endcap muon trigger firmware for the ATLAS experiment at the HL-LHC are presented. The firmware reconstructs muon candidates with an improved momentum resolution by exploiting all hit data from the Thin Gap Chambers (TGCs), available on the XCVU13P FPGA mounted on the trigger and readout boards. The track segment is reconstructed by a pattern-matching algorithm, in which the TGC hits are compared with predefined hit patterns; each predefined hit pattern has an associated position and angle of the track segment. Implementing the algorithm with minimal utilisation of the XCVU13P FPGA resources is a major challenge. We achieved 1 cm position and 4 mrad angular resolutions, which satisfy the requirements, with less than 40% of the UltraRAM resources for full coverage of the TGCs. The performance was evaluated with a post-synthesis simulation with hit inputs from a GEANT4 full simulation. The implementation succeeded with no timing violations thanks to optimised latch circuit locations. These results constitute an important ingredient in the development of the Level-0 endcap muon trigger firmware for the HL-LHC.
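The pattern-matching step can be pictured with the following Python sketch; the data structures and the sequential loop are illustrative only, since the firmware evaluates UltraRAM-resident pattern banks in parallel.

```python
def match_segment(fired_channels, pattern_bank):
    """fired_channels: set of TGC channel IDs with hits.
    pattern_bank: dict mapping frozenset(channel IDs) -> (position, angle)
    of the associated track segment. Returns the best-matching segment."""
    best = None
    for pattern, (position, angle) in pattern_bank.items():
        n_matched = len(fired_channels & pattern)
        if best is None or n_matched > best[0]:
            best = (n_matched, position, angle)
    return best  # (number of matched channels, segment position, segment angle)
```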
FASER (ForwArd Search ExpeRiment) fills the axial blind spot of the other, radially arranged LHC experiments. It is installed 480 meters from the ATLAS interaction point, along the collision axis. FASER will search for dark matter and other new, long-lived particles that may be hidden in the collimated reaction products exiting ATLAS. FASER comprises a magnetic spectrometer built with ATLAS silicon tracker modules, four LHCb outer ECAL modules, an emulsion neutrino detector, and plastic scintillators for veto, trigger and timing. The experiment is currently in its final commissioning stages. I report on successful preliminary tests of hardware and software performance with cosmic rays on the surface, and after installation in situ. FASER will begin taking pp collision data from the start of LHC Run 3, in 2022.
ALICE analysis mostly deals with large datasets using the distributed Grid infrastructure. In Run 1 and 2, ALICE developed a system of analysis trains (so-called "LEGO trains") that allowed users to configure analysis tasks (or wagons) that are expected to run on the same data. The LEGO train system builds upon existing tools: the ALICE analysis framework as well as the Grid submission and monitoring infrastructure. This centralized system improved resource utilization and provided a friendly user interface (UI), in addition to bookkeeping functionalities. Currently, 90% of ALICE analyses use the train system. The ongoing major upgrade for LHC Run 3 will enable the experiment to cope with an increase in lead-lead collision data of two orders of magnitude compared to the Run 1 and 2 data-taking periods. In order to process this unprecedented data sample, a new computing model has been implemented, the Online-Offline Computing System (O$^2$). Analysis trains will also be the main workhorse for analysis in Run 3: a new infrastructure, Hyperloop, is being developed based on the successful concept of the LEGO trains. The Hyperloop train system includes a different and improved UI using modern responsive web tools, bookkeeping, instantaneous automatic testing, and the production of derived skimmed data. So far, about 600 Hyperloop trains have been successfully submitted to the Grid and the ALICE analysis facilities using converted Run 2 data. An overview of the ALICE train system concept will be presented in this poster, highlighting the improvements of the new Hyperloop framework for analysis in Run 3.
In this talk, we will discuss $\mathcal{O}(\alpha)$ QED corrections to $B\to K\ell^+\ell^-$ modes. The structure of the contact term is fixed by requiring gauge invariance of the real-emission amplitude. The calculation is done by giving the photon a fictitious mass ($\lambda$), which acts as an IR regulator, and the results are shown to be independent of it. The QED effects are found to be negative. Electron channels are shown to receive large corrections of $\mathcal{O}(20\%)$. We will also discuss the impact on the lepton flavour universality (LFU) ratio ($R_{K}^{\mu e}$).
Machine learning techniques have recently become quite popular in the high-energy physics community and have led to numerous developments in this field. In heavy-ion collisions, one of the crucial observables, the impact parameter, plays an important role in final-state particle production. Being extremely small (of the order of a few femtometers), it is almost impossible to measure the impact parameter in experiments. In this work, we implement an ML-based regression technique via Gradient Boosting Decision Trees (GBDT) to obtain a prediction of the impact parameter in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using A Multi-Phase Transport (AMPT) model. After its successful implementation in small collision systems, transverse spherocity, an event-shape observable, holds an opportunity to reveal more about particle production in heavy-ion collisions as well. In the absence of any experimental exploration in this direction at the LHC yet, we suggest an ML-based regression method to estimate centrality-wise transverse spherocity distributions in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV by training the model with minimum-bias collision data. Throughout this work, we have used a few final-state observables as input to the ML model, which could easily be made available from collision data. Our method seems to work quite well, as we see good agreement between the simulated true values and the values predicted by the ML model.
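A self-contained toy version of the GBDT regression step, with fabricated stand-in data replacing the AMPT events (the feature set and hyperparameters are assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for AMPT: the true impact parameter b drives the mean
# charged-particle multiplicity, from which b is then regressed.
rng = np.random.default_rng(0)
b_true = rng.uniform(0.0, 15.0, 20000)                    # fm
nch = rng.poisson(2000.0 * np.exp(-b_true / 4.0))         # toy multiplicity
mean_pt = 0.5 + 0.05 * rng.standard_normal(b_true.size)   # toy <pT> (GeV/c)
X = np.column_stack([nch, mean_pt])

X_tr, X_te, y_tr, y_te = train_test_split(X, b_true, test_size=0.3, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print("RMSE [fm]:", np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
```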
The top quark pair production cross section is measured in proton-proton collisions at a center-of-mass energy of 5.02 TeV. The data collected in 2017 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 304 pb$^{-1}$, are analyzed. The measurement is performed using events with one electron and one muon of opposite sign, and at least two jets. The measured cross section is 60.3 ± 5.0 (stat) ± 2.8 (syst) ± 0.9 (lumi) pb. To reduce the statistical uncertainty, a combination with the result in the l+jets channel, based on 27.4 pb$^{-1}$ of data collected in 2015 at the same center-of-mass energy, is then performed, obtaining a value of 62.6 ± 4.1 (stat) ± 3.0 (syst+lumi) pb, with a total uncertainty of 7.9%, in agreement with the standard model.
Hadronic resonances are short-lived particles whose lifetimes are comparable to the lifetime of the hadronic phase of the system produced in ultrarelativistic nucleon-nucleon or nuclear collisions. These resonances are sensitive to hadronic-phase effects, such as rescattering and regeneration processes, which might affect the resonance yields and the shape of the transverse momentum spectra. In addition, event-shape observables like transverse spherocity are sensitive to hard and soft processes, and are useful tools to distinguish isotropic and jetty events in pp collisions. Studying the dependence of resonance yields on transverse spherocity and multiplicity allows us to understand the resonance production mechanism as a function of event topology and system size, respectively. Furthermore, measurements in small systems are used as a reference for heavy-ion collisions and are helpful for tuning Quantum Chromodynamics-inspired event generators. In this contribution, we present recent results on K∗(892)0 production obtained by the ALICE experiment in pp collisions at several collision energies and event multiplicities, and as a function of transverse spherocity. The results include the transverse momentum spectra, the yields and their ratios to the yields of long-lived particles. The measurements will be compared with the corresponding results from models such as PYTHIA 8 and EPOS-LHC, and with measurements at lower energies.
The non-perturbative QCD effects involved in radiative tau decay $(\tau^- \rightarrow \pi^- \nu_\tau \gamma)$ are encoded in two form factors: the vector ($F_V$) and the axial-vector ($F_A$) form factors. We present the computation of these form factors using light-cone sum rules. The form factors involved in this decay are the same as those involved in radiative pion decay, with the crucial difference that the momentum transfer squared, $t$, of the pion-photon system is positive, which makes these form factors timelike; moreover, since $t$ can now take values up to $m_\tau^2$, real hadronic resonances can be produced. We calculate the analytical form of these form factors using the method of light-cone sum rules and present the decay width and the invariant-mass spectrum of the $\pi-\gamma$ system. We find the structure-dependent parameter $\gamma$, i.e. the ratio of the axial-vector to vector form factor at zero momentum transfer, to be in good agreement with the experimental determination.
$b\to s\tau\tau$ and $b\to c\tau \nu$ measurements are highly motivated for addressing lepton-flavour-universality-violating (LFUV) puzzles, such as the $R_{D^{(*)}}$, $R_{J/\psi}$ and $R_{K^{(\ast)}}$ anomalies raised by the data of LHCb, Belle and BaBar. The planned operation of future $e^-e^+$ colliders as a $Z$ factory provides a great opportunity to conduct such measurements, because of the relatively high production rates and reconstruction efficiency for $B$ mesons at the $Z$ pole. Here we pursue a systematic sensitivity study for these measurements at future $Z$ factories. The implications of the outcomes for LFUV new physics are also explored.
The High Luminosity upgrade of the LHC (HL-LHC) places unprecedented requirements for background monitoring and luminosity measurements. The CMS Tracker Endcap Pixel Detector (TEPX) will be adapted to provide high-precision online measurements of bunch-by-bunch luminosity and beam-induced background. The implementation of dedicated triggering and readout systems, the real-time clustering algorithm on an FPGA and the expected performance are discussed. The innermost ring of the last layer (D4R1) will be operated independently from the rest of TEPX enabling beam monitoring during the LHC ramp and during unqualified beam conditions. The system optimisation and the dedicated timing and trigger infrastructure for D4R1 are also presented.
The precise determination of the luminosity in a collider is of crucial importance for any physics cross-section measurement, since it directly translates into the precision of the cross-section determination. In a muon collider, dense muon beams are necessary to achieve the target luminosity; these beams generate very high fluxes of particles coming from muon decays along the beam pipe. Due to the presence of ad hoc shielding structures, designed to mitigate the effect of the beam-induced background, the forward region of the detector cannot host instrumentation for the determination of the luminosity, as in the standard methods adopted by the LHC experiments.
In this poster an alternative way to determine this fundamental parameter is proposed, taking inspiration from flavour factories such as Belle2 and BES, where the luminosity is measured by counting $e^+ e^- \to e^+ e^-$ Bhabha events, whose cross section is theoretically known with high precision. The reconstruction efficiency of large-angle muon Bhabha ($\mu^+ \mu^- \to \mu^+ \mu^-$) events at 1.5 TeV centre-of-mass energy is estimated at a muon collider via full detector simulation, taking into account the beam-induced background effects. Kinematic requirements are defined to optimize the signal-to-background ratio, and the statistical uncertainty on the muon collider luminosity measurement that can be reached with this method is estimated.
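The counting principle behind the method is the standard one (our notation):

```latex
\mathcal{L} = \frac{N_{\mu\mu}}{\varepsilon\, \sigma_{\mu\mu}^{\rm fid}},
\qquad
\left.\frac{\delta\mathcal{L}}{\mathcal{L}}\right|_{\rm stat} = \frac{1}{\sqrt{N_{\mu\mu}}},
```

where $N_{\mu\mu}$ is the number of selected $\mu^+\mu^-\to\mu^+\mu^-$ events, $\varepsilon$ the reconstruction efficiency within the kinematic requirements, and $\sigma_{\mu\mu}^{\rm fid}$ the theoretical fiducial cross section.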
The LUXE experiment (LASER Und XFEL Experiment) is a new experiment in planning at DESY Hamburg using the electron beam of the European XFEL. LUXE is intended to study collisions between a high-intensity optical LASER and 16.5 GeV electrons from the XFEL electron beam, as well as collisions between the optical LASER and high-energy secondary photons. The physics objectives of LUXE are processes of Quantum Electrodynamics (QED) at the strong-field frontier, where the electromagnetic field of the LASER is above the Schwinger limit. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum, similar to Hawking radiation from black holes. LUXE intends to measure the positron production rate in an unprecedented LASER intensity regime. An overview of the LUXE experimental setup is given, in the context of the field of high-intensity particle physics. The foreseen detector systems and their sensitivity are presented. Finally, the prospects for studying BSM physics are also discussed.
Jet identification tools are crucial for new physics searches at the LHC and at future colliders. We introduce the concept of Mass Unspecific Supervised Tagging (MUST) which relies on considering both jet mass and transverse momentum varying over wide ranges as input variables - together with jet substructure observables - of a multivariate tool. This approach not only provides a single efficient tagger for arbitrary ranges of jet mass and transverse momentum, but also an optimal solution for the mass correlation problem inherent to current taggers. By training neural networks, we build MUST-inspired generic and multi-pronged jet taggers which, when tested with various new physics signals, clearly outperform the variables commonly used by experiments to discriminate signal from background. These taggers are also efficient to spot signals for which they have not been trained. Taggers can also be built to determine, with a high degree of confidence, the prongness of a jet, which would be of utmost importance in case a new physics signal is discovered.
Heavy quarks (charm and beauty) are produced at the initial stages of relativistic hadronic collisions in hard-scattering processes, and the study of their production in proton-proton (pp) collisions is an important test for calculations based on perturbative Quantum Chromodynamics (pQCD). Analysis of heavy-flavour production as a function of charged-particle multiplicity provides insight into the processes occurring at the partonic level and the interplay between the hard and soft particle production mechanisms in pp collisions.
In this poster, measurements of open heavy-flavour production as a function of multiplicity, via the study of the $\mathrm{D}$-meson self-normalized yields in pp collisions at a center-of-mass energy of $\sqrt{s} = 13$ TeV, are presented. The $\mathrm{D}$-meson yields are measured in different $p_{\rm{T}}$ intervals from 1 GeV/$c$ to 24 GeV/$c$ at midrapidity via their hadronic decay channels. The $\mathrm{D}$-meson self-normalized yield is found to increase faster than linearly with increasing charged-particle multiplicity. The measurements are compared to PYTHIA 8 calculations and with the results at $\sqrt{s} = 7$ TeV.
The ratios of the $\mathrm{B_c(2S)^+}$ to $\mathrm{B_c^+}$, $\mathrm{B_c^*(2S)^+}$ to $\mathrm{B_c^+}$, and $\mathrm{B_c^*(2S)^+}$ to $\mathrm{B_c(2S)^+}$ production cross sections are measured in proton-proton collisions at 13 TeV, using a data sample collected by the CMS experiment at the LHC corresponding to an integrated luminosity of 143 fb$^{-1}$. The three measurements are made in the $\mathrm{B_c^+}$ meson phase-space region defined by transverse momentum $p_{\rm T} > 15$ GeV and absolute rapidity $|y| < 2.4$, with the excited $\mathrm{B_c^{(*)}(2S)^+}$ states reconstructed through the $\mathrm{B_c^+}\pi^+\pi^-$ channel, followed by the $\mathrm{B_c^+ \to J/\psi\,\pi^+}$ and $\mathrm{J/\psi \to \mu^+\mu^-}$ decays. The $\mathrm{B_c(2S)^+}$ to $\mathrm{B_c^+}$, $\mathrm{B_c^*(2S)^+}$ to $\mathrm{B_c^+}$, and $\mathrm{B_c^*(2S)^+}$ to $\mathrm{B_c(2S)^+}$ cross-section ratios, which include the unknown $\mathrm{B_c^{(*)}(2S)^+ \to B_c^{(*)+}\pi^+\pi^-}$ branching fractions, are (3.47 ± 0.63 (stat) ± 0.33 (syst))%, (4.69 ± 0.71 (stat) ± 0.56 (syst))%, and 1.35 ± 0.32 (stat) ± 0.09 (syst), respectively. None of these ratios shows a significant dependence on the $p_{\rm T}$ or $|y|$ of the $\mathrm{B_c^+}$ meson. The normalized dipion invariant-mass distributions from the $\mathrm{B_c^{(*)}(2S)^+ \to B_c^{(*)+}\pi^+\pi^-}$ decays are also reported.
Recent measurements of the Higgs boson production cross section in the H->WW decay channel, using proton-proton collision data collected with the CMS experiment at 13 TeV, will be presented. In particular, Higgs boson production in association with leptonically decaying vector bosons is targeted, and H->WW decays in which at least one W boson decays to leptons are considered. Results for both the inclusive production cross section and the cross sections in the STXS (simplified template cross section) scheme will be presented.
The discovery of the Higgs boson in 2012 by the CMS and ATLAS collaborations marked the start of the exploration of the Higgs sector of particle physics. The properties of the Higgs sector under CP symmetry have been investigated mostly in its couplings to gauge bosons. With the full Run 2 data-taking period it became possible to study the CP properties of the Yukawa coupling of the Higgs boson to fermions, and in particular to tau leptons. This was done by reconstructing the decay planes of the two tau leptons and measuring their angular correlation. The measured mixing angle between CP-even and CP-odd couplings is (4±17)°, consistent with the Standard Model prediction of a pure CP-even coupling, and constrains the allowed phase space for possible BSM scenarios. A pure CP-odd hypothesis is excluded at 99.7% confidence level.
A measurement of four-top-quark production using proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the Large Hadron Collider with an integrated luminosity of 139/fb is presented. A new result uses events with a single lepton (electron or muon) or an opposite-sign lepton pair, in association with multiple jets. The measured four-top-quark production cross section is found to be 26 +17 -15 fb, with a corresponding observed (expected) significance of 1.9 (1.0) standard deviations over the background-only hypothesis. The result is combined with the previous measurement performed by the ATLAS Collaboration in the multilepton final state. The combined four-top-quark production cross section is measured to be 25 +7 -6 fb, with a corresponding observed (expected) signal significance of 4.7 (2.6) standard deviations over the background-only predictions. The result is consistent within 2.0 standard deviations with the Standard Model expectation of 12.0 +/- 2.4 fb.
A measurement of the associated production of a single top quark and a W boson in final states with an electron or muon and jets, using pp collisions at $\sqrt{s} = 13$ TeV collected by the CMS detector at the CERN LHC, is presented. The data used correspond to an integrated luminosity of 36 fb$^{-1}$. This result is the first observation of the tW process in final states containing a muon or electron and jets, with an observed significance clearly exceeding 5 standard deviations. The measured signal strength is μ = 1.24 ± 0.18, consistent with unity. The inclusive cross section is determined to be 89 ± 4 (stat) ± 12 (syst) pb.
After the discovery of the Higgs boson and its characterisation, we are entering the precision era of Higgs physics, where robust measurements are needed to spot any sign of BSM physics. Among the many available tools, fiducial measurements are among the most used in HEP due to their model independence, longevity, and easy comparison with theoretical predictions. The production cross section is measured by removing detector effects and backgrounds using an unfolding procedure.
Integrated and differential fiducial cross sections for the production of the Higgs boson in pp collisions at the LHC at $\sqrt{s}$ = 13 TeV via the H$\rightarrow$ZZ$\rightarrow$4$\ell$ ($\ell$=$e$,$\mu$) channel are presented. The dataset was collected by the CMS experiment in 2016, 2017, and 2018, corresponding to a validated integrated luminosity of 137 fb$^{-1}$.
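As a schematic of what such an unfolding does (a toy sketch only; the CMS analysis uses regularised unfolding methods, and all numbers below are invented), the detector response can be encoded in a matrix relating truth to reconstructed bins:

# Toy unfolding sketch: not the CMS analysis code, just the idea of removing
# detector effects with a response matrix after background subtraction.
import numpy as np

# response[i, j] = probability that an event generated in truth bin j
# ends up reconstructed in detector bin i (placeholder values).
response = np.array([[0.8, 0.1, 0.0],
                     [0.2, 0.7, 0.2],
                     [0.0, 0.2, 0.8]])

observed   = np.array([120.0, 300.0, 180.0])  # reconstructed event counts
background = np.array([ 20.0,  40.0,  30.0])  # estimated background counts

# Subtract backgrounds, then invert the response to recover truth-level yields.
truth_yield = np.linalg.solve(response, observed - background)

lumi = 137.0  # integrated luminosity in fb^-1 (placeholder)
print(truth_yield / lumi)  # fiducial cross section per bin [fb]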
In high-energy collisions, such as those achieved at the LHC, particle production is dominated by soft-QCD processes. Soft production is described by non-perturbative QCD and challenges existing phenomenological models. Global observables such as the multiplicity and rapidity dependence of particle production are among the most fundamental measurements for improving and constraining these models.
In this talk, we will present the multiplicity and pseudorapidity distributions of inclusive photons and charged particles at forward rapidities in p-Pb collisions at $\sqrt{s\rm_{NN}}$ = 5.02 TeV. The Photon Multiplicity Detector measures photon production within 2.3 < $\eta$ < 3.9. The Silicon Pixel Detector and the Forward Multiplicity Detector together measure charged particles over a wide range of -3.4 < $\eta$ < 5.0. Results on the centrality evolution of particle production in p-Pb collisions will be presented for both photons and charged particles, in comparison with phenomenological models (such as HIJING and DPMJET) based on different initial conditions and particle production mechanisms.
The increasing center-of-mass energy of proton-proton collisions and higher luminosities at the CERN Large Hadron Collider make it possible to study rare processes of the Standard Model (SM). In this poster, measurements of both the inclusive and differential cross sections of top-quark–antiquark production in association with a Z boson (ttZ) are presented. Collision data corresponding to a total integrated luminosity of 139/fb, recorded in the years 2015-2018 with the ATLAS detector at a center-of-mass energy of 13 TeV, are analysed. Both inclusive and differential measurements are performed by selecting final states with either three or four isolated leptons (electrons or muons). The inclusive cross section is measured to be 0.99 +/- 0.05 (stat.) +/- 0.08 (syst.) pb, in agreement with the most precise theoretical predictions of the SM. In the differential measurements, both absolute and normalised cross sections are measured as a function of a number of variables which probe the kinematics of the ttZ system. Differential measurements are performed at particle and parton levels for specific fiducial phase-space volumes and are compared with theoretical predictions at different levels of precision. Based on chi^2/ndf and p-value compatibility comparisons, good agreement is observed between the measured differential cross sections and the SM predictions.
We explore a new center-of-mass energy of 5 TeV to study diboson production in proton-proton collisions, using data collected with the CMS detector. The WW, WZ, and ZZ cross sections are measured by analyzing events with two, three, or four charged leptons in the final state. These measurements are compared with the best available theoretical predictions and with results from other experiments.
Explaining the tiny neutrino masses and non-zero mixings has been one of the key motivations for going beyond the framework of the Standard Model (SM). We discuss a collider-testable model for generating neutrino masses and mixings via a radiative seesaw mechanism. The fact that the model does not require any additional symmetry to forbid tree-level seesaws makes its collider phenomenology interesting. The model includes multi-charged fermions/scalars at the TeV scale to realize the Weinberg operator at the 1-loop level. After deriving the constraints on the model parameters resulting from the neutrino oscillation data as well as from the upper bound on the absolute neutrino mass scale, we discuss the production, decay and resulting collider signatures of these TeV-scale fermions/scalars at the Large Hadron Collider (LHC). We consider both Drell-Yan and photo-production. The bounds from the neutrino data indicate the possible presence of a long-lived multi-charged particle (MCP) in this model. We obtain bounds on these long-lived MCP masses from the ATLAS search for abnormally large ionization signatures. When the TeV-scale fermions/scalars undergo prompt decay, we focus on the 4-lepton final states and obtain bounds from different ATLAS 4-lepton searches. We also propose 4-lepton event selection criteria designed to enhance the signal-to-background ratio in the context of this model.
Detector concepts are being developed for the foreseen electron-positron International Linear Collider (ILC) in Japan. Set to run as a Higgs factory, the ILC will address a rich scientific program from electroweak physics to BSM. The detectors are being optimized for precision physics in a range of energies between 90 GeV and 1 TeV. This poster will summarize the required performance of the detectors, the proposed implementations and the readiness of the different technologies needed for deployment at the ILC.
The ATLAS Muon Spectrometer is undergoing an extensive Phase-I upgrade to cope with the future LHC runs of high luminosity of up to 7.5×10$^{34}$ cm$^{-2}$s$^{-1}$. The innermost and first station of the muon end-cap system, the Small Wheel, will be replaced by the New Small Wheel, which has high trigger and precision-tracking capabilities. This is achieved by a combination of two detector technologies: Small-Strip Thin Gap Chambers (sTGC) and Micro-Mesh Gaseous Structures (MM). MM is used for precision tracking, while sTGC serves as the primary trigger detector because of its timing resolution: the drift time of most electrons is shorter than one bunch-crossing period (25 ns), and the front-end ASICs (Application Specific Integrated Circuits), the VMMs, can make time measurements with a precision of 2 ns along with the amplitude measurement. We are working extensively on the integration and commissioning of the front-end electronics for the sTGC chambers. Given the complexity of the system, we are working to resolve many challenges in testing the large number of physical channels (~354K, of three different types – pads, strips and wires) and ASICs (more than 11K). We will present our experiences with the trigger and the readout performance studies of the electronics.
Longitudinal polarisation of the weak bosons is a direct consequence of the electroweak symmetry breaking mechanism, providing an insight into its nature, and is instrumental in searches for physics beyond the Standard Model. We perform a polarisation study of diboson production in the $pp \to e^+ \nu_e \mu^- \bar\nu_\mu$ process at NNLO QCD in a fiducial setup inspired by experimental measurements at ATLAS. This is the first polarisation study at NNLO. We employ the double-pole-approximation framework for the polarised calculation, and investigate NNLO effects arising in differential distributions.
The measurement of azimuthal correlations between two particles is a powerful tool to investigate the properties of the strongly-interacting nuclear matter created in ultra-relativistic heavy-ion collisions. In particular, studying the near-side and away-side hadron yields associated with trigger particles can provide important information for understanding both the jet-medium interaction and the hadron production mechanism. We study two-particle correlations with V0s (K0s, Λ/Λ̄) and charged hadrons as trigger particles of transverse momentum 8 < pT,trig < 16 GeV/c, and associated charged particles of 1 GeV/c < pT,assoc < pT,trig, at mid-rapidity in pp and Pb–Pb collisions at a center-of-mass energy of 5.02 TeV per nucleon pair. After subtracting the contributions of the flow background v2 and v3, the per-trigger yields are extracted for two-particle azimuthal differences |∆φ| < 0.9 on the near-side and |∆φ − π| < 1.2 on the away-side.
The ratio of the per-trigger yields in Pb–Pb collisions with respect to pp collisions, IAA, is measured on the near-side and away-side in the most central 0–10% collisions. On the near-side, a significant enhancement of IAA, from 1.5 to 2 for the different particle species, is observed at the lowest pT,assoc. On the away-side, suppression to the level of IAA ≈ 0.6 for pT,assoc > 3 GeV/c is observed, as expected from strong in-medium energy loss, while an enhancement reaching 1.4 is found at the lowest pT,assoc. The data are compared to the AMPT, HIJING and EPOS models. Most calculations qualitatively describe the near-side and away-side yield modifications at intermediate and high pT,assoc.
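For orientation, the quantities involved are the per-trigger yield and its Pb–Pb to pp ratio (standard definitions, written here for clarity rather than quoted from the poster):
\begin{equation}
Y = \frac{1}{N_{\rm trig}}\int \left(\frac{\mathrm{d}N_{\rm assoc}}{\mathrm{d}\Delta\varphi} - B_{\rm flow}\right)\mathrm{d}\Delta\varphi\,, \qquad I_{\rm AA} = \frac{Y_{\rm Pb\text{-}Pb}}{Y_{\rm pp}}\,,
\end{equation}
where $B_{\rm flow}$ denotes the $v_2$ and $v_3$ flow background and the integral runs over the near-side or away-side $\Delta\varphi$ window.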
The muon system of the CMS experiment has been instrumented with two wheels of triple-GEM detectors in order to ensure redundancy in the pseudo-rapidity region 1.55-2.2, keeping the trigger rate at an acceptable level while not compromising the CMS physics potential in Run 3 of the LHC. The station, named GE1/1, provides two additional muon hit measurements which will improve the muon tracking and triggering performance in combination with the existing CSC detectors. As the commissioning phase of the detector is ongoing, prompt assessment of the muon detection performance is crucial for adjusting the operating parameters of the detector and its electronics. This contribution will present a set of analysis tools developed for detector performance monitoring, based on tools common to all the CMS muon subdetectors. Validation of the analysis based on simulations will be discussed, together with preliminary results obtained from cosmic-ray events.
The High Luminosity Large Hadron Collider (HL-LHC) will deliver five times the LHC nominal instantaneous luminosity, after a series of upgrades that will take place during the shutdown of 2024–2026. The ATLAS Hadronic Calorimeter (TileCal) will require the complete replacement of its readout electronics in order to adapt its acquisition system to the increased radiation levels, trigger rates, and high pile-up conditions of the HL-LHC era. The upgraded readout electronics will digitize the PMT signals from every TileCal cell for every bunch crossing and will transmit them directly to the off-detector electronics. In the counting rooms, the off-detector electronics will store the calorimeter signals in pipelined buffers while transmitting reconstructed trigger objects to the first level of trigger at 40 MHz. The TileCal upgrade project has undergone an extensive R&D program and several test beam campaigns. A Demonstrator module has been assembled using the upgraded on-detector electronics and tested in the ATLAS experiment. The Demonstrator module is operated and read out using a prototype of the Tile PreProcessor (TilePPr), which also permits integrating the Demonstrator module into the present ATLAS TDAQ system. This contribution presents the status and performance of the Demonstrator module in the ATLAS experiment.
With the end of Run 2, the LHC has delivered only 4% of the collision data expected to be available during its lifetime. The next data-taking campaign - Run 3 - will double the integrated luminosity the LHC accumulated in 10 years of operation. Run 3 will be the herald of the HL-LHC era, in which 90% of the total LHC integrated luminosity (4 ab$^{-1}$) will be accumulated, allowing ATLAS to perform several precision measurements to constrain the Standard Model (SM) in as yet unexplored phase-spaces, in particular in the Higgs sector, only accessible at the LHC. Direct searches have so far provided no indication of new physics beyond the Standard Model; however, they can be complemented by indirect searches, which allow the reach to be extended to higher scales. Indirect searches are based on the ability to perform very precise measurements, a highly complex task at a hadron collider that will require tight control of theoretical predictions, reconstruction techniques, and detector operation. Moreover, populating extreme regions of phase-space for multi-differential production cross-section analyses will require the development and validation of Monte Carlo phase-space biasing techniques and efficient integration methods to produce the billions of events needed to cope with higher luminosities.
To answer the quest for high-precision measurements in a high-luminosity environment, a comprehensive upgrade of the detector and associated systems was devised, planned to be carried out in two phases. The Phase-I upgrade program foresees new features for the muon detector, for the electromagnetic calorimeter trigger system, and for the whole trigger and data acquisition chain, and will operate to accumulate about 350 fb$^{-1}$ of integrated luminosity during Run 3. Run 3 will mark the debut of a new trigger system designed to cope with more than 80 simultaneous collisions per bunch crossing. After this, ATLAS will proceed with the Phase-II upgrade to prepare for the high-luminosity frontier, where the ATLAS experiment will face more than 200 simultaneous collisions per bunch crossing and high radiation levels for many sub-systems. The Phase-II upgrade comprises a completely new all-silicon tracker with extended rapidity coverage that will replace the current inner tracker detector; the calorimeters and muon systems will have their trigger and data acquisition systems fully redesigned, allowing the implementation of a free-running readout system. Finally, a new subsystem called the High Granularity Timing Detector will aid the track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. A final ingredient, relevant to almost all measurements, is a precise determination of the delivered luminosity with systematic uncertainties below the percent level. This challenging task will be achieved by collecting the information from several detector systems using different and complementary techniques.
The presentation will focus on physics goals within the reach of Run 3 and on the status of the ongoing detector upgrades. An outlook toward the HL-LHC challenges will also be presented.
FASER (ForwArd Search ExpeRiment) is a new, small and inexpensive experiment designed to search for light, weakly interacting particles during Run 3 of the LHC. Such particles may be produced in large quantities in proton-proton collisions, travel for hundreds of meters along the beam axis, and decay into two charged Standard Model particles. To reach its physics goals, good hit resolution and track reconstruction, able to separate the two closely-spaced, oppositely charged tracks, are essential. In this poster, I review the physics discovery potential of FASER and the status of the track reconstruction, which is based on the ACTS toolkit. ACTS aims to provide an experiment-independent toolkit for track reconstruction.
A novel outreach project is presented that makes use of playing cards – one of the most ubiquitous toys around the world – to communicate physics in a fun, engaging manner. A custom deck of cards has been designed to inspire an interest in physics while being widely appealing to the general public and useful for gameplay, magic and cardistry. In the course of bringing this project to completion, many different social media platforms and channels are explored and used to facilitate communication about the project and physics in general. The project has presented opportunities for unusual collaborations. The cards will soon be in production and have excellent potential for use in outreach events and educational settings.
Total Ionizing Dose (TID) tests and measurements are crucial requirements for the qualification of solid-state particle sensors and electronic control systems in all present LHC experiments and future upgrades.
These measurements can be performed not only in facilities explicitly built for this purpose but, with some care, also in medical or biological research facilities when some minimum requirements are met. This poster presents the planning and realization of SiPM X-ray irradiations for TID measurements performed at the Italian TIFPA-INFN Trento Center laboratory.
The minimum flexibility required of the X-ray irradiation set-up and of the dose measurement system will be described in detail, together with the planning and realization of the irradiation.
Finally, the limitations observed in these measurements, how they can be minimized, and the final results will be presented.
Thanks to its high luminosity and center-of-mass energy, the future FCC-hh collider will allow us to probe processes with clean but rare final states that are inaccessible at the LHC. The study of diboson production processes offers a promising way of indirectly constraining New Physics in the context of the Higgs boson. Specifically, the diphoton plus leptonic decay channels of the Wh and Zh production processes are examples of the aforementioned clean but rare final states. I will discuss our study of these channels at the FCC-hh in the SMEFT framework and how doubly differential distributions can be used to gain even better sensitivity to certain higher-dimensional EFT operators.
The high-luminosity upgrade of the LHC (HL-LHC) is foreseen to reach an instantaneous luminosity a factor of five to seven times the nominal LHC design value. The resulting, unprecedented requirements for background monitoring and luminosity measurement create the need for new high-precision instrumentation at CMS, using radiation-hard detector technologies. This contribution presents the strategy for bunch-by-bunch online luminosity measurement based on various detector technologies. A main component of the system is the Tracker Endcap Pixel Detector with dedicated triggers for online measurement of luminosity and beam-induced background using pixel cluster counting on an FPGA. The potential of the exploitation of the outer tracker, the hadron forward calorimeter and muon trigger objects is also discussed, as well as the concept of a standalone luminosity and beam-induced background monitor using Si-pad sensors.
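The underlying relation in such rate-based luminometry is the standard one (stated here for context; the symbols are generic, not CMS-specific):
\begin{equation}
\mathcal{L}_{\rm inst} = \frac{\mu_{\rm vis}\, n_b\, f_{\rm rev}}{\sigma_{\rm vis}}\,,
\end{equation}
where $\mu_{\rm vis}$ is the mean number of visible interactions per bunch crossing (e.g. from pixel cluster counting), $n_b$ the number of colliding bunch pairs, $f_{\rm rev}$ the LHC revolution frequency, and $\sigma_{\rm vis}$ the visible cross section calibrated in van der Meer scans.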
The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive Phase II upgrade program to prepare for the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector in CMS will measure minimum ionizing particles (MIPs) with a time resolution of 30-40 ps for MIP signals at a rate of 2.5 Mhit/s per channel at the beginning of HL-LHC operation. The precision time information from this MIP Timing Detector (MTD) will reduce the effects of the high levels of pileup expected at the HL-LHC, bringing new capabilities to the CMS detector. The barrel timing layer (BTL) of the MTD will use sensors that are based on LYSO:Ce scintillation crystals coupled to SiPMs with TOFHIR ASICs for the front-end readout. In this talk we will present motivations for precision timing at the HL-LHC and an overview of the MTD BTL design, including ongoing R&D studies targeting enhanced timing performance and radiation tolerance.
High-multiplicity events in small collision systems (pp, p-Pb) at LHC energies exhibit soft-physics phenomena that are associated with collective dynamics of the quark-gluon plasma (QGP) in large collision systems, e.g. azimuthal correlations between soft particles with large pseudo-rapidity separation. Jet quenching is likewise a necessary consequence of the formation of a QGP. However, within the precision of current measurements there is no significant evidence of jet quenching in small systems. Improving the experimental sensitivity to jet quenching in small collision systems is therefore essential to address the question of the limits of QGP formation.
In Run 3, the LHC will carry out a brief run with O-O collisions at $\sqrt{s_{NN}}$ = 6.37 TeV. The O-O system bridges the gap in system size between pp and p-Pb on one side and Pb-Pb on the other, and provides measurement channels in which quenching effects are expected to be both observable experimentally and calculable theoretically. The poster will present projections for high-precision measurements of jet quenching in the O-O run by ALICE.
This talk discusses the prospects for exploring the top squark-neutralino mass-degenerate corridor under various scenarios for the amount of data collected and for the assumptions on systematic uncertainties. The analysis technique employs a deep neural network fed with variables sensitive to top quark spin correlation and polarization in top quark pair production. In particular, different improvements in experimental and theoretical uncertainties are studied in terms of their impact on the sensitivity, as well as the amount of data collected, e.g. proton-proton collisions at the LHC collected during Run 2, the upcoming Run 3 and the HL-LHC. Further improvements of the method and sensitivity are possible through multi-differential measurements of the full spin density production matrix and other angular observables of top quark pairs, and we highlight the impact of and plans for these.
Explicit expressions for quantum fluctuations of energy in subsystems of a hot relativistic gas of spin-1/2 particles are derived. The results depend on the form of the energy-momentum tensor used in the calculations, which is a feature described as pseudo-gauge dependence. However, for sufficiently large subsystems the results obtained in different pseudo-gauges converge and agree with the canonical-ensemble formula known from statistical physics. As different forms of the energy-momentum tensor of a gas are a priori equivalent, our finding suggests that the concept of quantum fluctuations of energy in very small thermodynamic systems is pseudo-gauge dependent. On the practical side, the results of our calculations determine a scale of coarse graining for which the choice of the pseudo-gauge becomes irrelevant.
In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet triggers are designed to identify heavy-flavour content in real time and provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets, while retaining a high efficiency for selecting b-jets and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as tracks and their corresponding vertices must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. We present an overview of the ATLAS strategy for online b-jet selection for LHC Run 2, including the use of novel methods and sophisticated algorithms designed to face the above-mentioned challenges. The evolution of the performance of the b-jet triggers in Run 2 data is presented, including the use of novel triggers designed to select events containing b-jets in heavy-ion collisions.
Because of its ability to systematically capture beyond-Standard-Model (SM) effects, effective field theory (EFT) has received much attention in phenomenological analyses of, e.g., LHC data. Recent theoretical studies have focused on operator basis construction and loop-level calculations in EFTs. In this work, we construct the complete basis for scalar $\phi^4$ EFT up to mass dimension 12 with the help of the Hilbert series method. We present high-loop calculations (up to 5 loops), and find unexpected zeros and interesting symmetric structures in the anomalous dimension matrix. The method we use can be extended to more general theories, e.g. the SMEFT, and applied in high-precision measurements within the SMEFT framework at the LHC.
This poster presents a search for charged Higgs bosons decaying into WW or WZ bosons, involving experimental signatures with two leptons of the same charge, or three or four leptons with a variety of charge combinations, missing transverse momentum and jets. Particular focus will be given to the four-lepton channel signature. A data sample of proton–proton collisions at a centre-of-mass energy of 13 TeV recorded with the ATLAS detector at the Large Hadron Collider between 2015 and 2018 is used. The data correspond to a total integrated luminosity of 139 fb−1. The search is guided by a type-II seesaw model that extends the scalar sector of the Standard Model with a scalar triplet, leading to a phenomenology that includes doubly and singly charged Higgs bosons. Two scenarios are explored, corresponding to the pair production of doubly charged H bosons, or the associated production of a doubly charged H boson and a singly charged H boson. No significant deviations from the Standard Model predictions are observed. Doubly charged H bosons are excluded at 95% confidence level up to masses of 350 GeV and 230 GeV for the pair and associated production modes, respectively.
Many extensions of the Standard Model predict the production of Dark Matter in association with Higgs bosons.
This search examines the final state of missing transverse momentum accompanied by a bb pair coming from a Higgs boson decay. For this purpose, proton-proton collision data produced at a 13 TeV centre-of-mass energy and recorded by the ATLAS experiment at the LHC, amounting to an integrated luminosity of 139 fb−1, are used. The increase in integrated luminosity, in conjunction with many analysis optimizations, results in a better sensitivity in comparison to previous iterations. No significant deviation from the Standard Model is observed, and the results are interpreted in the context of 2-Higgs-doublet models with an additional vector or pseudoscalar mediator.
This poster presents a search for electroweak production of mass-degenerate chargino-neutralino pairs in the context of R-parity-conserving supersymmetric simplified models, in which the chargino decays into a W boson and the lightest neutralino, while the next-to-lightest neutralino decays into either a Higgs or a Z boson, in addition to the lightest neutralino. This search concentrates on final states characterized by the presence of one isolated charged lepton (either an electron or a muon) accompanied by jets and missing transverse momentum. The analysis exploits an integrated luminosity of 139 fb-1, corresponding to the full Run 2 proton-proton collision dataset recorded by the ATLAS detector at the Large Hadron Collider. No statistically significant deviation from the Standard Model expectation is observed. Expected and observed 95% C.L. limits are set as a function of the chargino and lightest-neutralino masses, assuming pure wino production cross-sections.
A novel search for exotic decays of the Higgs boson to pairs of long-lived
neutral particles, each decaying to a bottom quark pair, is performed
using 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collision data
collected with the ATLAS detector at the LHC. Events consistent with the
production of a Higgs boson in association with a $Z$ boson are analyzed,
with the leptonic decay of the $Z$ mitigating the trigger challenges associated
with displaced objects. Long-lived particle (LLP) decays are reconstructed from
inner detector tracks as displaced vertices with high mass and track multiplicity
relative to Standard Model processes. The analysis selection requires the
presence of at least two displaced vertices, effectively suppressing Standard Model
backgrounds. The residual background contribution is estimated using a fully data
driven technique. No excess over the Standard Model prediction is observed, and
upper limits are set on the branching ratio of the Higgs boson to pairs of LLPs.
Branching ratios of >=10% are excluded at the 95% confidence level for
LLP mean proper lifetimes $c\tau$ as small as 4 mm and as large as 110 mm.
For LLP masses below 40 GeV, these results represent the most stringent
constraints to date for this range of proper lifetimes.
This poster presents the search for new resonances decaying into a W boson and a 125 GeV Higgs boson in the lvbb final state, where l = e± or μ±, in pp collisions at √s = 13 TeV. The search includes a channel requiring one lepton and missing transverse energy, as well as a channel with only missing transverse energy for the case where the lepton is not reconstructed. The data used correspond to a total integrated luminosity of 139 fb−1, collected with the ATLAS detector at the Large Hadron Collider with the full Run-2 dataset. The search is conducted by examining the reconstructed invariant and transverse mass distributions of Wh candidates for evidence of a localized excess in the mass range of 400 GeV up to 5 TeV. Upper limits are placed at the 95% confidence level on the production cross-section times branching fraction of heavy W' resonances in heavy-vector-triplet models.
A multi-TeV (√s = 1.5-10 TeV) muon collider providing O(1/ab) integrated luminosity will be a great opportunity to probe the most intimate nature of the Standard Model (SM) and the electroweak symmetry breaking mechanism, allowing the precise measurement of the Higgs couplings to several SM particles. The study of the Higgs boson couplings to the second generation of fermions is of particular interest due to its sensitivity to a whole class of new physics models. On the other hand, this measurement is extremely challenging because of the small branching ratio. We explored, for the first time, the search for H→ccbar at a multi-TeV muon collider. The μ+μ− → Hνν̄ → cc̄νν̄ signal process has been fully simulated and reconstructed with a preliminary detector design, along with the main backgrounds. A c-quark tagging algorithm has been developed, combining several observables into a single discriminator using machine learning techniques, with the goal of improving the rejection of jets coming from b-quark and u-d-s-g hadronization. A first estimate of the precision on the Higgs coupling to the c quark reachable with a muon collider machine will be presented.
A muon collider represents the ideal machine to reach very high center-of-mass energies (√s = 1.5-10 TeV) and luminosities O(0.5-10/ab). A large number of Higgs bosons will be produced, mainly through vector boson fusion (VBF) processes. The VBF production through Z bosons (ZZH) could be difficult to disentangle from the dominant WW fusion process, since the final-state VBF muons, produced in the very forward region, could escape the detector. As a consequence, the H→ZZ decay process turns out to be favoured for probing exclusively the Higgs boson coupling to Z bosons at a multi-TeV muon collider. In addition, a feasibility study of the H→ZZ search in such an environment is mandatory before assessing the feasibility of more appealing measurements, such as the trilinear and quartic Higgs boson couplings.
We will present, for the first time, a feasibility study of the H→ZZ* → 4µ process at a multi-TeV muon collider. The study of the 4-muon final state, performed on fully simulated Monte Carlo samples, allows us to optimize the muon reconstruction, thus providing feedback for the detector design. The reducible background induced by muons decaying in the beam pipe and the irreducible backgrounds from the Standard Model have been studied, together with dedicated machine learning techniques for their reduction. A first estimate of the precision on the Higgs coupling to Z bosons in the 4µ channel will be provided.
Small collision systems, such as pp or p-Pb, exhibit signatures of collective effects that are thought to be associated with quark-gluon plasma formation. The absence to date of jet quenching signals raises a question about the origin of the observed collectivity and calls for more accurate jet quenching measurements in small collision systems. In this poster, the ALICE Collaboration reports results of a novel approach to search for jet quenching effects in high-multiplicity pp collisions at $\sqrt{s}$ = 13 TeV, which are selected based on the charged-particle multiplicity measured using forward scintillator detectors. In the analysis, we look for a broadening of the azimuthal acoplanarity measured by the semi-inclusive distribution of charged jets recoiling from a high-transverse-momentum charged trigger hadron. Jet reconstruction is performed using the anti-$k_{\rm T}$ algorithm with $R$ = 0.4. The measured jet yield is corrected for uncorrelated background, including multi-parton interactions, by means of a data-driven statistical method. Comparison of the recoil jet distributions measured in minimum-bias and high-multiplicity events reveals a significant broadening in the high-multiplicity data, resembling jet quenching. However, a qualitatively similar feature is observed in pp data simulated with the PYTHIA 8 generator, which does not account for jet quenching. We will discuss the current status of the analysis and prospects for understanding the origin of this striking phenomenon.
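To illustrate the acoplanarity observable itself (a toy sketch with invented smearing, not the ALICE analysis code):

# Toy illustration of recoil-jet acoplanarity: the distribution of the azimuthal
# difference between a high-pT trigger hadron and the recoiling jets.
import numpy as np

def delta_phi(phi_jet, phi_trig):
    """Fold the azimuthal difference into [0, pi]."""
    dphi = np.abs(phi_jet - phi_trig) % (2.0 * np.pi)
    return np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

rng = np.random.default_rng(0)
phi_trig = rng.uniform(0.0, 2.0 * np.pi, 10000)
# Placeholder recoil jets: back-to-back with the trigger, smeared by sigma;
# a broader sigma mimics the acoplanarity broadening searched for in data.
sigma = 0.3
phi_jet = phi_trig + np.pi + rng.normal(0.0, sigma, 10000)

counts, edges = np.histogram(delta_phi(phi_jet, phi_trig), bins=30, range=(0, np.pi))
print(counts[-5:])  # yield concentrated near Delta-phi = pi (back-to-back)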
To address the incompleteness of the Standard Model (SM), many models introduce new gauge fields and interactions, which manifest as new particles with TeV-scale masses. Thus, it is imperative to understand particles and interactions at the TeV scale. An example of one such particle is the $Z^\prime$ boson, a heavy, neutral spin-1 gauge boson. Numerous ideas exist to probe the TeV scale, motivating a large volume of $Z^\prime$ searches at the LHC. However, those searches have failed to show signs of new physics. Possible explanations point to new physics having features different from what is traditionally assumed in $Z^\prime$ searches, remaining concealed in processes not yet investigated. In particular, existing searches targeting Drell-Yan processes rely on a sizable coupling of the $Z^\prime$ to light quarks ($g_{\ell}$). This talk focuses on a search for a $Z^\prime$ produced via vector boson fusion (VBF) processes, whose production rate is independent of $g_{\ell}$, and which has non-universal fermion couplings (NUFC). Scenarios with NUFC are motivated by the recent anomalies in the $B$-physics sector and the muon anomalous magnetic moment.
Several theoretical models of beyond the Standard Model (BSM) physics predict the production of new resonances at hadron collider experiments. This study, in particular, is focused on the search for quantum black holes (QBH) and for substructure of light- and heavy-flavor quarks in the photon + jet final state in proton-proton collisions at a centre-of-mass energy of 13 TeV, using the data collected by the CMS detector in the LHC Run-2 period. In the absence of a signal in the data, exclusion limits are set on the model parameters.
We present a search for non-resonant di-Higgs production in the HH->bbyy decay channel. The measurement uses 139 fb−1 of pp collisions recorded by the ATLAS experiment at a center-of-mass energy of 13 TeV. Selected events are separated into multiple regions, targeting both the SM signal and BSM signals with modified Higgs self-couplings. No excess with respect to the background expectations is found, and upper limits at 95% confidence level are set on the di-Higgs production cross sections and the Higgs trilinear coupling modifier.
The trilinear self-coupling of the Higgs boson can be directly accessed at the LHC via inclusive production of Higgs boson pairs. A search for non-resonant Higgs pair production via gluon-gluon fusion (ggF) as well as vector boson fusion (VBF) processes has been performed recently by the CMS collaboration with the complete LHC Run-2 proton-proton collision dataset at a center-of-mass energy of $\sqrt{s}$ = 13 TeV, in the most sensitive 2$\gamma$ + 2 b-jet inclusive final state. This presentation emphasizes the results for the inclusive di-Higgs production cross section as well as the estimates of the relevant coupling parameters.
A search for scalar top quark pair production at the LHC with the CMS experiment is presented. This search targets a region of parameter space where the kinematics of top squark pair production and top quark pair production are very similar, because the mass difference between the top squark and the neutralino is close to the top quark mass. The search is performed with the full Run 2 dataset of proton-proton collisions at a centre-of-mass energy of 13 TeV, collected by the CMS detector, using events containing dilepton pairs with opposite charge. A deep neural network (DNN) is used to separate the signal from the background.
A study of a long-lived dark Z boson that either couples directly to quarks or mixes kinetically with the Standard Model Z boson is performed. Production via a vector portal and via the Higgs portal is considered. The impact of additionally mixing the Standard Model Higgs boson with a dark Higgs boson on the production and decays of the dark Z is evaluated. Specifically, decays with a final state of displaced dimuons are considered, where the dark Z and the dark Higgs decay directly to a dimuon or indirectly, via dark scalars or fermions, to an even number of dimuons, which can give rise to final states with large muon multiplicities. The production and total cross sections of the processes of interest, as well as the decay widths and decay lengths, are calculated using Monte Carlo simulation within the MadGraph5_aMC@NLO v2.7.0 framework. The sensitivity of such searches in Runs 2 and 3 of the Large Hadron Collider is discussed. The kinematics of the displaced dimuons are also investigated.
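The decay lengths quoted in such studies follow from the standard relation between the proper lifetime and the laboratory-frame decay length (generic kinematics, stated here for orientation):
\begin{equation}
L_{\rm lab} = \beta\gamma\, c\tau = \frac{p}{m}\, c\tau\,,
\end{equation}
so that, for example, a dark Z of mass 10 GeV produced with momentum 100 GeV and proper decay length $c\tau$ = 1 mm would travel about 10 mm before decaying, well within the displaced-dimuon regime.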
A search is performed for W' bosons decaying to a top and a bottom quark in the all-hadronic final state, in proton-proton collisions at a center-of-mass energy of 13 TeV using the data collected by the CMS experiment between 2016 and 2018, corresponding to an integrated luminosity of 137 fb$^{-1}$. Deep neural network algorithms are used to identify the jet initiated by the bottom quark and the jet containing the decay products of the top quark when the W boson from the top quark decays hadronically. No excess above the estimated standard model background is observed. Both left- and right-handed W' bosons with masses below 3.4 TeV are excluded at 95% confidence level, and the most stringent limits to date on W' bosons decaying to a top and a bottom quark in the all-hadronic final state are obtained.
The Strongly Interacting Massive Particle (SIMP) paradigm has attracted increasing attention recently. It provides dark matter candidates as pseudo-Goldstone bound states of dark fermions under a new gauge group. In this scenario, freeze-out occurs through $3\to 2$ dark matter self-annihilation and points to DM particles with masses of $\mathcal O(100 ~\text{MeV})$. We study the spectrum of the lightest mesons of $Sp(4)$ gauge theory with two massive, almost-degenerate fundamental Dirac fermions using lattice gauge theory. This setup leads to a total of 5 pseudo-Goldstone bosons which can self-annihilate. In particular, we investigate the breaking of the flavour symmetry when the fermions are made non-degenerate. We find that one pseudo-Goldstone boson is lighter than the others, while the remaining four heavier pseudo-Goldstone bosons are still mass-degenerate.
As is well known, the nuclear periphery of heavy nuclei is enriched in neutrons. The neutron-to-proton ratio is especially high in a very thin (<0.5 fm) surface layer of such nuclei, termed the neutron skin (NS). The difference between the RMS radii of the neutron and proton density distributions is subtle and difficult to measure. The results obtained with different theoretical and experimental methods are characterized by large uncertainties and sometimes contradict each other. In this work we propose a new method to constrain the parameters of the neutron skin by investigating the composition of spectator matter in ultracentral collisions of heavy relativistic nuclei. The yields of spectator neutrons and protons in ultracentral $^{208}$Pb–$^{208}$Pb collisions at the CERN SPS and LHC were calculated within a new version of the Abrasion-Ablation Monte Carlo for Colliders model (AAMCC-MST), accounting for pre-equilibrium break-up of spectator matter (the prefragment) due to its irregular half-moon shape. The AAMCC-MST modelling of each collision event proceeds in several stages. Firstly, the size and shape of the spectator prefragments from both colliding nuclei are defined using the Glauber Monte Carlo model. Secondly, the excitation energy of the prefragments is calculated. Thirdly, the minimum spanning tree (MST) clustering algorithm is applied to both prefragments to define secondary clusters, with their excitation energy estimated depending on their size. Finally, cluster decays are simulated with the Fermi break-up model from the Geant4 toolkit. It is found that simulations of ultracentral $^{208}$Pb–$^{208}$Pb collisions accounting for the NS demonstrate a modest 10% increase of the cross sections to produce a given number of spectator neutrons in comparison to calculations without the NS. Similar cross sections, but calculated for events without spectator protons, demonstrate a prominent increase of up to 50% when the NS is accounted for. The impact of the NS on events with given numbers of spectator protons (N$_{p}$=1,2,3) is also investigated. The dependence of the considered cross sections on the parameters of the density distributions of neutrons and protons in $^{208}$Pb has been studied. The considered effects of the NS can be studied with the ALICE experiment at the LHC, provided that the measurements of neutron and proton yields are properly corrected for the acceptance and efficiency of the forward hadronic calorimeters.
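To illustrate the MST clustering stage (a schematic sketch, not the AAMCC-MST implementation; the coordinates and critical distance are invented placeholders):

# Schematic MST clustering of spectator nucleons (not the AAMCC-MST code):
# build a minimum spanning tree over nucleon positions, cut edges longer than
# a critical distance, and read off the surviving connected components as clusters.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
positions = rng.normal(scale=3.0, size=(30, 3))  # placeholder nucleon coordinates [fm]

dist = squareform(pdist(positions))              # pairwise distance matrix
mst = minimum_spanning_tree(dist).toarray()      # tree of minimal total edge length

d_cut = 1.5                                      # placeholder critical distance [fm]
mst[mst > d_cut] = 0.0                           # cutting long edges splits the tree

n_clusters, labels = connected_components(mst, directed=False)
print(n_clusters, np.bincount(labels))           # number of clusters and their sizes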
Machine learning (ML) is pushing through boundaries in computational physics. Jet physics, with its large and detailed datasets, is particularly well suited to it. In this poster I will present work done in https://arxiv.org/abs/2104.01972 on the application of an unusual ML technique, spectral clustering, to jet formation. Spectral clustering differs from much of ML in that it has no "black-box" elements; instead, it is based on a simple, elegant algebraic manipulation. This allows us to inspect the way the algorithm is interpreting the data and to apply physical intuition. Infrared-collinear (IRC) safety is of critical importance to jet physics: it requires that the jets formed are insensitive to collinear splittings and soft emissions. It is shown that spectral clustering can be applied in an IRC-safe way, and the conditions for this are noted. Finally, the capacity of spectral clustering to handle different datasets is shown, and its excellent performance, both in terms of multiplicity and mass peaks, is demonstrated. The reasons for its flexibility are discussed, and potential developments offered.
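As a rough illustration of the technique (a sketch under simple assumptions, not the algorithm of the paper), spectral clustering of jet constituents can proceed by diagonalising a graph Laplacian built from pairwise angular distances and clustering the resulting embedding:

# Minimal spectral-clustering sketch on jet constituents (illustrative only).
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Placeholder constituents: columns = (rapidity, phi), two angular blobs.
points = np.vstack([rng.normal([0.0, 0.0], 0.1, (20, 2)),
                    rng.normal([0.8, 1.0], 0.1, (20, 2))])

# Affinity from pairwise angular distance; sigma sets the locality scale.
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
affinity = np.exp(-d2 / (2 * 0.2 ** 2))

# Symmetric normalised Laplacian: L = I - D^{-1/2} A D^{-1/2}
d_inv_sqrt = 1.0 / np.sqrt(affinity.sum(axis=1))
laplacian = np.eye(len(points)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]

# The lowest eigenvectors embed the points; k-means assigns clusters ("jets").
vals, vecs = eigh(laplacian)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vecs[:, :2])
print(labels)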
The production of strange and multi-strange hadrons in heavy-ion collisions is enhanced with respect to minimum bias pp collisions. This feature has been further investigated by studying pp collisions as a function of the produced charged particle multiplicity. In pp collisions, the strange hadron yields normalised to the pion yield show an increase with the multiplicity of produced particles. The origin of this striking phenomenon remains an open question: is it related to soft particle production or to hard scattering events, such as jets?
The ALICE experiment has further studied this feature by separating strange hadrons produced in jets from those produced in soft processes. For this purpose, the angular correlation between high-${p_{\mathrm{T}}}$ charged particles and strange hadrons has been exploited.
In this poster, the recent measurement of the near-side jet yield and the out-of-jet yield of $\mathrm{K^0_S}$ and $\Xi$ is shown as a function of the multiplicity of charged particles produced in pp collisions at $\sqrt{s}=13$ TeV. The ratio between the $\Xi$ and the $\mathrm{K^0_S}$ yields is also shown, to highlight the effect of the different strangeness content of the two hadrons.
The results suggest that soft (out of jet) processes are the dominant contribution to strange particle production.
The ATLAS New Small Wheel upgrade project plans to replace the inner parts of the end-caps of the ATLAS Muon Spectrometer with new mechanical structures equipped with a combination of small-strip Thin Gap Chamber (sTGC) and resistive MicroMegas (MM) detectors. During the integration of the detectors, the sTGC and MM are tested separately before being assembled. On the MM detectors, noise tests are performed together with measurements of the tracking efficiency, using cosmic rays as incoming particles.
The MM readout channels are floating copper strips of different lengths, from 284.0 mm to 1990.0 mm, capacitively coupled with carbon strips held at a few hundred volts. Due to the wide range of strip lengths, the strip capacitance affects the noise with different magnitudes, leading to a larger spread in the baseline along a tracking plane with respect to a configuration with same-size strips. It also impacts the thresholds, which are proportional to the baseline RMS via a settable factor. In particular, if the variation in thresholds is large among nearby strips on a tracking layer, it compromises the time measurements of the μTPC procedure.
To make the thresholds uniform across the plane, a correction can be applied at the single-channel level via a trimmer implemented in the electronics boards (based on the VMM ASIC).
Therefore, studies of the baseline and threshold were performed as a function of the strip length.
Subsequently, studies of the efficiency and cluster parameters were carried out as a function of the thresholds. The results will be presented.
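As an illustration of the threshold-equalisation idea (a schematic sketch with invented numbers and step size, not the actual VMM configuration software):

# Schematic per-channel threshold trimming (illustrative, not VMM firmware code).
import numpy as np

rng = np.random.default_rng(3)
baseline = rng.normal(160.0, 5.0, 512)   # per-strip baselines [mV]; the spread
rms = rng.uniform(1.0, 3.0, 512)         # grows with strip length / capacitance

k = 9.0                                  # settable factor multiplying the RMS
threshold = baseline + k * rms           # raw per-channel thresholds

# Trim each channel toward a common target to keep thresholds uniform in the plane.
target = np.median(threshold)
trim_dac = np.round((target - threshold) / 0.5)  # 0.5 mV per trim step (placeholder)
trimmed = threshold + 0.5 * trim_dac
print(trimmed.std(), "mV residual spread after trimming")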
Sophisticated machine learning techniques have promising potential in searches for physics beyond the Standard Model (BSM) at the Large Hadron Collider (LHC). Convolutional neural networks (CNNs) can provide powerful tools for differentiating between the patterns of calorimeter energy deposits left by prompt particles of the Standard Model and by long-lived particles predicted in various models beyond the Standard Model. We demonstrate the usefulness of CNNs using a couple of physics examples from well-motivated BSM scenarios predicting long-lived particles that give rise to displaced jets. Our work suggests that modern machine-learning techniques have the potential to discriminate between the energy deposition patterns of prompt and long-lived particles, and thus can be useful tools in such searches.
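As an indicative sketch of this kind of classifier (assuming single-channel eta-phi calorimeter images; this is not the authors' architecture, and all dimensions are placeholders):

# Indicative CNN for prompt vs displaced calorimeter images (not the paper's model).
import torch
import torch.nn as nn

class JetImageCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: prompt (0) vs long-lived/displaced (1)
        )

    def forward(self, x):              # x: (batch, 1, 32, 32) energy-deposit image
        return self.classifier(self.features(x))

model = JetImageCNN()
images = torch.rand(4, 1, 32, 32)      # placeholder eta-phi energy images
print(torch.sigmoid(model(images)).squeeze())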
Jets are collimated emissions of a multitude of hadrons originating from hard partonic scatterings. They play an important role as hard probes of the quark-gluon plasma (QGP). These hard jets lose their energy through medium-induced gluon radiation and collisional energy loss. The resulting suppression of final-state hadrons at high $p_{\rm T}$ is referred to as jet quenching. Jet quenching is an important signature of the QGP and can be characterized by the jet transport parameter ($\hat{q}$), which encodes the parton energy loss in the medium. In this work, we have estimated $\hat{q}$ for $pp$ and AA collisions using the Color String Percolation Model (CSPM). The CSPM is a widely used QCD-inspired model which assumes that color strings are stretched between the projectile and the target. These color strings can decay into new strings via quark-antiquark pair production and subsequently hadronize to produce the observed final-state hadrons.
We have studied the jet transport parameter ($\hat{q}$) as a function of the initial percolation temperature for $pp$ and Pb-Pb collisions at LHC energies. We observe that $\hat{q}$ increases linearly with the initial percolation temperature, regardless of the collision system or collision energy. When studied as a function of the charged-particle multiplicity scaled by the nuclear overlap area, $\hat{q}$ shows a sharp increase at low multiplicities, and this dependence becomes weak at higher multiplicities. This suggests that at lower multiplicities the system is not dense enough to quench the partonic jets, whereas with increasing multiplicity the jet quenching becomes stronger. We have also studied $\hat{q}$ as a function of the initial energy density, and we find that in the low-energy-density regime the system behaves almost like a massless hot pion gas, while at higher energy densities the system deviates from the ideal QGP. Finally, we have also studied the dimensionless scaled jet transport parameter ($\hat{q}/T^{3}$), which can give us information about the hot and dense partonic medium. Our CSPM results show a behaviour similar to that obtained by the JET Collaboration.
Angularities are a class of observables of interest for jet phenomenology at the LHC. They are defined by
\begin{equation}
\lambda_\alpha^\kappa=\sum_{i\in\text{jet}}\left(\frac{p_{T, i}}{\sum_{j\in\text{jet}}p_{T, j}}\right)^\kappa \left(\frac{\Delta_i}{R_0}\right)^\alpha
\end{equation}
where $R_0$ is the jet radius and
\begin{equation}
\Delta_i=\sqrt{(y_i-y_\text{jet})^2+(\phi_i-\phi_\text{jet})^2}
\end{equation}
is the Euclidean azimuth-rapidity distance of particle $i$ from the jet axis.
The most standard example of jet angularity is the jet mass corresponding to $\kappa=1$, $\alpha=2$.
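This definition translates directly into code; the following sketch (with invented constituent kinematics) computes $\lambda_\alpha^\kappa$ for a jet:

# Compute a jet angularity lambda_alpha^kappa from constituent kinematics,
# transcribing the definition above (illustrative sketch).
import numpy as np

def angularity(pt, y, phi, y_jet, phi_jet, kappa, alpha, R0=0.4):
    """pt, y, phi: arrays over jet constituents; (y_jet, phi_jet): jet axis."""
    dphi = np.abs(phi - phi_jet)
    dphi = np.where(dphi > np.pi, 2 * np.pi - dphi, dphi)  # fold into [0, pi]
    delta = np.sqrt((y - y_jet) ** 2 + dphi ** 2)
    z = pt / pt.sum()                                      # momentum fractions
    return np.sum(z ** kappa * (delta / R0) ** alpha)

# Example: jet-mass-like angularity (kappa=1, alpha=2) for three constituents.
pt = np.array([120.0, 60.0, 20.0])
y = np.array([0.10, 0.15, 0.05]); phi = np.array([1.00, 1.10, 0.95])
print(angularity(pt, y, phi, y_jet=0.11, phi_jet=1.02, kappa=1, alpha=2))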
In [1] we present a phenomenological study of angularities on the highest transverse-momentum jet in LHC events that feature the associated production of a Z boson and one or more jets. In particular, we study angularity distributions that are measured on jets with and without the Soft Drop grooming procedure. We begin by exploring state-of-the-art MC parton shower simulations and qualitatively assess the impact of NLO matching and merging procedures; we then move to the analytic resummation of large logarithms at NLL accuracy. Matching to NLO results is performed in order to achieve NLL' accuracy.
In [2] we use the previous results to build a tagger able to determine the flavour of the leading jet in the Z+jet process with some level of confidence. The quark/gluon tagging procedure is achieved through a cut on a jet angularity and is theoretically well defined, since it exhibits infrared and collinear safety. Tagging the flavour of the jet as quark-initiated, we show that it is possible to significantly enhance the initial-state gluon contributions. Exploiting both resummation and MC simulations, we perform a study of the tagging efficiencies and their dependence on non-perturbative effects, for different angularities and different levels of grooming. The first application we want to investigate for our results is to assess in more detail the impact of these types of observables in fits of parton distribution functions.
[1] Jet Angularities in Z+jet production at the LHC - S. Caletti, O. Fedkevych, S. Marzani, D. Reichelt, S. Schumann, G. Soyez, V. Theeuwes; in preparation.
[2] Tagging the initial-state gluon - S. Caletti, O. Fedkevych, S. Marzani, D. Reichelt; in preparation.
In conjunction with the High Luminosity upgrade of the LHC, the ATLAS detector is undergoing an upgrade to handle the significantly higher data rates. The muon end-cap system upgrade in ATLAS lies in the replacement of the Small Wheel. The New Small Wheel is expected to combine high tracking precision with upgraded information for the Level-1 trigger. To accomplish this, sTGC (small-strip Thin Gap Chamber) and MicroMegas (Micro-Mesh Gaseous structure) detector technologies are being deployed.
The MicroMegas detector technology is equipped with three types of electronics boards to produce trigger signals and track muons. These boards are the MMFE8 (MicroMegas Front End with 8 VMM chips), the L1DDC (Level 1 Data Driver Card) and the ADDC (ART Data Driver Card). The ART (Address in Real Time) signals produced by the MMFE8s are propagated through the ADDC and sent to the MicroMegas Trigger Processor for the decision of the Level 1 Accept trigger signal.
In order to test the functionality and efficiency of the trigger electronics, various tests are being conducted at building 899 (BB5). During the "MicroMegas ART connectivity test", internal test pulses are sent through the trigger electronics to simulate ART hits from the front-ends to the Trigger Processor. This test is performed to validate every New Small Wheel sector and is essential to identify ADDC boards or fibers that must be replaced or repaired and then retested. Issues on MMFE8s and L1DDCs can be identified as well. Finally, the trigger processor's decision logic is being tested with cosmic-ray data, which is also used to improve the data acquisition, firmware and trigger logic. In this poster, these tests and the results from cosmic-ray data will be presented.
We describe the performance of the ATLAS Forward Proton Time-of-Flight detector (ToF) in Run 2 and the upgrades made to the ToF for operation in Run 3. We describe picosecond laser laboratory test results, previous test beam results, and the expected operational parameters and performance of the new ToF in Run 3.
During Run 2, the Large Hadron Collider (LHC) has provided, at the world's energy frontier, proton-proton collisions to the ATLAS experiment with instantaneous luminosities of up to 2.1x10$^{34}$ cm$^{-2}$s$^{-1}$, placing stringent operational and physical requirements on the ATLAS trigger system in order to reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz, while not rejecting interesting collisions.
The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of up to 100 kHz and a decision latency of less than 2.5 microseconds. Since 2017, an important role has been played by the Level-1 Topological Processor (L1Topo). This innovative system consists of two blades designed in the AdvancedTCA form factor, mounting four individual state-of-the-art processors and providing high input bandwidth and low-latency data processing. Up to 128 topological trigger algorithms can be implemented to select interesting events by applying kinematic and angular requirements on electromagnetic clusters, hadronic jets, muons and the total energy reconstructed in the ATLAS apparatus. This resulted in a significantly improved background event rejection rate and improved acceptance of physics signal events, despite the increasing luminosity. The L1Topo system has become more and more important for physics analyses making use of low-energy objects, commonly present in Heavy Flavour or Higgs physics events, for example.
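As a schematic illustration of such a topological algorithm (not the L1Topo firmware; thresholds and object lists are invented), a selection might combine an invariant-mass window with an angular requirement on pairs of trigger objects:

# Schematic topological trigger requirement (illustrative, not L1Topo firmware):
# accept the event if some pair of trigger objects has low invariant mass and
# small angular separation, the kind of selection useful for low-energy objects.
import math

def invariant_mass(o1, o2):
    """Massless approximation from (Et, eta, phi)."""
    return math.sqrt(2 * o1["et"] * o2["et"]
                     * (math.cosh(o1["eta"] - o2["eta"])
                        - math.cos(o1["phi"] - o2["phi"])))

def delta_r(o1, o2):
    dphi = abs(o1["phi"] - o2["phi"])
    dphi = 2 * math.pi - dphi if dphi > math.pi else dphi
    return math.hypot(o1["eta"] - o2["eta"], dphi)

def topo_accept(objects, m_max=9.0, dr_max=1.5):
    return any(invariant_mass(a, b) < m_max and delta_r(a, b) < dr_max
               for i, a in enumerate(objects) for b in objects[i + 1:])

muons = [{"et": 6.0, "eta": 0.3, "phi": 0.1}, {"et": 4.0, "eta": 0.5, "phi": 0.4}]
print(topo_accept(muons))  # e.g. a di-muon selection for B-physics triggers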
In this presentation, an overview of the L1Topo architecture, simulation and performance results during Run 2 is given, alongside upgrade plans for the L1Topo system to be installed for the future data taking starting in 2022.
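To put the quoted rates in perspective, here is a minimal back-of-the-envelope sketch using only the numbers given in the two abstracts above (plain arithmetic, not an official ATLAS calculation):

    # Rejection factors implied by the ATLAS trigger rates quoted above.
    collision_rate_hz = 40e6   # LHC bunch-crossing rate: 40 MHz
    l1_output_hz = 100e3       # Level-1 output: up to 100 kHz
    storage_rate_hz = 1e3      # recorded physics rate: about 1 kHz

    l1_rejection = collision_rate_hz / l1_output_hz        # factor 400 at Level-1
    hlt_rejection = l1_output_hz / storage_rate_hz         # factor 100 at the High Level Trigger
    total_rejection = collision_rate_hz / storage_rate_hz  # factor 40000 overall

    print(f"L1: {l1_rejection:.0f}x, HLT: {hlt_rejection:.0f}x, total: {total_rejection:.0f}x")
    # Only about 1 in 40000 bunch crossings is recorded, and the Level-1
    # decision for each must be taken within the 2.5 microsecond latency budget.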
Muon triggers are essential for studying a variety of physics processes in the ATLAS experiment, including both Standard Model measurements and searches for new physics. The ATLAS muon trigger consists of a hardware-based system (Level-1) and a software-based reconstruction (High Level Trigger). The muon triggers were optimised during Run 2 to provide high efficiency while keeping the trigger rate low. We will present an overview of how we trigger on muons, recent improvements, the performance of the muon trigger in Run-2 data, and the improvements and readiness for Run 3.
Four years after the deployment of the ATLAS public website using the Drupal 7 content management system, the ATLAS Education & Outreach group has completed its migration to the new CERN Drupal 8 infrastructure. We present lessons learned from the development, usage and evolution of the original website, and how the choice of technology helped to shape and reinforce our communication strategy. We then discuss tactics for the migration to Drupal 8, including our choice of the CERN Override theme. This theme was developed by the CERN web team to support clients like ATLAS in developing websites in the relatively complex and non-intuitive environment of Drupal; CERN has encouraged its use to mitigate maintenance effort and ease future migrations. We present the effects this choice has on the design, implementation, and operation of the new site.
Understanding events from proton interactions with residual gas in the beam pipe, with collimators, or from cosmic rays is of primary importance for identifying potential risks of damage to the accelerator and experiments.
In addition, these events represent one of the main backgrounds to non-conventional physics signatures based on tracks not pointing to the interaction point, out-of-time energy deposits, or displaced decay vertices, which might come from signals of long-lived heavy particles.
The characteristics of these non-collision backgrounds are illustrated in detail in order to identify, estimate and reject them using the full ATLAS detector.
Scattering amplitudes are often split up into their gauge (su(N)) and kinematic (two copies of complexified su(2)) components. Since the su(N) gauge part is often calculated using flows of colour, it should similarly be possible to describe the su(2) $\oplus$ su(2) kinematics of an amplitude in terms of flows of chirality. In two recent papers (arXiv:2003.05877 and arXiv:2011.10075) we showed that this is indeed the case, introducing the chirality-flow formalism for Standard Model calculations. In the chirality-flow method (which simplifies the spinor-helicity method), Feynman diagrams can be written down directly in terms of Lorentz-invariant spinor inner products, allowing the simplest and most direct possible path from Feynman diagram to complex number. In this poster, I will introduce this method and show some examples.
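For orientation, the Lorentz-invariant spinor inner products referred to above are the standard spinor-helicity building blocks (textbook relations, up to convention-dependent signs; they are not specific to the cited papers):

\[
\langle ij \rangle = \bar{u}_-(p_i)\, u_+(p_j), \qquad [ij] = \bar{u}_+(p_i)\, u_-(p_j), \qquad \langle ij \rangle [ji] = 2\, p_i \cdot p_j = s_{ij},
\]

so every Feynman diagram with massless fermions reduces to products of such brackets; the chirality-flow rules provide a shortcut to that reduction.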
The MIP Timing Detector (MTD) of the Compact Muon Solenoid (CMS) will provide precision timestamps with 40 ps resolution for all charged particles up to a pseudo-rapidity of |η|=3. This upgrade will mitigate the effects of pile-up expected under the High-Luminosity LHC running conditions and bring new and unique capabilities to the CMS detector. The endcap region of the MTD, called the Endcap Timing Layer (ETL), will be instrumented with silicon low gain avalanche detectors (LGADs), covering the high-radiation pseudo-rapidity region 1.6 < |η| < 3.0. The LGADs will be read out with the ETROC readout chip, which is being designed for precision timing measurements. We present recent progress in the characterization of LGAD sensors for the ETL and development of ETROC, including test beam and bench measurements.
Ever since the start of LHC operations in 2009, the mission of processing, managing and storing the data collected by the LHC experiments has been carried out by the Worldwide LHC Computing Grid (WLCG). When the building of the infrastructure for LHC computing started (~2001), distributed and cloud computing as we know them did not yet exist, and the WLCG team had to invent all of the tools from scratch. Gradually, an extensive infrastructure of computing sites, spread over five continents and interconnected with high-capacity internet connectivity, was built and continuously upgraded into the current ecosystem of ~1 million constantly occupied computer cores, ~1 exabyte of storage and massive global networking with links of 10-100 Gb/s throughput. The computing centers involved in WLCG are categorized as Tiers 0, 1 and 2. The Tier-0 is CERN; the 14 Tier-1s are large computing centers providing computing power as well as disk and tape storage; finally, there are 146 Tier-2 sites, mostly universities and scientific institutes, which can store sufficient data and provide adequate computing power for simulation and analysis tasks. Tier-2s represent a crucial part of the WLCG ecosystem, delivering about half of the global WLCG resources.
In this contribution we present an overview of the operations of the Tier-2 site in Prague, Czech Republic, which provides computing and storage services for the ALICE and ATLAS experiments as well as for some non-LHC experiments, illustrating the crucial role that Tier-2 sites play in the WLCG. Our computing center is distributed in character: the main part of the resources is installed at the Institute of Physics (IoP) of the Czech Academy of Sciences (CAS), while part of the XRootD storage cluster for ALICE operates at the Nuclear Physics Institute (NPI) of the CAS. Smaller clusters of computing servers are located at the Faculty of Mathematics and Physics of Charles University and at CESNET, an association of Czech research and educational institutions. We will provide a detailed overview of the Prague Tier-2 site infrastructure and operations, with a special focus on the services provided for the ALICE experiment.
A series of upgrades are planned for the LHC accelerator to increase it's instantaneous luminosity to 7.5×10$^{34}$ cm$^{-2}$s$^{-1}$. The luminosity increase drastically impacts the ATLAS trigger and readout data rates. The present ATLAS Small Wheel Muon detector will be replaced with a New Small Wheel (NSW) detector which is expected to be installed in the ATLAS underground cavern by the end of the Long Shutdown 2 of the LHC. Due to its complexity and long-term operation, the NSW requires the development of a sophisticated Detector Control System (DCS). The use of such a system is necessary to allow the detector to function consistently and safely as well as to function as a seamless interface to all sub-detectors and the technical infrastructure of the experiment. The central system handles the transition between the probe’s possible operating states while ensuring continuous monitoring and archiving of the system’s operating parameters. Any abnormality in any subsystem of the detector triggers a signal or alert (alarm), which alerts the user and either adapts to automatic processes or allows manual actions to reset the system to function properly.
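As a purely illustrative sketch of the state-and-alarm logic such a control system implements (state names, voltages and thresholds below are invented; the actual NSW DCS is built on the CERN-standard WinCC OA framework, not on Python):

    # Hypothetical finite-state machine with monitoring and automatic alarm
    # reaction, illustrating the DCS behaviour described above.
    from enum import Enum, auto

    class State(Enum):
        OFF = auto()
        STANDBY = auto()
        READY = auto()
        ERROR = auto()

    ALLOWED = {  # allowed transitions between operating states
        State.OFF: {State.STANDBY},
        State.STANDBY: {State.OFF, State.READY},
        State.READY: {State.STANDBY, State.OFF},
        State.ERROR: {State.OFF},
    }

    def request(state: State, target: State) -> State:
        """Honour only the allowed transitions between operating states."""
        return target if target in ALLOWED[state] else state

    def monitor(state: State, hv_volts: float, hv_nominal: float = 570.0) -> State:
        """Raise an alarm and react automatically if a reading leaves its band."""
        if state is State.READY and abs(hv_volts - hv_nominal) > 0.05 * hv_nominal:
            print(f"ALARM: HV {hv_volts:.0f} V outside 5% band -> ERROR")
            return State.ERROR  # automatic reaction; an operator may then recover
        return state

    state = request(request(State.OFF, State.STANDBY), State.READY)
    state = monitor(state, hv_volts=480.0)  # out-of-band reading triggers the alarm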
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and outreach for particle physics. The primary methodology adopted by IPPOG requires the direct involvement of scientists active in current research with education and communication specialists, in order to effectively develop and share best practices in outreach. IPPOG member activities include the International Particle Physics Masterclass programme, the International Day of Women and Girls in Science, and the organisation of Worldwide Data Day, International Muon Week and International Cosmic Day, as well as participation in activities ranging from public talks, festivals and exhibitions to teacher training, student competitions and open days at local institutions. These independent activities, often carried out in a variety of languages for audiences with a variety of backgrounds, all serve to gain the public's trust and to improve worldwide understanding and support of science. We present our vision of IPPOG as a strategic pillar of particle physics, fundamental research and evidence-based decision-making around the world.
The LHCb experiment is a dedicated b-physics experiment at the LHC. It has a wide physics programme covering different fields of interest, among others precise measurements of the CKM matrix elements and searches for lepton flavour violation and physics beyond the Standard Model.
The LHCb detector performed successfully during Run 1 and Run 2 of the LHC, leading to important contributions in the field of flavour physics as well as physics in the forward region. It is now being upgraded in a first step - Upgrade I - to run at a luminosity of 2×10$^{33}$ cm$^{-2}$s$^{-1}$. An Upgrade II phase has been proposed, aiming at a full exploitation of the flavour-physics potential of the High Luminosity LHC operational period. LHCb Upgrade II will run at instantaneous luminosities of up to 2×10$^{34}$ cm$^{-2}$s$^{-1}$ and accumulate a data sample corresponding to a minimum of 300 fb$^{-1}$.
New design options for the Muon Detector for Upgrade II are under study, in order to cope with the increase in luminosity and readout rate while preserving stable operation of the detector and its highly efficient detection capability. Due to the high variability of the expected particle rates, ranging from several kHz/cm$^2$ in the external regions up to several MHz/cm$^2$ in the inner ones, different sub-detector technologies that could replace or complement the present ones are under investigation, with the aim of choosing the best option in terms of granularity, radiation hardness and effective spark quenching up to integrated charges of O(C/cm$^2$). An overview of the state of the art of the Muon Detector design for the LHCb Upgrade II will be presented here.
A direct measurement of the Higgs self-coupling is crucial to understand the nature of electroweak symmetry breaking. This requires the observation of Higgs-boson pair production, which suffers from a very low event rate even at the current LHC run. In our work, we study the prospects of observing non-resonant Higgs pair production at the high-luminosity run of the 14 TeV LHC (HL-LHC). We choose multiple final states based on event rate and cleanliness, namely the $b\bar{b}\gamma \gamma$, $b\bar{b} \tau^+ \tau^-$, $b\bar{b} WW^*$, $WW^*\gamma \gamma$ and $4W$ channels, and perform a collider study employing cut-based as well as multivariate analyses using the Boosted Decision Tree (BDT) algorithm. We also consider various beyond-the-Standard-Model (BSM) scenarios, for example resonant Higgs pair production, to quantify the effects of contamination when one tries to measure the SM di-Higgs signals. In a later study, we search specifically for heavy resonant scalars ($H/A$) via their decay into two SM Higgs bosons at the HL-LHC. After performing a multivariate analysis using the BDT algorithm in various final states, we set upper limits on the production cross-section of the heavy scalar times its branching ratio into the final-state products for different values of the heavy-scalar mass. Finally, we translate these limits into strong constraints on the $m_A$-$\tan\beta$ parameter space (where $m_A$ and $\tan\beta$ are respectively the mass of the pseudoscalar and the ratio of the vacuum expectation values of the two Higgs doublets) in the context of the Minimal Supersymmetric Standard Model (MSSM).
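As a hedged illustration of the multivariate step described above, a BDT can be trained on kinematic features to separate signal from background. The sketch below uses scikit-learn on toy Gaussian data; the feature layout (loosely mimicking $m_{bb}$, $m_{\gamma\gamma}$, di-Higgs $p_{\rm T}$ and $\Delta R_{bb}$) and all numbers are invented for illustration and do not reproduce the actual analysis:

    # Toy BDT-based signal/background discrimination (illustrative only).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 5000
    # toy "signal" and "background" events with four kinematic features each
    signal = rng.normal(loc=[125., 125., 150., 1.0], scale=[15., 2., 60., 0.5], size=(n, 4))
    background = rng.normal(loc=[100., 120., 80., 2.0], scale=[40., 10., 50., 1.0], size=(n, 4))
    X = np.vstack([signal, background])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    bdt.fit(X_tr, y_tr)
    print(f"ROC AUC on toy data: {roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]):.3f}")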
The ATLAS level-1 calorimeter trigger (L1Calo) is a hardware-based system that identifies events containing calorimeter-based physics objects, including electrons, photons, taus, jets, and missing transverse energy. In preparation for Run 3, when the LHC will run at higher energy and instantaneous luminosity, L1Calo is currently implementing a significant programme of planned upgrades. The existing hardware will be replaced by a new system of feature extractor (FEX) modules, which will process finer-granularity information from the calorimeters and execute more sophisticated algorithms to identify physics objects; these upgrades will permit better performance in a challenging high-luminosity and high-pileup environment. This talk will introduce the features of the upgraded L1Calo system and the plans for production, installation, and commissioning. In addition, the expected performance of L1Calo in Run 3 will be discussed.
The ridge-like structure found in two-particle correlations in proton-proton collisions is one of the hot topics in high-energy heavy-ion physics. The system produced in pp collisions is not large enough to generate a hot, dense medium (the QGP), so this phenomenon cannot be suitably understood through hydrodynamics, unlike in nucleus-nucleus collisions. Jet particles lose a considerable amount of energy through collisions with partons in the medium. The momentum transferred from jets to medium partons is in the direction of the jets' motion, which might produce collective motion of the medium such as the ridge. In this sense, the momentum kick model has been tested in nucleus-nucleus and pp collisions at various energies [1].
In this model, the initial parton distribution of the medium is necessary to describe the scattering process kinematically. We have adopted several distribution functions: Maxwell-Boltzmann (MB), Jüttner-Synge (JS), and phenomenological parton distribution functions from the soft-scattering model (phPDs). However, the MB and JS distributions cannot explain the wide pseudo-rapidity range of the ridge structure, so we study the phPDs in more detail.
The phPDs are parameterized with two variables: the temperature of the medium and a fall-out parameter, which determines the decreasing slope in the high-rapidity region. The temperature of the medium critically affects the $p_{\rm T}$ distribution, while the fall-out parameter controls the rapidity distribution. These parameters are determined from experimental data for pp or AA collisions [2].
Recently, this model has been used to describe experimental data at LHC energies using parameters fitted to RHIC data, taken at collision energies far below those of the LHC. A new determination of the parameters at LHC energies is therefore necessary.
In this study, we fit the model to simulated pp collisions at $\sqrt{s}$ = 2.76 TeV generated with PYTHIA; the resulting parameter values are quite different from those obtained from the low-energy data. We compare not only the $p_{\rm T}$ and rapidity distributions but also the light-cone variable distribution. Using these settings, we calculate the two-particle correlations and compare them to the experimental results (a minimal illustration of the parameter fit is sketched after the references below).
[1] C. Y. Wong, Phys. Rev. C 85, 064909 (2012).
[2] C. Y. Wong, Phys. Rev. C 78, 064905 (2008).
[3] K. B. Huh, S. Cho, G. Ko and J. H. Yoon, "Two-particle correlation via Bremsstrahlung" (2018).
[4] B. Kim, H. Youn, S. Cho et al., "Momentum-Kick Model Application to High-Multiplicity pp Collisions at $\sqrt{s}$ = 13 TeV at the LHC", Int. J. Theor. Phys. (2021).
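As promised above, a minimal sketch of what such a parameter determination looks like in practice. The thermal form below is a simplified stand-in for the phenomenological parton distributions of Ref. [2]; only the temperature is fitted here, and the fall-out parameter would be extracted analogously from the rapidity distribution:

    # Toy fit of a medium "temperature" T from a pT spectrum, as a stand-in
    # for the two-parameter phPD fit described above (illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    M_PION = 0.139  # GeV

    def pt_spectrum(pt, T, norm):
        """Thermal-like spectrum, f(pT) ~ pT * exp(-mT/T) with mT = sqrt(pT^2 + m^2)."""
        mt = np.sqrt(pt**2 + M_PION**2)
        return norm * pt * np.exp(-mt / T)

    # toy "data": spectrum generated at T = 0.50 GeV with 5% noise
    pt = np.linspace(0.2, 3.0, 30)
    data = pt_spectrum(pt, 0.50, 1.0) * np.random.default_rng(1).normal(1, 0.05, pt.size)

    (T_fit, norm_fit), cov = curve_fit(pt_spectrum, pt, data, p0=[0.3, 1.0])
    print(f"fitted T = {T_fit:.3f} GeV (truth 0.500)")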
Being the heaviest particle of the Standard Model, with a mass close to the electroweak scale, the top quark is an interesting candidate in the search for new physics. The electroweak couplings of the top quark are especially relevant in many extensions of the Standard Model. Indeed, as the top quark was not produced at the previous generation of electron-positron colliders, most of its electroweak couplings can only be constrained with current data from the Large Hadron Collider. In order to analyse whether there is still room for new physics in the electroweak couplings of the top quark, we perform a global fit to these couplings. Following the Standard Model Effective Field Theory (SMEFT) formalism, we constrain the Wilson coefficients of the dimension-six operators that affect the top-quark electroweak couplings. In this work we consider, for the first time, the QCD corrections at NLO for most of the processes included. Furthermore, we include recently measured processes, such as $pp\rightarrow tZq$ and $pp\rightarrow t \gamma q$, and the first differential measurements of $pp\rightarrow t\bar{t}Z$ and $pp\rightarrow t\bar{t}\gamma$ production. Taking this into account, we are able to improve the bounds significantly with respect to previous results.
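For reference, the setup underlying such a global fit is the standard dimension-six SMEFT expansion (generic textbook form, not the specific operator basis of this work):

\[
\mathcal{L}_{\rm SMEFT} = \mathcal{L}_{\rm SM} + \sum_i \frac{C_i}{\Lambda^2}\, \mathcal{O}_i^{(6)} + \mathcal{O}\!\left(\Lambda^{-4}\right),
\]

so that any observable takes the form $\sigma = \sigma_{\rm SM} + \sum_i a_i\, C_i/\Lambda^2 + \sum_{i \leq j} b_{ij}\, C_i C_j/\Lambda^4$, and the fit constrains the Wilson coefficients $C_i$ from the measured cross sections and distributions.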
Since 2013, the University of Michigan has hosted a semester-long research program for undergraduate students at CERN. The students are selected from a diverse mix of small and large universities across the USA and are embedded as CERN Users in active research programs on experiments at the laboratory. The program is modelled on the highly successful NSF-funded Research Experience for Undergraduates (REU) program, which brings 15 students each year to participate in the CERN Summer Student Program, but serves to address the very large demand for additional opportunities during the academic year. CERN mentors are selected for their leadership skills on the experiments, as well as their ability to educate and inspire the students. Projects cover a wide range of activities from detector R&D to software development, trigger design, physics analysis and theoretical methodology, and touch nearly all aspects of the research program at CERN.
Each semester, around six students, selected from diverse backgrounds, often under-represented in our field, spend three months working at the laboratory. They live in apartment facilities in neighbouring St. Genis Pouilly, and enjoy periodic excursions to cultural centres located around Europe. Funding, which covers travel, per diem and a stipend, has come from a variety of sources, including the Richard Lounsbery Foundation, the University of Michigan Department of Physics, and most recently from the United States Mission to the International Organizations in Geneva. We present the growing success of the program, its strategic interest to the USA, and describe our current efforts to expand and improve its diverse reach to all students across the country.
One of the event-shape observables, the transverse spherocity ($S_{0}$), has been studied successfully in small collision systems such as proton-proton collisions at the LHC as a tool to separate jetty and isotropic events, with a unique capability to distinguish events based on their geometrical shape. In our work, we report the first implementation of transverse spherocity in heavy-ion collisions, using a multi-phase transport model (AMPT). We perform an extensive study of the azimuthal anisotropy of charged particles produced in heavy-ion collisions as a function of transverse spherocity. We follow the two-particle correlation (2PC) method to estimate the elliptic flow ($v_2$) in different centrality classes in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV for high-$S_0$, $S_{0}$-integrated and low-$S_0$ events, and compare the results obtained from AMPT and PYTHIA8 (Angantyr). The two-particle elliptic flow ($v_{2,2}$) is almost zero in the latter, indicating that the method is almost free from residual non-flow effects. Moreover, the comparison between the two models shows that transverse spherocity does not introduce any bias to the system as far as the elliptic flow is concerned. We find that transverse spherocity successfully differentiates the event topology of heavy-ion collisions based on geometrical shape, i.e. high and low values of $S_0$: high-$S_0$ events are found to have nearly zero elliptic flow, while low-$S_0$ events contribute significantly to the elliptic flow of spherocity-integrated events.
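Since the observable itself is compact, here is a minimal sketch of the standard transverse-spherocity definition, $S_0 = \frac{\pi^2}{4} \min_{\hat{n}} \left( \sum_i |\vec{p}_{{\rm T},i} \times \hat{n}| / \sum_i p_{{\rm T},i} \right)^2$, evaluated by scanning trial axes (the grid scan is an illustrative implementation choice, not the collaborations' code):

    # Transverse spherocity: S0 -> 0 for back-to-back (jetty) events,
    # S0 -> 1 for isotropic ones.
    import numpy as np

    def spherocity(px, py, n_steps=360):
        """px, py: arrays of transverse momentum components of one event."""
        pt_sum = np.sum(np.hypot(px, py))
        phis = np.linspace(0.0, np.pi, n_steps, endpoint=False)  # n and -n are equivalent
        nx, ny = np.cos(phis), np.sin(phis)
        # |pT x n| = |px*ny - py*nx| for each particle (rows) and trial axis (columns)
        cross = np.abs(np.outer(px, ny) - np.outer(py, nx)).sum(axis=0)
        return (np.pi**2 / 4.0) * (cross.min() / pt_sum) ** 2

    # jetty toy event: two back-to-back particles give S0 close to 0
    print(spherocity(np.array([2.0, -2.0]), np.array([0.0, 0.0])))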
The ATLAS experiment at the LHC can record about 1 kHz of physics collisions, out of an LHC design bunch crossing rate of 40 MHz. To achieve a high selection efficiency for rare physics events while reducing the significant background rate, a two-level trigger system is used.
The event selection is based on physics signatures, such as the presence of energetic leptons, photons, jets or missing energy. In addition, the trigger system can exploit algorithms using topological information and multivariate methods to carry out the filtering for the many physics analyses pursued by the ATLAS collaboration. In Run 2, around 1500 individual selection paths, the trigger chains, were used for data taking, each with specified rate and bandwidth assignments.
We will give an overview of the Run-2 trigger menu and its performance, giving the audience a taste of the broad physics program the trigger supports. We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities, and outline the system used to monitor deviations from the individual trigger target rates and to react quickly to changing LHC conditions and data-taking scenarios.
As an outlook to the upcoming ATLAS data-taking period in Run 3, starting in 2022, we present the design principles and ongoing implementation of the new trigger software within the multithreaded framework AthenaMT, together with a preview of the expected performance improvements.
Triggering on long-lived particles (LLPs) at the first stage of the trigger system is crucial in LLP searches, to ensure that such events are not lost at the very beginning of the event selection. The future high-luminosity runs of the Large Hadron Collider (HL-LHC) will have an increased number of pile-up events per bunch crossing. Major upgrades are planned on the hardware, firmware and software sides, such as tracking at Level-1 (L1), and the L1 trigger menu will be modified to cope with pile-up and maintain sensitivity to physics processes. In our study we find that the usual L1 triggers, mostly designed for triggering on prompt particles, will not be very efficient for LLP searches in the 140 pile-up environment of the HL-LHC, pointing to the need for dedicated L1 triggers for LLPs in the menu. We consider the decay of the LLP into jets and develop dedicated jet triggers using the track information available at L1 to select LLP events. We show that these triggers give promising results in identifying LLP events with moderate trigger rates.
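To illustrate the idea (not the actual trigger implementation): jets from displaced LLP decays are expected to have few L1 tracks consistent with the beam line, which suggests a selection of the following form. The class layout and thresholds are invented for illustration:

    # Hypothetical track-assisted L1 LLP-jet selection: keep events with an
    # energetic jet that has low prompt-track multiplicity, which suppresses
    # QCD jets while retaining displaced decays.
    from dataclasses import dataclass

    @dataclass
    class L1Jet:
        et: float             # transverse energy in GeV
        n_prompt_tracks: int  # L1 tracks with small impact parameter, matched in dR

    def llp_jet_trigger(jets, et_min=60.0, max_prompt_tracks=1):
        return any(j.et > et_min and j.n_prompt_tracks <= max_prompt_tracks for j in jets)

    print(llp_jet_trigger([L1Jet(et=85.0, n_prompt_tracks=0)]))  # displaced-like -> True
    print(llp_jet_trigger([L1Jet(et=85.0, n_prompt_tracks=7)]))  # prompt QCD-like -> False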
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter covering the central region of the ATLAS experiment, with steel as absorber and plastic scintillators as active medium. The High-Luminosity phase of the LHC, delivering five times the LHC nominal instantaneous luminosity, is expected to begin in 2028. TileCal will require new electronics to meet the requirements of a 1 MHz trigger and higher ambient radiation, and to ensure better performance under high pile-up conditions. Both the on- and off-detector TileCal electronics will be replaced during the shutdown of 2025-2027. PMT signals from every TileCal cell will be digitized and sent directly to the back-end electronics, where the signals are reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. This will provide better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. The modular front-end electronics feature radiation-tolerant commercial off-the-shelf components and a redundant design to minimise single points of failure. The timing, control and communication interface with the off-detector electronics is implemented with modern Field Programmable Gate Arrays (FPGAs) and high-speed fibre optic links running at up to 9.6 Gb/s. The TileCal upgrade program has included extensive R&D and test beam studies. A Demonstrator module, with reverse compatibility with the existing system, was inserted in ATLAS in August 2019 for testing in actual detector conditions. The ongoing developments for the on- and off-detector systems, together with expected performance characteristics and results of test-beam campaigns with the electronics prototypes, will be discussed.
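A rough feasibility check of the quoted link speed; the ADC width and gain count below are assumed for illustration and are not TileCal specifications:

    # Rough bandwidth estimate for streaming digitized PMT data at 40 MHz.
    sampling_hz = 40e6   # every bunch crossing, as stated above
    adc_bits = 12        # assumed ADC width
    gains = 2            # assumed dual-gain readout
    per_channel_bps = sampling_hz * adc_bits * gains  # ~0.96 Gb/s per channel

    link_bps = 9.6e9     # fibre link speed quoted above
    channels_per_link = link_bps / per_channel_bps
    print(f"{per_channel_bps/1e9:.2f} Gb/s per channel -> "
          f"~{channels_per_link:.0f} channels/link (before protocol overhead)")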
After the luminosity upgrade of the Large Hadron Collider (HL-LHC), the rate capability of the muon drift-tube detectors of the ATLAS experiment will be exceeded due to the higher particle background rates, and in the regions between the inner barrel and the endcap of the muon spectrometer the trigger selectivity is limited. Therefore, 8 new small-diameter (15 mm) Muon Drift Tube chambers (so-called sMDT BIS7A) have been installed in the transition regions between the inner barrel and the endcap of the muon spectrometer; these provide a significantly higher background-rate capability and free up space for the installation of new RPC muon trigger chambers, as required for operation at the HL-LHC. This is a pilot project for the full replacement of the MDT chambers in the small azimuthal sectors of the barrel inner layer (the so-called BIS1-6) by new sMDT-RPC detectors in Long Shutdown 3 (LS3). An overview of the validation and installation of the new sMDT BIS7A chambers, their cavern commissioning status and their performance in the ATLAS combined runs during Milestone weeks will be presented.
The use of Effective Field Theories (EFTs) in the search for new physics is becoming more and more important given the lack of clear experimental signs of BSM physics. In particular, the Standard Model EFT (SMEFT) has become one of the most popular choices; hence, the interest in revisiting EFTs is widespread in the community. The importance of chiral anomalies, whether in gauge or global symmetries, cannot be overstated, given their phenomenological and formal relevance. A natural question then arises: can higher-dimensional operators in an EFT generate gauge anomalies if the renormalizable part of the EFT is anomaly-free? I will discuss whether dimension-6 operators in the SMEFT can induce gauge anomalies. We find a negative answer, contrary to what was claimed by Cata et al. in a recent paper (arXiv:2011.09976), and I will therefore discuss why the triangle-diagram computations performed in that paper lead to apparent anomalies. I will provide arguments based on conserved currents and a more innovative derivation based on the smart construction of an EFT. Poster based on arXiv:2012.07740.
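For reference, the anomaly-cancellation condition at stake is the standard triangle condition (textbook form):

\[
\mathcal{A}^{abc} = \mathrm{Tr}\left[\{T^a, T^b\}\, T^c\right]_{L} - \mathrm{Tr}\left[\{T^a, T^b\}\, T^c\right]_{R} = 0,
\]

which depends only on the gauge representations of the chiral fermions. The argument above is that dimension-6 SMEFT operators cannot modify this counting, so any apparent anomaly in an individual triangle diagram must cancel against other contributions or correspond to a removable counterterm.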
Committee: Johannes Albrecht, Maria Cepeda Hermida, Tristan du Pree, Elisabetta Gallo (chair), Stefania Gori, Heather Gray, Yvonne Pachmayer, Marek Schoenherr, José Francisco Zurita