The Ninth Annual Large Hadron Collider Physics conference (LHCP2021) is planned for 7-12 June 2021 as a fully online event.
NEWS (03/11/2021): the LHCP2021 proceedings have been reviewed and are now available at this URL: https://pos.sissa.it/397/
NEWS (18/10/2021): the LHCP2021 proceedings are currently under review and will appear at this URL: https://pos.sissa.it/397/
NEWS (12/06/2021): the winners of the poster awards and the site selected to host LHCP 2023 have been announced in the closing plenary session
NEWS (28/04/2021): the second bulletin is now available on the conference website (click here)
NEWS (23/04/2021): poster abstracts have been reviewed and acceptance notifications have been sent by e-mail. Information about the poster and poster session formats is available here. More detailed instructions will be sent to the poster presenters by e-mail.
NEWS (22/04/2021): thanks to CERN and IUPAP sponsorships, no fees are required to participate in the LHCP2021 conference. Participants attending the online conference are required to register in order to receive the instructions for the video connections by e-mail.
The LHCP conference series started in 2013 after a successful fusion of two international conferences, the "Physics at the Large Hadron Collider Conference" and the "Hadron Collider Physics Symposium". The conference programme is devoted to a detailed review of the latest experimental and theoretical results on collider physics, including recent results from LHC Run 2, and to discussions of future research directions within the high-energy particle physics community, on both the theory and experiment sides. The main goal of the conference is to foster intense and lively discussions between experimentalists and theorists in research areas such as Standard Model physics and beyond, the Higgs boson, supersymmetry, heavy-quark physics and heavy-ion physics, as well as recent progress on the high-luminosity upgrade of the LHC and future collider developments.
With great regret we have concluded that the 9th LHCP conference, to be held 7-12 June 2021, will need to be fully online, due to the Covid-19 pandemic and its uncertainties.
The conference will keep the same dates, with a timetable adjusted to improve remote participation from around the world, similar to that of the 2020 edition of LHCP.
| MAIN DEADLINES | |
|---|---|
| Registration - opening | 12 December 2020 |
| Registration - closing | 2 June 2021 |
| Poster abstract submission - deadline | 19 April 2021 |
| Poster abstract submission - acceptance notification | 23 April 2021 at the latest |
| Start of the conference | 7 June 2021, 12:00 |
| Proceedings submission | 20 September 2021 |
Recent measurements of charm-baryon production at midrapidity by the ALICE collaboration in pp collisions show baryon-over-meson ratios significantly higher than those in $\rm e^+e^-$ collisions for different charm-hadron species. The charmed baryon-to-meson and charmed baryon-to-baryon ratios provide unique information on hadronisation mechanisms. In this poster, the first measurement of the production cross section of $\rm \Omega_{c}^{0}$ via the hadronic decay channel $\rm \Omega_{c}^{0} \rightarrow \pi^{+} \Omega^{-}$ (and its charge conjugate) in pp collisions is presented.
The production cross sections of open heavy-flavour hadrons are typically described within the factorisation approach as the convolution of the parton distribution functions of the incoming protons, the perturbative QCD partonic cross section, and the fragmentation functions. These last are typically parametrised from measurements in ${\rm e^+e^-}$ collisions. Measurements of charm-baryon production are crucial to study the charm quark hadronisation in pp and p--Pb collisions and its difference with respect to ${\rm e^+e^-}$ collisions. Furthermore, measurements of charm-baryon production in p--Pb collisions provide important information about Cold Nuclear Matter (CNM) effects quantified in the nuclear modification factor $R_{\rm pPb}$. Measurements in p--Pb collisions also help us to understand how the possible presence of collective effects could modify the production of heavy-flavour hadrons and to find similarities among pp, p--Pb and Pb--Pb systems.
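The factorisation ansatz described above can be written schematically as (an illustrative form; scale choices and indices are assumptions, not the exact expression used by the authors):

```latex
\mathrm{d}\sigma_{pp \to H\, X} =
  \sum_{a,b} f_a(x_a,\mu_F) \otimes f_b(x_b,\mu_F) \otimes
  \mathrm{d}\hat{\sigma}_{ab \to c\, X'}(x_a, x_b, \mu_F, \mu_R) \otimes
  D_{c \to H}(z, \mu_F)
```

where $f_{a,b}$ are the parton distribution functions of the incoming protons, $\mathrm{d}\hat{\sigma}$ is the perturbative QCD partonic cross section, $D_{c \to H}$ is the fragmentation function of the charm quark into the hadron $H$, and $\mu_F$, $\mu_R$ are the factorisation and renormalisation scales.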
In this poster, the latest measurements of $\Lambda^+_{\rm c}$ performed with the ALICE detector at midrapidity in pp, and the new measurement performed down to $p_{\rm T}=0$ in p--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV are presented. This allows us to show the first ALICE measurement of $\Lambda^+_{\rm c}/{\rm D^0}$ and $\Lambda^+_{\rm c}$ $R_{\rm pPb}$ down to $p_{\rm T}$ = 0 in p--Pb collisions. The $\Lambda^+_{\rm c}/{\rm D^0}$ ratio at midrapidity in small systems is significantly higher than the one in ${\rm e^+e^-}$ collisions, suggesting that the fragmentation of charm is not universal across different collision systems. Results are compared with theoretical calculations.
The increase of the particle flux (pile-up) at the HL-LHC, with instantaneous luminosities up to L ~ 7.5 × 10$^{34}$ cm$^{-2}$s$^{-1}$, will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement.
The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0, adding the capability to measure charged-particle trajectories in time as well as space. Two silicon-sensor double-sided layers will provide precision timing information for minimum-ionising particles with a resolution as good as 30 ps per track, in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 mm × 1.3 mm, leading to a highly granular detector with 3.7 million channels. Low Gain Avalanche Detector (LGAD) technology has been chosen, as it provides enough gain to reach the large signal-to-noise ratio needed.
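The time-based vertex assignment described above can be illustrated with a toy sketch. This is not the ATLAS algorithm; the function name, the 3-sigma cut and all numbers are illustrative assumptions, using only the ~30 ps per-track resolution quoted in the abstract:

```python
def assign_vertex(track_time_ps, vertices_ps, sigma_t_ps=30.0, max_nsigma=3.0):
    """Assign a track to the vertex whose time is most compatible.

    track_time_ps : measured track time (ps), resolution ~30 ps per track
    vertices_ps   : list of candidate vertex times (ps)
    Returns the index of the best-matching vertex, or None if no vertex
    lies within max_nsigma standard deviations of the track time.
    """
    best_idx, best_pull = None, max_nsigma
    for i, t_vtx in enumerate(vertices_ps):
        pull = abs(track_time_ps - t_vtx) / sigma_t_ps
        if pull < best_pull:
            best_idx, best_pull = i, pull
    return best_idx

# Two pile-up vertices 180 ps apart: a 30 ps resolution separates them cleanly.
print(assign_vertex(10.0, [0.0, 180.0]))   # compatible with vertex 0
print(assign_vertex(500.0, [0.0, 180.0]))  # beyond 3 sigma of both -> None
```

The same pull-based logic extends naturally to combining the time compatibility with the longitudinal (z) compatibility of the track.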
The requirements and overall specifications of the HGTD will be presented as well as the technical design and the project status. The on-going R&D effort carried out to study the sensors, the readout ASIC, and the other components, supported by laboratory and test beam results, will also be presented.
In the Standard Model (SM), lepton flavour is conserved in all interactions. Hence, any observation of lepton flavour violation (LFV) would be an unambiguous sign of physics beyond the SM (BSM), and LFV processes are predicted by numerous BSM models. One way to search for LFV is in the decays of gauge bosons. In the search presented here, the decay of the Z boson to an electron-tau or muon-tau pair is investigated using the full Run 2 pp collision data set at $\sqrt{s}$ = 13 TeV recorded by the ATLAS experiment at the LHC. The analysis exploits tau decays into hadrons and - for the first time in this channel in ATLAS - into leptons. A key ingredient of the search is the use of a neural network to differentiate between signal and background events in order to make optimal use of the data. Combined with the about 8 billion Z decays recorded by ATLAS in Run 2 of the LHC, the strongest constraints to date are set, with Br(Z→eτ) < 5.0 × 10⁻⁶ and Br(Z→μτ) < 6.5 × 10⁻⁶ at 95% confidence level, finally superseding the limits set by the LEP experiments more than two decades ago.
Since 2016, the ATLAS detector has been equipped with new devices: the ATLAS Forward Proton (AFP) detectors. AFP aims to measure protons scattered at very small angles, which are a natural signature of so-called diffractive events. Measurements of the properties of diffractive events usually require low pile-up data-taking conditions. The AFP performance in such special low pile-up runs, including an evaluation of the detector efficiency, will be presented.
Production of beauty quarks takes place mostly in initial hard scattering processes and can be calculated using perturbative quantum chromodynamics (pQCD). Thanks to excellent particle tracking capabilities, the ALICE experiment at the LHC is able to reconstruct beauty-hadron decay vertices, displaced hundreds of micrometers from the primary interaction vertex. The poster will present inclusive pT spectra of b jets measured in p–Pb and pp collisions at √sNN = 5.02 TeV, the corresponding nuclear modification factor, and the fraction of b jets among inclusive jets. The production cross-section of b jets was measured down to 10 GeV/c which is lower than in previous measurements of b jets done at the LHC. Low pT b-jets are expected to be more sensitive to cold nuclear matter effects in p–Pb collisions. They are an important reference for future Pb–Pb measurements, where their production provides information on color and parton mass dependence of parton energy loss.
Heavy quarks (charm and beauty), owing to their large masses, originate mainly from hard partonic scattering processes in high-energy hadronic collisions. They evolve as parton showers and hadronise, producing back-to-back jets.
Two-particle azimuthal angular correlations triggered by electrons from heavy-flavour hadron decays can be used for heavy-flavour jet studies. Such correlation distributions contain a near-side peak around $\Delta\varphi = 0$, formed by particles associated with a high-$p_{\rm T}$ trigger particle, and an away-side peak around $\Delta\varphi = \pi$. By changing the momentum scales of the trigger and associated particles, one can study the heavy-flavour jet structure. In pp collisions, heavy-flavour correlations can be used to study the production and fragmentation of heavy quarks. In p-Pb collisions, they can be used to test cold-nuclear-matter and gluon-saturation effects.
In this poster, we present the current status and results of the ALICE measurement of azimuthal angular correlations of high-$p_{\rm T}$ heavy-flavour decay electrons with charged particles in pp and p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV from the LHC Run 2 data. The results from pp and p-Pb collisions will be compared with each other to investigate any modification due to cold-nuclear-matter effects.
In this work, we present the production of charged particles associated with high-$p_{\rm T}$ trigger particles ($8<p_{\rm T}^{\rm trig.}<15$ GeV/$c$) at midrapidity in proton-proton collisions at $\sqrt{s}=5.02$ TeV simulated with the PYTHIA 8 Monte Carlo model [1]. The study is performed as a function of the relative transverse activity classifier, $R_{\rm T}$, the relative charged-particle multiplicity in the transverse region ($\pi/3<|\phi^{\rm trig.}-\phi^{\rm assoc.}|<2\pi/3$) of the di-hadron correlations, which is sensitive to multi-parton interactions. The evolution of the yield of associated particles ($3\leq p_{\rm T}^{\rm assoc.}< 8$ GeV/$c$) in both the towards and the away regions as a function of $R_{\rm T}$ is investigated. We propose a strategy that allows for the modelling and subtraction of the underlying-event (UE) contribution from the towards and away regions in challenging environments like those characterised by large $R_{\rm T}$. We find that the signal in the away region becomes broader with increasing $R_{\rm T}$, whereas the yield in the towards region increases with $R_{\rm T}$. This effect is reminiscent of that seen in heavy-ion collisions, where an enhancement of the yield in the towards region for 0-5% central Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV was reported. To further understand the role of the UE and of additional jet activity, the transverse region is divided into two one-sided sectors, "trans-max" and "trans-min", selected in each event according to which region has the larger or smaller charged-particle multiplicity. Based on this selection criterion, the observables are studied as a function of $R_{\rm T}^{\rm max}$ and $R_{\rm T}^{\rm min}$, respectively. Results for pp collisions simulated with PYTHIA 8.244 and Herwig 7.2 will be shown.
[1] J. Phys. G 48 (2020) 1, 015007
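The towards/away/transverse decomposition and the $R_{\rm T}$ classifier used in the abstract above can be sketched as follows. This is a minimal toy, not the analysis code; only the angular boundaries and the definition $R_{\rm T}=N_{\rm T}/\langle N_{\rm T}\rangle$ are taken from the text:

```python
import math

def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into [-pi, pi)."""
    return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi

def classify_region(phi_trig, phi_assoc):
    """Towards / away / transverse classification of an associated particle
    relative to the trigger, with boundaries at pi/3 and 2pi/3."""
    adphi = abs(delta_phi(phi_trig, phi_assoc))
    if adphi < math.pi / 3:
        return "towards"
    if adphi > 2 * math.pi / 3:
        return "away"
    return "transverse"

def r_t(n_transverse, mean_n_transverse):
    """Relative transverse activity classifier R_T = N_T / <N_T>."""
    return n_transverse / mean_n_transverse

# A trigger at phi = 0: associated particles fall into the three regions.
print(classify_region(0.0, 0.2))          # towards
print(classify_region(0.0, math.pi))      # away
print(classify_region(0.0, math.pi / 2))  # transverse
```

The trans-max/trans-min split then simply compares the multiplicities of the two one-sided transverse sectors event by event.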
Liquid argon (LAr) sampling calorimeters are employed by ATLAS for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, and for hadronic and forward calorimetry in the region from |η| = 1.5 to |η| = 4.9. In the first LHC run, a total integrated luminosity of 27 fb$^{-1}$ was collected at center-of-mass energies of 7-8 TeV. After detector consolidation during a long shutdown, Run-2 started in 2015 and about 150 fb$^{-1}$ of data at a center-of-mass energy of 13 TeV were recorded. With the end of Run-2 in 2018, a multi-year shutdown for the Phase-I detector upgrades began.
As part of the Phase-I upgrade, new trigger readout electronics of the ATLAS Liquid-Argon Calorimeter have been developed. Installation began at the start of the LHC shut down in 2019 and is expected to be completed in 2020. A commissioning campaign is underway in order to realize the capabilities of the new, higher granularity and higher precision level-1 trigger hardware in Run-3 data taking. This contribution will give an overview of the new trigger readout commissioning, as well as the preparations for Run-3 detector operation and changes in the monitoring and data quality procedures to cope with the increased pileup.
This poster summarises the searches for extra-dimensional models performed with the ATLAS detector at the Large Hadron Collider, using the full Run 2 dielectron and dimuon datasets. These data were produced in proton-proton collisions at a centre-of-mass energy of 13 TeV. In particular, limits on the ADD model are presented, from a reinterpretation of the ATLAS Run 2 dilepton non-resonant analysis. Also highlighted is a novel search for clockwork extra dimensions.
Eight years ago, the discovery of a new fundamental particle, the Higgs boson (H), was announced by the ATLAS and CMS collaborations at CERN. While elementary particles acquire their mass through their interaction with the Higgs field, the large differences in their masses as well as the origin of the three generations of fermions remain unexplained to this day and constitute the Standard Model flavour puzzle.
Measuring the coupling of each fermion to the Higgs boson is one of the most important tasks in modern particle physics. The next most promising candidate in the quark sector is the decay to a charm quark-antiquark pair (cc).
This poster will focus on the analysis of the associated production of the Higgs boson with a W or Z boson performed by the ATLAS Collaboration using data collected between 2015 and 2018, and will describe the analysis strategy employed to search for the H→cc signal. More precisely, recent achievements in charm-tagging techniques that enable the identification of jets containing charm hadrons will be presented. Our current understanding of the H→cc process will be outlined, and the various results of the ATLAS, CMS and LHCb collaborations will be compared. Finally, the interpretation of this new result as a probe of the Standard Model flavour puzzle and its large constraining power on new-physics scenarios will be discussed.
The second LHC long shutdown (LS2) was a crucial opportunity for the CMS Resistive Plate Chambers (RPC) to complete their consolidation and upgrade projects. The consolidation includes detector maintenance for gas tightness, and HV (high voltage), LV (low voltage) and slow-control operation. Dedicated studies were performed to understand the behaviour of the RPC currents in comparison with Run 2. This paper summarises the activities performed and the commissioning of the CMS RPC system on the surface (for RE4) and of the full detector in the CMS cavern under different operating conditions.
Standard dipole parton showers are known to yield incorrect subleading-colour contributions to the leading (double) logarithmic terms for a variety of observables. In this work, concentrating on final-state showers, we present two simple, computationally efficient prescriptions to correct this problem, exploiting a Lund-diagram type classification of emission regions. We study the resulting effective multiple-emission matrix elements generated by the shower, and discuss their impact on subleading colour contributions to leading and next-to-leading logarithms (NLL) for a range of observables. In particular we show that the new schemes give the correct full colour NLL terms for global observables and multiplicities. Subleading colour issues remain at NLL (single logarithms) for non-global observables, though one of our two schemes reproduces the correct full-colour matrix-element for any number of energy-ordered commensurate-angle pairs of emissions. While we carry out our tests within the PanScales shower framework, the schemes are sufficiently simple that it should be straightforward to implement them also in other shower frameworks.
We present a combined analysis of low energy precision constraints and LHC searches for leptoquarks which couple to first generation fermions. Considering all ten leptoquark representations, five scalar and five vector ones, we study at the precision frontier the constraints from $K\to\pi\nu\nu$, $K\to\pi e^+e^-$, $K^0-\bar K^0$ and $D^0-\bar D^0$ mixing, as well as from experiments searching for parity violation (APV and QWEAK). We include LHC searches for $s$-channel single resonant production, pair production and Drell-Yan-like signatures of leptoquarks. Particular emphasis is placed on the recent CMS analysis of lepton flavour universality violation in non-resonant di-lepton pairs. The excess in electron events could be explained by $t$-channel contributions of the leptoquark representations $\tilde{S}_1, S_2, S_3, \tilde{V}_1, V_2 (\kappa_2^{RL} \neq 0)$ and $V_3$ without violating other bounds. Regarding the so-called ``Cabibbo angle anomaly'', we observe that the present constraints are too restrictive to allow for a resolution via direct leptoquark contributions to super-allowed beta decays.
Several Dark Sector models predict the existence of particles with macroscopic life-times and semi-visible jets (QCD-like jets which include stable Dark Sector particles). These can lead to final states with large missing transverse momentum recoiling against at least one highly energetic jet, a signature that is often referred to as a mono-jet.
The RECAST framework is used to reinterpret the recent ATLAS mono-jet search, based on 139 $\mathrm{fb^{-1}}$ of pp collision data at $\sqrt{s} = 13$ TeV, in terms of Dark Sector models not studied in the original work. For models involving long-lived particles, results complementary to those of dedicated searches are found. Results are also interpreted for the first time at ATLAS in terms of searches for semi-visible jets produced from a QCD-like parton shower.
In this study, a new technique for event classification using Convolutional Neural Networks (CNN) is presented. Results obtained using this technique are shown and compared to more traditional Machine Learning approaches for two different physics cases.
The new technique explores the power of visual recognition, which is one of the fastest-growing areas in Artificial Intelligence as a consequence of the "deep learning" evolution of CNNs. Since CNNs are fed with images, an original and intuitive way of encoding the event information in images has been developed, building a one-to-one correspondence that allows us to treat event classification as image classification.
In order to take advantage of the good performance of existing CNN architectures, transfer learning has been tested and shown to be a suitable option. VGG16, which builds on the line of deep architectures popularised by AlexNet, has been chosen as the benchmark. Additionally, an alternative approach with a simpler CNN architecture has also been shown to give good results when trained from scratch. A comparison with a more standard technique, a BDT (using the XGBoost library), is nevertheless provided in order to confirm that the results obtained with this previously unexplored technique are satisfactory.
The two classifications studied correspond to current challenges in particle physics. First, a New Physics example corresponding to a Dark Matter search has been performed, considering a mono-top signal and several of its main background processes. The selected events were required to have exactly one lepton and at least one b-tagged jet, together with large missing transverse momentum. Second, with the same selection criteria, a tt+X classification is also carried out, in which exotic processes with four top quarks in the final state are to be identified among other processes such as ttH or ttW.
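The event-to-image encoding idea described above can be illustrated with a toy sketch. The grid size, acceptance and $p_{\rm T}$ weighting below are assumptions for illustration, not the authors' exact mapping:

```python
import math

def event_to_image(objects, n_eta=8, n_phi=8, eta_max=2.5):
    """Encode final-state objects (eta, phi, pT) into a pT-weighted 2D grid.

    A toy version of an image encoding: each object deposits its pT in
    the pixel covering its (eta, phi) position, so the event can be fed
    to a CNN like an ordinary image.
    """
    image = [[0.0] * n_phi for _ in range(n_eta)]
    for eta, phi, pt in objects:
        if abs(eta) >= eta_max:
            continue  # outside the assumed acceptance
        i = int((eta + eta_max) / (2 * eta_max) * n_eta)
        j = int((phi % (2 * math.pi)) / (2 * math.pi) * n_phi)
        image[i][j] += pt
    return image

# A lepton, a b-jet and the missing-pT direction encoded as three deposits.
img = event_to_image([(0.1, 0.5, 30.0), (-1.2, 2.8, 80.0), (0.4, 4.0, 45.0)])
print(sum(sum(row) for row in img))  # total pT stored in the image: 155.0
```

Separate channels (e.g. one image plane per object type) are a common refinement of this kind of encoding.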
The momentum anisotropy ($v_{n}$) of the produced particles in relativistic nuclear collisions is considered to be a response to the initial geometry, or spatial anisotropy ($\varepsilon_{n}$), of the system formed in these collisions. The linear correlation between $\varepsilon_{n}$ and $v_{n}$ measures the efficiency with which the initial spatial eccentricity is converted into final-state momentum anisotropy in heavy-ion collisions. We have studied the transverse-momentum, collision-centrality and beam-energy dependence of this correlation for different charged particles using the hydrodynamic model framework MUSIC. The ($\varepsilon_{n}$-$v_{n}$) correlation is found to be stronger for central collisions, and stronger for n=2 than for n=3, as expected. However, the transverse-momentum ($p_{T}$) dependent correlation coefficient shows interesting behaviour that depends strongly on the mass as well as the $p_{T}$ of the produced particles. The correlation strength is found to be larger for lighter particles in the lower $p_{T}$ region. We have seen that the relative fluctuation in anisotropic flow depends strongly on the value of $\eta/s$, especially in the region $p_{T}<$ 1 GeV, unlike the correlation coefficient, which does not show a significant dependence on $\eta/s$.
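The linear ($\varepsilon_{n}$-$v_{n}$) correlation discussed above is typically quantified with a Pearson-type correlation coefficient. A minimal sketch, with purely illustrative event-by-event values (not results from MUSIC):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy event-by-event values: v2 responding almost linearly to eps2,
# as expected for central collisions where the conversion is efficient.
eps2 = [0.10, 0.20, 0.30, 0.40, 0.50]
v2   = [0.021, 0.043, 0.058, 0.082, 0.099]
print(round(pearson(eps2, v2), 3))  # close to 1 for a near-linear response
```

In the study above this coefficient would be evaluated per centrality class, harmonic order n, particle species and $p_{T}$ bin.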
The ATLAS Muon Upgrade project is part of the High-Luminosity LHC (HL-LHC) upgrade project, which aims to increase the instantaneous luminosity up to 7.5×10$^{34}$ cm$^{-2}$s$^{-1}$. The present first muon station in the forward regions of ATLAS is being replaced by the so-called New Small Wheels (NSWs). The NSWs consist of resistive-strip MicroMegas (MM) detectors and small-strip Thin Gap Chambers (sTGC), both providing trigger and tracking capabilities, for a total active surface of more than 2500 m$^2$. After the R&D, design and prototyping phase, series production of the MM and sTGC chambers is under way. The NSW upgrade, the most challenging and complex of the ATLAS Phase-I upgrade projects, is expected to be completed with the installation of the NSW in the ATLAS underground cavern during the summer of 2021. The whole NSW structure comprises 128 detectors, with ∼2.4 million readout channels in total. This new generation of readout electronics is built to withstand the harsh radiation conditions, where the expected background rate will reach 20 kHz/cm$^2$. Eight MicroMegas detector layers are integrated into a double wedge. The mechanical integration is followed by the electronics integration and its initial validation in the data-acquisition system. Each fully equipped MicroMegas double wedge is tested at a dedicated cosmic-ray facility, where the high-voltage settings are defined. Then a sequence of tests follows, in which the efficiency, cluster size and resolution of all the individual layers of the double wedge are measured. These steps constitute the qualification of the MicroMegas sector for the final integration with the sTGC wedges before mounting them on the NSW structure. The electronics performance and cosmic-ray validation results of the final validation of the MicroMegas double wedges will be presented.
We study new-physics contributions to CP-violating anomalous couplings of the top quark in the context of top-pair production and the subsequent decays into dileptons and b-jets at the Large Hadron Collider. An estimate of the sensitivity to such CP-violating interactions is also discussed for the existing 13 TeV LHC data and its projections for the proposed LHC run at 14 TeV.
Next-generation collider experiments will have to cope with extremely high collision rates, making it necessary to implement real-time event-processing capabilities. Among the standard pattern-recognition algorithms intended to run on look-up tables, Machine Learning methods, and in particular Deep Neural Networks, are spreading very fast, and there is growing interest in executing such algorithms at trigger level to improve the online selection performance. The main issue in running these algorithms in real time is the number of operations that need to be computed. Low-latency hardware solutions exist, e.g. FPGAs, but the main constraint on the implementation is often the size of the model, which has to be finely tuned not to exceed the available memory. We present an approach to reduce the size of models based on fully connected neural networks in an optimised way, while keeping the model performance under control. The number of input features of the Deep Neural Network is reduced using a CancelOut layer, optimised through an original loss function. We compare the performance of this approach with other techniques. As a baseline study we use the selection of proton-proton collision events in which a boosted Higgs boson decays to two $b$-quarks and both decay products are contained in a large and massive jet. These events have to be selected against an overwhelming QCD background. Promising results are shown and the way for future developments is outlined.
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the Standard Model (SM) as well as searches for new physics beyond the SM. The CMS Collaboration is planning to replace its trigger and data-acquisition systems entirely to match this ambitious physics programme. Efficiently collecting datasets in Phase 2 will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. The already challenging implementation of an efficient tau-lepton trigger will become, in these conditions, an even more crucial and harder task; especially interesting is the case of hadronically decaying taus. To this end, the foreseen high-granularity endcap calorimeter (HGCAL), and the vast amount of information it will provide, play a key role in the design of the new online level-1 (L1) trigger system. In this talk I will present the development of an L1 trigger for hadronically decaying taus based solely on information from the HGCAL detector. I will present some novel ideas for an L1 trigger based on machine learning that can be implemented in FPGA firmware. The expected performance of the new trigger algorithm will be presented, based on simulated collision data for the HL-LHC.
The increase in luminosity foreseen for the High-Luminosity LHC phase requires the replacement of the ATLAS Inner Detector with a new tracking detector, called the Inner Tracker. It will be an all-silicon system consisting of a pixel and a strip subdetector. The ATLAS-wide FELIX system will be the off-detector interface to the Inner Tracker.
In order to bring the Inner Tracker into operation efficiently, intercommunication between the DAQ and the DCS is foreseen. Such communication is mediated by OPC servers that interface with the different hardware and software resources and with the Finite State Machine, which supervises all subdetectors. This framework is designed to be flexible, so that it can easily incorporate heterogeneous resources from different subsystems, including the FELIX setups.
This poster describes the current status of the implementation of OPC servers for the intercommunication between the DAQ and the DCS and their integration in the FELIX setups.
To meet new TDAQ buffering requirements and withstand the high expected radiation doses at the high-luminosity LHC, the ATLAS Liquid Argon Calorimeter readout electronics will be upgraded. The triangular calorimeter signals are amplified and shaped by analogue electronics over a dynamic range of 16 bits, with low noise and excellent linearity. Developments of low-power preamplifiers and shapers to meet these requirements are ongoing in 130 nm CMOS technology. In order to digitise the analogue signals on two gains after shaping, a radiation-hard, low-power 40 MHz 14-bit ADC is being developed using a pipeline+SAR architecture in 65 nm CMOS. Characterisation of the prototypes of the front-end components shows good promise to fulfil all the requirements. The signals will be sent at 40 MHz to the off-detector electronics, where FPGAs connected through high-speed links will perform energy and time reconstruction through the application of corrections and digital filtering. Reduced data are sent with low latency to the first-level trigger, while the full data are buffered until the reception of trigger-accept signals. The data-processing, control and timing functions will be realised by dedicated boards connected through ATCA crates. Results of tests of prototypes of front-end components will be presented, along with design studies on the performance of the off-detector readout system.
A series of upgrades is planned for the LHC accelerator to increase its instantaneous luminosity to 7.5×10$^{34}$ cm$^{-2}$s$^{-1}$. The luminosity increase drastically impacts the ATLAS trigger and readout data rates. The present ATLAS Small Wheel muon detector will be replaced with the New Small Wheel (NSW) detector, which is expected to be installed in the ATLAS underground cavern by the end of Long Shutdown 2 of the LHC. One crucial part of the integration procedure concerns the installation, testing and validation of the on-detector electronics and readout chain for a very large system with more than 2.1 million electronic channels in total. These include 7K front-end boards (MMFE8, SFEB, PFEB), custom printed-circuit boards each housing eight 64-channel VMM Application Specific Integrated Circuits (ASICs), which interface with the ATLAS Trigger and Data Acquisition (TDAQ) system through 1K data-driver cards. The readout chain is based on optical-link technology (GigaBit Transceiver links) connecting the back end to the front-end electronics via the Front-End LInk eXchange (FELIX), a newly developed system that will serve as the next-generation readout driver for ATLAS. For the configuration, calibration and monitoring path, the various electronics boards are equipped with the GBT-SCA ASIC (GigaBit Transceiver - Slow Control Adapter), which is part of the GigaBit Transceiver link (GBT) chipset and whose purpose is to distribute control and monitoring signals to the electronics embedded in the detectors and in the ATLAS service areas. Experience and performance results from the first large-scale electronics integration tests performed at CERN on final NSW sectors will be presented.
Single top quark production is the subleading production process of top quarks at the LHC after the top quark pair production. The latest differential measurements of single top quark production (tW) cross sections are presented using data collected by the CMS detector at a center-of-mass energy of 13 TeV. The cross sections are measured as a function of various kinematic observables of the top quarks and the jets and leptons of the events in the final state. The results are confronted with precise theory calculations.
Although most Beyond Standard Model (BSM) searches target specific theory models, there has always been a keen interest in the development of model-independent methods within the High Energy Physics (HEP) community. Machine Learning (ML) based anomaly detection stands among the latest up-and-coming avenues for creating model-agnostic BSM searches. The focus of this research is the design of anomalous-event taggers based on autoencoder models. Alongside the signal discrimination power, a high priority is placed on both signal-model and background-model independence. To this end, the autoencoder is used in conjunction with a Normalizing Flow model tasked with latent-space density estimation. Both the event reconstruction error and the latent-representation likelihood are combined in order to mitigate the bias of the resulting event anomaly score. Overall, this method shows promising anomaly-detection performance without losing much in terms of generalisation power. On the multijet LHC Olympics data, it is consistently able to identify BSM signals, even in the challenging scenarios posed by the Black Box datasets, where the signal content is unknown.
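One simple way to combine the two handles described above into a single anomaly score is a weighted sum of the reconstruction error and the negative latent log-likelihood. The function name, the weight and all numbers below are illustrative assumptions, not the authors' exact definition:

```python
def anomaly_score(reco_error, latent_log_likelihood, alpha=0.5):
    """Combine autoencoder reconstruction error with latent-space density.

    reco_error            : reconstruction error of the event (e.g. MSE)
    latent_log_likelihood : log-density of the latent vector under the
                            normalizing-flow model (high for typical events)
    alpha                 : relative weight of the two terms (illustrative)
    Events that are poorly reconstructed OR that sit in low-density regions
    of the latent space receive a high score.
    """
    return alpha * reco_error + (1.0 - alpha) * (-latent_log_likelihood)

# A background-like event: reconstructed well, typical latent vector.
bkg = anomaly_score(reco_error=0.05, latent_log_likelihood=1.2)
# A signal-like event: large reconstruction error, unlikely latent vector.
sig = anomaly_score(reco_error=0.90, latent_log_likelihood=-2.5)
print(bkg < sig)  # True: the signal-like event scores higher
```

Combining the two terms mitigates the known bias of pure reconstruction-error taggers, which can assign low scores to anomalous events that happen to be easy to reconstruct.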
A search is presented for four-top-quark production using proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the Large Hadron Collider with an integrated luminosity of 139/fb. Events are selected if they contain a same-sign lepton pair or at least three leptons (electrons or muons). Jet multiplicity, jet flavour and event kinematics are used to separate signal from the background through a multivariate discriminant, and dedicated control regions are used to constrain the dominant backgrounds. The four-top-quark production cross section is measured to be $24^{+7}_{-6}$ fb. This corresponds to an observed (expected) significance with respect to the background-only hypothesis of 4.3 (2.4) standard deviations and provides evidence for this process.
The formation of a partonic medium in relativistic heavy-ion collisions is usually established through ratios of observables taken with respect to $p+p$ collisions as a reference. But recent studies of the small systems formed in $p+p$ collisions at LHC energies hint at the possibility of producing a medium with collective behaviour. Results from $p+p$ collisions have routinely been used as a baseline to analyse and understand the production of the QCD matter expected in nuclear collisions. Therefore, results from $p+p$ collisions require more careful investigation to understand whether QCD matter is produced in high-multiplicity $p+p$ collisions. With this motivation, the Glauber model, traditionally used to study heavy-ion collision dynamics at high energies, is applied to understand the dynamics of $p+p$ collisions. We have used an anisotropic and inhomogeneous quark/gluon-based proton density profile, a realistic picture obtained from the results of deep inelastic scattering, and found that this model explains the charged-particle multiplicity distribution of $p+p$ collisions at LHC energies very well. Collision geometric properties such as the impact parameter, the mean number of binary collisions ($\langle N_{coll} \rangle$) and the mean number of participants ($\langle N_{part} \rangle$) at different multiplicities are determined for $p+p$ collisions. We further used these collision geometric properties to estimate the average charged-particle pseudorapidity density ($\langle dN_{ch}/d\eta \rangle$) and found it to be comparable with the experimental results. Knowing $\langle N_{coll} \rangle$, we have for the first time obtained a nuclear-modification-like factor ($R_{HL}$) in $p+p$ collisions. We also estimated the eccentricity and elliptic flow as a function of charged-particle multiplicity using the linear response to the initial geometry and found good agreement with experimental results.
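The Monte Carlo Glauber approach with a constituent-quark proton profile can be illustrated with a toy event generator. The sketch below is only a schematic of the technique, not the analysis code: it uses an isotropic Gaussian three-quark proton and illustrative values for the quark-quark cross section and proton size, whereas the abstract's study uses an anisotropic, inhomogeneous profile constrained by deep-inelastic-scattering data.

```python
import numpy as np

rng = np.random.default_rng(1)

SIGMA_QQ = 0.25   # quark-quark inelastic cross section, fm^2 (illustrative)
R_Q = 0.3         # Gaussian width of the quark distribution, fm (illustrative)
D_MAX = np.sqrt(SIGMA_QQ / np.pi)   # black-disk interaction distance

def sample_proton(n_quarks=3):
    """Transverse quark positions with a Gaussian profile, recentred so the
    proton's centre of mass sits at the origin."""
    q = rng.normal(scale=R_Q, size=(n_quarks, 2))
    return q - q.mean(axis=0)

def collide(b):
    """One p+p event at impact parameter b; returns (N_part, N_coll)
    counted at the constituent-quark level."""
    qa = sample_proton() + np.array([b / 2, 0.0])
    qb = sample_proton() - np.array([b / 2, 0.0])
    d = np.linalg.norm(qa[:, None, :] - qb[None, :, :], axis=-1)
    hits = d < D_MAX                      # which quark pairs interact
    n_coll = int(hits.sum())              # binary quark-quark collisions
    n_part = int(hits.any(axis=1).sum() + hits.any(axis=0).sum())
    return n_part, n_coll

def run(n_events=20000, b_max=2.0):
    """Sample b with dP proportional to b db and keep inelastic events."""
    b = b_max * np.sqrt(rng.random(n_events))
    events = [collide(bi) for bi in b]
    return np.array([(p, c) for p, c in events if c > 0])
```

In the full study, $\langle N_{coll} \rangle$ and $\langle N_{part} \rangle$ from such events are mapped to multiplicity via a particle-production ansatz; here they are simply tabulated per event.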
Over the last years, Machine Learning (ML) tools have been successfully applied to a wealth of problems in high-energy physics. In this talk, we will discuss the extraction of the average number of Multiparton Interactions ($\langle N_{mpi} \rangle$) from minimum-bias pp data at LHC energies using ML methods. Using the available ALICE data on transverse momentum spectra as a function of multiplicity, we report $\langle N_{mpi} \rangle$ for pp collisions at $\sqrt{s}$ = 7 TeV, which complements our previous results for pp collisions at $\sqrt{s}$ = 5.02 and 13 TeV. The comparisons indicate a modest energy dependence of $\langle N_{mpi} \rangle$. We also report the multiplicity dependence of $N_{mpi}$ for the three centre-of-mass energies. These results are fully consistent with the existing ALICE measurements sensitive to MPI, and therefore provide experimental evidence for the presence of MPI in pp collisions.
To achieve the challenging target of 1% precision on the luminosity determination at the high-luminosity LHC (HL-LHC), with instantaneous luminosity up to $7.5 × 10^{34} cm^{−2} s^{−1}$, the CMS experiment will employ multiple luminometers with orthogonal systematics. A key component of the proposed system is a stand-alone luminometer, the Fast Beam Condition Monitor (FBCM), which is fully independent of the central trigger and data acquisition services and able to operate at all times at 40 MHz, providing bunch-by-bunch luminosity measurements with 1 s time granularity. FBCM is foreseen to be placed inside the cold volume of the Tracker; it utilizes silicon-pad sensors and exploits the zero-counting algorithm on hits for the luminosity measurement. FBCM will also provide precise timing information, with a few ns precision, enabling the measurement of beam-induced background. We report on the optimization of the design and the expected performance of FBCM.
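The zero-counting algorithm mentioned above rests on a short Poisson argument: if the number of hits per bunch crossing is Poisson-distributed with mean $\mu$, the fraction of empty crossings is $P(0) = e^{-\mu}$, so counting empties gives $\mu = -\ln(N_{empty}/N_{total})$. A minimal sketch follows; the function names and the visible cross section in the conversion are illustrative (in practice $\sigma_{vis}$ comes from a van der Meer calibration).

```python
import math

def mu_from_zero_counting(n_crossings, n_empty):
    """Zero-counting: with Poisson-distributed hit counts per bunch crossing,
    P(0) = exp(-mu), hence mu = -ln(n_empty / n_crossings)."""
    if n_empty <= 0 or n_empty > n_crossings:
        raise ValueError("zero-counting saturated or inconsistent counts")
    return -math.log(n_empty / n_crossings)

def luminosity_per_bunch(mu, sigma_vis, f_rev=11245.0):
    """Convert the mean hit count to instantaneous luminosity for one
    colliding bunch pair: L = mu * f_rev / sigma_vis, with f_rev the LHC
    revolution frequency in Hz and sigma_vis the visible cross section
    (calibration value is detector-specific and assumed here)."""
    return mu * f_rev / sigma_vis
```

The method saturates when almost no crossings are empty (very high $\mu$), which is one reason a luminometer's acceptance per channel must be tuned to the pileup range it will face.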
The ATLAS Forward Proton physics program at CERN aims at studying soft and hard diffractive events from ATLAS proton collisions. A Time-of-Flight (ToF) system is used to reduce the background from multiple proton-proton collisions. In this presentation, we describe the technical details of the Fast Cherenkov model of photon generation and transport through the optical part of the ToF detector. This fast simulation uses the Python programming language and Numba (a high-performance compiler). It is about 200 times faster than the already implemented Geant4 simulation and provides similar results for the length and time distributions of photons. Moreover, this fast simulation makes it easy to compute the time resolution of the different bars of the ToF detector.
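The core of such a fast simulation is simple ray tracing: Cherenkov photons are emitted along the proton track at the fixed Cherenkov angle and reach the bar end by total internal reflection, so the optical path is the bar length stretched by $1/\cos\theta_c$. The toy below sketches this idea in pure NumPy; all geometry parameters are illustrative and not those of the real detector, and in a Numba-based implementation the same function would simply be decorated for compilation.

```python
import numpy as np

rng = np.random.default_rng(2)

C = 299.792458          # speed of light, mm/ns
N_QUARTZ = 1.48         # refractive index of the quartz bar (illustrative)
THETA_C = np.arccos(1.0 / N_QUARTZ)   # Cherenkov angle for a beta ~ 1 proton

def photon_arrival_times(n_photons, bar_length=60.0, track_length=5.0):
    """Toy ray tracing: photons are emitted uniformly along the proton track
    at the Cherenkov angle; the zig-zag path to the bar end is the remaining
    bar length divided by cos(theta_c). Returns arrival times in ns."""
    z_emit = rng.uniform(0.0, track_length, n_photons)   # emission point, mm
    path = (bar_length - z_emit) / np.cos(THETA_C)       # stretched path, mm
    v_group = C / N_QUARTZ                               # photon speed in quartz
    # total time = proton flight to the emission point + photon propagation
    return z_emit / C + path / v_group
```

Histogramming these arrival times per bar is what gives access to the time-distribution comparisons and per-bar time resolution mentioned in the abstract; the real model additionally handles wavelength dependence and optical losses.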
The New Small Wheel (NSW) upgrade is now in its commissioning phase. The future ATLAS detector sub-system will be one of the first to employ the Front-End Link eXchange (FELIX) as its Data Acquisition (DAQ) scheme. Currently, one of the main focus points of the community is to ensure the proper acquisition of data from the detector media, besides validating the performance of the detectors themselves. This is being conducted at the BB5 area of CERN, where the Micromegas chambers are subjected to cosmic radiation. In this work, the software and FPGA-based tools that have been developed to help satisfy the stringent two-week turnaround time per detector wedge will be described. These include applications that control the complex DAQ system and validate the data paths of thousands of readout channels and hundreds of high-speed optical links in an automated manner. Another aspect of the work is the description of methodologies that have been developed to validate the chambers' performance under cosmic radiation. Finally, an FPGA system that performs scintillator coincidence-trigger filtering will be described. This part of the system is used to emulate the deterministic nature of the muons' time of arrival during the actual run, in order to profile the timing resolution of the Micromegas detector.
The precise knowledge of the strong interaction between kaons and nucleons is a key element to describe the interaction