Registration at the venue
Opening plenary session of LHCP2018
Welcome remarks from Virginio Merola, the Mayor of the City of Bologna, the INFN Bologna and Physics and Astronomy Department directors and the President of the Italian Physical Society
The Update of the European Strategy for Particle Physics will centre around the High-Luminosity operation of the LHC. Further input will be received until the end of 2018 in order to put in place the most comprehensive strategy possible, on a global basis, covering the open questions until the middle of the coming decade. The Physics Beyond Colliders study is an important element.
Opening plenary session of LHCP2018
Exotica parallel session of LHCP2018
The LHC has observed a number of potentially interesting deviations from the predictions of the Standard Model in the flavor sector, in particular in the decays of B-mesons. This talk offers a review of the experimental status of these anomalies, and their implications for physics beyond the Standard Model and associated signatures at the LHC and beyond.
Heavy Flavour of LHCP2018
A review of the status of the Unitarity Triangle fits by the CKMFitter and UTfit collaborations. Current averages of charm mixing data are also discussed.
A review of the theoretical basis of neutral-meson mixing and of CP violation in mixing and decay, discussing the constraints imposed on New Physics models by mixing and CPV measurements.
Higgs parallel session of LHCP2018
Performance session of LHCP2018
Plenary Higgs session of LHCP2018
Outreach plenary session
Reception - light dinner
Plenary QCD session of LHCP2018
Heavy Ions parallel session of LHCP2018
This talk will address the theoretical basics and the latest developments aimed at describing collective dynamics in high-energy collisions. The focus will be exclusively on final-state-driven collectivity, such as that described in hydrodynamic modelling and the like (initial-state-driven effects will be covered in another session). The relevance and applicability of hydrodynamics for the description of observables in smaller collision systems will also be discussed.
Higgs parallel session of LHCP2018
QCD parallel session of LHCP2018
SUSY parallel session of LHCP2018
A general overview of SUSY scenarios of interest for the LHC: Natural SUSY, Split SUSY, etc.
Electroweak parallel session of LHCP2018
Exotica parallel session of LHCP2018
As the LHC reaches a mature and stable phase of running, new search techniques are called for to ensure that the maximum possible information is extracted from the data. This talk describes the most recent innovations in this area, including new kinematic variables, event geometry, and applications of machine learning at the LHC.
Outreach parallel session of LHCP2018
Significant parts of the original data and of derived data sets from the LHC experiments are openly available to scientists, educators, students and the general public on the CERN Open Data portal opendata.cern.ch, and on experiment-specific portals.
The latest data releases and some of their applications will be reviewed, with a focus on both research applications (up to physics papers) and educational ones.
[NB: 15-minute talk plus 2 minutes of questions]
Problem-solving hackathons are becoming more and more common in academia. CERN and the LHC experiments have been involved in these types of projects for many years, bringing together data scientists, coders and innovators from extremely diverse backgrounds. This talk offers a review of the most recent hackathon projects and their impact on the world of high-energy physics.
[NB: 15-minute talk plus 2 minutes of questions]
The talk will present, through selected examples, how to explain LHC physics to a general audience. It will also show how strongly the communication strategy depends on the specific target audience.
[NB: 15-minute talk plus 2 minutes of questions]
Two recent outreach projects are making use of public communities to enhance and build upon the first phases set up by physicists. “HiggsHunters” asked the public to search for displaced vertices in event displays, during which time a pool of trusted members arose in the associated discussion. The data this generated is now being analysed by schoolchildren, through which they can both learn the principles of scientific research and contribute directly to it. Also involving schoolchildren is “ATLAScraft”, a recreation of ATLAS and the wider CERN complex in Minecraft. Here, the basic layout was provided, but students subsequently researched and created their own mini-games to explain various aspects of the LHC and detector physics to others.
[NB: 15-minute talk plus 2 minutes of questions]
Posters session LHCP2018
The LHCb experiment at CERN operates a high-precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy-flavour quark sector and searches for New Physics beyond the Standard Model. The track reconstruction procedure is performed by a number of algorithms. One of these, PatLongLivedTracking, is optimised to reconstruct "downstream tracks", which are tracks originating from decays outside the LHCb vertex detector of long-lived particles, such as $K_s$ or $\Lambda^0$. After an overview of the LHCb tracking system, we provide a detailed description of the LHCb downstream track reconstruction algorithm. Its computational-intelligence part is described in detail, including the adaptation of the employed Machine Learning algorithms to the environment of the real-time high-level trigger. The downstream tracking performance, obtained using simulated data samples, is also presented.
Measurements of anti-deuterons in collider experiments can help to reduce systematic uncertainties in indirect searches for dark matter. Two predominant unknowns in these searches are the production of secondary anti-deuterons in the cosmos from spallation processes, and anti-deuteron production from annihilating dark matter. LHCb is a forward spectrometer on the LHC ring, designed to measure b-hadron decays from high-energy proton-proton collisions. With the detector's excellent particle-identification capabilities, deuteron and anti-deuteron measurements at LHCb could help to parametrise the two cosmological processes. Recent studies of (anti-)deuteron identification at LHCb and the prospects for measuring prompt (anti-)deuterons from pp collisions will be presented, as well as an ongoing analysis of b-baryons decaying to deuterons.
The measurement of the W+c production cross-section provides an opportunity to directly access the strange-quark content of the proton at the electroweak scale. We focus on W → lν and c → D* as probes of W+c, since both the W boson and the D meson can be measured with good accuracy by the CMS detector; furthermore, the fragmentation of charm quarks into D mesons is well measured. The data taken by the CMS experiment at the LHC in 2016 offer sufficient statistics for an analysis of the pseudorapidity distribution of the muon coming from the decay of the W boson. We present the results for the inclusive and differential cross section of W+charm, as well as comparisons to theoretical predictions at next-to-leading order (NLO). The results from this analysis are used as input for a QCD analysis at NLO to determine the strange-quark distribution and extract the strangeness fraction of the proton.
The ALICE experiment at the Large Hadron Collider (LHC) is dedicated to the study of the properties of the Quark-Gluon Plasma (QGP), a deconfined partonic state of strongly-interacting matter formed in relativistic heavy-ion collisions. Heavy quarks (charm and beauty), produced by parton-parton hard scatterings in the early stages of such collisions, are effective probes of the QGP, as they are expected to experience the full evolution of the system formed in the collision.
The azimuthal correlations between heavy-flavour particles and charged particles give insight into the modification of charm-jet properties in nucleus-nucleus collisions and the mechanisms through which heavy-quark in-medium energy loss takes place. Studies in pp collisions, besides constituting the necessary baseline for nucleus-nucleus measurements, are important for testing expectations from pQCD-inspired Monte Carlo generators. This contribution will include the first study of azimuthal correlations of D mesons with charged particles in pp collisions at √s = 13 TeV, the highest energy available at the LHC, performed with the ALICE apparatus. A comparison with pp collision results at √s = 7 TeV allows the energy dependence of the correlation function to be studied.
Precise calibration and monitoring of the CMS electromagnetic calorimeter (ECAL) is a key ingredient in achieving the excellent ECAL performance required by many physics analyses employing electrons and photons. This poster describes the methods used to monitor and inter-calibrate the ECAL response, using physics channels such as W/Z boson decays to electrons and pi0/eta decays to photon pairs, and also exploiting the azimuthal symmetry of minimum-bias events. Results of the calibrations obtained with Run 2 data are presented.
A description of the phase transition to the formation of a strongly interacting color-deconfined state of QCD matter is given by the string percolation model, which is based on two-dimensional (2D) continuum percolation of the projection of color fields. The systems created in pp and pA collisions are non-thermalized and depart from the thermal-equilibrium limit. In the present work we show this deviation from the thermal-equilibrium limit. Moreover, the system size plays an important role in these systems, making initial-geometry effects relevant; finally, the consequences of these effects on observables are presented.
The identification of jets originating from b quarks is crucial for a broad range of physics analyses, from standard model precision measurements to new physics searches. Various b-tagging algorithms exist at CMS and are further developed with the use of machine-learning techniques. The poster focuses on comparisons between proton-proton collision data and simulation for various input variables used in the heavy-flavour tagging algorithms, in several event topologies.
Neutrino mass generation is needed to explain neutrino oscillations. Seesaw mechanisms offer a solution and motivate searches for heavy neutral leptons in either opposite-sign or same-sign lepton final states. This contribution discusses the challenging backgrounds for same-sign final states, due either to jets incorrectly identified as leptons or to mismeasurements of the electron charge.
The High Luminosity LHC (HL-LHC) will operate at an instantaneous luminosity of up to $7.5 \times 10^{34}$ cm$^{-2}$s$^{-1}$, approximately five times larger than the limit reached during the present LHC run. For the CMS experiment, this corresponds to an average pile-up of up to 200 events per crossing in the interaction region of the detector, and an integrated luminosity of up to 4000 fb$^{-1}$ is expected to be delivered over 10 years of data taking. Such machine performance will allow searches for new physics to be extended and stringent tests of the Standard Model to be performed, such as precision measurements of the Higgs boson couplings.
The CMS detector and its trigger system will need to undergo a substantial upgrade, called the “Phase-2 Upgrade”, affecting all subdetectors: tracking, electromagnetic and hadronic calorimeters, muon detectors, and trigger and readout systems. The overall software and computing systems will need to be completely revisited too: given the higher complexity of the event reconstruction, estimates based on the current CMS software and on simulations of the upgrade phase indicate that the computing challenge is overall 65-200 times larger than in the current run (Run 2).
The complexity and the time span of this challenge, together with the recent ramp-up in the evolution of selected advanced computing techniques such as machine-learning and deep-learning (ML/DL) approaches, invite the exploration of some of these techniques and the implementation of prototypes that test and verify their feasibility and eventual adoption. The work done so far towards ML/DL-based muon trigger algorithms for the Phase-2 upgrade of the CMS detector will be presented and discussed.
Energy-frontier DIS can be realised at CERN through an energy recovery linac that would produce 60 GeV electrons to collide with the High Luminosity or High Energy LHC (LHeC) or eventually the FCC hadron beams (FCC-eh). It would deliver electron-lead collisions with centre-of-mass energies in the range 0.3-2.2 TeV per nucleon, and luminosities exceeding $5 \times 10^{32}$ cm$^{−2}$s$^{−1}$. In this poster, we will show some possibilities for physics with eA collisions. First we will present novel ways for the accurate determination of nuclear PDFs, in a hugely extended space of $x$ and $Q^2$. We will then discuss diffractive physics and, finally, the possibilities for establishing the existence of a new non-linear regime of QCD at small $x$ beyond the dilute regime described by collinear factorisation. Furthermore, we will comment on the possibilities at the LHeC and FCC-eh for analysing the transverse partonic structure of hadrons and nuclei and its corresponding fluctuations, with expected strong, direct implications on our understanding of the results obtained in present and future high-energy heavy-ion programmes.
To extend the LHC physics program, it is foreseen to operate the LHC in the future at an unprecedentedly high luminosity. To maintain the experiment's physics potential in the harsh environment of this so-called Phase 2, the detector will be upgraded. At the same time, the detector acceptance will be extended and new features such as an L1 track trigger will be implemented. Simulation studies have evaluated the performance of the proposed new detector components and their impact on representative physics channels. In the case of searches for new physics, these studies also shape the future research program. The sensitivity to new physics beyond the SM is significantly improved, extending the reach for heavy vector bosons, SUSY, dark matter and exotic long-lived signatures, to name a few.
This work presents the results of creating the first module of phase 1 of the data-processing centre at the Joint Institute for Nuclear Research for modelling and processing data from experiments carried out at the test installations of the Large Hadron Collider. Issues related to handling the enormous data flow from the LHC experimental installations, and the problems of distributed storage, are considered. The article presents a hierarchical diagram of the network farm and a basic model of the network architecture levels. The project documentation of the network, based on Brocade equipment, is considered, as are protocols for deploying full-mesh network topologies. The modern data-transfer protocol Transparent Interconnection of Lots of Links (TRILL) is presented, and its advantages are analysed in comparison with the other protocols that may be used in a full-mesh topology, such as the Spanning Tree Protocol. Empirical calculations of data routing based on Dijkstra's algorithm and on the patent formula of the TRILL protocol are given.
Two systems for monitoring the network segment and the load of the data channels are described. The former is an off-the-shelf software package; the latter is newly designed software with graph-drawing capabilities. Data obtained experimentally from the 40G interfaces by each monitoring system are presented, and their behaviour is analysed; the consistency of the data between the two systems is demonstrated. The main result is a discrepancy between the experimental data and the theoretical prediction for the weight balancing of traffic when packet data are transmitted over equivalent edges of the graph: it is shown that the distribution of traffic over such routes is arbitrary and inconsistent with the patent formula.
The conclusion analyses the behaviour of traffic under extreme conditions. There are two main questions to be answered: how is packet-data transfer distributed over four equivalent routes, and what happens in the case of overload? A comparison of traffic behaviour in various data centres with the help of traffic generators is proposed.
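The Dijkstra-based routing calculations mentioned above can be illustrated with a minimal sketch. The four-switch full-mesh topology and unit link costs below are hypothetical, chosen only to show how a shortest-path computation exposes the many equal-cost routes that a full mesh offers to a TRILL-style protocol:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted graph.
    `graph` maps node -> list of (neighbour, link_cost) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was found already
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical four-switch full mesh with unit link costs: every pair of
# switches is one hop apart, so many equal-cost routes exist between them.
mesh = {
    "s1": [("s2", 1), ("s3", 1), ("s4", 1)],
    "s2": [("s1", 1), ("s3", 1), ("s4", 1)],
    "s3": [("s1", 1), ("s2", 1), ("s4", 1)],
    "s4": [("s1", 1), ("s2", 1), ("s3", 1)],
}
distances = dijkstra(mesh, "s1")  # every other switch is at distance 1
```

How traffic is actually balanced across those equal-cost routes is exactly the question the monitoring study above addresses.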
We report on a preliminary study of the production of f$_{0}$(980)$\rightarrow \pi^{+}\pi^{-}$ at mid-rapidity ($\vert y \vert$~$<$~0.5) performed with the ALICE detector at the LHC in minimum bias pp collisions at the centre-of-mass energy $\sqrt{\mathit{s}}$ = 5.02 TeV. The f$_{0}$(980) signal extraction is challenging due to the large background from correlated $\pi^{+}\pi^{-}$ pairs from other resonance decays in the invariant mass window under study, as well as due to the combinatorics from uncorrelated pairs. We present in detail the strategy followed for the signal extraction and first results in terms of \textit{p}$_{\mathrm{T}}$-dependent production yields. Results are discussed and compared with production yields of other resonances and stable hadrons.
Short-lived hadronic resonances are useful probes for the investigation of the late hadronic phase of ultra-relativistic heavy-ion collisions since their lifetimes are of the same order of magnitude as the time span between the chemical and kinetic freeze-out, typically estimated to be about 10 fm/\textit{c} for central collisions. Our study in pp collisions provides a feasibility check and constitutes a reference for the measurement in high-multiplicity events (p-Pb, Pb-Pb).
The nature of the f$_0$(980) remains elusive: different interpretations of this resonance are available, including $q\bar{q}$ states, bound states of hadrons such as $K\bar{K}$, and tetraquark candidates. Studies in different collision systems are particularly interesting because they can provide information about the nature of this particle.
The study of strange hadronic resonances in pp collisions contributes to the study of strangeness production in small systems. Measurements in pp collisions constitute a reference for the study in larger colliding systems and provide constraints for tuning QCD-inspired event generators. Since the lifetimes of short-lived resonances such as $K^{*}(892)^{\pm}$ ($\tau \sim 4$ fm/$c$) are comparable with the lifetime of the fireball produced in heavy-ion collisions, regeneration and rescattering effects can modify the measured yield, especially at low transverse momentum.
The first results for the $K^{*}(892)^{\pm}$ resonance obtained in inelastic pp collisions at $\sqrt{\text{s}}=$ 5.02, 8, and 13 TeV will be shown. The $K^{*}(892)^{\pm}$ has been measured at mid-rapidity via its hadronic decay channel $K^{*}(892)^{\pm}\rightarrow K^{0}_{\rm{S}}+\pi^{\pm}$, with the ALICE detector. In particular, the transverse momentum ($p_{\mathrm{T}}$) spectrum, integrated yields, $\langle p_{\mathrm{T}}\rangle$ and ratio to stable hadrons will be presented. The $K^{*}(892)^{\pm}$ results are compared with $K^{*0}$ measurements and with commonly-used Monte Carlo models. Measurements at 13 TeV are in addition a baseline for comparison with pp measurements at other LHC energies.
In spite of the recent progress in both theoretical and experimental studies, many aspects of proton-proton (pp) and proton-nucleus (pA) collisions still require a detailed investigation. At high collision energies, the probability of simultaneous scatterings of different pairs of partons, contributing to the same inelastic event, has to be considered. In particular, double parton scattering (DPS) processes can play a dominant role in some specific kinematic regions of multi-jet production. DPS measurements in pA collisions provide important complementary information to that gathered from pp collisions on the nature of multiple interactions.
In this poster I will present the latest theoretical results on four-jet and three-jet-plus-photon production via DPS in pp and pA collisions, as well as their dependence on different kinematical cuts and different phenomenological assumptions.
The H $\to$ ZZ $\to$ 4$\ell$ decay channel ($\ell = e, \mu$) is one of the most important channels for studies of properties of the Higgs boson, since it has a large signal-to-background ratio due to the complete reconstruction of the final-state decay objects and excellent lepton momentum resolution. Measurements performed using this decay channel and Run 1 data include, among others, the determination of the mass, spin-parity, and width of the new boson as well as tests for anomalous HVV couplings. This analysis presents measurements of properties of the Higgs boson in the H $\to$ ZZ $\to$ 4$\ell$ decay channel at $\sqrt{s} = 13$ TeV using 41.8 fb$^{-1}$ of $pp$ collision data collected with the CMS experiment at the LHC in 2017.
In the previous iteration of the analysis, categories were introduced targeting sub-leading production modes of the Higgs boson, such as vector boson fusion (VBF) and associated production with a vector boson (WH, ZH) or a top quark pair ($\mathrm{t\bar{t}H}$). Apart from the larger dataset, the main improvements in this analysis are a newly optimised lepton selection, featuring in particular the usage of a new multivariate discriminant for electrons, and an improved categorisation, optimised especially towards the associated production with a top quark.
Although the observed 125 GeV boson is compatible with the SM Higgs boson, the existence of non-SM properties is not excluded due to the relatively large uncertainties. There is extensive evidence for the existence of dark matter. Invisible Higgs decay modes are realized in models allowing interactions between the Higgs boson and dark matter, for example, "Higgs-portal" models. Searches for invisibly decaying Higgs bosons are possible through missing energy signatures, exploiting various production modes: gluon-gluon fusion, vector-boson fusion, and vector-boson associated production. A search focused on the vector-boson fusion (VBF) production mode, in which two quarks besides the Higgs boson are present in the final state, using the 13 TeV dataset collected by the CMS detector at the LHC in 2016 is presented. The combination with other relevant analyses to further improve the sensitivity to the Higgs to invisible branching fraction ($\mathcal{B}(H\rightarrow \text{inv.})$) is also presented.
The reconstruction of high-momentum muons presents peculiar aspects due to the increasing probability of showering in the detector material, and it depends critically on the relative alignment of the muon chambers with the inner tracker and among themselves. On the other hand, high-momentum muons constitute a clean signature for the decay of new hypothesised high-mass Z’ or W’ bosons, or of boosted particles. Dedicated reconstruction algorithms have been developed in CMS. The performance of reconstruction and identification has been studied in the widest accessible momentum range: efficiencies, momentum resolution and absolute momentum scale have been measured on both data and MC simulation at 13 TeV.
Results from the Dijet Resonance Search using data from 2016 and 2017 running will be shown, with emphasis on narrow resonances, for resonance masses above 1.6 TeV and a variety of new physics models used for the interpretation of the experimental results. The traditional method of estimating the QCD background is used, employing a parametrization for the background and an empirical fit of the data in the signal region.
Results from the Dijet Resonance Search using data from 2016 and 2017 running will be shown, with emphasis on wide resonances, for resonance masses above 1.6 TeV and focusing on simplified models of Dark Matter for the interpretation of the experimental results. A new method of estimating the QCD background is used, employing a data-driven methodology where events in a background dominated control region, 1.3< |Δη| < 2.6, are used to predict the background in the signal region, |Δη| < 1.3.
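The smoothly falling QCD background referred to in the two dijet searches above is conventionally modelled with an empirical function of the dijet mass. The sketch below assumes the standard four-parameter form $f(x) = p_0 (1-x)^{p_1} / x^{p_2 + p_3 \ln x}$ with $x = m_{jj}/\sqrt{s}$; the parameter values are purely illustrative, not the fitted CMS values:

```python
import math

SQRT_S = 13000.0  # GeV: LHC Run-2 centre-of-mass energy

def dijet_background(m, p0, p1, p2, p3):
    """Empirical smoothly-falling dijet shape:
    f(x) = p0 * (1 - x)**p1 / x**(p2 + p3*ln x), with x = m / sqrt(s)."""
    x = m / SQRT_S
    return p0 * (1.0 - x) ** p1 / x ** (p2 + p3 * math.log(x))

# Evaluate over the search range (dijet masses above 1.6 TeV); in a real
# analysis p0..p3 would be determined by a fit to data in this region.
masses = [1600.0 + 800.0 * i for i in range(8)]  # 1.6 to 7.2 TeV
yields = [dijet_background(m, 1.0e-4, 10.0, 5.0, 0.0) for m in masses]
```

Resonance signals then appear as localised bumps on top of this monotonically falling shape, which is why the parametrisation must describe the data smoothly across the full mass range.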
A key ingredient of the data-taking strategy used by the LHCb experiment at CERN in Run 2 is the novel real-time detector alignment and calibration. Data collected at the start of the fill are processed within minutes and used to update the alignment, while the calibration constants are evaluated hourly. This is one of the key elements which allow the reconstruction quality of the software trigger in Run 2 to be as good as the offline quality of Run 1. The most recent developments of the real-time alignment and calibration paradigm enable fully automated updates of the RICH detectors' mirror alignment and a novel calibration of the calorimeter systems. Both evolutions improve the stability of the particle-identification performance, resulting in higher-purity selections. The latter also leads to an improvement in the energy measurement of neutral particles, resulting in a 15% better mass resolution of radiative b-hadron decays. A large variety of improvements has been explored for the last year of Run 2 data taking and is under development for the LHCb detector upgrade foreseen in 2021. These range from the optimisation of the data-sample selection and strategy to the study of a more accurate magnetic-field description. Technical and operational aspects as well as performance achievements are presented, focusing on the new developments for both the current and upgraded detector.
Several new-physics models that extend the Standard Model require the existence of long-lived particles (LLPs) as a solution to problems such as dark matter and naturalness. The new ATLAS Phase-II setup, with its huge statistics and upgraded detectors, offers an opportunity to probe a yet unexplored region of phase space. For muon-spectrometer-based searches, neutral LLPs decaying to collimated jets of leptons and light hadrons (lepton-jets) are of great interest. These particles offer a unique signature that can lead to an early discovery. New triggering techniques and algorithms have been developed and studied to improve the selection of both highly and poorly boosted lepton-jets.
Reliable and performant heavy-flavour identification is of prime importance for the physics program of the CMS experiment. In recent years the CMS collaboration has dedicated considerable effort to improving and expanding its capabilities in this sector by applying several machine-learning techniques that are well established in industry but still experimental in HEP. The poster will focus on a selection of these techniques and describe the implementation details as well as the resulting gains.
An important test of the Standard Model (SM) electroweak symmetry breaking sector is the measurement of the Higgs self-interactions. Sensitivity to the Higgs self-coupling for mH = 125 GeV is evaluated through the measurement of the non-resonant di-Higgs production final states. The considered decay channels are HH → VVbb, where V=W,Z. For the non-resonant SM signal in an ideal detector parametrization, a precision of O(20%) on the SM cross-section can be estimated, roughly corresponding to a precision of O(30%) on the Higgs trilinear coupling.
Resonant states decaying into two Higgs bosons give signatures identical to the final states mentioned above, and are predicted in beyond-the-SM (BSM) theories, such as radions (spin 0) or excitations of the graviton (spin 2) in the Randall-Sundrum model. The analysis will be extended to search for such BSM states.
The parton-level generation of the signal and backgrounds is performed with MadGraph5_aMC@NLO, and the Delphes fast parametrisation of the FCC-hh detector is used.
A recent measurement of the ttbar inclusive production cross section is presented, using data collected by CMS at 13 TeV. The measurement focuses on final states with two leptons.
The top quark pair production cross section ($\sigma_{t\bar{t}}$) is measured in pp collisions at a center-of-mass energy of 5.02 TeV. The analyzed data have been collected by the CMS experiment at the CERN LHC and correspond to an integrated luminosity of 27.4 pb$^{-1}$. The measurement is performed by analyzing events with at least one charged lepton. The measured cross section is 69.5 +/- 8.4 pb, in agreement with the expectation from the standard model. The impact of the presented measurement on the gluon distribution function is illustrated through a quantum chromodynamics (QCD) analysis at next-to-next-to-leading order.
The precise evaluation of the tracking efficiencies is a crucial element for many physics analyses, especially those aiming at measuring production cross sections or branching fractions. In the LHCb experiment, several data-driven approaches have been conceived and continuously improved in order to provide a precise evaluation of the tracking efficiencies. They are mostly based on clean samples of muons, but the recent hints of lepton-universality violation required the development of robust data-driven techniques specifically dedicated to electrons, in order to reduce the systematic uncertainties. In addition, special data streams have recently been put in place to collect and save the calibration samples selected in the LHCb software trigger for both muons and electrons, ensuring prompt access right after the data has been collected.
The inclusive cross-section for tW production in proton-proton collisions at $\sqrt{s} = 13$ TeV is measured using a dataset corresponding to an integrated luminosity of 35.9 fb$^{-1}$ collected by the CMS experiment. The measurement is performed in events with one electron and one muon, and exploits kinematic differences between the signal and the dominant $t\bar{t}$ background through the use of multivariate discriminants designed to separate the two processes. The measured cross-section of $\sigma = 63.1 \pm 1.8~({\rm stat}) \pm 6.0~({\rm syst}) \pm 2.1~({\rm lumi})$ pb is in agreement with Standard Model expectations.
We show the plans and status of a proposed search for milli-charged particles produced at the LHC. The experiment, milliQan, is expected to obtain sensitivity to charges between 0.1e and 0.001e for masses in the 0.1-100 GeV range. The detector is composed of 3 layers of 80 cm long plastic scintillator arrays read out by PMTs sensitive to single photo-electrons. It would be installed in an existing tunnel 33 m from the CMS interaction point at the LHC, with 17 m of rock shielding to suppress beam backgrounds. In the fall of 2017 a 1% scale “demonstrator” of the proposed detector was installed at the planned site in order to study the feasibility of the experiment, focusing on understanding various background sources such as radioactivity of materials, PMT dark current, cosmic rays, and beam-induced backgrounds. I will discuss the general concept of the experiment, the results from the demonstrator, and the plan for the future.
The performance of muon identification and isolation efficiencies in CMS has been studied on data collected in pp collisions at 13 TeV at the LHC on the full 2017 dataset. The efficiencies have been computed with the tag-and-probe method, in different periods of data taking. Results obtained using data are compared with Monte-Carlo predictions.
The collisions of Partially Stripped Ions (PSI) with laser light to produce high intensity gamma-ray beams are the backbone of the Gamma Factory (GF) initiative.
The source, if realised at the LHC, could significantly push up the intensity limits of presently operating sources, reaching a flux of the order of $10^{17}$ photons/s in the particularly interesting gamma-ray energy domain of 1 to 400 MeV.
The unprecedented-intensity, energy-tunable gamma beams, together with the secondary beams of polarized positrons, polarized muons, neutrinos, neutrons and radioactive ions driven by the gamma beams, would constitute the basic research tools of the proposed Gamma Factory.
We discuss the GF concept and preliminary estimates of the phase space of the emitted gamma beams, obtained with two newly developed Monte Carlo codes which simulate the PSI-laser interactions.
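As a rough order-of-magnitude illustration of how such photon energies arise (our own back-of-the-envelope estimate, not from the abstract): in the low-recoil limit, a laser photon back-scattered off an ultra-relativistic ion gains a factor of roughly $4\gamma^2$, so eV-scale laser photons and LHC-scale Lorentz factors land in the quoted MeV range. The numbers below are hypothetical:

```python
def max_boosted_photon_energy(e_laser_ev, gamma):
    """Approximate maximum lab-frame energy of a photon back-scattered off
    an ultra-relativistic ion: E_max ~ 4 * gamma^2 * E_laser (low-recoil limit)."""
    return 4.0 * gamma**2 * e_laser_ev

# Hypothetical inputs: ~10 eV laser photon, Lorentz factor ~2900 (Pb-like at the LHC)
e_max_ev = max_boosted_photon_energy(10.0, 2900.0)
print(f"E_max ~ {e_max_ev / 1e6:.0f} MeV")
```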
Important analyses of the core LHCb physics program rely on calorimetry to identify photons, high-energy neutral pions and electrons. For this purpose, the LHCb calorimeter system is composed of a scintillating pad plane, a preshower detector, and an electromagnetic and a hadronic sampling calorimeter. The interaction of a given particle in these detectors leaves a specific signature. This is exploited for particle identification (PID) by combining calorimeter and tracking information into multivariate classifiers. In this contribution, we focus on the identification of photons against high-energy neutral pion and hadronic backgrounds. Performance on Run 1 data will be shown. Small discrepancies with simulation predictions are then discussed, with special emphasis on the methods to correctly estimate PID cut efficiencies by means of large calibration samples of abundant beauty and charm decays to final states with photons. Finally, the technical aspects of the collection of these samples in Run 2 are presented.
The LHCb collaboration has recently reported evidence of two pentaquark states [1]. We have constructed a classification scheme for pentaquark states and tried to describe them as compact objects [2]. The hidden-charm pentaquark states have also been described as meson-baryon molecules with coupled channels for $\bar{D}^{(*)}\Lambda_c$ and $\bar{D}^{(*)}\Sigma_c^{(*)}$ [3] and recently, for the first time, we have discussed the interplay between compact and molecular components [4]. Important predictions are also given for bottom meson-baryon molecules coupled with five-quark states [3].
[1] R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 115 (2015) 072001;
[2] E. Santopinto and A. Giachino, Phys. Rev. D 96 (2017) no. 1, 014014;
[3] Y. Yamaguchi and E. Santopinto, Phys. Rev. D 96 (2017) no. 1, 014018;
[4] Y. Yamaguchi, A. Giachino, A. Hosaka, E. Santopinto, S. Takeuchi and M. Takizawa, Phys. Rev. D 96 (2017) no. 11, 114031.
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high resolution electron and photon energy measurements. Particularly important are decays of the Higgs boson resulting in electromagnetic particles in the final state, as well as searches for very high mass resonances decaying to energetic photons or electrons. Following the excellent performance achieved in Run I at center-of-mass energies of 7 and 8 TeV, the CMS electromagnetic calorimeter (ECAL) is operating at the LHC with proton-proton collisions at 13 TeV center-of-mass energy. The instantaneous luminosity delivered by the LHC during Run II has reached unprecedented values, using 25 ns bunch spacing. The high pileup levels necessitate a retuning of the ECAL readout and trigger thresholds and of the reconstruction algorithms, to maintain the best possible performance in these more challenging conditions. The energy response of the detector must be precisely calibrated and monitored to achieve and maintain the excellent energy scale and resolution obtained in Run I. A dedicated calibration of each detector channel is performed with physics events, exploiting electrons from W and Z boson decays, photons from $\pi^0/\eta$ decays, and the azimuthally symmetric energy distribution of minimum bias events. This talk describes the calibration strategies and performance of the CMS ECAL throughout Run II and its role in precision physics measurements with CMS involving electrons and photons.
The CMS-HF calorimeters have been upgraded within the Phase I upgrade program of CMS. These upgrades, finalized during EYETS 16/17, involved the replacement of the single-anode PMTs with four-anode PMTs and of the associated front-end electronics that read out the signals from these PMTs. Four-anode PMTs are more effective in reducing the noise in the HF detectors due to window events, caused by muons hitting the PMT windows directly. These PMTs also have thinner windows, further reducing the Cherenkov radiation produced in the windows. The front-end electronics cards were designed to read out the four anode signals in two channels instead of four to reduce costs; a feature to measure the arrival time of the signals was also added. The TDC information is useful to identify window events, which arrive earlier than regular signals. Reading out the four-anode PMTs with these new cards helped reduce the HF noise in the data collected during collisions. In this poster, details and the final commissioning of the upgrade will be given. The timing and the comparison of the charge signals from the two channels, as seen in collision data, will be shown.
In late 2017, the ALICE collaboration recorded data from Xe-Xe collisions at the unprecedented energy in AA systems of $\sqrt{s_{\rm{NN}}} = 5.44$ TeV. The $p_{\rm T}$-spectra at mid-rapidity ( $|y| < 0.5$ ) of pions, kaons and protons are presented.
The final $p_{\rm T}$-spectra are obtained by combining independent analyses with the Inner Tracking System (ITS), the Time Projection Chamber (TPC), and the Time-Of-Flight (TOF) detectors. This presentation focuses on the details of the analysis performed with TOF and in particular on the performance implications of the special Xe-Xe run conditions.
The peculiarity of these data comes from the experimental conditions: because of the lower magnetic field ($B = 0.2 \text{ T}$ instead of the usual $0.5 \text{ T}$), we expect to explore a $p_{\rm T}$ region that was unattainable before.
A comparison between the yields at different centrality bins will also be provided.
Cathode Strip Chambers (CSC) are a crucial component of the CMS endcap muon system which will operate throughout the lifetime of the LHC and beyond, during the HL-LHC running. We present an analysis of the expected CSC performance in a HL-LHC like environment as well as studies of CSC detector longevity over the lifetime of the HL-LHC.
Big volumes of data are collected and analysed by the LHC experiments at CERN. The success of these scientific challenges is ensured by a great amount of computing power and storage capacity, operated over high-performance networks within the very complex LHC computing models on the LHC Computing Grid infrastructure. Now in the Run 2 data-taking period, the LHC has an ambitious and broad experimental programme for the coming decades: it includes large investments in detector hardware, and it similarly requires commensurate investment in R&D in software and computing to acquire, manage, process, and analyse the sheer amounts of data to be recorded in the High-Luminosity LHC (HL-LHC) era.
The new rise of Artificial Intelligence - related to the current Big Data era, to technological progress and to the democratization and efficient allocation of computing resources at affordable costs through cloud solutions - poses new challenges but also offers extremely promising techniques. Machine Learning and Deep Learning are rapidly evolving approaches to characterising and describing data, with the potential to radically change how data is reduced and analysed, also at the LHC.
This contribution documents the construction of a Machine Learning "as a service" solution for CMS physics needs, namely an end-to-end data service that serves trained Machine Learning models to the CMS software framework. The proof of concept of a first working prototype of such an infrastructure, together with a demonstration on signal versus background discrimination in the study of CMS all-hadronic top quark decays performed with scalable Machine Learning techniques, is presented and discussed.
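The abstract does not specify the model behind the signal-versus-background demonstration; as an illustrative stand-in for that kind of task, here is a toy one-variable logistic-regression discriminant trained by stochastic gradient descent on synthetic data (all names and numbers are ours, not the CMS service):

```python
import math
import random

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logistic(data, labels, lr=0.1, epochs=100):
    """Fit weight and bias of a 1-D logistic discriminant by online SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = sigmoid(w * x + b)
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

random.seed(42)
# Synthetic toy: "signal" events peak at 2.0, "background" at 0.0
signal = [random.gauss(2.0, 0.5) for _ in range(200)]
background = [random.gauss(0.0, 0.5) for _ in range(200)]
w, b = train_logistic(signal + background, [1] * 200 + [0] * 200)

def score(x):
    """Discriminant output in [0, 1]; higher means more signal-like."""
    return sigmoid(w * x + b)
```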
The standard model (SM) has been very successful in describing the phenomenology of the electroweak and strong interactions. The discovery of a Higgs boson consistent with the SM prediction at the LHC in 2012 was a major achievement. However, the SM does not answer several fundamental questions, and great efforts have therefore been made by experimental groups to search for new physics. Theorists have proposed models allowing the existence of new interactions and particles, such as additional Higgs bosons in the minimal supersymmetric model (MSSM) and the two-Higgs-doublet model (2HDM). A search for new Higgs bosons produced in association with bottom quarks and decaying into a bottom anti-bottom quark pair is performed with the CMS detector. The data used for this analysis were recorded in proton-proton collisions at a centre-of-mass energy of 13 TeV in 2016, corresponding to an integrated luminosity of 35.7 fb$^{-1}$. No signal excess above the standard model background is observed. Stringent upper limits on the cross section times branching fraction are calculated at 95% confidence level for Higgs states with masses up to 1300 GeV. The results are also interpreted within several MSSM and 2HDM scenarios.
The first-level hardware trigger system of the ATLAS experiment is expected to be fully upgraded for the HL-LHC to meet the challenging performance requirements of the increased instantaneous luminosity. The Level-0 muon trigger system has to maintain or improve its data-selection capability during HL-LHC operation, despite the higher detector hit rate, cavern background and trigger rate. The Resistive Plate Chamber (RPC) detector provides the main trigger source in the barrel region of the Muon Spectrometer. The upgraded trigger system foresees sending RPC raw hit data to off-detector trigger processors, where the trigger algorithms run on Field-Programmable Gate Arrays (FPGAs). The FPGA represents an optimal solution in this context because of its flexibility, the wide availability of logical resources and its high processing speed.
Studies and simulations of different trigger algorithms have been performed, together with an evaluation of the performance and efficiency of the barrel trigger system. The FPGA logic resource occupancy has also been estimated.
The study of the multiplicity dependence of heavy-flavour production in pp collisions provides insight into the production mechanism and into the interplay between hard and soft processes in particle production. In addition, at LHC energies, multiple parton interactions may also play a significant role in heavy-flavour production.
In this contribution, we present the measurement of the heavy-flavour hadron decay electron yield as a function of transverse momentum and charged particle multiplicity at mid-rapidity ($|\eta| <$ 0.8) in pp collisions at $\sqrt{s} =$ 13 TeV. Electron identification is done within 0.5 $
In 2017 the Large Hadron Collider (LHC) at CERN provided an astonishing 50 fb$^{-1}$ of proton-proton collisions at a center-of-mass energy of 13 TeV. The Compact Muon Solenoid (CMS) detector was able to record 90.3% of this data. During this period, the CMS electromagnetic calorimeter (ECAL), based on 75000 scintillating PbWO4 crystals and a silicon and lead preshower, has continued to exhibit excellent performance with a very stable data acquisition (DAQ) system. The ECAL DAQ system follows a modular and scalable scheme: the 75000 crystals are divided into sectors (FEDs), each controlled by 3 interconnected boards. These boards are responsible for the configuration and control of the front-end electronics, the generation of trigger primitives for the central CMS L1 trigger, and the collection of data. A distributed multi-machine software system configures the electronics boards and follows the life cycle of the acquisition process. The modular configuration of the ECAL electronics is reflected in the software, where a tree control structure is applied. Through a master web application, the user controls the communication with the sub-applications which are responsible for the off-detector board configurations. Since the beginning of Run 2 in 2015, many improvements to the ECAL DAQ have been implemented to reduce occasional errors, mitigate single event upsets in the front-end electronics, and improve efficiency. Efforts at the software level have been made to introduce automatic recovery in case of errors. These procedures are mandatory for a reliable and efficient acquisition system.
The CMS experiment implements a sophisticated two-level triggering system composed of the Level-1 trigger, instrumented by custom-designed hardware boards, and a software High Level Trigger. A new Level-1 trigger architecture with improved performance is now being used to maintain high physics efficiency in the more challenging conditions experienced during Run II. We present the performance of the upgraded L1 jet, energy sum, and missing transverse energy (MET) triggers, based on proton-proton collision data collected in 2017. The upgraded trigger benefits from the enhanced granularity of the calorimeters to optimally reconstruct hadronic objects. Dedicated pile-up mitigation techniques are implemented for both jets and missing transverse energy to keep the trigger rate under control in the intense running conditions of the LHC. The selection techniques used to trigger efficiently on benchmark analyses are presented, along with the strategies employed to guarantee efficient triggering for new physics.
The CMS experiment implements a sophisticated two-level triggering system composed of the Level-1 trigger, instrumented by custom-designed hardware boards, and a software High Level Trigger. A new Level-1 trigger architecture with improved performance is now being used to maintain high physics efficiency in the more challenging luminosity conditions experienced during Run II. The CMS muon detector contains complementary and partially redundant muon detection systems: the Cathode Strip Chambers (CSC), Drift Tubes (DT) and Resistive Plate Chambers (RPC). The upgraded L1 muon trigger combines information from these three detectors to reconstruct muons and obtain a better efficiency and lower rates. Advanced pattern recognition and MVA (boosted decision tree) regression techniques implemented directly on the trigger boards allow high-momentum signal muons to be distinguished from the overwhelming low-momentum background. Algorithms for the selection of events with muons, both for precision measurements and for searches for new physics beyond the Standard Model, are described in detail. The performance of the upgraded muon trigger system will be presented, based on proton-proton collision data collected in 2017.
The CMS experiment implements a sophisticated two-level triggering system composed of the Level-1 trigger, instrumented by custom-designed hardware boards, and a software High Level Trigger. A new Level-1 trigger architecture with improved performance is now being used to maintain high physics efficiency in the more challenging conditions experienced during Run II. The upgraded trigger benefits from the enhanced granularity of the calorimeters to optimally reconstruct electromagnetic objects. The performance of the new trigger system is presented, based on proton-proton collision data collected in 2017. We highlight the performance of the upgraded CMS electron and photon trigger in the context of Higgs boson decays into final states with photons and electrons. The selection techniques used to trigger efficiently on these benchmark analyses are presented, along with the strategies employed to guarantee efficient triggering for new resonances and other new physics signals involving electron and photon final states.
Many extensions of the Standard Model (SM) include neutral, weakly coupled, long-lived particles that can decay to hadronic and leptonic final states. Long-lived particles (LLPs) can be detected at colliders as displaced decays from the interaction point (IP), or as missing energy if they escape. ATLAS, CMS, and LHCb have performed searches at the LHC, and significant exclusion limits have been set in recent years.
The current searches performed at colliders have limitations. An LLP does not interact with the detector and it is only visible once it decays. Unfortunately, no existing or proposed search strategy will be able to observe the decay of non-hadronic electrically neutral LLPs with masses above $\sim$ GeV and lifetimes near the limit set by Big Bang Nucleosynthesis ($c\tau \sim 10^7-10^8$ m). Such ultra-long-lived particles (ULLPs) produced at the LHC will escape the main detector with extremely high probability.
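The escape argument can be made quantitative: for an exponential decay law, the probability that an LLP with lab-frame boost $\beta\gamma$ and proper decay length $c\tau$ decays between distances $L_1$ and $L_2$ from the IP is $e^{-L_1/\beta\gamma c\tau} - e^{-L_2/\beta\gamma c\tau}$. A sketch with hypothetical numbers:

```python
import math

def decay_probability(l1, l2, beta_gamma, ctau):
    """P(decay between l1 and l2) for mean lab-frame decay length beta_gamma*ctau."""
    lam = beta_gamma * ctau
    return math.exp(-l1 / lam) - math.exp(-l2 / lam)

# Hypothetical: detector shell 100-120 m from the IP, boost 10,
# BBN-limit lifetime ctau = 1e7 m -> only a tiny fraction decays inside
p = decay_probability(100.0, 120.0, 10.0, 1.0e7)
```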
In this talk, we describe the MATHUSLA surface detector (MAssive Timing Hodoscope for Ultra Stable neutraL pArticles) [1], which can be implemented with existing technology in time for the turn-on of the high luminosity LHC (HL-LHC). The MATHUSLA detector will consist of an air-filled decay volume surrounded by charged-particle detectors (top, bottom, and sides) that provide timing, and a robust multilayer tracking system located in the upper region. Ref. [1] proposes covering a total sensitive area of $200 \times 200$ square meters on the surface in the region near the interaction point of the ATLAS or CMS detectors for the beginning of the HL-LHC run.
We installed a small-scale test stand ($\sim 6.5$ meters tall, covering an area of $2.5 \times 2.5$ square meters) on the surface above the ATLAS IP in autumn 2017. It consists of three layers of resistive plate chambers used for timing/tracking and two layers of scintillators (top and bottom) for timing measurements, to study the efficiency of downward cosmic-track rejection. The goal is to estimate the cosmic background that mimics upward-going tracks, as well as the proton-proton collision backgrounds from ATLAS during nominal LHC operations. The test stand will resume operation above the ATLAS IP in April 2018 and collect data with pp collisions until December 2018. This will provide useful information for the design of the main detector and important inputs for future physics and detector simulations.
We will present preliminary results obtained with the data collected in 2017 and the ongoing background studies. The sensitivity of MATHUSLA to various ULLP theoretical constructs will be summarized and current design concepts reviewed.
The present CMS muon system operates three different detector types: drift tubes (DT) and resistive plate chambers (RPC) in the barrel, along with cathode strip chambers (CSC) and another set of RPCs in the forward regions. In order to cope with increasingly challenging conditions, various upgrades to the trigger and muon systems are planned. In view of the operating conditions at the HL-LHC, it is vital to assess the detector performance at high luminosity. New irradiation tests had to be performed to ensure that the muon detectors will survive the harsher conditions and operate reliably. The new CERN GIF++ (Gamma Irradiation Facility) has made it possible to perform ageing tests of these large muon detectors. We present results in terms of system performance under large backgrounds and after accumulating charge in an accelerated test simulating the expected dose. New detectors will be added to improve the performance in the critical forward region: large-area triple-foil gas electron multiplier (GEM) detectors will already be installed in LS2 in the pseudo-rapidity region 1.6 < |η| < 2.4, aiming at suppressing the rate of background triggers while maintaining high trigger efficiency for low transverse momentum muons. For HL-LHC operation the muon forward region will be enhanced with another large-area GEM-based station, called GE2/1, and with two new-generation RPC stations, called RE3/1 and RE4/1, having low-resistivity electrodes. These detectors will combine tracking and triggering capabilities and can withstand particle rates up to a few kHz/cm². In addition, to take advantage of the extended pixel tracking coverage, a new detector, the ME0 station, will be installed behind the new forward calorimeter, covering up to |η| = 2.8.
The ATLAS muon spectrometer provides excellent muon trigger capabilities and high muon momentum resolution up to the TeV scale. Yet, in a small region between the barrel and end-cap parts of the muon spectrometer, a non-negligible muon trigger rate from charged particles not emerging from the pp interaction point has been observed. To prevent such fake triggers, the end of the inner ring of the muon spectrometer barrel in the barrel end-cap transition region, which is currently not equipped with trigger chambers, will be instrumented with resistive plate chambers (RPCs) in the next long shutdown of the LHC in 2019 and 2020. As the space in this region is extremely limited, the present muon drift-tube chambers will have to be removed and replaced by an integrated system of thin-gap RPCs and small-diameter muon drift-tube (sMDT) chambers. Final prototypes of both chamber types have recently been produced, fulfilling all of the very tight spatial requirements and showing the expected performance. Mechanical measurements of the sMDT chamber geometry with a coordinate measuring machine show a wire positioning accuracy of better than 10 µm. Tests of the thin-gap RPC in a high-energy muon beam at CERN show full chamber efficiency and an unprecedented time resolution of better than 0.5 ns. In this contribution the design, construction, and tests of the new sMDT and RPC chambers, as well as the progress of series production, will be presented.
We analyse two consequences of the relationship between collinear factorization and $k_{\rm T}$-factorization. Firstly, we show that $k_{\rm T}$-factorization provides a fundamental justification for the choice of $Q^2$ made in collinear factorization. Secondly, we show that in collinear factorization there is an uncertainty on this choice which will not be reduced by higher orders. This uncertainty is absent in the $k_{\rm T}$-factorization formalism.
During the next 5 years a great number of laboratories around the world will be involved in upgrading the 4 principal experiments at CERN (ATLAS, CMS, LHCb, ALICE). The ATLAS Bologna group, which works on the Pixel Detector DAQ (Data AcQuisition), has in the last 2 years developed a prototype of a new board named PILUP (PIxel detector high Luminosity UPgrade); this board is a candidate for the redesign of the ATLAS DAQ required for the HL-LHC (High Luminosity LHC) project. The main characteristics of this board are its embedded processor (dual-core ARM) and its large communication bandwidth (up to 60 Gb/s through optical fibers); these allow the board to manage complex systems and the data transmission rates demanded by the next HEP (High Energy Physics) goals. The PILUP has already demonstrated the capability to communicate with the main board of the ATLAS DAQ upgrade, FELIX, using different communication protocols (GBT and FullMode). This is the first result of the collaboration with the FELIX group. Moreover, its features make it suitable to be programmed as an emulator of several devices (front-end electronics or read-out chips such as the new RD53A). In conclusion, the experience gained by the Bologna team in the last 7 years working on the ATLAS Pixel Detector DAQ, joined with the characteristics of this new board, opens many directions for the use of the PILUP.
ATLAS electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena. To cope with ever-increasing luminosity and more challenging pile-up conditions at a centre-of-mass energy of 13 TeV, the trigger selections need to be optimized to control the rates and keep efficiencies high. The evolution of the ATLAS electron and photon triggers throughout Run 2 will be presented, including new techniques developed to maintain their high performance even in high pile-up conditions, as well as first efficiency measurements from the 2018 data taking.
The aim of the ATLAS Forward Proton (AFP) detector system is the measurement of protons scattered diffractively or electromagnetically at very small angles. The full two-arm setup (on both sides of the interaction point), which allows measurements of processes with two forward protons: central diffraction, exclusive production, and photon-induced interactions, was installed during the 2016/2017 LHC winter shutdown. In 2017, AFP participated in the ATLAS high-luminosity data taking on a daily basis. In addition, several special runs with reduced luminosity were taken. The poster will present the AFP detectors and some of the lessons learned from last year's operation.
The current ATLAS model of Open Access to recorded and simulated data offers the opportunity to access datasets with a focus on education, training and outreach. This mandate supports the creation of platforms, projects, software, and educational products used all over the planet. We describe the overall status of ATLAS Open Data (http://opendata.atlas.cern) activities, from core ATLAS activities and releases to individual and group efforts, as well as educational programs, and final web- or software-based (and hard-copy) products that have been produced or are under development. The relatively large number of heterogeneous use cases currently documented is driving an upcoming release of more data and resources for the ATLAS Community and anyone interested in exploring the world of experimental particle physics and the computer sciences through data analysis.
The energy and mass of jets measured with the ATLAS detector are calibrated through a multi-step process. In the last step of this chain, known as the residual in-situ calibration, events with a well-measured feature, such as the pT of a photon or the mass of a top quark, are used as a reference to correct the calibration scale in situ and estimate its uncertainty.
In order to constrain the Jet Energy Scale (JES) and the Jet Mass Scale (JMS) over the widest possible range of phase-space, several such techniques are combined. The response measurements and their uncertainties are then combined to give a continuous and smooth calibration scale across pT and mass.
This poster describes the procedure to combine these various techniques, with particular emphasis on the combination method and its features. We will also present the most recent results on the ATLAS jet energy and mass scale uncertainties.
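A standard way to combine several response measurements in a given bin (not necessarily the exact ATLAS procedure, which also treats correlations and smoothing) is an inverse-variance weighted average; the numbers below are illustrative:

```python
def combine(measurements):
    """Inverse-variance weighted average of (value, uncertainty) pairs,
    assuming uncorrelated uncertainties."""
    weights = [1.0 / (err * err) for _, err in measurements]
    total_w = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total_w
    return value, total_w ** -0.5

# Hypothetical in-situ response measurements in one pT bin: (response, uncertainty)
val, err = combine([(0.99, 0.02), (1.01, 0.01), (1.00, 0.03)])
```

The combined uncertainty is smaller than any single input, which is what allows the smooth calibration curve across pT and mass.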
The LHC plans a series of upgrades culminating in the High Luminosity LHC (HL-LHC), which will have an average luminosity 5-7 times larger than the design LHC value. The electronics of the hadronic Tile Calorimeter (TileCal) will undergo a substantial upgrade to accommodate the HL-LHC parameters. In particular, TileCal will undergo a major replacement of its on- and off-detector electronics.
The photomultiplier signals will be digitized and transferred off-detector to the TileCal PreProcessors (TilePPr) for every bunch crossing, requiring a data bandwidth of 40 Tbps. The TilePPr will reconstruct, store and send the calorimeter signals to the first level of the trigger at a rate of 40 MHz. This will provide better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms.
In parallel, the data will be stored in pipeline memories, and the data of the events selected by the ATLAS central trigger system will be transferred to the ATLAS global Data AcQuisition (DAQ) system for further processing.
Extensive tests have recently been performed with beam at the CERN accelerator facilities. External beam detectors were used to measure the beam position and to generate a trigger signal when the particle beam impinges on the calorimeter modules, while a Demonstrator system of the TileCal upgrade electronics was successfully employed to read out the calorimeter signals in parallel with the current TileCal electronics.
This contribution describes the results from the tests with beam performed at CERN, as well as the latest results on the development of the on- and the off-detector electronics, firmware, data processing and simulation components of the TileCal Demonstrator readout system.
The design of a muon detector and first-level muon trigger system for the FCC-hh baseline experiment is presented. Drift-tube chambers operated with an Ar:CO2 (93:7) gas mixture at 3 bar provide a robust and cost-effective solution for precise track point and angle measurements over large areas (about 1200 m²) with the required resolutions of better than 50 µm and 70 µrad, respectively. To achieve this precision, only one layer of chambers is needed which, for this purpose, consists of two quadruple layers of drift tubes separated by a 1 m high spacer frame. The wire positioning accuracy has to be better than 20 µm. This is feasible in mass production with the construction technique developed for the small-diameter Muon Drift Tube (sMDT) chambers used for the upgrade of the ATLAS muon spectrometer at the High-Luminosity LHC (HL-LHC). With continuous triggerless readout, the drift-tube chambers also provide a highly selective first-level muon trigger, as will be applied in ATLAS at the HL-LHC. Each drift-tube chamber is combined with a double layer of thin-gap RPC chambers which provide bunch crossing identification with better than 1 ns time resolution, muon trigger seeds and a coordinate measurement along the tubes. A complete layout of this detector technology has been developed for the barrel and endcap regions. The diameter of the aluminum drift tubes varies from 30 mm in the barrel and part of the endcaps to 15 mm in the innermost endcap regions, depending on the background rates. The performance determined from detailed simulations is discussed. Prototype chambers are under construction for the ATLAS upgrades and have been tested in the CERN Gamma Irradiation Facility at background rates well above those expected in the muon detector at FCC-hh.
We exploit the large W production at LHC Run 2 to perform the first ATLAS search for right-handed neutrinos in the mass range 4-20 GeV. We probe unexplored regions of mixing strengths in which right-handed neutrinos can explain neutrino masses and the matter-antimatter asymmetry and feature decay lengths of 1-100 mm, providing the striking signature of a displaced decay. The prompt lepton from the W decay is used for triggering. To reduce backgrounds to negligible levels, we select displaced vertices containing two leptons, located outside regions of dense material. We present the limits obtained in these mass regions with the full 2015+2016 LHC dataset, as well as their agreement with predictions made with full simulations.
Single muon triggers are crucial for the physics programmes of hadron collider experiments. To be sensitive to electroweak processes, single muon triggers with transverse momentum thresholds down to 20 GeV and dimuon triggers with even lower thresholds are required. In order to keep the rates of these triggers at an acceptable level, they have to be highly selective, i.e. they must have small accidental trigger rates and sharp trigger turn-on curves. The muon systems of the LHC experiments and of experiments at future colliders like FCC-hh use two muon chamber systems for the muon trigger: fast trigger chambers like RPCs with coarse spatial resolution, and much slower precision chambers like drift-tube chambers with high spatial resolution. The data of the trigger chambers are used to identify the bunch crossing in which the muon was created and for a rough momentum measurement, while the precise measurements of the muon trajectory by the precision chambers are ideal for an accurate muon momentum measurement. In our presentation we shall describe the concept of such a trigger system for two examples: the future muon trigger of the ATLAS experiment at the HL-LHC, which will employ this scheme, and the muon trigger of the baseline detector for the FCC-hh. We shall include a detailed description of the fast track reconstruction algorithms that had to be developed for the muon trigger.
We shall discuss the choice of the trigger hardware, where we favour an FPGA-based system with an embedded microprocessor for floating-point operations. The conceptual studies are based on LHC collision data, simulated data, and results from laboratory and test-beam campaigns with demonstrator hardware for such a trigger system.
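To illustrate the kind of fast segment reconstruction such a trigger must perform, the following is a minimal sketch of a linear least-squares fit to drift-tube hit positions. It is a hypothetical illustration only, not the actual ATLAS or FCC-hh trigger code; function names and coordinates are invented.

```python
# Hypothetical sketch of a fast straight-line segment fit of the kind an
# FPGA-embedded processor might run; NOT the actual trigger firmware.

def fit_segment(hits):
    """Least-squares straight-line fit x = a + b*z to (z, x) hit positions.

    The slope b gives the track angle in the precision chamber, from which
    the muon momentum can be estimated via the magnetic deflection.
    """
    n = len(hits)
    sz = sum(z for z, _ in hits)
    sx = sum(x for _, x in hits)
    szz = sum(z * z for z, _ in hits)
    szx = sum(z * x for z, x in hits)
    det = n * szz - sz * sz          # normal-equation determinant
    b = (n * szx - sz * sx) / det    # slope
    a = (szz * sx - sz * szx) / det  # intercept
    return a, b

# Example: four hits lying exactly on the line x = 1.0 + 0.5*z
hits = [(0.0, 1.0), (1.0, 1.5), (2.0, 2.0), (3.0, 2.5)]
a, b = fit_segment(hits)
```

In a real implementation the sums would be accumulated in fixed-point arithmetic as hits arrive, which is why an FPGA with an embedded floating-point core is attractive for the final division.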
The Fast TracKer (FTK) system is a track reconstruction processor able to perform full event tracking at the ATLAS Level-1 trigger acceptance rate.
The high-quality tracks produced by the system will be used by the High Level Trigger algorithms to improve the identification of physics objects such as b-jets and taus, as well as to help mitigate the effects of pileup. The combinatorial challenge of global track fitting requires the use of a custom-designed track processor. The idea behind the Fast TracKer system is to simulate all possible tracks before an ATLAS data-taking run. During the actual data taking, the hits coming from the detector are compared with the hits expected from the simulated tracks. This comparison, or 'pattern matching', is then followed by a two-step linearized track fit. This task is executed by a system of seven custom electronics board types that will process data from the Inner Detector at the 100 kHz rate of the L1 trigger.
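The pattern-matching step described above can be sketched schematically as follows. This is a toy illustration under simplifying assumptions: the real FTK uses custom associative-memory ASICs, eight detector layers, and patterns derived from full simulation, whereas the superstrip width, bank contents, and function names here are invented.

```python
# Toy sketch of FTK-style pattern matching: coarsen hits into "superstrips"
# and look up which pre-simulated patterns have a fired superstrip in every
# layer. Illustrative only; the real system uses associative-memory hardware.

def to_superstrip(hit, width):
    """Coarsen a hit position into a superstrip index."""
    return int(hit // width)

def match_patterns(event_hits, pattern_bank, width):
    """Return patterns whose superstrip in every layer fired in this event."""
    fired = [{to_superstrip(h, width) for h in layer} for layer in event_hits]
    return [p for p in pattern_bank
            if all(p[i] in fired[i] for i in range(len(p)))]

# Pattern bank: one tuple of superstrip indices, one entry per detector layer
bank = [(0, 1, 2), (0, 2, 4)]
event = [[3.1], [52.0], [104.7]]  # one hit per layer, arbitrary units
matches = match_patterns(event, bank, width=50.0)
```

Only the matched roads are then passed to the (much more expensive) linearized track fit, which is the key to keeping the combinatorics tractable at 100 kHz.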
The FTK system is currently being installed and commissioned in the ATLAS Data Acquisition System. The status of the system integration will be presented and a review of the first data collected by the FTK system will be shown.
For Run 2 of LHCb data taking, the selection of PID calibration samples is implemented in the high level trigger. A further processing step is needed to provide the calibration samples used for the determination of the PID performance, which is achieved through a centralised production that makes highly efficient use of LHCb computing resources. This poster presents the major steps of the production procedure and the charged-particle PID performance measured using these calibration samples.
The LUCID detector is the main luminosity provider of the ATLAS experiment and the only one able to provide a reliable luminosity determination in all beam configurations, luminosity ranges and at bunch-crossing level.
LUCID was entirely redesigned in preparation for Run 2: both the detector and the electronics were upgraded in order to cope with the challenging conditions expected at the LHC center of mass energy of 13 TeV and with 25 ns bunch-spacing.
An innovative calibration system based on radioactive 207Bi sources deposited on the quartz windows of the readout photomultipliers was implemented, resulting in the ability to monitor the detector's long-term stability at the few-percent level.
A description of the detector and its readout electronics will be given as well as preliminary results on the ATLAS luminosity measurement and related systematic uncertainties.
Tau leptons play an important role in many Standard Model and Beyond the Standard Model physics processes that are being investigated at the LHC. This poster details measurements of the performance of the reconstruction and identification of hadronic tau lepton decays using the ATLAS detector. The measurements include the performance of the identification, trigger, energy calibration and decay mode classification algorithms for reconstructed tau candidates. The performance of these algorithms is measured with Z boson and top quark decays to tau leptons, using the Run 2 dataset of pp collisions collected at the LHC at a centre-of-mass energy of √s = 13 TeV.
The W boson is a short lived particle which does not interact strongly. Thus its production rate measured in lepton decay channels can be compared between lead-lead and proton-proton collisions as a direct test of both binary collision scaling and the possible modification of parton distribution functions (nPDF) due to nuclear effects. The ATLAS detector has recorded 0.49 nb-1 of lead-lead collision data at the center-of-mass energy of 5.02 TeV, where W boson production yield is increased by a factor of eight relative to the available Run 1 data at 2.76 TeV. This study presents W+ and W- boson production yields measured differentially in lepton pseudorapidity and as a function of centrality, as well as the pseudorapidity dependence of the lepton charge asymmetry.
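The lepton charge asymmetry referred to above is conventionally defined as the difference of the charge-separated differential yields normalised to their sum:

```latex
A_\ell(\eta) \;=\; \frac{\mathrm{d}N_{W^+}/\mathrm{d}\eta \;-\; \mathrm{d}N_{W^-}/\mathrm{d}\eta}
                        {\mathrm{d}N_{W^+}/\mathrm{d}\eta \;+\; \mathrm{d}N_{W^-}/\mathrm{d}\eta}
```

Many experimental and luminosity uncertainties cancel in this ratio, which is why the asymmetry is a particularly clean probe of the (nuclear) parton distribution functions.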
The production of jets in association with a W or a Z boson in proton-proton collisions is an important process for studying QCD in multi-scale environments. Measurements of W/Z boson production in association with heavy-flavour quarks provide experimental constraints to improve the theoretical description of these processes, which suffer from larger uncertainties than the inclusive jet case.
A detailed knowledge of the production of jets associated with electroweak bosons is a key element in the understanding of Higgs processes, since they represent one of the largest backgrounds for these measurements. Results for the differential production cross sections for W/Z+jets in several kinematic observables, measured by the ATLAS experiment at a centre-of-mass energy of 13 TeV, are presented and compared to higher-order QCD calculations and recent Monte Carlo simulations.
Muon reconstruction and identification play a fundamental role in many analyses of central importance to the LHC Run-2 physics programme. The algorithms and criteria used in ATLAS for the reconstruction and identification of muons with transverse momentum from a few GeV to the TeV scale will be presented. Their performance is measured in data using decays of Z and J/ψ to pairs of muons, which provide a large-statistics calibration sample. Reconstruction and identification efficiencies are evaluated, as well as momentum scales and resolutions, and the results are used to derive precise MC simulation corrections.
The ATLAS muon spectrometer is an essential component of the detector, providing trigger and track reconstruction for every physics process containing high-energy muons. This is accomplished using several types of tracking sub-detectors, providing both a very fast trigger system and an accurate track reconstruction. A dedicated toroidal magnetic field in this outer region of the detector enables the measurement of the muon momentum.
The higher interaction rate that the ATLAS detector will sustain during Phase I requires better fake-track rejection in critical detector regions without degrading the trigger efficiency. This is accomplished by adding new track points to the reconstruction, in particular in the pseudorapidity region 1<|η|<1.3. These new detector chambers, named BIS78, use an updated Resistive Plate Chamber design, thinner than those previously installed in order to fit in the small space made available by the upgrade of the Monitored Drift Tube chambers. The Trigger and Data Acquisition system for the new chambers is illustrated in this presentation, from the Front-End electronics to the Read Out software, including results from prototype tests. The system comprises a combination of custom and commercially available hardware of the same type that will be adopted by every ATLAS system in Phase II, thus also representing a first test-bench for the whole detector.
The LHC delivers an unprecedented number of proton-proton collisions to its experiments. In kinematic regimes first studied by earlier generations of collider experiments, the limiting factor to more deeply probing for new physics can be the online and offline computing, and offline storage, requirements for the recording and analysis of this data. In this contribution, we describe a strategy that the ATLAS experiment employs to overcome these limitations and make the most of LHC data during Run-2 - a compact data stream involving trigger-level jets, recorded at a far higher rate than is possible for full event data. We discuss the challenges posed in the analysis of this data, collected in 2016, including the custom jet calibration developed. We also present the results of that analysis, demonstrating the competitiveness and complementarity with traditional data streams.
The electromagnetic processes of annihilation of $(e^+ e^-)$ pairs, generated in high-energy nucleus-nucleus and hadron-nucleus collisions, into heavy lepton pairs are theoretically studied in the one-photon approximation, using the technique of helicity amplitudes. For the process $e^+e^- \rightarrow \mu^+\mu^-$, it is shown that -- in the case of unpolarized electrons and positrons -- the final muons are also unpolarized but their spins are strongly correlated. For the final $(\mu^+ \mu^-)$ system, the structure of triplet states is analyzed and explicit expressions for the components of the spin density matrix and correlation tensor are derived.
It is demonstrated that here the spin correlations of the muons are of a purely quantum character, since one of the Bell-type incoherence inequalities for the correlation tensor components is always violated. It is also established that, when including the additional contribution of the weak interaction of lepton neutral currents through the virtual $Z^0$ boson, the qualitative character of the muon spin correlations does not change.
In addition, a theoretical investigation of the spin structure of lepton-pair production by pairs of photons (which, in particular, may be emitted in relativistic heavy-ion and hadron-nucleus collisions) is performed. For the two-photon process $\gamma \gamma \rightarrow e^+ e^-$, it is found that -- quite similarly to the process $e^+ e^- \rightarrow \mu^+ \mu^-$ -- in the case of unpolarized photons the final electron and positron remain unpolarized, but their spins prove to be strongly correlated. Explicit expressions for the components of the correlation tensor and for the relative fractions of singlet and triplet states of the final $(e^+ e^-)$ system are derived. Again, one of the Bell-type incoherence inequalities for the correlation tensor components is always violated, and thus the spin correlations of the electron and positron have a strongly pronounced quantum character.
Analogous considerations can be wholly applied as well, respectively, to the annihilation process $e^+ e^- \rightarrow \tau^+ \tau^-$ and to the two-photon processes $\gamma \gamma \rightarrow \mu^+ \mu^-$, $\gamma \gamma \rightarrow \tau^+ \tau^-$, which become possible at much higher energies.
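For orientation, the correlation tensor discussed in the abstract above is defined through the standard decomposition of the two-fermion spin density matrix (a schematic summary; sign and normalization conventions vary between papers):

```latex
\hat{\rho}^{(1,2)} \;=\; \frac{1}{4}\Big(\, \hat{1}\otimes\hat{1}
  \;+\; \sum_i P^{(1)}_i\, \hat{\sigma}_i\otimes\hat{1}
  \;+\; \sum_k P^{(2)}_k\, \hat{1}\otimes\hat{\sigma}_k
  \;+\; \sum_{i,k} T_{ik}\, \hat{\sigma}_i\otimes\hat{\sigma}_k \Big),
\qquad
T_{ik} \;=\; \langle\, \hat{\sigma}^{(1)}_i \otimes \hat{\sigma}^{(2)}_k \,\rangle .
```

Here $P^{(1)}$ and $P^{(2)}$ are the single-particle polarization vectors (vanishing for the unpolarized case considered above), and the Bell-type incoherence inequalities constrain the components $T_{ik}$ for any classical (incoherent) mixture of factorizable spin states; their violation is the signature of genuinely quantum correlations.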
The Tile Calorimeter (TileCal) of the ATLAS experiment at the LHC is the central hadronic calorimeter designed for the reconstruction of hadrons, jets, tau particles and missing transverse energy. TileCal is a scintillator-steel sampling calorimeter covering the pseudorapidity region |η| < 1.7. The scintillation light produced in the scintillator tiles is transmitted by wavelength-shifting fibers to photomultiplier tubes (PMTs). The analog signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. About 10000 channels of front-end electronics measure the signals of the calorimeter, with energies ranging from ~30 MeV to ~2 TeV. Each step of the signal reconstruction, from scintillation light to the digital pulse reconstruction, is monitored and calibrated.
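The amplitude reconstruction from the digitized 25 ns samples can be illustrated schematically as a linear weighted sum over the samples. This is only a toy sketch of the general technique; the weights, pulse shape, and sample count below are invented for the example and are not the actual TileCal optimal-filtering constants.

```python
# Toy illustration of reconstructing a pulse amplitude from discrete samples
# via a linear weighted sum. Real systems derive the weights from the known
# pulse shape and noise; these weights are invented for the example.

def amplitude(samples, weights):
    """Linear combination of raw samples; weights summing to zero cancel
    any constant pedestal automatically."""
    return sum(w * s for w, s in zip(weights, samples))

# Invented weights: sum to zero (pedestal cancellation), peak weight 1.
weights = [-0.5, 0.0, 1.0, 0.0, -0.5]

# Samples = pedestal 50 ADC counts + a pulse of amplitude 200 peaking
# exactly on the central sample (idealized, noise-free pulse shape).
samples = [50.0, 50.0, 250.0, 50.0, 50.0]
amp = amplitude(samples, weights)
```

The same weighted-sum form, with a second set of weights, also yields the pulse arrival time, which is why such linear filters are attractive for 40 MHz front-end electronics.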
The performance of the calorimeter has been studied in-situ employing cosmic ray muons and a large sample of proton-proton collisions acquired during the operations of the LHC. Muons of high momentum from electroweak bosons decays are employed to study the energy response of the calorimeter at the electromagnetic scale. The calorimeter response to hadronic particles is evaluated with a sample of isolated hadrons and the modelling of the response by the Monte Carlo simulation is discussed. The calorimeter timing calibration and resolutions are studied with jets.
Results on the calorimeter performance in terms of absolute energy scale, timing, noise and the associated stabilities are presented. These results show that the TileCal performance is within the design requirements and makes an essential contribution to reconstructed objects and physics results.
The discovery made at the Large Hadron Collider (LHC) has revealed that the spontaneous symmetry breaking mechanism is realised in a gauge theory such as the Standard Model (SM) by at least one Higgs doublet. However, the possible existence of other scalar bosons cannot be excluded. We analyze signatures of extensions of the SM characterized by extra scalar representations, in view of recent and previous Higgs data. We show that such representations can be probed and distinguished, mostly with multileptonic final states, with a relatively low luminosity at the LHC.
Most searches for physics beyond the Standard Model at ATLAS study prompt signatures, where particle tracks are associated to the primary interaction vertex. Since these searches have not found any evidence of new physics yet, it becomes increasingly important to consider long-lived signatures, which are much harder to probe. The reconstruction algorithms commonly used at ATLAS are highly optimized for prompt signatures and have low efficiencies for long-lived particles. To retain high efficiency for tracks with large impact parameters, an additional tracking pass was developed that runs on detector hits not used by the standard tracking. This allows the reconstruction, inside the silicon trackers of ATLAS, of secondary vertices originating from decays of particles with lifetimes of the order of picoseconds to nanoseconds. Hadronic decays with a high track multiplicity are especially challenging, since they are often reconstructed as multiple displaced vertices separated in space, each with a low track multiplicity and mass, leading to degraded signal efficiencies. The reconstruction of such decays has recently been significantly improved by introducing a new procedure to merge close-by vertices. Further improvements have been implemented that increase the reconstruction efficiency of displaced vertices near disabled detector modules.
A search for new charged massive gauge bosons, called W', decaying to tb is performed with the ATLAS detector in the decay channel leading to final states with an electron or muon, 2 or 3 jets and missing transverse momentum. This search uses a dataset corresponding to an integrated luminosity of 36.1 fb$^{-1}$ of $pp$ collisions produced at the LHC and collected during 2015 and 2016. The data are found to be consistent with the Standard Model expectation. Limits are therefore set on the W' → tb cross section times branching ratio and on the W' boson effective couplings as a function of the W' boson mass.
A search for dark matter pair production in association with a Z' boson in pp collisions, at 13 TeV, using 36.1 fb$^{-1}$ of LHC pp collision data recorded with the ATLAS detector is presented. Events are characterised by large missing transverse momentum and a hadronically decaying vector boson reconstructed as either a pair of small-radius jets, or as a single large-radius jet with substructure. Results are interpreted in terms of simplified models which describe the interaction of dark matter and standard model particles.
The search for events with large missing transverse momentum (MET) recoiling against a SM particle is a probe for detecting Dark Matter (DM) at the LHC. The discovery of the Higgs boson h opens a new opportunity through the h+MET signature, with h->bb being the most probable decay channel. Depending on the amount of MET in the event, the Higgs candidate is reconstructed as a system of two b-tagged small-radius jets or as a single large-radius jet containing two b-tagged subjets. The results are interpreted in the context of a simplified Z'-2HDM model, and model-independent limits on the visible cross section are also provided for h->bb+DM beyond-Standard-Model processes. The analysis of data recorded by the ATLAS detector during 2015 and 2016 has already excluded a substantial region of the parameter space of the Z'-2HDM model. Over the last year, new techniques have been studied in order to overcome the current limitations due to jet merging in the boosted regime. In this context, one of the main improvements in sensitivity is the use of variable-radius track jets for b-tagging. Results including data recorded in 2017, corresponding to the unprecedented integrated luminosity of 80 fb$^{-1}$, will be presented in this poster.
Dark Matter comprises a significant part of the matter content of the Universe. Despite solid cosmological evidence, its particle nature and properties are still to be unraveled. Looking for the production of Dark Matter particles at particle colliders can shed light on the mystery of Dark Matter. The signature of this search is a pair of quarks coming from the decays of the Standard Model W and Z bosons, recoiling against missing transverse momentum from Dark Matter particles. Different topologies, such as small-R jets and large-R jets, are considered to identify hadronic decays of highly boosted W and Z bosons. The results using 36 fb$^{-1}$ of 2015+2016 ATLAS pp collision data are presented. Limits on a vector-mediator benchmark model, as well as limits with reduced model dependence at 95% confidence level on the visible cross-section of W/Z + Dark Matter production, will be presented.
The MoEDAL experiment addresses a decades-old issue, the search for an elementary magnetic monopole, first theorised in 1931 by Dirac to explain electric charge quantisation. Since then it was shown that magnetic monopoles occur naturally in grand unified theories as solutions of the classical equations of motion.
The dedicated experiment exploits the new energy regime opened at the LHC, allowing direct probes of magnetic monopoles at the TeV scale for the first time. In this poster, recent results obtained with 13 TeV proton-proton collision data at the MoEDAL experiment, updating our previous search with nearly six times more integrated luminosity and including additional models, will be presented and discussed. MoEDAL pioneered a technique in which monopoles are slowed down and stopped in a dedicated aluminium array, and the presence of trapped monopoles is probed by analysing the samples with a superconducting magnetometer, yielding the first LHC constraints on monopoles carrying twice or thrice the Dirac charge.
A search for pair and single production of vector-like quarks T and B with a leptonically decaying Z boson is presented. The data were collected in pp collisions at 13 TeV with the ATLAS detector at the Large Hadron Collider, corresponding to an integrated luminosity of 36.1 fb$^{-1}$. A high-momentum Z boson is reconstructed from a same-flavor, opposite-charge pair of leptons produced alongside a third-generation quark, conserving the charge of the vector-like T (q_T=+2/3) and B (q_B=-1/3) quarks. Final states with different multiplicities of leptons, large-R jets and b-tagged small-R jets, alongside other selection criteria, are used to discriminate the signal from background processes. Five channels are optimized on a topology with high object momenta and are then combined, separately for pair and single production. No excess above the Standard Model expectation was found, and therefore lower mass limits on vector-like T and B quarks at 95% confidence level are derived for the singlet and doublet models. For single production, limits on the coupling to Standard Model quarks are also set.
Results of a search for the pair production of highly collimated groupings of photons (photon jets) in the ATLAS detector at the LHC are reported, using data from proton-proton collisions collected in 2015 and 2016 at a centre-of-mass energy of 13 TeV. Photon-jets can arise from the decay of new, highly boosted particles that are identified in the electromagnetic calorimeter as a single, photon-like cluster by the trigger system.
A search is performed for a heavy resonance decaying to WZ in the fully leptonic channel. It is based on proton-proton collision data collected by the ATLAS experiment at the LHC at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 36.1 fb$^{-1}$. Limits are set on the production cross section times branching ratio of a heavy vector particle produced either in quark-antiquark fusion or by vector boson fusion. Constraints are also obtained on the mass and couplings of a singly-charged Higgs, in the Georgi-Machacek model, produced by vector boson fusion.
The associated production of the Higgs boson with a pair of top/anti-top quarks (ttH) is the only process providing direct access to the measurement of the Yukawa coupling between the Higgs boson and the top quark. The presented results exploit the data collected during 2015 and 2016 by the ATLAS experiment during LHC collisions at a centre-of-mass energy of 13 TeV. Multivariate analysis techniques are used in order to discriminate the signal from the very large backgrounds arising mainly from top-quark pair production. In addition, the analysis uses for the first time algorithms specifically designed to cope with the challenging reconstruction of hadronically decaying high-pT bosons and top quarks.
A search for type III seesaw heavy leptons decaying into pairs of leptons, a pair of jets and large missing transverse energy is presented. The results reported here use the $pp$ collision data sample corresponding to 80.0 fb$^{-1}$ of integrated luminosity collected in 2015, 2016 and 2017 by the ATLAS detector at the LHC at a centre-of-mass energy of 13 TeV. The observation of neutrino oscillations provides strong evidence that neutrinos have mass, with masses expected to be much smaller than those of the charged leptons. The type III seesaw mechanism introduces additional heavy lepton triplets. The analysis is performed in a simplified model where only one fermionic triplet, (L+, L-, N0), is assumed. This simplification has little impact on LHC physics, as in most cases only the lighter states can be produced in TeV collider experiments. The search is performed in final states where both N0 and L± decay via a W boson, the process with the largest effective cross-section. One of the W bosons decays leptonically and the other hadronically. Only final states containing electrons and muons are considered, including leptonic tau decays. The combination of the opposite- and same-charge lepton channels is presented. A key element of the search is an efficient data-driven estimation of backgrounds from mis-identified (fake) prompt leptons, originating from either hadronic jets or secondary weak hadron decays, and from electrons with mis-identified charge, in the high pile-up environment of the 2017 LHC data taking.
A search for W'-boson production in the W' --> tb -->qqbb decay channel is presented using 36.1 fb$^{-1}$ of 13 TeV proton-proton collision data collected by the ATLAS detector at the Large Hadron Collider in 2015 and 2016. The search is interpreted in terms of both a left-handed and right-handed chiral W' boson within the mass range 1-5 TeV. Identification of the hadronically decaying top quark is performed using jet substructure tagging techniques based on a shower deconstruction algorithm.
The search for heavy Higgs bosons at the LHC represents an intense experimental programme, carried out by the ATLAS and CMS collaborations, which includes the hunt for invisible Higgs decays and dark matter candidates. No significant deviations from the SM backgrounds have been observed in any of these searches, imposing significant constraints on the parameter space of various new-physics models with an extended Higgs sector. Here we discuss an alternative search strategy for heavy Higgs bosons decaying invisibly at the LHC, focusing on the pair production of a heavy scalar H together with a pseudoscalar A through the production mode $q\bar{q} \to Z^* \to HA$. We identify as the most promising signal the final state $4b + E_T^{miss}$, arising from the heavy-scalar decay mode $H \to hh \to b\bar{b}b\bar{b}$, with h being the discovered SM-like Higgs boson with $m_h = 125$ GeV, together with the invisible channel of the pseudoscalar. We work within the context of simplified MSSM scenarios in which most sfermions are quite heavy, with O(10) TeV masses, while the stops are heavy enough to reproduce the 125 GeV mass of the lightest SM-like Higgs boson. By contrast, the gauginos/higgsinos and the heavy MSSM Higgs bosons have masses near the EW scale. Our search strategies, for an LHC centre-of-mass energy of $\sqrt{s} = 14$ TeV, yield statistical significances of the signal over the SM backgrounds of up to ~1.6σ and ~3σ for total integrated luminosities of 300 fb$^{-1}$ and 1000 fb$^{-1}$, respectively.
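The significances quoted above are typically computed with one of the standard counting-experiment estimators (the abstract does not specify which; both forms below are in common use), where S and B are the expected signal and background yields after selection:

```latex
\mathcal{Z} \;\simeq\; \frac{S}{\sqrt{S+B}}
\qquad\text{or, from the asymptotic likelihood ratio,}\qquad
\mathcal{Z} \;=\; \sqrt{\,2\left[(S+B)\ln\!\left(1+\frac{S}{B}\right) - S\right]\,}.
```

The two agree in the limit $S \ll B$; the second form remains accurate when the signal is not small compared to the background.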
Supersymmetry postulates the existence of superpartners (sparticles) whose spins differ by one half-unit from their corresponding Standard Model partners. The sector of sparticles with only electroweak interactions contains charginos, neutralinos, sleptons, and sneutrinos. Charginos and neutralinos are the mass eigenstates formed from linear superpositions of the superpartners of the charged and neutral Higgs bosons and electroweak gauge bosons. In R-parity-conserving models, sparticles can only be produced in pairs and the lightest supersymmetric particle (LSP) is stable and a dark matter candidate. This is typically the lightest neutralino, which can then provide a natural candidate for dark matter. When produced in the decay of heavier SUSY particles, a neutralino LSP would escape detection, leading to an amount of missing transverse momentum significantly larger than for SM processes, a canonical signature that can be exploited to extract SUSY signals. In this talk we present a set of recent searches for the electroweak production of charginos, neutralinos and sleptons decaying to final states with at least four leptons. These searches rely on proton-proton collision data delivered by the Large Hadron Collider at a centre-of-mass energy of √s = 13 TeV, collected and reconstructed with the ATLAS detector.
The instantaneous luminosity of the Large Hadron Collider at CERN will be increased by up to a factor of five with respect to the design value through an extensive upgrade program over the coming decade. This increase will allow for precise measurements of Higgs boson properties and extend the search for new physics phenomena beyond the Standard Model. The largest Phase-1 upgrade project for the ATLAS Muon System is the replacement of the present first station in the forward regions with the so-called New Small Wheels (NSWs) during the long LHC shutdown in 2019/20. Along with Micromegas, the NSWs will be equipped with eight layers of small-strip thin gap chambers (sTGC) arranged in multilayers of two quadruplets, for a total active surface of more than 2500 m$^2$. All quadruplets have trapezoidal shapes with surface areas up to 2 m$^2$. To retain the good precision tracking and trigger capabilities in the high-background environment of the high-luminosity LHC, each sTGC plane must achieve a spatial resolution better than 100 μm to allow the Level-1 trigger track segments to be reconstructed with an angular resolution of approximately 1 mrad. The basic sTGC structure consists of a grid of gold-plated tungsten wires sandwiched between two resistive cathode planes at a small distance from the wire plane. The precision cathode plane has strips with a 3.2 mm pitch for precision readout, and the cathode plane on the other side has pads for triggering. The position of each strip must be known with an accuracy of 30 μm along the precision coordinate and 80 μm along the beam. The mechanical precision is a key point and must be controlled and monitored throughout the process of construction and integration. The sTGC detectors are currently being produced and tested in five countries and assembled into wedges at CERN for integration into ATLAS.
The sTGC design, performance, construction and integration status will be discussed, along with results from tests of the chambers with nearly final electronics with beams and cosmic rays.
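Reaching a resolution of better than 100 μm on 3.2 mm strips relies on interpolating the charge shared across neighbouring strips. The sketch below illustrates the basic charge-weighted centroid idea; the charges, strip numbering, and function names are invented for the example, and the actual sTGC reconstruction is more sophisticated.

```python
# Toy illustration of a charge-weighted centroid on 3.2 mm pitch strips,
# the kind of interpolation that allows sub-strip position resolution.
# Cluster charges and strip numbering are invented for the example.

PITCH = 3.2  # strip pitch in mm

def centroid(charges, first_strip=0):
    """Charge-weighted mean strip index of a cluster, converted to mm."""
    total = sum(charges)
    mean_strip = sum((first_strip + i) * q
                     for i, q in enumerate(charges)) / total
    return mean_strip * PITCH

# Symmetric three-strip cluster on strips 4, 5, 6 -> centroid at strip 5
pos = centroid([20.0, 60.0, 20.0], first_strip=4)
```

Because the centroid divides by the total cluster charge, gain variations common to all strips largely cancel; what limits the resolution in practice is the knowledge of the strip positions themselves, hence the 30 μm mechanical accuracy requirement quoted above.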
Over the past several years, a team based around the ATLAS Experiment at CERN in Geneva has organised public engagement and education activities at a variety of non-scientific venues. These have included the Montreux Jazz Festival (Montreux, Switzerland), the Bluedot Festival (Jodrell Bank, UK), the WOMAD Festival (Charlton Park, UK), Moogfest (Durham, NC, USA), and the Sofia Music Weeks in Bulgaria, with discussions on-going with a major European music festival as well as a festival in the United States. The goal of this effort is to engage new audiences who normally would not be drawn to science festivals and to investigate our ability to communicate scientific messages to broad, diverse audiences.
The results have been impressive, as measured through attendance (example: the first Physics Pavilion at WOMAD received 4500 visitors over 3 days and such was the success that a return invitation was received immediately for 2017 with additional space, resulting in an increased footfall of ~5500), and enthusiasm of the audience and the scientists hosting the activities. We describe the presentation material and format, the hands-on workshops, and other methods employed, as well as lessons learned on how to best optimise audience engagement. The concept can be reproduced for other festival-type environments, and adapted to suit the particular audience demographic and format of the festival.
To exploit the physics potential of the ATLAS experiment at the HL-LHC, a trigger system with a first-level trigger rate of 1 MHz at a maximum latency of 10 µs will be employed. The TDCs of the current front-end electronics of the ATLAS muon drift-tube (MDT) chambers are incompatible with these trigger requirements and will have to be replaced by new TDCs. So will the amplifier-discriminator (ASD) chips, which are mounted on common front-end cards with the TDCs.
A new ASD chip with eight channels was developed in a 130 nm CMOS Global Foundries process. The system is composed of a cascade of the analog signal-processing front end and a Wilkinson A/D, performing both time-over-threshold and charge measurements. The sensitivity at the output of the analog signal-processing chain is 14 mV/fC, while the equivalent noise charge is 0.6 fC (~3.38 ke), with a preamplifier rise time of <12 ns. This performance has been achieved while managing a very high detector parasitic capacitance at the front-end input (~60 pF). Each channel consumes less than 10 mA from a single 3.3 V supply voltage. In 130 nm CMOS, the total area occupancy is 6.3 mm^2.
In our presentation we shall report the results of laboratory tests with test pulses, as well as results obtained with the new chips on a muon chamber operated in a high-energy muon beam under varying gamma background rates in CERN's Gamma Irradiation Facility GIF++. Thanks to its higher amplification, the new chip shows a better spatial resolution than the old ASD chip currently used in the ATLAS experiment.
Social media is an essential tool for communicating particle physics results to a wide audience. This presentation will explore how the nature of social media platforms has impacted the content being shared across them, and the subsequent effect this has had on the user experience. The ATLAS Experiment has adapted its communication strategy to match this social media evolution, producing content specifically targeting this emerging audience. The success of this approach is examined and the effect on user experience is evaluated.
The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the High Level Trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component of all physics signatures in the ATLAS Trigger, enabling the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high interaction rates expected in the 13 TeV LHC collisions, the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in both the 2016 and 2017 data from 13 TeV LHC collisions has been excellent and exceeded expectations, even at the very high interaction multiplicities observed at the end of data taking in 2017. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented for the Run 2 data, illustrating the superb performance of the ID trigger algorithms in these extreme pileup conditions. This demonstrates how the ID tracking continues to lie at the heart of the trigger performance that enables the ATLAS physics program, and will continue to do so in the future.
In the coming years, the LHC accelerator will be upgraded to deliver higher instantaneous and integrated luminosity, and as a consequence the data and trigger rates will increase drastically. The ATLAS goal is to cope with this large data flow while preserving the high muon efficiency. For this purpose the current innermost stations of the Muon Spectrometer endcap, the Small Wheels, will be replaced in the 2019/2020 shutdown with a New Small Wheel (NSW) detector for high-luminosity LHC runs. The NSW will feature two new detector technologies: resistive Micromegas (MM) will serve as the precision detector, while small-strip Thin Gap Chambers (sTGC) will provide the trigger.
An overview of the design, construction and assembly procedures of the Micromegas modules will be reported. Results of the characterization with cosmic rays will also be presented.
The muon spectrometer of the ATLAS detector will undergo a major upgrade during the Long Shutdown 3, in order to cope with the operational conditions at the high-luminosity LHC. The trigger and readout electronics for the Resistive Plate Chambers (RPC), Thin Gap Chambers (TGC), and Monitored Drift Tube (MDT) chambers will be replaced to make them compatible with a new trigger scheme with higher trigger rates and longer latencies. The MDT precision chambers, which at present are not included in the hardware trigger, will be integrated into the level-0 trigger in order to sharpen the momentum threshold. The MDT front-end electronics will also be replaced. New-generation RPC chambers will be installed in the inner barrel layer to increase the acceptance and robustness of the trigger. Some of the MDT chambers in the inner barrel layer will be replaced with new small-diameter MDTs. New TGC triplet chambers in the barrel-endcap transition region will replace the current TGC doublets to suppress the high trigger rate from random coincidences in this region. A major upgrade of the power system is also planned. The Phase-II upgrade concludes the process of adapting the muon spectrometer to the ever-increasing performance of the LHC, which started with the Phase-I upgrade New Small Wheel (NSW) project that will replace the innermost endcap wheels.
The time-dependent direct and mixing-induced CP asymmetries in B^0 -> \pi\pi and B_s -> KK decays have been measured at the LHCb experiment using data collected during Run 1 at centre-of-mass energies of 7 and 8 TeV, corresponding to 3 fb^-1. The same data set was also used for a measurement of the time-integrated CP asymmetries in B^0 -> K\pi and B_s -> \pi K decays. The results obtained supersede those of the B factories with much higher precision, and include the first measurement of CP violation in B_s -> KK decays by any experiment. The measurements of the time-integrated CP asymmetries are the most precise from a single experiment to date.
During 2017, the Large Hadron Collider provided record-breaking integrated and instantaneous luminosities, resulting in huge amounts of data delivered with numbers of interactions per bunch crossing significantly beyond initial projections. In spite of these challenging conditions, the ATLAS Inner Detector (ID) track reconstruction continued to perform excellently, and this contribution will discuss the latest performance results covering the key aspects of track reconstruction. Potential areas for improvement will also be highlighted, and planned improvements to track reconstruction techniques for future data-taking periods, in areas such as track ambiguity solving and vertex reconstruction, will be outlined.
Hadronic signatures are critical to the ATLAS physics program, and are used extensively for both Standard Model measurements and searches for new physics. These signatures include generic quark and gluon jets, as well as jets originating from b-quarks or the decay of massive particles (such as electroweak bosons or top quarks). Additionally, missing transverse momentum from non-interacting particles provides an interesting probe in the search for new physics beyond the Standard Model. Developing trigger selections that target these events is a huge challenge at the LHC due to the enormous rates associated with hadronic signatures. This challenge is exacerbated by the amount of pile-up activity, which continues to grow. In order to address these challenges, several new techniques were developed to significantly improve the potential of the 2017 dataset. This talk presents an overview of how we trigger on hadronic signatures at the ATLAS experiment, outlining the challenges of hadronic object triggering and describing the improvements performed over the course of the Run-2 LHC data-taking program. The performance in Run-2 data is shown, including demonstrations of the new techniques being used in 2017. We also discuss further critical developments implemented for the rest of Run-2 and their performance in early 2018 data.
In addition to the main physics program based on proton-proton collisions, the ATLAS experiment also collects heavy-ion data with lead nuclei. Among these data sets, ultra-peripheral collisions provide a highly interesting class of events. They give an opportunity to study very rare processes involving two-photon exchange, such as light-by-light (LbyL) scattering, a phenomenon which was theoretically postulated more than 80 years ago but which, due to its small cross section, had not been measured directly until 2017. Based on lead-lead (Pb+Pb) collision data from 2015, the first evidence of this process was reported by the ATLAS Collaboration. The signature of LbyL scattering consists of two low-pT photons and the absence of any other activity in the detector. Triggering on such events in the ATLAS experiment is challenging, as the rate of LbyL events is small compared to that of central Pb+Pb collisions, which are accompanied by a lot of activity in the detector. The LbyL trigger strategy from the 2015 Pb+Pb data taking is the starting point for developing the trigger for the 2018 Pb+Pb run. Work is focused on lowering the photon pT requirement, which can provide almost a factor of two gain in the accepted rate. In addition, trigger requirements need to be adjusted for the potentially larger instantaneous luminosity delivered by the LHC this year. This poster presents an approach to address the challenges of triggering on LbyL scattering in Pb+Pb collisions during 2018.
After the discovery of a Higgs boson, the measurements of its properties are at the forefront of research. The determination of the associated production of a Higgs boson and a pair of top quarks is of particular importance, as the ttH Yukawa coupling is large and can probe for physics beyond the Standard Model. The ttH production was analysed in various final states. Results in the multilepton and $\rm H\rightarrow \gamma\gamma$ final states are reviewed, and the combined results also including the $\rm H\rightarrow b\bar b$ decay channel are presented. The analysis was based on data taken in 2015 and 2016 by the ATLAS experiment, recorded from 13~TeV proton-proton collisions. The combined results provide evidence for the ttH production modes and are compared with the Standard Model (SM) predictions, allowing models beyond the SM to be constrained.
By using the non-linear HEFT it is possible to study WW, ZZ and WZ elastic scattering at the high energies relevant for the LHC. For most of the parameter space, the scattering is strongly interacting (with the MSM being a remarkable exception). Starting from one-loop computations complemented with dispersion relations and the Equivalence Theorem, we obtain different unitarization methods which produce analytical amplitudes corresponding to different approximate solutions of the dispersion relations. The partial waves obtained can show poles in the second Riemann sheet, which have the natural interpretation of dynamical resonances with masses and widths depending on the HEFT parameters. We compare the different unitarizations and find that they are qualitatively, and in many cases quantitatively, very similar. We apply our results to elastic WW, ZZ and WZ scattering, and also consider the photon-photon and top-antitop channels at NLO. The amplitudes obtained can be used to derive realistic resonant and non-resonant cross sections, to be compared with and used for a proper interpretation of the LHC data.
Plenary electroweak session of LHCP2018
Heavy Ions plenary session of LHCP2018
Heavy Ions parallel session of LHCP2018
This parallel talk should give an overview of recent developments in the QCD phenomenology underlying general-purpose Monte Carlo event generators, and the role of these developments in the treatment of nuclei. The focus should be on soft particle production, and its function in determining the differences and similarities between the underlying dynamics of ultra-relativistic proton-proton and nucleus-nucleus collisions.
Higgs parallel session of LHCP2018
QCD parallel session of LHCP2018
SUSY parallel session of LHCP2018
A general overview of the SUSY particle phenomenology of interest to the LHC
Guided Visits to the City of Bologna
Outreach event organised in parallel to the LHCP2018 conference
SUSY plenary session of LHCP 2018
Performance session of LHCP2018
QCD parallel session of LHCP2018
SUSY parallel session of LHCP2018
Overview of the phenomenology of the Higgs sector in SUSY theories.
Top parallel session of LHCP2018
IAC meeting during LHCP2018:
https://indico.cern.ch/event/710640/
Restricted meeting of the International Advisory Committee of the LHCP conference series
Exotica parallel session of LHCP2018
Many proposed extensions of the Standard Model contain particles with macroscopic lifetimes, decaying away from the interaction point. Such particles pose unique challenges at the LHC. This talk provides an overview of the current status of these searches, as well as prospects for new experiments at the LHC (e.g. MATHUSLA, FASER, SHIP, and CODEX-b) which can close some interesting regions of theoretical parameter space.
Heavy Flavour of LHCP2018
Present fits of the b->sll Wilson coefficients treat the non-local charm contributions as a systematic theoretical uncertainty. The theoretical basis of the underlying non-local hadronic matrix elements is introduced, and ways toward constraining them in future fits are discussed.
Discussion of the impact of non-local QED corrections on observables of the decay B_s -> mu^+ mu^-, with an outlook on their impact on other exclusive b->s mu mu transitions.
Discuss the problems in constructing New Physics models that fulfill the experimental constraints from the B anomalies and beyond.
Top parallel session of LHCP2018
Upgrade/Future parallel session of LHCP2018
Exotica/Dark Matter plenary session of LHCP2018
Heavy Flavour plenary session of LHCP2018
Posters session LHCP2018
Electroweak parallel session of LHCP2018
Performance session of LHCP2018
Top parallel session of LHCP2018