Offshell-2021 — The virtual HEP conference on Run4@LHC

Europe/Zurich
Alberto Belloni (University of Maryland (US)), Kingman Cheung (National Tsing Hua University (TW)), Jan Fiete Grosse-Oetringhaus (CERN), Stefania Gori (UC Santa Cruz), Kristin Lohwasser (University of Sheffield (GB)), Stathes Paganis (National Taiwan University), Matthias Schott (CERN / University of Mainz), Mika Anton Vesterinen (University of Warwick (GB)), Chilufya Mwewa (Brookhaven National Laboratory (US))
Description

Offshell-2021 — The virtual HEP conference on Run4@LHC is dedicated to discussing ideas for LHC Run 4.

Topics range from unexplored ideas for the LHC experiments to new detector and reconstruction methodologies, Machine Learning and computing at the HL-LHC. We also welcome contributions on extensions of the existing experiments or the addition of new experiments to the LHC ring during Run 4.

Contributions to the plenary talk sessions should come from individuals or groups of individuals, not from the collaborations. Please note that results must not be obtained using data or tools of the major LHC collaborations, in line with the policies for publishing outside the ALICE, ATLAS, CMS and LHCb collaborations. In addition, special sessions for contributions from the LHC collaborations are foreseen, and keynote speakers from the experiments and beyond have been invited. The program is complemented by extended virtual poster sessions, held in virtual reality, with contributions from young scientists of the LHC collaborations.

To suit all time zones in the world, each session takes place twice, at two different times. Speakers must present live in at least one of the two; a recording is shown in the other. If desired, the speaker can also deliver the talk live in both time zones. Every session therefore consists of both live and recorded elements, allowing everyone around the globe to participate actively in the conference.

Participants aiming for a plenary talk should submit an extended abstract; once it is accepted, a paper describing the idea is written before the conference. The content will be presented during the conference, and selected papers will be published as original contributions in EPJ-C after a two-step review process. Alternatively, poster presentations and proceedings will be made available for selected abstracts that do not meet the two-step review criteria. More information can be found in the publication guidelines here.

Deadlines:
Posters can be submitted either for standalone studies or through the speaker committees of the respective collaborations.
Poster Abstract Submission deadline: 15 June 2021

(further timelines here)

Session times:

              Block A             Block B
PST     5:30 - 9:30         15:30 - 19:30
EST     8:30 - 12:30        18:30 - 22:30
CEST    14:30 - 18:30       0:30 - 4:30
CST     21:30 - 1:30        7:30 - 11:30

 

Contact information
Participants
  • Abhaya Kumar Swain
  • Alberto Belloni
  • Alexander Kalweit
  • Alice Ohlson
  • Amit Adhikary
  • Anju Bhasin
  • Anna Kaczmarska
  • Ante Bilandzic
  • Anton Albert Riedel
  • Aravind Thachayath Sugunan
  • Arkadiy Taranenko
  • Arnaud Jean Maury
  • Arturo Sanchez Pineda
  • Arul Prakash
  • Arvind Khuntia
  • Ben Carlson
  • Benhamida Azzeddine
  • Carlo Oleari
  • Carlos Vazquez Sierra
  • Carmelo D'Ambrosio
  • Caterina Doglioni
  • Chiara Pinto
  • Chilufya Mwewa
  • Claire Antel
  • Claude Andre Pruneau
  • Dan Guest
  • Daniel Britzger
  • Danielle Turvill
  • Danijela Bogavac
  • Despoina Sampsonidou
  • Dhruvi Saraniya
  • Diego Martinez Santos
  • Eckhard Elsen
  • Edward Basso
  • Eleni Skorda
  • Elisaveta Zherebtsova
  • Emanuele Angelo Bagnaschi
  • Emilio Xosé Rodríguez Fernández
  • Eric Wulff
  • Faig Ahmadov
  • Fatma Kocak
  • Federico Ravotti
  • Feyzollah Younesizadeh
  • Floris Keizer
  • Francesca Carnesecchi
  • Franz Muheim
  • Gabriele Ferretti
  • Giacomo Cacciapaglia
  • Giancarlo Panizzo
  • Gordon Watts
  • Graziella Russo
  • Guglielmo Frattari
  • Hsin-Yeh Wu
  • Jan Fiete Grosse-Oetringhaus
  • Janusz Oleniacz
  • Jem Aizen Mendiola Guhit
  • Joey Huston
  • Johann Christoph Voigt
  • Jose David Romo Lopez
  • José Luis Carrasco Huillca
  • Jure Zupan
  • K.C. Kong
  • Kadri Özdemir
  • Kajal Dixit
  • Karolos Potamianos
  • Kingman Cheung
  • Kristin Lohwasser
  • Kyrill Bugaev
  • Laura Reina
  • Laurent Dufour
  • Lina Alasfar
  • Lorenzo Paolucci
  • Love Kildetoft
  • Luigi Sabetta
  • Magnus Mager
  • Malgorzata Maria Worek
  • Marcelo Gameiro Munhoz
  • Marco Rossi
  • Maria Giovanna Foti
  • Maria Jose Costa
  • Maria Savina
  • Marisilvia Donadelli
  • Matteo Barbetti
  • Matthias Schott
  • Max Klein
  • Meng-Ju Tsai
  • Mesut Unal
  • Michelangelo Mangano
  • Mika Anton Vesterinen
  • Milene Calvetti
  • Mohammed Boukidi
  • Muhammad Farooq
  • Mukesh Kumar
  • Nathaniel Craig
  • Neelam Kumari
  • Nestor Armesto Perez
  • Nicola Rubini
  • Nitin Mishra
  • Offshell Organisers
  • Oleg Grachov
  • Oleksandr Vitiuk
  • Pai-Hsien Hsu
  • Patricia Rebello Teles
  • Patrick Haworth Owen
  • Paul Alois Buhler
  • Pavlo Panasiuk
  • Percy Cáceres
  • Peter Christiansen
  • Peter Kostka
  • Pramod Sharma
  • Pratik Kafle
  • Raghunath Sahoo
  • Roberto Covarelli
  • Rosy Nikolaidou
  • Rutuparna Rath
  • Salman K Malik
  • Saul Lopez Solino
  • Serhii Chernyshenko
  • Serpil Yalcin
  • Shufang Su
  • Shuhui Huang
  • Smbat Grigoryan
  • Solangel Rojas
  • Stathes Paganis
  • Stefania Gori
  • Stefano Giagu
  • Stephen Thomas Roche
  • Subhasish Behera
  • Subhojit Roy
  • Sudeshna Banerjee
  • Sukanta Dutta
  • Sumit Basu
  • Tania Natalie Robens
  • Tapan Nayak
  • Tarun Kumar
  • Thomas Flacke
  • Tracey Berry
  • Uliana Dmitrieva
  • Ulrich Heintz
  • Uzma Tabassam
  • Valentina Sola
  • Valery Pugatch
  • Vasyl Dobishuk
  • Veronika Georgieva Chobanova
  • Vitalii Okorokov
  • Xabier Cid Vidal
  • Xiangyu Xie
  • Xing-fu Su
  • Yasmine Sara Amhis
  • Yassine El Ghazali
  • Yogesh Verma
  • Yongbin Feng
  • You-Ying Li
  • Younes Younesizadeh
  • Yuji Yamazaki
  • Zahra Abdy
    • 14:30 17:50
      Unexplored ideas for ALICE, ATLAS, CMS and LHCb: Block A (Europe / US East)
      Conveners: Jan Fiete Grosse-Oetringhaus (CERN), Kristin Lohwasser (University of Sheffield (GB))
      • 14:30
        Introduction (live) 10m
        Speaker: Offshell Organisers
      • 14:40
        Theory Ideas (live) 50m

        title to be confirmed

        Speakers: Laura Reina (Florida State University (US)), Nathaniel Craig (UC Santa Barbara)
      • 15:30
        Photo session 5m
      • 15:35
        Probing same-sign WW in hadronic signatures (live) 30m

        The production of two same-sign W bosons in association with two jets via vector boson scattering (VBS), ssWWjj, is a key probe of the mechanism of electroweak symmetry breaking, with the Higgs boson unitarising the process' cross-section. Longitudinal ssWWjj involves the spin-0 modes of the gauge bosons arising from EWSB, and is very sensitive to BSM physics.

        With the expected Run-2 and Run-3 datasets, over 500 leptonic ssWWjj events are expected (W -> lv, l=e,mu), of which 35 have two longitudinally polarised W bosons, i.e. ssWWjj(LL). More data is expected in Run 4 at the HL-LHC, but even this will not be sufficient to observe the process using only leptonic decays of the W boson.

        The pool of ssWWjj(LL) events can be increased significantly by considering events with at least one hadronically decaying W (2/3 of W bosons decay hadronically). This is a very challenging final state given the backgrounds, which are larger by a few orders of magnitude.

        We present ideas to systematically study ssWWjj VBS and the main backgrounds in semi-leptonic and hadronic VBS signatures, and, using substructure and jet charge techniques and deep learning, to extract the fraction of ssWWjj(LL) events.

        This paper will rely on novel ideas to tackle the problem, and on combining existing tools and techniques in a novel way to probe the ssWWjj VBS signature. No use will be made of the LHC experiments' infrastructure or codebases.

        Speaker: Karolos Potamianos (University of Oxford (GB))
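
        As an illustration of the jet-charge ingredient mentioned above, the sketch below computes a pT-weighted jet charge for a toy set of jet constituents. It is a generic, textbook-style definition with invented inputs (the function name, the choice of kappa and the toy constituents are not taken from the contribution), and is not the authors' analysis code.

        ```python
        import numpy as np

        def jet_charge(charges, pts, kappa=0.5):
            """One common pT-weighted definition: Q_kappa = sum_i q_i pT_i^kappa / (sum_i pT_i)^kappa."""
            charges = np.asarray(charges, dtype=float)
            pts = np.asarray(pts, dtype=float)
            return np.sum(charges * pts ** kappa) / np.sum(pts) ** kappa

        # Toy constituents of a hadronically decaying W+ candidate jet (charge, pT in GeV)
        charges = [+1, -1, +1, 0, +1]
        pts = [45.0, 30.0, 20.0, 15.0, 10.0]
        print(f"Q_0.5 = {jet_charge(charges, pts):+.3f}")
        ```

        On average, jet charge is positively correlated with the charge of the decaying boson, which is what makes it a useful handle for separating the same-sign signal from opposite-sign backgrounds.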
      • 16:05
        Prospects on searches for baryonic Dark Matter produced in $b$-hadron decays at LHCb (live) 30m

        A mechanism accommodating baryonic Dark Matter (DM) has been proposed recently [arXiv:2101.02706, arXiv:1810.00880]. It has the advantage of explaining the matter-antimatter imbalance and DM at once, without requiring the proton to be unstable. The DM particles, carrying baryon number, would be produced in $b$-hadron decays through a t-channel mediator. In this model, $b$-hadron branching fractions as high as a few per mille are required by both theoretical and experimental constraints.

        The DM particles produced in this model, $\Psi_{(DS)}$, carry baryon number and have a mass in the range $0.94 < m_{\Psi_{(DS)}} < 4.34 $ GeV/c$^2$, making them harder to detect in generic missing transverse energy (MET) searches at ATLAS or CMS. However, knowing the positions of the production vertex (PV) and the decay vertex of the $b$-hadron allows a partial reconstruction of the final state and provides a proxy for MET, hereafter referred to as MPT$_{B}$, opening a window for detection at LHCb. This technique has already proven successful in the analysis of $\Lambda_b \rightarrow p \mu \nu_\mu$ decays [arXiv:1504.01568]. The study of DM in $b$ decays at LHCb therefore provides unique sensitivity at the LHC, covering the entire parameter space of the model. Sharp kinematic end points are also distinctive signatures of these decays.

        In this contribution, we will discuss the LHCb sensitivity to these searches, covering different topologies and giving prospects for Runs 3 and 4 of the LHC. We will give examples of $B^0$, $B^+$ and $\Lambda_b$ decays and discuss the advantages and disadvantages compared to other experiments, such as Belle. Notably, the use of $\Lambda_b$ baryons allows the entire mass range for $\Psi_{(DS)}$ to be tested. For the analysis, the signals and backgrounds will be generated with Pythia [arXiv:1410.3012] and experimental effects will be emulated through a fast simulation [arXiv:2012.02692].

        We show here two baseline examples, with topologies that are convenient at LHCb: $B^0 \rightarrow \Lambda^\ast + \Psi_{(DS)},\Lambda^\ast \rightarrow pK$, and $B^+ \rightarrow \Lambda_c^+(2595) \Psi_{(DS)},\Lambda_c^+(2595) \rightarrow \pi^+\pi^- \Lambda_c^+,\Lambda_c^+ \rightarrow p K \pi$. In both cases, $\Psi_{(DS)}$ is the dark particle, charged under baryon number. These decay chains are fully reconstructible through charged tracks at LHCb, with the obvious exception of the dark particle, and give access to the $b$-hadron decay points, allowing the determination of MPT$_{B}$. A simple select-and-count study of the signals and main backgrounds indicates that, in both decays, branching fractions in the $10^{-3}-10^{-5}$ range should be within reach at LHCb. This result will be refined once experimental effects are accounted for. Figure 1 shows the distribution of MPT$_{B}$ for the signal and background in both topologies.


        Figure caption: Expected MPT$_{B}$ distributions at LHCb for the $B^0 \rightarrow \Lambda^\ast + \Psi_{(DS)}$ (left) and $B^+ \rightarrow \Lambda_c^+(2595) \Psi_{(DS)}$ (right) decays and corresponding backgrounds. The yields are normalized to 1 fb$^{-1}$ of data and assume $pp$ collisions with $\sqrt{s}=14$ TeV and signal branching fractions of $10^{-3}$ and $m(\Psi_{(DS)})=2$ GeV/c$^2$. The distributions are at Pythia level and do not include experimental effects yet, but these will be included by the time of the conference.

        Speakers: Diego Martinez Santos (Universidade de Santiago de Compostela (ES)), Carlos Vazquez Sierra (CERN)
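
        The MPT$_{B}$ proxy described above can be pictured with a few lines of vector algebra: since the true $b$-hadron momentum points along the line from the production vertex to the decay vertex, any reconstructed momentum transverse to that axis must be balanced by the unreconstructed particle. The sketch below is one plausible construction of such a proxy with toy numbers; the exact definition used by the authors may differ.

        ```python
        import numpy as np

        def mpt_b(pv, dv, p_visible):
            """Missing-momentum proxy transverse to the b-hadron flight direction.

            pv, dv    : 3-vectors of the production and decay vertices
            p_visible : summed 3-momentum of the reconstructed (visible) decay products
            """
            flight = np.asarray(dv, dtype=float) - np.asarray(pv, dtype=float)
            flight /= np.linalg.norm(flight)             # unit vector along the flight direction
            p_vis = np.asarray(p_visible, dtype=float)
            p_parallel = np.dot(p_vis, flight) * flight  # component along the b-hadron momentum
            return np.linalg.norm(p_vis - p_parallel)    # what must be balanced by the invisible particle

        # Toy numbers (vertices in mm, momenta in GeV): visible pK system from B0 -> Lambda* Psi_DS
        print(f"MPT_B proxy = {mpt_b([0, 0, 0], [0.5, 0.2, 10.0], [5.0, 2.5, 90.0]):.2f} GeV")
        ```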
      • 16:35
        Upgrade and physics plans for ATLAS in LHC Run4 (live) 45m

        The ATLAS experiment has been in successful operation at the Large Hadron Collider (LHC) since 2009, collecting data from proton-proton collisions at up to $\sqrt{s}$ = 13 TeV. It is now gearing up to collect the majority of its data at the high-luminosity LHC. For this, upgrades are underway to face the challenge of a significant increase in the number of interactions per bunch crossing (pile-up), large detector occupancies and unprecedented collision rates. The upgrades will not only allow us to probe the Standard Model with greater precision and comb through larger statistics for new phenomena; they are also an opportunity to enhance the physics potential of the experiment beyond simply increasing the available data sets.
        The upgrade will include a replacement of the inner tracker and a new timing detector, as well as improvements in the trigger acquisition and strategy. This talk will summarise the status and expected performance of these projects, followed by how these upgrades are envisaged to boost the physics potential of the experiment at the HL-LHC.

        Speaker: Claire Antel (Universite de Geneve (CH))
      • 17:20
        Exploring new possibilities to discover a light pseudo-scalar at LHCb (live) 30m
        Speakers: Gabriele Ferretti (Chalmers University, Gothenburg, Sweden), Tom Flacke (IBS CTPU Daejeon, Korea), Xabier Cid Vidal (Instituto Galego de Física de Altas Enerxías)
    • 17:50 19:00
      Poster Session
    • 00:30 03:50
      Unexplored ideas for ALICE, ATLAS, CMS and LHCb: Block B (US West / Asia)
      Conveners: Alberto Belloni (University of Maryland (US)), Stathes Paganis (National Taiwan University)
      • 00:30
        Introduction (live) 10m
      • 00:40
        Theory ideas (live) 50m
        Speakers: Nathaniel Craig (UC Santa Barbara), Laura Reina (Florida State University (US))
      • 01:30
        Photo session 5m
      • 01:35
        Probing same-sign WW in hadronic signatures (recorded) 30m

        The production of two same-sign W bosons in association with two jets via vector boson scattering (VBS), ssWWjj, is a key probe of the mechanism of electroweak symmetry breaking, with the Higgs boson unitarising the process' cross-section. Longitudinal ssWWjj involves the spin-0 modes of the gauge bosons arising from EWSB, and is very sensitive to BSM physics.

        With the expected Run-2 and Run-3 datasets, over 500 leptonic ssWWjj events are expected (W -> lv, l=e,mu), of which 35 have two longitudinally polarised W bosons, i.e. ssWWjj(LL). More data is expected in Run 4 at the HL-LHC, but even this will not be sufficient to observe the process using only leptonic decays of the W boson.

        The pool of ssWWjj(LL) events can be increased significantly by considering events with at least one hadronically decaying W (2/3 of W bosons decay hadronically). This is a very challenging final state given the backgrounds, which are larger by a few orders of magnitude.

        We present ideas to systematically study ssWWjj VBS and the main backgrounds in semi-leptonic and hadronic VBS signatures, and, using substructure and jet charge techniques and deep learning, to extract the fraction of ssWWjj(LL) events.

        This paper will rely on novel ideas to tackle the problem, and on combining existing tools and techniques in a novel way to probe the ssWWjj VBS signature. No use will be made of the LHC experiments' infrastructure or codebases.

        Speaker: Karolos Potamianos (University of Oxford (GB))
      • 02:05
        Prospects on searches for baryonic Dark Matter produced in $b$-hadron decays at LHCb (live) 30m

        A mechanism accommodating baryonic Dark Matter (DM) has been proposed recently [arXiv:2101.02706, arXiv:1810.00880]. It has the advantage of explaining the matter-antimatter imbalance and DM at once, without requiring the proton to be unstable. The DM particles, carrying baryon number, would be produced in $b$-hadron decays through a t-channel mediator. In this model, $b$-hadron branching fractions as high as a few per mille are required by both theoretical and experimental constraints.

        The DM particles produced in this model, $\Psi_{(DS)}$, carry baryon number and have a mass in the range $0.94 < m_{\Psi_{(DS)}} < 4.34 $ GeV/c$^2$, making them harder to detect in generic missing transverse energy (MET) searches at ATLAS or CMS. However, knowing the positions of the production vertex (PV) and the decay vertex of the $b$-hadron allows a partial reconstruction of the final state and provides a proxy for MET, hereafter referred to as MPT$_{B}$, opening a window for detection at LHCb. This technique has already proven successful in the analysis of $\Lambda_b \rightarrow p \mu \nu_\mu$ decays [arXiv:1504.01568]. The study of DM in $b$ decays at LHCb therefore provides unique sensitivity at the LHC, covering the entire parameter space of the model. Sharp kinematic end points are also distinctive signatures of these decays.

        In this contribution, we will discuss the LHCb sensitivity to these searches, covering different topologies and giving prospects for Runs 3 and 4 of the LHC. We will give examples of $B^0$, $B^+$ and $\Lambda_b$ decays and discuss the advantages and disadvantages compared to other experiments, such as Belle. Notably, the use of $\Lambda_b$ baryons allows the entire mass range for $\Psi_{(DS)}$ to be tested. For the analysis, the signals and backgrounds will be generated with Pythia [arXiv:1410.3012] and experimental effects will be emulated through a fast simulation [arXiv:2012.02692].

        We show here two baseline examples, with topologies that are convenient at LHCb: $B^0 \rightarrow \Lambda^\ast + \Psi_{(DS)},\Lambda^\ast \rightarrow pK$, and $B^+ \rightarrow \Lambda_c^+(2595) \Psi_{(DS)},\Lambda_c^+(2595) \rightarrow \pi^+\pi^- \Lambda_c^+,\Lambda_c^+ \rightarrow p K \pi$. In both cases, $\Psi_{(DS)}$ is the dark particle, charged under baryon number. These decay chains are fully reconstructible through charged tracks at LHCb, with the obvious exception of the dark particle, and give access to the $b$-hadron decay points, allowing the determination of MPT$_{B}$. A simple select-and-count study of the signals and main backgrounds indicates that, in both decays, branching fractions in the $10^{-3}-10^{-5}$ range should be within reach at LHCb. This result will be refined once experimental effects are accounted for. Figure 1 shows the distribution of MPT$_{B}$ for the signal and background in both topologies.


        Figure caption: Expected MPT$_{B}$ distributions at LHCb for the $B^0 \rightarrow \Lambda^\ast + \Psi_{(DS)}$ (left) and $B^+ \rightarrow \Lambda_c^+(2595) \Psi_{(DS)}$ (right) decays and corresponding backgrounds. The yields are normalized to 1 fb$^{-1}$ of data and assume $pp$ collisions with $\sqrt{s}=14$ TeV and signal branching fractions of $10^{-3}$ and $m(\Psi_{(DS)})=2$ GeV/c$^2$. The distributions are at Pythia level and do not include experimental effects yet, but these will be included by the time of the conference.

        Speakers: Diego Martinez Santos (Universidade de Santiago de Compostela (ES)), Carlos Vazquez Sierra (CERN)
      • 02:35
        Upgrade and physics plans for ATLAS in LHC Run4 (recorded) 45m

        The ATLAS experiment has been in successful operation at the Large Hadron Collider (LHC) since 2009, collecting data from proton-proton collisions at up to $\sqrt{s}$ = 13 TeV. It is now gearing up to collect the majority of its data at the high-luminosity LHC. For this, upgrades are underway to face the challenge of a significant increase in the number of interactions per bunch crossing (pile-up), large detector occupancies and unprecedented collision rates. The upgrades will not only allow us to probe the Standard Model with greater precision and comb through larger statistics for new phenomena; they are also an opportunity to enhance the physics potential of the experiment beyond simply increasing the available data sets.
        The upgrade will include a replacement of the inner tracker and a new timing detector, as well as improvements in the trigger acquisition and strategy. This talk will summarise the status and expected performance of these projects, followed by how these upgrades are envisaged to boost the physics potential of the experiment at the HL-LHC.

        Speaker: Claire Antel (Universite de Geneve (CH))
      • 03:20
        Exploring new possibilities to discover a light pseudo-scalar at LHCb (live) 30m
        Speakers: Tom Flacke (IBS CTPU Daejeon, Korea), Xabier Cid Vidal (Instituto Galego de Física de Altas Enerxías)
    • 03:50 05:00
      Poster Session
    • 14:30 19:00
      Physics at LHC Experiments and Beyond: Block A (Europe / US East)
      Conveners: Chilufya Mwewa (Brookhaven National Laboratory (US)), Jan Fiete Grosse-Oetringhaus (CERN), Kristin Lohwasser (University of Sheffield (GB)), Mika Anton Vesterinen (University of Warwick (GB))
      • 14:30
        LHCb as a 4D precision detector in Upgrade II (live) 45m
        Speaker: Floris Keizer (CERN)
      • 15:15
        An Experiment for Electron-Hadron Scattering at the LHC (live) 30m
        Speaker: Daniel Britzger (Max-Planck-Institut für Physik München)
      • 15:45
        High $P_T$ Higgs excess as a signal of non-local QFT at the LHC Phase-2 (live) 30m
        Speaker: Prof. Stathes Paganis (National Taiwan University)
      • 16:15
        Upgrade of the ALICE experiment for LHC Run 4 and beyond (live) 45m

        The ALICE experiment is preparing the ITS3, an upgrade of its Inner Tracking System for LHC Run 4. The three innermost layers will be replaced by wafer-scale, truly cylindrical, ultra-thin detector layers made of Monolithic Active Pixel Sensors. This innovative technology will allow the material budget to be lowered even further and the tracking and vertexing capabilities to be improved. We will present the R&D programme, including the already achieved demonstration of the operability of bent MAPS, and future plans.
        Moreover, we will present the plans for ALICE 3, a next-generation heavy-ion experiment for LHC Run 5. The idea of this major upgrade is to exploit silicon technologies to build an ultra-thin and fast tracker with unprecedented vertexing capabilities, combined with excellent particle identification from a silicon-based time-of-flight detector and complementary approaches.

        Speaker: Francesca Carnesecchi (Gangneung-Wonju National University (KR))
      • 17:00
        Isospin extrapolation as a method to measure inclusive b->sll decays (live) 30m

        Over the last few years, several discrepancies with respect to SM predictions have arisen in studies of exclusive $b\to s \ell^{+}\ell^{-}$ decays, in which a b-hadron decays into a specific final state. The interpretation of a significant fraction of these discrepancies is clouded by hadronic uncertainties. Inclusive $b\to s \ell^{+}\ell^{-}$ decays are regarded as being under better theoretical control, but are less precise experimentally.

        Previous measurements of inclusive decays rely on a sum-of-exclusives approach, whereby several specific final states are combined and the result is extrapolated to the full inclusive rate using a hadronisation model. This approach suffers from the systematic uncertainty associated with the extrapolation, which is difficult to control. It can be avoided with measurements of fully inclusive decays, in which only the two leptons are reconstructed; however, this approach suffers from large backgrounds.

        Here we discuss a new approach, involving isospin extrapolation, to reconstruct inclusive decays. The idea is to reconstruct a charged kaon and require that it forms a common vertex with the two leptons, resulting in a semi-inclusive $b\to K^{+}\ell^{+}\ell^{-} X$ signature. The presence of the additional kaon allows for greater rejection of background and a well-defined sideband region which can be used to control it.

        Measurements of the $b\to K^{+}\ell^{+}\ell^{-} X$ signature require an extrapolation to the fully inclusive rate to account for final states with a $K^{0}$ meson. This can be done using isospin rules for the inclusive decay, a symmetry well established in exclusive $B^{+}$ and $B^0$ $b\to s \ell^{+}\ell^{-}$ decays. The extrapolation is expected to provide a complementary route to the fully inclusive decay rate compared to traditional methods.

        The inclusive $b\to s \ell^{+}\ell^{-}$ decay rate is approximately one order of magnitude larger than that of exclusive decays and exists for all b-hadron species. This results in a very large signal yield compared to exclusive measurements, which offers the potential for very precise measurements of theoretically clean observables, in addition to those affected by hadronic uncertainties. Lepton flavour violation searches and lepton universality tests can be performed on the inclusive decay, where background can be reduced without fear of spoiling the theoretical interpretation. The reconstruction of the kaon also allows for a perfect tagging of the flavour of the b-hadron at decay. This allows forward-backward asymmetry and CP asymmetry measurements to be performed with similar statistical advantages as the lepton symmetry tests.

        A paper on this would cover the following aspects: first, defining the problem and the method; then detailing the main ways in which background is better controlled with the additional charged kaon. For the theoretically clean observables, further background reduction would be discussed and the expected signal yields for the various observables estimated. Finally, the isospin extrapolation itself would be discussed, including specific issues related to the diverse admixture of b-hadron species in LHC production.

        Speaker: Patrick Haworth Owen (Universitaet Zuerich (CH))
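
        Schematically, and purely as an illustration in our own notation (not the authors'), the extrapolation amounts to splitting the inclusive rate into a $K^{+}$-tagged part and a remainder dominated by $K^{0}$ final states,
        $\Gamma_{\rm incl}(b\to s\ell^{+}\ell^{-}) = \Gamma(b\to K^{+}\ell^{+}\ell^{-}X) + \Gamma_{{\rm no}\,K^{+}} \simeq \Gamma(b\to K^{+}\ell^{+}\ell^{-}X)\left(1 + r_{0/+}\right), \qquad r_{0/+} \equiv \frac{\Gamma_{{\rm no}\,K^{+}}}{\Gamma(b\to K^{+}\ell^{+}\ell^{-}X)}$,
        where, in the isospin limit, $r_{0/+}$ is estimated from isospin relations (as tested in exclusive $B^{+}$ and $B^{0}$ decays) rather than from a hadronisation model.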
      • 17:30
        Prospects of the H->bb coupling measurement at the LHeC with a full detector simulation (live) 30m

        A future Large Hadron electron Collider (LHeC) would allow an intense electron beam to be collided with a proton or ion beam from the High Luminosity–Large Hadron Collider (HL-LHC). Preliminary studies suggest not only that a rich physics program in the context of deep-inelastic scattering is possible, but also that several high-precision Higgs coupling measurements could be performed. In particular, studies suggest uncertainties of 0.8% and 7.4% on the Higgs boson coupling strengths to b- and c-quarks, respectively. However, these studies are based on fast detector simulations and hence might be subject to several optimistic assumptions. In this study, we present prospects for the H->bb coupling measurement at the LHeC using the public software infrastructure of the ATLAS experiment at the LHC for a full detector simulation. This approach not only correctly accounts for low-level interactions between the decay particles and the detector material, but also uses state-of-the-art reconstruction algorithms. Since the acceptance of the ATLAS detector is somewhat smaller than that of a potential future LHeC detector, dedicated extrapolation techniques for the reconstruction and fake-rate efficiencies have been applied and their uncertainties estimated.

        Speaker: Matthias Schott (CERN / University of Mainz)
      • 18:00
        Thermodynamics of the 'Little Bang' (live) 30m

        Observation of the cosmic microwave background radiation (CMBR) by various satellites confirms the Big Bang evolution and inflation, and provides important information on the early Universe and its evolution with excellent accuracy [1]. The physics of heavy-ion collisions at ultra-relativistic energies, popularly known as little bangs, has often been compared to the Big Bang phenomenon of the early Universe [2-3]. In little bangs, the produced fireball goes through a rapid evolution from a partonic QGP phase to a hadronic phase, and finally freezes out. Heavy-ion experiments are predominantly sensitive to the conditions that prevail at the later stages of the collision, as the majority of particles are emitted near freeze-out. Thus, a direct and quantitative estimation of the properties of the hot and dense matter in the early stages, and during each stage of the evolution, has not yet been possible.

        In heavy-ion collisions, the mean transverse momentum $\langle p_{\rm T}\rangle$ is a proxy for the temperature and the mean number of charged particles $\langle N_{ch}\rangle$ is a manifestation of the energy or entropy density. The major motivation of our study is to generate a map of $\epsilon$ and T, similar to the CMBR map, using fluctuation and correlation techniques.

        Two-particle ${p}_{\rm T}$ correlations have long been studied, and are defined in terms of a dimensionless correlation function given by the ratio of the differential correlator to the square of the average transverse momentum,
        $P_{2} (\Delta\eta,\Delta\varphi) = \frac{\langle \Delta p_{\rm T} \Delta p_{\rm T} \rangle \left( \Delta \eta, \Delta \varphi \right)}{\langle p_{\rm T}\rangle^2} = \frac{1}{\langle p_{\rm T}\rangle^2} \frac{\int\limits_{p_{\rm T,min}}^{p_{\rm T,max}} \rho_2({\bf p}_1,{\bf p}_2)\, \Delta p_{\rm T,1}\, \Delta p_{\rm T,2}\; {\rm d}p_{\rm T,1}\, {\rm d}p_{\rm T,2}}{\int\limits_{p_{\rm T,min}}^{p_{\rm T,max}} \rho_2({\bf p}_1,{\bf p}_2)\; {\rm d}p_{\rm T,1}\, {\rm d}p_{\rm T,2}}$,
        where $\Delta p_{\rm T,i} = p_{\rm T,i} - \langle p_{\rm T}\rangle$ and $\langle p_{\rm T}\rangle$ is the inclusive average transverse momentum of produced particles in the event ensemble. Technically, in this type of analysis, the integrals in the numerator and denominator of the above expression are first evaluated in the four-dimensional space $(\eta_1,\varphi_1,\eta_2,\varphi_2)$. The ratio is calculated and subsequently averaged over all coordinates. However, when ${p}_{\rm T}$ is used as a proxy for the system temperature, it is affected by radial flow and falls short of capturing the true 'temperature' fluctuation.
        To overcome this, we propose 4-particle ${p}_{\rm T}$ and number fluctuation observables in which non-flow effects are suppressed. These can be linked to thermodynamic response functions such as the specific heat, heat capacity and compressibility. With the advent of the high-luminosity, high-statistics Run 3 and Run 4 of the CERN Large Hadron Collider (LHC), it will be possible to measure 4-particle correlations. The ALICE experiment is expected to take data in LHC Run 5 with a completely new detector covering 8 units of rapidity. For these runs, the fluctuation map can be even more effective in extracting the physics relevant to the thermodynamics of hot and dense matter.

        References:
        1. E. Komatsu and C. L. Bennett (WMAP science team), arXiv:1404.5415 [astro-ph.CO].
        2. W. Florkowski, Phenomenology of Ultra-relativistic Heavy-ion Collisions, World Scientific, 2010.
        3. U. Heinz, J. Phys.: Conf. Ser. 455, 012044 (2013).

        Speakers: Dr Sumit Basu (Lund University (SE)), Dr Tapan Nayak (CERN, Geneva and NISER, Bhubaneswar)
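
        To make the correlator above concrete, the toy below evaluates the pT-integrated two-particle correlator $\langle \Delta p_{\rm T} \Delta p_{\rm T}\rangle$ (and the corresponding integrated $P_2$) over an ensemble of simulated events with a small event-by-event $\langle p_{\rm T}\rangle$ fluctuation. It is a minimal numerical sketch with invented toy spectra, not the proposed 4-particle analysis.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)

        def dpt_dpt(events, mean_pt):
            """pT-integrated <Delta pT Delta pT>: average over all distinct pairs (i != j)
            of (pT_i - <pT>)(pT_j - <pT>), accumulated over the event ensemble."""
            num, den = 0.0, 0.0
            for pts in events:
                d = pts - mean_pt
                num += d.sum() ** 2 - (d ** 2).sum()     # sum over i != j of d_i * d_j
                den += len(pts) * (len(pts) - 1)
            return num / den

        # Toy ensemble: exponential pT spectra whose mean fluctuates slightly event by event
        events = [rng.exponential(0.65 + rng.normal(0.0, 0.01),
                                  size=rng.integers(300, 500)) for _ in range(2000)]
        mean_pt = np.mean(np.concatenate(events))
        corr = dpt_dpt(events, mean_pt)
        print(f"<DpT DpT> = {corr:.2e} GeV^2,  integrated P2 = {corr / mean_pt**2:.2e}")
        ```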
      • 18:30
        Fixed Superthin Solid Target and Colliding beams in a single experiment – a novel approach to study the matter under new extreme conditions (live) 30m

        A novel approach to the experimental and theoretical study of the properties of QCD matter under new extreme conditions is discussed. Our simulations demonstrate the possibility of reaching initial temperatures above 300 MeV and baryonic charge densities exceeding ordinary nuclear density by a factor of three. Such conditions could be realized in high-luminosity (HL) experiments at the LHC by scattering the two colliding beams off the nuclei of a solid target fixed at their intersection point.
        A distinctive new feature of the proposed experiments is the implementation of a superthin solid target operated in the core of the colliding beams. This approach is grounded in the successful data taking of the LHCb experiment, which runs the colliding-beam and fixed gaseous-target modes simultaneously. The thickness of the solid target is assumed to be of the order of that of the upgraded gaseous target SMOG2 with a storage cell [1].
        Assuming an instantaneous luminosity of the colliding proton beams at the HL-LHC of $10^{36}$ cm$^{-2}$s$^{-1}$ and a fixed micropowder jet target of Ni [2] with a thickness of $10^{16}$ atoms$\cdot$cm$^{-2}$, one can expect a triple nuclear collision rate of 100 s$^{-1}$. Similar conditions might be realized by inserting a superthin graphene target (10-100 atomic layers) into a beam core. We shall present an entirely new mechanical construction for handling such a target remotely. The third option for the superthin solid target to be discussed is based on 1 μm thick microstrip sensors [3] operated in the beams' halo. Among the advantageous features of superthin and super-light solid micro targets are the orders-of-magnitude better spatial localization of primary vertices, a large variety of target nuclei, and the possibility of running an experiment with several targets simultaneously.
        To simulate the triple nuclear collisions, we employed the UrQMD 3.4 model [4, 5] at a beam center-of-mass collision energy of √s = 2.76 TeV. As a result of our modeling, we found that in the most central and simultaneous triple nuclear collisions the initial baryonic charge density is about 3 times higher than that achieved in ordinary binary nuclear collisions at this energy. In contrast to binary nuclear collisions, the high baryonic charge density leads to a strong enhancement of proton and Λ-hyperon production at midrapidity and to a sizable suppression of their antiparticles. A principally new kind of scattering reaction, such as a nucleus-fireball interaction, can be studied in such experiments. We argue that at lower collision energies the triple nuclear collision method can be of principal importance for locating the (tri)critical endpoint of the QCD phase diagram.
        References:
        [1] LHCb Collaboration, SMOG2 TDR, CERN-LHCC-2019-005.
        [2] V. Pugatch et al., The 13th International Conference on Cyclotrons and their Applications: Proceedings, Vancouver, BC, Canada. p. 297.
        [3] V. Pugatch, International Conference "CERN-Ukraine co-operation: current state and prospects", Kharkiv, 15 May 2018; LHCb-TALK-2018-557.
        [4] S.A. Bass et al., Prog. Part. Nucl. Phys. 41 (1998), 225-370.
        [5] M. Bleicher et al., J. Phys. G 25 (1999), 1859-1896.

        Speaker: Prof. Kyrill Bugaev (Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine)
    • 00:30 05:30
      Physics at LHC Experiments and Beyond: Block B (US West / Asia)
      Conveners: Kingman Cheung (National Tsing Hua University (TW)), Stefania Gori (UC Santa Cruz)
      • 00:30
        LHCb as a 4D precision detector in Upgrade II (recorded) 45m
        Speaker: Floris Keizer (CERN)
      • 01:15
        An Experiment for Electron-Hadron Scattering at the LHC (live) 30m
        Speakers: Bernhard Holzer (CERN), Max Klein (University of Liverpool (GB)), Nestor Armesto Perez (Universidade de Santiago de Compostela (ES)), Peter Kostka (University of Liverpool (GB)), Yuji Yamazaki (Kobe University (JP))
      • 01:45
        Model Compression and Simplification Pipelines for fast Deep Neural Network inference in FPGAs in HEP (live) 30m
        Speakers: Luigi Sabetta (Sapienza Universita e INFN, Roma I (IT)), Graziella Russo (Sapienza Universita e INFN, Roma I (IT))
      • 02:15
        Upgrade of the ALICE experiment for LHC Run 4 and beyond (recorded) 45m

        The ALICE experiment is preparing the ITS3, an upgrade of its Inner Tracking System for LHC Run 4. The three innermost layers will be replaced by wafer-scale, truly cylindrical, ultra-thin detector layers made of Monolithic Active Pixel Sensors. This innovative technology will allow the material budget to be lowered even further and the tracking and vertexing capabilities to be improved. We will present the R&D programme, including the already achieved demonstration of the operability of bent MAPS, and future plans.
        Moreover, we will present the plans for ALICE 3, a next-generation heavy-ion experiment for LHC Run 5. The idea of this major upgrade is to exploit silicon technologies to build an ultra-thin and fast tracker with unprecedented vertexing capabilities, combined with excellent particle identification from a silicon-based time-of-flight detector and complementary approaches.

        Speaker: Francesca Carnesecchi (Gangneung-Wonju National University (KR))
      • 03:00
        High $P_T$ Higgs excess as a signal of non-local QFT at the LHC Phase-2 (live) 30m
        Speaker: Stathes Paganis (National Taiwan University)
      • 03:30
        Isospin extrapolation as a method to measure inclusive b->sll decays (recorded) 30m

        Over the last few years, several discrepancies with respect to SM predictions have arisen in studies of exclusive $b\to s \ell^{+}\ell^{-}$ decays, in which a b-hadron decays into a specific final state. The interpretation of a significant fraction of these discrepancies is clouded by hadronic uncertainties. Inclusive $b\to s \ell^{+}\ell^{-}$ decays are regarded as being under better theoretical control, but are less precise experimentally.

        Previous measurements of inclusive decays rely on a sum-of-exclusives approach, whereby several specific final states are combined and the result is extrapolated to the full inclusive rate using a hadronisation model. This approach suffers from the systematic uncertainty associated with the extrapolation, which is difficult to control. It can be avoided with measurements of fully inclusive decays, in which only the two leptons are reconstructed; however, this approach suffers from large backgrounds.

        Here we discuss a new approach, involving isospin extrapolation, to reconstruct inclusive decays. The idea is to reconstruct a charged kaon and require that it forms a common vertex with the two leptons, resulting in a semi-inclusive $b\to K^{+}\ell^{+}\ell^{-} X$ signature. The presence of the additional kaon allows for greater rejection of background and a well-defined sideband region which can be used to control it.

        Measurements of the $b\to K^{+}\ell^{+}\ell^{-} X$ signature require an extrapolation to the fully inclusive rate to account for final states with a $K^{0}$ meson. This can be done using isospin rules for the inclusive decay, a symmetry well established in exclusive $B^{+}$ and $B^0$ $b\to s \ell^{+}\ell^{-}$ decays. The extrapolation is expected to provide a complementary route to the fully inclusive decay rate compared to traditional methods.

        The inclusive $b\to s \ell^{+}\ell^{-}$ decay rate is approximately one order of magnitude larger than that of exclusive decays and exists for all b-hadron species. This results in a very large signal yield compared to exclusive measurements, which offers the potential for very precise measurements of theoretically clean observables, in addition to those affected by hadronic uncertainties. Lepton flavour violation searches and lepton universality tests can be performed on the inclusive decay, where background can be reduced without fear of spoiling the theoretical interpretation. The reconstruction of the kaon also allows for a perfect tagging of the flavour of the b-hadron at decay. This allows forward-backward asymmetry and CP asymmetry measurements to be performed with similar statistical advantages as the lepton symmetry tests.

        A paper on this would cover the following aspects: first, defining the problem and the method; then detailing the main ways in which background is better controlled with the additional charged kaon. For the theoretically clean observables, further background reduction would be discussed and the expected signal yields for the various observables estimated. Finally, the isospin extrapolation itself would be discussed, including specific issues related to the diverse admixture of b-hadron species in LHC production.

        Speaker: Patrick Haworth Owen (Universitaet Zuerich (CH))
      • 04:00
        Prospects of the H->bb coupling measurement at the LHeC with a full detector simulation (live) 30m

        A future Large Hadron electron Collider (LHeC) would allow an intense electron beam to be collided with a proton or ion beam from the High Luminosity–Large Hadron Collider (HL-LHC). Preliminary studies suggest not only that a rich physics program in the context of deep-inelastic scattering is possible, but also that several high-precision Higgs coupling measurements could be performed. In particular, studies suggest uncertainties of 0.8% and 7.4% on the Higgs boson coupling strengths to b- and c-quarks, respectively. However, these studies are based on fast detector simulations and hence might be subject to several optimistic assumptions. In this study, we present prospects for the H->bb coupling measurement at the LHeC using the public software infrastructure of the ATLAS experiment at the LHC for a full detector simulation. This approach not only correctly accounts for low-level interactions between the decay particles and the detector material, but also uses state-of-the-art reconstruction algorithms. Since the acceptance of the ATLAS detector is somewhat smaller than that of a potential future LHeC detector, dedicated extrapolation techniques for the reconstruction and fake-rate efficiencies have been applied and their uncertainties estimated.

        Speaker: Subhasish Behera (IIT Guwahati)
      • 04:30
        Fixed Superthin Solid Target and Colliding beams in a single experiment – a novel approach to study the matter under new extreme conditions (recorded) 30m

        A novel approach to the experimental and theoretical study of the properties of QCD matter under new extreme conditions is discussed. Our simulations demonstrate the possibility of reaching initial temperatures above 300 MeV and baryonic charge densities exceeding ordinary nuclear density by a factor of three. Such conditions could be realized in high-luminosity (HL) experiments at the LHC by scattering the two colliding beams off the nuclei of a solid target fixed at their intersection point.
        A distinctive new feature of the proposed experiments is the implementation of a superthin solid target operated in the core of the colliding beams. This approach is grounded in the successful data taking of the LHCb experiment, which runs the colliding-beam and fixed gaseous-target modes simultaneously. The thickness of the solid target is assumed to be of the order of that of the upgraded gaseous target SMOG2 with a storage cell [1].
        Assuming an instantaneous luminosity of the colliding proton beams at the HL-LHC of $10^{36}$ cm$^{-2}$s$^{-1}$ and a fixed micropowder jet target of Ni [2] with a thickness of $10^{16}$ atoms$\cdot$cm$^{-2}$, one can expect a triple nuclear collision rate of 100 s$^{-1}$. Similar conditions might be realized by inserting a superthin graphene target (10-100 atomic layers) into a beam core. We shall present an entirely new mechanical construction for handling such a target remotely. The third option for the superthin solid target to be discussed is based on 1 μm thick microstrip sensors [3] operated in the beams' halo. Among the advantageous features of superthin and super-light solid micro targets are the orders-of-magnitude better spatial localization of primary vertices, a large variety of target nuclei, and the possibility of running an experiment with several targets simultaneously.
        To simulate the triple nuclear collisions, we employed the UrQMD 3.4 model [4, 5] at a beam center-of-mass collision energy of √s = 2.76 TeV. As a result of our modeling, we found that in the most central and simultaneous triple nuclear collisions the initial baryonic charge density is about 3 times higher than that achieved in ordinary binary nuclear collisions at this energy. In contrast to binary nuclear collisions, the high baryonic charge density leads to a strong enhancement of proton and Λ-hyperon production at midrapidity and to a sizable suppression of their antiparticles. A principally new kind of scattering reaction, such as a nucleus-fireball interaction, can be studied in such experiments. We argue that at lower collision energies the triple nuclear collision method can be of principal importance for locating the (tri)critical endpoint of the QCD phase diagram.
        References:
        [1] LHCb Collaboration, SMOG2 TDR, CERN-LHCC-2019-005.
        [2] V. Pugatch et al., The 13th International Conference on Cyclotrons and their Applications: Proceedings, Vancouver, BC, Canada. p. 297.
        [3] V. Pugatch, International Conference "CERN-Ukraine co-operation: current state and prospects", Kharkiv, 15 May 2018; LHCb-TALK-2018-557.
        [4] S.A. Bass et al., Prog. Part. Nucl. Phys. 41 (1998), 225-370.
        [5] M. Bleicher et al., J. Phys. G 25 (1999), 1859-1896.

        Speaker: Prof. Kyrill Bugaev (Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine)
    • 14:30 18:38
      New Detector and Reconstruction Methodologies, Machine Learning and Computing at HL-LHC: Block A (Europe / US East)
      Conveners: Alberto Belloni (University of Maryland (US)), Chilufya Mwewa (Brookhaven National Laboratory (US)), Matthias Schott (CERN / University of Mainz)
      • 14:30
        Experiment Keynote talk (live) 50m
        Speaker: Eckhard Elsen (Deutsches Elektronen-Synchrotron (DE))
      • 15:20
        Machine learning augmented probes of light Yukawa couplings from Higgs pair production (live) 30m

        The measurement of the trilinear Higgs self-coupling is among the primary goals of the high-luminosity LHC and is accessible in Higgs pair production. However, this process is also sensitive to other couplings. While most of them can be well constrained by single-Higgs measurements, this does not hold true for the couplings between the Higgs and light quarks. We show that the Higgs pair production process provides a sensitive probe of these couplings. Making use of interpretable machine learning with boosted decision trees and Shapley values, a measure derived from cooperative game theory, we separate the signal $q\bar{q}\to HH \to b\bar{b}\gamma \gamma$ from Standard Model $HH$ production and other backgrounds such as $HZ$, $t\bar{t}H$ and $b\bar{b}\gamma\gamma$. We show that, with the help of the machine learning framework, the estimated bounds on the light-quark couplings to the Higgs can be significantly improved compared to other probes, $\textit{cf.}$ fig. 1. The introduction of Shapley values to assess variable importance makes it possible to train the machine learning model on a reduced set of kinematic variables. In what would otherwise be a black-box approach using an over-complete set of kinematic variables, Shapley values tap into kinematic correlations and hence allow the selection of a minimal set of important kinematic variables that provides optimal accuracy in signal vs. background classification.

        Furthermore, we explore the potential of disentangling the various new-physics operators leading to modifications of the light-quark Yukawa couplings and a modified trilinear Higgs self-coupling, with the added interpretability of the machine-learning-based multivariate analysis. Additional kinematics can be incorporated to help break the degeneracy between the up- and down-quark operators.

        Speaker: Lina Alasfar (Humboldt-Universität zu Berlin)
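
        The interplay of a boosted decision tree with Shapley values can be sketched with standard libraries, as below: a gradient-boosted classifier is trained on toy "signal" and "background" kinematics and the mean absolute SHAP value is used to rank the variables, so that low-ranked ones could be dropped from the training set. The feature names and toy distributions are invented for the illustration; this is not the analysis of the contribution.

        ```python
        import numpy as np
        import xgboost as xgb
        import shap

        rng = np.random.default_rng(0)
        n = 20000
        # Toy "signal" vs "background" kinematics: m_bb, m_yy, pT(HH), Delta R(b, b)
        bkg = np.column_stack([rng.normal(100, 30, n), rng.normal(125, 15, n),
                               rng.exponential(60, n), rng.uniform(0.4, 4.0, n)])
        sig = np.column_stack([rng.normal(125, 15, n), rng.normal(125, 3, n),
                               rng.exponential(90, n), rng.normal(1.5, 0.5, n)])
        X = np.vstack([sig, bkg])
        y = np.concatenate([np.ones(n), np.zeros(n)])

        model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
        model.fit(X, y)

        # Global feature importance from mean |SHAP value|; low-ranked variables could be
        # dropped to obtain a reduced, interpretable training set.
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X[:2000])
        for name, imp in zip(["m_bb", "m_yy", "pT_HH", "dR_bb"],
                             np.abs(shap_values).mean(axis=0)):
            print(f"{name:>6}: mean |SHAP| = {imp:.3f}")
        ```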
      • 15:50
        Automate Monte Carlo simulation on hardware accelerators for Run4@LHC (live) 30m

        We propose a new framework for the automatic generation of simulated events for particle physics processes in high energy physics applications, for LHC and HL-LHC configurations, with automatic support for hardware accelerators, in particular graphics processing units (GPUs). The project focuses on two specific problems: automating the generation of matrix elements and phase-space terms for GPUs, and real-time inference for machine learning tools, in order to reduce the computational effort required by the Monte Carlo integrator used in the simulation. From the computational point of view, we show how to extract matrix elements from MG5_aMC@NLO and convert the analytical expressions automatically into multi-threaded CPU and GPU code, drastically increasing the number of generated events compared to the original MC implementation. Furthermore, we investigate the performance of our approach on different cluster configurations, including multi-GPU setups, and we estimate the efficiency gains compared to current state-of-the-art implementations.

        Speaker: Marco Rossi (CERN)
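
        The batching idea at the heart of such frameworks can be illustrated with a deliberately simple, CPU-vectorised toy: a toy matrix element is evaluated on millions of phase-space points at once, so that the same array-oriented code maps naturally onto GPU array libraries. This is a sketch of the general principle only; it is unrelated to the actual MG5_aMC@NLO export described in the abstract.

        ```python
        import numpy as np

        rng = np.random.default_rng(42)

        def toy_matrix_element(costheta):
            """Toy |M|^2 for a 2->2 process, ~ (1 + cos^2 theta)."""
            return 1.0 + costheta ** 2

        def integrate(n_batch=10_000_000):
            """Flat Monte Carlo over cos(theta) in [-1, 1], evaluated on one large batch."""
            costheta = rng.uniform(-1.0, 1.0, n_batch)
            weights = toy_matrix_element(costheta) * 2.0       # Jacobian of the flat map
            return weights.mean(), weights.std(ddof=1) / np.sqrt(n_batch)

        estimate, error = integrate()
        print(f"integral = {estimate:.4f} +/- {error:.4f}  (exact: {8/3:.4f})")
        ```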
      • 16:20
        Physics by CMS during HL-LHC (live) 45m
        Speaker: Ulrich Heintz (Brown University (US))
      • 17:05
        Probing early-time dynamics with charm balance functions (live) 30m

        For a long time it was conjectured that the large azimuthal anisotropic flow observed in heavy-ion collisions implied early thermalization. After the discovery of azimuthal anisotropic flow in small systems, where it is unclear whether there is time for the system to thermalize, it was realized that one can have hydrodynamization, a phenomenon in which flow can build up out of equilibrium (chemical and thermal). Recent calculations in kinetic theory support such a scenario, in which one starts out with a purely gluonic system and the light quarks are first produced when the system is already flowing. However, this also means that one cannot access early-time dynamics with light quarks.

        In this talk we propose to use low $p_{\rm T}$ charm quarks to study the dynamics of the hydrodynamization process. The advantage with charm quarks is that:
        - they are dominantly produced early in the collision with rates that should be calculable in pQCD
        - they interact with the medium building up some amount of azimuthal anisotropic flow
        - they do not thermalize (forget the past) as their final yields would then essentially be zero

        We propose to go beyond flow measurements by measuring the balance function of hadrons with (anti)charm. By subtracting charm-charm and anticharm-anticharm correlations from charm-anticharm correlations we want to directly deduce how the charm and anticharm quarks interact with the system after their production. By measuring the evolution of the charm-anticharm correlation function from small to large systems we hope to provide direct experimental insights into early-time QCD dynamics.

        Speaker: Peter Christiansen (Lund University (SE))
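
        The like-sign subtraction behind such a balance function can be illustrated with a short toy: rapidity separations of charm-anticharm pairs are histogrammed and the same-sign (cc and c̄c̄) combinations are subtracted, leaving the correlated part. The toy kinematics and the normalisation below are invented for the illustration and are not the observable definition the authors will use.

        ```python
        import numpy as np

        rng = np.random.default_rng(3)
        bins = np.linspace(0.0, 4.0, 21)

        def pair_hist(y_a, y_b, same=False):
            """Histogram of |Delta y| over all pairs between two particle lists."""
            dy = np.abs(y_a[:, None] - y_b[None, :])
            if same:                                   # drop self-pairs for like-sign lists
                dy = dy[~np.eye(len(y_a), dtype=bool)]
            return np.histogram(dy.ravel(), bins=bins)[0].astype(float)

        opp_sign = np.zeros(len(bins) - 1)
        same_sign = np.zeros(len(bins) - 1)
        n_charm = 0
        for _ in range(5000):
            n_pairs = rng.poisson(2)                       # toy number of c-cbar pairs per event
            y_c = rng.normal(0.0, 2.0, n_pairs)
            y_cbar = y_c + rng.normal(0.0, 0.7, n_pairs)   # each anticharm correlated with its partner
            opp_sign += pair_hist(y_c, y_cbar)
            same_sign += 0.5 * (pair_hist(y_c, y_c, same=True)
                                + pair_hist(y_cbar, y_cbar, same=True))
            n_charm += 2 * n_pairs
        balance = (opp_sign - same_sign) / max(n_charm, 1)  # toy balance function vs |Delta y|
        print(np.round(balance, 4))
        ```

        In this toy the combinatorial pairs cancel in the subtraction and the residual distribution peaks at small |Delta y|, reflecting the correlation between each charm quark and its anticharm partner.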
      • 17:35
        The next generation techniques for anisotropic flow analyses (live) 30m

        The fundamental theory of the strong nuclear force, quantum chromodynamics, predicts the existence of an extreme state of matter dubbed the quark-gluon plasma (QGP) at extreme values of temperature and/or baryon density, which are attainable in ultra-relativistic heavy-ion collisions at the Large Hadron Collider. Properties of the QGP can be inferred via its hydrodynamic response to the anisotropies in the initial-state geometry. This phenomenon is known as collective anisotropic flow, and its measurements were crucial in establishing the perfect-liquid paradigm of QGP properties.

        The most precise anisotropic flow measurements to date are based on the formalism of multivariate cumulants. This approach introduced the flow-specific observables in the field, $v_n\{k\}$, in terms of which most experimental results and theoretical predictions have been reported in flow analyses in the past two decades. However, it was realized recently that the usage of multivariate cumulants in flow analyses is flawed. The non-trivial properties of cumulants are preserved only for the stochastic observables for which the cumulant expansion has been performed directly, and if there are no underlying symmetries due to which some terms in the cumulant expansion are identically zero. Both of these assumptions are violated in the derivation of observables $v_n\{k\}$, which produces non-negligible systematic biases particularly in collisions with a small number of produced particles.

        In this work, we attempt for the first time to reconcile the strict mathematical formalism of multivariate cumulants with the usage of cumulants in anisotropic flow analyses. As a consequence, the outcome of this study will be next-generation techniques and observables for flow analyses in high-energy nuclear collisions.

        Speaker: Mr Ante Bilandzic (Technische Universitaet Muenchen (DE))
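
        For context, the sketch below shows the existing two-particle Q-vector estimator that the contribution sets out to place on a firmer footing: azimuthal angles are generated with a known v2 and v2{2} is recovered from |Q_n|^2. The event sizes and the input v2 are toy values chosen for the illustration, not the new observables proposed in the abstract.

        ```python
        import numpy as np

        rng = np.random.default_rng(7)
        n_harm, v2_true = 2, 0.08

        def sample_phis(m, psi):
            """Sample m azimuthal angles from dN/dphi ~ 1 + 2 v2 cos(2(phi - psi)) by accept-reject."""
            phis = []
            while len(phis) < m:
                phi = rng.uniform(0.0, 2.0 * np.pi, m)
                keep = rng.uniform(0.0, 1.0 + 2.0 * v2_true, m) < 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi))
                phis.extend(phi[keep])
            return np.array(phis[:m])

        num, den = 0.0, 0.0
        for _ in range(3000):
            m = rng.integers(200, 400)
            phi = sample_phis(m, rng.uniform(0.0, 2.0 * np.pi))   # random event-plane angle
            qn = np.sum(np.exp(1j * n_harm * phi))                # Q-vector of the event
            num += np.abs(qn) ** 2 - m                            # sum over all distinct pairs
            den += m * (m - 1)
        v2_est = np.sqrt(num / den)                               # v2{2} from the two-particle correlation
        print(f"v2{{2}} = {v2_est:.4f}  (input v2 = {v2_true})")
        ```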
      • 18:05
        Model Compression and Simplification Pipelines for fast Deep Neural Network inference in FPGAs in HEP (live) 30m
        Speakers: Graziella Russo (Sapienza Universita e INFN, Roma I (IT)), Luigi Sabetta (Sapienza Universita e INFN, Roma I (IT))
    • 18:35 19:30
      Poster Session
    • 00:30 04:35
      New Detector and Reconstruction Methodologies, Machine Learning and Computing at HL-LHC: Block B (US West / Asia)
      • 00:30
        Experiment keynote talk (recorded) 50m
        Speaker: Eckhard Elsen (Deutsches Elektronen-Synchrotron (DE))
      • 01:20
        Automate Monte Carlo simulation on hardware accelerators for Run4@LHC (recorded) 30m

        We propose a new framework for the automatic generation of simulated events for particle physics processes in high energy physics applications, for LHC and HL-LHC configurations, with automatic support for hardware accelerators, in particular graphics processing units (GPUs). The project focuses on two specific problems: automating the generation of matrix elements and phase-space terms for GPUs, and real-time inference for machine learning tools, in order to reduce the computational effort required by the Monte Carlo integrator used in the simulation. From the computational point of view, we show how to extract matrix elements from MG5_aMC@NLO and convert the analytical expressions automatically into multi-threaded CPU and GPU code, drastically increasing the number of generated events compared to the original MC implementation. Furthermore, we investigate the performance of our approach on different cluster configurations, including multi-GPU setups, and we estimate the efficiency gains compared to current state-of-the-art implementations.

        Speaker: Marco Rossi (CERN)
      • 01:50
        Machine learning augmented probes of light Yukawa couplings from Higgs pair production (recorded) 30m

        The measurement of the trilinear Higgs self-coupling is among the primary goals of the high-luminosity LHC and is accessible in Higgs pair production. However, this process is also sensitive to other couplings. While most of them can be well constrained by single-Higgs measurements, this does not hold true for the couplings between the Higgs and light quarks. We show that the Higgs pair production process provides a sensitive probe of these couplings. Making use of interpretable machine learning with boosted decision trees and Shapley values, a measure derived from cooperative game theory, we separate the signal $q\bar{q}\to HH \to b\bar{b}\gamma \gamma$ from Standard Model $HH$ production and other backgrounds such as $HZ$, $t\bar{t}H$ and $b\bar{b}\gamma\gamma$. We show that, with the help of the machine learning framework, the estimated bounds on the light-quark couplings to the Higgs can be significantly improved compared to other probes, $\textit{cf.}$ fig. 1. The introduction of Shapley values to assess variable importance makes it possible to train the machine learning model on a reduced set of kinematic variables. In what would otherwise be a black-box approach using an over-complete set of kinematic variables, Shapley values tap into kinematic correlations and hence allow the selection of a minimal set of important kinematic variables that provides optimal accuracy in signal vs. background classification.

        Furthermore, we explore the potential of disentangling the various new-physics operators leading to modifications of the light-quark Yukawa couplings and a modified trilinear Higgs self-coupling, with the added interpretability of the machine-learning-based multivariate analysis. Additional kinematics can be incorporated to help break the degeneracy between the up- and down-quark operators.

        Speaker: Lina Alasfar (Humboldt-Universität zu Berlin)
      • 02:20
        Physics by CMS during HL-LHC (recorded) 45m
        Speaker: Ulrich Heintz (Brown University (US))
      • 03:05
        Probing early-time dynamics with charm balance functions (recorded) 30m

        For a long time it was conjectured that the large azimuthal anisotropic flow observed in heavy-ion collisions implied early thermalization. After the discovery of azimuthal anisotropic flow in small systems, where it is unclear whether there is time for the system to thermalize, it was realized that one can have hydrodynamization, a phenomenon in which flow builds up out of chemical and thermal equilibrium. Recent kinetic-theory calculations support such a scenario, in which one starts out with a purely gluonic system and the light quarks are only produced once the system is already flowing. However, this also means that one cannot access early-time dynamics with light quarks.

        In this talk we propose to use low-$p_{\rm T}$ charm quarks to study the dynamics of the hydrodynamization process. The advantages of charm quarks are that:
        - they are dominantly produced early in the collision, with rates that should be calculable in pQCD
        - they interact with the medium, building up some amount of azimuthal anisotropic flow
        - they do not thermalize (i.e. forget the past), as their final yields would then essentially be zero

        We propose to go beyond flow measurements by measuring the balance function of hadrons carrying (anti)charm. By subtracting the charm-charm and anticharm-anticharm correlations from the charm-anticharm correlations, we want to deduce directly how the charm and anticharm quarks interact with the system after their production. By measuring the evolution of the charm-anticharm correlation function from small to large systems, we hope to provide direct experimental insight into early-time QCD dynamics.
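        For orientation, a balance function of the kind referred to above is conventionally written (here only as a generic definition adapted to (anti)charm, not necessarily the exact observable proposed in this contribution) as
        $B(\Delta y) = \frac{1}{2}\left[\frac{N_{c\bar{c}}(\Delta y) - N_{cc}(\Delta y)}{N_{c}} + \frac{N_{\bar{c}c}(\Delta y) - N_{\bar{c}\bar{c}}(\Delta y)}{N_{\bar{c}}}\right]$,
        where $N_{\alpha\beta}(\Delta y)$ counts pairs with the indicated (anti)charm content at separation $\Delta y$ and $N_{c}$, $N_{\bar{c}}$ are the single-particle yields; the like-sign terms remove correlations not associated with pair creation, which is the subtraction described in the text.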

        Speaker: Peter Christiansen (Lund University (SE))
      • 03:35
        The next generation techniques for anisotropic flow analyses (recorded) 30m

        The fundamental theory of the strong nuclear force, quantum chromodynamics, predicts the existence of an extreme state of matter dubbed the quark--gluon plasma (QGP) at extreme values of temperature and/or baryon density, which are attainable in ultra-relativistic heavy-ion collisions at the Large Hadron Collider. Properties of the QGP can be inferred from its hydrodynamic response to anisotropies in the initial-state geometry. This phenomenon is known as collective anisotropic flow, and its measurement was crucial in establishing the perfect-liquid paradigm of the QGP.

        The most precise anisotropic flow measurements to date are based on the formalism of multivariate cumulants. This approach introduced the flow-specific observables $v_n\{k\}$, in terms of which most experimental results and theoretical predictions in flow analyses have been reported over the past two decades. However, it was recently realized that the usage of multivariate cumulants in flow analyses is flawed: the non-trivial properties of cumulants are preserved only for the stochastic observables on which the cumulant expansion is performed directly, and only if there are no underlying symmetries due to which some terms in the cumulant expansion are identically zero. Both of these assumptions are violated in the derivation of the observables $v_n\{k\}$, which produces non-negligible systematic biases, particularly in collisions with a small number of produced particles.

        In this work, we attempt for the first time to reconcile the strict mathematical formalism of multivariate cumulants with their usage in anisotropic flow analyses. The outcome of this study will be a set of next-generation techniques and observables for flow analyses in high-energy nuclear collisions.
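        For reference, a minimal NumPy sketch of the conventional two-particle Q-cumulant estimate $v_n\{2\}$ that this contribution critiques (the proposed next-generation observables are not reproduced here; the toy event generator is purely illustrative):

        import numpy as np

        def vn_two(events, n=2):
            # <<2>> from per-event azimuthal angles, with self-correlations removed
            numer, denom = 0.0, 0.0
            for phi in events:
                M = len(phi)
                if M < 2:
                    continue
                Qn = np.exp(1j * n * phi).sum()              # flow vector Q_n
                two = (abs(Qn) ** 2 - M) / (M * (M - 1))     # single-event <2>
                numer += M * (M - 1) * two                   # multiplicity weight
                denom += M * (M - 1)
            c2 = numer / denom                               # c_n{2}
            return c2 ** 0.5 if c2 > 0 else float("nan")     # v_n{2}

        rng = np.random.default_rng(1)

        def toy_event(M=500, v2=0.1):
            # Accept-reject sampling of dN/dphi ~ 1 + 2 v2 cos(2 phi)
            phi = []
            while len(phi) < M:
                x = rng.uniform(0.0, 2.0 * np.pi)
                if rng.uniform() < (1.0 + 2.0 * v2 * np.cos(2.0 * x)) / (1.0 + 2.0 * v2):
                    phi.append(x)
            return np.array(phi)

        events = [toy_event() for _ in range(200)]
        print("v2{2} ~", vn_two(events))   # should recover roughly the input v2 = 0.1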

        Speaker: Mr Ante Bilandzic (Technische Universitaet Muenchen (DE))
      • 04:05
        Thermodynamics of the 'Little Bang' (recorded) 30m

        Observations of the cosmic microwave background radiation (CMBR) by various satellites confirm the Big Bang evolution and inflation, and provide important information on the early Universe and its evolution with excellent accuracy [1]. The physics of heavy-ion collisions at ultra-relativistic energies, popularly known as 'little bangs', has often been compared to the Big Bang of the early Universe [2-3]. In little bangs, the produced fireball goes through a rapid evolution from a partonic QGP phase to a hadronic phase, and finally freezes out. Heavy-ion experiments are predominantly sensitive to the conditions that prevail at the later stages of the collision, as the majority of particles are emitted near freeze-out. Thus, a direct and quantitative estimation of the properties of the hot and dense matter in the early stages, and during each stage of the evolution, has not yet been possible.

        In heavy-ion collisions, the mean transverse momentum $\langle p_{\rm T}\rangle$ is a proxy for the temperature, and the mean number of charged particles $\langle N_{\rm ch}\rangle$ is a manifestation of the energy or entropy density. The main motivation of our study is to generate a map of the energy density $\epsilon$ and temperature $T$, similar to the CMBR map, using fluctuation and correlation techniques.

        Two-particle ${p}_{\rm T}$ correlations have been studied for a long time and are defined in terms of a dimensionless correlation function, the ratio of the differential correlator to the square of the average transverse momentum,
        $P_{2}(\Delta\eta,\Delta\varphi) = \frac{\langle \Delta p_{\rm T}\, \Delta p_{\rm T} \rangle \left( \Delta \eta, \Delta \varphi \right)}{\langle p_{\rm T}\rangle^2} = \frac{1}{\langle p_{\rm T}\rangle^2} \frac{\int\limits_{p_{\rm T,min}}^{p_{\rm T,max}} \rho_2({\bf p}_1,{\bf p}_2)\, \Delta p_{\rm T,1}\, \Delta p_{\rm T,2}\; {\rm d}p_{\rm T,1}\, {\rm d}p_{\rm T,2}}{\int\limits_{p_{\rm T,min}}^{p_{\rm T,max}} \rho_2({\bf p}_1,{\bf p}_2)\; {\rm d}p_{\rm T,1}\, {\rm d}p_{\rm T,2}}$,
        where $\langle p_{\rm T}\rangle$ is the inclusive average transverse momentum of the produced particles in the event ensemble. Technically, in this type of analysis the integrals in the numerator and denominator of the above expression are first evaluated in the four-dimensional space $(\eta_1, \varphi_1, \eta_2, \varphi_2)$; the ratio is then calculated and subsequently averaged over all coordinates. But when ${p}_{\rm T}$ is used as a proxy for the system temperature, it is affected by radial flow and falls short of capturing the true 'temperature' fluctuations.
        To overcome this, we propose 4-particle ${p}_{\rm T}$ and number-fluctuation observables in which non-flow effects are suppressed. These can be linked to thermodynamic response functions such as the specific heat, heat capacity and compressibility. With the advent of the high-luminosity, high-statistics Run 3 and Run 4 of the CERN Large Hadron Collider (LHC), it will become possible to measure 4-particle correlations. The ALICE experiment is also expected to take data in LHC Run 5 with a completely new detector covering 8 units in rapidity. For these runs, the fluctuation map can be even more effective in extracting the physics relevant to the thermodynamics of hot and dense matter.
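        As an illustration of the pair correlator defined above, a minimal toy sketch in NumPy (synthetic events with an injected event-by-event $\langle p_{\rm T}\rangle$ fluctuation; binning in $(\Delta\eta,\Delta\varphi)$, acceptance and efficiency corrections are all omitted):

        import numpy as np

        rng = np.random.default_rng(2)

        def toy_event(M=300, mean_pt=0.55, fluct=0.05):
            # Event-by-event shift of the pT scale mimics a temperature-like fluctuation
            return rng.exponential(mean_pt + rng.normal(0.0, fluct), size=M)

        def delta_pt_correlator(events):
            # <Delta pT,1 Delta pT,2> over distinct pairs, normalised by <pT>^2
            mean_pt = np.concatenate(events).mean()          # inclusive <pT>
            numer, npairs = 0.0, 0
            for pt in events:
                d = pt - mean_pt
                numer += d.sum() ** 2 - (d ** 2).sum()       # sum over i != j of d_i * d_j
                npairs += len(d) * (len(d) - 1)
            return (numer / npairs) / mean_pt ** 2

        events = [toy_event() for _ in range(500)]
        print("P2-like correlator ~", delta_pt_correlator(events))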

        References:
        1. E. Komatsu and C. L. Bennett (WMAP Science Team), arXiv:1404.5415 [astro-ph.CO].
        2. W. Florkowski, Phenomenology of Ultra-Relativistic Heavy-Ion Collisions, World Scientific, 2010.
        3. U. Heinz, J. Phys.: Conf. Ser. 455, 012044 (2013).

        Speaker: Dr Sumit Basu (Lund University (SE))
    • 04:35 05:30
      Poster Session