The 2021 International Workshop on Future Linear Colliders (LCWS2021), organised by the European linear collider community, was the latest in the series devoted to the study of the physics, detectors, and accelerator issues relating to the high-energy linear electron-positron colliders CLIC and ILC. It took place in an online format.
Since the last workshop (LCWS2019), many significant steps have been taken. The European Strategy for Particle Physics Update 2020 positions an electron-positron Higgs factory as the highest-priority next-generation collider. A linear collider would operate as a Higgs factory during its initial stage, while maintaining a clear path for future energy upgrades. Preparations for the ILC in Japan have moved forward, with ICFA announcing the establishment of the ILC International Development Team (IDT). In the US, the Snowmass process is ongoing.
With a wide programme of plenary and parallel sessions, the workshop provided the opportunity to present ongoing work as well as to get informed and involved. The programme featured progress on the ILC in Japan as a prominent theme, and used the ILC-IDT working groups' substructure for sessions reviewing the progress in accelerator design, detector developments, and physics studies. The progress of the CLIC studies in the same areas was also covered, and most sessions and topics were common to both projects.
Information on past LC schools can be found at the following website:
https://lcschool.desy.de/
The THDMa is a new-physics model that extends the scalar sector of the Standard Model by an additional doublet as well as a pseudoscalar singlet, and allows for mixing between all possible scalar states. In the gauge eigenbasis, the additional pseudoscalar serves as a portal to the dark sector, with a priori arbitrary dark matter spin states. The option where dark matter is fermionic is currently one of the standard benchmarks for the experimental collaborations, and several searches at the LHC constrain the corresponding parameter space. However, most current studies constrain regions in parameter space by setting all but 2 of the 12 free parameters to fixed values. I will present results where we allow all parameters to float. We apply all current theoretical and experimental constraints, and identify regions in the parameter space which remain allowed after these have been applied and which might be interesting for investigation at current and future colliders.
Extensions of the Standard Model that include vector-like quarks commonly also include additional particles that may mediate new production or decay modes. Using as an example the minimal linear $\sigma$ model, which reduces to the minimal $SO(5)/SO(4)$ composite Higgs model in a specific limit, we consider the phenomenology of vector-like quarks when a scalar singlet $\sigma$ is present. This new particle may be produced in the decays $T \to t \sigma$, $B \to b \sigma$, where $T$ and $B$ are vector-like quarks of charges $2/3$ and $-1/3$, respectively, with the subsequent decay $\sigma \to W^+ W^-, ZZ, hh$. By scanning over the allowed parameter space we find that these decays may be dominant. In addition, we find that the presence of several new particles allows for single-$T$ production cross sections larger than those expected in minimal models. We discuss the observability of these new signatures in existing searches.
We studied the phenomenological implications of numerous family non-universal U(1)$^\prime$ sub-models in the minimal U(1)$^\prime$-extended Supersymmetric Model (UMSSM) possessing an extra exotic down-type quark field. To do this, we started from anomaly-cancellation criteria to generate a number of solutions in which the extra U(1)$^\prime$ charges of the particles are treated as free parameters. We imposed existing bounds from colliders and astrophysical observations on the assumed sub-models and observed that current limits dictate certain orientations.
Regarding the potential impact of the non-universal charges on Z$^\prime$ decays, we make predictions for existing and future experiments. We also probe the signatures of the exotic quark and of the non-universality at future linear colliders.
The problematic huge hierarchy between the usual 4-dimensional Planck scale of gravity and the electroweak symmetry breaking scale can interestingly disappear at some point-like location along extra space-like dimensions where the effective gravity scale is reduced down to the TeV scale. Field theories with point-like particle locations (3-dimensional brane-worlds) or point-like interactions deserve special care. In particular, it can be shown that, in contrast with the usual literature, brane-scalar fields – like the SM Higgs boson – interacting with fermions in the whole space (bulk) do not need to be regularized if rigorous 4- or 5-dimensional treatments are applied: the standard regularization introduces a finite-width wave function for scalar fields localized along the extra dimensions. The variational calculus of the least-action principle must also be applied strictly to derive the fermion (Kaluza-Klein) masses and couplings, in particular by distinguishing the natural and essential boundary conditions: the higher-dimensional model – based in particular on extra compact spaces of interval or circle (orbifold) type – must be defined either completely through the action expression [requiring new specific brane terms bilinear in the fermion fields] or partially through additional so-called essential boundary conditions. Besides, the correct definition of the action integrand requires introducing improper integrals in order to remain compatible with the fermion wave-function discontinuities induced by the point-like Higgs interactions. Phenomenologically, the correct treatment of the brane-localised Higgs boson could be tested via precise measurements of the Higgs coupling to di-photons or of (flavour-changing) Yukawa interactions at a linear collider.
The recent tension between local and early-Universe measurements of the Hubble constant can be explained in a particle physics context. A mechanism is presented where this tension is alleviated due to the presence of a Majoron, arising from the spontaneous breaking of Lepton Number. The lightness of the active neutrinos is consistently explained. Moreover, this mechanism is shown to be embeddable in the Minimal (Lepton) Flavour Violating context, providing a correct description of fermion masses and mixings, and protecting the flavour sector from large deviations from the Standard Model predictions. A QCD axion is also present to solve the Strong CP problem. The Lepton Number and Peccei-Quinn symmetries naturally arise in the Minimal (Lepton) Flavour Violating setup, and their spontaneous breaking is due to the presence of two extra scalar singlets. The Majoron phenomenology is also studied in detail. Decays of the heavy neutrinos and the invisible Higgs decay provide the strongest constraints on the model parameter space.
Five to six talks covering the use of the ILC complex beams for a variety of physics studies other than collider and dark-sector physics, and two to four talks on the use of the beam for accelerator R&D and detector R&D.
In this talk, I will discuss a computational set-up for calculating the production of a massive quark-antiquark pair in $e^+e^-$ collisions at order $\alpha_s^2$ in the coupling of quantum chromodynamics (QCD) at the differential level by means of the antenna subtraction method.
Theoretical predictions for the production of top-quark pairs in the continuum, and of bottom-quark pairs at the Z-boson resonance, will be reported.
In particular, I will focus on the order $\alpha_s^2$ QCD corrections to the heavy quark forward-backward asymmetry ($A_{FB}$) in $e^+e^-$ collisions.
In the case of the $A_{FB}$ of bottom quarks at the Z-boson resonance, the QCD corrections are determined with respect to both the bottom quark axis and the thrust axis.
I will also briefly discuss improvements on these QCD corrections by applying the scale-optimization procedure based on the Principle of Maximum Conformality.
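For reference, the forward-backward asymmetry discussed here follows the standard definition, with $\theta$ the polar angle of the outgoing quark (measured with respect to the chosen axis) relative to the incoming electron:

```latex
A_{FB} = \frac{\sigma(\cos\theta > 0) - \sigma(\cos\theta < 0)}
              {\sigma(\cos\theta > 0) + \sigma(\cos\theta < 0)}
```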
We explore composite models with different manifestations of the global symmetry (G) and the subgroup (H) it breaks into. One feature common to all is the possible presence of one (or more) CP-odd scalar singlet states ("$\eta$") which are Goldstone bosons of the breaking of the global symmetry. In some cases the corresponding mass corrections for the $\eta$ are highly suppressed, making it arbitrarily light; these cases are the focus of our analysis. In such a scenario, the couplings of the $\eta$ to the fermions are loop-induced and driven by the anomalous WZW interactions. Depending on the mass and the pseudoscalar decay constant "$f$", we point out regions of the parameter space where the $\eta$ shows up as missing transverse energy, leaves a displaced vertex, or decays promptly. Each of these regions is associated with signatures that have a low SM background, which could lead either to a discovery or to strong constraints on the scale "$f$".
Using the method of massive operator matrix elements, we calculate the subleading QED initial-state radiative corrections to the process $e^+ e^- \to \gamma^*/Z^*$ for the first three logarithmic contributions. The calculation is performed in the limit of large center-of-mass energies $s \gg m_e^2$.
These terms supplement the known corrections to $O(\alpha^2)$, which were completed recently. The newly calculated radiators can be expressed in terms of harmonic polylogarithms of argument $z$ and $(1-z)$ and, in Mellin $N$-space, by generalized harmonic sums.
Given the high precision of future colliders operating at very large luminosity, these corrections are important for precise theoretical predictions.
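Schematically, such radiator functions enter the observed cross section via the standard ISR factorisation (a textbook-level sketch, with $H$ the radiator and $\hat\sigma$ the hard-scattering cross section):

```latex
\sigma(s) = \int_0^1 dz \, H\!\left(z; \alpha, \tfrac{s}{m_e^2}\right) \hat{\sigma}(z s),
\qquad
H = \delta(1-z) + \sum_{k \ge 1} \left(\frac{\alpha}{4\pi}\right)^{k}
    \sum_{l=0}^{k} c_{k,l}(z)\, \ln^{l}\!\frac{s}{m_e^2},
```

where the logarithmically enhanced coefficients $c_{k,l}(z)$ correspond to the radiator contributions computed in this work.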
We test the assumption that fermion-loop corrections to high-energy $W^+W^-$ scattering are negligible compared to the boson-loop ones. We find that, if the couplings of the interactions deviate from their Standard Model values, fermion-loop corrections can in fact become as important as or even greater than boson-loop corrections in some regions of the parameter space, and both types of loops should be taken into account.
A new LCIO-based data format called mini-DST has been developed, which combines PFO- and event-level information, including the output of the most important high-level reconstruction algorithms.
Originally triggered by Snowmass 2021 studies, the mini-DST is useful for beginners as the starting point of analysis.
In this talk, we discuss the basics and contents of the mini-DST, how to use it and its limitations.
The Event Data Model (EDM) is at the core of HEP experiments software frameworks. It defines the language in which physicists are able to express their ideas and also how the different software components communicate with each other. The Key4HEP project aims to develop a common software stack for future collider projects. One of the main components of Key4HEP is a common EDM in the form of EDM4hep. It is generated via the podio EDM toolkit and a prototype design of EDM4hep, based on LCIO, is in place. We will briefly discuss some technical points of the implementation of EDM4hep, and based on that highlight some of the similarities and differences between EDM4hep and LCIO. We will finish with the report of some first experiences with EDM4hep in the Key4HEP framework and give an outline of future plans.
Since 2020, CLIC and ILC have taken part in the Key4hep collaboration, which strives to create common software for HEP collider design studies. Key4hep represents a flexible, multi-layered model of collaboration, where different common components like documentation, build system, data modeling, persistency and framework components are adopted as needed. This talk gives a bird's-eye view of Key4hep activities with a focus on the developments of the spack-based build system -- which allows one to install all iLCSoft components and their dependencies -- the framework core, and the framework integration of the Delphes fast simulation program, and discusses the roadmap towards the complete "turnkey software stack".
To ensure backward compatibility between iLCSoft and Key4hep and to ease the validation of the iLCSoft processors, k4MarlinWrapper provides the necessary tools to run Marlin processors within the Gaudi framework, allowing for a smooth transition from current battle-tested particle reconstruction frameworks to a common framework for future experiments like CLIC or FCC. It creates a wrapper that interfaces between the data formats and components of Marlin and Gaudi. It provides converters between the Marlin steering format and the Gaudi options file, translating special cases found in the former. k4MarlinWrapper supports different event data formats by providing in-memory converters that can be configured to create a sequence of algorithms mixing different event data formats. It benefits from synergies with other Key4hep projects, reusing and leveraging their utilities.
The iLCDirac grid interface has been successfully used by the Linear Collider community for many years. It has made it possible to isolate users from ever-changing distributed environments by offering a consistent interface throughout the years. In this contribution we detail the current status and latest developments, as well as the plans for keeping iLCDirac up to date with the latest developments of the underlying DIRAC framework, including new storage-element types and grid infrastructures, and towards Python 3 compatibility.
We have produced new high-statistics 250 GeV common MC samples for the ILD physics study using the latest generator, simulation, and reconstruction packages. Aiming for the requested statistics of MC samples for physics studies, we utilized the iLCDirac distributed computing environment for mass production. In this talk, we will report the estimated resource requirements and the current status and prospects of our ongoing production.
The US Particle Physics Community Planning Exercise (aka Snowmass), sponsored by the APS Division of Particles and Fields, provides an opportunity for particle physicists in the US, together with international partners, to build consensus on possible future projects and explore their potential scientific reach. The Silicon Detector (SiD) is one of two mature detector concepts proposed for the International Linear Collider (ILC). Several physics analyses inspired by Snowmass Letters of Interest (LoIs) are underway to explore the SiD precision measurement of the Higgs boson: Higgs to invisible, Higgs to long-lived particles with displaced decay vertices, Higgs CP properties, and the Higgs self-coupling. Variations of the tracking, vertexing and calorimetry are envisioned in order to estimate their impact on sensitivity to these important Higgs properties.
Determination of the Higgs boson self-coupling is crucial for understanding the structure of the electroweak symmetry-breaking vacuum. I will review the theoretical extraction of the self-coupling from observed cross sections of $e^+e^- \to Zhh$ and $e^+e^- \to \nu\bar{\nu}hh$ at high energy, current projections of experimental sensitivity, and the limiting factors for this measurement.
In the Standard Model (SM), the HγZ coupling is loop-induced and might therefore receive relatively large corrections from Beyond the Standard Model (BSM) physics. It is very challenging to measure at the HL-LHC, where only 3σ significance is expected for the branching ratio of H$\to$γZ. On the other hand, the HγZ coupling is potentially very sensitive to new physics, for example new heavy particles contributing to the loop; it is therefore interesting to know how well this coupling can be measured at the International Linear Collider (ILC). Moreover, the HγZ coupling plays an important role in the effective-field-theory framework: for example, in the $e^+e^-\to$ ZH process it is necessary to know the contribution from s-channel photon exchange. It turns out that the anomalous HγZ, Hγγ, HZZ and HWW couplings arise from a common set of dimension-6 operators, so the HγZ coupling measurement can provide very useful constraints on those operators.
In this talk, we will report a study of the HγZ coupling using the production channel $e^+e^-\to \gamma H$, with preliminary results based on the full simulation of ILD using multivariate data analysis. Results will be given for an integrated luminosity of 2000 fb$^{-1}$ (the final plan) at $E_{CM}$ = 250 GeV.
We report on studies of the e+e− → HZ process with the subsequent decay of the Higgs boson H → ZZ*, where the ZZ* combination is reconstructed in final states with two jets and two leptons. The analysis is performed using Monte Carlo data samples obtained assuming the ILD detector model, an integrated luminosity of 2 ab−1 and a center-of-mass energy √s = 250 GeV. Signals are measured for four processes, corresponding to two combinations of the H → ZZ* final states and two decays of the directly produced Z boson, Z → νν̄ and Z → qq̄. To obtain the Higgs boson mass distributions, we use the variables M(jjll) and ∆M = M(jjll) − M(jj) + M(Znom), where M(Znom) = 91.2 GeV. Potential backgrounds are also estimated. The measurement of the e+e− → HZ process allows one to obtain the width of the Higgs boson in a model-independent way.
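As an illustration, the ∆M variable can be computed from reconstructed four-momenta as follows (a minimal sketch with hypothetical four-vectors; the actual analysis of course uses the full ILD reconstruction):

```python
import math

M_Z_NOM = 91.2  # nominal Z mass in GeV

def inv_mass(p4s):
    """Invariant mass of the system formed by a list of (E, px, py, pz) four-vectors (GeV)."""
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(0.0, E**2 - px**2 - py**2 - pz**2))

def delta_m(jets, leptons):
    """Delta M = M(jjll) - M(jj) + M(Z_nom); the subtraction partially cancels
    the jet energy mis-measurement entering both M(jjll) and M(jj)."""
    return inv_mass(jets + leptons) - inv_mass(jets) + M_Z_NOM

# hypothetical reconstructed four-vectors (GeV), for illustration only
jets = [(45.0, 30.0, 20.0, 10.0), (40.0, -25.0, 15.0, -5.0)]
leps = [(20.0, 10.0, -12.0, 8.0), (18.0, -9.0, -10.0, -6.0)]
print(round(delta_m(jets, leps), 2))
```

The cancellation of the di-jet mass term is the reason ∆M peaks more sharply at the Higgs mass than M(jjll) alone.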
As a multi-TeV energy-staged machine, CLIC offers millions of Higgs bosons to be produced in a low-background environment, enabling measurements of most of the Higgs couplings at the few-per-mille level. To this end, individual measurements at different CLIC energy stages, in various Higgs production and decay channels, serve as inputs to global fits of the Higgs properties in a model-independent or model-dependent way ($\kappa$-framework, EFT fit). In this talk we discuss measurements of $BR(H \to ZZ^* \to qqll)$ ($l = e^{\pm}, \mu^{\pm}$) at 350 GeV and 3 TeV centre-of-mass energies from the perspective of their statistical precision.
In this talk we address the potential of the 3 TeV centre-of-mass-energy Compact Linear Collider (CLIC) to measure the Standard Model (SM) Higgs boson decay to two photons. Since photons are massless, they couple to the Higgs boson at loop level, through the exchange of heavy particles either from the Standard Model or beyond. Any deviation of the Higgs to di-photon branching fraction, and consequently of the Higgs-photon coupling, may indicate new physics. The measurement is fully simulated on 5000 samples of pseudo-experiments, assuming an integrated luminosity of 5 ab$^{-1}$ with unpolarized beams.
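The statistical core of such a pseudo-experiment study can be illustrated with a toy counting setup (a hedged sketch, not the CLIC analysis itself: the yields are invented, the background is assumed known, and Poisson fluctuations are approximated as Gaussian, which is adequate for large counts):

```python
import math
import random

random.seed(42)

def toy_precision(n_signal, n_background, n_toys=5000):
    """Estimate the relative statistical precision on a signal yield from toy
    experiments: fluctuate the total count, subtract the assumed-known
    background, and take the spread of the resulting signal estimates."""
    mu = n_signal + n_background
    estimates = []
    for _ in range(n_toys):
        observed = random.gauss(mu, math.sqrt(mu))  # Gaussian approx. to Poisson
        estimates.append(observed - n_background)
    mean = sum(estimates) / n_toys
    var = sum((e - mean) ** 2 for e in estimates) / (n_toys - 1)
    return math.sqrt(var) / n_signal

# invented signal and background yields, for illustration only
rel_prec = toy_precision(n_signal=2000, n_background=8000)
print(f"relative precision ~ {rel_prec:.3f}")
```

For these toy numbers the result reproduces the expected counting limit, sqrt(S+B)/S.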
In the Standard Model (SM), Higgs boson pair production initiated by photons ($\gamma \gamma \to h h$) is a loop-generated process and is thereby very sensitive to any new couplings and particles that may enter the loops. Composite Higgs Models (CHMs) provide an alternative mechanism to address the hierarchy problem of the SM, in which the Higgs could be a bound state of a strongly interacting sector instead of an elementary field. These models, apart from modifying the SM Higgs couplings, can also introduce new effective couplings that have a substantial impact on loop processes. In this work [1] we have studied the impact of Composite Higgs models on the $\gamma\gamma \to h h$ (di-Higgs) production process.
References
[1] A. Bharucha, G. Cacciapaglia, A. Deandrea, N. Gaur, D. Harada, F. Mahmoudi and K. Sridhar, [arXiv:2012.09470 [hep-ph]].
The naturalness problem motivates new physics beyond the Standard Model (SM). The Higgs sector in neutral naturalness models provides a portal to the hidden QCD sector, and thus Higgs coupling measurements play a central role in exploring the model parameter space. We investigate a class of neutral naturalness models, in which the Higgs boson is a pseudo-Goldstone boson with the radial mode at the TeV scale. Integrating out the radial mode, we obtain various dimension-six operators in the SM effective field theory, and calculate the low-energy effective Higgs potential with radiative corrections. With Higgs precision measurements at a future Higgs factory, we explore the implications for the model parameter space. We also study the constraints from future electroweak precision measurements, and find that Higgs and electroweak precision measurements at future lepton colliders lead to comparable limits.
We study the process $e^+ e^- \to \ell^+ \ell^- h\left(b\bar{b}\right)$ considering centre-of-mass
energies $\sqrt{s} = \{250, 1000, 3000\} \, \text{GeV}$ using resolved- and boosted-analysis
techniques to reconstruct the Higgs boson. We show that this process probes the tensor
structure of the $hZZ^*/hZ\bar{f}f$ couplings via Higgs-strahlung and $Z$-boson fusion in the
dimension-six Standard Model Effective Field Theory (SMEFT). Upon exploiting the interplay
between the sensitivity of high-energy $e^+ e^-$ colliders and beam polarisation, we obtain
projected bounds on the tensor structures of the Higgs couplings to $Z$-bosons using
total rates and differential distributions in energy variables, which complement the
existing constraints from $Z$-pole and diboson measurements at LEP.
We study the measurement of Higgs boson self-couplings in $2\rightarrow 3$ vector boson scattering processes at proton colliders and lepton colliders in the framework of the Standard Model Effective Field Theory, taking the examples of $W^{\pm}_L W^{\pm}_L\rightarrow W^{\pm}_L W^{\pm}_L h$ and $W^+_L W^-_L\rightarrow h h h$. First, by applying the Goldstone equivalence theorem and analysing the amplitudes at high energy, we find that
the ratio of beyond-the-Standard-Model amplitudes to Standard Model ones scales as $\frac{{\mathcal A}^{\text{BSM}}}{{\mathcal A}^{\text{SM}}} \sim \frac{E^2}{\Lambda^2}$. The dependence of the amplitudes on the Wilson coefficient $c_6$ enters mainly through a 5-point contact scalar vertex. Second, using MadGraph5_aMC@NLO to simulate the full processes without decaying the heavy bosons, we find that the full cross sections remain very sensitive to the relevant Wilson coefficients. The sensitivity of the full $W^{\pm}_L W^{\pm}_L\rightarrow W^{\pm}_L W^{\pm}_L h$ cross section to $c_6$ requires $p_T$ cuts on the final states to reduce the SM cross sections enhanced by Sudakov logarithms. $\sigma/\sigma_{\text{SM}}$ at lepton colliders with $\sqrt{s}\ge 3$ TeV and proton colliders with $\sqrt{s}\ge 27$ TeV is comparable to, or even better than, that of the di-Higgs channel at the LHC.
Beyond-Standard-Model (BSM) particles should be included in the effective field theory in order to compute scattering amplitudes involving these extra particles. We formulate an extension of the Higgs effective field theory which contains an arbitrary number of scalar and fermion fields with arbitrary electric and chromoelectric charges. The BSM Higgs sector is described using the non-linear sigma model in a manner consistent with the spontaneous electroweak symmetry breaking. The chiral-order counting rule is arranged consistently with the loop expansion, and the leading-order Lagrangian is organized in accord with it. We use a geometrical language to describe the particle interactions. The parametrization redundancy in the effective Lagrangian is resolved by describing the on-shell scattering amplitudes only with covariant quantities in the scalar/fermion field space. We introduce a useful coordinate (the normal coordinate), which simplifies the computation of on-shell amplitudes significantly. We show that the high-energy behaviors of the scattering amplitudes determine the "curvature tensors" in the scalar/fermion field space. The massive spinor-wavefunction formalism is shown to be useful in the computation of on-shell helicity amplitudes.
Electroweakly Interacting Massive Particles (EWIMPs), as represented by the wino or higgsino in SUSY, are among the best dark matter candidates. There are several methods to search for such particles at colliders; one of them is an indirect probe, as follows. An EWIMP modifies the self-energy of the electroweak gauge bosons via loop contributions, which results in a slight change of the cross section of the Standard Model (SM) Drell-Yan process. By comparing the SM prediction for the Drell-Yan process with the experimental result, we can search for EWIMPs indirectly. From a 1-loop calculation it is known that this effect becomes maximal when the EWIMP mass is half the center-of-mass energy; we have also pointed out that higher-order calculations and non-perturbative effects are important in such an energy region.
A highly granular silicon-tungsten electromagnetic calorimeter (SiW-ECAL) is the reference design of the ECAL for International Large Detector (ILD) concept, one of the two detector concepts for the detector(s) at the future International Linear Collider. Prototypes for this type of detector are developed within the CALICE Collaboration. The technological prototype addresses technical challenges such as integrated front-end electronics or compact layer and readout design.
During autumn/winter 2019/20 a stack of up to 22 layers with dimensions of ~18×18×25 cm^3 was assembled. A beam test at DESY is planned for May 2021. We will present the status of the hardware aspects of the prototype and the status of its implementation in simulation.
A prototype of a digital pixel EM calorimeter, EPICAL-2, has been designed and constructed, following up on a previous prototype [1]. It consists of alternating W absorber and Si sensor layers, with a total thickness of ~20 radiation lengths, an area of $\mathrm{30mm\times30mm}$, and ~25 million pixels. The new EPICAL-2 detector employs the ALPIDE pixel sensors developed for the ALICE ITS upgrade. This R&D is performed in the context of the proposed Forward Calorimeter upgrade for the ALICE experiment, but it also serves the general understanding of the principle of a fully digital calorimeter.
We will report on first results regarding alignment and calibration from cosmics and on the calorimeter performance measured with the DESY electron beam. The prototype shows good energy resolution and linearity, comparable with those of a SiW calorimeter with analog readout. We will also show first results of shower-shape studies with unprecedented spatial precision.
[1] JINST 13 (2018) P01014.
The increase of the particle flux (pile-up) at the HL-LHC, with luminosities of L ≃ 7.5 × 10^34 cm^-2 s^-1, will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) is proposed in front of the LAr end-cap calorimeters for pile-up mitigation and for luminosity measurement.
It will cover the pseudo-rapidity range from 2.4 to 4.0. Two double-sided layers of silicon sensors will provide precision timing information for MIPs with a resolution better than 30 ps per track, in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 mm × 1.3 mm, leading to a highly granular detector with 3 million channels. Low Gain Avalanche Detector (LGAD) technology has been chosen, as it provides enough gain to reach the large signal-to-noise ratio needed.
The requirements and overall specifications of the HGTD will be presented as well as the technical proposal. LGAD R&D campaigns are carried out to study the sensors, the related ASICs, and the radiation hardness. Laboratory and test beam results will be presented.
CMS is building a High Granularity sampling Calorimeter (HGCAL), which will replace the existing endcap calorimeters (electromagnetic and hadronic) as part of the CMS Phase-II upgrade to prepare for the High-Luminosity phase of the LHC (HL-LHC), due to start around 2027. The HGCAL comprises two compartments: the CE-E and CE-H, for measurements of electromagnetic and hadronic showers respectively. The CE-E uses lead, copper and copper-tungsten as absorbers, with silicon sensors as active elements. The CE-H uses stainless steel as absorber and a mixture of silicon and scintillator as active elements, with silicon in the high-radiation regions and scintillator in the lower-radiation regions. We present results of 2018 CERN beam tests of a 28-layer CE-E, including energy and position resolution, as well as first beam tests at DESY of scintillator tileboards equipped with irradiated SiPMs.
The Compact Linear Collider (CLIC) has been proposed as the next energy-frontier infrastructure at CERN, allowing the study of $e^{+}e^{-}$ collisions at three centre-of-mass energy stages: 380 GeV, 1.5 TeV and 3 TeV. The main goal of its high-energy stages is to search for new physics beyond the Standard Model (SM). The Inert Doublet Model (IDM) is one of the simplest SM extensions and introduces four new scalar particles: $H^{\pm}$, $A$ and $H$; the lightest, $H$, is stable and hence a natural dark matter (DM) candidate. A set of benchmark points is considered which are consistent with current theoretical and experimental constraints and promise detectable signals at future colliders.
Prospects of observing pair-production of the IDM scalars at CLIC were previously studied for signatures with two leptons in the final state. In the current study, discovery reach for the IDM charged scalar pair-production is considered for the semi-leptonic final state at the two high-energy CLIC stages. Full simulation analysis, based on the new CLIC detector model, is presented for five selected IDM scenarios. Results are then extended to the larger set of benchmarks using DELPHES fast simulation framework. The CLIC detector model for DELPHES has been modified to take pile-up contribution from the beam-induced $\gamma \gamma$ interactions into account, which is crucial for the presented analysis. Results of the study indicate that heavy, charged IDM scalars can be discovered at CLIC for most of the proposed benchmark scenarios, with very high statistical significance.
The direct pair-production of the tau-lepton superpartner, the stau, is one of the most interesting channels in which to search for SUSY. First of all, the stau is with high probability the lightest of the scalar leptons. Secondly, the signature of stau pair-production signal events is one of the most difficult ones, yielding the 'worst', and thus most global, scenario for the searches. The current model-independent stau limits come from analyses performed at LEP, but they suffer from the low energy of this facility. Limits at the LHC extend to higher masses but are only valid under strong assumptions. The ILC, a future electron-positron collider with energy up to 500 GeV and upgrade capability, is a promising machine for SUSY searches. The capability of the ILC to determine exclusion/discovery limits for the stau in a model-independent way is shown in this contribution, together with an overview of the current state of the art. Results of the latest studies of stau pair-production at the ILC are presented, showing the improvements with respect to previous results.
We introduce here a new method to measure the Higgs decay branching ratios at future e⁺e⁻ Higgs factories, based on directly counting events in categories.
Given the clean environment at a lepton collider, we build an event sample highly enriched in Higgs bosons and essentially unbiased for any decay mode.
The sample can be partitioned into categories using event properties linked to the expected Higgs decay modes.
The counts per category are used to fit the Higgs branching ratios in a model independent way.
The result of the fit is directly the set of branching ratios, independent from any measurement of a Higgs production mode.
Special care is given to an appropriate treatment of the statistical uncertainties.
In this contribution, the current status of our implementation of this analysis within the ILD concept detector is presented.
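The core idea, category counts that are linear in the branching ratios, can be illustrated with a deliberately small toy: two decay modes, two categories, and a hypothetical efficiency matrix (the real analysis uses many categories and a full statistical treatment):

```python
# Toy: expected counts per category c are N_c = N_H * sum_d eff[c][d] * BR[d].
# With as many categories as modes, the BRs follow by inverting the system.
N_H = 100000.0                 # assumed number of Higgs bosons in the sample
eff = [[0.60, 0.10],           # category 0 response to modes (e.g. bb, WW)
       [0.05, 0.50]]           # category 1 response (hypothetical values)
br_true = [0.58, 0.22]         # input branching ratios for the toy

counts = [N_H * sum(eff[c][d] * br_true[d] for d in range(2)) for c in range(2)]

# Solve the 2x2 linear system  eff * (N_H * BR) = counts  by Cramer's rule.
det = eff[0][0] * eff[1][1] - eff[0][1] * eff[1][0]
br_fit = [
    (counts[0] * eff[1][1] - counts[1] * eff[0][1]) / det / N_H,
    (eff[0][0] * counts[1] - eff[1][0] * counts[0]) / det / N_H,
]
print([round(b, 3) for b in br_fit])
```

In practice there are more categories than modes, so the inversion becomes a (least-squares or likelihood) fit, and the statistical treatment of the counts is where the care described above comes in.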
One of the important goals at future $e^+e^-$ colliders is to measure
the top-quark mass and width in a scan of the pair-production threshold.
However, the shape of the pair-production cross section at the
threshold also depends on other model parameters, such as the top Yukawa
coupling, and the measurement is subject to many systematic uncertainties.
Presented in this work is the most general approach to the top-quark mass
determination from the threshold scan at CLIC, with all relevant model
parameters and selected systematic uncertainties included in the fit procedure.
Expected constraints from other measurements are also taken into account.
The top-quark mass can be extracted with a precision of the order of 30 to
40 MeV, including the considered systematic uncertainties, already for 100
fb$^{-1}$ of data collected at the threshold. Additional improvement
is possible if the running scenario is optimised. With an
optimisation procedure based on a genetic algorithm, the statistical
uncertainty of the mass measurement can be reduced by about
25%. The influence of the beam energy profile on the optimisation procedure and
the expected statistical precision of the measurement is verified by comparing
results obtained assuming the luminosity spectra of CLIC, ILC and FCC-ee.
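The core of such a threshold fit is a template chi-squared comparison of the measured cross sections with theory curves for different mass hypotheses. The sketch below replaces the full QCD threshold shape with a simple logistic turn-on and fits only the mass; the scan points, uncertainties and parameter values are all illustrative:

```python
import numpy as np

def xsec_toy(e_cm, m_top, width=1.4):
    """Toy threshold shape (logistic turn-on around e_cm = 2*m_top),
    standing in for the full QCD threshold cross section; GeV units."""
    return 0.5 / (1.0 + np.exp(-(e_cm - 2.0 * m_top) / width))  # pb

energies = np.linspace(340.0, 350.0, 11)   # assumed scan points (GeV)
m_true = 171.5                             # assumed "true" top mass (GeV)
data = xsec_toy(energies, m_true)          # pseudo-data (no noise here)
sigma = np.full_like(data, 0.01)           # assumed per-point errors (pb)

# One-parameter chi^2 scan over the top-mass hypothesis.
masses = np.arange(171.0, 172.0, 0.001)
chi2 = [np.sum(((data - xsec_toy(energies, m)) / sigma) ** 2) for m in masses]
m_fit = masses[int(np.argmin(chi2))]
```

The actual CLIC study fits several parameters simultaneously (mass, width, Yukawa coupling, nuisance terms) and folds in the luminosity spectrum, but each of those extends this same chi-squared template structure.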
In supersymmetric extensions of the Standard Model, higgsino-like charginos and neutralinos are preferred by naturalness arguments to have masses of the order of the electroweak scale. Light higgsinos are also well motivated from a top-down perspective. Such light $\tilde{\chi}^{\pm}_{1}$, $\tilde{\chi}^{0}_{1}$ and $\tilde{\chi}^{0}_{2}$ states can be almost mass degenerate. In this talk the analysis of two benchmark points which exhibit mass differences of O(GeV) in the higgsino sector is presented. Due to this mass degeneracy it is very difficult to observe the decay of such higgsinos at hadron colliders. The ILC, being an $e^+e^-$ collider, has the prospect of providing a very clean physics environment to observe or exclude such scenarios. However, in addition to the desired $e^+e^- \rightarrow \tilde{\chi}^{+} \tilde{\chi}^{-}$ processes, parasitic collisions of real and virtual photons radiated off the $e^+e^-$ beams occur at rates depending on the centre-of-mass energy (250 GeV - 1 TeV) and other beam parameters. For instance, at a centre-of-mass energy of 500 GeV the expectation value is about 1.05 $\gamma \gamma$ events per bunch crossing. In the given higgsino scenarios, the visible decay products have low transverse momenta due to the small mass differences. This so-called $\gamma \gamma$ overlay has a topology very similar to the signal events, which makes its removal very challenging. The standard methods to remove the $\gamma \gamma$ background, e.g. the $k_t$ algorithm, remain inadequate. This talk presents a proposed solution, namely a newly developed track-grouping algorithm based on the concept of displaced signal and $\gamma \gamma \rightarrow$ low-$p_T$ hadron overlay vertices. By applying the track-grouping algorithm to separate $\gamma \gamma \rightarrow$ low-$p_T$ hadron tracks from the higgsino decay tracks, an analysis has been performed using the full detector simulation of the International Large Detector (ILD).
The results from the analysis and a comparison with the previous study, which was performed without the inclusion of $\gamma \gamma \rightarrow$ low-$p_T$ hadron events, are presented to understand the impact of the overlay on the higgsino analysis.
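The vertex-based grouping concept can be illustrated with a one-dimensional toy (this is not the actual ILD algorithm, and all numbers are invented): tracks from a common vertex share a longitudinal impact parameter z0, so clustering tracks in z0 separates overlay vertices from the signal vertex:

```python
import numpy as np

# Toy stand-in for the track-grouping idea: tracks from the same vertex
# share a longitudinal impact parameter z0, so grouping tracks by z0
# separates gamma-gamma overlay vertices from the signal vertex.
rng = np.random.default_rng(7)
z_signal, z_overlay = 0.8, -2.5            # assumed vertex z positions (mm)
tracks = np.concatenate([
    rng.normal(z_signal, 0.05, 4),         # higgsino-decay tracks
    rng.normal(z_overlay, 0.05, 6),        # gamma-gamma overlay tracks
])
labels = np.array([0] * 4 + [1] * 6)       # truth labels, for checking only

# One-dimensional grouping: sort by z0 and split where the gap exceeds 0.5 mm.
order = np.argsort(tracks)
gaps = np.diff(tracks[order])
split = np.where(gaps > 0.5)[0]
groups = np.split(order, split + 1)
```

The real algorithm works in three dimensions with full track parameters and exploits the displacement of the signal vertex, but the separation principle is the same gap-based grouping.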
Future $e^+e^-$ colliders are excellent tools to probe fundamental physics beyond the Standard Model via Higgs and electroweak precision measurements.
Modern silicon detectors are able to measure time-of-arrival with high precision of O(10 ps). This can be used to measure the time-of-flight (TOF) of the particles and improve their identification.
We develop reconstruction and calibration algorithms based on TOF information to separate $\pi^{\pm}$, $K^{\pm}$, $p$, $\bar{p}$ particles at future Higgs factory detectors. Furthermore, we study how to implement fast timing silicon layers in the tracking and/or calorimeter systems, in order to derive requirements on the time resolution. As an example case, the ILD detector concept is studied.
The $K^{\pm}$ mass measurement is a simple benchmark to test the performance of the TOF algorithm. A precision at the level of 10 keV can be expected, which would significantly improve the knowledge of the $K^{\pm}$ mass.
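The mass determination behind this benchmark follows from the standard TOF relation m = p·sqrt(1/β² − 1), with β = L/(c·t) from the measured track length and time of flight. A minimal sketch, with illustrative (non-ILD) numbers for a kaon-like particle:

```python
import numpy as np

C = 299.792458  # speed of light in mm/ns

def tof_mass(p_gev, track_len_mm, tof_ns):
    """Particle mass (GeV) from momentum, track length and time of flight:
    m = p * sqrt(1/beta^2 - 1), with beta = L / (c * t)."""
    beta = track_len_mm / (C * tof_ns)
    return p_gev * np.sqrt(1.0 / beta**2 - 1.0)

# Illustrative numbers: 2 GeV kaon over a 2 m flight path.
p, L, m_true = 2.0, 2000.0, 0.493677       # GeV, mm, GeV (PDG kaon mass)
e = np.hypot(p, m_true)                     # energy from p and m
t = L / (C * (p / e))                       # exact flight time (beta = p/E)
m_exact = tof_mass(p, L, t)

# A 30 ps mis-measurement of the arrival time shifts the reconstructed mass:
m_smeared = tof_mass(p, L, t + 0.030)
```

Propagating the assumed timing resolution through this relation is what sets the momentum range over which pion/kaon/proton separation remains possible.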
The last decades have seen the development of calorimeters with pixels smaller than 1 cm², or even 1 cm³ when the extent in depth is considered. Today it looks possible to measure the time of the pixel energy deposits with a resolution matching their size (light travels 1 cm in about 30 ps), even though technology-related limitations will come into play. What can such a performance bring to the calorimeter itself, or to the detector globally? In this paper a description of the different contributions is offered. This includes time-of-flight applications as well as aiding shower pattern recognition by imposing a proper temporal succession of the shower hits, i.e. causality.
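The causality requirement mentioned above can be stated very simply: a genuine shower hit at distance r from the interaction point cannot arrive earlier than r/c after the collision. A minimal sketch of such a filter, with invented hit positions and times:

```python
import numpy as np

C = 299.792458  # speed of light in mm/ns

# Toy causality filter: a genuine shower hit at radius r cannot arrive
# earlier than r / c after the collision time.
hits_r = np.array([1500.0, 1520.0, 1540.0, 1500.0])   # hit radii (mm)
hits_t = np.array([5.1, 5.2, 5.3, 2.0])               # hit times (ns); last is noise
physical = hits_t >= hits_r / C                       # causality mask
```

A real implementation would allow for the timing resolution and the finite shower development time, but this inequality is the starting point of any causality-based pattern cleaning.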
To face the increase of radiation levels and to maintain the high physics performance during the HL-LHC, an upgrade of the Compact Muon Solenoid (CMS) experiment will replace the existing forward calorimeters with a new high-granularity sampling calorimeter (HGCAL). The current design of HGCAL uses silicon sensors as active material in the highest-radiation regions and plastic scintillator tiles in the regions of lower dose/fluence. Particle shower reconstruction in high-granularity calorimeters in high-density environments, such as the HL-LHC, is a very interesting challenge. A typical event at the HL-LHC consists of about 140-200 superimposed collisions where many showers tend to overlap. Due to the extreme combinatorial complexity, conventional algorithms are expected to fail the requirements on memory and CPU time consumption. A novel reconstruction approach is being developed to fully exploit the granularity and other significant features of the detector, such as precision timing. The main purpose of the new iterative reconstruction framework called TICL (The Iterative CLustering) is to process hits that are built in the detector and return particle properties and probabilities. The framework is fully modular, allowing the user to test and validate different components, and it is designed such that new algorithms or techniques (e.g. machine learning) can be plugged on top easily. In view of the expected pressure on the computing capacity in the HL-LHC era, the algorithms and their data structures are also being designed with heterogeneous computing and parallelisation in mind. Preliminary results show that significant speed-up can be obtained running the clustering algorithm on GPUs. This talk will describe the approaches being considered and show the latest developments and results.
The International Large Detector (ILD) is a detector concept for the ILC. ILD is required to measure the various kinds of final-state particles very precisely, and the jet energy scale (JES) measurement is one of the important ingredients. In order to reduce the systematic error of the JES measurement, we calibrate the jet energy against a jet energy reconstructed from other measured variables. We reconstruct the jet energy using the measured jet mass, jet angles and photon angle in the $e^+e^- \to \gamma Z$ process. We performed a full simulation and evaluated how accurately the JES can be calibrated. We will discuss the JES reconstruction results and report on the prospects for the calibration.
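For a three-body final state such as a photon plus two jets with negligible masses, energy-momentum conservation fixes the energies from the measured directions alone: each energy is proportional to the sine of the angle between the other two objects. A sketch of this angular reconstruction (massless approximation; the kinematics below are invented for illustration):

```python
import numpy as np

def energies_from_angles(n1, n2, n3, roots):
    """Energies of three (massless) final-state objects from their directions,
    using momentum conservation: for p1 + p2 + p3 = 0 in the CM frame,
    |p_i| is proportional to the sine of the angle between the other two."""
    def ang(a, b):
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))
    s23, s13, s12 = np.sin(ang(n2, n3)), np.sin(ang(n1, n3)), np.sin(ang(n1, n2))
    return roots * np.array([s23, s13, s12]) / (s12 + s13 + s23)

# Illustrative planar event: three momenta summing to zero.
p1 = np.array([100.0, 0.0, 0.0])
p2 = np.array([-60.0, 30.0, 0.0])
p3 = -(p1 + p2)
roots = sum(np.linalg.norm(p) for p in (p1, p2, p3))
e_rec = energies_from_angles(p1, p2, p3, roots)   # recovers |p1|, |p2|, |p3|
```

In the actual calibration, the measured jet mass corrects for the massless approximation, and comparing these angle-based energies to the directly measured ones constrains the JES.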
A Monolithic CMOS Pixel Sensor (CPS), named MIMOSIS, is currently being developed by IPHC/IKF/GSI in the TJ 180 nm technology to equip the Micro-Vertex Detector (MVD) of the CBM heavy-ion experiment at FAIR/GSI. It features about 500 000 pixels with in-pixel discrimination and data-driven read-out. The first full-size prototype (MIMOSIS-1) was fabricated in 2020 in different epitaxial variants. Functional tests are ongoing and a status of the first characterisations will be provided. Sensors adapted to the ILC requirements are expected to be directly derivable from this chip, with a spatial resolution of about 4 µm, a time resolution of about 1-2 µs and an instantaneous data flow of a few GB/s.
The second part of the talk will address the newly available 65 nm process. This technology is expected to offer new perspectives and improvements in terms of granularity, time resolution, power consumption and possibly stitching to cover large-area detectors. Several laboratories coordinated by CERN (ALICE ITS3 WP2 and CERN EP WP 1.2) realised a first joint submission in 2020. IPHC (supported by the CREMLIN+ WP7 programme) has contributed to this effort, concentrating on different test structures and several fully functional prototypes (CE_65) with analog output, offering the possibility of beam tests to explore the charge collection and the VFE performance of the technology for charged-particle detection.
Finally, general perspectives on the way to achieve the ILC vertex detector requirements will be provided.
The Silicon Pixel Tracker (SPT), a 30 Gpixel detector, was first proposed at LCWS2008 as an improvement to the baseline ILC tracking systems. Since then there has been huge progress in the field, with developments such as the 12.5 Gpixel ITS2 for ALICE. We report on how this and other progress has enabled an even better performance specification than in 2008, using state-of-the-art Monolithic Active Pixel Sensor (MAPS) devices. Finally, there is scope for further developments, taking advantage of stacked devices using wafer-to-wafer bonding. This technology has reached an astonishing degree of perfection in commercial imaging systems, and is now becoming available to the scientific community.
The CLIC Tracker Detector (CLICTD) is a monolithic pixel sensor featuring pixels of 30 microns x 37.5 microns and a small collection diode. The sensor is fabricated in a 180 nm CMOS imaging process, using two different pixel flavours: the first with a continuous n-type implant for full lateral depletion, and the second with a segmentation in the n-type implant for accelerated charge collection. Moreover, it features an innovative sub-pixel segmentation scheme that allows the digital footprint to be reduced while maintaining a small sub-pixel pitch. CLICTD was developed to target the requirements for the tracking detector of the proposed future Compact Linear Collider CLIC. Most notably, a temporal resolution of a few nanoseconds and a spatial resolution below 7 microns are demanded. In this contribution, test-beam measurements of CLICTD are presented and the performance of the sensor is evaluated with regard to the CLIC requirements.
CMOS sensors (MIMOSA-like) were successfully implemented in the STAR tracker. LHC experiments have shown that efficient b-tagging, reconstruction of displaced vertices and identification of disappearing tracks are necessary (1-2). An improved vertex detector is justified for the ILC. To achieve a point-to-point resolution below the one-µm range while improving other characteristics (radiation hardness and eventually time resolution), we will need pixels with a 1-micron pitch. We therefore propose a single MOS transistor that acts as both an amplifying device and a detector, with a buried charge-collecting gate. Device simulations, both classical and quantum, have led to the proposed DotPiX structure. With the evolution of silicon processes to line features well below 100 nm, this pixel should be feasible. We will present this pixel detector and the present status of its development both in our institution (IRFU) and in collaborating labs (CNRS/C2N).
References:
1. M. Verducci (INFN and University of Roma Tre), "Low Mass Dark Sector Searches at ATLAS and at CMS", Light Dark Matter 2017, 24-28 May 2017, La Biodola, Isola d'Elba
2. ATLAS Collaboration, "Performance of tracking and vertexing techniques for a disappearing track plus soft track signature with the ATLAS detector", ATL-PHYS-PUB-2019-011, 29 March 2019
Authors: Nicolas Fourches (IRFU); co-authors: Charles Renard (C2N), Geraldine Hallais (C2N)
The vast majority of foreseen upgrades to existing particle physics detectors, as well as future Linear Collider experiments, will continue to be based on silicon sensors as their main tracking devices. This means sensors will become even more of a cost driver than they already are today. In addition, sensors in the float-zone technology currently used in the LHC experiments are available in the required large quantities from only a very small number of manufacturers. Therefore, alternative detector technologies and designs that are cost-effective and can be realised through widely established commercial industrial production processes are becoming more and more relevant.
One important group of candidates are sensors realised in CMOS technology. Industrial CMOS foundries are typically equipped for high-volume production, but fabricate chips that are much smaller in area than, in particular, the full-size strip sensors in production today for e.g. the LHC Phase-II experiment upgrades.
In order to obtain sensors in the large dimensions required, several neighbouring reticles have to be connected in a process known as stitching.
The passive strip sensors presented in this contribution were designed and developed in a p-CMOS technology and produced by a European manufacturer. Stitching of up to five different reticles was used on the strip sensors to obtain detectors with strip lengths of up to 4 cm. Sensors in our study comprise three different flavours of strip sensors fabricated on a 150 𝜇m thick wafer made with the passive p-CMOS 150 nm process.
Following initial electrical characterisation on a probe station, the sensors were tested in the laboratory with Sr-90 sources and IR lasers. Results from two batches of sensors are presented in this study, with improved backside processing on the second batch to enhance the HV performance relative to the initial batch. Our results include position-resolved signal and signal-to-noise measurements to understand the behaviour of the sensors. In this context, we also evaluate the impact of stitching on the sensor functionality. Based on our results, we demonstrate that stitching has no negative effect on the sensor performance; hence, the stitching of CMOS strip sensors can be considered successful.
Despite the discovery of the Higgs boson with a mass of 125 GeV, the structure of the Higgs sector remains unknown. Given that a second Higgs boson has not yet been discovered, indirect searches for such a new particle through Higgs-boson observables are more and more important. This requires accurate theoretical predictions for such observables in order to compare them with the precision measurements in experiments. In this study, we calculated the full one-loop corrections to the decay widths for various charged Higgs boson decays in the framework of the Next-to-Minimal Supersymmetric Standard Model (NMSSM) with CP violation. In this talk, we discuss the impact of the NLO corrections on the branching ratio of each decay mode in a wide range of parameter space compatible with the experimental constraints.
In the framework of the CP conserving Two Higgs Doublet Model (2HDM), type I and II, we analyze the sensitivity to triple Higgs couplings at future high(er) energy electron-positron colliders, such as ILC and CLIC. We study the production cross section of two neutral Higgs bosons in two channels: $e^+e^-\to h_i h_j Z$ and $e^+ e^- \to h_i h_j \nu \bar{\nu}$ within several benchmark planes that exhibit large values of triple Higgs couplings while being in agreement with all existing theoretical and experimental constraints. We analyze the sensitivity to the triple Higgs couplings of those processes and how they can change with the energy, in particular at the energy stages and luminosities projected for the future linear colliders. We finally present some individual points to illustrate in more detail the effects of the triple Higgs couplings on the di-Higgs production cross sections and we discuss possible strategies to reach sensitivity to the triple Higgs couplings.
The 2HDMS is based on the CP-conserving 2HDM extended by a complex singlet
field. We impose an additional Z3 symmetry on the potential. This leads to a
Higgs-sector similar to the Next-to-Minimal Supersymmetric SM (NMSSM),
while having fewer symmetry conditions compared to supersymmetric models. We
introduce the theoretical background of this model and set it up for
phenomenological studies. For this we study theoretical constraints including
tree-level perturbative unitarity,
boundedness from below conditions and vacuum stability constraints.
Furthermore we look at experimental constraints from direct searches for
BSM Higgs bosons at colliders. The implications for the phenomenology at a future linear collider will also be discussed.
CMS reported a ∼3$\sigma$ excess at ∼96 GeV in the $pp\rightarrow H\rightarrow\gamma\gamma$ channel. In the same mass range, a ∼2$\sigma$ excess in the $e^+ e^-\rightarrow Z H, H\rightarrow b\bar{b}$ channel was reported at LEP. We interpret these experimental excesses as the lightest Higgs boson of the Two-Higgs-Doublet Model with a complex singlet (2HDMS) with type II Yukawa structure. We demonstrate that the model can fit both excesses simultaneously while being in agreement with all other existing theoretical and experimental constraints. In this talk, we will present the scan of the 2HDMS parameter space and discuss the best-fit points from the scan. Furthermore, we will also study the experimental uncertainties on specific Higgs couplings that can be obtained at the future International Linear Collider (ILC) with 250 GeV centre-of-mass energy.
We present the current status of the assessment of the theoretical issues involved in reaching the targeted 0.01% precision for the FCC-ee/LC luminosity prediction. We also discuss its synergies with other precision theory requirements and efforts for the FCC-ee/LC physics programs.
While the Standard Model (SM) predicts a branching ratio of the Higgs boson decaying to invisible particles of O(0.001), the current measurements of the Higgs boson couplings to other SM particles allow for up to 20% of the Higgs boson width to originate from decays beyond the SM (BSM). The small SM-allowed rate of Higgs boson decays to invisible particles can be enhanced if the Higgs boson decays into new particles such as dark matter. Upper limits have been placed on BR(H→invisible) by ATLAS and CMS at O(0.1), but the hadron-collider environment limits the precision. The ILC `Higgs factory' will provide unprecedented precision for this electroweak measurement. Studies of the search for Higgs-to-invisible processes in simulation are presented with SiD, a detector concept designed for the ILC. Preliminary results for the expected sensitivity are provided, as well as studies considering potential systematic limitations.
The matter-antimatter asymmetry of the universe may result at least partially from CP violation. CP violation in mesons and neutrinos is too small to account for the matter-antimatter asymmetry, motivating a search for CP violation in the Higgs sector. We present a study of the potential measurement of the CP symmetry of the Higgs boson at the International Linear Collider (ILC) by the SiD experiment. We study the H → τ⁺τ⁻ channel, which is particularly useful for CP analysis of leptonic Higgs decays because of its high branching ratio and the ease of extracting CP-sensitive statistics from the tau decay products. Our method uses a double-neural-network system which takes energy and multiplicity statistics as inputs to tag tau events and their decay paths. We use CP-sensitivity-based event weighting to avoid strict cuts and make use of the τ± → π±, π±π⁰, ℓ±, π±2π⁰ and π±π±π∓ decay modes. We focus on ZH, Z → e⁺e⁻, μ⁺μ⁻ events for simplicity. Our workflow performs very well against the dominant four-fermion background and yields strong preliminary mixing-angle precision estimates. These results could help improve the precision of Higgs CP-violation measurements at the ILC.
We examine the region of the parameter space of the Next-to-Minimal
Supersymmetric Standard Model (NMSSM) and the Minimal Supersymmetric Standard Model~(MSSM) with a light neutralino~($M_{\tilde{\chi}_1^0} \leq$~62.5~GeV) where the SM-like Higgs boson can decay invisibly, the thermal neutralino relic density is smaller than the measured cold dark
matter~(DM) relic density, and where experimental constraints from LHC searches, flavour physics and DM direct detection are satisfied. We observe allowed regions of parameter space in the NMSSM and the MSSM where the lightest neutralino could have a mass as small as $\sim 1~{\rm GeV}$ and $\sim 35~{\rm GeV}$, respectively, while still providing a significant component of relic dark matter. We then examine the prospects of probing the NMSSM and the MSSM with a light neutralino via invisible Higgs boson width measurements at the ILC. We also explore the complementarity between future direct detection experiments and Higgs boson invisible width measurement at the ILC. In the NMSSM with light neutralino, we find that the ILC will be able to probe parameter space points in the $M_{\tilde{\chi}_1^0} < 10~{\rm GeV}$ region which may be forever outside the reach of DM detectors. We also find that the ILC will be able to probe a considerable fraction of parameter space points which fall outside the projected reach of future DM detectors in the $10~{\rm GeV} < M_{\tilde{\chi}_1^0} < 62.5~{\rm GeV}$ region.
We investigate a scenario inspired by natural supersymmetry, where neutrino data is explained within a low-scale seesaw scenario. For this, the Minimal Supersymmetric Standard Model is extended by adding light right-handed neutrinos and their superpartners, the R-sneutrinos. Moreover, we consider the lightest neutralinos to be higgsino-like. We first update a previous analysis and assess to what extent existing LHC data constrains the allowed slepton masses. Here we find scenarios where sleptons with masses as low as 175 GeV are consistent with existing data. However, we also show that the upcoming run will either discover or rule out sleptons with masses of 300 GeV, even in these challenging scenarios.
We then take a scenario which is on the borderline of observability at the upcoming
LHC run, assuming a luminosity of 300 fb$^{-1}$. We demonstrate that a prospective
international $e^+e^-$ linear collider with a center-of-mass energy of 1 TeV will
be able to discover sleptons in scenarios which are difficult for the LHC. Moreover,
we also show that a measurement of the spectrum will be possible with 1-3 per cent
accuracy.
Right-handed neutrinos (RHNs) are proposed as an extension of the SM. We consider the possibility of exploring RHNs at the 500 GeV ILC and study RHN pair production using Delphes mini-DSTs.
The Electromagnetic Calorimeter (ECAL) of the CMS detector has played an important role in the physics program of the experiment, delivering outstanding performance throughout data taking. The High-Luminosity LHC will pose new challenges. The four to five-fold increase of the number of interactions per bunch crossing will require superior time resolution and noise rejection capabilities. For these reasons the electronics readout has been completely redesigned. A dual gain trans-impedance amplifier and an ASIC providing two 160 MHz ADC channels, gain selection, and data compression will be used in the new readout electronics. The trigger decision will be moved off-detector and will be performed by powerful and flexible FPGA processors, allowing for more sophisticated trigger algorithms to be applied. The upgraded ECAL will be capable of high-precision energy measurements throughout HL-LHC and will greatly improve the time resolution for photons and electrons above 10 GeV.
The Silicon-Tungsten ECAL (SiW-ECAL) of ILD will require about 10,000 detector slabs of 1.4 to 1.8 m in length. For ease of building and testing, the slabs are made of stitched detector elements of 18×18 cm², composed of a Front-End Board (FEB), hosting the readout ASICs for 1024 channels, onto which the silicon sensors are glued.
Various types of detector elements have been successfully tested individually; the first attempt to chain them into a long slab in 2018, while globally positive, hinted at possible improvements.
Like its predecessor, the new FEB will host 16 SKIROC 2A chips, amplifying, shaping, pipelining and digitizing the data generated by collisions at the International Linear Collider (ILC), taking advantage of its pulsed operation to reduce the power dissipation.
This presentation describes the FEB design, adapted for slabs composed of up to 10 FEBs, which performs power-supply distribution (now locally pulsed), clock distribution and the readout chain through optimisation of the board stack-up and signal routing.
Besides careful handling of all the signals required by the SKIROC 2A chips, the new design also implements a local high-voltage distribution, used to bias the sensors, in order to reduce intervention and handling.
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter covering the central region of the ATLAS experiment. TileCal uses steel as absorber and plastic scintillators as active medium. The scintillators are read out by wavelength-shifting fibres coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped, digitized by sampling the signal every 25 ns and stored on the detector until a trigger decision is received. The TileCal front-end electronics reads out the signals produced by about 10000 channels measuring energies ranging from about 30 MeV to about 2 TeV. Each stage of the signal production, from scintillation light to signal reconstruction, is monitored and calibrated to better than 1% using radioactive-source, laser and charge-injection systems. The performance of the calorimeter has been measured and monitored using calibration data, cosmic-ray muons and the large sample of proton-proton collisions acquired in 2009-2018 during LHC Run-1 and Run-2.
The High-Luminosity phase of LHC, delivering five times the LHC nominal instantaneous luminosity, is expected to begin in 2028. TileCal will require new electronics to meet the requirements of a 1 MHz trigger, higher ambient radiation, and to ensure better performance under high pile-up conditions. Both the on- and off-detector TileCal electronics will be replaced during the shutdown of 2025-2027. PMT signals from every TileCal cell will be digitized and sent directly to the back-end electronics, where the signals are reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. This will provide better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Changes to the electronics will also contribute to the data integrity and reliability of the system. New electronics prototypes were tested in laboratories as well as in beam tests.
Results of the calorimeter calibration and performance during LHC Run-2 are summarized; the main features and beam-test results obtained with the new front-end electronics are also presented.
The Analog Hadron Calorimeter (AHCAL) concept developed by the CALICE collaboration is a highly granular sampling calorimeter with 3×3 cm² plastic scintillator tiles, individually read out by silicon photomultipliers (SiPMs), as active material.
After building a large technological prototype and testing it in particle beams at DESY and CERN in 2018, the hardware developments and tests are now focused on two areas:
- an alternative readout ASIC which supports operation in power-pulsing mode as well as continuous readout,
- an alternative scintillator geometry (Megatiles) where the segmentation of larger scintillator plates into small tiles is achieved by grooves filled with reflective material.
The talk will present the current status of these developments.
The Semi-Digital Hadronic CALorimeter (SDHCAL), developed within the CALICE collaboration, is proposed to equip the future ILD detector of the ILC.
A technological prototype has provided excellent results in terms of energy linearity and resolution, but also tracking and PID capabilities.
To fully validate the SDHCAL option for ILD, new R&D activities have started. Their aim is to demonstrate the ability to build large (>2 m²) GRPC detectors with a new version of the readout electronics and a new detector interface board capable of addressing up to 432 ASICs of 64 channels each.
In addition, a new mechanical structure assembled by electron-beam welding is used to build the frame that will host the active layers made of GRPCs and their embedded electronics.
Sophisticated machine-learning techniques have promising potential in searches for physics beyond the Standard Model (BSM) at the Large Hadron Collider (LHC). Convolutional neural networks (CNNs) can provide powerful tools for differentiating between the calorimeter energy-deposit patterns of prompt Standard Model particles and of long-lived particles predicted in various BSM models. We demonstrate the usefulness of CNNs using a couple of physics examples from well-motivated BSM scenarios predicting long-lived particles that give rise to displaced jets. Our work suggests that modern machine-learning techniques have the potential to discriminate between the energy-deposition patterns of prompt and long-lived particles, and can thus be useful tools in such searches.
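The pattern such a CNN exploits can be illustrated with a much simpler toy: displaced-jet showers start deep in the calorimeter, so even a single engineered feature, the energy-weighted mean depth, already separates the two classes. This sketch is only a stand-in for the CNN, with an invented longitudinal segmentation and exponential shower profile:

```python
import numpy as np

n_layers = 20  # assumed longitudinal calorimeter segmentation (toy value)

def toy_shower(start_layer):
    """Toy longitudinal energy profile: exponential fall-off from the
    layer where the shower starts."""
    img = np.zeros(n_layers)
    depth = np.arange(n_layers - start_layer)
    img[start_layer:] = np.exp(-depth / 4.0)
    return img

prompt = toy_shower(1)       # prompt particle: shower starts early
displaced = toy_shower(12)   # long-lived decay: shower starts deep

def mean_depth(img):
    """Energy-weighted mean layer index of a shower profile."""
    return np.sum(np.arange(n_layers) * img) / np.sum(img)

# This one feature already separates the two patterns a CNN would learn
# from the full two-dimensional images.
is_displaced = mean_depth(displaced) > mean_depth(prompt) + 5.0
```

A CNN generalises this by learning many such spatial features directly from the calorimeter images, including transverse shape and fine-grained correlations that a hand-built discriminant misses.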
Extended Higgs models with CP violation offer a possible explanation of the baryon number asymmetry of the universe through electroweak baryogenesis. However, the electric dipole moment (EDM), which is highly sensitive to new CP-violating effects, has not been observed so far. In this talk, we consider the testability of CP violation in a scenario where the EDM is suppressed by cancellation among extra CP-violating phases. We discuss CP-violating effects appearing in the angular distribution of particles produced in the decay of extra Higgs bosons, using simulation results assuming future experiments at the International Linear Collider. This talk is based on arXiv:2101.03702 [hep-ph] and arXiv:2004.03943 [hep-ph].
We study the search for an extra scalar S boson produced in association with the Z boson at the
International Linear Collider (ILC). The study is performed at center-of-mass energies of 250
GeV and 500 GeV based on the full simulation of the International Large Detector (ILD). In
order to be as model-independent as possible, the analysis uses the recoil technique, in particular
with the Z boson decaying into a pair of muons. As a result, exclusion cross-section limits are
given in terms of a scale factor k with respect to the Standard Model Higgs-strahlung process
cross section. These predicted results, covering all possible searching regions of the extra scalars
at the 250 GeV ILC and the 500 GeV ILC, can be interpreted independently of the decay modes
of the S boson.
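The recoil technique used here relies only on the measured dimuon four-momentum and the known centre-of-mass energy: m_rec² = (√s − E_μμ)² − |p_μμ|², independent of how the S boson decays. A minimal sketch with illustrative ZH-like two-body kinematics:

```python
import numpy as np

def recoil_mass(roots, p_mumu):
    """Recoil mass against a reconstructed Z -> mu+ mu- system.
    p_mumu = (E, px, py, pz) of the dimuon;
    m_rec^2 = (roots - E)^2 - (px^2 + py^2 + pz^2)."""
    e, px, py, pz = p_mumu
    m2 = (roots - e) ** 2 - (px**2 + py**2 + pz**2)
    return np.sqrt(max(m2, 0.0))

# Illustrative values: e+e- -> Z S at sqrt(s) = 250 GeV with m_S = 125 GeV.
roots, m_z, m_s = 250.0, 91.19, 125.0
e_z = (roots**2 + m_z**2 - m_s**2) / (2.0 * roots)   # two-body Z energy
p_z = np.sqrt(e_z**2 - m_z**2)                       # two-body Z momentum
m_rec = recoil_mass(roots, (e_z, 0.0, 0.0, p_z))     # recovers m_S
```

In the real analysis this quantity is histogrammed for all events with a well-reconstructed dimuon, and a peak at any mass would signal an extra scalar regardless of its decay mode.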
The Compact LInear Collider (CLIC) is a proposed TeV-scale high-luminosity electron-positron collider at CERN. CLIC will allow us to study the Higgs boson properties with very high precision. These measurements can also result in a direct or indirect discovery of "new physics", Beyond the Standard Model (BSM) phenomena, which could help us to understand the nature of dark matter (DM). Decays of the SM-like Higgs boson or of new heavy scalars with the emission of invisible DM particles may be the only way to observe "new physics" effects at achievable energy scales and establish the connection between the Standard Model (SM) and BSM sectors.
We studied the possibility of measuring invisible decays of the Higgs boson and of additional heavy scalars at CLIC running at 380 GeV and 1.5 TeV. The analysis is based on WHIZARD event generation and fast simulation of the CLIC detector response with DELPHES. We estimated the expected limits on invisible decays of the 125 GeV Higgs boson, as well as the cross-section limits for the production of an additional neutral scalar, assuming invisible decays, as a function of its mass. The extracted model-independent branching-ratio and cross-section limits were then interpreted in the framework of the vector-fermion dark matter model to set limits on the mixing angle between the SM-like Higgs boson and the new scalar of the "dark sector".
One of the primary goals of the proposed future collider experiments is to search for dark matter (DM) particles using different experimental approaches. High-energy $e^+e^-$ colliders offer a unique possibility for the most general search, based on the mono-photon signature. As any $e^+e^-$ scattering process can be accompanied by hard-photon emission from initial-state radiation, an analysis of the energy spectrum and angular distributions of those photons can be used to search for hard processes with invisible final states and to test the nature and interactions of DM particles. A dedicated procedure for merging the matrix-element calculations with the lepton ISR structure function was developed to model, with WHIZARD, the Standard Model background processes contributing to the mono-photon signature.
We consider production of DM particles at the International Linear Collider (ILC) and Compact Linear Collider (CLIC) experiments. Detector effects are taken into account within the DELPHES fast simulation framework. Limits on the light DM production in a generic model are set as a function of the mediator mass and width based on the expected two-dimensional distributions of the reconstructed mono-photon events. Limits on the mediator coupling to electrons are presented for a wide range of mediator masses and widths. For light mediators, for masses up to the centre-of-mass energy of the collider, results from the mono-photon analysis are more stringent than the limits expected from direct resonance search in SM decay channels.
It is commonly believed that Dark Matter (DM) should exist in the form of new, Beyond-the-Standard-Model stable particles.
Such particles, however, have not yet been detected, which means that interactions between DM and the SM must be very weak. Dark particles, even if they are already produced at existing colliders, evade detection due to the tiny signal-to-background ratio.
Future $e^+e^-$ colliders, providing large luminosity and collision energy as well as a very clean collision environment (meaning low background), can be especially useful in the search for dark particles.
In this talk, we will focus on a comparison between the expected signatures of dark particles of various spins produced in $e^+e^-$ collisions. The analysis will be based on simple, but QFT-consistent and fully renormalizable, models of one-component DM interacting with the SM through the Higgs portal. Due to their simplicity, the models can serve as a first approximation of more complicated theories, even those involving more than one dark component (given that one of the DM species dominates). Apart from estimating the chances of DM detection, we will try to determine under what circumstances the cases of different spins could be disentangled.
We are developing a kinematic fitter that can deal with arbitrary resolution functions. Kinematic fitting is a constrained optimization method which uses the distributions of the fit parameters and the kinematic relations among them. In order to treat non-Gaussian distributions, for example the b-jet energy distribution, our kinematic fitter is implemented with the log-likelihood method.
In this talk, we report a validation of our kinematic fitter using ZH events decaying into b-jets. The b-jet resolutions, which are evaluated as input to the fit, are also presented.
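A likelihood-based kinematic fit of this kind can be sketched in a few lines. The toy below (all parameter values and the resolution shape are assumptions for illustration, not those of the contribution) fits two b-jet energies under an energy-sum constraint, using a Gaussian core with a low-side exponential tail to mimic neutrino losses in semileptonic b decays:

```python
import numpy as np

# Toy b-jet resolution: Gaussian core plus a low-side exponential tail
# (semileptonic b decays lose energy to neutrinos). Values are illustrative.
def nll_jet(e_true, e_meas, sigma=5.0, tail=0.15, slope=0.1):
    r = e_meas - e_true
    core = np.exp(-0.5 * (r / sigma) ** 2)
    low_tail = tail * np.exp(slope * np.minimum(r, 0.0))  # active for r < 0
    return -np.log(core + low_tail + 1e-300)

E_SUM = 180.0                  # hypothetical energy-sum constraint (GeV)
e1_meas, e2_meas = 95.0, 70.0  # measured jet energies, biased low by the tail

# Impose the constraint by substitution (e2 = E_SUM - e1) and minimise the
# total negative log-likelihood on a fine grid.
e1_grid = np.linspace(10.0, 170.0, 160001)
total_nll = nll_jet(e1_grid, e1_meas) + nll_jet(E_SUM - e1_grid, e2_meas)
e1_fit = e1_grid[np.argmin(total_nll)]
print(f"fitted energies: {e1_fit:.1f} and {E_SUM - e1_fit:.1f} GeV")
```

Because the likelihood is evaluated pointwise, any measured resolution histogram can replace `nll_jet` without changing the fit machinery, which is the advantage over a Gaussian chi-square fit.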
In this talk, we show recent results of our R&D on machine-learning applications to collider experiments.
At RCNP, Osaka University, Japan, we have formed a group of about 20 researchers from both information science and collider physics (experiment and theory) to work on the R&D of machine-learning applications to collider experiments, as a research project at RCNP. The R&D covers data analysis (Belle, ILC), detector calibration (ILC ECAL energy calibration), accelerator control (KEK Linac), lattice QCD, and more. In this talk, we show recent activities on machine-learning applications based on low-level feature data (physics analysis and ECL calibration) and on reinforcement learning (accelerator control).
Jet clustering is one of the keys to obtaining better physics results, because reducing mis-clustering improves the mass resolution of resonances, especially in multi-jet events. Present jet clustering algorithms are far from ideal tools for reconstructing jets; we need to tackle this problem and explore the possibility of constructing better jet clustering algorithms.
Recently, deep learning has become well established in the data science field and has been applied to many different tasks. Assigning each particle to the correct jet is equivalent to painting tracks with a corresponding colour or, if the tracks are clustered, to segmenting the corresponding region in the r-phi plane. In the computer vision field, these tasks are called "semantic (instance) segmentation". It is worth applying these ideas to jet clustering algorithms.
We will report the current status of our jet clustering study using deep learning.
We developed a novel vertex-finding algorithm for future lepton colliders such as the International Linear Collider. We deploy two networks: a simple fully-connected network that looks for vertex seeds from track pairs, and a customized recurrent neural network with an attention mechanism and an encoder-decoder structure that associates tracks to the vertex seeds. The performance of the vertex finder is compared with LCFIPlus.
Tau lepton physics plays an important role in the research programme at future e+e- experiments. To fully exploit the physics potential of the machine and experiments, and for a cost-effective detector design, it is important to implement advanced machine-learning methods from the start of detector development. In this respect, we report here on an ongoing study of τ identification (leptonic and hadronic tau decays, and jets from QCD) in the IDEA dual-readout calorimeter concept, using modern machine-learning methods based on differentiable deep neural networks (convolutional NNs and graph NNs).
Accurate simulation of physical processes is crucial for the success of modern particle physics. However, simulating the development and interaction of particle showers in calorimeter detectors is a time-consuming process and drives the computing needs of large experiments at the LHC and future colliders. Recently, generative machine-learning models based on deep neural networks have shown promise in speeding up this task by several orders of magnitude. We investigate the use of a new architecture, the Bounded Information Bottleneck Autoencoder, for modelling electromagnetic showers in the central region of the silicon-tungsten calorimeter of the proposed International Large Detector. Combined with a novel second post-processing network, this approach achieves an accurate simulation of differential distributions, including for the first time the shape of the minimum-ionizing-particle peak, compared to a full GEANT4 simulation for a high-granularity calorimeter with 27k simulated channels.
The Circular Electron Positron Collider (CEPC) has been proposed as a Higgs factory for the next few decades. To achieve the required precision, it is critical to optimize the design of the machine-detector interface (MDI) as well as the interaction region (IR). In this work we present the latest design and study status of the CEPC MDI and IR, covering an overall introduction, mechanical design, thermal analysis, background and shielding studies, and other issues. Based on the parameters presented in the CEPC Conceptual Design Report (CDR), we introduce the optimized design (including the mechanical design) of the IR components, especially the central beam pipe and the superconducting magnets. We have also updated the thermal analysis relating to HOM and SR, and the detailed background simulation comprising event generation, tracking, and detector-impact evaluation. We also introduce several mitigation measures and design optimizations to improve the performance and stability of the CEPC MDI and IR. In addition, we discuss the lessons we have learned and possible improvements for future studies.
The uncertainty of the beam-energy measurement at the Circular Electron Positron Collider is required to be less than $10 \mathrm{MeV}$ for accurate measurement of the Higgs/W/Z boson masses. A new scheme of microwave-beam Compton backscattering is proposed to measure the beam energy by detecting the maximum energy of the scattered photons. Choosing the ${TM_{010}}$ mode of a standing-wave cavity, the Poynting vector points in the radial direction and the length of the cavity does not affect the resonant frequency. When the resonator cavity is placed vertically in the beam tube, the electron beam collides head-on with microwave photons as it passes through the cavity. After this process, the scattered photons exit through the vacuum tube of the synchrotron radiation. To minimize the background from synchrotron radiation, a combination of polyethylene and lead is used to shield the synchrotron-radiation photons. The CST (computer simulation technology) software is used to simulate the changes in frequency and field due to the holes for electron-beam penetration in the cavity. The measurement uncertainty of the beam energy can reach the order of $\mathrm{6 MeV}$.
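The scheme rests on the sharp Compton edge of the backscattered spectrum. A minimal numerical sketch, assuming the standard head-on formula $\omega_{\max} = 4\gamma^2\omega_0 / (1 + 4\gamma\omega_0/m_e)$ and illustrative beam and cavity parameters (not the parameters of this contribution):

```python
import math

ME = 0.511e6      # electron mass in eV
H_EV = 4.1357e-15 # Planck constant in eV*s

def max_scattered_energy(e_beam_ev, f_hz):
    """Compton edge for head-on scattering of a cavity photon off the beam."""
    gamma = e_beam_ev / ME
    w0 = H_EV * f_hz                 # microwave photon energy
    x = 4.0 * gamma * w0 / ME        # recoil parameter (tiny for microwaves)
    return 4.0 * gamma**2 * w0 / (1.0 + x)

# Assumed numbers for illustration: 120 GeV beam, 2.856 GHz cavity
e_max = max_scattered_energy(120e9, 2.856e9)
print(f"Compton edge: {e_max / 1e6:.2f} MeV")
```

With microwave photons the edge lands in the MeV range, so measuring its position with a gamma detector maps directly onto the beam energy via the $\gamma^2$ dependence.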
Future high-energy $e^{+}e^{-}$ colliders will provide some of the most precise tests of the Standard Model. Statistical uncertainties are expected to improve by orders of magnitude over current measurements.
This provides a new challenge in accurately assessing and minimizing systematic uncertainties. Beam polarisation may hold a unique potential to isolate and determine the size of systematic effects.
So far, studies have mainly focused on the statistical improvements from beam polarisation. This study aims to assess the impact on systematic uncertainties.
A combined fit of precision observables, such as cross-sections, asymmetries and anomalous gauge couplings, together with systematic effects is performed on 2-fermion and 4-fermion final-states. Different setups of available beam polarisations and luminosities are tested with and without systematic effects.
The dependence of the uncertainties and correlations on the varying setups informs the relevance of beam polarisation for isolating systematic effects.
Effects observed for this analysis may qualitatively apply to other analyses as well. Future collider efforts can use this knowledge in their design studies to maximize their physics potential.
Measuring Higgs properties is one of the most important research topics at the Higgs factory.
In this talk, we discuss the prospects of measuring the branching fraction of the Higgs boson decaying into muon pairs at the ILC.
We also discuss the impact of the transverse momentum resolution for this analysis.
We report here the experimental prospects for the measurement of the cross section and the forward-backward asymmetry for quark-antiquark production in electron-positron collisions at 250 GeV at the International Linear Collider operating with polarised beams. Thanks to the beam polarisation, we can separate the four independent chirality combinations of the electroweak couplings, maximizing in this way the sensitivity to new physics. We discuss the results for several quark flavours in the final state.
To achieve the goal of measuring the $Zq\bar{q}$ couplings at the per-mille level, possible by accumulating 2000 fb$^{-1}$ at 250 GeV, we developed various experimental methods inspired by LEP1 and SLC. These methods require exceptional tracking, vertexing and particle-identification capabilities, especially strong discrimination power for charged-hadron identification with a precise $\frac{dE}{dx}$ measurement. These studies have been performed using the International Large Detector model and simulation tools.
The precision-measurement goals for the Linear Collider detectors place strict constraints on the pixel size and the amount of material allowed in the vertex and tracking layers. Low-mass interconnect technologies suitable for small-pitch hybridization as well as for the integration of modules are therefore required. An alternative pixel-detector hybridization technology based on Anisotropic Conductive Films (ACF) is under development to replace the conventional fine-pitch flip-chip bump bonding. The new process takes advantage of the recent progress in industrial applications of ACF and is suitable for time- and cost-effective in-house processing of single devices. This new bonding technique can also be used for the integration of hybrid or monolithic detectors in modules, replacing wire bonding or solder bumping techniques. This contribution introduces the new ACF hybridization and integration technique, and shows the first test results from Timepix3 hybrid pixel assemblies and from the integration of ALPIDE monolithic pixel sensors to flex circuits.
The EUDET/AIDA beam telescopes are instruments widely used within the experimental high-energy physics community, e.g. by the detector groups of the LHC experiments, Belle II, and of course by future linear collider groups. They provide an excellent pointing resolution of down to 2 μm even at energies as low as O(1 GeV), which makes them very well suited as reference tracking systems at the DESY II Test Beam. However, after about ten years of successful operation, they require certain upgrades to keep up with the ever-increasing demands in the field of detector development. The long readout cycles of the MIMOSA26 pixel sensors, in combination with the absence of a precise time measurement, do not allow for relevant timing studies. Furthermore, the maximum particle rate that can be processed without ambiguities in the track reconstruction is limited to approximately 3 kHz.
This talk will present projects and plans to tackle the above issues, which partly also offer synergies with generic tracking-detector developments. In addition, it will give an outlook on the mid- and long-term prospects in view of the EUDET-type beam telescopes approaching their end of life.
Beam telescopes at test beam facilities are a key technology driver for the design of high precision silicon trackers, both as a test bed for new technologies and to verify their performance. The Lycoris strip telescope is a new large active area beam telescope designed, as part of the AIDA 2020 project, as a general infrastructure upgrade for the DESY II Test Beam Facility. The main component at the heart of the Lycoris telescope is the Silicon Detector (SiD) main tracker sensor.
The sensor has a large active area of $9.2 \times 9.2 \, \mathrm{cm}^2$ and is designed to achieve a micron level single point resolution. This is accomplished through a very high strip density resulting in a pitch of $25\, \mu \mathrm{m}$ achieved via a novel signal routing method using extra metalization layers to a top bump-bonded readout ASIC.
Extensive tests were conducted in 2020 with the full system in multiple test beam campaigns at the DESY II Test Beam Facility in order to determine the performance of the SiD tracker sensor and the Lycoris telescope as a whole.
In this talk, some of the current results, such as the achieved single-point resolution, charge response and single-plane efficiency of the sensors, will be presented.
In this contribution, we will present the status of the technological developments at IMB-CNM to fabricate 50 µm thick Inverse Low Gain Avalanche Detectors (iLGAD) for pixelated timing detectors.
The iLGAD sensor concept is one of the most promising technologies for enabling the future 4D tracking paradigm that requires both precise position and timing resolution. In the iLGAD concept, based on the LGAD technology, the readout is done at the ohmic contacts, allowing for a continuous unsegmented multiplication junction. This architecture provides a uniform gain over all the active sensor area (100% fill factor).
The soundness of this detection concept was successfully demonstrated in a first generation of 300 µm thick iLGAD sensors. Currently, we are developing 50 µm thick pixelated iLGADs optimized for timing, with a periphery design able to sustain higher electric fields and a simpler single-side manufacturing process.
This activity is carried out in the context of the RD50 and AIDAinnova projects with the participation of CERN-SSD, IFAE, IFCA, IMB-CNM, NIKHEF, the University of Hamburg, the University of Santiago de Compostela and the University of Zurich.
In the first part of this contribution, the principles of operation of AC-LGAD, the first silicon detector based on resistive read-out, are illustrated. Then, we outline how AC-LGADs can enable the construction of a low-mass low-power silicon tracker with excellent spatial (2-3 microns/hit) and temporal (20 ps/hit) resolutions.
We have developed a unique interdisciplinary fs-laser-based technology platform to test and explore new frontiers in light and optics, building up new knowledge that could advance existing strategies for further silicon-technology development, with emphasis on LGAD timing sensors. In collaboration with the ELI Beamlines facility and the ELI BioLab, the advanced fs-laser-based TCT/SPA-TPA infrastructure will extend our ability to see the structures and signatures of LGAD fatalities observed in test beams and will pave the path towards mitigation of the underlying mechanisms causing these fatalities. Furthermore, it will also help define the upper limits for critical bias working conditions at extreme fluences (HL-LHC, FCC) and lower fluences (ILC, CLIC). Here we present an overview of the project, which aims to set new testing strategies supporting further LGAD development.
A new method is presented to extract quark masses from collider data on Higgs production and decay rates. We find a value for the bottom quark MSbar mass at the scale of the Higgs boson mass of mb(mh) = 2.6 +/- 0.3 GeV from recent measurements by ATLAS and CMS. This result is compatible with the prediction of mb(mh) from the evolution of the world average for mb(mb) and thus provides further evidence for the scale evolution, or "running" of the bottom quark mass. Future precision measurements of Higgs decay rates are expected to improve this result considerably. We assess, in particular, the potential of a future "Higgs factory" electron-positron collider.
This abstract is related to the abstract presented by Adrian Irles and Seidai Tairafune, on determinations of the bottom quark mass in Z-boson production.
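The scale evolution ("running") of the bottom quark mass can be sketched at leading order. The code below evolves mb(mb) = 4.18 GeV to the Higgs mass scale with one-loop renormalization-group equations, keeping nf = 5 fixed throughout; this is a rough illustrative approximation, not the higher-order extraction method of the contribution:

```python
import math

ALPHA_S_MZ = 0.1179  # world-average input, assumed
MZ = 91.19           # GeV
NF = 5               # active flavours, kept fixed for simplicity
B0 = (33 - 2 * NF) / (12 * math.pi)  # one-loop beta coefficient

def alpha_s(q):
    """One-loop running strong coupling, evolved from alpha_s(MZ)."""
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * B0 * math.log(q**2 / MZ**2))

def run_mass(m_ref, q_ref, q):
    """Leading-order MSbar mass evolution; exponent 12/23 for nf = 5."""
    c = 12.0 / (33 - 2 * NF)
    return m_ref * (alpha_s(q) / alpha_s(q_ref)) ** c

mb_mh = run_mass(4.18, 4.18, 125.0)  # mb(mb) evolved to the Higgs mass
print(f"mb(mH) ~ {mb_mh:.2f} GeV at leading order")
```

Even at this crude order the mass drops by well over a GeV between mb and mH, which is the qualitative "running" effect that the Higgs-rate determination of mb(mh) tests.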
The violation of the CP symmetry is one of Sakharov's conditions for the matter/anti-matter asymmetry of the Universe. Currently known sources of CP violation in the quark and neutrino sectors are insufficient to account for this. Is CP also violated in the Higgs sector? Could the 125 GeV mass eigenstate be a mixture of even and odd CP states of an extended Higgs sector, or is CP explicitly violated in Higgs interactions? With what precision could such effects be measured at future electron-positron colliders? These questions will be discussed in the light of the latest and ongoing studies at ILC and CLIC.
Neutrinos are probably the most mysterious particles of the Standard Model. The mass hierarchy and oscillations, as well as the nature of their antiparticles, are currently being studied in experiments around the world. Moreover, in many models of New Physics, the baryon asymmetry or the dark matter density of the universe is explained by introducing new species of neutrinos. Among others, heavy neutrinos of Dirac nature have been proposed to solve mysteries of the Universe. Such neutrinos with masses above the EW scale could be produced at future linear e+e- colliders, like the Compact LInear Collider (CLIC) or the International Linear Collider (ILC).
We studied the possibility of observing decays of a heavy Dirac neutrino in the qql final state at the ILC running at 1 TeV and CLIC at 3 TeV. The analysis is based on WHIZARD event generation and fast simulation of the detector response with DELPHES. Dirac neutrinos with masses from 200 GeV to 3.2 TeV are considered. We estimated the limits on the production cross sections and on the neutrino-lepton coupling, and compared them with the current limits from the LHC running at 13 TeV, as well as the expected future limits from hadron colliders. The obtained results are stricter than any other limit estimates published so far.
Future e+e- colliders are prime tools to search for physics Beyond the Standard Model charged under the electroweak force only. A particular example is scalar partners of the charged leptons, known as sleptons in supersymmetric extensions of the Standard Model. The decays of such scalar lepton partners involve additional neutral fermions (neutralinos in supersymmetric models), which are good dark matter candidates. Future e+e- colliders would be able to probe most of the kinematically accessible parameter space of such models, i.e., where the mass of the scalar lepton partner is less than half of the collider's center-of-mass energy, with only a few days of data. Besides constraining more general models, this would make it possible to probe some well-motivated dark matter scenarios in the Minimal Supersymmetric Standard Model, in particular the incredible bulk and stau coannihilation scenarios.
A study of prospects for SUSY based on scanning the relevant parameter space of (weak-scale) SUSY parameters is presented. In particular, I concentrate on the properties most relevant for evaluating the experimental prospects: mass differences, lifetimes and decay modes. A scan over the SUSY parameter space was done, requiring that the NLSP is a bosino or a stau, the hardest cases, with a mass not larger than a few TeV. The observations are then confronted with estimated experimental capabilities, including, importantly, the level of detail of the simulations these estimates are based upon. Conclusions on realistic prospects are presented.
Like at the LHC, tests of neutrino mass models will constitute a leading component of the new-physics programs at proposed experiments such as the ILC, CepC/CppC, and FCC-ee/hh. This challenge requires the engineering of new search strategies, employing novel production mechanisms, and ultimately the development of Monte Carlo (MC) simulation software that feeds into modern simulation tool chains. This includes, for example, the HeavyN, TypeIISeesaw, EffectiveLRSM, SMWeinberg, and VPrime FeynRules UFO libraries, which are now in widespread use by the high-energy community. In this talk, we give an overview of the MC tools available for simulating neutrino mass models at collider experiments.
Studying the properties of the Standard Model (SM)-like Higgs boson has become an important window to explore physics beyond the SM. In this work, we present studies of the implications of the Higgs and Z-pole precision measurements at future Higgs factories. We perform a global fit to various Higgs search channels to obtain the 95% C.L. constraints on the parameter space of the Two Higgs Doublet Model (2HDM). In the 2HDM, we analyse tree-level effects as well as one-loop contributions from the heavy Higgs bosons. The strong constraints on cos(β − α), mΦ and the heavy-Higgs mass splitting can be complementary to direct searches at the LHC and to Z-pole precision measurements. We also compare the sensitivity of various future Higgs factories, namely the Circular Electron Positron Collider (CEPC), the Future Circular Collider (FCC)-ee and the International Linear Collider (ILC).
FCAL performs R&D for highly compact electromagnetic calorimeters foreseen to instrument the very forward region of a detector at future e+e− colliders. Two special calorimeters are foreseen, the Luminosity Calorimeter (LumiCal) and the Beam Calorimeter (BeamCal), for a precise and fast, potentially bunch-by-bunch, luminosity measurement. In recent years FCAL has studied finely-segmented silicon-tungsten or GaAs-tungsten sandwich calorimeters. The segmentation and sampling were optimised using Monte Carlo simulations and requirements on the performance were defined. Prototypes of fully assembled detector planes and calorimeter prototypes, read out by dedicated FE electronics, are studied in test beams and found to match the requirements in terms of position and energy resolution and compactness. This talk covers the results obtained from several test-beam studies on the performance of a partly instrumented calorimeter, the measurement of the Moliere radius, electron/photon discrimination and backscattering. Preliminary results from a recent test-beam measurement, using for the first time the newly developed FLAME readout, will be presented.
The very high luminosity reach of the FCC-ee is obtained by having separate storage rings for electrons and positrons, which cross at a ±15 mrad angle at the interaction points, and by strong focussing via a set of quadrupoles, the last of which has its face at L* = 2.2 m from the IP. The crossing of the beam lines through the detector solenoidal field necessitates the insertion of a set of compensating solenoidal magnets in front of the quadrupoles, pushing the luminosity monitors far into the detector volume at about 1 m from the IP, where space is severely limited. To exploit the enormous FCC-ee statistics, systematic uncertainties will have to be minimised. For the absolute luminosity measurement an ambitious goal of 10^-4 has been defined. A conceptual luminometer design has been developed, primarily focussing on the necessary geometrical tolerances at the micron level. Extensive development work is needed in order to integrate this design into the challenging machine-detector interface region and to optimise it with a focus on the overall detector hermeticity.
The very forward region of a detector at a future e+e- collider is one of the most challenging regions to instrument. A luminometer, a compact calorimeter dedicated to the precision measurement of the integrated luminosity at the permille level or better, is needed. Here we review the feasibility of such precision at CEPC, considering systematic effects arising from the detector mechanical precision and beam-related requirements. We also discuss the capability of experimentally determining the beam-energy spread, as well as the impact of electromagnetic deflection, from the perspective of the integrated-luminosity precision requirements at the Z0 pole.
An MPGD (Micro Pattern Gaseous Detector) based TPC can provide higher tracking resolution thanks to its small ExB effect compared to MWPC-based detectors. We are investigating the two-track separation capability of a GEM-based TPC using an electron beam. Since there are few genuine multi-track events, we have produced pseudo multi-track events by merging two single-track events. We will report the details of the analysis method and the resulting two-track separation efficiency.
Particle identification is one of the most important and difficult goals in high-energy physics.
Ionization of matter by charged particles is the primary mechanism used for particle identification (dE/dx), but the large inherent uncertainties in the total energy deposition limit the particle-separation capabilities: even in the most favorable momentum region (the relativistic rise), the typical separation between the energy-loss curves of different particles is smaller than the spread around their mean values.
The cluster counting technique takes advantage of the Poissonian nature of primary ionization and offers a more statistically significant way to infer mass information. The method consists in singling out, in every recorded detector signal, the isolated structures related to the arrival on the anode wire of the electrons belonging to a single ionization act (dN/dx).
Tracking devices, like drift chambers, provide measurements of the energy loss along the particle track, which, combined with a measurement of the momentum, allow one to infer the mass of the ionizing particle.
We investigate the potential of the cluster counting technique for a helium-based drift chamber, developing different algorithms to simulate the ionization-cluster generation in Geant4.
Indeed, Geant4 is a powerful software package that can simulate a full detector and collider events, but it cannot investigate the fundamental properties and performance of the sensitive elements (drift cells), which are studied in more detail with Garfield++.
We simulate, with both packages, 2 m long tracks of five particle species at different momenta passing through 1 cm sided boxes filled with 90% He and 10% iC4H10.
Different algorithms are developed to achieve reasonable results, but the common key in all the approaches explored is a model for the kinetic energy of clusters containing a single electron and of clusters containing more than one electron, built using Garfield++ simulations.
The algorithms reproduce the cluster-number distribution, which follows the expected Poissonian shape, and a cluster-size distribution whose shape is similar to the expected one; moreover, the results confirm that cluster counting allows a resolution two times better than the traditional dE/dx method.
A full Geant4 simulation of the IDEA tracking system has been developed to test the tracking performance, and the reconstruction algorithms will be implemented in the drift-chamber hit creation.
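The statistical advantage of dN/dx over dE/dx can be illustrated with a toy Monte Carlo. The cluster rate and the per-cluster deposit spectrum below are assumptions for illustration, not the Garfield++ model of the contribution:

```python
import numpy as np

rng = np.random.default_rng(42)
N_TRACKS = 4000
LAM = 12.5 * 100  # assumed ~12.5 clusters/cm over a 1 m track

# dN/dx: the cluster count is Poissonian, so its relative spread ~ 1/sqrt(LAM)
n_clusters = rng.poisson(LAM, N_TRACKS)
res_dndx = n_clusters.std() / n_clusters.mean()

# dE/dx: the summed energy inherits heavy-tailed per-cluster deposits
# (a toy truncated 1/E^2 spectrum), which inflates the relative spread
all_deps = 1.0 / rng.uniform(0.01, 1.0, n_clusters.sum())
track_id = np.repeat(np.arange(N_TRACKS), n_clusters)
e_tot = np.bincount(track_id, weights=all_deps)
res_dedx = e_tot.std() / e_tot.mean()

print(f"toy dN/dx resolution: {res_dndx:.1%}, toy dE/dx resolution: {res_dedx:.1%}")
```

Counting clusters discards the fluctuating energy of each ionization act and keeps only its Poissonian multiplicity, which is why the dN/dx spread comes out roughly a factor two smaller in this toy, in line with the gain quoted above.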
A prime target of the ILC physics program is the precision measurement of the masses of known fundamental particles such as the top quark and the Higgs, W, and Z bosons. The measurement of the absolute center-of-mass energy scale is a primary issue for most determinations, and this will rely critically on the knowledge of the tracker momentum scale. By using particle decays, especially of $\mathrm{K}^{0}_{S}$ and $\Lambda$, one can constrain the tracker momentum scale and, as a by-product, improve the measurements of the masses of various hadrons. This method, if proven realistic, has the potential to open up a comprehensive precision polarized-Z-scan physics program in which the center-of-mass energy systematics are under good control.
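A minimal sketch of the calibration idea, with illustrative numbers: the reconstructed $\mathrm{K}^{0}_{S} \to \pi^{+}\pi^{-}$ mass responds linearly to a momentum-scale bias, so comparing it with the PDG value constrains the scale:

```python
import math

M_PI = 0.13957   # GeV, charged pion mass
M_K0S = 0.49761  # GeV, PDG K0_S mass (the calibration reference)

def inv_mass(p1, p2, m=M_PI):
    """Invariant mass of two charged-pion tracks given 3-momenta (GeV)."""
    e1 = math.sqrt(m * m + sum(x * x for x in p1))
    e2 = math.sqrt(m * m + sum(x * x for x in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - px**2 - py**2 - pz**2)

# Toy decay: K0_S at rest -> pi+ pi- back to back with p* ~ 0.206 GeV
p_star = math.sqrt(M_K0S**2 / 4 - M_PI**2)
pi_plus, pi_minus = (p_star, 0.0, 0.0), (-p_star, 0.0, 0.0)
m_nominal = inv_mass(pi_plus, pi_minus)

# A relative momentum-scale bias of 1e-4 shifts the reconstructed mass:
scale = 1 + 1e-4
m_biased = inv_mass(tuple(scale * x for x in pi_plus),
                    tuple(scale * x for x in pi_minus))
print(f"mass shift: {(m_biased - m_nominal) * 1e6:.1f} keV")
```

The shift is diluted by the pion mass (only the momentum-dependent part of the energy scales), so the achievable momentum-scale precision follows from how well the $\mathrm{K}^{0}_{S}$ peak position can be measured.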
The alignment of a detector aims at describing the detector geometry as accurately as possible, such that the tracking resolution is not degraded by detector misalignments. The algorithm used for the alignment of the Inner Detector (ID) of the ATLAS experiment consists of a minimisation of the track-to-hit residuals in a sequence of hierarchical levels, ranging from the mechanical assembly structures to the local sensors. Following this strategy, a precision in the ID alignment at the µm level was achieved, despite the difficult conditions of LHC Run 2, where time-dependent movements and deformations affected different detectors during data taking. The minimisation of the track-to-hit residuals alone is not sensitive to some systematic detector deformations that introduce biases in the track parameters while leaving the measured track-to-hit residuals unchanged. For the determination and correction of these so-called weak modes, several dedicated analyses using resonances decaying into muons or electrons are carried out. These techniques make it possible to minimise the biases in the track parameters through the introduction of track constraints in the alignment procedure. After the alignment campaign, the residual sagitta bias is reduced to less than ~0.1 TeV^{-1}. Biases in the impact parameters are also reduced following similar techniques. Finally, it has been measured that the remaining global momentum bias is 0.9x10^{-3}, and that the ID is free of radial expansions.
The discovery of the Higgs boson has revealed that the Higgs quartic coupling becomes small at very high energy scales. Guided by this observation, we introduce Higgs Parity, which is a spontaneously broken symmetry exchanging the standard model Higgs with its parity partner. In addition to explaining the small Higgs quartic coupling, Higgs Parity can provide a dark matter candidate, solve the strong CP problem, and arise from an SO(10) grand unified gauge symmetry. We will show that the Higgs Parity symmetry breaking scale is determined by standard model parameters including the top quark mass and predicts experimental signals such as the dark matter direct detection rate and the proton decay rate. As a result, Higgs Parity provides a tight correlation between the precision measurement of the top quark mass at future linear colliders and these experimental signals.
We present two new extractions of the QCD coupling constant at the Z pole, $\alpha_S(m_Z)$, from detailed comparisons of inclusive W and Z hadronic decay data to state-of-the-art perturbative Quantum Chromodynamics calculations at next-to-next-to-next-to-leading order (N$^{3}$LO) accuracy, incorporating the latest experimental and theoretical developments. In the W boson case, the total width computed at N$^{3}$LO is used for the first time in the extraction. For the Z boson pseudo-observables, the N$^{3}$LO results are complemented with the full two- and partial three-loop electroweak corrections recently made available, and the experimental values are updated to account for newly estimated LEP luminosity biases. A combined reanalysis of the Z boson data yields $\alpha_S(m_Z) = 0.1203 \pm 0.0028$, with a 2.3\% uncertainty reduced by about 7\% compared to the previous state-of-the-art. From the combined W boson data, a value of $\alpha_S(m_Z) = 0.101 \pm 0.027$ is extracted, still dominated by large experimental uncertainties but nonetheless improved compared to previous works. The levels of theoretical and parametric precision required in the context of QCD coupling determinations with permil uncertainties, from the high-statistics W and Z boson samples expected at future $e^+e^-$ colliders such as the FCC-ee, are discussed in detail.
In this talk I will describe our recent work on the N$^3$LL+NNLO resummed prediction for the 2-jettiness differential distribution for boosted $t\bar t$ pairs produced in $e^+e^-$ collisions, calculated in the framework of SCET+(boosted) HQET. The prediction incorporates a precise short-distance top mass scheme, such as the MSR scheme. Renormalon subtractions in the mass and soft function play a key role in improving the stability of the peak position, and allow for a determination of the top MSR mass with perturbative uncertainties well below 100 MeV. The result has important applications for Monte Carlo top mass calibration.
Precision studies of the Higgs boson at future $e^+e^-$ colliders can help to
shed light on fundamental questions related to electroweak symmetry breaking,
baryogenesis, the hierarchy problem, and dark matter.
The main production process, $e^+e^- \to HZ$, will need to be controlled with
sub-percent precision, which requires the inclusion of next-to-next-to-leading
order (NNLO) electroweak corrections. The most challenging class of diagrams consists of
planar and non-planar double-box topologies with multiple massive propagators in
the loops. This article proposes a technique for computing these diagrams
numerically, by transforming one of the sub-loops through the use of Feynman
parameters and a dispersion relation, while standard one-loop formulae can be
used for the other sub-loop. This approach can be extended to deal with tensor
integrals. The resulting numerical integrals can be evaluated in minutes on a
single CPU core, to achieve about 0.1% relative precision.
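The double-box calculation described above is far beyond a few lines of code, but its basic ingredient -- trading a loop integral for a Feynman-parameter integral evaluated numerically -- can be illustrated on the one-loop bubble. The sketch below computes the finite part of $B_0$ from its standard Feynman-parameter representation, valid below threshold where the integrand stays real; this is an editor-supplied illustration, not the method of the paper.

```python
import numpy as np

def b0_finite(s, m1sq, m2sq, musq=1.0, n=200000):
    """Finite part of the one-loop bubble B0 via its Feynman-parameter
    representation, -int_0^1 dx ln(Delta(x)/mu^2), with
    Delta(x) = m1^2 x + m2^2 (1-x) - s x(1-x),
    evaluated with a midpoint rule (valid below threshold, Delta > 0)."""
    x = (np.arange(n) + 0.5) / n  # midpoint abscissae on [0, 1]
    delta = m1sq * x + m2sq * (1.0 - x) - s * x * (1.0 - x)
    return -float(np.mean(np.log(delta / musq)))
```

For equal masses and $s = m^2$ the integral is known in closed form, $2 - \pi/\sqrt{3}$ at $\mu^2 = m^2$, which provides a simple cross-check of the numerics.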
The need for fast detector simulation programs is emphasised, both
in terms of the need for ``rapid response'' to new
results - in particular from the LHC - and new theoretical ideas,
and in terms of how to cope with multi-billion simulated event samples.
The latter would arise both from the need to be able to
simulate significantly more events than expected in the real data, also for
high cross-section processes, and the need to scan multi-parameter
theories.
The {\it Simulation `a Grande Vitesse}, SGV, is presented, and is shown to
be able to address these issues.
It must be emphasised that SGV is a {\it detector simulation} program,
unlike parametric smearing codes such as Delphes, and therefore
yields results that can be expected to emulate the experimental reality
much better.
Indeed, all aspects of the tracking performance as given by SGV are shown to reproduce
very closely those of the full simulation and reconstruction of the ILD
concept.
Still, the execution speed of SGV is the same as that attained by parametric
codes.
SGV can take its input from a number of formats (stdhep, LCIO, ...), or
internally call event generators. No predefined output format is imposed, but
running examples producing full LCIO DST output or ROOT ntuples are
provided.
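For contrast with the covariance-based tracking that SGV performs, the parametric-smearing approach mentioned above can be sketched in a few lines: smear $1/p_T$ with a resolution of the usual form $\sigma(1/p_T) = a \oplus b/(p_T\sin\theta)$. The constants below are illustrative placeholders, not the SGV or ILD parametrisation.

```python
import numpy as np

rng = np.random.default_rng(7)

def smear_pt(pt, theta, a=2e-5, b=1e-3):
    """Parametric-style fast-sim smearing of transverse momentum (GeV):
    sigma(1/pT) = a (+) b/(pT sin(theta)), combined in quadrature.
    Constants are illustrative, not a real detector parametrisation."""
    sigma_inv_pt = np.hypot(a, b / (pt * np.sin(theta)))
    inv_pt = 1.0 / pt + rng.normal(0.0, sigma_inv_pt, size=np.shape(pt))
    return 1.0 / inv_pt

# smear a sample of 10 GeV central tracks
pts = smear_pt(np.full(20000, 10.0), np.pi / 2)
```

A covariance-machine simulation such as SGV instead propagates the full track covariance through the detector material and measurement layers, which is why it tracks the full simulation much more closely than such a one-line smearing.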
One of the important goals of the proposed future $e^+e^-$
collider experiments is the search for dark matter particles
using different experimental approaches. The most general search
approach is based on the mono-photon signature, which is expected
when production of the invisible final state is accompanied by a
hard photon from initial state radiation. Analysis of the energy
spectrum and angular distributions of those photons can shed light
on the nature of dark matter and its interactions. Therefore, it
is crucial to be able to simulate the signal and background samples
in a uniform framework, to avoid possible systematic biases. The
WHIZARD program is a flexible tool, which is widely used by
$e^+e^-$ collaborations for simulation of many different "new
physics" scenarios.
We propose a procedure for merging the matrix-element calculations
with the lepton ISR structure function implemented in WHIZARD.
It allows us to reliably simulate the mono-photon events, including
the two main Standard Model background processes: radiative neutrino pair
production and radiative Bhabha scattering.
We demonstrate that cross sections and kinematic distributions of
mono-photon events in neutrino pair-production agree with the
corresponding predictions of the ${\cal KK}$MC, a Monte Carlo generator
providing perturbative predictions for SM and QED processes,
which has been widely used in the analysis of LEP data.
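The shape of the mono-photon signature can be sketched with the leading-log collinear ISR spectrum (not WHIZARD's full structure function, and without detector acceptance); all numbers below are illustrative.

```python
import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant
ME = 0.000511             # electron mass (GeV)

def isr_spectrum(x, s):
    """Leading-log ISR photon spectrum dN/dx for one beam, with
    x = E_gamma / E_beam (collinear approximation)."""
    big_l = np.log(s / ME**2)
    return ALPHA / np.pi * (big_l - 1.0) * (1.0 + (1.0 - x) ** 2) / x

def prob_photon_above(xmin, s, xmax=0.999, n=100000):
    """Probability of radiating an ISR photon with x > xmin (midpoint rule)."""
    x = xmin + (xmax - xmin) * (np.arange(n) + 0.5) / n
    return float(np.mean(isr_spectrum(x, s)) * (xmax - xmin))

s_250 = 250.0**2                           # sqrt(s) = 250 GeV
p_soft = prob_photon_above(0.1, s_250)     # moderately hard photons
p_hard = prob_photon_above(0.5, s_250)     # hard photons
```

The steeply falling $1/x$ behaviour is exactly why a consistent merging of the matrix element with the ISR structure function matters: double counting or gaps between the two treatments distort the observable tail.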
KKMCee provides high-precision Standard Model predictions for lepton- or quark-pair production at any future lepton collider (excluding the Bhabha process, but including muon colliders). It features second-order QED photonic corrections with advanced soft-photon resummation at the amplitude level, and full control over longitudinal and transverse polarizations, both for the beams and for the outgoing fermions. It also includes decays of tau leptons and hadronization of quarks. It was recently upgraded with a better library of first-order electroweak corrections. The simulation of the beam energy spread, which is essential for both linear and circular colliders, was also recently upgraded. The auxiliary KKFoam MC program (in C++), with a simplified matrix element and partial analytical integration, was added recently for precision testing of the KKMCee predictions. Future development plans include translating the entire program into C++ (already in progress), improvements of the matrix element for the neutrino channel, inclusion of the leading third-order QED non-soft corrections, functionality for fast fitting of the initial parameters, and more.
The 4-, 5- and 6-jet resolution scales for the Durham jet algorithm in $e^+ e^-$ collisions are resummed, using an implementation of the well known CAESAR formalism within the Sherpa framework. Results are presented at NLO+NLL' accuracy. In particular the impact of subleading colour contributions is evaluated. Hadronisation corrections are studied using matrix-element plus parton-shower predictions from SHERPA and VINCIA.
In this talk, we present the results for constraining the effective field theory describing the top quark couplings through the $e^{-} e^{+} \rightarrow t \bar{t}+$jet process.
The analysis is performed at two center-of-mass energies of
500 and 3000 GeV considering a realistic simulation of the detector response and the main sources of background.
The expected upper limits at 95\% CL are obtained on the new physics couplings
using the dileptonic $t \bar{t}$ final state.
We find that the dimensionless Wilson coefficients considered in this analysis could be probed down to the $10^{-4}$ level at 95\% CL.
We discuss the possibility that the parameter space of the two Higgs doublet model can be significantly narrowed down by considering the synergy between direct searches for additional Higgs bosons at the (HL-)LHC and precision measurements of the Higgs boson properties at future e+e- colliders such as the International Linear Collider (ILC). We show that, in the case where the coupling constants of the discovered Higgs boson differ slightly from the values predicted in the standard model, most of the parameter space can be explored by direct searches for extra Higgs bosons, in particular via decays of the extra Higgs bosons into the discovered Higgs boson, and also by theoretical arguments such as perturbative unitarity and vacuum stability. [arXiv:2010.15057]
We propose to utilize angularity distributions in Higgs boson decay to probe light quark Yukawa couplings at $e^+e^-$ colliders. Angularities $\tau_a$ are a class of 2-jet event shapes with variable and tunable sensitivity to the distribution of radiation in hadronic jets in the final state. Using soft-collinear effective theory (SCET), we present a prediction of angularity distributions from Higgs decaying to quark and gluon states at $e^+e^-$ colliders to ${\rm NNLL}+\mathcal{O}(\alpha_s)$ accuracy. Due to the different color structures in quark and gluon jets, the angularity distributions from $H\to q\bar{q}$ and $H\to gg$ show different behaviors and can be used to constrain the light quark Yukawa couplings.
We show that upper limits on the light quark Yukawa couplings could reach the $15\sim 22\%$ level of the Standard Model bottom quark Yukawa coupling.
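The event-shape observable itself is simple to state: for $e^+e^-$ (or here, Higgs-decay) final states, the angularity of an event with particle energies $E_i$ at angles $\theta_i$ to the thrust axis is $\tau_a = \frac{1}{Q}\sum_i E_i \sin^a\theta_i\,(1-|\cos\theta_i|)^{1-a}$. The sketch below implements this definition, taking the thrust axis as given; it is an illustration of the observable, not the NNLL calculation.

```python
import numpy as np

def angularity(energies, cos_theta, a):
    """Angularity tau_a of an event:
    tau_a = (1/Q) * sum_i E_i * sin(theta_i)**a * (1-|cos(theta_i)|)**(1-a),
    with theta_i measured from the thrust axis (assumed already determined)
    and Q the total energy."""
    energy = np.asarray(energies, dtype=float)
    c = np.asarray(cos_theta, dtype=float)
    s = np.sqrt(np.clip(1.0 - c * c, 0.0, None))  # sin(theta_i)
    q = energy.sum()
    return float(np.sum(energy * s**a * (1.0 - np.abs(c)) ** (1.0 - a)) / q)
```

Pencil-like two-jet events give $\tau_a \to 0$, while broader (gluon-like) radiation patterns push $\tau_a$ up; the tunable exponent $a$ is what gives the variable sensitivity exploited in the abstract.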
In this talk, I discuss the phenomenology of a minimal model for GeV-scale Majorana dark matter (DM) coupled to the standard model lepton sector via a charged scalar singlet. The theoretical framework extends the Standard Model by two $SU(2)_L$ singlets: one charged Higgs boson and a singlet right-handed fermion; the latter plays the role of the DM candidate. We show that there is an anti-correlation between the spin-independent DM-nucleus scattering cross-section ($\sigma_{\rm SI}$) and the DM relic density for parameter values allowed by various theoretical and experimental constraints. Moreover, we find that even when the DM couplings are of order unity, $\sigma_{\rm SI}$ is below the current experimental bound but above the neutrino floor. Furthermore, we show that the considered model can be probed at high-energy lepton colliders using, e.g., mono-Higgs production and same-sign charged Higgs pair production.
The leptophilic weakly interacting massive particle (WIMP) is realized in a minimal renormalizable model scenario where scalar mediators with lepton number establish the WIMP interaction with the standard model (SM) leptons. We perform a comprehensive analysis of such a WIMP scenario for two distinct cases, with an SU(2) doublet or singlet mediator, considering all the relevant theoretical, cosmological and experimental constraints at present. We show that mono-photon searches at near-future lepton collider experiments (ILC, FCC-ee, CEPC, etc.) can play a significant role in probing the yet unexplored parameter range allowed by the WIMP relic density constraint. This will complement the search prospects at the near-future hadron collider experiment (HL-LHC). Furthermore, we discuss the combined model scenario including both the doublet and singlet mediators. The combined model is capable of explaining the long-standing muon (g-2) anomaly, which is an additional advantage. We demonstrate that the region allowed by the anomalous muon (g-2) explanation can also be probed at the future colliders, providing a simultaneous test of the model scenario.
In this talk we will discuss a general anomaly-free U(1) extension of the Standard Model which generates small neutrino masses through the seesaw mechanism. In this scenario a new force carrier, $Z'$, can be introduced, which plays an interesting role in a variety of phenomenological aspects including the forward-backward asymmetry, the left-right asymmetry, Higgs physics, and dark matter phenomenology. We will describe these phenomenological aspects in detail. Such phenomena could be observed at currently running experimental facilities, and these scenarios could also be tested at proposed experiments in the near future. A detailed case study will be presented.
In gauge-Higgs unification (GHU), the 4D Higgs boson appears as a part of the fifth-dimensional component of the 5D gauge field. Recently, an $SO(11)$ GUT-inspired $SO(5)\times U(1)\times SU(3)$ GHU model has been proposed. In the GHU, Kaluza-Klein (KK) excited states of the neutral vector bosons (photon, $Z$ boson and $Z_R$ boson) appear as neutral massive vector bosons $Z'$. The $Z'$ bosons in the GHU couple to quarks and leptons with large parity violation, which leads to a distinctive polarization dependence in, e.g., cross sections and forward-backward asymmetries in $e^-e^+\to \mu^-\mu^+, q\bar{q}$ processes.
In the talk, we discuss fermion pair production in $e^-e^+$ linear collider experiments with polarized $e^-$ and $e^+$ beams in the GUT-inspired GHU. Deviations from the SM appear already in the early stage of ILC 250 GeV experiments, and can be tested for KK mass scales up to about 15 TeV.
This talk is mainly based on Phys.Rev.D102(2020)015029.
Searches for light, weakly coupled particles are an important component of the physics program at present and future colliders. A classic benchmark for a potential vector-boson mediator between the standard model and the dark sector is the hypothetical dark photon, which could be produced either directly or through a dark Higgs boson. As part of the US Snowmass process, we are studying the sensitivity for detection of long-lived dark photons at the ILC, using the Higgs portal production mode and displaced decays of long-lived dark photons as a benchmark to study the SiD detector performance for detection of displaced decays. In this talk, we will outline our plans for the study, and discuss progress so far, including first looks at both fast and full SiD simulation of long-lived dark photons produced via Higgs-strahlung at sqrt(s) = 250 GeV.
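The displaced-decay phenomenology driving the study above is governed by the boosted decay length $L = \beta\gamma\, c\tau = (|p|/m)\,c\tau$. The sketch below samples decay lengths and the fraction falling in a displaced-vertex fiducial region; all mass, momentum and lifetime values are invented for illustration, not a specific dark-photon benchmark.

```python
import numpy as np

rng = np.random.default_rng(3)

def decay_lengths(p, m, ctau_mm, n=100000):
    """Sample lab-frame decay lengths (mm) for a long-lived particle with
    momentum p (GeV), mass m (GeV) and proper decay length c*tau (mm).
    The mean lab-frame length is (p/m)*c*tau, since |p|/m = beta*gamma."""
    return rng.exponential((p / m) * ctau_mm, size=n)

# illustrative numbers: fraction of decays inside a displaced-vertex
# region between 10 mm and 1 m from the interaction point
lengths = decay_lengths(p=20.0, m=1.0, ctau_mm=50.0)
frac_displaced = np.mean((lengths > 10.0) & (lengths < 1000.0))
```

Scans of this fraction over mass and coupling (which set $c\tau$) are what translate detector geometry into sensitivity contours.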
Two-fermion production at the International Linear Collider (ILC) will allow sensitive indirect
searches for new interactions, such as a heavy gauge boson Z′. Tools available at the ILC to
measure the chirality of such new interactions include the ILC's polarised beams and the tau lepton
polarisation.
Tau polarisation is extracted by measuring the distribution of tau decay products, and relies on
the correct identification of the tau decay mode. Especially for high-energy taus, this requires a
suitably-designed detector and sophisticated event reconstruction.
I performed simulation studies aimed at reconstructing events containing hadronic decays of tau
lepton pairs produced at the ILC.
In the context of the Energy Frontier of Snowmass 2021, we are developing a neural network tagger for identifying the flavour of a jet, with a physics focus on strange decays of Higgs bosons. The tagger will be deployed as part of prospect studies for SM H->ss measurements as well as for BSM heavy Higgs measurements, H(+)->cs, at future lepton colliders. In particular, these studies are performed using samples reconstructed with the International Large Detector at the proposed International Linear Collider. Jet-level variables as well as the constituent particles within a jet are provided as inputs to the neural network, and different neural network architectures are studied. In this talk, we will present our progress to date, including the performance of our preliminary jet taggers, and the work which remains to be done.
We will present the motivation to study ee->ss. In addition, we will present first data-quality checks with recently produced two-fermion samples at 250 GeV for ILD. These checks concern in particular kaon identification, which exploits the dE/dx capabilities of the ILD TPC.
The Standard Model (SM) cannot explain why the measured quark masses differ from one another, nor the origin of the large mass disparity between them. The quark masses are, however, energy dependent, and their values at higher energy scales differ from the measured ones. Furthermore, if new particles, such as SUSY partners, contribute, this energy dependence will deviate from the SM expectation. Based on this idea, some models such as GUTs predict the mass unification of third-generation particles (b, tau) at the GUT scale, and are candidates for addressing the problem of mass. Verifying the energy dependence of the b quark mass at a higher energy scale therefore provides a test of QCD and can also serve as a probe of new physics.
The b quark mass at the Z pole was measured at LEP and SLD. The results were in good agreement with the SM, with no indication of new physics. As a next step, this study simulated b quark pair production events at the 250 GeV ILC and estimated the achievable b mass measurement accuracy at the 250 GeV energy scale.
It turns out that the precision of the b mass measurement at 250 GeV is about 1 GeV. It also turns out that the current Monte Carlo sample of quark pair production has some problems, which affect the central value of the observable used in this study and its statistical error; the sample is being updated. Moreover, a Giga-Z ILC could measure the b quark mass at the Z pole with better precision than LEP and SLD.
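The energy dependence being tested can be sketched at leading order: the $\overline{\rm MS}$ mass runs as $m(\mu) = m(\mu_0)\,[\alpha_s(\mu)/\alpha_s(\mu_0)]^{12/23}$ for five active flavours, with one-loop running of $\alpha_s$. This is only the leading-order behaviour (the quoted precision studies use higher orders); input values are the usual world averages.

```python
import numpy as np

ALPHAS_MZ = 0.1179  # alpha_s(MZ), world-average input
MZ = 91.1876        # GeV
B0 = 23.0 / 3.0     # one-loop QCD beta coefficient for nf = 5

def alpha_s(mu):
    """One-loop running strong coupling, nf = 5 flavours."""
    return ALPHAS_MZ / (1.0 + B0 * ALPHAS_MZ / (2.0 * np.pi) * np.log(mu / MZ))

def m_run(mu, m_ref, mu_ref):
    """Leading-order MS-bar quark mass running, nf = 5:
    m(mu) = m(mu_ref) * (alpha_s(mu)/alpha_s(mu_ref))**(12/23)."""
    return m_ref * (alpha_s(mu) / alpha_s(mu_ref)) ** (12.0 / 23.0)

mb_mz = m_run(MZ, 4.18, 4.18)      # run mb(mb) = 4.18 GeV up to the Z pole
mb_250 = m_run(250.0, 4.18, 4.18)  # ... and up to 250 GeV
```

The decrease of $m_b(\mu)$ between the Z pole and 250 GeV is the SM baseline against which the measured value would be compared; new coloured states would alter this slope.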
The Higgs boson decay modes to heavy $b$ and $c$ quarks are crucial for Higgs physics studies. The presence of semileptonic decays in the jets originating from $b$ and $c$ quarks causes missing energy due to the undetectable neutrinos. A correction for the missing neutrino momenta can be derived from the decay kinematics up to a two-fold ambiguity. The correct solution can be identified by a kinematic fit, which exploits the well-known initial state at an $e^{-}e^{+}$ collider by adjusting the measured quantities within their uncertainties to fulfill the kinematic constraints. The ParticleFlow concept, based on the reconstruction of individual particles in a jet, allows understanding the individual jet-level uncertainties at an unprecedented level. The modeling of the jet uncertainties and the resulting fit performance will be discussed for the example of the ILD detector. Applied to $H\rightarrow b\bar{b}/c\bar{c}$ events, the combination of the neutrino correction with the kinematic fit improves the Higgs mass reconstruction significantly, both in terms of resolution and peak position.
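The two-fold ambiguity mentioned above can be made explicit in a simplified setting: with the neutrino transverse momentum fixed to balance the visible decay products relative to the parent flight direction, the parent mass constraint yields a quadratic equation for the neutrino longitudinal momentum. This is a kinematics-only sketch (the real analysis works with measured jets and a full kinematic fit).

```python
import numpy as np

def nu_momentum_solutions(m_parent, e_vis, p_par, p_perp):
    """Neutrino momentum along the parent flight direction from the parent
    mass constraint. The neutrino transverse momentum is fixed to balance
    the visible system (magnitude p_perp), leaving a quadratic with (up to)
    two real solutions -- the two-fold ambiguity."""
    m_vis_sq = e_vis**2 - p_par**2 - p_perp**2
    k = 0.5 * (m_parent**2 - m_vis_sq - 2.0 * p_perp**2)
    a = e_vis**2 - p_par**2
    b = -2.0 * k * p_par
    c = e_vis**2 * p_perp**2 - k**2
    disc = max(b * b - 4.0 * a * c, 0.0)  # clamp negative discriminants
    r = np.sqrt(disc)
    return (-b - r) / (2.0 * a), (-b + r) / (2.0 * a)
```

Both roots reproduce the imposed parent mass; only external information, such as the kinematic fit of the full event, can select the physical one.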
To achieve the physics requirements of a future e$^+$e$^-$ collider, a high-resolution tracker for particle track reconstruction and particle identification is required. The Time Projection Chamber (TPC) is one of the main concepts proposed for the central tracker; it offers excellent performance in momentum measurement, dE/dx measurement, and spatial resolution.
Building on studies of the previous TPC readout module with continuous ion backflow suppression, a TPC prototype integrated with a 266$~$nm UV laser track system has been developed at the Institute of High Energy Physics (IHEP). The prototype has an active readout area of 200$\times$200$~$mm$^2$ and a drift length of 500$~$mm; the narrow laser beams imitate straight ionization tracks at predefined positions ($<2~\mu$m). It is placed on an anti-vibration pneumatic optical platform, where a central spring, a pendulum bar and an auto-inflation system damp any vibration down to amplitudes of less than 1$~\mu$m. The 1280-channel front-end electronics for the readout and the 20,000$~$V high voltage for the field cage have been completed. First tests of the TPC prototype with its 42 integrated UV laser tracks performed very well. In this talk, updated results on the commissioning and the spatial resolution will be presented.
A high performance central tracker is essential for precision measurements of Higgs properties at the ILC. The LCTPC-Asia group is developing a GEM based readout module for a TPC proposed as the central tracker of the ILD. Results from its test beam data taken in 2016 at DESY with the large prototype TPC (LP1) were reported multiple times in the past workshops of this series. This time we focus on inclined tracks and analyze their incident angle effect. A finite incident angle produces an extra charge spread over the readout pads and, together with the fluctuations of the positions and sizes of primary ionization clusters, degrades the spatial resolution as compared to that for the normal incidence. This so-called angular pad effect is expected to be further amplified by gas gain fluctuations. In this talk, we will report our preliminary results regarding the angular pad effect on the Asian GEM module, including the estimated effective number of primary ionization clusters and its comparison with a simulation result by the Heed package of Garfield++.
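The role of the effective number of primary ionization electrons $N_{\rm eff}$ in the resolution studies above can be made explicit with the standard TPC resolution parametrisation, $\sigma_x^2(z) = \sigma_0^2 + C_d^2\, z / N_{\rm eff}$, where $z$ is the drift length and $C_d$ the transverse diffusion constant. The numbers below are illustrative placeholders, not LCTPC measurements.

```python
import numpy as np

def sigma_x(z, sigma0_mm=0.05, cd_mm=0.10, n_eff=22.0):
    """Transverse spatial resolution of a TPC versus drift length z (m):
    sigma_x^2 = sigma0^2 + cd^2 * z / n_eff,
    with sigma0 the intrinsic (zero-drift) resolution (mm), cd the
    transverse diffusion term (mm/sqrt(m)) and n_eff the effective number
    of primary electrons per pad row. Values are illustrative only."""
    return np.sqrt(sigma0_mm**2 + cd_mm**2 * np.asarray(z, dtype=float) / n_eff)
```

Effects such as the angular pad effect and gain fluctuations reduce $N_{\rm eff}$, steepening the degradation of $\sigma_x$ with drift length, which is why its estimate and comparison to simulation are central to the study.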
The largest phase-1 upgrade project for the ATLAS Muon System at Large Hadron Collider (LHC) is the replacement of the present first station in the forward regions with the New Small Wheels (NSWs). The NSWs consist of two detector technologies: Large size multi-gap resistive strips Micromegas (MM) and small-strip Thin-Gap Chamber (sTGC). The sTGC modules are called “trigger chambers” and Micromegas modules “precision chambers” despite having comparable tracking and trigger performances. The MM chambers are ionization-based Micro-Pattern Gaseous Detectors (MPGD) made up of parallel plates, having a thin amplification region separated from the conversion region via a thin metallic mesh. They will be mainly used as precision tracking detectors with a high spatial resolution, efficiency better than 95% per single plane, in a highly irradiated environment of the ATLAS experiment. Along with the MM, the NSWs will be equipped with eight layers of sTGC chambers arranged in multilayers of two quadruplets, for a total active surface of more than 2500m2. To achieve the good precision tracking and trigger capabilities in the high background environment of the high-luminosity LHC, each sTGC plane must have a spatial resolution better than 100μm to allow the Level-1 trigger track segments to be reconstructed with an angular resolution of approximately 1mrad. The frontend electronics are implemented in about 2000 boards including the 4 custom-designed ASICs capable of driving trigger and tracking primitives to the backend trigger processor and readout system. The readout data flow is designed through a high-throughput network approach and fast-timing. The large number of readout channels, short time available to prepare and transmit trigger data, large volume of output data, harsh radiation environment, and the need of low power consumption all impose great challenges on the system design, integration and commissioning. 
In this talk, the design, construction, performance and status of the ATLAS NSW upgrade project will be discussed. The timing structure of the proton beams of the LHC and of the electron-positron beams proposed for the International Linear Collider (ILC) are quite different, but the NSW detector technologies can be adapted as a powerful timing detector for precise muon tracking of high-multiplicity events at future electron-positron colliders. Concepts and ideas on how ATLAS muon gaseous chamber technology can be adapted for a detector at the ILC will be summarized. Generic characteristics of MM and sTGC will be provided for pattern recognition at the ILC.
Developments for a TPC at the ILC with MPGD readout have been conducted for more than two decades. A new scheme (called ERAM) for charge spreading with a resistive-capacitive anode has recently been tested in a beam at DESY. Preliminary results are presented. It is shown that this new scheme, where the Micromegas mesh is at ground, allows the distortions near the module boundaries to be reduced by an order of magnitude, while providing better flexibility compared with the conventional scheme. Synergies with the near detectors of T2K, Hyper-K, and SAND are briefly discussed.
Final assembly of the ILD solenoid is foreseen to be carried out in an assembly hall at the ground level of the experimental cavern, because the completed ILD solenoid is too large to be delivered from the factory. Drawing on the CMS fabrication experience, we have been discussing the manufacturing plan with production companies. One-third blocks of the coil winding can be transported from the factory; the coil assembled in the hall is then installed into a cryostat, which is also manufactured in the hall. The transportation cost is being estimated. The status of this fabrication study will be presented.
The Higgs trilinear coupling can serve as a unique probe to investigate the structure of the Higgs sector and the nature of the electroweak phase transition, and to search for indirect signs of New Physics. At the same time, classical scale invariance (CSI) is an attractive concept for BSM model building, explaining the apparent alignment of the Higgs sector and potentially relating to the hierarchy problem. A particularly interesting feature is that the Higgs trilinear coupling is universally predicted at one-loop order in all CSI models, and deviates by 67% from the (tree-level) SM prediction -- making it accessible at the ILC.
In this talk, I will show how this result is modified at two loops. I will present results from the first explicit computation of two-loop corrections to the Higgs trilinear coupling in classically scale-invariant BSM models. Taking as examples an N-scalar model and a CSI variant of a Two-Higgs-Doublet Model, I will show that the inclusion of two-loop effects allows distinguishing different scenarios with CSI, although the requirement of correctly reproducing the known 125-GeV mass of the Higgs boson severely restricts the allowed values of the Higgs trilinear coupling.
A light pseudoscalar in an extended Higgs sector provides a solution to the muon anomalous magnetic moment and/or dark matter. We explore the prospects for Yukawa production of such a light boson, which can exist in an extended Higgs sector like the 2HDM. Considering an ILC "Higgs factory" with a CM energy of 250 GeV, we show that the available parameter space can be examined via the (tau) Yukawa process at 5 sigma with an integrated luminosity of 2000 fb$^{-1}$. It is also possible to reconstruct the mass of such a light particle at lepton colliders through the multi-tau final state.
We study the scenario of the two Higgs doublet model, where the Higgs potential respects the twisted custodial symmetry at high energy scale. In this scenario, experimental data for the Higgs boson couplings and those for the electroweak precision observables can be explained even when the masses of the extra Higgs bosons are near the electroweak scale. We also discuss the predictions on the mass spectrum of the additional Higgs bosons and also those on the coupling constants of the standard-model-like Higgs boson, which make it possible to test this scenario at the current and future collider experiments. This talk is based on JHEP 02 (2021) 046 [arXiv:2009.04330].
Experimental measurements in flavour physics, in tension with Standard Model predictions, exhibit large hints of Lepton Flavour Universality violation. We analyse global fits of the Wilson coefficients in a model-independent effective-Hamiltonian approach, proposing different scenarios for including the New Physics contributions. A discussion of the implications of our analysis for leptoquark models is included. We conclude with an overview of the impact of the future generation of colliders on the field of B-meson anomalies.
Material budget and distance to the interaction point are amongst the key sensor performance figures that determine the tracking and vertexing capabilities of inner tracking systems. To significantly improve these numbers, ALICE is carrying out R&D to replace its innermost tracking layers by truly cylindrical layers made from wafer-scale, bent sensors (Inner Tracking System 3, "ITS3"). At target thicknesses of 20-40 um, these sensors become flexible enough to be held in place using minimal mechanics made from carbon foam. The R&D for the central component of this development, the wafer-scale sensor, is being carried out together with the CERN EP R&D programme, and a first prototype submission in 65 nm is currently being produced. At the same time, electrical and mechanical mock-ups (using existing ALPIDE sensors and blank wafers) are used to verify the concept of bent MAPS. This contribution summarises the R&D roadmap, focussing on the sensor development, and gives an overview of results obtained from bent MAPS in beam.
For the HL-LHC upgrade, the current ATLAS Inner Detector will be replaced by an all-silicon system. The Pixel Detector will consist of 5 barrel layers and a number of rings, resulting in about 14 m2 of instrumented area. Due to the huge non-ionizing fluence (1e16 neq/cm2) and ionizing dose (5 MGy), the two innermost layers, instrumented with 3D pixel sensors (L0) and 100 μm thin planar sensors (L1), will be replaced after about 5 years of operation. All hybrid detector modules will be read out by novel ASICs, implemented in 65 nm CMOS technology, with a bandwidth of up to 5 Gb/s. Data will be transmitted optically to the off-detector readout system. To save material in the servicing cables, serial powering is employed for the low voltage.
Large scale prototyping programs are being carried out by all sub-systems.
The talk will give an overview of the layout and current status of the development of the ITk Pixel Detector.
The Mu3e experiment searches for the lepton flavour violating decay µ→ eee with an ultimate aimed sensitivity of 1 event in 10^16 decays. This goal can only be achieved by reducing the material budget per tracking layer to X/X0 ≈ 0.1 %. High-Voltage Monolithic Active Pixel Sensors (HV-MAPS) thinned to 50 µm serve as sensors. Gaseous helium is chosen as coolant. This talk presents results of recent studies related to the sensor prototypes, the helium cooling, and module prototyping. The recent chip submission MuPix10 has proven its functionality regarding efficiency and time resolution.
The helium cooling system for the inner tracker could be verified using a full-scale prototype. Both findings will be used this spring to operate demonstrator modules equipped with 6 sensors inside the Mu3e magnet.
The success of the Belle II experiment relies to a large part on the very high instantaneous luminosity, close to 8x10^35 cm^-2 s^-1, expected from the SuperKEKB collider. The beam conditions needed to reach such luminosity levels generate a large rate of background particles in the inner detection layers of Belle II, which exceeds by far the rate of particles stemming from elementary collisions. This beam-induced background creates stringent constraints on the vertex detector, in addition to the requirements coming from physics capability. The SuperKEKB accelerator and Belle II experiment started full operation in 2019, establishing in 2020 a world record with an instantaneous luminosity of 2.4x10^34 cm^-2 s^-1. The current Belle II vertex detector (VXD), made of a combination of DEPFET pixel sensors and Double-Sided Silicon Strip Detectors (DSSD), has been operating very satisfactorily. While efforts are still ongoing to mitigate beam-induced backgrounds, current prospects for the related occupancy rates in the VXD layers at full luminosity fall close to the acceptable limits of the employed technologies. To reach the nominal luminosity, parts of SuperKEKB, such as the final focusing magnets, will be modified, on a time scale currently predicted to be around Long Shutdown 2 in 2026. Thus, the Belle II collaboration is considering the possibility to install an upgraded VXD system on the same time scale. Such an upgrade should provide a sufficient safety factor with respect to the background rate expected at the nominal luminosity and possibly enhance the tracking and vertexing performance. Several technologies are under consideration for the upgrade. One approach consists in improving the performance of the technologies present in Belle II: faster DEPFET sensors for the innermost layers, thinner and more granular DSSDs for the remaining layers. New monolithic technologies for pixel sensors are also under discussion, namely SOI and CMOS.
They offer a combination of granularity, speed, low material budget and radiation tolerance that matches the Belle II requirements well, and could be exploited to design a fully pixelated VXD, benefiting also from significant developments made in recent years for other experiments. Following this last concept, both simplified and complete simulations have been conducted to evaluate the tracking and vertexing performance of various geometries (e.g. number of layers, addition of disks) and technical specifications (e.g. granularity, speed). This talk will review the context of the proposed VXD upgrade in Belle II, provide some details of the existing technological proposals, and discuss performance expectations from simulations.
A highly granular electromagnetic calorimeter based on scintillator strips with SiPM readout (Sc-ECAL) is under development within the CALICE collaboration for future electron-positron colliders such as the ILC and CEPC. A fully integrated technological prototype with 32 layers has been constructed to demonstrate the performance of the Sc-ECAL with a more realistic technical implementation. The assembly of the prototype has been completed and commissioning is in progress. The prototype is scheduled for beam tests at the DESY test beam facility this year. The status and prospects of the Sc-ECAL technological prototype will be reported.
We present the status of scintillator-ECAL development at Shinshu University. In particular, we are manufacturing and commissioning an ILC-type module that integrates the readout electronics and the scintillator sensors in a single layer. We have manufactured two such layers and fine-tuned their calibration. Currently, we are carrying out performance verification using cosmic-ray muons. We are also pursuing optimization of the scintillator-strip shape. In particular, we will discuss the shape and surface treatment of the dimple where the semiconductor photosensor is installed, based on uniformity measurements and simulation work.
The scintillator-based electromagnetic calorimeter (ScECAL) is one of the technology options for the ECAL at future electron-positron colliders. The performance of the double-sided SiPM readout method on scintillator strips, as well as the effect of strip-SiPM misalignment, has been studied in laboratory tests. The performance of a calorimeter with a realistic scintillator-strip design, incorporating the measured strip performance, is under study using the ILD model simulation. Preliminary results from these studies will be reported.
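One reason double-sided readout is attractive can be seen in a toy model: for a purely exponential light attenuation along the strip, the geometric mean of the two SiPM signals is independent of the hit position. The sketch below uses made-up parameter values and is not the actual ScECAL analysis:

```python
import math

def strip_response(x, length, att_len, light_yield):
    """Light seen at the two ends of a scintillator strip.

    Toy model: pure exponential attenuation with attenuation length att_len.
    x is the hit position measured from the strip centre (same units as length).
    """
    left = light_yield * math.exp(-(length / 2 + x) / att_len)
    right = light_yield * math.exp(-(length / 2 - x) / att_len)
    return left, right

def combined_signal(left, right):
    # Geometric mean: the exp(+x/att_len) and exp(-x/att_len) factors cancel,
    # leaving light_yield * exp(-length / (2 * att_len)) for any hit position x.
    return math.sqrt(left * right)
```

In this idealised model the combined signal is flat along the strip; in practice the attenuation is not purely exponential, which is one motivation for the uniformity measurements described above.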
An electromagnetic calorimeter based on scintillator strips with SiPM readout (Sc-ECAL) is one of the technology options for the ECAL at the International Linear Collider (ILC). The SiPM response becomes non-linear when a large amount of light is injected and the SiPM saturates. This saturation is being measured with a new method based on scintillation light excited by injecting UV laser light into the scintillator. The new method will be described, together with preliminary results from the measurements.
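The saturation referred to above is commonly described by a simple finite-pixel-count model; the function below is a generic textbook parametrisation for illustration, not necessarily the model used in this measurement:

```python
import math

def sipm_fired_pixels(n_photons, pde, n_pixels):
    """Expected number of fired SiPM pixels for n_photons incident photons.

    Simple saturation model: a pixel fires if it is hit by at least one
    detected photon (photon detection efficiency pde), so the response
    is linear at low light and asymptotes to n_pixels at high light.
    Crosstalk, afterpulsing and pixel recovery are ignored.
    """
    return n_pixels * (1.0 - math.exp(-n_photons * pde / n_pixels))
```

At low light yields this reduces to n_photons * pde, while for very large light injection the output flattens at n_pixels, which is the non-linearity the UV-laser method is designed to map out.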
I discuss how to employ collinear factorisation theorems for the computation of generic hard-production processes at e+e- colliders, in particular by stressing the strong analogies with their analogues routinely used in the context of LHC physics. I shall briefly describe some recent work on the universal behaviour associated with small-angle emissions in QED, which leads to the definition of "parton" distribution functions, and the work in progress on the implementation of these ideas into the automated MadGraph5_aMC@NLO framework.
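For orientation, the QED "parton" distribution mentioned above takes, at leading logarithmic accuracy, the standard textbook form (an $\mathcal{O}(\alpha)$ result quoted here for illustration, not a result of the talk):

$$f_{e/e}(x,\mu^2) \;=\; \delta(1-x) \;+\; \frac{\alpha}{2\pi}\,\ln\frac{\mu^2}{m_e^2}\left[\frac{1+x^2}{1-x}\right]_+ \;+\; \mathcal{O}(\alpha^2),$$

where $x$ is the momentum fraction carried by the electron after collinear photon emission and $\mu$ is the factorisation scale, in close analogy with QCD parton distribution functions at hadron colliders.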
Applications of Quantum Chromodynamics to collider phenomenology largely rely on factorization, the separation of universal low-energy dynamics from perturbative high-energy physics. Factorization of cross sections was originally established at leading power in an expansion in the ratio of these energies, but in view of precision physics the subleading terms become relevant. I will present some recent work on subleading-power factorization in the simpler case of QED amplitudes and its connection with alternative approaches.
WHIZARD is a multi-purpose Monte Carlo event generator very well suited for the simulation of lepton-collider physics.
In this talk, we report on recent theoretical and technical developments regarding the implementation of next-to-leading-order perturbative corrections and the UFO interface for using models beyond the Standard Model within WHIZARD.
A prototype of a digital pixel electromagnetic calorimeter, EPICAL-2, has been designed and constructed. It consists of a sandwich structure of tungsten absorber and silicon sensor layers, with a total thickness of approximately 20 radiation lengths and a cross section of $\mathrm{30\,mm\times30\,mm}$. This design is the next step in pixel calorimetry, following up on a previous prototype using MIMOSA sensors [1]. The new EPICAL-2 detector employs the ALPIDE pixel sensors developed for the ALICE ITS upgrade. The pixel size is $\mathrm{29.24\,\mu m\times26.88\,\mu m}$, and the full detector comprises ~25 million pixels. This R&D is performed in the context of the proposed Forward Calorimeter upgrade (FoCal) for the ALICE experiment, but it also serves the general understanding of the principle of a fully digital calorimeter.
We have used the Allpix2 framework [2] to perform Monte Carlo simulations of the detector response and the shower evolution in EPICAL-2. The detailed detector geometry was implemented, and simulation parameters were tuned to reproduce electron test-beam results. The general performance of EPICAL-2 for electromagnetic showers was investigated, particularly in terms of energy resolution and linearity, for both the total number of pixel hits and the total number of clusters. In addition, more detailed microscopic features of the shower development and the propagation of particles were studied.
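The energy resolution studied above is conventionally characterised by a stochastic term and a constant term combined in quadrature. A minimal sketch of this standard parametrisation follows; the parameter values in the usage note are illustrative, not EPICAL-2 results:

```python
import math

def relative_resolution(energy_gev, stochastic, constant):
    """Relative energy resolution sigma_E / E of a calorimeter.

    Standard parametrisation: the stochastic term scales as 1/sqrt(E)
    (shower fluctuations, sampling), the constant term is E-independent
    (calibration, non-uniformities); the two are added in quadrature.
    A noise term ~1/E is omitted for simplicity.
    """
    return math.sqrt((stochastic / math.sqrt(energy_gev)) ** 2 + constant ** 2)
```

For example, with a hypothetical stochastic term of 25%/sqrt(E) and a 2% constant term, the relative resolution at 4 GeV would be about 12.7%, dominated by the stochastic contribution; at high energy the constant term takes over.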
[1] JINST 13 (2018) P01014.
[2] Nucl. Instrum. Meth. A 901 (2018) 164-172.
The Semi-Digital Hadronic Calorimeter (SDHCAL) is proposed to equip the future ILD detector at the ILC. A technological prototype of the SDHCAL, developed within the CALICE collaboration, has been extensively tested in test beams. The talk will summarize the prototype's performance in terms of hadronic-shower reconstruction, based on the most recent analyses of test beam data.
The Analog Hadron Calorimeter (AHCAL) concept developed by the CALICE collaboration is a highly granular sampling calorimeter with 3x3 cm^2 plastic scintillator tiles, individually read out by silicon photomultipliers (SiPMs), as the active material. We have built a large, scalable engineering prototype with 38 layers in a steel absorber structure with a thickness of ~4 interaction lengths. The prototype was exposed to electron, muon and hadron beams at the DESY and CERN test beam facilities in 2018. The high granularity of the detector allows detailed studies of shower shapes and shower separation with the PandoraPFA particle-flow algorithm, as well as studies of hit times. The large amount of information also makes the detector an ideal testbed for machine-learning algorithms.
The presentation will give an overview of the ongoing analyses.