The DAE-BRNS High Energy Physics (HEP) Symposium is a premier event held every two years in India, supported by the Board of Research in Nuclear Sciences (BRNS), Department of Atomic Energy (DAE), India. The symposium will consist of parallel and invited plenary sessions. A poster session will also be held to provide an opportunity for young researchers to showcase their research. The deliberations, from both experimental and theoretical perspectives, are expected to cover a variety of topics in particle physics, astroparticle physics, cosmology, heavy-ion physics and related areas. This year, the XXV edition is being hosted by IISER Mohali.
Duration of Plenary talks: 30 (25+5) minutes
Duration of Mini review talks: 25 (22+3) minutes
Duration of Parallel talks: 15 (13+2) minutes
Both the poster and oral presentations will be part of the proceedings.
Poster size: A1 (portrait)
The Large Hadron Collider beauty (LHCb) detector is a single-arm forward spectrometer at the LHC, designed for the study of heavy flavour physics. In this review, an overview of the detector performance and a few recent results in the field of beauty and charm physics are presented. The LHCb experiment has also undergone a major upgrade in preparation for Run 3 of the LHC; the talk will also highlight the activities undertaken during this period.
The study of the photodisintegration of $^7Li$ is of importance to nuclear physics, particle physics and astrophysics. The primordial abundances of light elements such as $D$, $^3He$, $^4He$ and $^7Li$ are predicted by the Big Bang theory of the early universe and are of great interest to cosmologists. Lithium, being fragile, is destroyed easily at relatively low temperatures. WMAP measurements imply a $^7Li$ abundance two to three times larger than that inferred from low-metallicity halo stars [1]. In recent years, a series of experimental measurements on lithium isotopes has been carried out using the High Intensity Gamma-Ray Source (HIGS) at the Duke Free Electron Laser Laboratory. Experiments [2,3] measured the differential cross section of the photoneutron channel in the photodisintegration of $^7Li$, with the progeny nucleus in the ground state as well as in excited states. The photodisintegration of the deuteron was studied theoretically using a model-independent formalism [4-7], and these studies clearly showed that three different $E1_\nu$ amplitudes can lead to the final relative n-p state. Subsequently, evidence for the existence of these three amplitudes was found in experimental studies [6] at slightly higher energies in a different context.
Using the same approach, a model-independent formalism was developed for the photodisintegration of $^7Li$ [8] and an analysis was carried out to study the differential cross section with linearly polarized photons. Extending this study, we propose to discuss the reaction channel $^7Li+\gamma \to$ $^6Li+n$ with initially circularly polarized photons.
References
[1] Alain Coc et al., ApJ 600 (2004) 544.
[2] W. A. Wurtz et al., Phys. Rev. C 84 (2011) 044601.
[3] W. A. Wurtz et al., Phys. Rev. C 92 (2015) 044603.
[4] G. Ramachandran and S. P. Shilpashree, Phys. Rev. C 74 (2006) 052801(R).
[5] G. Ramachandran, Yee Yee Oo and S. P. Shilpashree, J. Phys. G: Nucl. Part. Phys. 32 (2006) B17-B21.
[6] S. P. Shilpashree, Swarnamala Sirsi and G. Ramachandran, Int. J. Mod. Phys. E 22 (2013) 1350030.
[7] S. P. Shilpashree, Physica Scripta 97 (2022) 7.
[8] Aswathi V., Venketaramana Shastri and S. P. Shilpashree, J. Phys.: Conf. Ser. 2156 (2021) 012213.
In recent years, cosmological experiments like PLANCK-2018 [1,2] and BICEP/KECK [3] have shown the efficacy of single-field slow-roll inflaton potentials in explaining various experimental parameters regarding LSS, CMBR anisotropy and polarization data with significant precision. Therefore, obtaining a low energy effective inflationary theory consistent with such a class of potentials from superstring theory has been the subject of major efforts, although it is in serious tension with the swampland conjecture [4] and the trans-Planckian censorship conjecture (TCC) [5]. In this paper, we propose that such a connection is in principle possible if we stabilize all Kähler moduli by incorporating several perturbative and non-perturbative quantum corrections in the Kähler potential and superpotential respectively, together with a suitable uplifting mechanism and a novel canonical normalization technique. Our framework is based on $10d$ type-IIB superstring theory compactified on a $T^6/Z_N$ Calabi-Yau (CY) orientifold, equipped with three magnetized non-interacting and intersecting $D7$ branes, $O7$ planes and non-trivial quantised $RR$ and $NS$ closed $3$-form world volume fluxes threading the $4$-cycles of the $CY$-volume. The perturbative corrections arising from the $\alpha'^3$ expansion in LVS [6] and multi-graviton scattering up to one loop with log-correction [7], together with non-perturbative corrections related to the $E3$-instanton [8] and gaugino condensation [9], break the supersymmetric no-scale structure, giving an $F$-term $AdS_4$ potential which is dynamically uplifted by a $D$-term potential originating from $U(1)$ charges of the $D7$ branes in the gravitational sector, thereby providing the inflaton potential after normalization. All the parameters of the derived $dS_4$ potential are carefully tuned to maintain the inflationary plateau region. Cosmological parameters are obtained by a $k$-space analysis of cosmological perturbations using the dynamical horizon exit method [10] and are found [11] to be consistent with PLANCK and BICEP/KECK constraints, viz., $n_s=0.9652-0.9662$, $r=5.8\times 10^{-4}-6.2\times 10^{-4}$, $N=55.0-56.7$, $n_t=(-7.28\times 10^{-5})-(-7.76 \times 10^{-5})$ at $k=0.001-0.009$ Mpc$^{-1}$.
References:
1. AKRAMI Y. et al., Astron. Astrophys. 641 (2020) A10.
2. AGHANIM N. et al., Astron. Astrophys. 641 (2020) A6.
3. ADE P. A. R. et al., Phys. Rev. Lett. 127 (2021) 151301.
4. AGARWAL P., OBIED G., STEINHARDT P. J. and VAFA C., Phys. Lett. B 784 (2018) 271.
5. BEDROYA A. and VAFA C., JHEP 09 (2020) 123, arXiv:1909.11063.
6. BECKER K., BECKER M., HAACK M. and LOUIS J., JHEP 06 (2002) 060.
7. ANTONIADIS I., CHEN Y. and LEONTARIS G. K., JHEP 01 (2020) 149.
8. BAUME F., MARCHESANO F. and WIESNER M., JHEP 04 (2020) 174.
9. HAACK M., KREFL D., LUST D., VAN PROEYEN A. and ZAGERMANN M., JHEP 01 (2007) 078.
10. SARKAR A., SARKAR C. and GHOSH B., JCAP 11 (2021) 029.
11. LET A., SARKAR A., SARKAR C. and GHOSH B., EPL 139 (2022) 59002.
Institutional mail: adsarkar@scholar.buruniv.ac.in
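For orientation, the $n_s$, $r$, $N$ and $n_t$ values quoted above can be connected, at leading order in single-field slow roll, to the slow-roll parameters of the derived potential. The standard relations (a sketch only; the abstract's actual numbers come from a full $k$-space analysis with the dynamical horizon exit method) are
$$ \epsilon_V=\frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^2,\qquad \eta_V=M_{\rm Pl}^2\,\frac{V''}{V},\qquad N\simeq\frac{1}{M_{\rm Pl}^2}\int_{\phi_{\rm end}}^{\phi_*}\frac{V}{V'}\,d\phi, $$
$$ n_s\simeq 1-6\epsilon_V+2\eta_V,\qquad r\simeq 16\epsilon_V,\qquad n_t\simeq -2\epsilon_V=-\frac{r}{8}. $$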
The production of W± and Z bosons is extensively studied at hadron colliders since it represents an important benchmark of the Standard Model (SM). At LHC energies, measurements of W± and Z-boson production in p-p collisions have been performed at √s = 8 and 13 TeV. Electroweak theory and Quantum Chromodynamics (QCD) calculations at Next-to-Leading Order (NLO) and Next-to-Next-to-Leading Order (NNLO) in perturbation theory describe these measurements well. The study of p-p collisions provides a valuable test bench for validation of the analysis strategy in p-Pb and Pb-Pb collisions [1, 2]. The study using the muonic channel in p-p collisions in the ALICE experiment has been done at √s = 8 TeV only. The POsitive Weight Hardest Emission Generator (POWHEG) is used to generate hard processes such as electroweak boson (W±, Z) production up to NLO accuracy. In this work, a simulation study of W± and Z-boson production via the muonic channel is done using the PYTHIA8 event generator at energies below and above √s = 8 TeV, i.e. at √s = 5.02 and 13 TeV. The PYTHIA8 event generator has recently been upgraded with the addition of perturbative NLO calculations [3]. The simulation results can be compared with the results obtained from POWHEG calculations.
[1] ALICE Collaboration, JHEP 02 (2017) 077.
[2] ALICE Collaboration, JHEP 09 (2020) 076.
[3] T. Sjostrand, Comput. Phys. Commun. 246 (2020) 106910.
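As an illustration of the kind of simulation described above, the following minimal sketch configures the PYTHIA8 Python interface for W/Z production with muonic decays at √s = 13 TeV. The settings shown are standard PYTHIA 8 flags; the event selection, NLO treatment and comparison with POWHEG used in the actual study are not reproduced here, and the pT threshold is an illustrative assumption.

import pythia8

def generate_wz_muons(ecm_gev=13000.0, n_events=1000):
    pythia = pythia8.Pythia()
    pythia.readString("Beams:eCM = %.1f" % ecm_gev)       # e.g. 5020., 8000., 13000.
    pythia.readString("WeakSingleBoson:ffbar2gmZ = on")   # Z/gamma* production
    pythia.readString("WeakSingleBoson:ffbar2W = on")     # W+/- production
    # Restrict Z and W decays to the muonic channel
    pythia.readString("23:onMode = off")
    pythia.readString("23:onIfAny = 13")
    pythia.readString("24:onMode = off")
    pythia.readString("24:onIfAny = 13")
    pythia.init()

    muon_pts = []
    for _ in range(n_events):
        if not pythia.next():
            continue
        for p in pythia.event:
            # Keep final-state muons above an illustrative pT threshold
            if p.isFinal() and abs(p.id()) == 13 and p.pT() > 10.0:
                muon_pts.append(p.pT())
    return muon_pts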
The quark-hadron transition that happens in heavy-ion collisions is likely influenced by the effects of rotation and magnetic field, both present due to the geometry of a generic non-head-on impact. The simultaneous imposition of these two phenomenological parameters would lead to a modification of the conventional phase diagram for QCD matter. We explore the deconfinement transition between quark-gluon plasma and hadron gas to map the continuous crossover region in a multi-dimensional domain spanned by baryon chemical potential, external magnetic field and angular velocity as phase space coordinates. By utilizing a method involving the statistical hadronization model, the deconfinement temperature estimate is obtained and observed to decrease nearly monotonically along each of the axes, the drop being most pronounced when all three quasi-control parameters (collision energy and centrality dependent) take on high values. Some interesting implications of our novel results are outlined along with suggested directions for further research.
Any grand unified model is plagued with particles capable of inducing proton decay. Identifying all potential scalar proton decay mediators stemming from different irreducible representations of SO(10), we will show their couplings with the Standard Model fermions and the tree-level contributions of the effective strengths of $B-L$ conserving ($d=6$) and $B-L$ violating ($d=7$) operators to the proton decay width. Through the computed branching ratios of various decay modes of the proton in a realistic SO(10) model based on $10_H$ and $\overline{126_H}$, we will enumerate distinct features of scalar-mediated proton decay, including bounds on the masses of the proton decay mediators.
Feynman integrals at any order of perturbation theory satisfy the Gelfand-Kapranov-Zelevinsky (GKZ) system of partial differential equations. In an ongoing collaboration, we present the automation of two techniques, namely the Groebner deformation method and the method of triangulations of point configurations, to solve such equations arising in the context of Feynman integrals, in the form of Mathematica packages, with support from specialised software such as Macaulay2 and TOPCOM. The requisite A-matrix of a Feynman integral can be obtained from the package either via the Lee-Pomeransky representation or the PDE associated with the Feynman integral. As applications, we show that our package allows one to compute both NLO and NNLO Feynman integrals and express the results as multivariate hypergeometric functions.
We investigate two-body nonleptonic weak decays of the bottom meson involving heavy-to-heavy meson transitions into pseudoscalar and axial-vector mesons. The form factors, decay amplitudes, and branching ratios of CKM-favoured and suppressed modes are calculated in relativistic and non-relativistic frameworks within the factorization hypothesis. We find that the branching ratios of several decays are of the same order as the measured experimental values. In the heavy quark limit, these results favor higher contributions from color-suppressed diagrams in order to explain the existing experimental data.
We present a background model for the TEXONO experiment, which is situated in the Kuo-Sheng Neutrino Laboratory inside a 50-ton passive shielding house. The model includes background contributions from both internal and external contaminations. We adopt a Geant4-based simulation framework to develop the background model, taking into account contributions from nine radioactive nuclides, $^{40}$K, $^{208}$Tl, $^{210}$Pb, $^{212}$Bi, $^{212}$Pb, $^{214}$Bi, $^{226}$Ra, $^{228}$Ac and $^{234}$Th, which are identified from the experimental reference data. Airborne radioactive nuclides related to the reactor are also included in this study. In order to include the cosmic-ray-induced background in the model, intensive studies are in progress.
The mass spectrum of bottomonium ($b\bar{b}$) is calculated using the Cornell potential in a non-relativistic framework, with spin-dependent corrections corresponding to the spin-orbit, spin-spin and tensor interactions added perturbatively. The radial and orbital Regge trajectories are also studied. Further, we estimate the wave function at the origin to predict the decay widths of bottomonium states annihilating to leptons and photons. The obtained masses of the bottomonium states are compared with and found to be in excellent agreement with experimental results. We also investigate the effect of the various model parameters on the predicted annihilation decay widths.
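For reference, the Cornell potential and the leading-order leptonic width used in such non-relativistic analyses take the standard forms (a sketch of the textbook expressions; the specific parameter values and spin-dependent terms used in the abstract may differ):
$$ V(r) = -\frac{4}{3}\frac{\alpha_s}{r} + b\,r, \qquad \Gamma(\Upsilon \to e^+e^-) = \frac{16\pi\alpha^2 e_b^2}{M_\Upsilon^2}\,|\psi(0)|^2\left(1-\frac{16\alpha_s}{3\pi}\right), $$
with $e_b = -1/3$ and $|\psi(0)|^2$ the squared wave function at the origin.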
Quantum gravity has been studied using various approaches, and all of these approaches introduce a fundamental length scale in the theory. Non-commutative space-time is an approach which incorporates this fundamental minimum length scale naturally. Though the length scale at which the Casimir effect is measured and the scale at which quantum gravity effects are expected are very different, it is worth studying the possible modification of the Casimir effect due to space-time non-commutativity. The Casimir effect is the phenomenon wherein a physical force between macroscopic boundaries confining space, such as the ones introduced by placing two parallel plates, arises due to the vacuum fluctuations of a quantized field. It has been shown that vacuum fluctuations of the quantized electromagnetic field lead to either an attractive or a repulsive force between the plates, depending on their geometry. The effects of the existence of a minimal length scale and of the presence of extra dimensions on the Casimir effect have been studied in recent times. Thus it is of intrinsic interest to study the Casimir effect in Doplicher-Fredenhagen-Roberts (DFR) space-time, a non-commutative space-time that naturally introduces a minimum length scale and also has extra dimensions.
Here we study the Casimir effect by analyzing the vacuum fluctuations of a scalar field in a Lorentz invariant non-commutative space-time, the DFR space-time. This is done by studying the scalar field in the presence of two parallel plates, separated by a distance and modelled by two $\delta$-functions. We calculate modifications to the Casimir force and Casimir energy at both zero and finite temperature. This is done in two ways: first by treating the extra spatial dimensions introduced in the DFR space-time in the same manner as the usual spatial dimensions of commutative space-time, and second by treating the extra dimension as a compact dimension.
References:
[1] S. Doplicher, K. Fredenhagen and J. E. Roberts, Phys. Lett. B 331 (1994) 29.
[2] C. E. Carlson, C. D. Carone and N. Zobin, Phys. Rev. D 66 (2002) 075001.
[3] R. Amorim, Phys. Rev. D 78 (2008) 105003.
[4] E. M. C. Abreu, A. C. R. Mendes, W. Oliveira and A. Zagirolamim, SIGMA 6 (2010) 083.
[5] H. B. G. Casimir, Koninkl. Ned. Akad. Wetenschap. Proc. 51 (1948) 793.
[6] K. A. Milton, J. Phys. A: Math. Gen. 37 (2004) R209.
[7] K. A. Milton, The Casimir Effect: Physical Manifestations of Zero-Point Energy (World Scientific, Singapore, 2001).
[8] I. Brevik, S. A. Ellingsen and K. A. Milton, New J. Phys. 8 (2006) 236.
[9] E. Harikumar, S. K. Panja and V. Rajagopal, Nucl. Phys. B 950 (2020) 114842.
[10] E. Harikumar and S. K. Panja, "Casimir effect in DFR space-time", https://arxiv.org/abs/2110.05004.
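For comparison, the commutative-space-time baseline against which such noncommutative corrections are usually quoted is the standard Casimir result for two parallel plates separated by a distance $a$ (a reference expression only; the DFR-space-time modifications studied above are corrections to this):
$$ \frac{E}{A} = -\frac{\pi^2 \hbar c}{720\, a^3}, \qquad \frac{F}{A} = -\frac{\pi^2 \hbar c}{240\, a^4} $$
for the electromagnetic field, with a massless scalar field subject to Dirichlet conditions contributing half of these values.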
In the work reported in this paper, we have analyzed the generalized Chaplygin gas (GCG) and the modified generalized Chaplygin gas (MGCG) in an interacting scenario. The equation of state parameter has been analyzed in both cases, and the stability of the models has been discerned through the squared speed of sound. Stability against gravitational perturbations has been observed for both GCG and MGCG interacting with pressureless dark matter. Also, the generalized second law (GSL) of thermodynamics has been tested for different enveloping horizons and its validity has been observed throughout. Furthermore, $f(T)$ gravity has been reconstructed with GCG and MGCG, and phantom behaviour has been observed through the reconstructed EoS parameters. The squared speed of sound has been derived for $f(T)$ gravity and the stability of the model has been established through its positivity.
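As a reminder of the quantities involved, the GCG and MGCG equations of state and the corresponding squared speed of sound used for such stability analyses are (standard definitions; the interacting and reconstructed cases in the abstract modify these through the interaction term):
$$ p_{\rm GCG} = -\frac{A}{\rho^{\alpha}}, \qquad p_{\rm MGCG} = B\rho - \frac{A}{\rho^{\alpha}}, \qquad v_s^2 = \frac{dp}{d\rho}, $$
so that $v_s^2 = \alpha A/\rho^{\alpha+1}$ for GCG and $v_s^2 = B + \alpha A/\rho^{\alpha+1}$ for MGCG, with stability requiring $v_s^2 \ge 0$.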
Jet substructure modification due to different aspects of jet quenching is studied using jet shape and jet fragmentation observables. The jet shape contains information about the transverse energy distribution inside a jet, and the jet fragmentation function describes the longitudinal momentum distribution of hadrons inside a reconstructed jet. These measurements provide insight into the jet quenching process in the quark-gluon plasma created in the aftermath of ultra-relativistic collisions between two nuclei. We investigate events produced in Au+Au collisions at $\sqrt{ s_{NN}}$ = 200 GeV and Pb+Pb collisions at $\sqrt{ s_{NN}}$ = 5.02 TeV to explore the dependence of these modifications on centrality, in combination with various energy-loss mechanisms, using the JETSCAPE framework. The JETSCAPE framework provides a modular and extensible Monte Carlo event generator for the simulation of high-energy nuclear collisions. We further examine the coherence effects in heavy-ion collisions that reduce the medium-induced emission rate at RHIC and LHC energies.
We use an anti-de Sitter/Quantum Chromodynamics (AdS/QCD) based holographic light-front wavefunction for the $J/\psi$ meson, in conjunction with the color dipole model cross-section, to investigate the cross-section data for exclusive $J/\psi$ electroproduction. We use the dipole model parameters fitted to the most recent 2015 high precision HERA data on inclusive Deep Inelastic Scattering (DIS). Our results suggest that the holographic meson light-front wavefunction together with the color dipole model gives a successful description of the rate of diffractive $J/\psi$ electroproduction for HERA data at small $x$ in a wide range of $Q^2$ for the quark mass $m_c= 1.27 $ GeV. We also compute the rapidity distributions of the $J/\psi$ meson in proton-lead ultraperipheral collisions (UPC) within the dipole model. Our predictions are in good agreement with the experimental data from ALICE.
In the context of the formation of quark-gluon plasma, whether the anisotropic flow in small collision systems has the same underlying origin as that in heavy ions continues to be a matter of debate. Although the measurements of two- and multi-particle correlations apparently suggest that azimuthal anisotropy in small systems is a consequence of collective excitation driven by the initial spatial asymmetry, the effects originating from few-particle correlations (non-flow), however, cannot be completely ruled out. In fact, it has been realized that non-flow effects can bias these measurements in such a way that they can fake the signatures of collective dynamics even if it is absent.
In this work, we explore such biases in pp collisions by sampling jet-enriched event samples simulated with the Pythia8 event generator with multiple parton interactions (MPI) enabled. We will present results for n-particle (n >= 2) cumulants as a function of charged-particle multiplicity for different event categories ranging from "jetty" to isotropic, and show a model-dependent quantification of the bias introduced by jet or jet-like non-flow effects.
In the light of various CMB missions, the potential offered by AdS swampland conjectures is investigated. Recent CMB observations bound the sixth-order self-coupling of the inflaton field in the AdS swampland conjectures. Current observations cannot rule out an inflaton field arising from the AdS swampland conjectures.
The experimental measurements of the LFU ratios $R_{D^{(*)}}$, $R_{K^{(*)}}$ and $R_{J/\psi}$ strongly hint at the presence of new physics beyond the standard model in $b\rightarrow c\ell\nu_\ell$ and $b\rightarrow s\ell\ell$ transitions, as these values show a tension of about $(2-3)\sigma$ with their standard model predictions. In this work, we investigate the possible manifestation of new physics in $b \rightarrow u \ell \nu_\ell$ transitions and its effects on various semileptonic $q^2$-spectra. In particular, we analyse the decay modes $B^{+}\rightarrow \eta^{(\prime)}\ell^+\nu_\ell$ within a standard model effective field theory approach. We use the available experimental data on various leptonic and semileptonic decays to constrain the parameter space of the relevant new physics couplings.
In the context of the quark mixing phenomenon, understanding the Cabibbo-Kobayashi-Maskawa (CKM) matrix and related phenomenology has important implications for flavor physics, including the discovery of physics beyond the Standard Model. With continuous refinements in the data, the CKM phenomenology has been saddled with several persistent 'anomalies' which motivate one to explore some aspects of the CKM matrix. For example, it has been observed that there is a persistent $3\sigma$ difference between the $(V_{ub})_{excl.}$ and $(V_{ub})_{incl.}$ values. Similarly, it has been found that $V_{cb}$, although quite precisely known, exhibits a significant divergence between the $(V_{cb})_{excl.}$ and $(V_{cb})_{incl.}$ values. Also, a divergence has recently been noticed in the unitarity constraints on the first row of the CKM matrix.
One possible way to explain this divergence, perhaps, is to go to a fourth generation of quarks, for which there is no theoretical bar. In the present work, we attempt to constrain the fourth row and fourth column of the CKM matrix. Using the presently refined data on the CKM elements, our preliminary results put upper limits on the elements of the fourth row and fourth column, e.g.,
$$ \begin{pmatrix}
0.9734-0.9740 & 0.2229-0.2261 & 0.0033-0.0043 & \le 0.0529\\
0.2130-0.2290 & 0.9650-1.0090 & 0.0382-0.0438 & \le 0.1480\\
0.0074-0.0086 & 0.0366-0.0410 & 0.9530-1.0730 & \le 0.3006\\
\le 0.0841 & \le 0.1331 & \le 0.3005 & \le 0.9999
\end{pmatrix}. $$
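For context, one way such upper limits typically arise (a sketch of the reasoning only, not the full fit performed in the abstract) is from the unitarity of the enlarged $4\times 4$ mixing matrix; for the first row, for example,
$$ |V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 + |V_{ub'}|^2 = 1 \;\Rightarrow\; |V_{ub'}| \le \sqrt{1 - |V_{ud}|^2 - |V_{us}|^2 - |V_{ub}|^2}, $$
with analogous relations for the other rows and columns.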
The minimal $U(1)_{L_\mu-L_\tau}$ extended Standard Model (SM) is well motivated and accommodates the discrepancy between the theoretical prediction and the experimental observation of the muon anomalous magnetic moment. We study the possibility of identifying the Beyond Standard Model (BSM) Higgs of the $U(1)_{L_\mu-L_\tau}$ sector (otherwise required to break the additional symmetry) as the inflaton in the early universe. Within this framework, the BSM Higgs inflaton needs to be non-minimally coupled to gravity to satisfy the Planck-2018 CMB constraints. Although the structure so far seems to be trivial, we observe that studying the cosmological history from inflation through the reheating phase to late times, along with the $n_s-r$ constraints, leaves us with a small window of possible reheating temperatures, which is also a function of the $L_\mu-L_\tau$ model parameters. The combined requirements of satisfying the Planck 2018 bounds and solving $(g-2)_\mu$ restrict the mixing between the inflaton and the SM Higgs. This in turn makes our scenario falsifiable at upcoming lifetime frontier experiments like FASER, depending on the choice of the inflaton-gravity non-minimal coupling.
In this work we report a study of the viscous generalized Chaplygin gas (GCG) in the presence of bulk viscosity and in an interacting scenario. Reconstruction schemes have been presented in the Einstein and modified $f(T)$ gravity frameworks. Non-viscous cases have also been taken into account. The equation of state (EoS) parameter has been studied under various circumstances and the stability of the models has been shown through the sign of the squared speed of sound. The GCG interacting with pressureless dark matter has been found to behave like quintom in the presence of bulk viscosity in Einstein's framework, and in the $f(T)$ gravity scenario a phantom-like behaviour has been reported. The equation of state parameter has been studied for this reconstructed model along with the deceleration parameter and the statefinder pair {r, s}. The statefinder trajectory has been found to interpolate between the dust and $\Lambda$CDM phases of the universe. The cosmological evolution of primordial perturbations has been studied through scalar metric fluctuations in Einstein's as well as the $f(T)$ gravity framework. Finally, the reconstruction scheme has been examined using statistical analysis, Shannon entropy and a Gaussian Mixture Model (GMM). In the GMM analysis we have created different clusters from the sets of data that appear close together, and the GMM provides a unification of the early- and late-time universe. In addition to the reconstruction in the $f(T)$ gravity framework, we have checked the possibility of avoiding a big rip, alongside a statistical exploration of the model.
A Cosmic Muon Veto Detector (CMVD) is being built around the existing RPC-based Mini-Iron Calorimeter (Mini-ICAL) to study the feasibility of a shallow-depth neutrino experiment. The CMVD uses 4.5 m long extruded plastic scintillator strips. A Di-Counter, made up of two extruded scintillator strips, is the basic building block of the CMVD. Two fibres embedded along the length of the strips carry the scintillation photons to both sides, where Hamamatsu SiPMs (S13360-2050VE) sense the photons and produce a pulse with charge proportional to the energy deposited by the passing muon. A total of 380 Di-Counters will be required to cover four sides of the Mini-ICAL. The front face of the Mini-ICAL is not covered by the CMVD for operational reasons. Each Di-Counter consists of 8 SiPMs, which adds up to 3040 readout channels. A charge resolution of 10 fC and a dynamic range of 100 pC are required to identify and veto true muon events from the background signals in the SiPMs.
The proposed readout electronics system will acquire charge and timing information from all 3040 SiPMs on the Mini-ICAL trigger. The extrapolated muon trajectory will be compared with the signal in the CMVD to measure the veto efficiency. A Data Acquisition (DAQ) module is being designed to read out 40 SiPM channels. The raw SiPM signals are transmitted to the DAQ module using HDMI cables, and the DAC-generated bias voltage for the SiPMs is supplied over the same cables. Each SiPM signal is first amplified with a trans-impedance amplifier of gain 1200 Ω and its pulse profile is digitised at 1 GHz using a DRS4 ASIC. Since a DRS4 channel has 1024 cells, the SiPM pulse profile for the last 1024 ns (~1 µs) is available at any point of time. Considering a Mini-ICAL trigger latency of 300 ns, a digitization window of more than 100 ns is available for muon pulses from the CMVD detector. A 12-bit pipelined ADC is used to digitize the pulse profile of the SiPM signals. A zero-suppression logic is used to filter out data from channels with no hits. An Ethernet controller interfaced with an FPGA is used to handle the data communication between the DAQ module and a backend server.
This paper will briefly introduce the CMVD detector. A detailed description of the readout scheme of the detector along with the expected performance parameters of the scheme will be presented.
Gas Electron Multiplier (GEM) detectors have been used in various applications because of their outstanding spatial and time resolutions, high-rate handling capabilities, and design flexibility. GEM detectors are a potential instrument for nuclear and particle physics studies. The GEM detector operates with environmentally friendly and sustainable gas mixtures. The purity and quality of the gases are essential for the stable operation of GEM detectors. Impurities in the gas mixture and ambient conditions significantly impact the detector's operational performance. In this talk, we will discuss the ever-expanding use of sensors that offer more practical and affordable systems for monitoring environmental conditions inside detectors and hence allow us to control their performance. The sensor has been integrated with a monitoring system and tested in the ambient conditions deep inside the detector's volume. Data from these tests were used to evaluate the performance of the detector, refine the results, and validate their applicability under variable conditions. Gas chromatography of the flushed gas has been done to establish the ageing effect and the ways to mitigate it. A correlation between the gain and various parameters like temperature, pressure, humidity, and gas contamination has been established, which proved extremely critical for the optimal performance of the detector.
The problem of late-time cosmic acceleration is one of the critical issues in the scientific community. Various theoretical models that predict the acceleration in the late-time phase have been presented. Among these models, the non-canonical scalar field models have gained a lot of popularity in recent years. The tachyon field model is one of these models that has been studied in detail by adopting dynamical stability techniques. Usually, the dynamics of the tachyon field coupled minimally or non-minimally with a barotropic fluid are studied by considering some form of field potential. With this conventional approach, there exists a variety of choices for the form of the potential which give rise to similar dynamics. As a result, it is hard to distinguish one form of potential from another. This study focuses on the dynamics of the tachyon field (canonical and phantom) coupled minimally with a barotropic fluid by choosing some parametrization of the equation of state (EoS) of the tachyon field (ref. arXiv:2208.10352). These parameterized equations of state have standard forms and are time-dependent. One case is the Taylor series expansion of the EoS near the present time, which shows some severe cosmological limitations in the phase space dynamics. Another parameterized equation of state is Hubble parameter-dependent and resembles the dark energy equation of state when the Hubble parameter becomes constant. In contrast, the other proposed EoS parametrization produces stable phase space dynamics in the late-time cosmic acceleration phase.
Despite the discovery of the Higgs boson, the Higgs sector of the standard model is still not fully established. In particular, the self-couplings of the Higgs boson, and its couplings with gauge bosons, are still to be fully determined. We consider electroweak corrections to the process $H\rightarrow ZZ\rightarrow 4l$. The corrections depend on the $HHH$ and $ZZHH$ couplings. We investigate this dependence in the $\kappa$-framework. We find that the width depends significantly on the $HHH$ coupling, while the dependence on the $ZZHH$ coupling is only marginal. We also discuss the dependence on the $ZZWW$ coupling.
The signature of noncommutativity on various measures of entanglement has been observed by considering the holographic dual of noncommutative super Yang-Mills theory. We have followed a systematic analytical approach in order to compute the holographic entanglement entropy corresponding to a strip-like subsystem of length $l$. The relationship between the dimensionless subsystem size $l/a$ and the dimensionless turning point introduces a critical length scale $l_c/a$ which leads to three domains in the theory, namely, the deep UV domain ($l < l_c$; $a u_t \gg 1$; $a u_t \sim a u_b$), the deep noncommutative domain ($l > l_c$; $a u_b > a u_t \gg 1$) and the deep IR domain ($l > l_c$; $a u_t < 1$). This in turn means that the length scale $l_c$ distinctly points out the UV/IR mixing property of the non-local theory under consideration. We have carried out the holographic study of the entanglement entropy for each domain by employing analytical and numerical techniques. We then compute the minimal cross-section area of the entanglement wedge by considering two disjoint subsystems A and B. On the basis of the $E_P = E_W$ duality, this leads to the holographic computation of the entanglement of purification. The correlation between the two subsystems, namely the holographic mutual information $I(A:B)$, has also been computed. Moreover, the computations of $E_W$ and $I(A:B)$ have been done for each of the domains in the theory. We have then briefly discussed the effect of the UV cut-off on the IR behaviour of these quantities.
Emission properties of astrophysical objects such as neutron stars are determined using their mass, pressure profile and thermal cooling rate. In the current work, we determine the cooling rate of a spherically symmetric neutron star as a function of time and distance from the star's centre using the NSCool code. We first find the mass, pressure and baryon number density profiles of non-rotating neutron stars using the modified Tolman-Oppenheimer-Volkoff (TOV) system of equations in the presence of an intense magnetic field. We use both a constant magnetic field and a distance-dependent magnetic field in the TOV equations to obtain the profiles. We employ three different equations of state to solve the TOV equations, assuming that the core of the neutron star is composed of hadronic matter. Employing the above profiles, we obtain the cooling rate with and without a magnetic field to examine the effect of the magnetic field for the three equations of state. Observed temperatures of a few neutron stars have also been plotted along with the calculated values for comparison. Finally, the emissivity of axions, as dark matter candidates, has been calculated as a result of the nucleon bremsstrahlung mechanism with and without a magnetic field.
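For reference, the unmodified TOV equations underlying such profiles read (in units $G = c = 1$; the magnetic-field-modified system used in the abstract contains additional field-dependent terms):
$$ \frac{dP}{dr} = -\frac{\left(\varepsilon + P\right)\left(m + 4\pi r^3 P\right)}{r\left(r - 2m\right)}, \qquad \frac{dm}{dr} = 4\pi r^2 \varepsilon, $$
integrated outward from the centre until $P(R) = 0$ defines the stellar radius $R$ and mass $M = m(R)$.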
The energy dependence of information entropy is examined using the multiplicity distributions (MDs) of produced charged particles in pp collisions at ISR, SPS and LHC energies. The findings reveal that the MDs at these energies exhibit a new type of scaling if the variable involved is the information entropy of the distribution, $S = -\sum P_n \ln P_n$. Similar entropy scaling has also been observed in AA collisions at AGS and SPS energies. The observed entropy scaling has been argued to be a special case of the more general multifractal structure. The analysis is extended to estimate Rényi's order-$q$ information entropy, $I_q = \frac{1}{q-1} \ln\left(\sum P_n^q\right)$, for $q \neq 1$; for $q = 1$, $\lim_{q \rightarrow 1} I_q = I_1 = S$. This in turn gives the generalized dimensions of order $q$ as $D_q = I_q/Y_m$, $Y_m$ being the maximum rapidity. As the quantity $\sum P_n^q$ scales with improving resolution $\delta\eta$, like the multifractal moments $G_q$, Rényi's entropy may also be used to study the multifractal characteristics of multiparticle production. It is observed that $D_q$ monotonically decreases with increasing order $q$ (= 2 to 8). This can be correlated to the $q$-point integral, and hence the observed trends of the $q$ dependence of $D_q$ suggest the multifractal nature of the MDs. In the CHS (constant specific heat) approximation, the dependence of $D_q$ on $q$ may be represented by the relation $D_q \simeq (a-c) + c\,\frac{\ln q}{q-1}$, where $a$ is the information dimension $D_1$, while $c$ is referred to as the multifractal specific heat.
The observed linear dependence of $D_q$ on $\ln q/(q-1)$ for the various data sets considered does suggest the presence of multifractality in pp collisions in the energy range $\sim$ 30 GeV to 7 TeV. Furthermore, the values of the multifractal specific heat $c$ are found to be nearly the same, $\sim$ 0.1, in full and limited $\eta$ windows for all the data sets considered. Almost similar values of $c$ have also been obtained earlier for hadron-hadron collisions in the energy range $\sim$ 19 - 540 GeV. Such a linear dependence of $D_q$ on $\ln q/(q-1)$ has been reported in heavy-ion collisions in the energy range $E_{Lab} \sim$ 10 to 200A GeV/c. The values of $c$ in these investigations have been obtained to be $\sim$ 0.25, somewhat higher than those observed for the pp data. These observations, therefore, suggest that the CHS approximation is applicable to multiparticle production in pp and AA collisions at relativistic and ultra-relativistic energies and that the parameter $c$ may be regarded as a universal characteristic of high energy hadronic collisions. The findings are compared with the Monte Carlo model PYTHIA-8.2. Event samples corresponding to the various energies are simulated and analyzed. The effects of multiparton interactions (MPI) and colour reconnection (CR) have also been looked into; the event samples are simulated by setting the codes in different modes, MPI on or off and CR on or off, for this purpose. These results will be presented.
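A minimal numerical sketch of the entropy observables defined above, using the conventions quoted in the abstract, is given below. The toy multiplicity distribution (a negative binomial) and the value of $Y_m$ are illustrative assumptions only; sign conventions for the Rényi information differ between references.

import numpy as np
from scipy.stats import nbinom

def shannon_entropy(P):
    P = P[P > 0]
    return -np.sum(P * np.log(P))             # S = -sum P_n ln P_n

def renyi_information(P, q):
    P = P[P > 0]
    return np.log(np.sum(P**q)) / (q - 1.0)   # I_q as quoted in the abstract

# Toy multiplicity distribution: negative binomial with <n> = 25, k = 2 (assumed values)
n = np.arange(0, 400)
mean, k = 25.0, 2.0
P = nbinom.pmf(n, k, k / (k + mean))
Y_m = 8.0                                     # assumed maximum rapidity

print("S =", shannon_entropy(P))
for q in (2, 4, 6, 8):
    I_q = renyi_information(P, q)
    print("q =", q, " I_q =", I_q, " D_q =", I_q / Y_m)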
We present a new methodology to perform the epsilon-expansion of hypergeometric functions with linear epsilon-dependent Pochhammer parameters in any number of variables. Our approach allows one to perform Taylor as well as Laurent series expansion of multivariable hypergeometric functions. Each of the coefficients of epsilon in the series expansion is expressed as a linear combination of multivariable hypergeometric functions with the same domain of convergence as that of the original hypergeometric function thereby providing a closed system of expressions. We present illustrative examples of hypergeometric functions in one, two, and three variables which are typical of Feynman integral calculus.
The concept of dark energy has been proposed to explain the observed accelerated expansion of the universe. One popular and simple choice for the source of dark energy is a scalar field. We choose the tachyonic field, one of the scalar fields, as a candidate for dark energy and discuss it in a model where matter and dark energy (the tachyonic scalar field) are allowed to interact with each other. The age of the universe and the coupling strength of the interaction have been estimated and discussed.
In this work, we discuss how a multivariate algorithm, namely XGBoost, can deal with the disagreement between data and simulation in an analysis. The goal is to train the model in the control channel and extract scale factors which can be used for event re-weighting in the corresponding signal channel. In the $B_{s}\to \phi\mu^{+}\mu^{-}$ analysis, $B_{s}\to J/\psi\phi$ is the normalization channel and is used to train the XGBoost model. $B_{s}\to J/\psi\phi$ simulation and data (where the background is statistically subtracted using the \textit{sPlot} technique) are supplied as inputs to XGBoost, with important kinematic variables chosen as input variables. The multivariate algorithm learns the discrepancies between data and simulation through these input variables and represents them in terms of probability distributions which are later used to determine the scale factors.
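A minimal sketch of the reweighting procedure described above is given below, assuming generic kinematic input features and sPlot weights for the data sample; the actual choice of variables, hyperparameters and validation in the $B_s \to J/\psi\phi$ analysis may differ.

import numpy as np
import xgboost as xgb

def train_reweighter(X_data, w_data, X_mc, w_mc):
    # Train a classifier to separate control-channel data (label 1) from simulation (label 0)
    X = np.vstack([X_data, X_mc])
    y = np.concatenate([np.ones(len(X_data)), np.zeros(len(X_mc))])
    w = np.concatenate([w_data, w_mc])   # sPlot weights for data, MC weights for simulation
    clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(X, y, sample_weight=w)
    return clf

def scale_factors(clf, X_mc):
    # Per-event scale factors from the density-ratio trick: w(x) = p(data|x) / p(MC|x)
    p = clf.predict_proba(X_mc)[:, 1]
    return p / np.clip(1.0 - p, 1e-6, None)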
Straw tubes are drift chambers made of a gas-filled conducting cylinder acting as the cathode and a wire stretched along the axis of the cylinder acting as the anode. Straw Tube Trackers (STTs) are low-mass tracking systems with excellent vertex, momentum, angular and time resolution, and particle identification capabilities. A straw-tube-based tracking detector is proposed for one of the Near Detectors in the long-baseline neutrino experiment, the Deep Underground Neutrino Experiment (DUNE) at Fermilab. Our group at Panjab University has assembled a single straw tube and designed a prototype (1.8 m x 50 cm) for the SAND STT modules. Gas chambers have also been fabricated. Gas leak tests and other properties of the assembled straw tube are studied and reported. Simulation studies using GARFIELD++, a C++ based simulation toolkit for tracking detectors, are done for the dimensions and properties of the assembled single straw tube using different gas mixtures. The dependence of the gas gain on the high voltage is presented. The dependence of the spatial resolution and efficiency on the high voltage and thresholds is also reported from the simulation.
The Chiral Magnetic Wave (CMW) induces an electric quadrupole moment in the quark-gluon plasma produced in heavy-ion collisions, which removes the degeneracy between the elliptic flow of positively and negatively charged particles [1]. The charge-dependent elliptic flow as a function of the charge asymmetry ($A_{ch}$) serves as an important tool for the study of the CMW. We performed this study on 13.5 million Au+Au collision events at $\sqrt{s_{NN}}$ = 200 GeV simulated using a multiphase transport (AMPT) model with string melting. We have also generated AMPT events with an externally injected quadrupole moment.
We use the Event Shape Engineering (ESE) technique to study the dependence of the slope of $\Delta v_2(A_{ch})$ on the average $v_2$ and to extract the CMW fraction in the generated event samples. In addition, we will also report the differential three-particle correlator, which may further help in elucidating the experimental observations of the CMW.
References
[1] Phys. Rev. Lett. 107, 052303 (2011)
Non-Standard Interactions (NSIs) are subdominant effects due to unknown couplings of neutrinos, often appearing in various extensions of the Standard Model, which may impact neutrino oscillations through matter. It is important and interesting to explore the impact of NSIs in the ongoing and upcoming promising neutrino oscillation experiments. In this work, we have probed the imprints of a scalar-mediated NSI in three upcoming long-baseline (LBL) experiments (DUNE, T2HK and T2HKK). The effects of scalar NSI appear as a medium-dependent correction to the neutrino mass term. Its contribution scales linearly with matter density, making LBL experiments suitable candidates to probe its effects. We show that the scalar NSI may significantly impact the oscillation probabilities, the event rates at the detectors and the $\chi^2$-sensitivities of $\delta_{CP}$ measurements. We present the results of a combined analysis involving the LBL experiments (DUNE+T2HK, DUNE+T2HKK, DUNE+T2HK+T2HKK), which offer a better capability of constraining the scalar NSI parameters as well as an improved sensitivity toward CP violation.
References:
[1] L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D 17 (1978) 2369.
[2] S.-F. Ge and S. J. Parke, Scalar Nonstandard Interactions in Neutrino Oscillation, Phys. Rev. Lett. 122 (2019) 211801 [1812.08376].
[3] A. Medhi, D. Dutta and M. M. Devi, Exploring the effects of scalar non standard interactions on the CP violation sensitivity at DUNE, JHEP 06 (2022) 129 [2111.12943].
[4] A. Medhi, M. M. Devi and D. Dutta, Imprints of scalar NSI on the CP-violation sensitivity using synergy among DUNE, T2HK and T2HKK, 2209.05287.
Isolated ideal neutron stars (NS) of age $>10^9$ yrs exhaust their thermal and rotational energies and cool down to temperatures below $\mathcal{O}(100)$ K. Accretion of particle dark matter (DM) by such NS can heat them up through kinetic and annihilation processes. This increases the NS surface temperature to a maximum of $\sim 2550$ K in the best case scenario. The maximum accretion rate depends on the DM ambient density and velocity dispersion, and on the NS equation of state and their velocity distributions. Upon scanning over these variables, we find that the effective surface temperature varies at most by $\sim 40\%$. The black-body spectra of such warm NS peak at near-infrared wavelengths, with magnitudes in the range potentially detectable by the James Webb Space Telescope (JWST). Using the JWST exposure time calculator, we demonstrate that NS with surface temperatures $\sim 2400$ K, located at a distance of 10 parsec, can be detected through the F150W2 (F322W2) filters of the NIRCam instrument at SNR $\sim 10$ (5) within 24 hours of exposure time.
In the vicinity of the Planck length scale, where quantum gravitational effects are expected to become observable, any attempt towards the localization of an event inevitably results in gravitational collapse. To avoid such a scenario one needs to postulate a noncommutative algebra between space-time coordinates, which are now promoted to the level of operators. On the other hand, a consistent formulation of quantum mechanics itself, with time being an operator, is a challenging and longstanding problem. Here we give a systematic way to formulate non-relativistic quantum mechanics on 1+1 dimensional "quantum" space-time (Moyal type noncommutativity) in a user-friendly way, which mandates the formulation of an equivalent commutative theory. Although the effect of the noncommutativity of space-time should presumably become significant at a very high energy scale, it is intriguing to speculate that there should be some relics of the effects of quantum space-time even in a low energy regime. With this motivation in mind, we undertake the study of a time-dependent system, namely the forced harmonic oscillator in quantum space-time, where time is also an operator, and show the emergence of a geometric phase which vanishes when the noncommutative parameter is set to zero, demonstrating that the occurrence of the geometric phase for this system depends entirely on the noncommutativity of space-time.
The light sea quark distribution functions $\bar{d}(x)$ and $\bar{u}(x)$ have been calculated explicitly for the proton using the chiral constituent quark model, which incorporates chiral symmetry breaking and SU(3) symmetry breaking. In view of the latest SeaQuest data, the results are discussed thoroughly for the light antiquark asymmetries and the Gottfried integral.
Correlations between various observables, like the multiplicity, the sum of event transverse momenta or the net charge of particles produced in pp collisions at LHC energies, within intervals separated in pseudorapidity and azimuthal angle, are regarded as a sensitive tool to investigate the collision dynamics and test models of hadron production. In the present work, the forward-backward (FB) correlation method is used to measure net-charge angular correlations. Two small windows of width $\delta\eta$ in pseudorapidity and $\delta\phi$ in azimuthal angle are considered and placed in such a way that their values lie at ($\eta_1,\phi_1$) in the backward and at ($\eta_2,\phi_2$) in the forward region. The net charge $Q = n_+ - n_-$ in these windows is estimated on an event-by-event basis and the FB net-charge correlation coefficient is calculated using the relation $b_{QQ} = (\langle Q_FQ_B\rangle - \langle Q_F\rangle\langle Q_B\rangle)/(\langle Q_F^2\rangle - \langle Q_F\rangle^2)$. Events corresponding to pp collisions at $\sqrt{s}$ = 7 and 13 TeV are simulated using the Monte Carlo event generator PYTHIA-8. The events are simulated by setting colour reconnection on/off and Bose-Einstein (B.E.) effects on/off. In the present study of the variations of $b_{QQ}$ with $\eta_{sep}$ in different $\phi$ regions for the 7 TeV pp data, it is observed that the values of $b_{QQ}$ are negative except for $\eta_{sep} = 0$ and $\delta\phi = 0$ when B.E. effects are switched on. The absolute values of $b_{QQ}$ are observed to decrease with increasing $\eta_{sep}$ and approach zero for $\eta_{sep} \sim$ 1. This indicates that there is a transition between negative and positive correlation coefficients at higher rapidity gaps. Some structured variations in the values of $b_{QQ}$ are seen in the region of smaller $\eta_{sep}$, which seem to be of interest; similar structured behaviour has been observed in balance function studies too. The effects of the Bose-Einstein correlations and colour reconnection on the correlation coefficient may also be noticed in the figure, particularly in the region of small $\eta_{sep}$. These findings will be discussed in detail and a comparison with the 13 TeV results will be presented.
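A minimal sketch of the correlation coefficient defined above is shown below; the toy event generation is purely illustrative and merely stands in for the PYTHIA-8 event samples used in the study.

import numpy as np

def b_qq(Q_F, Q_B):
    # Forward-backward net-charge correlation coefficient, as defined in the abstract
    Q_F = np.asarray(Q_F, dtype=float)
    Q_B = np.asarray(Q_B, dtype=float)
    cov = np.mean(Q_F * Q_B) - np.mean(Q_F) * np.mean(Q_B)
    var = np.mean(Q_F**2) - np.mean(Q_F)**2
    return cov / var

# Toy example: weakly anti-correlated net charges in the two windows (assumed numbers)
rng = np.random.default_rng(0)
Q_B = rng.integers(-3, 4, size=100000)
Q_F = np.round(-0.1 * Q_B + rng.normal(0.0, 1.5, size=100000))
print("b_QQ =", b_qq(Q_F, Q_B))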
Within the framework of the symmetric-asymmetric Gaussian barrier distribution (SAGBD) approach, the fusion cross-sections for the $^{12}$C + $^{144,154}$Sm reactions are theoretically analyzed in the energy range from below to well above the Coulomb barrier. In the SAGBD approach, the multidimensional nature of the nuclear interaction potential is included by using a Gaussian type of weight function within the simple Wong formula. The fusion dynamics of the $^{12}$C + $^{144,154}$Sm systems are strongly influenced by the intrinsic structure of the reaction partners, and such effects are described in terms of $\lambda$ and $V_{CBRED}$. Because the projectile is the same for both systems, the rotational states of the well-deformed target ($^{154}$Sm) are found to be dominant in enhancing the fusion cross-sections at sub-barrier energies, while the low-lying vibrational states of $^{144}$Sm are found to be responsible for the reported sub-barrier fusion enhancement of the $^{12}$C + $^{144}$Sm reaction in comparison with the expectations of the one-dimensional barrier penetration model (BPM). For the studied reactions, the SAGBD calculations are found to be appropriate for describing the behaviour of the fusion cross-sections around the Coulomb barrier.
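For reference, the simple Wong formula on which the SAGBD approach builds reads (standard expression; the SAGBD calculation folds it with a Gaussian distribution of barrier heights):
$$ \sigma_{\rm fus}(E) = \frac{\hbar\omega\, R_B^2}{2E}\,\ln\!\left[1 + \exp\!\left(\frac{2\pi\,(E - V_B)}{\hbar\omega}\right)\right], $$
where $V_B$, $R_B$ and $\hbar\omega$ are the height, position and curvature of the Coulomb barrier.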
We revisit the status of asymptotic symmetries in higher even dimensions and propose a definition of superrotation charge beyond linearized gravity. We prove that there is a well-defined spacetime action of the superrotation charge on the space of asymptotically flat geometries. Additionally, we demonstrate that the Ward identity associated with superrotation charges follows from the subleading soft graviton theorem, which is a universal constraint (in d > 4) along with the leading soft graviton theorem.
Left-Right symmetric models are a natural extension of the Standard Model, based on the fact that the Standard Model is predominantly left-handed, so it is natural to think that left-right symmetry is restored at high energies. The spontaneous breaking of this left-right symmetry depends on the specific model, and the minimal model involves extra Higgs triplets or doublets to give rise to small neutrino masses via the seesaw mechanism. Another perk of left-right symmetric models is that they can be easily embedded within GUT theories like $SO(10)$. Now, the breaking of such a discrete symmetry can be of two types: first order or second order. If it is first order, bubbles of true vacuum are created in the sea of false vacuum of the field responsible for the breaking. In the second-order case, on the other hand, the field just chooses one of the vacua randomly at random places and no bubbles are created; two regions of different vacua are separated by so-called domain walls. After the phase transition, bubbles (in the first-order case) and domain walls (in the second-order case) move through space and produce gravitational waves which can be detected in future GW experiments. We try to show this complete picture of the two kinds of phase transitions and gravitational-wave production mechanisms, and look for detectable signatures in the stochastic gravitational wave background.
Existing endcap calorimeters of the CMS experiment cannot cope with the radiation or pileup expected during the high-luminosity operation of the LHC. Their jet energy resolution also needs to be augmented in order to enhance the physics reach of the experiment. At high jet energy, a correct association between the charged particle tracks and the calorimetric clusters is very important, motivating the deployment of a calorimeter with very high granularity (‘HGCAL’). We at TIFR are involved in the R&D and prototyping activities towards hosting an assembly center for the HGCAL modules. These involve a study of precision gluing, wirebonding, visual inspection, and encapsulation of wirebonds with an 8-inch prototype hexagonal baseplate, sensor, and printed circuit board. The talk will give an overview of our activities.
The most dominant but experimentally difficult decay channel of the Higgs boson, into a pair of bottom quarks, has already been established using Higgs boson production in association with a vector boson (VH, V = W or Z), where V decays leptonically. The relatively more abundant and second-largest Higgs boson production mode, the vector boson fusion (VBF) process, is suitable for studying this decay mode more precisely. Though marked by an all-hadronic final state, the topology of VBF events provides a good handle to suppress the QCD multi-jet background significantly. Dedicated VBF topological event triggers and rigorous use of machine learning (ML) make the result sensitive. The recent measurement by the CMS collaboration using Run-2 data of the LHC will be presented in this talk.
Over the past decade, data from ground-based cosmic-ray air-shower arrays such as the Pierre Auger Observatory and the IceCube Neutrino Observatory have consistently revealed a deficit in the number of muons predicted by air-shower simulations compared to observations, at more than 8 standard deviations in combined statistical significance, leading to the so-called muon puzzle. Resolving this puzzle remains a key challenge for accurate measurement of the cosmic-ray mass composition by indirect detection experiments, as well as for neutrino astronomy. We present a systematic study of the uncertainties in atmospheric lepton flux predictions from hadronic interaction models. The discrepancy starts at TeV energies and is thus tractable with data from the LHC. Prospects with forward hadron production data from pp, p-Pb, and p-O collisions at the LHC will be discussed.
In 1988, the European Muon Collaboration (EMC) at CERN shocked the physics community by announcing that the sum of the spins of the three quarks that make up the proton is much less than the spin of the proton itself, which later became known as the "proton spin puzzle." Physicists have been unable to answer a seemingly simple question: where does the proton's spin come from? How the proton's spin originates from its constituents, quarks and gluons, and their interactions, which are governed by quantum chromodynamics (QCD), is a key issue in nuclear and particle physics. In order to address this issue, the spin sum rules divide the proton's spin into its quark and gluon spin and angular momentum components. The quark and gluon spin components come from the parton distribution functions, while the orbital angular momenta are related to the Generalized Parton Distribution functions (GPDs). In this talk, I will address the individual contributions to the proton spin from all the constituents. In particular, we will look at two different types of decompositions of the proton spin: (1) the non-gauge-invariant Jaffe-Manohar decomposition, and (2) the gauge-invariant decomposition proposed by Ji.
In the framework of black hole chemistry, we present an equipartition theorem for four dimensional AdS black holes with spherically symmetric and static horizons in Einstein gravity. It is found that at high temperatures, the total enthalpy of the spacetime is equally shared among the putative microstructures of the black hole horizon with each degree of freedom contributing an energy $k_BT$. This strengthens the analogy between the thermodynamics of gases and that of black holes. Finally, we demonstrate the consistency of our result from a holographic point of view.
In particle physics, the Glashow-Weinberg-Salam (GWS) model of the electroweak (EW) interactions describes the fundamental parameters, i.e., the coupling constant ($\alpha_{EM}$), the Fermi constant ($G_{F}$), the W boson mass ($M_{W}$), the Z boson mass ($M_{Z}$) and $\theta_{W}$, referred to as the Weak Mixing Angle or the Weinberg angle. This angle is a fundamental parameter in the Standard Model (SM), probing the mixing of the W and B fields, and can be defined as
$\sin^{2}\theta_{W} = 1 - \frac{M_{W}^{2}}{M_{Z}^{2}}$.
Due to the difference between the Z-boson couplings to left-handed and right-handed fermions (f), an asymmetry is observed in the angular distribution of the oppositely charged leptons produced in Z-boson decays. This asymmetry depends on the weak mixing angle between the neutral states associated with the U(1) and SU(2) gauge groups. To all orders in perturbation theory, the effective weak mixing angle ($\sin^{2}\theta_{eff}^{f}$) is related to the vector ($g_{V}^{f}$) and axial-vector ($g_{A}^{f}$) couplings. It is identical for all leptons because of lepton universality. At present, the two most precise experimental measurements (LEP, SLD) disagree by about 3$\sigma$.
Therefore, the non-SM process dependence should be investigated further to get a hint of new physics. This measurement is also an overall test of the EW sector and an indirect measurement of the mass of the W boson. So, the precise measurement of the weak mixing angle is a study of immense importance.
A precision extraction using high-luminosity LHC (HL-LHC) data can help settle the long-standing issue of the discrepancy between the precise measurements from the LEP and SLD data. Searches at the HL-LHC will profit from the much larger statistics, slightly higher energy and upgraded detectors. Additionally, extending the pseudo-rapidity acceptance with the upgraded detectors is expected to significantly reduce both the statistical and PDF uncertainties. Direct searches for new physics will continue at the HL-LHC with enhanced sensitivity. All these possibilities will be discussed in detail. Also, the effect of the W-boson mass recently measured by the CDF Collaboration on the precise determination of the electroweak fundamental parameters will be discussed. Emphasis will be given to predictions from theory and simulations regarding the measurement of the asymmetry, parton distribution functions, higher-order effects, etc. We hope that the results will generate much interest among the scientific community, as these are important inputs for predicting future experimental measurements.
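For orientation, the on-shell and effective definitions discussed above are related to measurable quantities through the standard tree-level relations (quoted here as a sketch only):
$$ \sin^2\theta_W = 1 - \frac{M_W^2}{M_Z^2} \approx 0.223 \;\text{(with the measured $M_W$, $M_Z$)}, \qquad \sin^2\theta_{\rm eff}^{\ell} = \frac{1}{4}\left(1 - \frac{g_V^{\ell}}{g_A^{\ell}}\right), $$
$$ A_f = \frac{2\, g_V^f g_A^f}{(g_V^f)^2 + (g_A^f)^2}, \qquad A_{FB}^{0,f} = \frac{3}{4}\, A_e A_f, $$
so that the forward-backward asymmetry in $Z \to \ell\ell$ decays provides a direct handle on $\sin^2\theta_{\rm eff}^{\ell}$.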
Collectivity is an essential feature of the strongly interacting matter formed in the deconfined phase of quarks and gluons in collisions of nuclei at relativistic energies. Experimentally, such collective behaviour has been observed in heavy-ion collisions at RHIC and LHC energies. Other observations, like strangeness enhancement, also support the existence of the quark-gluon plasma in heavy-ion collisions. By comparison, the formation of QGP in $p+p$ collisions has so far been disregarded because of the low particle number density in such collisions. Thus, over the decades, $p+p$ collisions have served as a baseline for heavy-ion collisions and helped reveal the unconventional behaviour of the latter. In contrast, collective behaviour and strangeness enhancement have recently been observed in $p+p$ collisions at LHC energies. These observations suggest that the existence of a QGP-like medium in small systems cannot be ruled out completely if the collision energy is large enough. Inspired by these observations, we investigate the possible existence of a QGP-like medium in $p+p$ collisions at $\sqrt{s} =$ 5.02, 7 and 13 TeV. We employ (1+1)D second-order viscous hydrodynamics to describe the evolution of the QGP medium. Further, we use the Unified Model of Quarkonia Suppression (UMQS) to explain the experimental data available in the form of the normalized charmonium yield as a function of the normalized charged-particle multiplicity. The UMQS model contains the possible QGP effects that govern the net quarkonium yield in ultrarelativistic collisions. Our theoretical study supports the existence of a QGP-like medium in $p+p$ collisions.
The Light Front formulation of quantum field theories has been put to use in the study of hadron physics in the last few decades. However, like the conventional formulation, Light Front field theories are also fraught with infrared divergences in the presence of massless particles. We employ the coherent state formalism to deal with these divergences at the amplitude level for Light Front QCD in the perturbative regime. We first obtain the coherent states in the Light Front formulation, which are then used to calculate the transition amplitudes for the processes $e^{+}e^{-}\rightarrow 2\;jets$ and $e^{+}e^{-}\rightarrow 3\;jets$. We then discuss the cancellation of infrared divergences - both soft and collinear - for these processes at the amplitude level.
At collider machines operating at energies much above the electroweak scale, all Standard Model particles will appear essentially massless, including the nominally heavy ones. The kinematic consequences of this can make the signals for the Standard Model, and for other models, very different from the signals at the LHC or other colliders of the past. These differences are explained and some of the common signals are revisited in the context of very high energy colliders.
The origin of cosmic ray particles is still largely unknown, since they are deflected on their journey to the Earth by magnetic fields. Very high energy (VHE) photons, which can be produced by both leptonic and hadronic processes, are attenuated by the extragalactic background light, i.e. they cannot probe distances larger than z ∼ 1 at energies above ∼ 1 TeV. In contrast, only hadronic processes can produce an astrophysical neutrino flux, which travels unattenuated and undeflected from the source to the Earth. Astrophysical neutrino observations are therefore crucial to identify cosmic-ray sources and to discover distant VHE accelerators. The KM3NeT detector for Astroparticle Research with Cosmics in the Abyss (ARCA), with a cubic-kilometre instrumented volume, is currently being built in the Mediterranean Sea. KM3NeT has a view of the sky complementary to that of the IceCube neutrino detector. It provides an excellent pointing resolution (< 0.2° for > 10 TeV neutrinos) and is sensitive over a large energy range (GeV - PeV) for upgoing neutrinos. In this contribution, we present a stacking analysis that predicts the significance of a global excess of track-like events in KM3NeT data in correlation with a list of point-like sources. Different samples of sources are tested in this analysis: the Fermi gamma-ray astrophysical source catalog with a) 1045 BL Lac objects and b) 650 radio quasars. We apply a thermal model to study the neutrino production from the mentioned γ-ray source samples and search for a correlation between the KM3NeT data and the observed Fermi extragalactic sources.
With the increasing complexity and growing volume of data taken by current experiments, the enormous challenge of isolating potential BSM signatures from the known Standard Model (SM) footprints is an active area of research in HEP. Machine learning (ML) algorithms are well suited to analyzing large amounts of data and can find intrinsic patterns in multidimensional data. We explored different dimension-reduction algorithms, such as Principal Component Analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP), to better understand multidimensional data in a lower-dimensional latent space. We have also shown that the data structure preserved in the latent space can be used as a potential classifier in object- or event-level classification tasks. We benchmark their performance against a popular Deep Neural Network-based classifier in the context of classifying prompt leptons from fake leptons in various class-imbalance and training-statistics scenarios.
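A minimal sketch of the dimension-reduction-plus-classification workflow described above, written with scikit-learn on synthetic, class-imbalanced data (the dataset, feature count and class ratio below are purely illustrative assumptions, not the analysis inputs):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a multidimensional lepton-feature space with imbalanced classes.
X, y = make_classification(n_samples=20000, n_features=20, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Project to a low-dimensional latent space, then classify in that space.
pca = PCA(n_components=2).fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(pca.transform(X_te))[:, 1])
print(f"ROC AUC in the 2D PCA latent space: {auc:.3f}")

The same scaffold can be benchmarked against a deep-neural-network classifier by swapping the classifier while keeping the latent-space projection fixed.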
We investigate the nature of the complex retarded potential of a heavy quark moving in a hot and dense static quark gluon plasma. The well-known concept of the retarded potential in electrodynamics is extended to the context of the heavy-quark by modifying the static vacuum Cornell potential through Lorentz transformation to the static frame of the medium. The resulting potential in the vacuum is further corrected to incorporate the screening effect offered by the thermal medium. To do so, the retarded Cornell potential is modified by the dielectric function of the static QGP medium. We present the numerical results for the real and imaginary parts of the potential along with the analytical expression of potential approximated by a small velocity limit. The relative motion of a heavy quark with respect to the static QGP medium breaks the spherical symmetry of the potential. We show the angular variation of both the real and imaginary parts of the potential at different velocities. Finally, we present the thermal width of quarkonia in the QGP medium derived using the imaginary part of the potential and study its dependence on velocity and temperature.
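For orientation, a minimal sketch of a static, Debye-screened Cornell potential (a standard textbook form with illustrative parameter values; it is not the velocity-dependent retarded potential constructed in this work):

import numpy as np

alpha_s = 0.3          # strong coupling, illustrative
sigma = 0.18           # string tension in GeV^2, illustrative
m_D = 0.8              # Debye mass in GeV, illustrative

def v_screened(r):
    """Real part of a Debye-screened Cornell potential; r in GeV^-1."""
    coulomb = -(4.0 / 3.0) * alpha_s * np.exp(-m_D * r) / r
    string = (sigma / m_D) * (1.0 - np.exp(-m_D * r))
    return coulomb + string

for r in (1.0, 2.0, 5.0):
    print(f"r = {r} GeV^-1 : V = {v_screened(r):.3f} GeV")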
Discrete symmetries are often used to investigate neutrino phenomenology. Here we consider a very simple permutation symmetry group, $S_3$, to examine neutrino masses and mixing in the linear seesaw framework. We introduce modular symmetries, which are advantageous in avoiding the need for multiple flavon fields and thus bypass the intricacies of vacuum alignment. In this way we clarify the effect and significance of the modular $S_3$ symmetry in explaining neutrino mixing consistent with current observations. We also discuss the non-zero reactor mixing angle and constrain the model parameters accordingly, and briefly discuss leptogenesis.
In this work we calculate the time evolution of a local gauge-invariant field-theoretical model comprising a scalar field coupled to a vector gauge field. Assuming a linear relationship between the phase angles $\alpha(x)$ at two closely separated space-time points $x$ and $x' = x - \delta$, with $0 < \delta < 1$, we obtain an explicit relation between the scalar field $\phi(x)$ at $x$ and $x'$ in terms of the Wilson-line variable. Using the modified value of the field $\phi(x')$ we evaluate the effective coupling of this system in dimension $d < 4$ near the critical region. In the mean-field approximation, we find that the scalar self-coupling $\lambda$ at the Wilson-Fisher fixed point of this system is modified as $\lambda^* = \lambda_{WF}/t^4$, where $\lambda_{WF} = \frac{16\pi^2}{3}(d - 4)$ and $t$ is the time of evolution. With this modified coupling we find that the density of active states for this system behaves as $\Omega \propto 1/t^4$.
Most of the observed hadronic B decays proceed through a "D" meson (D, D$^*$, D$^{**}$), as b$\rightarrow$c transitions dominate over other b transitions. D$^{**}$ denotes the collection of non-strange charm mesons in the mass range 2.2 - 2.8 GeV/c$^2$.
We present a study of B to charm decays in the Belle experiment with 711 fb$^{-1}$ of electron-positron collision data recorded at the centre-of-mass energy of the $\Upsilon$(4S) resonance. For this analysis, we employ the missing mass method, in which the other B meson is reconstructed in several hadronic final states and the charm meson is searched for in the recoil of the accompanying ($\rho$ or $\pi$) meson. The study will result in the first measurement of the decay B$\rightarrow$D$^{**}\rho$.
The decay $D^{0}\rightarrow K_{S}^{0}K_{S}^{0}$ is a singly Cabibbo-suppressed transition that involves the interference between $c\bar{u}\rightarrow s\bar{s}$ and $c\bar{u}\rightarrow d\bar{d}$ amplitudes, mediated by the exchange of a W boson at tree level, which can generate CP asymmetries at the 1% level even if the Cabibbo-Kobayashi-Maskawa phase is the only source of CP violation. Current experimental measurements of the CP asymmetry in $D^{0}\rightarrow K_{S}^{0}K_{S}^{0}$ decays are still limited by statistical precision, with the best measurement performed by the Belle experiment using data corresponding to an integrated luminosity of 921 fb$^{-1}$: $A_{CP}(D^{0}\rightarrow K_{S}^{0}K_{S}^{0}) = (-0.02 \pm 1.53 \pm 0.02 \pm 0.17)\%$, where the first uncertainty is statistical, the second systematic and the third due to the CP asymmetry of the reference mode $D^{0}\rightarrow K_{S}^{0}\pi^{0}$.
$A_{CP}$ in $D^{0}\rightarrow K^{+}K^{-}$ is measured with 0.11% precision; therefore, using $D^{0}\rightarrow K^{+}K^{-}$ as the control mode reduces the uncertainty due to the control mode, in addition to making the analysis simpler. In this work, we report a preliminary measurement of the raw asymmetry of the $D^{0}\rightarrow K^{+}K^{-}$ decay using Belle II simulation. The final goal of this analysis is to measure the CP asymmetry in $D^{0}\rightarrow K_{S}^{0}K_{S}^{0}$, using $D^{0}\rightarrow K^{+}K^{-}$ as the reference mode.
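A minimal sketch of how a raw asymmetry and its statistical uncertainty are formed from tagged yields (the event counts below are invented purely for illustration and bear no relation to the Belle II simulation sample):

import math

def raw_asymmetry(n_d0, n_d0bar):
    """A_raw = (N(D0) - N(D0bar)) / (N(D0) + N(D0bar)) with a binomial error."""
    n_tot = n_d0 + n_d0bar
    a = (n_d0 - n_d0bar) / n_tot
    sigma = math.sqrt((1.0 - a * a) / n_tot)
    return a, sigma

a, err = raw_asymmetry(51240, 50760)   # hypothetical tagged yields
print(f"A_raw = {a:.4f} +- {err:.4f}")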
We calculate the gravitational form factors (GFFs), pressure, shear and energy distributions for a quark state dressed with a gluon at one loop in QCD. We refer to this model as the dressed quark model (DQM). We use the light-front Hamiltonian approach and, in the light-front gauge, a two-component formalism to eliminate the constrained fields. The state may be thought of as a perturbative model for a relativistic spin-$\frac{1}{2}$ composite system having a gluonic degree of freedom. We compare the results with model calculations for a nucleon.
Detection of delayed sub-TeV photons from Gamma-Ray Bursts (GRBs) by MAGIC and HESS has proved the promising future of GRB afterglow studies with the Cherenkov Telescope Array, the next-generation ground-based gamma-ray astronomy observatory. With the unprecedented sensitivity of CTA, afterglow detection rates are expected to increase dramatically in the coming decade. We embark on exploring the multi-dimensional afterglow parameter space to see the detectability of sub-TeV photons by CTA. Sub-TeV emission is always due to the self-Compton process. We find that jets with high kinetic energy decelerating into a dense ambient medium are better candidates for CTA. We apply our results in the context of short-duration GRBs and counterparts to Neutron Star mergers from the local universe.
NO$\nu$A is a long-baseline accelerator neutrino experiment at Fermilab that aims at precision neutrino oscillation analyses and cross-section measurements. Large uncertainties on the absolute neutrino flux affect both of these measurements. Measuring neutrino-electron elastic scattering provides an in-situ constraint on the absolute neutrino flux. In this analysis the signal is a single, very forward-going electron shower with $E_{e}\theta_{e}^{2}$ peaking around zero. After the electron selection, the primary background is beam $\nu_{e}$ charged-current events ($\nu_{e}$ CC). Muon-removed electron-added (MRE) events are constructed from $\nu_{\mu}$ CC interactions by removing the primary muon track and simulating an electron in its place. This helps us understand the consequences of hadronic shower mismodelling on the $\nu_{e}$ selection. This talk presents an overview of ongoing MRE studies and a plan for how this sample can be used to provide a data-driven constraint on the $\nu_{e}$ CC backgrounds present in the $\nu$-e analysis.
The soft function captures the IR singularities and, in perturbation theory, is known to exponentiate. Its logarithm can be organized in terms of collections of Feynman diagrams called webs. The classification of webs has been studied up to four-loop order. Using next-to-eikonal Feynman rules, the soft function can be generalized to the next-to-soft function and, consequently, webs can be generalized to next-to-eikonal webs that include corrections to the soft function.
In this talk, I present an approach to study the colour structure of the next-to-soft function up to three-loop order.
Next-to-next-to-leading order (NNLO) QED corrections are an important ingredient for different low-energy experiments such as MUonE, MUSE and P2. In this talk, we will discuss the computation of such higher-order corrections to different observables relevant to the above experiments. To compute these corrections it is important to keep the lepton masses finite, which regularises the collinear singularities. Soft singularities are treated with dimensional regularisation and the FKS$^\ell$ subtraction scheme. We will discuss the implementation of various QED processes at NNLO in the McMule framework and present phenomenological results for muon-electron scattering, including the dominant NNLO corrections. In addition, we discuss a stable implementation of the numerically delicate real-virtual matrix elements for Bhabha as well as Møller scattering.
In this study we demonstrate bounce cosmology with generalized holographic cutoffs. The bounce realization arising from the application of the holographic principle is demonstrated within a modified gravity framework. Considering a multiplicative bouncing scale factor, we exhibit the four types of singularities. For this scale factor we consider a scenario with a holographic background evolution and derive a number of constraints depending on the behaviour of the equation-of-state parameter $w_\Lambda$. We then carry out a cosmographic analysis in which the statefinder trajectory attains $\{r=1, s=0\}$, i.e. the $\Lambda$CDM fixed point. This is attainable in the pre-bounce ekpyrotic contraction scenario, while in the post-bounce expansion $s$ goes to $-\infty$ for finite $r$. In the next phase, the holographic dark energy with a generalized cutoff is taken as the background fluid and the constraints obtained earlier are imposed to study its bounce realization. The infrared cutoffs are reconstructed for the bounce realization, and finally some specific viable models of $f(T)$ gravity are investigated for the bouncing scenario and the consequences of the UV correction are discussed. Simple corrections to the particle and future event horizons due to the UV cutoff are used to obtain nonsingular bouncing solutions.
DUNE (Deep Underground Neutrino Experiment) is a long baseline neutrino oscillation experiment that is currently being built to study the $\nu_{\mu}-\nu_e$ oscillations, which will eventually help in determining the neutrino mass-hierarchy, CP violation in the lepton sector and many other exciting areas of particle physics. A Near Detector (ND) Complex comprising three detectors - ND-GAr, ND-LAr, SAND, is proposed to be built $575$ m from the neutrino source to monitor unoscillated $\nu_{\mu}$. The oscillated $\nu_e$ will be detected at the far detector $1300$ km from the neutrino source in a $70$ kton Liquid-Argon volume. It is important to measure the neutrino beam flux at the source precisely in order to reduce uncertainties on the $\nu_e$ to $\nu_\mu$ ratio measurements at the FD.
SAND (System for on-Axis Neutrino Detection) is one of the key detectors at DUNE to monitor the neutrino beam. It consists mainly of low-density Straw Tube Trackers (STTs), and a Liquid Argon volume (GRAIN) to obtain the necessary precision in neutrino flux measurements. The simulation studies performed to optimize the geometry, for reducing flux uncertainties, will be presented at the symposium.
The exploration of entanglement entropy and the Page curve in the context of eternal black holes associated with top-down holographic duals of QCD-like theories at high temperatures and intermediate coupling has been missing in the literature. In this talk, I will explain how we obtain the Page curve of an eternal black hole relevant to the M-theory dual of thermal QCD-like theories at high temperatures and intermediate coupling (effected via inclusion of terms quartic in curvature in M-theory). We consider two candidate surfaces for the aforementioned eternal black hole: a Hartman-Maldacena-like surface and the island surface. We calculate the entanglement entropy contribution from both surfaces using Dong's formula for higher-derivative gravity theories. The entanglement entropy contribution from the Hartman-Maldacena-like surface has an (approximately) linear time dependence and diverges at late times. After the Page time, the entanglement entropy contribution from the island surface dominates, which saturates the linear time growth of the entanglement entropy of the Hawking radiation, and we obtain a (near) perfect Page curve. Interestingly, we find consistency between the Page curve obtained from the computation of the areas of the Hartman-Maldacena-like and island surfaces and the Page curve obtained earlier using Dong's formula at ${O}(\beta^0)$; the ${O}(\beta^0)$ contribution to the entanglement entropy from the Hartman-Maldacena-like surface provides a "Swiss-Cheese" structure in the "Large $N$ Scenario". Finiteness of the ratio of the entanglement entropy of the island surface to the thermal entropy, together with positivity of the Page time, requires lower and upper bounds on the black hole horizon radius. Further, with the inclusion of the $O(R^4)$ terms in M-theory, the turning point associated with the HM-like/island surface being in the deep IR results in a relationship between $l_p$ and $r_h$, along with a conjectural $e^{-{O}(1) N^{1/3}}$ suppression (motivated by $S_{EE}^{IS, \beta^0}/S_{BH}\sim2$). We obtain a hierarchy with respect to this $N$-dependent exponential in $S_{EE}^{HM, \beta^0}, S_{EE}^{IS, \beta^0}$ ($O(\beta^0)$) and $S_{EE}^{HM, \beta}, S_{EE}^{HM, \beta}$ ($O(\beta)$).
Detailed simulation of avalanches, saturated avalanches and streamers can help in understanding the detector physics behind the Resistive Plate Chamber (RPC). From a 3D Monte Carlo simulation of an avalanche inside an RPC, the transition from an avalanche to a saturated avalanche (when electron gain and loss are almost equal), followed by a streamer, may be understood in more detail. Such simulations are also useful in the search for the optimum operating voltage and alternative gas mixtures.
Garfield++, with appropriate interfaces to Heed (primary ionization), Magboltz (transport properties) and neBEM (electric field), is a freely available C++-based software package with which one can build the numerical geometry of a gas detector and examine the physics inside it. All the methods available in Garfield++ to generate an avalanche follow the 3D particle model. One of the advantages of the particle model is that one can extract information (drift velocity, energy, etc.) from every step of the avalanche, with detailed tracking of each primary and secondary electron. Since the methods of Garfield++ are sequential, they are resource-hungry and time-consuming. This is especially true when attempts are made to study the effect of space-charge accumulation within the device. At the same time, the space-charge field plays a crucial role while an avalanche is developing. The dynamic change of the electric field inside the RPC due to space charges limits the gain of an avalanche, which is called the space-charge effect.
In this work, the primary goal is to build a numerical model to calculate the dynamic space-charge field inside an RPC and implement it in the Garfield++ framework. The OpenMP multithreading technique has been applied to the calculation of the electric field, drift lines, electron gain and space-charge field to address the issue of extensive time consumption. Here, the space-charge region is modelled using several charged lines, and the field is estimated for those line charges. The field calculations have also been verified against neBEM. All these modifications to Garfield++ have been implemented by introducing a new class, pAvalancheMC. An example is provided to demonstrate the performance of pAvalancheMC. Moreover, the details of the transition of an avalanche into a saturated avalanche are discussed.
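A minimal sketch of the line-charge modelling idea: each charged line is discretized into point charges and their Coulomb fields are summed at a query point (the geometry, charge values and the plain Python loop below are illustrative assumptions; the actual work uses the new pAvalancheMC class, with cross-checks against neBEM):

import numpy as np

K = 8.987551787e9  # Coulomb constant in SI units (N m^2 / C^2)

def field_from_line_charge(p, start, end, q_total, n_seg=200):
    """Electric field at point p from a uniformly charged line segment,
    obtained by summing the Coulomb fields of n_seg point charges."""
    p, start, end = (np.asarray(v, dtype=float) for v in (p, start, end))
    ts = (np.arange(n_seg) + 0.5) / n_seg
    points = start + np.outer(ts, end - start)   # positions of the sub-charges
    dq = q_total / n_seg
    r = p - points
    r_mag = np.linalg.norm(r, axis=1, keepdims=True)
    return (K * dq * r / r_mag**3).sum(axis=0)

# Hypothetical 2 mm long line of 1e7 electrons along z, field probed 0.5 mm away.
e_field = field_from_line_charge([5e-4, 0.0, 1e-3],
                                 [0.0, 0.0, 0.0], [0.0, 0.0, 2e-3],
                                 q_total=-1e7 * 1.602e-19)
print("E (V/m):", e_field)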
Anjali S and Saurabh Gupta
Department of Physics, National Institute of Technology Calicut,
Kozhikode - 673 601, Kerala, India
E-mail: anjalisujatha28@gmail.com
Abstract: We investigate a system of a particle constrained to move on a torus knot via the framework of superfield formalism and derive the off-shell nilpotent and absolutely anti-commuting (anti-)Becchi-Rouet-Stora-Tyutin (BRST) symmetries. Further, we demonstrate the existence of the off-shell nilpotent and absolutely anti-commuting (anti-)co-BRST symmetry transformations by means of the Lagrangian formulation. The anti-commutator of these nilpotent and continuous symmetry transformations furnishes a bosonic symmetry, which leaves the Lagrangian quasi-invariant. Moreover, we obtain all the conserved charges - the generators of the corresponding symmetry transformations in the theory. Finally, we show that the algebra satisfied by these continuous symmetries (and corresponding charges) is analogous to the algebra of the de Rham cohomological operators of differential geometry. Thus, we prove that the constrained system of a particle on a torus knot provides an exciting toy model for Hodge theory, where the existing continuous symmetries capture a physical realization of the differential operators at the algebraic level.
We report on the energy reconstruction algorithms used by the CMS hadron calorimeter (HCAL) during LHC Run 2. The signal pulse from energy deposited in the HCAL extends over several time samples and hence overlaps with adjacent pulses in the high-pileup conditions set by the short proton-proton bunch-crossing spacing (25 ns). The contribution of the in-time signal pulse can be estimated using the known pulse shapes of the energy deposition. The talk describes the performance of the algorithms developed to mitigate the effect of adjacent bunch crossings on the local HCAL energy reconstruction in Run 2.
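A minimal sketch of the general template-fit idea behind such algorithms: the charge measured in consecutive 25 ns time samples is decomposed into known pulse-shape templates shifted by one bunch crossing, with non-negative amplitudes (the toy pulse shape and numbers are assumptions for illustration only; this is not the actual CMS reconstruction code):

import numpy as np
from scipy.optimize import nnls

n_ts = 8                                                          # time samples, 25 ns each
template = np.array([0.0, 0.1, 0.6, 0.25, 0.05, 0.0, 0.0, 0.0])   # toy pulse shape

def shifted(t, shift):
    """Pulse template shifted by an integer number of bunch crossings."""
    out = np.zeros(n_ts)
    src = t[max(0, -shift):n_ts - max(0, shift)]
    out[max(0, shift):max(0, shift) + len(src)] = src
    return out

# Design matrix: one early, one in-time and one late pulse template.
A = np.column_stack([shifted(template, s) for s in (-1, 0, +1)])

true_amps = np.array([3.0, 10.0, 1.5])                            # hypothetical energies (arb. units)
measured = A @ true_amps + 0.05 * np.random.default_rng(1).normal(size=n_ts)

amps, _ = nnls(A, measured)                                       # non-negative template amplitudes
print("fitted amplitudes (early, in-time, late):", np.round(amps, 2))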
We consider two BSM scenarios with scalar leptoquarks (LQs), motivated by neutrino mass, the muon $g-2$, and anomalies in $B$-decay ratios. A combination of a singlet and a doublet scalar LQ can generate a one-loop Majorana neutrino mass and contribute to the observed muon and electron $g-2$ values, while satisfying bounds from lepton flavour violating decays. A carefully chosen parameter space in this model carries discovery and discernability potential at the LHC/FCC from pair production with different final states. On the other hand, extending the SM with a singlet and a triplet scalar LQ separately can categorically explain the observed tensions in $B$-decay ratios. With a minimal set of couplings, the singlet contributes to $R(D)/R(D^*)$, and the triplet to $R(K)/R(K^*)$. The Yukawa couplings can be probed from single production of the LQs at the LHC/FCC, and their $5\sigma$ reach for a range of masses is studied.
Neutrino oscillation experiments use nuclear targets to achieve the interaction rates necessary for good statistics. Inevitable nuclear effects arise from the complicated nuclear environment, and our poor understanding of neutrino interactions with these targets gives rise to systematic uncertainties in the determination of neutrino oscillation parameters. In order to precisely determine neutrino physics for experiments such as DUNE, the neutrino-nucleus interaction must be well understood and the neutrino energy must be reconstructed accurately. In this work, we study the uncertainties arising from pion production in neutrino interactions with the argon target over the DUNE energy range, which is important for reducing systematic uncertainties for precision physics.
The energy-momentum tensor of the system created in heavy-ion collisions can be decomposed into an equilibrium and an out-of-equilibrium component. The out-of-equilibrium part, when expressed using linear response theory, takes a form involving a complex singular frequency whose imaginary part corresponds to a non-hydrodynamic mode. An additional term with a coefficient called the relaxation time ($\tau_\pi$) has to be introduced in the constitutive hydrodynamic equations in order to restore causality in the Navier-Stokes equations. In the study presented, we look at the elliptic flow in peripheral heavy-ion collisions and across all centrality classes for two values of the relaxation time. We use second-order viscous hydrodynamics with IP-Glasma initial conditions, which include event-by-event fluctuations. From the $v_2$ vs $p_T$ results obtained for all centralities, we find that the relaxation time acts as a regulator of the non-hydrodynamic mode, and a breakdown of hydrodynamics can be inferred by analysing the flow curves for the two relaxation times.
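Schematically, the role of the relaxation time described above can be summarized by an Israel-Stewart-type relaxation equation for the shear-stress tensor, written here in its simplest form (higher-order couplings omitted): $$\tau_\pi\,\dot{\pi}^{\langle\mu\nu\rangle} + \pi^{\mu\nu} = 2\eta\,\sigma^{\mu\nu},$$ so that $\pi^{\mu\nu}$ relaxes towards its Navier-Stokes value $2\eta\sigma^{\mu\nu}$ on a timescale $\tau_\pi$, restoring causality.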
Following the discovery of the Higgs boson by the ATLAS and CMS experiments at the LHC, a vigorous programme began to measure its couplings to other Standard Model (SM) particles. The Higgs Yukawa couplings to light quarks (u, d, s) are currently unknown, and the study of inclusive decays of the Higgs boson to these states is extremely challenging due to the large multijet background. In this scenario, rare exclusive decays of the Higgs boson into a light meson and a photon are thought to be important indirect probes of such couplings. In this study we have looked into the H→φγ→K$^+$K$^-$γ and H→ργ→π$^+$π$^-$γ decays in the context of the High Luminosity LHC (HL-LHC). While the SM predicts these couplings to be very small, potential deviations are predicted in several models beyond the SM. The rates and efficiencies associated with these channels at the Level-1 (L1) trigger in the HL-LHC will be presented. This analysis provides a perspective on the prospects of triggering on such events at the HL-LHC, which benefits from the inclusion of tracking at L1 and a tenfold increase in luminosity. As a benchmark, analogous decays of the Z boson into φ/ρ and a photon, with branching fractions much lower than those of the Higgs boson, are also studied owing to the large Z boson production cross section.
We study the possible existence of deconfined quark matter in the core of neutron stars and the non-radial oscillation modes of neutron and hybrid stars. A relativistic mean-field model is used to describe the nuclear matter at low densities and zero temperature, while the Nambu--Jona-Lasinio (NJL) model is used to describe the quark matter at high densities and zero temperature. A Gibbs construction is used to describe the quark-hadron phase transition at large densities. Within the model, as the density increases, a mixed phase appears at a density about $2.5$ times the nuclear matter saturation density $(\rho_0)$ and ends at a density of about $5~\rho_0$, beyond which the pure quark matter phase appears. It turns out that a stable hybrid star of maximum mass $M=2.27~M_{\odot}$ and radius $R=14$ km can exist with the quark matter in the core in a mixed phase only. The quark-hadron phase transition in the core of the maximum-mass hybrid star occurs at a radial distance $r_c=0.27R$, where the equilibrium speed of sound shows a discontinuity. The existence of quark matter in the core enhances the non-radial oscillation frequencies in hybrid stars compared to neutron stars of the same mass; this enhancement is larger for the $g$ modes. The non-radial oscillation frequencies depend on the vector coupling in the NJL model: the $g$- and $f$-mode frequencies decrease with increasing vector coupling in the quark matter.
The CKM elements $|V_{ub}|$ and $|V_{cb}|$ show a discrepancy between their exclusive and inclusive determinations. These determinations are clouded by hadronic and other uncertainties, and thus cannot unambiguously be taken as implying new physics. In this talk, we consider a new observable: the ratio of these two CKM elements, $R_{V} \equiv \frac{|V_{ub}|}{|V_{cb}|}$, which is found to receive negligible corrections from hadronic as well as QED effects. It is observed that $R_{V}$ as constructed from the exclusive determinations of $|V_{ub}|$ and $|V_{cb}|$ agrees quite well with that constructed from the inclusive determinations of these CKM elements. Hence, we show that $R_{V}$ is a cleaner observable and can serve as an excellent tool for testing the Standard Model.
The Time Projection Chamber (TPC) [1] is capable of three-dimensional particle tracking. We are developing a bulk Micromegas [2] based prototype TPC at SINP. In the present work, we have measured the detector gain, energy resolution and electron transparency of the 128 μm Micromegas in argon-based gas mixtures in order to optimize the operating drift and amplification fields. We observe the $^{55}$Fe spectrum in an Ar-CO$_2$ gas mixture with a volumetric ratio of 90:10. We find that the $^{55}$Fe photopeak can be resolved with a 100 V/cm drift field and a 38 kV/cm amplification field in a test box. We also calculated the effective drift field with the finite-element field solver COMSOL. We have used a drift field of 113 V/cm along the central axis of the prototype TPC and a 38.28 kV/cm field in the amplification region of the Micromegas detector to observe the $^{55}$Fe X-ray photopeak spectrum in the prototype TPC. This is a preliminary result and a proof of concept that our TPC is working. In the future, we will build a segmented anode to observe alpha-particle tracks in pure helium, methane and isobutane gas.
[1] D. R. Nygren, "Proposal to Investigate the Feasibility of a Novel Concept in Particle Detection", LBL internal report (1974).
[2] S. Anvar et al., Large bulk Micromegas detectors for TPC applications, Nucl. Instrum. Meth., A 602 (2009) 415.
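A minimal sketch of how the energy resolution is typically extracted from an $^{55}$Fe photopeak: a Gaussian is fitted to the peak region of the pulse-height spectrum and the resolution is quoted as FWHM over mean (the spectrum below is generated synthetically and only illustrates the procedure, not the measured data):

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
adc = rng.normal(loc=1200.0, scale=110.0, size=50000)   # toy Fe-55 photopeak in ADC counts

counts, edges = np.histogram(adc, bins=120)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

popt, _ = curve_fit(gauss, centers, counts, p0=[counts.max(), 1200.0, 100.0])
a, mu, sigma = popt
resolution = 2.355 * abs(sigma) / mu    # FWHM / mean
print(f"mean = {mu:.1f} ADC, sigma = {sigma:.1f} ADC, resolution = {resolution:.1%}")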
In this paper, we attempt to obtain the neutrino oscillation parameters at the low energy scale from input values defined at a high energy scale. These oscillation parameters are evolved through radiative corrections governed by the renormalization group equations (RGEs) in the minimal supersymmetric standard model (MSSM). We assume that particular symmetries exist at a very high energy scale, leading to specific leptonic mixing patterns such as bimaximal (BM), tri-bimaximal (TBM) and golden ratio (GR) mixing. For the present analysis, we adopt the golden ratio mixing pattern at the high scale. This is compatible with the low-energy neutrino oscillation parameters obtained from radiative corrections and with the latest cosmological bound $\Sigma m_{i}<0.12$ eV. Our analysis includes the effects of CP phases, along with a variation of the SUSY breaking scale in the range 1 TeV $\leq m_{s}\leq$ 14 TeV, in evolving the neutrino oscillation parameters.
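For reference, in the commonly used GR1 convention (assumed here), the golden ratio mixing pattern fixes the high-scale angles as $$\tan\theta_{12} = \frac{1}{\varphi},\qquad \varphi=\frac{1+\sqrt{5}}{2}\;\Rightarrow\;\theta_{12}\simeq 31.7^{\circ},$$ with $\theta_{13}=0$ and $\theta_{23}=45^{\circ}$, which the RGE running then corrects towards the observed low-energy values.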
The experimental observations from colliders have established the Standard Model (SM) as the most successful phenomenological framework for explaining the non-gravitational interactions of fundamental particles at high energy. Non-zero neutrino masses and dark matter, however, cast a shadow over its success and necessitate extensions of the SM. The most straightforward and elegant extension of the SM to explain these two phenomena is the Scotogenic model, where the SM particle spectrum is extended with three isospin-singlet right-handed neutrinos and one doublet scalar, all of which are odd under a Z2 symmetry. In this work, we consider the lightest right-handed neutrino as the dark matter candidate and the freeze-out mechanism for producing the observed dark matter relic density. Charged lepton flavour violating decay processes constrain the Yukawa coupling from above, while the observed relic density limits it from below. We perform a dedicated parameterization to attain the largest possible Yukawa coupling while satisfying the LFV and DM constraints. The reduced number of free parameters and the large Yukawa coupling make the model highly predictive at lepton colliders. Collider phenomenology for possible signatures at lepton colliders is performed and the luminosities required for detection are estimated.
There have been different proposals for signatures of the formation of a deconfined thermal medium (quark-gluon plasma) in heavy-ion collisions. The suppression of $J/\Psi$ in the deconfined medium is one of the cleanest signals among many other signatures such as elliptic flow and jet quenching. However, there are very few signals effective for the formation of QGP in small systems, such as those produced in proton-proton, proton-deuteron and deuteron-deuteron collisions. Here the medium formed is shown to be very short-lived compared to that formed in heavy-ion collisions, as the system undergoes three-dimensional spherical expansion from the very beginning of the hydrodynamic phase. We model the small systems for different system sizes following the Gubser flow solution and infer that the expansion phase is shorter by a factor of at least 2. We then calculate the dissociation probability of $J/\Psi$ through the non-adiabatic evolution of the state using time-dependent perturbation theory for different values of the thermalization time. We find no significant dissociation of $J/\Psi$ in small systems, in contrast to the systems produced in Au-Au/Pb-Pb collisions, thereby establishing that quarkonium ($J/\Psi$) suppression may not be a successful signature of the formation of a thermal medium in proton-proton, proton-deuteron or deuteron-deuteron collisions.
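The non-adiabatic dissociation probability referred to above is computed within first-order time-dependent perturbation theory, schematically (in natural units) $$P_{i\to f} = \Big|-i\!\int dt\,\langle f|H'(t)|i\rangle\, e^{i\omega_{fi} t}\Big|^{2},\qquad \omega_{fi}=E_f-E_i,$$ where $H'(t)$ encodes the time-dependent medium modification of the quarkonium potential.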
The Resistive Plate Chamber (RPC) is a gaseous detector that will be used as the active detector element of the Iron CALorimeter (ICAL) experiment planned by the India-based Neutrino Observatory (INO). A gas mixture of R134a (95.2%), isobutane (4.5%) and SF6 (0.3%) is used to operate the RPCs in avalanche mode. The composition of the gas mixture plays a crucial role in the RPC detector performance. For the past four years, a closed loop gas system (CLS) has been in operation for 20 RPCs in the mini Iron CALorimeter (mini-ICAL) detector at Madurai. In the CLS, the gas mixture flowing out of the RPCs is routed back to the inlet after suitable purification. Top-up with fresh gas maintains the differential pressure at the outlet header within the range of -3 mmWC to +8 mmWC. The amount of gas top-up is a measure of the leakage in the gas circulation path. During CLS operation, gas top-ups were observed at intervals of 50 to 90 minutes, indicating an increase in the leakage. Extensive leak testing and sealing operations were carried out on the system. The number of gas pipe joints has been reduced to a minimum in the CLS path, and faulty pneumatically operated solenoid valves have been replaced or repaired. These refurbishment operations have extended the top-up cycle to three and a half days. A Residual Gas Analyser (RGA) has been used to determine the quality of the gas mixture flowing in the RPC detectors. After a brief introduction to the CLS, the refurbishment operations will be presented in detail in this talk.
Relativistic dissipative hydrodynamics is an effective macroscopic theory of a near-equilibrium system. It is a tool to explore the collective behaviour of the strongly interacting medium produced in heavy-ion collisions. An ideal hydrodynamic simulation deals with the evolution equations of hydrodynamic variables derived from the conservation laws, using the equation of state as an input. To incorporate dissipative effects one must rely on some additional microscopic prescription. Relativistic kinetic theory is a viable option in this regard: it is a statistical framework that describes the macroscopic quantities using the single-particle phase-space distribution function. The dissipative quantities are expressed in terms of the nonequilibrium distribution function that can be obtained by employing the Boltzmann equation. The microscopic interactions enter the Boltzmann equation via the collision integral, making it an integro-differential equation. The Relaxation Time Approximation (RTA) is a simplification in which all interactions are governed by a single relaxation time. However, as an artifact of this simplification, one needs additional matching conditions to satisfy the conservation laws of energy-momentum and number current. In this work, we have derived relativistic dissipative hydrodynamics from a more general Bhatnagar-Gross-Krook (BGK) collision kernel, which converges to the RTA description as a limiting case. The BGK kernel conserves the number current by construction, and thus no specific matching condition for it is necessary. This additional freedom leads to a class of physically consistent hydrodynamic descriptions constrained by a single matching condition ensuring energy-momentum conservation. Thus, this framework provides a platform to explore the effects of general matching conditions on transport coefficients and their deviation from the usual RTA prescription. We also propose a modified BGK collision kernel, which can be particularly useful for systems with vanishing net baryon density, and derive dissipative hydrodynamics from it.
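Schematically, and in a notation assumed here rather than taken from this work, the two collision kernels compared above can be written as $$C_{\rm RTA}[f] = -\frac{u\cdot p}{\tau_R}\bigl(f-f_{\rm eq}\bigr),\qquad C_{\rm BGK}[f] = -\frac{u\cdot p}{\tau_R}\Bigl(f-\frac{n}{n_{\rm eq}}f_{\rm eq}\Bigr),$$ where the factor $n/n_{\rm eq}$ in the BGK kernel enforces number-current conservation without a separate matching condition.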
Hadronic resonances are a unique tool to study the properties of the hadronic phase created after high-energy collisions via regeneration and rescattering of their decay products. Studying the dependence of resonance yields on transverse spherocity and multiplicity allows us to understand the resonance production mechanism as a function of event topology and system size, respectively. Furthermore, measurements in small systems are used as a reference for heavy-ion collisions and are helpful for the tuning of Quantum Chromodynamics-inspired event generators. In this contribution, we present recent results on hadronic resonance production as a function of event multiplicity and transverse spherocity. The results include the transverse momentum spectra, yields, mean transverse momentum (⟨pT⟩) and the ratios of the yields to those of long-lived particles. These measurements will be compared with Monte Carlo predictions.
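For reference, the transverse spherocity used as the event-shape variable above is commonly defined (in the standard ALICE-style convention, assumed here) as $$S_0 = \frac{\pi^{2}}{4}\,\min_{\hat n}\left(\frac{\sum_i |\vec p_{T,i}\times\hat n|}{\sum_i p_{T,i}}\right)^{2},$$ with $S_0\to 0$ for jetty (back-to-back) events and $S_0\to 1$ for isotropic events.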
A simulation-based projection study has been performed for a search for a vector-like top quark partner T in proton-proton (pp) collisions at $\sqrt{s} = 14$ TeV. The search considers the operational conditions of the High-Luminosity LHC (HL-LHC). The production pp → TT is followed by the decays T → bW, T → tH and T → tZ with equal branching fractions of 1/3. Events with one electron or muon, missing transverse momentum and jets are considered. For an integrated luminosity of 3000 fb$^{-1}$, the search projects to exclude a T mass below 1750 GeV at the 95% confidence level. Conversely, a T quark with mass up to 1440 GeV can be discovered at the HL-LHC with a significance of five standard deviations.
These days we are also searching for dark matter with the help of neutrinos. There are three known types or flavors of neutrinos, and they have some rather strange properties. One of these strange properties is their helicity. Elementary particles have spin, and when they travel the spin is either oriented along their direction of motion (right-handed helicity) or opposite to their motion (left-handed helicity). Most particles can have either helicity depending on the interaction, but the helicity of neutrinos is always left-handed. We aren’t entirely sure why, but we do know that if right-handed neutrinos exist they wouldn’t interact with regular matter through the electroweak force. They would only interact with matter gravitationally, so they are known as sterile neutrinos.
If sterile neutrinos exist, and they are just regular neutrinos with right-handed helicity, then they would be hot dark matter and not the cold dark matter we’re looking for. But there are some theories where sterile neutrinos are much more massive than regular neutrinos. These heavy sterile neutrinos could comprise dark matter. That is if they exist.
If there are heavy sterile neutrinos out there, they could be discovered by their radioactive decay. Heavy particles can decay into lighter particles over time, so it’s possible that sterile neutrinos can decay to their lighter counterparts, emitting x-ray photons in the process. In an effort to discover these x-ray emissions, a team combed through data from the Chandra X-Ray Observatory. They didn’t find any evidence of sterile neutrinos. Their results weren’t strong enough to entirely rule out the idea, but it does narrow down the theoretical candidates a bit. Specifically, the study places a hard limit on how sterile neutrinos can decay if they exist.
We still don’t know what dark matter is. Studies like this might seem disappointing, but they play an important role. By narrowing down our options, they force us to focus on more viable dark matter candidates. We’ve learned something more, but for now, we are still in the dark.
We review sterile neutrinos as possible Dark Matter candidates. After a short summary on the role of neutrinos in cosmology and particle physics, we give a comprehensive overview of the current status of the research on sterile neutrino Dark Matter. First we discuss the motivation and limits obtained through astrophysical observations. Second, we review different mechanisms of how sterile neutrino Dark Matter could have been produced in the early universe. Finally, we outline a selection of future laboratory searches for keV-scale sterile neutrinos, highlighting their experimental challenges and discovery potential.
Sterile neutrinos are, as we say in the US, a whole new ball game. Unlike standard model neutrinos, we don’t know if they are real. And unlike the neutrinos we know, they seem to only interact through gravity. Sounds boring, you might think. Why bother? First of all, everybody knows that I like a good dark matter candidate. I am especially fond of one that I can argue should exist anyway, regardless of our missing, invisible matter problem. Sterile neutrinos share two of my favourite qualities for a hypothetical particle: they are well-motivated and they happen to be interesting dark matter candidates.
We think sterile neutrinos should exist thanks to a property of standard model neutrinos: handedness. Specifically, the neutrinos we know are lefties (and antimatter antineutrinos are all righties). Though I am referring to this as handedness, this property – formally known as chirality – isn’t quite like everyday life because it isn’t classical. Like particle spin, it is a quantum feature.
Every known particle can come in both left and right-handed forms – apart from neutrinos. They come only as left-handed particles. Naturally, over the years, physicists have wondered whether there are right-handed neutrinos (and left-handed antineutrinos).
The sterile neutrino is that hypothetical right-handed neutrino. It is named “sterile” because it only interacts through gravity. While this property makes sterile neutrinos different from other neutrinos, they do have a mass and aren’t electrically charged, just like standard model neutrinos. This means they could be dark matter and, unlike standard model neutrinos, they potentially have sufficient mass to explain the apparent gravitational impact of dark matter’s presence.
Those of us who are theorists get the exciting work of figuring out how the idea that sterile neutrinos could be dark matter would work mathematically. Experimentalists get the joy – and incredible challenge – of going out and looking for physical evidence.
One of these searches recently caused some headlines by finding a null result: no sterile neutrinos. Detecting ordinary neutrinos is difficult enough. That work is even more complicated with sterile neutrinos, which can only be “seen” through their interactions with quantum fluctuations of their standard model counterparts. To find sterile neutrinos, you have to look for a specific type of behaviour in everyday neutrinos.
The experiment that recently announced results, MicroBooNE, is located at Fermilab, not far from Chicago. It consists of a large container of argon attached to a beamline where neutrinos are produced by colliding protons together. It is easier to follow the trajectory of neutrino events in argon, due to its high density and sensitivity to the charged particles that are produced in the collisions.
MicroBooNE’s primary task is to better understand how neutrinos interact with argon and to try to replicate the hints seen in earlier experiments that sterile neutrinos are real. Two experiments, MiniBooNE and LSND, saw an excess of muon neutrinos oscillating into electron neutrinos over distances that didn’t physically make sense. This oddity could be explained if the muon neutrinos were first becoming sterile neutrinos, before changing into electron neutrinos.
Sadly for some, the MicroBooNE team announced recently that it hadn’t, so far, seen the same electron neutrino excess. This is consistent with data from other experiments, leaving us with quite the mystery. Why are different experiments getting different results? We don’t know.
But even if nothing turns up once we have explored every place this hypothetical particle could be hiding, that will still be valuable. If sterile neutrinos turn out to only be a figment of the particle theorist’s imagination, we will know it is time to move on.
A search for high-mass resonances decaying into a pair of W bosons is presented. The analysis is based on proton-proton collisions observed by the CMS experiment at the CERN LHC during the full Run 2, corresponding to an integrated luminosity of 138 fb$^{-1}$ at $\sqrt{s}$ = 13 TeV. The analysis considers the fully leptonic final state. New techniques are implemented in the analysis to improve the sensitivity of the search, especially in the very high mass range. The search is performed in a mass range from 115 GeV to 5 TeV and for various width hypotheses. The effects of background and signal interference are also considered. The results are presented as 95% confidence level upper limits on the product of the cross section and branching ratio for the production of a high-mass resonance, and exclusion limits are derived for various two-Higgs-doublet models and minimal supersymmetric standard model benchmark scenarios.
The conservation of lepton flavour is one of the accidental symmetries of the SM, so charged-lepton-flavour-violating processes are forbidden in the SM. However, some new physics models, such as leptoquark models, predict such processes, which could be observed in a high-energy physics experiment.
The bottomonium system is a good place to study such processes. Belle is a flavour physics experiment at the KEKB asymmetric $e^+e^-$ collider at KEK, Japan. It mainly collected data at the $\Upsilon$(4S) energy, but it also collected some data at $\Upsilon$(nS; n = 1, 2, 3), and Belle has the world's largest data sample at the $\Upsilon$(2S). We will present a search for charged lepton flavour violation in $\Upsilon$(2S) → $\ell\tau$ ($\ell$ = e, $\mu$) decays, where the $\tau$ is reconstructed from $\tau \to \ell \nu_\ell \nu_\tau$ and $\tau \to \pi^+\pi^0\nu_\tau$, using 25 fb$^{-1}$ of data collected at the $\Upsilon$(2S) resonance with the Belle detector.
We search for the decay $B_s^0\rightarrow J/\psi \pi^0$ using 121.4 $fb^{-1}$ of data collected at the $\Upsilon(5S)$ resonance by the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider located at the High Energy Accelerator Research Organisation, KEK, in Japan. In the Standard Model, the decay is expected to be rare, proceeding through $W$-boson exchange and annihilation processes. Quantitative predictions of the amplitudes for such transitions have proved difficult and differ significantly between approaches. The QCD factorization (QCDF) approach suffers from significant uncertainties due to endpoint singularities, whereas perturbative QCD (pQCD) does not differentiate between exchange and annihilation topologies. Nevertheless, pQCD provides more precise predictions for charmless decays; annihilation topologies with charm in the final state, however, have not been studied in detail. An experimental investigation of the issue is very desirable, since the decay mode has the potential to advance the field. The present experimental upper limit of $1.2\times10^{-3}$ at 90\% confidence level (CL) on the branching ratio of $B_s^0\rightarrow J/\psi\pi^0$ was set by the L3 collaboration in 1997. This analysis will be the first attempt to search for this decay using the available dataset from the Belle experiment, with the expectation of reaching the SM sensitivity.
The CDF-II collaboration's recent high-precision measurement of the $W$ boson mass, $M_{W}^{\text{CDF}} = 80.4335 \pm 0.0094$ GeV, indicates a $7\sigma$ deviation from the SM expectation $M_{W} = 80.354 \pm 0.007$ GeV. This leads us to investigate extensions of the SM that can account for the aforementioned issues. We investigate the possibility that the well-known canonical Scotogenic model, where the dark matter particle running in the loop generates neutrino masses, explains the CDF-II measurement. For both scalar and fermionic dark matter possibilities, we simultaneously examine the constraints coming from a) neutrino mass, oscillation, neutrinoless double beta decay and lepton flavour violation experiments, b) dark matter relic density and direct detection experiments, and c) the oblique $S$, $T$, $U$ parameter values consistent with the CDF-II W boson measurement. We show that the viable parameter space of the doublet scalar carrying a dark parity charge is nearly ruled out by the new CDF-II measurement, while the fermionic dark matter in the canonical Scotogenic model can simultaneously address all the aforementioned issues.
In our second work, we focus on a $U(1)_{B-L}$ gauged extension of the SM. We demonstrate that $B-L$ extended models can explain the revised best-fit values of $S$, $T$ and $U$ following the CDF-II results. We study the parameter space of models with and without mixing between the neutral gauge bosons. We also revisit the dark matter constraints and demonstrate that there is parameter space compatible with the current W boson mass, the relic abundance and direct detection experiments.
During peripheral heavy-ion collisions at RHIC and the LHC, a huge magnetic field can be created. The quark-gluon plasma (QGP) produced in these collisions is therefore exposed to this strong magnetic field, which decays with time. The electrical conductivity of the QGP can be a guiding quantity for this decay profile of the magnetic field. The present work explores the possible numerical band of the conductivity obtained in existing references, and uses these values as inputs to predict the corresponding decay pattern of the magnetic field.
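The role of the conductivity can be seen schematically from the nonrelativistic, resistive magnetohydrodynamic induction equation, quoted here only as a qualitative guide rather than as the formalism used in this work: $$\frac{\partial\vec B}{\partial t} = \vec\nabla\times(\vec v\times\vec B) + \frac{1}{\mu_0\sigma}\nabla^{2}\vec B,$$ so that a larger electrical conductivity $\sigma$ suppresses the diffusive term and slows the decay of the field.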
In low-scale leptogenesis (LSL) models, a small RH neutrino mass can be generated dynamically if there is a weak coupling between a long-lived scalar field and the RH neutrinos, despite a large VEV $v_\Phi$. In such a scenario, the correlation between the non-standard scalar era driven by $\Phi$ and the $M_i$ provides an excellent opportunity to study the fingerprints of LSL on primordial gravitational waves. We study gravitational waves originating from an inflationary blue-tilted tensor power spectrum and propagating through the scalar epoch, which, depending on the $M_i$, provides two significant insights. Firstly, taking LSL seriously, GWs with a significant blue tilt do not violate the BBN bound even for very high-scale reheating. Secondly, it opens up an opportunity to test LSL via a doubly peaked GW background with a low-frequency and a complementary high-frequency peak. Taking recent results on GWs from PTAs at face value as a low-frequency measurement allows one to obtain possible signatures of LSL mechanisms at higher frequencies.
Neutrinos are massless in the SM, but one can introduce Majorana neutrino masses effectively through a dimension-five lepton-number-violating operator $-\mathcal{L}_\nu^{d=5}=\frac{1}{\Lambda}(\overline{L}\Phi)(\Phi^TL^c)+h.c.$ The linear seesaw mechanism provides an interesting UV completion of this operator realized in the simplest $SU(3)_c\otimes SU(2)_L\otimes U(1)_Y$ gauge structure. In addition to the Standard Model particles, there are three quasi-Dirac leptons and a second, leptophilic, scalar doublet. Our proposal is consistent with electroweak precision tests, neutrino physics, rare decays, as well as restrictions from lepton flavour violation and collider experiments. Striking signatures are expected at colliders, associated with the production of the neutrino mass mediators.
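In the usual linear-seesaw notation (a schematic textbook form, assumed here rather than quoted from this work), the neutral-fermion mass matrix in the basis $(\nu, N, S)$ and the resulting light-neutrino mass read, up to sign conventions, $$\mathcal{M}=\begin{pmatrix} 0 & m_D & m_L\\ m_D^{T} & 0 & M\\ m_L^{T} & M^{T} & 0\end{pmatrix},\qquad m_\nu \simeq m_D M^{-1} m_L^{T} + \bigl(m_D M^{-1} m_L^{T}\bigr)^{T},$$ linear in the small lepton-number-violating entry $m_L$, which is why tiny neutrino masses remain compatible with sizeable Yukawa couplings and accessible mediators.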
We propose a two-component fermionic Dark Matter (DM) scenario in a minimal $U(1)_B$ extension of the Standard Model (SM) with the inclusion of one complex scalar S(1, 1, 0, −3) along with the usual Higgs doublet. Out of the three exotic fermions added for anomaly cancellation, the DM emerges as a mixture of the neutral component of the fermionic doublet and a singlet fermion. The motivation for our work lies in the fact that, in the case of a one-component DM candidate, the spin-independent direct-detection (SIDD) cross-section comes out larger than the experimental bounds and is thus ruled out. So, in this work, we take a two-component singlet-doublet fermionic Dark Matter, where a mixing-angle ($\theta$) dependent term between the final mass eigenstates of the DM particles can significantly relax the SIDD cross-section to within the experimental limits for a suitable choice of mixing angle. In the model, the $U(1)_B$ symmetry is broken by the scalar S, and a remnant $Z_2$ symmetry ensures the stability of the DM candidates. The model thus offers a viable parameter space for a stable DM candidate that can be probed in direct search, collider and GW experiments.
Astrophysical and cosmological observations unambiguously established the existence of non-luminous but abundantly present Dark Matter, which interacts gravitationally. Among various models, Weakly Interacting Massive Particles (WIMPs) are the most popular and well-motivated candidates.
I shall first present a new detector technology, the Snowball Chamber: its design, the associated DAQ system, the physics motivation, and our progress so far. The target material is highly purified (20 nm filtration) supercooled water. Our previous run has shown direct evidence of nucleation of the target by neutrons, which caused a freezing point on average 0.7 °C warmer than the control, with a statistical significance > 5$\sigma$. Currently, we are improving the system to overcome the pitfalls faced in the first run. I shall then discuss my upgraded 3D model of the detector and the improved Geant4 simulation models that are underway, and show results of the calibration runs using RTDs and thermocouples. This presentation will contribute to this unique technology, with future scope and possibilities across fields.
We analytically calculate the conversion probability $P_{\mu e}$ in the presence of sterile neutrinos, with exact dependence on $\Delta m^2_{41}$, and with matter effects explicitly included. Using perturbative expansion in small parameters, we show that the terms involving mixing angles $\theta_{24}$ and $\theta_{34}$ can be separated out, with the effects of the latter only arising due to neutral current forward scattering of active neutrinos. Moreover, the conversion probability $P_{\mu e}$ can be rearranged as a summation of terms of the form $\sin x/x$, which helps in the physical understanding of where the effects of different possible values of $\Delta m^2_{41}$ dominate.
We further focus on the identification of sterile mass ordering at a long baseline experiment like DUNE. Our analytic expressions allow us to predict how the effects of sign of $\Delta m^2_{41}$ would manifest, for both possible signs of $\Delta m^2_{31}$. We numerically calculate the sensitivity of DUNE to sterile mass ordering over a large range of $\Delta m^2_{41}$ and explain the features of this sensitivity using our analytic expressions.
Strangeness production has been suggested as a sensitive probe of the early dynamics of the deconfined matter created in heavy-ion collisions. Ratios of particle yields involving strange particles are often utilized to study freeze-out properties of the nuclear matter, such as the strangeness chemical potential and the chemical freeze-out temperature. The $d$+Au collisions bridge Au+Au and $pp$ collisions, and supply the baseline for the study of strangeness enhancement in the deconfined matter. The study of nuclear modification factors for strange hadrons in $d$+Au collisions can also help in understanding Cronin-like effects.
In this work, we will present new measurements of the production of strange hadrons ($K_S^0$, $\Lambda$, $\Xi$, $\Omega$) in different rapidity intervals in $d$+Au collisions at $\sqrt{s_{\rm{NN}}} = 200$ GeV, recorded by the STAR experiment in 2016. We will report transverse momentum ($p_{\rm{T}}$) spectra, $p_{\rm{T}}$-integrated yields dN/dy, average transverse momenta, yield ratios, nuclear modification factors, and rapidity asymmetry ($Y_{\rm{asym}}$) for these strange hadrons. The physics implications of the measurements for the collision dynamics will be discussed.
Measuring the trilinear Higgs self-coupling parameter $\lambda_{HHH}$, which crucially determines the shape of the Higgs potential, is among the key mandates of the Large Hadron Collider (LHC) experiments. In proton-proton collisions, this coupling can be probed directly by studying the production of Higgs boson pairs. Due to the rarity of the HH production signal, the analysis usually requires enhancing the signal-to-background ratio in the observed sample with multivariate analysis (MVA) techniques. To this end, we have made a comparative study of several MVA-based classifiers to distinguish the Higgs boson pair (HH) production signal from the dominant irreducible background of top-pair-associated Higgs boson production in the inclusive final state of $b\overline{b}+\gamma\gamma$, using an available simulated dataset. Our study indicates better performance of the graph-based Message Passing Neural Network (MPNN) over the other classifiers considered. This talk will present the basic features of the different networks considered and the results from the MPNN.
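As an illustration of the graph-based approach, the sketch below implements a generic message-passing layer and graph-level classifier in plain PyTorch; the event-graph layout, feature count, and network sizes are hypothetical and are not the configuration used in the study.

```python
# Hedged sketch of a message-passing network for event classification.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, n_feat, n_hidden):
        super().__init__()
        # edge network: builds a message from each (sender, receiver) feature pair
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * n_feat, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_feat))
        # node network: updates a node from its features and the aggregated messages
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * n_feat, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_feat))

    def forward(self, x):                               # x: (n_nodes, n_feat)
        n = x.size(0)
        senders = x.unsqueeze(0).expand(n, n, -1)       # (n, n, n_feat)
        receivers = x.unsqueeze(1).expand(n, n, -1)     # (n, n, n_feat)
        messages = self.edge_mlp(torch.cat([senders, receivers], dim=-1))
        aggregated = messages.mean(dim=0)               # average over senders
        return self.node_mlp(torch.cat([x, aggregated], dim=-1))

class EventClassifier(nn.Module):
    """Two message-passing steps followed by a mean-pooled readout."""
    def __init__(self, n_feat=6, n_hidden=64):
        super().__init__()
        self.mp1 = MessagePassingLayer(n_feat, n_hidden)
        self.mp2 = MessagePassingLayer(n_feat, n_hidden)
        self.readout = nn.Sequential(
            nn.Linear(n_feat, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 1))

    def forward(self, x):                               # x: (n_nodes, n_feat)
        x = self.mp1(x)
        x = self.mp2(x)
        return torch.sigmoid(self.readout(x.mean(dim=0)))  # signal probability

# toy usage: one event represented as 5 reconstructed objects with 6 features each
event = torch.randn(5, 6)
print(EventClassifier()(event))
```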
We have provided a modified grand canonical ensemble formulation for a multi-component hadron resonance gas system. We have considered attractive as well as repulsive interactions among the constituent baryons (antibaryons) and obtained a Van der Waals type equation of state. The weak decay contributions of the heavier resonances have also been taken into account. Using our formulation, we have calculated several relative hadronic yields as well as nucleon (antinucleon) densities in the system in a thermodynamically consistent manner. It is found that the particle ratios get significantly modified in the case of Van der Waals interactions for a baryon-rich system and at high temperatures. We find that, by employing the Van der Waals type equation of state, we can reasonably predict several particle ratios obtained at the CERN SPS at 80A and 40A GeV within a temperature range of 155-165 MeV for baryon chemical potentials of 300 MeV and 500 MeV, respectively, for the two cases. In this approach, the repulsive force is assumed to exist between pairs of two baryons and pairs of two antibaryons, while it is purely attractive between a baryon-antibaryon pair. The values of the attractive and repulsive parameters have been taken from earlier studies, where they were fixed to reproduce the ground-state properties of nuclear matter. We have also studied the effect of the variation of these parameters on our results.
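For reference, the single-component van der Waals equation of state underlying this type of formulation has the standard form
\[
p(T,n) \;=\; \frac{n\,T}{1-b\,n} \;-\; a\,n^{2},
\]
where $n$ is the (anti)baryon number density, $b$ parametrizes the repulsive excluded volume and $a$ the attractive mean field; values of order $a\sim 330$ MeV fm$^{3}$ and $b\sim 3.4$ fm$^{3}$, fixed from nuclear-matter ground-state properties, are commonly quoted in the literature.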
The measurement of the production cross section and transverse momentum ($p_T$) spectrum of the $Z$ boson at the LHC provides one of the first tests of the Standard Model (SM). This measurement can be sensitive to exotic physics processes in a new energy regime. The $Z$ boson production is also a common background for many other physics analyses and therefore must be well understood. In this contribution, we will present a study of $Z$ boson production in association with jets in p-p collisions at a center-of-mass energy of 13.6 TeV at the LHC, using leading-order event generators such as PYTHIA and HERWIG. The $Z$ boson is reconstructed in the $\mu^{+}\mu^{-}$ and $e^{+}e^{-}$ decay channels using different kinematic selections. These criteria require each lepton to have transverse momentum $p_T > 20$ GeV and to lie within the central region ($|\eta| < 2.4$) of the detector. Jets are reconstructed with the anti-$k_T$ algorithm with radius parameter $R = 0.4$, and are required to have transverse momentum greater than 30 GeV and $|\eta| < 1.3$. A comparison of the $Z$ boson $p_{T}$ spectrum between the two generators will be presented.
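As a sketch of the selection quoted above, the following function applies the lepton and jet requirements to a generic event record; the dictionary-based event layout and field names are hypothetical, and the actual analysis uses PYTHIA/HERWIG event records with proper anti-$k_T$ clustering.

```python
# Hedged sketch of the kinematic selection (hypothetical event layout).
def select_z_candidates(event):
    """Return (Z candidate lepton pairs, selected jets) passing the cuts in the text."""
    leptons = [l for l in event["leptons"]          # muons or electrons
               if l["pt"] > 20.0 and abs(l["eta"]) < 2.4]
    jets = [j for j in event["jets"]                # anti-kT, R = 0.4 jets
            if j["pt"] > 30.0 and abs(j["eta"]) < 1.3]
    # require an opposite-charge, same-flavour lepton pair for the Z candidate
    pairs = [(a, b) for i, a in enumerate(leptons) for b in leptons[i + 1:]
             if a["flavour"] == b["flavour"] and a["charge"] * b["charge"] < 0]
    return pairs, jets

# toy event
evt = {"leptons": [{"pt": 45.0, "eta": 0.3, "flavour": "mu", "charge": +1},
                   {"pt": 32.0, "eta": -1.1, "flavour": "mu", "charge": -1}],
       "jets":    [{"pt": 55.0, "eta": 0.9}]}
print(select_z_candidates(evt))
```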
The superconformal bootstrap program for $\mathcal{N}=2$ superconformal field theories was initiated by Rastelli et al. The main ingredient for bootstrapping any CFT is a four-point function, which can be expressed in terms of conformal partial waves. In this work we have computed the superconformal partial waves of the four-point correlator $\langle JJ\Phi\Phi^{\dagger}\rangle$, in which the external operator $J$ is the superconformal primary of the 4D $\mathcal{N} = 2$ stress-tensor multiplet and $\Phi$ is the primary operator of a chiral multiplet. We have used the full superembedding formalism for this work. In $\mathcal{N} = 2$ SCFTs, the three-point functions $\langle J J \mathcal{O}\rangle$ and $\langle\Phi\Phi^{\dagger}\mathcal{O}\rangle$ with a general multiplet $\mathcal{O}$ contain two independent nilpotent superconformal invariants and new superconformal tensor structures, which can be constructed neatly from variables in superembedding space, and the three-point functions can be solved in compact form. We computed the superconformal partial waves corresponding to the exchange of long multiplets, where the result for odd spin is consistent with the nontrivial constraints from the decomposition of $\mathcal{N}=2$ multiplets into several $\mathcal{N}=1$ multiplets.
References
A. L. Fitzpatrick, J. Kaplan, Z. U. Khandker, D. Li, D. Poland and D. Simmons-Duffin, JHEP 1408, 129 (2014), doi:10.1007/JHEP08(2014)129 [arXiv:1402.1167 [hep-th]].
Z. U. Khandker, D. Li, D. Poland and D. Simmons-Duffin, JHEP 1408, 049 (2014), doi:10.1007/JHEP08(2014)049 [arXiv:1404.5300 [hep-th]].
Z. Li and N. Su, JHEP 1605, 163 (2016), doi:10.1007/JHEP05(2016)163 [arXiv:1602.07097 [hep-th]].
W. D. Goldberger, Z. U. Khandker, D. Li and W. Skiba, Phys. Rev. D 88, 125010 (2013), doi:10.1103/PhysRevD.88.125010 [arXiv:1211.3713 [hep-th]].
A. M. Polyakov, Nonhamiltonian approach to conformal quantum field theory, Zh. Eksp. Teor. Fiz. 66 (1974) 23-42.
R. Rattazzi, V. S. Rychkov, E. Tonni and A. Vichi, Bounding scalar operator dimensions in 4D CFT, JHEP 0812 (2008) 031 [arXiv:0807.0004].
P. Liendo, I. Ramirez and J. Seo, JHEP 1602, 019 (2016), doi:10.1007/JHEP02(2016)019 [arXiv:1509.00033 [hep-th]].
S. Rychkov, EPFL Lectures on Conformal Field Theory in D $\geq$ 3 Dimensions [arXiv:1601.05000].
We revisit the symmetries of an isolated horizon (IH), exploiting some freedom in the choice of intrinsic data. The supertranslations are realized as additional symmetries. Furthermore, it is shown that all smooth vector fields tangent to the cross sections are Hamiltonian. We show that joining two IHs which differ in these Hamiltonians and boundary data, under the action of a supertranslation, necessarily requires an intermediate phase supported by a stress-energy tensor. In this phase the boundary is null and nonexpanding but not an IH, and this invariably leads to a violation of the dominant energy condition. The assumptions also allow us to reconstruct the (classically) pathological stress-energy tensor.
The long-baseline Deep Underground Neutrino Experiment (DUNE) is a novel and ambitious setup that will come up in the midwestern United States. This world-class laboratory will not only address fundamental questions about the nature of elementary particles and their role in the universe, but also aims at groundbreaking discoveries.
In DUNE, the measurements of neutrino oscillation parameters will be made by comparing the detected event rates in the far detector with predictions based on the un-oscillated neutrino flux measured at the near detector. At a depth of around 1480 m (4.30 km w.e.), the DUNE far detector will be the largest liquid argon time projection chamber (TPC). It will be able to look at astrophysical objects through cosmic neutrinos, which are hard to observe through other messenger particles, and at the same time it will search for Weakly Interacting Massive Particles (WIMPs) using neutrino-induced upward through-going muons. An understanding of the atmospheric neutrino background will be required to realize the goals of DUNE. Since meson decays produce neutrinos along with charged leptons, the neutrino background can be strongly constrained by measuring the atmospheric muon flux.
The existing direct and indirect methods of muon spectrometry in accelerator-based and cosmic-ray experiments (magnetic spectrometers, transition radiation detectors) involve certain technical problems and limitations in the higher-energy region. These disadvantages vanish in an alternative method in which the muon energy is estimated by measuring the energy of secondary cascades formed as muons lose energy in matter, mainly through the bremsstrahlung process. In this work, we are attempting to implement this technique to reconstruct the muon energy and its direction in the LArTPC proposed for DUNE.
We consider the twisted-diffeomorphism framework of canonical noncommutative spaces, in which the noncommutative versions of the metric tensor, Christoffel symbols, curvature tensors and curvature scalars are constructed in terms of their commutative counterparts. Further, we consider two commutative spaces that are related to each other by a non-injective coordinate transformation, i.e., a local-diffeomorphism transformation. We analyze the nature of the curvatures of these two spaces after the introduction of canonical-type noncommutativity of coordinates. Although the nature of the curvature of these two spaces is the same in the commutative case, it is fundamentally altered in the noncommutative case if at least one of the components of the metric tensor depends on more than one canonically noncommuting coordinate. One significant result is that a noncommutative Minkowski spacetime with such a metric structure is not flat; that is, the effect of noncommutativity in such cases naturally brings a gravitational effect into the theory. Another geometrically significant result is that a flat two-dimensional commutative space with an appropriate metric structure can develop curvature after the introduction of canonical noncommutativity between its coordinates.
We report modifications to the traditional blast-wave fit to momentum spectra of particles at midrapidity emitted from central Au+Au collisions at $\sqrt{s_{NN}} = 3.0$ GeV in STAR, and compare to HADES Collaboration results at $\sqrt{s_{NN}} = 2.4$ GeV. We explore a scenario with a Gaussian-shaped emission source in rapidity, which modifies the boost-invariance assumption of the traditional blast-wave model. Such a modification is expected when the produced fireball is fully hadronic and ideal hydrodynamics breaks down. The modified blast-wave model is able to unify the $\pi/K/p$ spectra with a common temperature and transverse flow velocity, indicating that thermal equilibrium is achieved in the hadronic system produced in 3.0 GeV Au+Au collisions.
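For reference, the boost-invariant blast-wave spectrum being modified has the standard Schnedermann-Sollfrank-Heinz form,
\[
\frac{dN}{m_{T}\,dm_{T}} \;\propto\; \int_{0}^{R} r\,dr\; m_{T}\,
I_{0}\!\left(\frac{p_{T}\sinh\rho(r)}{T_{\rm kin}}\right)
K_{1}\!\left(\frac{m_{T}\cosh\rho(r)}{T_{\rm kin}}\right),
\qquad
\rho(r)=\tanh^{-1}\!\left[\beta_{s}\left(\frac{r}{R}\right)^{n}\right];
\]
in the modified fit described above, the flat (boost-invariant) rapidity plateau behind this expression is replaced by a Gaussian emission profile in rapidity.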
Measurements of two-particle correlations are sensitive to several characteristics of the medium created in heavy-ion collisions. Looking at the correlations of charged and neutral kaons might provide information about the potential formation of disoriented chiral condensates (DCCs). Previous ALICE measurements have indeed shown a strong anti-correlation between charged and neutral kaons, which is qualitatively consistent with the formation of DCCs. The initial goal of this analysis is to perform charged and neutral kaon identification with high purity using the ALICE detector. Once the neutral and charged kaons are cleanly identified, they can be used to construct the two-particle correlation function. The measurements of a more differential analysis of these correlations as a function of $\Delta \varphi$ and $\Delta \eta$ from Pb-Pb collisions at $\sqrt{s_{\rm{NN}}} = 5.02$ TeV will be shown.
While triplet-like Higgses with masses up to a few hundred GeV are already excluded over a vast region of the model parameter space by the LHC searches, strikingly, there is a region of this parameter space that is beyond the reach of the existing LHC searches, and doubly/singly-charged and neutral Higgses as light as 200 GeV, or even lighter, are allowed by the LHC data. We study several search strategies targeting different parts of this LHC-elusive parameter space at two configurations of $e^-e^+$ colliders, with 500 GeV and 1 TeV centre-of-mass energies. We find that a vast region of this parameter space could be probed at the 5$\sigma$ discovery level with early $e^-e^+$ collider data.
The flavor symmetry-breaking scale of the Froggatt-Nielsen (FN) mechanism is very weakly constrained by current experiments and can lie anywhere from a few TeV to the Planck scale. We develop ultraviolet (UV) complete models that generate the FN mechanism, with a global $U(1)_{\rm{FN}}$ flavor symmetry, for two commonly used charge assignments. We explore the possibility of a strong first-order phase transition (SFOPT) induced by the flavon, using the one-loop finite-temperature effective potential. We show that for flavor symmetry-breaking scales of $\sim 10^4$-$10^7$ GeV, the associated stochastic gravitational wave (GW) background may be strong enough to be detected at upcoming GW observatories such as the Big Bang Observer (BBO) and the Einstein Telescope (ET). We identify viable regions of the parameter space for the best detection prospects. Both flavor models can produce a detectable GW background; however, the GW signature does not discriminate between them.
We consider canonical noncommutativity among spacetime coordinates, which gives rise to the twisted conformal and twisted Poincare algebras. Different aspects of the Weyl tensor in four-dimensional noncommutative spacetime are discussed. We calculate the noncommutative correction to the Weyl tensor in noncommutative Minkowski spacetime in spherical polar coordinates and in conformally compactified coordinates. We analyze and compare the structure of causality in Minkowski spacetime in these two coordinate systems. The effect of Moyal-type spacetime noncommutativity and of the noncommutative correction to the Weyl tensor in conformally compactified coordinates on the asymptotic behavior of noncommutative Minkowski spacetime is also discussed.
We study the possibility of generating the baryon asymmetry of the universe from dark matter (DM) annihilations during non-standard cosmological epochs. Considering the DM to be of the weakly interacting massive particle (WIMP) type, the generation of the baryon asymmetry via the leptogenesis route is studied, where WIMP DM annihilation produces a non-zero lepton asymmetry. Adopting a minimal particle physics model to realise this, along with non-zero light neutrino masses, we consider three different types of non-standard cosmic history, namely (i) a fast-expanding universe, (ii) early matter domination, and (iii) scalar-tensor theory of gravity. By solving the appropriate Boltzmann equations incorporating such non-standard histories, we find that the allowed parameter space consistent with the DM relic abundance and the observed baryon asymmetry gets enlarged, with the possibility of lower DM masses in some scenarios. While such lighter DM can face further scrutiny at direct search experiments, the non-standard epochs offer complementary probes on their own.
We propose a scoto-seesaw model in an $A_4$ flavor symmetric framework which can explain the TM2 mixing pattern. In this setup, we obtain the tri-bimaximal mixing (TBM) pattern from a type I seesaw mechanism with two right-handed neutrinos. As the TBM pattern cannot explain the non-zero reactor mixing angle, we introduce a scotogenic contribution with one fermion which, combined with the seesaw mechanism, successfully explains the observed value of the non-zero reactor angle. The scotogenic part acts as a deviation from the TBM pattern to explain the correct neutrino oscillation data and provides a suitable candidate for dark matter. Our model can distinguish between normal and inverted ordering of neutrino masses for specific values of the model parameters. Due to the flavor symmetric construction, the lepton flavor violating decay $\mu \rightarrow e \gamma$ vanishes in the scotogenic contribution and sets a lower limit on the type I seesaw right-handed neutrino masses. We have also predicted the effective mass parameter appearing in neutrinoless double beta decay, which can be tested in future experiments.
I will present a brief overview of astrophysical and cosmological constraints on dark matter and dark energy.
The most common assumption is that the pressure inside a neutron star (NS) is isotropic. In this study, we calculate the anisotropic pressure inside the NS and its effects on properties such as mass, radius, compactness, and surface curvature. To obtain the NS properties, we use relativistic mean-field equations of state. We observe that anisotropy has significant effects on the surface curvature of the NS.
Recent images of the M87* and Sgr A* black holes by the EHT collaboration have opened a new portal to unlock various mysteries of the universe. Due to the extreme gravity around a black hole, there will be an enhanced distribution of dark matter, which will have a significant effect on the image of the black hole. One distinct feature of a black hole image is the black hole shadow, which can be used to extract information about this dark matter environment. There are various models of dark matter that propose an effective (but very weak) interaction of dark matter with light, giving it a fractional charge; such dark matter is called millicharged dark matter. In this talk, I will present the effect of this millicharged dark matter environment on the shadow of a black hole. I will also show the proposed bounds on the millicharged dark matter parameter space, based on more precise future observations of the black hole shadow.
Ref.: Exploring millicharged dark matter components from the shadows, Lalit S. Bhandari (IISER Pune) and Arun M. Thalapillil (IISER Pune), JCAP 03 (2022) 043.
We reconstruct late-time cosmology in a model-independent manner using the technique of Principal Component Analysis (PCA). We propose a variant of PCA which can be used to find out the functional form of late-time cosmological quantities. In the methodology we only need the tabulated dataset of the quantity we want to reconstruct and as an output we get the functional form of it in terms of the independent variable of the dataset. In this work we particularly focus on the reconstruction of the equation of state of dark energy. The analysis is carried out in two different approaches. The first one is a derived approach, where we reconstruct the observable quantity using PCA and subsequently construct the equation of state parameter. The other approach is the direct reconstruction of the equation of state from the data. A combination of PCA algorithm and calculation of correlation coefficients are used as prime tools of reconstruction. We carry out the analysis with simulated data as well as with real data. The derived approach is found to be statistically preferable over the direct approach. The correlation coefficient calculation also enables us to find out the final number of principal components we have to keep in the final reconstruction process. The reconstructed equation of state indicates a slowly varying equation of state of dark energy.
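A minimal sketch of the PCA-based reconstruction step is given below, assuming a tabulated set of realizations of the quantity to be reconstructed (here a toy $w(z)$ table); the use of scikit-learn and the explained-variance criterion for the number of retained components are illustrative stand-ins for the correlation-coefficient criterion described above.

```python
# Hedged sketch: PCA reconstruction of a tabulated quantity w(z).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
z = np.linspace(0.0, 2.0, 40)                       # redshift grid
# toy "data": many noisy realizations of a slowly varying w(z) around -1
samples = -1.0 + 0.05 * np.sin(z) + 0.02 * rng.standard_normal((500, z.size))

pca = PCA().fit(samples)
# keep the leading components; here an explained-variance cut stands in for the
# correlation-coefficient criterion used in the actual analysis
n_keep = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95)) + 1
coeffs = pca.transform(samples)[:, :n_keep]
w_reconstructed = pca.mean_ + coeffs @ pca.components_[:n_keep]

print(n_keep, w_reconstructed.shape)                # number of components, (500, 40)
```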
In this talk, I will show applications of state-of-the-art supervised, unsupervised and weakly-supervised machine learning (ML) algorithms to solve problems in cosmology and astronomy. I will show ML-based galaxy cluster mass modeling capturing the Sunyaev-Zel'dovich effect (SZE) and Cosmic Microwave Background (CMB) lensing, using convolutional neural networks (CNNs). I will show an application of self-organizing maps (SOMs) to discover new radio sources in the Australian Square Kilometre Array Pathfinder (ASKAP) surveys. I will also present state-of-the-art weakly supervised ML methods to classify and segment radio galaxies on cosmological scales. All these methods are domain agnostic and can easily be applied to other fields of physics.
A review of DPS measurements from the CMS experiment will be presented. A comparison of these results with measurements from other experiments will also be shown.
The top quark is the heaviest known elementary particle. Owing to its large mass, it has deep connections to the electroweak symmetry breaking mechanism. It decays faster than the average time required for hadronization, thus enabling direct access to bare-quark properties. Top quarks often serve as a window to new physics via their direct couplings to heavy resonances predicted by theories beyond the Standard Model. Stringent limits on models explaining the matter-antimatter asymmetry can be determined by carefully studying processes involving the top quark. In this talk, a summary of the latest results with top quarks at the Large Hadron Collider will be presented.
We report a precise simultaneous measurement of the mass and decay width of the top quark in the $t$-channel, which is the most dominant production process for single top quarks at the LHC. The final state comprises a top quark along with a light quark, giving rise to at least two jets, of which one arises from the hadronization of a b quark, an isolated high-momentum lepton (electron or muon), and a large missing transverse momentum due to an escaping neutrino from the W decay. The study uses $138\,\mathrm{fb}^{-1}$ of proton-proton collision data recorded by the CMS experiment during 2016-2018 at $\sqrt{s} = 13$ TeV. Dominant standard model backgrounds are studied in complementary regions defined based on the number of b- and light-quark jets in the final state. A multivariate technique that relies on deep neural networks has been deployed to separate signal from backgrounds. The top-quark mass is reconstructed using kinematic information from the W boson and the b jet. We obtain the top quark mass and decay width from a fit to its reconstructed mass distribution using a suitable combination of parametric shapes.
A search for the electroweak production of a WV (V = W/Z) pair in association with two jets via vector boson scattering is reported, where the W decays leptonically while the other boson (V) decays hadronically, resulting in a semi-leptonic final state. The data correspond to an integrated luminosity of 138 fb$^{-1}$ of proton-proton collisions produced at a center-of-mass energy of 13 TeV and collected by the CMS experiment at the LHC. Events are categorized into two groups based on whether the hadronically decaying boson is reconstructed as one large-radius jet or as a pair of resolved jets. In this talk, I will present the cross-section measurement in the WV channel. The cross section is reported in a fiducial phase space defined at the parton level: all parton transverse momenta must be greater than 10 GeV, and at least one pair of outgoing partons must have an invariant mass greater than 100 GeV. The observed electroweak signal strength is 0.85 $\pm$ 0.12 (stat) $^{+0.19}_{-0.17}$ (syst), corresponding to a signal significance of 4.4 standard deviations. The simultaneous measurement of the electroweak production agrees with the standard model prediction.
The GRAPES-3 experiment, located at the Cosmic Ray Laboratory in Ooty, India, is home to the world's largest muon telescope. It consists of 16 modules based on nearly 4000 proportional counters (PRCs). Another muon telescope of similar area is under construction. The old data acquisition (DAQ) system of the muon telescope is a conventional one which works on a hardware trigger generated by the coincidence of signals from the four layers of PRCs; its deadtime is ~10%. A new field programmable gate array (FPGA) based trigger-less muon data acquisition (TMDAQ) system was developed using the high-level-trigger (HLT) read-out receiver card (H-RORC) boards which were used at LHC Point 2, CERN, Geneva, during the Run 1 phase of the LHC by the ALICE experiment. The objective of the TMDAQ is to continuously record the pulse characteristics, arrival time and pulse width, of each pulse from an individual PRC. For monitoring the performance of the PRCs and the hardware, various monitoring modules were developed along with SPI and USB communication modules, and a smart auto-reset algorithm was also incorporated in the design to add redundancy to the DAQ. The deadtime of the new system is negligibly small. This system has been successfully installed in 50% of the modules of the existing muon telescope. We will present the design and unique features of the TMDAQ system along with some performance results.
The LHC's Run 3 officially started in July 2022, with the LHC delivering first beams at a record energy of 13.6 TeV. For the first time, LHC experiments are using full software triggers based on GPUs capable of performing full event reconstruction at the 30 MHz proton-proton collision rate.
In this talk, I will cover the design architectures and programming principles of the Run-3 high-level triggers (HLT) of the major LHC experiments, focusing mainly on the HLT1 of LHCb and on CMS. I will also share some experiences from the commissioning of new algorithms developed for these hybrid systems.
At TIFR, we are participating in the development, fabrication and assembly of intricate stepped and non-stepped hole (NSH) frontend electronics boards, baseplates with insulation, and large area Copper-Tungsten (25:75) composite material plates, to be used as an absorber in the electromagnetic section of HGCAL. The overall development employs close interaction with Indian electronics and powder metallurgy industries and translates these efforts into the final product with desired specifications. NSH prototype boards have been recently fabricated, followed by extensive mechanical and electrical tests at TIFR and CERN. Similarly, PCB-based baseplates have been fabricated in India and delivered in large numbers to the collaboration as a part of the pre-series production campaign for the prototype HGCAL detector. This talk will cover these R&D efforts carried out at TIFR.
In a hard interaction at the LHC, the produced partons hadronize to form jets due to the QCD confinement property. The identification of jets and the flavor tagging of jets are very important to many physics analyses, both for precise measurements of the Standard Model and for new resonance searches. Performance studies on identifying quark and gluon jets using different discriminators, and their tagging efficiencies, are presented.
Measurements of heavy-flavour tagged jets, and of heavy-flavour particle azimuthal correlations with charged particles, allow for comparisons of heavy-quark (charm and beauty) production, propagation, and hadronization across different collision systems. Comparison of measurements performed in pp with those in p--Pb collisions can help in studying the possible modification of heavy-quark production and hadronization inside jets due to cold-nuclear-matter effects, while possible effects related to the formation of the quark-gluon plasma can be studied by comparing measurements performed in pp and Pb--Pb collision systems. The measurement of heavy-flavour jets gives direct access to the initial parton kinematics and can provide further constraints for heavy-quark energy-loss models.
In this contribution, the measurements of azimuthal correlations between D mesons and charged particles in pp collisions at $\sqrt{s}=$ 5.02, 7, and 13 TeV and in p-Pb collisions at $\sqrt{s_{\rm{NN}}}=$ 5.02 TeV, and of the azimuthal correlations between heavy-flavour decay electrons and charged particles in pp collisions at $\sqrt{s}= 5.02$ TeV, are presented. The D-meson tagged jet measurements in pp collisions at $\sqrt{s}= 5.02$ and 13 TeV and in p-Pb and Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}= 5.02$ TeV, and the measurements of the fragmentation function and radial shape of jets containing a $\Lambda_{c}$ in pp collisions at $\sqrt{s}= 13$ TeV with ALICE, will be shown. The data are compared to simulations performed with different Monte Carlo event generators, which can help in investigating the heavy-quark production and hadronization processes. Finally, an evaluation of the performance for $\rm{D^{0}}$-$\rm{\bar{D}^{0}}$ correlation studies based on a simulated analysis for ALICE~3 will be presented.
Jet energy loss is investigated using the nuclear modification factor ($R_{AA}$) observable in heavy-ion collisions at RHIC and LHC energies. We employ the Jet Energy-loss Tomography with a Statistically and Computationally Advanced Program Envelope (JETSCAPE) framework to describe jet quenching and to analyze the multi-stage jet evolution in the quark-gluon plasma (QGP) medium. In this work, Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV and Au-Au collisions at $\sqrt{s_{NN}}$ = 200 GeV are simulated using the JETSCAPE framework for various jet energy-loss modules, including MATTER, LBT, MARTINI, and AdS/CFT. Furthermore, jet interactions are compared for three centrality classes, 0-10%, 30-40%, and 60-80%, in both the QGP medium and vacuum to investigate the nuclear modification factor. We also report the transverse-momentum dependence of the jets while comparing the different energy-loss mechanisms.
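Throughout, the nuclear modification factor follows the standard definition,
\[
R_{AA}(p_{T}) \;=\; \frac{1}{\langle N_{\rm coll}\rangle}\,
\frac{dN_{AA}/dp_{T}}{dN_{pp}/dp_{T}},
\]
so that $R_{AA}<1$ at high $p_{T}$ signals jet quenching in the medium.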
Recent results in high-multiplicity pp collisions show features similar to those that are associated with the formation of a quark-gluon plasma in heavy-ion collisions [1]. Investigating the modification of the intra-jet properties as a function of event multiplicity in pp collisions can provide deeper insight into the nature of these effects. We will present the recent measurements of multiplicity dependence of charged-particle jet properties (average charged particle multiplicity and fragmentation functions) for leading charged-particle jets. Jets are reconstructed using anti-$k_{\rm T}$ jet finding algorithm with radius parameter $R$ = 0.4 in the jet $p_{\rm T}$ range from 5 - 110 GeV/$c$ at midrapidity in pp collisions at $\sqrt{s}$ = 13 TeV with ALICE.
[1] V. Khachatryan et al., Phys. Lett. B 765 (2017); JHEP 09 (2010).
Charged lepton flavor violation has long been recognized as an unambiguous signature of New Physics. Here, we describe the physics capabilities and discovery potential of New Physics models with charged lepton flavor violation in the tau sector as its experimental signature. Current experimental status from the B-Factory experiments BaBar, Belle and Belle II, and future prospects at Super Tau Charm Factory, LHC, EIC and FCC-ee experiments to discover New Physics via charged lepton flavor violation in the tau sector are discussed.
Several indications of lepton non-universality have recently been observed in semileptonic $B$ meson decay processes, both in the neutral-current ($b \to s ll$) and charged-current ($b \to c l \bar \nu_l$) transitions. Motivated by these intriguing results, we examine semileptonic decays involving $b \to c l \bar \nu_l$ quark-level transitions. We perform a model-independent analysis in order to probe the nature of new physics. Considering the most general effective Hamiltonian, we scrutinize the $\Lambda_b \to \Lambda_c \tau \bar \nu_\tau$, $B_c^+ \to \eta_c \tau^+ \nu_\tau$, and $B \to D^{**} \tau \bar \nu_\tau$ (where $D^{**} = \{D^*_0, D_1^*, D_1, D_2^*\}$ are the four lightest excited charm mesons) processes in the presence of new physics. We perform a global fit to different sets of new coefficients, making use of the measurements of $R_{D}$, $R_{D^{*}}$, $R_{J/\psi}$, $P_{\tau}^{D^*}$ and the upper limit on Br($B_c^+ \to \tau^+ \nu_\tau$). We then inspect the effect of the constrained new couplings on the branching ratios, forward-backward asymmetries, lepton non-universality (LNU) ratios, and lepton and hadron polarization asymmetries of these decay modes.
The hierarchical structure of fermion masses and mixings is a major unsolved problem of the Standard Model. Fermion masses arising as radiative corrections constitute one of the reliable explanations for this problem. We propose a framework based on a class of abelian gauge symmetries in which the masses of only the third-generation quarks and leptons arise at tree level, while the masses of the lighter generations are induced radiatively with new gauge bosons in the loops. We show that this class of abelian symmetries is flavour non-universal in nature. We construct an explicit renormalizable model based on two U(1) symmetries which reproduces the observed fermion mass spectrum of the Standard Model, and discuss some phenomenological aspects of the flavourful new physics.
The measurement of the CP-violating weak phase $\phi_s$ is performed using data collected by the CMS experiment at $\sqrt{s}=13$ TeV, in a sample of 48500 reconstructed $B^0_s\rightarrow J/\psi \phi \rightarrow \mu^+\mu^-K^+K^-$ events corresponding to an integrated luminosity of 96.4 fb$^{-1}$. The parameters are extracted by performing a time-dependent and flavor-tagged angular analysis of the $\mu^+\mu^-K^+K^-$ final state.
This talk will discuss these recent results and their combination with the previous CMS measurement at $\sqrt{s}=8$ TeV, with particular emphasis on the adopted methodology and the novel opposite-side muon flavor tagger based on ML techniques.
Probing the solar modulation of galactic cosmic rays in the interplanetary medium and the terrestrial atmosphere with muons has recently gained appreciable importance. Muons have superior penetrating power and are generated at altitudes higher than the thunderclouds. Thunderstorms drastically change the atmospheric electric field, which causes variations in the muon count rate. At GRAPES-3, we have been observing thunderstorm-induced muon events continuously for the past two decades. With the inclusion of spatially distributed electric field mills (EFMs), the observations became more focused from 2011 onwards. The combined use of EFM and muon data resulted in the detection of about five hundred statistically significant events during 2011-2020. A better understanding of the related physical processes is possible with a thorough study of the seasonal, diurnal, and event-specific variations [1]. Here, we will present the thunderstorm-induced muon events observed during the summer and discuss the processes responsible for their formation in detail.
Reference
1. P.K. Nayak et al., Thunderstorm-Induced Muon Events (TIMEs) at GRAPES-3 Experiment, 15th Quadrennial Solar-Terrestrial Physics (STP-15) Symposium, 21 - 25 February 2022, Indian Institute of Geomagnetism, Mumbai (Virtual) (to be communicated to Journal of Atmospheric and Solar-Terrestrial Physics)
The deflection of cosmic rays (CRs) in the interstellar magnetic field results in an almost isotropic flux as observed on Earth. However, anisotropies at different angular scales have been predicted at the level of $\sim 10^{-4}$ to $10^{-3}$. Small-scale anisotropic structures on angular scales of $\leq 60^{\circ}$ have been predicted due to the relative diffusion of CRs in the local turbulent magnetic fields, the contribution of local sources, and several other factors. The GRAPES-3 experiment, consisting of a dense array of scintillator detectors, records over a billion cosmic-ray events per year in the TeV-PeV energy range, and is hence suitable for probing cosmic-ray anisotropy owing to its high statistics. A careful analysis was performed to probe such an exceedingly small anisotropy, which can be overwhelmed by systematics such as atmospheric or detector effects. Several small-scale anisotropic structures were observed using four years of GRAPES-3 data, consistent with the observations of some major air shower arrays collecting very high statistics. The details of the observed anisotropic structures will be presented.
Cosmic strings generate wakes as they move through the universe. The wake leaves a distinct imprint on the background plasma. Magnetic fields are also generated in the wake of a cosmic string due to the inhomogeneity of the electron distribution and due to the presence of shocks in the wake. The presence of the magnetic field and the high Reynolds number in the wake of the cosmic string lead to various interesting consequences in the string wake. One such consequence is the possibility of magnetic reconnection in the cosmic string wake. Currently, there is a strong initiative to identify the signatures left behind by the cosmic string wake. We propose that magnetic reconnections in cosmic string wake may lead to a large radiation burst which can be identified as a Gamma Ray Burst.
As a part of its R&D, the ICAL collaboration has built a small prototype module called mini-ICAL to study the detector performance and the engineering challenges in the construction of a large-scale magnet and magnetic field measurement systems, as well as to test the ICAL electronics in the presence of the magnetic field. This detector was also used to measure the charge-dependent muon flux and to study the feasibility of a cosmic muon veto for a shallow-depth neutrino experiment. The mini-ICAL consists of 11 layers of iron plates (dimension 4\,m$\times$4\,m$\times$0.056\,m) with an inter-layer gap of 45\,mm. RPC detectors (area $\sim$2\,m$\times$2\,m) are inserted between the iron layers. A Kalman-filter-based track-fitting algorithm is used for reconstructing the muon 4-vectors in the ICAL experiment. The same algorithm is also being used for the mini-ICAL with 10 layers of RPCs. The cosmic-ray data collected by the detector are also used to measure the charge ratio (R) of the number of $\mu^{+}$ to $\mu^{-}$ arriving at the Earth's surface.
This paper discusses the results obtained from the mini-ICAL detector and their comparison with the results of extensive air shower (EAS) simulations.
Multiwavelength observations of Supernovae (SNe) have revealed the presence of dense Circumstellar Material (CSM) around their progenitor stars. This CSM is formed due to the heavy mass loss that the progenitor stars suffer a few years prior to their death as SNe. High-energy protons accelerated in the SN explosion, interacting with this CSM, can produce secondary particles such as high-energy neutrinos and gamma rays. We term such SNe Young Supernovae (YSNe), as this interaction generally lasts for about a year after the explosion. In this work, we estimate the spectra of high-energy neutrinos and gamma rays emitted by different types (IIn, II-P, Ib/c, and IIb/II-L) of YSNe. Type IIn produces the largest neutrino and gamma-ray fluxes, followed by Ib/c and II-P. Telescopes like IceCube (neutrinos) and Fermi-LAT (gamma rays) might detect type IIn up to 10 Mpc, while the remaining types are detectable at smaller distances. The different classes of YSNe can also produce diffuse backgrounds of high-energy neutrinos and gamma rays. The contribution to these diffuse backgrounds is found to be dominated by type IIn YSNe, followed by type II-P and Ib/c YSNe. The diffuse neutrino background from YSNe explains very well the IceCube High Energy Starting Events (HESE). Interestingly, the gamma-ray counterpart of the diffuse background does not create tension with the resolved Isotropic Gamma-ray Background (IGRB) measured by Fermi-LAT.
The present work investigates the properties of a proto-quark star (PQS) using the Polyakov Chiral $\text{SU(3)}$ Quark Mean Field (PCQMF) model in the presence of a strong magnetic field. Considering various snapshots of the PQS along the star's evolution, the analysis of the longitudinal and transverse equations of state (EoS) is carried out. Also, the effect of vector interactions on the magnetized PQS with a density-dependent strong magnetic field is considered, and the critical value of the magnetic field for a stable magnetized PQS is calculated. The derived EoS can be useful for the study of the mass-radius relation of the PQS and can be compared with recent astrophysical observations.
The couplings of the Higgs boson (H) with massive gauge bosons of weak interaction (V = W, Z), can be probed in single Higgs boson production at the proposed future Large Hadron-Electron Collider (LHeC). In this talk, I will be presenting the collider reach on the new physics parameters of the HVV couplings assessed through the azimuthal angle correlation between missing energy (electron) and forward jet in the charged (neutral) current processes at future electron-proton collider with 60 GeV electron and 7 TeV proton energies. I will present the statistical analysis leading to the exclusion limits on individual new physics parameters as a function of luminosity. The effect of the presence of other new physics parameters will also be discussed.
The non-thermally produced freeze-in dark matter is an attractive alternative for looking beyond the weakly interacting massive particle (WIMP) paradigm. Within the singlet-doublet dark matter model, a simple extension of the Standard Model (SM), we probe the light dark matter parameter space, assuming feeble couplings between the SM particles and the dark matter candidate. We show that, in a non-standard cosmological background, a collider probe using jet substructure analysis can be very effective in terms of exclusion capability and can serve as a complementary probe to the existing displaced-vertex searches.
We consider a $U(1)_X \otimes Z_2 \otimes Z_2^{\prime}$ extension of the Standard Model (SM), where the $U(1)_X$ charge of an SM field is given by a linear combination of its hypercharge and $B-L$ number. Apart from the SM particle content, the model contains three right-handed neutrinos (RHNs) $N_{R_i}$ and two scalars $\Phi$, $\chi$, all singlets under the SM gauge group but charged under the $U(1)_X$ gauge group. Of these additional fields, the fermion $N_{R_3}$ is odd under $Z_2$ and the scalar $\chi$ is odd under $Z_2^{\prime}$. Thus both $\chi$ and $N_{R_3}$ contribute to the observed dark matter relic density, leading to a two-component dark matter scenario. We study in detail its dark matter properties, such as relic density and direct detection, taking into account the constraints coming from collider studies. We find that in our model there can be annihilation of one dark matter (DM) component into the other, which may significantly alter the relic density.
Electroweak symmetry breaking (EWSB) is known to be responsible for the masses of the particles in the universe we live in. However, it may also provide an important boundary for the freeze-in or freeze-out of dark matter (DM) connected to the Standard Model via the Higgs portal, as the processes contributing to the DM relic differ across this boundary. We explore such possibilities in a two-component DM framework, where a massive $U(1)_X$ gauge boson DM freezes in and a scalar singlet DM freezes out, with both inheriting the effect of EWSB in a correlated way. Amongst different possibilities, we study two sample cases: first, when one DM component freezes in and the other freezes out from the thermal bath, both necessarily before EWSB, and second, when both freeze-in and freeze-out occur after EWSB. We find some prominent distinctive features in the available parameter space of the model for these two cases, after addressing the relic density and the most recent direct search constraints from XENON1T, some of which can be carried over in a model-independent way.
We report on the results of new physics searches in a final state containing a photon and missing transverse energy, called "monophoton" searches, in p-p collisions at $\sqrt{s} = 13$ TeV. The data correspond to an integrated luminosity of 138 fb$^{-1}$. In the Standard Model, the only process that results in the genuine signature of a single photon and large MET is Z + $\gamma$ production, in which the Z boson decays into a neutrino ($\nu$) and an antineutrino ($\overline{\nu}$). The rate of Z + $\gamma$ production can be precisely calculated in the SM, and therefore a deviation of the observation from the prediction in this signature is a robust indicator of physics beyond the standard model. This process, in which the Z boson decays to two neutrinos, is the irreducible background, as the signal and background look exactly the same in the detector. In practice, multiple other collision and non-collision processes mimic the signature and thus constitute additional backgrounds to this search. We aim to reduce the contributions from such non-Z+$\gamma$ backgrounds and other remaining backgrounds using data-driven techniques and Monte Carlo (MC) simulations. Results are interpreted in the context of dark matter using simplified models and of large extra dimensions using the ADD model.
In this work we study the viable dark matter (DM) mass region in a non-supersymmetric $SO(10)$ GUT scalar dark matter model. The model comprises a scalar singlet $S$ and an inert doublet $\phi$, which are odd under a discrete $Z_2$ matter parity $(-1)^{(B-L)}$. The DM is a mixture of the $Z_2$-odd scalar singlet $S$ and the neutral component of the doublet $\phi$, belonging to a new scalar 16 representation of $SO(10)$. We also analyse the one-loop vacuum stability by solving the Renormalisation Group Equations (RGEs) for the parameters of the model. Next, we confront the model predictions with the current theoretical and experimental constraints, and identify the range of parameter space which is consistent with the relic density and direct search limits from the latest XENON1T result, as well as with the stability of the electroweak vacuum up to the Planck scale. The predictions of the model are testable in future DM search experiments, with the attractive feature that the model is part of an elegant grand unified theory.
The Standard Model effective field theory (SMEFT) is one of the preferred approaches for studying particle physics in the present scenario. The dimension-six SMEFT operators are the most relevant ones and have been studied in various works. The renormalization group evolution equations of these operators are available in the literature and make it possible to confront the SMEFT with experimental information gathered across different energy scales. However, the dimension-six operators are not the dominant terms for all observables, and some of these operators are loop-generated when UV theories are matched to the SMEFT. Moreover, for relatively low values of the cut-off scale of the SMEFT, contributions from dimension-eight operators cannot be neglected.
In this work, we present the renormalization of the bosonic sector of the dimension-eight operators by tree-level generated dimension-eight operators in the matching of weakly coupled UV theories to the SMEFT. These operators appear in the positivity constraints, which determine the signs of certain combinations of Wilson coefficients based on the unitarity and analyticity of the S-matrix. These constraints are remarkably significant as any experimental evidence of a violation of these constraints would indicate the invalidity of the EFT approach, such as, for example, the existence of lighter degrees of freedom below the cut-off scale of the EFT. Also, these restrictions can be taken into account while defining priors on the fits aiming at constraining the SMEFT parameter space.
We consider a black hole with a stretched horizon as a toy model for a fuzzball microstate. The stretched horizon provides a cut-off, and therefore one can determine the normal (as opposed to quasi-normal) modes of a probe scalar in this geometry. For the BTZ black hole, we compute these as a function of the level $n$ and the angular quantum number $J$. Though conventional level repulsion is absent in this system, we find that the Spectral Form Factor (SFF) shows clear evidence for a dip-ramp-plateau structure with a linear ramp of slope $\sim$ 1 on a log-log plot, with or without ensemble averaging. We show that this is a robust feature of stretched horizons by repeating our calculations on 2d $\text{Rindler} \times S^{1}$ geometry. We also observe that this is not a generic feature of integrable systems, as illustrated by standard examples like integrable billiards and random 2-site coupled SYK model, among others.
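For concreteness, a commonly used definition of the spectral form factor built from the normal-mode spectrum $\{E_{n}\}$ (normalization conventions vary, and the precise convention of this work is not reproduced here) is
\[
\mathrm{SFF}(t) \;=\; \frac{\left|\sum_{n} e^{-\beta E_{n}-iE_{n}t}\right|^{2}}{\left|\sum_{n} e^{-\beta E_{n}}\right|^{2}},
\]
whose dip-ramp-plateau structure is the diagnostic referred to above.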
We study a strongly coupled lattice model containing two flavours of massless staggered fermions interacting via two types of interactions: (1) a current-current interaction involving a four-fermion term of the same flavour, and (2) an on-site four-fermion interaction involving the two flavours. We study the model at strong coupling, where both these interactions dominate over the free hopping term. We find two massive fermion phases arising through two different mechanisms: in one phase, fermions become massive through spontaneous symmetry breaking (SSB), while in the other phase, fermions acquire mass without any symmetry breaking. We study the model in both two and three space-time dimensions using the Monte Carlo worm algorithm, and we find a second-order phase transition between the two phases.
The critical behavior of the two-dimensional XY model has been explored in the literature using various methods, including the high-temperature expansion (HTE) method, Monte Carlo (MC) approaches, the strong coupling expansion method, and tensor network (TN) methods. This model undergoes a Berezinskii-Kosterlitz-Thouless (BKT) type of phase transition. The model can be modified by adding spin-nematic interaction terms of higher periodicity, giving rise to the generalized XY model. The modified model contains excitations of integer and half-integer vortices. These vortices govern the critical behavior of the theory and produce rich physics. With the help of tensor networks, we investigate the transition behavior between the integer-vortex-binding and half-integer-vortex-binding phases of the model and how this transition line merges into the two BKT transition lines.
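A commonly studied form of the generalized XY Hamiltonian (quoted for orientation; the parametrization used in this work may differ) is
\[
H \;=\; -\sum_{\langle ij\rangle}\left[\Delta\,\cos\left(\theta_{i}-\theta_{j}\right)
+\left(1-\Delta\right)\cos\!\left(q\,(\theta_{i}-\theta_{j})\right)\right],
\]
where the pure XY model is recovered at $\Delta=1$ and the nematic term with, e.g., $q=2$ supports half-integer vortices in addition to the usual integer vortices.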
Heavy quarks (charm and beauty) have masses much larger than the characteristic energy scale of the QCD interaction. They are therefore typically produced in hard scattering processes with large $Q^2$ and thus offer a unique perspective to study the transition from quarks to hadrons in all collision systems. Recent production measurements of charm baryons and mesons in small systems at midrapidity show a charm baryon-to-meson ratio significantly higher than that measured in $e^+e^-$ collisions, which suggests that the fragmentation of charm is not universal across different collision systems. Thus, precise measurements of charm baryon and meson production are crucial to study charm quark hadronization in a parton-rich environment like the one produced in pp collisions at LHC energies.
In p–Pb collisions, a modification of the hadronization mechanisms could be present due to cold nuclear matter effects and possible collective phenomena. A systematic comparison between data and models will help to understand charm quark hadronization in pp and p–Pb collisions. In this contribution, the most recent measurements with the ALICE experiment of charm baryon production in pp and p–Pb collisions will be shown. The comparison with model calculations including several descriptions of charm hadronization in small collision systems will also be discussed.
The study of prompt direct photons, from Compton scattering and annihilation hard processes in hadronic collisions, can test the predictions of perturbative quantum chromodynamics. In pp collisions, they can be used to constrain parton distribution functions, as they come directly from parton-parton hard scatterings. The measurement of direct photon production is complicated by the presence of a large photon background from hadron decays, especially from neutral mesons.
In this contribution, we will present the measurement of isolated photon production in pp collisions at $\sqrt{s}$ = 8 TeV using the data collected by the ALICE detector. The isolation technique is used to select prompt direct photons and reduce the contamination from decay and fragmentation photons. The results are compared to theoretical predictions.
Heavy quarks (HQs) are considered effective probes to study the evolution of the quark-gluon plasma (QGP). We study the dynamics of HQs in a hot QCD medium with a time-correlated noise $\eta$. We introduce the effect of memory through $\eta$ and the dissipative force in the generalized Langevin equation (GLE). We assume that the time correlations of the colored noise decay exponentially with a characteristic time, called the memory time $\tau$. We explore the effect of non-zero values of $\tau$ on the nuclear modification factor $R_{AA}$ and the transverse momentum broadening $\sigma_p$ of the HQs within the QGP medium. We find that, overall, memory slows down the momentum evolution of the heavy quarks; in fact, the transverse momentum broadening and the development of $R_{AA}$ are slowed down by memory, and the thermalization time of the heavy quarks becomes larger. We will discuss the potential impact on other observables.
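Schematically, the generalized Langevin dynamics with an exponentially correlated (colored) noise referred to above can be written as (a standard form, given here for orientation)
\[
\frac{dp}{dt} \;=\; -\int_{0}^{t} dt'\,\gamma(t-t')\,p(t') \;+\; \eta(t),
\qquad
\langle \eta(t)\,\eta(t')\rangle \;\propto\; e^{-|t-t'|/\tau},
\]
so that the memoryless (white-noise) Langevin limit is recovered as $\tau\to 0$.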
There is a serious disagreement between the predictions of Non-Relativistic Quantum Chromodynamics (NRQCD) and the data on $J/\psi$ polarisation, which has persisted for almost a quarter of a century. We find that if we account for the effect of perturbative soft gluons on the intermediate charm-anticharm octet states in NRQCD, the polarisation problem can be resolved. In addition, this model, when used to fit the Run 1 data on $J/\psi$, $\psi'$ and $\chi_c$ production from the CDF experiment at the Tevatron, gives good fits and yields values of the (energy-independent) non-perturbative parameters. These, in turn, can be used to make parameter-free predictions for the $J/\psi$ and $\psi'$ data from the CMS experiment at the Large Hadron Collider, and the predictions are in excellent agreement with the CMS data. We have also made predictions for both $\chi_{c1}$ and $\chi_{c2}$ production at $\sqrt{s}=$ 7 TeV and find excellent agreement with data from the ATLAS experiment. Furthermore, we have extended our work to $\eta_c$ and made the comparison with LHCb data, using the non-perturbative parameter for $J/\psi$ production obtained from the CDF experiment at the Tevatron. This also gives very good agreement with the LHCb data at $\sqrt{s}=$ 7 TeV, 8 TeV, and 13 TeV.
$\bf{References:}$
[1] Sudhansu S. Biswal, Sushree S. Mishra and K. Sridhar, Phys. Lett. B $\bf{832}$, 137221 (2022) [arXiv:2201.09393 [hep-ph]].
[2] Sudhansu S. Biswal, Sushree S. Mishra and K. Sridhar, [arXiv:2206.15252 [hep-ph]]; Communicated to journal.
The production of charmonium and its suppression in heavy-ion collisions is an ideal probe to explore the Quark-Gluon Plasma (QGP) in the laboratory. Suppression can also take place in hadron-nucleus collisions due to cold nuclear matter (CNM) effects. Hadron-nucleus collisions are therefore important, as they help disentangle the effects of the QGP from those due to CNM. Charmonium production in hA collisions at fixed-target SPS energies is sensitive to CNM effects such as nuclear PDFs and partonic energy loss in nuclear matter.
The double differential ($x_{\rm F}$, $p_{\rm T}$) cross sections of J/$\psi$ production have been measured by the COMPASS collaboration in hA collisions at $\sqrt{s} = 18.9$ GeV. A negative pion beam with a momentum of 190 GeV/c impinged on ammonia, aluminium, and tungsten targets.
The preliminary results for the ratios of heavy to light targets show strong suppression towards high $x_{\rm F}$ and low $p_{\rm T}$, indicating the presence of energy-loss effects. The dependence on $p_{\rm T}$ is also investigated to study nuclear $p_{\rm T}$-broadening effects. The results will be compared to the available fixed-target measurements, followed by a comparison with theoretical model predictions.
We demonstrate high prediction accuracy of three important properties that determine the initial geometry of the heavy-ion collision (HIC) experiments by using supervised machine learning (ML) methods. These properties are the impact parameter, the eccentricity, and the participant eccentricity. Although ML techniques have been used previously to determine the impact parameter of these collisions, we study multiple ML algorithms, their error spectrum, and sampling methods using exhaustive parameter scans and ablation studies to determine a combination of efficient algorithm and tuned training set. This gives multifold improvement in accuracy for all three different heavy-ion collision models. The three models chosen are a transport model, a hydrodynamic model, and a hybrid model. The motivation for using three different heavy-ion collision models was to show that even if the model is trained using a transport model, it gives accurate results for a hydrodynamic model as well as a hybrid model. We show that the accuracy of the impact-parameter prediction depends on the centrality of the collision. With the standard application of ML training methods, prediction accuracy is considerably low for central collisions. We have improved the accuracy by using different sampling techniques.
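As a schematic illustration of the supervised-regression setup described above, here is a minimal sketch using scikit-learn on a toy data set; the feature construction (multiplicity and mean transverse momentum as inputs), the choice of regressor and all numerical values are assumptions for demonstration, not the configuration used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Toy events: in a real study these would come from transport, hydrodynamic
# or hybrid event generators rather than this invented mapping.
rng = np.random.default_rng(0)
n_events = 20000
b_true = rng.uniform(0.0, 15.0, n_events)                          # impact parameter (fm)
mult = 2000 * np.exp(-b_true / 4.0) * rng.normal(1.0, 0.08, n_events)
mean_pt = 0.55 - 0.01 * b_true + rng.normal(0.0, 0.02, n_events)
X = np.column_stack([mult, mean_pt])

X_tr, X_te, y_tr, y_te = train_test_split(X, b_true, test_size=0.3, random_state=1)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE on toy test set: %.2f fm" % mean_absolute_error(y_te, pred))

# Accuracy is typically centrality dependent: inspect residuals in bins of b.
for lo, hi in [(0, 3), (3, 9), (9, 15)]:
    sel = (y_te >= lo) & (y_te < hi)
    print(f"b in [{lo},{hi}) fm: MAE = {mean_absolute_error(y_te[sel], pred[sel]):.2f} fm")
```

The per-bin residuals in the last loop mimic the kind of centrality-dependent accuracy check mentioned in the abstract.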
The anomalous magnetic moment of the muon has been a long-standing puzzle in the Standard Model (SM). The current deviation of the experimental value of $(g-2)_\mu$ from the SM prediction stands at 4.2$\sigma$. Two Higgs Doublet Models (2HDMs) can accommodate this discrepancy, but such models naturally generate flavour-changing neutral currents (FCNC). To prevent FCNC, it is usually required that all fermions of a given charge couple to the same Higgs doublet; this rule is broken in the Muon-Specific Two Higgs Doublet Model, where all fermions except the muon couple to one Higgs doublet and the muon couples to the other. The Muon-Specific 2HDM explains the muon anomaly at the cost of a fine-tuning problem, requiring a very large $\tan\beta$ together with the other parameters. We have found a simple solution to this fine-tuning problem by extending the model with a vector-like lepton generation, which can explain the muon anomaly at low $\tan\beta$ with a heavy pseudoscalar Higgs boson while respecting current experimental and theoretical constraints. Moreover, with the help of cut-based and multivariate analysis methods, we have also attempted to shed some light on the potential experimental signature of vector-like lepton decay to the heavy Higgs boson at the LHC. We have shown that a multivariate analysis can increase the vector-like lepton signal significance by up to an order of magnitude compared to a cut-based analysis.
We analyze in a model-independent way the potential to probe new physics using the Higgs decay to ϒγ. The h→ϒγ decay width is unusually small in the Standard Model because of an accidental cancellation between the direct and indirect decay amplitudes. Thus, any new physics that can modify the direct or the indirect decay amplitude disrupts this accidental Standard Model cancellation and can potentially lead to a relatively large decay width for h→ϒγ. We carry out a detailed model-independent analysis of the possible new physics that can disrupt this cancellation. We demonstrate that, after taking into account all possible constraints on Higgs production and decay processes from experimental measurements, the wrong-sign hbb coupling is the only scenario in which the h→ϒγ decay width can be changed by almost two orders of magnitude. We conclude that an observation of a significantly enhanced h→ϒγ decay width at the LHC or any future collider will be conclusive evidence of a wrong-sign hbb coupling.
We present the results of a search for a pseudoscalar Higgs boson (A), where A decays to a Z boson and a Standard Model-like Higgs boson (h), using pp collision data collected by the CMS experiment during LHC Run 2 at a centre-of-mass energy of 13 TeV. Such a pseudoscalar Higgs boson can be produced, and decay in this way, in various beyond-the-SM models such as the 2HDM and the MSSM. In the final state, we consider Z decaying to a pair of oppositely charged electrons or muons and h decaying to a pair of tau leptons, which further decay hadronically or leptonically. The results are presented in terms of upper limits at 95% CL on the production cross sections times branching fractions of the A boson, together with interpretations in a few BSM scenarios.
We study Standard Model (SM) Higgs boson production in association with a Z boson at the LHC. The leading contribution comes from the $q\bar{q}$-initiated (DY-like) subprocess. In addition to this, the gluon-fusion subprocess via fermion loops also contributes to this process at higher orders in QCD. In this work, we study the impact of higher-order QCD corrections and provide precise results for this $ZH$ production process in hadron collisions. We use the third-order soft-virtual results for the DY-type process. Using the universality of the threshold production, we extract the process-dependent coefficients and resum the large threshold logarithms to N$^3$LL accuracy. We find that the scale uncertainty is reduced to as small as $0.4\%$ for the conventional variation of the unphysical renormalization and factorization scales.
Besides, we also study the gluon-initiated subprocess for ZH production, which formally contributes from NNLO onwards in QCD. Being a loop-induced process, the higher-order corrections to this gluon-fusion subprocess are difficult to compute. We estimate the size of the NLO QCD corrections to this subprocess and study the impact of the threshold logarithms in the high invariant mass region. We give numerical predictions for the invariant mass distribution and the total production cross sections at different centre-of-mass energies at hadron colliders.
The results of a search for Higgs boson pair (HH) production in the 4W, 2W2tau, and 4tau decay modes are presented. The search uses 138 /fb of proton-proton collision data recorded by the CMS experiment at the LHC at a center-of-mass energy of 13 TeV from 2016 to 2018. Analyzed events contain two, three, or four reconstructed leptons, including electrons, muons, and hadronically decaying tau leptons. No evidence for a signal is found in the data. Upper limits are set on the cross section for non-resonant HH production, as well as resonant production in which a new heavy particle decays to a pair of Higgs bosons. For non-resonant production, the observed (expected) upper limit on the cross section at 95% confidence level (CL) is 21.3 (19.4) times the standard model (SM) prediction. The observed (expected) ratio of the trilinear Higgs boson self-coupling to its value in the SM is constrained to be within the interval -6.9 to 11.1 (-6.9 to 11.7) at 95% CL, and limits are set on a variety of new-physics models using an effective field theory approach. The observed (expected) limits on the cross section for resonant HH production amount to 0.18-0.90 (0.08-1.06) pb at 95% CL for new heavy-particle masses in the range 250-1000 GeV.
In this talk I review the most important developments in string theory research over the last decade or so.
A Higgs boson was discovered by the ATLAS and CMS experiments at the LHC in 2012. Since then, the LHC experiments have made significant progress in precision studies with the data recorded during LHC Run-1+2 to establish the nature of the observed scalar particle as well as to look for indirect evidence for physics beyond the Standard Model. The latest LHC results from precision studies of the Higgs sector are discussed. Furthermore, results from the precision studies of the top quark, the heaviest known elementary particle, are presented.
Over the past two decades, it has become increasingly clear that cosmological observations are homing in on a concordance 'standard model' of cosmology, with increasingly precise determination of cosmological parameters. Observations of the cosmic microwave background (CMB), most recently the exquisite ESA Planck measurements, have not only spearheaded this transition but also allow critical cross-checks of the underlying paradigm of the early universe and of the origin and growth of structure.
In the decades ahead, attention will focus on the roughly 90% of the polarisation information, and the practically untouched spectral information, of high value for cosmology and high energy physics, that remain to be mapped out with a more capable next-generation CMB space mission.
Instrumenting a gigaton of ice at the South Pole, the IceCube Neutrino Observatory can probe neutrino interactions and properties at high energies with large statistics. This is possible due to the existence of a flux of high-energy astrophysical neutrinos, discovered by IceCube in 2013-14, and the prevalence of neutrinos produced in cosmic ray interactions in the upper atmosphere. Recently, promising candidate sources have emerged for the astrophysical neutrino flux, primarily due to real time multi-messenger followup efforts, while measurements have also been performed of the neutrino-nucleon cross section above a TeV as well as neutrino oscillation parameters using hundreds of thousands of events. IceCube has also detected its first electron antineutrino candidate near the Glashow resonance energy of 6.3 PeV. This talk will highlight recent results and illustrate the unique capabilities of this detector, motivating the proposed IceCube Gen2 extension and concluding with a discussion of opportunities for Indian participation and synergies with our homegrown capabilities and research programmes.
The Cryogenic Dark Matter Search experiment II (CDMS II) was a direct dark matter search experiment that operated between 2003 and 2012 at the Soudan Underground Laboratory, Minnesota, USA [1]. The experiment deployed a total of 19 germanium (Ge) and 11 silicon (Si) cryogenic detectors, with masses of $\sim$ 250 g and $\sim$ 100 g respectively, in a 5-tower configuration, at a temperature of $\sim$ 40 mK. The detection principle of this experiment involved measuring the recoil energy of the target mass (Ge or Si) after a dark matter particle elastically scatters off it. The detector measured the charge and phonon signals from an interaction with the target. The spin-independent interaction cross section of a dark matter particle with a nucleon is of the order of $\sim 10^{-41}$ cm$^{2}$ for a dark matter mass $\leq10$ GeV/$c^{2}$. The predicted dark matter event rate in a germanium target is $\sim 0.05$ /kg-day for nuclear recoil energies of the order of a few keV [2]. The interaction of dark matter particles is therefore very rare and occurs at a rate well below the background radiation rate. Hence the identification and rejection of the backgrounds in these experiments are crucial.
$^{32}$Si is an isotope of Si which is present in the Si detectors from the time of fabrication [3]. It emits $\beta$ particles which act as a source of background in the CDMS II experiment. The endpoint energies of the $\beta$ spectra are 227 keV for $^{32}$Si $\rightarrow$ $^{32}$P and 1710 keV for $^{32}$P $\rightarrow$ $^{32}$S. The $\beta$ particles create electron recoils in the detector. Our analysis goal is to estimate the decay rates of $^{32}$Si and $^{32}$P in the Si detectors using CDMS II data, employing a likelihood method; we will present recent results towards obtaining these rates. Our analysis is important for the SuperCDMS SNOLAB experiment, the successor of CDMS, and for other experiments that use Si detectors [4] to look for rare events.
References:
[1] Gianfranco Bertone, Dan Hooper, and Joseph Silk. Physics Reports 405 (5) (2005) 279-390.
[2] J.D. Lewin and P.F. Smith. Astroparticle Physics 6 (1996) 87-112.
[3] John L. Orrell et al. Astroparticle Physics 99 (2018) 9-20.
[4] R. Agnese et al. Physical Review D 95 (8) (2017) 082002.
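As a schematic illustration of the likelihood approach mentioned in the abstract above, the following is a minimal sketch of a binned Poisson likelihood fit for a decay rate on top of a flat background; the exposure, the spectral template and all numbers are hypothetical and not taken from CDMS II data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Toy binned Poisson likelihood: fit a signal rate r (events/kg-day) and a flat
# background rate b, given an assumed exposure and an assumed spectral shape.
exposure = 50.0                                  # kg-day, assumed
energy_bins = np.linspace(0, 200, 21)            # keV
centers = 0.5 * (energy_bins[:-1] + energy_bins[1:])
signal_shape = np.exp(-centers / 60.0)           # hypothetical beta-spectrum template
signal_shape /= signal_shape.sum()
bkg_shape = np.full_like(centers, 1.0 / len(centers))

rng = np.random.default_rng(3)
observed = rng.poisson(exposure * (0.4 * signal_shape + 0.1 * bkg_shape))

def nll(params):
    r, b = params
    if r < 0 or b < 0:
        return np.inf
    mu = exposure * (r * signal_shape + b * bkg_shape)
    return -poisson.logpmf(observed, mu).sum()   # negative log-likelihood

fit = minimize(nll, x0=[0.1, 0.1], method="Nelder-Mead")
print("fitted rate r = %.3f /kg-day, background b = %.3f /kg-day" % tuple(fit.x))
```

A real analysis would of course use the measured spectrum, detector-specific efficiencies and more realistic background templates in place of these toy inputs.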
We study the possibility of generating light Dirac neutrino masses from a radiative seesaw mechanism with dark sector particles running inside the loop, known as the scotogenic framework. The loop suppression and additional free parameters allow large ($\sim\mathcal{O}(1)$) couplings of the light Dirac neutrinos with the dark sector particles. Such large Yukawa couplings not only dictate the relic abundance of the heavy fermion singlet dark matter but can also lead to thermalization of the right-chiral part of the Dirac neutrinos, generating additional relativistic degrees of freedom ${\rm \Delta{N_{eff}}}$. We find that the parameter space consistent with dark matter phenomenology can also be probed at future cosmic microwave background experiments like CMB-S4 via precision measurements of ${\rm \Delta{N_{eff}}}$. The same parameter space also has interesting and complementary observational prospects at colliders and in charged lepton flavour violation.
A good angular resolution is essential for detecting gamma-ray sources at multi-TeV energies. The GRAPES-3 experiment, located in Ooty, Tamil Nadu (11.4$^\circ$ N, 76.7$^\circ$ E, 2200 m a.s.l.), is designed with a dense array of 400 scintillator detectors spread over 25000 m$^2$ to study gamma-ray sources in the TeV-PeV energy range. By exploiting the shower-front curvature, almost a factor of two improvement in the angular resolution of the array could be achieved compared to the earlier analysis. This has been verified by observing the shadow cast by the Moon in the cosmic-ray flux, using 3 years of GRAPES-3 data. Further, it has allowed us to determine the pointing accuracy of the array. The new angular resolution is comparable to that of other major air shower experiments located at twice the GRAPES-3 altitude.
Abstract: The primary ingredient for studying the phases of a quantum field theory is the effective action. Though obtaining an exact form is beyond the scope of existing techniques, approximate expressions using perturbative methods, which to leading order involve the computation of one-loop determinants, are available. In this talk, based on our paper [1], I will first describe a method for computing the one-loop partition function for a scalar field on thermal $AdS_{d+1}$ for arbitrary $d$ that reproduces results known in the literature. The derivation is based on the method of images and uses the eigenfunctions of the Laplacian on Euclidean $AdS$. Employing these results, I will then discuss the phases of scalar field theories in thermal $AdS_{d+1}$ spaces for $d=1,2,3$. We will analyze theories with global $O(N)$ symmetry for finite as well as large $N$. The symmetry-preserving and symmetry-breaking phases will be identified as a function of the mass-squared of the scalar field ($m^2$) and the temperature ($T=1/\beta$) in the $\beta$-$m^2$ parameter space. It will also be seen that the sign of the regularized volume of thermal $AdS_{d+1}$ plays a crucial role in the qualitative nature of the phase diagrams. As was shown for zero temperature in [2], we will confirm that for a finite-temperature theory in $AdS$ there occurs a symmetry-breaking phase in two dimensions, in contrast to flat space, where the Coleman-Mermin-Wagner theorem prohibits continuous symmetry breaking [3, 4]. We will also see that, unlike in flat space, there exists a region in $AdS$ where the symmetry-breaking and symmetry-preserving phases coexist. In the particular case of $AdS_{3}$, the symmetry gets broken at high temperatures.
References:
[1] A. Kakkar and S. Sarkar, "On partition functions and phases of scalars in AdS,'' JHEP 07 (2022), 089 doi:10.1007/JHEP07(2022)089 [arXiv:2201.09043 [hep-th]].
[2] T. Inami and H. Ooguri, "NAMBU-GOLDSTONE BOSONS IN CURVED SPACE-TIME,'' Phys. Lett. B 163 (1985), 101-105 doi:10.1016/0370-2693(85)90201-1
[3] N. D. Mermin and H. Wagner, "Absence of ferromagnetism or antiferromagnetism in one-dimensional or two-dimensional isotropic Heisenberg models,'' Phys. Rev. Lett. 17 (1966), 1133-1136 doi:10.1103/PhysRevLett.17.1133.
[4] S. R. Coleman, "There are no Goldstone bosons in two-dimensions,'' Commun. Math. Phys. 31 (1973), 259-264 doi:10.1007/BF01646487
A charged falling particle in an AdS space is studied as a holographic model of local charged quench. The evolution of holographic complexity in the conformal field theory following a local quench is studied using both the “complexity equals volume” (CV) and the “complexity equals action” (CA) conjectures in various models. The connection between operator size in chaotic theories and the bulk momentum of a particle falling into black holes is also discussed in a broad class of models involving certain non-local theories.
String theory lives in higher dimensions, and compactification of extra dimensions leads to many equivalent 4-d effective theories which potentially describe our universe. Hence, it is interesting to study this large set of 4-d models and their phenomenology in a statistical setup.
We focus on the statistical aspects of the type-IIB string landscape. We show that the stabilization of Kähler moduli is important, as it governs the distributions in low-energy physics. In a generic case of the Large Volume Scenario of moduli stabilization, we find that the SUSY breaking scale and the mass and decay constant of axions feature a logarithmic distribution. We also notice that the QCD axion shows a mild logarithmic preference for smaller couplings with SM gauge fields. For the Kachru-Kallosh-Linde-Trivedi (KKLT) model of moduli stabilization, however, we find that the SUSY breaking scale is distributed quadratically. Since small values of the Gukov-Vafa-Witten (GVW) superpotential are better suited for viable phenomenology, and perturbatively flat vacua provide them, we have developed algorithms to find numerically all possible perturbatively flat vacua for the two-moduli case. We find that perturbatively flat vacua are statistically sparse within the whole set of vacua at low values of the GVW superpotential.
We apply the physically more appealing MIT Bag boundary conditions to study the Casimir effect on the lattice. Employing known formalism to calculate the Casimir energy for free lattice fermions, we show that the results for the naive, Wilson and overlap fermions match the continuum expressions precisely in the zero lattice spacing limit, as expected from universality. Furthermore, the apparent violation of the universality for naive fermions for (anti-)periodic boundary conditions noted by Ishikawa et al. is shown to be cured by applying suitable series extrapolation techniques, thus demonstrating that the Casimir energy for the naive fermions with P/AP boundary conditions agrees with the results for other free lattice fermions.
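As a generic illustration of the series-extrapolation idea mentioned above (the specific technique applied to the lattice Casimir energies may differ), the following sketch extrapolates a made-up sequence with power-law corrections in 1/N to its N → ∞ limit by polynomial extrapolation in 1/N, the Richardson-type idea behind such accelerations.

```python
import numpy as np

# Made-up sequence a_N = L + c1/N + c2/N^2 + c3/N^3: the naive last term still
# carries an O(1/N) error, while extrapolating a cubic fit in x = 1/N to x = 0
# recovers the limit to high accuracy.
L_exact = -np.pi / 24.0                   # stand-in for a known limiting value
N = np.arange(1, 40)
a_N = L_exact + 0.7 / N - 0.3 / N**2 + 0.05 / N**3

x = 1.0 / N
coeffs = np.polyfit(x, a_N, deg=3)        # fit a cubic in 1/N
L_extrapolated = np.polyval(coeffs, 0.0)  # evaluate at 1/N = 0

print("last term of sequence :", a_N[-1])
print("extrapolated limit    :", L_extrapolated)
print("exact limit           :", L_exact)
```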
In gauge theories and gravity, scattering amplitudes of any number of external particles reduce, in the soft limit, to amplitudes with one fewer external particle times a universal soft factor. Higher-order interaction vertices (or scattering amplitudes) in a theory can be built up from lower-order vertices (amplitudes) by using a multiplicative universal factor associated with the emission of a soft boson.
In this talk, I will first explain, as examples, how universal soft factors for Yang-Mills theory and gravity can be extracted from their respective light-cone actions and how higher-point interaction vertices can be constructed using the soft factor. I will then discuss soft theorems in the context of maximally supersymmetric field theories and explain the construction of interaction vertices of N = 8 supergravity.
Feynman integrals at any order of perturbation, in the Lee-Pomeransky representation, could be realised as a subset of Euler-Mellin integrals. Such integrals are known to satisfy the Gelfand-Kapranov-Zelevinsky (GKZ) system of partial differential equations. In an ongoing collaboration, we automate the derivation of the associated GKZ-system for a given Feynman diagram from either its Lee-Pomeransky representation or its Mellin-Barnes representation. We also present the automation of two mathematically equivalent techniques, namely the Gröbner deformation method and the method of triangulations of point configurations to solve this system, in the form of Mathematica packages, with support from specialised software such as TOPCOM, Polymake, and Macaulay2. As applications, we show that our package allows one to compute both NLO and NNLO Feynman integrals and express their result as multivariate hypergeometric functions.
In this talk, I will discuss the technical challenges in the analytic computation of multi-loop integrals that appear in higher-order perturbative computations. I will explain the techniques for computing integrals with massive internal propagators needed for two-loop QCD corrections to Higgs decays and NNLO QCD corrections to ttj production. The inclusion of massive internal propagators often leads to a more complicated class of function space, which will be another highlight of my talk.
The proton spin crisis is a long-standing issue in spin physics, implying that the quarks carry only a fraction of the proton's spin. To account for the proton's spin, spin sum rules have been formulated. These rules state that in a longitudinally polarized proton, the spin carried by the quarks and gluons, together with their orbital angular momentum (OAM), must add up to the proton spin. This decomposition of the proton spin into the spin and OAM of quarks and gluons is given by Jaffe-Manohar and by Ji separately. In this work [Phys. Rev. D 105 (2022), 114033], we have derived a generalized form of the gluon orbital angular momentum operator at small-$x$ for a longitudinally polarized proton. This expression reproduces the gluon OAM operators given by Jaffe-Manohar and by Ji in the appropriate combination of limits.
Singular factors originating from the QCD factorisation of scattering amplitudes in soft and collinear limits play a prominent role in both organising and computing high-order perturbative contributions to hard-scattering cross sections. In this talk, I will report on recent work with eprint number 2208.05840. We start from the factorisation structure of scattering amplitudes in the collinear limit, and we introduce collinear functions that have a process-independent structure. These collinear functions, which are defined at the fully-differential level, can then be integrated over the appropriate observable-dependent phase space to compute logarithmically-enhanced contributions to the corresponding observable. For transverse-momentum dependent observables, we show how the collinear functions can be defined without introducing what is known as rapidity divergences in the literature. We present the results of explicit computations of the collinear functions up to next-to-next-to-leading order in perturbation theory.
One of the classic ways of studying QCD events in high-energy experiments is to measure event-shape variables, e.g. thrust, jet broadening and angularity, which are observables designed to characterize several properties including the geometric shape of the hadron distribution in the event. In this talk, we will discuss a more general global event shape, "angularity", for the deep inelastic scattering (DIS) process $eP \to$ dijet, in the framework of Soft-Collinear Effective Theory (SCET), and give a precision prediction for the DIS angularity cross section for the future Electron-Ion Collider (EIC) at next-to-next-to-leading logarithmic (NNLL) accuracy. The talk is mostly based on our recent publication JHEP 11 (2021) 026.
Angularity is a class of event-shape observables that can be measured in deep-inelastic scattering (for instance at the EIC at BNL). With its continuous parameter 'a', one can interpolate angularity between thrust and broadening and even access the region beyond. Providing such a systematic way to access various observables makes angularity attractive for analyses with event shapes. We give the definition of angularity for DIS and factorize the cross section using soft-collinear effective theory. The factorization is valid over a wide range of regions below and above thrust, but breaks down in the broadening limit. It contains an angularity beam function, which is a new result, and we give its expression at $\mathcal{O}(\alpha_s)$. We also perform large-log resummation of the angularity distribution and make predictions at various values of 'a' at next-to-next-to-leading logarithmic accuracy.
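For orientation, here is a minimal sketch of an angularity-style observable computed from particle energies and angles with respect to a fixed reference axis, using the classic $e^+e^-$ convention $\tau_a = (1/Q)\sum_i E_i (\sin\theta_i)^a (1-|\cos\theta_i|)^{1-a}$; the DIS/SCET definition used in the talk involves specific frame and axis choices and differs in detail, and the toy event below is invented.

```python
import numpy as np

# Toy angularity in a common e+e- style convention; a -> 0 is thrust-like,
# a -> 1 is broadening-like. theta is measured from an assumed reference axis.
def angularity(energies, thetas, Q, a):
    energies = np.asarray(energies, float)
    thetas = np.asarray(thetas, float)
    weights = np.sin(thetas) ** a * (1.0 - np.abs(np.cos(thetas))) ** (1.0 - a)
    return np.sum(energies * weights) / Q

# invented two-jet-like event: most energy close to the axis, a little at wide angle
E = [45.0, 40.0, 10.0, 5.0]          # GeV
theta = [0.05, np.pi - 0.07, 0.6, 2.3]
Q = 100.0                            # GeV, assumed hard scale
for a in (-0.5, 0.0, 0.5):
    print(f"a = {a:+.1f}: tau_a = {angularity(E, theta, Q, a):.4f}")
```

Varying the parameter 'a' in such a toy shows how the same event is weighted more towards the collinear or the wide-angle radiation, which is the interpolation the abstract refers to.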
In this talk, I discuss the calculation of all helicity amplitudes for four-parton scattering in three-loop massless QCD. Our results allow us, for the first time, to verify completely the structure of quadrupole IR divergences at this perturbative order in QCD. From the high-energy limit of the amplitudes, we have extracted the three-loop gluon Regge trajectory in full QCD. Our findings provide a highly non-trivial test of the universality of high-energy factorization in QCD.
The success of the LHC project is marked by not only the discovery of the Higgs boson a decade back, but also by the vast amount of knowledge acquired about the relevant physics. This talk will mainly highlight the major results on characterization of the Higgs boson based on the data from Run2 operation of the LHC.
Currently, the most important mandate of the LHC physics programme is to measure the self-coupling
($\lambda$) of the Higgs boson. This parameter is crucial for describing the shape of the Higgs potential.
At the LHC pair production of the Higgs boson provides direct access to $\lambda$, though the event rate
is extremely small in the standard model. However, contributions from new physics beyond standard model
can potentially enhance the event yield even with limited amounts of data collected so far. Both the ATLAS
and the CMS collaborations are studying Higgs pair production using various final states. In this talk I will
present the strategies and the most important results obtained by the CMS experiment using the full Run-2 data delivered by the LHC in proton-proton collisions at a centre-of-mass energy of 13 TeV.
An overview of the search for top quark associated Higgs boson production (ttH and tH) using full Run-2 proton-proton collision data (137 fb-1) collected by the CMS experiment at center-of-mass energy of 13 TeV will be presented. This will cover Higgs decays into final states involving pairs of photons as well as final states involving leptons (electrons, muons and taus). This search is important in providing a direct probe of the top-Higgs Yukawa coupling which might be influenced by beyond standard model physics. Comprehensive details about search strategies e.g. event categorization, background estimation and signal extraction will be provided. The overview will conclude with the latest CMS results on this search.
Measurements of fiducial inclusive and differential cross sections of Higgs boson production in p-p collisions are presented based on the data collected with the CMS detector, corresponding to an integrated luminosity of 137 $fb^{-1}$ at a centre-of-mass energy of 13 TeV. The final state is considered where the Higgs boson decays to two W bosons, which subsequently decay to an electron, a muon, and a pair of neutrinos. The measured value of the integrated fiducial cross section is 86.5±9.5 fb, consistent with the standard model expectation of 82.5±4.2 fb.
Despite an extensive set of searches for physics beyond the standard model, no smoking-gun evidence of resonant phenomena is observed so far at the LHC. Nevertheless, the recent application of effective field theories (EFT) demonstrates that subtle deviations, hiding in the observables' distributions, can probe new physics at energy scales often exceeding the LHC's reach in the direct searches. In this talk, the latest results from the CMS experiment on the searches for the signature of effective field theory operators involving top quark and Higgs boson will be presented.
Below is a tentative list of results to be reviewed.
1. arXiv:2205.05120
2. Phys. Rev. D. 104 (2021) 052004
3. arXiv:2208.12837
4. JHEP 05 (2022) 091
We consider $Z'$ models that can generate the two `1D’ new physics scenarios with Wilson coefficients $C_9^{\rm NP} <0$ and $C_9^{\rm NP} = -C_{10}^{\rm NP}$, to account for the anomalies in $b \to s \ell \ell$ decays. We present the $1\sigma$ favored parameter space of these two classes of models using updated constraints from $CP$-conserving and $CP$-violating observables, $B_s$-mixing and neutrino trident. We show that the predictions of direct $CP$ asymmetry $A_{CP}$ in $B \to K^+ \mu \mu$ decays close to the $c\bar{c}$ resonance region can be used to detect the presence of new $CP$ violating phases in the couplings. The preferred parameter space of $Z'$ models generating scenario $C_9^{\rm NP} <0$ allows for an enhancement in the integrated $A_{CP}$ in $q^2 = [8,9]\mathrm{GeV}^2$ and $q^2 = [6,7]\mathrm{GeV}^2$ up to $\pm 10\%$ and $\pm 5\%$ respectively. While such an enhancement is possible in $Z'$ models generating the scenario $C_9^{\rm NP} = -C_{10}^{\rm NP}$, the favored parameter space prefers only positive values. Hence, a future more precise measurement of $A_{CP}$ near the $c\bar{c}$ resonance can help in distinguishing between the two classes of $Z'$ models.
Several extensions of the standard model (SM) predict the existence of heavy particles that undergo lepton flavor violating (LFV) decays, thereby motivating searches to look for deviations from the SM in the dilepton final states. This talk will present the recent results on the search for such heavy resonances and quantum black holes in the eμ, eτ, and μτ mass spectra using the proton-proton collisions data recorded by the CMS experiment at the CERN LHC at center-of-mass energy of 13 TeV corresponding to an integrated luminosity of 137.1 fb-1.
The distribution amplitudes (DAs) for a heavy-quark system are not well known and are very challenging to determine. One tries to model them using heavy-quark effective field theory. However, there is a free parameter involved, related to the inverse moments of the DAs, whose value needs to be fixed using experimental data. For the B meson, this is done using information on the $B \to \ell \nu_\ell \gamma$ process, which helps constrain its value. For the D meson, the uncertainty in its value is very large, which leads to huge uncertainties in non-perturbative hadronic quantities like form factors.
In this talk, we will shed some light on these issues and will discuss a possible solution using the experimental data of $D_q^* \to D_q \gamma$ (q=u,d,s) decays and comparing them with the results obtained using Light Cone Sum Rules. We will show how such an estimation can provide better and complementary results for these unknown parameters.
Due to the loop suppression in the standard model, flavor-changing neutral current transition decays provide an ideal platform to look for physics beyond the standard model. The latest LHCb measurements of various flavor observables in $b \to s \ell^{+}\ell^{-}$ quark-level transition decays show significant deviations from the standard model expectations. Similarly, very recently the Belle II collaboration has reported an upper bound of $\mathcal{B}(B\to K^{+}\nu\bar{\nu})<4.1\times 10^{-5}$ in $b\to s\nu\bar{\nu}$ decays. It is well known that the $b\to s \ell^{+}\ell^{-}$ and $b\to s\nu\bar{\nu}$ decay channels are closely linked in the standard model, as well as in physics beyond the standard model, under $SU(2)_L$ gauge symmetry. In this context, we perform a combined angular analysis of the $\Lambda_b\to \Lambda^{(*)}\mu^{+}\mu^{-}$ and $\Lambda_b\to \Lambda^{(*)} \nu\bar{\nu}$ baryonic decay channels. To explore the new physics effects we make use of the standard model effective theory formalism, which provides a model-independent way to describe new physics in terms of higher-dimensional operators. We give predictions of several physical observables pertaining to these baryonic decay channels in the standard model and in several new physics scenarios.
The Belle experiment at KEK, Japan currently has one of the largest datasets accumulated at the $\Upsilon(5\rm S)$ resonance. This dataset, produced at an $e^+e^-$ centre-of-mass (CM) energy of approximately $10.86$ GeV, corresponds to an integrated luminosity of $121.4$ fb$^{-1}$. We have searched for the rare decay for the first time using this accumulated dataset.
The decay is a neutral, charmless, non-leptonic, charged-current mediated and strangeness non-conserving rare decay which proceeds via W-exchange and W-annihilation Feynman diagrams within the Standard Model (SM). The theoretical branching fractions (BF) predicted using various methods, namely the Flavor Diagram Approach, perturbative QCD, and QCD factorization, are $(0.40\pm 0.27)\times 10^{-6}$, $(0.28\pm 0.09)\times 10^{-6}$, and $(0.13\pm 0.05)\times 10^{-6}$, respectively.
We have analyzed the real data sets for this analysis and the results will be presented at the symposium.
Decays mediated by the flavour-changing neutral current transition $b\to s\ell^{+}\ell^{-}$ are not allowed at tree level in the standard model (SM) and can only proceed via higher-order loop diagrams. Such suppressed decays provide an excellent avenue to search for physics beyond the SM. The $B\to K\ell^{+}\ell^{-}$ ($\ell = e, \mu$) decays have recently sparked a lot of interest through a measurement of the lepton-flavour-universality ratio $R_{K}$, the ratio of the muon to electron channel branching fractions. LHCb found a 3.1 standard deviation difference between its $R_{K}$ measurement and the SM prediction. Belle II, which has been recording $e^{+}e^{-}$ collision data since 2019, provides a complementary experimental setup to confirm this discrepancy. The $B\to J/\psi(\ell^{+}\ell^{-})K$ decays, in contrast to the suppressed, charmless $B\to K\ell^{+}\ell^{-}$ decays, involve the favoured $b\to c$ tree-level transition, where beyond-the-SM contributions are expected to be negligible. Thus, a measurement of $R_{J/\psi}$ and its consistency with unity would be a strong validation of the future $R_{K}$ measurement in the charmless counterpart. We present our recent findings from Belle II data on $R_{J/\psi}$, the isospin asymmetry, and the branching fraction of $B\to J/\psi K$ decays. The talk also covers a simulation-based sensitivity study of the upcoming Belle II measurement of $R_{K}$.
We measure the strong-phase difference between $D^{0}$ and $\bar{D}^{0}\to K_{S}^{0}K^{+}K^{-}$ using a data sample corresponding to an integrated luminosity of 2.92 ${\rm fb}^{-1}$ collected in $e^{+}e^{-}$ collisions at a centre-of-mass energy corresponding to the mass of the $\psi(3770)$. The $D^{0}\bar{D}^{0}$ meson pairs are produced in a quantum-correlated state, and their subsequent decays are recorded by the BESIII 4$\pi$ magnetic spectrometer. The measured value of the strong-phase difference is an essential input for determining the value of the Cabibbo-Kobayashi-Maskawa (CKM) angle $\gamma/\phi_{3}$ through the decay $B^{-}\to DK^{-}$, where $D$ can be $D^{0}$ or $\bar{D}^{0}$ decaying to $K_{S}^{0}K^{+}K^{-}$. The measured values of the strong-phase difference are the most precise to date. In this talk, I will also highlight the recent measurements from Belle II and LHCb using the BESIII value.
We study a simple viable dark matter model with a real singlet scalar, a vector-like singlet lepton and a doublet lepton. We find a considerable enhancement of the allowed region of the scalar dark matter parameter space under the influence of the new Yukawa coupling. The Yukawa coupling associated with the fermion sector strongly shapes the dark matter parameter space satisfying the current relic density of the Universe. This model can also accommodate tiny neutrino masses and mixing at the one-loop level through the radiative seesaw mechanism. A dilepton + missing transverse energy signature arising from the new fermionic sector can be observed at the Large Hadron Collider (LHC) while satisfying the relic density and other theoretical and experimental bounds. We perform such an analysis for a benchmark point in the context of the 14 TeV LHC with a future integrated luminosity of 3000 ${\rm fb^{-1}}$.
Muons produced in extensive air showers (EAS) by the interaction of primary cosmic rays (PCRs) in the Earth's atmosphere provide an excellent tool to determine the PCR composition. This is based on the fact that a heavier PCR produces more muons than a lighter one, so an accurate determination of the muon multiplicity in an EAS is required. The GRAPES-3 experiment, located in Ooty, Tamil Nadu, contains a large-area muon telescope to detect muons above 1 GeV in the EAS by counting the hits in its proportional counters, which have been used for the PCR composition studies so far. Here, we present a new method to calculate the number of muons in an EAS based on the pulse-height information in the proportional counters, a piece of additional information recorded besides the hits. The pulse height is proportional to the energy deposited by the muons. The preliminary results of the analysis show that the dynamic range for detecting muons has increased by more than a factor of two. This is important for determining the mass composition of PCRs accurately beyond the knee region ($\sim$ 3 PeV) of the cosmic ray spectrum.
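A minimal sketch of the pulse-height idea described above, with entirely assumed calibration numbers (not GRAPES-3 values): if the total pulse height in a proportional counter is roughly the sum of single-muon responses, dividing by the mean single-muon pulse height estimates the muon count where simple hit counting would saturate at one per counter.

```python
import numpy as np

rng = np.random.default_rng(7)
mean_single = 100.0     # assumed mean single-muon pulse height (arbitrary units)
sigma_single = 20.0     # assumed spread of the single-muon response

def simulate_counter(n_muons):
    """Toy total pulse height when n_muons cross one counter."""
    return np.sum(rng.normal(mean_single, sigma_single, n_muons))

def estimate_muons(pulse_height):
    """Pulse-height based muon estimate; hit counting would always give 1."""
    return max(1, int(round(pulse_height / mean_single))) if pulse_height > 0 else 0

for n_true in (1, 3, 8):
    ph = simulate_counter(n_true)
    print(f"true muons = {n_true}, hit count = 1, pulse-height estimate = {estimate_muons(ph)}")
```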
Keywords: Hubble tension, inflation, quantum gravity, effective field theory.
Recent observations from CMB (Planck) and supernova measurements show a discrepancy in the present estimated value of the Hubble parameter, known as the Hubble tension. In the present work, we explore the possibility of addressing the Hubble tension in an inflationary scenario with quantum gravity effects in the framework of effective field theory. Further, we investigate the role of quantum gravity in the phase transition of hybrid inflation in the light of the Hubble tension and scrutinise the results against various observations.
Deep learning (DL) is one of the most popular machine learning frameworks in the high-energy physics community and has been applied to numerous problems. The ability of DL models to learn unique patterns and correlations from data, and thereby map highly complex non-linear functions, is of particular interest. Such features of DL models could be used to explore the hidden physics that governs particle production, anisotropic flow, spectra, etc., in heavy-ion collisions. This work sheds light on the possible use of DL techniques such as a feed-forward deep neural network (DNN) based estimator to predict the elliptic flow ($v_2$) in heavy-ion collisions at RHIC and LHC energies. A novel method is proposed to feed track-level information as input to the DNN model. The model is trained on Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV minimum bias events simulated with a multi-phase transport model (AMPT). All charged hadrons are used for the training. The trained model is successfully applied to estimate the centrality dependence of $v_2$ for both LHC and RHIC energies. The proposed model is also quite successful in predicting the transverse momentum ($p_{\rm T}$) dependence of $v_2$. An extension of the work is being performed to look into the elliptic flow of pions, kaons, and protons at these energies using the proposed DNN model. Some scaling properties are also explored. A noise sensitivity test is performed to estimate the systematic uncertainty of this method. Results of the DNN estimator are compared to both simulation and experiment, which establishes the robustness and prediction accuracy of the model.
Reference:
Neelkamal Mallick, Suraj Prasad, Aditya Nath Mishra, Raghunath Sahoo, and Gergely G\'abor Barnaf\"oldi, Phys.Rev.D \textbf{105}, 114022 (2022).
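As a schematic illustration of the regression task described in the abstract above, here is a minimal sketch that trains a small feed-forward network on toy events whose azimuthal distributions follow $dN/d\phi \propto 1 + 2v_2\cos 2\phi$; the binning of the track-level information, the network architecture and all numbers are assumptions for demonstration, not the AMPT-based setup of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_events, n_eta, n_phi = 4000, 8, 16

X, y = [], []
for _ in range(n_events):
    v2 = rng.uniform(0.0, 0.12)                      # toy "true" v2 of the event
    mult = int(rng.integers(200, 1200))
    # accept-reject sampling of phi from dN/dphi ∝ 1 + 2 v2 cos(2 phi)
    phis = []
    while len(phis) < mult:
        cand = rng.uniform(0, 2 * np.pi, mult)
        keep = rng.uniform(0, 1 + 2 * 0.12, mult) < 1 + 2 * v2 * np.cos(2 * cand)
        phis.extend(cand[keep].tolist())
    phis = np.array(phis[:mult])
    etas = rng.uniform(-0.8, 0.8, mult)
    # flatten a coarse (eta, phi) occupancy map into the feature vector
    hist, _, _ = np.histogram2d(etas, phis, bins=[n_eta, n_phi],
                                range=[[-0.8, 0.8], [0, 2 * np.pi]])
    X.append(hist.ravel() / mult)
    y.append(v2)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.25, random_state=2)
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=600, random_state=2)
net.fit(X_tr, y_tr)
print("mean |v2_pred - v2_true| on toy test set:",
      np.mean(np.abs(net.predict(X_te) - y_te)))
```

The normalisation of the occupancy map by multiplicity is one simple way to keep the input scale comparable across centralities; the published analysis may handle this differently.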
Motivated by the recent flavor anomalies we consider the extension of the Standard Model with a scalar leptoquark $S_1(\bar{3},1,1/3)$ and a scalar triplet to investigate the rare semileptonic $B$ decays involving quark level transitions $b\rightarrow c\ell^{-}\bar{\nu_{\ell}}$, $(g-2)_{\mu}$ anomaly, neutrino mass and matter-antimatter asymmetry simultaneously. The important feature of the work is that it leads to successful gauge coupling unification of fundamental forces when embedded in non-SUSY $SO(10)$ grand unified theory. We also comment on the feasibility of parameter space that can be probed at low-energy experiments like neutrinoless double beta decay or at high-energy colliders.
Measurement of the neutrino mixing parameters using a magnetized iron calorimeter (ICAL) is the primary goal of INO. Most INO-ICAL related analyses, using prototype detector data and simulations, are currently based on conventional algorithms. In recent years, AI-based analyses have shown impressive performance in many high-energy physics experiments. In this talk, we present an overview of machine learning algorithms that we have developed for a few analyses related to INO-ICAL and its prototype detectors. The studies include directionality and charge identification, energy reconstruction of cosmic muons in the mICAL, prediction of muon multiplicity, and searches for new event topologies.
This talk will discuss semi-leptonic and non-leptonic $B_c$ meson decays into S-wave charmonium and present our results. The analysis has been performed in the non-relativistic QCD (NRQCD) framework. We use Heavy-Quark Spin Symmetry (HQSS) to relate the $B_c\to \eta_c$ to the $B_c \to J/\psi$ form factors. Furthermore, information on the $B_c\to \eta_c$ form factors was extracted from the available lattice information on the $B_c\to J/\psi$ form factors. We then predict the observables R(J/$\psi$) and R($\eta_c$), along with new-physics predictions for these observables. Several other predictions, related to $e^+e^-$, Higgs, and Z decays to single or double charmonium and to radiative modes involving J/$\psi$ and/or $\eta_c$, are also presented.
The ultimate purpose of relativistic heavy-ion physics is to study strongly interacting matter under the extreme conditions of high temperature and energy density, where quantum chromodynamics (QCD), the theory of strong interactions within the Standard Model, predicts a transition to a new phase of matter, the quark-gluon plasma (QGP). The QGP is a novel state of QCD matter in which partons (quarks and gluons) are deconfined, i.e. no longer bound into composite particles. Such a state of matter existed in the primordial universe, microseconds after the Big Bang, and may still exist today in the cores of neutron stars. In general, such studies are expected to provide information on the properties of large, complex systems including elementary quantum fields, and an indication of the influence of the microscopic laws of physics, expressed by the "QCD equations", on macroscopic phenomena like phase transitions and critical behavior.
Therefore, the study of nuclear matter and its different phases is of relevance also beyond the QCD specific domain, because phase transitions and symmetry breaking are principal concepts of the Standard Model and the QCD phase transitions are the only ones that are within reach of laboratory experiments. So far there are no clear experimental indications for the creation of quark matter. However, the study of unusually large density fluctuations observed in high energy hadronic and heavy ion collisions has drawn special attention towards the understanding of the mechanism of particle production. The existence of large fluctuations may be a signal for a phase transition and the understanding of the origin of these fluctuations may provide new insights into the underlying mechanisms responsible for the particle production.
The cosmic muon flux and its angular distribution have been measured using a Resistive Plate Chamber (RPC) at Kolkata (22° 36' 6.71" N, 88° 25' 7.89" E) at an elevation of 8 m. The zenith angle was varied up to 90 degrees in both the clockwise and anticlockwise directions with respect to the zenith. The same configuration was also simulated using a cosmic-ray flux following a $\cos^{n}\theta$ distribution, and the experimental and simulated results were compared. Details of the measurement technique, along with the simulation results, will be presented.
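A minimal sketch of the simulation side described above, assuming $n = 2$ in the $\cos^{n}\theta$ law purely for illustration: zenith angles are drawn by inverting the cumulative distribution of $\cos^{n}\theta\,\sin\theta$ and binned in 5° intervals, as one would do before comparing with the measured angular scan.

```python
import numpy as np

rng = np.random.default_rng(11)
n_exp = 2                                     # assumed exponent of the cos^n law

# For pdf ∝ cos^n(theta) sin(theta) on [0, pi/2], the CDF is 1 - cos^(n+1)(theta),
# so inversion gives cos(theta) = (1 - u)^(1/(n+1)) for u uniform in [0, 1).
u = rng.uniform(0, 1, 100_000)
theta = np.arccos((1.0 - u) ** (1.0 / (n_exp + 1)))

edges = np.radians(np.arange(0, 95, 5))       # 5-degree zenith bins up to 90 deg
counts, _ = np.histogram(theta, bins=edges)
for lo, hi, c in zip(np.degrees(edges[:-1]), np.degrees(edges[1:]), counts):
    print(f"{lo:4.0f}-{hi:3.0f} deg: {c}")
```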
MINERvA is a dedicated (anti)neutrino experiment performed using $(\bar\nu_\mu)\nu_\mu$ beam with different nuclear targets and the aim is to study neutrino interactions and nuclear medium effects in the wide range of Bjorken $x$ and four momentum transfer squared $Q^2$. The study would not only be helpful in understanding the hadron dynamics in the nuclear medium but also it will be useful in reducing the systematics in neutrino oscillation experiments being performed worldwide with accelerator and atmospheric neutrinos.
It is very important to separate the signal and background events for an accurate measurement of the DIS cross section in the nuclear target region. In this analysis, we select as signal all charged-current antineutrino events that occur in a given target material, with muon energy between 2 and 50 GeV, and that pass the true DIS cuts ($Q^2\geq 1~{\rm GeV}^2$ and $W\geq 2~{\rm GeV}$); all other events are background. The background is categorized based on the position of the interaction vertex in the detector and on the events passing the kinematical cuts.
Once the background is fixed, results will be obtained for the double-differential scattering cross section in the DIS region.
We shall present the preliminary results of the analysis being performed to estimate the background in the antineutrino-nucleus double-differential DIS cross section in the intermediate energy region (average $E_{\bar\nu_\mu}\sim$ 6 GeV) with the $\bar\nu_\mu$ beam in MINERvA.
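For concreteness, here is a minimal sketch of the true-DIS selection quoted above applied to toy kinematics; the hadronic invariant mass is computed as $W^2 = M^2 + 2M\nu - Q^2$ for a nucleon at rest, and the event variables are randomly generated rather than taken from MINERvA.

```python
import numpy as np

M = 0.938  # nucleon mass in GeV, target assumed at rest

def is_dis_signal(E_nu, E_mu, Q2):
    """True-DIS selection: E_mu in [2, 50] GeV, Q^2 >= 1 GeV^2, W >= 2 GeV."""
    nu = E_nu - E_mu                       # energy transfer
    W2 = M**2 + 2.0 * M * nu - Q2
    W = np.sqrt(np.maximum(W2, 0.0))
    return (E_mu >= 2.0) & (E_mu <= 50.0) & (Q2 >= 1.0) & (W >= 2.0)

# toy events, purely for demonstration
rng = np.random.default_rng(4)
E_nu = rng.uniform(2.0, 20.0, 10)          # GeV
E_mu = E_nu * rng.uniform(0.2, 0.9, 10)    # GeV, always below E_nu
Q2 = rng.uniform(0.1, 6.0, 10)             # GeV^2
print(is_dis_signal(E_nu, E_mu, Q2))       # True = DIS signal, False = background
```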
The finite modular groups are isomorphic to permutation groups, e.g. $\Gamma_{3}\simeq$ A$_4$. Apart from the usual irreducible representations of the permutation groups, they carry modular weights as new symmetry charges. The Yukawa couplings transform as modular forms of the complex modulus $\tau$, acquiring suitable charges of the underlying symmetry. In this work, we propose a scenario implementing a correction to the scaling neutrino mass matrix and investigate baryogenesis based on A$_4$ modular symmetry within a Type-I+II seesaw framework. The scaling neutrino mass matrix results in a vanishing reactor mixing angle ($\theta_{13}$) and an inverted ordering of neutrino masses with vanishing lowest neutrino mass eigenvalue ($m_{3}=0$). In the proposed model, the field content comprises the standard model particles, two chiral neutrino superfields ($N_{1}^{c}$, $N_{2}^{c}$) and a scalar singlet weighton field ($\phi$), which results in the scaling neutrino mass matrix through the Type-I seesaw. The correction to the scaling neutrino mass matrix is manifested through the Type-II seesaw, obtained by introducing a supersymmetric pair of scalar triplet fields ($\Delta$, $\bar{\Delta}$). In particular, the correction to the scaling neutrino mass matrix is found to be proportional to the modular Yukawa couplings of weight 10 ($Y_{1,1'}^{10}$). The model satisfies the neutrino oscillation data and the cosmological constraint on the sum of neutrino masses ($\sum m_{i}\leq 0.12$ eV). The modular Yukawa couplings of weight 2 are sensitive only to the imaginary part of the complex modulus $\tau$. We have also studied the implications of the model for neutrinoless double beta decay ($0\nu\beta\beta$). The effective Majorana mass parameter ($M_{ee}$) is found to be in the range ($0.04-0.06$) eV, which is well within the sensitivity reach of $0\nu\beta\beta$ decay experiments. Furthermore, there exists a robust lower bound on the sum of neutrino masses ($\sum m_{i}\geq 0.05$ eV). Also, in order to generate a consistent baryon asymmetry of the universe, the right-handed neutrino mass is found to be in the range $(1-5)\times10^{13}$ GeV, implying that flavor effects are negligible.
The Level-1 (L1) trigger is the first stage of the two-level trigger system of the CMS detector and is hardware based. It gathers information from the electromagnetic (ECAL) and hadron (HCAL) calorimeters and the muon detectors to select interesting physics events. The L1 electron/photon (e/γ) trigger identifies e/γ candidates based on energy depositions in the ECAL and HCAL. The present data-taking period of the LHC, Run 3, has just started. We present the studies performed to prepare the L1 e/γ trigger for Run 3 data taking. This includes analyzing and optimizing the efficiency and resolution of the L1 e/γ triggers at a feasible trigger rate. The optimisation focused on developing a new set of Monte Carlo based calibrations and isolation schemes that put a limit on the transverse energy around the candidate e/γ, to maximise the physics reach of CMS. Resolutions of the L1 e/γ trigger in the recent Run 3 data will also be presented.
The characterization of sensors is being carried out in a setup at IIT Madras that includes a laser source. Electrical measurements have been performed on several silicon pad detectors, and the results are found to be in good agreement with those obtained with the setup at CERN. First results are reported, including the physical inspection of the sensors with a profilometer, a manual probe station and a confocal microscope before the electrical measurements.
The CMS Collaboration is preparing to replace its endcap calorimeters for the HL-LHC era with a high-granularity calorimeter (HGCAL). The HGCAL will have fine segmentation in both the transverse and longitudinal directions, and will be the first such calorimeter specifically optimized for particle-flow reconstruction to operate at a colliding-beam experiment. The proposed design uses silicon sensors as active material in the regions of highest radiation and plastic scintillator tiles equipped with on-tile silicon photo-multipliers (SiPMs), in the less-challenging regions. The unprecedented transverse and longitudinal segmentation facilitates particle identification, particle-flow reconstruction and pileup rejection.
A prototype of the silicon-based electromagnetic and hadronic sections, along with a section of the CALICE AHCAL prototype, was exposed to muons, electrons and charged pions in beam test experiments at the H2 beamline at the CERN SPS in October 2018 to study the performance of the detector and its readout electronics. Given the complex nature of hadronic showers, energy reconstruction is expected to benefit from detailed information on the energy deposits and their spatial distribution for the individual showers in the detector, which can be well exploited by advanced machine learning algorithms. Here we present the reconstruction of hadronic showers created by charged pions of momenta 20-300 GeV using a dynamic reduction network (DRN) based on graph neural networks (GNNs).
We consider a unified framework accommodating the dark matter and the matter-antimatter asymmetry of the universe through a minimal addition of right-handed neutrinos and a singlet scalar φ to the Standard Model. The framework assumes an asymmetry in the dark sector and connects it with the asymmetry in the visible sector. It turns out that the out-of-equilibrium decay of the heavier right-handed neutrinos to the visible sector (LH) and to the dark sector (heavy N_R decays, via the φ scalar, to the lightest RH neutrino as the dark matter candidate) is consistent with the relic density observations of both sectors. The model allows for a wide range of dark matter masses, from the keV to the GeV scale, and is also able to provide the active neutrino masses through the Type-I seesaw mechanism. Thus, the model explains the lepton asymmetry, dark matter abundance and neutrino masses all in a next-to-minimal framework.
Abstract: Multiparticle production in hadron-nucleus (hA) and nucleus-nucleus (AA) collisions has been studied thoroughly in the last four decades. The main aim of studying AA collisions is to investigate the properties of the QGP. There are many signatures of the QGP; one way to study it is via fluctuations in the particle density. Fluctuations in individual events may give rise to peaks in phase-space domains. These can be studied by the method of scaled factorial moments of the multiplicity distribution.
The concept of intermittency is connected to the fractal geometry of the underlying physical process. Fractal geometry allows us to describe a system that is intrinsically irregular at all scales: a fractal structure has the property that a magnified small portion of it shows the same complexity as the whole system. The idea, therefore, is to construct a formalism that can describe systems with local properties of self-similarity. It has been suggested that if intermittency exists, then the normalized factorial moments of the multiplicity distribution should exhibit a characteristic power-law dependence, or scale invariance. Several authors have carried out similar studies to demonstrate the presence of intermittency, which may be due to the self-similar and fractal structure of particle production in nuclear collisions.
An analysis of Scaled Factorial Moments (SFM) has been carried out for moments of order $q$ = 2-5 in order to study the intermittent behaviour of particles produced in 14.5A GeV/c $^{28}$Si-AgBr collisions.
Events were also generated with the Heavy Ion Jet Interaction Generator (HIJING) model and the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model.
The experimental results are compared with those obtained for the simulated events. The variation with the order of the moment, $q$, of the anomalous dimension, $d_q$, and of the parameter $\lambda_q$ related to the non-thermal phase transition has been studied. We have calculated the scaling index, $\nu$, and found that its value is close to the predicted one, indicating that the phase transition is of second order. It is also found that $\langle F_{q}\rangle$ exhibits power-law behaviour at small bin sizes, showing that intermittency is present in multiparticle production.
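As a schematic illustration of the scaled-factorial-moment analysis described above, here is a minimal sketch using one common averaging convention (conventions for horizontal versus vertical averaging differ in the literature) applied to toy, uncorrelated events, for which $F_q$ stays close to unity; an intermittent sample would instead show $F_q$ rising as a power of the number of bins $M$.

```python
import numpy as np

# F_q(M) = < (1/M) sum_m n_m (n_m-1)...(n_m-q+1) > / < (1/M) sum_m n_m >^q,
# averaged over events, for M pseudorapidity bins (one common convention).
def scaled_factorial_moment(eta_per_event, M, q, eta_range=(-1.0, 1.0)):
    edges = np.linspace(*eta_range, M + 1)
    num, den = [], []
    for etas in eta_per_event:
        n_m, _ = np.histogram(etas, bins=edges)
        ff = np.ones_like(n_m, dtype=float)
        for k in range(q):                 # falling factorial n(n-1)...(n-q+1)
            ff *= n_m - k
        num.append(ff.mean())
        den.append(n_m.mean())
    return np.mean(num) / np.mean(den) ** q

# toy sample: uniform, uncorrelated (Poissonian) production, so F_q ~ 1
rng = np.random.default_rng(9)
events = [rng.uniform(-1, 1, rng.poisson(60)) for _ in range(2000)]
for M in (2, 5, 10, 20):
    print(f"M = {M:3d}:  F2 = {scaled_factorial_moment(events, M, 2):.3f}, "
          f"F3 = {scaled_factorial_moment(events, M, 3):.3f}")
```

In an intermittency study one would plot $\ln F_q$ against $\ln M$ and extract the slopes, from which quantities such as $d_q$ and the scaling index $\nu$ are derived.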
The Standard Model Effective Field Theory (SMEFT) is a useful framework to study indirect deviations, for new non-resonant physics effects at the LHC. In this talk, we will focus on the production of Higgs in association with a photon from pp collisions in the boosted regime. We will discuss the modification of the Higgs couplings to gauge bosons and fermions arising from higher dimensional operators that contribute to this process. Taking some of the admissible dimension-6 operators as illustration, we focus on some kinematic variables that can reflect the presence of such effective operators. This will bring in the identification of the kinematic regions distributed in the corners of the phase space where the SM is depleted. We will discuss the utility of multivariate analysis and jet substructure observables in facilitating the isolation of contributions from the new interactions. Finally, we conclude with the projected limits on the Wilson coefficients of the dimension-6 operators, for which they can be probed at the $3 \sigma$ level in the high luminosity run of the LHC at 14 TeV.
The ICAL experiment at INO will be instrumented with glass Resistive Plate Chambers (RPCs) as the active detector element. The mini-ICAL (Mini Iron CALorimeter) is a 600-times scaled-down version of ICAL. It is currently in operation at a transit campus of INO in Madurai, mainly to study the engineering aspects of the ambitious ICAL detector. The mini-ICAL employs a Closed Loop gas System (CLS) that supplies a gas mixture of R134a (95.2%), isobutane (4.5%), and SF6 (0.3%), purifies it by flowing it through different filters, and reuses it. Any contamination of the gas mixture during operation may change the characteristics of the detector, e.g. an increase in the current and in the background noise rates and, consequently, a deterioration of the performance of the RPCs.
The composition of the gas mixture can affect the RPC pulse shape. As the electron multiplicity reaches an extreme value, the avalanche mode can transform into streamer mode. This can also happen due to leakage of atmospheric gas into the RPC volume. A dedicated study was carried out by inserting different fractions of atmospheric air into the RPC system, with and without removing the water vapour. The percentage of impurities is monitored with a Gas Chromatograph (GC), and the RPC dark current, noise rates, strip multiplicity, efficiency, as well as the fraction of streamer pulses in cosmic muon events are studied. This paper will present the correlation of all these performance characteristics using RPCs in the mini-ICAL detector.
Current Status and Future Outlook of Neutrinoless Double Beta Decay Searches
Lisha*1, Neelu Mahajan1
1Department of Physics, Goswami Ganesh Dutta Sanatan Dharma College, Chandigarh, India-160031
corresponding author: neelu.mahajan@ggdsd.ac.in
In the last two decades, the search for understanding the nature of neutrinos and the origin of their mass has become a topic of paramount importance and has emerged as a prolific field of research. The main reason behind this is the discovery of neutrino oscillations, which clearly established the existence of massive neutrinos and provided a signal to explore New Physics (NP). An additional motivation comes from the search for neutrinoless double beta decay. Consequently, a new generation of experiments employing different detection techniques and isotopes is being actively pursued by experimental groups across the world. In this paper, the physics of neutrinoless double beta decay is briefly discussed, mainly the question of whether neutrinos are Dirac or Majorana particles. The current experimental data and the sensitivity of future experiments are also presented. An observation of neutrinoless double beta decay would demonstrate the Majorana nature of neutrinos and provide a precise measurement of their mass. Such an unambiguous detection would open an exciting era for next-generation searches aimed at understanding the underlying physics mechanism.
Keywords: Neutrino, Neutrino Oscillations, Majorana neutrinos, Neutrinoless double beta decay
*Presenter
Many physics analyses at collider experiments are performed with a photon as one of the final-state particles, if not the only final-state object. Identification of photons coming from the hard scattering (prompt photons) in proton-proton collisions thus plays a crucial role. A major background comes from photons produced in π0 decays and fragmentation processes in jets. In order to identify the prompt photons, many identification (ID) variables are constructed that can discriminate against these non-prompt photons. Isolation variables are among these ID variables and can be calculated either from the energy deposits in the detector (cluster isolation variables) or from particle candidates formed by merging appropriate energy clusters (particle isolation variables). In the present work, a comparison of the performance of these two kinds of isolation variables and their combination was carried out using Monte Carlo samples for the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC). The photon ID criteria were tuned by optimizing these isolation variables along with other ID variables (shower-shape variables) using a genetic algorithm for three working points at 90%, 80% and 70% prompt-photon efficiency; these will also be discussed. The cluster isolation variables showed improved performance in the preliminary study, and the same is currently being explored in the context of the latest proton-proton collisions at the LHC.
Higgs couplings to charged leptons form an important measurement for understanding not only the Standard Model but also physics beyond it, including multi-Higgs models, supersymmetric models, etc. In the present work, we focus on the complementarity between direct and indirect measurements in fixing the charged-lepton Yukawa couplings, including flavour-violating couplings. We show that the present limits from the LHC are already competing with the indirect flavour-violating measurements in some cases.
Resistive plate chambers (RPCs) are parallel-plate gaseous detectors, made with relatively low-cost materials, robust in fabrication and handling, and offering excellent time and position resolutions (up to 50 ps and tens of micrometers, respectively). RPCs have been used for muon identification in high energy physics experiments, including collider physics and neutrino physics experiments. For a little less than two decades, RPCs have also been used for societal applications, including medicine, muography, etc. The RPC electrodes are usually made of a highly resistive material (typically bakelite or float glass) with a bulk resistivity of 10^10 – 10^12 Ω.cm. At NISER, we are developing RPCs using Bakelite electrodes with the aim of using them for muon radiography applications. In general, to make the inner surfaces of the Bakelite electrodes smoother, they are coated with linseed oil, which helps to reduce the micro-discharge probability and also reduces the surface UV sensitivity. We coated one 1 cm x 1 cm Bakelite sample with raw linseed oil and another with double-boiled linseed oil, to which drying agents are added. The Raman spectroscopy results on these samples will be presented. We have developed RPCs using 0.1 cm thick Bakelite electrodes of 20 cm x 20 cm size without linseed oil coating and coated with both raw and double-boiled linseed oil, and studied their I-V characteristics, signal rates and efficiency. These results will be presented.
The measurements of lepton flavor universality violation in semileptonic $b\to s$ and $b\to c$ transitions hint towards a possible role of new physics in both sectors. Motivated by these anomalies, we investigate the lepton flavor violating $B\to K^*_2 (1430)\mu^{\pm}\tau^{\mp}$ decays. We calculate the two-fold angular distribution of the $B\to K^*_2\ell_1\ell_2$ decay in the presence of vector, axial-vector, scalar and pseudo-scalar new physics interactions. We then compute the branching fraction and lepton forward-backward asymmetry in the framework of the $U^{2/3}_1$ vector leptoquark, which is a viable solution to the $B$ anomalies. We find the upper limits $\mathcal{B}(B\to K^*_2\mu^-\tau^+)\leq 0.74\times10^{-8}$ and $\mathcal{B}(B\to K^*_2\mu^+\tau^-)\leq 0.33\times 10^{-7}$ at $90\%$ C.L.
In the present work we have investigated the properties of heavy quarkonia in the presence of finite baryonic chemical potential and momentum anisotropy using a quasi-particle approach. The effect of the finite baryonic chemical potential has been incorporated through the quasi-particle Debye mass, and the momentum anisotropy is used to examine the binding energies of the quarkonium states. We have also calculated the thermal width of the quarkonium states from the imaginary part of the potential and found that the thermal width decreases with momentum anisotropy. The dissociation baryonic chemical potential of the quarkonium states has been calculated with the help of the thermal width in the presence of temperature and momentum anisotropy. The effect of temperature and momentum anisotropy on the mass spectra of the quarkonium states at finite baryonic chemical potential has also been studied.
We determine the properties of the 1P states of charmonia and bottomonia in the presence of a strong magnetic field. Here we have employed a medium-modified form of the potential which has both Coulombic and string parts. This enables us to study the properties of heavy quarkonia even above the critical temperature. The magnetic field effect has been incorporated through the quasi-particle Debye mass. It has been found that the binding energies of the ${\chi }_c$ and ${\chi }_b$ are strongly affected by the magnetic field. The dissociation temperatures of the ${\chi }_c$ and ${\chi }_b$ are also reduced with increasing values of the magnetic field. The ${\chi }_b$ state, however, dissociates at higher values of the temperature and magnetic field because of its larger mass and hence larger binding energy. We have also studied the effect of the magnetic field on the mass spectra of the 1P states of heavy quarkonia.
We apply the renormalization group procedure for effective particles (RGPEP) for a single quark flavor to obtain an effective Hamiltonian in light-front QCD for heavy quarkonia. We introduce a gluon mass ansatz that allows truncation to the quark-antiquark-gluon sector. Using the renormalized Hamiltonian and an appropriate Fock-state basis, we formulate the eigenvalue problem for quarkonium.
A rotating neutron star with hyperonic core described by an effective chiral model with $\sigma-\rho$ cross coupling within mean field approximation is considered. The hyperonic bulk viscosity coefficient caused by non-leptonic weak interactions is calculated and its role in suppressing the gravitationally driven $r$-modes is investigated. Various other relevant damping timescales are calculated and are used to obtain the $r$-mode instability window. Our model predicts a significant reduction of the unstable region between $10^8 - 10^9$ K due to hyperon bulk viscosity alone.
Strange hadrons have smaller interaction cross-sections compared to light hadrons. The freeze-out temperatures of strange hadrons are close to the quark-hadron transition temperature as predicted by lattice Quantum Chromodynamics (lQCD). Therefore, they serve as an excellent probe for understanding the dynamics of QCD matter and the onset of the partonic stage in relativistic heavy-ion collisions.
In this work, we will present results on elliptic flow ($v_{2}$) of $K_{s}^{0}$, $\Lambda$, $\bar{\Lambda}$, $\Xi^{-}$, $\bar{\Xi}^{+}$, $\Omega^{-}$, and $\bar{\Omega}^{+}$ for Au + Au collisions at $E_{lab}$ = 35~A GeV from Parton Hadron String Dynamics (PHSD) transport model. PHSD is a microscopic off-shell transport approach that describes the strongly interacting partonic and hadronic matter in and out-of equilibrium.
We have analyzed $\sim$20 million minimum bias events for Au + Au collisions at $E_{lab}$ = 35~A GeV from the PHSD model. Measurements are made in the central rapidity region ($|y|$ $<$ 1.0) and different centrality intervals, which cover central to peripheral collisions.
We will present the dependence of $v_{2}$ on centrality, rapidity ($y$), and transverse momentum ($p_{T}$). In order to investigate the collectivity in Au + Au collisions, we have also studied $v_{2}$ scaled by the participant eccentricity ($\varepsilon_{2}$). We will discuss the number-of-constituent-quark (NCQ) scaling of the measured $v_{2}$ in these collisions and compare our measurements to the published experimental results.
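For reference, the elliptic flow coefficient and the NCQ-scaling variable used in this kind of study are the standard ones, quoted here for clarity:
$$v_{2} = \left\langle \cos\left[2\left(\varphi - \Psi_{2}\right)\right] \right\rangle, \qquad \frac{v_{2}}{n_{q}} \;\;\mathrm{vs.}\;\; \frac{m_{T}-m_{0}}{n_{q}},$$
where $\varphi$ is the particle azimuthal angle, $\Psi_{2}$ the second-order event-plane angle, $n_{q}$ the number of constituent quarks and $m_{T}-m_{0}$ the transverse kinetic energy.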
We show in this work how a sub-100 GeV Z' in a U(1) extension of the Standard Model (SM) can emerge through Higgs-mediated channels at the Large Hadron Collider (LHC). The light Z' has minimal interaction with the SM sector as well as vanishing kinetic mixing with the Z boson, which allows it to be light and below the SM gauge boson masses. Interestingly, such a light Z' is very difficult to observe in the standard production modes. We show that it is possible to observe such a gauge boson via scalar mediators that are responsible for the symmetry breaking mechanism of the model. The model also provides a dark matter candidate whose compatibility with the observed relic density is established due to the light Z'. We also comment on other interesting possibilities such a light Z' may present for other observables.
We continue the study of a recently constructed holographic QCD model supplemented with a background magnetic field. We consider the holographic dual of a quark-antiquark pair and investigate its entropy beyond the deconfinement phase transition in terms of the interquark distance, temperature and magnetic field. We obtain a clear magnetic field dependence in the strongly decreasing entropy near deconfinement and in the entropy variation for growing interquark separation. We uncover various pieces of supporting evidence for inverse magnetic catalysis. The emergent entropic force is found to become stronger with magnetic field, promoting quarkonium dissociation. We also probe the dynamical dissociation of the quarkonium state, finding a faster dissociation with magnetic field. At least the static predictions should become amenable to a qualitative comparison with future lattice data.
The initial energy density produced in ultrarelativistic hadronic and heavy-ion collisions is an important quantity for characterising the system created in these collisions. In this work the Bjorken initial energy density is estimated in proton-proton collisions at $\sqrt{s} = 5.02, 7$ and 13 TeV, for both minimum bias events and different multiplicity classes, with a new method that uses the proton radius R = 0.89 fm taken from electron-proton scattering data and takes the transverse overlap area of the collision to be $\pi R^{2}$ (the Bjorken expression is recalled after the references below). The same quantity has also been calculated for minimum bias pp collisions at $\sqrt{s} = 0.9, 2.76$ and 8 TeV. It is observed that the Bjorken initial energy density in high-multiplicity proton-proton events at the above collision energies reaches values comparable to those obtained in Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV [1] and 5.02 TeV [2]. The results obtained in this work are also compared to those reported earlier for $\sqrt{s} = 7$ TeV [3], which use the overlap area obtained from a Gaussian scattering-density profile.
[1] Phys. Rev. C 93, no.2, 024911 (2016)
[2] Sci. Rep. 12, no.1, 3917 (2022)
[3] Universe 3, no.1, 9 (2017)
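For clarity, the Bjorken estimate referred to above follows the standard expression, with the transverse overlap area taken here as $\pi R^{2}$:
$$\epsilon_{Bj} = \frac{1}{\tau\,\pi R^{2}}\,\frac{dE_{T}}{dy},$$
where $\tau$ is the formation time and $dE_{T}/dy$ the transverse-energy density per unit rapidity at midrapidity.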
Over the last few decades, extensive research has been carried out at a few of the most sophisticated experimental setups ever established in human history. The Large Hadron Collider (LHC) at CERN and the Relativistic Heavy Ion Collider (RHIC), located at Brookhaven National Laboratory (BNL) in New York, serve as benchmarks for studying the primordial form of matter that existed in the universe shortly after the Big Bang and for mimicking the conditions that existed at its birth. When subatomic particles collide at ultra-relativistic velocities, their constituents, namely quarks and gluons, become deconfined for a short time, and the internal (color) degrees of freedom govern the dynamics.
Surprisingly, the properties of the QGP turned out to be opposite to what had been expected from perturbative QCD calculations long before its discovery. This has attracted the long-lasting curiosity of physicists worldwide.
The strongly interacting and correlated QGP motivates the use of dissipative hydrodynamics as a tool to extract the properties of this extremely dense matter in the form of various transport coefficients. Heavy quarks (HQ), on the other hand, play an important role in probing the properties of the QGP for several reasons. With masses of the order of a few GeV, heavy quarks are produced in the pre-equilibrium phase, i.e. before the formation of the QGP. Their long relaxation time makes them a useful tool to describe the properties of the QGP. Moreover, owing to their large mass, thermal production of heavy quarks from interactions of QGP medium particles is very unlikely, so their number remains essentially constant. These factors make heavy quarks good probes of the QGP.
The QGP is a highly correlated system with large coupling strength; hence the perturbative treatment is plagued with large inconsistencies when matched with lattice results. This signals the need to include non-perturbative effects in the calculation. One promising non-perturbative approach was given by Gribov and later extended by Zwanziger, who formulated a renormalizable action, termed the Gribov-Zwanziger (GZ) action. Within the GZ framework, the gluon propagator in the covariant gauge is expressed as:
$$D^{\mu\nu}(Q)=\left[\delta^{\mu\nu}-(1-\xi)\frac{Q^{\mu}Q^{\nu}}{Q^2}\right]\left(\frac{Q^2}{Q^4+\gamma_G^4}\right)~,$$
where $\xi$ is the gauge parameter and $\gamma_G$ is the Gribov parameter, which is fixed either by matching the thermodynamic quantities with the lattice equation of state or by solving the one-loop gap equation. The Gribov-Zwanziger prescription leads to infrared-improved dispersion relations for gluons. We calculate the diffusion coefficients of a heavy quark within the Gribov prescription and compare them with the lattice data available in the range $1 \le T/T_c \le 5$. Finally, we compare our results with the Leading Order (LO) and Next-to-Leading Order (NLO) calculations of the diffusion coefficients, $\kappa$ and $\mathcal{D}$, and find better agreement with the lattice data than the LO and NLO calculations over the given range of $T/T_c$.
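For orientation, the momentum-space and spatial diffusion coefficients quoted above are related, in the standard static (heavy-quark) limit, by
$$\mathcal{D} = \frac{2T^{2}}{\kappa},$$
so a larger momentum-space diffusion coefficient $\kappa$ corresponds to a smaller spatial diffusion coefficient $\mathcal{D}$.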
We have calculated the coupling constants of the eta meson with the two lowest nucleon resonances by first constructing a vacuum-to-eta correlation function of the interpolating fields of two nucleons and then taking its matrix elements with respect to nucleon and/or nucleon-resonance spinors. Different matrix elements give rise to different QCD sum rules, which have been used to solve for the diagonal as well as non-diagonal coupling constants involving nucleon resonances and/or the nucleon. We have also checked the stability of our results with respect to variations of the different QCD and phenomenological input parameters.
We revisit an alternative gauge-theoretic formulation leading to emergent gravity scenarios with renewed interest. The generic perspective of bulk/boundary correspondence is exploited to ensure the boundary aspects of quantum gravity from a bulk gauge theory. In addition to the extremal multi-black holes, we show that a non-extremal charged black hole is also governed by multi-black holes in an emergent gravity framework. The unique topological quantum correction is worked out explicitly to ensure the multi-black holes underlying quantum gravity. The analysis underlying this new theoretical tool is believed to unfold an origin of dark energy in the Universe.
In this work, we focus on finding exact solutions of a time-dependent model of a damped harmonic oscillator subject to an external time-varying magnetic field with a time-dependent noncommutativity. The well-known method of the Lewis invariant, associated with a non-linear differential equation known in the literature as the Ermakov-Pinney (EP) equation, is chosen to treat the system. It is then observed that certain explicit solution sets of the EP equation enable us to obtain a series of exact analytic eigenfunctions for specific choices of the damping factor, the time-dependent frequency of the oscillator and the time-dependent external magnetic field. Finally, we establish explicit expressions for the energy expectation values corresponding to each exact solution and show their dynamics graphically.
In this analysis, I have derived the CP-odd weak basis (WB) invariants at low energies for a 3×3 neutrino mass matrix subjected to the condition of two zeros and an equality between arbitrary non-zero elements (also known as a hybrid texture), in the basis where the charged lepton mass matrix is assumed to be diagonal. Among the forty-two phenomenological possibilities of hybrid texture, only eight are found to be viable in the light of current experimental data at the 3σ confidence level (CL), as shown in recent work. Further, I have computed the necessary and sufficient condition required for leptonic CP conservation corresponding to each viable case, and subsequently examine the leptonic CP properties.
Ever since their inception by Pauli, neutrinos have turned out to be one of the most fascinating particles. Despite decades of theoretical and experimental advances, many questions still remain unanswered in neutrino physics. Some of the most significant ones amongst these include the nature of neutrinos being Dirac or Majorana and the possibility of CP violation in the leptonic sector. In order to decode these enigmatic aspects, the last few decades have witnessed a lot of thrust in the form of dedicated experimental as well as phenomenological advances. On the experimental front, many of the ongoing and upcoming experiments such as DUNE, IceCube, GERDA, EXO-200, JUNO, NOvA etc. are expected to shed some light on these in the near future. On the phenomenological front, many approaches have been proposed over the years in order to decipher the mystery of neutrino masses and mixings, amongst which the ones based on texture-specific mass matrices have turned out to be quite noteworthy. In this context, it becomes interesting to explore the implications of these matrices regarding the above-mentioned puzzles of the neutrino sector. To this end, the present work aims to explore the texture two-zero neutrino mass matrices and seek their implications for leptonic CP violation and neutrinoless double beta decay, specifically in the light of current neutrino oscillation data. After examining the viability of different classes of these mass matrices, we analyze them further and obtain interesting results regarding some significant parameters such as the Dirac and Majorana CP-violating phases, the effective mass in neutrinoless double beta decay, the absolute neutrino mass etc.
Quark-gluon plasma (QGP) is a thermalized, colour-deconfined QCD matter created under extreme conditions, such as very high temperature and/or density, in ultra-relativistic heavy-ion collisions. The formation of QGP has been confirmed at the LHC and RHIC experiments by comparison of low-momentum ($p_\perp$) measurements with relativistic hydrodynamic predictions, and by comparison of high-$p_\perp$ data with pQCD predictions. While high-$p_\perp$ physics had a decisive role in the QGP discovery, it was rarely used for understanding the bulk properties. On the other hand, high-$p_\perp$ probes are also powerful tomography tools, since they are sensitive to global QGP features, such as different temperature profiles or initial conditions. Further, the low-$p_\perp$ observables do not put strict constraints on all the parameters of the models used to describe the evolution of QGP, leaving some properties of QGP poorly constrained. Therefore, high-$p_\perp$ observables can be used to improve the constraints on these parameters, complementing the low-$p_\perp$ ones. In this work, we perform an analysis to determine whether the high-$p_\perp$ observables can distinguish the specific shear viscosity ($\eta/s$) of the system. We use the $`$T$_\text{R}$ENTo' framework to generate initial profiles and evolve the fluid with the $(2+1)$-dimensional viscous hydrodynamics code $`$VISH2+1'. The particlization is performed using the Cooper-Frye prescription, and finally, we use the $`$UrQMD' hadronic afterburner to produce the low-$p_\perp$ observables. To utilize high-$p_\perp$ theory and data for inferring the bulk properties of QGP, a dynamical high-$p_\perp$ parton energy-loss formalism has been developed. This formalism is based on finite-size, finite-temperature field theory and takes into account that QGP constituents are dynamical particles. Both collisional and radiative energy losses are calculated in the same theoretical framework. Finally, this formalism is integrated into the numerical framework $`$DREENA' (Dynamical Radiative and Elastic ENergy loss Approach), which has been used to compute the high-$p_\perp$ observables.
We investigate the flavour bounds on the $Z_2\times Z_5$ symmetry, a minimal form of the $Z_2\times Z_N$ flavour symmetry, that can provide a simple set-up for the Froggatt-Nielsen mechanism. This minimal form is capable of explaining the fermionic masses and mixing pattern of the standard model, including that of the neutrinos. The bounds on the parameter space of the flavon field of the $Z_2\times Z_5$ symmetry are derived using current quark and lepton flavour physics data and the future projected sensitivities to quark and lepton flavour effects. The strongest bounds on the flavon of the $Z_2\times Z_5$ symmetry come from $K^0-\bar{K}^0$ and $D^0-\bar{D}^0$ mixing. In the future Phase-II of LHCb, the ratio of the $BR(B_d\rightarrow \mu^+\mu^-)$ and $BR(B_s\rightarrow \mu^+\mu^-)$ branching fractions, $R_{\mu\mu}$, will be crucial in ruling out a major part of the flavon parameter space.
The GRAPES-3 experiment contains several types of detectors, namely plastic scintillators, proportional counters, and NaI(Tl) crystal detectors, which are coupled to multiple independent data acquisition systems. At present, GRAPES-3 operates 19 independent data acquisition (DAQ) systems. The different detector types record different secondary components of cosmic-ray showers to study various physics phenomena, including cosmic-ray composition, multi-TeV gamma rays, and terrestrial gamma-ray flashes produced by thunderstorms. It is necessary to collate the data with a common time stamp. In order to provide an easier hardware solution, an FPGA-based time calibration trigger (TCT) system has been designed which provides a precise common time reference among the various DAQ systems. The TCT has been tested by implementing it for the trigger-less muon data acquisition (TMDAQ) system. The core concept of the TMDAQ is that a single board makes measurements from 118 channels in an independent fashion, allowing the generation of a user-defined trigger. The data collected from these boards must be collated with precision, which is accomplished by implementing the TCT system for the 16 boards. We will present the hardware details, its impact, and observations along with future plans.
In mathematical physics, geometric quantization is a method of defining a quantum theory corresponding to an existing classical theory. It has been successfully applied to many field-theoretic models. Constrained systems also occur frequently in physics, since they typically arise in the Hamiltonian formulation of classical systems with gauge symmetries. Here, we will try to understand geometric quantization from the perspective of a constrained system.
In this work, we explore an alternative derivation of Hawking radiation. Instead of the field-theoretic derivation, we have suggested a simpler calculation based on quantum mechanical reflection from a one-dimensional potential. The reflection coefficient shows an exponential fall in energy which, in comparison with the Boltzmann probability distribution, yields a temperature. The temperature is the same as Hawking's temperature for spherically symmetric black holes. The derivation gives an exact local calculation of Hawking temperature that involves a region lying entirely outside the horizon. This is a crucial difference from the tunneling calculation, where it is necessary to involve a region inside the horizon.
Reference: https://arxiv.org/abs/2203.06588
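For reference, the temperature recovered in the comparison above coincides with the standard Hawking temperature of a Schwarzschild black hole of mass $M$,
$$T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}}.$$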
The LHC physics program achieved a major goal by discovering the long-anticipated Higgs boson in 2012. Although the Higgs mass was found to be 125 GeV, there are physics models predicting additional heavy Higgs bosons. LHC Run 1 analyses have already found no standard-model-like heavy Higgs boson in the mass range between 200 GeV and 1000 GeV. With the increased centre-of-mass energy of LHC Run 2, it is natural to perform a similar search. In this analysis, the full LHC Run 2 data recorded by the CMS detector has been used to look for a high-mass resonance in the production of two Z bosons, where one of the Z bosons decays into two leptons and the other decays into quarks. A matrix-element-based method has been used to calculate discriminants between the signal and backgrounds, using the complete kinematic information of the final-state particles.
Black hole horizons admit a set of degrees of freedom which may be thought to be induced from the spacetime bulk. Such a formulation, in which the quantum dynamics of horizon may be understood in terms of these horizon data, may be developed in a covariant framework. We shall describe how this method may be used to identify the thermodynamic properties of a horizon.
Quantum Chromodynamics (QCD) predicts that at sufficiently high temperature ($T$) and/or baryon chemical potential ($\mu_{B}$), matter exists in the form of quarks and gluons that are no longer confined within hadrons. This deconfined state of matter is known as the Quark-Gluon Plasma (QGP). The goal of relativistic heavy-ion collision experiments is to create such a hot and dense state of matter and study its properties. Measurements of identified particle spectra in Au+Au collisions provide information on the bulk properties, such as the integrated yield (dN/dy), average transverse momentum ($\langle p_{T} \rangle$), particle ratios, and freeze-out parameters of the medium produced. The systematic study of bulk properties sheds light on the particle production mechanism in these collisions. In addition, the centrality dependence of the freeze-out parameters provides an opportunity to explore the QCD phase diagram.
In this talk, we will present the transverse momentum spectra of identified hadrons ($\pi^{\pm}$, $K^{\pm}$, $p$, and $\bar{p}$) at mid-rapidity ($|y|<$0.1) in Au+Au collisions at $\sqrt{s_{NN}}$ = 54.4 GeV. The centrality dependence of dN/dy, particle ratios, and kinetic freeze-out parameters will also be presented, and their physics implications will be discussed. In addition, we will compare our results with previously published results at other collision energies.
We have explored the thermodynamics and phase structure of the Polyakov loop-extended three-flavour quark-meson (PQM) model at varying values of temperature and quark chemical potential. We have investigated the effect of finite volume on the phase structure of QCD in the transition from the confined hadronic state to deconfined quarks. The PQM model has been modified by the inclusion of vector fields, along with the introduction of asymmetry through an isospin chemical potential. The phase boundary is found to shift towards higher values of temperature and quark chemical potential with decreasing system size. A detailed study of such effects will have important implications for the QCD phase diagram.
We have revisited the Chandrasekhar limit calculation for the zero magnetic field case and explored its extension to finite magnetic fields. We have considered the quantum aspect of the magnetic field, for which the momentum components perpendicular to the field direction are quantized. Due to the magnetic field, an anisotropy in thermodynamic quantities like pressure is expected. We have obtained the magnetic-field-dependent pressure of the degenerate electron gas along the parallel and perpendicular directions for the non-relativistic, ultra-relativistic and relativistic cases. Balancing the outward degeneracy pressure against the inward gravitational pressure, our aim is to find mass-radius (M-R) curves for different magnetic fields. We have obtained M-R relations in the lowest Landau level approximation, while the calculations including higher Landau level contributions are in progress.
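As background to the lowest Landau level treatment mentioned above, the single-electron energies in a magnetic field $B$ along the $z$-direction take the standard form (quoted here for orientation), with $B_{c} = m_{e}^{2}c^{3}/e\hbar$ the critical field:
$$E_{\nu}(p_{z}) = \sqrt{p_{z}^{2}c^{2} + m_{e}^{2}c^{4}\left(1 + 2\nu\,\frac{B}{B_{c}}\right)}, \qquad \nu = 0, 1, 2, \ldots,$$
so that in the lowest Landau level ($\nu = 0$) only the motion along the field contributes to the kinetic energy.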
The recent measurements of $R_D$, $R_{D^*}$ and $R_{J/\psi}$ by the BaBar, Belle and LHCb experiments, which deviate substantially from their Standard Model predictions, indicate that the notion of lepton flavour universality (LFU) is violated in weak charged-current processes mediated through $b \to c \ell \bar \nu_\ell$ transitions. These intriguing results hint towards a possible implication of new physics in the $ b \to c \tau \bar \nu$ transition. This, in turn, opens up another avenue to look for new physics, namely $ b \bar b \to \tau \bar \tau$ processes, since new physics contributions to the $ b \to c \tau \bar \nu$ process would necessarily modify the $ b \bar b \to \tau \bar \tau$ transitions. In this context, we study the implications of LFU-violating new physics scenarios for the ratio of branching fractions of leptonic $\Upsilon \equiv \Upsilon(ns) (n=1,2,3)$ decays, i.e., $R_{\Upsilon}(ns)= Br(\Upsilon(ns) \to \tau \bar{\tau})/ Br(\Upsilon(ns) \to \mu \bar{\mu})$. We find that the $ R_{\Upsilon}(ns)$ values deviate significantly from their standard model predictions due to the impact of new physics.
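For reference, the LFU ratios referred to above are conventionally defined as
$$R_{D^{(*)}} = \frac{\mathcal{B}(B \to D^{(*)}\,\tau\,\bar{\nu}_{\tau})}{\mathcal{B}(B \to D^{(*)}\,\ell\,\bar{\nu}_{\ell})}\;(\ell = e, \mu), \qquad R_{J/\psi} = \frac{\mathcal{B}(B_{c} \to J/\psi\,\tau\,\bar{\nu}_{\tau})}{\mathcal{B}(B_{c} \to J/\psi\,\mu\,\bar{\nu}_{\mu})}.$$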
The CMS detector at the CERN LHC has a two-level trigger system to filter the events that are stored for further analysis. The Level-1 trigger is based on fast electronics. The events passing the Level-1 trigger then go through the High Level Trigger (HLT), which is based on a computing farm. The HLT menu consists of a large number of HLT paths based on various selections on reconstructed physics object(s) in a given event. The object reconstruction at the HLT is optimized to run faster due to the timing constraints and is, therefore, less accurate than the offline reconstruction. Properly reconstructing jets and missing transverse energy at the high level trigger is crucial for the success of precision measurements and new physics searches at the LHC, since many physics signatures at the LHC include jets and missing transverse energy in their final state. In this talk we will present the performance of the jet and missing transverse energy reconstruction at the HLT during LHC Run-2. The performance is measured in terms of efficiencies and resolutions with respect to offline reconstructed objects.
We will describe the search for exotic decays of the Higgs boson to a pair of light pseudoscalar Higgs bosons, under the hypothesis that one of the pseudoscalars decays to a pair of b quarks and the other decays to two photons. Such signatures are predicted in Beyond Standard Model (BSM) physics, namely the next-to-minimal supersymmetric standard model. The final state, consisting of a pair of b jets and two photons, is analyzed with the Higgs boson produced in association with a gauge boson, VH (V=W/Z). This analysis is based on a dataset of proton-proton collisions corresponding to an integrated luminosity of 59.73 fb-1 accumulated with the CMS experiment at the CERN LHC in 2018 at a centre-of-mass energy of 13 TeV. A detailed estimation of the background will also be presented.
The motion of cosmic strings in the universe leads to the generation of wakes behind them. We study magnetized wakes of cosmic strings moving in the post recombination plasma. We show that magnetic reconnection can occur in the post-shock region of the cosmic string wake. This leads to a large amount of kinetic energy being released in the post-shock region of the wake. Since the width of the cosmic string wake is very narrow, the reconnection occurs over a very short lengthscale. This enhances the kinetic energy released during the reconnection. It is well known that magnetic reconnection can lead to Gamma Ray Bursts (GRB). We make a rudimentary estimate of the kinetic energy released by the magnetic reconnection in cosmic string wakes and show that it can account for low-energy Gamma Ray Bursts in the post recombination era.
Keywords: cosmic strings, shocks, magnetic reconnection.
This work aims to understand the behaviour of the thermodynamic properties of the hot and dense quark-gluon plasma (QGP) in the presence of static and dynamic magnetic fields. Theoretical and experimental results indicate that a strong magnetic field is produced at RHIC and the LHC during the collisions of heavy ions. We compute the equation of state (EoS) of the QGP with static and dynamic magnetic fields using a quasiparticle approach. We compare our results for the time-dependent magnetic field with earlier results for a static magnetic field. The resulting EoS of the magnetized quark-gluon plasma provides new findings relevant to high-energy heavy-ion collisions, astrophysics, etc.
Jets are collimated sprays of particles produced from the fragmentation and hadronization of hard-scattered partons in high energy hadronic and nuclear collisions. Jet properties are sensitive to details of parton showering processes and are expected to be modified in the presence of a dense partonic medium. Measurement of intra-jet properties in p--Pb collisions will help to investigate cold nuclear matter effects and enrich our current understanding of particle production in such collision systems. In this work, we will present the measurement of charged-particle jet properties, the mean charged-particle multiplicity and fragmentation functions, for leading jets in the range of jet $p_{\rm T}$ from 10 -- 100 GeV/c at midrapidity in p–Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV with ALICE. Results will be compared with theoretical model predictions.
The proposed ICAL detector by the INO Collaboration is a 51 kTon magnetized Iron Calorimeter which is designed to detect muons of energy in the range of 1-25 GeV, which are generated by the interaction of atmospheric $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ with the iron. ICAL is designed to provide a maximum magnetic field of $\sim$ 1.5 Tesla with 90$\%$ of its volume having more than 1 Tesla field. Since ICAL is a magnetized detector, it provides excellent charge identification of tracked particles and also helps with reconstruction of muon momentum.
The mini-ICAL is an 85-ton prototype detector, which is operating at Madurai, Tamil Nadu. It consists of 11 layers of iron, each 4m $\times$ 4m in dimension and made up of 7 plates of 56-mm-thick iron. Ten layers of Resistive Plate Chambers (RPCs) are sandwiched between the iron layers as the active detector elements. The mini-ICAL has two sets of copper coils, each with 18 turns, which are used to magnetize it by passing about 900 A of current through them. One of its main goals is to study the challenges involved in producing the required uniform magnetic field and in measuring it in situ as accurately as possible. The measurements are also used to validate the magnetic field simulations carried out using the MAGNET 7.7 software.
To measure the magnetic field, 150 Hall sensors and 5 search coils are used for each of layers 1, 6 and 11. The search coils provide the magnetic field value during ramp-up and ramp-down of the current through the coils, while the Hall sensors provide the magnetic field value in the steady state. The static 3-D simulation is carried out for different values of the current by optimizing various parameters such as the mesh size. In this paper, the characterization and calibration studies of the Hall sensors, as well as a comparison between the Hall sensors and the search coils, will be summarized. A comparison of measured and simulated magnetic fields will also be presented.
Effective cross-section defines the matter overlap in a two-particle collision and is considered one of the important tools to study proton-proton collisions at high energies. In a conventional approach, the value of effective cross-section is estimated by fitting the observables sensitive to double parton scattering. In this paper, the value of effective cross-section is predicted using the PYTHIA8 Monte-Carlo event generator by optimizing (tuning) model parameters, using the PROFESSOR package, to available underlying event data. The predicted value of effective cross-section is compared to already measured values.
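For context, the effective cross-section enters through the usual double-parton-scattering "pocket formula", quoted here for clarity:
$$\sigma^{AB}_{\rm DPS} = \frac{m}{2}\,\frac{\sigma_{A}\,\sigma_{B}}{\sigma_{\rm eff}},$$
with $m=1$ for identical and $m=2$ for distinguishable hard processes $A$ and $B$.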
The decay of any higher $c\bar{s}$ meson to $D_{s}^{+}\pi^{0}$ violates isospin conservation and thus has a small partial width. Some theoretical models suggest that $D_{s}^{*+} \to D_{s}^{+}\pi^{0}$ proceeds via $\pi^{0} - \eta$ mixing to conserve isospin, but even with this mechanism the radiative decay $D_{s}^{*+} \to D_{s}^{+}\gamma$ is still expected to dominate. The Belle II detector provides an opportunity to improve previous measurements with higher data statistics and improved detector performance. In this presentation, we present a feasibility study for the measurement of the ratio of partial widths $\Gamma(D_{s}^{*+} \to D_{s}^{+}\pi^{0})/\Gamma(D_{s}^{*+} \to D_{s}^{+}\gamma)$.
One of the manifestations of strong jet-energy loss in heavy-ion collisions is the large energy or $p_{\rm T}$ imbalance of jet pairs that are produced back-to-back. It is generally argued that this asymmetry is caused by the difference in the path lengths traversed by the two jets in the medium. We utilize the magnitude of the momentum imbalance ($x_j$) as a parameter to study the path-length effect of jet-energy loss on some intra-jet properties of leading and sub-leading jets, using a pQCD-inspired model for jet-medium interactions, JEWEL. We calculate the radial momentum density distributions for leading and sub-leading jets in proton-proton collisions at 5.02 TeV as a function of the dijet momentum imbalance $x_j$, and their modifications in Pb-Pb collisions at the same energy. Our study shows that, for events with large dijet asymmetry, modifications to sub-leading jets are stronger than those to leading jets, while in symmetric events both leading and sub-leading jets are significantly modified. These observations may indicate an apparent role of the path-length effect in jet energy-loss mechanisms.
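The momentum-imbalance variable used above is conventionally defined as the ratio of the sub-leading to the leading jet transverse momentum,
$$x_{j} = \frac{p_{\rm T}^{\rm subleading}}{p_{\rm T}^{\rm leading}} \leq 1,$$
so that $x_{j} \to 1$ corresponds to symmetric dijet events.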
A minimal extension of the standard model giving rise to baryogenesis is studied. This model includes the interaction of heavy colour-triplet scalars (~TeV) with a light Majorana fermion (~GeV), which is a potential non-thermal dark matter (DM) candidate, and an up-type quark. The colour-triplet scalars would be produced via the fusion of two down-type quarks (d and d'). We investigate the case of d and b quarks. Such models give a classic monojet signature with a b-quark jet at the LHC, where a high transverse momentum jet is produced along with large missing transverse energy in an event. Detailed simulation studies of the interactions of such massive scalars, using optimized and advanced analysis techniques, will be presented.
One of the essential ingredients for the precise measurement of neutrino oscillation parameters is the precise knowledge of neutrino energy. Due to heavy nuclear targets, nuclear effects introduce complications that create systematic uncertainties in neutrino energy reconstruction. These uncertainties further influence the determination of neutrino oscillation parameters. In this work, we study the impact of Effective Spectral Function and transverse enhancement on the sensitivity measurement of various neutrino oscillation parameters in the disappearance channel in the NOvA experiment. Together, the Effective spectral function and transverse enhancement model give a complete description of nuclear effects. We observe a significant impact of these two models in neutrino oscillation measurements in comparison with the Relativistic Fermi Gas model and the Local Fermi Gas model.
TXS 0506+056 is the first known blazar having a ∼3.5σ spatial and temporal correlation with the IceCube neutrino alert IceCube-170922A; this source has been observed in multiple wavelengths during different epochs. A multi-wavelength study of this source during different epochs suggests a time delay between the high-energy and very-high-energy (VHE) gamma-ray emission. Here, we propose that this time delay could be explained by two emission channels: electron synchrotron/SSC and proton synchrotron.
In this work we study the synergy among the future accelerator (T2HK), future atmospheric (ICAL) and future reactor (JUNO) neutrino experiments to determine the neutrino mass ordering. T2HK can measure the mass ordering only for favorable values of $\delta_{\rm CP}$, whereas the mass ordering sensitivity of JUNO is dependent on the energy resolution. Our results show that with a combination of T2HK, ICAL and JUNO one can have a mass ordering sensitivity of 7.2 $\sigma$ even for the unfavorable value of $\delta_{\rm CP} = 0^\circ$ for T2HK and most conservative value of JUNO energy resolution of 5$\%/\sqrt{E(MeV)}$.
The synergy mainly comes because different oscillation channels prefer different values of $|\Delta m_{31}^2|$ in the fit when the mass-ordering $\chi^2$ is minimized. In this context we also study: (i) effect of varying energy resolution of JUNO, (ii) the effect of longer run-time of ICAL, (iii) effect of different true values of $\theta_{23}$ and (iv) effect of octant degeneracy in the determination of neutrino mass ordering.
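As a minimal numerical sketch of the synergy described above (the $\chi^2$ shapes below are hypothetical placeholders, not the actual T2HK, ICAL or JUNO sensitivities), one can see how requiring a single $|\Delta m^2_{31}|$ in the combined wrong-ordering fit raises the combined $\chi^2$ above the sum of the individual minima:

    import numpy as np

    # Hypothetical wrong-ordering chi^2 curves versus the test value of |Delta m^2_31| (eV^2);
    # in a real analysis these would come from the full experimental fits.
    dm31 = np.linspace(2.30e-3, 2.70e-3, 401)

    chi2_t2hk = 4.0 + ((dm31 - 2.45e-3) / 0.05e-3) ** 2
    chi2_ical = 6.0 + ((dm31 - 2.55e-3) / 0.08e-3) ** 2
    chi2_juno = 9.0 + ((dm31 - 2.40e-3) / 0.03e-3) ** 2

    # Each experiment alone minimises over dm31 independently ...
    naive_sum = chi2_t2hk.min() + chi2_ical.min() + chi2_juno.min()
    # ... but the combination must fit all data sets with one common dm31,
    # which is the origin of the synergy.
    combined = (chi2_t2hk + chi2_ical + chi2_juno).min()
    print(f"sqrt(chi2): naive sum = {np.sqrt(naive_sum):.1f}, combined = {np.sqrt(combined):.1f}")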
The nonzero $\theta_{13}$ established by experimental results plays a significant role in the neutrino mixing matrix. Although tribimaximal (TBM) neutrino mixing is a well-established mixing matrix model, it predicts $\theta_{13}=0$. We introduce a simple perturbation matrix into the TBM matrix to obtain a suitable departure from TBM that yields a nonzero $\theta_{13}$ in agreement with the latest global fit to neutrino data. We also discuss the Dirac phase $\delta_{CP}$ along with the Jarlskog rephasing invariant $J_{CP}$ in our work.
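For reference, the unperturbed tribimaximal mixing matrix referred to above is, in a common sign convention,
$$U_{\rm TBM} = \begin{pmatrix} \sqrt{2/3} & 1/\sqrt{3} & 0 \\ -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \end{pmatrix},$$
corresponding to $\sin^{2}\theta_{12} = 1/3$, $\sin^{2}\theta_{23} = 1/2$ and $\theta_{13} = 0$; the perturbation is introduced precisely to generate a nonzero $\theta_{13}$.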
We explore the next-to-minimal supersymmetric Standard Model (NMSSM) using Higgs information at the LHC. The preferred values of various sparticle masses will also be discussed along with the associated NMSSM parameters found through a detailed scan consistent with experimental and dark matter constraints.
We apply the BFFT formalism to a prototypical second-class system, aiming to convert its constraints from second to first class. The proposed system admits a consistent initial set of second-class constraints and an open potential function, providing room for applications to mechanical models as well as field theories such as the non-linear sigma model. The constraints can be arbitrarily non-linear, broadly generalizing previously known cases. We obtain a sufficient condition under which a simple closed expression for the Abelian converted constraints and the modified involutive Hamiltonian can be achieved. As an explicit example, we discuss a particle on a torus, obtaining the full first-class abelianized constraints in closed form and the corresponding involutive Hamiltonian.
We present the Olsson.wl Mathematica package which aims to find linear transformations for some classes of multivariable hypergeometric functions. It is based on a well-known method developed by P. O. M. Olsson in J. Math. Phys. 5, 420 (1964) to derive the analytic continuations of the Appell F1 double hypergeometric series from the linear transformations of the Gauss 2F1 hypergeometric function. We provide a brief description of Olsson’s method and demonstrate the commands of the package, along with examples. We also provide a companion package, called ROC2.wl and dedicated to the derivation of the regions of convergence of double hypergeometric series. As applications, we show the usefulness of the package in finding the analytic continuations of Feynman integrals via their hypergeometric function representation.
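As an illustration of the kind of one-variable relation on which Olsson's method is built, one of the standard linear (Euler) transformations of the Gauss function reads
$$\,_{2}F_{1}(a,b;c;z) = (1-z)^{c-a-b}\,\,_{2}F_{1}(c-a,\,c-b;\,c;\,z),$$
and it is such transformations of $_{2}F_{1}$ that are lifted to the multivariable series.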
As a part of its R\&D, the ICAL collaboration has built a small prototype module called mini-ICAL to study the detector performance, the engineering challenges in the construction of a large-scale magnet and its magnetic field measurement system, as well as to test the ICAL electronics in the presence of the magnetic field. This detector was also used to measure the charge-dependent muon flux and to study the feasibility of a cosmic muon veto for shallow-depth neutrino experiments. The mini-ICAL detector consists of 11 layers of iron plates (dimension 4\,m$\times$4\,m$\times$0.056\,m) with an inter-layer gap of 45\,mm. The RPC detectors (area $\sim$2\,m$\times$2\,m) are inserted between the iron layers.
The characterisation of the RPC detectors and their electronics is an important part of the experiment before collecting physics data and results. Detector efficiency, position resolution and time resolution are the major parameters to characterise. In general, the operating potential of a chamber is determined by optimising the efficiency and noise rate of the device. This optimisation is based on the assumption that the performance of the device is uniform over the whole surface area. The INO-ICAL experiment is going to use $\sim$\,30000 RPCs of size $\sim$\,2\,m$\times$2\,m. All the RPCs will have to pass minimum quality assurance criteria, but may not be able to maintain good uniformity over the whole surface area, particularly over the whole running period of about twenty years. This paper describes our studies on the choice of the optimum operating HV for an RPC with non-uniform response.
The extension of the standard model with heavy right-handed neutrinos simultaneously accounts for the light neutrino masses and the baryon asymmetry through leptogenesis. In addition to the properties of the heavy neutrinos, leptogenesis depends on the CP phases measurable at neutrino oscillation and neutrinoless double beta decay experiments. In this work, we examine the scenario where the Dirac and Majorana phases are the dominant source of the CP violation required for successful leptogenesis. We demonstrate a scenario within the minimal inverse seesaw (ISS(2,2)) framework with texture zeros in which the CP violation necessary for leptogenesis comes from the low-energy CP phases. We perform a numerical study of the evolution of the lepton asymmetry by solving the associated Boltzmann equations. Furthermore, the effective neutrino mass relevant for the neutrinoless double beta decay amplitude is found to be constrained by the successful explanation of the neutrino experimental data and the baryon asymmetry of the universe.
We explore the possibility of a Euclidean wormhole to black hole phase transition in the context of JT gravity at finite charge density. The low-temperature phase of the system corresponds to the charged wormhole solution, which is dual to the two-site uncoupled complex SYK model at finite charge density. On increasing the temperature of the system, the wormhole phase undergoes a first-order phase transition to a system of two charged black holes. At the critical temperature ($T = T_0$), the free energy of the system undergoes a discontinuous change.
Recent images of the M87* and Sgr A* black holes by the EHT collaboration have opened a new portal to unlock various mysteries of the universe. Due to the extreme gravity around a black hole, there will be an enhanced distribution of dark matter, which will have a significant effect on the image of the black hole. One distinct feature of a black hole image is the black hole shadow, which can be used to extract information about this dark matter environment. There are various models of dark matter which propose an effective (but very weak) interaction of dark matter with light, leading to a fractional charge; such dark matter is called millicharged dark matter. I will present the effect of this millicharged dark matter environment on the shadow of a black hole. I will also show the proposed bounds on the millicharged dark matter parameter space, based on more precise future observations of the black hole shadow.
Ref.: Lalit S. Bhandari (IISER, Pune) and Arun M. Thalapillil (IISER, Pune), "Exploring millicharged dark matter components from the shadows", JCAP 03 (2022) 043.
The first observation of an exclusive $b\to s\gamma$ process was made by CLEO in 1993 in the $B \to K^{*}(892) \gamma$ decay. Since then, the decay has been one of the most extensively studied radiative penguin processes. The decay of the $B$ meson to the $K^{*}(892)\gamma$ final state is forbidden at tree level in the standard model (SM), which primarily occurs via a one-loop $b\to s\gamma$ diagram. Various extensions to the SM posit new particles that can contribute to the loop, altering the branching fraction and other observables from their SM predictions, making the decay an excellent probe for such models. We present a study based on data recorded by the Belle II experiment before its first long shutdown and discuss results obtained from the early data-taking period.
We propose a scenario where a high-scale seesaw origin of light neutrino masses and gravitational dark matter (DM) in the MeV-TeV ballpark, originating from primordial black hole (PBH) evaporation, can be simultaneously probed by future observations of a stochastic gravitational wave (GW) background with multiple tilts or spectral breaks. A high-scale breaking of an Abelian gauge symmetry ensures the dynamical origin of the seesaw scale while also leading to the formation of cosmic strings responsible for generating the stochastic GW background. The requirement of a correct DM relic in this ballpark necessitates the inclusion of a diluter, as PBHs typically lead to DM overproduction. This leads to a second early matter-dominated epoch after PBH evaporation due to the long-lived diluter. These two early matter-dominated epochs, crucially connected to the DM relic, lead to multiple spectral breaks in the otherwise scale-invariant GW spectrum formed by cosmic strings. We find interesting correlations between the DM mass and the turning-point frequencies of the GW spectrum, which are within reach of several near-future experiments like LISA, BBO, ET, CE, etc.
The universe is believed to have originated from a quantum state. However, defining measurable quantities for the quantum properties of the present universe has gained interest recently. In this work, we propose a quantum Poincare sphere as an observable quantity that can hint at the quantumness of primordial gravitational waves and large-scale magnetic fields. The Poincare sphere is defined in terms of quantum Stokes operators associated with the polarization of those fields, which can be measured directly. We have further studied the effects of an initial non-BD vacuum on the power spectrum and squeezing parameter of the primordial gravitational waves and magnetic field. We have found that the initial non-BD vacuum increases the value of the squeezing parameter at the end of inflation, which further enhances the possibility of measuring the quantumness of the fields under consideration. To support our results, we further explore a possible Bell violation test for a set of generalized pseudo-spin operators defined in the polarization space of those fields.
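In one common convention (quoted here for orientation; signs and orderings vary in the literature), the quantum Stokes operators for two polarization modes $\hat{a}_{x}$, $\hat{a}_{y}$ read
$$\hat{S}_{0} = \hat{a}_{x}^{\dagger}\hat{a}_{x} + \hat{a}_{y}^{\dagger}\hat{a}_{y},\quad \hat{S}_{1} = \hat{a}_{x}^{\dagger}\hat{a}_{x} - \hat{a}_{y}^{\dagger}\hat{a}_{y},\quad \hat{S}_{2} = \hat{a}_{x}^{\dagger}\hat{a}_{y} + \hat{a}_{y}^{\dagger}\hat{a}_{x},\quad \hat{S}_{3} = i\left(\hat{a}_{y}^{\dagger}\hat{a}_{x} - \hat{a}_{x}^{\dagger}\hat{a}_{y}\right),$$
and the Poincare sphere is built from the expectation values $(\langle\hat{S}_{1}\rangle, \langle\hat{S}_{2}\rangle, \langle\hat{S}_{3}\rangle)$.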
Non-standard neutrino interactions with a massive boson can produce the bosons in the core of core-collapse supernovae (SNe). After the emission of the bosons from the SN core, their subsequent decays into neutrinos can modify the SN neutrino flux. We show that future observations of neutrinos from the next galactic SN in Super-Kamiokande (SK) and Hyper-Kamiokande (HK) can probe flavor-universal non-standard neutrino couplings to a light boson, improving the previous limit from the SN 1987A neutrino burst by several orders of magnitude. We also discuss the sensitivity to the flavor-universal non-standard neutrino interactions in future observations of diffuse neutrinos from all the past SNe, known as the diffuse supernova neutrino background (DSNB). According to our analysis, observations of the DSNB in the HK, JUNO and DUNE experiments can probe such couplings by a factor of ∼ 2 beyond the SN 1987A constraint.
Extreme conditions of energy density and temperature are the prerequisites for producing a deconfined state of quarks and gluons in ultra-relativistic heavy-ion collisions. The initial-state geometry of the created system gives rise to spatial anisotropy, which later results in the momentum anisotropy of the final-state particles in non-central collisions. Anisotropic flow (especially elliptic flow) quantifies the momentum anisotropy of the produced system and is one of the signatures of QGP. A deformed nucleus like xenon (Xe) offers access to the effect of the initial geometry on final-state particle production. This analysis focuses on the impact of the hadron cascade time on particle production and elliptic flow using A Multi-Phase Transport (AMPT) model, incorporating nuclear deformation in the colliding nuclei for Xe+Xe collisions at $\sqrt {s_ {\rm NN}} $ = 5.44 TeV. We analyze the effect of the hadronic cascade time on identified particle production by studying $p_{\rm T}$-differential particle ratios. The impact of the hadronic cascade time on elliptic flow is studied by varying the cascade time between 5 and 25 fm/$c$. This study shows that final-state interactions among particles generate additional anisotropic flow with increasing hadron cascade time, especially at very low and high $p_{\rm T}$.
We give two proposals regarding the status of connectivity of entanglement wedges and the associated saturation of mutual information. The first proposal concerns the scenario before the Page time and shows that the early-to-late-time transition can be obtained from the status of the radiation entanglement wedge. In particular, we compute the time at which the mutual information between the regions where the Hawking radiation is collected vanishes before the formation of the island. We argue that this time is the Hartman-Maldacena time, at which the fine-grained entropy of radiation goes as $\sim \log(\beta)$, where $\beta$ is the inverse Hawking temperature of the black hole. The second proposal shows that just after the Page time, the vanishing of mutual information between the black hole subsystems leads to a time-independent expression for the fine-grained entropy of Hawking radiation, consistent with the correct Page curve. We also give corrections to this entropy and the Page time, which are logarithmic and inverse power law in form.
The Higgs discovery in 2012 started a precision era in particle physics. Most future colliders aim for precision measurements in the Higgs sector to probe any signal of new physics. These precision measurements demand more accurate theoretical predictions of Higgs production and decay rates. In this direction, we compute the contribution of the two-loop mixed QCD-Electroweak corrections at $\mathcal{O}(\alpha \alpha_s)$ to the golden decay channel of the Higgs, $H \to ZZ^* \to e^+ e^- \mu^+ \mu^-$. I will discuss a numerical approach to compute the non-trivial two-loop amplitude for the process and the relevant checks on our calculation. Finally, I will present numerical results for the partial decay width and some kinematic distributions.
The unitary highest-weight representations of integral levels of $\widehat{su}(2)$ current algebra conformal field theories (CFTs) satisfy all properties of a rational CFT (RCFT), but the story is not straightforward at admissible fractional levels. The admissible levels are labelled by two coprime natural numbers $(p\geq 2,u)$ such that the level is $m=p/u-2$.
We show that almost every fractional admissible level $\widehat{su}(2)_m$ current algebra exhibits one or more quasi-character(s). We find three special classes without quasi-characters: the sequence $(p=2,u=2N+1)$, where the admissibility condition is saturated; the positive half-odd-integer levels labelled by $(p=2N+3,u=2)$; and an isolated point $(p=3, u=4)$. We also relate the characters of these three classes with characters of RCFTs corresponding to integral levels of $\widehat{su}(2)$ and $\widehat{so}(5)$. The sequence with $u=2$ is quite intriguing and seems to defy most of the usual CFT descriptions (except possibly a log CFT). We also report two criteria to eliminate character vectors of the fractional admissible level $\widehat{su}(2)_m$ current algebra at $(p,u)$ both prime and at $(p, Np-1)$ with $N\in \mathbb{N}$, admitting quasi-characters, as character vectors of an RCFT. Based on 2208.09037.
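For orientation, the levels of the three quasi-character-free classes quoted above follow directly from the stated relation $m=p/u-2$; the short Python sketch below (with an illustrative cutoff $N\leq 3$) merely evaluates that formula and is not part of the classification argument.

from fractions import Fraction

def level(p, u):
    # admissible level m = p/u - 2 for coprime (p >= 2, u)
    return Fraction(p, u) - 2

# the three classes without quasi-characters (illustrative cutoff N <= 3)
seq1 = [level(2, 2 * N + 1) for N in range(1, 4)]   # (p=2, u=2N+1)
seq2 = [level(2 * N + 3, 2) for N in range(1, 4)]   # half-odd-integer levels, u=2
isolated = level(3, 4)                               # (p=3, u=4)
print(seq1, seq2, isolated)   # [-4/3, -8/5, -12/7] [1/2, 3/2, 5/2] -5/4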
We investigate the renormalization group scale dependence of the $H \rightarrow gg$ decay rate at order N$^4$LO in renormalization-group summed perturbation theory, which employs the summation of all renormalization-group accessible logarithms, including the leading and subsequent four sub-leading logarithmic contributions to the full perturbative series expansion. An attractive advantage of this approach is the closed-form analytic expressions, which represent the summation of all RG-accessible logarithms in the perturbative series known to a given order. The new renormalization-group summed expansion for the $H \rightarrow gg$ decay rate shows improved behaviour by exhibiting a reduced sensitivity to the renormalization-group scale. The largest uncertainty in the determination of the $H \rightarrow gg$ decay width in this work arises from the $1\%$ change in the strong coupling constant $\alpha_s (M_Z^2)$, and is in the range $(2.3-2.6) \%$. We also improve the $H \rightarrow gg$ decay rate by estimating the higher-order corrections through the asymptotic Padé approximant method.
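As a rough illustration of the Padé idea invoked above (a generic $[N/1]$ estimate, not the specific asymptotic Padé construction used in this work), the next coefficient of a series $1+c_1x+\dots+c_nx^n$ is predicted as $c_{n+1}\approx c_n^2/c_{n-1}$:

def pade_next_coefficient(c_prev, c_last):
    """[N/1]-Pade estimate of the next series coefficient.
    For f(x) = 1 + c_1 x + ... + c_n x^n, matching an [N-1/1] rational
    approximant reproduces the known terms and predicts c_{n+1} ~ c_n**2 / c_{n-1}.
    """
    return c_last**2 / c_prev

# Toy check on the geometric series 1/(1-x) = 1 + x + x^2 + ...
print(pade_next_coefficient(1.0, 1.0))   # 1.0, the exact next coefficient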
This work addresses the viability of \textit{Dirac phase leptogenesis} in a scenario where the light Majorana neutrinos acquire masses through the inverse seesaw (ISS) mechanism. We show that successful leptogenesis in the ISS, driven (only) by the Dirac CP phase, can be achieved with the involvement of an unorthodox form of the rotational matrix $R = e^{i{\bf A}} \,\,\,(e^{{\bf A}})$ in the Casas-Ibarra parametrisation. This particular structure of $R$ turns out to be crucial for explaining the observed baryon asymmetry of the Universe in a pure ISS scenario. We detail here the confined regions of the $R$-matrix parameter space essential for successful leptogenesis. The $R$-matrix parameter space assists in rescuing the ISS parameter space needed for successful leptogenesis, a finding that is otherwise unprecedented in the ISS setup. Making use of the resulting $R$-matrix parameter space, we calculate the branching ratio for the LFV decay $\mu \rightarrow e\gamma$, which serves as an indirect probe of the $R$-matrix parameter space. The branching ratio obtained from the leptogenesis parameter space surpasses, by several orders of magnitude, the existing bound on the branching ratio obtained in a scenario with a combined effect of linear and inverse seesaw. We also report that for the $R = e^{i{\bf A}}$ choice, leptogenesis demands the Dirac CP phase ($\delta$) to oscillate around $\pi/2$, while for the latter choice the constraint on $\delta$ is much more relaxed.
The HQET Lagrangian containing the non-perturbative parameters $\overline{\Lambda}$, $\lambda_1$, and $\lambda_2$ is studied for S-wave singly heavy baryons. The mass formulae are used to estimate these parameters for $n=1$ singly heavy charm and bottom baryons. The symmetry of these parameters is used to calculate the masses of radially excited heavy baryons. Also, the ratio of the $\frac{1}{m_Q^1}$-order mass terms to the $\frac{1}{m_Q^0}$-order mass terms is studied as it varies with the heavy quark mass. This study can also be extended to orbitally excited states.
A search for the production of a vector-like quark, T′, will be presented, based on proton-proton collision events at √s = 13 TeV. The data sample corresponds to an integrated luminosity of $138~\rm{fb^{-1}}$, collected by the CMS experiment during the 2016-2018 operations of the LHC. This search targets the electroweak production mechanism of T′ in the mass range 600 − 1200 GeV, in the narrow-width approximation. The T′ quark decays to a top quark and a Higgs boson (T′ → tH), with the Higgs boson subsequently decaying into a pair of photons (H → γγ). This is the first $\rm{T}^\prime$ search to exploit the decay of the Higgs boson in the diphoton channel. The excellent diphoton invariant mass resolution of 1 − 2% results in an increased sensitivity compared to previous searches up to $1~\mathrm{TeV}$. No significant excess over the standard model background is observed; accordingly, an upper limit on the T′ production cross section is set, and T′ masses up to 730 GeV are excluded at 95% confidence level.
Quantum chromodynamics (QCD) is the theory of the strong interaction between quarks, mediated by gluons. QCD predicts that quark-antiquark pairs or quark triplets can bind together, forming hadrons. In QCD, the gluons interact not only with the quarks but also among themselves, since they carry the color charge that characterizes the strong interaction. This fact allows Lattice QCD to predict the existence of particles composed of gluons only. The lightest glueball is expected in a mass range of \mbox{1550--1750 MeV/$c^{2}$}, with total angular momentum (J), parity (P) and charge conjugation (C) quantum numbers J$^{\mathrm{PC}}= 0^{++}$. Possible states with J$^{\mathrm{PC}}= 0^{++}$ and isospin I $=$ 0 are $f_\mathrm{0}(980)$, $f_\mathrm{0}(1370)$, $f_\mathrm{0}(1500)$ and $f_\mathrm{0}(1710)$. The $f_\mathrm{0}(1710)$ is a suitable glueball candidate as it falls in the mass range of the Lattice QCD predictions. The large statistics data sample collected by ALICE in pp collisions at the highest LHC centre-of-mass energy provides an opportunity to search for high-mass resonances whose characteristics and internal structure are still unknown.
We report on measurements of invariant mass distributions at midrapidity of higher-mass resonances in the K$^{0}_\mathrm{S}\mathrm{K}^{0}_\mathrm{S}$ and K$^{+}$K$^{-}$ decay channels, collected by the ALICE detector in pp collisions at $\sqrt{s}$ = 13 TeV.
We search for the decay $B^0\rightarrow \gamma\gamma$ using 711 fb$^{-1}$ of data collected at the $\Upsilon(4S)$ resonance by the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider located at the High Energy Accelerator Research Organization (KEK), Japan. It is a flavor-changing neutral current (FCNC) process, described at leading order by a penguin loop diagram. The decay is yet to be observed, with an expected branching ratio of $3\times10^{-8}$ in the Standard Model (SM). The decay is sensitive to New Physics (NP) beyond the SM, since a new particle such as a charged Higgs may enter the penguin loop and change its branching ratio. The best previous experimental upper limit on the branching ratio for this decay is $1.7\pm1.1(\mathrm{stat.})\pm0.2(\mathrm{syst.})\times10^{-7}$ at $90\%$ confidence level, set by the BaBar experiment using 426 fb$^{-1}$ of data. The Belle collaboration has also set an upper limit on its branching ratio of $6.2\times10^{-7}$ at $90\%$ confidence level using 104 fb$^{-1}$ of data. Using the final data set from the Belle experiment, we expect to reach the SM sensitivity for this decay.
Recent measurements of B to charm semileptonic decays show a difference between the sum of the exclusive branching fractions and the inclusive $b \to c\ell\nu$ decay rate (the so-called semileptonic (SL) gap), which affects the interpretation of the CKM element $|V_{cb}|$. Large contributions from not-yet-measured $B \to D^{(*)}\eta\ell\nu$ decays could explain this difference. We present a sensitivity study of the $B \to D^{*}\eta\pi$ decay using the data sample collected by the Belle II experiment. This measurement will provide valuable information for predicting its semileptonic counterpart $B \to D^{*}\eta\ell\nu$. If the $B \to D^{*}\eta\pi$ decay is found to be large, it could contribute significantly to hadronic B-tagging and consequently enhance the sensitivity of searches for rare B decays with missing energy.
In the standard model, mixing and CP violation in the charm sector are expected to be very small and thus they constitute a sensitive probe for potential new physics contributions.
The "wrong-sign" D^{0}->K^{+}π^{−}π^{0} decay is one of the most promising channels at Belle II, as this can be produced through two interfering processes: a direct doubly Cabibbo-suppressed decay of the D^{0} meson, or through D^{0}-D^{0}bar mixing followed by a Cabibbo-favored decay of the D^{0}bar meson.
In this work, we report the WS-to-RS ratio of the "wrong-sign" $D^0 \to K^+\pi^-\pi^0$ decay in simulation corresponding to an integrated luminosity of 1 ab$^{-1}$ at Belle II, the upgraded experimental facility at SuperKEKB, KEK, Japan. This study will be used to measure mixing and CP violation in the "wrong-sign" $D^0 \to K^+\pi^-\pi^0$ decay.
The seesaw mechanism is an important pillar of neutrino physics. Its various types make it an interesting phenomenon whose validity can be tested in several low-energy processes. One such low-energy, lepton-number-violating ($\Delta L = 2$) process is neutrinoless double beta decay ($0\nu\beta\beta$). If the $0\nu\beta\beta$ decay process is observed in the left-right symmetric model, the effective mass of the electron neutrino ( $m_{eff}^{<0\nu\beta\beta>}$ ) would be a function of $v_R$ (the vev of the right-handed Higgs triplet) and the Majorana phases ($\alpha$ and $\beta$). This $v_R$ is essentially the high energy scale (of Weinberg's dim = 5 operator), which allows one to explore new physics beyond the Standard Model. The left-right symmetric model in general includes seesaw type-I and type-II mass terms as a hybrid mass for the light neutrino, and the percentage of type-I and type-II contributions (termed the dominance) differs for different solutions. We study different dominance patterns ($2^n, n=3(gen)$) for the effective mass of $0\nu\beta\beta$ decay with the given experimental bounds (KamLAND-Zen \& GERDA).
Extensive investigations of Pb+Pb and Au+Au collisions at the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) have helped us produce and comprehend the properties of the quark-gluon plasma (QGP) in heavy-ion collisions. Recent investigations hint toward the possible formation of QGP droplets in small collision systems such as high-multiplicity pp collisions. Oxygen-oxygen (O-O) collisions are expected in the forthcoming Run 3 at the LHC. This will provide a significant and timely opportunity to investigate the effects seen in high-multiplicity pp and p-Pb collisions with a system that has a similarly small number of participating nucleons and final-state multiplicity, but with a larger geometrical transverse overlap, thereby enhancing jet-quenching effects, which depend on path length. In this work, we implement both harmonic-oscillator and Woods-Saxon type density profiles for the oxygen nucleus. An alpha-cluster tetrahedral structure is also incorporated for oxygen. We report results for global properties such as the Bjorken energy density, speed of sound, kinetic freeze-out parameters, particle ratios, and elliptic flow in O+O collisions at $\sqrt{s_{NN}}$ = 7 TeV from a multi-phase transport model (AMPT) for these nuclear charge density profiles. Such a study will help us understand the effects of nuclear density profiles on the said global observables and give us a fair baseline to confront experimental results in the future.
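For reference, the two density parameterisations mentioned above take the standard forms sketched below; the 16O parameter values shown (radius, diffuseness and oscillator parameters) are illustrative placeholders, not necessarily those used in the AMPT configuration.

import numpy as np

def woods_saxon(r, rho0=1.0, R=2.608, a=0.513):
    """Woods-Saxon profile rho(r) = rho0 / (1 + exp((r - R)/a)); r, R, a in fm,
    rho0 an overall normalisation."""
    return rho0 / (1.0 + np.exp((r - R) / a))

def harmonic_oscillator(r, rho0=1.0, a=1.833, alpha=1.544):
    """Three-parameter harmonic-oscillator profile
    rho(r) = rho0 * (1 + alpha*(r/a)**2) * exp(-(r/a)**2), commonly used for 16O."""
    x2 = (r / a) ** 2
    return rho0 * (1.0 + alpha * x2) * np.exp(-x2)

r = np.linspace(0.0, 6.0, 7)   # radial grid in fm
print(woods_saxon(r))
print(harmonic_oscillator(r))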
Decays of $B$ mesons to the $\pi\ell^{+}\ell^{-}$ final state, where $\ell$ denotes an electron or a muon, constitute a flavour-changing neutral current $b \to d$ transition that is forbidden at tree level in the standard model (SM) and proceeds through higher-order loop diagrams. Such decays are sensitive to beyond-the-SM physics thanks to the possibility of new particles contributing to the loops. Because the $b\to d$ transition crosses an extra generation, these decays are rarer than those mediated by the $b\to s$ transition, providing an excellent complement to the lepton-flavour universality tests conducted in $b\to s$ channels. We present a preliminary study of $B\to\pi\ell^{+}\ell^{-}$ decays performed with Belle II simulation, as well as a projection of the sensitivity of Belle II for the data sample expected in the future.
We study the formation of shocks in the wakes of moving cosmic strings. The plasma considered is a magnetized plasma with a high plasma beta. We find that multiple shocks may form in the cosmic string wake. A detailed numerical study is carried out to study the structure of the shocks. We use a 2D magnetohydrodynamic simulation to study the evolution of the magnetic fields in the shocks. The presence of multiple shocks will affect the observational signals of cosmic string wakes. Another important result is the possibility of magnetic field reconnection in the wakes of these cosmic strings: the magnetic field lines get rearranged to form closed loops. The possibility of magnetic reconnection indicates the conversion of magnetic energy into kinetic energy, which can have several observational consequences for the magnetized wakes. The reconnection also leads to a decrease of the magnetic field in the postshock region.
Moduli stabilization in type-IIB string theory is an intriguing problem in arriving at an effective description of the 4d cosmological inflationary paradigm, as reflected by recent experiments. At tree level with fluxes, the dilaton and complex structure moduli are stabilized [1] by supersymmetric constraints, leaving the Kahler moduli undetermined. In order to stabilize the latter, non-perturbative corrections (arising from gaugino condensation/instanton effects) [2] and various perturbative corrections ($α'$-corrections/one-loop corrections) are introduced through branes and multi-graviton scattering amplitudes in the internal manifold [3,4]. In this work, we derive an F-term potential for three Kahler moduli ($τ_1$, $τ_2$, $τ_3$) corresponding to three non-interacting and intersecting magnetized D7 branes in a $T^6$/$Z_N$ orbifold compactification, of which one is stabilized by the perturbative correction to the Kahler potential and the remaining two are stabilized by non-perturbative contributions to the superpotential [5]. The F-term potential is uplifted by a D-term potential arising from the D7 brane configuration. The effective potential (sum of F- and D-term potentials) is then converted to an inflaton potential by a canonical normalization procedure. The perturbatively stabilized Kahler modulus is identified as the inflaton field. The slow-roll potential is obtained by supersymmetrically fixing the two Kahler moduli appearing in the superpotential. In this way, the auxiliary field appearing in [6] is avoided without losing the slow-roll feature.
References:
1. Giddings S. B. et al., Phys. Rev. D, 66 (2002) 106006.
2. Kachru S. et al., Phys. Rev. D, 68 (2003) 046005.
3. Becker K. et al., JHEP, 06 (2002) 060.
4. Antoniadis I. et al., JHEP, 01 (2020) 149.
5. Basiouris V. and Leontaris G. K., Fortschr. Phys., 70 (2022) 2100181.
6. Let A. et al., EPL 139 (2022) 59002.
The spectral properties of strange quarkonium ($s\bar{s}$) are analysed using a quark model approach. The present study also incorporates spin-dependent interactions to obtain the hyperfine splitting of $s\bar{s}$. To compute the mass spectra of $s\bar{s}$, we solve the Dirac equation with a Martin plus constant confinement mean-field potential. The predicted masses of the nS states of strange quarkonium are in good agreement with the available experimental results as well as with theoretical predictions. Our predicted mass of the $\phi(1680)$ is 1681 MeV for the $2^3S_1$ state, in very close agreement with the experimental result of 1680 ± 20 MeV. Our computed vector decay constant for the $\phi(1020)$ meson is 251 MeV, which is in good accordance with the Lattice QCD result of 238 MeV, and its leptonic decay width is 1.098 keV. The leptonic decay width is calculated with and without the QCD correction; it is noted that the QCD correction factor has little effect on the leptonic decay width. Other properties of the $s\bar{s}$ bound state predicted here include the pseudoscalar decay constants and the digamma decay widths for S-wave $s\bar{s}$ mesons. The present study will thus be useful to identify new and exotic states in the energy range 1.5 GeV to 2.5 GeV.
Within the statistical model approach, we investigate the contributions of sea quarks and gluons to the structure of the pion. In this approach, hadrons are assumed to be an ensemble of quark-gluon Fock states. The principle of detailed balance is used to calculate the probability of each Fock state in the pion. Various subprocesses such as $g \Leftrightarrow gg$, $g \Leftrightarrow q\bar{q}$ and $q \Leftrightarrow qg$ are considered. We calculate the contribution of strange quark-antiquark pairs in the pion, generated by the $g \Leftrightarrow s\bar{s}$ process. With the help of these probabilities, we calculate the masses of the pions in the statistical approach. The strangeness suppression factor is also calculated.
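A schematic illustration of the detailed-balance principle invoked above (generic rates, not the actual quark-gluon branching rates of the model): for Fock states connected in a chain, the stationary probabilities obey $p_k R(k\to k+1)=p_{k+1} R(k+1\to k)$, so they are fixed by rate ratios plus normalisation.

import numpy as np

def stationary_from_detailed_balance(forward, backward):
    """Stationary probabilities of a chain of states 0,1,...,n obtained from
    detailed balance p_k * forward[k] = p_{k+1} * backward[k]."""
    p = [1.0]
    for f, b in zip(forward, backward):
        p.append(p[-1] * f / b)
    p = np.array(p)
    return p / p.sum()

# Toy example with made-up rates for adding/removing one parton pair
forward = [0.5, 0.3, 0.1]
backward = [1.0, 1.0, 1.0]
print(stationary_from_detailed_balance(forward, backward))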
At very high temperatures and nearly zero baryon densities, experiments concentrate on the study of the properties of the deconfined QCD matter. At moderate temperatures and high baryon densities, investigations focus on the search for structures in the QCD phase diagram, such as the critical end point, the predicted first-order phase transition between hadronic and partonic matter, and the chiral phase transition. Strangeness production has been suggested as a sensitive probe of the early dynamics of the deconfined matter created in heavy-ion collisions. The data taken during 2010 and 2011 in Beam Energy Scan (BES) phase-I indicated potential changes in the medium properties for $\sqrt{s_{NN}}$ $\leq$ 19.6 GeV. However, no definite conclusions could be drawn due to the limited precision of those data. Since 2018, STAR has conducted the BES phase-II program and accumulated high-statistics Au+Au collision data at various energies ($\sqrt{s_{NN}}$) below 27 GeV. The production of $\Lambda$ from Au+Au collisions at $\sqrt{s_{NN}}$ = 19.6 GeV will be presented in this talk. The $p_{T}$ spectra, nuclear modification factors, and particle ratios will also be reported.
The photon energy bias is used to derive corrections to the reconstructed photon energy and to improve data-simulation agreement in analyses with photons in the final state.
In this study, we reconstruct clean samples of $\pi^0 \to \gamma\gamma$ decays from the $D^{*+} \to D^0(\to K^-\pi^+\pi^0)\pi^+$ decay chain in both simulation and data collected by Belle II, the upgraded experimental facility at SuperKEKB, KEK, Japan. We present a comparison of the mean $\pi^0$ mass and the $\pi^0$-mass resolution in 207 fb$^{-1}$ of recorded data and in simulation, in different bins of photon energy.
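The $\pi^0$ mass entering these comparisons is the diphoton invariant mass, $m_{\gamma\gamma}=\sqrt{2E_1E_2(1-\cos\theta_{12})}$ for massless photons; the sketch below evaluates it for placeholder photon four-momenta, not Belle II data.

import numpy as np

def diphoton_mass(p1, p2):
    """Invariant mass of two (assumed massless) photons, four-momenta (E, px, py, pz) in GeV."""
    e, px, py, pz = (p1[i] + p2[i] for i in range(4))
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Placeholder photons with E1 = E2 = 0.1 GeV and opening angle ~85 degrees,
# chosen so that m_gg comes out close to the pi0 mass of about 0.135 GeV
theta = 1.4817
g1 = (0.1, 0.1, 0.0, 0.0)
g2 = (0.1, 0.1 * np.cos(theta), 0.1 * np.sin(theta), 0.0)
print(f"m_gg = {diphoton_mass(g1, g2):.3f} GeV")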
In recent years, several measurements of B decays with flavor-changing neutral currents, i.e., $b \rightarrow d$ transitions, hint at deviations from the Standard Model (SM) predictions. The $B^{0} \rightarrow \gamma\gamma$ decay is forbidden at tree level in the SM and can only proceed via suppressed loop-level diagrams. Such decays are an ideal probe to search for phenomena beyond the SM, since contributions from new particles can affect the decay at the same level as SM particles. The Belle II experiment is a substantial upgrade of the Belle detector and operates at the SuperKEKB energy-asymmetric $e^{+}e^{-}$ collider. This is the first combined Belle+Belle II measurement, using the $711$ fb$^{-1}$ data set from Belle and the pre-LS1 data from Belle II. A combined measurement will achieve significantly better precision than Belle II alone. This decay mode is yet to be observed, with an expected branching fraction of $3.1 \times 10^{-8}$ in the SM. The best previous experimental upper limit on the branching fraction of this mode is $3.3 \times 10^{-7}$ at 90% confidence level (CL), set by BaBar using $426$ fb$^{-1}$ of data. We expect to make the first observation of this decay, given the SM expectation, or to set the most stringent limit on its branching fraction so far.
Recent experimental measurements in the heavy flavour sector seem to indicate the presence of physics beyond the standard model, though not conclusively. Although the measured branching ratio of $B^0_s \to \mu^+ \mu^-$ at the LHC experiments seems to be compatible with the standard model expectation within errors, it is imperative to search for other processes as probes of new physics. We are studying the lepton flavour violating decays $B^0_s \to e^\pm \mu^\mp$ and $B^0 \to e^\pm \mu^\mp$ in proton-proton collision data collected with the CMS experiment at a centre-of-mass energy of 13 TeV. Such decays are not allowed in the standard model, but models with a heavy neutral gauge boson (Z') or leptoquarks predict such transitions. Though the BESIII and LHCb experiments have performed searches for these processes, they have not yet been studied by the CMS experiment, which collected about 10 billion B-decay events during Run 2 operation of the LHC. Preliminary results of our study will be presented in this poster.
In this work we study traversable wormholes in $f(R)$ gravity with the function $f(R)=R+\alpha R^n$, where $\alpha$ and $n$ are arbitrary constants. The $f(R)$ gravity is a well-known alternative theory of gravity in which the Ricci scalar $R$ in the Einstein-Hilbert gravitational Lagrangian is replaced by a general function of $R$. We choose a shape function of the form $b(r)=r\exp(1-\frac{r}{r_0})$, where $r_0$ is the radius of the wormhole throat. We consider a spherically symmetric and static wormhole metric and derive the field equations. We also check the necessary energy conditions, namely the null, weak, strong and dominant energy conditions, near the throat region with throat radius $r_0$. For this we choose several different redshift functions, $\phi = \mathrm{constant}$, $\beta \ln(\frac{r}{r_0})$, $\frac{1}{r}$, $\exp(-\frac{r_0}{r})$ and $\exp(-\frac{r_0}{r}-\frac{r_0^2}{r^2})$, where $\beta$ is an arbitrary constant. Finally, we also estimate the amount of exotic matter near the wormhole throat.
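As a quick consistency check of the chosen shape function (a sketch of the standard throat conditions $b(r_0)=r_0$ and $b'(r_0)<1$ only, not of the full $f(R)$ field-equation analysis):

import sympy as sp

r, r0 = sp.symbols('r r0', positive=True)
b = r * sp.exp(1 - r / r0)                       # shape function b(r) = r exp(1 - r/r0)

throat = sp.simplify(b.subs(r, r0))              # should equal r0
flare = sp.simplify(sp.diff(b, r).subs(r, r0))   # flaring-out requires b'(r0) < 1

print(throat)   # r0
print(flare)    # 0, which is < 1, so the flaring-out condition holds at the throat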
We develop an $A_4 \times Z_4 \times Z_2$ symmetric model of neutrino masses and mixings within the Minimal Extended Seesaw mechanism, where three right-handed neutrinos $N_1$, $N_2$ and $N_3$ and a keV-scale singlet sterile neutrino $S$ are added to the Standard Model. This model breaks the $\mu-\tau$ symmetry of the neutrino mass matrix and successfully explains leptonic mixing with non-zero $\theta_{13}$. We study the phenomenological consequences of the keV-scale sterile neutrino as a dark matter candidate by calculating its relic abundance and decay rate. The effects on the effective neutrino mass in neutrinoless double beta decay, as well as baryogenesis via resonant leptogenesis, are also studied, and significant results are observed within the experimental bounds.
The role of the tachyonic potential in the accelerated expansion of the universe is well discussed. Using power-law and polynomial forms for the expansion factor $a$, the corresponding tachyonic potential and equation of state are studied in the present article. Including higher-order terms in $a$, the results are found to be in good approximation to the standard ones.
In this concept-based theory, "Mass is Equivalent to the Length of an Imaginary Straight Line". With this concept, all Fermions and Bosons are like Quanta String particles with a definite direction. For example, Unidirectional Imaginary Straight Lines with fixed length are Massive Spin ½ Fermions, while Unidirectional Imaginary Quanta Curved Lines are Massless Spin 1 Bosons. All Fundamental particles are arranged in a 3-Fold way (Bottom Fold, Middle Lower and Upper Folds, and Top Fold) and projected into a 4th Imaginary Dimension in order of decreasing Mass, from TeV to approximately 0 eV respectively. This Theory is Beyond the Standard Model because it predicts New Fundamental Particles, viz. Dark Matter (Spin=0 Massive Boson) along with Gravitons (Spin=2 Massless Bosons), 4th Generation Neutrinos, Vertical Massless Boson Particles and a Tri-Axis Massive Boson (Spin=0) particle. Discovery of these particles would act as solid proof of this theory. With this 3F4D representation of the Universe at the atomic and sub-atomic level, it addresses many current problems of the SM of particle physics, like Matter-Antimatter Asymmetry, the origin of the mass of hadrons such as protons, the origin of mass and the L.H. nature of neutrinos, the wave-particle duality of particles, etc., giving insight into fundamental particles. By proposing that Dark Matter is not a Quanta Particle but rather a Single Entity that expands throughout the Universe in the form of a "Web of Spider", it holds that space-time is not empty but is made of Continuous lines of Dark Matter. Correlation of its Continuity with Time says that "Time is neither an Illusion nor a 4th Dimension, but an Intrinsic Property of the Continuous Single Entity, Dark Matter". The Gradient of the Mass of Dark Matter represents the Curvature of Space-Time in terms of the increase of its mass-density with respect to the Mass-Density of a Flat Universe.
In geometric representation, the vacuum wavefunction is comprised of one scalar, three vectors, three bivectors, and one trivector (1,3,3,1). The vacuum wavefunction is the same at all scales, from the Planck length to the boundary of the observable universe. Various combinations of the four fundamental constants (electric charge, Planck's angular momentum quantum, speed of light, and magnetic permeability of space) that define the dimensionless electromagnetic coupling constant alpha permit assigning geometrically and topologically appropriate electric and magnetic flux quanta to each of the eight wavefunction components. Physics at different energies arises from the scale to which the flux quanta are confined, via the scale-dependent electromagnetic field energy [1]. In such a model the Higgs mode exists not only at the scale of LHC physics, but also at the Planck particle scale. The Higgs mode is inside the Planck-length event horizon, such that in the earliest instants of the big bang it drives the event horizon to the scale of the universe boundary.
[1] Naturalness begets Naturalness: An Emergent Definition
Photonuclear reactions using deuterium targets find application in nuclear physics, laser physics and astrophysics. Studies related to deuteron photodisintegration using polarized photons have been a focus of interest since 1998 [1], which influenced many experimental studies carried out using 100 percent linearly polarized photons at the Duke Free Electron Laser Laboratory. Theoretical studies of deuteron photodisintegration were carried out [2,3], and in these studies the possibility of 3 different $E1_v$ amplitudes leading to the final n-p state in the continuum was discussed. As there is experimental evidence for the splitting of the 3 $E1_v$ p-wave amplitudes at slightly higher energies, we hope that the same may be true at near-threshold energies also. As the spin-dependent variables are more sensitive to theoretical inputs, and the data obtained on polarization observables are more sensitive to theoretical calculations, there is considerable interest in studies related to this reaction. On the other hand, photon polarization in n-p fusion was discussed in [4], wherein a polarized target-beam test was suggested to check for the presence of smaller isoscalar amplitudes. Recently, neutron polarization in $d(\vec \gamma ,\vec n)p$ was studied at near-threshold energies [5].
In this regard, the purpose of the present contribution is to extend this study to discuss proton polarization in $d(\vec \gamma, \vec p)n$ reaction using model independent irreducible tensor formalism at near threshold energies of interest to astrophysics.
References:
[1] S. Burles and D. Tytler, The Astrophysical Journal, 499 (1998) 699.
[2] G. Ramachandran and S. P. Shilpashree, Physical Review C, 74 (2006) 052801(R).
[3] S. P. Shilpashree, Physica Scripta, 97 (2022) 075003.
[4] G. Ramachandran, P. N. Deepak, and S. Prasanna Kumar, Journal of Physics G: Nuclear and Particle Physics, 29 (2003) L45.
[5] S. P. Shilpashree and Venkataramana Shastri, JETP Letters (Pis'ma v ZhETF), 116 (2022) 273.
Two-particle correlations as a function of the relative momenta of identified hadrons involving $\mathrm{K^{0}_{S}}$ and $\Lambda/\bar{\Lambda}$ are measured in PbPb collisions at $\sqrt{s_{_{\mathrm{NN}}}} =$ 5.02 TeV with the data samples collected by the CMS experiment at the LHC. Such correlations are sensitive to quantum statistics and to possible final-state interactions between the particles. The shape of the correlation function is observed to vary significantly for different particle pairs, revealing the effect of the strong final-state interaction in each case. The source radii are extracted from $\mathrm{K^{0}_{S}K^{0}_{S}}$ correlations in different centrality regions and are found to decrease from central to peripheral collisions. The strong-interaction scattering parameters are extracted from $\mathrm{K^{0}_{S}K^{0}_{S}}$, $\Lambda\mathrm{K^{0}_{S}}\oplus\bar{\Lambda}\mathrm{K^{0}_{S}}$, $\Lambda\Lambda\oplus\bar{\Lambda}\bar{\Lambda}$ and $\Lambda\bar{\Lambda}$ correlations using the Lednicky-Lyuboshits model and compared with other experimental and theoretical results. The scattering parameters indicate that the $\Lambda\Lambda\oplus\bar{\Lambda}\bar{\Lambda}$ interaction is attractive and that the $\Lambda\mathrm{K^{0}_{S}}\oplus\bar{\Lambda}\mathrm{K^{0}_{S}}$ interaction is repulsive.
We propose a type II seesaw model for light Dirac neutrinos to provide an explanation for the recently reported anomaly in W boson mass by the CDF collaboration with $7\sigma$ statistical significance. In the minimal model, the required enhancement in W boson mass is obtained at tree level due to the vacuum expectation value of a real scalar triplet, which also plays a role in generating light Dirac neutrino mass. Depending upon the couplings and masses of newly introduced particles, we can have thermally or non-thermally generated relativistic degrees of freedom $\Delta N_{\rm eff}$ in the form of right handed neutrinos which can be observed at future cosmology experiments. Extending the model to a radiative Dirac seesaw scenario can also accommodate dark matter and lepton anomalous magnetic moment.
We introduce a new relativistic quantum analogue of the classical Otto engine in the presence of a perfectly reflecting boundary. A single qubit acts as the working substance, interacting with a massless quantum scalar field, with the boundary obeying the Dirichlet condition. The quantum vacuum serves as a thermal bath through the Unruh effect. We observe that the response function of the qubit is significantly modified by the presence of the reflecting boundary. From the structure of the correlation function, we find that three different cases emerge, namely the intermediate-boundary regime, the near-boundary regime, and the far-boundary regime. As expected, the correlation in the far-boundary regime approaches that of the Unruh quantum Otto engine (UQOE) when the reflecting boundary goes to infinity. The effect of the reflecting boundary is manifested through the reduction of the critical excitation probability of the qubit and of the work output of the engine. In spite of the reduced work output, the efficiency of the engine remains unaltered even in the presence of the boundary.
Neutrino oscillation studies have entered the precision era, and hence a highly precise measurement of the $\theta_{23}$ mixing angle takes a prime role in addressing long-standing flavor problems by ruling out different theoretical mass models. Two highly promising future long-baseline experiments, DUNE and T2HK, can pin down the atmospheric neutrino oscillation parameters with significantly high precision. The latest global fit analyses of world oscillation data under the 3$\nu$ paradigm show a 1.6$\sigma$ indication for the lower $\theta_{23}$ octant and favor normal mass ordering (NMO) with a 2.5$\sigma$ hint. In this work, we find that the individual performance of DUNE [5 yrs $\nu$ + 5 yrs $\bar{\nu}$] and T2HK [2.5 yrs $\nu$ + 7.5 yrs $\bar{\nu}$] can significantly improve the relative 1$\sigma$ precision on $\sin^2\theta_{23}$ ($\Delta m^2_{31}$) of the current global oscillation data. Further, the combined performance of DUNE and T2HK enhances the present fit by a factor of 7.64 (5.45). We show that DUNE (T2HK) can resolve the octant of $\theta_{23}$ at 5 (4.42)$\sigma$ confidence level with the present global neutrino oscillation data. We also show the possible correlations and degeneracies among $\sin^2\theta_{23}$ and $\Delta m^2_{31}$ in the neutrino, antineutrino, and combined modes. It is remarkable that the combined antineutrino data of DUNE and T2HK can exclude the wrong-octant solution at 3$\sigma$ C.L. but cannot attain the same in the neutrino mode.
Black holes are among the most intriguing and puzzling objects in the universe. Computing the volume of a black hole is not as straightforward as defining the area of the enclosing horizon. In this article we define the volume using the technique developed by Christodoulou and Rovelli for Schwarzschild black holes and extend it to the case of a rotating black hole in 2+1 dimensions. We show that the maximum contribution to the volume of the hyper-surface comes from what we call the steady-state radius. We then find that this volume grows linearly and indefinitely with the advanced time. We then introduce a scalar field on this maximal hyper-surface and compute its entropy. We find that in the near-extremal limit, the entropy of this scalar field is proportional to the horizon entropy of the black hole.
In this work, we propose a model extending the Standard Model (SM) with two right-handed fermion triplet superfields ($\Sigma_{R_i}$), in the presence of the modular symmetry $\Gamma_3^\prime$ $\sim$ $\rm A_4^\prime$, i.e., the double cover of the $\rm A_4$ modular symmetry, also known as $\rm T^\prime$ modular symmetry. Seesaw models provide a natural explanation of neutrino mass generation, wherein the neutrino masses are naturally lowered due to the exchange of heavy right-handed particles at tree level. We therefore utilize the type-III seesaw mechanism, which involves right-handed fermionic triplets, to obtain the correct masses for the active neutrinos as realized from neutrino oscillation data. The minimal extension with the type-III seesaw framework is well suited to explain neutrino phenomenology accurately. The $\rm T^\prime$ symmetry has been used to analyze the different possible neutrino mass matrices, which are expressed in terms of the modulus $\tau$ introduced to break the modular symmetry. Our predictions include observables like the neutrino mass-squared differences, the leptonic mixing angles, the Dirac phase and the absolute mass scale of neutrinos, obtained by suitably fixing the model parameters. Alongside, we are also able to explain the recent measurement of the $W$ boson mass published by the CDF collaboration.
We examine a singlet-doublet fermion dark matter where the incorporation of a small Majorana mass term for the singlet fermion helps evade the severe direct detection constraint by making the dark matter pseudo-Dirac. Interestingly, the same mass term provides a platform to address the non-zero neutrino mass in the presence of singlet scalars. In addition, we discuss the realization of the freeze-in production of dark matter in the same setup. Here, a light scalar mediator present in our framework can mediate a large self-interaction for the fermion dark matter, which is capable of solving the small-scale structure anomalies of the Universe.
There are compelling theoretical arguments in favour of the existence of various baryon-rich exotic QCD phases in the core of a pulsar. However, proving such a hypothesis remains challenging due to the lack of a probe of the core. We suggest a technique for probing these phases by studying the effects of phase-transition-induced density inhomogeneities on pulse profile modulation. We initiate our study by taking general statistical density fluctuations. Such density fluctuations cause the initial moment of inertia tensor of the presumably oblate pulsar (with pulsar deformation parameter $\eta$) to receive random additional contributions to each component. These contributions are assumed to be Gaussian distributed, with a width characterized by the strength of density fluctuations $\epsilon$. Using sample values of $\epsilon$ and $\eta$, we solve Euler's equations for the rotational dynamics of the pulsar to observe the effects of wobbling through the modifications of pulse profiles. Our results show a specific pattern in the perturbed pulses, observable as modulations of pulses over long time periods. Once the density fluctuations fade away, leading to a uniform phase in the interior of the pulsar, the off-diagonal components of the moment of inertia tensor also vanish, eventually causing the wobbling of the pulsar to die away. This feature allows one to distinguish these transient pulse modulations from the effects of any initial wobbling. Since the decay of these modulations in time is directly related to the relaxation of density fluctuations in the pulsar, it gives valuable information about the nature of the phase transition occurring inside the pulsar.
As the densities in the interior of neutron stars exceed those of terrestrial nuclear experiments, they provide scope for studying the nature of dense matter under extreme conditions. The composition of the inner core of neutron stars is highly uncertain, and it is speculated that exotic forms of matter such as hyperons may appear there. Gravitational waves emitted by unstable oscillation modes in neutron stars contain information about their interior composition and therefore allow us to probe the interior directly. In this work, we study the influence of the appearance of hyperons on f-mode oscillations and therefore on the emission of gravitational waves. We also speculate whether a future detection of f-modes could provide a possibility of probing the presence of hyperons in the neutron star core. We further show the importance of General Relativity in calculating the f-mode characteristics and also investigate their possible correlations with nuclear/hyper-nuclear empirical parameters as well as NS observable properties.
We consider an extension of the Littlest Seesaw model with an additional scalar and a fermionic particle in the freeze-in scenario. Primordial black holes in a certain mass range can act as an alternative production mechanism for dark matter particles as they evaporate via Hawking radiation. Furthermore, the presence of primordial black holes with substantial energy density gives rise to non-standard cosmology, which also modifies the freeze-in production. We investigate this freeze-in scenario in the presence of primordial black holes for a few interesting cases and constrain the parameter space accordingly. We find that if the universe is primordial black hole dominated at any point before Big Bang Nucleosynthesis, the final relic is constituted mostly by the evaporation component at high dark matter mass and by the freeze-in component at low dark matter mass.
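For orientation on the evaporation mechanism mentioned above, the Hawking temperature scales inversely with the black hole mass, $T_H\approx 1.06\,{\rm GeV}\times(10^{13}\,{\rm g}/M)$; the sketch below evaluates this textbook scaling for a few illustrative PBH masses (the full relic computation involves greybody factors and is not reproduced here).

def hawking_temperature_GeV(mass_g):
    """Hawking temperature T_H = hbar c^3 / (8 pi G M k_B), via the standard
    scaling T_H ~ 1.06 GeV * (1e13 g / M)."""
    return 1.06 * 1e13 / mass_g

for m in (1e9, 1e13, 1e17, 1e23):   # black hole masses in grams
    print(f"M = {m:.0e} g  ->  T_H ~ {hawking_temperature_GeV(m):.2e} GeV")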
We explore the role of dissipative effects during warm inflation leading to features in the primordial curvature power spectrum at small scales. In our study, we consider different models of warm inflation and discuss the formation of primordial black holes (PBHs) from them. In particular, we focus on generating PBHs with masses in the range ($10^{17} -10^{23}$) g, which can explain the full dark matter abundance. Further, we calculate the scalar-induced gravitational wave (SIGW) spectrum associated with the PBH formation and explore its signatures in gravitational wave detectors. This is crucial for understanding the physics of inflation, dark matter phenomenology and gravitational wave observations.
We propose a Dirac neutrino portal dark matter scenario by minimally extending the particle content of the Standard Model (SM) with three right-handed neutrinos ($\nu_R$), a Dirac fermion dark matter candidate ($\psi$) and a complex scalar ($\phi$), all of which are singlets under the SM gauge group. An additional $\mathbb{Z}_4$ symmetry has been introduced for the stability of the dark matter candidate $\psi$, while also ensuring the Dirac nature of light neutrinos. Both the right-handed neutrinos and the dark matter thermalise with the SM plasma due to a new Yukawa interaction involving $\nu_R$, $\psi$ and $\phi$, while the latter maintains thermal contact via the Higgs portal interaction. The decoupling of $\nu_R$ occurs when $\phi$ loses its kinetic equilibrium with the SM plasma, and thereafter all three $\mathbb{Z}_4$-charged particles form an equilibrium among themselves with a temperature $T_{\nu_R}$. The dark matter candidate $\psi$ finally freezes out within the dark sector and preserves its relic abundance. We find that in the present scenario, part of the low dark matter mass range ($M_{\psi} \leq 10$ GeV) is already excluded by the Planck 2018 data, since it keeps the $\nu_R$s in the thermal bath below a temperature of 600 MeV and thereby produces an excess contribution to N$_{\rm eff}$. Next-generation experiments like CMB-S4, SPT-3G etc. will have the required sensitivities to probe the entire model parameter space of this minimal scenario, especially the low mass range of $\psi$, where direct detection experiments are not yet capable of detection.
We present an extension of the SM involving three triplet fermions, one triplet scalar and one singlet fermion, which can explain both neutrino masses and dark matter. One triplet of fermions and the singlet are odd under a $Z_2$ symmetry; thus the model features two possible dark matter candidates. The two remaining $Z_2$-even triplet fermions can reproduce the neutrino masses and oscillation parameters consistent with observations. We consider the case where the singlet has feeble couplings while the triplet is weakly interacting, and investigate the different possibilities for reproducing the observed dark matter relic density. This includes production of the triplet WIMP from freeze-out and from decay of the singlet, as well as freeze-in production of the singlet from decay of particles that belong to the thermal bath or are thermally decoupled. While freeze-in production is usually dominated by decay processes, we also show cases where the annihilation of bath particles gives a substantial contribution to the final relic density. This occurs when the new scalars are below the TeV scale, thus within reach of the LHC. The next-to-lightest odd particle can be long-lived and can alter the successful BBN predictions for the abundance of light elements; these constraints are relevant in both scenarios, whether the singlet or the triplet is the long-lived particle. In the case where the triplet is the DM, the model is subject to constraints from ongoing direct, indirect and collider experiments. When the singlet is the DM, the triplet, which is then the next-to-lightest odd particle, can be long-lived and can be probed at the proposed MATHUSLA detector. Finally, we also address the detection prospects of triplet fermions and scalars at the LHC.
The recent Fermilab muon $g-2$ result, and the corresponding result for the electron from the fine-structure constant measurement through ${}^{133}{\rm Cs}$ matter-wave interferometry, are probed in relation to the MSSM with non-holomorphic (NH) trilinear soft SUSY-breaking terms, referred to as the NHSSM. Supersymmetric contributions to the charged lepton $(g-2)_l$ can be enhanced via the new trilinear terms involving a wrong-Higgs coupling with left- and right-handed scalars. A bino-slepton loop is used to enhance the SUSY contribution to $g-2$, with the wino mass kept at 1.5 TeV and the left and right slepton mass parameters for the first two generations taken to be equal. Unlike many MSSM-based analyses, the model does not require a light electroweakino, light sleptons, unequal left and right slepton masses, or a very large higgsino mass parameter. In our analysis, a large Yukawa threshold correction (an outcome of the NHSSM) and opposite signs of the trilinear NH coefficients associated with the $\mu$ and $e$ fields are used to satisfy the dual limits of $\Delta {a_\mu}$ and $\Delta {a_e}$ (where the latter comes with a negative sign), along with the limits from the Higgs mass, B-physics, collider data and direct detection of dark matter (DM), while focusing on a higgsino DM which is underabundant in nature. Varying Yukawa threshold corrections provide the necessary flavor-dependent enhancement of $\Delta {a_e}/m_e^2$ compared to that of $\Delta {a_\mu}/m_\mu^2$. A larger Yukawa threshold correction through $A^\prime_e$ for $y_e$ also removes the direct proportionality of $a_e$ to $\tan\beta$; with a finite intercept, $a_e$ becomes only an increasing function of $\tan\beta$. We identify the available parameter space in the two cases while also satisfying the ATLAS data from slepton pair production searches in the plane of the slepton mass parameter and the mass of the lightest neutralino.
In this work, we explore a highly motivated beyond-the-Standard-Model scenario, namely R-parity violating supersymmetry, in the context of light neutrino masses and mixing. R-parity is broken only by the bilinear lepton-number-violating term. We fit the two non-zero neutrino mass-squared differences and the three mixing angles obtained from the global chi-square analysis of neutrino oscillation data. We also take into account the updated data on the Higgs mass, low-energy flavor-violating constraints such as rare b-hadron decays, and Higgs coupling strengths with various Standard Model particles from LHC Run-II data. We use a Markov Chain Monte Carlo (MCMC) to scan the parameter space of our model. After a detailed scan of the parameter space, we find that this model can explain the data for the Normal Hierarchy scenario. We also present $1\sigma$ and $2\sigma$ contour plots of different correlated parameters, such as the bilinear R-parity violating coupling parameters ($\varepsilon_i$), the corresponding soft coupling parameters (B$_i$), $\mu$, $\tan\beta$, etc.
Doubly-charged Higgs bosons within the mass range 84–200 GeV decaying into a pair of W-bosons have been overlooked by the LHC searches. Such Higgses, when produced in a highly Lorentz-boosted regime, tend to manifest themselves as a single doubly-charged fat jet. We perform a multivariate analysis with the jet substructure variables as inputs to the boosted decision tree classifier to discern such jets from the SM jets. Considering their Drell-Yan pair production, we present a novel search strategy for them in the final state with two same-sign leptons and a doubly-charged fat jet. We find that discovery with 5σ significance is achievable with the already collected Run 2 LHC data.
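A minimal sketch of the multivariate step described above, using gradient-boosted decision trees from scikit-learn on hypothetical substructure inputs; the feature names, toy distributions and hyperparameters are placeholders, not the analysis configuration.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical substructure features: soft-drop mass, N-subjettiness ratio tau21, constituent count
signal = np.column_stack([rng.normal(120, 15, n), rng.normal(0.35, 0.10, n), rng.normal(30, 8, n)])
background = np.column_stack([rng.normal(60, 30, n), rng.normal(0.65, 0.15, n), rng.normal(20, 10, n)])
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)
print(f"test accuracy = {bdt.score(X_test, y_test):.3f}")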
The CMS and ATLAS experiments have made clear observations of a Standard Model (SM) Higgs boson candidate with a mass of 125 GeV. While its discovery has reaffirmed the SM, detailed measurements of its properties and searches for additional, similar undiscovered particles are promising methods of discovering Beyond-SM (BSM) physics. A search for the SM Higgs boson decaying to a Z boson and a photon has been conducted, but it lacks the sensitivity to confirm or reject the SM Higgs hypothesis. The SM backgrounds arise from the production of a Z boson with a radiated photon or a mis-reconstructed jet mimicking a photon, which greatly outnumber the small expected signal from H → Zγ or X → Zγ, where "X" is a new scalar particle. However, other theoretical models exist that predict the existence of a composite Higgs boson, in which the coupling of heavier scalar bosons to the Z-plus-photon mode is more favourable. This work involves the search for a new spin-0 particle (X) using the Zγ resonance in leptonic final states with the full Run-2 data set of 137 fb$^{-1}$ at √s = 13 TeV. The search looks for a mass bump over the smoothly falling SM continuum background, which provides a clear experimental signature. The process considered is gluon-gluon → X → Zγ, where the Z further decays to e+e-/μ+μ−. In this talk the preliminary results for the leptonic channels will be presented.
Various extensions of the standard model (SM) invoke the presence of more than one Higgs doublet in the SM Lagrangian. In particular, the two-Higgs-doublet model assumes the presence of an additional $SU(2)_{L}$ complex doublet. We look for an $H^{\pm}$ in the top-quark pair production process, where one top quark decays to a charged Higgs boson and a bottom quark, and the other to a $W$ boson and a bottom quark. The charged Higgs boson further decays into a charm and a strange quark, and the $W$ boson decays to a charged lepton and a neutrino. The search for the charged Higgs boson has been carried out for an $H^{\pm}$ mass range from 80 to 160 GeV. We will discuss the final results published using the 2016 data sample and the new developments in the analysis method for the full Run-2 data recorded by the CMS experiment at the LHC.
The inert Higgs-doublet model provides a simple framework to accommodate a viable Higgs portal scalar dark matter candidate, together with other heavier scalars of mass 100 GeV or more. We study the effect of next-to-leading order (NLO) QCD corrections in this scenario in the context of the Large Hadron Collider.
${\cal{O}}(\alpha_s)$ corrections to the gluon-gluon-Higgs effective coupling have been taken into account in this study wherever appropriate. We find that such corrections have a significant impact on various kinematic distributions and reduce scale uncertainties substantially. Fixed-order NLO results are matched to the {\sc Pythia8} parton shower (PS), and the di-fatjet signal with missing transverse momentum is analyzed, as this channel has the ability to explore the entire parameter space during the next phase of the LHC run. A closer look at the NLO+PS computation indicates a sizable NLO effect together with a subdued contribution from associated production of the heavy scalar compared to pair production, thereby leading to a refined strategy for the multivariate analysis of this signal.
The Forward Calorimeter System (FCS) and the Forward Silicon Tracker (FST) are the most recent upgrades of the STAR detector at RHIC, BNL. This upgrade in the forward rapidity region 2.5 < η < 4 is mainly driven by the goal of exploring QCD physics at low x, such as that related to the nucleon spin structure. The FCS consists of the refurbished PHENIX Shashlyk lead-scintillator (Pb/Sc) Electromagnetic Calorimeter (ECal) followed by an iron-and-scintillator (Fe/Sc) sampling Hadronic Calorimeter (HCal), with compact silicon photomultipliers (SiPMs) as readouts. The construction of the FCS was completed towards the end of 2020, and it started taking data during the 2022 run period. This talk will focus on the construction and calibration of the FCS, with a focus on the radiation damage of the front-end electronics as well as the gain correction factor for each ECal tower for reconstructing neutral pions using the √s = 510 GeV pp data from Run 23.
The proposed new Electron-Ion Collider (EIC) poses a technical and intellectual challenge for the detector design to accommodate the long-term, diverse physics goals envisaged by the program; one requires a 4π detector system capable of identifying and reconstructing the energy and momentum of final-state particles with high precision. The EPIC collaboration has formed to design, build, and take data with Detector 1.
The EIC requires identifying particles of different masses over an extensive momentum range. This imposes a challenge for making relevant choices about a single PID detector; thus, a diverse spectrum of PID detectors has been proposed. Of the 4 types of PID detectors, three are based on Ring Imaging Cherenkov Counter (RICH) technologies, and one is realized by the Time-of-Flight (ToF) method. Two types of Photon Detectors (PDs) are proposed for these 3 RICHs. Two will most probably be equipped with Silicon Photomultipliers (SiPMs), and the DIRC RICH [Detection of Internally Reflected Cherenkov (Light) Ring Imaging Cherenkov Counter] will most probably use commercial MCP-PMTs; the Large Area Picosecond Photon Detector (LAPPD), with a 20 × 20 cm$^2$ active area, produced and marketed by Incom Inc., USA, is the most likely candidate.
At NISER, we are setting up a laboratory facility for testing, characterizing and benchmarking this product. The first GEN II pad-segmented LAPPD has been bought, and critical first results are shown. Its possible uses in many fields of High Energy Physics and in other areas like medical imaging and security devices will also be discussed.
The muon scattering angle varies among materials due to the multiple Coulomb scattering of the incoming muons. At a given energy, the scattering angle mainly depends on the atomic number and density of the material and on the thickness of the medium. Scattering angles at different initial energies also provide an opportunity for classification. In this study, we show that the deflection angle depends on the thickness and density of the material, and that it decays exponentially as a function of the initial muon energy. We take a different approach to simulating the scattering angle, using our own setup geometry. The experimental setup is not ready yet, so we use the Geant4 simulation package to obtain the data.
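A hedged sketch of the expected scaling: the PDG (Highland) formula gives the RMS projected multiple-scattering angle as $\theta_0=\frac{13.6\,{\rm MeV}}{\beta c p}\,z\sqrt{x/X_0}\,[1+0.038\ln(x/X_0)]$; the materials and momentum below are illustrative inputs, not the Geant4 configuration used in this study.

import numpy as np

def highland_theta0(p_MeV, x_over_X0, beta=1.0, z=1):
    """RMS projected multiple-scattering angle (radians), PDG Highland formula:
    theta0 = 13.6 MeV / (beta c p) * z * sqrt(x/X0) * [1 + 0.038 ln(x/X0)]."""
    return (13.6 / (beta * p_MeV)) * z * np.sqrt(x_over_X0) * (1 + 0.038 * np.log(x_over_X0))

# Illustrative: a 3 GeV muon crossing 10 cm of iron (X0 ~ 1.76 cm) vs 10 cm of lead (X0 ~ 0.56 cm)
for name, X0_cm in (("Fe", 1.76), ("Pb", 0.56)):
    theta0 = highland_theta0(3000.0, 10.0 / X0_cm)
    print(f"{name}: theta0 ~ {1e3 * theta0:.1f} mrad")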
The high-granularity calorimeter (HGCAL) of CMS is planned to operate during the high-luminosity operation of the LHC (from 2028 onwards), replacing the existing electromagnetic and hadronic calorimeters in the endcap. It will enable a detailed investigation of vector boson fusion processes and Lorentz-boosted topologies at forward rapidity. An extensive validation of the hardware and software components is currently in progress. We have developed a muon tomography technique that is found to be very useful for identifying any problems after changes are made and for testing the correctness of the geometry. We will discuss how this technique is used to identify energy-loss discrepancies with partial-wafer silicon sensors and incorrect rotations of full- and partial-wafer silicon sensors, and to validate GEANT hit positions in the HGCAL scintillator tiles.
Liquid argon time-projection chamber (LArTPC) detectors have unique and powerful properties for neutrino physics and beyond-the-standard-model (BSM) searches. A LArTPC provides precise digital readout of charged-particle trajectories, enabling a detailed picture of the aftermath of neutrino and BSM particle interactions. We will discuss the opportunity to search for beyond-Standard-Model particles with the ICARUS LArTPC detector in the Fermilab Short-Baseline Neutrino (SBN) program, using the off-axis flux of the NuMI neutrino beam. Since ICARUS is situated about 5.7◦ off-axis from the NuMI neutrino beam, it is an excellent setup for BSM searches. In this talk I will discuss the LArTPC detector as well as the BSM physics opportunities at the SBN program.
The ALICE experiment at the LHC is designed to study the hot and dense medium produced in ultrarelativistic heavy-ion collisions. Owing to their short lifetimes, resonances are useful tools to understand the mechanism of particle production and the properties of the hadronic phase created after these collisions. The yields of resonances may be modified with respect to expectations by in-medium effects such as rescattering and regeneration. Resonance production is also important to understand the in-medium effects observed through the nuclear modification factor ($R_{\rm{AA}}$) at high $p_{\mathrm{T}}$. The rapidity asymmetry ($y_{\rm{asym}}$) is important for studying the particle production mechanism in p-Pb collisions, where nuclear effects differ between the p-going and Pb-going directions.
In this talk, recent results on resonance production in p-Pb collisions at $\sqrt{s_{NN}}$= 5.02 TeV, Xe-Xe collisions at $\sqrt{s_{NN}}$= 5.44 TeV and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV are presented. The results include transverse momentum spectra ($p_{\mathrm{T}}$), rapidity asymmetry ($y_{\rm{asym}}$), integrated yields and mean transverse momenta for various centrality classes. The results will be compared with model predictions.
Hadronic resonances are effective tools for studying the hadronic phase in ultrarelativistic heavy-ion collisions. In fact, their lifetime is comparable to that of the hadronic phase, and resonances are sensitive to effects such as rescattering and regeneration processes, which might affect the resonance yields and shape of the transverse momentum spectra. These processes can be studied considering the yield ratio of resonance to the corresponding stable particle as a function of the charged-particle multiplicity. Similar characteristics to those in heavy-ion collisions have been observed in the multiplicity-dependent studies of particle production in pp and p--Pb collisions. Resonance measurements may provide insight into the potential emergence of collective-like phenomena and a non-zero lifetime of the hadronic phase in small collision systems.
In this contribution, we present new ALICE results on the measurement of mesonic and baryonic resonances in small collision systems at LHC energies, including the measurement of K$^{*\pm}(892)$, $\Lambda(1520)$, $\Sigma^{*\pm}(1385)$, $\Xi^{0}(1530)$, $\phi(1020)$ as a function of the charged-particle multiplicity.
Short-lived resonances such as $K^{*0}$ are good candidates to probe the hadronic phase of the matter formed in heavy-ion collisions. Because of the short lifetime, the decay daughters may interact with the hadronic medium, changing the measured properties of the resonances; in particular, they may undergo in-medium effects such as rescattering and regeneration. Hence the $K^{*0}/K$ ratio is a unique tool to investigate the interplay between these effects in the hadronic phase during the evolution of heavy-ion collisions. The high-statistics Au+Au data collected by STAR in its BES-II program, with enhanced detector capabilities and wider pseudorapidity coverage, will enable more differential measurements with reduced statistical uncertainties compared to BES-I.
We will report invariant yields, the $p_T$-integrated yield (dN/dy), and the mean transverse momentum ($\langle p_T \rangle$) of $K^{*0}$ in Au+Au collisions at $\sqrt{s_{NN}}$ = 19.6 GeV recorded during BES-II. The results will be compared with previous BES-I measurements, and the average transverse momentum of $K^{*0}$ will be compared with that of other hadrons. The resonance-to-non-resonance ratio will be shown as a function of centrality to study rescattering versus regeneration effects. A measurement of the lower limit of the hadronic-phase lifetime will be shown as a function of centrality and compared with measurements at other RHIC and LHC energies.
Exploring the QCD phase diagram and searching for the QCD critical point are some of the main goals of the Beam Energy Scan program at RHIC. In 2017, the STAR experiment collected a large dataset of Au+Au collisions at $\sqrt{s_{NN}} =$ 54.4 GeV. The identified particle spectra and yields provide information about the bulk properties of the hot medium created in these collisions. Furthermore, the rapidity dependence study is essential for exploring the boost-invariant regions of the system.
We present measurements of the production of $\pi^{\pm}$, $K^{\pm}$, p, and $\bar{p}$ in various centrality and rapidity intervals. The transverse momentum spectra, particle yields d$N$/d$y$, average transverse momenta $\langle p_{T} \rangle$, and particle ratios will be presented for different centrality classes and rapidity intervals. The kinetic freeze-out parameters will be extracted for different rapidity intervals, and the results will be compared to similar measurements at other energies. The physics implications of the results will be discussed.
Ultraperipheral lead-lead collisions produce photon fluxes so large that fundamental quantum-mechanical processes can be observed and studied in detail. The first measurement of $\tau$ lepton pair production in ultraperipheral PbPb collisions at $\sqrt{s_{_{\mathrm{NN}}}} =$ 5.02 TeV, with data collected by CMS during LHC Run 2, will be presented. The study paves the way for a determination of the anomalous magnetic moment of the $\tau$ lepton, which is currently poorly constrained.
Global observables such as the pseudorapidity distributions of particle multiplicities in the final state are crucial for shedding light on the physics processes involved in hadronic collisions. In proton-lead (p-Pb) collisions at LHC energies, such measurements provide an important baseline for understanding lead-lead (Pb-Pb) results by disentangling hot nuclear matter effects from cold nuclear matter effects. Multiplicity measurements can also constrain theoretical models describing the initial stages of the collision, e.g., to what degree the nucleon and the nuclei interact as dilute (partonic) or dense (CGC-like) fields. The study of inclusive photon multiplicity aims to provide measurements complementary to those obtained with charged particles.
In this contribution, the multiplicity and pseudorapidity distributions of inclusive photons at forward rapidity (2.3 $<\eta<$ 3.9) in pp and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will be presented. The data samples have been collected using the Photon Multiplicity Detector (PMD) of ALICE. The dependence of photon production on centrality will be presented and compared both with charged-particle distributions measured at midrapidity and with predictions from Monte Carlo event generators.
We will describe the opportunities offered by long baseline oscillation experiments and highlight the role played by the near detector as well as the far detectors. This discussion would mainly revolve around the upcoming Deep Underground Neutrino Experiment (DUNE). DUNE will detect neutrinos generated in the LBNF beamline at Fermilab, using a Near Detector (ND) situated near the beam target where the neutrinos originate and a Far Detector (FD) located 1300 km away in South Dakota. We will touch upon some of the physics searches beyond the Standard Model (SM) and describe the role played by the ND along with the FD in these studies.
The latest data of the two long-baseline accelerator experiments NOνA and T2K, interpreted in the standard 3-flavor scenario, display a discrepancy. A mismatch in the determination of the standard CP phase $\delta_{\rm CP}$ extracted by the two experiments is evident for the normal neutrino mass ordering: while NOνA prefers values close to $\delta_{\rm CP} \sim 0.8\pi$, T2K identifies values of $\delta_{\rm CP} \sim 1.4\pi$. These two estimates disagree at more than 90% C.L. for 2 degrees of freedom. We show that this tension can be resolved if one hypothesizes the existence of complex neutral-current non-standard interactions (NSI) of the flavor-changing type involving the $e-\mu$ or $e-\tau$ sectors, with couplings $|\varepsilon_{e\mu}| \sim |\varepsilon_{e\tau}| \sim 0.2$. Remarkably, in the presence of such NSI, both experiments point towards the same common value of the standard CP phase, $\delta_{\rm CP} \sim 3\pi/2$. Our analysis also highlights an intriguing preference for maximal CP violation in the non-standard sector, with the NSI CP phases having best fits close to $\phi_{e\mu} \sim \phi_{e\tau} \sim 3\pi/2$, hence pointing towards imaginary NSI couplings.
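For context, the flavour-changing couplings $\varepsilon_{e\mu}$ and $\varepsilon_{e\tau}$ quoted above enter the standard parameterization of neutral-current NSI in matter, which can be written (in its generic textbook form, not specific to this analysis) as
$$
H_{\rm mat} = \sqrt{2}\, G_F N_e
\begin{pmatrix}
1+\varepsilon_{ee} & \varepsilon_{e\mu} & \varepsilon_{e\tau} \\
\varepsilon_{e\mu}^{*} & \varepsilon_{\mu\mu} & \varepsilon_{\mu\tau} \\
\varepsilon_{e\tau}^{*} & \varepsilon_{\mu\tau}^{*} & \varepsilon_{\tau\tau}
\end{pmatrix},
\qquad
\varepsilon_{\alpha\beta} = |\varepsilon_{\alpha\beta}|\, e^{i\phi_{\alpha\beta}},
$$
with $N_e$ the electron number density along the neutrino path.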
NOvA is a long-baseline accelerator neutrino experiment. It uses the 700 kW NuMI beam at Fermilab to send muon neutrinos (or muon antineutrinos) to two functionally identical detectors located 14.6 mrad off the beam axis. The Near Detector (ND) is located at Fermilab, 1 km from the neutrino source, and the 14 kton Far Detector (FD) is located 810 km away in Ash River, Minnesota. Both detectors are tracking calorimeters filled with liquid scintillator, which can detect and identify muon and electron neutrino interactions with high efficiency. The physics goals of NOvA are to observe the oscillation of muon neutrinos to electron neutrinos, to understand why matter dominates over antimatter in the universe, and to resolve the ordering of the neutrino masses. To that end, NOvA measures the electron neutrino and antineutrino appearance rates as well as the muon neutrino and antineutrino disappearance rates. The ND receives a high-statistics neutrino flux due to its proximity to the neutrino source, which makes it suitable for high-precision neutrino cross-section measurements, and it is used as a control for the oscillation analyses. The FD is used to analyze the appearance and disappearance of the neutrinos arriving from Fermilab. In this talk I will give an overview of NOvA and present the latest results combining both neutrino data ($13.6\times10^{20}$ POT) and antineutrino data ($12.5\times10^{20}$ POT).
Dark matter (DM), if captured in considerable amounts in the solar core, may undergo self-annihilation producing Standard Model particles such as neutrinos, charged leptons, or gammas as end products. Neutrinos produced in the solar core from DM annihilation may then be detected at a terrestrial neutrino detector. KM3NeT is an under-sea neutrino detector in the Mediterranean Sea in which the sea water is the detecting material: neutrinos are detected via the Cherenkov light of the charged leptons obtained from charged-current interactions and/or from neutral-current scattering of neutrinos off the seawater. In this work, the detectability of such DM-induced neutrinos from the Sun at the upcoming KM3NeT detector is addressed. Upper bounds on the detection rate of such neutrinos at KM3NeT are computed for a generic dark matter scenario, and results are also shown for DM candidates in specific particle DM models. Upper bounds on muon event rates are computed for different annihilating dark matter masses and for each dark matter annihilation channel (e.g. $b\bar{b}$, $W^+W^-$, $ZZ$, etc.). These upper bounds are also computed using the upper bounds on the dark matter scattering cross-section obtained from the XENON1T direct dark matter search experiment.
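For orientation, once capture and annihilation in the Sun reach equilibrium, the annihilation rate that sources the neutrino flux is fixed by the capture rate alone; in standard notation (generic, not specific to the models above),
$$
\Gamma_{\rm ann} = \tfrac{1}{2}\, C_{\odot} \tanh^{2}\!\left(\frac{t_{\odot}}{\tau_{\rm eq}}\right)
\;\longrightarrow\; \frac{C_{\odot}}{2} \quad (t_{\odot} \gg \tau_{\rm eq}),
\qquad \tau_{\rm eq} = \frac{1}{\sqrt{C_{\odot} A_{\odot}}},
$$
where $C_{\odot}$ is the capture rate, $A_{\odot}$ the annihilation coefficient, and $t_{\odot}$ the age of the Sun. This is why direct-detection bounds on the scattering cross section translate into upper bounds on the muon event rate.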
The existence of non-interacting fermion singlets, known as sterile neutrinos, is well motivated in beyond-Standard-Model physics. From accommodating oscillation anomalies to acting as messengers for the neutrino mass generation mechanism through the type-I seesaw, they have emerged as a key ingredient in several mass ranges. In addition, being Standard Model singlets, they can acquire a Majorana-type mass and hence trigger lepton number violating (LNV) and charged lepton flavour violating (cLFV) processes. Motivated by this, we incorporate three sterile states $N_i$, $i=1,2,3$, with masses $M_i$, alongside the Standard Model active neutrinos, thus constructing a 3+3 model for the calculation of the effective mass $|m_{ee}|$ of neutrinoless double beta ($0\nu\beta\beta$) decay, which is of primary interest in the context of LNV processes. We derive analytical forms for the added sterile states using the exact seesaw relation, and also explore the additional CP-violating (CPV) phases and active-sterile mixing angles. Imposing the experimental limit on $|m_{ee}|$, we obtain sterile states in the keV-MeV range. Interestingly, most of the keV-range sterile states are found to follow a hierarchical pattern, i.e., $M_1 > M_2 > M_3$ for NH and $M_1 < M_2 < M_3$ for IH. We also check the reliability of our results by calculating the branching ratio of $\mu \rightarrow e \gamma$, a prominent cLFV process, in the 3+3 framework. Significant results for BR($\mu \rightarrow e \gamma$) are found for keV-range sterile neutrinos with their corresponding CPV phases and active-sterile mixing angle values.
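As a point of reference, in the regime where the extra states are much lighter than the typical $0\nu\beta\beta$ momentum transfer ($\sim 100$ MeV), the effective mass generalizes schematically to a coherent sum over active and sterile contributions,
$$
|m_{ee}| \simeq \Big| \sum_{i=1}^{3} U_{ei}^{2}\, m_{i} \;+\; \sum_{j=1}^{3} \Theta_{ej}^{2}\, M_{j} \Big| ,
$$
where $\Theta_{ej}$ denotes the active-sterile mixing (our notation); the CPV phases enter through the complex squares of the mixing elements.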
The exercise of producing the "Mega Science Vision 2035" document for India, coordinated by the Office of the PSA, is in progress. The current status of the Nuclear Physics and High Energy Physics documents will be presented.
The past decade has seen the opening of two new windows to the Universe - high energy neutrinos and gravitational waves. Leveraging data from other astrophysical messengers, two extragalactic candidate sources have been identified for the high energy neutrinos - TXS 0506+056 and NGC 1068. The existence of this flux enables the testing of fundamental particle physics at energies far beyond the reach of colliders on Earth - the Glashow resonance and neutrino nucleon scattering cross sections at TeV-PeV energies. This talk will review these developments as well as the resulting multimessenger theoretical understanding from an IceCube perspective, and provide an outlook for the next decade with a special focus on opportunities for the Indian community.
The GRAPES-3 experiment, located in Ooty, Tamil Nadu, is the major cosmic ray observation facility in India. It operates around the clock with an array of 400 plastic scintillator detectors of 1 m$^2$ area each and a 560 m$^2$ large-area muon telescope, which sample the electromagnetic and muonic components of cosmic ray showers, respectively. It allows us to study high energy phenomena in the 10 TeV to 10 PeV energy range, including measurements of the cosmic ray energy spectrum and composition with an overlap with various direct measurements in space, cosmic ray anisotropy, and gamma ray source searches from a near-equatorial location. In addition, the muon telescope is designed to record the muon flux above 1 GeV from 169 directions covering 2.3 sr of the sky at a rate of 3 million muons per minute, thus providing high-statistics measurements of the cosmic ray modulation induced by solar and atmospheric phenomena on short time scales. An overview of recent results on cosmic ray composition, cosmic ray anisotropy, angular resolution, long-term solar anisotropy, and thunderstorm phenomena, along with detector and electronics developments, will be presented.
This presentation will review recent advances in detectors by taking into account the needs for present and future experiments. It will discuss some selected directions of targeted detector development. It will also explore some interesting areas of blue sky research.
Topological defects are a natural consequence of several symmetry-breaking phase transitions. In this talk I will concentrate on cosmic strings and give an overview of the search for their signatures using various methods. I will then describe how magnetic fields can be generated in the wakes of Abelian Higgs strings. The magnetic field generated in these wakes can open up a whole new avenue for the detection of these exotic objects using different forms of electromagnetic radiation.
The study of particle showers produced in the atmosphere by the interactions of primary cosmic particles provides a natural laboratory for Standard Model and beyond-Standard-Model physics. While the showers encompass the physics of strong, weak, and electromagnetic interactions, the very first interactions are strong interactions producing hadronic showers, which introduce the largest uncertainty in the estimates of particle yields. In this work, we make a comprehensive study of air shower simulations using various combinations of hadronic models and the particle transport code of the CORSIKA package. The hadronic particles, mostly pions and kaons, decay to muons, which are the most abundant charged particles at the Earth's surface. We start with primary proton and alpha distributions as power laws, scaled to match the flux measured by balloon experiments at the top of the atmosphere. The shower simulation includes production, transport, and decays of secondaries down to ground level. We provide a way to validate the simulation results using ground-based measurements, namely single and multiple muon yields and their charge ratios as functions of zenith angle and momentum. This provides a basis for comparisons among the six model combinations used in this study. We then use the best model to quantitatively predict the absolute and relative yields of various particles at ground level, as well as their correlations with the primaries and with each other.
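A minimal sketch (ours, independent of the CORSIKA package itself) of how primary energies can be drawn from a power-law spectrum before rescaling to the balloon-measured fluxes; the spectral index and energy range below are illustrative only.

```python
import numpy as np

def sample_power_law(n, e_min, e_max, gamma, rng=np.random.default_rng(0)):
    """Draw n primary energies (GeV) from dN/dE ~ E^-gamma on [e_min, e_max]
    by inverting the cumulative distribution function."""
    u = rng.random(n)
    a = 1.0 - gamma
    return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

# Illustrative: proton primaries with index ~2.7 between 10 GeV and 100 TeV
energies = sample_power_law(100_000, 1e1, 1e5, 2.7)
print(energies.min(), energies.mean(), energies.max())
```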
We consider the propagation of gravitational waves in the late-time Universe in the presence of structure. Before detection, gravitational waves emitted from distant sources have to traverse regions of spacetime that are far from smooth and homogeneous. We investigate the effect of inhomogeneities on the observables associated with gravitational wave sources. In particular, we evaluate the impact of inhomogeneities on gravitational wave propagation by employing Buchert's framework of averaging. In the context of a toy model within this framework, it is first shown how the redshift versus distance relation is affected by the averaging process. We then study the variation of the redshift-dependent part of the observed gravitational wave amplitude for different combinations of our model parameters. We show that the variation of the gravitational wave amplitude with redshift can deviate significantly from that in the ΛCDM model. Our result signifies the importance of local inhomogeneities for precision measurements of the parameters of gravitational wave sources.
We study a possible particle-antiparticle asymmetry in the dark matter (DM) sector generated via DM scatterings. We study two example scenarios in which a novel interplay between DM elastic and inelastic scatterings sets the relic density and the composition of the DM sector in the present universe. The scenario can be realized in a $\mathcal{Z}_3$-symmetric effective theory with a complex scalar DM with cubic self-interaction, which leads to CP violation at one loop.
In JHEP 08 (2020) 149 we discussed the role of the semi-annihilation of DM in producing the asymmetric relic. We find the upper bound on the DM mass for the maximal CP-violation case to be 15 GeV, much stronger than in the usual WIMP scenario.
In Phys. Rev. D 104 (2021) 12 we have shown the role of DM self-scattering in deciding the density and composition of DM. In particular, the simultaneous presence of DM self-scatterings and annihilations is instrumental in generating the present density and a possible particle-antiparticle asymmetry in the DM sector due to unitarity sum rules. This is realized again with a complex scalar DM stabilized by a reflection symmetry in a minimal theoretical framework.
Axion-like particles (ALPs) naturally appear in many extensions of the Standard Model of particle physics and are viable candidates for cosmological dark matter. The Sun can also be an astrophysical source of ALPs, produced through the Primakoff process; such solar ALPs can leave signatures in detectors through inverse Primakoff (IP) scattering. We identify inelastic channels of the IP process due to atomic excitation and ionization. Their cross sections are derived with the full electromagnetic fields of the atomic charge and current densities, and computed with well-benchmarked atomic many-body methods. We also present new upper limits on ALP-photon couplings in the 1 eV to 1 MeV range using the TEXONO and XENON1T experiments.
We consider a fermionic dark matter (DM) candidate in the left-right symmetric framework, introducing a pair of vector-like (VL) doublets into the particle spectrum. The stability of the DM is ensured by an unbroken $Z_2$ symmetry. We explore the parameter space of the model compatible with the observed relic density and with direct and indirect detection cross sections. The presence of charged dark fermions opens up an interesting possibility for a doubly charged Higgs signal at the LHC and ILC. The signal of the doubly charged scalar decaying into the dark sector is analyzed in multilepton final states for a few representative parameter choices consistent with the DM observations.
We present a search for the lepton flavor violating decays $B_s \rightarrow \ell \tau$ ($\ell = e, \mu$) using the full data sample of 121.4 fb$^{-1}$ recorded at the $\Upsilon(5S)$ resonance with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. We use $B_s\bar{B}_s$ events in which one $B_s$ meson is reconstructed in a semileptonic $B_s \rightarrow D_s \ell \nu$ decay mode and the other $B_s$ meson is reconstructed in the signal mode. For this search, the $\tau$ lepton is reconstructed in the $\tau^- \to e^- \nu_\tau \bar{\nu}_{e}$ and $\tau^- \to \mu^- \nu_\tau \bar{\nu}_{\mu}$ decay channels. We will set upper limits on the branching fractions of these decays.
We present the results of a search for new physics in events with a photon, an electron or muon, and large missing transverse energy (MET). The study is based on a sample of proton-proton collisions at $\sqrt{s} = 13$ TeV corresponding to an integrated luminosity of 137 fb$^{-1}$ collected with the CMS detector in 2016-2018. Many models of new physics predict events with jets and significant MET in addition to electroweak gauge bosons. Notably, models of supersymmetry (SUSY) with gauge-mediated supersymmetry breaking (GMSB) naturally yield events with photons in the final state. Searches for events with both a photon and a lepton are not only sensitive probes of these models, but also have the potential to help characterize the branching fractions of new particles. We interpret the results of our search in the context of simplified SUSY models.
The extension of the SM with an inert doublet and right-handed neutrinos is studied. The inert doublet, which is odd under $Z_2$, does not take part in electroweak symmetry breaking (EWSB) and thus provides a viable dark matter candidate, while the light neutrino masses are generated by the seesaw mechanism. It is observed that vacuum stability is rescued by the addition of scalars, i.e. doublet and triplet scalars, and the bounds come only from perturbativity (where any coupling in the theory hits the $4\pi$ constraint), whereas the contribution of fermions to the running of the SM Higgs quartic coupling is negative and Planck-scale stability is therefore compromised. Next, we observe that an SU(2)-charged fermion drastically changes the running behaviour of the gauge coupling $g_2$ and contributes positively, giving a completely stable scenario. In the case of the type-III inverse seesaw scenario (a fermionic triplet with SU(2) charge), we observe that the only bound comes from perturbativity, and the number of generations of fermionic triplets is restricted to two by Planck-scale perturbativity due to the large positive contribution. For further extensions with doublet and triplet leptoquarks carrying all three gauge charges, the positive contribution to the running of the gauge couplings is even larger; therefore $R_2 + S_3$ is restricted by Planck-scale perturbativity because of the large positive contribution.
Dark matter constraints from the relic density, from direct detection experiments such as XENON1T and LUX, and from indirect cross-section constraints for the dominant modes from the HESS and Fermi-LAT experiments are studied for the inert Higgs doublet (IDM) and inert Higgs triplet (ITM) cases. The lower bound on the DM mass from the relic density is $M_{\rm DM} > 700$ GeV and $> 1200$ GeV for the IDM and ITM respectively.
In the Standard Model, the electroweak phase transition is of second order. Electroweak baryogenesis and gravitational wave signatures require a strongly first-order phase transition, which also motivates beyond-Standard-Model fields. In extensions of the Standard Model such as the minimal supersymmetric standard model (MSSM), a sizeable CP violation can occur through an extended Higgs sector, and the contribution from additional degrees of freedom to the cubic term of the effective potential enhances the strength of the phase transition. We observe that an upper bound on the mass parameter follows from requiring a strongly first-order phase transition consistent with the current Higgs boson mass of 125.5 GeV and with Planck-scale stability and perturbativity. We also study the parameter space for gravitational wave frequencies detectable by the Laser Interferometer Space Antenna (LISA) experiment.
The Standard Model (SM) is a theory of fundamental particles and their interactions. Despite being a successful theory, the SM cannot explain the existence of dark matter (DM), the matter-antimatter asymmetry, the hierarchy problem, neutrino masses, etc. Many models beyond the SM have been developed over time to address these limitations; one of them is supersymmetry (SUSY), which has been proposed to solve these SM problems and could also provide a DM candidate. A search for supersymmetric electroweakinos ($\widetilde{\chi}_{1}^{\pm}$, $\widetilde{\chi}_{2}^{0}$) produced in the vector boson fusion (VBF) topology in proton-proton collisions at $\sqrt{s}$ = 13 TeV, using the full Run II data collected by the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC), is presented. The experimental features of VBF processes provide increased sensitivity to compressed SUSY spectra compared to traditional searches. The benchmark model for this search is the R-parity-conserving Minimal Supersymmetric Standard Model (MSSM), in which the lightest neutralino is the canonical dark matter candidate. The dominant SM background processes are estimated using data-driven techniques. In this talk, the search strategy, methodology, and results on the background estimation using various control regions and their validation will be presented.
Electroweak baryogenesis needs a strongly first-order electroweak phase transition as a prerequisite to explain the observed baryon asymmetry of the Universe. A strongly first-order electroweak phase transition, single- or multi-step, can be accommodated easily in supersymmetric models with singlet extensions. Confining ourselves to the Next-to-Minimal Supersymmetric Standard Model (NMSSM) extended with a right-handed neutrino superfield, we investigate the dynamics of the electroweak phase transition and the possibility of electroweak baryogenesis, consistent with collider, neutrino, and flavour physics constraints. We also scrutinize the prospects for gravitational wave production and detection at the upcoming space-based interferometers within our framework. Our findings demonstrate the merit of such less familiar yet complementary probes of physics beyond the Standard Model.
Several signatures of the quark-gluon plasma (QGP) have been seen in TeV proton+proton collisions at Large Hadron Collider (LHC) energies, including long-range ridge-like correlations, strangeness enhancement, and a freeze-out temperature comparable to the deconfinement transition temperature. Although the formation of QGP in TeV hadronic collisions remains inconclusive, deeper studies are compelling in order to understand the matter created in high-multiplicity collisions.
In this mini-review, I shall discuss the experimental findings, confronted with theoretical studies to understand a possible QGP-droplet formation in TeV proton+proton collisions.
The soft function that captures the IR singularities of scattering amplitudes exponentiates, and the soft anomalous dimension $\Gamma_s$ sits in the exponent. $\Gamma_s$ can be expressed in terms of sets of Feynman diagrams known as Cwebs. The colour factors of the diagrams in a Cweb mix through Cweb mixing matrices.
In this talk I present a novel approach to the calculation of these matrices, which makes the structures present in them transparent and also enables us to make all-order predictions in perturbation theory for several classes of Cwebs.
The proliferating list of experimentally discovered heavy quark exotic hadrons calls for urgent first principles theoretical investigations. We present a status update of our ongoing calculation of tetra-quark states composed of a bottom and a charm quark in isospin I=0, axial-vector ($J^P=1^+$) channel using first principles Lattice Quantum Chromodynamics.
These calculations are performed on state-of-the-art MILC ensembles with dynamical up/down, strange, and charm quark fields realized using a highly improved staggered quark action. The valence quarks are realized using an overlap action for quark masses ranging from the light to the charm sector, whereas the bottom quark is treated in a non-relativistic QCD framework. We find strong evidence for an energy level below the elastic threshold, possibly hinting at an attractive interaction between the bottom and charmed mesons, which may further indicate the presence of charmed-bottom tetra-quarks.
Effective field theories (EFTs) provide a powerful way of organising the low-energy physics of interest in quantum field theories. The physics of QCD at very low and at very high temperature is well described by, respectively, the theory of pions and an effective weak-coupling expansion. These EFTs depend on a hierarchy of scales and usually become ineffective when the temperature is around the crossover temperature (a few hundred MeV).
To inspect the physics of QCD around the crossover temperature $T_{co}$, we build an effective field theory of thermal QCD around $T_{co}$. Employing the global symmetries of QCD as our guiding principle, we organize the effective field theory; in particular, we use the vector and axial symmetries of QCD for the case of two quark flavors. We write down all possible axial-preserving and axial-breaking terms up to dimension-six operators to build the model. The dimension-six interaction terms consist broadly of two classes: current-current interactions [1] and a six-dimensional higher-derivative term (referred to as the gradient-cubed term hereafter). We find that the gradient-cubed term modifies the current, giving rise to a PCAC relation with this new current.
We proceed to treat this EFT in mean field theory (MFT). All the current-current interactions give rise to an effective mass term, with all the coupling coefficients reduced to an effective coupling [2]. The original EFT was organized in powers of temperature/$T_0$, where $T_0$ is a cut-off scale, and in the chiral limit of this theory the gradient-cubed term is one of the terms that remains unaffected by the MFT treatment. We then obtain the free energy for this MFT. In the chiral limit we find two solutions for the critical temperature $T_c$; after discarding the unphysical solution we obtain bounds on the coupling of the gradient-cubed term. In the chiral limit the phase transition is found to be of second order, with the gradient-cubed term modifying the value of $T_c$. The QCD phase diagram in the chiral limit is also modified by the contribution of the gradient-cubed term, compared to the case with only current-current interaction terms, and the curvature of the critical line in the chiral limit also depends on the gradient-cubed term.
We proceed to inspect the pionic fluctuations around the mean field theory. The properties of the pionic theory at one loop are found to depend on the gradient-cubed term: the physical parameters of the pionic theory, namely $m_\pi$, $f_\pi$, and $u_\pi$, receive contributions from it. We find that the Gell-Mann-Oakes-Renner (GMOR) relation also holds in this theory. The curvature coefficients $\kappa_2$ and $\kappa_4$ are found to be modified and depend on the coupling of the gradient-cubed term.
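For completeness, the Gell-Mann-Oakes-Renner relation referred to above reads, in its standard two-flavour form,
$$
m_{\pi}^{2} f_{\pi}^{2} \;=\; -\,(m_{u}+m_{d})\,\langle \bar{q} q \rangle \;+\; \mathcal{O}(m_{q}^{2}),
$$
and it is this relation that is found to survive the gradient-cubed modifications at one loop.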
$\textbf{Keywords}$: QCD at finite temperature, crossover temperature, critical temperature, pionic theory, QCD phase diagram.
$\textbf{References}$
[1] S. Gupta and R. Sharma, ``Effective field theory for warm QCD,'' Phys. Rev. D $\textbf{97}$ (2018) no.3, 036025, [arXiv:1710.05345 [hep-ph]].
[2] S. Gupta and R. Sharma, ``Real time warm pions from the lattice using an effective theory,'' Int. J. Mod. Phys. A $\textbf{35}$ (2020) no.33, 2030021, [arXiv:2006.16626 [hep-ph]].
We present here for the first time the impact-parameter dependent saturated dipole model (bSat or IP-Sat) fitted [1] to the leading-neutron structure function data from HERA in the one-pion-exchange approximation. We estimate the magnitude of gluon saturation effects by performing a fit to the same data with the linearised version of the considered dipole amplitude and comparing the two models. Our analysis helps to constrain the longitudinal gluon distribution of the pion at small $x$, which is difficult to measure directly in experiment. Both models provide good descriptions of the considered data, and no hints of saturation could be deduced from the currently available data. This can be understood because the Bjorken $x$ probed in neutron-tagged DIS measurements is considerably larger than the Bjorken $x$ in inclusive DIS events on the proton, where the latter have exhibited no clear signal for saturation. Further, we discuss observables in leading-neutron production in $ep$ collisions that are sensitive to non-linear effects [2].
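For reference, the impact-parameter dependent saturated (bSat/IP-Sat) dipole cross section underlying such fits is commonly written as
$$
\frac{\mathrm{d}\sigma_{q\bar q}}{\mathrm{d}^{2}\mathbf{b}}
= 2\left[ 1 - \exp\!\left( -\frac{\pi^{2}}{2 N_c}\, r^{2}\, \alpha_s(\mu^{2})\, x g(x,\mu^{2})\, T(\mathbf{b}) \right) \right],
$$
where $r$ is the dipole size and $T(\mathbf{b})$ the transverse profile; the linearised (non-saturated) version used for comparison keeps only the first term in the expansion of the exponential. The details of the fits are given in [1] and [2] below.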
[1] A. Kumar, arXiv:2208.14200 [hep-ph]
[2] A. Kumar and T. Toll, Phys. Rev. D 105 (2022) 114045, arXiv:2203.13314 [hep-ph]
The existence of a sterile neutrino is one of the foremost topics of research in the area of neutrino physics. There are several short-baseline anomalies which hint towards the existence of sterile neutrinos [1]. However, a simplistic theory where oscillation is the only explanation is severely constrained due to null results at other experiments [1, 2]. In this work, we propose to study these $O(1~\rm eV^2)$ $\Delta m^2$-driven short-baseline oscillations with antineutrinos from reactor neutrino complexes located in India. We propose to study both the charged current (CC) and neutral current (NC) interactions of antineutrinos on a deuterated liquid scintillator (DLS) detector. It should be noted that studying NC interactions of neutrinos in the MeV range is facilitated with the use of deuterium in the detector. India is one of the largest producers of heavy water in the world and using deuterated hydrocarbons as neutrino detectors is quite feasible. Such a detector has been explored for detecting future supernova neutrinos, very recently, in Ref. [3]. It is also possible to have multiple detector locations in the few metres to few kilometres distances from the reactor core; thereby aiding a synergistic study between CC and NC interactions at near and far detectors - possibly eliminating flux and cross-section related uncertainties. Further, a combined study of CC and NC interactions may provide sensitivity to all the new active-sterile oscillation parameters which may be quiescent in only CC interactions, as demonstrated before in Ref. [4].
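In the short-baseline limit relevant here, the active-sterile oscillation is governed by the familiar effective two-flavour survival probability of the 3+1 framework,
$$
P_{\bar{\nu}_e \to \bar{\nu}_e} \simeq 1 - \sin^{2} 2\theta_{ee}\, \sin^{2}\!\left( \frac{\Delta m^{2}_{41} L}{4E} \right),
$$
which, for $\Delta m^{2}_{41} \sim \mathcal{O}(1~{\rm eV^2})$ and reactor energies of a few MeV, oscillates over the few-metre to few-kilometre baselines discussed above (standard expression, quoted here for orientation).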
References:
[1] M. Dentler, Á. Hernández-Cabezudo, J. Kopp, P. A. N. Machado, M. Maltoni, I. Martinez-Soler et al., "Updated Global Analysis of Neutrino Oscillations in the Presence of eV-Scale Sterile Neutrinos", JHEP 08 (2018) 010, [1803.10661]
[2] B. Dasgupta and J. Kopp, "Sterile Neutrinos", Phys. Rept. 928 (2021) 1–63, [2106.05913]
[3] B. Chauhan, B. Dasgupta and V. Datar, "A deuterated liquid scintillator for supernova neutrino detection", JCAP 11 (2021) 005, [2106.10927]
[4] R. Gandhi, B. Kayser, S. Prakash and S. Roy, "What measurements of neutrino neutral current events can reveal", JHEP 11 (2017) 202, [1708.01816]
The establishment of non-zero neutrino masses by the phenomenon of neutrino oscillations provides a clear-cut indication of the existence of neutrino magnetic moments. We compute the neutrino magnetic moments in the presence of right-handed current effects within the Left-Right Symmetric Model (LRSM). The effects of the Dirac and Majorana phases on the cancellation of the magnetic moments for various choices of the left-handed (U) and right-handed (V) neutrino mixing matrices are shown in detail. The discussion focuses on the case where the right-handed neutrino mixing equals the well-known PMNS matrix of the left-handed sector, which leads to a vanishing Dirac magnetic moment while providing a sizeable contribution to the Majorana magnetic moments. The roles are reversed when the right-handed mixing is taken as the conjugate of its left-handed counterpart, leading to vanishing Majorana magnetic moments, whereas for right-handed mixing equal to the transpose of the left-handed mixing both the Dirac and Majorana components of the magnetic moment exist. Currently, the neutrino magnetic moment reported by the XENON1T detector at 90% confidence lies in the range $\mu_\nu \in (1.4, 2.9)\times 10^{-11}\,\mu_B$ [1]. Thus, the study of magnetic moments in the presence of right-handed current effects may shed light on the Dirac or Majorana nature of neutrinos. Our results show that, even though certain values of the Majorana phases can eliminate the neutrino magnetic moments, the presence of a maximal CP-violating phase in the neutrino mixing matrix, as favored by the discrepancy between T2K results and reactor measurements of neutrino oscillations, requires that at least one neutrino have a large nonzero magnetic moment.
[1] E. Aprile et al. (XENON), Phys. Rev. D 102, 072004 (2020), 2006.09721.
The ANTARES neutrino telescope and its next-generation offspring, KM3NeT, located in the abyss of the Mediterranean Sea, have been designed to study neutrinos from a variety of sources over a wide range of energies and baselines. One of the principal goals of these experiments is to determine Earth-matter effects stemming from the energy and zenith angle dependence of atmospheric neutrinos in the multi-GeV range.
In this contribution, I will present the physics potential of the ANTARES and KM3NeT-ORCA (ORCA being the low-energy sub-array of KM3NeT) detectors to measure sub-dominant effects in atmospheric neutrino oscillations, namely Neutrino Non-Standard Interactions (NSIs). NSIs in the propagation of neutrinos through matter can lead to significant deviations of neutrino oscillations from the standard 3-flavour framework; these additional interactions would result in an anomalous flux of neutrinos discernible at neutrino telescopes. A likelihood-based search for NSIs with ten years of atmospheric muon-neutrino data recorded with ANTARES will be reported, and sensitivity projections for ORCA, based on realistic detector simulations, will be shown. The phenomenological consequences of NSIs for the neutrino mass ordering measurement at ORCA will be addressed as well. In addition, the sensitivity of ORCA to the octant of $\theta_{23}$ will be outlined. Remarkably, the bounds obtained with ANTARES in the NSI $\mu-\tau$ sector constitute the most stringent limits to date.
Neutrinos are fundamental yet poorly understood particles of the Standard Model. The fact that they oscillate among flavours indicates non-zero neutrino masses, which highlights a limitation of the Standard Model of particle physics, which predicts massless neutrinos. The recent measurement of a non-zero reactor angle has also opened up an opportunity for a wide variety of models and ansätze that try to explain the hierarchy of neutrino masses and mixing angles. One key hypothesis is the High Scale Mixing Unification (HSMU) hypothesis, which attempts this by unifying the mixing angles of quarks and leptons at the GUT scale. In this work, the validity of the HSMU predictions is checked against recent experimental bounds, as the measurements of recent years have greatly increased in precision. Furthermore, we also check an ansatz with less stringent requirements than HSMU, the Wolfenstein ansatz, which hypothesizes that the Wolfenstein parameterization structure of the quark mixing matrix is duplicated in the lepton mixing matrix. We find that the currently measured values of the neutrino oscillation parameters rule out HSMU even with the added modifications.
We study neutrino oscillations in a rotating spacetime in the weak-gravity limit, for neutrino trajectories constrained to the equatorial plane. Using the asymptotic form of the Kerr metric, we show that the rotation of the gravitational source non-trivially modifies the neutrino phase. We find that the oscillation probabilities deviate significantly from the corresponding results in the Schwarzschild spacetime when neutrinos are produced near the black hole (still within the weak-gravity limit) with non-zero angular momentum and are detected on the same side, i.e., for the non-lensed neutrino. Moreover, for a given gravitational body and geometry, there exists a distance scale for every energy scale (and vice versa) beyond which the rotational contribution to the neutrino phase becomes significant.
Using a Sun-sized gravitational body in the numerical analysis of the one-sided neutrino propagation, we show that even a small rotation of the gravitational object can significantly change the survival or appearance events of a neutrino flavor registered by a detector located on the Earth. These effects are expected to be prominent in cosmological/astrophysical scenarios where neutrinos travel past many (rotating) gravitational bodies over large distances; thus the rotational effects of all such bodies must be incorporated when analyzing oscillation data.
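For comparison, the flat-spacetime two-flavour expression to which the rotation-modified probability reduces in the appropriate limit is the standard one,
$$
P_{\nu_\alpha \to \nu_\beta} = \sin^{2} 2\theta \, \sin^{2}\!\left( \frac{\Delta m^{2} L}{4E} \right),
$$
so the rotational effect described above can be viewed as a modification of the phase $\Delta m^{2} L / 4E$.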
NOvA is a two-detector, long-baseline neutrino oscillation experiment located at Fermilab, Batavia, IL, USA. It is primarily designed to constrain the neutrino oscillation parameters using muon (anti)neutrino disappearance data and electron (anti)neutrino appearance data. NOvA detects neutrinos from Fermilab's Neutrinos at the Main Injector (NuMI) beamline. The unoscillated muon neutrino and beam $\nu_e$ events are observed by the NOvA Near Detector (ND), which is 100 m underground and at a distance of 1 km from the beam source. The Far Detector (FD), situated 809 km from the ND in Ash River, MN, USA, observes the $\nu_\mu$ and $\nu_e$ events after oscillations.
Traditionally, NOvA has used $\nu_e$ events in the energy range $1
Proton radiography is a promising technique in proton therapy to reduce the range uncertainty that arises from the conversion of x-ray Hounsfield Units (HU) into the relative stopping power (RSP) of protons. However, the quality of the obtained proton images is degraded by multiple Coulomb scattering (MCS). Various approaches have been proposed to improve proton images, one of the most recent being the energy-resolved dosimetry (ERD) technique. This method uses a simple setup with a single detector and multiple beam energies to measure the dose, in contrast to tracking systems, which use a complex setup with multiple detectors. The distribution of dose as a function of energy, known as the energy-resolved dose function (ERDF), is unique for a given water equivalent path length (WEPL), and a pattern-matching algorithm is used to relate the ERDF to a specific WEPL. In this work, we explore an alternative approach to estimate the WEPL using an analytical expression relating the energy, the dose, and the corresponding WEPL. We present preliminary results from this generic formula in terms of the required imaging time and the accuracy of the WEPL estimation.
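A minimal sketch (our illustration, not the actual analysis code) of the pattern-matching step described above: a measured energy-resolved dose function is compared against a precomputed library of ERDFs, one per candidate WEPL, and the best-matching WEPL is returned by least squares. All names and the toy numbers are hypothetical.

```python
import numpy as np

def estimate_wepl(measured_erdf, library_erdfs, library_wepls):
    """Return the WEPL (mm of water) whose library ERDF best matches the
    measured dose-vs-energy curve in the least-squares sense.

    measured_erdf : 1D array of doses, one entry per beam energy
    library_erdfs : 2D array of shape (n_wepl, n_energies)
    library_wepls : 1D array with the WEPL value of each library row
    """
    residuals = library_erdfs - measured_erdf[np.newaxis, :]
    chi2 = np.sum(residuals**2, axis=1)
    return library_wepls[np.argmin(chi2)]

# Toy usage with a three-entry library (normalised doses at three energies)
wepls = np.array([100.0, 150.0, 200.0])
library = np.array([[1.0, 0.8, 0.2],
                    [0.9, 1.0, 0.5],
                    [0.6, 0.9, 1.0]])
measurement = np.array([0.88, 0.98, 0.52])
print(estimate_wepl(measurement, library, wepls))   # -> 150.0
```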
In high energy physics experiments there is extensive use of machine learning and deep learning algorithms. These well-established algorithms extract complex features from the data and are used for event and particle identification, energy estimation, and pile-up suppression. We present the application of these tools to the domain of pituitary tumor identification in MRI and PET-CT scans. The pituitary is a pea-sized gland housed within a bony structure (the sella turcica) at the base of the brain. Precise delineation of the pituitary is challenging due to its structural intricacy, noise in the data acquisition, and the mismatch between the spatial resolutions of PET and CT.
In this presentation, we describe the use of deep convolutional network architectures such as U-Net and R-CNN for the localization of the pituitary gland, using transfer learning and data augmentation to achieve high accuracy. Two different types of datasets (MRI and CT), freely available on Kaggle, were used for this purpose. We explain the algorithms used and their performance, and compare different backbones such as ResNet and VGG on both the CT-scan and MRI image datasets. We also provide the results of applying data augmentation to these datasets.
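As an example of the kind of figure of merit used to quantify such segmentations, the Dice coefficient between a predicted mask and the ground-truth mask is standard; the sketch below is generic and not tied to the specific U-Net/R-CNN implementations discussed.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between two binary masks (1 = pituitary, 0 = background)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy 2D masks: two partially overlapping 10x10 boxes -> Dice = 0.64
pred = np.zeros((64, 64), dtype=int); pred[20:30, 20:30] = 1
true = np.zeros((64, 64), dtype=int); true[22:32, 22:32] = 1
print(round(dice_coefficient(pred, true), 3))
```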
The historic Hunga Tonga-Hunga Ha'apai volcano eruption of 15 January 2022 was the largest volcanic eruption since Krakatoa in 1883. The eruption triggered destructive events such as tsunami waves, atmospheric shock waves, and sonic booms, and it injected water vapour amounting to about 10% of that found in the stratosphere, which may lead to a rise in global temperature. Recent studies by NASA and ESA revealed records of high-speed winds in the upper atmosphere and unusual electric currents in the ionosphere immediately after the eruption, making it one of the most destructive events observed over the past century with a direct impact on space weather. The pressure wave created by this violent eruption circled the globe multiple times and was recorded by numerous instruments around the world. GRAPES-3 is a ground-based cosmic ray experiment consisting of an array of 400 plastic scintillators and a large-area muon telescope, which record secondary cosmic rays in the extensive air showers produced by the interaction of primary cosmic rays with the atmosphere. It also monitors the local atmospheric pressure at Ooty in order to correct the measured particle count rates. Both detector systems recorded a sudden change in count rate coinciding with the pressure variation associated with the volcanic eruption. We will discuss the analysis and its interpretation during the symposium.
The standard model of cosmology (the Λ-CDM model, where Λ represents the cosmological constant and CDM denotes cold dark matter) mainly suffers from two drawbacks: the fine-tuning problem and the cosmic-coincidence problem. Another important downside of the Λ-CDM model, from the observational perspective, is the discrepancy between the presently observed value of Hubble's constant and the value predicted by theory. These fundamental discrepancies motivate us to study different kinds of cosmological models based on coupled field-fluid sectors. Based on these considerations, we build a theoretical framework for a coupled field-fluid sector in which the field sector is made of a non-canonical scalar field ($k$-essence) and the fluid sector is composed of pressureless dust, with the nonminimal coupling term introduced at the Lagrangian level. We employ the variational approach with respect to the independent variables, which produces modified k-essence field equations and Friedmann equations. We analyze the coupled field-fluid framework explicitly using dynamical system techniques, considering two forms of the potential: constant and inverse power-law. After examining these models, it is seen that both are capable of producing accelerating attractor solutions satisfying the adiabatic sound speed conditions.
We investigate a high-momentum regime of inflation where cosmological perturbation theory breaks down due to large inflationary quantum fluctuations, leading to the formation of primordial black holes (PBHs). We find that, in this regime, the Bardeen potential takes large negative values, causing a gravitational instability conducive to the formation of PBHs. We use three self-consistent differential equations to study the dynamical evolution in $k$-space in the spatially flat gauge, i.e. $\delta\phi\neq0$, which exhibits the role of the inflaton perturbation as well as that of the background metric in the formation of PBHs. We find that the $\alpha$-attractor potentials favored by the PLANCK-18 data help to create this gravitational instability, i.e. a large value of the Bardeen potential, in the region k ~ $10^{13}$ Mpc$^{-1}$, and the density contrast exceeds 0.41, indicating the formation of PBHs. Further, we calculate $\sigma(M)$, $\beta(M)$, $f_{PBH}(M)$, and the masses of these PBHs with a new method, obtaining values consistent with the LISA, WD, NS, DECIGO/AI, FL, and SIGW forecasts. The calculated PBH masses lie in the range $1.35 \times 10^{-13} - 2.60 \times 10^{-16}\, M_\odot$, and $f_{PBH}$ lies between $0.3\%$ and $36\%$, indicating the fraction of dark matter present in the form of PBHs.
Particulate dark matter captured by a population of neutron stars distributed around the galactic center while annihilating through long-lived mediators can give rise to an observable neutrino flux. We examine the prospect of an idealised gigaton detector like IceCube/KM3NeT in probing such scenarios. Within this framework, we report an improved reach in spin-dependent and spin-independent dark matter nucleon cross-sections below the current limits for dark matter masses in the TeV-PeV range.
The recoil threshold of direct detection (DD) experiments limits the mass range of dark matter (DM) particles that can be detected, with most DD experiments being blind to sub-MeV DM. However, these light DM particles can be boosted to very high energies via collisions with energetic cosmic ray electrons, which allows them to induce detectable recoils in the targets of direct detection experiments. We derive constraints on the DM-electron scattering cross section using XENONnT and Super-Kamiokande data. Vector and scalar mediators are considered, in both the heavy and light regimes. We discuss the importance of including energy-dependent cross sections (due to the specific Lorentz structure of the vertex) in our analysis, and show that the bounds can differ significantly from results obtained assuming a constant, energy-independent cross section, as is often done in the literature for simplicity. Our bounds are also compared with other astrophysical and cosmological constraints as well as with bounds from collider experiments.
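For orientation, the maximal kinetic energy that a cosmic-ray electron of kinetic energy $T_e$ can transfer to a DM particle of mass $m_\chi$ at rest follows from standard elastic two-body kinematics,
$$
T_{\chi}^{\rm max} = \frac{T_e^{2} + 2 m_e T_e}{\,T_e + \dfrac{(m_e + m_\chi)^{2}}{2 m_\chi}\,},
$$
which is what allows sub-MeV DM to be boosted above the recoil thresholds of direct detection experiments.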
We discuss the imprint of high-scale non-thermal leptogenesis on cosmic microwave background experiments through the measurement of the spectral index ($n_s$) and the tensor-to-scalar ratio ($r$), which is otherwise inaccessible to conventional laboratory experiments. We argue that the non-thermal production of the baryon (lepton) asymmetry from tree-level inflaton decay is sensitive to the reheating dynamics of the Universe after the end of inflation. This dependence leaves a detectable imprint on the $n_s-r$ plane, which is well constrained by the Planck experiment. We investigate two separate cases: (i) the inflaton decays dominantly to radiation, and (ii) the inflaton does not decay to radiation at tree level. We obtain the corresponding estimates for $n_s$ and $r$ and find the latter case to be more predictive in view of recent Planck/BICEP data. The method presented here is generic and can be applied to various single-field inflationary models provided the conditions for non-thermal leptogenesis are satisfied.
In the Standard Model a dark matter candidate is missing, but it is relatively simple to enlarge the model with one or more suitable particles. We consider in this paper one such extension, inspired by simplicity and by the goal of solving more than just the dark matter issue. Indeed, we consider a local $U(1)$ extension of the SM providing an axion particle to solve the strong CP problem and including RH neutrinos with appropriate mass terms. One of the latter is decoupled from the SM leptons and can constitute stable sterile-neutrino DM. In this setting, the PQ symmetry arises only as an accidental symmetry, but its breaking by higher-order operators is sufficiently suppressed to avoid introducing a large $\theta$ contribution. The axion decay constant and the RH neutrino masses are related to the same v.e.v.s and the PQ scale, and both DM densities are determined by the parameters of the axion and scalar sector. The model generically predicts a mixed dark matter scenario with both axion and sterile-neutrino DM, characterised by a reduced density and observational signals from each single component.
A decade after the Higgs boson discovery by the ATLAS and CMS experiments at the LHC, the subsequent observation of ttH events, and studies of the Higgs decaying into pairs of W (Z) bosons and $\tau$ leptons, physics beyond the Standard Model still remains a puzzle. Vector-like quarks (VLQs) are hypothetical spin-1/2 particles whose left- and right-handed components transform in exactly the same way under the $SU(3)_C\times SU(2)_L\times U(1)_Y$ group. They are key ingredients of various BSM models, postulated to solve the hierarchy problem and stabilize the Higgs mass while evading constraints from Higgs cross section measurements. This talk will present the current status of the search for VLQs (T$^\prime$) decaying to a top quark ($t\rightarrow Wb\rightarrow l\nu b$) and a Higgs boson ($H\rightarrow WW^*\rightarrow 4q$) at the CMS experiment with a total integrated luminosity of 137 fb$^{-1}$ collected at the LHC. We will also discuss how jet substructure techniques can be used to identify the decays of the Higgs bosons, detailing a new Higgs$\rightarrow4q$ tagger based on a deep neural network.
In this work, we consider an extension of the Standard Model (SM) with an $SU(2)_L$-singlet vector-like quark (VLQ) of electric charge $Q=+2/3$. The model also contains an additional local $U(1)_d$ symmetry, whose gauge boson is the dark photon; the VLQ is charged under the new $U(1)_d$ gauge group while all SM particles are neutral. Even though the VLQ in this model possesses many properties qualitatively similar to those of the traditional top partner ($T_p$), there are some compelling differences. In particular, its branching ratios to the traditional modes ($T_p \to bW, tZ, th$) are suppressed, which helps evade many of the existing bounds, mainly from the LHC experiments; in earlier work such a VLQ has been referred to as a "maverick top partner". It has been shown that the top partner in this model predominantly decays to a top quark and a dark photon/dark Higgs pair ($T_p \to t\gamma_d,~th_d$) over a large region of the parameter space. The dark photon can be made invisible and consequently gives rise to a missing transverse energy ($\not\!\!{E}_{T}$) signature at the LHC detectors. We focus on the LHC signatures and future prospects of such top partners. In particular, we study the $t\bar{t}+\not\!\!{E}_{T}$ and $t+\not\!\!{E}_{T}$ signatures at the LHC via pair and single production of the top partner, respectively, at 13 and 14 TeV centre-of-mass energies, assuming that the dark photon either decays into an invisible mode or is invisible at the length scale of the detector. We show that one can exclude $\sin\theta_L \sim 0.025$ (0.05) for $m_{_{T_p}} \leq$ 2.0 (2.6) TeV at $\sqrt{s}=14$ TeV with an integrated luminosity of 3 ab$^{-1}$ using the single top partner production channel.
The recent discrepancy in the measurement of $W$ boson mass as reported by the CDF collaboration and the longstanding anomalies in muon g-2, $R_{K^{(*)}}$ and $R_{D^{(*)}}$ observables suggest a promising path towards new physics. Leptoquarks (LQs) are known for explaining the anomalies separately in the current literature. However, these anomalies may be intertwined at a fundamental level, and there is a single new-physics explanation. We present a simple model with two TeV-scale scalar LQs, a weak-singlet $S_1$ and a weak-triplet $S_3$, and an economic flavor ansatz that can explain all the above anomalies simultaneously without violating experimental observations, including the bounds from the LHC. For our purpose, the LQs can be as light as $1.5$ TeV making the model directly testable at the LHC. In addition to this, we discuss their collider phenomenology.
Direct searches for vector-like quarks (VLQs) generally assume that they decay exclusively to a third-generation quark and a heavy vector boson or a Higgs boson. The current mass limits on the VLQs lie between $1.2-1.6$ TeV for various weak representations and decays. These limits can relax drastically if the VLQs have additional decay(s). Here, we discuss a scenario where the VLQs decay substantially to a new singlet scalar (or pseudoscalar) that couples exclusively to the VLQs. Such a singlet state is well motivated in many BSM models. We reinterpret the current bounds on the VLQs in this scenario and chart a model-agnostic roadmap to search for VLQs decaying to a singlet (pseudo)scalar at the LHC. We discuss the possible signatures and identify a representative benchmark parameter set. We also discuss strategies to isolate the VLQ signatures amidst the huge SM background and their prospects at the HL-LHC.
In this paper, we have studied a mechanism that is capable of naturally producing hierarchical masses. Starting from structures in abstract spaces, the SM neutrino masses have been produced. We have also studied the possibility of this mechanism generating the PMNS matrix upon introducing randomness into the system. The flexibility of this mechanism with respect to the nature of the underlying structures in abstract spaces has also been explored.
The ATLAS and CMS collaborations have measured the Higgs boson in a variety of production and decay modes. These measurements still leave room for exotic Higgs boson couplings proposed by simple extensions of the Standard Model (SM). In recent years much effort has been put into looking for the 125 GeV Higgs boson decaying to two light pseudoscalars which then decay to $b\bar{b}$, $\mu\mu$, $\tau\tau$, etc. In this talk we will present results based on a search performed in the context of a two-Higgs-doublet model extended with a scalar (2HDM+S), using p-p collision data collected by the CMS experiment in Run 2 of the LHC in different final states.
Among micropattern gas detectors, the Gas Electron Multiplier (GEM) is a proven amplification method for the position detection of ionising radiation, such as charged particles, photons, X-rays, and neutrons. GEM detectors have been used in high energy physics experiments due to their excellent spatial and time resolution, high-rate capabilities, and flexibility in design. The principle is based on a novel concept of charge amplification in gas, where GEM foils placed inside the detector enable charge amplification in the presence of an electric field. A GEM is constructed from a thin polymer foil coated with metal that has a dense concentration of chemically etched holes. The microscopic structure of the GEM provides a spatial resolution of better than 100 $\mu$m, which makes it a potential candidate for imaging. An effort is being made to use a triple GEM as an imaging detector for medical applications as well as for scanning commercial containers, since such scanning devices can be constructed to a large size using GEMs. The hit location of the incoming radiation and the avalanche charge deposition on the readout plane are used to reconstruct an image. The experiment is carried out with X-rays at a flux of $\sim 1000$ Hz/cm$^2$, and cluster-size and energy-deposition cuts are implemented to extract images with improved resolution. Object and material identification and dimension measurements have also been carried out from the reconstructed image. Our results show a sub-millimetre resolution for determining the dimensions of the scanned object and less than 1% uncertainty in the identification of the material. I will be presenting an imaging technique using GEM along with the current results and possible improvements for future research.
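For illustration, a minimal sketch of the charge-weighted image reconstruction step described above is given below, assuming the hits are available as a NumPy structured array with fields x, y, charge, cluster_size and energy; the field names, bin ranges and cut values are placeholders, not the actual analysis configuration.
\begin{verbatim}
import numpy as np

def reconstruct_image(hits, nbins=200, extent=(0.0, 100.0),
                      min_cluster=2, e_window=(4.0, 8.0)):
    """Charge-weighted occupancy map of the GEM readout plane.

    hits: structured array with fields x, y (mm), charge (ADC counts),
    cluster_size and energy (keV).  Cluster-size and energy-deposition
    cuts are applied before histogramming (illustrative values only).
    """
    sel = ((hits["cluster_size"] >= min_cluster)
           & (hits["energy"] > e_window[0])
           & (hits["energy"] < e_window[1]))
    h = hits[sel]
    img, xedges, yedges = np.histogram2d(
        h["x"], h["y"], bins=nbins, range=[extent, extent],
        weights=h["charge"])
    return img, xedges, yedges
\end{verbatim}
The resulting 2D map can then be thresholded and segmented to measure object dimensions and compare material responses.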
Each High Energy Physics (HEP) experiment has its unique research motivation. The distinctive experimental goals further decide the various sets of requirements and design criteria for the front-end electronics (FEE) and the data acquisition (DAQ) at the back end. The main purpose of the DAQ is to receive the data with high reliability from the FEE near the HEP detector, and then transfer it on to the back-end DAQ servers. With the recent advancements in silicon technology, Field Programmable Gate Array (FPGA) based DAQ provides the opportunity to implement detector-specific logic cores with a reconfigurable architecture. For the long-distance transmission of data from the FEE located near the detectors to the FPGA-based DAQ located far away from the detectors, low voltage differential signals (LVDS) are preferred. We will present the design and development of an FPGA Mezzanine Card (FMC) based LVDS interface board that is capable of receiving the data with high reliability from the detector FEE and is pluggable directly onto the FPGA cards for acquisition and processing. The card is designed to provide access to the LVDS pin pairs on the FPGA boards through the FMC connector. We will summarize the technical challenges in the development of such an FMC-based interface card, points of uncertainty and their probable solutions. The use is not restricted to particle physics; the customized design also fits well for industrial applications like muon tomography, medical imaging and future HEP experiments.
The Gas Electron Multiplier (GEM) is one of the most suitable gaseous detectors for tracking devices in high-rate heavy-ion (HI) experiments, owing to its high rate-handling capability and good spatial resolution. Performance studies, including the detector efficiency, time resolution and discharge probability, as well as radiation-induced effects on the chamber such as the charging-up effect and long-term stability, are the most important aspects that need to be investigated before using the detector in any experiment. In this work, all of the above-mentioned aspects are investigated for triple GEM chamber prototypes operated with premixed Ar/CO2 gas mixtures in different volume ratios. Cosmic ray muons are used to measure the efficiency of the chamber. The time resolution of the chamber is investigated with a $^{137}$Cs gamma source of energy 662 keV. The radiation-induced effects on the chamber are investigated using a $^{55}$Fe X-ray source of characteristic energy 5.9 keV. The discharge probability of the chamber is measured at the CERN-SPS beam line facility. The details of the experimental setup, the method of measurements and the experimental results will be presented.
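As a simple illustration of the efficiency measurement mentioned above, the sketch below computes the chamber efficiency and its binomial uncertainty from the number of reference cosmic-muon tracks and the number of those with a matched hit in the chamber; the numbers and the matching criterion are hypothetical, not the actual measurement values.
\begin{verbatim}
import numpy as np

def chamber_efficiency(n_reference, n_matched):
    """Efficiency of the chamber under test from cosmic-muon tracks.

    n_reference: tracks selected by the external trigger/tracker telescope
    n_matched:   those tracks with a matched hit in the triple-GEM chamber
    Returns the efficiency and its binomial uncertainty.
    """
    eff = n_matched / n_reference
    err = np.sqrt(eff * (1.0 - eff) / n_reference)
    return eff, err

# hypothetical numbers, for illustration only
print(chamber_efficiency(12500, 11930))
\end{verbatim}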
A real-sized trapezoidal Resistive Plate Chamber (RPC) has been developed for the Muon Chamber (MuCh) detector set-up in the upcoming Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR), Darmstadt, Germany. The detector has been tested for its performance with a dedicated self-triggered electronics and DAQ chain in the presence of a very harsh photon environment at the GIF++ facility, CERN, Switzerland, during the November 2021 beam time, followed by a particle-rate-handling capability test at the mCBM set-up at GSI, Darmstadt, Germany. At GIF++, the detector was tested for its muon detection efficiency and other related properties in the presence of different photon fluxes incident on it as background, ranging from 0 MHz/cm$^2$ to 2.72 MHz/cm$^2$. Voltage scans and threshold scans were performed at different locations of the incident muon beam on the detector. Previously, the performance of the detector was tested in the local laboratory with cosmic rays to optimize its operational parameters.
The various successful test results and future perspectives of the development will be presented.
The CMS Level-1 trigger, including the calorimeter trigger, will receive a massive upgrade to avail of the benefits and tackle the challenges posed by the High-Luminosity LHC (HL-LHC). The calorimeter trigger is planned to employ high-speed optical links (~28 Gbps) and large Xilinx UltraScale+ FPGAs to meet the high bandwidth and parallel computing demands of the HL-LHC. The system will handle 25 times more granularity and is intended to process the information from the newly commissioned high-granularity calorimeter (HGCAL). To process such highly granular information from the large calorimeter geometry, trigger algorithms are proposed that provide better performance in terms of latency, area, and scalability.
Jets produced in photoproduction in ep collisions are studied at the proposed EIC energies, $\sqrt{s}$ = 30-140 GeV. The contributions of the photoproduction subprocesses, direct and resolved, are studied separately. The data are generated using the event generators PYTHIA8 and RAPGAP. The jets are reconstructed using the longitudinally invariant kT algorithm in the standalone software package FASTJET3. The substructure of gluon- and quark-initiated jets is studied and predictions are made for the subjet multiplicities in the two subprocesses. A comparison of the jet shapes in gluon versus quark jets is presented in terms of fat and thin jets.
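For orientation, the following is a minimal sketch of the kT clustering step using the FastJet Python bindings; the input four-momenta, the radius parameter and the pT threshold are placeholders rather than the actual analysis settings.
\begin{verbatim}
import fastjet

# toy final-state particles (px, py, pz, E) in GeV; in the analysis these
# would come from PYTHIA8/RAPGAP records after the photoproduction selection
particles = [fastjet.PseudoJet(10.2, 1.3, 25.0, 27.1),
             fastjet.PseudoJet(-4.1, 7.8, 12.3, 15.2),
             fastjet.PseudoJet(0.4, -2.2, 40.0, 40.1)]

# longitudinally invariant kT algorithm with an illustrative radius R = 1.0
jet_def = fastjet.JetDefinition(fastjet.kt_algorithm, 1.0)
cs = fastjet.ClusterSequence(particles, jet_def)
jets = cs.inclusive_jets(5.0)  # keep jets with pT > 5 GeV (illustrative)

for j in sorted(jets, key=lambda j: -j.pt()):
    print(j.pt(), j.eta(), len(j.constituents()))
\end{verbatim}
Subjet multiplicities can then be obtained by reclustering the constituents of each jet at a finer resolution scale.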
Event-by-event fluctuations of the mean transverse momentum, $\langle p_{\rm T}\rangle$, of charged particles produced in Pb-Pb and Xe-Xe collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV and $\sqrt{s_{\rm NN}}$ = 5.44 TeV, respectively, are studied as a function of the charged-particle multiplicity using the ALICE detector at the LHC. Dynamical fluctuations are observed in both collision systems which indicate correlated particle emission. The central Pb-Pb and Xe-Xe collisions show a significant reduction of the fluctuation in comparison to peripheral collisions and are in qualitative agreement with previous measurements in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV. The results are compared with the HIJING model. A clear deviation from a simple superposition scenario, where the final state particles are produced from the superposition of particle emitting sources, is observed.
The event-by-event fluctuations of conserved quantities, such as baryon number, strangeness and electric charge, in ultrarelativistic heavy-ion collisions are related to the thermodynamic properties of the produced hot and dense system and may reveal the properties of the quark--gluon plasma and the QCD phase diagram. In the present work, the net-charge fluctuations are studied in terms of the observable $\nu_{dyn}[+,-]$, which is a measure of the relative correlation strength of particle pairs. The observable $\nu_{dyn}[+,-]$ is robust against detector efficiency losses and, after proper scaling, becomes equivalent to the strongly intensive quantity $\Sigma$. The values of $\nu_{dyn}[+,-]$ are calculated for experimental data on pp collisions at $\sqrt{s}$ = 5.02 TeV, p--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV and Xe--Xe collisions at $\sqrt{s_{\rm NN}}$ = 5.44 TeV using the ALICE detector. The observed dependence of $\nu_{dyn}[+,-]$ on charged particle density shows a regular smooth evolution of net-charge fluctuations from small to large collision systems. Moreover, the observed negative values of $\nu_{dyn}[+,-]$ indicate the dominance of the correlation between oppositely charged particle pairs compared to that arising from like-sign charge pairs. These findings are compared to predictions from Monte Carlo models such as PYTHIA 8, HIJING and EPOS-LHC. The effect of the kinematic acceptance has also been investigated by examining the dependence of the scaled $\nu_{dyn}[+,-]$ values on the transverse momentum range as well as the width of the pseudorapidity window. The effect of hadronic resonance decays is also studied by comparing the experimental findings with the predictions of the HIJING model.
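For reference, the observable used above can be written in terms of event-averaged multiplicities as $\nu_{dyn}[+,-] = \langle N_+(N_+-1)\rangle/\langle N_+\rangle^2 + \langle N_-(N_--1)\rangle/\langle N_-\rangle^2 - 2\langle N_+ N_-\rangle/(\langle N_+\rangle\langle N_-\rangle)$. The short sketch below evaluates this standard definition from per-event multiplicities; the toy input is only for illustration.
\begin{verbatim}
import numpy as np

def nu_dyn(n_plus, n_minus):
    """nu_dyn[+,-] from per-event multiplicities of positive and
    negative charged particles."""
    npos = np.asarray(n_plus, float)
    nneg = np.asarray(n_minus, float)
    mp, mm = npos.mean(), nneg.mean()
    return (np.mean(npos * (npos - 1.0)) / mp**2
            + np.mean(nneg * (nneg - 1.0)) / mm**2
            - 2.0 * np.mean(npos * nneg) / (mp * mm))

# toy usage: multiplicities correlated through a common source
rng = np.random.default_rng(0)
src = rng.poisson(30, 100000)
print(nu_dyn(src + rng.poisson(5, 100000), src + rng.poisson(5, 100000)))
\end{verbatim}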
Two-particle charge-dependent correlations (balance functions) are sensitive to the production and transport of conserved quantum numbers in the medium created in hadronic collisions. In this contribution, recent ALICE measurements of the balance functions of charge, strangeness, and baryon numbers are presented. Balance functions for all combinations of identified charged-hadron (π, K, p) pairs are calculated in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV as a function of collision centrality. The balancing in azimuthal angle and rapidity is expected to provide information about quark diffusion and delayed hadronization, respectively. For the latter, a possible two-stage quark-production scenario --- early production of strange quarks and late production of light quarks --- is discussed. In addition, balance function integrals of (un)identified hadron pairs as a function of collision centrality, which provide the information about different pairing probabilities, are calculated for the first time.
To investigate further the strangeness enhancement with multiplicity in small systems, recent measurements of how the production of doubly-strange Ξ baryons is balanced with mesons (strange kaons and non-strange pions) and baryons (Ξ, Λ, and p) in pp collisions at $\sqrt{s}$ = 13 TeV are shown. The balance is studied by triggering on Ξ baryons and subtracting the same-quantum-number from the opposite-quantum-number per-trigger yields. In particular, the multiplicity dependence is studied in order to identify if the same strangeness production mechanism is at work in low- and high-multiplicity pp collisions. The results are compared to predictions from Monte Carlo models with various tunes of PYTHIA 8 (Lund string-based approach) and EPOS LHC (based on the core-corona approach).
Lattice QCD predicts an ordering of the ratios of baryon number fluctuations in the vicinity of the critical point in the QCD phase diagram, i.e., $\frac{\chi_6}{\chi_2}<\frac{\chi_5}{\chi_1}<\frac{\chi_4}{\chi_2}<\frac{\chi_3}{\chi_1}$, where $\chi_n$ is the $n$th-order cumulant of the baryon number fluctuation. Taking the analog of the baryon density as the order parameter in a spin model, these inequalities can be tested. We have used two different models, the Ising model and the three-state Potts model, for this study. Simulations are performed on 2D square lattices of different sizes using the Wolff algorithm. The cumulants of the order parameter are obtained up to sixth order in both these models near their corresponding critical temperatures. The size dependence of the peaks/dips of the higher-order cumulants appears to decrease with increasing order of the cumulants. The consequences of the results for the QCD case are discussed.
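As a sketch of how the higher-order cumulants can be obtained from such simulations, the snippet below computes $\kappa_1$-$\kappa_6$ of the order parameter from its Monte Carlo time series using the standard relations between cumulants and central moments; the input array is a placeholder for the Wolff-updated magnetisation samples.
\begin{verbatim}
import numpy as np

def cumulants(samples, max_order=6):
    """Cumulants kappa_1..kappa_6 of the order parameter (e.g. the
    magnetisation per site) from a Monte Carlo time series."""
    x = np.asarray(samples, dtype=float)
    mean = x.mean()
    mu = {n: np.mean((x - mean) ** n) for n in range(2, max_order + 1)}
    kappa = {1: mean,
             2: mu[2],
             3: mu[3],
             4: mu[4] - 3.0 * mu[2] ** 2,
             5: mu[5] - 10.0 * mu[3] * mu[2],
             6: (mu[6] - 15.0 * mu[4] * mu[2]
                 - 10.0 * mu[3] ** 2 + 30.0 * mu[2] ** 3)}
    return kappa

# usage: k = cumulants(magnetisation_series); ratio_42 = k[4] / k[2]
\end{verbatim}
Ratios such as $\kappa_4/\kappa_2$ and $\kappa_6/\kappa_2$ computed this way near the critical temperature can then be compared across lattice sizes.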
The Taylor expansion of thermodynamic observables at a finite baryon chemical potential $\mu_B$ is a well-known approach to circumvent the fermion sign problem. The reliability of a Taylor estimate is determined by the radius of convergence, a reasonable estimate of which requires sufficiently high order calculations in $\mu_B$. But, owing to the associated difficulty and limited precision in calculating these higher-order Taylor coefficients, it becomes essential to look for alternative expansion schemes. Exponential resummation to all orders in $\mu_B$ is one such promising alternative scheme, which has been recently proposed in Phys. Rev. Lett. 128, 022001 (2022). Unfortunately, the resummation formulation gets affected by the appearance of biased estimates. The effects of these estimates can become very drastic and can radically misdirect the calculations at higher values and orders of $\mu$, and also with increasing order of the $\mu$ derivatives of the free energy. In this work, we present a cumulant expansion procedure that allows one to investigate and regulate these biased estimates at different orders in the isospin chemical potential $\mu_I$. We find that the unbiased estimates in the cumulant expansion can truly capture the genuine higher-order stochastic fluctuations of the higher-order correlation functions, which get suppressed in exponential resummation. Finally, we discuss an unbiased formalism of the exponential resummation which, when expanded in the form of a series, can exactly replicate the Taylor series up to a desired power in $\mu_B$. This enables us to regain the knowledge of the reweighting factor and, most importantly, retrieve the partition function and many of its important properties, which get entirely lost while implementing the cumulant expansion scheme.
The hadron resonance gas (HRG) model, which considers a grand canonical ensemble of all the experimentally established hadrons and resonance states, is very successful in reproducing the LQCD results for the hadronic phase of strongly interacting matter. Various extensions of the HRG model have been made to improve its agreement with LQCD results. One such extension is the implementation of a van der Waals type of interaction between the baryons, known as the vdW-HRG model. In this model, repulsive and attractive interactions are incorporated through the van der Waals constants. Phase transition and criticality are inbuilt in the van der Waals equation and depend on the choice of the van der Waals constants. In the literature, the van der Waals constants are extracted by comparing with the lattice results. The extracted values of the van der Waals constants also depend on the hadronic list used in the HRG model. In this work, we study the effect of the inclusion of the Hagedorn mass spectrum along with the experimentally established hadrons. The Hagedorn mass spectrum is commonly used to compensate for the missing higher resonance states that are yet to be confirmed experimentally. We also compare our results after updating the hadronic list with the Quark Model predicted hadronic states. We find that the vdW-HRG model with the Hagedorn states describes the lattice results best for both zero and finite chemical potential. The Hagedorn states have a significant influence on the van der Waals parameters and hence on the thermodynamic and transport quantities. We also infer that there is a strong dependence of the van der Waals parameters on the chemical potential.
The strong statistical significance of an observed electron-like event excess in the MiniBooNE (MB) experiment, along with an earlier similar excess seen in the Liquid Scintillator Neutrino Detector (LSND), when interpreted in conjunction with recent MicroBooNE results, may have brought us to the cusp of new physics discoveries. This has led to many attempts to understand these observations, both for each experiment individually and in conjunction, via physics beyond the Standard Model (SM). We provide an overview of the current situation, and discuss three major categories under which the many proposals for new physics fall. The possibility that the same new, non-oscillation physics explains both anomalies leads to new restrictions and requirements. An important class of such common solutions, which we focus on in this work, consists of a heavy O(MeV--sub-GeV) sterile neutral fermion produced in the detectors (via up-scattering of the incoming muon neutrinos), and subsequently decaying to photons or $e^+e^-$ pairs which mimic the observed signals. Such solutions are subject to strong demands from a) cross section requirements which would yield a sufficient number of total events in both LSND and MB, b) requirements imposed by the measured energy and angular distributions in both experiments and, finally, c) consistency and compatibility of the new physics model and its particle content with other bounds from a diverse swathe of particle physics experiments. We find that these criteria often pull proposed solutions in different directions, and stringently limit the viable set of proposals which could resolve both anomalies. Our conclusions are relevant for both the general search for new physics and the ongoing observations and analyses of the MicroBooNE experiment.
Neutrino oscillation has been one of the most promising sources of physics beyond the standard model in many experimental searches so far. Various short- and long-baseline experiments have been developed using the most advanced detector technologies to detect this elusive particle. NOvA is one such experiment, designed with the initial aim of looking at the possibility of the oscillation of muon neutrinos to electron neutrinos. In addition to the study of standard three-flavor neutrino oscillation, the NOvA experimental facility allows one to study physics beyond the standard model as well, such as the search for the sterile neutrino. We use neutral current events for the sterile neutrino analysis. It is one of the three interaction samples in the detector, namely $\nu_{\mu}$-CC, $\nu_{e}$-CC, and NC interactions, where CC is charged current and NC is neutral current. Furthermore, for this analysis, we have been using the joint two-detector fit approach instead of the previously used near-to-far detector ratio approach, giving an additional advantage of enhanced reach in the $\Delta m_{41}^2$ space.
Based on the properties of NC events, various selection criteria have been applied to distinguish the true NC events from the background events for the analysis. Since the NOvA far detector is located on the surface and is continuously exposed to cosmic ray particles, cosmics are a dominant background for the sterile analysis. The strategy adopted for the reduction of the cosmic background and the preliminary results from this analysis will be presented in this talk.
Results of experiments like LSND and MiniBooNE hint toward the possible presence of an additional eV-scale sterile neutrino. The addition of a sterile neutrino will significantly impact the standard three-flavor neutrino oscillations, in particular giving rise to additional degeneracies due to the new sterile parameters. In our work, we investigate how a sterile neutrino influences the sensitivity to determine the octant of $\theta_{23}$. We have analyzed the octant sensitivity at very long baselines where the resonance matter effect comes into play. Our analytical expressions for the neutrino oscillation probabilities, obtained using the $\Delta_{21}=0$ approximation, help us in explaining this. We have determined the octant sensitivity using atmospheric and accelerator neutrinos at DUNE with a non-magnetized Liquid Argon Time Projection Chamber (LArTPC) detector, as well as with the inclusion of a process to differentiate between $\mu^+$ and $\mu^-$ using muon capture and decay.
The information about Earth's internal structure comes from indirect probes such as seismic studies and gravitational measurements. The density distribution inside the Earth, incorporated in the Preliminary Reference Earth Model (PREM), is estimated from model-dependent empirical relations with assumptions based on the Earth's temperature, pressure, composition, and elastic properties, which give rise to uncertainties in the PREM profile. Atmospheric neutrinos, through weak interactions, provide a unique avenue to explore Earth's internal structure, complementary to seismic studies and gravitational measurements, which are based on electromagnetic and gravitational interactions, respectively. These complementary approaches would pave the way for "multi-messenger tomography of Earth". Atmospheric neutrinos offer the possibility of validating the Earth's core, measuring the location of the Core-Mantle Boundary (CMB), and probing dark matter inside the core in a unique way through Earth matter effects in neutrino oscillations. We explore these possibilities using the proposed Iron Calorimeter (ICAL) detector at the India-based Neutrino Observatory (INO). With 500 kt$\cdot$yr exposure, we show that the presence of Earth's core can be independently confirmed at ICAL with a median $\Delta \chi^2$ of 7.45 (4.83), assuming normal (inverted) mass ordering. We demonstrate that the ICAL detector with 1000 kt$\cdot$yr exposure would be able to locate the CMB with a precision of about $\pm$250 km at 1$\sigma$ confidence level. We find that for all these Earth's matter-driven measurements, the charge identification (CID) capability of the ICAL detector is crucial.
Neutrinoless double-$\beta$ decay (0$\nu \beta \beta$) is the most sensitive experimental probe to answer the question of whether neutrinos are Majorana or Dirac in nature. The observation of 0$\nu \beta \beta$ would not only establish the Majorana nature of neutrinos, but also provide a measurement of the effective mass and probe the neutrino mass hierarchy. The latest precise neutrino oscillation data on the mass differences and mixings among the three neutrino mass eigenstates, along with cosmology data, provide a slight preference for the "normal hierarchy" (NH) over the "inverted hierarchy" (IH) in the structure of the neutrino mass eigenstates. In this work, we address the issue of the required exposures (target mass $\times$ data-taking time) of 0$\nu \beta \beta$ projects versus the expected background B0 before the experiments are performed. Background reduction can substantially alleviate the necessity of unrealistically large exposures as the normal mass hierarchy (NH) is probed. The nondegenerate (ND)-NH can be covered with an exposure of order 100 ton-year, which is only an order of magnitude larger than those planned for next-generation projects, provided that the background could be reduced by a factor of order $10^{-6}$ relative to the current best levels [1]. It follows that background suppression will play an increasingly important and investment-effective role in covering ND-NH in future 0$\nu\beta\beta$ experiments.
[1] M. K. Singh et al., Phys. Rev. D
The study of neutrino interactions with nucleons and nuclear targets is quite important, in the few GeV energy region, in the analysis of the neutrino oscillation experiments being performed with accelerator and atmospheric neutrinos. The present goal of the experimenters is to measure with better precision the various neutrino oscillation parameters, like the mixing angles, the mass-squared-difference of the neutrino mass eigenstates, CP violating phase $\delta$ in the lepton sector as well as to determine the mass hierarchy of the neutrino mass eigenstates, for which the simultaneous knowledge of neutrino and antineutrino cross sections in the same energy region, for a given nuclear target is required. In the few GeV energy region of neutrinos and antineutrinos, the contribution to the total scattering cross section comes from the quasielastic, inelastic, and deep inelastic scattering processes. However, in the case of antineutrinos, additional contribution comes from the single hyperon production, which although Cabibbo suppressed is significant in the antineutrino energy region of $\sim$ 0.5 -- 1.5 GeV. The production of hyperons in the neutrino induced reactions is forbidden by the $\Delta S = \Delta Q$ rule. Moreover, with the increase in (anti)neutrino energy~($E_{\nu} \approx 1.5$~GeV), the production of hyperons proceeds via the associated particle production
\begin{equation}
\nu_{l}(\bar{\nu}_l) + N \longrightarrow l^{\mp} + Y+K,
\end{equation}
where $l=e,\mu,\tau$ and $Y=\Lambda, \Sigma^{0,-}$. The study of hyperon production in the $\Delta S=0$ and 1 channels is limited both theoretically and experimentally. Therefore, in this work, we plan to study hyperon production in (anti)neutrino induced reactions from the free nucleon target.
Apart from being significant in the analysis of neutrino oscillation parameters, the study of single hyperon production is important in its own right as it provides information about the nucleon-hyperon transition form factors at high $Q^2$, which are presently known only at low $Q^2$ from the study of the semileptonic decays of hyperons, where the symmetries of the weak hadronic current like T-invariance, G-invariance, and SU(3) symmetry could also be tested. We have studied the dependence of the total and differential cross sections on the different vector and axial-vector current form factors, including the second-class current form factors, as well as time reversal and G-parity violation in the antineutrino induced single hyperon production.
The results shall be presented for the total and differential scattering cross sections for the hyperon production in the $|\Delta S| = 0$ and 1 channels in the charged current (anti)neutrino scattering from free nucleons as well as from the nuclear targets like carbon, argon, and lead.
I will give an overview of the latest progress in the theory model space of dark matter candidates. I will in particular concentrate on a new class of dark matter theories, where dark matter is a composite particle arising through confinement in a secluded non-Abelian confining sector. I will sketch the principles behind the construction of such theories, demonstrate the use of lattice computations and sketch some of the experimental signatures.
There is overwhelming evidence for the existence of dark matter (DM) from observations at different scales, from galaxies to the whole Universe, supporting that a large fraction of the mass and energy budget cannot be explained within the standard cosmological model. Weakly Interacting Massive Particles (WIMPs) have been proposed as among the most interesting DM candidates. Different complementary techniques and strategies are being used all over the world for the detection of dark matter. In this review, I will discuss the direct detection method (elastic scattering of WIMPs off target nuclei) and indirect detection methods (production of standard particles through the annihilation of DM candidates) and will summarize the latest results and the future of the field.
Understanding the thermodynamics of strongly interacting matter described by Quantum Chromodynamics (QCD) lies at the core of explaining, e.g., the formation of an overwhelming fraction of the visible matter in the universe and the dense baryonic matter that exists at the core of neutron stars. Lattice QCD techniques have provided some spectacular results in this field, especially in the regime of small baryon densities. Recent efforts to extend lattice calculations to higher baryon densities by circumventing the infamous sign problem will be discussed. Moreover, lattice techniques are entering an era of high precision where not only can the bulk properties of different phases be calculated, but they can also show us a path to understanding the underlying mechanisms (chiral symmetry breaking and confinement) that drive the phase transition.
The Galactic cosmic ray (GCR) particles in the inner heliosphere are convected outward by the solar wind while they diffuse into the inner heliosphere along the interplanetary magnetic field direction. A balance between the two processes produces an anisotropic flow of the GCR particles. Ground-based neutron or muon detectors can observe it as a 24-hour periodic variation in their counting rates due to the rotation of the Earth. The GRAPES-3 muon telescope located at Ooty, India, is the largest in the world, and it records four billion muons daily, allowing it to probe tiny variations in the cosmic ray flux caused by solar activities. The daily variation, called the solar diurnal anisotropy (SDA), observed in the GRAPES-3 muon data was modeled using the Fourier series technique, and the respective amplitudes were extracted for 21 years (2001-2021) of data. The yearly mean amplitude of the SDA was found to have a strong correlation with the interplanetary magnetic field and a reasonably good correlation with other solar parameters, which will be presented during the symposium.
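A minimal sketch of the first-harmonic (24-hour) extraction described above is given below, assuming the muon rate has already been folded into 24 hourly bins of local solar time; the binning and normalisation are illustrative and not the actual GRAPES-3 analysis chain.
\begin{verbatim}
import numpy as np

def diurnal_amplitude(hourly_rate):
    """First-harmonic amplitude (fractional) and phase (hours) of the
    solar diurnal variation from 24 hourly-binned muon rates."""
    rate = np.asarray(hourly_rate, dtype=float)
    rel = (rate - rate.mean()) / rate.mean()        # fractional variation
    t = 2.0 * np.pi * (np.arange(24) + 0.5) / 24.0  # bin centres (radians)
    a1 = 2.0 * np.mean(rel * np.cos(t))
    b1 = 2.0 * np.mean(rel * np.sin(t))
    amplitude = np.hypot(a1, b1)
    phase_hours = (np.arctan2(b1, a1) % (2.0 * np.pi)) * 24.0 / (2.0 * np.pi)
    return amplitude, phase_hours
\end{verbatim}
Repeating the extraction year by year gives the yearly mean amplitudes whose correlation with solar parameters is discussed above.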
Observation of the rotational velocity of stars in our galaxy and other gravitational effects point to the existence of a huge amount of non-luminous matter, known as dark matter. The most promising candidates for dark matter are weakly interacting massive particles (WIMPs). They naturally give the appropriate relic abundance and also appear in theories of weak-scale physics beyond the standard model. The predicted mass of WIMPs varies from a few MeV to a few hundred TeV. They can be detected via the recoil nuclei from WIMP-nucleus elastic scattering in the detector material in the direct detection process. A superheated liquid detector (SLD) with C$_2$H$_2$F$_4$ (b.p. = $-26.3^{\circ}$C) liquid has the potential to detect low-mass WIMPs. The major advantage of the SLD is that certain backgrounds can be rejected by adjusting the operating temperature and pressure of the detector. Calculations show that an SLD with C$_2$H$_2$F$_4$ liquid at an operating temperature of $60^{\circ}$C is able to detect 140 MeV, 430 MeV, and 540 MeV WIMP masses by elastic collisions with $^{1}$H, $^{12}$C, and $^{19}$F nuclei, respectively. At this high temperature, the SLD becomes sensitive to background gamma rays as well. A reduced background environment is essential for such an experiment searching for WIMPs. The initiative for the experiment has been started at the Jaduguda Underground Science Laboratory (JUSL), UCIL, Jharkhand, India, at 555 m underground from the surface. A preliminary measurement was performed at JUSL with an exposure of 101.2 g$\cdot$hr at a temperature of $24^{\circ}$C. The background count rate was observed to be 7.96 $\times$ 10$^{-2}$ (kg)$^{-1}$ and the maximum sensitivity that can be achieved at this temperature and exposure is 3.80 $\times$ 10$^{-33}$ cm$^2$ at a WIMP mass of 7 GeV. It is already established that the C$_2$H$_2$F$_4$ SLD has the potential to probe the WIMP-nucleon spin-independent cross section with a projected sensitivity level (at 90% C.L.) better than 4.6 $\times$ 10$^{-41}$ cm$^2$ at WIMP masses down to $\sim$4 GeV with a total exposure of $\sim$1000 kg$\cdot$day for a zero-background consideration. R&D has been started to increase the mass of the detector, and the aim is to reach this highly sensitive region in several steps by increasing the exposure of the detector and lowering the backgrounds in the near future.
Instrumenting a gigaton of ice at the South Pole, the IceCube Neutrino Observatory can probe neutrino interactions and properties at high energies with large statistics. This is possible due to the existence of a flux of high-energy astrophysical neutrinos, discovered by IceCube in 2013-14, and the prevalence of neutrinos produced in cosmic ray interactions in the upper atmosphere. Recently, promising candidate sources have emerged for the astrophysical neutrino flux, primarily due to real time multi-messenger followup efforts, while measurements have also been performed of the neutrino-nucleon cross section above a TeV as well as neutrino oscillation parameters using hundreds of thousands of events. IceCube has also detected its first electron antineutrino candidate near the Glashow resonance energy of 6.3 PeV. This talk will highlight recent results and illustrate the unique capabilities of this detector, motivating the proposed IceCube Gen2 extension and concluding with a discussion of opportunities for Indian participation and synergies with our homegrown capabilities and research programmes.
The measurement of the effective number of relativistic degrees of freedom ($N_{\rm eff}$) by the Planck experiment using the Cosmic Microwave Background (CMB) strongly restricts the presence of additional light particles in the early Universe. We first discuss the cosmological constraints on MeV-scale thermal dark matter from the current Planck data. Next, we consider an MeV-scale thermally decoupled non-minimal dark sector and study the impact of the dark sector dynamics on $N_{\rm eff}$ at the time of CMB formation. We find that the value of $N_{\rm eff}$ turns out to be maximal for a non-hierarchical dark sector and is within the reach of future CMB experiments.
Primordial black holes (PBHs) are one of the most well-motivated dark matter candidates, and it is important to devise new search strategies for them. Low-mass PBHs (masses between $\sim 10^{15}$ g and $10^{18}$ g) can be detected via their Hawking radiation. Evaporating PBHs inject energy into the intergalactic medium (IGM), which can significantly alter the thermal and ionization history of the Universe. At low redshifts, measurements of the Lyman-$\alpha$ forest inform us about the temperature of the IGM. In this work, we use these measurements to derive new constraints on the PBH abundance as dark matter for both non-spinning and spinning black holes.
The large area GRAPES-3 muon telescope (G3MT) is designed to record the muon component of extensive air showers (EAS), playing an important role in the determination of the composition of primary cosmic rays (PCRs) and the separation between γ-ray and cosmic-ray primaries for γ-ray astronomy. These studies require a detailed understanding of the response of the EAS components in the G3MT, which has been achieved by the development of a GEANT4-based simulation framework. We present the geometric modeling of the G3MT components, such as the proportional counters as well as the mass absorber, which is used as shielding against the electromagnetic and hadronic components and provides an energy threshold of 1 GeV $\times$ sec$(\theta)$ for muons incident at zenith angle $\theta$. We modeled the muon saturation and estimated the hadron punch-through contribution in the G3MT. We also present a comparison study between the observed and simulated muon multiplicity distributions (MMD), assuming the PCR composition from the H4a model.
The ATLAS and CMS experiments have analyzed about 140 fb$^{-1}$ of data collected during Run 2 of the LHC. There are several beyond-standard-model searches that are being or will be carried out by these experiments. A brief summary of recent supersymmetry searches is presented. The talk discusses current limits on strongly produced SUSY particles, electroweakinos and sleptons. Some of the searches for R-parity-violating and long-lived SUSY particles are also discussed. Potential areas for improvement using Run 3 data will also be presented at the end.
Rare baryonic decays induced by flavour-changing neutral currents (FCNC) have been of immense interest in recent years because of their sensitivity to new physics (NP) beyond the standard model (SM). The exploration was triggered by the observation of the $\Lambda_b \to \Lambda \mu^+ \mu^-$ transition at Fermilab [1] and at the LHCb [2]. Theoretically, these decays are also studied in different NP models [3-5]. Inspired by the results obtained for baryonic decays [3-7], we are interested in studying the polarization asymmetry for the $\Sigma_b$ baryon with the effect of NP. Various theoretical studies of the branching fractions for $\Sigma_b \to \Sigma l^+ l^-$ decays in the standard model (SM) [8] proclaim the possibility of observation of these decays at the LHC. In this work, we will mainly concentrate on the longitudinal polarization asymmetries for the muonic, electronic and tauonic channels in a family non-universal $Z'$ model [9, 10]. Asymmetry parameters characterize the angular dependence of the differential decay width with polarized and unpolarized heavy baryons. We will investigate the observables with the contribution of the $Z'$ boson. The phenomenology of the $Z'$ is one of the important sectors for the accelerator experiments. Due to its heavy mass, it may be probed in the upcoming runs of the experiments. Here, we will introduce the NP couplings and use their constrained values. We will show the variation of the observables throughout the whole allowed kinematic region. These results for $\Sigma_b$ decays will help the experimental community to observe the decays in colliders and will unlock a new horizon for the theoretical community to probe NP with heavy baryons.
References:
1. T. Aaltonen et al. [CDF Collaboration], Phys. Rev. Lett. 107, 201802 (2011) [arXiv: 1107.3753 [hep-ph]].
2. R. Aaij et al. [LHCb Collaboration], Phys. Lett. B 725, 25 (2013).
3. S. Sahoo, C. K. Das and L. Maharana, Int. J. Mod. Phys. A 24, 6223 (2009).
4. S. Sahoo and R. Mohanta, New J. Phys. 18, 093051 (2016) [arXiv: 1607.04449 [hep-ph]].
5. D. Banerjee and S. Sahoo, Chin. Phys. C 41, 083101 (2017).
6. A. Nasrullah, M. J. Aslam and S. Shafaq, Prog. Theor. Exp. Phys. 2018, 043B08 (2018).
7. R. L. Workman et al. [Particle data group], Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
8. K. Azizi, M. Bayar, A. Ozpineci, Y. Sarac and H. Sundu, Phys. Rev. D 85, 016002 (2012) [arXiv: 1112.5147 [hep-ph]].
9. P. Langacker, Rev. Mod. Phys. 81, 1199 (2009).
10. P. Maji, S. Mahata, S. Biswas and S. Sahoo, Int. J. Theor. Phys. 61, 162 (2022).
We build different gauged $U(1)_X$ models ($X = B, L, B+L$) by finding anomaly-free solutions. We analyse their phenomenological implications, focusing on proton decay. We find that the $U(1)_L$ model is the most interesting case. We analyse the possibility of the gauged $U(1)_L$ breaking to a residual $Z_3$ subgroup and show that it leads to novel proton decays. We find that the usual two-body proton decays are forbidden by the residual $Z_3$ symmetry. Instead, our model leads to n-body ($n \geq 4$) proton decays that are induced by dimension-nine effective operators, leading to novel experimental signatures.
We investigate a left-right symmetric model obeying the $SU(3)_C \otimes SU(2)_L \otimes U(1)_L \otimes SU(2)_R \otimes U(1)_R$ local gauge symmetry. We consider the fermions of the full $\bf{27}$-plet of $E_6$. One bi-doublet, two doublet and one singlet scalars are necessary for a complete symmetry breaking down to the Standard Model. Its rich particle sector is one of the reasons behind our interest in this model. We derive lower limits on the masses of some of the heavy exotic particles from the experimental data from the LHC. Among them, some of the scalars may show interesting signatures at the LHC. From a dark matter perspective, we discuss the eligibility of the two charge-neutral heavy fermions present in this model to satisfy the limits from PLANCK and direct detection experiments.
A 50 kt Iron CALorimeter (ICAL) has been proposed to study atmospheric neutrinos using Resistive Plate Chambers (RPCs) as the active detector elements. Its proposed location is an underground cavern in a mountain, to reduce the cosmic muon background. A scaled-down (1/600) version of the ICAL, called the mini-ICAL, has been constructed and has been operated at the IICHEP transit campus in Madurai for the last few years, to study the construction and performance of the detector and the ICAL electronics. The mini-ICAL comprises a 4 m × 4 m × 1.1 m magnet made of 11 layers of iron plates interleaved with RPCs of size 2 m × 2 m in two stacks in the central region of a near-uniform magnetic field. In order to study the feasibility of a shallow-depth neutrino experiment, a plastic scintillator based cosmic muon veto detector (CMVD) is being built around the mini-ICAL.
The CMVD needs to provide a muon detection efficiency of more than $99.99\%$ and a false-positive rate of less than $10^{-5}$. The CMVD consists of veto walls on three sides and the top of the mini-ICAL. The veto walls are made from extruded plastic scintillator (EPS) strips. The top wall (the roof) will have four and the side walls will have three layers of EPS strips. Scintillators in different layers are staggered by 1/3 or 1/4 of the strip width to minimise any effect of the inter-strip gaps. The front side of the mini-ICAL will not have a veto wall, in order to provide access for the maintenance of the mini-ICAL. The EPS strips are 4.5, 4.6 or 4.7 m long and 50 mm wide, and have two holes separated by 25 mm, running along the full length of the strip. A wavelength shifting fibre of 1.4 mm diameter is inserted in each hole for light collection, and a Hamamatsu SiPM with an active area of 2 mm × 2 mm is mounted at both ends of the fibre for signal readout. A total of 760 EPS strips, 7 km of fibre and 3040 SiPMs will be used.
The SiPM signals are amplified by trans-impedance amplifiers (TIA) with a gain of 1200 $\Omega$ to produce a voltage pulse. A DRS4 ASIC based readout system is being designed to sample the SiPM signals at a rate of 1 GSa/s. The region of interest of the DRS4 is chosen so as to cover the SiPM's entire signal profile, in addition to accounting for the latency of the mini-ICAL trigger. The samples are digitised on receiving a mini-ICAL trigger, and zero-suppressed data are transferred to the back-end data server. The mini-ICAL muon tracks are extrapolated to the CMVD and matched with the CMVD hits to evaluate the CMVD muon veto efficiency. An FPGA based DAQ board consisting of 40 TIAs, 5 DRS4 ASICs and a network interface chip is being designed. The board will also have an SiPM bias generator and SiPM calibration logic. 76 such boards will be required to read out the 3040 SiPMs.
This paper will discuss the design and construction of the CMVD and its readout.
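As a rough illustration of the veto-efficiency evaluation mentioned above, the sketch below counts the fraction of extrapolated mini-ICAL muon tracks that have a nearby CMVD hit; the data layout and the purely geometric matching window are hypothetical simplifications of the actual track-hit matching, which uses the strip geometry and timing.
\begin{verbatim}
import numpy as np

def veto_efficiency(track_xy, hit_xy, window=50.0):
    """Fraction of extrapolated muon crossing points (x, y on a veto
    wall, in mm) with at least one CMVD hit within `window` mm."""
    track_xy = np.asarray(track_xy, float)
    hit_xy = np.asarray(hit_xy, float)
    matched = 0
    for x, y in track_xy:
        d = np.hypot(hit_xy[:, 0] - x, hit_xy[:, 1] - y)
        if d.size and d.min() < window:
            matched += 1
    return matched / len(track_xy)
\end{verbatim}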
Efficient particle identification (PID) is crucial for Belle II as it will be dealing with a much higher event rate than Belle and, ultimately, with a larger background. PID is advantageous for suppressing background, studying rare decays, and for the flavour tagging of B mesons. We study the charged kaon and pion identification performance based on the data collected by the Belle II experiment corresponding to a luminosity of 208 fb$^{-1}$ and compare it with Monte Carlo simulations. For the study, the decay mode $D^{*+} \rightarrow D^0[K^- \pi^+ \pi^0]\pi^+$ is reconstructed, as it helps to probe the lower momentum region (< 0.5 GeV/$c$). The kaon efficiency and pion mis-identification rates are calculated in bins of momentum and polar angle for the different PID criteria.
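The binned efficiency and mis-identification extraction can be sketched as below, assuming per-track NumPy arrays of momentum, polar angle, a truth label from the $D^{*}$-tagged sample and the PID decision; the array names and binning are illustrative only.
\begin{verbatim}
import numpy as np

def pid_rates(p, cos_theta, is_true_kaon, passes_kaon_id,
              p_edges, cos_edges):
    """Kaon-ID efficiency and pion mis-ID rate in (momentum, polar-angle)
    bins.  Inputs are equal-length NumPy arrays; is_true_kaon and
    passes_kaon_id are boolean."""
    eff = np.full((len(p_edges) - 1, len(cos_edges) - 1), np.nan)
    misid = np.full_like(eff, np.nan)
    for i in range(len(p_edges) - 1):
        for j in range(len(cos_edges) - 1):
            in_bin = ((p >= p_edges[i]) & (p < p_edges[i + 1]) &
                      (cos_theta >= cos_edges[j]) &
                      (cos_theta < cos_edges[j + 1]))
            kaons, pions = in_bin & is_true_kaon, in_bin & ~is_true_kaon
            if kaons.any():
                eff[i, j] = passes_kaon_id[kaons].mean()
            if pions.any():
                misid[i, j] = passes_kaon_id[pions].mean()
    return eff, misid
\end{verbatim}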
Compressed Baryonic Matter (CBM) is a fixed-target experiment which will explore the properties of nuclear matter under extreme density. This experiment will take place at the upcoming Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany. CBM will consist of 4 stations of Muon Chambers (MuCh) sandwiched between absorber layers, and each station will consist of 3 layers of gaseous detectors of identical technology. The first 2 stations of MuCh will consist of high-rate-capability advanced gaseous detectors made with Gas Electron Multiplier (GEM) technology and the last 2 stations will be made of Resistive Plate Chamber (RPC) detectors. A first real-size prototype of a second-station GEM chamber has been designed and fabricated indigenously. The 8-layer readout PCB consists of 1824 readout pads with sizes varying progressively from 4 mm x 4 mm to 21 mm x 21 mm. The length of the trapezoidal readout PCB is around 1 metre. Due to the fabrication constraints for such a large single PCB in Indian industry, we opted for a novel technique in which two parts of the PCB were joined together. The real-size trapezoidal, single-mask GEM foils were procured from CERN. The foils were custom designed for the GEM stations of MuCh. The foils were stretched according to the "NS-2" technique, which does not use any glue. The detector was assembled in the VECC clean room (class 1000) using 3 GEM foils, the readout PCB, the drift PCB and other necessary components. As the module was made with a joined PCB, we tested for any gas leak, and it was found to be under control. An investigation of the pad noise characteristics of the readout plane was carried out, particularly at the joining position. The joined portion of readout pads showed higher noise in data and appropriate shielding was implemented. The detector was then tested with cosmic muons and we measured a detection efficiency of up to 98% and a time resolution of ~18 ns in the VECC lab. Gain and efficiency uniformity over the entire area of the module has also been studied. The detailed results on fabrication and testing will be presented and discussed.
Test results of real-size Station-1 MuCh modules in nucleus-nucleus collisions at the mini-CBM experiment at GSI
A. Agarwal$^{1,2}$, C. Ghosh$^{1,2}$, A. K. Dubey$^{1}$, J. Saini$^{1}$, E. Nandy$^{1,2}$, V. Singhal$^{1}$, V. Negi$^{1}$, S. Roy$^{3}$, S. Chatterjee$^{3}$, A. Kumar$^{4}$, C. Sturm$^{5}$, D. Emschermann$^{5}$, P. A. Loizeau$^{5}$ and S. Chattopadhyay$^{5}$
$^{1}$VECC, Kolkata, $^{2}$HBNI, Mumbai, $^{3}$Bose Institute, Kolkata, $^{4}$NISER, Jatni, $^{5}$GSI, Germany
Corresponding author. Email: a.agarwal@vecc.gov.in
The goal of the Muon Chamber (MuCh) system of the Compressed Baryonic Matter (CBM) experiment at FAIR is to measure dimuon signals arising from nucleus-nucleus collisions at lab energies of 2-11 AGeV. Tracking detectors based on the Gas Electron Multiplier (GEM) technology will be used in the first two stations of MuCh. Two such real-size GEM modules have been fabricated and commissioned for response studies in the mini-CBM (mCBM) experiment at the SIS18 beamline at GSI. mCBM is a part of the FAIR Phase-0 program of CBM and provides a dedicated facility for the test of detector subsystems, their electronics, their integrated response and high-volume data transport in a heavy-ion collision environment. The detailed setup of mCBM will be discussed.
The trapezoidal GEM modules, each having an area of ~2000 cm$^2$ and about 2.2k readout pads, have participated in the major beamtime campaigns at mCBM. Data have been acquired using the self-triggered readout electronics, STS/MuCh-XYTER. Data at different GEM voltages and for beams of varying intensities and target thicknesses have been taken. The response in terms of spill structure, cluster-size characteristics and time correlation between detector subsystems will be presented. Event reconstruction based on the time stamps of the signal hits in free-streaming data will be discussed. Recently, intensity-scan data with U-Au and Au-Au collisions have been taken, where interaction rates of about 1 MHz have been achieved. Preliminary results from these tests, which were carried out with upgraded GEM modules, will be presented. All the key test results along with the hardware design issues and modifications will be reported and discussed.
This paper explores a technique in large-area single-gap Resistive Plate Chambers (RPCs) where the position of the particle along a pickup strip is extracted by measuring the timing difference between the two ends of the strip. With precise time measurement, the position can be measured more precisely than with the conventional x-y strip readout with the same number of channels. The readout strips on either side of the RPC are kept aligned, i.e. in parallel, and the signals from both ends of the strip are read differentially to minimise noise. This technique has been successfully tested in the case of multigap RPCs in [1], with a resolution of 18 ps or 1.7 mm. It is expected that this method would also work on single-gap RPCs, as the timing-difference resolution does not depend on the intrinsic timing fluctuations of the device, since the signal is induced on both sides of a strip simultaneously. The intrinsic uncertainty due to fluctuations in avalanche formation is common to both ends and should also cancel out. A similar method has been tested on RPCs in [2], achieving a position resolution of 10.69 mm (timing-difference resolution of 150 ps), but this was done using single-ended signals with pickup strips mounted only on one side of the RPC, which limits the performance. This paper will present the development of the readout electronics and the results on position measurement from timing differences with a single-gap RPC using differential readout.
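The position reconstruction along the strip follows from the two-ended time difference: with signal propagation speed $v$, $x = v\,(t_{\rm left}-t_{\rm right})/2$, so the position resolution is $v/2$ times the timing-difference resolution. A minimal sketch is given below; the propagation speed of $\sim 0.55c$ is an assumed typical value for illustration, not a measured one.
\begin{verbatim}
import numpy as np

V_PROP = 0.55 * 299.792  # assumed signal speed along the strip, in mm/ns

def strip_position(t_left, t_right):
    """Hit position along the strip, relative to the strip centre (mm),
    from the arrival-time difference at the two ends (ns); the sign
    convention depends on which end is labelled left/right."""
    return 0.5 * V_PROP * (np.asarray(t_left) - np.asarray(t_right))

# e.g. a 150 ps timing-difference resolution corresponds to a position
# resolution of about 0.5 * V_PROP * 0.15 ns ~ 12 mm for this assumed speed
\end{verbatim}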
Thermal photons from the QGP provide important information about the interaction among the plasma constituents. The photon production rate from a thermally equilibrated plasma is proportional to the transverse spectral function $\rho_T(k_0=|\vec{k}|,\vec{k})$. Photon production rates can also be calculated from the difference between the $\rho_T$ (transverse) and $\rho_L$ (longitudinal) correlators, as $\rho_L$ vanishes at the photon point. The IR part of $\rho_T-\rho_L$ dominates, and therefore the corresponding Euclidean correlator receives most of its contribution from the IR region. We calculate the correlator of $\rho_T-\rho_L$ on $N_f=2+1$ flavor HISQ configurations with $m_l=m_s/5$ at temperatures $\sim 1.15\,T_c$ and $\sim 1.3\,T_c$. We have used various ansätze for the spectral function: 1) a polynomial ansatz for the spectral function connected to the UV perturbative region and 2) a hydro-inspired spectral function. We have also used the Backus-Gilbert method to estimate the spectral function. We will show a comparison of the photon production rate estimated from all these different methods.
In heavy-ion collisions, a strong magnetic field ($\sim$ 10$^{15}$ T) is expected to be created, which together with the presence of a non-zero vector and axial currents, gives rise to a collective excitation in the quark--gluon plasma (QGP) called the Chiral Magnetic Wave (CMW). The experimental signature of the CMW is charge-dependent elliptic flow, $v_{2}$. In particular, the normalized difference of $v_{2}$ of positive and negative charges, ($\Delta v_{2_{\mathrm{Norm}}}$), may exhibit a positive slope as a function of the asymmetry ($A_{\mathrm{ch}}$) in the number of positively and negatively charged particles in an event. However, non-CMW mechanisms such as Local Charge Conservation (LCC) intertwined with collective flow can also lead to a similar dependence of $v_{2_{\mathrm{Norm}}}$ on $A_{\mathrm{ch}}$. A similar measurement with triangular flow $v_{3}$ can provide an estimate of the effect of LCC, as it is expected to be unaffected by the CMW.
We report the ALICE measurements on $v_{2}$, $\Delta v_{2_{\mathrm{Norm}}}$, $v_{3}$ and $\Delta v_{3_{\rm{Norm}}}$ of inclusive and identified hadrons as a function of $A_{\mathrm{ch}}$ in Pb--Pb collisions. Finite slope parameters corresponding to $v_{2_{\mathrm{Norm}}}$ and $v_{3_{\mathrm{Norm}}}$ versus $A_{\mathrm{ch}}$ are measured as a function of collision centrality and compared with results from other experiments and models. In addition, the Event Shape Engineering technique is adopted for the first time to quantitatively distinguish the CMW signal and the LCC background.
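The slope parameters quoted above are obtained from the linear dependence of the normalised $v_n$ difference on $A_{\rm ch}$; a generic weighted straight-line fit, sketched below, illustrates the extraction (inputs are per-$A_{\rm ch}$-bin values and uncertainties, for illustration only).
\begin{verbatim}
import numpy as np

def slope_vs_ach(a_ch, dv_norm, dv_err):
    """Weighted linear fit of the normalised v_n difference versus A_ch;
    returns the slope and its uncertainty."""
    w = 1.0 / np.asarray(dv_err)        # np.polyfit weights are 1/sigma
    coeffs, cov = np.polyfit(a_ch, dv_norm, 1, w=w, cov=True)
    return coeffs[0], float(np.sqrt(cov[0, 0]))
\end{verbatim}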
We study the effect of the QCD critical point on thermal vorticity. To evaluate the thermal vorticity, we solve the equations of relativistic causal hydrodynamics in (3+1) dimensions. The effects of the critical point are incorporated through the equation of state and the scaling behaviour of the transport coefficients. We observe a significant suppression of thermal vorticity at late times due to enhanced viscosities, as the transport coefficients diverge in the critical region. Assuming local thermodynamic equilibrium for the spin degrees of freedom, the polarization vector for spin-1/2 particles is related linearly to the thermal vorticity. Taking an average of the polarization over the freezeout hypersurface, we obtain the mean polarization observed in experiments. For the same global polarization, we find a significant suppression due to the critical point in the rapidity profile of the component of polarization along the angular momentum direction. The study suggests that the change induced by the critical point in the rapidity dependence of the spin polarization of $\Lambda$ hyperons can be used as an indicator of the critical point.
Experiments conducted in the last decade to search for the Chiral Magnetic Effect (CME) in heavy-ion collisions have been inconclusive. The isobar program at RHIC was conducted to address this problem. Also, in order to study the CME, a new approach known as the Sliding Dumbbell Method (SDM) [1] has been developed. This method searches for back-to-back charge separation on an event-by-event basis. The SDM facilitates the selection of events corresponding to various charge separations ($f_{DbCS}$) across the dumbbell. The charge separation distribution for each collision centrality is partitioned into ten percentile bins in order to find potential CME-like events corresponding to the highest charge separation across the dumbbell. Results for the two- and three-particle correlators for isobar (Ru+Ru and Zr+Zr) collisions at $\sqrt{s_{\mathrm{NN}}} = 200$ GeV will be presented for each bin of $f_{DbCS}$ in each collision centrality. The background contribution due to statistical fluctuations is obtained by randomly shuffling the charges of the particles in a particular collision centrality. Correlated backgrounds are calculated for each $f_{DbCS}$ bin of charge-shuffled events using their corresponding original events.
References
[1] J. Singh, A. Attri, and M. M. Aggarwal, Proceedings of the DAE Symp. on Nucl. Phys. 64, 830 (2019) "http://www.sympnp.org/proceedings/64/E66.pdf".
A number of anomalous results in short-baseline oscillation experiments may hint at the existence of one or more light sterile neutrino states in the eV mass range and have triggered a wave of new experimental efforts to search for a definite signature of oscillations between active and sterile neutrino states. This new neutrino would have to be a Standard Model gauge singlet, and is thus referred to as sterile. The discovery (or exclusion) of the sterile neutrino associated with these anomalies would be groundbreaking, and would have profound implications not only for particle physics but also for astrophysics and cosmology. In this talk, I will give an overview of our present understanding of the experimental neutrino anomalies and future plans to search for the sterile neutrino to address the anomalies.
Lorentz invariance is one of the basic propositions of quantum field theory.
The spontaneous breaking of Lorentz symmetry in a more fundamental theory at a high energy scale can manifest itself perturbatively in the low-energy extension of the Standard Model via effective field theories.
Present and future long-baseline neutrino experiments offer the scope to observe such Planck-suppressed physics of Lorentz Invariance Violation (LIV).
The proposed long-baseline experiment P2O, extending from Protvino to ORCA with a baseline of 2595 km, is expected to provide good sensitivity to unresolved issues, especially the neutrino mass ordering.
P2O can offer good statistics even with a moderate beam power and runtime, owing to the very large ($\sim 6$ Mt) detector volume at ORCA.
Here we discuss in detail how the individual LIV parameters affect neutrino oscillations at the P2O and DUNE baselines at the level of probability, and derive analytical expressions to understand interesting degeneracies and other features.
We estimate the $\Delta \chi^{2}$ sensitivities to the LIV parameters, analyzing their correlations with each other and with the standard oscillation parameters.
We calculate these results for P2O alone and also carry out a combined analysis of P2O with DUNE.
We point out crucial features in the sensitivity contours and explain them qualitatively with the help of the relevant probability expressions derived here.
Finally, we estimate constraints on the individual LIV parameters at a confidence level (C.L.) of $95\%$ with the combined (P2O+DUNE) analysis and highlight the improvement over the existing
constraints.
We also find that the additional degeneracy induced by the LIV parameter $a_{ee}$
around $-22 \times 10^{-23}$ GeV is lifted by the combined analysis at $95\%$ C.L.
The mixing among three light active neutrinos is parametrized using the unitary PMNS matrix. If there are additional neutrinos present in Nature which are heavy iso-singlets, then the effective mixing matrix for the light three active neutrinos would be non-unitary. Because of this non-unitary neutrino mixing (NUNM), the oscillation probabilities between the three active neutrinos would be altered as compared to the probabilities obtained under the assumption of a unitary PMNS matrix. Atmospheric neutrinos have access to a wide range of energies and baselines, which can feel the presence of such NUNM in Earth’s matter effect. In this talk, I will discuss the possible constraints that can be placed on the NUNM parameter ($\alpha_{32}$) in a model-independent fashion using the proposed 50 kt magnetized Iron Calorimeter (ICAL) detector under the India-based Neutrino Observatory (INO) project, which can efficiently detect the atmospheric $\nu_\mu$ and $\bar\nu_\mu$ separately in the multi-GeV energy range passing through deep inside the Earth.
The study of the tauon ($\tau$-lepton) is a topic of current interest, as its understanding is required for several important purposes such as testing lepton flavor universality, accurately measuring the neutrino oscillation parameters, and reducing the uncertainties in $\nu_l$-nucleus cross-section measurements. Since the tauon has a high production threshold and a very short lifetime ($\simeq 2.9 \times 10^{-13}$ s), it is very difficult to observe. To date, a few tauon events with limited statistics have been observed in accelerator and atmospheric neutrino experiments such as DONuT, NOMAD, OPERA, Super-K and IceCube.
The experimenters plan to study tauon production with higher statistics via the decay of the $D_s$ meson ($D_s \to \tau \nu_\tau$) in the DsTau and SHiP experiments, as well as through the $\nu_\mu \to \nu_\tau$ oscillation channel in the DUNE and T2HK experiments, which cover a wide energy spectrum of neutrinos. For example, at DUNE the number of $\nu_\tau$ events expected in the appearance mode is between 100 and 1000. The $\tau$-leptons are identified by the observation of leptons and pions whose decay rates and topologies depend upon the production cross section and polarization of the $\tau$-leptons produced through the various reaction processes, such as quasielastic, inelastic and deep inelastic scattering, in $\nu_\tau$-nucleon interactions. In the low and intermediate energy regions ($\approx$ a few GeV) the tauon is not fully polarized, while at very high energies, i.e., $E_{\nu_\tau} \gg m_\tau$, it is fully polarized. In the literature there are several studies of the $\tau$ polarization in the quasielastic and inelastic reaction channels, but for the deep inelastic scattering (DIS) process such studies are very limited.
We have studied the tauon polarization for the charged current induced DIS process in the case of a free nucleon as well as in the nuclear targets that are being used in the ongoing and proposed experiments. In the standard model the transverse component of the lepton polarization is zero due to time reversal invariance. We shall present the results for the longitudinal and perpendicular components of the tau polarization both for $\nu_\tau$ scattering from the free nucleon and off the nuclear targets, where the nuclear medium modifications are taken into account through Fermi motion, binding energy, nucleon correlations, mesonic contributions and shadowing effects.
Lorentz Invariance Violation (LIV) is a topic of growing interest in physics beyond the Standard Model. Lorentz symmetry is well established in the low-energy realm of physics, but various theories suggest its violation as a Planck-scale phenomenon. As neutrinos, with their tiny masses, are particles that already strain the Standard Model, they may be an excellent tool for searching for such Planck-suppressed signals. For this study, we adopt the Standard Model Extension (SME) as the theoretical framework, which contains all Lorentz-violating terms. We study non-isotropic LIV, which causes a sidereal effect in neutrino beamline experiments. The neutrino disappearance channel is simulated for the NO$\nu$A far detector. We find that the NO$\nu$A FD is highly sensitive to LIV, and new limits on the LIV coefficients are also predicted.
A large area (560 m$^{2}$) muon telescope has been operating in Ooty, Tamil Nadu, as a part of the GRAPES-3 experiment since 2000, and the construction of a muon telescope of similar area is in progress. The existing muon telescope consists of nearly 4000 proportional counters (PRCs), and a similar number of PRCs have been deployed in the new muon telescope. The signal produced by the PRCs due to the passage of muons is only a few mV, which needs to be amplified and discriminated appropriately before further processing. Due to the tiny signal (~100 pC charge), noise is an important issue affecting the quality of the recorded data. The PRCs in the existing muon telescope were instrumented with amplifier-discriminators designed and used for more than four decades, since the Kolar Gold Field experiments. An effort was therefore undertaken to design and develop these electronics in-house using the latest technologies, considering various aspects to suppress the noise and reduce power consumption. The new frontend electronics have been successfully developed, fabricated in large numbers, and installed in the existing muon telescope in place of the old electronics, and the quality of the recorded data has improved significantly after the upgrade. We will discuss the challenges in developing and producing the frontend electronics in large numbers and their superior performance compared to the old electronics.
We explore the phenomenological implications of one vanishing minor in the neutrino mass matrix using the trimaximal mixing matrix. We analyse all six possible patterns of one vanishing minor in the neutrino mass matrix for both the TM$_1$ and TM$_2$ mixing matrices. We predict the values of the sum of neutrino masses and the effective Majorana mass for all the patterns. We also analyse the variation of the total neutrino mass, the effective Majorana mass and the Majorana CP violating phases $\alpha$ and $\beta$ with respect to the unknown parameter $\phi$.
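For reference, in a commonly used parametrization (assumed here only to fix notation), the TM$_1$ and TM$_2$ mixing matrices are obtained by multiplying the tribimaximal matrix $U_{\rm TBM}$ from the right by a complex rotation in the 2-3 or 1-3 plane, respectively; for example, $U_{\rm TM_2} = U_{\rm TBM}\, R_{13}(\theta,\phi)$ with $R_{13}$ mixing the first and third columns, so that the second (trimaximal) column of $U_{\rm TBM}$ is preserved, while $U_{\rm TM_1} = U_{\rm TBM}\, R_{23}(\theta,\phi)$ preserves the first column. The angle $\theta$ and the phase $\phi$ are the free parameters referred to above.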
Transverse spherocity is an event shape observable capable of separating pQCD-dominated jetty events from soft-QCD-dominated isotropic events. Recent studies show that transverse spherocity can be applied not only in pp collisions but also in heavy-ion collisions, which are relatively dominated by soft-QCD processes. We exploit transverse spherocity to probe the correlation between the initial spatial anisotropy and the final azimuthal anisotropy coefficients, namely eccentricity, triangularity, elliptic flow and triangular flow, in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV using a multi-phase transport model (AMPT). We find that both eccentricity and elliptic flow are anti-correlated with transverse spherocity, and triangular flow is positively correlated, while triangularity shows no dependence on transverse spherocity. This work also shows the nonlinear correlation between initial and final anisotropies for different transverse spherocity selections. Finally, we report a transverse momentum crossing point between the elliptic flow and triangular flow coefficients, which is found to vary with centrality and transverse spherocity selections.
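For reference, transverse spherocity is commonly defined (the standard definition, assumed here) as $S_0 = \frac{\pi^2}{4}\left(\min_{\hat{n}}\frac{\sum_i |\vec{p}_{\mathrm{T},i}\times\hat{n}|}{\sum_i p_{\mathrm{T},i}}\right)^2$, where the unit vector $\hat{n}$ lies in the transverse plane and the sums run over the charged particles in an event; $S_0\to 0$ selects jetty (back-to-back) events and $S_0\to 1$ selects isotropic events.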
The expansion rate of the Universe is modified when the total energy density receives a substantial contribution from a new scalar field, along with the standard radiation energy. This results in a few crucial changes in the set of Boltzmann equations for leptogenesis, governing the evolution of the abundance of the decaying particle and the lepton asymmetry. Thus the analytical solutions of the Boltzmann equations (derived assuming a radiation-dominated cosmology) already available in the literature are no longer applicable in the present scenario. Hence we solve the set of modified Boltzmann equations in order to find the expression for the so-called efficiency factor for leptogenesis in a non-standard cosmological scenario. The asymmetry generated (at an early epoch) due to the CP-violating out-of-equilibrium decay of the heavy right-handed neutrino (RHN) suffers from a washout effect due to inverse decay. This washout process is mainly controlled by the decay parameter, which can be lowered considerably by incorporating the idea of a faster expansion of the Universe. Therefore, in the non-standard cosmological scenario a larger fraction of the asymmetry (produced at high temperature) survives down to the present-day temperature. Thus, without altering the underlying particle physics model (Standard Model + three RHNs), we end up with a larger final asymmetry compared to the standard cosmology. This is a clear hint towards a possible relaxation of the stringent lower bound on the lightest RHN mass required to produce adequate asymmetry in agreement with current experimental data.
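A schematic form of the Boltzmann equations referred to above (standard notation, quoted here only to make the role of the expansion rate explicit) is $\frac{dN_{N_1}}{dz} = -D\,(N_{N_1}-N_{N_1}^{\mathrm{eq}})$ and $\frac{dN_{B-L}}{dz} = -\varepsilon_1 D\,(N_{N_1}-N_{N_1}^{\mathrm{eq}}) - W\,N_{B-L}$, with $z=M_1/T$, CP asymmetry $\varepsilon_1$, and decay and washout terms $D, W \propto \Gamma/(Hz)$; the final asymmetry is usually written as $N_{B-L}^{\mathrm{f}}\propto \varepsilon_1\,\kappa_{\mathrm{f}}$, which defines the efficiency factor $\kappa_{\mathrm{f}}$. A faster expansion (larger $H$) lowers the effective decay parameter $K\equiv\Gamma_D/H|_{T=M_1}$ and thereby weakens the washout, as argued above.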
In this report, we make a comparative study of the Machine Learning (ML) algorithms available in the ROOT-based TMVA package when employed for particle discrimination. A dataset from Fermilab's MiniBooNE experiment has been used to accomplish this task. The goal is to distinguish between two flavours of neutrinos, namely electron neutrinos (signal) and muon neutrinos (background). Five different algorithms from the TMVA package are considered: Boosted Decision Trees (AdaBoost), k-Nearest Neighbours, Artificial Neural Networks, Support Vector Machines and Fisher Discriminant Analysis. Their parameters were tuned thoroughly over many iterations in order to obtain maximal performance, i.e., good purity, signal efficiency and significance. As far as the structure of the MiniBooNE dataset is concerned, 36499 signal events and 93565 background events were used, making a total of 130064 instances, with 50 particle ID variables per event.
Through this exercise it was observed that the BDT and KNN models outperform the others by a considerable margin. Among these two, BDT provides a marginally better purity of 91.1 percent, whereas KNN provides a marginally better signal efficiency of 0.9604 and a significance of 29.5152. The details of the analysis, along with factors such as computation time for obtaining a fruitful result, will be presented and discussed.
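As an illustration of the workflow described above, the sketch below shows how the five TMVA classifiers can be booked from Python via PyROOT. The file, tree and variable names are placeholders, and the option strings are illustrative defaults rather than the tuned configurations used in the study.
\begin{verbatim}
import ROOT

# Placeholder input: a ROOT file containing separate signal and background TTrees.
data_file = ROOT.TFile.Open("miniboone.root")      # hypothetical file name
sig_tree = data_file.Get("signal")                 # hypothetical signal tree
bkg_tree = data_file.Get("background")             # hypothetical background tree

out_file = ROOT.TFile("tmva_output.root", "RECREATE")
factory = ROOT.TMVA.Factory("MiniBooNEClassification", out_file,
                            "!V:!Silent:AnalysisType=Classification")

loader = ROOT.TMVA.DataLoader("dataset")
for i in range(50):                                # 50 particle-ID variables per event
    loader.AddVariable("pid_%d" % i, "F")          # hypothetical variable names
loader.AddSignalTree(sig_tree, 1.0)
loader.AddBackgroundTree(bkg_tree, 1.0)
loader.PrepareTrainingAndTestTree(ROOT.TCut(""),
                                  "SplitMode=Random:NormMode=NumEvents:!V")

# Book the five algorithms compared in the study (illustrative option strings).
factory.BookMethod(loader, ROOT.TMVA.Types.kBDT, "BDT",
                   "NTrees=850:MaxDepth=3:BoostType=AdaBoost")
factory.BookMethod(loader, ROOT.TMVA.Types.kKNN, "KNN", "nkNN=20")
factory.BookMethod(loader, ROOT.TMVA.Types.kMLP, "MLP", "HiddenLayers=N+5:NCycles=500")
factory.BookMethod(loader, ROOT.TMVA.Types.kSVM, "SVM", "Gamma=0.25:C=1")
factory.BookMethod(loader, ROOT.TMVA.Types.kFisher, "Fisher", "!H:!V:Fisher")

factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()   # efficiencies, ROC curves, etc. go to tmva_output.root
out_file.Close()
\end{verbatim}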
This work demonstrates a viscous extended holographic Ricci dark energy (EHRDE) in a flat FRW universe based on the Israel-Stewart approach. Under the consideration that EHRDE dominates the universe, we study the evolution equation for the bulk viscous pressure $\Pi$ in the truncated form $\tau \dot{\Pi}+\Pi=-3\xi H$, where $\tau$ is the relaxation time and $\xi$ is the bulk viscosity coefficient. Considering the thermodynamic pressure of EHRDE and the bulk viscous pressure, we demonstrate that the EoS parameter $w_{DE}$ evolves in a phantom-like manner, i.e., $w_{DE}\leq -1$. We also observe that $p_{eff}=p+\Pi$ is a monotonically decreasing function of time, and that the effect of bulk viscosity decreases as the universe evolves. Lastly, the generalized second law of thermodynamics is found to be valid for the viscous EHRDE-dominated universe enveloped by an apparent horizon.
We discuss the consequences of the single observed Higgs boson in a split-supersymmetric (split-SUSY) scenario, using information theory as a tool that employs the Higgs branching ratios and other experimental constraints. The preferred values of the SUSY breaking scale and $\tan\beta$ are estimated to be about $10^7$ GeV and 41, respectively, while the mass of the neutralino LSP turns out to be about a TeV.
We present and discuss a new family of topological hairy dyonic black hole solutions in asymptotically anti-de Sitter space. The coupled Einstein-Maxwell-scalar gravity system, that carries both the electric and magnetic charges is solved, and exact hairy dyonic black hole solutions are obtained analytically. The scalar field profile that gives rise to such black hole solutions is regular everywhere. The hairy solutions are obtained for planar, spherical, and hyperbolic horizon topologies. In addition, analytic expressions of regularized action, stress tensor, conserved charges, and free energies are obtained. We further comment on different prescriptions for computing the black hole mass with hairy backgrounds. We analyze the thermodynamics of these hairy dyonic black holes in canonical and grand canonical ensembles, and we find that both electric and magnetic charges have a constructive effect on the stability of the hairy solution. For the case of planar and hyperbolic horizons, we find thermodynamically stable hairy black holes that are favored at low temperatures compared to the non-hairy counterparts. We further find that, for a spherical hairy dyonic black hole, the thermodynamic phase diagram resembles that of a Van der Waals fluid not only in canonical but also in the grand canonical ensemble.
Angular distributions of the decay $B^+ \to K^*(892)^+\mu^+\mu^-$ are studied using events collected with the CMS detector in $\sqrt{s} = 8$ TeV proton-proton collisions at the LHC, corresponding to an integrated luminosity of 20.0 fb$^{-1}$. The forward-backward asymmetry of the muons and the longitudinal polarization of the $K^*(892)^+$ meson are determined as a function of the square of the dimuon invariant mass. These are the first results from this exclusive decay mode and are in agreement with a standard model prediction.
Resistive Plate Chamber (RPC) is a very popular gaseous detector used in High-Energy Physics (HEP) experiments for triggering and tracking.
Keeping in mind the requirements of detectors having high-rate handling capability, cost-effectiveness, and large area coverage to be used in future HEP experiments, commercially available bakelite plates with moderate bulk resistivity are used to build RPC prototypes.
In general, bakelite RPCs are fabricated with a linseed oil coating on the inner surfaces to make them smooth, which helps to reduce the micro-discharge probability. A new method of linseed oil coating has been developed for the bakelite RPC. In a conventional bakelite RPC, the linseed oil coating is done after making the gas gap; in this particular work, the coating is applied before making the gas gap. After the linseed oil coating, the plates are cured for several days. The advantage of this procedure is that, after coating, it can be checked visually whether the curing is done properly or any uncured droplet of linseed oil is present. The detector is characterised with tetrafluoroethane (C$_{2}$H$_{2}$F$_{4}$) gas and also with a conventional tetrafluoroethane/iso-butane (i-C$_{4}$H$_{10}$) gas mixture. The details of the fabrication, measurements and test results will be presented.
The SuperKEKB colliding-beam accelerator provides e+e− collisions at an energy corresponding to the mass of the Υ(4S) resonance, the products of which are being recorded by the Belle II detector. The mass of the Υ(4S) meson (∼ 10.58 GeV) is just above the threshold for decay into B-meson pairs. We measure the beam energy in the center of mass frame using fully reconstructed neutral and charged B decays from various final states. The measurement is performed using the beam-constrained mass of the reconstructed B decays. An accurate determination of the beam energy is important as a kinematic constraint in many analyses, as well as being an input for calibrations of the momentum and energy scale. We have calculated the average center-of-mass energy along with its uncertainty and the width of the distribution of the center-of-mass energy for the data recorded by the Belle II detector from 2020 to 2022.
The profile of a particle in quantum theory is usually formulated as an eigenstate of momentum using plane waves. This is a straightforward and widely used prescription, but it is inadequate because the energy localisation of the particles cannot be described at all.
Due to this spatial non-normalizability, in plane-wave calculations, the frequency of quantum transitions can only be calculated as an averaged physical quantity with dimension, averaged in time (and volume). In contrast, with wave packets, including the effects of particle localisation, a complete quantum transition can be described as a pure (dimensionless) probability quantity.
The question, then, is whether wave-packet-specific effects, beyond the "averaging" described above, can be observed experimentally.
In this talk, I will focus on several decay processes of quarkonia occurring near the kinematic threshold and show that experimental results which were difficult to understand with plane-wave calculations can be explained by wave packet calculations. The talk will also cover the fundamentals of calculation methods using wave packets.
We present here a study of bottomonium suppression with centrality, transverse momentum and rapidity dependence. The system under consideration is Pb$-$Pb collisions at $2.76$ TeV centre-of-mass energy per nucleon, for the bottomonium (1S) state. The bottomonium bound states produced in the early hard-scattering stage of the collision traverse the Quark-Gluon Plasma (QGP) medium. We calculate the survival probability of the bottomonium due to dissociation by gluon absorption, collisional damping and Debye color charge screening. We also account for the suppression due to shadowing as a cold nuclear matter effect. The QGP stage is evolved using (3+1)-dimensional viscous hydrodynamics based on Israel-Stewart second-order formalism with the Wuppertal-Budapest Lattice QCD equation of state. The parameters in the hydrodynamics are fixed such that the generated pion elliptic flow, $p_T$-spectra and rapidity spectra match the corresponding experimentally measured observables. The survival probability calculated from the gluonic dissociation and collisional damping processes, corrected for correlated recombination of bottomonia and feed-down, is compared with the experimentally measured equivalent quantity, namely the nuclear modification factor ($R_{AA}$). We find reasonably good agreement between the theoretically predicted and measured values for all three dependences.
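For reference, the nuclear modification factor mentioned above is conventionally defined as $R_{AA}(p_T) = \frac{1}{\langle N_{\rm coll}\rangle}\,\frac{dN_{AA}/dp_T}{dN_{pp}/dp_T}$, i.e., the yield in Pb$-$Pb collisions divided by the proton-proton yield scaled by the average number of binary nucleon-nucleon collisions; the survival probability computed here plays the role of the theoretical counterpart of this ratio.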
DEASA (Dayalbagh Educational Air Shower Array) is a ground-based mini array for studying air shower phenomena at the Dayalbagh Educational Institute [1], Agra, India, located at 27.1767° N, 78.0081° E and 170 m above sea level. It comprises 8 plastic scintillation detectors of dimensions 1 m x 1 m x 2 cm and 2 prototype detectors of dimensions 23.5 cm x 24 cm x 2 cm. The total area covered by the array is 260 square meters.
The pulse analysis of the prototype detectors has been done through the differential and integral pulse height spectra of MIPs (minimum ionizing particles), corresponding to amplitude peak values varying from 300 mV to 800 mV. An observational study of MIPs with respect to the PMT temperature is also presented. At sea level the MIPs are generally muons, weakly interacting elementary particles that traverse the detector material by ionizing the medium. When a MIP traverses vertically through a plastic scintillation detector of thickness 2 cm, it deposits 3.15 MeV. These prototype detectors are used to calibrate the eight detectors every six months by the coincidence method.
Experimental observations of the count rate (in hertz) of MIPs, recorded using a CAEN Mod. 1145 counter connected to a NIM unit, have been analyzed graphically [3]. Count rates have been plotted against temperature (deg. C) and pressure (hPa) on a monthly basis for almost half a year. Statistics play a major role in the analysis of count rates in particle astrophysics, and the experimental observations have been fitted with a Gaussian probability distribution function.
References
[1] K. Garg, Measurements and simulations of secondaries with a detector station(s) using CAMAC data acquisition, Ph.D. thesis (2021).
[2] K. Dam, B. Eijk, D. Fokkema, J. Holten, A. Laat, N. Schultheiss, J. Steijger and J. Verkooijen, The HiSPARC Experiment (2019).
[3] S. Roy, S. Chakraborty, S. Chatterjee, S. Biswas, S. Das, S. K. Ghosh et al., Nucl. Instrum. Meth. A 936, 249-251 (2019).
The Dirac scotogenic model provides an elegant mechanism which explains small Dirac neutrino masses and neutrino mixing, with a single symmetry simultaneously protecting the "Diracness" of the neutrinos and the stability of the dark matter candidate. Here we explore the phenomenological implications of the recent CDF-II measurement of the $W$ boson mass in the Dirac scotogenic framework. We show that, in the scenario where the dark matter is mainly an $SU(2)_L$ scalar doublet, it cannot concurrently satisfy (a) the dark matter relic density, (b) the $m_W$ anomaly and (c) the direct detection constraints. However, unlike the Majorana scotogenic model, the Dirac version also has a "dark sector" $SU(2)_L$ singlet scalar. We show that if the singlet scalar is the lightest dark sector particle, i.e., the dark matter, then all neutrino physics and dark matter constraints, along with the constraints from the oblique $S$, $T$ and $U$ parameters, can be concurrently satisfied for a $W$ boson mass in the CDF-II range.
The charge-dependent azimuthal anisotropy of the cosmic muon flux at different zenith angles is studied using the mini-Iron Calorimeter (miniICAL) at IICHEP, Madurai. The miniICAL consists of 11 layers of 5.6 cm thick iron plates with 10 layers of 2m$\times$2m Resistive Plate Chambers (RPCs) in between them, with a gap of 4.5 cm between two iron plates. The miniICAL was commissioned in 2018 and started taking cosmic ray data by the middle of 2018. The distribution of muons at sea level with respect to the azimuthal angle $\phi$ is affected by factors such as geomagnetic fields and solar modulation, whereas the distribution with respect to the zenith angle $\theta$ is mainly affected by the thickness of atmosphere penetrated by the muons. The iron layers are magnetized with a maximum field of 1.5 T, facilitating charge identification. The data are compared with simulated events, where cosmic muons are generated using the CORSIKA extensive air shower simulation and the detector simulation is done with the GEANT4 toolkit. This paper discusses the comparison of data with simulation and also the various systematics associated with the simulation and reconstruction.
Properties of nuclear matter at densities beyond the nuclear saturation density ($n_0$) are not well understood. Compact stars are unique laboratories providing a wealth of information for studying such dense nuclear matter. According to the Bodmer-Witten conjecture, strange quark matter (SQM), composed of up ($u$), down ($d$) and strange ($s$) quarks, could be the true ground state of strongly interacting matter at high density. Conversion of nuclear matter into SQM via weak interactions gives rise to new possible classes of compact objects: the hybrid star and the strange star (SS). A hybrid star is a neutron star with a core composed of quark matter, while an SS is composed entirely of SQM up to the surface. Recent gravitational-wave and NICER observations of pulsars provide constraints on the mass, radius and tidal deformability of these compact objects. We model the SQM inside the SS to make it compatible with these astrophysical observations. For that we consider the modified MIT bag model and the vector bag model, since a stable SQM region can be generated with these two models for certain values of their parameters. We determine the tidal effects due to the gravitational field of a companion star, and by applying the tidal constraints we obtain new limits on the radius and compactness of the SS.
In this work, we focus on the complementarity between the two upcoming long-baseline experiments, DUNE and T2HK, in establishing leptonic CP violation at 3$\sigma$ C.L. for at least 75% of the Dirac CP phase ($\delta_{\mathrm{CP}}$) values. We find that DUNE + T2HK combined can achieve the desired CP coverage of 75% with only half of their individual nominal exposures, while independently they both fail to attain the same even with full exposure. Further, we elaborate on both the individual and the complementary performance in establishing CP coverage as a function of the optimal choice of run-time, systematic uncertainties, and the subsequent effect of exposure. We also explore the crucial role of the disappearance mode in establishing CP violation, and we incorporate the effect of a probable second detector in Korea (T2HKK). We find that although T2HKK has better sensitivity towards CP coverage than DUNE or T2HK individually, it is still less than DUNE + T2HK, irrespective of the octant of $\theta_{23}$.
The high-energy astrophysical neutrinos detected by IceCube, with TeV-PeV energies, allow us to test neutrino physics at new energy and distance scales. One possibility is looking for new interactions between neutrinos and matter whose existence would ordinarily be too feeble to detect, except at high neutrino energies. We focus on well-motivated, economical new interactions introduced by gauging lepton-number symmetries that already exist in the Standard Model: $L_e - L_{\mu}$, $L_e - L_{\tau}$ and $L_{\mu} - L_{\tau}$. They introduce a new $Z^{\prime}$ boson that, if light, mediates a long-range potential between neutrinos and distant matter, separated by distances of up to giga-parsecs. This potential can significantly alter the oscillation probabilities of high-energy neutrinos en route to Earth and, as a result, also their flavor composition, i.e., the relative number of neutrinos of each flavor in the incoming flux. The high energies and long propagation distances of the IceCube neutrinos allow us to probe unexplored regions of the parameter space: tiny $Z^{\prime}$ masses, of $10^{-10}-10^{-25}$ eV, and tiny dimensionless couplings with the $Z^{\prime}$, of $10^{-24}-10^{-35}$. We present refined constraints, based on the measurements of flavor composition in IceCube and its planned upgrade, IceCube-Gen2, for $L_e - L_{\mu}$ and $L_e - L_{\tau}$, and new limits on $L_{\mu} - L_{\tau}$. We show that long-range potentials larger than $10^{-17}$ eV are disfavored at very high statistical significance.
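Schematically (using the standard Yukawa form typically assumed in such analyses), $N_e$ electrons at a distance $d$ source a potential $V(d) = \frac{g'^2 N_e}{4\pi d}\, e^{-m_{Z'} d}$ on a propagating neutrino, where $g'$ and $m_{Z'}$ are the new gauge coupling and boson mass; for $m_{Z'} \lesssim 1/d$ the exponential is of order unity, so even couplings as tiny as those quoted above can shift the oscillation phases of PeV neutrinos travelling over Gpc distances.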
The measurement of the standard three-flavor neutrino mixing matrix elements with very high precision makes it imperative to test its unitarity. In this work, we study the ability of the next-generation long-baseline experiments DUNE and T2HKK to constrain various parameters relevant for non-unitary neutrino mixing (NUNM) in a completely model-independent fashion. We also discuss the possible correlations between the standard oscillation parameters $\theta_{23}$, $\delta_{CP}$ and the various NUNM parameters. We observe that T2HKK has better sensitivity to the NUNM parameters $|\alpha_{21}|$ and $\alpha_{22}$ than DUNE because of its larger statistics in the appearance channel and smaller systematic uncertainties in the disappearance channel. For $|\alpha_{31}|$, $|\alpha_{32}|$, and $\alpha_{33}$, DUNE performs better due to its larger matter effect and wide-band beam. The expected limit on $\alpha_{11}$ is similar from both experiments. We also discuss the complementarity of the two experimental setups and the improvement in the constraints when their prospective data are combined. We further observe that the near detectors can significantly constrain $\alpha_{11}$, $|\alpha_{21}|$, and $\alpha_{22}$ because of the zero-distance effect. Finally, we demonstrate how the $\nu_\tau$ appearance channel in DUNE can improve the constraints on $|\alpha_{32}|$ and $\alpha_{33}$.
The chemical abundances of different elements in the Universe depend substantially on nuclear structure and nuclear reactions. In order to determine the primordial $^7Li$ abundance in the early Universe, the $^3H(\alpha,\gamma)^7Li$ radiative-capture process is of great astrophysical relevance. The calculations of primordial nucleosynthesis offer thorough and comprehensive assessments of the main assumptions of the big-bang model. The key information required for these calculations is the nuclear reaction rate $N_A <\sigma v>$, which in turn depends on the velocity-averaged cross section ($\sigma$) of the nuclear reaction. Astrophysical S-factor calculations also require the total cross section of the reaction. The cross section for the astrophysical reaction $^3He(\alpha,\gamma)^7Be$ has already been obtained by our group for laboratory energies up to $9$ MeV. In this work, we have calculated the cross section for the reaction $^3H(\alpha,\gamma)^7Li$ by computing scattering phase shifts using the phase function method. The phase shifts are calculated for laboratory energies below $15$ MeV for the $\frac{5}{2}^-$ and $\frac{7}{2}^-$ resonant states of $^7Li$ (partial wave $\ell=3$). The calculated resonance energies ($E_r$) and resonance widths ($\Gamma$) are: $E_r=2.19$ MeV ($exp = 2.18$ MeV), $\Gamma= 0.090$ MeV ($exp = 0.069$ MeV) for the $\frac{7}{2}^-$ resonant state, and $E_r= 3.60$ MeV ($exp=4.14$ MeV), $\Gamma= 0.704$ MeV ($exp=0.918$ MeV) for the $\frac{5}{2}^-$ resonant state.
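For context, the phase function method referred to above determines the phase shift from a first-order equation (Calogero's variable-phase equation, quoted here in its standard form): for a local potential $V(r)$ with $U(r) = 2\mu V(r)/\hbar^2$, the phase function obeys $\frac{d\delta_\ell(k,r)}{dr} = -\frac{U(r)}{k}\left[\cos\delta_\ell(k,r)\,\hat{j}_\ell(kr) - \sin\delta_\ell(k,r)\,\hat{\eta}_\ell(kr)\right]^2$ with $\delta_\ell(k,0)=0$, where $\hat{j}_\ell$ and $\hat{\eta}_\ell$ are the Riccati-Bessel functions, and the scattering phase shift is the asymptotic value $\delta_\ell(k) = \lim_{r\to\infty}\delta_\ell(k,r)$.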
The CERN IT department hosts the HGCAL database (DB), which is based on a framework originally developed at Fermilab. It is now used by several CMS sub-detectors – tracker, calorimeters, and muon system. The DB can be used for detector construction and operation, where each stored component has a unique ID, barcode, serial number, or name. It is also used to track the flow of components from their time of reception to their final placement on the detector. The HGCAL DB uses parent-child relationships to "build" higher level detector objects: silicon modules, scintillator tile-boards, tile segments, half and full cassettes, layers, disks, and HGC+ (& HGC-). This helps in tracking detector fabrication processes with the HGCAL sub-detector at the top of the pyramid. The HGCAL DB will store various types of detector data – manufacturing, construction, characterization, and calibration, as well as quality control (QC) data for all traceable components. An effort is underway to develop an online monitoring system to visualize QC data of detector components. The talk will describe key aspects of the HGCAL DB.
We present our analysis of the deconfinement phase transition in the bosonic BMN matrix model. The model is investigated using a non-perturbative lattice framework. We used the Polyakov loop as the order parameter to monitor the phase transition, and the results are verified using the separatrix ratio. The calculations are performed using a large number of colors and a broad range of temperatures for all couplings. Our results indicate a first-order phase transition in this theory for all the coupling values that connect perturbative and non-perturbative regimes of the theory.
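For reference, the order parameter used here is the thermal Polyakov loop, $|P| = \frac{1}{N}\left|\,\mathrm{Tr}\,\mathcal{P}\exp\left(i\oint_0^{\beta} dt\, A_t\right)\right|$, the trace of the path-ordered exponential of the gauge field around the Euclidean time circle of length $\beta = 1/T$; its expectation value is small in the confined phase and of order unity in the deconfined phase, and a discontinuous jump across the transition temperature signals a first-order transition.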
Machine learning (ML) is a rapidly expanding field with a wide range of applications, from healthcare to high-energy physics (HEP) research. Deep learning is a sub-field of ML in which the most basic structure is a neural network. Training such models with a vast amount of pre-processed data allows them to be used for pattern recognition, classification, or regression problems. These techniques have been used for event reconstruction in numerous HEP experiments. The India-based Neutrino Observatory (INO) is a proposed underground facility for neutrino research, equipped with a magnetized Iron Calorimeter (ICAL). It consists of RPC detectors placed in between iron plates, which act as the target for neutrinos. The primary objectives are to precisely measure the neutrino oscillation parameters and the $\delta_{CP}$ phase. The neutrinos interacting with the iron plates produce muons, which travel through the stack of RPC layers to produce a track. The method used for muon track reconstruction at INO-ICAL is the Kalman filter algorithm. In this work, the feasibility of using machine learning methods for muon track reconstruction has been investigated.
In this preliminary study, an algorithm based on a deep neural network using a multi-layer perceptron has been developed to reconstruct the muon energy using the signal hit information in the ICAL detector. The algorithm has been tested with various model configurations, and a comparison of the performance of these models is presented here. The muon momentum resolution for the ICAL detector obtained with the best model configuration is compared with that of the standard track reconstruction method used so far in the ICAL simulation, i.e., the Kalman filter technique.
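A minimal sketch of the kind of multi-layer perceptron regression described above is given below; the feature count, layer widths and training details are illustrative assumptions and not the configurations actually studied for ICAL.
\begin{verbatim}
import torch
import torch.nn as nn

N_FEATURES = 32  # hypothetical number of hit-summary features per event

# Simple multi-layer perceptron mapping hit features to a single regressed muon energy.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, true_energy):
    """One gradient step on a batch of (hit features, true muon energy) pairs."""
    optimizer.zero_grad()
    prediction = model(features).squeeze(-1)
    loss = loss_fn(prediction, true_energy)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}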
We explore $\mathcal{CP}$-violating anomalous $ht\bar{t}$ couplings via associated production of the Higgs boson at the LHC and its future variants using a set of newly proposed T-odd observables involving momenta of the final state particles. Limits on such couplings are also presented using the production asymmetries associated with the process $pp \to t(\to l^{+}\nu_l b)\,\bar{t}(\to l^{-}\bar{\nu}_l\bar{b})$. Our estimates reflect $\left|c_p\right| < 4.32 \times 10^{-2}$ at the LHC with $\sqrt{S}$ = 13 TeV and an integrated luminosity of 139 fb$^{-1}$. The corresponding bounds for the HL-LHC with $\sqrt{S}$ = 14 TeV and the FCC-hh with $\sqrt{S}$ = 100 TeV, for projected luminosities of 3 ab$^{-1}$ and 30 ab$^{-1}$, are found to be $\left|c_p\right| < 8.1 \times 10^{-3}$ and $\left|c_p\right| < 3.5 \times 10^{-4}$, respectively, at the 2.5 $\sigma$ level.
Using renormalisation group equations (RGEs), we study the radiative corrections of different models of neutrino mass patterns at different values of the high seesaw scale $M_R$ and $\tan \beta$, with the variation of the SUSY breaking scale $m_s$. Different neutrino mass patterns are found to behave differently under the analysis. A small value of $\tan \beta$ is found preferable for the NH case, whereas a higher value of $\tan \beta$ is found favorable for the IH case. A self-complementarity relation among the three leptonic mixing angles is employed and is found to be invariant under radiative corrections. Both NH and IH prefer a high SUSY breaking scale $m_s$. The different neutrino oscillation parameters receive varying radiative corrections under the analysis.
We study in detail the viability and the patterns of a strong first-order electroweak phase transition, a prerequisite for electroweak baryogenesis, in the framework of the $Z_3$-invariant Next-to-Minimal Supersymmetric Standard Model (NMSSM), in the light of recent experimental results from the Higgs sector, dark matter (DM) searches and the searches for the lighter chargino and neutralinos at the Large Hadron Collider (LHC). For the latter, we undertake thorough recasts of the relevant recent LHC analyses. With the help of a few benchmark scenarios, we demonstrate that while the LHC has started to steadily eliminate regions of the parameter space with relatively small $\mu_{\mathrm{eff}}$, which favor the coveted strong first-order phase transition, there remain phenomenologically involved and compatible regions of the same which are not yet sensitive to the current LHC analyses. It is further noted that such a region could also be compatible with all pertinent theoretical and experimental constraints. We then proceed to analyze the prospects of detecting the stochastic gravitational waves expected to arise from such a phase transition at various future/proposed experiments, within the mentioned theoretical framework, and find them to be somewhat ambitious under the currently projected sensitivities of those experiments.
Hot and dense matter created in relativistic heavy-ion collisions exhibits collective behaviour due to multi-particle interactions among the constituents of the matter.
Elliptic flow (the second harmonic coefficient of the Fourier decomposition of the azimuthal angle distribution of particles) is one of the observables used to measure the collective behavior in the early stages of heavy-ion collisions. The $\phi$ meson, which is a bound state of a strange (s) and an antistrange ($\bar{s}$) quark, has a low hadronic interaction cross-section compared to other light hadrons. Due to this small hadronic interaction cross-section, the elliptic flow ($v_{2}$) of $\phi$ mesons acts as an important tool to probe the system created in relativistic heavy-ion collisions.
We have extracted the $\phi$ mesons from the $K^{+} K^{-}$ decay channel in Au+Au collisions generated with the Parton-Hadron-String Dynamics (PHSD) transport model, using 30 million events. We will present the $v_{2}$ of $\phi$ mesons as a function of transverse momentum ($p_{T}$) and rapidity (y) in Au+Au collisions at $E_{lab}$ = 35 A GeV. We will also present the collision-centrality dependence of the $\phi$-meson $v_{2}$ and a comparison with published experimental results.
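For reference, the flow coefficients used above follow from the Fourier decomposition of the azimuthal distribution, $\frac{dN}{d\varphi} \propto 1 + 2\sum_{n} v_n \cos\left[n(\varphi - \Psi_n)\right]$, so that the elliptic flow is $v_2 = \langle \cos\left[2(\varphi - \Psi_2)\right]\rangle$, with $\Psi_n$ the $n$-th harmonic symmetry-plane angle.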
We evaluate the exact two-photon exchange (TPE) correction to unpolarized elastic lepton-proton scattering at small momentum transfer using a low-energy effective field theory, heavy baryon chiral perturbation theory. The infrared-divergent four-point box diagram with one heavy proton propagator is evaluated analytically via photon mass regularization. We present a numerical comparison of the finite (physical) part of our exact result with one based on the widely used soft-photon approximation (SPA). It is found that the exact contributions are around 150-200% larger than the SPA contribution, depending upon the beam energy and lepton mass. We estimate the charge asymmetry for both electron-proton and muon-proton scattering in the MUSE kinematic region.
Muons produced by the interaction of primary cosmic rays in the Earth's atmosphere serve as an excellent tool for studying various solar phenomena, the primary cosmic ray composition, and gamma ray sources. The GRAPES-3 experiment at the Cosmic Ray Laboratory in Ooty is home to the world's largest muon telescope, and another muon telescope of similar detection area (560 m$^{2}$) is under construction to enhance the physics sensitivity to the topics mentioned above. We have performed a study of the detection of muons with the GRAPES-3 new muon telescope using GEANT4 simulations. In this contribution, we will present the details of the geometry reconstruction of the new muon telescope, including the proportional counters and the concrete used for shielding against the electromagnetic and hadronic components. We will also present the response of the new muon telescope to various particles in the cosmic ray shower.
Short-lived hadronic resonances with widely varying lifetimes provide an excellent tool to study the hadronic phase produced in relativistic heavy-ion collisions. The dynamics of these particles, especially the $K^*(892)^0$ meson, and their varying yields have been used extensively to study the hadronic phase lifetime. In this work, we employ an alternative method by assuming (1+1)D second-order viscous hydrodynamics for the evolution of the hadronic medium in order to obtain the hadronic phase lifetime. The evolution is assumed to break down when the Knudsen number limit, $Kn > 1$, is attained, and the particle yield is assumed to be preserved at this limit. The obtained lifetime is then used within a transport model for $K^*(892)^0$ mesons, modelled by including rescattering and regeneration effects, to predict their final state yield. The results obtained in our calculations are in qualitative agreement with the experimentally obtained hadronic phase lifetime and $K^*(892)^0/K$ ratio.
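Here the Knudsen number is the usual ratio $Kn = \lambda_{\rm mfp}/R$ of the mean free path to the characteristic system size, so requiring $Kn > 1$ as the breakdown condition simply states that the hydrodynamic description is trusted only while the microscopic scattering length remains smaller than the medium.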
Heavy quarks (charm and bottom) are created during an early stage of the heavy-ion collision via hard scattering. Due to their large mass, they do not get thermalized with the constituents of the quark-gluon plasma (QGP) over the lifetime of the plasma. Hence, they witness the evolution of QGP and are useful probes to study the strongly interacting matter. Heavy quark transport coefficients are sensitive to the interaction with the QGP medium and the estimation of the drag and diffusion coefficients of heavy quarks in the hot QCD medium is of interest. We investigate the effects of the collision and soft gluon radiation by heavy quark on the transport coefficients (e.g., drag and diffusion coefficients) within the ambit of perturbative QCD and kinetic theory for a viscous QGP utilizing the effective fugacity quasi-particle model (EQPM) which models the hot QCD medium based on the lattice QCD equation of state [1]. This modifies the momentum distribution function of the QGP constituent particles, i.e., light quarks, anti-quarks and gluons by introducing a temperature-dependent effective fugacity parameter. Viscous corrections to heavy quark transport coefficients due to shear and bulk viscosities of the medium are incorporated at leading order in the thermal distribution function of in-medium particles [2]. We observe that the soft gluon radiation substantially affects the heavy quark transport coefficients in the viscous QGP medium. The effect of introducing next-to-leading order viscous corrections to the heavy quark transport coefficients is in progress for both collisional and radiative processes.
References:
[1] A. Shaikh et al. In: Phys. Rev. D 104 (2021) 3, 034017.
[2] A. Shaikh et al. In: PoS CHARM2020 (2021) 060.
Several heavy-ion collision experiments at RHIC and the LHC have been performed to identify quark-gluon plasma (QGP) matter. In recent times, non-central heavy-ion collisions have attracted particular interest, since a very strong magnetic field is produced in the direction perpendicular to the reaction plane. Many theoretical efforts have been made to study the modification of strongly interacting matter in the presence of an external magnetic field.
Heavy quarkonium is one of the important probes to investigate the properties of nuclear matter at finite temperature and magnetic field. Moreover, the time scale of quarkonium formation and that of the magnetic field generation are of similar order, so the study of heavy quarkonia in the presence of a magnetic field is of great interest.
In this work we have explored the imaginary part of the heavy quark (HQ) potential and subsequently the dissociation of heavy quarkonia at finite temperature and magnetic field. Compared with earlier investigations on this topic, the present work contains three new ingredients. The first is the summation over all Landau levels, which makes the present work applicable in the entire magnetic field domain, from weak to strong fields. The second is the general structure of the gauge boson propagator in a hot magnetized medium, which is used here in the heavy quark potential problem for the first time. The third is a rich anisotropic structure of the complex heavy quark potential, which explicitly depends on the longitudinal and transverse distances. By comparing with earlier references, we display our new contributions by plotting the heavy quark potential tomography and the dissociation probability at finite temperature and magnetic field.
Neutrinos are fundamental particles that can act as a probe for exploring violations of fundamental symmetries such as Lorentz invariance. Lorentz symmetry breaking is a fundamental violation of space-time symmetry, which implies that physical laws vary under Lorentz transformations. We consider intrinsic LIV effects, which exist even in vacuum, and use the Standard Model Extension (SME) framework to treat the effect of LIV as a perturbation to the standard matter Hamiltonian. We study the influence of the CPT-odd LIV terms on mass-induced neutrino oscillations. In this work, we explore the impact of the LIV parameters on the neutrino oscillation probabilities, particularly the oscillation channel $P_{\mu e}$, which is the most significant channel for DUNE. We observe a sizable effect of the LIV parameters on the oscillation probabilities, and we also investigate their impact on the CP-measurement sensitivity at DUNE.
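Concretely, with the CPT-odd term included, the effective flavor-basis Hamiltonian used in such studies takes the commonly adopted form $H = \frac{1}{2E}\, U\, \mathrm{diag}\left(0, \Delta m^2_{21}, \Delta m^2_{31}\right) U^{\dagger} + \mathrm{diag}\left(V_{CC}, 0, 0\right) + \mathbf{a}$, where $\mathbf{a} = (a_{\alpha\beta})$ is a Hermitian matrix of CPT-odd LIV coefficients added as a perturbation to the standard vacuum-plus-matter Hamiltonian.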
In the standard model of electroweak interactions, the neutrino charge in vacuum vanishes; this follows from the requirement of anomaly cancellation in $SU(2)_{L} \times U(1)_{Y}$. In a thermal medium in the presence of an external electromagnetic field, a neutrino can interact with a photon, mediated by the corresponding charged leptons (real or virtual), and thus acquires an effective charge. In this theory, the charge receives contributions from the vector-type and axial-vector-type vertices of the weak interaction. In the absence of a magnetic field only the vector-type vertex contributes \cite{pal1,orae}. On the other hand, in a magnetized plasma the axial-vector part also starts contributing to the effective charge of the neutrino. This contribution is dominant to order $\frac{eB}{m_{e}^2}$ for $eB < m_{e}^2$, where $B$ is the magnetic field. The size of the contribution is
$e^{\nu_{a}}_{eff} \sim -(3.036 \times 10^{-12})\left[g_{A}\,e\left(\frac{B}{B_{c}}\right) \frac{1}{\pi^{3/2}}\right] \left(\sqrt{\frac{m_{e}}{T}}\right) e^{-m_{e}/T}\cosh(\mu/T)\,(1-\lambda)\cos\theta.$
In this equation, for electron neutrinos $g_{\rm A} = (-1 + 1/2)$ and for mu and tau neutrinos $g_{\rm A} = (1 - 1/2)$; $B_{c} \sim \frac{m_{e}^{2}}{e}$, $m_{e}$ is the electron mass, $e$ is the electron charge, $T$ is the temperature, $\mu$ is the chemical potential, $\lambda$ is the helicity of the (Dirac-type) neutrino and $\theta$ is the angle between the neutrino momentum and the magnetic field.
In an earlier paper we had obtained this result \cite{MDPI}. In this paper we estimate the contribution to the neutrino effective charge from the vector-type vertex (coupling constant $g_{V}$), $e^{\nu_{V}}_{eff}$, coming from the polarization tensor $\Pi_{\mu\nu}$ in a magnetized medium. For electron-type neutrinos $g_{\rm V} = 1 - (1 - 4 \sin^2 \theta_{\rm W})/2$, and for tau and mu type neutrinos $g_{\rm V} = -(1 - 4 \sin^2 \theta_{\rm W})/2$. We note that, keeping PCT symmetry in view, the leading powers of $B$ and $\mu$ that appear in the expression for $e^{\nu_{V}}_{eff}$ are of order $(eB)^{2}$ and $\mu^{2}$. We further elucidate the direction dependence of this charge, which is a manifestation of the loss of isotropy due to the presence of an external field. The expression for the induced charge from the vector-type vertex turns out to be:
\begin{eqnarray}
e^{\nu_{V}}_{eff} &=& \frac{G_{F}\,g_{V}}{\sqrt{2}\,e} \left(\frac{e^{2} m_{e}^{2}}{1.68\,\pi^{\frac{3}{2}}}\, e^{-\frac{\sqrt{2}m_{e}}{T}}\,{\cal F}(\theta)\right)
\left(1- \frac{\lambda|k|}{\omega}\right)\left[\left(\frac{T}{m_{e}}\right)^{\frac{1}{2}} + \left(\frac{m_{e}}{T}\right)^{\frac{3}{2}} \left(\frac{eB}{m_{e}^{2}}\right)^{2}\right].
\end{eqnarray}
In the ultra-relativistic limit the neutrinos with $\lambda = -1$ acquire the charge, while the other helicity state remains charge neutral. We conclude this work by commenting on the astrophysical and cosmological consequences of the same.
\begin{thebibliography}{999}
\bibitem{pal1} J.~F. Nieves and P.~B. Pal, Phys. Rev. D {\bf 49}, 1398 (1994);
R.~N. Mohapatra and P.~B. Pal, {\em Massive Neutrinos in Physics and Astrophysics} (World Scientific, 2nd Ed., 1998).
\bibitem{orae} V.~N. Oraevsky, V.~B. Semikoz and Ya.~A. Smorodinsky, JETP Lett. {\bf 43}, 709 (1986).
\bibitem{MDPI} A.~K. Ganguly, V. Singh, D. Singh and A. Chaubey, Galaxies {\bf 9}, 2 (2021).
\end{thebibliography}
We study the effect of interference on the lepton number violating (LNV) and lepton number conserving (LNC) three-body meson decays $M^+_1 \rightarrow l^+_i l^\pm_j \pi^\mp$, which arise in a TeV-scale Left-Right Symmetric Model (LRSM) with degenerate or nearly degenerate right-handed (RH) neutrinos. The LRSM contains three RH neutrinos and a RH gauge boson. RH neutrinos with masses in the range $M_N \sim$ (MeV - a few GeV) can give a resonant enhancement in the semi-leptonic LNV and LNC meson decays. In the case where only one RH neutrino contributes to these decays, the predicted new-physics branching ratios of the semi-leptonic LNV and LNC meson decays $M^+_1 \rightarrow l^+_i l^+_j \pi^-$ and $M^+_1 \rightarrow l^+_i l^-_j \pi^+$ are equal. We find that, with at least two RH neutrinos contributing to the process, the LNV and LNC decay rates can differ. Depending on the neutrino mixing angles and CP violating phases, the branching ratios of the LNV and LNC decay channels mediated by the heavy neutrinos can be either enhanced or suppressed, and the ratio of these two rates can differ from unity.
The nature of the transition from the deconfined quark-gluon state to the confined hadron gas and the location of the critical point are among the properties of heavy-ion collisions that are still a matter of investigation. One of the basic characteristics of the critical behaviour of a system undergoing a phase transition is that it exhibits fluctuations on all scales. In recent collider experiments the multiplicity of produced particles is high enough that it becomes feasible to examine the nature of the quark-hadron phase transition using intermittency analysis, which relies on high-multiplicity events. The scaling properties of factorial moments of the spatial patterns of the particles, termed intermittency and used to quantify the fluctuations in a system, will be discussed along with observables which help to characterize the system and the particle production mechanism. Results from recent phenomenological and experimental investigations will also be presented.
We explore the parameter space of the phenomenological Minimal Supersymmetric Standard Model (pMSSM) with a light neutralino thermal dark matter (having mass less than half the SM Higgs boson mass) for both positive and negative values of the higgsino mass parameter (μ) that is consistent with current collider and astrophysical constraints. Our investigation shows that the recent experimental results from the LHC as well as from direct detection searches for dark matter by the LUX-ZEPLIN collaboration basically rule out the μ>0 scenario while only allowing a very narrow region with light electroweakinos in the μ<0 scenario. These are well within the reach of the Run-3 of LHC and dedicated efforts to probe this region should be pursued.
We incorporate the isospin mass splittings of the $J^P={1/2}^+$ and ${3/2}^+$ baryons through the intrinsic mass difference between the $u$ and $d$ quarks. We have calculated the electromagnetic and strong hyperfine contributions arising from the quark-quark interactions in the framework of the effective mass scheme. We exploit the experimental information to obtain the effective masses of the quarks inside the baryons, which in turn receive contributions from the electromagnetic and strong hyperfine interactions. Further, we make robust predictions for heavy flavor baryon masses and find that our predictions are in very good agreement with experiment.
Local multiplicity fluctuations are a useful tool to understand the dynamics of the particle production and the phase-space changes from quarks to hadrons in ultrarelativistic heavy-ion collisions. The study of scaling behavior of multiplicity fluctuations in geometrical configurations in multiparticle production can be performed using the factorial moments and recognized in terms of a phenomenon referred to as intermittency.
In this contribution, we present the analysis of the factorial moment performed on the multiplicity distributions of charged particles produced in Pb$-$Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV, recorded with the ALICE detector at the LHC. The normalized factorial moments (NFM), $F_{q}$ of the spatial configurations of charged particles in two-dimensional angular ($\eta,\varphi$) phase space are calculated. For a system with dynamic fluctuations due to the characteristic critical behaviour near the phase transition, $F_{q}$ exhibits power-law growth with increasing bin number or decreasing bin size which indicates self-similar fluctuations. Relating the $q^{\rm{th}}$ order NFM ($F_{q}$) to the second-order NFM ($F_{2}$), the value of the scaling exponent ($\nu$) is extracted, which indicates the order of the phase transition within the framework of Ginzburg-Landau theory. The dependence of scaling exponent on the $p_{\rm{T}}$ bin width and the centrality of the events will be presented. The measurements are also compared with the corresponding results from the AMPT and HIJING models.
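For reference, the normalized factorial moments used above are commonly defined (schematically, in the horizontally averaged form) as $F_q(M) = \left\langle \frac{1}{M}\sum_{m=1}^{M} n_m(n_m-1)\cdots(n_m-q+1)\right\rangle \big/ \left\langle \frac{1}{M}\sum_{m=1}^{M} n_m\right\rangle^{q}$, where $n_m$ is the multiplicity in the $m$-th of $M$ bins; self-similar fluctuations appear as the power laws $F_q(M) \propto M^{\varphi_q}$ and $F_q \propto F_2^{\beta_q}$ with $\beta_q = (q-1)^{\nu}$, and within Ginzburg-Landau theory a second-order transition is expected to give $\nu \approx 1.3$.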
Quark-gluon plasma, as produced in heavy-ion collision experiments such as those at RHIC and the LHC, is observed to be the least viscous matter in the universe, because its measured shear viscosity to entropy density ratio remains very close to the quantum lower bound, or KSS bound, predicted from string theory calculations. A corresponding lower limit on the relaxation time of quarks (and gluons) can be obtained by using the relaxation-time-approximation based expression for this ratio. In this strongly coupled environment, we attempt to sketch the detailed anisotropic transport in the presence of a strong magnetic field, which is expected to be produced in heavy-ion collision experiments.
The two-particle electric charge balance function has been measured in proton-lead and lead-lead events with the CMS detector at the LHC. Particle correlations can be used as a probe of the charge creation mechanism, and the balance function is constructed using the like- and unlike-charge particle pairs to quantify these correlations. Compared to previous measurements, the pseudorapidity range is extended up to $|\eta|$ < 2.4; this larger phase space region is essential for studying the time evolution of the system. The width of the balance function, both in relative $|\eta|$ and in relative azimuthal angle, is found to decrease with multiplicity for low transverse momentum ($p_{T}$ < 2 GeV/c). The effect is observed for both collision systems, and it is consistent with a late hadronization scenario, where particles are produced at a later stage during the system evolution. The multiplicity dependence is weaker at higher $p_{T}$, which signifies that the balancing charge partners are more strongly correlated than in the low-$p_{T}$ region. Model comparisons cannot reproduce the multiplicity dependence of the width in $\Delta\eta$, although a model which incorporates collective effects can reproduce the narrowing of the width.
In proton--proton collisions, the measurements of beauty-hadron production cross sections are an effective tool to test the perturbative QCD (pQCD) calculations. In addition, they provide the required reference for measurements performed in Pb--Pb and p--Pb collision systems, in order to study the in-medium mass dependent energy loss and the possible effects of cold nuclear matter, respectively.
In this contribution, the production of electrons from beauty-hadron decays in pp collisions at midrapidity with ALICE will be presented. The Time Projection Chamber (TPC), Time Of Flight (TOF) and ElectroMagnetic Calorimeter (EMCal) are used for particle identification. The presence of EMCal along with the TPC is exploited to measure the beauty-hadron decay electron production cross section in the high transverse momentum region. The $p_{\rm T }$-differential production cross section of electrons from beauty-hadron decays measured with ALICE in pp collisions at different centre of mass energies $\sqrt{s}$, ranging from 2.76 TeV to 13 TeV will be presented. In addition, the comparison of these measurements with different models will be shown.
We formulate a texture two-zero mass matrix for neutrinos, with the charged lepton matrix being diagonal, compatible with the current oscillation data. The proposed matrix has a minimal structure and hence is very predictive. The predictions of the proposed mass matrix for the lightest neutrino mass $m_{\nu_1}$, the Jarlskog rephasing invariant $J$, the CP violating phase $\delta$ and the effective neutrino mass $\langle m_{ee}\rangle$ are found to be in tune with the latest global analyses. In particular, the Majorana phases exhibit constrained ranges and are strongly correlated, which might have useful implications for experimental searches related to these phases. Some of the mass matrix elements are observed to be almost linearly correlated with $m_{\nu_1}$, which shows that a measurement of the absolute neutrino masses will constrain our model.
High-multiplicity proton-proton and proton-lead collisions at LHC energies exhibit signatures similar to those observed in Pb-Pb collisions (e.g. strangeness enhancement, ridge-like behaviour, etc.), which are commonly attributed to the formation of the Quark-Gluon Plasma. In this contribution, measurements of $\pi$, $K$ and p transverse momentum spectra in the rapidity region $|y|<0.5$ for various multiplicity classes in pp, p-Pb and A-A collisions with the ALICE detector at the LHC will be presented. Various results, including the integrated particle yields and particle ratios as a function of charged-particle multiplicity for different systems and energies, will be discussed.
Heavy quarks (charm and beauty) act as excellent probes for understanding the formation and evolution of the QCD medium in ultra-relativistic heavy-ion collisions because of their large mass and their relaxation time being large compared to the QGP (quark-gluon plasma) lifetime. Possible thermalization of charm quarks is observed in small systems like proton-proton (pp) collisions by studying charm hadrons, namely $D^{0}$ and $\Lambda_{c}^{+}$. Motivated by this, we study the freeze-out scenarios of $D^{0}$ and $\Lambda_{c}^{+}$ in pp collisions at $\sqrt{s} = 13$ TeV using the pQCD-based framework PYTHIA8. In this work, the production dynamics of open-charm hadrons are studied as a function of event multiplicity ($dN_{ch}/d\eta$), transverse momentum ($p_{T}$) and pseudorapidity ($\eta$). The $p_{T}$ spectra are analyzed with the thermodynamically consistent non-extensive Tsallis distribution function. The effective temperature ($T$) and the non-extensive parameter ($q$) are studied as functions of charged-particle multiplicity, transverse momentum and pseudorapidity. We observe that $T$ increases with $dN_{ch}/d\eta$ and decreases with $\eta$, while the non-extensive parameter increases in both cases when heavy flavours are considered. Further, the behaviour of $T$ and $q$ with transverse momentum is found to depend explicitly on the final-state charged-particle multiplicity. Along with these two parameters, a correlation between the initial-state and final-state effects is observed through the number of multi-partonic interactions (nMPI) and the Knudsen number. Possible thermalization in small collision systems and the applicability of hydrodynamics are explored.
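For context, the thermodynamically consistent Tsallis form commonly fitted to $p_{T}$ spectra at midrapidity (the precise normalization used in this study may differ) reads:
$$
\frac{d^{2}N}{dp_{T}\,dy}\bigg|_{y=0} \;=\; \frac{g\,V\, p_{T}\, m_{T}}{(2\pi)^{2}} \left[ 1 + (q-1)\,\frac{m_{T}}{T} \right]^{-\frac{q}{q-1}},
$$
where $m_{T}=\sqrt{p_{T}^{2}+m^{2}}$, $g$ is the degeneracy factor, $V$ the volume, $T$ the effective temperature and $q$ the non-extensivity parameter; $q\to1$ recovers the Boltzmann-Gibbs limit.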
The value of the muon magnetic moment recently reported by Fermilab shows a $4.2\sigma$ discrepancy with the theoretical prediction, which is a robust signal for physics beyond the SM. In this work, we consider a $U(1)_{L_{\mu}-L_{\tau}}$ extension of the scotogenic model to explain non-zero neutrino masses and the muon ($g-2$) simultaneously. It is known that the muon neutrino trident (MNT) process puts an upper bound on the gauge boson mass, $M_{Z_{\mu\tau}}<300$ MeV, if it is to accommodate the muon ($g-2$) anomaly. We constrain the vacuum expectation value (vev) of the scalar singlet responsible for the gauge boson mass using low-energy neutrino data. We find that there exists a range of $M_{Z_{\mu\tau}}$ above 300 MeV giving consistent neutrino phenomenology. In this case, it is shown that the muon ($g-2$) can be explained by adding a vector-like lepton triplet with appropriate $L_{\mu}-L_{\tau}$ charge such that it couples only to the muon through the inert doublet $\eta$. We have also investigated the implications of the model for $CP$ violation and for the effective Majorana neutrino mass $m_{ee}$ appearing in the neutrinoless double beta ($0\nu\beta\beta$) decay process.
We introduce a novel hybrid framework combining type I and type II seesaw models for neutrino mass, in which a complex vacuum expectation value of a singlet scalar field spontaneously breaks CP symmetry. Using pragmatic organizing symmetries, we demonstrate that such a model can simultaneously explain the neutrino oscillation data and generate the observed baryon asymmetry through leptogenesis. Interestingly, a natural choice of parameters leads to a mixed leptogenesis scenario driven by nearly degenerate scalar triplet and right-handed singlet neutrino fields, for which we present a detailed quantitative analysis.
In this work, we consider an extension of the magic symmetry ansatz within the paradigm of the TM$_2$ mixing scheme, wherein the (2, 2) element of the effective low-energy neutrino mass matrix $M_\nu$ is also equal to the ``\textit{magic sum}''. The new constraint reduces the number of free parameters, making the mass matrix highly predictive, with strong correlations and homoscedasticity amongst the physical observables, in particular the neutrino mass hierarchy, the atmospheric mixing angle ($\theta_{23}$) and the $CP$ phase $\delta$. The cosmological bound on the sum of neutrino masses refutes the inverted hierarchy of neutrino masses at $95\%$ CL, implying TM$_2$ with normal hierarchy as the only viable possibility in the model. Also, $\theta_{23}$ is found to be non-maximal ($\theta_{23}\ne45^{\circ}$) in order to accommodate observable $CP$ violation in neutrino oscillation experiments.
We present the first results on the resummation of Next-to-Soft Virtual (NSV) logarithms for the threshold production of pseudoscalar Higgs boson through gluon fusion at the LHC. These results are presented after resumming the NSV logarithms of the kind ${\log}^{i}(1-z)$ to $\overline{\text{NNLL}}$ accuracy and matching them systematically to the fixed order NNLO cross-sections. These results are obtained using collinear factorization, renormalization group invariance, and recent developments in the NSV resummation techniques. The phenomenological implications of these NSV resummed results for 13 TeV LHC are studied and it is observed that these NSV logarithms are quite large. We also evaluate theory uncertainties and find that the renormalization scale uncertainties get reduced further with the inclusion of NSV corrections at various orders in QCD. We further study the impact of QCD corrections on mixed scalar-pseudoscalar states for different values of the mixing angle $\alpha$.
In the present work, we apply Tsallis non-extensive statistics to study the thermodynamic properties of quark matter in the chiral SU(3) mean field model. Within this model, the quark matter properties are modified through the scalar fields $\sigma, \zeta, \delta, \chi$ and the vector fields $\omega, \rho$ at finite temperature and chemical potential. Non-extensive effects have been introduced through a dimensionless parameter $q$ and the results are compared to the extensive case ($q\rightarrow1$). In the non-extensive case, the exponential in the Fermi-Dirac (FD) function is modified to a $q$-exponential form. The influence of the parameter $q$ on various thermodynamic properties such as energy density, entropy density and trace anomaly is investigated.
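Explicitly, the non-extensive modification referred to above replaces the Fermi-Dirac distribution by its $q$-generalized form (shown here for particles; antiparticles take $\mu\to-\mu$):
$$
f_{q}(E) \;=\; \frac{1}{\left[1+(q-1)\,\frac{E-\mu}{T}\right]^{\frac{1}{q-1}} + 1} \;\;\xrightarrow{\;q\to1\;}\;\; \frac{1}{e^{(E-\mu)/T}+1},
$$
so all thermodynamic quantities acquire a smooth dependence on the non-extensivity parameter $q$.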
We analyse the $\mathcal{CP}$ effects of anomalous $hVV$ vertices (with $V = W, Z, \gamma$) through Higgs-associated processes in the context of the LHC and its proposed variants. Sensitivities to such interactions will also be discussed for various Higgs detection modes at the aforementioned colliders.
Compared to other neutrino sources, the huge anti-neutrino flux at nuclear reactor based experiments allows us to derive stronger bounds on non-standard interactions of neutrinos with electrons mediated by light scalar or vector mediators. At neutrino energies around $200$~keV, the reactor anti-neutrino flux is at least an order of magnitude larger than the solar flux. The atomic and crystal form factors of the detector materials, which encode the details of the atomic structure, become relevant at this energy scale because the momentum transfers are small. Non-standard neutrino-electron interactions mediated by a light scalar or vector mediator arise naturally in many low-scale models, and we also propose one such new model with a light scalar mediator. Here, we investigate the parameter space of such low-scale models at reactor based neutrino experiments with low-threshold Ge and Si detectors, and assess the prospect of probing or ruling out the relevant parameter space by computing the projected sensitivity at $90\%$ confidence level from a $\chi^2$-analysis. We find that a detector capable of discriminating between electron-recoil and nuclear-recoil signals down to a very low threshold, such as $5$~eV, placed at a reactor based experiment would be able to probe a larger region of parameter space than previously explored. A Ge (Si) detector with $10$~kg-yr exposure and a 1 MW reactor anti-neutrino flux would be able to probe scalar and vector mediators with masses below 1 keV for coupling products $\sqrt{g_\nu g_e} \sim 1 \times 10^{-6}~(9.5 \times 10^{-7})$ and $1\times 10^{-7}~(8\times 10^{-8})$, respectively.
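As an illustration of the statistical procedure, a typical pull-term $\chi^{2}$ used for such projected sensitivities (the exact definition adopted in this work may differ) has the form:
$$
\chi^{2} \;=\; \min_{\xi}\;\sum_{i}\frac{\left[N_{i}^{\rm SM}(1+\xi) - N_{i}^{\rm SM+NSI}\right]^{2}}{N_{i}^{\rm SM}} \;+\; \left(\frac{\xi}{\sigma_{\rm sys}}\right)^{2},
$$
where $N_{i}$ are the expected event numbers in recoil-energy bin $i$, $\xi$ is a nuisance parameter for the flux/efficiency normalization and $\sigma_{\rm sys}$ its assumed systematic uncertainty; the $90\%$ C.L. contours then correspond to the appropriate $\Delta\chi^{2}$ cut.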
NOvA is a long baseline neutrino oscillation experiment based at Fermilab, with the primary aim of studying the properties of neutrinos, the most elusive type of fundamental particle. The experiment measures neutrinos from Fermilab's NuMI beam using two detectors: a near detector located 1 km downstream from the beam source, and a far detector at a baseline of 810 km. Both detectors utilize liquid scintillator technology and are functionally identical, which allows for significant cancellation of detector systematic uncertainties.
We have developed a toolset that comprises two parts: a validation package, which runs analysis codes producing low-level distributions directly from the event record at all stages of MC dataset production, from physics simulation through detector simulation and reconstruction; and a second part dealing with detector aging. On one hand, the validation package helps to keep track of all the changes made throughout the simulation and reconstruction phases and is an easy-to-use toolkit. On the other hand, NOvA data-taking began in 2014, which places the experiment approximately halfway to the projected end of its lifetime; for such a long-lived experiment, detector aging will have significant effects on the physics analysis. In general, this toolset helps us track changes in both detectors and, at the same time, incorporate the aging effects at the simulation level. Tools developed for monitoring the detector performance and for the validation of the produced datasets will be presented.
A simulation has been carried out to study the timing performance of the Gas Electron Multiplier (GEM) detector for various parameters.
The signal and induced current are determined using a 5.9 keV X-ray photon along with a muon source at various energies. The gain and time resolution have been optimized for single- and quadruple-layer GEM detectors. Different detector parameters, such as gas composition, initial particle position, energy and electric fields, are varied in order to estimate the time resolution.
We consider the scenario of self-interacting dark matter (SIDM) with a light mediator in a model-independent way, which can alleviate two long-standing issues of small-scale cosmology, namely cusp-vs-core and too-big-to-fail. A Yukawa potential is chosen to model the mediator exchange between DM particles responsible for their self-interactions. The dynamics of the self-interaction transfer cross-section is studied for a range of mediator masses ($m_{Z'}$). A relationship is also established between the cross-section and the relative velocity of the DM particles, which ensures the solution to the DM crisis at small scales. Our numerical results are efficient compared to earlier works in the sense that a smaller number of $\ell$ modes is needed to achieve the same level of accuracy in the cross-section calculations. For a better understanding of the SIDM parameter space, we perform an analytical analysis of the dependence of the transfer cross-section on the other important SIDM parameters using a Hulthen potential, which is similar in its behaviour to the Yukawa potential. A detailed evolution of the particle dynamics using the Boltzmann equation and the effect of Sommerfeld enhancement on such calculations have also been studied. We also provide a minimal anomaly-free leptophilic extension of the Standard Model that can incorporate SIDM and its mediator candidate in the framework.
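For reference, the Hulthen potential mentioned above, which closely mimics the Yukawa potential while admitting analytic solutions, is usually parametrized as follows (a standard form from the SIDM literature; the value of $\kappa$ is an assumption, not taken from this work):
$$
V_{\rm Hulthen}(r) \;=\; \pm\,\frac{\alpha_{X}\,\delta\, e^{-\delta r}}{1-e^{-\delta r}}, \qquad \delta \equiv \kappa\, m_{Z'},\;\; \kappa \approx 1.6,
$$
where $\alpha_{X}$ is the dark fine-structure constant and the sign corresponds to repulsive or attractive self-interaction; at short distances, $r\ll\delta^{-1}$, it reduces to the Yukawa form $\pm\,\alpha_{X} e^{-\delta r}/r$ up to $\mathcal{O}(\delta r)$ corrections.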
The decays $B\to\psi(2S)K_{S}\pi^{+}\pi^{-}$ and $B_{s}\to\psi(2S)K_{S}$ are observed for the first time using pp collision data samples (2017 and 2018) collected with the CMS detector, corresponding to an integrated luminosity of 103 fb$^{-1}$ taken at a centre-of-mass energy of 13 TeV. These decays are observed with significances exceeding five standard deviations. In this study, the branching fractions of both decays, $B_{s}\to\psi(2S)K_{S}$ and $B\to\psi(2S)K_{S}\pi^{+}\pi^{-}$, are measured relative to the $B\to\psi(2S)K_{S}$ decay.
Neutrinos can acquire both dynamic and geometric phases due to the non-trivial mixing between mass and flavour eigenstates. In this article, we derive general expressions for all plausible gauge-invariant diagonal and off-diagonal geometric phases in the three-flavour neutrino model using the kinematic approach. We find that the diagonal and higher-order off-diagonal geometric phases are sensitive to the mass ordering and to the Dirac CP violating phase $\delta$. We show that the third-order off-diagonal geometric phase ($\Phi_{\mu e\tau}$) is invariant under any cyclic or non-cyclic permutation of flavour indices when the Dirac CP phase is zero. For non-zero $\delta$, we find that $\Phi_{\mu e\tau}(\delta)=\Phi_{e \mu \tau}(-\delta)$. Further, we explore the effects of a matter background using a two-flavour neutrino model and show that the diagonal geometric phase is either 0 or $\pi$ in the MSW resonance region and takes non-trivial values elsewhere. The transition between zero and $\pi$ occurs at the point of complete oscillation inversion, called the nodal point, where the diagonal geometric phase is not defined. Also, in the two-flavour approximation, the two distinct diagonal geometric phases are co-functions with respect to the mixing angle. Finally, in the two-flavour model, we show that the only second-order off-diagonal geometric phase is a topological invariant and is always $\pi$.
While quarkonia (quark-antiquark systems of charm and bottom) provide some of the most interesting probes of a deconfined quark-gluon plasma,
a QCD-based theoretical study of them is nontrivial. An effective
thermal potential can be constructed, but its nature and properties
are very different from the corresponding vacuum potential.
I will critically examine the interpretation and scope of the effective thermal potential. I will present results from our recent study of the potential, and discuss what they say about the behavior of a heavy quark-antiquark pair in the plasma.
A strong classical electromagnetic or gravitational background can lead to vacuum instability and produce particle-antiparticle pairs. This extraordinary property of quantum field theory has far-reaching implications for understanding the generation of particle-antiparticle pairs in the presence of a strong electric field [6]; particle creation in the expanding universe [25]; black hole evaporation as a result of Hawking radiation [22-24]; and Unruh radiation, in which particles are seen by an accelerating observer [20-21]. The process of particle creation from the quantum vacuum was first studied in 1951 by Schwinger for a constant electric field, and this phenomenon is known as the Schwinger effect [1]. This particle creation paradigm is of crucial importance for non-equilibrium processes in heavy-ion collisions [2-4], as well as for astrophysical phenomena and the search for nonlinear and nonperturbative effects in ultraintense laser systems [7-8].
Particle production is the process of evolving a quantum system from an initial equilibrium configuration to a new final equilibrium configuration via an intermediate non-equilibrium evolution caused by a strong field background. A quantitative description of particle production at all times in a time-dependent electromagnetic field is subtle because there is no unique separation into positive and negative energy states at intermediate times; these states are well defined only at asymptotically early and late times, where the field vanishes. A common approach is to define the particle number in terms of an adiabatic basis using the Bogoliubov transformation [9-12,26]. In the adiabatic basis, we examine the problem of pair production in a time-varying, spatially uniform electric field $E(t)=(0,0,E_{0}\,\mathrm{sech}^{2}(t/\tau))$, which has been studied by various authors [13-18] who calculated the number of particles created at asymptotic times; the problem of particle production at finite times, however, has not been studied. We prepare the quantum system in the vacuum state at some initial time $t_{0}$ and ask what the properties of the system are at a finite time $t$. We choose finite times in multiples of the pulse duration $\tau$ of the given electric field ($t = \tau, 2\tau, 3\tau, \ldots$) and examine the system properties at those times. The finite-time behaviour of particle production in the Sauter-pulse field is thus studied. To study the dynamical behaviour of particle production, the one-particle momentum distribution function $f(p, t)$ is an important quantity in the description of the particle production process in a time-dependent electric field. The time evolution of the particle distribution function $f(p, t)$ in momentum space is studied for $E_{0} = 0.2$ and $\tau = 10$ in the non-perturbative regime, with Keldysh parameter $\gamma = 0.5$.
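To make the adiabatic definition concrete, a minimal sketch (written here in the form used for a charged scalar field; the fermionic case differs in normalization but yields the same identification of the distribution function) is:
$$
A_{z}(t) = -E_{0}\,\tau\tanh(t/\tau), \qquad \omega_{\mathbf p}(t)=\sqrt{m^{2}+p_{\perp}^{2}+\big(p_{z}-eA_{z}(t)\big)^{2}},
$$
$$
\phi_{\mathbf p}(t)=\frac{\alpha_{\mathbf p}(t)}{\sqrt{2\omega_{\mathbf p}(t)}}\,e^{-i\int^{t}\omega_{\mathbf p}\,dt'} + \frac{\beta_{\mathbf p}(t)}{\sqrt{2\omega_{\mathbf p}(t)}}\,e^{+i\int^{t}\omega_{\mathbf p}\,dt'}, \qquad f(\mathbf p,t)=|\beta_{\mathbf p}(t)|^{2},
$$
where $\alpha_{\mathbf p}$ and $\beta_{\mathbf p}$ are the time-dependent Bogoliubov coefficients; $f(\mathbf p,t)$ coincides with the usual asymptotic particle number only at $t\to\pm\infty$, where the Sauter field vanishes.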
Here, we discuss both the longitudinal and the transverse momentum spectra to understand what happens to the quantum system after a finite time in the process of particle production from the vacuum in a linearly polarized, time-dependent Sauter field.
The longitudinal momentum (canonical momentum along the field, $p_{z}$) spectrum of the created particles shows a complex splitting behaviour and manifests oscillations arising at finite times, where the electric field has nearly vanished; this oscillating structure can be understood in the dynamical tunneling picture [19].
The transverse momentum (canonical momentum perpendicular to the field) spectrum of the created particles shows only a splitting of the smooth structure and no quantum interference, indicating that the interference effect occurs only in the direction of the electric field.
[1] J. Schwinger, On gauge invariance and vacuum polarization, Phys. Rev. 82, 664 (1951).
[2] F. Gelis and R. Venugopalan, Particle production in field theories coupled to strong external sources, I: Formalism and main results, Nucl. Phys. A776, 135 (2006); Particle production in field theories coupled to strong external sources, II: Generating functions, Nucl. Phys. A779, 177 (2006).
[3] D. Kharzeev and K. Tuchin, From color glass condensate to quark gluon plasma through the event horizon, Nucl. Phys. A753, 316 (2005); D. Kharzeev, E. Levin, and K. Tuchin, Multi-particle production and thermalization in high-energy QCD, Phys. Rev. C 75, 044903 (2007).
[4] F. Gelis, E. Iancu, J. Jalilian-Marian, and R. Venugopalan, The color glass condensate, Annu. Rev. Nucl. Part. Sci. 60, 463 (2010).
[5] R. Ruffini, G. Vereshchagin, and S. Xue, Electron-positron pairs in physics and astrophysics: From heavy nuclei to black holes, Phys. Rep. 487, 1 (2010).
[6] M. Marklund and P. Shukla, Nonlinear collective effects in photon-photon and photon-plasma interactions, Rev. Mod. Phys. 78, 591 (2006).
[7] A. Di Piazza, C. Muller, K. Z. Hatsagortsyan, and C. H. Keitel, Extremely high-intensity laser interactions with fundamental quantum systems, Rev. Mod. Phys. 84, 1177 (2012).
[8] G. Mourou, T. Tajima, and S. Bulanov, Optics in the relativistic regime, Rev. Mod. Phys. 78, 309 (2006).
[9] E. Brezin and C. Itzykson, Pair production in vacuum by an alternating field, Phys. Rev. D 2, 1191 (1970).
[10] V. S. Popov, Pair production in a variable external field (quasiclassical approximation), Sov. Phys. JETP 34, 709 (1972); Pair production in a variable and homogeneous electric fields as an oscillator problem, Sov. Phys. JETP 35, 659 (1972).
[11] V. G. Bagrov, D. M. Gitman, S. P. Gavrilov, and S. M. Shvartsman, Creation of boson pairs in a vacuum, Izv. Vuz. Fiz. 3, 71 (1975); D. Gitman and S. Gavrilov, Quantum processes in a strong electromagnetic field. Creating pairs, Izv. Vuz. Fiz. 1, 94 (1977).
[12] F. Gelis and N. Tanji, Schwinger mechanism revisited, Prog. Part. Nucl. Phys. 87, 1 (2016).
[13] S. P. Gavrilov and D. M. Gitman, Vacuum instability in external fields, Phys. Rev. D 53, 7162 (1996).
[14] A. B. Balantekin, J. E. Seger and S. H. Fricke, Dynamical effects in pair production by electric fields, Int. J. Mod. Phys. A 6, 695 (1991).
[15] Y. Kluger, J. M. Eisenberg, B. Svetitsky, F. Cooper and E. Mottola, Pair production in a strong electric field, Phys. Rev. Lett. 67, 2427 (1991).
[16] Y. Kluger, J. M. Eisenberg, B. Svetitsky, F. Cooper and E. Mottola, Fermion pair production in a strong electric field, Phys. Rev. D 45, 4659 (1992).
[17] A. M. Fedotov, E. G. Gelfer, K. Yu. Korolev and S. A. Smolyansky, Kinetic equation approach to pair production by a time-dependent electric field, Phys. Rev. D 83, 025011 (2011).
[18] S. P. Kim and C. Schubert, Nonadiabatic quantum Vlasov equation for Schwinger pair production, Phys. Rev. D 84, 125028 (2011).
[19] L. V. Keldysh, Dynamic tunneling, Her. Russ. Acad. Sci. 86, 413 (2016).
[20] R. Schutzhold, G. Schaller, and D. Habs, Signatures of the Unruh effect from electrons accelerated by ultra-strong laser fields, Phys. Rev. Lett. 97, 121302 (2006); 97, 139902(E) (2006).
[21] W. G. Unruh, Notes on black hole evaporation, Phys. Rev. D 14, 870 (1976).
[22] G. Mahajan and T. Padmanabhan, Particle creation, classicality, and related issues in quantum field theory: I. Formalism and toy models, Gen. Relativ. Gravit. 40, 661 (2008); Particle creation, classicality, and related issues in quantum field theory: II. Examples from field theory, Gen. Relativ. Gravit. 40, 709 (2008).
[23] G. W. Gibbons and S. W. Hawking, Cosmological event horizons, thermodynamics, and particle creation, Phys. Rev. D 15, 2738 (1977).
[24] L. H. Ford, Gravitational particle production and inflation, Phys. Rev. D 35, 2955 (1987).
[25] E. Greenwood, D. C. Dai, and D. Stojkovic, Time-dependent fluctuations and particle production in cosmological de Sitter, Phys. Lett. B 692, 226 (2010).
[26] L. Sriramkumar and T. Padmanabhan, Probes of the vacuum structure of quantum fields in classical backgrounds, gr-qc/9903054.
An $A_{5}$ discrete symmetry group is used to construct a neutrino mass model that can reproduce the deviation from golden ratio mixing. Here, the neutrino masses are obtained through the Type-I seesaw mechanism. The neutrino masses and mixing patterns predicted by the model can explain the current data with good accuracy. In this work, the correlation between the neutrinoless double beta decay parameter $|m_{\beta \beta}|$ and the neutrino oscillation parameters is also investigated. The analysis is consistent with the latest cosmological bound $\Sigma m_{i}\leq 0.12$ eV.
In this talk, we study the finite temperature properties of a 10-D version of a hardwall model with probe D7-branes and separate cutoffs for the branes and the bulk. In particular, we describe the possible phases and the phase transitions of QCD-like theory in this holographic model.
A cosmic muon veto detector (CMVD) is being built around the mini-ICAL detector at the IICHEP transit campus in Madurai. The CMVD aims to study the feasibility of building a cosmic muon veto for a shallow-depth neutrino detector. For this purpose, the CMVD needs to have a muon detection efficiency of more than $99.99\%$ and a false positive rate of less than $10^{-5}$. The CMVD consists of veto walls on three sides and the top of mini-ICAL. The veto walls are made of extruded plastic scintillator (EPS) strips, 4.5-4.7 m long, 10 or 20 mm thick and 50 mm wide. Each EPS strip has two holes, separated by 25 mm, running along the length of the strip. A wavelength shifting (WLS) fibre of 1.4 mm diameter is inserted through each hole for light collection. A Hamamatsu SiPM with an active area of 2 mm $\times$ 2 mm is placed at each end of each fibre for signal readout. In order to minimise the number of electrical connectors, two EPS strips are glued together to form the smallest unit of the CMVD, called a di-counter (DC). Each DC is fitted with a counter mother board (CMB) to connect the SiPMs with the processing electronics using an HDMI cable.
For ease of veto wall construction, an aggregate unit, called a tile, is made by combining four DCs. The DCs are glued together, sideways, onto a base plate made of an aluminium honeycomb panel to provide mechanical support and rigidity. The EPS strips have a coating of TiO$_{2}$ to maximise internal light reflection, but the TiO$_{2}$ does not stop external light from entering the scintillator. The tiles must be extremely light-leak proof to achieve the goal of a false positive rate of less than $10^{-5}$. To achieve this, the tiles are wrapped with either a black Tedlar or a black low-density polyethylene (LDPE) sheet. Another layer of heat-shrinkable PVC sleeve is added over the wrapped tiles to prevent any abrasive damage to the Tedlar or LDPE layer. 95 such tiles, made from 380 di-counters, will be used in the CMVD.
A tile thus fabricated must be qualified for use in the CMVD. Since the individual di-counters used in making a tile are pre-qualified in all other aspects, the tiles need to be tested only for light leaks. A tile is declared light-leak proof if the noise rate of each of its di-counters is less than a pre-determined value. There are two components in the noise rate measured from a DC: the noise due to an actual light leak through the wrapping, and the dark noise produced by the SiPM. In order to negate the effect of the SiPM dark noise, the dark noise rate of the SiPM is first established by isolating it from the scintillator strips and wrapping it in a black cloth before taking the noise rate measurement. This value is subtracted from the noise rate of the tile to arrive at the noise rate caused by a light leak, if any.
This paper will discuss the fabrication procedure of scintillator tiles and the testing procedure used to qualify them as well as summarise the test results.
Despite successfully explaining most of the global neutrino oscillation data, the three-neutrino oscillation framework fails to accommodate the anomalous results from the short-baseline (SBL) experiments reported over the last two decades. Active-sterile neutrino oscillations with a mass-squared difference ($\Delta m^2_{41}\simeq 1$ eV$^2$) much larger than the standard atmospheric ($\Delta m^2_{31}$) and solar ($\Delta m^2_{21}$) mass-squared splittings can explain the SBL anomalies quite well. In this work, however, we probe active-sterile oscillations for a wide range of $\Delta m^2_{41}$ ($10^{-5}$-$10^{2}$ eV$^2$) at two long-baseline experimental facilities, DUNE and T2HK (JD)/T2HKK (JD+KD). We also consider the near detector for DUNE and the Intermediate Water Cherenkov Detector (IWCD) for JD/KD to constrain the active-sterile mixing angles. We explore the CP-violation discovery potential and the CP-phase reconstruction capabilities of these experiments at different mass-squared splittings. We observe that the CP sensitivity is maximal when $\Delta m^2_{41} \sim \Delta m^2_{31}$. The inclusion of the near detectors will help us verify the allowed regions of the SBL anomalies. We find that the sensitivity reach of the DUNE and JD/KD experiments for the active-sterile mixing angle $\theta_{14}$ is $\sim$2$^\circ$ and $\sim$3.5$^\circ$, respectively, at 90$\%$ C.L., when $\Delta m^2_{41}$ is around 1 to 10 eV$^2$. Similarly, for $\theta_{24}$, the best bounds are around 1.5$^\circ$ and 0.5$^\circ$ for JD/KD and DUNE, respectively.
We study the production of color-neutral and singly-charged heavy leptons at the proposed International Linear Collider. We use the optimal observable technique to determine the statistical accuracy to which the coupling of such fermions to the Z gauge boson (vector, axial or chiral) can be measured, both for a signal-only hypothesis and in the presence of non-interfering SM background. A UV-complete model that contains these particles, as well as a dark matter candidate, is used for illustration, yielding opposite-sign leptons (OSL) plus missing energy ($E_{miss}$) as the final-state signal. We demonstrate how the uncertainties in extracting the NP couplings optimally change depending on the background contamination. As an example, the effect on the optimal uncertainty of the NP couplings of judicious cuts on kinematic variables such as the missing energy ($E_{miss}$), which segregate the signal from the SM background, is demonstrated.
A core-collapse supernova explosion releases 99\% of the progenitor star's gravitational energy in the form of neutrinos resulting in emission of a huge number of MeV neutrinos ($\sim \mathcal{O}(10^{56})$). This neutrino emission takes place in three different phases, namely the {\it neutronisation burst, accretion and cooling} pertaining to different physical processes. Interestingly, neutrinos from neutronisation burst and accretion phases carry unique signatures of neutrino masses and mixing. For the electron antineutrino detectors, the faster rise of heavy lepton flavour antineutrinos at the SN core can result in different temporal signal characteristics for different mixing scenarios. Similarly, the absence of all other flavours but the electron neutrinos makes the neutronization burst phase sensitive to the mixing scenarios and can help us to resolve the long-standing problem of neutrino mass ordering, i.e., normal mass ordering (NMO) and inverted mass ordering (IMO). In this work, we investigate the possibilities of determining the neutrino mass ordering with Hyper-Kamiokande (HK), JUNO (for accretion phase) and DUNE (for neutronisation burst phase) for different SN models at galactic distances ($\sim$ 10 kpc). All these detectors are found to be capable of distinguishing the two scenarios, NMO and IMO at great statistical significance for most of the SN models.
References:
[1] P. D. Serpico, S. Chakraborty, T. Fischer, L. Hudepohl, H.-T. Janka and A. Mirizzi, Probing the neutrino mass hierarchy with the rise time of a supernova burst, Phys. Rev. D 85 (2012) 085031 [1111.4483].
[2] Hyper-Kamiokande collaboration, Hyper-Kamiokande Design Report, 1805.04163.
[3] JUNO collaboration, Neutrino Physics with JUNO, J. Phys. G 43 (2016) 030401 [1507.05613].
[4] DUNE collaboration, Long-baseline neutrino oscillation physics potential of the DUNE experiment, Eur. Phys. J. C 80 (2020) 978 [2006.16043].
[5] R. Buras, H.-T. Janka, M. Rampp and K. Kifonidis, Two-dimensional hydrodynamic core-collapse supernova simulations with spectral neutrino transport. 2. models for different progenitor stars, Astron. Astrophys. 457 (2006) 281 [astro-ph/0512189].
[6] K. Nakazato, K. Sumiyoshi, H. Suzuki, T. Totani, H. Umeda and S. Yamada, Supernova Neutrino Light Curves and Spectra for Various Progenitor Stars: From Core Collapse to Proto-neutron Star Cooling, Astrophys. J. Suppl. 205 (2013) 2 [1210.6841].
The phenomenon of neutrino oscillation is an excellent platform to study new physics beyond the Standard Model, popularly known as BSM physics. The unknown couplings involving neutrinos, termed non-standard interactions (NSI), may appear as `new physics' in different neutrino experiments. Neutrino NSIs may have significant effects on neutrino oscillations and CP sensitivity, which may be studied in various neutrino experiments. The concept of the coupling of a neutrino with a scalar is being explored and looks promising. The effects of scalar NSI appear as a perturbation to the neutrino mass matrix in the neutrino Hamiltonian, and the effect is energy independent. Interestingly, the matter effects due to scalar NSI scale linearly with the matter density.
In this work, we have performed a model-independent study of the effects of scalar NSI at DUNE for the first time. The neutrino mixing parameters may be affected by the inclusion of scalar NSI, as it modifies the effective mass of the neutrinos. We have probed the effect of scalar NSI on neutrino oscillations and its impact on the measurements of various mixing parameters. We have looked into the effects of scalar NSI on the different oscillation channels relevant to the experiment. We then explored the impact of scalar NSI on the CP-violation as well as CP-measurement sensitivities at DUNE. We show that the effect of scalar NSI on the CP sensitivity is significant, and in some cases scalar NSI can mimic the standard CP-violation sensitivity, introducing complications in $\delta_{CP}$ measurements.
References:
[1] L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D 17 (1978) 2369.
[2] S.-F. Ge and S. J. Parke, Scalar Nonstandard Interactions in Neutrino Oscillation, Phys. Rev. Lett. 122 (2019) 211801 [1812.08376].
[3] A. Medhi, D. Dutta and M. M. Devi, Exploring the effects of scalar non standard interactions on the CP violation sensitivity at DUNE, JHEP 06 (2022) 129 [2111.12943].
To cope with severe radiation dosage and increased event pileup in a high luminosity environment, the existing endcap calorimeters of the CMS experiment will be replaced by a high granularity calorimeter (HGCAL). To make the most of the increased granularity, a precise placement of detector modules into the sampling planes is of utmost importance. This requires the module components’ physical dimensions and other features to be within strict bounds. We have developed a suite of programs for an optical coordinate measuring machine, used to perform various measurements on HGCAL baseplates, and have subsequently extracted quality assurance results from the recorded data. Our key objective is to verify the baseplates produced in India for their quality, such as flatness, thickness, and accuracy in fiducial placement, and to provide feedback to the manufacturers for better quality control. The talk will describe various baseplate features and their measurement methods.
We study the quantisation of $\kappa$-deformed Dirac field by adopting a quantisation method that uses only equations of motion for quantising the field. Starting from $\kappa$-deformed Dirac equation, valid up to first order in the deformation parameter $a$, we derive deformed unequal time anti-commutation relation between deformed field and its adjoint, leading to an undeformed oscillator algebra. We then derive a deformed oscillator algebra by imposing unequal time anti-commutation relations between $\kappa$-deformed Dirac field and its adjoint to be undeformed. We construct the deformed number operator by calculating conserved charge associated with the global phase transformation symmetry. We show that this deformed number operator has a mass-dependent correction term, which is expected to have experimental significance in particle physics. We also show that charge conjugation is not a symmetry of the Dirac equation in the $\kappa$-deformed space-time.
Inspired by the resemblance of the Hamiltonian of the harmonic oscillator to the square of the length operator in 2-D space, we propose a method to quantize length and area in 2-D canonical noncommutative space in analogy with the quantization of energy in the harmonic oscillator problem. We attempt to extend our method to other canonical noncommutative spaces. In 3-D, we explicitly construct a set of raising and lowering operators, along with other operators, such that the square of the length operator is expressed in terms of the normal ordering of operators. Taking noncommutativity among the spatial coordinates in 3-D, we solve the eigenvalue equation involving the square of the length operator to obtain the quantization of length, from which the quantizations of area and volume are inferred. In Minkowski spacetimes where time and space do not commute, quantization is not possible in 1+1 and 2+1 dimensions. We also analyze the possibility of quantization of length in 3+1 dimensions when time does not commute with the spatial coordinates.
The LHC machine collides protons on protons every 25 ns. In the recently started Run-3 operation, the peak instantaneous luminosity delivered is about $2 \times 10^{34}$ cm$^{-2}$ s$^{-1}$. This results in a data flow of about 40 TB/s from the detector, all of which cannot be stored offline for detailed analysis. The most interesting events are selected quickly, in real time, via a two-tier trigger. The first tier, called the Level-1 trigger, is a hardware-based trigger, and the second uses a computer farm for detailed event analysis. For the identification of interesting events with photons and electrons in the final state, the Level-1 hardware trigger implements various algorithms that weed out mundane processes. In this talk, we present the reconstruction of electrons/photons for the Level-1 trigger, along with an overview of the electron/photon trigger setup for Run 3. We will also present the performance of the Level-1 electron/photon trigger in the latest Run 3 data.
Vector Boson Scattering (VBS) is widely recognized as an indirect probe for BSM searches in the gauge boson sector which can be described using the standard model effective field theory (SMEFT) approach. However, the EFT formalism is often not applied in a truly consistent manner. In this paper, limitations of the EFT approach to constrain new physics effects in the data are discussed with particular emphasis on perturbative unitarity conditions on the EFT amplitudes. We study the WZ scattering process in the fully leptonic decay mode using CMS Run II data. Results for the searches of the anomalous quartic gauge couplings (aQGC) using the full clipping method will be presented. A comparison between standard observable transverse WZ mass and new observable full WZ mass reconstructed using a simulation based approach will also be shown.
"Scotogenic" Model is a very popular model to explain the dark matter (DM) stability along with the neutrino mass generation in a very simple and elegant way. However, in this model 'ad-hoc' $Z_2$ symmetry is needed to explain the DM stability and it does not shed any light on the flavor structure of neutrino, that's why for the explanation of flavor structure of neutrino one has to add another symmetry apart from $Z_2$ symmetry in the Scotogenic model. In this talk, we show how dark symmetry and flavor structure of neutrino can be explained just by adding only one symmetry on top of the Standard Model symmetry. In this work we have added $A_4$ flavor symmetry which is able to describe the flavor structure of neutrino and the breaking of $A_4$ into its sub-group $Z_2$ will explain the DM stability. In this work neutrino mass is generated through Scoto loop + Seesaw mechanism. We show that the model successfully explains the DM phenomenology along with the latest neutrino oscillation data. Also, our model provide testable prediction for the neutrino-less double beta ($0\nu\beta\beta$) decay experiments.
Motivated by the precise experimental measurements of the heavy flavor baryons in the recent experiments, we have calculated the magnetic moments and radiative M1 decay widths of low-lying heavy flavor charmed baryons for $J^P={1/2}^+$ and ${3/2}^+$ states, in the framework of screened quark charge. We analyze the modification of quark charge by employing screening effect on quark charge inside the baryon. Further, we have incorporated the state mixing effects in flavor degenerate baryon magnetic moments, and consequently, on M1 decay widths.
Recently, the Tibet $AS_{\gamma}$ experiment has reported the long-awaited detection of diffuse gamma-rays with energies between 100 TeV and 1 PeV from the Galactic disk region, thus proving the existence of Galactic PeVatrons. It has been shown that these data broadly agree with prior theoretical expectations. We study the possible implications of these gamma-rays within the well-motivated scenario of heavy DM decay into a wide range of Standard Model final states, in the presence of various astrophysical background models of sub-PeV gamma-rays. For almost all final states, we obtain the strongest constraints on the lifetime of decaying PeV-scale DM. Our constraints are robust against variations of the DM density profile. Near-future data from Tibet $AS_{\gamma}$ and various other detectors may help us discover the DM particle identity using this technique.
The NOvA experiment at Fermilab consists of two functionally identical liquid scintillator detectors, called the near detector and the far detector, built to study neutrino oscillations using GeV-scale neutrinos from the Fermilab NuMI beam. Due to its location close to the earth's surface, its surface area of over 4,000 m$^{2}$, and its small overburden, the NOvA far detector is sensitive to an extensive range of magnetic monopole masses and velocities. With the far detector, we are looking for signals of relic monopoles in the cosmic-ray flux that might have been produced in the early universe. We have developed a data-driven trigger (DDT), a robust trigger algorithm optimized for continuously searching for magnetic monopole-like patterns in the live data. Due to the surface proximity of the far detector, the major challenge for this analysis at the offline level is the rejection of the cosmic-ray background in the collected data. In this talk, I will present the status of the search for fast-moving magnetic monopoles using the data collected by the NOvA far detector.
Cosmic rays interacting with air molecules produce cascades of particles with hadronic, electromagnetic and muonic components. Among these, muons can reach deep underground and are used to study cosmic rays at different underground depths. As muons are mainly produced from the decay of pions and kaons, the muon rate depends on the probability of the mesons decaying versus interacting. These decay and interaction probabilities depend on the density of the atmosphere, which varies seasonally. The muon flux should therefore vary seasonally, with a maximum in summer and a minimum in winter. This seasonality has been verified by a large number of experiments in the past. In 2015, however, MINOS reported an anti-correlation between the muon flux and the atmospheric temperature for multi-muon events.
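The seasonal effect described above is conventionally quantified through an effective-temperature correlation of the form (a standard parametrization used by underground experiments; the details of the NOvA analysis may differ):
$$
\frac{\Delta R_{\mu}}{\langle R_{\mu}\rangle} \;=\; \alpha_{T}\,\frac{\Delta T_{\rm eff}}{\langle T_{\rm eff}\rangle},
$$
where $R_{\mu}$ is the muon rate, $T_{\rm eff}$ is a weighted average of the atmospheric temperature over the muon production altitudes, and $\alpha_{T}$ is the effective temperature coefficient; a positive $\alpha_{T}$ corresponds to the usual summer maximum, while the MINOS multi-muon result corresponds to an effective anti-correlation.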
NOvA is a long-baseline neutrino experiment consisting of two functionally identical detectors, called the near and far detectors, with the goal of studying neutrino oscillations. The near detector, 110 m underground, collects cosmic data at a rate of 40 Hz. We use CORSIKA to simulate cosmic rays. In this talk, I will present a study of the seasonal variation of single- and multi-muon events from CORSIKA and from the data collected by the near detector.
The 21 cm line arising from neutral hydrogen is one of the most important tools for understanding the thermal and ionization history of the early universe. Primordial black holes (PBHs) are among the oldest and best motivated DM candidates. Low-mass PBHs ($10^{15}-10^{18}$ g) radiating via Hawking evaporation can heat the intergalactic medium (IGM) by injecting Standard Model particles, and this can affect the global 21 cm signal. Recently, EDGES has claimed an excess in the detected global 21 cm signal, though more recently SARAS 3 has rejected that claim. By considering an EDGES-like measurement of the global 21 cm signal, we derive sensitivities for non-spinning and spinning PBH dark matter (DM). These sensitivities are competitive with existing bounds from various other astrophysical observables. We also investigate projected bounds on the PBH DM abundance using the global 21 cm signal expected from the Dark Ages. We show that a future, unambiguous measurement of the global 21 cm signal can either potentially discover or constrain PBH DM.
With the advancement in astrophysical instrumentation, the sensitivity and volume of observations associated with neutron stars are improving continuously. This demands sophisticated theoretical models of their structure and composition. Among its various layers, the crust poses a challenge due to its complexity and its importance in various observed phenomena, such as pulsar glitches, quasi-periodic oscillations (QPOs), etc. [1]. For describing the crust structure of a neutron star, the compressible liquid drop model (CLDM) has various advantages compared to other approaches, such as lower computational requirements, effectiveness, treatment of boundary conditions, etc. [2]. The CLDM incorporates the compressibility of nuclear matter, the negative lattice Coulomb energy, and the suppression of surface tension by the neutron gas [3, 4]. In the CLDM formalism, finite-size effects are introduced via surface and Coulomb energy parametrizations in an ad hoc manner, limiting the sensitivity of the results. The surface energy plays the most significant role and is responsible for the possible deviations [5, 6]. In this work, we investigate the role of the surface energy parametrization in the CLDM formalism and its sensitivity to various neutron star properties. It is seen that the crustal properties of a neutron star are significantly affected by the surface energy parametrization. We use the effective relativistic mean field (E-RMF) model to describe the nuclear interaction. The role of the equation of state (EoS), described within the E-RMF formalism, is also investigated in the context of the properties of the neutron star crust, such as crust thickness and mass, nuclear pasta thickness and transition properties, etc. The crustal properties are found to be sensitive to the density-dependent symmetry energy and the slope parameter.
References
[1] M. Gearheart et al., Mon. Not. R. Astron. Soc 418 (2011) 2343.
[2] T. Carreau et al., A&A 640 (2020) A77.
[3] V. Parmar et al., Phys. Rev. D 105 (2022) 043017.
[4] V. Parmar et al., Phys. Rev. D 106 (2022) 023031.
[5] D. Ravenhall et al., Nucl. Phys. A 407 (1983) 571.
[6] W. G. Newton et al., Astrophys. J. Suppl. Ser. 204 (2013) 9.
We will present the study of the rare decay modes $B^{+} \to D_{s}^{*+}h^{0}$ and $B^{+} \to D^{+}h^{0}$, where $h^{0}$ denotes a neutral meson ($\eta$ or $K^{0}$), using the data sample of the Belle experiment. These rare decay modes are poorly measured, and we study them for the first time using the full Belle data set collected at the asymmetric-energy KEKB $e^{+}e^{-}$ collider situated at Tsukuba, Japan. Along with the rare decay modes, we will report improved measurements of the branching fractions of the color-suppressed decays $B^{0} \to D^{0}h^{0}$.
Simulation studies with Lambda particles for a benchmark test at the mini-CBM experiment
The Compressed Baryonic Matter (CBM) experiment at the upcoming Facility for Antiproton and Ion Research (FAIR), Darmstadt, Germany, will study dense regions of the phase diagram of strongly interacting matter in nucleus-nucleus collisions at kinetic energies of 2-11 AGeV. Unprecedented interaction rates reaching up to 10 MHz, together with a free-streaming mode of data acquisition for all detector subsystems, will be the unique feature of CBM. A precursor to CBM, called the mini-CBM (mCBM) experiment, has been set up at the SIS18 beamline of GSI as part of the FAIR phase-0 program. Full-size prototype detectors of various CBM subsystems, such as the Silicon Tracking Station (STS), the Muon Chamber (MuCh) system and the Time of Flight (TOF) detector, have been installed. Testing the reconstruction abilities of the detectors in self-triggered mode at the highest rates is one of the main goals of mCBM. In this report, we present simulation results on the time-based reconstruction of $\Lambda$ as part of the benchmark study for the mCBM experiment.
Data from O-Ni collisions at T = 2 AGeV have recently been collected at mCBM at a rate of $10^{5}$ ions per collision, and analysis of this data is underway. We have investigated the characteristics of Lambda reconstruction through realistic simulations carried out in the CBMROOT framework. The actual geometry of the experimental setup, along with the detector material, was implemented in GEANT. A UrQMD-based event generator was used to simulate O-Ni collisions at T = 2 AGeV. A total of $10^{8}$ events were simulated and transported using the GEANT3 Monte Carlo (MC) engine. After transport, the MC points are converted into digital signals (digis), and hit and time-based event reconstruction is performed by grouping together hits within a time window of 200 ns. Potential proton and pion track candidates were selected primarily based on their time of flight and a set of topological cut parameters.
The reconstruction efficiency was estimated to be about 0.01 percent. The description of the mCBM experimental setup and the details of the reconstruction procedure in a triggerless approach will be presented and discussed.
Motivated by the recently reported anomaly in the W boson mass by the CDF collaboration, with $7\sigma$ statistical significance, we consider a singlet-doublet (SD) Majorana fermion dark matter (DM) model where the required correction to the W boson mass arises from radiative corrections induced by the SD fermions. While a single generation of SD fermions, odd under an unbroken $Z_2$ symmetry, cannot explain the W boson mass anomaly while remaining consistent with DM phenomenology, two generations of SD fermions can do so, with the heavier generation playing the dominant role in the W-mass correction and the lighter generation governing the DM phenomenology. Additionally, such multiple generations of SD fermions can also generate light neutrino masses radiatively if a $Z_2$-odd singlet scalar is included.
The approximate solution of the Schrodinger equation in D dimensions for the Harmonic plus Modified Yukawa-Kratzer potential (HMYKP) is investigated using the Nikiforov-Uvarov (N-U) method [1,2]. This method is based on solving a second-order linear differential equation by reducing it, through a suitable change of variable, to a generalized equation of hypergeometric type [3]. We study four-quark systems with the $cs\bar{c}\bar{s}$ and $cq\bar{c}\bar{q}$ quark structures within the framework of the non-relativistic quark model.
The HMYKP is written as,
$$
V(r)= a_1r^2+\frac{a_2 e^{-2\alpha r}}{r^2} -\frac{a_3 e^{-\alpha r}}{r}+a+D_e-\left(\frac{A_1}{r}-\frac{A_2}{r^2}\right)
$$
Using the HMYKP, new exact analytical energy eigenvalues and eigenfunctions are obtained in fractional form using the N-U approach [4,5]. We use a heavy-light tetraquark system to verify the applicability of the method and recalculate its mass spectra and fractional radial wave functions. The obtained mass spectra are compared with experimental data and are found to improve upon other studies; in addition, we also calculate the mass spectra of heavy-heavy and heavy-light flavored mesons [6]. We conclude that the N-U method plays a useful role in hadron physics.
[1] A.F. Nikiforov, V.B. Uvarov, Special Functions of Mathematical Physics (Basel: Birkhauser) (1988)
[2] K.R. Purohit, R.H. Parmar, and A.K. Rai, Eur. Phys. J. Plus. 135, 286 (2020)
[3] K.R. Purohit, R.H. Parmar, and A.K. Rai, Annals of physics 424, 168335 (2021)
[4] K.R. Purohit, R.H. Parmar, and A.K. Rai, Molecular Modeling 27(358) (2021)
[5] K.R. Purohit, R.H. Parmar, and A.K. Rai, Journal of Mathematical Chemistry DOI: 10.1007/s10910-022-01397-w (2022)
[6] K.R. Purohit, R.H. Parmar, and A.K. Rai, Physica Scripta 97(4) 044002. (2022)
We have studied the mass spectra and decay rates of fully heavy pentaquark systems $QQQQ\bar{Q}$ (where $Q = c, b$) using a non-relativistic potential model. In this model, the complex five-body problem is reduced to a simpler two-body problem. The Schrodinger wave equation is solved numerically with a Cornell-type potential. The non-relativistic potential includes the spin-spin, spin-orbit and tensor components of the one-gluon-exchange interaction. We have also computed the spectra and decay rates of heavy quarkonia. The spectroscopy of the low-lying $S$- and $P$-wave states is also analysed for their $J^{PC}$ values. The computed masses and decay rates of these states match the available theoretical and experimental data.
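For orientation, a minimal sketch of a Cornell-type potential with its spin-dependent one-gluon-exchange terms (schematic; the smearing functions and parameter values used in this work are not reproduced here) is:
$$
V(r) \;=\; -\frac{4}{3}\frac{\alpha_{s}}{r} \;+\; b\,r \;+\; V_{SS}(r)\,\mathbf{S}_{1}\!\cdot\!\mathbf{S}_{2} \;+\; V_{LS}(r)\,\mathbf{L}\!\cdot\!\mathbf{S} \;+\; V_{T}(r)\,S_{12},
$$
where $\alpha_{s}$ is the strong coupling, $b$ the string tension, and $S_{12}$ the usual tensor operator; the spin-spin, spin-orbit and tensor pieces arise from one-gluon exchange.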
The Fano plane is a visual mnemonic for deriving the octonion algebra and for remembering the multiplication of Clifford algebra vacuum wavefunctions, not in the unintuitive matrix representations of Pauli and Dirac, but rather in the original intent of Clifford, as an algebra of geometric objects of 3D space: one spin 0 point, three spin 0 lines, three spin 1/2 areas, and one spin 1 volume (1,3,3,1). The Clifford product is the sum of the dimension-reducing dot and dimension-increasing wedge products, and transforms between bosons and fermions of dynamic SUSY. An earlier paper discusses this in some detail [1]. We extend that analysis from flat 4D Minkowski spacetime to 6D phase space. It conserves angular momentum, with fermions residing in the even-dimension (0,2,4,6) algebra of eigenmodes and bosons in the odd (1,3,5) transition modes. Remarkably, the swapping of the e3 and e4 basis vectors in the conventional 'scalar plus seven square roots of negative one' mathematical representation is replicated by the topological duality of 0D scalar electric and 3D pseudoscalar magnetic charge in the physics. Magnetic charge and the 1D magnetic flux quantum are identical in SI units. Topological inversion of the magnetic charge swaps the magnetic 'dipole' moment and the flux quantum, such that the Bohr magneton is an axial bivector, and the flux quantum a topological vector dipole, with poles at infinity. We explore this mix of spin, dimensionality, and topology, and how it relates to the violation of spin conservation in S-matrix modes of the geometric representation.
[1]https://www.researchgate.net/publication/332174377_Quantum_Gravity_in_the_Fano_Plane
In 2020, four narrow states of the $\Omega_b$ baryon were listed by the Particle Data Group (PDG) [1] with one-star status and no confirmed $J^P$ value. The resonance masses are: $\Omega_b(6315)^-$, $\Omega_b(6330)^-$, $\Omega_b(6340)^-$, and $\Omega_b(6350)^-$. Using the hypercentral approach, we calculate the masses of the excited states of the $\Omega_b$ baryon. In the hypercentral constituent quark model (hCQM), a screened potential is employed as the confining potential, together with the color-Coulomb potential. We determine the possible $J^P$ values for these four newly observed $\Omega_b$ states and compare them with results obtained from other theoretical approaches. Furthermore, the properties of the $\Omega_b$ baryon are studied. The ground-state magnetic moments (spin $\frac{1}{2}$ and $\frac{3}{2}$), transition magnetic moments, radiative decay widths and strong decay widths are calculated using the computed mass spectra.
References:
[1] R. L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
[2] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 124, 082002 (2020).
[3] D. Jia, J. H. Pan and C. Q. Pang, Eur. Phys. J. C 81, 434 (2021).
[4] Zalak Shah, Amee Kakadiya, Keval Gandhi and Ajay Kumar Rai, Properties of Doubly Heavy Baryons, Universe 7, 337 (2021).
[5] Amee Kakadiya, Zalak Shah, Keval Gandhi and Ajay Kumar Rai, Few-Body Syst. 63, 29 (2022).
[6] Amee Kakadiya, Zalak Shah and Ajay Kumar Rai, Int. J. Mod. Phys. A 37, 2250053 (2022).
We study the statistical significances for exclusion and discovery of proton decay at current and future neutrino detectors. Various counterintuitive flaws associated with frequentist and modified frequentist statistical measures of significance for multi-channel counting experiments are discussed in a general context and illustrated with examples. We argue in favor of conservative Bayesian-motivated statistical measures, and as an application we employ these measures to obtain the current lower limits on proton partial lifetime at various confidence levels, based on Super-Kamiokande's data, generalizing the $90\%$ CL published limits. Finally, we present projections for exclusion and discovery reaches for proton partial lifetimes in $p \rightarrow \overline \nu K^+$ and $p \rightarrow e^+ \pi^0$ decay channels at Hyper-Kamiokande, DUNE, JUNO, and THEIA.
The Deep Underground Neutrino Experiment (DUNE) is a leading neutrino physics experiment presently under construction. DUNE aims to measure the as yet unknown parameters of the three-flavor oscillation scenario, which includes the discovery of leptonic CP violation, the determination of the mass hierarchy and the determination of the octant of $\theta_{23}$. Additionally, the ancillary goals of DUNE include probing subdominant effects induced by new physics. A widely studied new physics scenario is that of additional sterile neutrinos. We consider some of the essential sterile parameters impacting the oscillation signals at DUNE, explore the space of sterile parameters, and study their correlations among themselves and with the yet unknown CP-violating phase $\delta$ appearing in the standard paradigm. The experiment uses a wide-band beam and provides a unique opportunity to exploit different beam tunes at DUNE. We demonstrate that combining information from the different beam tunes (low energy and medium energy) available at DUNE impacts the ability to probe some of these parameters and alters the allowed regions in the two-dimensional parameter spaces considered.
The measurements of lepton flavor universality violation in semileptonic $b\to s$ and $b\to c$ transitions hint towards a possible role of new physics in both sectors. Motivated by these anomalies, we investigate the lepton flavor violating $B\to K^*_2 (1430)\mu^{\pm}\tau^{\mp}$ decays. We calculate the two-fold angular distribution of the $B\to K^*_2\ell_1\ell_2$ decay in the presence of vector, axial-vector, scalar and pseudoscalar new physics interactions. We then compute the branching fraction and lepton forward-backward asymmetry in the framework of the $U^{2/3}_1$ vector leptoquark, which is a viable solution to the $B$ anomalies. We find the upper limits $\mathcal{B}(B\to K^*_2\mu^-\tau^+)\leq 0.74\times10^{-8}$ and $\mathcal{B}(B\to K^*_2\mu^+\tau^-)\leq 0.33\times 10^{-7}$ at $90\%$ C.L.
This contribution studies the correlation between two global observables of event activity, the relative transverse multiplicity activity classifier ($R_{\rm {T}}$) of the Underlying Event (UE) and the transverse spherocity ($S_{0}$), in proton-proton collisions. This allows us to understand soft particle production through a differential study in $R_{\rm {T}}$ and $S_{0}$. We use the PYTHIA 8 Monte Carlo (MC) generator with different implementations of the color reconnection and rope hadronization models to describe proton-proton collisions at $\sqrt{s}$ = 13 TeV. The relative production of hadrons in the low and high transverse activity regions is also discussed extensively. An experimental confirmation of these results is feasible with ALICE Run 3 data, which will provide more insight into the soft physics of the transverse region and help in understanding small-system dynamics.
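For reference, the standard definitions of these two event classifiers (as commonly adopted in such analyses) are $S_{0} = \frac{\pi^{2}}{4}\left(\min_{\hat{n}}\frac{\sum_{i}|\vec{p}_{T,i}\times\hat{n}|}{\sum_{i}p_{T,i}}\right)^{2}$ and $R_{\rm T} = N_{ch}^{T}/\langle N_{ch}^{T}\rangle$, where the minimization runs over unit vectors $\hat{n}$ in the transverse plane and $N_{ch}^{T}$ is the charged-particle multiplicity in the transverse region, $60^{\circ} < |\Delta\phi| < 120^{\circ}$ with respect to the leading particle.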
We study the effect of the QCD critical point on the moments of fluctuations of experimental observables in a theoretical model, at energies similar to the RHIC beam energy scan (BES) energies. In heavy-ion collision experiments, the QCD critical point can be found via the non-monotonic behavior of many fluctuation observables as a function of the collision energy. Locating the point requires a scan of the phase diagram by varying temperature and chemical potential, which can be performed by varying the initial collision energy $\sqrt{s}$. The event-by-event particle multiplicity fluctuations can be characterized by the moments of the event-by-event multiplicity distributions. The most important characteristic feature of a critical point is the increase and divergence of fluctuations. The magnitudes of fluctuations in conserved quantities like net-baryon, net-charge and net-kaon number at finite temperature are distinctly different in the hadronic and QGP phases. The ratios of the moments, used as experimental observables, cancel the volume of the system and can be directly compared to the ratios of susceptibilities from theoretical calculations. The moments, mean (M), variance ($\sigma^2$), skewness (S) and kurtosis ($\kappa$), depend on increasing powers of the correlation length $\xi$, e.g., $S\sim\xi^{4.5}$ and $\kappa\sim\xi^{7}$. Since conserved quantities are difficult to measure directly due to experimental limitations, net-proton, net-pion and net-kaon numbers are measured as proxies for ($\Delta B$, $\Delta Q$, $\Delta S$). Thus different models become essential for estimating the values of the different observables. The Polyakov loop enhanced Nambu-Jona-Lasinio (PNJL) model of QCD is one such effective model, which has the benefit of possessing characteristics similar to the observables. Higher-order moments like skewness (S) and kurtosis ($\kappa$) and their products ($S\sigma$, $\kappa\sigma^{2}$), calculated here in the PNJL model, are sensitive to the correlation length of the hot and dense medium created in the collision, making them well suited for the critical point search. We also compare the values of the higher-order moment products, or cumulant ratios, with STAR data and with HRG model results for different parameter values to understand the existence of the critical point.
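As an illustration of how such cumulant ratios are formed, the short sketch below computes $S\sigma = C_3/C_2$ and $\kappa\sigma^2 = C_4/C_2$ from a toy event-by-event net-multiplicity sample (a Gaussian stand-in; in practice the distributions come from the model or from data):

import numpy as np

rng = np.random.default_rng(42)
net_p = rng.normal(loc=5.0, scale=3.0, size=100_000)  # toy net-proton number, one entry per event

mu = net_p.mean()
d = net_p - mu
m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
C1, C2, C3, C4 = mu, m2, m3, m4 - 3.0 * m2**2  # cumulants up to fourth order

S_sigma = C3 / C2        # S*sigma
kappa_sigma2 = C4 / C2   # kappa*sigma^2 (volume factors cancel in these ratios)
print(f"M={C1:.3f}, sigma^2={C2:.3f}, S*sigma={S_sigma:.3f}, kappa*sigma^2={kappa_sigma2:.3f}")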
Secondary particles produced by ultra-high-energy cosmic rays (UHECRs) interacting with atmospheric nuclei create extensive air showers. The showers produced by this cosmic cascade have several parameters that can be studied. The longitudinal and lateral distributions of the secondary particles produced by UHECRs have been studied with the AIRES air-shower simulation package. It gives the lateral distribution of particles, the energy of cosmic neutrinos reaching the ground, air showers produced at different zenith and azimuth angles, the effect of thinning energies on the air shower, and showers produced at different observation levels.
The $K_{S}^{0}\to\pi^{+}\pi^{-}$ sample gives access to low-momentum pions, which are useful for studying the particle-identification performance. In this work, we have validated the sPlot technique using a Belle II simulated $K_{S}^{0}\to\pi^{+}\pi^{-}$ sample corresponding to an integrated luminosity of 10 fb$^{-1}$. Belle II is the upgraded experiment at SuperKEKB, KEK, Japan. We study the relative difference between the true efficiencies and those obtained from the sPlot technique for different pion-identification criteria, in bins of momentum and cosine of the polar angle. This study is now included as part of the Belle II Systematic Correction Framework.
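For context, the per-candidate weight used in the sPlot technique takes the standard form $w_{s}(y_{e}) = \sum_{j} V_{sj} f_{j}(y_{e}) / \sum_{k} N_{k} f_{k}(y_{e})$, where $f_{j}$ are the PDFs of the discriminating variable (here the $K_{S}^{0}$ candidate mass), $N_{k}$ are the fitted yields and $V$ is the covariance matrix of the yields; the sPlot efficiency in each (momentum, $\cos\theta$) bin is obtained from the sum of signal sWeights of candidates passing the pion-identification criterion and compared with the true efficiency from simulation.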
In this work, we explore the hierarchy, octant and CP violation sensitivities of the P2O experiment in its three proposed configurations, both in the standard three-flavor scenario and in the presence of an extra light sterile neutrino, and compare them with the DUNE experiment. We show that near detectors are crucial for the study of sterile neutrinos, since at the far detectors the rapid oscillations driven by the sterile mass-squared difference are averaged out. We find that water Cherenkov type near detectors are largely insensitive to sterile oscillation parameters like $\theta_{14}$, which is why we use a liquid argon time projection chamber (LArTPC) type near detector (DUNE-like ND) for all three configurations of P2O. The Super-ORCA detector configuration of P2O is better at probing $10\ eV^{2}$ sterile neutrinos, whereas DUNE is better at investigating $1\ eV^{2}$ sterile neutrinos. We find that the Super-ORCA configuration of P2O gives the best sensitivity to hierarchy, octant and CP violation studies.
Jets propagating through the quark gluon plasma (QGP), a strongly interacting medium of deconfined quarks and gluons produced in heavy-ion collisions, are quenched or modified. One of the manifestations of jet quenching is the increased asymmetry of jet energies between leading and sub-leading jets in back-to-back di-jets. Recent results from CMS [1] suggest that sub-leading jets in asymmetric dijets are subjected to stronger modification, presumably because they traverse a longer path length in the QGP. However, this interpretation may not be as straightforward as it seems because of other effects, such as jet energy loss fluctuations and the dependence on the jet energy itself. In this work, we therefore perform this analysis for photon-tagged jets, where the photon can be considered a reasonable proxy for the initial jet energy, at least to first order, using JEWEL [2], a pQCD-inspired model for jet energy loss. Thus, by comparing the jet shapes of sub-leading jets between pp and Pb-Pb collisions in event samples selected on the energy imbalance between the photon and the recoil jet, it may be possible to study how the energy loss itself affects the modification of jet shapes and to put some constraints on the path-length effect as well.
[1] CMS Collaboration, "In-medium modification of dijets in PbPb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV", Journal of High Energy Physics 05, 116 (2021).
[2] K. C. Zapp, F. Krauss and U. A. Wiedemann, "A perturbative framework for jet quenching", Journal of High Energy Physics 03, 080 (2013).
Studies of the jet substructure and subjet multiplicity in electron-proton neutral current deep inelastic scattering (NC DIS) at the future Electron-Ion Collider (EIC) for $Q^2 > 125$ GeV$^2$ are presented for three center-of-mass energies, $\sqrt{s}$ = 63.2, 104.9 and 141 GeV. Data are simulated using two Monte Carlo event generators, PYTHIA 8.304 and RAPGAP 3.308. Jets and subjets are reconstructed using the longitudinally invariant $k_T$ and anti-$k_T$ clustering algorithms. Various jet radii are used to study the jet substructure and the subjet multiplicity, and the subjet multiplicities are also studied at different values of the jet-resolution scale.
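Both algorithms belong to the generalized-$k_T$ family, with the standard distance measures $d_{ij} = \min(k_{T,i}^{2p}, k_{T,j}^{2p})\,\Delta R_{ij}^{2}/R^{2}$ and $d_{iB} = k_{T,i}^{2p}$, where $\Delta R_{ij}^{2} = (\Delta y_{ij})^{2} + (\Delta\phi_{ij})^{2}$, $R$ is the jet radius, and $p = +1$ gives the $k_T$ while $p = -1$ gives the anti-$k_T$ algorithm; subjets are obtained by re-clustering the constituents of a jet at a finer resolution scale.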
The initial motivation to study d+Au collisions was to use them as a control experiment to decouple cold nuclear matter effects in the nuclear modification factors ($R_{AA}$) obtained from heavy-ion collisions such as Au+Au. Since 2013, there has been growing evidence for the possible formation of Quark Gluon Plasma (QGP) in small systems. Suppression of the nuclear modification factor $R_{AA}$ of $\pi^{0}$ and jets is observed in central d+Au collisions, which could be attributed to the formation of QGP droplets; along with this, the results also indicate a counter-intuitive enhancement of $R_{AA}$ in peripheral events.
Direct photons are transparent to the QGP, and thus the $R_{AA}$ of direct photons at high $p_T$ should be unity for all classes of event activity. We observe that, in the d+Au system, the $R_{AA}$ of direct photons is close to unity for central collisions, but for peripheral collisions there is a significant enhancement which matches the degree of enhancement observed in the $R_{AA}$ of $\pi^{0}$s. This indicates a bias in the centrality determination using the Glauber model for small-system collisions.
Furthermore, the direct photon measurement in d+Au can be used to experimentally determine the effective number of binary collisions, $N_{exp}$, for each event sample. Using this $N_{exp}$, the $R_{AA}$ of $\pi^{0}$ is re-obtained; it is close to unity for peripheral collisions but still shows a significant suppression in central collisions, which could be an indication of the formation of QGP droplets in central d+Au collisions.
In this talk, I will highlight preliminary approved results from d+Au collisions and the status of the analysis in the p+Au and $^3$He+Au systems.
Introduction:
For Phase-2 of the operation of the LHC, starting in 2029, CMS will undergo major upgrades to its detectors and readout electronics. A completely new first-level trigger system will ensure that the excellent physics performance of CMS is maintained or improved under the challenging pile-up conditions of Phase-2. The new trigger system, based on generic ATCA processing boards hosting Xilinx Ultrascale Plus FPGAs and interconnected with links at 25 Gb/s, will exploit high-granularity information from the calorimeters, the muon systems and a track finder reconstructing tracks from the silicon strip tracker at the bunch crossing rate. The trigger system will contain algorithms, such as particle flow, that previously have been employed only in software at the higher trigger levels. The final stage of the level-1 trigger, the Global Trigger (GT), will receive high-precision trigger objects from the muon, calorimeter, track and particle-flow triggers.
ELM Test Suite development
The generic ATCA-based main trigger board, the APx (Advanced Processor board), has several daughter boards installed as mezzanines; one of them is the ELM2 (Embedded Linux Mezzanine ver. 2). The main purpose of the ELM is to serve as an on-board control interface for ATCA modules. Our group at TIFR took the responsibility to fabricate the boards, perform QA, and design the test firmware. The test firmware developed at TIFR targets the Embedded Linux Mezzanine rev. 2 (ELM2). This device is based on a ZYNQ System-on-Chip (SoC) from Xilinx. Eventually, several hundred of these modules will be required. In order to provide suitable quality control for such a production volume, it was vital to have an automated test stand that requires minimal operator intervention.
In collaboration with the Department of Physics (DOP), University of Florida, USA, we developed an automated test suite that includes provisions for testing all the implemented external interfaces. The test suite was developed in two frameworks: (1) bare-metal/standalone, with the application running directly on the hardware and no OS layer present; and (2) customized Linux, built with a PetaLinux kernel and a CentOS 7 rootfs. The framework includes automated tests of the DDR connected to the ZYNQ in the standalone/bare-metal mode, while the Linux-based framework includes automated tests of the Gigabit Ethernet modules, configuration and tests of the clock synthesizers, frequency measurements, and data-integrity tests of the EEPROM devices. The test suite has been validated on the test setups at DOP, Florida. I will describe the details of the developed application and discuss the methodology used to test the different interfaces of the ELM2 board in this talk.
The dark matter (DM) problem has been investigated and discussed in many papers within the regime of quantum field theories at zero temperature. Measurements from experiments such as Planck and WMAP constrain the relic abundance of dark matter to $\Omega_{DM}h^2 \sim 0.120 \pm 0.001$. The precision of this result is expected to improve further. Hence it becomes important to calculate DM annihilation cross sections to better precision. In particular, thermal contributions to the annihilation cross section can become significant. It therefore becomes important to investigate the problem using thermal field theory techniques. Such theories of bino-like DM interacting with a heat bath of fermions, scalars and photons have already been shown to be infra-red (IR) finite to all orders in perturbation theory [1]. We therefore use these theories to find the temperature dependence of the DM annihilation cross sections. An MSSM-inspired model [2], with a bino-like dark matter candidate $\chi$, is used to investigate the temperature dependence of the cross section for the process $\chi \chi \to f \overline{f}$. Here the $f$'s are the Standard Model fermions and the $\phi$'s are charged scalar doublets. We have computed the 1-loop higher-order thermal corrections to this scattering cross section. We find terms with $T^2$ dependence at order $\alpha$, where $T$ is the temperature of the heat bath. The calculations are performed within the approximation where the scalar mass is large, $m_\phi > m_\chi$, much larger than the fermion mass. The thermal region of interest for calculating the relic densities is around $m_\chi/T \sim 20$, at freeze-out, after the electro-weak phase transition. A novel feature of the calculation is its use of the Grammer-Yennie technique [3] to isolate the IR-finite components.
Keywords: Dark Matter, Thermal field theory, IR divergences
References:
[1] Pritam Sen, D. Indumathi, Debajyoti Choudhury, Eur. Phys. J. C 79, 532 (2019).
[2] M. Beneke, F. Dighera, A. Hryczuk, JHEP 1410 (2014) 45; Erratum: JHEP 1607 (2016) 106.
[3] G. Grammer, Jr. and D. R. Yennie, Phys. Rev. D 8, 4332 (1973).
Phenomenologically, from the sign of the Ruppeiner scalar curvature, one can predict the nature of the dominant interactions among black hole microstructures. In the extended phase space, thermodynamic geometry has been of special interest for black holes, as the singularities of the Ruppeiner scalar curvature of the metric signal critical behavior. We first construct the thermodynamic properties of AdS black holes with dark energy in the form of quintessence and investigate the P-V criticality. We find an extra term in the critical temperature expression arising due to the presence of dark energy or quintessence. In the grand canonical ensemble, we compute the corresponding normalised scalar curvature taking (T,V) as the fluctuation coordinates for fixed values of the electric potential. For lower values of the electric potential, the dominance of attractive interactions is observed, while for higher values of the electric potential, repulsive interactions dominate. Further, the interaction remains constant at the phase transition where the black hole microstates change.
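For completeness, the Ruppeiner metric underlying such an analysis is the negative Hessian of the entropy with respect to the fluctuating extensive variables, $g^{R}_{\mu\nu} = -\,\partial^{2}S/\partial x^{\mu}\partial x^{\nu}$, here evaluated in the $(T,V)$ coordinate chart; in the usual convention a negative normalised scalar curvature $R_{N}$ signals attraction-dominated and a positive $R_{N}$ repulsion-dominated microstructure.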
We measure the forward-backward asymmetry ($A_6$) of the dimuon system and the longitudinal polarisation ($F_L$) of the dikaon system as functions of the squared dimuon mass ($q^2$) using toy MC samples. The goal is to verify whether, in data-like conditions with a similar number of signal and background events, the analysis is able to measure the angular observables of interest ($A_6$, $F_L$). The fitter is validated with the toy MC samples. In the Feldman-Cousins (F-C) 2D contour study, a two-dimensional confidence region is obtained by considering the two parameters of interest ($A_6$, $F_L$) simultaneously. The aim is to estimate the statistical uncertainty, as MINOS fails to give a correct estimate of the statistical error because the central values of the fit results ($A_6$, $F_L$) lie very close to the physical boundary. The 2D contours obtained with the F-C method are more precise than MINOS at 68.3% CL, especially when the fit results are hitting the boundary.
The NuMI Off-Axis $\nu_e$ Appearance (NOvA) experiment is a long-baseline accelerator-based neutrino oscillation experiment designed to study the oscillation of muon neutrinos to electron neutrinos ($\nu_\mu \to \nu_e$) using a muon neutrino beam. The neutrino spectrum before oscillation is observed at the 290-ton Near Detector (ND) located 100 m underground, 1 km from the source, and the spectrum after oscillation is observed at the 14 kton Far Detector (FD) operating on the surface, 810 km away from the neutrino production source. The long-baseline neutrino oscillation experiments are entering an unprecedented level of precision in their measurements.
Flux is an important input for neutrino oscillation as well as cross-section measurements. Therefore, precise flux prediction is essential to achieve the physics goals of current and future long-baseline neutrino-oscillation experiments. Hadron scattering and production uncertainties are the limiting systematics in predicting the accelerator neutrino flux. The models employed to simulate hadron production from the nuclear target lead to intrinsic uncertainties in the flux prediction. The typical neutrino flux uncertainty in the current generation of accelerator-based neutrino experiments is between 5% and 15%. To improve the prediction of the neutrino flux, we plan to make corrections based on constraints from the external hadron production experiments NA61 and EMPHATIC. We will use the Package to Predict the Flux (PPFX) to achieve the aforementioned target.
In this presentation, we will talk about the GEANT4-based simulation using two different models, FTFP_BERT and QGSP_BERT. We use the G4HP tool to extract cross sections from thin-target simulations. We plan to show a data/MC comparison of the G4HP simulation with the NA61 hadron production data. Further, we will show the FTFP_BERT and QGSP_BERT cross sections for the NuMI beamline obtained using G4HP.
We present a conservative statistical treatment to propagate uncertainties in atmospheric lepton flux production to inference in neutrino astronomy. Systematic disagreements between hadronic interaction models are explored, and their implications for the production of the 'prompt' components of atmospheric neutrinos and muons are propagated to fits of the astrophysical neutrino fluxes through a transparent and robust Bayesian framework.
The prototype detector of the ICAL experiment at the India-based Neutrino Observatory, mini-ICAL, is in operation at IICHEP, Madurai. A Cosmic Muon Veto detector around mini-ICAL is being built using extruded plastic scintillators with embedded WLS fibers that propagate light to SiPMs for detecting the scintillation photons. The SiPMs will be calibrated using an ultrafast LED driver. An experimental setup was built using a thermal chamber to characterise the SiPMs in a temperature-controlled environment. The readout electronics involve trans-impedance amplifiers to amplify the SiPM output pulses and a digital storage oscilloscope for data collection. Along with the basic characterisation, i.e. gain and breakdown-point estimation of the SiPM, various other characteristics of the Hamamatsu SiPM (S13360-2050VE), e.g. signal shape, optically correlated and uncorrelated noise, recovery time, etc., were studied as functions of the SiPM's overvoltage ($V_{ov}$), the number of photoelectrons and the ambient temperature. This paper will cover the details of the experimental setup and the results.
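A minimal sketch of the kind of gain/breakdown-point estimation referred to above (with purely hypothetical numbers): since the SiPM gain is, to a good approximation, linear in the bias voltage, the breakdown voltage can be read off as the x-intercept of a straight-line fit.

import numpy as np

# Hypothetical measurements: bias voltage [V] vs. single-photoelectron gain
bias = np.array([53.0, 53.5, 54.0, 54.5, 55.0, 55.5])
gain = np.array([0.95, 1.32, 1.71, 2.08, 2.47, 2.85]) * 1e6

# Gain ~ (C_pixel/e) * (V_bias - V_bd): fit a line and take the x-intercept as V_bd
slope, intercept = np.polyfit(bias, gain, 1)
v_breakdown = -intercept / slope
overvoltage = bias - v_breakdown
print(f"V_bd = {v_breakdown:.2f} V; overvoltages = {np.round(overvoltage, 2)} V")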
The model of pinning and unpinning of superfluid vortices is considered the most popular explanation for pulsar glitches. However, the almost instantaneous unpinning of a large number of vortices still needs a proper mechanism. We propose that neutron-vortex scattering in the inner crust of a pulsar may be responsible for such vortex unpinning. The strain energy released by a crustquake is assumed to be absorbed in some part of the inner crust. It causes pair-breaking quasi-neutron excitations from the existing free neutron superfluid in the bulk of the inner crust. The scattering of these quasi-neutrons with the vortex-core neutrons should unpin a large number of vortices from the thermally affected regions and result in pulsar glitches. We consider a few geometries of the affected pinning region to study the implications of the vortex unpinning in the context of pulsar glitches. We find that a Vela-like pulsar can release about $\sim 10^{11} - 10^{13}$ vortices by this mechanism, resulting in glitches of size $\sim 10^{-11} - 10^{-9}$. We also explore the possibility of a vortex avalanche triggered by the movement of the unpinned vortices. An estimate of the glitch size caused by an avalanche shows a favourable result. The time scales associated with the various events are compatible with glitch observations.
Assuming that the tribimaximal (TBM) and golden ratio (GR) mixing patterns are realized at a high energy scale, we study the impact of the renormalization group equations (RGEs) on neutrino masses and mixings, consistent with the cosmological bound on the sum of the absolute neutrino masses, $\sum_{i}|m_{i}| <$ 0.23 eV. We consider ($10^{13}-10^{15}$) GeV as the high energy scales. We include a scale-dependent vacuum expectation value (VEV), which leads to a decrease of the neutrino masses with increasing energy scale. The validity of the two mixing patterns at the high scale is shown in the analysis.
General Relativistic Magnetohydrodynamics (GRMHD) is an essential framework to study astrophysical systems such as binary neutron star mergers, which are one of the sources of gravitational waves. Usually, in GRMHD numerical studies, the ideal magnetohydrodynamics limit (large magnetic Reynolds number) is used, and other dissipative effects (due to bulk and shear viscosity, etc.) are usually neglected. On the other hand, in heavy-ion collisions, two heavy nuclei collide and create a hot and dense Quark-Gluon Plasma (QGP) that evolves for a few femtoseconds like a strongly coupled fluid; gradients are usually large, and dissipative effects are not negligible. Unlike astrophysical systems, the space-time metric considered in heavy-ion collisions is flat. In short, in GRMHD we generally neglect the viscous effects, and in special relativistic MHD (as in heavy-ion collisions) we ignore the curvature of space-time. Moreover, any relativistically consistent theory must, in principle, obey the causality condition, i.e., the speed of propagation of perturbations cannot be superluminal. Here we study the wave propagation and stability of general relativistic, non-resistive, dissipative second-order magnetohydrodynamic equations in curved space-time. We test the causality and stability of the second-order theory in curved space-time in the presence of a linearised metric perturbation and derive the dispersion relations for the various modes. Interestingly, we find a coupling of the gravitational modes with the usual magnetosonic modes in the small wave-number limit. We also show that additional non-hydrodynamic modes arise due to gravity for a bulk-viscous fluid.
The energy deposition due to the pair annihilation of neutrinos into electrons can energize events such as gamma-ray bursts (GRBs). This energy deposition can also be enhanced in different spacetime backgrounds. In this talk I will discuss whether the inclusion of a $Z^\prime$-mediated neutrino annihilation process can alter the energy deposition. Comparing with GRB observational data, we obtain bounds on the gauge coupling for different backgrounds.
Diffuse gamma-ray emission from interactions of ultra-high-energy cosmic rays (UHECRs) with the 2.7 K cosmic microwave background (CMB) is expected to have an isotropic distribution around 10-100 TeV. This radiation carries information on the distribution of energetic sources and hence the cosmological evolution of the universe. The GRAPES-3 array comprises ∼ 400 densely packed scintillator detectors deployed over an area of 25,000 m$^2$ and a large-area tracking muon telescope (560 m$^2$). The muon telescope has the ability to differentiate gamma rays from charged cosmic rays through their muon content. Based on the data measured by the GRAPES-3 experiment, we place 90% C.L. upper limits on the intensity of gamma rays relative to cosmic rays at energies from 10-300 TeV.
Primordial Black Holes (PBHs) in the mass range $\sim 10^{17}- 10^{22}$g are currently unconstrained, and can constitute the full Dark Matter (DM) density of the universe. Motivated by this, in the current work, we aim to relate the existence of PBHs in the said mass range to the production of observable Gravitational Waves (GWs). We follow a model-independent approach assuming that the PBHs took birth in a radiation dominated era from enhanced primordial curvature perturbation at small scales produced by inflation. We show that the constraints from CMB and BAO data allow for the possibility of PBHs being the whole of DM density of the universe. Finally, we derive the GW spectrum induced by the enhanced curvature perturbations and show that they are detectable in the future GW detectors like LISA, BBO and DECIGO.
Prospects of direct and indirect detection of DM are distinctly correlated with the phase space distribution of DM within galactic haloes. A promising avenue to detect and constrain the properties of particulate DM is to explore the capture and subsequent heating signatures of DM annihilation in astronomical objects. The aim of this article is to systematically study the impact of observational uncertainties and cosmological simulations on the rate of capture of DM particles within celestial objects. Additionally, we probe a variety of dark matter-nucleon scattering cross-sections for some empirically motivated, isotropic velocity distributions. Within the limits of the standard halo model, we find a ∼ 10% increase in the capture rate when the astrophysical uncertainties are taken into account, whereas this number can jump up to ∼ 100% if the velocity distribution of DM particles within the galactic halo is taken to be a non-standard distribution. We also report a significant dependence of the capture rates on the resolution and sophistication of the cosmological simulations.
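As a reference point, the standard halo model referred to above assumes an isotropic Maxwell-Boltzmann speed distribution truncated at the galactic escape speed, $f(v) \propto v^{2}\exp(-v^{2}/v_{0}^{2})\,\Theta(v_{esc}-v)$, with the local circular speed $v_{0}$ and the escape speed $v_{esc}$ as the empirically uncertain inputs; the non-standard distributions considered here replace this functional form.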
In the standard cosmological model the universe is assumed to be statistically isotropic and homogeneous when averaged on large scales. The dipole anisotropy of the CMB is ascribed to our peculiar motion due to local inhomogeneity. There should then be a corresponding dipole in the sky map of high-redshift sources. Using catalogues of radio galaxies and quasars, we find that this expectation is rejected at >5σ. This undermines the standard practice of boosting to the 'CMB frame' to analyse cosmological data, in particular for inferring an isotropic acceleration of the Hubble expansion rate, which is interpreted as due to Λ.
Motivated by various theoretical studies on the efficient capture of dark matter by neutron stars, we explore the possible indirect effects of captured dark matter on the cooling mechanism of a neutron star. The equations of state for different configurations of a dark matter admixed star at finite temperature are obtained using the relativistic mean-field formalism with the IOPB-I parameter set. We show that varying the dark matter momentum vastly modifies the neutrino emissivity through specific neutrino-generating processes of the star. The specific heat and the thermal conductivity of a dark matter admixed star have also been investigated to explore the propagation of cooling waves in the interior of the star. The dependence of the theoretical surface-temperature cooling curves on the equation of state and chemical composition of the stellar matter is also discussed, along with the observational data on thermal radiation from various sources. We observe that dark matter admixed canonical stars with $k_{f}^{\rm DM} > 0.04$ comply with the fast cooling scenario. Further, the metric for the internal thermal relaxation epoch has also been calculated for different dark matter momenta, and we deduce that increasing the dark matter fraction amplifies the cooling and internal relaxation rates of the star.
In order to cope with the very high radiation dose and hadron fluences at the High Luminosity Large Hadron Collider (HL-LHC), a new silicon Inner Tracker will be built for the Phase-2 Upgrade of the CMS experiment. The new Inner Tracker will contain 2 billion silicon pixels. The pixel modules will be composed of pixel sensors with a pixel size of 100$\times$25 $\mu$m$^2$ or 50$\times$50 $\mu$m$^2$ and a new ASIC, designed in 65 nm CMOS technology, developed by the RD53 collaboration.
CMS is currently testing multiple silicon-based particle detector technologies, including thin planar sensors and 3D sensors, at Fermilab. In order to test the performance, efficiency, and resolution of these devices, a pixel telescope with a pointing resolution of ~5 $\mu$m, consisting of 10 layers of silicon particle detectors, is placed in the 120 GeV proton beam at the Fermilab Test Beam Facility. By comparing reference tracks from the pixel telescope, one can evaluate the different detector technologies. The Fermilab Irradiation Test Area (ITA) uses 400 MeV protons from the linac to irradiate the sensors.
The resolution, efficiency, and charge collection of planar and 3D pixel sensors before and after irradiation will be shown. These test results inform the choice of sensor technology for the innermost layers of the upgraded pixel detector.
A 256-pixel imaging camera for a 4-meter class Imaging Atmospheric Cherenkov Telescope (IACT) is being developed in-house by TIFR. The camera uses a 4 x 4 array of SiPMs as the photodetector for each pixel. The pixel signals are pre-conditioned in the front-end electronics modules and fed to back-end modules. The front-end electronics comprises the modules for pre-amplification and biasing of the pixel sensors. The back-end electronics of the camera continuously samples the pre-amplified pixel signal at 1 GSPS using a switched capacitor array of 1024 ns sampling depth, and records the pulse profile after receiving a valid final trigger signal. The back-end electronics also handles tasks such as generating pre-triggers and a final trigger to digitize the pixel signals, data concentration, internal calibration of the hardware, and recording of the event and monitoring data packets sent to the PC in the control room. Both front-end and back-end modules of the camera follow a modular design concept. The camera back-end has three types of modules, viz., 16 Cluster Digitizer Modules (CDMs), 1 Data Concentrator Module (DCM) and 1 Control & Trigger Module (CTM). All the back-end modules are connected to each other using a backplane PCB and accommodated in a customized VME crate. The camera operation is managed with the help of a stack of firmware and software programs developed indigenously. The design, development and current status of the camera will be presented in this talk.
Silicon (Si) detectors are commonly used in nuclear and particle physics experiments due to their capability to precisely measure the energy, position, and time of the particles produced during the experiment. There are different types of silicon detectors fabricated (Si pads, Si pixels, Si strips, MAPs type etc.) based on the need of its applications in nuclear, particle and medical physics. The silicon detectors are mainly used in particle tracking and vertex detectors. A sandwich structure of a pad array coupled with high Z material such as tungsten, if arranged in layers, could be used as an electromagnetic calorimeter for measuring the energy and shower profile of high-energy electrons and gamma rays produced in collider experiments. In this context, pad detectors are being fabricated on a 6-inch silicon wafer at Bharat Electronics Limited (BEL), Bangalore. It will be an array of 8 cm x 9 cm, consisting of 72 pad cells each with an active area of 1 cm x 1 cm. During this presentation the design, fabrication, and test results of Si pad detectors will be reported.
Identification of low transverse energy photons from the calorimeter energy deposits is a challenging task in a hadron collider environment. The electromagnetic calorimeter subsystem of CMS has an average noise level of about 30 MeV (80 MeV) in the Barrel (Endcap) region. The existing photon identification scheme in proton-proton collisions for the CMS experiment is effective for photons above 8 GeV. For heavy-ion collisions, dedicated customization of the reconstruction method has led to the identification of photons down to 2 GeV, which helped the light-by-light scattering analysis. In p-p collisions, where the low-$q^2$ QCD activity is dominant, the low-$p_T$ photon multiplicity is very large. We have developed a scheme for identifying photons as low as 4 GeV in p-p collisions, using a multivariate technique. This development could help CMS in expanding its reach to rare radiative heavy flavor decays, such as $B_s^0 \to \mu\mu\gamma$, where more than 90% of the decay phase space has a photon below 10 GeV. Highlights of this study will be presented in this talk.
The GRAPES-3 experiment is home to the world's largest muon telescope, containing nearly 4000 proportional counters, each of dimension 6 m x 0.1 m x 0.1 m. Construction of another large muon telescope is currently in progress, which is expected to enrich the physics potential of GRAPES-3 in addressing the origin of Galactic cosmic rays through accurate measurements of the cosmic ray composition, as well as enable the identification of PeV gamma-ray sources. Nearly 4000 proportional counters were required for the new muon telescope. Various fabrication and indigenous test facilities were created at the GRAPES-3 laboratory, along with skill development to perform precision tasks. The challenge of fabricating such a large number of detectors using old mild steel tubes was accomplished by members of the GRAPES-3 collaboration. We will present the challenges faced, the optimizations and innovations during the process of fabrication, the performance validation, and the successful installation of the proportional counters in the muon telescope.
The CBM experiment at FAIR aims to explore the QCD phase diagram at high net baryon density and moderate temperature by colliding heavy nuclei in an energy range of 4 - 12 AGeV. The Muon Chamber (MuCh) detector of CBM is designed specifically to detect muon pairs originating at different stages of heavy-ion collisions. MuCh consists of absorber segments and detector layers positioned in between the absorber pairs to facilitate momentum-dependent track identification in a high particle-density environment, up to an interaction rate of ~10 MHz. Gas Electron Multiplier (GEM) chambers are used in the first two stations of MuCh. Two prototype real-size GEM detectors, called the mMuCh setup, have been installed in the mCBM (mini-CBM) experiment using beams from SIS18 at GSI, Germany. mCBM is part of the FAIR phase-0 program, where a pre-series production of CBM detector systems is being tested with their triggerless streaming readout chain before the final CBM production. In the 2020 mCBM campaign, data for mMuCh along with the other subdetectors were collected for Pb+Au collisions at $E_{beam}$ = 1.06 AGeV, up to an intensity of ~10$^{8}$/spill and with different Au target thicknesses. In this work, we will present the performance of the mMuCh chambers in terms of the linearity of their response with beam intensity and their time and spatial correlations with the other subdetectors. Three layers of the time-of-flight (TOF) detectors have been used to form the tracks in the data. After employing a time-based event reconstruction and tracking on the mCBM data, we have studied the GEM performance in terms of the degree of spatial correlation by constructing residual distributions between the reconstructed GEM hits and the reconstructed tracks projected onto the GEM plane. After re-aligning the detector positions, we estimate the GEM detector efficiency and study its dependence on the GEM HV and the velocity ($\beta$) of the traversing tracks in mCBM. We acknowledge the help extended by our colleagues at GSI during the data taking.
Loosely bound light nuclei are produced in abundance in heavy-ion collisions. There are two main models to explain their production mechanism: the thermal model and the coalescence model. The thermal model suggests that the light nuclei are produced from a thermal source, where they are in equilibrium with the other species present in the fireball. However, due to their small binding energies, the produced nuclei are not likely to survive the high-temperature conditions of the fireball. The coalescence model instead explains the production of light nuclei by assuming that they are formed at later stages by the coalescence of protons and neutrons near the kinetic freeze-out surface. The final-state coalescence of nucleons leads to mass-number scaling of the elliptic flow ($v_2$) of light nuclei: the $v_2$ of light nuclei scaled by their respective mass numbers follows very closely the $v_2$ of nucleons. Therefore, studying the $v_2$ of light nuclei and comparing it with the $v_2$ of protons will help us in understanding their production mechanism.
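In its simplest form, this expectation reads $v_{2}^{A}(p_{T}) \approx A\, v_{2}^{p}(p_{T}/A)$, i.e. the light-nuclei $v_{2}$ divided by the mass number $A$ and plotted against $p_{T}/A$ should coincide with the proton (nucleon) $v_{2}$, which is the scaling tested in the measurements presented here.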
In this talk, we will present the transverse momentum ($p_{T}$) and centrality dependence of $v_2$ of $d$, $t$, and $^3\text{He}$ and their antiparticles in Au+Au collisions at $\sqrt{s_{NN}}$ = 14.6, 19.6, 27, and 54.4 GeV. Mass number scaling of $v_2(p_T)$ of light (anti-)nuclei will be shown and physics implications will be discussed.
Isobar collisions, $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr, at $\sqrt{s_{\mathrm {NN}}}$ = 200 GeV have been performed at RHIC in order to study the charge separation along the magnetic field, called the Chiral Magnetic Effect (CME). The difference in nuclear deformation and structure between the two isobar nuclei may result in a difference in the flow magnitudes. Hence, elliptic flow measurements for these collisions give direct information about the initial state anisotropies. Strange and multi-strange hadrons have a small hadronic cross-section compared to light hadrons, making them an excellent probe for understanding the initial state anisotropies of the medium produced in these isobar collisions. The collected datasets include approximately two billion events for each of the isobar species and provide a unique opportunity for statistics hungry measurements.
In this presentation, we will report the elliptic flow ($v_{2}$) measurement of $K_{s}^{0}$, $\Lambda$, $\overline{\Lambda}$, $\phi$, $\Xi^{-}$, $\overline{\Xi}^{+}$, $\Omega^{-}$, and $\overline{\Omega}^{+}$ at mid-rapidity for Ru+Ru and Zr+Zr collisions at $\sqrt{s_{\mathrm {NN}}}$ = 200 GeV. The centrality and transverse momentum ($p_{T}$) dependence of $v_{2}$ of (multi-)strange hadrons will be shown. System size dependence of $v_{2}$ will be shown by comparing the $v_{2}$ results obtained from Cu+Cu, Au+Au, and U+U collisions. The number of constituent quark (NCQ) scaling for these strange hadrons will also be tested. We will also compare the $p_{T}$-integrated $v_{2}$ for these two isobar collisions. Transport model calculations will be compared to data to provide further quantitative constraints on the nuclear structure.
The fireball of quarks and gluons formed in relativistic heavy ion collisions converts to hadrons as it cools below the chiral crossover temperature $(T_{CO})$. The hadrons so formed may further interact till a point where the densities are low enough to make the reaction rates negligible. After this the yields of individual species do not change significantly and get essentially fixed. This is called chemical freezeout. The freezeout parameters have been extracted using statistical models through fitting to mean hadronic yields. A numerical coincidence arises at zero baryon chemical potential where the extracted chemical freezeout temperature matches with the chiral crossover temperature obtained through lattice QCD. This has till now been treated as only a coincidence without much physics reasoning attributed to it.
We explore the dynamical origin of chemical freezeout by examining the chemical relaxation time of a gas of the SU(3) octet of pseudoscalar mesons in the linear response approximation. The cross-sections are computed using amplitudes from unitarized chiral perturbation theory at next-to-leading-order. With only 12 input parameters namely the pion decay constant $(f_\pi)$, three masses $(m_\pi,\ m_\eta,\ m_K)$ and 8 low energy constants, the amplitudes agree with scattering data and generate the resonances up to masses of about 2 GeV. Our results show that the relaxation time is large (about 100 fm) near $T_{CO}$ and the system cannot remain in chemical equilibrium once it enters the chiral symmetry broken phase. The long relaxation times are directly related to the fact that these mesons are pseudo-Goldstone bosons of chiral symmetry breaking. We further argue that as the relaxation time near $T_{CO}$ is much larger than the typical timescale for expansion, freezeout has to occur at the chiral crossover temperature.
We study the thermoelectric response of a thermal medium of deconfined quarks and gluons in the framework of relativistic kinetic theory. The response of the medium is quantified by the Seebeck and Nernst coefficients which relate the mutually longitudinal and transverse components, respectively, of the induced electric field and the temperature gradient. To obtain the above coefficients, we use the relativistic Boltzmann transport equation in the relaxation-time approximation, with interactions being incorporated via masses generated by thermal medium, extracted from one loop perturbative thermal QCD.
In the strong magnetic field regime ($|eB|≫T^2$), thermal excitation of fermions to higher Landau levels is exponentially suppressed. As such, the lowest Landau level (LLL) approximation becomes applicable, which restricts the fermion dynamics to the direction of the magnetic field $B$ (1-dimensional). Owing to this vanishing transverse motion, the Nernst coefficient vanishes.
In the weak magnetic field regime ($|eB|≪T^2$), two prominent changes occur: (1) the fermion dynamics is no longer restricted, which leads to a nonzero Nernst coefficient. Thus, the thermoelectric response becomes a $2\times 2$ matrix, with the diagonal elements representing the Seebeck coefficient and the off-diagonal elements the Nernst coefficient. (2) The quasiparticle mass of the fermion evaluated using one-loop perturbation theory is different for the left- and right-handed chiral quark modes, thereby lifting the degeneracy of the chiral modes.
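Schematically, this weak-field response can be written as $E_{i} = \begin{pmatrix} S & N|B| \\ -N|B| & S \end{pmatrix}_{ij}(\nabla T)_{j}$ in the plane transverse to $B$, with the diagonal (Seebeck) part relating the induced field to the parallel component of the temperature gradient and the off-diagonal (Nernst) part to the transverse component; the overall signs depend on the convention adopted.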
The Seebeck coefficient of the medium (absolute values) is found to be a decreasing function of temperature ($T$) in both regimes of the magnetic field, $B$. However, its sign is negative in strong $B$ and positive in weak $B$, suggesting that the direction of the induced electric field is flipped as $|B|$ decreases in the medium. The magnitudes in the weak $|B|$ regime are larger (∼ 2 times) than that in strong $|B|$. Further, in the weak $|B|$ regime, the L-mode Seebeck coefficient elicits a larger response than the R-mode. The sensitivity of the Seebeck coefficient to changes in temperature is found to be comparatively enhanced in the strong $|B|$ limit. The Nernst coefficient is also a decreasing function of temperature with the L mode response being stronger than the R mode. It is zero in the LLL approximation (strong $|B|$ limit) as well as for $B=0$. An interesting consequence of the non-degenerate chiral quark masses in the weak $|B|$ limit is that for certain values of $T$ and $B$, the R-mode quasiquark mass comes out to be negative, which is unphysical. It is found that this happens in such a way so as to generate an upper bound for the ratio $|eB/T^2|$. For $|eB|=0.2 m_\pi^2$ and above, the condition $|eB|/T^2≪1$ is thus enforced by the theory, consistent with the initial assumption.
Photons provide snapshots of the evolution of relativistic heavy-ion collisions, as they are emitted at all stages and do not interact strongly with the medium. With access to the versatility of RHIC, measurements of low-momentum direct photons are made possible across different system sizes and beam energies. An excess of direct photons, above the prompt photon production from hard scattering processes, is observed for system sizes corresponding to $dN_{ch}/d\eta$ of 20-30, with a large azimuthal anisotropy and a characteristic dependence on collision centrality. After subtracting the prompt photon component, the inverse slope of the spectrum increases continuously with $p_T$, the effective temperature being about 250 MeV/c for the $p_T$ range 1-2 GeV/c and about 400 MeV/c for the range 2-4 GeV/c. Within the experimental uncertainty, there is no indication of a system-size dependence of the inverse slope. In this talk, results from small systems and Au+Au collisions from the PHENIX experiment will be presented.
Elastic light-by-light scattering, $\gamma\gamma\rightarrow\gamma\gamma$, is a pure quantum mechanical process, also proposed as a sensitive channel to study physics beyond the standard model. We present the first combination of $\gamma\gamma\rightarrow\gamma\gamma$ cross-section measurement at the LHC, using lead-lead data recorded by the ATLAS and CMS collaborations at 5.02 TeV with the aim of checking the consistency with different standard model predictions. We find the averaged cross-section of 115 $\pm$ 19 nb that is consistent with standard model predictions within two standard deviations. For the first time, we also consider the contribution from $\eta_{b}$(1$S$) meson production to the diphoton invariant mass distribution.
The Indian Scintillator Matrix for Reactor Anti-Neutrinos (ISMRAN) experiment is a very short-baseline (~13 m from the reactor core), above-ground reactor anti-neutrino experiment, aiming to measure the energy spectrum of anti-neutrinos from the Dhruva research reactor, BARC, Mumbai. The ISMRAN experiment is also sensitive to sterile neutrino searches and to monitoring the reactor thermal power in a non-intrusive way. Anti-neutrinos are detected indirectly by measuring the response of the positron and neutron signals inside the ISMRAN volume, which are created by the inverse beta decay (IBD) process of anti-neutrino interactions with the plastic scintillator bars (PSBs). The ISMRAN detector setup consists of 90 PSBs, each having dimensions of 10 cm x 10 cm x 100 cm and wrapped with Gadolinium foils, arranged in a 9 x 10 matrix inside a passive shielding of 10 cm of lead and 10 cm of borated polyethylene. The complete setup is mounted on a movable base structure, which allows measurements at different distances from the reactor core.
In this article, we will describe the optical model, energy resolution model and energy non-linearity model of the PSBs. We have also performed in-situ energy calibration in the reactor-off condition to understand the uniformity of the detector response among the PSBs over time. Measurements of the backgrounds in the reactor-on and reactor-off conditions, and the discrimination of these backgrounds using machine learning techniques, will be presented in this article. We will also describe the signal-to-noise ratio and the optimized selection process used to identify the anti-neutrino candidate events using the ISMRAN detector array.
The full scale ISMRAN experiment was installed and commissioned in the Dhruva reactor hall and the physics data campaign was started at the end of year 2021, in the round-the-clock mode.
The goal of the Short Baseline Neutrino (SBN) program is to definitively address observed anomalies that may originate from short-baseline neutrino oscillations and to search for evidence of the existence of light sterile neutrinos with unprecedented sensitivity in the eV$^2$ mass range. In the SBN program, the near detector (SBND) is close to the source, the intermediate detector is MicroBooNE, and the far detector is ICARUS; all three detectors receive the on-axis Booster Neutrino Beam (BNB). In this talk I will focus on the status of the SBND and ICARUS experiments and describe how SBN will resolve a long-standing neutrino anomaly that can be explained by the existence of a new, non-interacting, "sterile" neutrino.
Several beyond standard model phenomena can affect neutrino oscillations prominently. Two such scenarios are non-standard interactions (NSI) and Lorentz Invariance Violation (LIV). Both phenomena have a significant impact on neutrino oscillations. Although the two theories emerged quite differently, it is still challenging to discriminate between them. The contributions arising from NSI and LIV to the effective neutrino Hamiltonian are similar, which makes them hard to distinguish. NSI affects neutrino oscillations only in the presence of matter, while the effects of LIV can be observable in vacuum as well as in matter. Protvino to ORCA (P2O) is a long-baseline experiment with the longest proposed baseline of 2595 km; it can therefore have a significant matter effect and act as an excellent tool to explore NSI and LIV. In this work, we have studied the effects of NSI and LIV and attempted to distinguish between them at the P2O experiment.
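The degeneracy can be seen at the Hamiltonian level: the CPT-odd LIV term enters the effective Hamiltonian as a constant matrix $(a_{\alpha\beta})$, while the NSI term enters as $\sqrt{2}\,G_{F} N_{e}\,(\epsilon_{\alpha\beta})$, so that the two map onto each other through $a_{\alpha\beta} \leftrightarrow \sqrt{2}\,G_{F} N_{e}\,\epsilon_{\alpha\beta}$; the key difference, as noted above, is that the NSI contribution scales with the matter density $N_{e}$ whereas the LIV contribution survives in vacuum.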
The magnetized iron calorimeter (ICAL) proposed by the INO collaboration is a 51 kton detector made up of 151 layers of 56 mm thick iron, with an air gap of 40 mm between successive iron layers where Resistive Plate Chambers (RPCs), active detectors providing position and timing information, will be placed. ICAL is designed to detect muons generated by the charged current interactions of atmospheric $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ with iron. ICAL is designed to provide a magnetic field of $\sim$1.5 T, with more than 90$\%$ of its volume having a field above 1 T. The magnetic field is one of the critical components of ICAL, since it makes the detector capable of identifying the electric charge of a muon, for example, and also helps in the momentum reconstruction of tracked muons. Since the goal of ICAL is to make precision measurements of the neutrino oscillation parameters, an accurate estimate of the magnetic field in the iron is very important. However, the magnetic field in ICAL will be estimated from measurements using Hall sensors or search coils, and this can introduce errors in the reconstructed muon momentum which affect the physics analysis of the data. A study of how the error in the measurement of the magnetic field propagates into the reconstruction of momentum and other aspects of the physics analysis will be presented. This study is done using GEANT4-based simulations of the ICAL detector.
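The leading effect can be estimated from the bending relation for a charged track, $p_{T}\,[\mathrm{GeV}/c] \simeq 0.3\, B\,[\mathrm{T}]\, R\,[\mathrm{m}]$, which gives $\delta p_{T}/p_{T}\big|_{B} \simeq \delta B/B$ for the field-induced contribution; a percent-level error in the measured field map therefore translates directly into a comparable systematic shift in the reconstructed muon momentum, on top of the resolution effects studied with the GEANT4 simulation.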
The Accelerator Neutrino Nucleon Interaction Experiment (ANNIE) at Fermilab is designed to measure the final-state neutron multiplicity of the neutrino-nucleus interaction. The ANNIE's Gadolinium-loaded water Cherenkov detector is situated in the Booster Neutrino Beam (BNB) at Fermilab. The measurements of ANNIE are crucial for future long-baseline neutrino experiments since a better understanding of the neutrino-nucleus interaction will help to reduce the associated systematic uncertainties. ANNIE is the first neutrino experiment to deploy Large Area Picosecond Photodetectors (LAPPDs) technology in its detector. It will demonstrate the capabilities of LAPPDs in the water Cherenkov detector with a transformative impact on the photodetection-based neutrino detector technology. In this talk, I will briefly discuss the ANNIE detector system and the experiment's status, emphasizing the recent results.
For a better understanding of neutrino properties, we require precision measurements of the oscillation parameters. Presently the systematic uncertainties in these parameters are large, of which 20-25\% arises from the limited understanding of the $\nu/\bar \nu$-N and $\nu/\bar \nu$-A cross sections. Future high-precision measurements require these systematic uncertainties to be reduced to the 2-3\% level. Theoretical as well as experimental work is being performed worldwide. In the few-GeV energy region, MINER$\nu$A is the best experimental effort in this direction. MINER$\nu$A uses a fine-grained tracking detector commissioned for recording (anti)neutrino interactions produced by the NuMI beamline at Fermilab. MINER$\nu$A is dedicated to extracting $\nu/\bar \nu$-A cross sections on a variety of nuclear targets ($C$, $H_2O$, $He$, $Fe$, $Pb$ and $CH$) and to studying nuclear medium effects by obtaining cross-section ratios. MINER$\nu$A ran in two modes, low energy ($\langle E_\nu\rangle \sim 3.5$ GeV) and medium energy ($\langle E_\nu\rangle \sim 6$ GeV).
In this symposium we shall present the preliminary results for charged current antineutrino deep inelastic scattering (DIS) [$\bar \nu_\mu + N(\mathrm{bound~inside~nuclear~target}) \to \mu^+ + X(\mathrm{jet~of~hadrons})$] cross sections in the medium energy mode of MINER$\nu$A. The events observed are restricted to the region 2 $\leq E_{\bar \nu_\mu}\leq$ 50 GeV, 2 $\leq E_{\mu} \leq$ 50 GeV and muon scattering angle $\theta_\mu<17^\circ$ with respect to the beam. Moreover, this DIS analysis employs cuts on the center-of-mass energy of the hadronic system, $W > 2$ GeV, and on the four-momentum transfer squared, $Q^2 > 1$ GeV$^2$. We shall discuss the various techniques used in the cross-section extraction at MINER$\nu$A for passive as well as active targets. The results of $\frac{\sigma^A}{E_{\bar \nu}}$ vs $E_{\bar \nu_\mu}$ and $\frac{d \sigma^A}{dx_{Bj}}$ vs $x_{Bj}$ (the fraction of the momentum carried by the struck parton) for $A = Fe,~Pb,~C,~CH$ shall be presented. The ultimate aim of this study is to extract the cross-section ratios of $C$, $Fe$ and $Pb$ to $CH$. This will be the first direct measurement of nuclear effects in DIS with antineutrinos.
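For reference, the DIS kinematic variables entering these cuts are defined in the standard way: $Q^{2} = 4 E_{\bar\nu} E_{\mu}\sin^{2}(\theta_{\mu}/2)$ (neglecting the muon mass), $x_{Bj} = Q^{2}/(2 M_{N}\nu)$ and $W^{2} = M_{N}^{2} + 2 M_{N}\nu - Q^{2}$, where $\nu = E_{\bar\nu} - E_{\mu}$ is the energy transfer and $M_{N}$ the nucleon mass.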
A review of results related to the so-called B anomalies is presented. The results come from both the $e^+e^-$ B factories and the LHC. The talk concludes with a discussion of some of the recent results from the Belle II experiment related to these anomalies.
The $B^{0}\to K^{0}\pi^{0}$ decay is mediated by flavor-changing neutral currents, which are suppressed in the standard model (SM), and it provides an indirect route to search for beyond-the-SM particles. The decay is a key factor in improving the sensitivity of the $K$--$\pi$ isospin sum-rule. The first time-dependent analysis of the decay within Belle II is performed using a sample of $e^{+}e^{-}$ collisions corresponding to $189.8 fb^{-1}$ of integrated luminosity recorded at the $\Upsilon(4S)$ resonance. We measure the decay branching fraction $\mathcal{B}(B^{0} \to K^{0} \pi^{0}) = [11.0 \pm 1.2\, (stat.) \pm 1.0\, (syst.)] \times 10^{-6}$ and direct $CP$ violation asymmetry $A_{CP} (B^{0} \to K^{0} \pi^{0})= -0.41_{-0.32}^{+0.30}\, (stat.) \pm 0.09\, (syst.)$.
The measurements of several lepton flavor universality (LFU) violating observables in decays induced by the quark-level transition $b \to c\tau \bar{\nu}$ provide hints of physics beyond the standard model. These deviations can be resolved by adding a single vector leptoquark to the Standard Model. To further explore this leptoquark, we estimate the impact of new physics in $b \to c\tau \bar{\nu}$ on the $\Lambda_b \to p\tau\bar{\nu}$ decay in the context of the $U_1$ leptoquark model. In this model, the new physics couplings in the $b \to u\tau \bar{\nu}$ transition can be written in terms of the $b \to c\tau \bar{\nu}$ couplings, and hence the extent of allowed new physics in $\Lambda_b \to p\tau\bar{\nu}$ is determined by the $b \to c\tau \bar{\nu}$ transition. The new physics parameter space is obtained by performing a fit to all $b \to c\tau \bar{\nu}$ data, and we obtain predictions for several $\Lambda_b \to p\tau\bar{\nu}$ observables. We find that the current $b \to c\tau \bar{\nu}$ data allow an order-of-magnitude enhancement in the branching ratio as well as in the LFU ratio. The other observables, such as the convexity parameter, lepton forward-backward asymmetry, and longitudinal polarizations of the final-state baryon and tau lepton, are consistent with the SM values.
We constrain the parameter space of a simplified dark matter model with a spin-0 mediator and fermionic dark matter using low-energy observables such as the anomalous magnetic moment, FCNC processes like neutral-meson mixing, rare decays of $B^0$, $B_s^0$ and $K$ mesons, semileptonic $b \to s \ell \ell$ decays, invisible decays of $B$ and $K$ mesons, and the $t \to b W$ decay. FCNCs are generated in this model via one-loop penguin diagrams. We have studied the phenomenology both at higher values of the mediator mass and in the low-mass region ($M_S \leq 10$ GeV). Tight constraints are obtained for both regions. These constraints are then used to further constrain the dark-sector parameters from the relic density and the spin-independent cross-section limit given by XENON1T. They can also be used for other phenomenological studies.
The study of $B$ decays has led to a much better understanding of the flavor sector of the SM. In the study of CP violation, the measurement of $\sin(2\beta)$ plays an important role. The "golden channel" $B_d^0 \to J/\psi K_s^0$ plays an outstanding role for a clean measurement of $\sin(2\beta)$, where $\beta$ is one of the angles of the CKM unitarity triangle. The channel $B_s^0 \to J/\psi K_s^0$ is related to $B_d^0 \to J/\psi K_s^0$ by interchanging all $d$ quarks with $s$ quarks. The determination of the effective lifetime of $B_s^0 \to J/\psi K_s^0$ will be an essential step towards the time-dependent CP-violation study. As a first step, we are measuring the effective lifetime of the decay $B_s^0 \to J/\psi K_s^0$ with the full Run II data collected by the CMS detector.
Although an integral part of the Standard Model, neutrinos still remain the least understood of all known fundamental particles. There are many open questions in neutrino physics, starting from their very nature to their mass generation, their mixing and oscillation patterns. In this talk, I will cover the current theoretical understanding of neutrinos and the various open questions that remain unanswered. I will also discuss why neutrinos can be the gateway to the elusive beyond Standard Model physics and the various interesting and sometimes surprising connections between neutrinos and Dark Matter, Proton decay etc.
With the establishment of the oscillation phenomena, neutrino physics is entering a precision era, although there are still several unknowns, such as the neutrino mass ordering, possible CP violation in neutrino physics, amongst others. In this talk, I will review the current experimental status of neutrino physics and the planned experiments at least in the next two decades which will enable us to measure some of these important parameters which are unknown today.
Future colliders are necessary to understand the Higgs boson at percent-level precision. Any percent-level deviation in the Higgs sector points to new physics at the 10 TeV scale. The natural choice made by the world community calls for a plan to build an electron-positron Higgs factory followed by a 10-TeV-scale collider, using either a 100-TeV center-of-mass proton-proton collider or a 10-TeV muon collider. The talk will discuss options gaining momentum worldwide on both the Higgs factory and the 10-TeV collider fronts.
In 2020 the US Department of Energy announced its intention to build the Electron-Ion Collider (EIC) at Brookhaven National Laboratory. An EIC project team consisting of scientists from BNL and Jefferson Laboratory was formed immediately, and the project moved forward. It has since gone through important critical decisions and evaluations and is poised to start construction in 2025, culminating in first collisions in the early 2030s. In this talk I will present an overview of the scientific motivation of the project, both for high-energy particle and nuclear physicists, and the project status. I will conclude with highlights of opportunities for early-career scientists (graduate students, postdocs and faculty) to influence the machine and detector design and to do physics at this facility as future leaders in the field.
One of the important concepts that governs the amplitude and phase of energy transmission is impedance. The other is the concept of the geometric wavefunction, which arises from geometric algebra. While the Pauli sigma matrices form the basis of space in 3D, the Dirac matrices are the basis vectors of space-time in the geometric representation. Wavefunction interactions are modeled by geometric products, which turn fermions into bosons and vice versa. The physical manifestation of vacuum wavefunction interactions follows from the assignment of appropriate quantized E and B fields to the eight vacuum wavefunction components. This is used to calculate a quantized impedance network as a function of energy, with its nodes specified in powers of $\alpha$, the electromagnetic coupling constant. Particle lifetimes are multiplied by the speed of light to obtain their coherence lengths, which are in turn converted to corresponding energy units; the lifetimes of certain particles such as the $\pi_{0}$ and $\eta$ are seen to match nodes of the impedance network. Using the fact that impedances must be matched for the energy transmission essential in a decay, we determine the branching ratios for the $\pi_{0}$.
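As a rough numerical illustration of the lifetime-to-energy conversion described above (using the PDG value $\tau_{\pi_0} \approx 8.5\times10^{-17}$ s; the specific numbers are ours, not taken from the talk):
$$ l_{\rm coh} = c\,\tau_{\pi_0} \approx (3\times10^{8}\,{\rm m/s})(8.5\times10^{-17}\,{\rm s}) \approx 26\ {\rm nm}, \qquad E = \frac{\hbar c}{l_{\rm coh}} = \frac{\hbar}{\tau_{\pi_0}} \approx \frac{197\ {\rm eV\,nm}}{26\ {\rm nm}} \approx 7.7\ {\rm eV}. $$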
The clockwork mechanism is a relatively new mechanism to generate suppressed couplings in a theory containing no small parameters. We develop a new class of clockwork theories with an augmented structure of the near-neighbour interactions along a one-dimensional closed chain. Such a topology leads to new and attractive features in addition to generating light states with hierarchical couplings via the usual clockwork mechanism. For one, there emerges a $\mathbb{Z}_2$ symmetry under the exchange of fields resulting in a physical spectrum consisting of $\mathbb{Z}_2$ even and odd states with a two-fold degeneracy at each level. The lightest odd particle, being absolutely stable, could be envisaged as a potential dark matter candidate. Evidently, the theory can also be obtained as a deconstruction of a five-dimensional theory embedded in a geometry generated by a linear dilaton theory on a $S^1/\mathbb{Z}_2$ orbifold with three equidistant 3-branes.
The High-Luminosity Large Hadron Collider (HL-LHC) is the upgraded version of the LHC, with an integrated luminosity nearly ten times larger than that recorded at the LHC. It will enable physicists to explore well-known systems, including the Higgs boson, in greater detail and to identify new phenomena such as Supersymmetry (SUSY). SUSY is a widely studied theory of physics beyond the standard model (SM), in which each fermion is assigned a bosonic superpartner and vice versa. The superpartners have the same quantum numbers as their SM partners except for spin. A search is performed for the supersymmetric partner of the SM top quark, the top squark, in the final state containing hadronically decaying tau leptons and large missing transverse momentum $E_{T}^{miss}$. The top squark sector can serve as a probe of a rich variety of models and phase-space scenarios. Higgsino-like scenarios or the high-$\tan\beta$ region favour the decay of electroweak gauginos to the third-generation fermions, the tau leptons. The left-handed and right-handed top squarks can mix with each other, and their polarization can be measured by measuring the tau polarization in the 1-prong hadronic tau decay channel. The search uses simulated events from Delphes at 3000 $fb^{-1}$ luminosity at the HL-LHC.
We propose an E8⊗E8 unification of the standard model with pre-gravitation, on an octonionic space (i.e. an octonion-valued twistor space equivalent to a 10D space-time). Each of the E8 has in its branching an SU(3) for space-time and an SU(3) for three fermion generations. The first E8 further branches to the standard model SU(3)c⊗SU(2)L⊗U(1)Y and describes the gauge bosons, Higgs and the left chiral fermions of the standard model. The second E8 further branches into a right-handed counterpart (pre-gravitation) SU(3)grav⊗SU(2)R⊗U(1)g of the standard model, and describes right chiral fermions, a Higgs, and twelve gauge bosons associated with pre-gravitation, from which general relativity is emergent. The extra dimensions are complex and they are not compactified, and have a thickness comparable to the ranges of the strong force and the weak force. Only classical systems live in 4D; quantum systems live in 10D at all energies, including in the presently observed low-energy universe. We account for 208 out of the 496 degrees of freedom of E8⊗E8 and propose an interpretation for the remaining 288, motivated by the trace dynamics Lagrangian of our theory.
The fundamental nature of neutrinos, whether they are Dirac or Majorana fermions, is still unknown and has been an open question for a long time. If neutrinos are of Majorana type, then the two-flavour neutrino mixing matrix contains a Majorana phase. However, this phase does not appear in the neutrino oscillation probabilities, either in vacuum or in matter-modified oscillations. This leads to the question: under what conditions does the Majorana phase appear in the oscillation probabilities? We find that the Majorana phase remains in the oscillation probabilities if the neutrino decay eigenstates are not the same as the mass eigenstates. In such a scenario we find the possibility of two kinds of CP violation: one due to the Majorana phase and the other due to the off-diagonal parameter of the neutrino decay matrix. We also point out another interesting result, that the CP-violating terms in the oscillation probabilities are sensitive to the neutrino mass ordering.
The nature of neutrinos, whether Dirac or Majorana, is hitherto unknown. Assuming neutrinos to be Dirac, which requires $B-L$ to be an exact symmetry, we make an attempt to explain the observed proportionality between the relic densities of dark matter (DM) and baryonic matter in the present Universe, ${\it i.e.,}\,\, \Omega_{\rm DM} \approx 5\, \Omega_{\rm B}$. Assuming the existence of a heavy $SU(2)_L$ scalar doublet $(X= (X^0, X^-)^T)$ in the early Universe, an equal and opposite $B-L$ asymmetry can be generated in the left- and right-handed sectors by the CP-violating out-of-equilibrium decay $X^0 \to \nu_L \nu_R$, since $B-L$ is an exact symmetry. We ensure that $\nu_L-\nu_R$ equilibration does not occur until below the electroweak (EW) phase transition, during which a part of the lepton asymmetry gets converted to a dark matter asymmetry through a dimension-eight operator, which conserves $B-L$ symmetry and remains in thermal equilibrium. The remaining $B-L$ asymmetry then gets converted to a net B-asymmetry through EW sphalerons, which are active at temperatures above 100 GeV. To alleviate the small-scale anomalies of $\Lambda$CDM, we assume the DM to be self-interacting via a light mediator, which not only depletes the symmetric component of the DM but also paves a way to detect the DM at terrestrial laboratories through scalar portal mixing.
A decade after the discovery of the Higgs boson, direct searches for new particles have established an energy gap between the SM and new physics. In this scenario the framework of effective field theory is the ideal way forward. Nowadays it is common practice to explain deviations from SM predictions by incorporating effective operators. We can treat the SM as an effective theory by adding higher-dimensional terms to its Lagrangian, trying to capture the footprint of the more complete UV theory; this is commonly known as the bottom-up approach. Alternatively, we can choose a complete UV theory, identify the heavy degrees of freedom, integrate them out and obtain operators of higher mass dimension; this is known as the top-down approach. Covariant Derivative Expansion (CDE) is one of the methodologies that integrates out heavy fields and generates the effective operators and their Wilson coefficients. The two most intriguing traits of CDE are, first, that the method is manifestly gauge-invariant, so the effective operators generated at the end are also gauge-invariant, and second, that its applicability is universal. Encapsulating these features, there is a formula dubbed the universal one-loop effective action (UOLEA), which has an algorithmic essence to it. The Mathematica-based package CoDEx, built on the UOLEA, is one of the tools that can integrate out heavy particles at tree as well as one-loop level and generate effective operators of mass dimension six.
Low-mass dark matter (sub-GeV/$c^2$) searches have been a primary objective of direct-detection experiments over the last few years. The SuperCDMS HVeV Si detector is sensitive to low-mass dark matter candidates owing to its $\mathcal O$(eV) energy resolution. Based on the Neganov-Trofimov-Luke (NTL) effect, the phonon-sensitive HVeV device can resolve single charge excitations inside the crystal. This study aims to calibrate the energy of three 1-gram cryogenic Si HVeV detectors at the $\mathcal O$(keV) scale and to study the Compton steps (K-shell and L-shell steps at 1.8 keV and 0.15 keV, respectively) using the detector response at both 0 V and 100 V bias voltages across the crystal. In this symposium, we will present updates on the Compton-step analysis for the Si HVeV detectors. The understanding of Compton steps for these detectors will be used to calibrate the large SuperCDMS HV detectors for the $2^{\mathrm{nd}}$-generation SuperCDMS experiment at SNOLAB.
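For context, in NTL-assisted phonon readout the total phonon energy measured at a bias voltage $\Delta V$ is conventionally written as (standard relation, not specific to this analysis)
$$ E_{\rm phonon} = E_{\rm recoil} + n_{eh}\, e\, \Delta V , $$
where $n_{eh}$ is the number of electron-hole pairs created; at $\Delta V = 100$ V each pair contributes 100 eV of Luke phonons, which is what makes single-charge resolution possible.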
Dark matter (DM) having no non-gravitational interaction with the standard model (SM), residing in an internally thermalized sector decoupled from the SM, may undergo number-changing self-scatterings in the early universe. In the non-relativistic regime, these reactions, such as the 3$\rightarrow$2 process, can make the DM temperature cool at a much slower rate than standard non-relativistic matter due to the cannibalism effect. As shown in earlier studies, there are very strong constraints from structure formation if the cannibal phase takes place in the matter-dominated epoch. We show that DM decoupled from the SM, undergoing cannibalism that freezes out in the radiation-dominated epoch, can be viable, satisfying all cosmological and theoretical constraints. We solve coupled Boltzmann equations for the DM density and temperature to find the present DM abundance for different DM self-couplings. We then evaluate cosmological constraints on these parameters from big-bang nucleosynthesis bounds on the relativistic degrees of freedom, the CMB power spectrum and Lyman-$\alpha$ bounds on the free-streaming length of DM, and theoretical bounds on the upper limit of the 3$\rightarrow$2 annihilation cross-section from S-matrix unitarity. We find that a scalar cannibal DM with mass in the range of around 90 eV to 600 TeV can produce the observed DM relic density and be consistent with all constraints when the initial DM temperature ($T_{\rm DM}$) is lower than $T_{\rm SM}$, with $T_{\rm SM}/8000\leq T_{\rm DM}\leq T_{\rm SM}/1.1$.
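For orientation, the number density in a 3$\rightarrow$2 cannibal scenario is typically evolved with a Boltzmann equation of the schematic form (generic textbook form, not the specific set of coupled equations solved in this work)
$$ \frac{dn}{dt} + 3Hn = -\langle\sigma v^{2}\rangle_{3\to2}\left(n^{3} - n^{2} n_{\rm eq}\right), $$
supplemented by an equation for the DM temperature $T_{\rm DM}$ that tracks the energy released in each annihilation.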
Organic scintillators are capable of providing efficient gamma-ray and neutron detection in mixed neutron-gamma radiation fields. The energy deposition due to Compton-scattered electrons provides the major contribution to the response function, owing to the presence of comparatively low atomic number (Z) elements in organic scintillators. In this work, the organic scintillator EJ-315 is used to study the response functions, which lack full-energy peaks, of the standard gamma radioisotopes Ba-133, Cs-137 and Co-60. The detector is calibrated using the Compton edge of the experimentally measured response function. A gamma unfolding procedure is utilized for unfolding measured as well as simulated response functions, or complex gamma spectra. A Pulse Shape Discrimination (PSD) technique is developed to identify whether a signal is generated by a gamma ray or a neutron.
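As an illustrative sketch of the Compton-edge calibration step (the gamma-line energies are standard values; the small routine itself is our assumption, not the authors' code):

# Compton edge energies used to calibrate an organic scintillator
# that shows no full-energy peaks (only Compton continua).
M_E = 511.0  # electron rest energy, keV

def compton_edge(e_gamma_kev: float) -> float:
    """Maximum energy transferred to an electron by a gamma ray (keV)."""
    return 2.0 * e_gamma_kev**2 / (M_E + 2.0 * e_gamma_kev)

# Principal gamma lines (keV) of the sources quoted in the abstract.
sources = {"Ba-133": 356.0, "Cs-137": 661.7, "Co-60 (1)": 1173.2, "Co-60 (2)": 1332.5}

for name, e_gamma in sources.items():
    print(f"{name:10s}  E_gamma = {e_gamma:7.1f} keV  Compton edge = {compton_edge(e_gamma):7.1f} keV")

# A linear calibration (channel -> keV) can then be fit to the measured
# Compton-edge channel positions, e.g. with numpy.polyfit(channels, edges, 1).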
The Belle II detector is located at the SuperKEKB energy-asymmetric $e^{+}e^{-}$ collider, which has achieved the world's highest instantaneous luminosity this year. Charged particle identification (PID) in Belle II is provided by the TOP (Time Of Propagation) counters in the barrel region. We report the overall and TOP-focused PID performance in the recently recorded 208 $fb^{-1}$ of data, using the decay $D^{*+} \rightarrow D^{0}[K^{-}\pi^{+}]\pi^{+}$ as a control sample.
Gamma-ray bursts (GRBs) are among the most luminous events in the Universe since the Big Bang, with $E_{\gamma, \rm iso} \approx 10^{48}-10^{54}$ erg: brief (lasting from a few seconds to a few hours) and extremely bright flashes of very high energy electromagnetic radiation, occurring at an average rate of about one event per day at cosmological distances. These astrophysical events produce electromagnetic radiation from optical to very high energy gamma rays (> GeV); ultra-high-energy particles such as cosmic rays and neutrinos are also expected to be produced in these sources.
GRBs have generally been classified into two distinct groups, "short" and "long", based on the well-established bimodal fit of the duration distribution and also on the theoretically expected sources of their emission. Events with a duration of less than about two seconds are classified as short gamma-ray bursts (SGRBs). SGRBs account for about 30% of GRBs, and their main sources of origin are neutron star-neutron star or neutron star-black hole mergers. Most of the observed GRB events, about 70%, have a duration greater than two seconds and are named long gamma-ray bursts (LGRBs); long GRBs originate mainly from the collapse of a massive star into a black hole and also from supernova explosions. However, many past studies have pointed to indications of a third class of GRBs, with durations intermediate between SGRBs and LGRBs.
The aim is to investigate, through a machine-learning statistical approach, whether a third class is present in the T90 duration distribution in the intermediate time interval, based on the following datasets. For this analysis, a dataset of more than 500 GRBs from Fermi/GBM, AstroSat/CZTI and Swift/BAT is considered. After fitting these datasets, a decision boundary between the two Gaussian distributions is illustrated.
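A minimal sketch of such a classification, assuming the T90 durations have been collected into an array (the file name, variable names and the use of scikit-learn are our assumptions; the actual analysis may use a different fitting framework):

import numpy as np
from sklearn.mixture import GaussianMixture

# t90_seconds: array of T90 durations (s) from Fermi/GBM, AstroSat/CZTI, Swift/BAT
t90_seconds = np.loadtxt("t90_catalogue.txt")          # hypothetical input file
log_t90 = np.log10(t90_seconds).reshape(-1, 1)

# Fit two- and three-component Gaussian mixtures to the log10(T90) distribution
# and compare them with information criteria.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(log_t90) for k in (2, 3)}
for k, gm in models.items():
    print(f"{k} components: BIC = {gm.bic(log_t90):.1f}, AIC = {gm.aic(log_t90):.1f}")

# For the two-component fit, the decision boundary between "short" and "long"
# bursts is where the posterior probabilities of the two components are equal.
grid = np.linspace(log_t90.min(), log_t90.max(), 2000).reshape(-1, 1)
post = models[2].predict_proba(grid)
boundary = grid[np.argmin(np.abs(post[:, 0] - post[:, 1]))][0]
print(f"decision boundary at T90 ~ {10**boundary:.2f} s")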
The CMS experiment at CERN uses a two-stage trigger system to filter and store events of physics importance: a hardware-based Level-1 (L1) trigger that uses fast electronics to process data in a pipelined fashion at 40 MHz with an output rate of 100 kHz, and a software-based High-Level Trigger (HLT) run on computer farms with an output rate of around 1.5 kHz. Many novel trigger algorithms, coupled with technological developments such as heterogeneous computing on GPUs, were developed to cope with the increased luminosity and physics needs of Run 3. This talk summarises the performance of these triggers, both L1 and HLT, in early Run 3 data.
Coherent Elastic Neutrino-Nucleus Scattering (CEvNS) is a phenomenon in which a neutrino and a nucleus collide elastically in a coherent manner. This process involves low-energy neutrinos (with energies between 10 keV and a few MeV) and surpasses any other neutrino scattering cross section at these energies by a wide margin, but observing it has always been challenging due to the tiny recoil energy of the target nucleus. This is why, despite the fact that D. Freedman predicted this phenomenon in 1973, it was not actually observed until 2017 at Oak Ridge National Laboratory (USA) by the COHERENT collaboration. In this review, we will explore the mechanism underlying this process, its compatibility with the Standard Model, and the characteristics that make it challenging to observe experimentally. Then, we will review the various methods used by experiments such as COHERENT, CONNIE, and CONUS to successfully observe the scattering. We will conclude with a discussion of the applications of these experiments, such as portable neutrino detectors and measuring nuclear sizes, and of how these results can also aid in the search for sterile neutrinos and in probing dark matter.
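For reference, the often-quoted leading-order SM form of the CEvNS cross section, differential in the nuclear recoil energy $T$ (neglecting subleading kinematic terms), is
$$ \frac{d\sigma}{dT} = \frac{G_F^{2} M}{4\pi}\, Q_W^{2} \left(1 - \frac{M T}{2 E_\nu^{2}}\right) F^{2}(q^{2}), \qquad Q_W = N - \left(1 - 4\sin^{2}\theta_W\right) Z, $$
where $M$, $N$ and $Z$ are the nuclear mass, neutron number and proton number, and $F(q^2)$ is the nuclear form factor; the $\propto N^2$ coherent enhancement is what makes the cross section so large, while the keV-scale recoil $T$ is what makes it hard to observe.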
The non-thermal production of dark matter (DM) usually requires very tiny couplings of the dark sector with the visible sector and is therefore notoriously challenging to hunt for in laboratory experiments. Here we propose a novel pathway to test such a production in the context of a non-standard cosmological history, using both gravitational wave (GW) and laboratory searches. We investigate the formation of DM from the decay of a scalar field that we dub the reheaton, as it also reheats the Universe when it decays. We consider the possibility that the Universe undergoes a phase with a \textit{kination-like} stiff equation of state ($w_{\rm kin}>1/3$) before the reheaton dominates the energy density of the Universe and eventually decays into Standard Model and DM particles. We then study how first-order tensor perturbations generated during inflation, whose amplitude may get amplified during the kination era, can lead to detectable GW signals. Demanding that the reheaton produces the observed DM relic density, we show that the reheaton's lifetime and branching fractions are dictated by the cosmological scenario. In particular, we show that it is long-lived and can be searched for by various experiments such as DUNE, FASER, FASER-II, MATHUSLA, SHiP, etc. We also identify the parameter space which leads to complementary observables for GW detectors such as LISA and u-DECIGO. In particular, we find that a kination-like period with an equation-of-state parameter $w_{\rm kin}\approx 0.5$, a reheaton mass of $\mathcal O(0.5-5)$ GeV and a DM mass of $\mathcal O(10-100)$ keV may lead to sizeable imprints in both kinds of searches.
In the Abelian projection of QCD, it has been shown that every charge (electric or magnetic) of a dyon screens the direct potential to which it minimally couples and anti-screens the dual potential, resulting in dual superconductivity in accordance with the generalized Meissner effect. An Abelian Higgs model incorporating dual superconductivity and confinement has been developed, and its string representation has been established in terms of the average of Wilson loops in this Abelian projection of QCD. In the context of the restricted chromodynamics (RCD) of the SU(2) and SU(3) gauge theories, monopole condensation and chromomagnetic superconductivity have been studied. The monopole current in RCD chromomagnetic superconductors has been determined by constructing the RCD Lagrangian and the partition function for monopoles in terms of the string action and the action of the current around the strings. In conclusion, it is the coherence length and not the penetration length that governs the monopole density.
Keywords: Superconductivity, monopoles, restricted chromodynamics (RCD), chromomagnetic superconductivity
Correlations between multiparticle cumulants and the mean transverse momentum in proton-proton (pp), proton-lead (pPb), and peripheral lead-lead (PbPb) collisions are presented as a function of charged-particle multiplicity. This correlation carries information on the origin of flow in small collision systems by showing a characteristic sign change at very low multiplicity. In PYTHIA8 events this sign change exists as a result of nonflow effects. To reduce the nonflow dependence, a new correlator combining multiparticle cumulants and the average transverse momentum is suggested. In this talk, we will present results for this correlator using two- and four-particle cumulants for the second- and third-order Fourier harmonics for the above three systems. Predictions based on the color-glass condensate and hydrodynamic models are compared to the experimental results.
We make a comprehensive study of vector-like fermionic dark matter and flavor anomalies in a simple extension of the standard model. The model adds doublet vector-like fermions of quark and lepton type, along with a $S_1(\bar{\textbf{3}},\textbf{1},1/3)$ scalar leptoquark. An additional lepton-type singlet fermion is included, whose admixture with the vector-like lepton doublet plays the role of dark matter and is examined from the relic density and direct detection perspectives. Electroweak precision observables are computed to constrain the model parameter space. We constrain the new couplings from the branching ratios and angular observables associated with $b \to s ll (\nu_l \bar \nu_l)$ and $b \to s \gamma$ decays, and also from the recent measurement of the muon anomalous magnetic moment. We then estimate the branching ratios of the rare lepton-flavor-violating $B_{(s)}$ decay modes such as $B_{(s)} \to l_i^\mp l_j^\pm$ and $B_{(s)} \to (K^{(*)}, \phi) l_i^\mp l_j^\pm$.
Dark matter within the framework of minimal extended seesaw
In this paper, we study the prospect of using ECAL barrel timing to develop triggers dedicated to long-lived particles decaying to jets at Level-1 of the HL-LHC. We construct over 20 timing-based variables and identify two of them which have the best performance and are robust against increasing PU. We estimate the QCD prompt-jet background rates accurately using the "stitching" procedure for varying thresholds defining our triggers, and compute the signal efficiencies for different LLP scenarios at a permissible background rate. The trigger efficiencies can go up to O(80%) for the most optimal trigger for pair-produced heavy LLPs with large decay lengths, and degrade with decreasing mass and decay length of the LLP. We also discuss the prospect of including the information from displaced L1 tracks in our triggers, which further improves the results, especially for LLPs characterised by lower decay lengths.
We have studied different viscous coefficients of the thermal QCD medium at finite magnetic field and chemical potential in the kinetic theory approach. The interactions among partons have been incorporated through their thermal masses. It is found that the magnetic field reduces both the shear ($\eta$) and bulk ($\zeta$) viscosities, whereas the chemical potential enhances these viscosities. Thus, completely different effects of the magnetic field and the chemical potential on the shear and bulk viscosities are observed. This study further helps us to understand the sound attenuation through the Prandtl number (Pr), the nature of the flow through the Reynolds number (Re), and the fluid characteristics and information on the phase transition of matter through the specific shear ($\eta/s$) and specific bulk ($\zeta/s$) viscosities, respectively. The observation on the Prandtl number reveals that momentum diffusion prevails over thermal diffusion in the sound attenuation, and the presence of the magnetic field strengthens this dominance, contrary to the chemical potential, which weakens it. An increase of the Reynolds number due to the magnetic field and its decrease due to the chemical potential have also been observed, and the nature of the constituent flow remains laminar. Furthermore, $\eta/s$ decreases at finite magnetic field, opposite to its increase at finite chemical potential, whereas a decrease of $\zeta/s$ is observed in the presence of both magnetic field and chemical potential.
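For reference, the dimensionless numbers mentioned above are conventionally defined as (generic non-relativistic definitions; the precise relativistic expressions used in this work may differ)
$$ {\rm Pr} = \frac{\text{momentum diffusivity}}{\text{thermal diffusivity}} = \frac{\nu}{\alpha}, \qquad {\rm Re} = \frac{u\,L}{\nu}, $$
where $\nu$ is the kinematic (shear) viscosity, $\alpha$ the thermal diffusivity, $u$ a characteristic flow velocity and $L$ a characteristic length of the medium.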
We use machine learning with an event generator (Sar$t$re) for the processes $e\,p \rightarrow e'\,p'\,V_M$ and $e\,A \rightarrow e'\,A'\,V_M$. Sar$t$re uses 3-dimensional look-up tables to generate events, in which the first two moments of the amplitude are stored. In eA collisions the generation of these look-up tables takes many months. I will present a method, using neural networks, which reduces the computing time by up to 90%. This will be important for doing simulations in the ongoing preparations for the Electron-Ion Collider.
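A minimal sketch of the idea of replacing a look-up table with a neural-network surrogate, assuming the table is indexed by three kinematic variables (e.g. $Q^2$, $W$ and $t$) and stores the amplitude moments; the file name, variable names, network size and use of scikit-learn are our assumptions, not Sar$t$re's actual interface:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: rows of (Q2, W, t) and the corresponding
# first two amplitude moments taken from an already computed look-up table.
table = np.loadtxt("amplitude_table.txt")     # hypothetical file: Q2, W, t, <A>, <A^2>
X, y = table[:, :3], table[:, 3:]

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(scaler.transform(X), y)

# Once trained, the network replaces the slow table generation: amplitude
# moments at arbitrary (Q2, W, t) points are obtained by a fast evaluation.
def amplitude_moments(q2, w, t):
    return net.predict(scaler.transform([[q2, w, t]]))[0]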
More than eighty years after they were first proposed, neutrinos still remain an enigma. Although they are an integral part of the Standard Model, still we know very little about them. In particular, the Dirac or Majorana nature of neutrinos remains a mystery. For a long time, theoretical particle physicists believed that neutrinos must be Majorana in nature and several elegant mass generation mechanisms have been proposed for Majorana neutrinos. However, in recent years there is a renewed interest in exploring the possibility of neutrinos being Dirac particles. In this talk, I will discuss many ways in which naturally small Dirac neutrino masses can be generated. I will also discuss the various interesting and sometimes surprising connections between Dirac nature of neutrinos and Dark Matter stability, proton decay etc.
We investigate displaced-vertex scenarios for the Type-III and Type-I seesaw at the LHC, MATHUSLA and muon colliders. Seesaw mechanisms are motivated to explain the small but non-zero neutrino masses, and as a result we expect heavy charged or neutral fermions to exist. These heavy fermions tend to have displaced decays due to their rather small Yukawa couplings. At the LHC, however, the partons collide with momenta distributed according to the parton distribution functions; due to the larger longitudinal boost, such decays are often more displaced along the $z$-axis than in the perpendicular directions. We performed a comparative study of the longitudinal and transverse boosts and probed the regions of parameter space at the LHC and FCC for energies of 14, 27 and 100 TeV. At a muon collider, on the contrary, though the momentum is constant in each event, the transverse momentum tends to be large. We analyse such features and identify the regions that can be probed by present and future colliders.
We investigate the possibility of identifying the spin of exotic charged particles at a future $e^+e^-$ collider in the $l^\pm + 2j + E_T$ final state, choosing the IDM and the MSSM as examples of new physics models with scalar and fermionic exotic charged particles, respectively. We choose four benchmarks for the mass parameters that give a significant deviation from the SM $W^+W^-$ background. We find that the $\cos\theta$ distributions of the $W$ boson reconstructed from the $jj$ pair and of the lepton have the potential to distinguish the MSSM signal from the IDM signal with longitudinally polarized initial beams. A more robust comparison is seen in the shape of the azimuthal angle of the $W$ boson and the charged lepton, which can identify the IDM signal further if the beams are transversely polarized.
Heavy quarks are essential tools for understanding the behavior of the Quark-Gluon Plasma (QGP). Our study offers insight into the interaction of the charm quark with the thermalized, deconfined medium. The information about the charm quark interaction in the medium is encoded in its drag and diffusion coefficients. As the relaxation time of the charm quark is expected to be much longer than that of the light quarks, heavy quarks can carry information about the medium. In this work, using the Color String Percolation Model (CSPM), we have estimated the relaxation time ($\tau_{\rm c}$), the drag coefficient ($\gamma$), and the transverse momentum diffusion coefficient ($\rm B_{0}$) of the charm quark as functions of temperature. We have also computed the spatial diffusion coefficient ($\rm D_{s}$) as a function of $T/T_{\rm c}$. We have finally compared our results with various phenomenological models and with lQCD data. Around the critical temperature $T_{\rm c}$, we find $D_{\rm s}$ to be minimum, which indicates a significant increase in the interaction strength of the charm quarks in the medium.
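For orientation, in a Brownian-motion picture the drag and diffusion coefficients are commonly related by the Einstein (fluctuation-dissipation) relations (standard non-relativistic form; the exact relations used in this work may differ)
$$ B_0 = M\,\gamma\,T, \qquad D_s = \frac{T}{M\,\gamma}, $$
where $M$ is the charm-quark mass and $T$ the temperature of the medium, so a minimum in $D_s$ corresponds to a maximum in the effective drag.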
In this paper, we analyse the JLA data on supernova observations in the context of a $k$-essence dark energy model with Lagrangian $L=VF(X)$, with a constant potential $V$ and the dynamical term $X = \frac{1}{2}\nabla_{\mu}\phi\nabla^{\mu}\phi = \dot{\phi}^2/2$ for a homogeneous scalar field $\phi(t)$, in a flat FRW spacetime background. Scaling relations are used to extract the temporal behaviour of different cosmological quantities and the form of the function $F(X)$ from the data. We explore how the parameters of the model, viz. the value of the constant potential $V$ and a constant $C$ appearing in the emergent scaling relation, control the dynamics of the model in the context of the JLA data, by setting up and analysing an equivalent dynamical system described by a set of autonomous equations.
Event shapes are classical tools for the determination of the strong coupling and for the study of hadronization effects in electron-positron annihilation. In the context of analytical studies, hadronization corrections take the form of power-suppressed contributions to the cross section, which can be extracted from the perturbative ambiguity of Borel-resummed distributions. We propose a simplified version of the well-established method of Dressed Gluon Exponentiation (DGE), which we call Eikonal DGE (EDGE), that determines all dominant power corrections to event shapes by means of strikingly elementary calculations. We believe our method can be generalized to hadronic event shapes and jet shapes of relevance for LHC physics.
A deconfined medium of quarks and gluons called the Quark-Gluon Plasma (QGP) is produced when heavy nuclei are collided at relativistic energies. The formation of the QGP is often characterized by a phenomenon called strangeness enhancement, in which the production of strange relative to non-strange particles is enhanced with respect to peripheral or proton-proton interactions. Besides the enhancement in K/π ratios, a non-monotonic energy dependence was also reported for Λ̄ to p̄ ratios at the CERN SPS, attributed to a signature of strangeness enhancement. As anti-particles are produced directly in the reaction, the Λ̄/p̄ ratios are considered a cleaner probe of strangeness enhancement. However, in this energy range hadronic interactions are dominant and, particularly for Λ̄ and p̄, processes like baryon-anti-baryon (BB̄) annihilation can significantly modify the final yields and spectral shapes, which may lead to an apparent enhancement of the Λ̄/p̄ ratios. In this work, we use the UrQMD hadronic transport model to investigate the role of baryon-anti-baryon (BB̄) annihilation in Λ and Λ̄ hyperon production and its effect on the Λ̄/p̄ ratios. The UrQMD calculations with BB̄ annihilation reproduce the trend of the average transverse mass spectra for Λ and Λ̄, as well as the characteristic enhancement of the Λ̄/p̄ ratios in data as a function of centrality and collision energy. Furthermore, the Λ̄/p̄ ratios extracted from the feed-down-corrected SPS data are in good agreement with the UrQMD model calculations with BB̄ annihilation. This suggests that the Λ̄/p̄ enhancement cannot be interpreted as a direct signature of strangeness enhancement, and that BB̄ annihilation has a significant role to play.
The study of high multiplicity proton-proton collisions has revealed striking similarities with respect to the observations made for nucleus-nucleus collisions.
The understanding of the underlying particle production mechanisms in pp collisions is therefore important. Multiplicity and pseudorapidity distributions of inclusive photons are among the basic measurements to shed light on the physics processes involved in these collisions. Photon production is dominated by neutral-pion decays and is thus complementary to measurements of charged particles.
In this work, we will present the measurements of inclusive photon multiplicity at forward rapidities using PYTHIA 8 simulation at LHC energies. The effect of Multiple Parton Interactions (MPIs) and colour reconnection (CR) mechanisms on photon production will be studied in detail.
Non-local correlations are typically measured in terms of the Bell's inequality parameter. It was shown that the non-local advantage of quantum coherence (NAQC) is a better measure of non-locality than the Bell's inequality parameter in neutrino systems [1]. We investigate the effects of non-standard interactions (NSI) on these measurements in the context of several accelerator and reactor experiments for the two-flavour neutrino oscillation scenario. We observe that the effects of NSI are enhanced in the KamLAND experimental setup. Furthermore, we demonstrate that, while NAQC is a more powerful measure of non-locality, the Bell's inequality parameter is more susceptible to NSI effects [2].
[1] M. L. Hu, X. M. Wang, and H. Fan, Phys. Rev. A 98, 032317 (2018).
[2] B. Yadav, T. Sarkar, K. Dixit, and A. K. Alok, Eur. Phys. J. C, no. 5, 1-10 (2022).
We study the analytical attractor solutions of third-order hydrodynamic theory under one-dimensional boost-invariant expansion and employ them to analyze the spectra of thermal particles from the quark-gluon plasma. We use these analytical solutions to constrain the allowed initial states by demanding positivity and reality of the energy density throughout the evolution. Moreover, we evaluate the thermal particle yields within the framework of hydrodynamic attractors. We observe that the evolution corresponding to the attractor solution results in the maximum production of thermal particles.
The nature of the neutrino (whether Majorana or Dirac) and the origin of neutrino masses are still mysteries to be resolved. Also, the recent results of the (g-2)$_{e,\mu}$ measurements deviate from the Standard Model (SM) predictions and motivate new physics beyond the SM. In this work, we propose a model with minimal field content in the framework of an anomaly-free extension of the Standard Model, i.e. a U(1)$_{L_e-L_{\mu}}$ symmetry model. We find this model capable of explaining the low-energy neutrino phenomenology and the anomalous magnetic moments (g-2)$_{e,\mu}$ of the electron and muon simultaneously. The field content is extended by an SU(2)$_L$ singlet scalar field $\phi$ and three right-handed neutrinos $N_R$ ($R = 1,2,3$). The neutrino masses are generated using the Type-I seesaw mechanism. The extended model leads to results which are consistent with the experimental values of (g-2)$_{e,\mu}$ and also satisfy all relevant experimental data.
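For context, in the Type-I seesaw the light-neutrino mass matrix follows the standard relation (generic form, with $m_D$ the Dirac mass matrix and $M_R$ the right-handed Majorana mass matrix)
$$ m_\nu \simeq -\, m_D\, M_R^{-1}\, m_D^{T}, $$
which suppresses the light neutrino masses by the heavy scale $M_R$.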
In the present work, we have studied the electron gravitational form factors (GFFs) in a light-front QED model. We consider the physical electron as a composite system consisting of a bare electron and a photon. The GFFs are defined in terms of overlaps of light-front wave functions. We have also studied the mechanical properties of the electron.
The motivation of the cosmic muon veto (CMV) detector is to explore the feasibility of building a large-scale neutrino experiment at shallow depths. Our earlier studies with a small-scale experimental setup have yielded encouraging results, with a cosmic-muon veto efficiency of 99.98%. However, a much larger scale experiment is required to establish and improve this result. With the aim of achieving 99.99% veto efficiency and a false-positive rate of less than $10^{-5}$, an extruded plastic scintillator (EPS)-based active veto system for cosmic-ray muons is being built around the existing miniICAL detector, a scaled-down version of the ICAL detector, at the transit campus of the India-based Neutrino Observatory, Madurai. Each EPS consists of two WLS fibres to collect the scintillation photons and four silicon photomultipliers (SiPMs) as photo-transducers. The smallest module, called a di-counter, is formed by combining two EPS units. A super-module, called a tile, comprises 4 such modules. A veto layer is formed by placing these super-modules adjacent to each other to cover the entire miniICAL detector. To achieve high efficiency and to cover the dead space, up to four of these layers are stacked to form a veto wall. Four such walls, one on top, two on the sides and one on the rear end, form the active veto system which covers the miniICAL detector.
The performance of the modules and super-modules, along with the WLS fibre and SiPM readout system, as well as muon reconstruction in the miniICAL, have been well established. Building on these developments, this work examines the feasibility of building such a large veto system around the miniICAL detector using the GEANT4 toolkit. The efficiency of the CMV is estimated using reconstructed muon tracks in the RPC stack with sufficient hits and good fit quality. The performance of the CMV detector is tested with and without a magnetic field, using the muon reconstruction algorithm and extrapolating the reconstructed tracks to the veto detector. The overall expected performance of the CMV around the miniICAL will be discussed in this presentation.
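A minimal sketch of how such a veto efficiency could be estimated from extrapolated tracks (the counting logic, the numbers and the variable names are our illustrative assumptions; the actual analysis uses the full GEANT4 and reconstruction chain):

import math

def veto_efficiency(n_extrapolated: int, n_vetoed: int):
    """Veto efficiency and its binomial uncertainty from track counts."""
    eff = n_vetoed / n_extrapolated
    err = math.sqrt(eff * (1.0 - eff) / n_extrapolated)
    return eff, err

# Hypothetical numbers: muon tracks reconstructed in the RPC stack whose
# extrapolation crosses the CMV, and the subset with a matched CMV hit.
eff, err = veto_efficiency(n_extrapolated=1_000_000, n_vetoed=999_800)
print(f"veto efficiency = {eff:.5f} +/- {err:.5f}")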
Neutrinos are the only elementary fermions which might be their own antiparticles, i.e. they could be Majorana fermions. When a Majorana neutrino is exchanged in a process, it is possible to have lepton number violation. This has been used to propose various probes for Majorana neutrinos, such as neutrinoless double beta decay. Additionally, since a Majorana neutrino is quantum mechanically indistinguishable from its antiparticle, quantum statistical considerations could also be exploited. Since the neutrino and antineutrino are never detected close to their point of production, formulating such quantum statistical probes has always been challenging. In this talk we will present a few ideas that exploit quantum statistics to probe Majorana neutrinos. These probes are interesting because they are not necessarily constrained by the smallness of the neutrino mass. Such probes can potentially also test whether neutrinos participate in any non-standard interactions.
The existence of different phases of matter produced in relativistic heavy-ion collisions requires a hadronic description, creating much interest in the hadronic phase. We explore the possibility of thermalization and the applicability of hydrodynamics in a hadron gas medium using the Knudsen number (Kn). Kn << 1 implies a system with a large number of collisions, which drives the system towards thermalization. Further, we probe the nature of the system by studying its viscosity and compressibility through the Reynolds (Re) and Mach (Ma) numbers. These dimensionless parameters are studied for different system sizes and baryochemical potentials (μB). The values obtained for these observables at high temperatures point towards the possibility of inviscid, compressible flow in the system (Kn << 1, Re >> 1, and Ma ∼ 1). The comparable values of Kn over different system sizes indicate the applicability of hydrodynamics for systems ranging from high-multiplicity pp to heavy-ion collisions.
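For reference, the standard definitions of the dimensionless numbers used above are (generic forms; the characteristic scales adopted in this work may be chosen differently)
$$ {\rm Kn} = \frac{\lambda}{L}, \qquad {\rm Re} = \frac{\rho\, u\, L}{\eta}, \qquad {\rm Ma} = \frac{u}{c_s}, $$
with $\lambda$ the mean free path, $L$ the system size, $u$ a characteristic flow velocity, $\eta$ the shear viscosity, $\rho$ the density and $c_s$ the speed of sound.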
The deconfined phase of QCD called quark-gluon plasma (QGP), created in relativistic heavy-ion collision experiments, is of the size of a few fermi, which is comparable to the characteristic interaction scale. Hence, understanding the effect of the finite system geometry with a suitable boundary condition is necessary for its theoretical understanding. We study the finite-volume effects using different boundary conditions (periodic, anti-periodic, MIT, etc.) within the framework of the two-flavour Nambu-Jona-Lasinio (NJL) model. We look at the chiral quark condensate and the quark number susceptibilities. We also compare the results from the NJL model at the mean-field level with results obtained from lattice QCD, where lattice results are available.
The seesaw mechanism is a popular approach to explain the source of non-zero neutrino mass and the cause of the matter dominance of the Universe, two of the most important open problems that cannot be answered within the Standard Model (SM) of particle physics. A minimal extension of the SM is studied, incorporating a type-I+II seesaw mechanism with only one right-handed neutrino and one Higgs triplet scalar. These heavy particles contribute to the generation of the tiny neutrino mass, which is inversely proportional to the corresponding heavy particle masses. Considering that leptogenesis is achieved by the decay of the right-handed neutrino, the new source of CP asymmetry comes solely from the decay of the right-handed neutrino via the one-loop vertex correction involving the Higgs triplet scalar. The model's predictability is enhanced by introducing Fritzsch-type 2-zero and 3-zero textures for the neutrino mass matrix and the non-diagonal charged-lepton mass matrix, respectively. We scan the parameter space using the latest neutrino oscillation data, and the phenomenological importance of this hybrid texture is analyzed. We study leptogenesis in the two-flavoured and three-flavoured regimes, and we observe that leptogenesis in the different flavoured regimes, within the temperature range $T\subset[10^{10},10^{11}]$ GeV, can efficiently predict the baryon asymmetry of the Universe within the experimentally obtained range.
Correlations among final-state particles at various pseudorapidity ($ \eta $) values are an important probe of the underlying mechanism of particle production. The forward-backward correlation is a significant tool to understand the dynamics of multi-particle production, as it is believed to be free from final-state effects. The forward-backward correlation strength ($ b_{corr} $) has been calculated for the multiplicities in the backward $(N_{b})$ and forward $(N_{f})$ $ \eta $-intervals as follows:
$ b_{corr} [N_{b}, N_{f}] = \frac{\langle N_{b} N_{f}\rangle - \langle N_{b}\rangle \langle N_{f}\rangle}{\langle N_{f}^{2}\rangle - \langle N_{f}\rangle^{2}} $
In nucleus-nucleus collisions, the volume varies on an event-by-event basis and cannot be controlled. Thus, there is a need for quantities which measure properties of the strongly interacting system independently of volume fluctuations. Strongly intensive quantities are such quantities, being independent of the volume as well as of volume fluctuations. In Refs. [1,2], two families ($ \Delta $ and $ \Sigma $) of strongly intensive quantities were introduced. In the independent particle production model, where inter-particle correlations are absent, these quantities ($ \Delta $ and $ \Sigma $) are normalized in such a manner that they attain the value of unity. In the absence of any event-by-event fluctuations, $ \Delta $ and $ \Sigma $ are equal to zero.
The strongly intensive variable ($ \Sigma $) has been calculated for two extensive variables i.e., $N_{b}$ and $N_{f}$ as follows:
$ \Sigma [N_{b}, N_{f}] = \frac{1}{\langle N_{b}\rangle + \langle N_{f}\rangle}\left[ \langle N_{b}\rangle\, \omega_{N_{f}} + \langle N_{f}\rangle\, \omega_{N_{b}} - 2\left( \langle N_{b} N_{f}\rangle - \langle N_{b}\rangle\langle N_{f}\rangle \right) \right] $, where $\omega_{X} = (\langle X^{2}\rangle - \langle X\rangle^{2})/\langle X\rangle$ is the scaled variance of $X$.
The centrality dependence of these observables has been investigated. The $ b_{corr} [N_{b}, N_{f}] $ has also been studied for different charge combinations.
[1] M. I. Gorenstein and M. Gaździcki, Phys. Rev. C 84, 014904 (2011).
[2] M. Gaździcki, M. I. Gorenstein and M. Mackowiak-Pawlowska, Phys. Rev. C 88, 024907 (2013).
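A minimal sketch of how these two observables could be computed from per-event forward and backward multiplicities, following the definitions above (array names, the toy event sample and the input handling are our assumptions):

import numpy as np

def fb_observables(n_b: np.ndarray, n_f: np.ndarray):
    """Forward-backward correlation strength and the strongly intensive Sigma."""
    cov = np.mean(n_b * n_f) - n_b.mean() * n_f.mean()
    b_corr = cov / n_f.var()
    omega_b = n_b.var() / n_b.mean()          # scaled variance of N_b
    omega_f = n_f.var() / n_f.mean()          # scaled variance of N_f
    sigma = (n_b.mean() * omega_f + n_f.mean() * omega_b - 2.0 * cov) \
            / (n_b.mean() + n_f.mean())
    return b_corr, sigma

# Hypothetical per-event multiplicities in the backward and forward eta intervals,
# built from a common source so that they are positively correlated.
rng = np.random.default_rng(0)
n_common = rng.poisson(20, 100_000)
n_b = n_common + rng.poisson(5, 100_000)
n_f = n_common + rng.poisson(5, 100_000)
print(fb_observables(n_b, n_f))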
Detailed simulation studies on the detection of neutrons have been carried out using the GEANT4 simulation toolkit. Guided by the simulations, the goal is to carry out an experiment at VECC Kolkata. A mono-energetic neutron beam, and neutrons produced from the reactions of alpha and proton beams on indium and tantalum targets, have been used for the studies. It is planned to use a GEM-based gaseous detector for the neutron detection. A triple-GEM detector consists of a drift plane, GEM foils and a readout plane. The detector layout in the simulation consists of a converter material (e.g. $^{10}$B or Gd) coated on the inner side of the drift plane for the conversion of neutrons. The charged particles (e.g. alpha and $^{7}$Li for $^{10}$B) produced in the converter material create ionization in the drift volume of the GEM detector, which contains a gas mixture of Ar and CO2 (70% Ar, 30% CO2). Neutron detection efficiencies for mono-energetic neutron beams for different converter thicknesses and various threshold cuts (on the energy deposition) will be reported. Neutron yields for the setup involving alpha and proton beams have been studied for different target thicknesses. The gammas produced in these reactions are rejected using lead sheets along with threshold cuts on the energy deposition. Systematic studies of the variation of yields for several beam-target combinations and detector-converter configurations will be presented and discussed.
The high granularity calorimeter (HGCAL) is an upgrade to the current CMS endcap calorimeters, designed to deal with the severe radiation dosage expected during the high-luminosity LHC. The majority of the HGCAL will be composed of robust and cost-effective 8" hexagonal silicon sensors, with the last five interaction lengths being based on highly segmented plastic scintillators. Multiple full and partial silicon sensors are mounted together with electronics and cooling to form structures called cassettes, which are further attached together to form each of the 47 layers of HGCAL. For optimal electronic connections, it has been decided that the sensors will be placed in different orientations inside a cassette. The hexagonal readout cells inside the sensors break their rotation symmetry, requiring new definitions for different sensor orientations. We have devised a method to properly orient all silicon sensors to the geometry description of HGCAL in the CMS software framework. We have also added the feature of providing cassette shifts for a more accurate representation of the HGCAL geometry with realistic clearances. The talk will discuss both these efforts in detail.
GTMDs are the mother distribution functions from which GPDs and TMDs can be derived under a specific limit. GPDs and TMDs have been used extensively in the literature to understand the 3-dimensional spatial and spin structure of hadrons. We study the GTMDs of quarks in the light-front dressed quark model. Recently it was claimed that extraction of GTMDs of quark and gluon is possible in the exclusive double Drell-Yan process and exclusive hard diffractive di-jet production in the deep inelastic scattering. In the experiments, skewness is never zero. This makes it an exciting and compelling case to obtain the skewness dependence of GTMDs. We derive the analytical expression of GTMDs of quarks for non-zero skewness in the light-front dressed quark model. Further application and use of the GTMDs obtained are discussed in the context of orbital angular momentum and spin-orbit correlations.
We systematically calculate the mass spectra of the tetraquarks $[cc \bar{b} \bar{b}]$ and $[cb \bar{c} \bar{b}]$ in a non-relativistic diquark-antidiquark model [1,2]. Spin-dependent terms have been incorporated to describe the splitting structure of the orbital and radial excitations. We fit the model's parameters to the experimentally observed $B_{c}^{\pm}$ mesons and use them to obtain the masses of the tetraquarks. The masses of these tetraquarks are found to be in the range 12.5-13.5 GeV and are compared with the two-meson threshold. The details of the study will be presented at the conference.
References:
[1] Rohit Tiwari, D. P. Rathaud, Ajay Kumar Rai Eur. Phys. J. A 57, 289 (2021).
[2] Rohit Tiwari, D. P. Rathaud and A. K. Rai, Indian J. Phys. 96, 1-22 (2022).
In the pre-equilibrium stage of relativistic heavy-ion collisions, strong quasi-classical gluon fields emerge. These dense, coherent, colored electric and magnetic fields are known as the Glasma. The Glasma fields evolve, and the lifetime of these strong fields is of the order of the formation and thermalization time of the QGP, typically a small fraction of fm/c. Heavy quarks (HQs) are good probes of these early stages of high-energy collisions. We aim to study the diffusion of heavy quarks in the evolving Glasma (EvGlasma). We also perform a systematic comparison of the diffusion of HQs in the evolving Glasma fields with that of Markovian Brownian motion in a thermalized medium of gluons. We observe superdiffusion of HQs in the EvGlasma fields, as the transverse momentum broadening $\sigma_p$ of HQs increases non-linearly at very early times. We also find that for a smaller value of the saturation scale $Q_s$, the average transverse momentum broadening is approximately the same in the two cases, but for a larger value of $Q_s$, Langevin dynamics underestimates $\sigma_p$.
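A minimal sketch of the Brownian (Langevin) benchmark against which the Glasma evolution is compared, assuming constant drag and diffusion coefficients (the parameter values and the simple update scheme are our illustrative assumptions, not those of this work):

import numpy as np

# Langevin update for heavy-quark momentum in a static thermal gluon medium:
#   dp = -gamma * p * dt + sqrt(2 * D * dt) * xi,  with <xi xi> = 1 per component.
gamma, D = 0.1, 0.3      # drag (1/fm) and momentum diffusion (GeV^2/fm), illustrative
dt, n_steps, n_quarks = 0.01, 200, 10_000

rng = np.random.default_rng(1)
p = np.zeros((n_quarks, 2))                        # transverse momentum (GeV)
sigma_p = []
for _ in range(n_steps):
    noise = rng.normal(size=p.shape)
    p += -gamma * p * dt + np.sqrt(2.0 * D * dt) * noise
    sigma_p.append(np.mean(np.sum(p**2, axis=1)))  # <p_T^2> broadening vs time

# In ordinary Brownian motion sigma_p grows linearly at early times,
# whereas superdiffusion in the evolving Glasma shows a faster, non-linear rise.
print(sigma_p[::50])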
Axion-like particles (ALPs) are weakly interacting particles that are predicted to exist by many beyond-standard-model theories. A large number of experiments have been constructed, or are under construction, to search for these ALPs both directly and indirectly (through astrophysical or cosmological observations). In this work we have studied how photon signals originating from the oscillation of relativistic ALPs, produced from the decay of a cosmologically stable scalar DM of mass $10^{-7}-10^{-2}$ $\text{eV}$ inside a dwarf spheroidal galaxy (dSph), can be detected by the upcoming radio telescope, the Square Kilometer Array (SKA). We show that observations of dSphs with the SKA can help us put strong bounds on the ALP-photon coupling in the ALP mass range $m_a < 10^{-12}$ eV. We further show that, for a fixed ALP mass and coupling, SKA observations can also help us put bounds in the DM mass vs. lifetime parameter space, thus opening up a new avenue in the indirect detection of DM.
Recent experimental results obtained by LHCb for decays proceeding through the $b \to s$ transition have raised the possibility of lepton flavour violation (LFV), indicating the existence of new physics (NP), as LFV decays are strongly suppressed in the standard model (SM). In the last few years, experiments have obtained upper limits on the branching fractions of the order of $10^{-5}$ for the decays $B_s^0 \to \tau^\pm \mu^\mp$ and $B^0 \to \tau^\pm \mu^\mp$ [1, 2]. This is in stark contrast with the SM predictions for these decays, which are of the order of $10^{-54}$ [3]. Similar results are obtained for many other LFV decays [4]. These results also indicate possible signals of new physics in these decay channels. To explore new physics, there are several NP models in which LFV decays can be studied, such as the non-universal $Z'$ model, the leptoquark model, the two-Higgs-doublet model, etc. In the $Z'$ model, NP contributes at tree level through $Z'$-mediated flavour-changing $b \to s$ and $b \to d$ transitions. In this work, we intend to investigate the differential branching fractions of the LFV decays $B \to K_2^* (1430) l_1^+ l_2^-$ in the non-universal $Z'$ model, where $l_1$ and $l_2$ denote two leptons of different flavours. The LFV decays $B \to K^* l_1^- l_2^+$ have already been studied in the $Z'$ model [5]. Since $K_2^*$ is a higher excited spin-2 state of the $K^*$ meson [6], we expect to observe tracks of NP in the $B \to K_2^* (1430) l_1^+ l_2^-$ decay too. We hope that the results obtained from this study may contribute to the present understanding of LFV decays.
Acknowledgement
M. Mandal acknowledges DST, Govt. of India for providing the INSPIRE Fellowship (IF200277) during her research.
In the standard model (SM) of particle physics, neutrinos are massless because of the absence of their right-handed counterparts. In several extensions of the SM, due to the inclusion of right-handed neutrinos, the neutrino flavour mixing matrix becomes non-unitary (arXiv:1503.08879 [hep-ph]). Another way to incorporate new physics beyond the SM is the consideration of non-standard interactions (NSI), which can be mediated by both vector and scalar bosons. Vector NSI directly affects the matter potential in neutrino oscillations (arXiv:1907.00991 [hep-ph]), while scalar NSI contributes a correction term to the neutrino mass matrix (arXiv:1812.08376 [hep-ph]). Due to the large abundance of dark matter (DM) particles in the universe, it is possible for the scalar mediator to be a potential DM candidate, generating dark NSI. In this work the effect of a non-unitary mixing matrix on the neutrino flavour oscillation probability is analysed in the presence of dark NSI, considering both the normal and inverted mass hierarchies in the context of a long-baseline experimental setup.
We investigate the possibility of two zeros in the inverse neutrino mass matrix ($M_{\nu}^{-1}$) in light of the "dark" large mixing angle (dark-$\theta_{12}$) solution to the solar neutrino problem, where the solar mixing angle lies in the second octant ($\sin^{2}{\theta_{12}}\simeq 0.7$). The zeros in the right-handed Majorana neutrino mass matrix $M_{R}$ correspond to the zeros in $M_{\nu}^{-1}$ if the Dirac and charged lepton mass matrices are diagonal. Out of fifteen possible two-zero textures, only seven are found to be consistent with the dark-$\theta_{12}$ solution. All the textures with vanishing (1,1) element are found to be inconsistent with the dark-$\theta_{12}$ solution. We also obtain predictions of the model for the $0\nu\beta\beta$ amplitude $|M_{ee}|$. For five out of seven allowed textures, the predicted 3$\sigma$ lower bound on the $0\nu\beta\beta$ amplitude $|M_{ee}|$ is $\mathcal{O}(10^{-2})$, which is within the sensitivity reach of $0\nu\beta\beta$ decay experiments like SuperNEMO, KamLAND-Zen, NEXT and nEXO. Furthermore, these textures are found to be necessarily $CP$-violating. Within a Type-I seesaw setting, we show that the allowed $M_{\nu}^{-1}$ textures can be realized using an $A_{4}$ discrete flavor symmetry wherein the standard model particle content is enlarged with three right-handed neutrinos and a scalar singlet field.
Current and prospective state-of-the-art low-threshold direct dark matter detection experiments with multi-ton mass scale are promising facilities to probe neutrino properties and study light mediators. Very recently the LUX-ZEPLIN (LZ) and XENONnT collaborations have published initial data from their search for Weakly Interacting Massive Particles (WIMPs). In these experiments, elastic neutrino-electron scattering (E$\nu$ES) induced by solar neutrinos is reported to be one of the main background components. Therefore, the new data allow us to place constraints on various neutrino properties within and beyond the Standard Model through E$\nu$ES. In this talk I will discuss our recent work on the comparative study of neutrino electromagnetic properties and neutrino generalized interactions (NGIs) with light mediators, using E$\nu$ES and exploiting the recent LZ and XENONnT data. In particular, my focus will be on the adopted methodology and the data fitting.
In recent years, the flavour-changing charged current (FCCC) $b→c\bar l ν_l$ transitions have gained special attention both in experiment and in phenomenological studies to explore new physics (NP) beyond the standard model (SM). Recently, BaBar, Belle and LHCb have measured the lepton flavour universality (LFU) ratios $R_{D^{(*)}}$, and the world-average values exhibit around $3.3σ$ tension with the corresponding SM results [1], implying a possible hint of NP. Such anomalies in $b$-hadron decays raise the question of whether a similar inconsistency can be observed in charm decays induced by $c→s\bar l ν_l$ transitions. Most of the recent experimental measurements of the pure leptonic and semileptonic $D$ decays, except the $D_{(s)}^+→η^{(\prime)}\bar l ν_l$ decays, agree with the SM predictions. In the last few years, a lot of theoretical effort has been devoted to searching for NP contributions in $D$ decays [2, 3]. Experimental results for the branching ratios of the $D_{(s)}^+→η^{(\prime)}\bar l ν_l$ decays show more than $1σ$ tension with the corresponding SM predictions [4], indicating the possibility of NP. Inspired by these results, we study the $D_s^+→η^{(\prime)} μ^+ ν_μ$ decays in the $W'$ model [5-7] to explore NP effects. The $W'$ is a theoretically predicted vector boson that arises from the simplest extension of the SM electroweak gauge group by an extra $SU(2)$ group. In this work, we predict the new $W'$ coupling parameters using recent experimental results for the branching fractions of the above decays. Finally, the effects of NP on the branching fractions are investigated using these new coupling parameters. Our investigation may be helpful in searching for the $W'$ boson in future collider experiments.
References
[1] Y. Amhis et al. (Heavy Flavor Averaging Group), [arXiv:2206.07501v1 [hep-ex]] (2022).
[2] X. Leng, X. L. Mu, Z. T. Zou and Y. Li, Chinese Phys. C 45, 063107 (2021) [arXiv:2011.01061v1 [hep-ph]].
[3] K. Jain and B. Mawlong, 20th Conference on Flavour Physics and CP Violation, Oxford, MS, 2022, [arXiv:2207.04935v1 [hep-ph]].
[4] R. L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
[5] J. D. Gomez, N. Quintero and E. Rojas, Phys. Rev. D 100, 093003 (2019) [arXiv:1907.08357 [hep-ph]].
[6] X. L. Mu, Y. Li, Z. T. Zou and B. Zhu, Phys. Rev. D 100, 113004 (2019) [arXiv:1909.10769 [hep-ph]].
[7] S. Mahata, P. Maji, S. Biswas and S. Sahoo, Int. J. Mod. Phys. A 36, 2150206 (2021).
The 2m$\times$2m single gap Resistive Plate Chambers (RPCs) are used as the active detector elements in the mini-Iron Calorimeter (miniICAL) detector at IICHEP, Madurai. Single gap RPCs are known to provide a time resolution of the order of $\sim$1 ns. The position resolution depends mainly on the strip width; for the miniICAL RPCs the strip width is $\sim$3 cm, which provides a position resolution of $\sim$1 cm. This detector is designed to operate in a magnetic field with a maximum value of 1.5 T for charge and momentum measurement of muons at the Earth's surface. Even a small improvement in the position and time resolution will help to improve the momentum and directionality measurements of muons, and this result can eventually be propagated to the final ICAL experiment at an underground facility.
The width of the discriminated pulse from the front-end amplifier is proportional to the time over threshold (ToT) of the RPC pulse. This information is used to correct the time-walk in the signal due to varying pulse heights. The intrinsic time resolutions of the RPCs are improved from $\sim$0.77-0.98 ns to $\sim$0.57-0.66 ns. The ToT, as well as the time difference between successive strips, is also used to improve the position resolution. The improvement of the time and position resolution of large-scale single gap RPCs, along with other issues such as the effect of the termination resistor on the pickup strips, will be presented.
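As an illustration of this kind of ToT-based time-walk correction, the following minimal sketch fits the hit-time dependence on ToT with a low-order polynomial and subtracts it; the array names, the polynomial form and the toy numbers are assumptions for illustration, not the miniICAL analysis code.

```python
# Illustrative sketch (not the miniICAL analysis code): correct RPC hit times
# for time-walk using the measured time-over-threshold (ToT).
import numpy as np

def timewalk_correction(t_hit, tot, deg=2):
    """Fit hit time vs ToT with a low-order polynomial and subtract the trend.

    t_hit : array of raw hit times relative to a reference (ns)
    tot   : array of time-over-threshold values for the same hits (ns)
    """
    coeffs = np.polyfit(tot, t_hit, deg)        # time-walk trend vs ToT
    t_corr = t_hit - np.polyval(coeffs, tot)    # residual after correction
    return t_corr, coeffs

# Toy usage with synthetic numbers (hypothetical values):
rng = np.random.default_rng(0)
tot = rng.uniform(5.0, 60.0, 10000)                        # ns
t_hit = 2.0 - 0.03 * tot + rng.normal(0.0, 0.8, tot.size)  # walk + jitter
t_corr, _ = timewalk_correction(t_hit, tot)
print("sigma before: %.2f ns, after: %.2f ns" % (t_hit.std(), t_corr.std()))
```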
The current work focuses on the bounce realization and inflationary dynamics of a modified Chaplygin gas under the purview of bulk viscosity. The bouncing scale factor considered here corresponds to $a(t)=\left[\frac{a(1+3\sigma t^{2})}{2}\right]^{1/3}$ and the modified Chaplygin gas is characterized by the barotropic equation of state (EoS) $p=\epsilon\rho- (1+\epsilon)A\rho^{-\gamma}$. The EoS parameter, reconstructed for the bouncing scale factor, has been studied, and we have demonstrated the pre-bounce and post-bounce realizations. We have also carried out the Hubble flow dynamics in the current scenario in terms of the e-folding number. In the next phase of the study, we conducted a statistical analysis of the model parameters and subsequently implemented the Shannon entropy maximization procedure to optimize them; for the Shannon entropy maximization we have used redshift data. Finally, we have assessed the model using fuzzy set theory: we have developed a continuous fuzzy membership function and judged the fuzzy membership grades of the model-generated EoS parameters at different redshifts to assess the departure from $\Lambda$CDM.
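For orientation, the following minimal symbolic sketch (assuming the bouncing form quoted above with a constant prefactor $a_0$, a hypothetical symbol) computes the Hubble parameter $H=\dot a/a$ and the effective EoS parameter $w_{\rm eff}=-1-\tfrac{2}{3}\dot H/H^{2}$ for such a scale factor.

```python
# Minimal symbolic sketch (assumptions: the bouncing form quoted in the
# abstract with a constant prefactor a0; w_eff = -1 - (2/3) Hdot/H^2).
import sympy as sp

t, sigma, a0 = sp.symbols('t sigma a0', positive=True)
a = (a0 * (1 + 3 * sigma * t**2) / 2) ** sp.Rational(1, 3)   # bouncing scale factor

H = sp.diff(a, t) / a                                  # Hubble parameter
w_eff = -1 - sp.Rational(2, 3) * sp.diff(H, t) / H**2  # effective EoS parameter

print(sp.simplify(H))
print(sp.simplify(w_eff))
```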
Many kinds of symmetries are invoked in model building to help investigate neutrino phenomenology. In this work, we explore a very simple permutation group, $S_3$, to obtain neutrino masses and mixing in the inverse seesaw framework. To avoid certain drawbacks of the traditional discrete symmetries, we introduce their modular forms. Here, the Yukawa couplings transform under the modular symmetry and are expressed in terms of the Dedekind eta function; the role of the flavon fields is thus carried by the modular Yukawa couplings, reducing their usage. By doing so, we hope to make clear the impact and importance of the modular $S_3$ symmetry, which is taken into account when explaining neutrino mixing consistent with the most recent findings. We also examine the non-zero reactor mixing angle and make an effort to adjust the model parameters accordingly. Finally, we briefly discuss the muon $(g-2)$ in our model in light of the current results.
With the onset of the LHC, several studies of small collision systems (proton-proton and proton-lead) at high multiplicity have revealed collective phenomena similar to those observed in heavy-ion collisions, where these effects can be understood through the formation of hot and dense partonic matter, the Quark-Gluon Plasma (QGP). However, jet quenching, one of the most important characteristic features of QGP formation in heavy-ion collisions, has not yet been observed in small systems, thereby posing questions about the origin(s) of the collective-like effects and the possible formation of QGP in such systems. In this work, we have studied the radial transverse momentum density profile $\rho(r)$ inside inclusive charged-particle jets in high-multiplicity and minimum-bias proton-proton (pp) collisions at $\sqrt{s}$ = 13 TeV using the PYTHIA 8 Monash 2013 Monte Carlo simulation. We will present the possible sources contributing to the modification of $\rho(r)$ in high-multiplicity pp events compared to the minimum-bias ones.
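To illustrate how such a radial profile can be built from jet constituents, here is a minimal sketch; the variable names, binning and toy constituents are assumptions for illustration, not the analysis code.

```python
# Illustrative sketch: radial transverse-momentum density rho(r) of a jet,
# built from its constituents. Names and binning are assumptions.
import numpy as np

def rho_profile(const_pt, const_dr, jet_pt, r_edges):
    """rho(r) = (1/jet_pt) * (sum of constituent pT in each annulus) / dr."""
    dr = np.diff(r_edges)
    pt_sum, _ = np.histogram(const_dr, bins=r_edges, weights=const_pt)
    return pt_sum / (jet_pt * dr)

# Toy usage with hypothetical constituents:
r_edges = np.linspace(0.0, 0.4, 9)               # annuli in Delta R
const_pt = np.array([20.0, 8.0, 5.0, 2.0, 1.0])  # GeV
const_dr = np.array([0.02, 0.08, 0.15, 0.25, 0.35])
print(rho_profile(const_pt, const_dr, const_pt.sum(), r_edges))
```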
Non-perturbative formulations are essential to understand the dynamical compactification of extra dimensions in superstring theories. The type IIB (IKKT) matrix model in the large-$N$ limit is one such conjectured formulation for a ten-dimensional type IIB superstring. In this model, a smooth space-time manifold is expected to emerge from the eigenvalues of the ten bosonic matrices. When this happens, the SO(10) symmetry in the Euclidean signature must be spontaneously broken. The Euclidean version has a severe sign-problem since the Pfaffian obtained after integrating out the fermions is inherently complex. In recent years, the complex Langevin method (CLM) has successfully tackled the sign problem. We apply the CLM method to study the Euclidean version of the type IIB matrix model and investigate the possibility of spontaneous SO(10) symmetry breaking. In doing so, we encounter a singular-drift problem. To counter this, we introduce supersymmetry-preserving deformations with a Myers term. We study the spontaneous symmetry breaking in the original model at the vanishing deformation parameter limit. Our analysis indicates that the phase of the Pfaffian induces the spontaneous SO(10) symmetry breaking in the Euclidean type IIB model.
An analysis of the charged multiplicity in proton-proton inelastic interactions at LHC energies in the setting of the Dual Parton Model is presented. Data simulated at different energies in various pseudo-rapidity windows using the event generator PYTHIA8 are analysed and compared with the calculations from the model and with the published data from the CMS experiment. The theoretical Koba-Nielsen-Olesen (KNO) scaling of the multiplicity distributions is studied and compared with the experimental results at $\sqrt{s}$ = 0.9, 2.36, 7 TeV. Predictions from the model for the KNO distributions at $\sqrt{s}$ = 13, 13.6 TeV and for the future LHC energy of 27 TeV are computed and compared with the simulated data.
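As a reminder of the rescaling involved, KNO scaling compares $\psi(z)=\langle n\rangle P(n)$ as a function of $z=n/\langle n\rangle$ across energies. The following minimal sketch performs that rescaling on hypothetical multiplicity samples (not the PYTHIA8 or CMS data used in this work).

```python
# Minimal KNO-scaling sketch: rescale a multiplicity distribution P(n)
# to psi(z) = <n> P(n) versus z = n / <n>.  Input samples are hypothetical.
import numpy as np

def kno_rescale(n_ch):
    n_ch = np.asarray(n_ch)
    n_mean = n_ch.mean()
    values, counts = np.unique(n_ch, return_counts=True)
    p_n = counts / counts.sum()          # P(n)
    z = values / n_mean                  # KNO variable
    psi = n_mean * p_n                   # KNO function
    return z, psi

# Toy usage: negative-binomial-like multiplicities at two "energies"
rng = np.random.default_rng(1)
for mean in (20, 40):
    sample = rng.negative_binomial(n=3, p=3 / (3 + mean), size=50000)
    z, psi = kno_rescale(sample)
    print(mean, z[:3], psi[:3])
```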
The Gas Electron Multiplier (GEM) detector, one of the advanced members of the Micro Pattern Gas Detector (MPGD) family, is widely used in High Energy Physics (HEP) experiments. Its good rate-handling capability and spatial resolution make it a desirable tracking detector for high-rate HEP experiments. Investigation of the long-term stability is an essential criterion for any tracking device used in HEP experiments.
To investigate the long-term stability of a single-mask triple GEM detector prototype, it is irradiated continuously using an Fe$^{55}$ X-ray source of energy 5.9 keV and operated with an Ar/CO$_2$ gas mixture in continuous flow mode. The gain and energy resolution of the chamber are calculated from the 5.9 keV X-ray peak and studied as a function of time. The applied voltage, divider current and also the environmental parameters (ambient temperature, pressure and humidity) are recorded continuously. It is observed that at a fixed applied voltage, the divider current of the detector changes with time and, as a result, the gain of the detector also changes. A systematic investigation is carried out to understand the probable reasons behind the observed variation in the divider current and also to find its possible remedies. The details of the experimental setup, methodology and results will be presented.
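For illustration, a common way to track such gain variations is to compute an effective gain from the measured anode current and X-ray rate and then divide out an exponential temperature-over-pressure dependence; the sketch below does this with assumed, illustrative constants (the primary-electron count and fit parameters are not the measured values of this study).

```python
# Sketch (assumptions: primary-electron count for 5.9 keV in Ar/CO2 and an
# exponential T/p normalization are illustrative, not the measured values).
import numpy as np

N_PRIMARY = 212.0      # assumed number of primary electrons for 5.9 keV X-rays
E_CHARGE = 1.602e-19   # elementary charge, C

def effective_gain(anode_current_A, rate_Hz):
    """Gain = collected charge per X-ray / (primary electrons * e)."""
    return anode_current_A / (rate_Hz * N_PRIMARY * E_CHARGE)

def tp_normalized_gain(gain, temp_K, pressure_mbar, A=1.0, B=0.0):
    """Divide out the exponential dependence on T/p (A, B from a fit)."""
    return gain / (A * np.exp(B * temp_K / pressure_mbar))

print(effective_gain(anode_current_A=2.0e-9, rate_Hz=1.0e3))
```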
The higher twist T-even transverse momentum dependent distribution (TMD) $h_3(x, {\bf p_\perp^2})$ for the proton has been examined in the light-front quark-diquark model (LFQDM). By deciphering the unintegrated quark-quark correlator for semi-inclusive deep inelastic scattering (SIDIS), we have derived explicit equations of the TMD for both the scenarios when the diquark is a scalar or a vector. Average as well as average square transverse momenta have been computed for this TMD. Additionally, we have discussed its transverse momentum dependent parton distribution function (TMDPDF) $h_3(x)$.
We discuss leptogenesis in a specific scotogenic model, where the Standard Model is extended by scalar and fermionic singlets and doublets charged odd under a $\mathcal{Z}_2$ parity. This model is phenomenologically attractive as it is designed to dynamically generate small neutrino masses, provide viable dark matter candidates and also account for the current value of the $(g_{\mu}-2)$ anomaly. In this talk, we discuss the production of a lepton asymmetry via the decays of the heavy fermionic singlets in this model, which is then converted into the observed baryon asymmetry through the sphaleron process. We identify regions of parameter space where successful leptogenesis is compatible with the $(g_{\mu}-2)$ anomaly, lepton-flavour violating decays, such as $\mu \rightarrow e\gamma$, and the relic density of dark matter.
Radiative transitions between quarkonium states are interesting: those characterized by $\Delta L=0$ are the magnetic dipole (M1) transitions, while those characterized by $|\Delta L|=1$ are the electric dipole (E1) transitions. The M1 transition mode is sensitive to relativistic effects, especially between different spatial multiplets (where $n > n'$), while the E1 transitions are much stronger than the M1 transitions and involve transitions between excited states.
We calculate the radiative decay widths of heavy-light quarkonia for the above-mentioned processes in the framework of the $4\times 4$ Bethe-Salpeter equation (BSE), a fully relativistic approach that incorporates the relativistic effect of quark spins and can also describe the internal motion of the constituent quarks within the hadron in a relativistically consistent manner, owing to its covariant structure. Our wave functions satisfy the 3D BSE, which is obtained from a 3D reduction of the 4D BSE under the Covariant Instantaneous Ansatz (a Lorentz-invariant generalization of the Instantaneous Approximation), and thus already contain relativistic effects. Further, our transition amplitudes also have a relativistically covariant form.
We thus use the $4\times 4$ Bethe-Salpeter equation under the Covariant Instantaneous Ansatz to calculate the M1 transitions [1-2], $0^{-+}\rightarrow 1^{--} \gamma$, $1^{--}\rightarrow 0^{-+}\gamma$, and the E1 transitions [1-2] involving axial vector mesons, such as $1^{+-} \rightarrow 0^{-+}\gamma$, $0^{-+}\rightarrow 1^{+-} \gamma $, $1^{++}\rightarrow 1^{--}\gamma$, and $1^{--}\rightarrow 1^{++}\gamma$, for which very little data is available as of now. We make use of the general structure of the transition amplitude $M_{fi}$ as a linear superposition of terms involving all possible combinations of the $++$ and $--$ components of the Salpeter wave functions of the final and initial hadrons, expressible in a covariant form in terms of transition form factors. In the present work, we make use of the leading Dirac structures in the hadronic Bethe-Salpeter wave functions of the involved hadrons, which makes the formulation more rigorous. We evaluate the decay widths for both the above-mentioned $M1$ and $E1$ transitions. We have used algebraic forms of the Salpeter wave functions, obtained through analytic solutions of the mass spectral equations for the ground and excited states of $1^{--}$, $0^{-+}$ and $1^{+-}$ heavy-light quarkonia in an approximate harmonic oscillator basis, to carry out analytic calculations of their decay widths. We have compared our results with experimental data [3], wherever available, and with other models.
References:
1. V. Guleria, E. Gebrehana, S. Bhatnagar, Phys. Rev. D 104, 094045 (2021) (and references therein).
2. V. Guleria, E. Gebrehana, S. Bhatnagar (under preparation) (2022).
3. P. A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
Due to the ongoing absence of various well-motivated beyond-the-Standard-Model (BSM) signals at the Large Hadron Collider, there is a renewed interest in model-independent search strategies. Autoencoders are a class of neural networks that can learn the properties of complex high-dimensional distributions utilising an information bottleneck, first mapping the input to a lower-dimensional latent representation and then reconstructing the input features from the reduced information. They have been proposed for various model-independent searches at the LHC. In this talk, we will discuss Graph Autoencoders that can learn inductive jet representations without explicit usage of a graph readout operation. When trained to reconstruct only QCD jets, these graph autoencoders with "edge-reconstruction" networks can learn to differentiate various signal jets.
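To make the reconstruction-based idea concrete, here is a minimal dense autoencoder sketch (a simplified stand-in, not the graph autoencoder with edge reconstruction discussed above): it is trained on background-like features and uses the per-event reconstruction error as an anomaly score. All inputs are synthetic placeholders.

```python
# Minimal dense autoencoder for anomaly detection (a simplified stand-in for
# the graph autoencoders discussed above): train on "background" features and
# use the reconstruction error as the anomaly score.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=16, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

background = torch.randn(2048, 16)          # hypothetical QCD-jet features
for epoch in range(20):                     # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(background), background)
    loss.backward()
    opt.step()

test = torch.randn(8, 16)
score = ((model(test) - test) ** 2).mean(dim=1)   # per-event anomaly score
print(score)
```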
Although classical autoencoders have advantages, quantum computing technology promises to leverage quantum mechanical properties like entanglement and superposition to speed up various computational problems. Quantum machine learning based on Noisy-intermediate-scale-quantum devices is particularly efficient in learning from a low amount of data. We will also discuss quantum autoencoders based on variational quantum circuits for the problem of anomaly detection at the LHC and compare their performance and training efficiency with similarly expressive bit-based classical autoencoders.
In non-perturbative Quantum Chromodynamics (QCD), the potential model formalism has been quite successful in exploring the physical properties of heavy-flavoured mesons, where the determination of the wave function of the heavy-flavoured meson system is essential. We employ Dalgarno's perturbation theory (DPT), the variational method and variationally improved perturbation theory (VIPT) to solve the Schrödinger equation in a QCD-inspired potential model with the linear-plus-Coulomb Cornell potential. The computed wave function of the system is then used to determine several static and dynamic properties of heavy-flavoured mesons. A comparison of the three methods is discussed in terms of the obtained results and the available experimental data.
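As a minimal illustration of the variational step, the following sketch minimizes the energy expectation value of a hydrogen-like trial wave function $\psi\propto e^{-\mu r}$ in the Cornell potential; the coupling, string tension and reduced mass are assumed, illustrative values, not the parameters used in this work.

```python
# Minimal variational sketch (illustrative parameter values): trial wave
# function psi ~ exp(-mu * r) for the Cornell potential
#   V(r) = -(4/3) * alpha_s / r + b * r   (natural units, GeV).
from scipy.optimize import minimize_scalar

alpha_s = 0.4      # assumed strong coupling
b = 0.18           # assumed string tension, GeV^2
m_red = 0.75       # assumed reduced quark mass, GeV

def energy(mu):
    kinetic = mu**2 / (2.0 * m_red)          # <T> for psi ~ exp(-mu r)
    coulomb = -(4.0 / 3.0) * alpha_s * mu    # <-(4/3) alpha_s / r>
    linear = 1.5 * b / mu                    # <b r> = 3 b / (2 mu)
    return kinetic + coulomb + linear

res = minimize_scalar(energy, bounds=(0.05, 5.0), method='bounded')
print("optimal mu = %.3f GeV, E = %.3f GeV" % (res.x, res.fun))
```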
Employing heavy quark effective theory, we predict the masses of $n = 3$ strange bottom mesons. Using the theoretical information available on charm mesons and flavour-symmetry parameters, we calculate masses for the radially excited ($n = 3$) P-wave bottom meson states. From the calculated masses, we plot Regge trajectories in the ($J$, $M^2$) and ($n_r$, $M^2$) planes; the predicted states fit nicely on these trajectories. Our results may provide crucial information on higher excited states and may motivate upcoming experiments at LHCb, PANDA, BESIII, D0, etc. to look for these states.
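For illustration, a linear Regge trajectory $J=\alpha_{0}+\alpha' M^{2}$ can be fitted and inverted to estimate the mass of the next state, as in the following minimal sketch; the input masses are hypothetical placeholders, not the values predicted in this work.

```python
# Minimal Regge-trajectory sketch: fit J = alpha0 + alpha' * M^2 to a few
# (J, M) points and invert it to predict the mass of the next state.
# The input masses below are hypothetical placeholders.
import numpy as np

J = np.array([1.0, 2.0, 3.0])
M = np.array([5.32, 5.74, 6.11])            # GeV, placeholder values
alpha_prime, alpha0 = np.polyfit(M**2, J, 1)

J_next = 4.0
M_next = np.sqrt((J_next - alpha0) / alpha_prime)
print("alpha' = %.3f GeV^-2, predicted M(J=4) = %.2f GeV" % (alpha_prime, M_next))
```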
Why is there something instead of nothing? Every particle in nature has a corresponding antiparticle. In theory as well as in experiments, particles and antiparticles always come in pairs, yet we only observe matter in our everyday environment. Does our universe fundamentally favour matter over antimatter?
Was this matter-antimatter asymmetry present at the birth of the universe, or did the universe develop it later?
To answer this question, Andrei Sakharov in 1967 proposed the following conditions necessary for generating a matter-antimatter asymmetry:
1. Baryon number violation: According to Grand Unified Theories (GUTs), baryon number is not conserved in the interactions of fermions with heavy GUT-scale gauge and Higgs bosons. GUTs tell us that the early universe had a very high energy density that could lead to the creation of heavy particles (leptoquarks) which could mediate baryon number violation.
2. CP violation:
The parity operator acts on a particle's wave function by reflecting all vectors through the origin, which changes the handedness of the particle. For parity conservation, both left-handed and right-handed particles should decay with equal probability.
Charge conjugation acts on a particle to convert it into its antiparticle.
Applying both operators to a right-handed particle results in a left-handed antiparticle. For CP to be conserved, the decay rate of the right-handed particle should equal that of the left-handed antiparticle.
Val Fitch and James Cronin performed an experiment on K mesons (recognized by the 1980 Nobel Prize) which showed CP violation. Since CP violation treats matter and antimatter differently, in the early universe it could have enhanced the rate of conversion of antimatter into matter.
3. Departure from thermal equilibrium: In thermal equilibrium, every process and its reverse proceed at the same rate; for matter to come to dominate over antimatter, a departure from thermal equilibrium is required.
Beyond the Standard Model:
1. The CP violation from the weak interaction is far too small; it is not enough to account for the matter of even one galaxy. Could CP violation also be present in the strong interaction, and if not, could some other hidden force beyond the Standard Model be responsible for the required CP violation?
2. Baryon number violation can exist in Grand Unified Theories, but to test whether a GUT is realized in our universe we need to look for proton decay, which is not allowed in the Standard Model.
3. Leptogenesis: scenarios in which lepton number violation generated in heavy sterile neutrino decays produces the baryon asymmetry.
Experimental work is being done at Fermilab to detect differences between neutrino and antineutrino oscillations, which could provide strong evidence for leptogenesis.
Galactic cosmic rays are deflected by the Sun's magnetic field, leading to significant energy-dependent temporal and spatial variations in their intensity. The muons observed at GRAPES-3 are cosmic-ray secondaries arising from extensive air showers produced in the interactions of these cosmic rays with the upper atmosphere. We observe strong correlations between the muon flux measured by GRAPES-3 and the upper atmospheric temperature, as well as the interplanetary magnetic field (at the Lagrange point L1). These correlations make the atmospheric muon flux a promising tool for monitoring both the upper-atmosphere temperature and the interplanetary magnetic field in real time. I will present the detailed analysis technique and results from more than 17 years of operation of the GRAPES-3 muon telescope, as well as plans for a future live monitoring system using atmospheric muons and data from the Aditya-L1 mission by ISRO.
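The kind of correlation study mentioned above can be sketched very simply: compute the correlation coefficient between the muon-rate time series and an effective atmospheric temperature series, and extract a linear "temperature coefficient". The example below uses purely synthetic, hypothetical time series, not GRAPES-3 data.

```python
# Illustrative sketch: correlation between a muon-rate time series and an
# effective atmospheric temperature series (hypothetical, synthetic data).
import numpy as np

rng = np.random.default_rng(3)
temp = 220.0 + 5.0 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 0.5, 2000)
rate = 1.0e4 * (1.0 - 0.002 * (temp - 220.0)) + rng.normal(0, 10.0, 2000)

# Pearson correlation coefficient and a simple linear "temperature coefficient"
corr = np.corrcoef(temp, rate)[0, 1]
slope, _ = np.polyfit(temp, rate / rate.mean(), 1)   # fractional rate change per K
print("correlation = %.2f, alpha_T = %.4f per K" % (corr, slope))
```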
We have studied weak pion production off the nucleon induced by (anti)neutrinos. The model is built by taking the contribution from non-resonant background terms and the dominant $\Delta(1232)$ resonance. We also include higher resonances (four-star in the PDG) such as $P_{11}(1440)$, $D_{13}(1520)$, $S_{11}(1535)$, $S_{11}(1650)$ and $P_{13}(1720)$, which help us to extend the model to higher invariant mass and hence to higher neutrino energies. We write the chiral Lagrangian for the neutral current (NC) for the non-resonant diagrams, which allows us to include a complete set of diagrams at lowest order. We derive the isospin relations of the NC for the resonant terms as well. The vector form factors for the resonances are obtained from the helicity amplitudes provided by MAID. For the axial couplings, we rely on PCAC and the Goldberger-Treiman relation. The cross-section results are presented and discussed for all possible channels of single pion production induced by NC interactions. We are also tuning the resonance parameters to make them consistent with the electro- and photo-production data.
In this work, we study the phenomenological effects of an eV-scale sterile neutrino in neutrinoless double beta decay and active neutrino masses and mixings. We use $A_4 \times Z_4$ discrete symmetry extension of Standard Model to develop a model of neutrino masses and mixings in 3+1 scheme within the Minimal Extended Seesaw mechanism. We consider an $A_4$ triplet right-handed neutrino N and one eV-scale sterile neutrino singlet S in which the deviation from $\mu-\tau$ symmetry in neutrino mass matrix is generated through an antisymmetric interaction of the right-handed triplet neutrino. This model successfully explains leptonic mixing with non-zero $\theta_{13}$ and the cosmological bounds on sum of active neutrino mass $\sum m_i < 0.12$ eV. The effects on neutrinoless double beta decay and baryogenesis via resonant leptogenesis are also studied and significant results are observed within the experimental bounds.
Neutrino physics gives us an opportunity to investigate new physics beyond the standard model. Recent data from the two long-baseline accelerator experiments, NO$\nu$A and T2K, appear to show some discrepancy in the standard 3-flavor scenario. Here, we explore the next generation of long-baseline neutrino experiments, T2HK and DUNE. We study the sensitivities to the non-standard interaction (NSI) couplings ($|\epsilon_{e \mu}|$, $|\epsilon_{e \tau}|$) and the corresponding CP phases ($\phi_{e \mu}$ and $\phi_{e \tau}$). While both future experiments are sensitive to NSI of the flavour-changing type arising from the $e-\mu$ and $e-\tau$ sectors, we find that DUNE is more sensitive to the NSI parameters than T2HK. In addition, we study the impact of NSI on the sensitivities to the standard CP phase $\delta_{CP}$ and the atmospheric mixing angle $\theta_{23}$ for both normal and inverted ordering. We also observe differences in the oscillation probabilities for the two experiments in the presence of NSI.
According to General Relativity (GR), spacetime is curved in the presence of matter. Einstein-Cartan theory is a simple extension of GR in which the spin of matter also affects the curvature of spacetime. This new structure of spacetime requires an affine connection which is no longer torsionless, so the connection becomes an independent variable alongside the metric. If we want to include fermions in the theory, it is convenient to introduce tetrads and spin connections. In this discussion, we describe a first-order theory of fermions under gravity in terms of tetrads and spin connections. The torsion is not included by hand; rather, the fermions themselves act as a source of torsion. On varying the action, we see that the fermionic torsion has no dynamics and gives an effective GR with a four-fermion spin-torsion interaction. Although in the literature the spin-torsion interaction is usually described as an axial current-axial current coupling, it has recently been proposed that in the most general case it could be a chiral current-chiral current coupling. One consequence of the spin-torsion chiral coupling is that it can give rise to a force on fermions which in turn can contribute to the fermion mass. We have calculated the effect of this coupling on the propagation of ultra-relativistic neutrinos through matter, and we will show the effects of the spin-torsion coupling on the neutrino oscillation probabilities.
Over the past ten years, the evidence for charmed mesons has increased rapidly and remarkably in comparison to the bottom mesons [1]. In the bottom sector, however, it is challenging to identify the broad resonance states because of large non-resonant continuum contributions. To date, the experimental groups have confirmed only ground and low-lying excited states of bottom mesons [2-4]. In the upcoming years, we hope that more experimental data may be published. The CERN-based LHCb experiment will be in a unique position for this.
Recently, the LHCb [5] measured the masses $(M)$ and decay widths ($\Gamma$) of two new states of excited $B_s$ mesons into $BK$ decay mode as
$B_{sJ}(6063)$: $(M, \Gamma) = (6063.5 \pm 1.2 \pm 0.8, 26 \pm 4 \pm 4)$ MeV,
$B_{sJ}(6114)$: $(M, \Gamma) = (6114 \pm 3 \pm 5, 66 \pm 18 \pm 21)$ MeV,
where the first uncertainty is statistical and the second systematic. Theoretical and phenomenological studies of the masses of excited $B_s$ mesons suggest that these recently discovered states could be the first orbitally excited states [6-8]. Now is a good time to carry out a detailed theoretical analysis of excited bottom mesons.
Motivated by the recent observation of the orbital excitations $B_{sJ}(6063)$ and $B_{sJ}(6114)$ by the LHCb Collaboration [5], we have carried out a systematic study of the excited $B_s$ mesons in the framework of heavy quark effective theory (HQET). Using the spin-flavour symmetry of the heavy quark and the chiral symmetry of the light quark, we exploit the flavour-independent parameters $\Delta_F^{(c)} = \Delta_F^{(b)}$ and $\lambda_F^{(c)} = \lambda_F^{(b)}$ to calculate the masses of experimentally unknown bottom mesons [9]. Moreover, their strong decay behaviour to the ground-state bottom mesons plus light pseudoscalar mesons is determined [10, 11]. We believe that the present study will not only shed light on the properties of these observed bottom mesons, but will also provide useful clues for future experimental searches for the radially and orbitally excited states.
References:
[1] P.A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020.8 (2020), p. 083C01.
[2] R. Aaij et al. (LHCb Collaboration), JHEP 2015.4 (2015), p. 1.
[3] R. Aaij et al. (LHCb Collaboration), Phys. Rev. lett. 110.15 (2013), p. 151803.
[4] T. Aaltonen et al. (CDF Collaboration), Phys. Rev. D 90.1 (2014), p. 012013.
[5] R. Aaij et al. (LHCb Collaboration), Eur. Phys. J. C 81.7 (2021), p. 601.
[6] D. Ebert, R. Faustov and V. Galkin, Eur. Phys. J. C 66.1 (2010), p.197.
[7] N. Devlani, A.K. Rai, Eur. Phys. J. A 48.7 (2012) p. 104.
[8] V. Kher, N. Devlani, A.K. Rai, Chin. Phys. C 41.9 (2017), p. 093101.
[9] P. Colangelo, F De. Fazio, F. Giannuzzi, S. Nicotri, Phys. Rev. D 86.5 (2012), p. 054024.
[10] K. Gandhi, A.K. Rai, Eur. Phys. J. C 82.9 (2022), p. 777.
[11] K. Gandhi, A.K. Rai, Eur. Phys. J. A 57.1 (2021), p. 23.
We present a study of the contributions from non-perturbative (NP) effects, which include multi-parton interactions (MPI) and hadronization effects, in the Monte Carlo (MC) event generator HERWIG7 for dijet final states in proton-proton collisions at √s = 13 TeV. As the most precise higher-order predictions of perturbative Quantum Chromodynamics (pQCD) usually do not account for such effects, these must be estimated using MC event generators when comparing to experimental data. We report the derivation of these NP effects and their uncertainties for triple- and double-differential dijet cross sections in proton-proton collisions at √s = 13 TeV with the newest release of the HERWIG7 MC event generator at leading order (LO) and next-to-leading order (NLO) with matched parton showers. For this study, around 500M events were generated with HERWIG 7.2 and analyzed using the Rivet framework. The primary sources of NP effects, hadronization and MPI, were also studied separately.
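One common way to express such NP effects is as bin-by-bin correction factors, the ratio of the cross section with MPI and hadronization switched on to that with them switched off, with the ratio uncertainty propagated assuming uncorrelated errors. The sketch below shows this arithmetic on placeholder histograms, not actual HERWIG/Rivet output.

```python
# Minimal sketch: bin-by-bin non-perturbative (NP) correction factors as the
# ratio of the dijet cross section with MPI+hadronization ON to OFF.
# The histograms below are placeholders, not actual HERWIG/Rivet output.
import numpy as np

sigma_on  = np.array([120.0, 45.0, 12.0, 3.0])   # pb per bin, NP effects on
sigma_off = np.array([110.0, 43.0, 11.8, 3.1])   # pb per bin, NP effects off
err_on  = np.array([1.2, 0.5, 0.2, 0.05])
err_off = np.array([1.1, 0.5, 0.2, 0.05])

np_corr = sigma_on / sigma_off
# uncorrelated error propagation for the ratio
np_err = np_corr * np.sqrt((err_on / sigma_on)**2 + (err_off / sigma_off)**2)
for k, (c, e) in enumerate(zip(np_corr, np_err)):
    print("bin %d: NP correction = %.3f +- %.3f" % (k, c, e))
```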
Several experiments in High Energy Physics and Neutrino Physics have used Resistive Plate Chambers (RPCs) made of bakelite electrodes for more than a couple of decades, and several future experiments may also use bakelite RPCs. Most of these experiments use bakelite electrodes coated with polymerized linseed oil on their inner surfaces; this has been a common practice for ensuring the long-term stability of the RPC modules. However, oil-coated bakelite RPCs have several problems, not only during the developmental stages but also in their performance. In the conference, we will present how a novel idea of indigenous development of oil-free bakelite RPCs has solved the problems associated with oil-coated RPCs while retaining the performance of these detectors. The development of an oil-free bakelite MRPC, along with its performance, will also be discussed.
T2HK is an upcoming long-baseline experiment which will have two water Cherenkov detector tanks of 187 kt volume each at a distance of 295 km from the source. An alternative project, T2HKK, is also under consideration, where one of the water tanks would be moved to Korea at a distance of 1100 km. The flux at 295 km covers the first oscillation maximum and the flux at 1100 km mainly covers the second oscillation maximum. Since the physics sensitivity at the two baselines depends on the interplay of statistics, systematic uncertainties, the second oscillation maximum and the matter density, 187 kt of detector volume at 295 km and 187 kt at 1100 km may not be the optimal configuration of T2HKK. In this work, we have tried to optimize the ratio of the detector volumes at the two locations by studying the interplay between the above-mentioned parameters. For the analysis of the neutrino mass hierarchy, the octant of $\theta_{23}$ and CP precision, we have considered two values of $\delta_{CP}$, 270$^\circ$ and 0$^\circ$, and for CP violation we have considered $\delta_{CP} = 270^\circ$; these values are motivated by the current best-fit values of this parameter as obtained from the T2K and NO$\nu$A experiments, respectively. Interestingly, we find that if the systematic uncertainties are negligible, then the T2HK setup, i.e. both detector tanks placed at 295 km, gives the best results in terms of hierarchy sensitivity at $\delta_{CP}=270^\circ$, octant sensitivity, CP-violation sensitivity and CP-precision sensitivity at $\delta_{CP}=0^\circ$. For the current values of the systematic errors, we find that neither the T2HK nor the T2HKK setup gives the best results for the hierarchy, CP-violation and CP-precision sensitivities; instead, an optimal detector volume in the range 255 kt to 345 kt at 1100 km gives better results for these measurements.
Since the detection of gravitational waves, their interaction with different physical systems has been of interest. We study the phenomenon of parametric resonance of abelian (U(1)) and non-abelian (SU(2)) gauge fields in the presence of an oscillatory space-time background. A momentum analysis shows that modes undergoing parametric resonance enhance small fluctuations initially present in the fields, which further results in the growth of physical observables such as the energy density, the CP-violating E$\cdot$B/$F\tilde{F}$, etc. Preliminary numerical simulations in 2+1 dimensions imply that, apart from a colour factor, the growth of the energy density in non-abelian gauge fields is similar to that in abelian gauge fields. Our results also suggest that in the early universe gravitational waves may enhance CP violation, resulting in chiral magnetic effects, enhanced instanton transitions, particle production, etc. Local resonant modes of the gauge fields, with zero CP violation initially, evolve to field configurations with non-zero CP violation.
The discovery of neutrino oscillations, i.e. the discovery of neutrino mass, and progress in other experimental observations motivate us to develop models that can address multiple beyond-Standard-Model issues and be tested using present and future experiments. One such economical model is Ma's Scotogenic model, which generates Majorana neutrino mass at the 1-loop level and includes a dark matter candidate. We present a new variation of the Scotogenic model which has an asymmetric loop contributing to the neutrino masses, unlike the other variations of the Scotogenic model. Our $Z_4$-symmetric Scotogenic model preserves the divergence cancellation of the original $Z_2$ model but generalizes the structure of the Feynman diagrams, not requiring symmetry between the right and left sides of the Feynman loop. To generate a non-vanishing contribution to the neutrino mass, we break the $Z_4$ symmetry down to $Z_2$ via a new SU(2)-singlet scalar acquiring a VEV. We further discuss lepton flavour violation, dark matter freeze-out and other phenomenology, applying the latest experimental results to constrain our model and provide a viable parameter space.
Various astrophysical objects like Neutron Stars, Magnetars, Black Holes, etc. have been extensively studied in the last few decades to predict and analyze observations of such compact objects. Despite multiple attempts, the exact nature of the matter located inside the cores of these compact objects is still an open problem in astrophysics. In this work, we attempt to find the profile of spherically symmetric non-rotating Neutron Stars by assuming that the core of the Neutron Star is made up of strange quark matter. The calculation has also been carried out with a core consisting of non-strange quark matter for comparison. We also include the impact of the magnetic field on the profile of the Neutron Stars by employing a modified Tolman-Oppenheimer-Volkoff system of equations. Using the profile thus obtained, the Neutron Star cooling rate, and the cooling rate as a function of radius, have been calculated with and without magnetic field effects using the NSCool code. Two approaches, namely a fixed value of the magnetic field and a more realistic distance-dependent magnetic field obtained from a fitted eighth-order polynomial, have been used in the current calculation. Finally, we plot the Neutron Star cooling rate with and without magnetic fields using the different equations of state, along with the corresponding observed data for a few Neutron Stars.
Non-topological solitons, of which Q-balls are particular examples, are localised solutions of field theories which have finite energies. Q-balls were first explored by Sidney Coleman back in 1985, and then extensively studied by others. Due to the non-linearity of the equations of motion, it is very difficult to obtain analytical solutions except in some limiting cases (the thin- and thick-wall limits). Recently there has been a resurgence in the study of Q-balls, initiated by the discovery of certain novel methods to obtain analytical solutions. We study in detail the astrophysical implications if such objects were to form Exotic Compact Objects (ECOs). We have derived previously unknown bounds and observational signatures for such objects.
The Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) [1] accelerator complex in Darmstadt, Germany, aims to examine the QCD phase diagram in the region of high net baryon densities using nucleus-nucleus collisions. The SIS-100 accelerator ring will produce accelerated beams in the initial phase of FAIR up to energies of about 30 GeV for protons, 12A GeV for heavy ions, and 15A GeV for light ions. The detection of di-muons created in high-energy heavy-ion collisions in the beam energy range of 4A to 12A GeV is one of the significant physics observables at SIS100. The Muon Chamber (MuCh) system [2][3] is designed to identify muon pairs produced in high-energy heavy-ion collisions in the beam energy range from 4A to 40A GeV [4]. We will report our present simulation results for the reconstruction of omega mesons in central Au+Au collisions at a beam energy of 8A GeV using the Muon Chamber (MuCh) detector system. As the signal, we considered the $\omega$ ($\omega \to \mu^+ \mu^-$) meson, generated by the PLUTO [5] event generator and embedded into background events generated with the UrQMD [6] event generator. The signal-to-background ratio (S/B) and the reconstruction efficiency ($\epsilon_\omega$) have been computed. The efficiency correction and the invariant mass spectra of the omega meson have also been determined and will be presented in detail.
References:
1. FAIR: M. Durante et. al., Phys. Scr. 94 (2019) 033001
2. MuCh detector system: Muon Chamber (MuCh) Technical Design Report (TDR),
3. O. Singh et al., CBM Progress Report 2018.
4. CBM Collaboration, Eds. S. Chattopadhyay et. al., GSI-2015-02580.
5. PLUTO: I. Frohlich et. al.,PoS ACAT (2007) 076, arXiv:0708.2382 [nucl-ex].
6. UrQMD: S.A. Bass et al., Prog. Part. Nucl. Phys. 41 (1998) 255.
In this work we examine the model dependence of the stringent constraints on the gluino mass obtained from the Large Hadron Collider (LHC) experiments by analyzing the Run II data using specific simplified models based on several ad hoc sparticle spectra which cannot be realized even in the fairly generic pMSSM models. We first revisit the bounds on the gluino mass placed by the ATLAS collaboration using the $1l + jets + E_T^{\rm miss}$ data. We show that the exclusion region in the $M_{\widetilde{g}}-M_{\widetilde{\chi}^0_1}$ plane in the pMSSM scenario sensitively depends on the mass hierarchy between the left and right squarks and composition of the lighter electroweakinos and, to a lesser extent, other parameters. Most importantly, for higgsino type lighter electroweakinos (except for the LSP), the bound on the gluino mass from this channel practically disappears. However, if such models are confronted by the ATLAS $jets + E_T^{\rm miss}$ data, fairly strong limits are regained. Thus in the pMSSM an analysis involving a small number of channels may provide more reliable mass limits. We have also performed detailed analyses on neutralino dark matter (DM) constraints in the models we have studied and have found that for a significant range of LSP masses, the relic density constraints from the WMAP/PLANCK data are satisfied and LSP-gluino coannihilation plays an important role in relic density production. We have also checked the simultaneous compatibility of the models studied here with the direct DM detection, and the LHC constraints.
In this work, we find the bounds on the Dirac CP phase which are consistent with Dark Matter (DM) and neutrinoless double beta ($0\nu\beta\beta$) decay in the constrained scenario of hybrid textures of the neutrino mass matrix. In our previous work, we obtained a connection between $0\nu\beta\beta$ decay and DM. As a result, we get six hybrid textures which reproduce the correct low-energy phenomenology. We further note that, out of these six hybrid textures, one is disallowed by the bounds on the relic density of DM. Therefore, in total five hybrid textures satisfy both the low-energy phenomenology and the bounds on the relic density of DM. After numerical analysis, we find an interesting parameter space between the dark matter mass and the Dirac CP phase, and thus obtain the Dirac CP phase in the range $107.7-338.5$ degrees.
In the Standard Model (SM), the physics of charm mesons is not expected to have new physics (NP) discovery potential because the relevant CKM matrix elements $V_{cs}$ and $V_{cd}$ are well known, and CP asymmetries and $D^{0}$-$\bar{D^0}$ oscillations are small. It has been pointed out that the c$\rightarrow$ u$\gamma$ decays might have some contributions from non-minimal supersymmetry, which is an NP scenario. It was suggested that NP would result in a deviation of $R_{\rho/\omega}$. From earlier studies, we noticed that $R_{\rho/\omega}$ could be violated already in the SM framework, while a similar relation for $D_{s}^{+}$ radiative decays offers a much better test of c$\rightarrow$ u$\gamma$. Further radiative $D_s$ decays, such as $D_{s}^{+}$ $\rightarrow$ $K^{*+}\gamma$ and $D_{s}^{+}$ $\rightarrow$ $\rho^+$ $\gamma$, have not been observed yet. Here we present a study of radiative $D_s$ decays and aim to measure the branching fractions using the Belle data. The analyses are based on the full data set recorded by the Belle detector at the $\Upsilon($4S) resonance, containing 772 million $B\bar{B}$ pairs from $e^{+}e^{−}$ collisions produced by the KEKB collider. This is also the first measurement in these decay modes.
LHC Run-I and Run-II data highly constrain the masses of electroweakinos in R-parity conserving (RPC) scenarios through various final states usually associated with large missing energy. In R-parity violating (RPV) scenarios the situation may differ, depending on the various RPV decay modes of the lightest supersymmetric particle. The trilinear RPV coupling term ($\lambda_{ijk}L_i L_j e_k^c$) allows the lightest supersymmetric particle, the neutralino, to decay into two charged leptons (electrons and muons are considered) and one neutrino. For our analysis, we choose the decay channel having at least four charged leptons in the final state. In this work, we look for the projected reach of direct searches at the High Luminosity LHC (HL-LHC, operating at $\sqrt{s}=14$ TeV, $\mathcal{L}=3000\ fb^{-1}$). We probe the projected exclusion and discovery range at the HL-LHC using cut-based analysis as well as machine-learning-based analysis.
The lack of information about the era before Big Bang Nucleosynthesis (BBN) allows us to assume the presence of a new species $\phi$ whose energy density redshifts as $a^{-(4+n)}$, where $n>0$ and $a$ is the scale factor. In this non-standard cosmological setup, we have considered a $U(1)_{L_\mu-L_\tau}$ $\otimes U(1)_X$ gauge extension of the Standard Model (SM) and studied different phases of the cosmological evolution of a thermally decoupled dark sector, such as leak-in, freeze-in, reannihilation, and late-time annihilation. This non-standard cosmological setup facilitates a larger portal coupling $(\epsilon)$ between the dark and the visible sectors even when the two sectors are not in thermal equilibrium. The dark sector couples with the $\mu$ and $\tau$ flavored leptons of the SM due to the tree-level kinetic mixing between the $U(1)_X$ and $U(1)_{L_\mu-L_\tau}$ gauge bosons. We show that in our scenario it is possible to reconcile the dark matter relic density and the muon $(g-2)$ anomaly. In particular, we show that for $3\times 10^{-4} ≲ \epsilon ≲ 10^{-3}$, $30{\rm MeV} ≲ m_{Z^\prime} ≲ 300{\rm MeV}$, $n=4$, and $1{\rm TeV} ≲ m_\chi ≲ 10{\rm TeV}$, the relic density constraint on dark matter, the constraint from the muon $(g-2)$ anomaly, and other cosmological and astrophysical constraints are satisfied.
A deconfined state of quarks and gluons, the quark-gluon plasma (QGP), is created in relativistic heavy-ion collisions at the LHC at CERN and at RHIC at BNL, and a phase transition to hadronic matter is expected to occur at a temperature of around 160 MeV. This extreme state of matter is also believed to have existed shortly after the Big Bang. In recent times, non-central heavy-ion collisions have been attracting more attention: in non-central collisions, a high magnetic field is created in the direction perpendicular to the reaction plane. Owing to this, magnetohydrodynamics is also being developed to study phenomenological aspects of heavy-ion collisions, and it involves transport coefficients such as the viscosity and conductivity of RHIC/LHC matter in the presence of a magnetic field. The transport coefficients are used as input parameters for hydrodynamic simulations. In the presence of the magnetic field, the charged particles are affected and the system becomes anisotropic; therefore, the transport coefficients also become anisotropic.
In this work, we have calculated transport coefficients of hadronic matter at finite temperature and magnetic field using the linear sigma model (LSM). The shear viscosity over entropy density ($\eta/s$) is estimated in the relaxation time approximation. Point-like interaction rates of hadrons are evaluated in the presence of the magnetic field to calculate the temperature- and magnetic-field-dependent relaxation time. We consider only temperature-dependent masses coming from mean-field effects. The value of the viscosity over entropy density is lower in the presence of the magnetic field than in a purely thermal medium. $\eta/s$ has a minimum near the crossover transition, both in the presence and in the absence of the magnetic field.
The multi-particle scattering amplitudes in gauge theories are plagued with infrared singularities. A basic understanding of these singularities is essential and has been the focal point for many decades of theoretical study. The soft function collects all the soft singularities of the multi-parton amplitude. As a result of factorization, the soft function can be isolated from the entire amplitude and studied separately. The renormalization properties of the soft function allow us to write it in terms of the finite soft anomalous dimension as $S=\text{exp}\bigg[-\frac{1}{2} \int_{\mu^2}^{\infty} \frac{d \lambda^2}{\lambda^2}\ \Gamma^s (\alpha_s(\lambda^2,\epsilon)) \bigg]$. The diagrammatic approach involves the concept of Webs which are the set of Feynman diagrams that enter the soft function's exponent. Webs can be used to unravel the structure of the soft anomalous dimension at different orders of the perturbation theory. The kinematic and the corresponding color factors of webs mix via the web mixing matrix. In this talk, we extend the earlier studies done at four loops in JHEP05(2020)128 to five loops. We present our results at five loops to help understand the structure of soft anomalous dimension appearing at five loops. We have computed the mixing matrices and the exponentiated color factors of all the cwebs appearing at five loops and six lines. Our study will form an essential step toward understanding the structure of soft anomalous dimension.
Recently, in 2020, the LHCb Collaboration reported the discovery of four extremely narrow excited $\Omega_{b}^{-}$ states, $\Omega_{b}(6316)^{-}$, $\Omega_{b}(6330)^{-}$, $\Omega_{b}(6340)^{-}$ and $\Omega_{b}(6350)^{-}$, decaying into $\Xi_{b}^{0}K^{-}$ [1]. Experimentally, only the ground state $\Omega_{b}^{-}$ had been observed, with quantum numbers $J^{P}$ = $\frac{1}{2}^{+}$, where $J$ is the total spin and $P$ denotes the parity. The latest review article of the Particle Data Group (PDG) [2] reports the world-average masses of these recently observed excited states of the $\Omega_{b}^{-}$ baryon, but their $J^{P}$ values are still missing. In the present work, we systematically study the mass spectra of the $\Omega_{b}^{-}$ baryon and try to assign possible spin-parities to these experimentally observed states. Regge phenomenology with the assumption of linear Regge trajectories has been employed, and relations between the Regge slopes, intercepts, and baryon masses have been extracted [3-6]. With the aid of these relations, the ground-state masses are obtained for the $\Omega_{b}^{-}$ baryon. Further, the Regge slopes are extracted in the $(J,M^{2})$ plane to obtain the orbitally excited state masses. Similarly, the values of the Regge parameters are calculated in the $(n,M^{2})$ plane for each Regge line, and the radially excited state masses lying on that Regge trajectory are estimated. The obtained results are in good agreement with the experimental observations, where available, and close to the predictions of various theoretical approaches. Our results suggest that all four newly observed excited states belong to $1P$ states having negative parity. The obtained mass relations and mass predictions could provide useful information for future experimental searches and for the spin-parity assignment of these states.
References
[1] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 124, 082002 (2020).
[2] R. L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
[3] J. Oudichhya, K. Gandhi, and A. K. Rai, Phys. Rev. D 104, 114027 (2021).
[4] J. Oudichhya, K. Gandhi, and A. K. Rai, Phys. Rev. D 103, 114030 (2021).
[5] J. Oudichhya, K. Gandhi, and A. K. Rai, Phys. Scr. 97, 054001 (2022).
[6] J. Oudichhya, K. Gandhi, and A. K. Rai, arXiv:2204.09257v1 [hep-ph] (2022).
Strangeness enhancement in high-energy heavy-ion collisions remains a key signature for identifying the formation of the Quark-Gluon Plasma (QGP) in such collisions. The study of strange hadrons and resonances may provide valuable information about the strongly interacting matter produced in heavy-ion collisions. In particular, the resonance particles are important because their lifetimes (a few $f\rm m/c$) are comparable to the medium lifetime and, due to the rescattering and regeneration processes at the freeze-outs, the yields of the resonances may vary with respect to those of the non-resonance particles. Recent studies of small collision systems at the Large Hadron Collider (LHC) show unambiguous similarities in hadron production between high-multiplicity $pp$, $pPb$ collisions and $PbPb$ collisions. The strangeness enhancement and the ratios of yields of identified hadrons play an important role in characterizing the LHC data in different collision systems.
In this contribution, we investigate the strange hadron and resonance yields using EPOS3, a pQCD-inspired model based on a multiple-parton-scattering approach that includes the hydrodynamical evolution of the produced particles. The results for the yield ratios of identified hadrons will be presented for $pp$, $pPb$ and $PbPb$ collisions at various LHC energies, exploiting the model parameters to understand the sensitivity to the microscopic mechanism of hadron production.
We determine the properties of the 1P states of charmonium and bottomonium in the presence of a baryonic chemical potential using a quasi-particle approach. We employ the medium-modified form of the Cornell potential, which has both a Coulombic and a string part; this enables us to study the properties of heavy quarkonia even above the critical temperature. Using the quasi-particle approach with a baryonic chemical potential, we study the binding energies and the dissociation temperatures of the 1P states of quarkonia. The mass spectra of the 1P states of quarkonia are also calculated in the presence of the baryonic chemical potential.
As the strength of the magnetic field ($B$) becomes weak, novel phenomena, similar to the Hall effect in condensed matter physics, emerge both in charge and heat transport in a thermal QCD medium with a finite quark chemical potential ($\mu$). We have therefore calculated the transport coefficients in kinetic theory within a quasiparticle framework, wherein we compute the effective mass of quarks for the aforesaid medium in the weak magnetic field limit ($|eB| < T^2$) from perturbative thermal QCD up to one loop; the effective mass depends on $T$ and $B$ differently for the left- ($L$) and right-handed ($R$) chiral modes of quarks, lifting the degeneracy of the $L$ and $R$ modes prevalent in the strong magnetic field limit ($|eB| \gg T^2$). Another implication of weak $B$ is that the transport coefficients assume a tensorial structure: the diagonal elements represent the usual (electrical and thermal) conductivities, $\sigma_{\text{Ohmic}}$ and $\kappa_0$, as the coefficients of charge and heat transport, respectively, and the off-diagonal elements denote their Hall counterparts, $\sigma_{\text{Hall}}$ and $\kappa_1$, respectively. It is found in charge transport that the magnetic field acts on the $L$- and $R$-modes of the Ohmic part of the electrical conductivity in opposite manner, viz. $\sigma_{\text{Ohmic}}$ for the $L$-mode decreases and for the $R$-mode increases with $B$, whereas the Hall part $\sigma_{\text{Hall}}$ for both $L$- and $R$-modes always increases with $B$. In heat transport too, the effect of the magnetic field on the usual thermal conductivity ($\kappa_0$) and the Hall-type coefficient ($\kappa_1$) in both modes is identical to the above-mentioned effect of $B$ on the charge transport coefficients. We have then derived further coefficients from the above transport coefficients, namely the Knudsen number ($\Omega$, the ratio of the mean free path to the length scale of the system) and the Lorenz number in the Wiedemann-Franz law. The effect of $B$ on $\Omega$, either with $\kappa_0$ or with $\kappa_1$, for both modes is identical to the behaviour of $\kappa_0$ and $\kappa_1$ with $B$. The value of $\Omega$ is always less than unity for the entire temperature range, validating our calculations. The Lorenz number ($\kappa_0/\sigma_{\text{Ohmic}}T$) and the Hall-Lorenz number ($\kappa_1/\sigma_{\text{Hall}}T$) for the $L$-mode decrease and for the $R$-mode increase with the magnetic field. They also do not remain constant with $T$, except for the $R$-mode Hall-Lorenz number, which remains almost constant for smaller values of $B$.
In this work we have carried out a study of the Jarlskog-like parameter of the neutrino mass matrix with two constraint conditions, i.e. one vanishing minor and a vanishing sum of the neutrino mass eigenvalues. Out of the six possible cases of a neutrino mass matrix with one vanishing minor, we have studied only three cases, i.e. $C_{11}=0$, $C_{12}=0$ and $C_{13}=0$, by imposing the zero-sum condition on the neutrino mass eigenvalues in the flavor basis. For each case, the dependence of the Jarlskog-like parameter $\nu_{\alpha\beta}^{ij}$ on the Dirac CP phase $(\delta)$ and the Majorana CP phases ($\alpha$ and $\beta$) has been studied by plotting $\nu_{\alpha\beta}^{ij}$ over the constrained ranges of $\delta$, $\alpha$ and $\beta$ for which the cases are viable within the $3\sigma$ range of the current neutrino oscillation data.
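As a simpler orientation for the phase dependence involved, the sketch below evaluates the standard Jarlskog invariant $J_{CP}$ from the PDG parametrization of the PMNS matrix as a function of $\delta$; this is only an illustration and is not the constrained Jarlskog-like parameter $\nu_{\alpha\beta}^{ij}$ studied above.

```python
# Illustration only: the standard Jarlskog invariant J_CP from the PDG
# parametrization of the PMNS matrix, showing the dependence on delta.
# (This is not the constrained Jarlskog-like parameter studied above.)
import numpy as np

def jarlskog(theta12, theta13, theta23, delta):
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    return s12 * c12 * s23 * c23 * s13 * c13**2 * np.sin(delta)

# Angles roughly at their current best-fit values (radians), delta scanned
th12, th13, th23 = 0.59, 0.15, 0.84
for delta_deg in (0, 90, 180, 270):
    print(delta_deg, jarlskog(th12, th13, th23, np.radians(delta_deg)))
```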
The non-relativistic hypercentral Constituent Quark Model (hCQM) has been employed to study baryons from the light and strange sectors up to the heavy ones. The approach is based on parametrizing the quark dynamics through the constituent quark mass. The potential is chosen to depend solely on the hyperradius $x$ of the Jacobi coordinates, so that three-body effects are incorporated through this reduced coordinate. The present study focuses on the light and strange baryons from $N$ to $\Omega$. The mass spectra obtained using the hCQM with a linear confining potential, together with spin-dependent terms and correction factors accounting for hyperfine splitting, provide a range of states. The least experimentally explored strange baryons thus have predicted masses to be searched for at future experimental facilities, notably PANDA at FAIR-GSI. Linear Regge trajectories are observed in the $(n, M^{2})$ and $(J, M^{2})$ planes, and magnetic moments for the ground and other possible mixed states have been calculated [1-3]. The spectra open the door to various decay channels, and electromagnetic decays have been studied; a few strong decay channels for light and strange baryons are also calculated. Some of the results agree well with the PDG masses [4], while others point to possible modifications of the potential model and invite comparison with a variety of theoretical and phenomenological approaches. It is noteworthy that some long-debated baryon states, such as $\Lambda(1405)$ and $\Omega(2012)$, whose exact nature is not yet clear, are not all reproduced by this model. More experimental states in the coming years will therefore sharpen the understanding and help refine the phenomenological approach.
References:
[1] C. Menapara and A. K. Rai, Chin. Phys. C 46, 103102 (2022); Chin. Phys. C 45, 063108 (2021); EPJ Web of Conferences 258, 03004 (2022).
[2] C. Menapara, Z. Shah and A. K. Rai, Chin. Phys. C 45, 023102 (2021).
[3] Z. Shah, K. Gandhi and A. K. Rai, Chin. Phys. C 43, 024106 (2019).
[4] R. L. Workman et al. [Particle Data Group], Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
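As a minimal sketch of the hCQM framework described in the abstract above (parameter values and the specific correction terms of Refs. [1-3] are not reproduced; the forms below are generic assumptions), the hypercentral potential and the linear Regge behaviour take the schematic forms
$$ V(x) \;=\; -\frac{\tau}{x} + \beta x + V_{\text{spin-dep.}}(x), \qquad M^{2} \;=\; a\,n + b, \qquad M^{2} \;=\; a'\,J + b', $$
where $x$ is the hyperradius, $\tau$ and $\beta$ set the hyper-Coulomb and linear-confinement strengths, and $a$, $a'$, $b$, $b'$ are fitted slopes and intercepts of the Regge trajectories.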
Supernova neutrinos are weakly interacting particles produced when a massive star collapses to form a compact object, releasing about 99% of the gravitational binding energy of the remnant as neutrinos with energies of a few tens of MeV over a few tens of seconds. They were observed from the 1987A core-collapse supernova (SN1987A) in the Large Magellanic Cloud (LMC), 50 kpc from Earth. Detection capabilities have increased by orders of magnitude since 1987, and the next observation of a core-collapse supernova will provide a great deal of information for both physics and astrophysics. SNOwGLoBES (SuperNova Observatories with GLoBES) is a software package for computing interaction rates and distributions of observed quantities for supernova burst neutrinos in common detector materials, allowing many more events than before to be analysed and the associated neutrino oscillations to be studied in greater depth. A study is carried out in SNOwGLoBES to determine the flux parameters more accurately with a parameter-fit algorithm, using different cross-section models.
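As a rough illustration of what such a parameter fit involves (a generic sketch, not the SNOwGLoBES interface; the spectrum model, binning, response and parameter names are assumptions made for illustration), one can minimise a chi-square between observed and predicted binned event counts over the pinched-spectrum flux parameters:

import numpy as np
from scipy.optimize import minimize

def pinched_spectrum(E, e_tot, e_avg, alpha):
    # Generic "pinched" thermal spectrum, phi(E) ~ E^alpha exp[-(alpha+1) E/<E>],
    # normalised so the total energy carried on this grid equals e_tot (arb. units).
    phi = E**alpha * np.exp(-(alpha + 1.0) * E / e_avg)
    return e_tot * phi / np.sum(E * phi)

def predicted_counts(params, e_bins, response):
    # Fold the assumed flux with a toy detector response (cross section x efficiency)
    # to obtain the expected number of events per energy bin.
    e_tot, e_avg, alpha = params
    e_mid = 0.5 * (e_bins[:-1] + e_bins[1:])
    return pinched_spectrum(e_mid, e_tot, e_avg, alpha) * response

def chi2(params, observed, e_bins, response):
    expected = predicted_counts(params, e_bins, response)
    return np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))

# Hypothetical inputs: energy bins in MeV, a flat toy response, pseudo-observed counts.
e_bins = np.linspace(5.0, 60.0, 23)
response = np.full(len(e_bins) - 1, 100.0)
observed = predicted_counts((5.0, 15.0, 2.5), e_bins, response)

fit = minimize(chi2, x0=(1.0, 12.0, 2.0), args=(observed, e_bins, response),
               method="Nelder-Mead")
print("Best-fit (total energy, <E>, alpha):", fit.x)

In practice the cross-section model enters through the detector response, so repeating the fit with different cross-section choices probes how strongly the recovered flux parameters depend on that input.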
The construction of a cosmic muon veto detector (CMVD) is in progress to shield the mini-Iron Calorimeter detector at IICHEP, Madurai. The goal of the CMVD is to study the feasibility of a shallow-depth (100 m) neutrino experiment. The estimated reduction in cosmic muon flux at a depth of 1 km is a factor of $10^{6}$. A reduction of the same order at a shallow depth (100 m) is possible only if the cosmic muon veto detector achieves a veto efficiency of more than 99.99$\%$ and a fake-veto rate of less than $10^{-5}$.
The CMVD will consist of $\sim$4.5 m long extruded plastic scintillators (EPSes), wavelength-shifting (WLS) fibres to collect the scintillation photons, and silicon photomultipliers (SiPMs) for the readout. A total of 760 EPSes will be required to build the CMVD. Two EPSes are glued together to make one unit, called a di-counter. It is essential to test all the components of the CMVD (di-counters, SiPMs and readout electronics) before installation in order to achieve the required veto efficiency. To test the di-counters, a cosmic muon coincidence setup has been made using three additional di-counters that generate a trigger on a cosmic muon trajectory. DRS modules are used to record the muon signals from the di-counters under test. This paper covers the details of the experimental setup and the test results for all the tested di-counters.
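For reference, the two figures of merit quoted above can be defined schematically as (the exact counting windows used in the measurement are not specified here and are an assumption)
$$ \varepsilon_{\text{veto}} \;=\; \frac{N_{\text{muons tagged by the CMVD}}}{N_{\text{muons in the coincidence trigger}}} \;>\; 0.9999, \qquad r_{\text{fake}} \;=\; \frac{N_{\text{veto signals with no triggered muon}}}{N_{\text{trigger windows}}} \;<\; 10^{-5}, $$
which is why per-channel testing of the di-counters, SiPMs and readout is required before installation.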
Transport properties act as crucial probes of the QCD matter produced in ultrarelativistic heavy-ion collisions. Their dependence on quantities such as temperature and chemical potential can help locate the phase-transition boundary in the QCD phase diagram. In this work, we study the thermal conductivity, the electrical conductivity, and their corresponding diffusivities in a hadron resonance gas with van der Waals (VDW) interactions. Both attractive and repulsive interactions in the meson-meson and (anti)baryon-(anti)baryon sectors are taken into account within the model. The dissipative parameters are calculated from the Boltzmann transport equation (BTE) in the relaxation time approximation (RTA). The effects of temperature and baryochemical potential on the conductivities are studied and compared with several existing theoretical models. Indications of a possible first-order liquid-gas phase transition are seen at higher baryochemical potential and low temperature. Finally, we estimate the diffusivities in the hadronic medium, which decrease with temperature and baryochemical potential.
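As a minimal sketch of the RTA machinery referred to above (degeneracies, species labels and the form of the relaxation time are generic assumptions, not the specific choices of this work), the electrical conductivity of the hadron gas takes the schematic kinetic-theory form
$$ \sigma_{\text{el}} \;=\; \frac{1}{3T}\sum_a g_a\, q_a^{2} \int \frac{d^{3}p}{(2\pi)^{3}}\, \frac{p^{2}}{E_a^{2}}\, \tau_a\, f_a^{0}\!\left(1 \pm f_a^{0}\right), $$
where the sum runs over hadronic species with charge $q_a$, degeneracy $g_a$, equilibrium distribution $f_a^{0}$ and relaxation time $\tau_a$; an analogous expression gives the thermal conductivity, and the VDW interactions enter through the modified equilibrium distributions and relaxation times.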
Lorentz invariance is a fundamental symmetry, serving as a pillar of widely accepted theories such as quantum field theory and Einstein's theory of relativity, and it has deep connections with the combined charge, parity, and time-reversal (CPT) symmetry. The search for Lorentz invariance violation (LIV) has attracted growing attention in recent years because many beyond-Standard-Model theories (stochastic space-time foam, loop quantum gravity, string theory, etc.) predict LIV at high energies.
We adopt a non-isotropic model to study the LIV parameters. Its key features are sidereal variations in the oscillation probabilities, arising from the breakdown of rotational symmetry, and CPT asymmetries between the neutrino and antineutrino modes. The sidereal effect originates from the direction dependence of the evolution of the neutrino state during propagation. In this work, we compare the impact of LIV in different long-baseline neutrino experiments. We have also investigated the effect of the non-isotropic LIV parameters on the appearance and disappearance channels in a time-independent analysis.
We present new constraints on the LIV parameters using the event information for the $\nu_\mu$ disappearance channel and the $\nu_e$ appearance channel. The sensitivities of the NOνA and T2K experiments to the LIV parameters are also projected.
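As a schematic illustration of the framework (in the spirit of the standard-model-extension parametrization; the precise coefficient set and normalization used in the analysis are not listed here and are an assumption), the neutrino evolution is governed by an effective Hamiltonian with an LIV piece whose non-isotropic coefficients depend on the propagation direction:
$$ H \;=\; \frac{1}{2E}\,U\,\mathrm{diag}\!\left(0,\ \Delta m^{2}_{21},\ \Delta m^{2}_{31}\right)U^{\dagger} \;+\; V_{\text{matter}} \;+\; H_{\text{LIV}}, \qquad \left(H_{\text{LIV}}\right)_{\alpha\beta} \;\sim\; a^{\mu}_{\alpha\beta}\,\hat{p}_{\mu} \;-\; \tfrac{4}{3}\,E\,c^{\mu\nu}_{\alpha\beta}\,\hat{p}_{\mu}\hat{p}_{\nu}, $$
where $\hat{p}_{\mu}$ is the neutrino propagation direction in a Sun-centred frame; for a fixed baseline this direction rotates with the Earth, so the non-isotropic coefficients imprint a periodic dependence on the sidereal time.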
We study the effect of NLO QCD corrections matched to parton showers on the distributions of observables associated with top quark polarization, and examine the prospects of identifying the origin of a scalar leptoquark through its pair production at the LHC. We study various angular and energy variables at NLO+PS accuracy to distinguish scalar leptoquarks originating from different models, and perform a multivariate analysis with jet-substructure variables to reach a higher discovery potential.
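For orientation (a textbook relation rather than a result of this study), the angular variables referred to above trace the top polarization through the decay-lepton angle in the top rest frame,
$$ \frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_{\ell}} \;=\; \frac{1}{2}\left(1 + P_t\,\kappa_{\ell}\,\cos\theta_{\ell}\right), $$
where $P_t$ is the degree of top polarization, $\kappa_{\ell}\simeq 1$ is the spin-analysing power of the charged lepton, and $\theta_{\ell}$ is measured with respect to the chosen spin quantization axis; leptoquarks with different chiral couplings produce tops with different $P_t$.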
Various observables in the muon sector have shown persistent deviations from the Standard Model (SM) predictions. The muon's anomalous magnetic dipole moment measured by Fermilab shows a deviation of $4.2\sigma$. Significant anomalies have also been observed in the semileptonic $B$-meson decay observable $R_{K^{(*)}}$. These anomalies may not be independent and could be manifestations of the same beyond-the-SM theory. Models with TeV-scale leptoquarks (LQ) are suitable candidates for explaining these anomalies. In this talk, we look at various minimal coupling scenarios of different vector LQs motivated by these anomalies. Among the many possibilities, we find that only a few scenarios can explain the anomalies. We also discuss the bounds on the parameter space from the LHC and compare them with the constraints from the muon anomalies.
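For reference (standard definitions, not specific to this talk), the two anomaly-sensitive quantities discussed above are
$$ a_{\mu} \;=\; \frac{g_{\mu}-2}{2}, \qquad R_{K^{(*)}} \;=\; \frac{\mathcal{B}\!\left(B \to K^{(*)}\,\mu^{+}\mu^{-}\right)}{\mathcal{B}\!\left(B \to K^{(*)}\,e^{+}e^{-}\right)}, $$
with $R_{K^{(*)}}$ measured in specified bins of the dilepton invariant mass squared $q^{2}$.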
Please download the following proceedings material and use it to prepare your proceedings contribution.
Plenary speaker: 6 pages
Mini-review: 5 pages
Parallel speaker: 4 pages
Posters: 2 pages