Cosmology is today one of the most important frontiers of physics. The mystery of dark matter and the puzzle of dark energy remain outstanding. On the observational side, accurate and important data are growing exponentially, and will help establish the new theories that are needed.
This conference will bring together researchers from the observational, computer-simulation and theoretical sides of cosmology, astrophysics and astroparticle physics, to discuss the current situation as well as prospects for future improvements.
All topics will be approached from different angles, starting from the booming flow of observational data. The topics to be discussed will include:
A number of invited talks by leading scientists, as well as contributed talks from participants, will be presented.
The conference will take place in person on the Miramare campus, in a conducive atmosphere, appropriate for the dramatic paradigm shift that we may be witnessing in these days.
The conference will begin on the morning of Monday and end before lunch on Saturday.
The conference fee is 200€ for both contributing speakers and attendees. It is also possible to attend remotely, with a fee of 30€. Applicants must provide the requested information in the registration form in order to receive the payment instructions.
There will be a Special Issue of the peer-reviewed MDPI journal "Physics" related to this Conference. In this regard, participants can submit an original work/review/letter connected with their invited/contributed talk/accepted poster.
Scientific Organizing Committee: TBA
Local Organizing Committee: TBA
Euclid is an ESA space mission that aims to investigate the dark Universe. Over the six years of nominal survey operations, Euclid will observe billions of galaxies, probing the Universe's large-scale structure out to 10 billion light-years, covering a third of the celestial sphere. Employing Weak Gravitational Lensing and Galaxy Clustering probes, Euclid aims to detect the signatures of dark matter and dark energy. This mission constitutes an extraordinary scientific and engineering effort, culminating in the recently published first test images. This preliminary information shows Euclid's potential to achieve its ambitious goals, paving the way for the science-ready high-quality data that will soon follow. In this talk, I will provide an overview of the scientific objectives of the mission and its design, along with a description of the strategies to leverage the vast amount of data that lies ahead.
In the upcoming decade, scientists will face significant new challenges in cosmology: experiments will observe our Universe, collecting an unprecedented amount of data that must be analyzed with an extraordinary level of precision in order to efficiently extract valuable information for data-driven discoveries. Cosmology therefore represents the perfect playground for the application of machine learning techniques, which are indeed undergoing impressive growth in this field. In this talk I will summarize some of these applications and the most promising results obtained so far, with a particular focus on the use of Neural Networks in the study of the Cosmic Microwave Background.
The BICEP/Keck experiments are compact refracting telescopes mapping the polarization of the Cosmic Microwave Background (CMB) from the South Pole in Antarctica. The primary goal is to detect or set limits on primordial gravitational waves by observing B-modes of the polarization pattern. Recently the BK18 results have been released, which include all data taken up to and including the 2018 observing season. The new 95 GHz map from BICEP3 now reaches an equal depth to the previous 150 GHz map from BICEP2/Keck, and large amounts of new 220 GHz data from Keck achieve a higher signal-to-noise on dust than the Planck 353 GHz channel. A multicomponent fit to the cross-spectral data remains an adequate description and gives a limit on the tensor-to-scalar ratio of r<0.036 (95%), with no priors taken from other regions of sky. Running a maximum-likelihood search on simulations, we obtain unbiased results and find sigma(r) = 0.009. I will discuss the BK18 data and analysis, as well as the ongoing program, including delensing in conjunction with SPT.
The Simons Observatory (SO) is a new cosmic microwave background experiment being built on Cerro Toco in Chile, due to begin observations in 2024. SO will measure the temperature and polarization anisotropy of the cosmic microwave background in six frequency bands, from 27 to 280 GHz. The initial configuration of SO will have three small-aperture 0.5-m telescopes (SATs) and one large-aperture 6-m telescope (LAT), with a total of 60,000 cryogenic bolometers. Our key science goals are to characterize the primordial perturbations, measure the number of relativistic species and the mass of neutrinos, test for deviations from a cosmological constant, improve our understanding of galaxy evolution, and constrain the duration of reionization. The SATs will target the largest angular scales observable from Chile, mapping ~10% of the sky to a white noise level of 2 μK-arcmin in combined 93 and 145 GHz bands, to measure the primordial tensor-to-scalar ratio, r, at a target level of σ(r)=0.003. The LAT will map ~40% of the sky at arcminute angular resolution to an expected white noise level of 6 μK-arcmin in combined 93 and 145 GHz bands, overlapping with the majority of the LSST sky region and partially with DESI. With up to an order of magnitude lower polarization noise than maps from the Planck satellite, the high-resolution sky maps will constrain cosmological parameters derived from the damping tail, gravitational lensing of the microwave background, the primordial bispectrum, and the thermal and kinematic Sunyaev-Zel'dovich effects, and will aid in delensing the large-angle polarization signal to measure the tensor-to-scalar ratio. The survey will also provide a legacy catalog of 16,000 galaxy clusters and more than 20,000 extragalactic sources. This talk will present the science goals of SO and the recently approved plans to upgrade the current design (SO:UK, SO:JP and Advanced SO).
Imaging Atmospheric Cherenkov Telescopes measure cosmic gamma radiation at very high energies (VHE), between 20 GeV and about 100 TeV. While most of the observations are devoted to studying astrophysical processes in the sources of the gamma-ray emission (supernova remnants, pulsar wind nebulae, gamma-ray binaries, active galactic nuclei, gamma-ray bursts, etc.), distant gamma rays can also be used to test cosmological models. In particular, the cosmic background low-energy photon field (called the Extragalactic Background Light, EBL) causes an energy-dependent absorption feature in the observed spectra of VHE gamma-ray sources. Alternatively, if the density of the EBL is known, the absorption feature can be used to determine the Hubble constant H0. Moreover, the temporal and spatial emission recorded from distant blazars is sensitive to intergalactic magnetic fields in the voids. In this presentation I explain the techniques used, the results obtained so far, and give an outlook on future prospects.
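For orientation, the EBL absorption feature mentioned above is usually described by the standard attenuation relation (a textbook expression, not specific to this talk):
$$ F_{\rm obs}(E, z) = F_{\rm int}(E)\, e^{-\tau_{\gamma\gamma}(E, z)}, $$
where the optical depth $\tau_{\gamma\gamma}$ follows from the EBL photon density integrated along the line of sight; since the line-of-sight distance scales as $1/H_0$, a known EBL density turns a measured $\tau_{\gamma\gamma}$ into a constraint on $H_0$.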
Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) for this region, the majority of observations favor variable profiles with a core in the center. We investigate the convergence of standard N-body simulations, especially in the cusp region. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations throughout the whole halo, revealing an effective interaction between the test bodies. This result casts doubt on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analytical analysis, based on the Fokker-Planck approach, confirms the numerical results and also suggests that the UDPs generally found in cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of N-body simulation convergence is necessary before the 'core-cusp problem' can properly be used to question the validity of the CDM model.
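For reference, the Hernquist (1990) model simulated in this work has the standard density profile
$$ \rho(r) = \frac{M}{2\pi}\, \frac{a}{r\,(r+a)^{3}}, $$
with total mass $M$ and scale radius $a$, so that $\rho \propto r^{-1}$ in the inner "cusp" region discussed above and $\rho \propto r^{-4}$ at large radii.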
The distribution of dark matter within halos can reveal key information about cosmology. Therefore, making density profiles increasingly precise is fundamental for the study of dark matter. We introduce a new dynamics-based method to calculate dark matter density profiles from halo simulations. Each particle in a snapshot is 'smeared' over its orbit to obtain a profile which is averaged over a dynamical time. The profiles calculated using this technique are in very good agreement with the traditional 'binned' estimates and show a significant reduction in Poisson noise for the same number of particles. Including information about the dynamics of the particles also allows for a more precise calculation of the gravitational potential down to the softening length of the simulation. This, in turn, makes it possible to extrapolate the shape of the dynamical density profiles to very small radii, which shows promising results when compared to a higher-resolution version of the same snapshot. (C. Muni, A. Pontzen, et al., in prep.)
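A minimal sketch of the idea, assuming only a conventional shell-binned estimate and a hypothetical set of time-uniform orbit samples for each particle (this is an illustration, not the authors' pipeline):

```python
import numpy as np

def binned_density_profile(r, m_part, r_edges):
    """Conventional 'binned' estimate: particle mass in each spherical shell
    divided by the shell volume."""
    mass, _ = np.histogram(r, bins=r_edges, weights=np.full_like(r, m_part))
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
    return mass / shell_vol

def orbit_smeared_profile(r_orbit, m_part, r_edges):
    """Hypothetical 'smeared' estimate: each particle contributes at every
    radius it visits along its orbit in the fixed potential, with weight
    proportional to the time spent there. r_orbit has shape
    (n_particles, n_time_samples) and is sampled uniformly in time."""
    w = np.full(r_orbit.size, m_part / r_orbit.shape[1])
    mass, _ = np.histogram(r_orbit.ravel(), bins=r_edges, weights=w)
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
    return mass / shell_vol

# Toy usage with random radii (a real application would take r and r_orbit
# from the simulation snapshot and from orbit integration, respectively):
rng = np.random.default_rng(0)
r_edges = np.logspace(-2, 1, 30)
r_snap = rng.pareto(2.0, 100_000) + 0.01
rho_binned = binned_density_profile(r_snap, 1.0, r_edges)
r_orbit = np.tile(r_snap[:1000, None], (1, 50)) * rng.uniform(0.5, 1.5, (1000, 50))
rho_smeared = orbit_smeared_profile(r_orbit, 1.0, r_edges)
```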
The DarkSide program has already produced leading results for both the low-mass ($M_{WIMP}<10\,GeV/c^2$) and high-mass ($M_{WIMP}>100\,GeV/c^2$) dark-matter direct-detection searches with its primary DarkSide-50 detector. Operating since 2013, DarkSide-50 was a 50-kg-active-mass dual-phase Liquid Argon Time Projection Chamber (TPC), filled with low-radioactivity argon from an underground source. The next step of the DarkSide program consists of the construction of a new-generation experiment within the Global Argon Dark Matter Collaboration, which engages all the current argon-based experiments. DarkSide-20k is designed as a 20-tonne fiducial mass dual-phase liquid argon TPC with SiPM-based cryogenic photosensors with high detection efficiency. The detector will be housed at the INFN Gran Sasso (LNGS) underground laboratory, just like its predecessor, and will be nearly free of any instrumental background for exposures of >100 tonne x year. DarkSide-20k is expected to attain a WIMP-nucleon cross-section exclusion sensitivity of $6.3\times 10^{-48}\, cm^2$ for a WIMP mass of $1\,TeV/c^2$ in a 200 t yr run. The talk will highlight the latest updates on the ongoing R\&D activities toward the construction of this large-scale argon detector and its capabilities.
The dark matter halo sparsity, i.e. the ratio between spherical halo masses enclosing two different overdensities, provides a non-parametric proxy of the halo mass distribution which has been shown to be a sensitive probe of the cosmological imprint encoded in the mass profile of haloes hosting galaxy clusters. Mass estimates at several overdensities would allow for multiple sparsity measurements, which can potentially retrieve the entirety of the cosmological information imprinted in the halo profile. Here, we investigate the impact of multiple sparsity measurements on cosmological parameter inference. For this purpose, we analyse N-body halo catalogues from the Raygal and M2Csims simulations and evaluate the correlations among six different sparsities built from spherical-overdensity halo masses at Δ = 200, 500, 1000 and 2500 (in units of the critical density). Remarkably, sparsities associated with distinct halo mass shells are not highly correlated. This is not the case for sparsities obtained using halo masses estimated from the Navarro-Frenk-White (NFW) best-fit profile, which artificially correlates the different sparsities to order one. This implies that there is additional information in the mass profile beyond the NFW parametrisation and that it can be exploited with multiple sparsities. In particular, from a likelihood analysis of synthetic average sparsity data, we show that cosmological parameter constraints significantly improve when increasing the number of sparsity combinations, though the constraints saturate beyond four sparsity estimates. We forecast constraints for the CHEX-MATE cluster sample and find that systematic mass-bias errors mildly impact the parameter inference, though more studies are needed in this direction.
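A toy illustration of the quantity involved: the sparsity is simply the ratio of spherical-overdensity masses, $s_{\Delta_1,\Delta_2} = M_{\Delta_1}/M_{\Delta_2}$, and the six combinations of Δ = 200, 500, 1000, 2500 and their correlations can be assembled as below (random placeholder masses, not the Raygal/M2Csims catalogues):

```python
import numpy as np

# Toy spherical-overdensity masses for a halo catalogue (placeholders).
rng = np.random.default_rng(0)
n_halo = 10_000
m = {200: 10 ** rng.normal(14.0, 0.3, n_halo)}        # M_200c [Msun/h], toy
m[500] = m[200] / rng.normal(1.4, 0.10, n_halo)       # toy mass ratios
m[1000] = m[500] / rng.normal(1.5, 0.10, n_halo)
m[2500] = m[1000] / rng.normal(1.8, 0.15, n_halo)

pairs = [(200, 500), (200, 1000), (200, 2500),
         (500, 1000), (500, 2500), (1000, 2500)]
sparsity = np.array([m[d1] / m[d2] for d1, d2 in pairs])   # shape (6, n_halo)

corr = np.corrcoef(sparsity)    # 6x6 correlation matrix among the sparsities
print(np.round(corr, 2))
```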
Numerous observations confirm the existence of dark matter at astrophysical and cosmological scales, yet the fundamental nature of this elusive component of our universe remains unknown. Theory and simulations of galaxy formation predict that dark matter should cluster on small scales in bound structures called sub-halos or clumps. Sub-halos are thought to be abundant in the Milky Way and can produce high-energy gamma rays as final products of dark matter annihilation. Recently, it has been highlighted that the brightest halos should also have a degree scale extension in the sky. In this study, we examine the prospects offered by CTA for detecting and characterizing such objects. From simple models for individual sub-halos and their population in the Milky Way, we examine under which conditions such sources can be identified with the Galactic Plane Survey (GPS) observations. We use a full spatial-spectral likelihood analysis to derive the sensitivity of CTA to extended dark matter sub-halo emission and assess to what extent the main physical parameters of the phenomenon can be determined.
Pulsar Timing Array experiments probe the presence of possible scalar/pseudoscalar ultralight dark matter particles through decade-long timing of an ensemble of Galactic millisecond radio pulsars. With the second data release of the European Pulsar Timing Array, we focus on the most robust scenario, in which dark matter interacts only gravitationally with ordinary baryonic matter. Our results show that ultralight particles with masses 10^-24.0 eV ≲ m ≲ 10^-23.2 eV cannot constitute 100% of the measured local dark matter density, but can have at most a local density ρ ≲ 0.15 GeV/cm^3.
We calculate the energy density and pressure of a scalar field after its decoupling from a thermal bath in the spatially flat Friedmann-Lemaître-Robertson-Walker space-time, within the framework of quantum statistical mechanics. By using the density operator determined by the condition of local thermodynamic equilibrium, we calculate the mean value of the stress-energy tensor of a real scalar field by subtracting the vacuum expectation value at the time of decoupling. The obtained expressions for the energy density and pressure involve corrections with respect to the classical free-streaming solution of the relativistic Boltzmann equation, which may become relevant even at long times. We present preliminary numerical and analytical results for the quantum corrections to the energy density and pressure for specific expansion rates a(t). Based on [arXiv:2212.05518 [gr-qc]].
Serendipitous H-ATLAS fields Observations of Radio Extragalactic Sources (SHORES, PI: Marcella Massardi) is a brand-new 2.1 GHz survey performed with the Australia Telescope Compact Array (ATCA). It comprises 30 discontinuous fields covering a total area of 15 sq. deg in the Herschel-ATLAS Southern Galactic Pole region (see Eales+2010), centred on candidate lensed galaxies (Negrello+14). With more than 200 hours of observing time, we reached 30 μJy sensitivities. These fields have the advantage of being covered by Herschel observations (H-ATLAS SGP) and many other surveys (KiDS, SDSS, DES, ...). We have also observed all the SHORES fields in polarization, taking advantage of the presence of polarized calibrators and the large amount of observing time we obtained. Combined with the sensitivity reached, this gives us the unique opportunity to study the polarisation properties of radio-loud AGN, star-forming galaxies, and radio-quiet AGN. Further, retrieving the galaxy populations in total intensity and polarization over such a wide sky area also has an impact on cosmology: AGN and star-forming galaxies dominate the CMB foreground on the smallest angular scales.
One of the possible pathways to the formation of a supermassive black hole (SMBH) is the hierarchical merging scenario. Central SMBHs in interacting and merging host galaxies are observed as SMBH merging candidates at separations ranging from hundreds of pc down to mpc. One of the strongest SMBH merging candidates is the galaxy NGC7727, which was resolved with the high-spatial-resolution mode of the MUSE integral field spectrograph on the VLT using adaptive optics. Based on these unique observations, very precise SMBH masses and radial velocities, together with the nucleus masses and size parameters, have been estimated for the first time for this galaxy. Based on these direct observations and using our parallel, high-order (4th-order Hermite) GPU-accelerated dynamical N-body (phi-GPU) code, we were able to trace the evolution of the SMBHs and the host galaxy nuclei from kpc to mpc scales. The main goal of our dynamical modeling was to reach the gravitational wave (GW) emission regime for the multiple-BH model. We present a set of direct N-body simulations with up to one million particles and with relativistic post-Newtonian corrections for the SMBH particles up to 3.5PN. From our model set, we find that the upper limit on the merging time for the central NGC7727 SMBHs is about 100 Myr.
Studying the absorption lines along the lines of sight to bright high-z QSOs is an invaluable cosmological tool, providing insight into the intergalactic/circumgalactic medium, dark matter, big-bang nucleosynthesis and general relativity. I report here the recent results of the QUBRICS (QUasars as BRIght beacons for Cosmology in the Southern hemisphere) survey and of high-resolution spectroscopy with the ESPRESSO high-fidelity spectrograph, the lessons learned, the synergies (particularly in terms of Machine Learning and new instrumentation for future 30m-class telescopes), and the implications for the cosmic UV background, reionization, small-scale structure and the Sandage test of the cosmic redshift drift.
In this talk, I will discuss the chemical evolution of the Milky Way in the light of the most recent observational data from Galactic surveys and missions. Indeed, we are in a golden era for this field of research thanks to the advent of large spectroscopic surveys and projects (e.g. Gaia-ESO, APOGEE, GALAH, LAMOST, AMBRE), which are enhanced by the ESA Gaia mission. In this way, detailed stellar abundances of stars in the Milky Way can be measured. Then, by means of detailed chemical evolution models, it is possible to predict the chemical abundances expected in the stars of each Galactic component: halo, thick disc, thin disc and bulge. From the comparison between data and model predictions for different chemical elements, from lithium to europium, we can reconstruct the history of star formation that occurred in each component, and thus the history of formation and evolution of the entire Galaxy.
Surveying the large-scale structure of the universe will yield an enormous amount of high-quality data for constraining cosmology and potentially detecting new physics. However, extracting the maximum amount of information from this dataset and using it to its full potential requires fast and accurate methods of simulating cosmic structure formation in the nonlinear regime. Normally this is achieved with computationally expensive N-body simulations, which are too slow to use directly for inference. In this talk I will present the results from a new field-level emulator for large-scale structure formation that is trained to map the linear perturbations of the early universe to the nonlinear outcomes of cosmological N-body evolution. The emulator is a convolutional neural network, augmented with style parameters that capture both cosmology and redshift dependence. The model is autodifferentiable by construction, and the redshift dependence allows time derivatives to be computed during training, so the model is trained on the full phase-space distribution of the N-body particles. The cosmology dependence allows the model to act effectively as an ensemble of CNNs, each trained on simulations with a different cosmological background, making the model an emulator for structure formation that is autodifferentiable with respect to initial conditions, cosmological parameters, and redshift. The emulator achieves percent-level accuracy down to nonlinear scales of $k\sim1~h~\mathrm{Mpc}^{-1}$, and can be used both for fast, accurate generation of a large number of mock catalogs and for field-level inference.
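A minimal sketch of the style-conditioning idea described above, written with PyTorch; the layer, parameter names and shapes are illustrative assumptions, not the emulator's actual architecture:

```python
import torch
import torch.nn as nn

class StyleConv3d(nn.Module):
    """Hypothetical style-modulated 3D convolution block: the 'style' vector
    (e.g. cosmological parameters and redshift) sets a per-channel scale and
    shift of the convolved field, so one network covers many cosmologies."""

    def __init__(self, c_in, c_out, n_style):
        super().__init__()
        self.conv = nn.Conv3d(c_in, c_out, kernel_size=3, padding=1)
        self.to_scale = nn.Linear(n_style, c_out)
        self.to_shift = nn.Linear(n_style, c_out)

    def forward(self, x, style):
        h = self.conv(x)
        scale = self.to_scale(style).view(-1, h.shape[1], 1, 1, 1)
        shift = self.to_shift(style).view(-1, h.shape[1], 1, 1, 1)
        return torch.relu((1.0 + scale) * h + shift)

# Toy usage: a 32^3 box with a 3-component linear displacement field,
# conditioned on (Omega_m, z) as the style vector (placeholder values).
field = torch.randn(1, 3, 32, 32, 32)
style = torch.tensor([[0.31, 0.5]])
layer = StyleConv3d(c_in=3, c_out=16, n_style=2)
out = layer(field, style)          # shape (1, 16, 32, 32, 32)
```

Because every operation above is differentiable, gradients can flow back to both the input field and the style vector, which is the property that enables field-level inference with respect to initial conditions, cosmological parameters and redshift.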
The damping wing signature of high-redshift quasars in the intergalactic medium (IGM) provides a unique way of probing the history of reionization. Next-generation surveys will collect a multitude of spectra that call for powerful statistical methods to constrain the underlying astrophysical parameters, such as the global IGM neutral fraction, as tightly as possible. Inferring these parameters from the observed spectra is challenging because non-Gaussian processes, such as the IGM transmission causing the damping wing imprint, make it impossible to write down the correct likelihood of the spectra. We will present a tractable Gaussian approximation of the likelihood that forms the basis of a fully differentiable Hamiltonian Monte Carlo inference scheme. Our scheme can be readily applied to real observational data and is based on realistic forward-modelling of high-redshift quasar spectra, including IGM transmission and heteroscedastic observational noise. In contrast to most previous approaches, we do not only use the smooth part of the spectrum redward of the Lyman-alpha line to infer the quasar continuum but also the information encoded in the Lyman-alpha forest, taking into account the full covariance between the red and the blue parts of the spectrum. We improve upon our Gaussian likelihood approximation by learning the true likelihood with a likelihood-free version of the inference scheme. To this end, we train a normalizing flow as a neural likelihood estimator as well as a binary classifier as a likelihood-ratio estimator and incorporate them into our inference pipeline. We provide a full reionization forecast for Euclid by applying our procedure to a set of realistic mock observational spectra resembling the distribution of Euclid quasars and realistic spectral noise. By inferring the IGM neutral fraction as a function of redshift, we show that our method, applied to upcoming observational data, can robustly constrain its evolution to within ~5% at all redshifts 6 < z < 10.
Understanding the effect of baryon-driven astrophysics on probes of large-scale structure is crucial to correctly interpret the increasingly precise data from ongoing surveys. The state-of-the-art cosmological hydrodynamic simulation MillenniumTNG represents a formidable tool in this respect. Combining a cosmologically representative volume of (740 cMpc)^3 and a mass resolution of ~3x10^7 Msun per baryonic mass element, MillenniumTNG enables us to resolve the detailed properties of galaxies over a very wide range of masses. I am therefore using this simulation and its dark-matter-only counterpart to study the impact of baryons on the concentration-mass relationship of haloes in an unprecedented halo mass range (~10^11-10^15 Msun) and redshift interval (0<z<7). I will show my preliminary results and discuss possible implications for cosmological probes such as lensing.
We present a study to measure the shape of the dark matter halos of gas-rich galaxies that have extended HI disks. We have assumed that the halo axes in the disk plane are approximately equal, so that q = c/a measures the halo prolateness or oblateness. We have applied our model to a sample of 20 nearby galaxies that are gas rich and close to face-on. We have used the stacked HI velocity dispersion and HI surface densities to derive q in the outer disk regions. We find that gas-dominated galaxies (such as LSB dwarfs) have oblate halos (q < 0.55), whereas stellar-dominated galaxies have a range of q values from 0.2 to 1.3. We also find a significant positive correlation between q and stellar mass, which indicates that galaxies with massive stellar disks have a higher probability of having halos that are spherical or slightly prolate, whereas low-mass galaxies preferentially have oblate halos. We then compare our results with galaxies in cosmological simulations. We show that halo shape affects disk dynamics and is important for estimating halo mass as well.
We introduce a new pairwise estimator for observing the polarised kinetic Sunyaev-Zel'dovich (pkSZ) effect arising from the transverse peculiar velocity of galaxy clusters. The pkSZ effect is second order in the peculiar velocities of the clusters and has a frequency spectrum that can be decomposed into y-type and blackbody components, whereas the unpolarised linear kSZ effect has a blackbody spectrum only. Thus the detectability of the pkSZ effect depends only on the sensitivity of the survey and not on other primary and secondary CMB anisotropies. We present a theoretical expectation for the pairwise pkSZ estimator and calculate the signal for different mock cluster catalogues. We also make a forecast for CMB-S4. If detected, the pairwise pkSZ effect will open up a new window into the study of the large-scale structure of the Universe.
Reionization of hydrogen in the intergalactic medium (IGM) is a landmark in structure formation. Redshift z ~ 7 is the frontier in Lyman-alpha and reionization studies and appears to lie in the middle of reionization. In the “LAGER” project (Lyman-Alpha Galaxies in the Epoch of Reionization), we take deep narrowband images to identify Lyman-alpha emission at z ~ 7. In this poster we present several protoclusters, which may be associated with reionization bubbles at z ~ 7.
The widely used Milky Way dust reddening map, the Schlegel, Finkbeiner, & Davis (1998, SFD) map, was found to contain extragalactic large-scale structure (LSS) imprints (Chiang & Ménard 2019). Such contamination is inherent in maps based on infrared emission, which pick up not only Galactic dust but also the cosmic infrared background (CIB). When SFD is used for extinction correction, over-correction occurs in a spatially correlated and redshift-dependent manner, which could impact precision cosmology using galaxy clustering, lensing, and supernova Ia distances. Similarly, LSS imprints in other Galactic templates can affect intensity mapping and cosmic microwave background experiments. This paper presents a generic way to remove LSS traces in Galactic maps and applies it to SFD. First, we measure descriptive summary statistics of the CIB in SFD by cross-correlating the map with spectroscopic galaxies and quasars in SDSS tomographically as functions of redshift and angular scale. To reconstruct the LSS on the map level, however, additional information on the phases is needed. We build a large set of 180 overcomplete, full-sky basis template maps from the density fields of over 600 million galaxies in WISE and find a linear combination that reproduces all the high-dimensional tomographic two-point statistics of the CIB in SFD. After subtracting this reconstructed LSS/CIB field, the end product is a full-sky Galactic dust reddening map that supersedes SFD, carrying all Galactic features therein, with maximally suppressed CIB. We release this new dust map dubbed CSFD, the Corrected SFD, at https://idv.sinica.edu.tw/ykchiang/CSFD.html and NASA's LAMBDA archive.
Motivated by the recent developments of cosmological models that are based on generalized entropies rather than Boltzmann-Gibbs entropy, we consider stochastically quantized self-interacting scalar fields as suitable models for dark energy. These fields shift information and effectively maximize Tsallis entropy with entropic index q=3. Second quantization effects lead to new and unexpected phenomena if the self-interaction strength is strong. The stochastically quantized dynamics can degenerate to a chaotic dynamics conjugated to a Bernoulli shift in fictitious time, and the right amount of late-time dark energy density can be generated without fine-tuning. It is numerically shown that the scalar field dynamics distinguishes fundamental standard model parameters as corresponding to local minima in the dark energy landscape. Chaotic fields of this type can offer possible solutions to the cosmological coincidence problem, and give sense to late-time dark energy as stabilizing standard model parameters in the vacuum energy landscape. References: C. Beck, Phys. Rev. D 69, 123515 (2004); J. Yan and C. Beck, Entropy 24, 1671 (2022).
We test Refracted Gravity (RG) [1] by investigating the dynamics of disk galaxies in the Disk Mass Survey (DMS) [2,3,4] and of three elliptical E0 galaxies in the SLUGGS survey [3,4,5] without the aid of dark matter. RG reproduces the rotation curves, the vertical velocity dispersions, and the observed Radial Acceleration Relation (RAR) of the DMS galaxies, and the root-mean-square (RMS) velocity dispersions of the stars and of the blue and red globular clusters in the E0 galaxies. Our results show that RG can compete with other theories of gravity in describing gravitational dynamics on galaxy scales. References: [1] Matsakos, T., & Diaferio, A., 2016, ArXiv e-prints [arXiv:1603.04943] [2] Cesare, V., Diaferio, A., Matsakos, T., & Angus, G., 2020, A&A, 637, A70 [3] Cesare V., 2021, Phys. Sci. Forum, 2(1), 34 [4] Cesare V., 2023, Universe, 9(1), 56 [5] Cesare V., Diaferio, A., & Matsakos, T., 2022, A&A, 657, A133
LiteBIRD, the Lite (Light) satellite for the study of B-mode polarization and Inflation from cosmic background Radiation Detection, is a space mission for primordial cosmology and fundamental physics. The Japan Aerospace Exploration Agency (JAXA) selected LiteBIRD in May 2019 as a strategic large-class (L-class) mission, with an expected launch in the late 2020s using JAXA’s H3 rocket. LiteBIRD is planned to orbit the Sun–Earth Lagrangian point L2, where it will map the cosmic microwave background polarization over the entire sky for three years, with three telescopes in 15 frequency bands between 34 and 448 GHz, to achieve an unprecedented total sensitivity of 2.2 μK-arcmin, with a typical angular resolution of 0.5° at 100 GHz. The primary scientific objective of LiteBIRD is to search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. We provide an overview of the LiteBIRD project, including scientific objectives, mission and system requirements, operation concept, spacecraft and payload module design, expected scientific outcomes, potential design extensions, and synergies with other projects.
CMB-S4 will reach critical thresholds for exploring primordial gravitational waves, light relics, and neutrinos, and will map the matter throughout the Universe and capture transient phenomena in the microwave sky. We discuss the range of science and the instruments.
I will discuss the chemical evolution of galaxies, with particular attention to the Milky Way, for which we have the majority of chemical abundance data. I will describe what Galactic archaeology is, namely how to reconstruct the history of star formation of a galaxy starting from the abundances measured now in stars and gas. The cosmic evolution of metallicity with redshift will be discussed in the light of the new data from JWST.
The intersection of the cosmic and neutrino frontiers is a rich field where much discovery space still remains. Cosmology is an independent window on the physics of light relics – active neutrinos and other light massive particles that may populate the cosmological plasma – and allows us to probe their behaviour over cosmological times and scales, something unachievable via terrestrial laboratory searches. In this talk I will discuss how observations of the cosmic microwave background and the large-scale structure of the Universe can be used to constrain the properties of neutrinos and other light relics. I will focus on "new physics" scenarios (e.g. beyond-standard-model properties, axion-like particles, ...). I will further discuss detection prospects from forthcoming cosmological observations.
Around one third of the point-like sources in the Fermi-LAT catalogs remain unidentified (unIDs) today. Indeed, these unIDs lack a clear, univocal association with a known astrophysical source. If dark matter (DM) is composed of weakly interacting massive particles (WIMPs), there is the exciting possibility that some of these unIDs may actually be DM sources, emitting gamma rays from WIMP annihilation. We propose a new approach to solve the standard Machine Learning (ML) binary classification problem of disentangling prospective DM sources (simulated data) from astrophysical sources (observed data) among the unIDs of the 4FGL Fermi-LAT catalogue. Concretely, we artificially build two {\it systematic} features for the DM data which are originally inherent to observed data: the detection significance and the uncertainty on the spectral curvature. We do so by sampling from the observed population of unIDs, assuming that the DM distributions would, if any, follow the latter. We consider different ML models: Logistic Regression, Neural Network (NN), Naive Bayes and Gaussian Process, of which the best, in terms of classification accuracy, is the NN, achieving around 93\% performance. Applying the NN to the unID sample, we find that the degeneracy between some astrophysical and DM sources can be partially broken within this methodology. Nonetheless, we conclude that there are no DM source candidates among the pool of 4FGL Fermi-LAT unIDs.
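A schematic version of the binary-classification set-up, using scikit-learn with random placeholder features (not the actual 4FGL-derived features, and not expected to reproduce the reported 93% accuracy):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the problem: label 0 = observed astrophysical sources,
# label 1 = simulated DM sources. The three features are placeholders.
rng = np.random.default_rng(1)
n = 2000
X_astro = rng.normal(loc=[0.0, 1.0, 2.0], scale=1.0, size=(n, 3))
X_dm = rng.normal(loc=[0.5, 1.5, 1.5], scale=1.0, size=(n, 3))
X = np.vstack([X_astro, X_dm])
y = np.concatenate([np.zeros(n), np.ones(n)])

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.3f}")
```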
Understanding the properties of Dark Matter is one of the most demanding challenges in modern Astrophysics and Cosmology. The Cold Dark Matter paradigm is at variance with some aspects of the observed sub-galactic-scale phenomenology, hence several non-standard Dark Matter particle candidates have been considered to solve these issues. In this talk, I present a novel way to constrain, and possibly rule out, different Dark Matter models based on the recent determination of the cosmic star formation rate density at high redshifts (z>4). I will also showcase how such constraints will be further strengthened by upcoming refined estimates of the cosmic star formation rate density, if the early data on the UV luminosity function at z > 10 from the James Webb Space Telescope (JWST) are confirmed down to ultra-faint magnitudes.
High-energy cosmic-ray electrons and positrons cool rapidly as they propagate through the Galaxy, due to synchrotron interactions with magnetic fields and inverse-Compton scattering on photons of the interstellar radiation field. Typically, these energy losses have been modelled as a continuous process. However, inverse-Compton scattering is a stochastic process, characterised by interactions that are rare and catastrophic. In this work, we take the stochasticity of inverse-Compton scattering into account and calculate the contributions to the local electron and positron fluxes from different sources. Compared to the continuous approximation, we find significant changes: for pulsars, which produce electron-positron pairs as they spin down, the spectrum becomes significantly smoother; for TeV-scale dark matter particles, which annihilate into electrons and positrons, the signal becomes strongly enhanced around the energy corresponding to the dark matter mass. Combined, these effects significantly improve our ability to use spectral signatures in the local electron and positron spectra to search for particle dark matter at TeV energies.
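For context, the continuous-loss approximation that this work goes beyond is, in the Thomson limit, the standard expression
$$ \left.\frac{dE}{dt}\right|_{\rm cont} = -\frac{4}{3}\,\sigma_{\rm T}\, c \left(\frac{E}{m_e c^2}\right)^{2} \left(U_B + U_{\rm ph}\right), $$
with $U_B$ and $U_{\rm ph}$ the magnetic-field and radiation-field energy densities; treating inverse-Compton scattering as individual, rare and catastrophic events replaces this smooth drift with a stochastic loss term.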
When falling into a galaxy cluster, galaxies experience the loss of gas due to ram pressure stripping. In particular, disk galaxies lose gas from their disks, and very large tentacles (of the order of hundreds of kpc) can be formed. Because of their morphology, these stripped galaxies have been named jellyfish galaxies. It has been found that star formation is triggered not only in the disk but also in the tentacles of such jellyfish galaxies. The star-forming regions formed in the tentacles of those galaxies can be as massive as 3e7 solar masses and have sizes > 100 pc. Interestingly, these masses and sizes agree with those of ultra-compact dwarf galaxies. In this work we make use of the state-of-the-art magneto-hydrodynamical cosmological simulation IllustrisTNG50 to study the most massive jellyfish galaxies which present large (and massive) tentacles. Our aim is to analyze the star-forming regions in the tentacles of jellyfish galaxies. We find that in the tentacles of TNG50 jellyfish galaxies, star formation is triggered by ram pressure stripping and regions with masses > 1e7 solar masses are formed. These regions show a well-defined radial distribution with a half-mass radius of 1 kpc, typical of dwarf galaxies. Moreover, these regions are gravitationally self-bound. All in all, we identify for the first time a new type of dwarf galaxy which, by construction, lacks a dark matter halo.
We analyze the formation of the redshifted 21-cm hyperfine-structure line of hydrogen atoms in the Dark Ages at 50≤z≤500 in different cosmologies. To study its dependence on the values of cosmological parameters and on the physical conditions in the intergalactic medium, the evolution of the global (sky-averaged) differential brightness temperature in this line was computed in standard and non-standard cosmological models with different parameters. The standard ΛCDM model with post-Planck parameters predicts a value of the differential brightness temperature in the center of the absorption line of δTbr≈35 mK at z≈87. The frequency of the line at the absorption maximum is 16 MHz, and the effective half-width of the line is 17 MHz. The depth of the line is moderately sensitive to Ωb and H0, weakly sensitive to Ωdm, and insensitive to the other parameters of the standard ΛCDM model. The line is, however, very sensitive to additional mechanisms of heating or cooling of baryonic matter during the Dark Ages, so it can be a good test of non-standard cosmological models. In models with decaying or self-annihilating dark matter, as well as with a primordial stochastic magnetic field, the temperature of baryonic matter in this period is higher the larger the fraction of these energy components of dark matter and the magnetic field strength. The absorption line becomes shallower, disappears, and turns into emission at values of the component parameters lower than the upper limits on them following from current observational data on Big Bang nucleosynthesis, CMB temperature and polarization fluctuations, and the formation of galaxies.
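For orientation, the global signal discussed above is commonly approximated by the standard expression (e.g. Furlanetto et al. 2006; not the specific computation of this work)
$$ \delta T_{\rm br} \simeq 27\, x_{\rm HI} \left(1 - \frac{T_{\rm CMB}}{T_{\rm s}}\right) \left(\frac{\Omega_b h^2}{0.023}\right) \left(\frac{0.15}{\Omega_m h^2}\, \frac{1+z}{10}\right)^{1/2} {\rm mK}, $$
which makes explicit the sensitivity to Ωb and H0 and the dependence on the spin temperature $T_{\rm s}$, the quantity altered by any extra heating or cooling of the baryons.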
We focus on the combined analysis of 5 concentric regions in the Galactic Center (GC), observed by the High Energy Stereoscopic System (HESS) in very-high-energy gamma rays, as a possible way to constrain the Dark Matter (DM) density distribution within a radius smaller than 450 pc. Inspired by the multi-TeV DM interpretation of the gamma-ray cut-off detected by HESS in the inner 15 pc, we study the gamma-ray flux in the different regions, determining the astrophysical factor on different angular scales. The latter will serve to set constraints on the density distribution of a multi-TeV Weakly Interacting Massive Particle. In addition, an extra study is performed regarding dynamical constraints on the enclosed mass within the S2 star orbit, in order to compare them with the spectral constraints obtained for a wide range of DM density profiles. Our results are compatible with the hypothesis of an enhancement of the DM distribution in the GC with respect to the benchmark Navarro-Frenk-White (NFW) density profile. This enhancement could be created by a cuspy NFW profile with a slope $\gamma \sim 1.3$. However, the enhancement created by a DM adiabatic spike is ruled out by the HESS spectral data for almost all kinds of DM density profiles. We also show results for other kinds of spikes, ruling out some profiles as well. Finally, we conclude that the upper limits on the enclosed mass within the S2 orbit rule out profiles with slopes $\gamma > 0.8$ if a DM spike is considered.
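The "astrophysical factor" referred to above is the standard J-factor entering the annihilation flux,
$$ \frac{d\Phi_\gamma}{dE} = \frac{\langle\sigma v\rangle}{8\pi\, m_{\rm DM}^{2}}\, \frac{dN_\gamma}{dE}\, J(\Delta\Omega), \qquad J(\Delta\Omega) = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho_{\rm DM}^{2}\big(r(l,\Omega)\big)\, dl, $$
so that computing J over the five HESS regions for different density profiles (NFW, cuspy NFW, spikes) directly sets the relative strength of the expected signal in each region.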
According to the LambdaCDM cosmology, present-day galaxies with stellar masses M>10^11 M_sun should contain a sizable fraction of dark matter within their stellar body. Models indicate that in massive early-type galaxies (ETGs) with M~1.5x10^11 Msun dark matter should account for ~15% of the dynamical mass within one effective radius (1 Re) and for ~60% within 5 Re. Most massive ETGs have been shaped through a two-phase process: the rapid growth of a compact core was followed by the accretion of an extended envelope through mergers. The exceedingly rare galaxies that have avoided the second phase, the so-called relic galaxies, are thought to be the frozen remains of the massive ETG population at z~2. The best relic galaxy candidate discovered to date is NGC 1277, in the Perseus cluster. We used deep integral field data to revisit NGC 1277 out to an unprecedented radius of 5 Re. By using Jeans modelling we recovered the dark matter fraction of NGC 1277 within 5 Re, and found it to be negligible (<5%; two-sigma confidence level), which is in strong tension with the LambdaCDM expectation. Since the lack of an extended halo would reduce dynamical friction and prevent the accretion of an envelope, we propose that NGC 1277 lost its dark matter very early or that it was dark matter deficient ab initio. We discuss our discovery in the framework of recent proposals suggesting that some relic galaxies may result from dark matter stripping as they fell in and interacted within galaxy clusters. Alternatively, NGC 1277 might have been born in a high-velocity collision of gas-rich proto-galactic fragments, where dark matter left behind a disc of dissipative baryons. We speculate that the relative velocities of ~2000 km/s required for the latter process to happen were possible in the progenitors of the present-day rich galaxy clusters.
In recent years, the galaxy-mass cross-correlation has predominantly been probed within weak gravitational lensing via the correlation between foreground positions and background galaxy ellipticities. However, the cross-correlation between the positions of background and foreground galaxies is an alternative observable which, up to now, has been largely overlooked. The corresponding signal is a manifestation of the gravitational lensing effect of magnification bias and has been shown to become extremely significant when using a background sample of submillimeter galaxies. In this talk, I will discuss how this submillimeter galaxy magnification bias can be effectively exploited as a cosmological probe.
The concordance model of the universe – the Lambda Cold Dark Matter (LCDM) model – has enjoyed a streak of success over the last twenty years. However, there is still no consensus solution to the mysteries of dark matter and dark energy. Furthermore, the measurements of the Hubble constant and of the amplitude of matter fluctuations (σ_8) from the early universe are in tension with measurements from the late universe. Consequently, various alternative models have been proposed, one of the most popular being modified theories of gravity. In this talk, I will explain how to use the motion of galaxies (peculiar velocities) to constrain the strength of gravity, which is determined by the growth rate of structure, and hence distinguish different models of gravity. I will also explain the new model I developed to constrain the growth rate of structure. I will then show my measurement with the largest and deepest peculiar velocity survey to date – the SDSS peculiar velocity survey. The combination of this dataset with my new model for the relationship between growth rate and galaxy velocity significantly reduces the systematic uncertainty in our final constraints. I will end by discussing the consistency of my measurements with others in the literature and the predictions of General Relativity, and prospects for the future (you can find the paper associated with this talk here: https://arxiv.org/abs/2209.04166).
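The link between peculiar velocities and the strength of gravity exploited here is, in linear theory, the standard relation
$$ \mathbf{v}(\mathbf{k}) = i\, a H f\, \frac{\mathbf{k}}{k^{2}}\, \delta(\mathbf{k}), \qquad f \equiv \frac{d\ln D}{d\ln a}, $$
so that velocity statistics constrain the combination $f\sigma_8$, which differs between General Relativity and modified-gravity models.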
Stage IV galaxy redshift surveys will sample the large-scale structure of the Universe over unprecedented volumes with high-density tracers, allowing for precise measurements of the clustering statistics. In order to properly exploit the full potential of such data, a robust likelihood pipeline is required, starting with an accurate theoretical prediction of cosmological observables, down to constraints on cosmological parameters. The main probe used in the context of spectroscopic galaxy surveys is the galaxy power spectrum. However, it has been shown that the inclusion of higher-order correlation functions in the analysis can improve the accuracy with which cosmological parameters are measured. I will present a software package for the joint likelihood analysis of the galaxy power spectrum and bispectrum, describe its validation against N-body simulations and its application to data from the BOSS survey. Moreover, I will discuss forecasts and preparations for data from the upcoming Euclid survey.
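Schematically, such a joint analysis assumes a Gaussian likelihood of the form (a generic expression, not the specific implementation of the code presented)
$$ -2\ln\mathcal{L}(\theta) = \big[\mathbf{d} - \mathbf{t}(\theta)\big]^{\top} \mathsf{C}^{-1} \big[\mathbf{d} - \mathbf{t}(\theta)\big] + {\rm const}, \qquad \mathbf{d} = \big\{\hat{P}(k_i),\, \hat{B}(k_1,k_2,k_3)\big\}, $$
where the data vector concatenates the measured power spectrum and bispectrum and $\mathsf{C}$ is their joint covariance.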
Searches for and observations of supernovae (SNe) have been motivated by the fact that they are exceptionally useful for various astrophysical and cosmological applications. Most prominently, Type Ia SNe (SNe Ia) have been used as distance indicators, showing that the expansion rate of the Universe is accelerating. The strong gravitational lensing effect provides another powerful tool and occurs when a foreground mass distribution is located along the line of sight to a background source. Galaxies and galaxy clusters can thus act as “gravitational telescopes”, boosting the faint signals from distant SNe and galaxies. Thanks to the magnification boost provided by the gravitational telescope, we are able to probe galaxies and SNe that would otherwise be undetectable. Therefore, the combination of the two tools, SNe and strong lensing, in the single phenomenon of strongly lensed SNe, provides a powerful simultaneous probe of several cosmological and astrophysical phenomena. By measuring the time delays of strongly lensed supernovae and having a high-quality strong-lensing model of the galaxy cluster, it is possible to measure the Hubble constant with competitive precision. In this talk, I will present some of the past and recent results that have been made possible by observations of strongly lensed supernovae and anticipate what we can expect in the future from upcoming telescope surveys, such as the Vera C. Rubin Observatory and the Nancy G. Roman Space Telescope.
The Indian Pulsar Timing Array Consortium (InPTA) is an Indo-Japanese collaboration that performs precision timing of millisecond pulsars with the upgraded Giant Metrewave Radio Telescope (uGMRT) and has been part of the International Pulsar Timing Array (IPTA) since 2021. The InPTA effort stands out with its unique ability to simultaneously monitor IPTA pulsars in the L and P bands using the uGMRT. Our low-frequency radio observations allow for accurate dispersion measure estimates and enhance the precision of pulsar timing measurements by reducing systematic uncertainties and improving the overall reliability. Recently, we marked a significant milestone with our first Data Release, containing 3.5 years of observations of 14 millisecond pulsars and achieving accuracy comparable to other PTAs. This presentation provides a concise overview of InPTA's objectives, recent scientific results, and our ongoing contributions to the IPTA consortium.
The direct and inverse cosmic distance ladder methods provide two independent ways of estimating the Hubble constant by means of their calibrators, the absolute magnitude of Type Ia Supernovae (SNIa), M, and the sound horizon at the baryon-drag epoch, rd. In light of the increasing relevance of the Hubble tension, it is thus of utmost importance to measure them following model-independent approaches that could be employed to shed some light on the discussion. In this work, we use state-of-the-art data on Cosmic Chronometers (CCH) and SNIa from the Pantheon+ compilation to first test some standard assumptions of LCDM: the constancy of the SNIa absolute magnitude and the robustness of the cosmological principle (CP) at z<2, with a model-agnostic approach. We do so by reconstructing M(z) and the curvature parameter using Gaussian Processes. Moreover, we use CCH in combination with data on baryon acoustic oscillations (BAO) from various galaxy surveys (6dFGS, BOSS, eBOSS, WiggleZ, DES Y3) to measure rd from each BAO data point and check their consistency. Given the precision allowed by the CCH, we find that all these parameters are fully compatible (at 68% C.L.) with constant values. This justifies our final analyses, in which we constrain them under the validity of the CP, the metric description of gravity and standard physics in the vicinity of the stellar objects. The results we obtain are independent of the main data sets involved in the Hubble tension, namely, the cosmic microwave background and the first two rungs of the cosmic distance ladder.
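A minimal sketch of the Gaussian-process reconstruction step, using scikit-learn with toy numbers in place of the actual Pantheon+/CCH-derived values of M(z):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy M(z) "data": in the real analysis these would come from SNIa apparent
# magnitudes combined with CCH-calibrated distances; here they are placeholders.
rng = np.random.default_rng(2)
z = np.linspace(0.02, 1.8, 30)[:, None]
M_err = 0.05 * np.ones(30)
M_obs = -19.3 + M_err * rng.normal(size=30)

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=M_err**2, normalize_y=True)
gp.fit(z, M_obs)

z_grid = np.linspace(0.0, 2.0, 100)[:, None]
M_mean, M_std = gp.predict(z_grid, return_std=True)
# A reconstruction consistent with a constant M within the 68% band supports
# the standard assumption of a redshift-independent SNIa absolute magnitude.
```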
We investigate the holographic complexity growth rate of a conformal field theory in a Friedmann-Lemaître-Robertson-Walker (FLRW) universe. We consider a brane universe moving in a Schwarzschild background. For this case, we compute the complexity growth rate in a closed universe and in a flat universe by using both the complexity-volume and complexity-action dualities. We find that there are two kinds of contributions to the growth rate: one from the interaction among the degrees of freedom, and the other from the change of the spatial volume of the universe. The complexity-volume and complexity-action conjectures give different results for the closed-universe case. A possible explanation of the inconsistency when the brane crosses the black hole horizon is given, based on the Lloyd bound.
Several cosmological tensions have emerged in light of recent data, most notably in the inferences of the parameters $H_0$ and $\sigma_8$. We explore the possibility of alleviating both these tensions {\it simultaneously} by means of the Albrecht-Skordis ``quintessence'' potential. The field can reduce the size of the sound horizon $r_s^*$ while concurrently suppressing the power in matter density fluctuations before it comes to dominate the energy density budget today. Interestingly, this rich set of dynamics is governed entirely by one free parameter that is of $\mathcal{O}(10)$ in Planck units. We find that the inferred value of $H_0$ can be increased, while that of $\sigma_8$ can be decreased, both by $\approx 1\sigma$ compared to the $\Lambda$CDM case. However, ultimately the model is disfavored by Planck and BAO data alone, compared to the standard $\Lambda$CDM model, with a $\Delta \chi^2 \approx +6$. When including large-scale structure and supernova data we find $\Delta \chi^2 \approx +1$. We note that historically much attention has been focused on preserving the three angular scales $\theta_D$, $\theta_{EQ}$, and $\theta_s^*$ at their $\Lambda$CDM values. Our work presents an example of how, while doing so indeed maintains a relatively good fit to the CMB data for an increased number of ultra-relativistic species, it is a priori insufficient in maintaining such a fit in more general model spaces.
Within the f(Q)-gravity framework, we perform a phenomenological study of the cosmological observables in light of the degeneracy between neutrino physics and the modified-gravity parameter, and we identify specific patterns which allow us to break such a degeneracy. We also provide separately the constraints on the total mass of the neutrinos, Σmν, and on the effective number of neutrino species, Neff, using cosmic microwave background (CMB), baryon acoustic oscillation (BAO), redshift space distortion (RSD), supernovae (SNIa), galaxy clustering (GC) and weak gravitational lensing (WL) measurements. We find that all combinations of data we consider prefer a stronger gravitational interaction than LCDM. Finally, we consider the chi-square and deviance information criterion statistics and find the f(Q) + Σmν model to be statistically supported by the data over the standard scenario. On the contrary, f(Q) + Neff is supported by CMB+BAO+RSD+SNIa, but moderate evidence against it is found when GC and WL data are included.
We use the Weakly Interacting Massive Particle (WIMP) thermal decoupling scenario to probe cosmologies in dilatonic Einstein-Gauss-Bonnet (dEGB) gravity, where the Gauss-Bonnet term is non-minimally coupled to a scalar field with vanishing potential. We put constraints on the model parameters when the ensuing modified cosmological scenario drives the WIMP annihilation cross section beyond the present bounds from DM indirect-detection searches. In our analysis we assume WIMPs that annihilate to Standard Model particles through an s-wave process. For the class of solutions that comply with WIMP indirect-detection bounds, we find that dEGB typically plays a mitigating role in the scalar field dynamics at high temperature, slowing down its evolution and reducing the enhancement of the Hubble rate compared to its standard value. For such solutions, we observe that the corresponding boundary conditions at high temperature correspond asymptotically to a vanishing deceleration parameter q, so that the effect of dEGB is to add an accelerating term that exactly cancels the deceleration predicted by General Relativity. The bounds from WIMP indirect detection are nicely complementary to late-time constraints from compact binary mergers. This suggests that it could be interesting to use other early-cosmology processes to probe the dEGB scenario.
In the context of a three-dimensional theory of general relativity (TMG), a Kerr-like metric is obtained. Using Penrose diagrams, the causal structure and particle diffusion are discussed, and, within the fermion tunneling approach and the WKB approximation, the expression for the Hawking temperature is also derived.
In the string axiverse scenario, light primordial black holes may spin up due to the Hawking emission of a large number of light (sub-MeV) axions. We show that this may trigger superradiant instabilities associated with a heavier axion during the black holes’ evolution, and study the coupled dynamics of superradiance and evaporation. We find, in particular, that the present black hole mass-spin distribution should follow the superradiance threshold condition for black hole masses below the value at which the superradiant cloud forms, for a given heavy axion mass. Furthermore, we show that the decay of the heavy axions within the superradiant cloud into photon pairs may lead to a distinctive line in the black hole’s emission spectrum, superimposed on its electromagnetic Hawking emission.
Infall of cold dark matter onto a galaxy may result in caustic rings where the particle density is enhanced. They may be searched for as features in galactic rotation curves. Previous studies suggested evidence for these caustic rings with universal parameters, that is, parameters common to different galaxies. Here we test this hypothesis with a large independent set of rotation curves by means of an improved statistical method. No evidence for universal caustic rings is found in the new analysis.
Dark Gravity (DG) is a background-dependent, bimetric and semi-classical extension of General Relativity with an anti-gravitational sector. The foundations of the theory are reviewed. The main theoretical achievement of DG is the avoidance of any singularities (both the black hole horizon and the cosmic initial singularity), and it provides an ideal framework to understand the cancellation of vacuum energy contributions to gravity and solve the old cosmological constant problem. The main testable predictions of DG against GR are on large scales, as it provides an acceleration mechanism alternative to the cosmological constant. The detailed confrontation of the theory with SN-Cepheids, CMB and BAO data is presented. The Dark Gravity theory is constantly evolving and the latest version of its living review is accessible at www.darksideofgravity.com/DG.pdf.
It is well known that the spacetime of the Friedmann-Robertson-Walker (FRW) universe is a thermodynamic system: it has a temperature and an entropy and satisfies the first law of thermodynamics. We have recently taken a further significant step by constructing, for the first time, the thermodynamic equation of state of the FRW spacetime, P = P(V, T), where the gravitational pressure P is derived directly from the unified first law, i.e. from the gravitational field equation in spherically symmetric spacetime, through a first-principles study. Furthermore, using this thermodynamic equation of state, we have discovered three kinds of thermodynamic phase transitions in the FRW spacetime. We also investigate possible insights into the astronomical observation of these phase transitions.
The discovery of the rotation curves of disk galaxies by Rubin et al. (1980) has had far-reaching implications for astrophysics and cosmology. These findings introduced the need for an elusive component that astrophysicists have dubbed "dark matter", believed to be made up of particles that are necessarily beyond the standard model of elementary particles. Since then, dark matter has become a building block of the current cosmological model (ΛCDM), in which it provides the initial gravitational wells upon which galaxies are built. The role of dark matter in shaping galaxy dynamics has been firmly established and confirmed in the local Universe. However, until recently, it was not possible to validate it at high redshift. Thanks to integral field units (IFUs), which are high-resolution spectrographs, we are now able to gain valuable insights into the early Universe and shed light on the formation and evolution of galaxies over cosmic time. In particular, IFU observations allow us to study the resolved velocity profiles, or rotation curves, of galaxies at high redshift. This enables us to address some of the most intriguing open questions in modern astrophysics. These include:
1. Is the nature of dark matter cold as presumed in the most successful ΛCDM cosmological simulations?
2. What is the fraction of dark matter in high-redshift galaxies compared to locals?
3. Do dark matter halos evolve similarly to galaxies?
4. Do baryonic processes impact the distribution of dark matter and, if so, can they be constrained?
At the meeting, I intend to elaborate on each of these questions. Specifically, I will present a novel study that employs KROSS, KGES and KMOS3D data, comprising the largest sample (~300) of disk-like galaxies to date, spanning a redshift range of 0.5 < z < 2.5. I will present accurate rotation curves and kinematic models for the entire sample and share my results on the dark matter fraction obtained through a halo-model-independent approach. One of my key findings is that the dark matter fraction in the inner-to-outer regions of galaxies increases with redshift, with typical estimates ranging from 75% to 90% of the total mass. Moreover, the dark matter fraction at fixed redshift is lower in the inner regions than in the outskirts, but does not fall below 50%, contrary to previous studies and cosmological simulations. Finally, I will discuss how we can examine the assembly history of dark matter halos over cosmic time using current observations of high-redshift galaxies that lack a crucial component, the stellar continuum. This work provides significant insights into the early Universe and galaxy formation, highlighting the need for future advancements in our understanding of dark matter and its role in galaxy evolution. Furthermore, recent observations by JWST, which have identified massive galaxies during cosmic dawn (Labbe et al. 2023), pose new challenges to the standard model of cosmology and further emphasize the scientific significance of these topics. Investigating rotation curves and dark matter halos throughout cosmic history will therefore be of great importance for advancing our understanding of these phenomena.
By now, more than a hundred massive black hole (MBH) mass measurements in local galaxies, based on stellar or gaseous motion, reveal strong correlations of the MBH mass with bulge properties such as bulge mass, stellar velocity dispersion (sigma) and light concentration. Determining MBH masses is a challenging procedure, and it is not possible to use one single method across the full sample of galaxies. Problematically, measurements from different dynamical tracers often give discrepant results, raising the question of whether the variety of methods imposes an additional bias on the scaling relations. Connecting mass results from different methods is therefore necessary to evaluate the robustness and universality of the measurements, and thus crucial for improving our understanding of the interplay between central black holes and their host galaxies. In my review I will address the following questions: Do high-mass and low-mass black holes follow the same scaling relations? Does the variety of mass measurement methods impose an additional bias on the scaling relations? And how can we deal with selection biases?
While observational evidence for the so-called dark matter anomaly keeps growing with more and more sophisticated measurements, we observe that a purely non-collisional fluid in the central, baryon-dominated regions of galactic halos cannot naturally explain the observed dynamical features. Surprisingly, DM cores have been found to show substantial correlations with the baryonic matter distribution at every mass scale, from dwarfs to giant elliptical galaxies. More general correlations between the distribution of BM and DM will be shown, indicating that some direct interaction, not only gravitational, should occur between the two sectors. We encourage a change of paradigm, from a purely theoretical approach to one where observational evidence drives any possible explanation of the phenomenon. We also encourage a more direct link between the astrophysics and particle physics communities, converging into a more efficient collaboration to constrain the respective searches and measurements.
As the WIMP paradigm comes under increasing tension thanks to the ever-increasing sensitivity of direct detection experiments, the majority of dark matter parameter space outside the weak scale remains unexplored. Molecular and nano-scale systems are particularly well suited to searches for sub-GeV DM, since their eV-scale electronic transitions may be excited through light dark matter interactions. Here, I will discuss the importance of molecular and mesoscopic systems as new directions in the direct detection of dark matter, focusing on the use of quantum dots (QDs) and organic crystals as detector targets. I will show that QDs present a particularly interesting target with inherently low-background signals and low-cost scalability. I will present the molecular Migdal effect as a new directional method to detect DM nuclear recoils using molecular systems. Finally, I will discuss the potential synergy between nanomaterials and molecules, as well as applications of these formalisms for indirect detection.
The study of dark matter (DM) encompasses a wide range of models that have been extensively investigated using accelerators, underground detectors, and astroparticle physics experiments. While numerous approaches have been employed, high-energy astrophysical observations offer distinct advantages in unravelling the nature of DM candidates that are challenging to explore in laboratory settings. Notably, annihilation signals from weakly interacting massive particles (WIMPs) with masses below 1 TeV have been subject to stringent constraints by current-generation experiments. The upcoming Cherenkov Telescope Array (CTA) will detect gamma rays with energies between 20 GeV and 300 TeV with unprecedented sensitivity, probing significantly heavier WIMPs and exploring additional DM candidates. This presentation aims to elucidate the search for DM and for physics beyond the Standard Model using CTA.
Gravitational evidence at different cosmological scales hints at the existence of a dark component of the Universe, which accounts for up to 85% of its matter density. Several particle models for dark matter (DM) predict that it should interact weakly with Standard Model particles. Among the signals arising from these interactions, gamma rays are one of the most promising channels. However, despite all the efforts, DM has eluded any clear detection. Among the different astrophysical objects that we can consider as targets for such searches, dwarf irregular galaxies (dIrrs) have gained a lot of attention in recent years. We will revisit their science case, the existing gamma-ray studies carried out on these objects and their results. Finally, we will put these results in context with respect to DM gamma-ray searches in other objects and explore future prospects to exploit, even further, the capabilities of dIrrs to unveil the nature of dark matter.
We use surface brightness and velocity dispersion data to constrain the properties of 10 Milky Way dwarf spheroidal galaxies (dSphs), which span over an order of magnitude in effective radius and over 4 orders of magnitude in stellar mass, and show no signs of tidal disruption. To alleviate the degeneracy between galaxy mass and velocity anisotropy (beta), the “M-beta degeneracy”, we consider boundary conditions of the spherical Jeans equation as r --> 0. These boundary conditions constrain the coefficients of a general beta parametrization at the center of each dSph dark matter (DM) halo, which we model separately with either a cored (Burkert) or a cuspy (NFW) DM density profile. The resulting best-fit NFW models tend to fit the data more poorly than the best-fit Burkert models. The best-fit NFW models also require more radial velocity anisotropy at small dSph radii and more tangential anisotropy at large radii than the best-fit Burkert models do. For both the best-fit Burkert and NFW models, we find strong correlations between the scale radius R_* and effective radius R_{eff} of the dSph luminous matter distributions and all best-fit DM halo parameters.
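For reference (standard notation, not specific to this analysis), the spherical Jeans equation underlying such modelling reads $\frac{d(\nu\sigma_r^{2})}{dr} + \frac{2\beta(r)}{r}\,\nu\sigma_r^{2} = -\nu(r)\,\frac{G M(r)}{r^{2}}$, with $\beta(r) \equiv 1 - \sigma_\theta^{2}(r)/\sigma_r^{2}(r)$, where $\nu$ is the tracer density and $\sigma_r$, $\sigma_\theta$ are the radial and tangential velocity dispersions. The same observed dispersion profile can be reproduced by trading the enclosed mass $M(r)$ against the anisotropy $\beta(r)$, which is the M-beta degeneracy that the $r \to 0$ boundary conditions are designed to break.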
Ultralight dark matter (ULDM) is an intriguing dark matter candidate with astrophysically testable predictions. While single field models have been widely studied, they are by now fairly constrained by observations. However, in particle physics, models with N light scalar fields which interact only gravitationally are equally well motivated. In my talk, I will explore this possibility and present results from multifield ULDM simulations. I will show that dark matter halos composed of N fields are smoother and introduce less stellar velocity dispersion relative to the single field case. This results in relaxed constraints from stellar heating in ultrafaint dwarf galaxies.
This talk is based on JCAP 03 (2023) 018. In this work, we performed a comprehensive study of the signatures of Lorentz violation in electrodynamics on the Cosmic Microwave Background (CMB) anisotropies. In the framework of the minimal Standard Model Extension (SME), we considered effects generated by renormalizable operators, both CPT-odd and CPT-even. These operators are responsible for sourcing, respectively, cosmic birefringence and circular polarization. We propagated jointly the effects of all the relevant Lorentz-violating parameters to CMB observables and provided constraints with the most recent CMB datasets. The bounds we found are orders of magnitude stronger than previous CMB-based limits, superseding also bounds from non-CMB searches. This analysis provides the strongest constraints to date on CPT-violating coefficients in the minimal SME from CMB searches.
We have recently implemented in the public L-Galaxies 2020 semi-analytical model our treatment of dust production from evolved stars and subsequent evolution driven by ISM processes. In this contribution, we discuss how the properties of simulated galaxies depend on the large-scale environment, focusing in particular on their dust component.
Based on the idea of cyclic conformal cosmology, we postulate that supermassive black holes break up at the end of a cycle of creation and are then broken down further at the onset of inflation. To do this, we use the Bose-Einstein condensation (BEC) formulation to describe the effect of entropy production for black holes, as well as a previous document discussing a quantum number n attached to black holes. We formulate the entropy and the quantum number n, and then utilize the minimum uncertainty principle, $\Delta E \, \Delta t = \hbar$, to obtain a prototype time step $\Delta t$ for the breakup of supermassive black holes into countless Planck-mass-sized black holes. This helps to link entropy, time step and primordial conditions, and to define when the cosmological constant may form and the initial inflationary expansion "speed". Finally, we argue that if the cosmological constant is dark energy, it is formed initially due to primordial black holes, as discussed in this paper.
Extensions to the standard Lambda-CDM model have the potential to explain observed cosmological phenomena such as dark matter and dark energy. Measurements of the masses of galaxy clusters using different methods provide a great opportunity to contrast modifications of gravity against the standard GR scenario at small (non-cosmological) scales. The Chameleon and Vainshtein screening mechanisms are two promising ingredients of modified gravity theories, which modify the gravitational potential through a fifth force. We evaluate the hydrostatic and caustic mass estimates of 5 galaxy clusters with robust data on the dynamical and kinematic observables, under GR and under the screening mechanisms. We aim to assess the effect that the screening mechanisms have on the mass bias while constraining the modified gravity scenarios.
The accelerated expansion of the Universe is one of the greatest mysteries of modern cosmology. Upcoming and future cosmological observations will help to shed light on this feature of our Universe. The accelerated expansion is canonically attributed to Dark Energy (DE), encapsulated in the Lambda term of the Einstein field equations of gravity, but its nature is still not understood. While observations supply strong evidence in favor of the standard Lambda-CDM model of cosmology, a plethora of modified gravity (MG) models can still arise and describe gravity and DE differently from a cosmological constant. In addition, some tensions have been found within Lambda-CDM which could be hinting at new physics beyond the standard model, and MG models can provide some alleviation of these tensions. In our work, we exploit the Effective Field Theory (EFT) description, which allows us to describe gravity and DE in a general way, encompassing single-field models. The strength of this approach is that we can describe not only general features of gravity but also recover model-dependent results through a mapping procedure. Building on this theoretical setting, we test Lambda-CDM and MG/DE models using cross-correlations of diverse probes and forecast the future sensitivity to discriminate among MG models. With the advent of next-generation wide galaxy surveys and the high-sensitivity maps of the microwave sky delivered by Planck and expected from future CMB data, it is crucial and timely to investigate the interactions and complementarities of probes of the Universe that can shed light on gravity on cosmological scales.
I would like to present a poster on this work. Differential Chromatic Refraction (DCR), caused by the wavelength dependence of the refractive index of our atmosphere, is usually an effect we need to mitigate in ground-based observations. However, DCR depends on the spectral energy distribution (SED) of an object, meaning that light from sources such as supernovae (both Type Ia and Type II) and quasars, with their distinctive emission lines, is refracted differently depending on the redshift of the source. We investigate how this can be used to our advantage to estimate astrometric redshifts of supernovae from multi-band, time-series photometry. First, we calculate these effects using image simulations and evaluate the accuracy of the astrometric redshifts and how they depend on observing strategies, such as filter choices and airmass distribution, as well as on analysis methods. We then quantify how much combining our astrometric redshifts with conventional photometric redshifts improves the measurements. We believe that our analysis will enhance the accuracy of redshift measurements for the upcoming Large Synoptic Survey Telescope (LSST) observations, which will be especially valuable since we will not be able to obtain spectroscopic redshifts for the vast number of supernovae that will be detected.
We study the reliability of the MG-PICOLA code through resolution tests in which we vary the numerical parameters of the cosmological simulations. We carry out the analysis with three modified gravity (MG) models: Hu-Sawicki $f(R)$, nDGP (the normal branch of the Dvali-Gabadadze-Porrati model), and the Symmetron. For the nDGP model we compare our results with those of the MG-GLAM code. We find that MG-PICOLA simulations are suitable for the rapid exploration of MG models, since they achieve reliable results for moderately large values of the numerical parameters with short execution times. We use these results to search for differences between the mass power spectrum of the MG models and that of standard GR.
According to the standard hot Big Bang model of cosmology, the universe was mostly ionized and hot at very early stages. It then cooled with time and became predominantly neutral around 380,000 years after its birth. Reionization is the era when the universe was ionized again by photons coming from the first luminous sources. This is still one of the least understood phases in the evolutionary history of the Universe, and is often described as one of the final frontiers of modern cosmology. The ionization and thermal state of the intergalactic medium (IGM) during the epoch of reionization have attracted much interest in recent times because of their close connection to the first stars. We constrain the thermal and ionization history of the universe using a semi-numerical photon-conserving model, SCRIPT, and a variety of observables such as the UV luminosity function, low-density IGM temperatures, and the CMB scattering optical depth. We study the consequences of physical effects like inhomogeneous recombination and radiative feedback on reionization, which are necessary for accurate modelling. We find that the model parameters are reasonably well constrained, providing useful insights into the reionization timeline. As we track the inhomogeneities in the medium, we can also compute the large-scale 21 cm power spectrum, which quantifies the fluctuations in the neutral hydrogen field, and we assess its prospects as a tracer of reionization. Our study involves creating a mock data set corresponding to the upcoming SKA-Low, followed by a Bayesian inference analysis to constrain the model parameters. In particular, we explore in detail whether the inferred parameters are unbiased with respect to the inputs used for the mock and whether the inferences are insensitive to the resolution of the simulation. We find that the model is reasonably successful on both fronts. However, the likelihood computation for the reionization parameter-space exploration can be quite expensive depending on the case. To tackle this issue, we have also developed a novel technique based on a likelihood emulator, built by Gaussian Process Regression (GPR) training with SCRIPT, which shows the potential to significantly speed up the parameter exploration.
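As a generic illustration of the emulator idea only (this is not the SCRIPT pipeline; the parameter names, ranges and the toy log-likelihood below are invented placeholders), a Gaussian Process Regression emulator can be trained on a modest number of expensive likelihood evaluations and then queried cheaply inside a Bayesian parameter exploration:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_loglike(params):
    # stand-in for a full reionization simulation + likelihood evaluation
    zeta, log_mmin = params
    return -0.5 * (((zeta - 15.0) / 3.0) ** 2 + ((log_mmin - 9.0) / 0.5) ** 2)

rng = np.random.default_rng(42)
# a small Latin-hypercube-like training set over a toy 2D parameter space
train_x = np.column_stack([rng.uniform(5, 30, 40), rng.uniform(8, 10, 40)])
train_y = np.array([expensive_loglike(p) for p in train_x])

# fit the GP emulator on (parameters, log-likelihood) pairs
gp = GaussianProcessRegressor(kernel=Matern(length_scale=[3.0, 0.3], nu=2.5),
                              normalize_y=True).fit(train_x, train_y)

# query the emulator at a new point, with its predictive uncertainty
test = np.array([[14.0, 9.1]])
mean, std = gp.predict(test, return_std=True)
print(f"emulated logL = {mean[0]:.3f} +/- {std[0]:.3f}, "
      f"true = {expensive_loglike(test[0]):.3f}")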
In my talk I show that pre-inflationary quantum fluctuations can provide a scenario for the initial conditions of the inflaton field. The proposal is based on the assumption that at very high energies (higher than the energy scale of inflation) the vacuum expectation value (VEV) of the field is trapped in a false vacuum; then, due to renormalization-group (RG) running, the potential starts to flatten out toward low energy, eventually tending to a convex one that allows the field to roll down to the true vacuum. I argue that the proposed mechanism should apply to large classes of inflationary potentials with multiple concave regions. The findings favor a particle physics origin of chaotic, large-field inflationary models, as we eliminate the need for large field fluctuations at the GUT scale. In the analysis, I provide a specific example of such an inflationary potential, whose parameters can be tuned to reproduce the existing cosmological data with good accuracy.
The two bright early galaxy candidates GL-z10 and GL-z12 [Naidu et al 2022 ApJL 940 L14; Castellano et al 2022 ApJL 938 L15], discovered with the Near Infrared Camera (NIRCam) imaging data from the GLASS-JWST Early Release Science Program, are unexpected, since they must be over a million solar masses and must have built up their masses in less than 300-400 Myr after the Big Bang in the LCDM model. Similarly, the masses of six candidate galaxies at 7.4<z<9.1 [Labbe et al 2023 Nature] appear to be a factor of ~20-1000 higher than the expected values. The recent discovery of two companion sources to the strongly lensed galaxy SPT0418, which possess extraordinarily high metallicity at a cosmic age of 1.4 Gyr [Peng et al 2023 944 L36], also appears to be in tension with LCDM cosmology. While attempts are ongoing to modify the theories of galaxy formation in the early universe, we believe that it is equally important to look for alternatives to the expansion history of the early universe that may allow time for the formation of these massive objects at the respective redshifts. An already discussed `eternal coasting' model of the universe (with scale factor varying linearly with time) [John & Joseph 2000 Phys. Rev. D 61, 087304] is capable of explaining most cosmic observations without suffering from any of the cosmological problems (including the coincidence problem). This model has an age of 1.07 Gyr at redshift z=12; at z=20, it is ~700 Myr. Some special features of structure formation in this model are discussed.
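As a quick consistency check of the quoted ages (assuming, for illustration, $H_0 \simeq 70$ km s$^{-1}$ Mpc$^{-1}$, a value not stated above): in a linearly coasting model $a \propto t$, so $1+z = t_0/t$ and $t(z) = t_0/(1+z)$ with $t_0 = 1/H_0 \simeq 14$ Gyr, giving $t(z{=}12) \simeq 14/13 \simeq 1.08$ Gyr and $t(z{=}20) \simeq 14/21 \simeq 0.67$ Gyr, in line with the values quoted above.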
The main idea of this work is to follow the dynamical evolution of the orbits of the Globular Cluster (GC) subsystem over a look-back time of up to 10 Gyr. This allows us to estimate the possible interaction of GCs with the Galactic Center (GalC) region, including the influence of the supermassive black hole (SMBH), which has changed dynamically in the past. To reproduce the structure of the Galaxy in time, we used external potentials selected from the large-scale cosmological database IllustrisTNG-100, whose properties (mass and size of the disk and halo) are similar to the physical values of the Milky Way at present. In these potentials, we reproduced the orbits of 147 GCs over a 10 Gyr look-back time using our own high-order parallel N-body dynamical code, phi-GPU. The initial proper motions, radial velocities and heliocentric distances of each GC are taken from the Baumgardt & Vasiliev (2021) catalog based on Gaia Data Release 3. To identify clusters that interact with the GalC and the SMBH, we used the criterion that the relative distance between the SMBH and the GC falls within four times the GC half-mass radius. Using this simple criterion, we obtained statistically significant rates of close passages of GCs by the Galactic Center and the SMBH. We analyzed the influence of the SMBH on the GC orbital evolution in order to find GCs that undergo such close passages and to estimate the statistical probability of such events. For the selected GCs, we generated an initial mass function and performed full N-body modeling to assess the potential influence of the SMBH in the Galactic Center region on the GC stellar populations.
Dark matter (DM) was originally discovered in clusters of galaxies by Zwicky (1933). These systems remain excellent laboratories for probing DM properties through gravitational lensing and the dynamical equilibrium of the cluster visible components (intra-cluster plasma and galaxies). Comparison of the shape of cluster mass density profiles with predictions from cosmological simulations is used to constrain the properties of DM. Particularly useful in this respect is the determination of the inner slope (gammaDM) of the DM density profile. Cold DM cosmological simulations predict gammaDM~1. While significantly flatter slopes have been obtained in the literature, new results appear to reconcile the observational determinations of gammaDM with numerical predictions.
About 100 gravitational-wave signals have been detected by the LIGO-Virgo network during the first three observing runs since 2015. The fourth observing run, O4, has recently started with the addition of the KAGRA detector. Several upgrades have improved the detectors' sensitivity, and hence the rate of GW detections, over the years. In the talk I will describe the past GW observations, the status of the detectors, the recent and planned upgrades, and the expected performance.
Measurements of variations in the Earth's rotation rate, certainly important for Earth science, are also relevant for fundamental physics, as they contain general relativity terms, such as the de Sitter and Lense-Thirring effects, and they provide unique data to investigate Lorentz violations. Long-term continuous operation and very high sensitivity are required; the limit to be reached to study fundamental physics is 1 part in 10^9 of the Earth's rotation rate. The GINGER project is based on an array of ring lasers and will be installed inside the Gran Sasso laboratory; the plan is to have GINGER operative within three years.
The latest SH0ES results claim that the distance scale route to H0 is accurate to ±1%, which creates a 9% or 5 sigma discrepancy with the Planck H0 value. Here we study the SH0ES error budget in the three rungs of their distance ladder. After checking the validity of the suggested correction to the Gaia distances using ≈ 1000 open clusters, we confirm that the previous Milky Way HST calibration of the Cepheid Period-Luminosity (PL) relation is discrepant at the 9% level with the new Gaia Cepheid PL calibration, or 3× the error originally quoted by SH0ES. Secondly, using open clusters we find evidence for significant variations in the Galactic reddening law that can move the ratio of total to selective absorption from R = 3.3 to R ≈ 4; this source of error is not included in the SH0ES error budget. Thirdly, using a maximum likelihood technique, we find that photometric incompleteness in the PL relations in the SH0ES SNIa calibrating galaxies can cause underestimation of their distances, resulting in an ≈ 3% reduction in the H0 value. Finally, we find that the inclusion of a peculiar velocity correction for the 'Local Hole' may cause a further 2.6% reduction in the SH0ES H0 measurement. We conclude that the SH0ES 1% overall error may be an underestimate. For example, applying just our Cepheid incompleteness and 'Local Hole' corrections would already result in a SH0ES H0 value no higher than 70 km s^-1 Mpc^-1.
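As a rough illustration of the combined size of the last two effects (taking, purely for illustration, a SH0ES baseline of H0 ≈ 73 km s^-1 Mpc^-1, an assumed number not quoted above): 73 × (1 - 0.03) × (1 - 0.026) ≈ 69 km s^-1 Mpc^-1, consistent with the statement that the corrected value would be no higher than 70 km s^-1 Mpc^-1.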
The “Standard Model of Cosmology” (ΛCDM) is increasingly in tension with new observations. High-redshift galaxies observed by JWST and clusters such as El Gordo appear to be too massive too early, whilst supervoids are more frequent and more underdense than expected. Local bulk flows also appear to be larger than predicted, and the Hubble Tension casts doubt on our understanding of dark energy. A particularly interesting idea here is that outflows of matter from a local supervoid, for which evidence already exists in the near-IR, X-ray and radio, may systematically enhance the local determination of the Hubble constant and ease the Hubble Tension. To test this hypothesis, we performed new Gpc-scale N-body simulations of the νHDM model, in which supervoids analogous to the inferred “Local Hole” are already known to form frequently (Angus+ 2013) due to the enhancement of gravity. Initial results suggest that the enhanced structure formation alone can generically enhance the locally determined Hubble constant, even for vantage points outside a Local Hole analogue. We also find that most vantage points observe a dipole in the Hubble constant. Finally, we investigate bulk flows on scales of a few 100 Mpc to test whether recent observations can be reproduced (Watkins+ 2023). Comparative simulations are also performed for ΛCDM and for two models we dub “ΛHDM” and “νCDM”, so that the individual contributions of enhanced gravity and dark matter are understood.
Peculiar motions dominate the kinematics of the local universe. Well beyond our local neighbourhood, too, peculiar-velocity surveys have repeatedly reported the existence of bulk peculiar flows extending out to several hundreds of Mpc. Historically, relative-motion effects have been known to interfere with the way we interpret observations in a number of cases. This work investigates the implications of bulk peculiar flows for the deceleration parameter of the universe, by employing a "tilted" cosmological model. The latter allows for two families of relatively moving observers, one of which follows the idealised reference frame of the Cosmic Microwave Background, while the other is identified with real observers living in typical galaxies like our Milky Way. We compare theory to observation by means of the Pantheon compilation of type Ia supernovae, to show that the "tilted-universe" scenario can reproduce the late-time cosmic acceleration without the need for dark energy or a cosmological constant. We also consider the possibility of a dipole-like signature in the sky distribution of the deceleration parameter using the most recent Pantheon+ supernova dataset.
The idea of neutrino-assisted early dark energy ($\nu$EDE), where a coupling between neutrinos and the scalar field that models early dark energy (EDE) is considered, was introduced with the aim of reducing some of the fine-tuning and coincidence problems that appear in usual EDE models. In order to be relevant in ameliorating the $H_0$ tension, the contribution of EDE to the total energy density ($f_\text{EDE}$) should be around 10\% near the redshift of matter-radiation equality. We verify under which conditions $\nu$EDE models can fulfill these requirements. We find that in the situation where the EDE field is frozen initially, the contribution to $f_\text{EDE}$ can be significant but it is not sensitive to the neutrino-EDE coupling and does not address the EDE coincidence problem. On the other hand, if the EDE field starts already dynamical at the minimum of the effective potential, it tracks this time-dependent minimum that presents a feature triggered by the neutrino transition from relativistic to nonrelativistic particles. This feature generates $f_\text{EDE}$ in a natural way at around this transition epoch, that roughly coincides with the desired redshift mentioned above. Nevertheless, we show that the values of the generated $f_\text{EDE}$ are too small to address the Hubble tension.
We investigate the imprints of new long-range forces mediated by a new light scalar acting solely on dark matter. Dark fifth forces in general will modify the background evolution as well as the growth of density fluctuations. At the linear level, constraints are derived from CMB together with a full-shape analysis of the power spectrum as measured by BOSS. At the non-linear level, the presence of fifth forces induces violation of the equivalence principle in cosmological correlators. This is encoded in the breaking of consistency relations at tree level for the bispectrum, which could be directly tested with future galaxy surveys. Combining this information with the full shape power spectrum at one loop leads to an unprecedented sensitivity on dark fifth forces.
Supernova (SN) cosmology is based on the assumption that the width-luminosity relation (WLR) and the color-luminosity relation (CLR) used in the type Ia SN luminosity standardization show no luminosity offsets with progenitor age. Contrary to this expectation, recent age determinations of stellar populations in host galaxies have shown significant correlations between progenitor age and Hubble residual (HR). Here we show that this correlation originates from a strong progenitor-age dependence of the zero-points of the WLR and CLR, in the sense that SNe from younger progenitors are fainter at given light-curve parameters x1 and c. This 4.6 sigma result is reminiscent of Baade's discovery of the zero-point variation of the Cepheid period-luminosity relation with population age and, as such, causes a serious systematic bias with redshift in SN cosmology. Other host properties show substantially smaller and insignificant offsets in the WLR and CLR for the same dataset, indicating that progenitor age is the root cause of the reported correlations between host properties and HR.
In this talk, we will first give a brief introduction to the $\Lambda_{\rm s}$CDM model, which explores the recent conjecture of a rapid transition of the universe from anti-de Sitter vacua to de Sitter vacua, viz., the cosmological constant switching sign from negative to positive at redshift ${z_\dagger\sim 1.7}$, inspired by the graduated dark energy (gDE). We will then present the results of a comprehensive observational analysis showing that, predicting $z_\dagger\approx1.7$, $\Lambda_{\rm s}$CDM simultaneously addresses the major cosmological tensions of the standard $\Lambda$CDM model, viz., the $H_0$, $M_B$, and $S_8$ tensions, along with some other less significant tensions such as the BAO Ly-$\alpha$ discrepancy. We will conclude with a theoretical discussion of the possible physical mechanisms through which this scenario may be realized and of their implications for our current understanding of the universe.
In this presentation, I present current advances in the theoretical description of the halo mass function (HMF) and halo bias (HB), aiming to improve our understanding of the dark sector through cluster cosmology with Euclid's photometric galaxy cluster survey. Utilizing a Bayesian approach and a suite of N-body simulations, we analyze the convergence of HMF and HB predictions, the impact of different halo finder algorithms, the violation of universality in the HMF, and the impact of baryons. Our prescriptions achieve sub-percent accuracy across distinct cosmological model variants, including massive neutrino cosmologies and different baryonic prescriptions. This research emphasizes the importance of cluster cosmology and lays the foundation for more accurate cosmological inferences in the future.
In this talk I will give an overview of the BeyondPlanck project, which was an end-to-end re-analysis of the LFI data, and discuss a future extension to HFI within the Cosmoglobe collaboration. This method aims to process raw time-ordered data into final cosmological and astrophysical products within one computer code, and implements standard Bayesian Monte Carlo methods. By now this framework has been successfully applied to LFI and WMAP, and COBE-DIRBE is well on its way. In this talk I will describe early steps toward HFI processing.
Cosmic Microwave Background (CMB) photons experience weak gravitational lensing due to the large-scale structure of the Universe along their path. The weak lensing generates a divergence-free component of the polarization field known as B-modes, through the gravitational lensing of the primordial curl-free component of the field, known as E-modes. Estimating the lensing power spectra and delensing the CMB maps from recent and upcoming surveys are crucial tasks for the advancement of observational cosmology. However, one of the main obstacles to these tasks is the presence of foreground contamination in the CMB maps. In the era of high-sensitivity CMB experiments, the bias in delensing arising from these sources may exceed the statistical uncertainties of the observational data. Weak lensing remaps the primordial fields and introduces correlations between multipole moments in harmonic space. A quadratic combination of properly filtered maps' multipole moments can be used as an estimator to probe the mass distribution responsible for the lensing. Our objective is to study the efficiency of different lensing estimators in the context of upcoming CMB surveys and then provide new methods to mitigate the biases in lensing reconstruction. In our work, we test polarization-based quadratic estimators on simulated lensed CMB maps that possess properties corresponding to the CMB-S4 survey. Here, we present the foreground bias in quadratic estimators on simulated maps contaminated by galactic emissions.
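Schematically (this is the standard quadratic-estimator formalism, e.g. Hu & Okamoto 2002, quoted for context rather than our specific implementation): lensing remaps a primordial field as $\tilde{X}(\hat{n}) = X(\hat{n} + \nabla\phi(\hat{n}))$, which at first order in the lensing potential $\phi$ induces off-diagonal correlations $\langle \tilde{X}(\boldsymbol{\ell}_1)\,\tilde{Y}(\boldsymbol{\ell}_2)\rangle_{\rm CMB} = f^{XY}(\boldsymbol{\ell}_1,\boldsymbol{\ell}_2)\,\phi(\boldsymbol{\ell}_1+\boldsymbol{\ell}_2)$ for $\boldsymbol{\ell}_1+\boldsymbol{\ell}_2 \neq 0$. The quadratic estimator is then a weighted combination $\hat{\phi}(\mathbf{L}) \propto \int d^{2}\ell\; g^{XY}(\boldsymbol{\ell},\mathbf{L})\,\bar{X}(\boldsymbol{\ell})\,\bar{Y}(\mathbf{L}-\boldsymbol{\ell})$ of inverse-variance-filtered maps $\bar{X}$, $\bar{Y}$; any foreground residuals left in those filtered maps propagate into this combination, which is the origin of the bias studied here.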
Minkowski Functionals (MFs) are statistical tools that describe the geometry and topology of a field and can therefore probe information complementary to the angular power spectrum, such as non-Gaussianity and deviations from statistical isotropy. MFs have been used in many applications, such as blind tests of non-Gaussianity in the CMB, improvement of parameter constraints in weak lensing maps, characterization of the morphology of foregrounds, and exploitation of non-linear scales in the Large Scale Structure. In this talk, I will introduce MFs and some of the key aspects of their mathematical foundation. I will show how we have extended the MF formalism to the CMB polarization (spin) field in two different frameworks that can exploit the full information of polarization, beyond E and B modes (following arXiv:2211.07562 and arXiv:2301.13191). These extensions can further test the Gaussianity and isotropy of polarized emission. I will also mention some new applications of these tools, including the analysis of the polarized dust foreground and the exploitation of the lensing-shear non-Gaussianity. Finally, I will introduce Pynkowski [https://github.com/javicarron/pynkowski], a public Python package that we have developed to compute MFs and other higher-order statistics on different kinds of data and simulations (including CMB and Large Scale Structure), as well as the theoretical predictions for different kinds of fields. This talk is based on work done in collaboration with Alessandro Carones, Domenico Marinucci, Marina Migliaccio, and Nicola Vittorio.
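As a self-contained illustration of the simplest Minkowski functional (a toy sketch in plain numpy/scipy, not an example of the Pynkowski API; the map size, smoothing scale and thresholds are arbitrary choices), the following measures the area fraction of the excursion set of a Gaussian map as a function of threshold and compares it with the Gaussian expectation $\frac{1}{2}\,\mathrm{erfc}(\nu/\sqrt{2})$:

import numpy as np
from scipy.special import erfc
from scipy.ndimage import gaussian_filter

# First Minkowski functional V0(nu): area fraction of the excursion set
# {f > nu} of a unit-variance Gaussian random field.
rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((512, 512)), sigma=4)
field /= field.std()                      # normalise to unit variance

thresholds = np.linspace(-3, 3, 25)
v0_measured = [(field > nu).mean() for nu in thresholds]
v0_gaussian = 0.5 * erfc(thresholds / np.sqrt(2))

for nu, m, g in zip(thresholds[::6], v0_measured[::6], v0_gaussian[::6]):
    print(f"nu={nu:+.2f}  measured={m:.3f}  Gaussian={g:.3f}")

Deviations of the measured curve from the Gaussian prediction are the kind of signature that the full set of MFs, including the perimeter and genus functionals, is designed to capture.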
Probing the Universe's large-scale structure (LSS) leads to a wealth of cosmological information. With the advent of unprecedentedly giant radio telescopes, we can start using the neutral hydrogen (HI) 21-cm emission to trace the LSS. In particular, a novel observational strategy is catching on: Intensity Mapping (IM). With IM, we relax the requirement of source detection and go after all the integrated 21-cm emissions: we can produce detailed three-dimensional maps of a good fraction of the observable Universe. On the one hand, this strategy carries a potentially revolutionary science output. But, on the other, these observations have been extremely challenging to perform. In particular, disentangling the HI IM signal from orders-of-magnitude more intense and intricate contaminants is the thorniest problem. In this talk, I will discuss how we address this challenge with first-of-their-kind observational data from the MeerKAT radio telescope, a precursor to the SKA Observatory (SKAO). Our ongoing work demonstrates that a radio array operating as a collection of independent telescopes can probe the IM cosmological signal, marking a milestone for the cosmology science case with the entire SKAO.
In this talk, I will present the results of two recent papers that both use observations of TeV gamma-rays from blazars to constrain physics models. In the first paper, we use the flux spectra from TeV blazars to constrain the viable axion-like particle parameter space. We show that an axion-like particle that mixes with photons can lead to an overproduction in the TeV spectra of distant blazars, compared to observations. In the second paper we show that adding the blazar contribution to the isotropic gamma-ray background overproduces the observed background unless blazars have an intrinsic cutoff in their TeV spectra. However, this is in tension with local blazar data, indicating the need for a modification of current astrophysical models or new physics.
Type Ia supernovae (SNae Ia), standardisable candles that allow tracing the expansion history of the Universe, are instrumental in constraining cosmological parameters, particularly dark energy. State-of-the-art likelihood-based analyses scale poorly to future large data sets, are limited to simplified probabilistic descriptions, and must explicitly sample a high-dimensional latent posterior to infer the few parameters of interest, which makes them inefficient. On the other hand, truncated marginal neural ratio estimation (TMNRE), an inference technique based on forward simulations, can fully account for complicated redshift uncertainties, contamination from non-SN Ia sources, selection effects, and a realistic instrumental model, while implicitly marginalising latent and population-level parameters to directly derive posteriors for the cosmological parameters of interest. We present an application of TMNRE to supernova cosmology in the context of BAHAMAS, a Bayesian hierarchical model for SALT parameters. We verify that TMNRE produces unbiased and precise posteriors for cosmological parameters from up to 100 000 SNae Ia. With minimal additional effort, we train a neural network to infer simultaneously the O(100 000) latent parameters of the supernovae (e.g. absolute brightnesses). Lastly, we present recent improvements to the simulator that allow it to realistically model light curves based on a probabilistic spectral energy distribution model (BayeSN), tailoring its output to current and near-future surveys. Analysing these much more complicated data requires the adoption of modern set-based neural network architectures and an extension of the truncation methodology to hierarchies of parameters, which we also discuss.
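To convey the core idea behind neural ratio estimation (a deliberately toy, one-parameter sketch with an invented Gaussian "simulator"; it is not the BAHAMAS/TMNRE pipeline and omits the truncation step), a classifier trained to distinguish joint pairs (theta, x) from marginal pairs learns the likelihood-to-evidence ratio, from which a posterior follows by multiplying the prior:

import numpy as np
from sklearn.neural_network import MLPClassifier

# The classifier output d estimates p(joint)/(p(joint)+p(marginal)),
# so d/(1-d) estimates the ratio p(x|theta)/p(x).
rng = np.random.default_rng(0)
n = 20_000
theta = rng.uniform(-2, 2, n)                  # prior draws
x = theta + 0.3 * rng.standard_normal(n)       # toy "simulator"

joint = np.column_stack([theta, x])                        # label 1
marginal = np.column_stack([rng.permutation(theta), x])    # label 0
X = np.vstack([joint, marginal])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(X, y)

# Posterior (up to normalisation, flat prior) for one observed x0.
x0, grid = 0.5, np.linspace(-2, 2, 201)
d = clf.predict_proba(np.column_stack([grid, np.full_like(grid, x0)]))[:, 1]
d = np.clip(d, 1e-6, 1 - 1e-6)
posterior = d / (1 - d)
print(grid[np.argmax(posterior)])              # should be near x0 = 0.5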
We describe a new mechanism that gives rise to dissipation during cosmic inflation. In the simplest implementation, the mechanism requires the presence of a massive scalar field with a softly-broken global U(1) symmetry, along with the inflaton field. Particle production in this scenario takes place on parametrically sub-horizon scales. Consequently, the backreaction of the produced particles on the inflationary dynamics can be treated in a \textit{local} manner, allowing us to compute their effects analytically. We determine the parametric dependence of the power spectrum which deviates from the usual slow-roll expression. Non-Gaussianities are always sizeable whenever perturbations are generated by the noise induced by dissipation.
In the near future, the cosmic large-scale structure will be mapped in increasing detail by the next generation of observational facilities operating at various wavelengths (radio, optical, infrared) and exploiting various techniques. Meanwhile, theory and simulations are becoming increasingly sophisticated in their ability to describe large-scale structure. These advances could potentially allow tighter constraints on cosmological parameters and on theories of galaxy evolution. In this talk, I will expand on these ideas and show how we constrain cosmological parameters using the Fisher formalism, specifically the fNL parameter. Constraining fNL provides important information about the mechanisms that generated the primordial non-Gaussian fluctuations and about the physics of the early universe. Observations of the CMB, large-scale structure and galaxy clustering have been used to place limits on fNL. However, current constraints are still far from the target precision needed to discriminate between different models of the early universe. I will also review the current state of fNL constraints and the ongoing efforts to improve their accuracy. Finally, I will discuss the challenges and limitations of different observational methods and the potential for future experiments to significantly improve our understanding of fNL and the physics of the early universe.
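For reference (the generic Fisher-forecast relations, with the observables and their covariance left unspecified), the forecast errors in such analyses follow from $F_{ij} = -\langle \partial^{2}\ln L/\partial\theta_i\,\partial\theta_j \rangle$ together with the Cramér-Rao bound $\sigma(\theta_i) \geq \sqrt{(F^{-1})_{ii}}$, so that $\sigma(f_{\rm NL})$ is read off from the marginalised inverse Fisher matrix once the derivatives of the chosen observables with respect to $f_{\rm NL}$ and the other parameters are specified.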
We study gravitational baryogenesis in the context of non-minimal mimetic gravity, where the mimetic matter is non-minimally coupled to the Ricci scalar. Baryogenesis is the process by which baryons came to exceed anti-baryons in the early stages of the universe. We explore how non-minimally coupled mimetic gravity could successfully shed light on the problem of baryon asymmetry. Various types of baryogenesis interactions are considered in this work, and the effects of these interactions on the baryon-to-entropy ratio for this model are discussed. In addition, we show that the baryon asymmetry can be non-zero in this setup during the radiation era while the universe was expanding. Moreover, we investigate the baryon-to-entropy ratio for some specific models of non-minimal mimetic gravity and, using observational data, place constraints on the parameter space of these models.
We revisit the classic system of a Schwarzschild black hole with a thin accretion disk to investigate how to determine the parameters of this system (mass M, inclination angle i, and distance D) from observations of the shadow. A novel point of our analysis is that we allow the distance between the black hole and the observer to be finite. We show that one can determine (M, i, D) from the information contained in the shadow and the flux of the disk.
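For context (the standard Synge 1966 result in units $G=c=1$, quoted only as a reference point): a static observer at finite radius $r_{o}$ outside a Schwarzschild black hole of mass $M$ sees a shadow of angular radius $\alpha_{\rm sh}$ given by $\sin^{2}\alpha_{\rm sh} = \frac{27M^{2}}{r_{o}^{2}}\left(1-\frac{2M}{r_{o}}\right)$, which reduces to $\alpha_{\rm sh} \simeq 3\sqrt{3}\,M/r_{o}$ when $r_{o} \gg M$; it is this kind of finite-distance dependence, combined with the disk flux, that is exploited to disentangle $M$, $i$ and $D$.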
By infusing perturbation theory with information from small-scale N-body simulations, the EFTofLSS makes accurate predictions of summary statistics of the matter density field in the quasi-linear regime. In this work, we test the assumptions of the EFTofLSS by comparing its two flavours -- 1) a bottom-up construction which calculates the EFT coefficients by directly matching a summary statistic (e.g. the power spectrum) from perturbation theory to data, and 2) a top-down construction which estimates the coefficients from the stress tensor of the N-body simulation. Performing a study in 1+1-dimensions, we find the results from the two flavours to be in excellent agreement with each other, providing a consistency check on the assumptions that the theory makes.
Sterile neutrinos are well-motivated dark matter candidates. They arise naturally in models with a gauged U(1) B-L symmetry, where they are required to cancel anomalies. While previous studies have focused on the sterile neutrino abundance, knowing their spectrum is important to determine whether they can actually explain the observed dark matter. I numerically solve the full Boltzmann equation for sterile neutrinos in a model with a supercooled B-L phase transition. I identify regions in the parameter space spanned by the mass of the new Z' boson and its gauge coupling where the sterile neutrinos thermalize or where they keep a non-thermal spectrum. This allows the model to be compared with structure formation constraints.
The geometrical and dynamical parameters of an F(R,G) gravity cosmological model are constrained using cosmological data sets. The functional form of F(R,G) involves the square of the Ricci scalar and a higher power of the Gauss-Bonnet invariant. The observed values of the free parameters in the expression for the Hubble parameter H(z) indicate the different phases of the evolution of the Universe. In all the data sets, the early deceleration and late-time acceleration of the Universe are observed. We develop a set of dynamical equations for the given physical system and find the numerical solutions, along with the phase-space solutions and the stability of the individual critical points. We also discuss the asymptotic behavior of the critical points of the system.
The Epoch of Reionization (EoR) neutral hydrogen (H I) 21-cm signal evolves significantly along the line of sight (LoS) due to the light-cone (LC) effect. It is important to incorporate this accurately in simulations in order to interpret the signal correctly. 21-cm LC simulations are typically produced by stitching together slices from a finite number ($N$) of "reionization snapshots", each corresponding to a different stage of reionization. In this work, we have quantified the errors in the 21-cm LC simulation due to the finite value of $N$. We show that this can introduce large discontinuities (> 200%) at the stitching boundaries when $N$ is small (= 2, 4) and the mean neutral fraction jumps by $\delta\bar{x}_{\rm HI}$ = 0.2 and 0.1, respectively, at the stitching boundaries. This drops to 17% for $N$ = 13, where $\delta\bar{x}_{\rm HI}$ = 0.02. We present and validate a method for mitigating this error by increasing $N$ without a proportional increase in the computational costs, which are mainly incurred in generating the dark matter and halo density fields. Our method generates these fields only at a few redshifts and interpolates them to generate reionization snapshots at closely spaced redshifts. We use this to generate 21-cm LC simulations with $N$ = 26, 51, 101 and 201, and show that the errors go down inversely with $N$.
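As a schematic of the interpolation step only (not the actual simulation code; the linear-in-redshift weighting and the toy fields below are placeholder assumptions), the density field at an intermediate redshift between two stored snapshots can be approximated by a weighted combination before the ionization field and the corresponding light-cone slices are constructed there:

import numpy as np

# Approximate a gridded density field at redshift z from two stored
# snapshots at z1 > z > z2, so that closely spaced "reionization
# snapshots" can be generated without rerunning the expensive dark
# matter / halo computation at every output redshift.
def interpolate_field(field_z1, field_z2, z1, z2, z):
    w = (z1 - z) / (z1 - z2)            # w = 0 at z1, w = 1 at z2
    return (1.0 - w) * field_z1 + w * field_z2

rng = np.random.default_rng(1)
delta_z1 = rng.standard_normal((64, 64, 64))    # toy overdensity field at z1
delta_z2 = 1.2 * delta_z1                       # toy "grown" field at z2
delta_mid = interpolate_field(delta_z1, delta_z2, z1=10.0, z2=9.0, z=9.5)
print(delta_mid.std())                          # ~1.1, between the two inputs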
We provide a comprehensive analysis of constraints on supersymmetric gravitinos and axinos originating from spectral distortion, BBN and Lyman-alpha considerations. We analyze the current status and future prospects of such scenarios with cosmological probes. Furthermore, we provide complementary constraints from collider data and assess the future discovery prospects.
It is my duty as a philosopher to fight dogmatism - especially when aired in weird hypotheses bolstered up with high-brow mathematics. Anticipating the Einstein SR paper of 1905 by three weeks, Poincare published a formally more refined theory based on i = √(-1), where he predicted the reality of 'ondes gravifiques'. Against Einstein he argued that space has no innate metric, its only properties being topological. Andre Mercier, a co-founder of CERN, initiator of SR's 50 yr Jubilee, and founder of Gen.Rel.Grav., once wrote a paper to that journal having the startling title: 'Gravitation IS Time'. This made me invite him to the 1994 PIRT conf. at Imperial Coll., Ld. (sponsored by Brit.Soc.Phil.Sc.), where he argued that spacetime ought to be termed timespace, or supertime. The Oxford historian of cosmology J.D. North has given some examples, proving that spacetime curvature cannot be the cause of gravitation. My cosmological hero, the mathematician E.A. Milne, also Oxford, has demonstrated how local deviations from cosmic symmetry may cause the spontaneous emergence of forces, including gravity, in the universe. His colleague, A.G. Walker, then showed that the kinematic relativity of Milne, like GR, may be generalized to cover a variety of world-models. Lately P. Rowlands of Liverpool has argued that the universe must be simple, that physical laws must be invariant temporally and spatially, and that an absolute cosmic time is definable as "the birth-ordering of non-local quantum events"; he has also shown how all the "crucial" GR effects are derivable by purely classical (i.e., Newtonian or SR) means. Further, the inventor of analytic hyperbolic geometry and developer of its connexion with SR, A.A. Ungar, has deduced a general SR formula obviating the need for dark matter (but, unaware of this, a large army of observers are still wasting fortunes in a vain search for such stuff!). Moreover, T.v. Flandern has noticed that the Sun's force of gravity stems from its true position while its light is seen in another direction. Finally, E. Baird has proven that SR and GR are logically incompatible. So the GR of Einstein is not only in blatant conflict with observational fact, but simply inconsistent. There is thus every reason to search for alternatives to GR as well as to the L-CDM model based on GR!
The standard Lambda Cold Dark Matter cosmological model has been incredibly successful in explaining a wide range of observational data, from the cosmic microwave background radiation to the large-scale structure of the universe. However, recent observations have revealed a number of inconsistencies among the model's key cosmological parameters, which have different levels of statistical significance. These include discrepancies in measurements of the Hubble constant, the S8 tension, and the CMB tension. While some of these inconsistencies could be due to systematic errors, the persistence of such tensions across various probes suggests a potential failure of the canonical LCDM model. In this seminar, I will examine these inconsistencies and discuss possible explanations, including modifications to the standard model, that could potentially alleviate them. However, I will also discuss the limitations of these proposed solutions and note that none of them have successfully resolved the discrepancies. I will highlight the need for further investigation into these unresolved tensions and the potential for new physics beyond the standard model to provide a more complete understanding of the universe.
I will discuss recent progress on self-interacting dark matter and its implications within the context of the latest observations of galactic systems, as well as high-resolution N-body simulations of cosmic structure formation. I will highlight the novel signatures of gravothermal collapse in dark matter halos, which represent a unique prediction if dark matter possesses strong self-interactions.
I will review the prospects of quasars in the context of observational cosmology and present recent measurements of the expansion rate of the Universe based on a Hubble diagram of quasars detected up to the highest redshifts ever observed (z~7.5). A deviation from the ΛCDM model emerges at higher redshift, with a statistical significance of ~4σ. If an evolution of the dark energy equation of state is allowed, the data suggest a dark energy density increasing with time. I will finally show that the synergy among multi-wavelength facilities (current and future) will provide the sample statistics needed to obtain constraints on the observed deviations from the standard cosmological model that will rival and complement those available from other cosmological probes.
I will give an update on the SKA project, its current status and future steps, as well as an overview of its expected scientific capabilities, with a special focus on cosmology-driven radio surveys.
Primordial black holes (PBHs) could have formed in the very early Universe from large-amplitude perturbations of the metric. Their formation is naturally enhanced during the quark-hadron phase transition because of the softening of the equation of state: at a scale between 1 and 3 solar masses, the threshold is reduced by about 10%, with the corresponding abundance of PBHs increased by three orders of magnitude. Performing detailed numerical simulations, we have computed the modified mass function of such black holes, showing that the minimum of the threshold at the QCD transition works as an attractor solution. Confronting this with the LVK phenomenological models describing the GWTC-3 catalog, we have found that a sub-population of such PBHs formed in the solar-mass range is compatible with the current observational constraints and could explain some of the interesting gravitational-wave sources detected by LIGO/Virgo in the black hole mass gap, such as GW190814, and other light events.
There has been a revival of interest in PBHs in recent years, especially after the discovery of gravitational waves from merging black holes with masses of the order of tens of solar masses in the LIGO/Virgo observations. As black holes in this mass range may not form from known astrophysical processes, it has been argued that these objects may indeed be PBHs. On the other hand, PBHs are extensively studied as candidates for dark matter. The ultra-slow-roll (USR) setup has also been employed extensively in recent years as a mechanism to generate PBHs during inflation, since during this phase the curvature perturbation is not frozen and grows. We study PBH formation in multiple-field inflation in the diffusion-dominated regime and calculate the mass fraction and the contribution of PBHs to the dark matter energy density for various higher-dimensional field spaces. The fields undergo pure Brownian motion in a dS background with boundaries in the higher-dimensional field space. This setup can be realized towards the final stages of the USR phase, where the classical drifts fall off exponentially and the perturbations are driven by quantum kicks. We show that there are regions in the parameter space of the model where PBHs with various mass ranges can be generated. However, this model typically predicts PBHs which can only furnish a relatively small fraction of the dark matter.
Quasars shining during the Reionization epoch are ideal targets to investigate the early growth phases of massive galaxies and of the supermassive black holes located at their nuclei. I will present state-of-the-art near-infrared and millimeter observations, respectively probing the nuclear and host-galaxy properties of these systems, back to the earliest epochs (z~7.5). I will show that black-hole feedback is efficient mostly during the first Gyr, and that it drives the onset of the black-hole-galaxy coevolution observed at later epochs. I will provide an accurate description of the host-galaxy growth rate and of the cold/molecular gas reservoir. I will discuss how black-hole feedback globally affects gas kinematics and gas physical conditions, e.g. by reducing the molecular gas content and the ability of the gas to fragment and form stars, as well as by increasing the molecular gas excitation. Finally, I will show that quasar host galaxies lie in dense environments and that mergers play a key role in building the final mass of the host.
The Pierre Auger Observatory is the world's largest scientific facility dedicated to studying ultra-high-energy cosmic rays (UHECRs). In nearly twenty years of activity, the Pierre Auger Collaboration has investigated the origin and nature of the most energetic particles ever observed by humankind. Given their scarce fluxes, the detection of UHECRs is only possible indirectly, through the measurement of the extensive air showers they originate upon entering the Earth's atmosphere. While inferring the primary mass composition from air showers is an essential but difficult endeavour, the study of these showers provides an opportunity to investigate fundamental particle-physics properties at energies unattainable by terrestrial accelerators. In this talk, I shall present the Observatory, its major scientific discoveries on UHECRs and the current understanding of the high-energy hadronic interactions that rule the shower development. Finally, I will present the Observatory upgrade programme and the expected outcomes for the coming years.
Observations support the idea that supermassive black holes (SMBHs) power the emission at the centers of active galaxies. However, in contrast to stellar-mass BHs, their origin and physical formation channel are poorly understood. In this talk, we propose a new process of SMBH formation in the early Universe that is not associated with baryonic matter (massive stars) or primordial cosmology. In this novel approach, SMBH seeds originate from the gravitational collapse of fermionic dense dark matter (DM) cores that arise at the centers of DM halos as they form. We show that such a DM formation channel can occur before star formation, leading to heavier BH seeds than standard baryonic channels. The SMBH seeds subsequently grow by accretion. We compute the evolution of the mass and angular momentum of the BH using a general relativistic geodesic disk accretion model. We show that these SMBH seeds grow to ~10^9-10^10 Msun within the first Gyr of the lifetime of the Universe without invoking unrealistic (or fine-tuned) accretion rates. Based on doi: 10.1093/mnras/stad1380.
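As a back-of-the-envelope illustration of why heavier seeds relax the accretion requirements (the talk's actual computation uses a general relativistic disk accretion model), Eddington-limited growth with radiative efficiency $\epsilon$ gives
$$M(t) = M_{\rm seed}\,\exp\!\left[\frac{1-\epsilon}{\epsilon}\,\frac{t}{t_{\rm Edd}}\right], \qquad t_{\rm Edd} = \frac{\sigma_T c}{4\pi G m_p} \simeq 0.45\ {\rm Gyr},$$
so for $\epsilon \simeq 0.1$ a $\sim 10^5\,M_\odot$ seed reaches $\sim 10^9\,M_\odot$ (about 9 e-folds of growth) in roughly 0.5 Gyr, comfortably within the first Gyr.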
The origin of supermassive black holes (SMBHs) remains an open question in astrophysics. The presence of black holes more massive than a billion solar masses at high redshifts (z > 6) challenges many formation mechanisms. One particular mechanism, which invokes the collapse of Pop III.1 stars formed with the energy input from dark matter self-annihilation as seeds for SMBHs, alleviates this problem and can also explain the observed number density of SMBHs in our local universe (Banik, Tan & Monaco 2019; Singh, Monaco & Tan 2023). By applying this seeding mechanism in a cosmological simulation performed with the PINOCCHIO code in a 60 Mpc box, we can identify all the halos that would be seeded with these SMBHs and track their evolution. In this talk, I will give a brief overview of the seeding mechanism and present the results of our analysis of the seeded halos. In particular, I will discuss the evolution of the number density of SMBHs from this seeding mechanism, which shows that most of the seeds formed quite early in the universe compared to other mechanisms. Then I will discuss the occupation fraction of halos hosting these black holes and the evolution of their clustering, along with a comparison with data in the local universe. I will also present estimates of the evolution of the binary AGN fraction and of the amplitude of the gravitational wave background emanating from binary SMBHs, to compare with the latest results from the Pulsar Timing Array. Finally, I will show how we can differentiate among different seeding mechanisms by searching for high-redshift AGNs in the Hubble Ultra Deep Field.
The axion is a hypothetical new particle that could explain the absence of CP violation in QCD and has a very rich cosmological phenomenology. In particular a population of thermally produced axions is expected to exist, in addition to a cold dark matter population. I discuss a new conservative bound on the axion mass, from production in the early universe through scattering with pions below the QCD phase transition. In addition I will show that to further improve the bound and exploit the reach of upcoming cosmological surveys, reliable non-perturbative calculations above the QCD crossover are needed.
Cosmology is now entering the era of high-sensitivity CMB polarization experiments, which will target the detection of primordial B-modes to provide definitive evidence for the cosmic inflation scenario. Such a signal is predicted to be much fainter than the polarized Galactic emission (foregrounds) in any region of the sky, pointing to the need for effective component-separation methods. Given our currently limited knowledge of the polarized foregrounds, the blind Needlet-ILC (NILC) method is particularly relevant, since it does not assume any specific model for their emission. However, this algorithm cannot be straightforwardly applied to partial-sky CMB polarization data. Moreover, when tested on realistic simulations of future satellite experiments, the NILC CMB reconstruction is significantly contaminated by residual Galactic emission, which would bias the estimate of the tensor-to-scalar ratio. In this talk, after a brief introduction to the topic, I will show how NILC can be extended to partial-sky polarization observations of future ground-based CMB experiments, specifically addressing the major complications that such an extension entails. I will then present a new method, Multi-Clustering NILC (MC-NILC), which improves the foreground subtraction by performing the NILC variance minimization in several different sky patches, identified with a fully blind approach and by taking into account the spatial variability of the spectral properties of the B-mode Galactic emission. The new pipeline has been validated on realistic simulations of the LiteBIRD satellite. I will show that MC-NILC reaches the sensitivity on the tensor-to-scalar ratio targeted by the experiment independently of the assumed Galactic model. The results presented in this talk can be found in arXiv:2208.12059 and arXiv:2212.04456.
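For context, a minimal sketch of the variance-minimizing ILC weighting that NILC applies scale by scale in needlet space, and that MC-NILC additionally applies patch by patch (illustrative code, not the actual pipeline):

import numpy as np

def ilc_weights(maps):
    """maps: (n_freq, n_pix) array of needlet coefficients at one scale
    (for MC-NILC, restricted to one sky patch)."""
    a = np.ones(maps.shape[0])      # CMB mixing vector: achromatic in thermodynamic units
    C = np.cov(maps)                # empirical frequency-frequency covariance
    Cinv_a = np.linalg.solve(C, a)
    return Cinv_a / (a @ Cinv_a)    # minimize the variance subject to w . a = 1

def ilc_clean(maps):
    return ilc_weights(maps) @ maps # variance-minimized CMB estimate, foregrounds suppressed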
We present a model in which the problem of self-reproduction can be easily avoided in the inflationary universe, even when inflation starts at Planck scales. This is achieved by a simple coupling of the inflaton potential with a mimetic field. In this case, the problem of fine-tuning of the initial conditions does not arise, while eternal inflation and the multiverse with all their widely discussed problems are avoided. Authors: M. Khaldieh, A. Chamseddine, V. Mukhanov
We present updated cosmological nucleosynthesis constraints on several models of neutrino physics beyond the Standard Model. First, on the basis of the recent precise determination of the primordial abundance of He-4, we update the cosmological constraints on electron-sterile neutrino oscillation parameters. Second, we derive cosmological constraints on the lepton asymmetry in a model of degenerate primordial nucleosynthesis with neutrino oscillations and discuss a solution to the dark radiation problem in such a model. Third, we present updated constraints on the freezing temperature of the sterile neutrino in a model of right-handed neutrinos interacting with chiral tensor particles.
Observations imply that only 5% of the total energy of the universe is in the form of baryonic matter. The remaining 95% is dark matter (e.g. undiscovered particles) and dark energy (e.g. the cosmological constant, or a new scalar field of nature). Ongoing and future research is expected either to reveal the nature of the dark sector or to force a revision of our fundamental theories. Astronomical observations probe temporal, spatial and energy scales unavailable in terrestrial experiments and are therefore better suited for this purpose. New, advanced astronomical instrumentation is being built to perform unique tests of fundamental physics, complementary to those made using supernovae, the large-scale structure, the Cosmic Microwave Background, and gravitational lensing. I will present how high-precision quasar absorption spectroscopy can be used: (1) to probe new physics by searching for variations in the fundamental constants of physics, and (2) to directly measure the temporal redshift evolution (redshift drift) of objects in the cosmic expansion flow. The two projects are also science goals of the Extremely Large Telescope and of its ANDES instrument in particular.
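The redshift-drift (Sandage-Loeb) signal mentioned in point (2) is, for a standard FLRW universe,
$$\frac{dz}{dt_{\rm obs}} = (1+z)\,H_0 - H(z),$$
corresponding to a spectroscopic velocity drift $\dot v \simeq c\,\dot z/(1+z)$ of only a few cm s$^{-1}$ per decade in ΛCDM, which is what drives the extreme stability requirements of instruments such as ANDES.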
Magnetic monopoles are an inevitable prediction of GUT theories. They are produced during phase transitions in the early universe, but mechanisms such as the Schwinger effect in strong magnetic fields could also contribute significantly to the monopole number density. I will show that the detection of intergalactic magnetic fields allows us to infer additional bounds on the magnetic monopole flux. I will discuss the implications of these bounds for minicharged monopoles, for magnetic black holes, and for the possibility of monopoles as dark matter candidates.
Deviations from the standard cosmological model ($\Lambda$CDM) at early times, specifically in the context of Early Dark Energy (EDE), have garnered attention in the cosmological community as a potential solution to the Hubble/sound-horizon tension. These deviations can also be achieved through modifications of gravity, providing an alternative way to modify the expansion history and potentially alleviate the $H_0$ tension. Such modifications also impact the growth of perturbations, altering the shape of the Cosmic Microwave Background (CMB) spectra and the inferred value of the $S_8$ parameter from Planck data. In this talk, I will present results for a specific modified gravity model known as the Transitional Planck Mass (TPM) model. This model incorporates a transition in the value of the effective Planck mass (or effective gravitational constant) on cosmological scales prior to recombination. I will show how such a transition can be obtained within the framework of the Effective Field Theory of Dark Energy and Modified Gravity, and present constraints on the model obtained using CMB, Baryon Acoustic Oscillation, and Type Ia Supernova data. The constraints obtained for the TPM model prefer a ~5% shift in the value of the effective Planck mass (<10% at 2$\sigma$) prior to recombination. The transition in the TPM model can occur at any point over multiple decades of the scale factor prior to recombination, characterized by $\log_{10}(a) = -5.32^{+0.96}_{-0.72}$ (68% CL). This transition reduces the sound horizon at last scattering, resulting in an increased Hubble constant. With a combination of local measurements as a prior, the Hubble constant is determined to be $71.09 \pm 0.75$ km s$^{-1}$ Mpc$^{-1}$, and without the prior it is $69.22^{+0.67}_{-0.86}$ km s$^{-1}$ Mpc$^{-1}$. The TPM model exhibits improvements in the goodness of fit ($\chi^2$) compared to $\Lambda$CDM, with $\Delta \chi^2 = -23.72$ when using the Hubble constant prior and $\Delta \chi^2 = -4.8$ without it. The TPM model allows for values of $H_0 > 70$ km s$^{-1}$ Mpc$^{-1}$ and $S_8 < 0.80$ simultaneously, with lower values of $S_8$ compensating for the increase in $H_0$ relative to $\Lambda$CDM. Recent constraints obtained using Dark Energy Survey and South Pole Telescope data will also be presented. While the TPM model represents a specific modified gravity model, exploring other variants of modified gravity may offer a productive path toward resolving cosmological tensions. By studying different modifications of gravity, we can gain deeper insights into the nature of the universe and its expansion, and potentially uncover new avenues for addressing the outstanding challenges in cosmology.
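The mechanism by which a pre-recombination shift of the effective Planck mass raises $H_0$ can be summarized via the exquisitely well measured angular scale of the sound horizon,
$$\theta_* = \frac{r_s(z_*)}{D_M(z_*)}, \qquad r_s(z_*) = \int_{z_*}^{\infty}\frac{c_s(z)}{H(z)}\,dz, \qquad D_M(z_*) = \int_0^{z_*}\frac{c\,dz}{H(z)}:$$
if the modified pre-recombination expansion shrinks $r_s$, keeping $\theta_*$ fixed requires a smaller $D_M(z_*)$, i.e. a larger $H_0$.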
The existing discrepancies between locally observed and globally extracted cosmological parameters are driving the need for an extension of the ΛCDM cosmological model. A proposed extension, called SU(2)_CMB, describes cosmic microwave background (CMB) photons with an SU(2) instead of a U(1) gauge group. This reduces some of these tensions (such as H0, Ωm, σ8), pushes the recombination epoch to higher redshifts, and thereby effectively reduces the past CMB photon densities. Ultra-high-energy cosmic ray (UHECR) interactions with CMB photons are critical to our understanding of the observed fluxes of all cosmic messengers (cosmic rays, neutrinos and photons). The measured and predicted fluxes are the basis used to constrain source properties and rely on the ΛCDM evolution of the CMB. Thus, a modification of the past CMB densities impacts these flux predictions and possibly the constraints on the sources. This contribution discusses the impact of the modified CMB evolution on multimessenger studies. In particular, we show the effects of the ΛCDM extension on the UHECR propagation horizon, on the increased cosmogenic neutrino fluxes, and on the changes in source properties inferred from the UHECR observations.
In the age of precision cosmology, the ΛCDM model precisely fits most cosmological observations; however, discrepancies have been reported between measurements of the Hubble parameter at low and high redshifts. One possible way to reduce the Hubble tension is to allow the dark sectors to interact. Historically, this has been accomplished by introducing an interaction term in the covariant conservation equation of the energy-momentum tensor, so that energy flows between the two sectors. However, such an approach leads to instabilities at the perturbation level, and manual adjustments are required to rectify them. In this talk, we present a Lagrangian formulation for investigating the interaction between dark matter and dark energy. In the Lagrangian, the dark matter sector is characterized as a relativistic fluid, while a K-essence scalar field is the candidate for dark energy. The interaction function comprises the fluid number density, entropy density, particle flux number, and field variables. Such a parameter-dependent function provides a generalized structure for the interaction, which generates a complex equation of motion and modifies the Friedmann equations. This complexity can be consistently studied by utilizing dynamical stability techniques for a particular model. Introducing the interaction at the level of the action makes the theory covariant at both the background and perturbation levels.
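For reference, the fluid-level prescription that the talk contrasts with typically couples the two sectors through the conservation equations,
$$\nabla_\mu T^{\mu\nu}_{\rm dm} = Q^\nu, \qquad \nabla_\mu T^{\mu\nu}_{\rm de} = -Q^\nu,$$
with, for example, $Q^0 \propto H \rho_{\rm de}$ at the background level; it is this ad hoc choice of $Q^\nu$ that generically triggers the large-scale perturbation instabilities, motivating the Lagrangian (relativistic fluid plus K-essence) construction presented here.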
Modified Newtonian Dynamics (MOND) can partially explain the excess rotation of galaxies, or equivalently the mass discrepancy-acceleration relation, without requiring dark matter halos. This work proposes a modification of GR based on the distorted stereographic projection of hyperconical universes, which leads to MOND-like effects at galactic scales. To describe the mass discrepancy-acceleration relation, a hypothesis on the centrifugal acceleration is adopted, which would exhibit a small time-like contribution in large-scale dynamics due to the metric used. As a limiting case, a covariant formulation compatible with MOND is obtained, and the mass discrepancy-acceleration relation is satisfactorily modelled for a reference set of 61 galaxies drawn from the SPARC dataset.
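For orientation, the empirical mass discrepancy-acceleration (radial acceleration) relation that any such model must reproduce is commonly summarized by the fitting function of McGaugh, Lelli & Schombert (2016),
$$g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/a_0}}}, \qquad a_0 \simeq 1.2\times 10^{-10}\ {\rm m\,s^{-2}},$$
quoted here only as a benchmark; the covariant hyperconical formulation presented in the talk provides its own prediction for this relation, tested against the 61 SPARC galaxies.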
We propose a new class of f(R) theories in which the Weyl gauge symmetry is broken in the primordial era of the universe. Due to the geometrical nature of the symmetry, the symmetry breaking induces an additional non-minimal coupling of the scalar field corresponding to the f(R) model, which cannot arise in the standard f(R) theories. We explain how this affects the evolution of the universe at cosmological scales in two respects: cosmological backreaction and the CMB spectra. First, for some specific f(R) dark energy models, the effective values of the Planck constant and the cosmological constant may shift in time even though there is no change in the background evolution. This can be regarded as a genuine exemplification of cosmological backreaction. Moreover, we prove that for f(R) inflationary models the amplitude of the primordial gravitational waves affects the evolution of the scalar perturbations, which in turn affects the low-l multipoles of the CMB temperature anisotropy.
In this work, the Rastall model of gravity is generalized and different models of gravity with a non-conserved matter energy-momentum tensor are constructed. In fact, we show that by imposing the ordinary or generalized form of the Rastall assumption on the perfect-fluid energy-momentum tensor (EMT), one can obtain different forms of modified Einstein field equations (EFE). We investigate the thermodynamical behaviour of a special type of this generalization, which we call the non-conserved $f(R)$ model of gravity. We find that in the FLRW universe, for $\lambda \neq 0$, there is an energy flow across the apparent horizon, and consequently the first law of thermodynamics of this model is modified. Moreover, we show that the generalized second law (GSL) of the model is modified, and we derive the condition for the GSL to hold.
We study the imprints of high-scale non-thermal leptogenesis on the cosmic microwave background (CMB) through measurements of the inflationary spectral index ($n_s$) and tensor-to-scalar ratio ($r$), which are otherwise inaccessible to conventional laboratory experiments. We argue that the non-thermal production of the baryon (lepton) asymmetry from the subsequent decays of the inflaton to heavy right-handed neutrinos (RHN) and of the RHN to SM leptons is sensitive to the reheating dynamics of the early Universe after the end of inflation. Such dependence provides detectable imprints on the $n_s$-$r$ plane, which is well constrained by the Planck experiment. We investigate two separate cases: (I) the inflaton decays dominantly to radiation, and (II) the inflaton decays dominantly to RHN, which further decay to SM particles to reheat the Universe adequately. Considering a class of $\alpha$-attractor inflation models, we obtain the allowed RHN mass ranges for both cases and thereafter furnish estimates for $n_s$ and $r$. The prescription proposed here is quite generic and can be implemented for various kinds of single-field inflationary models, provided the conditions for non-thermal leptogenesis are satisfied.
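For the $\alpha$-attractor class invoked here, the standard large-$N_*$ predictions are
$$n_s \simeq 1 - \frac{2}{N_*}, \qquad r \simeq \frac{12\,\alpha}{N_*^2},$$
where $N_*$ is the number of e-folds between horizon exit of the CMB pivot scale and the end of inflation; since the reheating history (case I versus case II above) shifts $N_*$, the leptogenesis dynamics leaves its imprint on the $(n_s, r)$ plane.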
Inflaton-vector interactions of the type $\phi F\tilde{F}$ have provided interesting phenomenology to tackle some of the current problems in cosmology: the vectors could, for instance, constitute the dark matter component, and the coupling could also leave signatures imprinted in the gravitational wave spectrum. Through this coupling, a rolling inflaton induces an exponential production of the transverse polarizations of the vector field, which peaks at the end of inflation, when the inflaton velocity is at its maximum. These gauge particles, already parity asymmetric, source the tensor components of the metric perturbations, leading to the production of parity-violating gravitational waves. In this work we examine the vector particle production in the weak-coupling regime, integrating the gauge-mode amplitude spectrum over the entirety of its production and amplification epochs, until the onset of radiation domination. Finally, we calculate the gravitational wave spectrum by combining the analytical solution for the vector modes, the WKB expansion (valid only during the amplification up to horizon crossing), and the numerical solution obtained at the beginning of radiation domination, when the modes cease to grow.
A summary of the most recent results of the ATLAS Collaboration at the LHC is given. The review is focussed on those analyses which can be of interest for cosmological studies.
Minerals are solid state nuclear track detectors - nuclear recoils in a mineral leave latent damage to the crystal structure. Depending on the mineral and its temperature, the damage features are retained in the material from minutes to timescales much larger than the age of the Solar System. The damage features from the fission fragments left by spontaneous fission of heavy unstable isotopes have long been used for fission track dating of geological samples. Laboratory studies have demonstrated the readout of defects caused by nuclear recoils with energies as small as ~1 keV. Using natural minerals, one could use the damage features accumulated over geological timescales to measure astrophysical neutrino fluxes (from the Sun, supernovae, or cosmic rays interacting with the atmosphere) as well as search for Dark Matter. Research groups in Europe, Asia, and America have started developing microscopy techniques to read out the nanoscale damage features in crystals left by keV nuclear recoils. The research program towards the realization of such mineral detectors is highly interdisciplinary, combining geoscience, material science, applied and fundamental physics with techniques from quantum information and Artificial Intelligence. In this talk, I will highlight the scientific potential of Dark Matter searches with mineral detectors and briefly describe status and plans of the Mineral Detection of Neutrinos and Dark Matter (MDvDM) community.