7th International Conference on New Frontiers in Physics (ICNFP 2018)
3 July 2018: Arrival day. 4 July 2018, 8:30: Lectures day. 5 July 2018, 8:30: Opening of the main plenary session of ICNFP 2018. 12 July 2018, 18:00: Closing of ICNFP 2018. 13 July 2018: Departure day.
The International Conference on New Frontiers in Physics aims to promote scientific exchange and the development of novel ideas in science, with a particular accent on interdisciplinarity. The conference will bring together worldwide experts and promising young scientists working on experimental and theoretical aspects of particle, nuclear, heavy-ion and astro-particle physics and cosmology, with colleagues from other disciplines, for example solid state physics, mathematics, mathematical physics and quantum optics.
The conference will be hosted in the Conference Center of the Orthodox Academy of Crete (OAC), an exceptionally beautiful location only a few meters from the Mediterranean Sea.
The merger of two compact stars is a celebrated event in astrophysics: it provides the highest baryon densities and temperatures simultaneously, as well as compact objects at the limit of stability, most likely in a transition stage to a black hole. Triggered by a gravitational wave signal, such an event is then observable in all wavelengths of the electromagnetic spectrum and, in some cases, also in neutrinos.
The first example of such an event is GW170817 [1], which marks the beginning of the era of multi-messenger astronomy and is traditionally referred to as a "neutron star (NS) merger". With a total mass of 2.73 M_sun, its progenitor was most likely a binary system like the Hulse-Taylor system or the "double pulsar" system J0737-3039, with stars of the typical binary radio pulsar mass of 1.35 M_sun involved. We discuss the characteristic features of an equation of state (EoS) of compact star matter with a strong phase transition that would allow for the occurrence of mass-twin compact stars in that mass range as a consequence of a "third family" branch of hybrid stars (HSs) in the mass range from ~1.3 to ~2.0 M_sun [2-5]. This offers the possibility of an HS-NS or HS-HS merger scenario for GW170817, which should therefore be taken into consideration when implications of GW170817 for nuclear and particle physics are discussed. If the NICER experiment on board the ISS were to measure a large radius of ~14 km for the nearest millisecond pulsar, PSR J0437-4715, this would give strong support to the idea that an HS was involved in GW170817 [2].
[1] B.P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 119, 161101 (2017).
[2] D. Blaschke and N. Chamel, "Phases of dense matter in compact stars", arXiv:1803.01836 (2018).
[3] A. Ayriyan et al., Phys. Rev. C 97, 045802 (2018).
[4] V. Paschalidis et al., Phys. Rev. D 97, 084038 (2018).
[5] D. E. Alvarez-Castillo et al., "Third family of compact stars within a nonlocal chiral quark model equation of state", arXiv:1805.04105 (2018).
The idea that spacetime might be quantised was already pondered by Heisenberg in the 1930s as a potential remedy to the divergences lurking in quantum electrodynamics. However, the concept of a 'noncommutative spacetime geometry' needed over half a century to become established as a mathematical structure. Although it has not (so far) fulfilled Heisenberg's original dream, it has revealed a completely new perspective on fundamental physics and has found applications ranging from condensed matter and particle physics to gravity and cosmology.
The lecture will be a friendly introduction to the misty realm of noncommutative geometry. I shall discuss the motivations and basic mathematical concepts, based on the operational paradigm of physics.
During the last few years our group has developed the most advanced model of the hadron resonance gas [1], which not only allowed us to achieve the best description of all hadronic multiplicities measured from the lowest AGS to the highest RHIC energies, but also to reveal remarkable irregularities at chemical freeze-out [2-5]. It is intriguing that in central nuclear collisions we found two sets of similar irregularities. The most prominent of them are the sharp peaks of the trace anomaly and of the baryonic charge density existing at chemical freeze-out at the center-of-mass energies 4.9 GeV and 9.2 GeV [2, 5]. They are accompanied by two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, the total pion number per baryon, and the thermal pion number per baryon, which are found at the center-of-mass energies 3.8-4.9 GeV and 7.6-9.2 GeV [2-4]. The low-energy set of quasi-plateaus was predicted long ago. On the basis of the generalized shock-adiabat model I show that the low-energy correlated quasi-plateaus give evidence for anomalous thermodynamic properties inside the mixed phase found at the center-of-mass energies 4.3-4.9 GeV. Furthermore, based on the thermostatic properties of the mixed phase of a first-order phase transition and those of the Hagedorn mass spectrum, I will explain, respectively, the reasons for the observed chemical equilibration of strangeness at the collision energy 4.9 GeV and above 8.7 GeV. I will also argue that both sets of irregularities possibly provide evidence for two phase transitions, namely a first-order transition of chiral symmetry restoration in the hadronic phase in the low-energy range and a second-order deconfinement transition at the higher one. In combination with a recent analysis of light nuclei number fluctuations, our results indicate that the center-of-mass collision energy range 8.8-9.2 GeV may be in the nearest vicinity of the QCD tricritical endpoint [5].
[1] K. A. Bugaev, D. R. Oliinychenko, J. Cleymans, A. I. Ivanytskyi, I. N. Mishustin, E. G. Nikonov and V. V. Sagun, Europhys. Lett. 104, 22002 (2013).
[2] K. A. Bugaev, A. I. Ivanytskyi, D. R. Oliinychenko, V. V. Sagun, I. N. Mishustin, D. H. Rischke, L. M. Satarov and G. M. Zinovjev, Phys. Part. Nucl. Lett. 12, 238 (2015).
[3] K. A. Bugaev, A. I. Ivanytskyi, D. R. Oliinychenko, V. V. Sagun, I. N. Mishustin, D. H. Rischke, L. M. Satarov and G. M. Zinovjev, Eur. Phys. J. A 52, 175 (2016).
We study an exotic phase of a cold, two-dimensional, N-component Fermi gas which exhibits dynamically broken approximate scale symmetry. We identify a particular weakly damped collective excitation as the dilaton, the pseudo-Goldstone boson associated with the broken approximate scale invariance. We argue that the symmetry-breaking phase is stable for a range of parameters of the theory, and that there is a fluctuation-induced first-order quantum phase transition between the normal and the scale-symmetry-breaking phases. This system provides a concrete cold-atom example of the Coleman-Weinberg phenomenon of dynamical violation of scale symmetry, as well as a quantum field theoretical system in which the Higgs field is a dilaton.
Introduction to the history of the OAC Chapel and the meaning of the Blessing, by Katerina Karkala (OAC), given inside the OAC Chapel (old building), followed by a Blessing ceremony in the Chapel for interested people.
After-dinner talk on the open veranda of OAC on the "History of Crete" by Emanuela Larentzakis, with a particular focus on the history of Chania. This talk is a preparation for the excursion to Chania on 6 July.
Motivated by the recent lattice study by the FASTSUM collaboration [1], thermal masses of the baryon parity-doublers are explored for various pion masses [2]. The general trend of the octet and decuplet parity-doublers is consistent with the results in [1], whereas the hyperon masses are modified to a large extent for the physical pion mass.
We further investigate the fluctuations and correlations involving
baryon number in hot hadronic matter with modified masses of
negative-parity baryons, in the context of the hadron resonance gas
[3]. Confronting the baryon number susceptibility, baryon-charge
correlation, and baryon-strangeness correlation and their ratios
with the lattice QCD data, we show that the strong downward mass shift in hyperons can accidentally reproduce some correlation ratios; however, it also tends to overshoot the individual fluctuations and correlations. This indicates that a consistent framework of in-medium effects beyond hadron mass shifts is required. Selected topics on the parity doublers at high density are also presented briefly [4,5].
References
[1] G. Aarts, et al., JHEP 1706, 034 (2017).
[2] C. Sasaki, Nucl.Phys.A 970, 388 (2018).
[3] K. Morita, C. Sasaki, P. M. Lo and K. Redlich, "Overlap between Lattice QCD and HRG with in-medium effects and parity doubling", arXiv:1711.10779 [hep-ph].
[4] M. Marczenko, C. Sasaki, Phys.Rev.D 97, no. 3, 036011 (2018).
[5] M. Marczenko, D. Blaschke, K. Redlich and C. Sasaki, to appear.
The Compact Muon Solenoid (CMS) detector is one of the two multipurpose experiments at the Large Hadron Collider (LHC). It successfully collected data during Run 1 (2010-2013), enabling important physics results, such as the discovery of the Higgs boson announced in 2012.
In order to unravel further open questions not yet explained by the Standard Model, intense activities were carried out to further improve the detector and the trigger before the LHC restart in 2016 (Run 2), in parallel with the upgrade of the LHC.
The achieved global performance of the CMS experiment and of several subdetectors will be presented.
In order to meet the requirements of the upcoming luminosity upgrade of the LHC, the Micromegas (MM) technology was adopted for the New Small Wheel (NSW) upgrade, dedicated to precision tracking. A large surface of the forward regions of the Muon Spectrometer will be equipped with 8 layers of MM modules, forming a total active area of 1200 m^2. The NSW is scheduled to be installed in the forward region 1.3 < |η| < 2.7 of
ATLAS during the second long LHC shutdown. The NSW will have to operate in a high
background radiation region, while reconstructing muon tracks as well as furnishing
information for the Level-1 trigger. The project requires fully efficient MM chambers with
spatial resolution down to 100 μm, a rate capability up to about 15 kHz/cm^2 and operation in a moderate (highly inhomogeneous) magnetic field up to B = 0.3 T. The required tracking performance relies on the intrinsic spatial resolution combined with a demanding mechanical accuracy.
An overview of the design, construction and assembly procedures of the Micromegas modules will be reported. Results of the characterization of the first series modules with cosmic rays will also be presented.
Due to the so-called 3He shortage crisis, many detection techniques used nowadays for thermal neutrons are based on alternative converters. Possible ways to increase the detection efficiency for thermal neutrons, using solid neutron-to-charge converters, 10B or 10B4C, implemented in the Micromegas technology, are examined. The micro-pattern gaseous detector Micromegas has been developed for several years in Saclay and is used in a wide variety of neutron experiments, combining high accuracy, high rate capability, excellent timing properties and robustness. We propose here a large high-efficiency Micromegas-based neutron detector with several 10B4C thin layers mounted inside the gas volume for thermal neutron detection. The principle and the fabrication of a single detector unit prototype, with overall dimensions of ~15 x 15 cm^2 and the flexibility of modifying the number of 10B4C neutron-converter layers, are described, and simulated results are reported, demonstrating that typically five 10B4C layers of 1-2 μm thickness can lead to a detection efficiency of 20-40% for thermal neutrons and a sub-mm spatial resolution. The design is well adapted to large sizes, making possible the construction of a mosaic of several such detector units with a large area coverage and a high detection efficiency, and showing the good potential of this novel technique [1]. An alternative is to use this multilayered Micromegas equipped with GEM-type meshes coated with 10B4C on both sides, resulting in a robust, large-surface detector. Another innovative and very promising concept for a cost-effective, high-efficiency, large-scale neutron detector is to use a stack of microbulk Micromegas coated with 10B4C. Simulations show that by placing four back-to-back microbulk Micromegas detector units, efficiencies of ~40% at 1.8 Å can be reached. A prototype was designed and built, and the tests so far look very encouraging.
[1] G. Tsiledakis et al., JINST 12, P09006 (2017).
E-mail of the corresponding author: georgios.tsiledakis@cea.fr
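The way the efficiency quoted above scales with the number of converter layers can be illustrated with a simple absorption model; the macroscopic capture cross section, layer thickness and charged-particle escape fraction used below are rough illustrative numbers, not the simulation inputs of [1]:

```python
import math

def stack_efficiency(n_layers, sigma_per_um, thickness_um, escape_frac):
    """Toy estimate of thermal-neutron detection efficiency for a stack of
    10B4C converter layers. Each layer captures a fraction of the incoming
    neutrons; only a fraction of captures release a charged product into
    the gas. All parameter values are illustrative assumptions.
    """
    p_capture = 1.0 - math.exp(-sigma_per_um * thickness_um)
    survive = 1.0   # fraction of neutrons reaching the current layer
    eff = 0.0
    for _ in range(n_layers):
        eff += survive * p_capture * escape_frac
        survive *= 1.0 - p_capture
    return eff

# Five layers of ~1.5 um with plausible capture and escape parameters
# land in the 20-40% efficiency range quoted in the abstract.
eff5 = stack_efficiency(5, sigma_per_um=0.05, thickness_um=1.5, escape_frac=0.7)
```

The model also shows the trade-off driving the multilayer design: thicker layers capture more neutrons per layer but trap more of the charged products, so stacking several thin layers wins.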
We theoretically investigate the possibility of realizing single-photon counters for photon frequencies down to 10 GHz. We propose three schemes. The first one consists of a cold-electron nanobolometer coupled to an antenna, as in Ref. [1]. In this case, the photon excites the antenna and then dissipates its energy into the normal metal island of the nanobolometer. As a consequence, the temperature of the island increases, which produces a current or voltage pulse in a pair of normal metal-insulator-superconductor tunnel junctions used as a thermometer.
In the second and third schemes, the antenna is coupled, in series or capacitively, to a Josephson junction. We present the corresponding quantum circuits and show that the schemes are mathematically equivalent. Both can be represented as a quantum particle moving in a two-dimensional potential. We analyze the detection mechanisms in each scheme and discuss the values of the detector parameters which permit photon detection.
[1] D. V. Anghel and L. Kuzmin, Appl. Phys. Lett. 82, 293 (2003).
In heavy-ion collisions (A-A) at the CERN Large Hadron Collider (LHC) energies, a strongly coupled Quark Gluon Plasma (QGP) is produced which gives rise to collective phenomena whose signatures can be retrieved in final state hadronic observables. Recent observations in small systems, such as pp collisions, show remarkable similarities among these systems, which are highly suggestive of the presence of collectivity. Current research therefore tries to identify whether a unified description of the pp and A-A data can be established.
Hydrodynamic and recombination models are tested against the measured hadron spectral shapes at low and intermediate transverse momenta ($p_{\rm T}$). In particular, most of them have problems correctly predicting the very low $p_{\rm T}$ spectra of pions. The problem may be solved by assuming that the matter at LHC energies is produced out of chemical equilibrium. The chemical non-equilibrium model predicts that the pion abundances are characterized by a non-zero value of the chemical potential, which is very close to the critical value for Bose-Einstein condensation. The crucial point is the measurement of pions at very low $p_{\rm T}$ (< 200 MeV/$c$), as the onset of pion condensation would manifest itself as an excess in the low $p_{\rm T}$ pion yield while the spectra of kaons and protons remain unaltered.
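The concentration of the condensation signal at low $p_{\rm T}$ can be seen in a toy Bose-Einstein spectrum; the temperature and chemical potential below are illustrative values, not fit results:

```python
import math

M_PI = 0.1396  # charged pion mass [GeV]

def be_weight(pT, T=0.15, mu=0.0, m=M_PI):
    """Toy Bose-Einstein transverse-momentum weight ~ pT / (exp((mT - mu)/T) - 1),
    with the transverse mass mT = sqrt(pT^2 + m^2). Illustrative parameters only."""
    mT = math.sqrt(pT * pT + m * m)
    return pT / (math.exp((mT - mu) / T) - 1.0)

# Enhancement from a chemical potential close to the pion mass, relative to mu = 0:
# large at low pT, modest at high pT, as expected for the onset of condensation.
low  = be_weight(0.05, mu=0.12) / be_weight(0.05, mu=0.0)
high = be_weight(1.00, mu=0.12) / be_weight(1.00, mu=0.0)
```

As $\mu$ approaches $m_\pi$, the Bose-Einstein denominator nearly vanishes for the softest pions, which is why the measurement below 200 MeV/$c$ is the decisive one.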
The ALICE Collaboration at the CERN LHC recently collected for the first time data in Xe-Xe collisions at $\sqrt{s_{\rm NN}}$ = 5.44 TeV with a low magnetic field (B = 0.2 T) as well as in pp collisions at the highest LHC energy of 13 TeV.
An overview of the new ALICE results which contribute to the understanding of collective phenomena will be presented. Pion, kaon and proton $p_{\rm T}$ spectra are presented and compared to the main hydrodynamic models.
Thanks to the lower magnetic field in Xe-Xe collisions, the pion spectra can be measured down to 80 MeV/$\textit{c}$ with the Inner Tracking System (ITS), allowing the search for pion condensation effects. The search for an enhancement of pions is carried out also in very high multiplicity pp events down to the lowest $p_{\rm T}$ possible with the ALICE detector.
Relativistic heavy-ion collisions represent an arena for probing various anomalous transport effects. Those effects, in turn, reveal the correspondence between solid state physics and high energy physics, which share the common formalism of quantum field theory. It may be shown that for a wide range of field-theoretic models the response of various nondissipative currents to external gauge fields is determined by momentum-space topological invariants. Thus anomalous transport appears to be related to the investigation of momentum-space topology, an approach developed earlier mainly in condensed matter theory. Within this methodology we systematically analyse the anomalous transport phenomena, which include, in particular, the anomalous quantum Hall effect, the chiral separation effect, and the chiral magnetic effect.
References:
[1] M.A. Zubkov, "Wigner transformation, momentum space topology, and anomalous transport", Annals Phys. 373 (2016) 298-324; arXiv:1603.03665 [cond-mat.mes-hall].
[2] M.A. Zubkov and Z.V. Khaidukov, "Topology of the momentum space, Wigner transformations, and a chiral anomaly in lattice models", JETP Lett. 106 (2017) no.3, 172-178; Pisma Zh. Eksp. Teor. Fiz. 106 (2017) no.3, 166-172.
[3] Z.V. Khaidukov and M.A. Zubkov, "Chiral Separation Effect in lattice regularization", Phys. Rev. D 95 (2017) no.7, 074502; arXiv:1701.03368 [hep-lat].
[4] G.E. Volovik and M.A. Zubkov, "Standard Model as the topological material", New J. Phys. 19 (2017) no.1, 015009; arXiv:1608.07777 [hep-ph].
[5] M.A. Zubkov, "Absence of equilibrium chiral magnetic effect", Phys. Rev. D 93 (2016) no.10, 105036; arXiv:1605.08724 [hep-ph].
[6] M.N. Chernodub and M.A. Zubkov, "Scale Magnetic Effect in Quantum Electrodynamics and the Wigner-Weyl Formalism", Phys. Rev. D 96 (2017) no.5, 056006; arXiv:1703.06516 [hep-th].
In this work we present a comparative study of PYTHIA, EPOS, QGSJET and SIBYLL generators for proton-proton collisions at $\sqrt{s}$ = 7 TeV in the forward region. The generated charged energy flow, charged-particle distributions, charged-hadron production ratios and $V^{0}$ ratios are compared to the forward physics measurements from LHCb and TOTEM. Most of the observed differences seem to be explained by the extrapolation from the central rapidity region.
Muon radiography is an imaging technique based on the measurement of the absorption of cosmic-ray muons. This technique has recently been used successfully to investigate the presence of unknown cavities in the Galleria Borbonica in Naples and in the Cheops Pyramid near Cairo.
The MIMA detector (Muon Imaging for Mining and Archeology) is a muon tracker prototype for the application of muon radiography in the archaeological and mining fields. It is made of three pairs of X-Y planes, each consisting of 21 scintillator bars with silicon photomultiplier read-out. The detector is compact, robust, easily transportable and has a low power consumption, all of which makes it ideal for measurements in narrow and isolated environments.
With this detector we have performed a measurement from inside the Temperino archaeological park in Tuscany. The park has been used as a mine since Etruscan times and consists of a series of tunnels on multiple levels. In order to obtain information on the average density of the rock, the measured absorption was compared to the simulated one, obtained from the information provided by laser scanner measurements and cartographic maps of the mountain above the mine. This allowed us to confirm the presence of a partially accessible cavity and gave some hints of a high-density vein within the rock.
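The density determination rests on comparing measured and expected muon transmission through a known rock thickness. A minimal sketch, assuming a simple power-law integral muon spectrum and a constant energy loss of roughly 2 MeV per g/cm^2 (rough textbook numbers, not the simulation inputs used in this measurement):

```python
def transmission(thickness_m, density, gamma=1.7, dEdX=2.0e-3, E_ref=1.0):
    """Toy fraction of muons above E_ref [GeV] that survive an opacity
    X = density * thickness, assuming an integral spectrum N(>E) ~ E^-gamma
    and a constant energy loss dEdX [GeV per g/cm^2]. Illustrative only."""
    X = density * thickness_m * 100.0   # opacity in g/cm^2 (density in g/cm^3)
    E_min = max(dEdX * X, E_ref)        # minimum energy needed to cross the rock
    return (E_min / E_ref) ** (-gamma)

def density_from_transmission(thickness_m, T_obs, lo=0.5, hi=5.0):
    """Invert the toy model by bisection: the average density along the
    line of sight that reproduces the observed transmission."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if transmission(thickness_m, mid) > T_obs:
            lo = mid    # rock too transparent -> needs to be denser
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A cavity shows up as a transmission excess (lower effective density) along the lines of sight crossing it, while a dense vein shows up as a deficit.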
The neutron time-of-flight (n_TOF) facility at CERN, based on an idea by C. Rubbia et al., became operational in 2001 and has since occupied a major role in the field of neutron cross-section measurements. Neutron-induced reactions play a key role in several aspects of nuclear physics, from nuclear technology, where they enter reactor calculations and design, to nuclear astrophysics and nuclear structure.
At n_TOF a pulsed neutron beam is produced by spallation of 20 GeV/c protons on a lead target, used together with a moderation system. In this way the n_TOF neutron beam covers about eleven orders of magnitude in energy, from thermal to GeV, in the first experimental area (EAR1), and from milli-eV to hundreds of MeV in the second experimental area (EAR2). The broad neutron energy range, together with the high instantaneous neutron flux and the very good energy resolution, makes the n_TOF facility perfectly suited to perform high-quality measurements of neutron-induced reaction cross sections.
The characteristics and performance of the two experimental areas of the n_TOF facility will be presented, together with the physics program of the n_TOF Collaboration and selected important measurements performed to date. In addition, significant upcoming developments will be introduced, both from the neutron beam production and the neutron detection points of view.
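In a time-of-flight measurement the neutron kinetic energy follows directly from the flight path and the arrival time. A minimal relativistic sketch, taking a flight path of roughly 185 m as an assumed value of the order of the EAR1 path length:

```python
import math

C = 299_792_458.0   # speed of light [m/s]
M_N = 939.565e6     # neutron rest energy [eV]

def neutron_energy_eV(flight_path_m, tof_s):
    """Relativistic kinetic energy of a neutron from its time of flight:
    E = m_n * (gamma - 1), with beta = L / (t * c)."""
    beta = flight_path_m / (tof_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return M_N * (gamma - 1.0)

# A thermal neutron (v ~ 2200 m/s) over ~185 m arrives after ~84 ms,
# corresponding to a kinetic energy of ~25 meV.
E_thermal = neutron_energy_eV(185.0, 185.0 / 2200.0)
```

The enormous spread in arrival times, from tens of milliseconds for thermal neutrons down to sub-microsecond for the GeV component, is what lets a single pulsed beam span eleven orders of magnitude in energy.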
The TOTEM experiment at the LHC has measured proton-proton elastic scattering in dedicated runs at the centre-of-mass energies √s = 2.76, 7, 8 and 13 TeV. The proton-proton total cross-section σ_tot has been derived for each energy using a luminosity-independent method. TOTEM has excluded a purely exponential differential cross-section for elastic proton-proton scattering with a significance greater than 7σ in the |t| range from 0.027 to 0.2 GeV^2 at √s = 8 TeV. The ρ parameter has been measured at √s = 8 and 13 TeV via the Coulomb-nuclear interference, and at 13 TeV was found to be ρ = 0.1 ± 0.01. The ρ measurement provides strong evidence for the existence of a 3-gluon bound state, predicted by theoretical models both in Regge-like and modern QCD frameworks.
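The luminosity-independent method combines the optical theorem with the ratio of elastic to total event rates, $\sigma_{tot} = \frac{16\pi(\hbar c)^2}{1+\rho^2}\,\frac{({\rm d}N_{el}/{\rm d}t)_{t=0}}{N_{el}+N_{inel}}$. A minimal sketch of this standard formula, with the inputs left as placeholders rather than TOTEM's published values:

```python
import math

HBARC2 = 0.389379  # (hbar * c)^2 in mb * GeV^2, to convert GeV^-2 to mb

def sigma_tot_mb(dNel_dt_at_0, N_el, N_inel, rho):
    """Luminosity-independent total cross section [mb].
    dNel_dt_at_0: elastic rate extrapolated to t = 0 [events / GeV^2]
    N_el, N_inel: integrated elastic and inelastic event counts
    rho: ratio of real to imaginary part of the forward elastic amplitude
    """
    return (16.0 * math.pi * HBARC2 / (1.0 + rho * rho)
            * dNel_dt_at_0 / (N_el + N_inel))
```

Note that the absolute luminosity cancels because only ratios of rates enter, and that a ρ of 0.1 changes σ_tot by only about 1%, which is why its measurement requires the delicate Coulomb-nuclear interference region.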
We present, for the first time, the potential of physics opportunities at the ProtoDUNE detectors in the context of dark matter physics. More specifically, we explore various experimental signatures at the cosmic frontier arising in boosted dark matter scenarios, i.e., relativistic, inelastic scattering of boosted dark matter, often created by the annihilation of its heavier component, which usually comprises the dominant relic abundance. Although the signal features are unique enough to isolate signal events from potential backgrounds, vetoing the vast amount of cosmic background is rather challenging, as the detectors are located on the ground. We argue, with a careful estimate, that such backgrounds can nevertheless be kept well under control by performing dedicated analyses after data acquisition. We then discuss some phenomenological studies which can be achieved with ProtoDUNE, employing a dark photon scenario as our benchmark dark-sector model.
The AEgIS experiment at CERN’s Antiproton Decelerator aims to perform a direct test of the Weak Equivalence Principle for antimatter by measuring the local gravitational acceleration for antihydrogen. The first step towards this goal is the formation of a pulsed, cold antihydrogen beam, which will be created by a charge exchange reaction between laser excited (Rydberg) positronium and cold antiprotons. The antihydrogen beam deflection due to Earth’s gravity will then be measured using a moiré deflectometer coupled to a position sensitive detector.
In this talk I will give a general overview of the experiment, with a focus on the current status towards antihydrogen formation. I will present recent advancements in manipulation techniques for non-neutral plasmas in the AEgIS apparatus and the new positron injection scheme. I will also give an outlook on the measurements planned for the upcoming antiproton beam time.
The two Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes, located at the Roque de los Muchachos European Northern Observatory on the Canary Island of La Palma (Spain), observe gamma rays in the VHE range, from 30 GeV to around 100 TeV. This wide energy interval makes it possible to include different kinds of sources in the observation schedule, ranging from black hole accretion discs, relativistic jets and active galactic nuclei to pulsars, both galactic and extragalactic.
After an overview of the technical solutions adopted by the MAGIC collaboration for Cherenkov gamma-ray imaging, recent results from galactic and extragalactic observations will be presented. A major focus will be on the new multi-messenger programme with neutrino and gravitational wave alerts and follow-ups. Furthermore, we will discuss recent results on searches for Lorentz invariance violation and dark matter annihilation/decay.
A gravitational field model based on two symmetric tensors, $g_{μν}$ and $\tilde{g}_{μν}$, is presented. In this model, new matter fields are added to the original matter fields, motivated by an additional symmetry (δ symmetry). We call them δ matter fields. We find that massive particles do not follow geodesics, while trajectories of massless particles are null geodesics of an effective metric. We then study the cosmological case, where we obtain an accelerated expansion of the Universe without dark energy.
Electromagnetic cascading of TeV-band gamma-ray emission from distant blazars is a means to investigate the amplitude of the magnetic field in the voids of intergalactic space. The flux of cascade emission from some objects is weaker than it should be, leaving two interpretations. The magnetic field may be strong enough to deflect the electron-positron pairs out of the line of sight. Alternatively, plasma instabilities might drain the energy of the pairs. We present the current state of research and the most recent results, indicating that plasma instabilities in most cases cannot provide sufficiently strong energy losses. This leaves intact the evidence for pG magnetic fields in cosmic voids.
The identification of dark matter is one of the major open questions in physics, astrophysics, and cosmology. One approach consists of detecting the nuclear recoils produced by collisions between the putative dark matter particles and a detector's target nuclei. The CRESST-III experiment (Cryogenic Rare Event Search with Superconducting Thermometers), located at the underground facility of the LNGS (Laboratori Nazionali del Gran Sasso in Italy), uses detectors designed to probe low-mass dark matter with a sensitivity never achieved before. CaWO4 crystals are used as the detector medium and operated as cryogenic detectors at temperatures around 10 mK. Sensitivity to nuclear recoils below 100 eV was achieved, allowing for the exploration of new parameter space in the exclusion limit landscape. The working principle of CRESST-III and the most recent results will be presented.
We are investigating an approach towards a realistic description of baryon ground and resonant states in a unified framework. It consists of a relativistic constituent-quark model set up along the lines of a coupled-channels theory. Thereby it becomes possible to include, beyond three-valence-quark configurations, further degrees of freedom that are relevant for a realistic description of baryons, notably of resonant states.
So far we have managed to consider explicit pionic effects by coupling to {QQQπ} channels. In particular, we have studied in a consistent approach the influence of pion dressing on the nucleon mass as well as on the Delta resonance mass and hadronic decay width; all of these observables are described in good agreement with experimental data. At the same time we have obtained a microscopic description of the strong form factors at the NNπ, NΔπ, and ΔΔπ interaction vertices. They compare reasonably well with parametrizations used in phenomenological models available in the literature.
In this presentation we review the treatment of exotic pentaquark baryons in chiral soliton models. The focus is on two topics. First we study baryons that contain a heavy quark (charm or bottom) or anti-quark. This advances the bound-state approach to strangeness and, in particular, shows that the heavy bound state selects the appropriate representation for the light-flavor (up, down, strange) component of the baryon wave-function. This component models the light diquark structure. Basic heavy baryons select the anti-triplet and the sextet. Pentaquarks with a heavy anti-quark relate to the anti-sextet, while those with a light anti-quark have light flavors from the anti-decapentaplet. In the second, related analysis we investigate pentaquark decays in the Skyrme model. By the definition of solitons, no term linear in the meson fluctuations, which eventually could be identified as a Yukawa interaction, should emerge. If it does, it is a mere shortcoming of approximating the exact time-dependent soliton solution. Rather, the resonance content of meson-baryon scattering must be analyzed directly to obtain the widths of collective resonances. This requires imposing constraints on the collective soliton excitations. In doing so, we show that the decay width may not be estimated from axial-current matrix elements. This calculation of the decay width is also shown to be consistent with the limit in which the number of colors is large and the constraints are not effective.
We review the postulate that an SU(2) Yang-Mills theory of scale 10^(-4) eV describes extended thermal photon gases. In particular, we discuss a number of implications for the Cosmic Microwave Background which, in turn, imply a change in the high-z cosmological model. Due to the nontrivial, deconfining thermal ground state of this theory, a Planck-scale axion field acquires a potential, and we speculate that galaxy-sized U(1) vortex cores of this field represent dark matter when in isolation and dark energy when occurring in percolated form. Much work is needed to match this idea to observations of galaxy clustering and lensing and to the phenomenology of spirals, and to learn about the precise equation of state of the percolate.
During LHC Run 2, which started in 2015, the LHC has reached a peak instantaneous luminosity of $2 \cdot 10^{34}\,\mathrm{cm^{-2}s^{-1}}$, i.e. twice the design value. Under these conditions, the online selection of LHC collision events is a real challenge.
The CMS online selection is performed by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm.
This presentation will describe the performance of the CMS trigger system during the LHC Run 2.
We explore supersymmetric contributions to the decay $K^0_S \to \mu^+\mu^-$, in light of current experimental data. The Standard Model (SM) predicts $\mathcal{B}(K^0_S \to \mu^+\mu^-) \approx 5 \times 10^{-12}$. We find that contributions arising from flavour-violating Higgs penguins can enhance the branching fraction up to $\approx 35 \times 10^{-12}$ within different scenarios of the Minimal Supersymmetric Standard Model (MSSM), as well as suppress it down to $\approx 0.78 \times 10^{-12}$. Regions with fine-tuned parameters can bring the branching fraction up to the current experimental upper bound, $8 \times 10^{-10}$. The mass degeneracy of the heavy Higgs bosons in the MSSM induces correlations between $\mathcal{B}(K^0_S \to \mu^+\mu^-)$ and $\mathcal{B}(K^0_L \to \mu^+\mu^-)$. Predictions for the CP asymmetry in $K^0 \to \mu^+\mu^-$ decays in the context of the MSSM are also given; it can be up to eight times larger than in the SM. The study has been accepted for publication in JHEP.
The neutral pion is the lightest strongly interacting particle in nature. Therefore, the properties of π0 decay are especially sensitive to the underlying fundamental symmetries of quantum chromodynamics (QCD). In particular, the π0 → γγ decay width is primarily defined by the spontaneous chiral symmetry breaking effect (chiral anomaly) in QCD. Theoretical activity in this domain over the last years has resulted in a high-precision (1% level) prediction for the π0 → γγ decay width. The PrimEx collaboration at Jefferson Lab has developed and performed two new experiments to measure the π0 → γγ decay width with high precision using the Primakoff effect. The published result from the first experiment (PrimEx-I), Γ(π0 → γγ) = 7.82 ± 0.14(stat.) ± 0.17(syst.) eV, is a factor of 2.1 more precise than the previously accepted value, and it is in agreement with the chiral anomaly prediction. The second experiment (PrimEx-II) was performed in 2010 with a goal of 1.4% total uncertainty, in order to address the next-to-leading-order chiral perturbation theory calculations. The results of the PrimEx-II experiment will be presented in this talk.
Charmed hadrons are powerful probes to investigate the properties of the state of strongly-interacting matter with very high temperature and energy density formed in ultra-relativistic heavy-ion collisions, known as the Quark-Gluon Plasma (QGP). Because of their large masses, charm quarks are produced in the early stages of the collisions and propagate through the high-density medium interacting with its constituents, thus probing the medium properties over the whole evolution of the system. For the interpretation of the results in nucleus-nucleus collisions, measurements in smaller systems are also crucial to disentangle cold nuclear matter effects from modifications induced by the presence of the QGP. Moreover, the study of charm production in pp and p-Pb collisions at the LHC is an important tool to test predictions obtained from perturbative Quantum Chromodynamics (pQCD) calculations and to test possible collective effects. The measurement of different charm meson and baryon species provides information on the charm fragmentation and hadronisation in pp collisions and their possible modifications in Pb-Pb collisions.
The ALICE detector has excellent performance in terms of particle identification and vertexing capabilities, which allows the study of charm production down to very low transverse momentum. Charmed hadrons and electrons from heavy-flavour hadron decays are reconstructed at central rapidity with the ALICE central barrel.
In this talk, recent measurements of charmed meson and baryon production are presented and compared with theoretical calculations. The results include the pT-differential cross section of strange and non-strange D mesons in pp collisions at several collision energies, and their nuclear modification factor in p-Pb collisions. Recent measurements including the pT-differential cross section of $\Lambda_c^+$ and $\Xi_c^0$ baryons and the related baryon-over-meson ratios are also presented.
The study of open charm meson production provides an efficient tool for detailed investigations of the properties of hot and dense matter formed in nucleus-nucleus collisions. The interpretation of the existing data from the CERN SPS suffers from a lack of knowledge on the total charm production rate. To overcome this limitation the heavy-ion programme of the NA61/SHINE experiment at CERN SPS has been expanded to allow for precise measurements of particles with short lifetime. A new Vertex Detector, based on the MIMOSA pixel chip family, was designed and constructed to meet the challenges of open charm measurements in nucleus-nucleus collisions.
A small-acceptance version of the Vertex Detector, SAVD (Small Acceptance Vertex Detector), was installed in December 2016 for data taking with Pb+Pb collisions at 150A GeV/c, and an exploratory set of data was collected. From these data a hint of a D0 signal was extracted in the π+K decay channel. This might be the first direct observation of open charm in nucleus-nucleus collisions at SPS energies.
The physics motivation behind open charm measurements at SPS energies will be discussed. Moreover, the concept of the SAVD hardware and the status of the analysis will be shown, discussing challenges related to tracking in the inhomogeneous magnetic field, as well as the matching of SAVD tracks to TPC tracks needed for the extraction of physics results. Also, the future plans for open charm measurements in the NA61/SHINE experiment with the upgraded version of the Vertex Detector will be presented.
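As an illustration of the kind of reconstruction involved, here is a minimal sketch (not NA61/SHINE code; the function and variable names are my own) of forming the invariant mass of a π+K candidate pair from the daughters' measured momenta:

```python
import numpy as np

# PDG masses in GeV/c^2 (charged pion and kaon);
# the D0 -> pi+ K- signal peaks near 1.865 GeV/c^2
M_PI, M_K = 0.13957, 0.49368

def invariant_mass(p_pi, p_kaon):
    """Invariant mass of a pi+K pair from the daughters' 3-momenta (GeV/c)."""
    p_pi = np.asarray(p_pi, dtype=float)
    p_kaon = np.asarray(p_kaon, dtype=float)
    e_pi = np.sqrt(M_PI**2 + p_pi @ p_pi)    # relativistic energy of the pion
    e_k = np.sqrt(M_K**2 + p_kaon @ p_kaon)  # relativistic energy of the kaon
    e_tot = e_pi + e_k                        # total energy of the pair
    p_tot = p_pi + p_kaon                     # total 3-momentum of the pair
    return float(np.sqrt(e_tot**2 - p_tot @ p_tot))  # M^2 = E^2 - |p|^2
```

Pairs whose invariant mass falls near the D0 mass populate the signal peak over the combinatorial background, which is what makes the precise secondary-vertex information from the SAVD so valuable.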
A model of fully developed turbulence of a compressible fluid is reviewed, and an overview of turbulent regimes of a compressible fluid is presented. The fluid dynamics is governed by a stochastic version of the Navier-Stokes equation. We show how the corresponding field-theoretic model can be obtained and further analyzed by means of the perturbative renormalization group (RG). In this approach, scaling properties are related to the fixed points of the RG equations. The perturbation theory is constructed within a formal expansion scheme. Permissible scaling regimes at the one- and two-loop level are discussed.
The causal structure of a spacetime $\mathcal{M}$ is usually described in terms of a binary relation $\preceq$ between events called the causal precedence relation (often referred to as $J^+$). In my talk I will present a natural extension of $\preceq$ onto the space $\mathscr{P}(\mathcal{M})$ of (Borel) probability measures on $\mathcal{M}$, designed to rigorously encapsulate the common intuition that probability can only flow along future-directed causal curves.
Using the tools of optimal transport theory adapted to the Lorentzian setting, one can utilize the thus-obtained notion of 'causality between measures' to model the causal time-evolution of a spatially distributed physical entity in a globally hyperbolic spacetime. I will define what it means that a time-dependent probability measure $\mu_t \in \mathscr{P}(\mathcal{M})$ evolves causally. I will discuss how such an evolution can be understood as a 'probability measure on the space of worldlines'. I will also briefly present some preliminary results concerning the relationship between the causal time-evolution of measures and the continuity equation.
We discuss the importance of the electroweak (``Cho-Maison")
monopole and emphasize that the detection of this monopole,
not the Higgs particle, should be the final and topological test
of the standard model. If discovered, it should become the first
magnetically charged stable topological elementary particle in
the history of physics. Moreover, it has deep cosmological implications.
It could become the seed of primordial black holes and the large-scale
structure of the universe, the source of the intergalactic magnetic
field, and generate electroweak baryogenesis. To show this
we discuss the cosmological production of the electroweak
monopole and estimate the remnant monopole density at present
universe. We confirm that, although the electroweak phase
transition is of the first order, it is very mildly first order. So
the monopole production comes from the thermal fluctuations
of the Higgs field after the phase transition, not the vacuum
bubble collisions during the phase transition. Moreover, while
the monopoles are produced copiously around the Ginzburg
temperature $T_G \simeq$ 59.6 TeV, most of them are annihilated
as soon as they are created. As a result, the remnant monopole density
in the present universe becomes very small, about $10^{-11}$ of
the critical density. We discuss the implications of our results
on the ongoing monopole detection experiments, in particular
on MoEDAL, IceCube, ANTARES, and Auger.
Welcome Concert by Ruben Muradyan (piano), Svetlana Nor (violin) and Vladimir Nor (cello) (Formal dress is suggested)
This corresponds to one of the talks of the Speakers Bureau of LHCb, but I've been told that I should submit an abstract anyway.
The talk will summarize the latest results and prospects of the LHCb experiment.
This abstract is for a plenary talk.
After the discovery of a Higgs boson in summer 2012, understanding the properties of the new particle has been a high priority of the ATLAS physics program. Measurements of Higgs boson properties sensitive to its production processes, decay modes, and spin/CP properties based on pp collision data recorded at 13 TeV are presented. The analyses in several decay channels will be described and the results of the combination of different decay channels will also be shown.
Several theories beyond the Standard Model predict the existence of additional neutral or charged Higgs particles. Results from selected recent searches for these particles will also be discussed.
This abstract is for a plenary talk.
The large integrated luminosities available at the LHC allow testing of the electroweak sector of the Standard Model, as well as QCD calculations, to the highest precision. In this talk we cover both aspects, starting with the latest results from the ATLAS collaboration involving jets, dijets, photons in association with heavy flavors and vector bosons in association with jets, measured at center-of-mass energies of 8 and 13 TeV. All measured cross-sections are compared to state-of-the-art theory predictions. Furthermore, we report on the latest results on di-boson and multi-boson final states as well as the corresponding limits on anomalous gauge couplings. Another approach to testing the consistency of the electroweak sector is via precision measurements. Here we report on the measurement of the tau polarization in Z events and the W boson mass, as well as a three-dimensional cross-section measurement of the Drell-Yan process, allowing for the determination of the weak mixing angle.
An overview of the latest Higgs physics results obtained with the CMS experiment using proton-proton collision data collected during LHC Run 2 at $\sqrt{s} = 13 $ TeV will be presented.
Rare decays are powerful probes of physics beyond the Standard Model
(SM), as new particles can have a large impact on physics observables.
Recent results on lepton universality tests and measurements of
branching fractions and angular distributions of rare b->sll decays have
shown tensions with the SM predictions. The LHCb experiment is ideally
suited for the study of these flavour anomalies, due to its large
acceptance, precise vertexing and powerful particle identification
capabilities. The latest results from LHCb on the flavour anomalies will
be presented and their interpretation will be discussed.
Precision measurements of CP violating observables in b hadron decays
are powerful probes to search for physics effects beyond the Standard
Model. The most recent results on CP violation in the decay, mixing and
interference of b hadrons obtained by the LHCb Collaboration will be
presented, with particular focus on results obtained exploiting the data
collected during the Run 2 of LHC.
The IceCube Neutrino Observatory is situated at the geographic South Pole, where 1 km³ of ice is instrumented with 5160 optical sensors. Neutrinos are detected via their charged interaction secondaries, which produce light in various ways when passing through the ice. Other kinds of particles can likewise be detected, such as muons originating from cosmic-ray air showers or proposed particles beyond the standard model.
Since 2013, highly energetic neutrinos (> 1 PeV) of astrophysical origin have been observed with IceCube. This was mainly enabled by the large instrumented volume, which additionally makes IceCube highly sensitive in searches for new physics.
An overview of the recent results of IceCube is given with an emphasis on new findings in the field of neutrino and beyond standard model physics. This includes the high energy neutrino cross section as well as non-standard interactions, searches for dark matter and exotic signatures.
Weakly Interacting Massive Particles (WIMPs) remain one of the most promising dark matter candidates. Many experiments around the world are searching for WIMPs, and currently the best sensitivity to the WIMP-nucleon spin-independent cross-section is about $10^{-10}$ pb. LUX has been one of the world-leading detectors in the search for dark matter WIMPs. Results from the LUX experiment on WIMP searches for different WIMP masses, as well as the search for axions and axion-like particles, will be presented. The LUX detector will soon be replaced by its successor, the LUX-ZEPLIN (LZ) detector. With a 50 times larger fiducial mass and an increased background rejection power due to specially designed veto systems, the LZ experiment, due to take first data in 2020, will achieve a sensitivity to WIMPs exceeding the current best limits by almost two orders of magnitude (for spin-independent interactions and for WIMP masses exceeding a few GeV). An overview of the LZ experiment will be presented, and the LZ sensitivity will be discussed in connection with the accurately modelled background based on the high-sensitivity material screening campaign.
The measurement of cosmic neutrinos is a new and unique method to observe the Universe. Neutrinos are chargeless, weakly interacting particles that can cross dense matter or radiation fields over cosmological distances without being absorbed. They are thus a complementary probe with respect to other messengers, such as multi-wavelength light and charged cosmic rays, allowing the observation of the far universe and providing information on the production mechanisms.
This presentation will review the neutrino telescopes in the Mediterranean Sea that are operating or in progress. The ANTARES detector (Astronomy with a Neutrino Telescope and Abyss environmental RESearch) is the largest neutrino telescope currently in operation in the Northern Hemisphere and the first operating in sea water. Some of the ANTARES results will be reviewed, including diffuse and point-like source searches and multi-messenger searches. Finally, the future km3-scale telescope KM3NeT (Cubic Kilometre Neutrino Telescope) will be presented, focusing on the expected performance and sensitivities.
The theta term is an allowable CP-violating term in the QCD Lagrangian connected to topology; the coefficient multiplying it is zero to very high accuracy. Despite its being very small, the way QCD varies with theta allows one to probe aspects of the QCD vacuum, including the interplay of chiral physics and topology. This talk discusses the issue with an emphasis on possible scenarios in which the theta dependence could be quite different from what is commonly assumed.
Chiral-spin SU(2)_CS and SU(2N_F) symmetries
are symmetries of the fermionic charge operator
and of the chromo-electric interaction in QCD. They
contain as subgroups the chiral symmetries of the
QCD Lagrangian. In addition to the chiral
transformations they include a mixing of the left-
and right-handed components of quarks. They
emerge in QCD upon truncation of the near-zero modes of the Dirac operator, as well as at high temperatures, which has profound implications.
Revisiting the fast fermion damping rate calculation at 4-loop order in a QED and/or QCD plasma in thermal equilibrium, focus is put on a peculiar perturbative structure which has no equivalent at zero temperature. Not surprisingly, and in agreement with previous C*-algebraic
analyses, this structure renders the use of thermal perturbation theory more than questionable.
Local formulations of quantum field theory provide a powerful framework in which non-perturbative aspects of QCD can be analysed. In this talk I will outline how this approach can be used to elucidate the general analytic features of QCD propagators.
We study numerically the chromoelectric-chromomagnetic asymmetry of the dimension-two gluon condensate as well as the longitudinal gluon propagator at T>Tc in the Landau-gauge SU(2) lattice gauge theory. We show that the substantial correlation between the asymmetry and the Polyakov loop, as well as the correlation between the longitudinal propagator and the Polyakov loop, pave the way to studies of the critical behavior of the asymmetry and the longitudinal propagator. The respective values of the critical exponents and amplitudes are evaluated.
The CMS experiment at the Large Hadron Collider has measured events
with two energetic jets and searched for dijet resonances, signals of new
physics beyond the standard model. The coupling of the resonance to jets
determines whether it has a natural width that is narrow or broad when
compared to the experimental resolution. I will present a search for narrow
dijet resonances and compare with the predictions of multiple
models of new physics, including a mediator of interactions between
quarks and dark matter. I will also present a search for broad dijet resonances
and discuss the implications for the value of the coupling of a dark matter
mediator to quarks.
Excursion to Chania, old Venetian Harbor. Starting from the Old Agora, one can visit the magnificent Cathedral of Chania (a three-aisle basilica from 1860), the Archaeological Museum of Chania, the old Venetian Port with its old Turkish Mosque Yali Tzami (17th century), the Castle of Chania, the Egyptian-Venetian Lighthouse (16th century) and more.
Concert of classical music by Svetlana Nor (violin), Vladimir Nor (cello), and Public Talk by Dr. Despina Hatzifotiadou, "Mapping the secrets of the Universe with the Large Hadron Collider at CERN" (in greek) in Sailing Club in Chania, old harbor.
For interested people: Historical Talk by the archaeologist Alex Ariotti on the history of the Jews in Greece at 18h00 (in English) (TBC) in the restored Etz Hayyim Synagogue of Chania (15th century), and a visit of the Synagogue and its Exhibition.
The scientific and personal biography of Lev Lipatov, with a brief description of some of his most important works, is presented.
It is shown how to generalize semiclassical (high-energy) approximations
for linear eigenvalue problems to nonlinear eigenvalue problems.
We revisit the issue of the vacuum angle $\theta$ dependence in the weakly coupled (Higgsed) Yang-Mills theories. Anselm and Johansen noted that the vacuum angle $\theta_{\rm EW}$ associated with the electroweak SU(2) of the Standard Model is unobservable although all fermions get masses through Higgsing and there is no axion. We generalize this idea to a broad class of Higgsed Yang-Mills theories. We also consider the issue in frameworks of Grand Unification where situation turns out to be different.
It is still an unsolved problem what the microscopic degrees of
freedom are that account for the entropy of large black holes. In previous
research, which we briefly review, we proposed a general understanding of
the origin of the microstates of black holes in a Euclidean setting in
string theory based on the thermal scalar. Several recent lines of
research point to the importance of edge states in partly or completely
accounting for the black hole entropy. These edge states arise when the
Hilbert space does not factorize and understanding their entanglement
structure is paramount to understanding the entanglement of the degrees
of freedom over the horizon and hence the black hole entropy. We review
these developments and discuss our recent results on edge states and
their entanglement structure in gauge theories.
R. Muradyan (piano), S. Nor(violin), V. Nor(cello)
My work with Lev Lipatov will be briefly reviewed.
The use of methods of integrable systems for treating QCD amplitudes,
parton and BFKL kernels is discussed.
We suggest a renewed view on non-renormalizable interactions treated perturbatively
within a kinematically dependent renormalization procedure. It is based on the usual BPHZ R-operation, which is equally applicable to any local QFT independently of whether it is renormalizable or not. The key point is that the renormalization constant becomes a function of the kinematical variables, acting as an operator on the amplitude. The procedure is demonstrated on the example of D=8 supersymmetric gauge theory considered within the spinor helicity formalism.
An Invited talk at Lev Lipatov Memorial session
Effects of non-zero chemical potential in holographic QCD.
R. Muradyan (piano), S. Nor(violin), V. Nor(cello), "In memory of a great artist", Trio by P.I. Tchaikovsky, Part II-A
Searches for permanent electric dipole moments (EDMs) of fundamental particles, atoms and molecules are promising experiments to constrain and potentially reveal beyond Standard Model (SM) physics. A non-zero EDM is a direct manifestation of time-reversal (T) violation and, equivalently, violation of the combined operation of charge conjugation (C) and parity inversion (P). Identifying new sources of CP violation can help to solve fundamental puzzles of the SM, e.g. the observed baryon asymmetry in the Universe.
Theoretical predictions for magnitudes of EDMs in the SM are many orders of magnitude below current experimental limits. However, many theories beyond the SM require larger EDMs. Experimental results, especially when combined in a global analysis, impose strong constraints on CP violating model parameters.
After an overview of EDM searches, I will focus my presentation on the future neutron EDM experiment at TRIUMF (Vancouver). For this effort the TUCAN (TRIUMF Ultra Cold Advanced Neutron source) collaboration is aiming to build a world-leading source of ultracold neutrons based on a unique combination of a spallation target and a superfluid helium converter.
Another focus will be the search for an EDM of the diamagnetic atom xenon-129 using a helium-3 comagnetometer and SQUID detection. The HeXeEDM collaboration anticipates taking EDM data this year in the magnetically shielded room at PTB Berlin. Results from previous test runs at PTB Berlin and FRM-II in Munich will be presented.
The anomalous magnetic moment of the muon can be both measured and computed to a very high precision, making it a powerful probe to test the standard model and search for new physics. The previous measurement, by the Brookhaven E821 experiment, found a discrepancy of about three standard deviations from the predicted value. The Muon g-2 experiment at Fermilab will improve the precision to 140 parts per billion, compared to 540 parts per billion for E821, by increasing statistics and using upgraded apparatus. The first run of data taking has been accomplished at Fermilab, where we have already attained the statistics of E821. In this talk, I will summarize the current experimental status and briefly describe the data quality of the first run. I will compare this run's data with the previous E821 experiment and investigate the scope for further improvement.
The search for magnetic monopoles is a fascinating interdisciplinary field with implications in particle physics, astrophysics, and cosmology.
In this talk, the status of the searches for magnetic monopoles at accelerators and in the penetrating cosmic radiation is reviewed with emphasis on the most recent results from the MoEDAL experiment at the LHC.
A comprehensive set of resonances measured by the ALICE experiment in pp, p-Pb and Pb-Pb collisions at different LHC energies will be presented. In particular, the production of hadronic resonances such as $\rho^{0}$(770), K*(892), $\phi$(1020), $\Sigma$(1385)$^{\pm}$, $\Lambda$(1520) and $\Xi$(1530)$^{0}$ will be discussed in detail. In heavy-ion collisions, due to their short lifetimes, the hadronic resonances are sensitive to the re-scattering and regeneration processes occurring in the time interval between the chemical and the kinetic freeze-outs. The measurements in pp and p-Pb collisions are used as a reference for heavy-ion collisions and in the search for the onset of collective phenomena. We will report on the transverse momentum spectra, integrated yields, mean transverse momenta, particle ratios and nuclear modification factors of hadronic resonances. Results will be compared to those of other experiments, to theoretical models and to Monte Carlo generators.
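For reference, the nuclear modification factor mentioned above is conventionally defined by comparing the yield in nucleus-nucleus (or p-Pb) collisions with the binary-collision-scaled yield in pp collisions; in standard notation (assumed here, not taken from this abstract):

```latex
R_{AA}(p_{\mathrm{T}}) \;=\;
\frac{\mathrm{d}N_{AA}/\mathrm{d}p_{\mathrm{T}}}
     {\langle N_{\mathrm{coll}}\rangle \,\mathrm{d}N_{pp}/\mathrm{d}p_{\mathrm{T}}}
```

In the absence of nuclear effects $R_{AA}=1$; a suppression ($R_{AA}<1$) at high $p_{\mathrm{T}}$ is commonly read as in-medium modification.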
R. Muradyan (piano), S. Nor(violin), V. Nor(cello), "In memory of a great artist", Trio by P.I. Tchaikovsky, Part II-A
Hadrons carrying heavy quarks, i.e. charm or bottom, are important
probes of the hot and dense medium created in relativistic heavy-ion
collisions. Heavy quark-antiquark pairs are mainly produced in initial
hard scattering processes of partons. While some of the produced pairs
form bound quarkonia, the vast majority hadronize into open heavy
flavor particles. RHIC experiments carry out a comprehensive physics
program which studies open heavy flavor and quarkonium production in
relativistic heavy-ion collisions. The discovery at RHIC of large
high-pT suppression and flow of electrons from heavy flavors
has altered our view of the hot and dense matter formed in central
Au+Au collisions at 200 GeV. These results suggest a large energy loss
and flow of heavy quarks in the hot, dense matter. In recent years,
the RHIC experiments installed silicon vertex trackers both in central
rapidity and in forward rapidity regions, and have collected large data
samples. These silicon trackers enhance the capability of heavy flavor
measurements via precision tracking.
This talk summarizes the latest results of the RHIC experiments on
open and closed charm and beauty heavy-quark production measured
through their semileptonic decays in p+p, p/d + Au and Au+Au
collisions as a function of rapidity and energy, and their
interpretation with respect to the current theoretical understanding
on this topic.
Direct photons are a very important probe to study the properties of the medium created in heavy-ion collisions, since they are produced throughout the collision history and carry information about the medium at the point of their production, without strong interaction. While high-pT direct photons originating from initial hard scattering serve as a test of pQCD, low-pT photons contain rich information about the hot and dense QCD medium produced in the collisions. In particular, thermal photons are of keen interest since they allow us direct access to the thermodynamic properties of the medium. Their contribution is expected to be very large typically below 3 GeV/c. PHENIX has observed for the first time an enhanced yield below 3 GeV/c in Au+Au, as expected, but the v2 of the enhanced yield is unexpectedly large. The mechanism producing a large direct photon yield with a large v2 is not yet understood. PHENIX has made systematic measurements of direct photons with different collision energies and species. These systematic measurements could help in understanding the photon production mechanism in the hot QCD medium. In this presentation, we will report the latest status of the direct photon measurements.
The NA61/SHINE experiment studies hadron production in hadron-hadron, hadron-nucleus and nucleus-nucleus collisions. The physics programme includes the study of the onset of deconfinement and search for the critical point as well as reference measurements for neutrino and cosmic ray experiments. For strong interactions, future plans are to extend the programme of study of the onset of deconfinement by measurements of open-charm and possibly other short-lived, exotic particle production in nucleus-nucleus collisions. This new programme is planned to start after 2020 and requires upgrades to the present NA61/SHINE detector setup. Besides the construction of a large acceptance silicon detector, a 10-fold increase of the event recording rate is foreseen, which will necessitate a general upgrade of most detectors.
Particles resulting from heavy-ion collisions at $\sqrt{s_{NN}} = 2.76\ \mathrm{TeV}$ are mapped in a Mollweide-type projection. We decompose the particles' distribution in spherical harmonics and finally calculate its angular power spectrum. In practice, detector deficiencies and the lack of full pseudorapidity ($\eta$) coverage introduce artificial structures into the power spectrum, which are related only to the geometric cuts, i.e. to the $\eta$ range. We discuss which spectral fluctuations could be caused by the underlying particle distribution and which could come from statistical uncertainties. Furthermore, we explore how the power spectrum modes are possibly related to flow coefficients and how to extract them. We aim to discover which properties of the Quark-Gluon Plasma (QGP) can be seen through this type of analysis.
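As a sketch of the decomposition described above (my own minimal implementation, not the authors' analysis code; function names and conventions are assumptions), the angular power spectrum of a set of particle directions can be estimated as:

```python
import numpy as np
from scipy.special import sph_harm

def angular_power_spectrum(phi, theta, lmax):
    """Estimate the angular power spectrum C_l of a particle sample.

    phi   : azimuthal angles in [0, 2*pi)
    theta : polar angles in [0, pi] (related to pseudorapidity by
            theta = 2*arctan(exp(-eta)))
    Uses a_lm = sum_i Y_lm^*(dir_i), C_l = (1/(2l+1)) sum_m |a_lm|^2.
    """
    cl = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # scipy's sph_harm argument order is (m, l, azimuth, polar)
            a_lm = np.sum(np.conj(sph_harm(m, l, phi, theta)))
            cl[l] += np.abs(a_lm) ** 2
        cl[l] /= 2 * l + 1
    return cl
```

For N particles the monopole is fixed at $C_0 = N^2/4\pi$ regardless of the distribution; anisotropies from flow, or artificial structures from limited $\eta$ coverage, appear in the higher multipoles.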
R. Muradyan (piano), S. Nor(violin), V. Nor(cello), "In memory of a great artist", Trio by P.I. Tchaikovsky, Part II-A
The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment searching for neutrinoless double beta decay that has been able to reach the 1-ton scale. The detector consists of an array of 988 TeO2 crystals arranged in a compact cylindrical structure of 19 towers. The construction of the experiment and, in particular, the installation of all the towers in the cryostat was completed in August 2016, and data taking started in spring 2017. In this talk we present the neutrinoless double beta decay results of CUORE from examining a total TeO2 exposure of 86.3 kg yr, characterized by an effective energy resolution of 7.7 keV FWHM and a background in the region of interest of 0.014 counts/(keV kg yr). In this physics run, CUORE placed a lower limit on the decay half-life of 130Te of 1.3×10^25 yr (90% C.L.). We then discuss the additional improvements in the detector performance achieved in 2018 and the latest updates on the study of other rare processes in tellurium and on the evaluation of the background budget.
The initial motivation for this study was to investigate the fundamentals of relativity more deeply by developing a framework that is consistent with the existence of a privileged frame but, like the standard relativity theory, is based on the relativity principle and the universality of the (two-way) speed of light, and also preserves the group structure of the set of transformations between inertial frames. (An additional motivation for such an analysis is that cosmologically a preferred reference frame does exist.) Such a framework has been developed based on the following principles: (1) the degree of anisotropy of the one-way speed is a characteristic of the really existing anisotropy caused by the motion of an inertial frame relative to the preferred frame; (2) space-time transformations between inertial frames leave the equation of anisotropic light propagation invariant; (3) the set of transformations possesses a group structure. The Lie group theory apparatus has been applied to define the groups of transformations.
After developing the theory that satisfies all those requirements, it was found that such special relativity with a privileged frame allows a straightforward extension to general relativity (GR). The extension, like the standard general relativity, is based on the equivalence principle. The difference is in that a change of variables is needed for the combination invariant under the transformations to take the form of the Minkowski interval. Then the complete apparatus of general relativity can be applied but, to calculate physical effects, an inverse transformation to the 'physical' time and space intervals is to be used.
Applying the modified GR to cosmology yields a luminosity distance -- redshift relation corrected such that the observed deceleration parameter can be negative, as has been derived from the data for type Ia supernovae. Thus,
the observed negative values of the deceleration parameter can be explained within the matter-dominated Friedmann-Robertson-Walker (FRW) cosmological model of the universe, and so no dark energy is needed.
A number of other observations, such as the Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO),
that are commonly considered as supporting the late-time cosmic acceleration and the existence of dark energy, can also be well fitted by the model based on the relativity with a privileged frame.
GERDA (Germanium Detector Array) is an experimental project searching for the neutrinoless double beta decay of Ge-76. Operational at the Laboratori Nazionali del Gran Sasso of INFN since 2009, it has undergone a couple of upgrades aimed at increasing the exposed mass while improving the signal-to-background discrimination and the background index, and keeping the excellent stability and resolution performance that has characterized the setup from its first steps.
At present GERDA is the double beta decay (DBD) project with the lowest background and the best energy resolution in the region of interest, i.e. at Qbb, which for Ge-76 is 2039.0 keV.
In the last weeks GERDA updated its physics results for the fourth time.
In this talk the main experimental facts, the detector performance and the updated physics results will be reviewed, as well as the future outlook.
Among the theoretical models addressing the dark matter problem, the category based on a secluded sector is gaining increasing interest. The PADME experiment, at the Laboratori Nazionali di Frascati (LNF) of INFN, is designed to be sensitive to the production of a low-mass gauge boson A’ of a new U(1) symmetry holding for dark particles.
This “dark photon” is weakly coupled to the photon of the Standard Model, and it provides an experimental signature for one of the simplest implementations of the dark sector paradigm.
The DA$\Phi$NE Beam-Test Facility (BTF) of LNF will provide a high-intensity, monoenergetic positron beam impinging on a low-Z target. The PADME detector will measure with high precision the momentum of a photon produced along with the A’ boson in e$^+$e$^-$ annihilation on the target, thus allowing the A’ mass to be determined as the missing mass in the final state.
This technique, particularly useful in the case of invisible decays of the A’ boson, will be exploited for the first time in a fixed-target experiment. Simulation studies predict a sensitivity on the interaction strength ($\epsilon^2$ parameter) down to 10$^{-6}$ in the mass region 1 MeV < M$_{A’}$ < 25 MeV, for one year of data taking with a 550 MeV beam.
The first run will take place in 2018, and early data will give the opportunity to compare the detector performance with the design requirements. An intense activity is under way to deliver and commission the PADME experimental apparatus on site.
This talk will review the status of the experiment and the prospects.
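The missing-mass technique can be illustrated with a short kinematics sketch. The beam energy matches the quoted 550 MeV design value, while the A' mass and the event configuration below are purely illustrative assumptions:

```python
import numpy as np

def minkowski_sq(p):
    """Minkowski square E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

def boost_z(p, beta):
    """Boost a four-vector along z by velocity beta (c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([gamma * (p[0] + beta * p[3]), p[1], p[2],
                     gamma * (p[3] + beta * p[0])])

m_e  = 0.000511   # electron mass [GeV]
E_b  = 0.550      # positron beam energy [GeV], the quoted design value
m_Ap = 0.015      # hypothetical A' mass [GeV], chosen only for illustration

p_beam   = np.array([E_b, 0.0, 0.0, np.sqrt(E_b**2 - m_e**2)])
p_target = np.array([m_e, 0.0, 0.0, 0.0])   # atomic electron, at rest
p_tot    = p_beam + p_target

# two-body kinematics e+ e- -> gamma A' in the centre-of-mass frame
s       = minkowski_sq(p_tot)
E_gamma = (s - m_Ap**2) / (2.0 * np.sqrt(s))
p_gamma = boost_z(np.array([E_gamma, 0.0, 0.0, E_gamma]),  # photon along +z
                  p_tot[3] / p_tot[0])                     # boost CM -> lab

# missing mass of everything recoiling against the measured photon:
# M_miss^2 = (p_beam + p_target - p_gamma)^2 reconstructs the A' mass
m_miss = np.sqrt(minkowski_sq(p_beam + p_target - p_gamma))
```

Only the beam and the photon are measured; the A' (even if it decays invisibly) appears as a peak in the missing-mass spectrum, which is the point of the technique.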
The approach by Brodsky-Fadin-Kim-Lipatov-Pivovarov (BFKLP) to the next-to-leading approximation (NLA) of Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution, with generalized Brodsky-Lepage-Mackenzie resummation of QCD coupling constant effects, is reviewed. Applications of NLA BFKL within the BFKLP approach to gamma-gamma scattering and to dijet production with large rapidity separation in hadron collisions are discussed.
I will discuss the properties of the discrete BFKL solution and show that HERA data indicate that a state usually considered as the ground state has to decouple. As a consequence, the real ground state should be close to the non-perturbative region, i.e. in the saturation region. This finding, together with the known property of BFKL that it should be sensitive to symmetries beyond the Standard Model, may lead to interesting future experiments.
The ELI-NP facility (Extreme Light Infrastructure - Nuclear Physics) is the pillar of the European project ELI dedicated to frontier research in nuclear physics. The pillar will comprise two major research instruments: a high power laser system and a very brilliant gamma beam system. The ELI-NP Gamma beam system will deliver an intense gamma beam with unprecedented specifications in terms of photon flux, brilliance and energy bandwidth in an energy range from 0.2 to 20 MeV.
Given the challenging characteristics of the ELI beam, a specific system equipped with four basic elements has been developed to measure and monitor the beam parameters during the commissioning and the operational phase. A Compton spectrometer, to measure and monitor the photon energy spectrum, in particular the energy bandwidth; a sampling calorimeter, for a fast combined measurement of the beam average energy and its intensity; a nuclear resonant scattering system, for absolute beam energy calibration and inter-calibration of the other detector elements; and finally a beam profile imager to be used for alignment and diagnostics purposes. This talk presents an overview of the gamma beam characterization system with focus on the Compton spectrometer and the calorimeter, which were designed, assembled and are currently under test at INFN-Firenze. The layout and the working principle of these two devices will be
described in detail, as well as the expected performance evaluated from simulations and results of detector tests.
The LUCID-2 detector is the main online and offline luminosity provider of the ATLAS experiment. It provides over 100 different luminosity measurements from different algorithms for each of the 2808 LHC bunches. LUCID was entirely redesigned in preparation for LHC Run 2: both the detector and the electronics were upgraded to cope with the challenging conditions expected at the LHC center-of-mass energy of 13 TeV with only 25 ns bunch spacing. While LUCID-1 used gas as the Cherenkov medium, the LUCID-2 detector uses, in a unique new way, the quartz windows of small photomultipliers as the Cherenkov medium. The main challenge for a luminometer is to keep the efficiency constant during years of data taking. LUCID-2 uses an innovative calibration system based on radioactive 207Bi sources deposited on the quartz windows of the readout photomultipliers. This makes it possible to accurately monitor and control the gain of the photomultipliers, so that the detector efficiency can be kept stable at the percent level. A description of the detector and its readout electronics will be given, as well as preliminary results on the ATLAS luminosity measurement and the related systematic uncertainties.
The Extreme Energy Events (EEE) project is a cosmic ray physics experiment and at the same time an excellent outreach project. Its scientific goal is the study of extensive air showers from high energy cosmic rays and extreme energy events by detecting the muon component of the shower. To this aim, a network of muon telescopes has been installed in high schools distributed all over Italy. The project was conceived to interest high school students in science and give them hands-on experience of scientific research. The students are involved in all stages of the project: construction of the muon detectors, Multigap Resistive Plate Chambers (MRPCs), at CERN; installation, setting up and commissioning of the telescopes in their schools; study of the performance of the detectors; data taking and analysis. About 50 stations currently take data during coordinated runs lasting the whole academic year; more than 70 billion tracks have been collected, and a number of physics results have been published.
The HiSPARC experiment carries out cosmic ray research with the help of high school students. Universities, scientific institutes and predominantly high schools collaborate by hosting their own cosmic ray detection station. The approximately 140 stations throughout the Netherlands, United Kingdom and Denmark form a very large air shower detector array which enables new scientific questions to be answered. Our outreach goal is to bring modern physics to the classroom. In this talk I will discuss how we try to reach this goal.
The Hellenic Lyceum Cosmic Observatories Network (HELYCON) aims at the development of a network of extensive air shower detector stations distributed in Western Greece. In 2014 three pilot HELYCON stations were installed, and they are still in operation, at the Hellenic Open University (HOU) campus in Patra. Each station comprises three scintillator detectors (1 m² each) and one or more CODALEMA-type RF antennas. Furthermore, small-scale, low-cost autonomous HELYCON stations, suitable for easy installation and operation at high school laboratories, have also been constructed and tested at the HOU lab. In this work we report on the operation and performance of the HELYCON stations, and we also present the first educational activities that have been carried out by high school students and teachers.
One of the most long-standing puzzles in nuclear astrophysics is the so-called “Cosmological Lithium Problem”. The standard Big Bang nucleosynthesis (BBN) theory predicts the abundances of the light elements $^{2}$H, $^{3}$He, $^{4}$He and $^{7}$Li produced in the early universe. The primordial abundances of $^{2}$H and $^{4}$He inferred from experimental data are in good agreement with predictions. On the contrary, the theory overestimates the primordial $^{7}$Li abundance by about a factor of three. A possible explanation of this problem is an incorrect estimation of the destruction rate from n + $^{7}$Be reactions, since the decay of $^{7}$Be is responsible for the production of 95% of primordial lithium. Data on the $^{7}$Be(n, α) and $^{7}$Be(n, p) reaction channels are scarce or non-existent, so a large uncertainty still affects the abundance of $^{7}$Li predicted by BBN theory. With the aim of obtaining reliable data on the n + $^{7}$Be reactions over a wide neutron energy range, two measurements have been performed at the n_TOF facility at CERN, taking advantage of the new high-flux experimental area (EAR2). New detectors have been specifically developed for these measurements, and new techniques have been employed for the production of high-purity $^{7}$Be samples. In particular, for the first time a neutron measurement has been performed on a sample produced by implantation of a radioactive beam at ISOLDE. In this talk, the experimental apparatus and sample preparation techniques will be presented, together with recent results on the $^{7}$Be(n, α) and $^{7}$Be(n, p) reactions and their implications for the Cosmological Lithium Problem.
Recent developments, results and perspectives in double beta decay experiments at LNGS by using HPGe detectors and crystal scintillators, various approaches and various isotopes, will be presented. The measurements here presented have been performed in experimental set-ups of the DAMA collaboration. These set-ups are optimized for low-background studies and operate deep underground at the National Laboratories of Gran Sasso of INFN. The presented results are of great interest in this field and for some of the considered isotopes the reached sensitivity is one of the best available up to now.
Regge cuts in QCD amplitudes are discussed. The cuts with negative signature which appear in the next-to-next-to-leading logarithmic approximation greatly complicate the derivation of the BFKL equation. Their contributions in the two- and three-loop approximations are presented.
The Balitsky-Fadin-Kuraev-Lipatov equation describes a bound state of two reggeized gluons and has been known at leading order for four decades. Its subleading corrections at the next-to-leading order (NLO) have also been known, both in QCD and in maximally supersymmetric theory (SUSY N=4), for more than two decades. Going beyond NLO is in general a non-trivial task, and the result is still to be found for QCD. However, in some simpler cases, such as the BFKL equation in the color adjoint state in SUSY N=4, subleading corrections have recently become available at any order of perturbation theory. Their calculation is based on advanced integrability techniques and educated guesswork.
Integrability was first introduced into particle physics by Lev Lipatov while solving the leading order singlet BFKL equation using conformal invariance in coordinate space. Surprisingly, the color adjoint BFKL equation possesses a similar conformal symmetry, but in transverse momentum space. This difference, minor at first sight, results in a completely different space of functions building the BFKL eigenvalue at next-to-leading order. The color adjoint BFKL turns out to be much simpler, mostly due to a variety of identities between generalized polygamma functions with integer (color adjoint) versus half-integer (color singlet) shifts of the argument.
In this talk we discuss the possible space of functions for the higher order corrections and show how they can be constructed from simpler functions using so-called reflection identities.
The major part of the talk is dedicated to the contribution of Lev Lipatov to the present understanding of high energy scattering and to his bright, still-to-be-realized ideas regarding future developments in this field and beyond.
We propose a new formalism for particle production in high energy collisions which aims to unify the collinear factorization approach and DGLAP evolution equation with that of BFKL/gluon saturation physics.
I discuss the implications of a newly proposed approach to determine a^HLO_μ (the leading-order hadronic contribution to the muon anomalous magnetic moment) and α_QED by using space-like kinematics.
Sustaining interest and engagement in particle physics: how we do it at CERN.
I was the first Chinese particle physicist to write weblog articles sharing news, knowledge, thoughts and stories about high-energy physics and physicists with the public, first in Fermilab's Quantum Diaries (2005), and later in the Chinese ScienceNet (since 2007). In this talk I shall briefly describe my happiness, experience and lessons in this regard, and explain why it is not easy to promote education and outreach in a developing country like China.
The Particle group at the University of Birmingham has a long history of a wide-ranging programme of engagement,
spanning from large events for the general public to several projects for primary and secondary schools and teachers.
This talk will concentrate on our activities related to Science in connection with Art & Craft, both for Primary and Secondary Schools.
In particular, we will present our bespoke workshops "The Particle World", "Physics and Dance" and "Physics and Art"
and we will highlight their impact on students.
The Cosmic-Ray Extremely Distributed Observatory (CREDO) is a project dedicated to the global analysis of extremely extended cosmic-ray phenomena, so-called cosmic ray ensembles (CRE), beyond the capabilities of existing detectors and observatories. To date, cosmic-ray research has focused on detecting single air showers, while the search for ensembles of cosmic rays, which can cover even a significant fraction of the Earth, is scientific terra incognita. Instead of developing and commissioning a completely new global detector infrastructure, CREDO proposes to approach the global cosmic-ray analysis objectives with all types of available detectors, from professional to pocket-size, merged into a worldwide network. One of the observables that can be investigated in CREDO is the number of spatially isolated events collected in a small time window, which could shed light on fundamental physics issues. The CREDO mission and strategy require the active engagement of a large number of participants, including non-experts, who will contribute to the project using common electronic devices (e.g. smartphones). CREDO participants can also get engaged at the analysis level, mainly in a citizen science format, by classifying CREDO images, including those collected by their private devices, to feed and train the pattern recognition algorithms. In the talk we will present the status and perspectives of the project.
CUPID-0 is the first large-mass experiment based on cryogenic calorimeters (bolometers) that implements a dual read-out of light and heat for background rejection. The detector, consisting of 24 enriched Zn82Se crystals (5.28 kg of 82Se), has been taking data in the underground LNGS (Italy) since March 2017. In this talk I will present the analysis that allowed us to set the most stringent limit on the half-life of the neutrinoless double beta decay of 82Se. I will show how particle identification, enabled by the simultaneous read-out of heat and light, provides an unprecedented background level for cryogenic calorimeters of 3.6×10^{−3} counts/keV/kg/y.
Neutron-star mergers are interesting for several reasons: they are proposed as the progenitors of short gamma-ray bursts, they have been speculated to be a site for the synthesis of heavy elements, and they emit gravitational waves possibly detectable at terrestrial facilities. The understanding of the merger process, from the pre-merger stage to the final compact object-accreting system involves detailed knowledge of numerical relativity and nuclear physics. In particular, key ingredients for the evolution of the merger are neutrino physics and the matter equation of state. In this talk, I shall discuss some aspects of neutrinos in binary mergers and the impact that the equation of state has on the neutrino emission and its possible signal in water-Cherenkov detectors.
A Bayesian analysis of new hybrid nuclear equation of state models with a quark-hadron pasta phase is performed using modern observational data on compact stars, including those from the binary neutron star merger GW170817 [1]. The hybrid stellar models are based on a relativistic mean-field (RMF) model of the hadronic phase [2] and a relativistic density functional approach to the quark matter phase [3]. The occurrence of pasta phases in the transition from hadronic to quark matter is mimicked by a simple parabolic model for pressure versus chemical potential in the mixed-phase region [4,5]. This provides additional pressure relative to a Maxwell construction, as a finite-size effect of the structures in the mixed phase. A preliminary analysis of the chosen class of hybrid models demonstrates that the most probable phase transition density is around twice the nuclear saturation density.
Solar neutrinos have played a central role in the discovery of the neutrino oscillation mechanism. They are still proving to be a unique tool for investigating the fusion reactions that power stars and for further probing basic neutrino properties. The Borexino neutrino observatory has been acquiring data at the Laboratori Nazionali del Gran Sasso in Italy since 2007. Its main goal is the real-time study of low energy neutrinos (solar or originating elsewhere, such as geo-neutrinos). The latest analysis of the experimental data, taken during the so-called Borexino Phase-II (2011-present), will be showcased in this talk, yielding new high-precision, simultaneous wideband flux measurements of the four main solar neutrino components of the "pp" fusion chain (pp, pep, 7Be, 8B), as well as upper limits on the remaining two solar neutrino fluxes (CNO and hep).
Compact star physics is, in effect, a way to study the state of matter under conditions of extreme compression. There are, however, no direct experimental or observational methods for investigating the internal structure and composition of stellar matter. Therefore, the theoretical modeling of different scenarios, and the tensions that arise in explaining observational data on compact stars, are very important. The equation of state of high-density matter raises the question of possible exotic states of matter, such as quark plasma. These aspects are the focus of research on the structure and evolution of stars, which is the main content of compact star phenomenology.
Guided tour by Katerina Karkala (OAC) to the nearby Gonia Monastery. The tour will include a visit to the Museum of the Monastery, with very old paintings and objects, and a talk given on the veranda of the Museum overlooking the sea.
The bus will start from Minoa Palace in Platanias and pass through the stops of the everyday bus.
The description of the excursion is also available on the main page of the website, in the Materials area.
6:50 BUS 1 or 8:30 BUS 2
– arrival at Metamorphosis Monastery in Chania
part of the liturgy
light breakfast
tour through the monastery and visit of paint workshop
visit the museums
10:00 – 11:20 Agia Kyriaki Monastery (walking tour), with visits to many small cave churches on the way
11:20 – leaving Agia Kyriaki Monastery
for Chrysopigi Monastery
11:40 - arrival to Chrysopigi Monastery
tour through the monastery, visit of museums
lunch in Chrysopigi Monastery
13:45 - leaving for Agia Triada Monastery in Souda (close to airport) (BUS 1)
OR
13:45 – leaving for OAC (BUS 2)
14:00 - visit Agia Triada Monastery (BUS 1)
16:00 – leaving for OAC with a stop at Venizelos graves (BUS 1)
17:00 – arrival at OAC (BUS 1)
The description of the excursion is also available on the main page of the conference website, the Materials area.
The weak bosons are composite particles. The scalar boson, observed
at the Large Hadron Collider, is not a Higgs boson, but a p-wave excitation of
the neutral weak boson. The mass spectrum of the excited bosons
and their decays into weak bosons and photons are discussed. We also
estimate the cross sections for the production of the new particles at
the CERN LHC.
Besides the excited weak bosons, there also exist fermions, in
particular a stable heavy fermion, which provides the dark matter in our universe.
Doubly-Charged Bileptons at the LHC
Heavy quarkonium states are expected to provide essential information on the properties of the deconfined state of nuclear matter, the Quark-Gluon Plasma (QGP), formed in the early stages of ultra-relativistic heavy-ion collisions. In particular, the suppression of the strongly bound quarkonium states via the color screening mechanism can be seen as an effect of deconfinement. ALICE results on charmonium suppression in Pb–Pb collisions at the LHC seem to indicate that additional mechanisms, such as J/ψ production via recombination of charm and anti-charm quarks, also play a role, leading to a more complex picture of quarkonium melting in the QGP. The contribution of this so-called (re)generation mechanism is expected to be smaller for bottomonia than for charmonia.
In ALICE, quarkonia are measured down to zero transverse momentum via the dielectron (dimuon) decay channel in the central barrel (forward muon spectrometer) for the rapidity window $|y| < 0.9$ ($2.5 < y < 4$). We will report on the recent charmonium and bottomonium measurements in nucleus-nucleus collisions with ALICE at the LHC energies. The J/ψ and Υ(1S) nuclear modification factors in Pb–Pb collisions as a function of transverse momentum, rapidity and collision centrality will be presented as well as the J/ψ elliptic flow. Results on J/ψ production in Xe–Xe collisions will also be addressed. Finally, comparisons with other experimental measurements and theoretical calculations will be discussed.
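The nuclear modification factor presented here is the standard ratio of the AA yield to the binary-collision-scaled pp yield; a minimal sketch of the definition, with made-up numbers rather than ALICE data:

```python
def nuclear_modification_factor(yield_aa, n_coll, yield_pp):
    """R_AA: AA yield divided by the Ncoll-scaled pp yield.
    R_AA = 1 means binary-collision scaling holds; R_AA < 1 signals suppression."""
    return yield_aa / (n_coll * yield_pp)

# purely illustrative values (not ALICE data): 2.0 J/psi per event in AA,
# <Ncoll> = 10 binary collisions, 0.5 J/psi per pp event
r_aa = nuclear_modification_factor(yield_aa=2.0, n_coll=10.0, yield_pp=0.5)
# r_aa = 0.4 -> suppressed relative to binary-collision scaling
```

Regeneration from recombined charm quarks would push R_AA back up, which is how the two mechanisms compete in the measured factors.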
Over the lifetime of the experiment, PHENIX has accumulated a vast amount of data covering nine different collision systems at nucleon-nucleon center-of-mass energies ranging from 7.7 GeV to 510 GeV. This talk will review our present understanding of how quark-gluon plasma is formed in heavy ion collisions, and how it works.
Highlights from the experimental results of the STAR experiment at the Relativistic Heavy Ion Collider at the Brookhaven Laboratory, USA, will be presented.
The progress of fusion research and development is hindered by hydrodynamical instabilities occurring during the intense compression of the target fuel by energetic laser beams. A recent patent combines advances in two fields: detonations in relativistic fluid dynamics (RFD) and radiative energy deposition by plasmonic nano-shells. The initial compression of the target pellet can be decreased, so that the Rayleigh–Taylor and other instabilities do not set in, and rapid volume ignition can be achieved by a final, more energetic laser pulse, which can be as short as the light-crossing time of the pellet. The absorptivity of the target can be increased by one or two orders of magnitude by plasmonic nano-shells embedded in the target fuel. Thus, a higher ignition temperature and radiation-dominated dynamics can be achieved with limited initial compression. A short final light pulse can heat the target so that, based on the RFD results, most of the interior reaches the ignition temperature simultaneously. This prevents the development of any kind of instability that would impede complete ignition of the target.
Correlations between geometric and dynamical anisotropies, the development of elliptic and triangular flow, and the oscillations of femtoscopic radii are studied within the HYDJET++ model in relativistic heavy ion collisions at LHC energies. The aim was to describe the flow and femtoscopy observables simultaneously. It appears that the results obtained for spatial anisotropy alone anticorrelate with the data, whereas dynamical anisotropy provides both the correct sign of $v_2$ and $v_3$ and the correct phase of the oscillations of the femtoscopic radii. Nevertheless, the magnitudes of the oscillations in the latter case cannot match the measured signals. For a quantitative description of the data both types of anisotropy are needed.
Constructing an exact correspondence between a black hole model and a particular moving mirror trajectory, we investigate a new model that preserves unitarity. The Bogolubov coefficients in 1+1 dimensions are computed analytically. The key modification limits the origin of coordinates (moving mirror) to sub-light asymptotic speed. Effective continuity across the metric ensures that there is no information loss. The black hole emits thermal radiation and the total evaporation energy emitted is finite without invoking the conservation of energy.
Classical collisional particle systems in thermal equilibrium have their particle velocity/energy distribution function stabilized into a Maxwell-Boltzmann distribution. On the contrary, astrophysical plasmas are collisionless particle systems residing in stationary states characterized by the so-called kappa distribution function. Empirical kappa distributions have become increasingly widespread across the physics of astrophysical plasma processes, describing particles in the heliosphere, from the solar wind and planetary magnetospheres to the heliosheath and beyond, the interstellar and intergalactic plasmas. However, a breakthrough in the field came with the connection of kappa distributions with statistical physics and thermodynamics. Here we present the statistical origin of these distributions by maximizing the entropy in the canonical ensemble. Moreover, we show their thermodynamic origin from first principles: using only the zeroth law of thermodynamics, we derive the most generalized form of particle distribution function assigned with a temperature, that is, the kappa distributions. Therefore, two particle systems in contact that can exchange heat with each other are eventually stabilized into a stationary state that is not generally described by a Maxwell-Boltzmann, but by a kappa distribution function. Finally, we summarize the penetration and incorporation of kappa distributions into astrophysics and space science.
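A minimal numerical sketch of one common convention for the (unnormalized) kappa distribution illustrates two of its defining properties: the suprathermal power-law tail at small kappa, and the Maxwell-Boltzmann limit as kappa grows large. The convention and parameter values below are assumptions for illustration only:

```python
import numpy as np

def kappa_dist(v, theta, kappa):
    """Unnormalized isotropic kappa distribution (one common convention):
    f(v) ~ [1 + v^2/(kappa*theta^2)]^(-(kappa+1))."""
    return (1.0 + v**2 / (kappa * theta**2))**(-(kappa + 1.0))

def maxwellian(v, theta):
    """Unnormalized Maxwell-Boltzmann limit: f(v) ~ exp(-v^2/theta^2)."""
    return np.exp(-(v / theta)**2)

v = np.linspace(0.0, 3.0, 200)

# kappa -> infinity recovers the Maxwell-Boltzmann shape ...
close = np.max(np.abs(kappa_dist(v, 1.0, 1e4) - maxwellian(v, 1.0)))

# ... while small kappa produces the suprathermal power-law tail:
# at v = 3*theta the kappa = 2 distribution is far above the Maxwellian
tail = kappa_dist(3.0, 1.0, 2.0) / maxwellian(3.0, 1.0)
```

This captures numerically why a single parameter kappa interpolates between equilibrium (Maxwellian) and the far-from-equilibrium stationary states observed in space plasmas.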
It was recently proposed that a black hole may undergo a transition to a state in which a Fermi surface is formed inside the horizon, revealing an analogy with the recently discovered type II Weyl semimetals. In this scenario the low energy effective theory outside the horizon is the Standard Model, which describes excitations residing near a certain point P(0) in the momentum space of the hypothetical unified theory. Inside the horizon the low energy physics is due to excitations residing at points in momentum space close to the Fermi surface. We argue that those points may be essentially distant from P(0) and, therefore, inside the black hole the low energy dynamics involves quantum states that are not described by the Standard Model. We analyse the consequences of this observation for black hole physics and present a model, based on the direct analogy with type II Weyl semimetals, which illustrates this pattern.
In the type-I seesaw mechanism, neutrinos acquire masses and mixing through Yukawa couplings to the Higgs boson and sterile neutrinos -- Majorana fermions that are singlets with respect to the Standard Model gauge group. A sterile neutrino with mass in the keV range is a natural dark matter candidate, being produced in the early Universe through mixing with the active neutrinos, which originates from the Yukawa coupling after the electroweak transition. Here we consider a model with an additional scalar field coupled to the sterile neutrino and capable of changing its effective mass as the Universe expands. This can drastically change sterile neutrino production, making relatively large sterile-neutrino mixing cosmologically allowed, suggesting a new mechanism to produce cold dark matter sterile neutrinos, etc., depending on the dynamics of the scalar sector.
Supersymmetry breaking
We propose a new mechanism of (geometric) moduli stabilisation in type IIB/F-theory four-dimensional compactifications on Calabi-Yau manifolds, in the presence of 7-branes, that does not rely on non-perturbative effects. Complex structure moduli and the axion-dilaton system are
stabilised in the standard way, without breaking supersymmetry, using 3-form internal fluxes. Kähler class moduli stabilisation utilises perturbative string loop corrections, together with internal magnetic fields along the D7-brane world-volumes, leading to Fayet-Iliopoulos D-terms in the effective supergravity action. The main ingredient that makes stabilisation possible at a de Sitter vacuum is the logarithmic dependence of the string loop corrections in the large two-dimensional transverse volume limit of the 7-branes.
PT-symmetric quantum mechanics began with a study of the Hamiltonian $H=p^2+ x^2(ix)^\epsilon$. The surprising feature of this non-Hermitian Hamiltonian is that its eigenvalues are discrete, real, and positive when $\epsilon>0$. In this talk we study the corresponding quantum-field-theoretic Hamiltonian $H=(\partial\phi)^2 +\phi^2(i\phi)^\epsilon$ in D-dimensional space-time, where $\phi$ is a pseudoscalar field.
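The reality of the spectrum for $\epsilon>0$ can be checked numerically in the quantum-mechanical $\epsilon=1$ case, $H=p^2+ix^3$, by diagonalizing $H$ in a truncated harmonic-oscillator basis. This is an illustrative sketch, not the method of the talk; the basis size is an assumption:

```python
import numpy as np

N = 200  # harmonic-oscillator basis truncation (illustrative choice)

# ladder operator a|n> = sqrt(n)|n-1> in the number basis
a = np.diag(np.sqrt(np.arange(1, N)), 1)
x = (a + a.T) / np.sqrt(2.0)
p = (a - a.T) / (1j * np.sqrt(2.0))

# non-Hermitian but PT-symmetric Hamiltonian H = p^2 + i x^3
H = p @ p + 1j * (x @ x @ x)

evals = np.linalg.eigvals(H)
# keep the numerically real eigenvalues; complex high-lying ones are
# truncation artifacts of the finite basis
real_evals = np.sort(evals.real[np.abs(evals.imag) < 1e-6])
# the low-lying spectrum comes out real and positive, with ground
# energy near the known value ~1.15627 despite H being non-Hermitian
```

The same construction with $H=(\partial\phi)^2+\phi^2(i\phi)^\epsilon$ has no such simple matrix realization, which is what makes the field-theoretic case in $D$ dimensions the subject of the talk.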
Resurgence is a deep phenomenon found in a wide spectrum of mathematical and physical models. I will try to explain why this is the case and demonstrate its power, much of which lies still ahead.
I will talk about instanton effects in the Hofstadter problem.
Hadroproduction processes in different types of collisions are shown to be interrelated within the recently proposed participant dissipating effective-energy approach, which combines the constituent quark picture with Landau relativistic hydrodynamics. Within this approach the heavy-ion measurements of multiplicities at midrapidity and in the full rapidity range, as well as the pseudorapidity distributions, are shown to be well reproduced from (anti)proton-proton collisions over the heavy-ion collision energy range from a few GeV to a few TeV. The correlations of produced particles are considered within a model of clusters correlated in the transverse plane. The model is shown to describe the near-side ridge effect of two-particle azimuthal and rapidity correlations independently of the type of collision, from hadron-hadron interactions to heavy-ion collisions. Generalized to higher-order correlations, the model shows that the ridge effect holds for three-particle correlations. The model points to a potential signature of physics beyond the Standard Model that could be observed in three-particle azimuthal correlations and directly tested in experiments at the LHC.
Invited talk at Lev Lipatov Memorial Session
I present theoretical calculations for Higgs-boson and top-quark production,
including high-order soft-gluon corrections. I discuss charged-Higgs production in association with a top quark or a W boson, as well as single-top and top-antitop production. Total cross sections as well as transverse-momentum and rapidity distributions of the top quark or the Higgs boson are presented for various LHC energies.
QUBIC (the Q and U Bolometric Interferometer for Cosmology) is a CMB polarimeter designed to search for the B-mode polarization of the CMB, the signature expected from primordial gravitational waves generated during the inflationary phase of the early Universe.
QUBIC is currently being integrated and tested, and will be installed in late 2018 at its observation site near San Antonio de los Cobres on the Puna plateau in Salta, Argentina, at 5000 m a.s.l., offering a dry atmosphere and clear skies.
QUBIC is an innovative instrument based on the novel technology of bolometric interferometry, which combines the high sensitivity of bolometric detectors (2048 Transition Edge Sensors) with the observation of interference fringes (400 channels), allowing for unprecedented control of systematic effects. Furthermore, since our synthesized beam is significantly frequency-dependent, QUBIC has spectro-imaging capabilities allowing us to reconstruct multiple sub-frequency CMB polarization maps within our two wide-band filters (150 and 220 GHz). This opens promising perspectives for the control of foreground B-mode contamination, especially in the likely presence of complex dust emission.
End-To-End simulations have shown that QUBIC will reach a sensitivity of σ(r)=0.01 after two years of integration.
The first results obtained by the DAMA/LIBRA–phase2 experiment are presented. The data have been collected over 6 independent annual cycles, corresponding to a total exposure of 1.13 ton × yr, deep underground at the Gran Sasso Laboratory. The DAMA/LIBRA–phase2 apparatus, about 250 kg of highly radio-pure NaI(Tl), profits from second-generation high-quantum-efficiency photomultipliers and from new electronics with respect to DAMA/LIBRA–phase1. The improved experimental configuration has also allowed lowering the software energy threshold. The DAMA/LIBRA–phase2 data confirm the evidence of a signal that meets all the requirements of the model-independent Dark Matter annual modulation signature, at 9.5 sigma C.L. in the energy region (1–6) keV. In the energy region between 2 and 6 keV, where data are also available from DAMA/NaI and DAMA/LIBRA–phase1, the achieved C.L. for the full exposure (2.46 ton × yr) is 12.9 sigma.
Most galaxies seem to harbor a supermassive black hole (SMBH) of mass 10^{6-10} times the solar mass at their center. Many recent deep-sky surveys reveal that such heavy SMBHs already existed at redshift 6-7. There has been quite a lot of research attempting to explain the formation of so many SMBHs at such an early stage of the Universe, but no definite resolution despite over forty years of effort.
In this paper, we propose a coherent collapse of the wave formed from Bose-condensed dark matter, in contrast to the previous approach of baryonic particle collapse. It is very natural to expect light bosons to form a quantum condensed state. They coherently collapse to form black holes, as we demonstrate in this paper. We show that this is possible by using the Gross-Pitaevskii equation with some approximations.
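For orientation, a commonly used form of the Gross-Pitaevskii equation for a self-gravitating condensate, with the gravitational potential obeying a Poisson equation sourced by the condensate density, reads as follows (the specific approximations used in the paper may differ):

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + m\Phi + g\,|\psi|^{2}\right]\psi ,
\qquad
\nabla^{2}\Phi = 4\pi G\, m\,|\psi|^{2} ,
\qquad
g = \frac{4\pi\hbar^{2} a}{m} ,
```

with $a$ the s-wave scattering length; for an axion, $a < 0$, making the self-interaction attractive.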
The point of the present study is not only the formation of SMBHs but also the systematic separation of the dark matter cloud into the central SMBH and the surrounding dark halo structure. Indeed, the whole cloud collapses into an SMBH if we neglect angular momentum; on the other hand, if we include the present amount of angular momentum, no black hole can form at all. Neither case matches observations. Therefore we consider the early stage of a galaxy, when it was acquiring angular momentum through the tidal torque mechanism. We found, using typical observational values, that the mass ratio of the SMBH and the dark halo, M(SMBH)/M(dark halo), is approximately 10^{-4} to 10^{-3}.
If we further consider the axion as the boson, the situation changes completely owing to its attractive self-interaction, tiny though it is. We found that this attractive force just cancels the barrier formed by the angular momentum at a scale expressed in terms of the mass, the scattering length, and the gravitational constant only. This turns out to be a scale of several parsecs, with a time scale for SMBH formation of about 10^8 years, well within the observational constraints. Furthermore, in this axion case, many smaller black holes, of mass 10^{2-5} solar masses, form as well on the outskirts of the galaxy. We found a scaling law in the mass function of these black holes. The above scenario is compared with the recent experimental constraints on the axion properties.
Hamiltonian of the electromagnetic and gravitational fields on asymptotically null space-like surfaces
In my talk I will describe application of resurgence to Chern-Simons topological quantum field theory on closed 3-manifolds.
The strongly intensive observable between multiplicities in two acceptance windows separated in rapidity and azimuth is calculated in the model with quark-gluon color strings acting as sources. The dependence of this variable on the string two-particle correlation function, the width of observation windows and the rapidity gap between them is found.
In the case of independent identical strings, the model calculation confirms the strongly intensive character of this observable: it is independent of both the mean number of strings and its fluctuations. The peculiarities of the behavior of the strongly intensive observables between multiplicities of particles with different electric charges are also analyzed.
In the case when string fusion processes are taken into account and strings of a few different types are formed, it is shown that this observable equals a weighted average of its values for the different string types. Unfortunately, in the latter case this observable becomes dependent, through the weighting factors, on the collision conditions.
For comparison, the results of a calculation of the considered observable with the PYTHIA event generator are also presented.
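For reference, the strongly intensive observable between the multiplicities $n_F$ and $n_B$ in two separated windows is commonly defined as follows (assuming the standard construction of strongly intensive quantities; the talk's precise normalization may differ):

```latex
\Sigma(n_F, n_B)
  = \frac{\langle n_F\rangle\,\omega_{n_B} + \langle n_B\rangle\,\omega_{n_F}
          - 2\left(\langle n_F n_B\rangle - \langle n_F\rangle\langle n_B\rangle\right)}
         {\langle n_F\rangle + \langle n_B\rangle} ,
\qquad
\omega_{n} = \frac{\langle n^{2}\rangle - \langle n\rangle^{2}}{\langle n\rangle} ,
```

where $\omega_n$ is the scaled variance; for independent identical sources this combination reduces to its single-source value, independently of the number of sources and its fluctuations.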
A new data analysis was performed, based on looser selection criteria and a multivariate approach. The oscillation parameters and the nu_tau cross-section have been determined with a reduced statistical uncertainty, and the discovery of tau neutrino appearance is confirmed with an improved significance level. Moreover, the search for electron neutrino events has been extended to the full dataset, exploiting an improved method for the electron neutrino energy estimation. New limits have been set in the 3+1 neutrino model.
The Very Special Relativity Electroweak Standard Model (VSR EW SM) is a theory with
SU(2)_L×U(1)_R symmetry, with the same number of leptons and gauge fields as in the usual Weinberg-Salam (WS) model.
No new particles are introduced. The model is renormalizable and unitarity is preserved. However,
photons obtain mass and the massive bosons obtain different masses for different polarizations. Besides,
neutrino masses are generated. A VSR invariant term will produce neutrino oscillations and new processes
are allowed. In particular, we compute the rate of the decay µ → e + γ. All these processes, which
are forbidden in the Electroweak Standard Model, put stringent bounds on the parameters of our model
and measure the violation of Lorentz invariance. Violations of Lorentz invariance have been predicted
by several theories of Quantum Gravity. It is a remarkable possibility that the low energy effects of
Lorentz violation induced by Quantum Gravity could be contained in the non-local terms of the VSR
EW SM.
Non-perturbative techniques are needed to study strongly coupled systems. One powerful approach is the n-particle irreducible effective action. The technique provides a systematic expansion in which the truncation occurs at the level of the action. However, renormalisation using a standard counterterm approach is not well understood: at the 2PI level one must introduce multiple counterterms, and at higher orders there is no known way to renormalise an nPI theory using counterterms. On the other hand, renormalisation is much simpler using a renormalisation group approach. We present results from a calculation using a scalar theory with quartic coupling in 4 dimensions, at the 4-loop level. The 2PI theory is renormalised using one bare coupling constant which is introduced at the level of the Lagrangian. We discuss how the method can be generalised to higher-order calculations.
Gauges in generalized Galileon theories
We find vacuum solutions such that massive gravitons are confined in a local spacetime region by their gravitational energy in asymptotically flat spacetimes in the context of the bigravity theory. We call such self-gravitating objects massive graviton geons. The basic equations can be reduced to the Schroedinger-Poisson equations with the tensor ``wavefunction'' in the Newtonian limit. We obtain a non-spherically symmetric solution as well as a spherically symmetric solution. The energy eigenvalue of the Schroedinger equation in the non-spherical solution is smaller than that in the spherical solution. The results suggest that the non-spherically symmetric solution is the ground state of the massive graviton geon. The massive graviton geons may decay in time due to emissions of gravitational waves but this timescale can be quite long when the massive gravitons are non-relativistic and then the geons can be long-lived. We also discuss the ultralight dark matter scenario by this geon.
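For orientation, in the scalar case the Newtonian-limit system referred to above is the familiar Schroedinger-Poisson system (written here in units $\hbar = c = 1$; the geon construction promotes the wavefunction to a tensor):

```latex
i\,\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\nabla^{2}\psi + m\Phi\,\psi ,
\qquad
\nabla^{2}\Phi = 4\pi G\, m\,|\psi|^{2} ,
```

where $\psi$ is the non-relativistic wavefunction of the massive mode and $\Phi$ the Newtonian potential it sources.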
Primordial Intermediate-Mass Black Holes as Dark Matter
Research on wormholes is an important issue in the study of spacetime physics. A wormhole usually consists of exotic matter, which satisfies the flare-out condition and violates the weak energy condition, even though there have been attempts to construct wormholes with non-exotic matter. There have also been solutions for cosmological wormhole models as well as cosmological black hole solutions. The interaction of wormholes with the dark energy distributed over the universe can be one of the most important issues. Moreover, wormholes can exhibit a generalized theory of global and local physics, which is of interest for the unification of interactions.
There have been various solutions of this structure obtained by combining wormhole models with cosmological models. There was a solution of a wormhole in an inflationary expanding universe model. There was also a wormhole solution in an FLRW cosmological model which showed the wormhole throat expanding at the same rate as the scale factor. Hochberg and Kephart tried to extend the Visser-type wormhole into a surgical connection of two FLRW cosmological models. Similarly, there was a solution connecting two copies of a Schwarzschild-de Sitter type wormhole as a cosmological wormhole model.
In this paper, a cosmological model with an isotropic form of the Morris-Thorne type wormhole is derived, in a way similar to the McVittie solution for a black hole in an expanding universe. By solving Einstein's field equations with a plausible matter distribution, we found the exact solution for a wormhole embedded in a Friedmann-Lemaitre-Robertson-Walker universe. We also found the apparent cosmological horizons from the redefined metric and analyzed the geometric properties, including the causal and dynamic structure according to the matter distribution of the background cosmological model.
We report the results of a search in Super-Kamiokande for neutrino signals coincident with gravitational-wave events, using a neutrino energy range from 3.5 MeV to 100 PeV. We searched for coincident neutrino events within a time window of ±500 s around each gravitational-wave detection time. In this presentation, we report the number of events found in the window and the 90% confidence level upper limits on the combined neutrino fluence for each gravitational-wave event.
We study the non-perturbative superpotential in $E_8 \times E_8$ heterotic string theory on a non-simply connected Calabi-Yau
manifold $X$, as well as on its simply connected covering space $\tilde{X}$.
The superpotential is induced by the string wrapping holomorphic, isolated, genus 0 curves.
According to the residue theorem of Beasley and Witten, the non-perturbative superpotential must vanish in a large class
of heterotic vacua because the contributions from curves in the same homology class cancel each other.
We point out, however, that in certain cases the curves treated in the residue theorem as lying in the same homology class, can actually be in different homology classes with respect to the physical Kahler form.
In these cases, the residue theorem is not directly applicable and the structure of the superpotential is more subtle.
We show, in a specific example, that the superpotential is non-zero both on $\tilde{X}$ and on $X$.
On the non-simply connected manifold $X$, we explicitly compute the leading contribution to the superpotential
from all holomorphic, isolated, genus 0 curves with minimal area. The reason for the non-vanishing of the superpotential on $X$ is that the second homology group
contains a finite part called discrete torsion. As a result, the curves with the same area are distributed among different torsion classes
and, hence, do not cancel each other.
Basic input(s):
a) "On the apparent likeness of local gauges and their underlying physics"
in the file: naturesway2007.pdf on my Home Page:
URL: http://www.mink.itp.unibe.ch .
Basic ingredients: "gauging vector and axialvector currents" or "gauging charges",
as apparently alike but not intrinsically related to "gauging orientation" through the metric tensor or, equivalently, the vierbein or vielbein fields.
b) The aim of the proposed contribution is to illustrate and clarify
the conditions which justify the term "apparent" as opposed to
"intrinsic" or "essential", in particular when "gauging orientation".
c) It is outlined that the specifications of the gauge fields, as described in a) and b), are necessary on the way to establishing
a renormalizable local quantum field theory, including gravity.
This is planned to be a review talk addressing weak, partial and interaction-free measurements. If time allows, some new effects will be presented.
We show that in no-scale models in string theory, the flat, expanding cosmological evolutions found at the quantum level can be attracted to a "quantum no-scale regime" (QNSR), where the no-scale structure is restored asymptotically. In this regime, the quantum effective potential is dominated by the classical kinetic energies of the moduli fields. To be specific, we find that all initially expanding cosmological evolutions along which the 1-loop potential V_{1-loop} is positive are attracted to the QNSR describing a flat, ever-expanding universe. On the contrary, when V_{1-loop} can reach negative values, the expansion comes to a halt and the universe eventually collapses into a Big Crunch, unless the initial conditions are tuned within a tiny region of phase space. This suggests that flat, ever-expanding universes with positive potentials are far more natural than their counterparts with negative potentials.
Best Poster Award ceremony in the lower veranda of OAC, with wine tasting and dinner.
Concert of Classical Music in OAC by Ruben Muradyan (piano), Svetlana Nor (violin), Vladimir Nor (cello).
(Formal dressing is suggested).
In the last decades, advances in the level of precision in controlling atomic and optical systems opened up the low-energy precision frontier to fundamental physics tests. Exploitation of quantum entanglement in such systems to further improve the sensitivity of certain existing approaches is currently an active field of research. Drawing from the experiments in our lab, in this talk I will focus on the properties, generation and usage of a particular set of entangled states called spin squeezed states.
TBA
We present the latest results on the production of strange and multi-strange hadrons in pp and Pb--Pb collisions with ALICE at the LHC energies of $\sqrt{s}$ = 13 TeV and $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV. Strangeness production measurements are powerful tools for the study of the thermal properties of the deconfined state of QCD matter, the Quark-Gluon Plasma.
Thanks to its unique tracking and PID capabilities, ALICE is able to measure weakly decaying particles through the topological reconstruction of the identified hadron decay products.
Transverse momentum spectra of $\mathrm{K}^{0}_{S}$, $\Lambda$, $\Xi$ and $\Omega$ at central rapidity are presented as a function of the collision centrality.
The so-called baryon anomaly in the ratio $\Lambda$/$\mathrm{K}^{0}_{S}$ is studied to probe particle production mechanisms: the position of the peak is sensitive to recombination processes, the high-$p_{\mathrm{T}}$ part can provide revealing insights on fragmentation and, finally, the steepness of the rising trend observed for $p_{\mathrm{T}} \lesssim 2$ GeV/$c$ can be connected to the hydrodynamic expansion of the system.
In order to study strangeness enhancement, hyperon yields are normalised to the measurements of pion production in the corresponding centrality classes.
Comparisons to lower energy results as well as to different collision systems will be shown. The talk is aimed to present a complete experimental picture that is used as a benchmark for commonly adopted phenomenological models, such as the thermal statistical hadronisation approach.
We have studied the pT distributions of the invariant inclusive cross sections for primary charged particles produced in p-Pb collisions at LHC energies, with pT in the interval 0.5 - 20 GeV/c, using the HIJING and UrQMD models. We observed that the ALICE experimental data could not be described properly by either HIJING or UrQMD, but for particles with pT > 5 GeV/c in the pseudorapidity regions |η| < 0.3 and 0.3 < η < 0.8 the model predictions are very close to the experimental results. The predictions of the models are η-dependent, while in the experiment there is no essential difference between the yields of particles from the central and forward pseudorapidity intervals. The codes cannot satisfactorily take into account the leading effect due to the asymmetric p-Pb fragmentation. We observed that at high pT (0.5 - 100 GeV/c) the behavior of the distributions shows some universality, which does not depend on the model assumptions. The reason for this universality could be the string dynamics of parton hadronization at high pT values.
Approximately 1 $\mu$s after the Big Bang, the universe was in the state of a Quark-Gluon Plasma (QGP), an ultra hot and dense state of matter consisting of quarks and gluons. With the Large Hadron Collider (LHC) at CERN, in Switzerland, the properties of the QGP are studied by recreating this state of matter in collisions of lead ions. Studies of collisions of protons with protons or lead ions have shown that these smaller systems can also possess features of collectivity reminiscent of a QGP. The charged-particle multiplicity is one of the most fundamental measurements and provides insights into the mechanisms of particle production.
We present results for charged-particle multiplicity distributions in pp collisions over a wide kinematic range, namely ($-3.4 < \eta < 5.0$), and new studies for p-Pb collisions. The data used in this work were obtained using the Forward Multiplicity Detector (FMD) and the Silicon Pixel Detector (SPD) of ALICE (A Large Ion Collider Experiment) at the LHC. The results are compared to different models in order to assess which scenario better describes the data and whether there are signs of QGP.
The strong coupling $\alpha_s$ is one of the fundamental parameters of the Standard Model (SM). Its precise knowledge is of crucial importance to fully exploit the potential of the LHC and future experiments in testing the SM and constraining New Physics. However, an accurate determination of $\alpha_s$, i.e. comfortably below the percent level, faces many difficulties. Several determinations do not reach the desired level of precision, and those that do often have to deal with systematics which are hard to quantify. Notorious examples are the uncertainties coming from missing higher perturbative orders and the problem of non-perturbative corrections. In this talk I discuss how lattice field theory methods can elegantly solve these and other issues, and I present the results of an accurate sub-percent determination of the strong coupling from first principles.
Heavy flavor quarks are unique tools for studying the properties of the Quark Gluon Plasma (QGP) produced in high-energy nuclear collisions. Since heavy quarks are predominantly created in the initial hard scatterings in a heavy-ion collision, they can access the information of the early time dynamics. In this talk we will present pT and centrality dependences of the production and elliptic flow of various charm hadrons (e.g. D0 and D±) at mid-rapidity in Au+Au 200 GeV collisions using the STAR Heavy Flavor Tracker dataset. In addition, we will present their nuclear modification factors and compare them to those for lighter hadrons as well as to theoretical calculations. Physics implications of these measurements will be discussed.
I apply the Hamiltonian reduction procedure to 4-dimensional spacetimes without isometries in the (2+2) formalism and find privileged spacetime coordinates in which the physical Hamiltonian is expressed in terms of the conformal two-metric and its conjugate momentum. Physical time is the area element of the spatial cross-section of null hypersurfaces, and the physical radial coordinate is defined by "equipotential" surfaces on a given spacelike hypersurface of constant physical time. The physical Hamiltonian is local and positive in the privileged coordinates. I present the complete set of Hamilton's equations and find that they coincide with Einstein's equations written in the privileged coordinates. This shows that the Hamiltonian reduction is self-consistent and respects general covariance.
Decoherence may play a role in the quantum-to-classical transition of primordial cosmological fluctuations. But if it occurs in the early Universe, the interaction with the environment that gives rise to it also changes observable predictions such as the power spectrum from inflation and the amount of non-Gaussianities. I will show how this opens up the possibility to observationally probe quantum decoherence.
The gravitational instability, responsible for the formation of structure in the Universe, is a Newtonian phenomenon, occurring in the weak-field limit of General Relativity. It occurs at low energies and large radii of a self-gravitating gas, when thermal energy can no longer counterbalance self-gravity. I will show that if such an ideal, self-gravitating gas with constant rest mass is sufficiently heated up, it becomes subject to a novel relativistic gravothermal instability, which occurs at high energy and small radii, even if the rest mass satisfies the Newtonian weak-field approximation. On the one hand, thermal energy tends to stabilize a gas with respect to self-gravity; on the other hand, being a form of mass-energy, it gravitates as well. According to the Tolman-Ehrenfest effect, the temperature profile at thermal equilibrium is inhomogeneous in General Relativity: heat rearranges itself in order to counterbalance its own self-gravity, just as rest mass does. I find that there is always a threshold beyond which thermal energy cannot support its own gravitational attraction. Applications of the phenomenon include neutron stars and core-collapse supernovae. I apply the formalism to hot protoneutron stars and generalize the Oppenheimer-Volkoff calculation of the mass limit to the whole non-zero temperature regime. An ultimate upper mass limit of 2.4 solar masses at a radius of 15 km is reported, at a temperature relevant to core-collapse supernovae.
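The Tolman-Ehrenfest relation invoked above states that, for a fluid in thermal equilibrium in a static spacetime with metric component $g_{tt}$, the locally measured temperature satisfies

```latex
T(\mathbf{x})\,\sqrt{g_{tt}(\mathbf{x})} = \mathrm{const} ,
```

so the temperature is higher in regions of deeper gravitational potential: heat itself gravitates and redistributes accordingly.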
Wormholes in Galileon theories
The 'projective theory of relativity' is a theory developed by American geometers such as Oswald Veblen and Dutch geometers such as Johannes Schouten, mainly between 1930 and 1935. This theory differs radically from Kaluza-Klein type theories, conformal theories of spacetime, and theories such as 'de Sitter projective relativity', although it shares with these theories geometric aspects in spaces of dimension 5. Moreover, certain versions of this theory can be formulated independently of any given spacetime metric, contrary to the preceding ones. Nevertheless, other versions can include (pseudo-)Riemannian structures as substructures.
The peculiarity of the projective geometries involved in this projective theory of relativity was that they were based on spaces of dimension 5, parameterized by so-called 'homogeneous coordinates.' However, no physical observables could be ascribed to these homogeneous coordinates, in particular during the elaboration of the theory, which consequently fell completely into oblivion.
We will present how this projective theory of relativity can be fully justified physically from the causal structures and localizing protocols involved in 'relativistic localizing systems' that extend `relativistic positioning systems.' In other words, we explain the correspondence between 'homogeneous coordinates' of the projective theory of relativity and the physical observables defined in relativistic localizing systems.
Then, a theoretical overview of the projective geometry involved in this theory will be presented with some physical interpretations.
Also, possible astrophysical manifestations will be presented based on projective effects and/or invariance of interactions or observations with respect to projective transformations.
I will discuss the role of lepton flavour in scalar triplet leptogenesis, a mechanism for the generation of the baryon asymmetry of the Universe involving a heavy scalar triplet, which decays in a CP-violating way into leptons and antileptons. I will show that the effects of the different lepton flavours can never be neglected, and that their proper description at high temperature requires flavour-covariant Boltzmann equations. The numerical impact of these lepton flavour effects on the predicted baryon asymmetry can be very significant in all temperature regimes, contrary to the standard leptogenesis scenario with heavy right-handed neutrinos.
Supersymmetry breaking in heterotic strings and corrections to gauge couplings
I will discuss gauge threshold corrections in (non)supersymmetric heterotic vacua. After a general introduction to the general structure of these thresholds and to their universality properties, even in the absence of supersymmetry, I will present new vacua (supersymmetric and not) which do not suffer from the decompactification problem.
The fixed-target NA61/SHINE experiment at CERN SPS is conducting a rich program on strong interactions,
which covers the study of the onset of deconfinement and the search for the QCD critical point.
To achieve these goals, NA61/SHINE has performed a scan of a broad region of the QCD phase diagram ($T - \mu_B$) by varying the beam momentum
and the size of the colliding systems (p+p, p+Pb, Be+Be, Ar+Sc, Xe+La, Pb+Pb).
New NA61/SHINE results on particle spectra and fluctuations in p+p, Be+Be and Ar+Sc
collisions will be discussed.
Special emphasis will be put on measurements of particle ratios and multiplicity fluctuations versus energy and the system size.
The obtained results exhibit very interesting features and might be related to the onset of deconfinement as well as to the onset
of the formation of large clusters of strongly interacting matter.
Recently, the experimental setup of the NA61/SHINE experiment was supplemented
with a Vertex Detector (VD), motivated by the importance and the possibility of the first
direct measurements of open charm mesons in heavy-ion collisions at SPS energies.
The presentation will also cover the physics motivation behind the open charm measurements at SPS energies and will provide
information on future plans for charm flavor measurements in the NA61/SHINE experiment.
BM@N (Baryonic Matter at Nuclotron) is the first experiment to be realized at the NICA-Nuclotron accelerator complex. The aim of the BM@N experiment is to study interactions of relativistic heavy-ion beams with fixed targets. The scientific program of the BM@N experiment comprises studies of nuclear matter in the intermediate energy range between the experiments at the SIS and NICA/FAIR facilities. The BM@N experiment has recorded its first experimental data. The experimental runs were performed with deuteron and carbon beams with kinetic energies from 3.5 to 4.5 GeV per nucleon. An extended configuration of the BM@N set-up was realized in recent runs with argon and krypton beams. The first measurement of short-range correlations of nucleons in the carbon nucleus was performed in inverse kinematics with a carbon beam and a liquid hydrogen target. The experimental program, covering the physics of heavy-ion collisions and short-range correlations of nucleons, as well as the first experimental results on hyperon production, are presented.
In this work, we present the simulation results obtained by the research group from the Faculty of Physics, University of Bucharest, involved in the CBM Collaboration at FAIR-GSI Darmstadt. The simulations have been done using the YaPT system, developed in our own research center. The results reflect possible bulk properties of the highly excited and dense nuclear matter formed in nucleus-nucleus collisions, as well as different hydrodynamic behaviours of the nuclear matter from the participant region and possible influences of the spectator regions. Different hypotheses are tested, and the analysis methods include global analysis.
The experimental results from experiments performed at JINR, RHIC-BNL and LHC-CERN, as well as simulated results for CBM-FAIR and MPD-NICA-JINR, are used to discuss nuclear matter compressibility and viscosity and their possible dependence on the collision geometry and collision energy. Possible consequences of the presence of a transition regime are discussed, mainly in relation to the cumulative effect.
The main scientific goal of the NICA project at JINR (Dubna) is the experimental exploration of the QCD phase diagram in the region of maximum baryonic density. Systematic measurements of the production of hadrons, leptons, and light (hyper)nuclei at the NICA collider will be conducted with the MultiPurpose Detector (MPD), which provides efficient tracking and powerful particle identification in a high track density environment. In my talk, I will discuss the NICA/MPD physics objectives and review the progress in detector construction. The theoretical motivation will be accompanied by results of realistic Monte-Carlo simulations of the proposed experimental setups.
Despite the successes of the Standard Model, the QCD-based nature of the strong interaction is insufficiently understood. The antiProton ANnihilations at DArmstadt (PANDA) experiment aims to shed light on the nature of the strong interaction by investigating, with a versatile detector, annihilations of cooled antiprotons stored at the High Energy Storage Ring (HESR) at the future Facility for Antiproton and Ion Research (FAIR). Antiproton annihilations allow direct population of high-spin and exotic states, while cooling of the beams makes possible precise measurements of resonance line shapes, one of the key properties required for understanding the nature of these states. This contribution will focus on the physics goals to be targeted during the first years of operation of the experiment.
The measurement of quantum nonlocal observables lies at the foundations of quantum theory. We report an implementation of von Neumann instantaneous measurements of nonlocal observables, which becomes possible thanks to technological achievements in creating hyperentangled photons. Tests of the reliability and of the nondemolition property of the measurements have been performed with high precision, showing the suitability of the scheme as a basic ingredient of numerous quantum information protocols. Based on the concept of modular values, we performed the experimental extraction of the weak values of nonlocal observables. Our results overcome the absence of nonlocal observables in the physical Hamiltonian and the difficulty of extracting nonlocal weak values from high-order approximations, and therefore significantly simplify the task of obtaining nonlocal weak values. Our methods and results can be applied to demonstrate the failure of the product rule with strong measurements or in the weak-value scenario. We also show that the nonlocal wave function can be directly measured via our method.
Measurements are crucial in quantum mechanics, because of features like the wave function collapse after a “strong” (projective) measurement or the fact that measuring a quantum mechanical observable completely erases the information on its conjugate.
Nevertheless, quantum mechanics allows for different measurement paradigms including weak measurements (WMs), i.e. measurements performed with an interaction sufficiently weak not to collapse the original state.
These measurements result in weak values [1-6], exploited for research in fundamental physics [7-13] as well as in applied physics, where they serve as powerful tools for quantum metrology [14-20].
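For concreteness, the weak value of an observable $\hat{A}$, obtained for an ensemble pre-selected in the state $|\psi\rangle$ and post-selected in $|\phi\rangle$ [3], is

```latex
A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle} ,
```

which can lie outside the spectrum of $\hat{A}$ (and even be complex) when the overlap $\langle \phi | \psi \rangle$ is small.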
A second example is given by protective measurements (PMs) [21], a new technique able to extract information on the expectation value of an observable even from a single measurement on a single (protected) particle [22].
In addition, other novel measurement protocols have stemmed from these measurement paradigms. One example is genetic quantum measurement (GQM), a recursive measurement paradigm exhibiting some analogies with the typical mechanisms of genetic algorithms [23], which yields uncertainties even below the level fixed by the quantum Cramér-Rao bound for the traditional prepare-and-measure scheme.
Recently, we have been exploring a new technique named robust weak value measurement (RWM), where, in principle, weak values can be extracted not as an average on an ensemble of weakly measured particles, but even from a single particle (provided it survives the whole measurement process).
In this talk, we present the first experimental implementation of PM [22], showing unprecedented measurement capability and demonstrating how the expectation value of an observable can be obtained without any statistics.
Afterwards, we introduce the GQM paradigm, illustrating its features and advantages, verified by the experimental results obtained in our proof-of-principle experimental demonstration.
Finally, we will present RWM and show the preliminary results achieved by our experimental implementation of such protocol.
References
[1] A. G. Kofman, S. Ashhab, F. Nori, Phys. Rep. 520, 43 (2012).
[2] B.Tamir and E. Cohen, Quanta 2, 7 (2013).
[3] Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988).
[4] N. W. M. Ritchie, J. G. Story, and R. G. Hulet, Phys. Rev. Lett. 66, 1107 (1991).
[5] G. J. Pryde, J. L. O’Brien, A. G. White, T. C. Ralph, and H. M. Wiseman, Phys. Rev. Lett. 94, 220405 (2005).
[6] O. Hosten and P. Kwiat, Science 319, 787 (2008).
[7] Y. Aharonov et al., Phys. Lett. A 301, 130 (2002).
[8] H. M. Wiseman, New J. Phys. 9, 165 (2007).
[9] R. Mir, J. S. Lundeen, M. W. Mitchell, A. M. Steinberg, J. L. Garretson, and H. M. Wiseman, New J. Phys. 9, 287 (2007).
[10] N. S. Williams and A. N. Jordan, Phys. Rev. Lett. 100, 026804 (2008).
[11] M. E. Goggin, M. P. Almeida, M. Barbieri, B. P. Lanyon, J. L. O’Brien, A. G. White, and G. J. Pryde, PNAS 108, 1256 (2011).
[12] M. Pusey, Phys. Rev. Lett. 113, 200401 (2014).
[13] F. Piacentini et al., Phys. Rev. Lett. 116, 180401 (2016).
[14] O. Hosten and P. Kwiat, Science 319, 787 (2008).
[15] K. J. Resch, Science 319, 733 (2008).
[16] P. B. Dixon, D. J. Starling, A. N. Jordan, and J. C. Howell, Phys. Rev. Lett. 102, 173601 (2009).
[17] J. M. Hogan, J. Hammer, S.-W. Chiow, S. Dickerson, D. M. S. Johnson, T. Kovachy, A. Sugerbaker, and M. A. Kasevich, Opt. Lett. 36, 1698 (2011).
[18] O. S. Magaña-Loaiza, M. Mirhosseini, B. Rodenburg, and R. W. Boyd, Phys. Rev. Lett. 112, 200401 (2014).
[19] J. Salvail et al., Nature Phot. DOI 10.1038;
[20] J. Lundeen, B. Sutherland, A. Patel, C. Stewart, and C. Bamber, Nature 474, 188 (2011).
[21] Y. Aharonov and L. Vaidman, Phys. Lett. A 178, 38 (1993).
[22] F. Piacentini et al., Nat. Phys. 13, 1191–1194 (2017).
[23] M. Mitchell, An Introduction to Genetic Algorithms, Cambridge, MA: MIT Press (1996).
The Aharonov-Albert-Vaidman weak values provide a starting point for a time-symmetric ontology for quantum mechanics. While some of the work on weak measurements hinted at such an ontology, it was never formally defined. I will present results on the initial steps taken in formally defining a weak value ontology, starting with an operational definition for weak values. The operational approach clarifies the basic assumptions required in order to accept weak values as ontological elements of a theory (and weak measurements as fundamental empirical tools). I will then show that it is possible to build a neo-classical model that gives a clear ontology to a recent weak measurement experiment [Hallaji et al., Nat. Phys. (2017)] which cannot be explained by classical physics. While the neo-classical model does not extend beyond the specifics of the experiment, it provides an indication of what a weak value ontology could look like.
A weak measurement performed on a pre- and post-selected quantum system can result in an average value that lies outside of the observable’s spectrum. This effect, usually referred to as an “anomalous weak value”, is generally believed to be possible only when a non-trivial post-selection is performed, i.e., when only a particular subset of the data is considered. In this work we show, however, that this is not the case in general: in scenarios in which several weak measurements are sequentially performed, an anomalous weak value can be obtained without post-selection, i.e., without discarding any data. We discuss several questions that this raises about the subtle relation between weak values and pointer positions for sequential weak measurements. Finally, we consider some implications of our results for the problem of distinguishing different causal structures.
An overview of recent developments in the applications of resurgence and transseries to string theory and 2d quantum gravity.
We address the puzzle of the light-like rolling in linear dilaton background relaxing to the tachyon vacuum. While we expect no perturbative fluctuations around the tachyon vacuum, and yet the tachyon relaxes to the vacuum, the resolution of this paradox comes in the form of an asymptotic series.
In the setting of the Painlevé I equation, which can be viewed as describing the double scaling limit of 2d quantum gravity, I describe techniques which can lead to a full understanding of the physics and mathematics encoded in resurgent asymptotic (trans)series.
Primordial black holes can be seeded by large cosmological fluctuations produced during inflation. This happens if the inflationary potential is sufficiently flat in some regions. However, in such regions, the dynamics of the inflaton is dominated by quantum diffusion rather than by classical slow roll. This implies that the standard method of calculating the amplitude of the fluctuations, and hence the abundance of black holes, breaks down. We show how a proper calculation of inflationary perturbations that incorporates the effect of quantum diffusion can be performed using the formalism of stochastic inflation. We will discuss how the predictions for the primordial black hole abundance change, and hence how the constraints on the inflationary potential coming from their non-detection are modified.
Quantum statistics have a profound impact on the properties of systems composed of identical particles. At the most elementary level, Bose and Fermi quantum statistics differ in the exchange phase, either 0 or $\pi$, which the wave function acquires when two identical particles are exchanged. I will report on a scheme to directly probe the exchange phase with a pair of massive particles by physically exchanging their positions [1]. Importantly, the particles always remain spatially well separated, thus ensuring that the exchange contribution to their interaction energy is negligible and that the detected signal can only be attributed to the exchange symmetry of the wave function. I will discuss an implementation of this scheme using a pair of ultracold atoms that are initially prepared in the motional ground state of two distinct lattice sites of a polarization-synthesized optical lattice [2].
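Schematically, the exchange phase $\varphi_{\mathrm{ex}}$ referred to above is defined through the behaviour of the two-particle wave function under exchange of the identical particles:

```latex
\psi(x_2, x_1) = e^{i\varphi_{\mathrm{ex}}}\, \psi(x_1, x_2),
\qquad
\varphi_{\mathrm{ex}} = 0 \ \text{(bosons)}, \quad \varphi_{\mathrm{ex}} = \pi \ \text{(fermions)}.
```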
The geometric phase can be characterized by the weak value. By considering weak measurement with decoherence, we can operationally define the geometric phase in the presence of decoherence. This definition can be connected to Uhlmann's geometric phase for mixed states. Furthermore, an experimental demonstration in linear optics is discussed.
We present an implementation of a superadiabatic protocol proposed by Demirplak and Rice and by Sir Michael Berry in a superconducting circuit consisting of a transmon device operated as a qutrit (three-level system). The adiabatic process studied is STIRAP (stimulated Raman adiabatic passage), which in our system is realized by coupling the two transitions with Gaussian microwave pulses with an appropriate timing [1]. Then, a control Hamiltonian in the subspace of the ground state and the second excited state is created by using a two-photon pulse. The three pulses produce an analog of the Aharonov-Bohm effect in the "internal space" of the qutrit: this results in a synthetic gauge-invariant phase, which can be tuned externally. The control Hamiltonian can be tailored to cancel the nonadiabatic terms, thus achieving superadiabatic transfer of population [2]. In addition, I will show another related experiment in which the transfer is realized by a non-Abelian gate [3].
References:
[1] K. S. Kumar, A. Vepsäläinen, S. Danilin, G. S. Paraoanu, Stimulated Raman adiabatic passage in a three-level superconducting circuit, Nature Communications 7, 10628 (2016).
[2] A. Vepsäläinen, S. Danilin, G. S. Paraoanu, Superadiabatic population transfer by loop driving and synthetic gauges in a superconducting circuit, arXiv:1709.03731.
[3] S. Danilin, A. Vepsäläinen, G. S. Paraoanu, Experimental state control by fast non-Abelian holonomic gates with a superconducting qutrit, Phys. Scr. 93, 055101 (2018).
Quantum vacuum fluctuations on curved spacetimes cause the emission of entangled pairs. The most remarkable instance of this effect is Hawking radiation from black hole horizons, which, however, cannot be observed in astrophysics. Fortunately, it is possible to recreate the kinematics of waves on curved spacetimes in the laboratory to study horizon emission. Here we investigate and demonstrate the role of laboratory horizons in the production of entangled pairs. We develop a field-theoretical description based on an optical analogue system in the Hopfield model to calculate the scattering matrix that completely describes the mode coupling leading to the emission of pairs in various kinematic configurations. We find that horizons lead to an order-of-magnitude increase in the pair production, a simplification and increase of the quantum correlations, and a characteristic shape of the emission spectrum. The findings clarify a number of open questions on the way towards the detection of the Hawking effect in these dispersive systems. Furthermore, they will be relevant in numerous optical and non-optical systems exhibiting horizons.
We calculate a finite momentum-dependent part of the photon polarization operator in a simple model of Lorentz-violating quantum electrodynamics nonperturbatively at all orders of Lorentz-violating parameters. We sum one-particle reducible diagrams into the modified photon propagator, and determine the physical photon dispersion relation as the location of its pole. The photon dispersion relation, as well as its group velocity, acquires the one-loop momentum-dependent radiative correction. We constrain the Lorentz-violating parameters for heavy charged fermions (muon, τ-lepton, top-quark) from the photon timing observations.
In this talk, we discuss whether the new ekpyrotic scenario can be embedded into ten-dimensional supergravity. We use the fact that the scalar potential obtained from flux compactifications of type II supergravity with sources has a universal scaling with respect to the dilaton and the volume mode. Similar to the investigation of inflationary models, we find very strong constraints ruling out ekpyrosis from analysing the fast-roll conditions. We conclude that flux compactifications tend to provide potentials that are neither flat and positive enough (inflation) nor steep and negative enough (ekpyrosis).
The anti-de Sitter/conformal field theory correspondence and the membrane paradigm have illuminated many aspects of string and field theory, giving key insights into what a quantum theory of gravity might look like, while also providing tools to study a wide range of strongly coupled systems. In essence, these ideas are a statement of the holographic principle: a fundamental observation about our universe which states that all of the information contained in a bulk region of space-time can be encoded on the boundary of that region. However, these approaches are usually limited to situations where knowledge of the boundary of space or the entire future history of the universe is required. From a practical point of view this is unsatisfactory. As local observers, we are not generally able to access these types of boundaries.
To overcome these limitations we use `gravitational screens' as quasi-local observers. A gravitational screen is a 2+1 dimensional time-like hypersurface surrounding an arbitrary region of space. Projecting Einstein's equations onto the screen results in the equations of non-equilibrium thermodynamics for a viscous fluid, which encode all of the information present inside the screen in terms of the holographic fluid on the surface, without being restricted to the event horizon of a black hole or to spatial infinity. We study the dynamics and equations of state for screens in various space-times, determine the properties of the fluids that arise from different background geometries, and establish a relationship between the gravitational degrees of freedom in the bulk and the thermodynamic degrees of freedom on the screen.
In this talk we present a generalization of the multicomponent Van der Waals equation of state in the grand canonical ensemble [1, 2]. For the one-component case the third and fourth virial coefficients are calculated analytically. It is shown that an adjustment of a single model parameter allows us to reproduce the third and fourth virial coefficients of the gas of hard spheres with small deviations from their exact values. A thorough comparison of the compressibility factor and speed of sound of this model with the one- and two-component Carnahan-Starling equation of state is made. We show that the model with the induced surface tension can reproduce the results of the Carnahan-Starling equation of state up to packing fractions of 0.2-0.22, at which the Van der Waals equation of state is inapplicable [1]. Using this approach we develop an entirely new hadron resonance gas model and apply it to the description of the hadron yield ratios measured at AGS, SPS, RHIC and ALICE energies of nuclear collisions. We confirm that the strangeness enhancement factor has a peak at low AGS energies and that there is a jump of the chemical freeze-out temperature between the two highest AGS energies [1, 2]. We also argue that the chemical equilibrium of strangeness, i.e. γs ≃ 1, observed above the center-of-mass collision energy of 8.7 GeV may be related to a hadronization of quark-gluon bags which have the Hagedorn mass spectrum, and hence it may be a new signal for the onset of deconfinement.
[1] K. A. Bugaev, V. V. Sagun, A. I. Ivanytskyi, I. P. Yakimenko, E. G. Nikonov, A. V. Taranenko and G. M. Zinovjev, Nucl. Phys. A 970, (2018) 133.
[2] K. A. Bugaev, R. Emaus, V. V. Sagun, A. I. Ivanytskyi, L. V. Bravina, D. B. Blaschke, E. G. Nikonov, A. V. Taranenko, E. E. Zabrodin and G. M. Zinovjev, Phys. Part. Nucl. Lett. 15, (2018) 210.
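For orientation, the classical one-component Van der Waals equation of state that the model generalizes reads (in units $k_B = 1$, with particle number density $n$, attraction parameter $a$ and excluded-volume parameter $b$):

```latex
p(T, n) = \frac{T\, n}{1 - b\, n} - a\, n^2 .
```

This is only the familiar starting point; the grand-canonical multicomponent generalization with induced surface tension is developed in [1, 2].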
Energy scans of A+A collisions from 3 to 200 AGeV are considered in the UrQMD and QGSM microscopic models, and a comparison with RHIC experiments on anisotropic flow has been performed.
Directed and elliptic flow are considered both during the dynamical evolution and at freeze-out.
It is found that the flow develops slowly and reaches saturation at about t = 10 fm/c, while the particles that freeze out early retain the largest flow.
At low energies the potentials drastically change the directed and elliptic flow.
Recent experiments demonstrate that the transition path time distributions of proteins are measurable. The folding-unfolding dynamics of proteins is classical mechanical in nature, but the experiments motivated the development of a quantum theory of transition path time distributions [1-3]. The formalism was applied to define a tunneling flight time, which is found to vanish for symmetric and asymmetric Eckart barriers and a rectangular barrier, irrespective of the barrier width and height. This generalizes the Hartman effect. Yet, special relativity is not violated [4].
The same approach has led to an understanding that the transition path time distribution can be used in the context of a new concept - time averaged weak values [5]. This approach has led to the formulation of a time averaged weak value commutator and uncertainty principle [6]. The latter is of special note since it leads to the derivation of a weak value time-energy uncertainty principle [5]. This insight is applied to scattering on a potential. We find that it is possible to determine both the location and the momentum of a quantum particle at the price of total uncertainty with respect to the time of arrival at the location.
Finally, we find that the concept of time averaging leads to the conclusion that it is possible to determine the complex weak value of Hermitian operators without resorting to weak measurement [7].
References:
[1] E. Pollak, Quantum Tunneling - The Longer the Path the Less Time it Takes, J. Phys. Chem. Lett. 8, 352 (2017).
[2] E. Pollak, Transition path time distribution, tunneling times, friction and uncertainty, Phys. Rev. Lett. 118, 070401 (2017).
[3] E. Pollak, Thermal quantum transition path time distributions, time averages and quantum tunneling times, Phys. Rev. A 95, 042108 (2017).
[4] J. Petersen and E. Pollak, Tunneling flight time, chemistry and special relativity, J. Phys. Chem. Lett. 8, 4017-4022 (2017).
[5] E. Pollak and S. Miret-Artés, Time averaging of weak values - consequences for time-energy and coordinate-momentum uncertainty, preprint, submitted to New J. Phys..
[6] E. Pollak and S. Miret-Artés, Uncertainty relations for time averaged weak values, to be published.
[7] E. Cohen and E. Pollak, Determination of weak values of hermitian operators using only strong measurement, preprint, arXiv:1804.11298, to be published.
Weak values (WV) and the two-state-vector formalism (TSVF) [1] provide novel insights into quantum-information processing, quantum thermodynamics, nanoscale quantum systems, complex materials, etc.
In the theoretical part of the talk, we explore a new quantum effect of scattering accompanying an elementary collision of two quantum systems A and B, the latter interacting with a quantum environment. In clear contrast to a classical environment, the quantum case can exhibit new counter-intuitive features, e.g. momentum and/or energy transfer which contradict every conventional theoretical expectation.
As an example, the experimental part of the talk shows experimental evidence of a quantum deficit of momentum transfer and/or enhanced energy transfer (or, equivalently, reduced effective mass) in an elementary neutron-atom collision. The experimental method is incoherent inelastic neutron scattering (INS), available at neutron spallation sources (e.g., SNS, Oak Ridge Nat. Lab, USA). This INS effect was recently observed [2] on single H2 molecules confined and physisorbed in (i.e., weakly interacting with) multi-walled carbon nanotube channels with diameter ~10 Å. The INS results, if interpreted within conventional theory, reveal a strikingly reduced effective mass of the translational motion of the recoiling H2 molecule, i.e. M = 0.64 ± 0.07 amu (atomic mass units). This is in blatant contrast to a completely free recoiling H2, for which the mass must be 2 amu.
In contrast, the finding has a "first principles" qualitative interpretation within the modern theory of WV and TSVF [1, 3]. A qualitative quantum-mechanical interpretation (see [3]) reveals new features of the experimental observation that are in clear contrast to conventional neutron scattering theory. Moreover, analyzed in the WV-theoretical context, the experimental result demonstrates the following: (1) the scattered neutron is a quantum system; (2) the experiment determines (or: measures), for the first time, the overlap of the initial-state wavepacket with that of the final state of the recoiling H2 (in momentum space).
The effect under consideration may have far-reaching consequences also in other fields (e.g. reflectivity, SANS, SAXS), and in relativistic scattering processes.
[1] Y Aharonov, D Rohrlich. Quantum Paradoxes: Quantum Theory for the Perplexed. (Weinheim, Wiley-VCH, 2005)
[2] R J Olsen et al., Carbon 58, 46 (2013)
[3] C A Chatzidimitriou-Dreismann, Quanta 5, 61 (2016)
Where do classical and quantum mechanics part ways? How do classical and quantum randomness fundamentally differ? Here we derive (nonrelativistic) quantum mechanics and classical (statistical) mechanics within a common axiomatic framework. The common axioms include conservation of average energy and conservation of probability current. Two axioms distinguish quantum from classical mechanics: a global, time-dependent random variable, and a constraint on allowed phase space distributions. With strength on the order of Planck’s constant, they imply quantum entanglement and uncertainty relations.
As opposed to Wald's cosmic no-hair theorem in general relativity, it is shown that the Horndeski theory (and its generalization) admits anisotropic inflationary attractors if the Lagrangian depends cubically on the second derivatives of the scalar field. We dub such a solution a self-anisotropizing inflationary universe, because anisotropic inflation can occur without introducing any anisotropic matter fields such as a vector field. As a concrete example of self-anisotropization we present the dynamics of a Bianchi type-I universe in the Horndeski theory.
Texture zero mass matrices will be used to describe the flavour mixing of the leptons and quarks. The mixing angles are calculated in terms of the mass eigenvalues. The neutrino masses can be calculated; they are very small and display a normal hierarchy. An application to double beta decay is discussed.
We review our studies of spectator-induced electromagnetic (EM) effects on charged pion and kaon emission in nucleus-nucleus collisions at CERN SPS and RHIC BES energies. These are found to consist of (1) the breaking of isospin symmetry for spectra of fast pions in peripheral collisions, (2) centrality-dependent distortions in the ratios of emitted $\pi^+/\pi^-$ and $K^+/K^-$ in the final state, (3) charge splitting of pion and kaon directed flow, and (4) an enhancement of negative pion emission at spectator rapidity.
We compare our model simulations to experimental results from STAR, NA49 and NA61/SHINE. As it emerges from our analysis, the observed effects offer sensitivity to the actual space-time evolution of the hot and dense matter created in the course of the collision. A specific picture of the longitudinal evolution of the system emerges. While most of the energy density remains located in the central rapidity region, in peripheral collisions ``streams'' of excited matter travel and emit particles in the vicinity of the two spectator systems.
The observed EM effects also show sensitivity to the space-time scale of spectator breakup, where the spectator can be regarded as an extremely excited nuclear system. We comment on a recent study of spectator dynamical evolution [1] which appears to give very different predictions for spectator excitation energy depending on the dynamical scenario assumed. In this context we argue that EM effects in ultrarelativistic heavy ion collisions can provide a unique, independent experimental input to test nuclear models in extreme conditions. We comment on possible new measurements in the framework of the NA61/SHINE Phase II programme.
[1] K. Mazurek, A. Szczurek, C. Schmitt, and P. N. Nadtochy, arXiv:1708.03716 [nucl-th].
Numerical Monte Carlo study of lattice-discretized QCD is the only non-perturbative tool available for calculating the equation of state of strongly interacting matter from QCD at not-too-high temperatures. Unfortunately, these numerical tools are not directly applicable at finite baryon densities, as required for, e.g., the beam energy scan studies.
Using the method of Taylor expansion in the chemical potential, we estimate the equation of state of strongly interacting matter, namely the baryon number density and its contribution to the pressure, for 2-flavor QCD at not-too-high chemical potential. We also report the isothermal compressibility. We examine the technicalities associated with summing the series. We also study the quark number susceptibilities to gain insight into properties of strongly interacting matter at high temperatures.
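The Taylor expansion method referred to above expands the pressure in powers of the baryon chemical potential around $\mu_B = 0$, where lattice simulations are feasible:

```latex
\frac{P(T,\mu_B)}{T^4}
  = \sum_{n\ \mathrm{even}} \frac{\chi_n^B(T)}{n!} \left(\frac{\mu_B}{T}\right)^n ,
\qquad
\chi_n^B(T)
  = \left.\frac{\partial^n \left(P/T^4\right)}{\partial \left(\mu_B/T\right)^n}\right|_{\mu_B=0},
```

so that, for example, the baryon number density follows as $n_B/T^3 = \partial(P/T^4)/\partial(\mu_B/T)$, with the susceptibilities $\chi_n^B$ computed at zero chemical potential.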
On photon splitting in Lorentz-violating QED
The long-lived η meson, with the quantum numbers of the vacuum, provides a unique, flavor-conserving laboratory to test fundamental symmetries and to search for new physics beyond the Standard Model. A new experiment to measure the η radiative decay width to 3% accuracy via the Primakoff effect (the PrimEx-eta experiment) is currently under preparation at Jefferson Lab (JLab). The anticipated result will offer an accurate determination of the light quark-mass ratio and the η-η´ mixing angle. In addition, a recently approved JLab Eta Factory (JEF) experiment will measure various η/η´ decays, producing the cleanest data in the world for the rare neutral decay modes. JEF will have sufficient precision to explore the role of scalar meson dynamics in chiral perturbation theory for the first time; to search for sub-GeV dark gauge bosons (a leptophobic vector B´ and an electrophobic scalar ϕ´), improving the existing bounds by up to two orders of magnitude in a way that is complementary to the ongoing worldwide efforts on invisible decays or decays involving leptons; and to provide the best direct detection for C-violating, P-conserving new forces. The status of these experiments and their physics impact will be presented.
*Thanks to USA National Science Foundation for the PHY-1506303 grant.
During Physics Runs 1-2 of the Horizon-xT experiment, a sizable number of events was registered that exhibit an unusual spatial and temporal structure of pulses with several maxima (or modes). The separation of the maxima can range from a few tens of ns to several hundred ns. The dataset suggests that the separation between maxima increases with distance from the EAS core, which cannot be reproduced by simulations, and the effect seems to occur only in events with energy above ~10^17 eV. To further investigate this phenomenon, the HT-KZ, an ultra-high-energy cosmic ray detector system, is currently under construction at Nazarbayev University (NU), Kazakhstan. It is designed to study the spatial and temporal structure of Extensive Air Showers with primary energy above ~10^17 eV, with high time resolution of the shower disk profile and timing synchronization between the detection points (both ~1 ns). Detector system construction at NU is conducted in collaboration with the Tien Shan high-altitude Science Station (TSHSS). Based on computer simulations, several prototype designs were created, constructed and tested. An overview of the Horizon-xT detector system and details of the unusual events will be presented, as well as design features and testing data from prototype modules currently in operation at NU.
TBA
Public Talk in English
Opera Gala, OAC amphitheater. Artists: Kalliopi Petrou, soprano; Alessia Tofanin, piano. (Formal dress is suggested.) The program can be found under Materials on the conference home webpage.
We study the structure and energy dependence of the vorticity and hydrodynamic helicity fields in peripheral heavy-ion collisions using the kinetic Quark-Gluon String and Parton-Hadron-String Dynamics models. We observe the formation of specific toroidal structures of the vorticity field (vortex sheets). Their existence is mirrored in a polarization of hyperons at the percent level. Its rapid decrease with energy was predicted and has recently been confirmed by the STAR collaboration. The energy dependence is sensitive to the temperature-dependent term derived and discussed in various theoretical approaches. The antihyperon polarization is of the same sign and larger magnitude. The crucial role of strange vector mesons is also discussed.
In this talk, I will first motivate theoretically some models entailing CPT Violation (CPTV), which might be responsible for the observed matter-antimatter asymmetry in the Cosmos, and may owe its origin to either Lorentz-violating background geometries in the early Universe, or to an ill-defined CPT generator in some quantum gravity models entailing decoherence of quantum matter. For the latter category of CPTV, I argue that entangled states of neutral mesons (Kaons or B-systems) can provide smoking-gun sensitive tests of such phenomena, and describe the relevant phenomenology.
Standard quantum mechanics does not allow for systems existing in quantum superpositions of different times, since time is not an observable but a parameter. For the same reason the standard formalism does not allow for entanglement of states taken in different times. In this context I want to recall an atomic-interferometry experiment which is not widely known, but which may be regarded as the case where a superposition of different times was actually observed. Secondly, I will outline a formalism where one distinguishes between a four-position observable (with x_0=ct included) and a flowing time parameter tau. The dynamics is given by a flowing-time Schroedinger equation and spatial expansion of the Universe is the process that compensates localization of the Universe wave function around a given moment of the flowing time. Superpositions and entanglement of different four-positions are in this formalism possible. If time allows, I will say a few words about a looped quantum dynamics, with loops in space and time.
TBA
MoEDAL is a pioneering LHC experiment designed to search for anomalously ionizing messengers of new physics, such as magnetic monopoles or massive (pseudo-)stable charged particles, that are predicted to exist in a plethora of models beyond the Standard Model. It started data taking at the LHC at a centre-of-mass energy of 13 TeV in 2015. Its ground-breaking physics program defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic charge exist; and what is the nature of dark matter. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. We will present results from the MoEDAL detector on magnetic monopole and highly ionizing electrically charged particle production that are the world's best. In conclusion, progress on the installation of MoEDAL's MAPP (MoEDAL Apparatus for the detection of Penetrating Particles) sub-detector prototype will be briefly discussed.
The Belle II experiment is a substantial upgrade of the Belle detector and will operate at the SuperKEKB energy-asymmetric $e^+ e^-$ collider. The accelerator has already successfully completed the first phase of commissioning in 2016. First electron-positron collisions in Belle II are expected in April 2018. The design luminosity of SuperKEKB is $8 \times 10^{35}$ cm$^{-2}$s$^{-1}$ and the Belle II experiment aims to record 50 ab$^{-1}$ of data, a factor of 50 more than the Belle experiment. This large data set will be accumulated with low backgrounds and high trigger efficiencies in a clean $e^+e^-$ environment. This talk will review the detector upgrade, the achieved detector performance and the plans for the commissioning of Belle II.
We consider quantum steering by non-Gaussian entangled states. The Reid steering criterion based on the Heisenberg uncertainty relation fails to detect steerability for many categories of such states. Here, we derive a tighter steering criterion using the Robertson–Schrödinger uncertainty relation. We show that our steering condition is able to detect steerability of several classes of non-Gaussian states, such as entangled eigenstates of the two-dimensional harmonic oscillator, the photon-subtracted squeezed vacuum state and the NOON state.
Summary:
Based on two assumptions, Locality (the measurement choices for one system do not affect the measurement outcomes of a spatially separated system) and a sufficient condition of Realism (if, without in any way disturbing a system, one can predict with some specified uncertainty the value of a physical quantity, then there exists a stochastic element of physical reality which determines this physical quantity with at most that specific uncertainty [\textit{Phys. Rev. A 80, 032112}]), Reid in 1989 proposed an experimental scenario based on the quadrature phase components of two correlated and spatially separated light fields [\textit{Phys. Rev. A 40, 913}]. She pointed out that in quantum mechanics there exist correlations in composite quantum systems described by entangled states for which, by a suitable choice of measurements on one quantum system, it is possible to infer the outcomes of two non-commuting observables of a spatially separated quantum system, without interfering with the latter system, with precision greater than the Heisenberg uncertainty limit.\
Let two spatially separated parties, say Alice and Bob, share a bipartite quantum system. Alice measures $\hat{X}_{A1}$ and $\hat{X}_{A2}$ on her subsystem and, based on the outcomes $x_{A1}$ and $x_{A2}$, estimates the outcomes $x_{B1}$ and $x_{B2}$ of Bob's measurements $\hat{X}_{B1}$ and $\hat{X}_{B2}$. Alice can thus estimate the inference uncertainties (the standard deviation of her estimate of Bob's outcome from the actual outcome of Bob's measurement) for the two observables. Now, by the Heisenberg uncertainty principle, quantum mechanics limits the precision with which one can determine the values of observables corresponding to non-commuting operators, and by the locality condition Alice's choice of measurements cannot affect Bob's elements of reality (the outcomes of Bob's measurements). Therefore, using the sufficient condition of Realism, the product of the average inference variances of two non-commuting observables is bounded from below by the Heisenberg uncertainty principle. Hence, violation of this Reid inequality can be interpreted as a signature of the correlations embodied in entanglement that is called steering.\
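As a point of reference (restated here for the reader, not part of the original abstract), Reid's variance criterion for conjugate quadratures $\hat{X}_B$ and $\hat{P}_B$ with $[\hat{X}_B,\hat{P}_B]=i$ (units with $\hbar=1$) takes the form

```latex
\Delta_{\mathrm{inf}} X_B \,\Delta_{\mathrm{inf}} P_B \;\ge\; \frac{1}{2},
```

where $\Delta_{\mathrm{inf}} X_B$ is the root-mean-square error of Alice's inference of Bob's outcome; an observed product below $1/2$ certifies steering.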
In recent developments in quantum information theory, continuous-variable non-Gaussian states have found applications in several protocols discussed in Ref. [\textit{Phys. Rev. Lett. 82, 1784 (1999)}]. Extensions of entanglement criteria [\textit{Phys. Rev. A 83, 032118 (2011)}; \textit{Phys. Rev. A 90, 062303 (2014)}] to non-Gaussian states, as well as Bell violations by such states [\textit{Phys. Rev. A 76, 052101 (2007)}], have been studied. It is thus relevant to study steering by non-Gaussian entangled states. The inferred-variance inequality based on the Heisenberg uncertainty relation proposed by Reid fails to reveal the steerability of such states. In Ref. [\textit{Phys. Rev. Lett. 106, 130402 (2011)}], Walborn et al. asked whether such states violate some higher-order steering inequality, in the sense that Reid's criterion is based on second-order moments and thus fails to detect correlations that are of higher than second order in the tested observables. In the present work we ask a somewhat different though related question regarding the steerability of pure non-Gaussian entangled states: is Reid's inequality based on variances tight enough to reveal steerability for various categories of non-Gaussian states?\
Quantum steering is fundamentally linked with quantum uncertainty, and hence other versions of uncertainty relations have also been employed to obtain correspondingly different steering relations, such as the entropic [\textit{Phys. Rev. Lett. 106, 130402 (2011)}] and fine-grained [\textit{Phys. Rev. A 92, 042317 (2015)}] steering inequalities. The steering bound proposed by Reid is based on the Heisenberg uncertainty relation for two conjugate observables. A more general variance-based uncertainty relation was derived by Robertson and Schrödinger for any two Hermitian observables. In order to address the question posed above, in this paper we investigate the Reid criterion in the context of the Robertson–Schrödinger uncertainty relation for the purpose of studying steerability of non-Gaussian states. We find that our steering condition is tighter than the Reid criterion based on the Heisenberg uncertainty relation. We study steering for several examples of non-Gaussian entangled pure states, such as the 2D harmonic oscillator states (LG beams), the photon-subtracted squeezed vacuum state and the NOON state, which do not demonstrate steering through the Reid criterion.\
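For the reader's convenience, the Robertson–Schrödinger relation invoked above reads, for any two Hermitian observables $\hat{A}$ and $\hat{B}$,

```latex
(\Delta A)^2\,(\Delta B)^2 \;\ge\;
\Big|\tfrac{1}{2}\langle\{\hat{A},\hat{B}\}\rangle
 - \langle\hat{A}\rangle\langle\hat{B}\rangle\Big|^2
+ \Big|\tfrac{1}{2i}\langle[\hat{A},\hat{B}]\rangle\Big|^2 .
```

Since the first (covariance) term on the right-hand side is non-negative, a steering condition built on this relation is at least as tight as one built on the commutator term alone.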
The talk is devoted to a discussion of space-time domain finiteness effects for a quantum Unruh–DeWitt detector which operates in this domain. We discuss a special renormalization procedure, which turns out to be different in the finite- and infinite-domain cases. It is demonstrated that, as is typical for renormalization, a new dimensionful parameter appears, having the meaning of the detector's recovery proper time. It plays no role at leading order in perturbation theory but can be important non-perturbatively. We analyze the structure of finite-time corrections to various observables. It is found that in the large-time limit they can, in a sense, be described in a universal way; effects that do not vanish in the adiabatic limit are of special interest. As an application, interpretations of Landauer's principle for the finite-domain case are studied.
Motivated by the property that electrons in external laser beams can change their spin alignment even perpendicularly to the corresponding photon propagation direction [1,2], we investigate the full spin-dependent interaction of the electron spin with the photon spin in Compton scattering. We are able to construct dynamics in which the intrinsic angular momenta of photons and electrons along the photon propagation direction are not conserved for a specific kinematic setup of incoming and outgoing particle momenta [3]. To give a full picture of the process we also present the angle-resolved cross section, Stokes parameters and spin expectation values. Finally, we discuss how the dynamics can be used to establish entanglement between the photon polarization and the electron spin.
[1] S. Ahrens, H. Bauke, C. H. Keitel, C. Müller, Phys. Rev. Lett. 109, 043601 (2012).
[2] S. Ahrens, H. Bauke, C. H. Keitel, C. Müller, Phys. Rev. A 88, 012115 (2013).
[3] S. Ahrens, C.-P. Sun, Phys. Rev. A 96, 063407 (2017).
Quantum cryptography is a process for developing a perfectly secret encryption key that can be used with any classical encryption system. This paper presents a study of the EPR-state protocol [1], the first continuous-variable quantum key distribution protocol. We propose an algorithm for this protocol and subsequently its implementation on an FPGA (Field-Programmable Gate Array). For the implementation, we used Xilinx's ISE System Edition tool as software and Xilinx's Artix-7 Nexys4 DDR board as hardware.
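The core idea of the EPR-state protocol, two parties holding strongly correlated quadrature outcomes and converting them into a shared raw key, can be illustrated with a toy classical simulation. The sketch below is ours, not the paper's algorithm: the correlation model, noise levels and function names are illustrative assumptions only.

```python
# Toy sketch of the correlation step of a CV-QKD protocol (illustrative only):
# Bob's homodyne outcome is modeled as Alice's outcome plus small noise, and
# the sign of each sample becomes one raw key bit.
import random

def simulate_epr_key(n_samples, signal_sd=10.0, noise_sd=0.5, seed=1):
    """Return the fraction of samples on which Alice's and Bob's key bits agree."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n_samples):
        x_alice = rng.gauss(0.0, signal_sd)         # Alice's quadrature outcome
        x_bob = x_alice + rng.gauss(0.0, noise_sd)  # Bob's correlated outcome
        if (x_alice >= 0) == (x_bob >= 0):          # sign -> raw key bit
            agree += 1
    return agree / n_samples

if __name__ == "__main__":
    # Strong correlations give a raw key with few mismatches to reconcile.
    print(simulate_epr_key(10_000))
```

In the real protocol the correlations come from a two-mode squeezed (EPR) state and the raw key is followed by reconciliation and privacy amplification; the simulation only shows why stronger correlations mean fewer errors to correct.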
Plasma media are quantum mechanical even at high temperature and low density, since the Heisenberg uncertainty principle is necessary to keep the electrons from collapsing into the ions; they therefore cannot be described in the framework of classical mechanics. Nevertheless, classical molecular dynamics and Monte Carlo simulations have become the traditional way of calculating thermodynamic and transport properties of non-ideal plasmas. The main difficulty of these methods remains the treatment of bound states and the 'sign' problem of electrons in dense plasma. As this is a quantum-mechanical or quantum-statistical problem, more complex simulation approaches should be used [1]. In this work we combine the Wigner and Feynman formulations of quantum mechanics with path-integral Monte Carlo and molecular dynamics methods to construct a quantum dynamics method in phase space. An explicit analytical expression for the Wigner function in path-integral form has been derived, accounting for Fermi statistical effects by an effective pair pseudopotential in phase space that allows one to avoid the 'sign' problem of fermions. The pseudopotential depends on the coordinates, momenta and degeneracy parameter of the fermions and takes into account Pauli blocking of fermions in phase space. A new quantum path-integral Monte Carlo method has been developed to calculate average values of arbitrary quantum operators in phase space. To test the developed approach, calculations of the momentum distribution functions and the pair correlation functions of the degenerate ideal Fermi gas have been carried out over a wide range of momentum and degeneracy parameter, in good agreement with the analytical expressions. Comparison of the obtained momentum distribution functions of strongly correlated Coulomb systems with the Maxwell–Boltzmann and Fermi distributions shows the significant influence of the strong interparticle interaction both at small momenta and in the high-energy quantum 'tails'.
We have also developed a new method for the calculation of the kinetic coefficients of dense plasma. To calculate kinetic properties we use quantum dynamics in the Wigner representation of quantum mechanics. The Wigner–Liouville equation is solved by a combination of molecular dynamics and path-integral methods. The initial conditions of the Wigner–Liouville equation are sampled by the path-integral Monte Carlo method, which also allows the calculation of thermodynamic quantities such as the internal energy, pressure and pair distribution functions over a wide range of density and temperature. To study the influence of the interparticle interaction on the dynamic properties of dense plasmas we apply quantum dynamics in the canonical ensemble at finite temperature and compute temporal momentum–momentum correlation functions and the electrical conductivity according to the quantum Kubo formulas. Our numerical results agree well with available theoretical and experimental results. This work has been supported by the Russian Science Foundation via grant 14-50-00124.
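For orientation, the phase-space description used above is built on the standard Wigner transform of the density matrix (textbook form, restated here; the path-integral representation derived in this work generalizes it):

```latex
W(p,q) \;=\; \frac{1}{2\pi\hbar}\int d\xi\; e^{-ip\xi/\hbar}\,
\rho\!\left(q+\tfrac{\xi}{2},\; q-\tfrac{\xi}{2}\right),
```

so that averages of quantum operators become phase-space integrals of their Weyl symbols weighted by $W(p,q)$.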
Electron microscopy has revolutionized our understanding of biomolecules, cells, and biomaterials, by enabling their analysis through imaging with (near-)atomic-scale resolution. However, the high-energy electrons typically used for electron microscopy are known to cause damage to biological specimens. Specimen damage is related to the fact that image information is shot-noise limited, meaning that a minimum number of electrons is required to form an image with a specified signal-to-noise ratio. Recent advances in quantum metrology might allow us to overcome these resolution limits [1]. For example, by passing an electron many times through a sample, the Heisenberg limit is reached, where the accuracy of a measurement scales as $1/N_e$, a $\sqrt{N_e}$ improvement over the shot-noise limit. We are following up on the interaction-free measurement (IFM) scheme of Elitzur and Vaidman [2], based on a Mach-Zehnder interferometer. This technique was conceptually extended to success probabilities arbitrarily close to one using an approach analogous to a discrete form of the quantum Zeno effect. We have proposed designs for electron microscopes containing electron wave splitters and mirrors to implement this measurement scheme [3] and are experimenting with subsystems for such a microscope.
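The two scaling laws quoted above can be made concrete with a short numerical illustration. The sketch below is ours, not from the talk; the function names are illustrative.

```python
# Shot-noise-limited precision scales as 1/sqrt(N_e) with the number of
# electrons N_e, while the Heisenberg limit scales as 1/N_e, giving a
# sqrt(N_e) advantage for multi-pass schemes.
import math

def shot_noise_precision(n_electrons):
    return 1.0 / math.sqrt(n_electrons)

def heisenberg_precision(n_electrons):
    return 1.0 / n_electrons

def improvement(n_electrons):
    """Factor gained over the shot-noise limit; equals sqrt(N_e)."""
    return shot_noise_precision(n_electrons) / heisenberg_precision(n_electrons)

if __name__ == "__main__":
    for n in (100, 10_000):
        print(n, improvement(n))
```

The practical point for dose-limited biological imaging is the converse reading: for a fixed target precision, a Heisenberg-limited scheme needs far fewer damaging electron-sample interactions.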
A slice of biological material is by no means a perfect absorber of electrons. The typical effect on a transmitted electron is a local change of the phase of the electron wave and a loss of kinetic energy. Since the latter is effectively a detection of the passing electron, it collapses the wave function and thus occurs with a certain probability, usually much smaller than one. This complicates the relation between the information gained and the damage done [4]. We are trying to derive some rules for this but have not yet succeeded.
Acknowledgement
This research is funded by the Gordon and Betty Moore Foundation.
References
[1] Giovannetti, V.; Lloyd, S.; Maccone, L. Science 2004, 306, 1330–1336.
[2] Elitzur, A. C.; Vaidman, L. Found. Phys. 1993, 23, 987–997.
[3] Kruit, P. et al. Ultramicroscopy 2016, 164, 31–45.
[4] Thomas, S.; Kohstall, C.; Kruit, P.; Kasevich, M.; Hommelhoff, P. Phys. Rev. A 2014, 90, 053840.
We will briefly review possibilities of experiments exploring quantum aspects of the electron-field interaction using structured electron beams.
Among these we can mention the vertical version of the Aharonov–Bohm effect and the light-electron coupling through Landau states.
Moreover, we will illustrate the use of the orbital angular momentum (OAM) sorter and electrostatic elements in order to control single OAM quantum states.
TBA
A customary relativistic quantum scattering theory implies that all the particles in a reaction have definite momenta, that is, they are described by delocalized plane waves. When well-normalized wave packets are used instead (say, of Gaussian form), the scattering cross sections acquire corrections of the order of $\lambda_c^2/\sigma_{\perp}^2 \ll 1$, where $\lambda_c$ is the Compton wavelength of a particle with mass $m$ and $\sigma_{\perp}$ is the beam width. For modern electron accelerators these corrections do not exceed $10^{-16}$, whereas for the well-focused beams of electron microscopes they can reach $10^{-6}$. Here we show that these non-paraxial effects are enhanced when one collides non-Gaussian packets instead: vortex beams with high orbital angular momentum $\ell \gg \hbar$, quantum superpositions like the so-called Schrödinger cats, etc. Moderately relativistic vortex electrons with $\ell$ up to $10^3$ have been recently generated; they have a large mean transverse momentum, which grows as $\sqrt{\ell}$, and, as a result, the non-paraxial effects in scattering are $\ell$ times enhanced for these beams and can reach $10^{-3}$. We calculate the non-paraxial corrections to the plane-wave cross section in a model-independent way, give examples from QED and QCD, study the contribution of the phase of the scattering amplitude, compare different models of the in-states, and show that for well-focused beams these effects can compete with the two-loop contributions to basic QED processes like $e^-e^- \rightarrow e^-e^-$, $e^-\gamma \rightarrow e^-\gamma$, etc.
The IceCube Neutrino Observatory, located at the geographic South Pole, is the world’s largest neutrino telescope. It instruments one cubic kilometer of ice with more than 5000 optical sensors and is designed to detect the light emitted by particles produced in neutrino-nucleon interactions in the ice.
Magnetic monopoles are hypothetical particles with non-zero magnetic charge, and are predicted to exist in many extensions and unifications of the Standard Model of particle physics. A wide range of masses is allowed for magnetic monopoles, leading to a broad speed range for a hypothetical cosmic flux. A magnetic monopole passing through IceCube would produce light through several different physical processes, where the speed of the monopole determines which one dominates. This light can then be readily detected by IceCube’s optical modules.
In this talk I will give an overview of the methods and results of current and recently finished searches for a cosmic flux of magnetic monopoles in IceCube. A focus will be put on the most recent search for magnetic monopoles with speeds above the Cherenkov threshold in ice, where the dominant background consists of the rare and extremely high energy astrophysical neutrinos. A new sensitivity on the cosmic flux of magnetic monopoles will be presented, improving on the previous upper limits by approximately an order of magnitude.
MoEDAL is a pioneering LHC experiment designed to search for anomalously ionizing messengers of new physics. It started data taking at the LHC at a centre-of-mass energy of 13 TeV in 2015. Its ground-breaking physics program defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic charge exist; and what is the nature of dark matter. After a brief introduction I will report on MoEDAL's progress to date, including our past, current and expected future physics output. I will also discuss two new sub-detectors for MoEDAL: MAPP (Monopole Apparatus for Penetrating Particles), now being prototyped at IP8, and MALL (Monopole Apparatus for very Long Lived particles), currently in the planning stage. I will conclude with a brief description of our program for LHC Run 3.
The Large Hadron Collider is reaching energies never achieved before, allowing the search for exotic particles in the TeV mass range. In a continuing effort to find monopoles, we discuss new signatures to detect them. These signatures include multiphoton events, monopole charge scattering and charge-magnetic dipole scattering.
We present a study of searches for massive long-lived particles with the MoEDAL detector. MoEDAL is sensitive to highly ionizing avatars of new physics, such as magnetic monopoles or massive (meta-)stable charged particles, and we focus on the latter in this talk. In the ATLAS and CMS analyses for long-lived particles, certain conditions are usually required for triggering or for reducing the cosmic-ray background, whereas such conditions are not necessary at MoEDAL, due to its extremely low background.
On the other hand, MoEDAL requires the particles to have low velocities (e.g., beta < 0.2 for particles with unit charge), which results in small signal cross-sections. Using Monte Carlo simulations, we compare the MoEDAL and ATLAS/CMS sensitivities for various long-lived particles in supersymmetric models, and seek a scenario where MoEDAL is complementary to ATLAS and CMS.
This contribution is based on an upcoming article.
TBA
The laws of thermodynamics classify energy changes for macroscopic systems as work performed by an external driving and heat exchanged with the environment. For quantum systems in contact with an external environment, the very identification of heat and work is a challenge, since work cannot be directly accessed by measurement. Quantum systems continuously monitored by a detector have recently provided a formidable platform to explore energy exchanges with the environment at a quantum level, both theoretically and experimentally.
Here we introduce thermodynamic quantities, heat and work, along single quantum trajectories of continuously monitored systems, based on the identification of the deterministic unitary part (work) and the stochastic non-unitary part (heat) of the evolution. We analyze the consistency of the introduced quantities by showing that they fulfill the second law of thermodynamics in the form of a generalized Jarzynski equality in the presence of tailored quantum feedback [1]. We present experimental data reporting the detection of the proposed heat and work in superconducting-based setups [2]. Finally, we show that the system-detector information exchange displays non-classical features due uniquely to quantum measurement back-action, which are detected in experiments [3].
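For context, a generalized Jarzynski equality of the kind referred to above can be written in the Sagawa–Ueda form for feedback-controlled processes (the trajectory-level version used in [1] may differ in detail):

```latex
\left\langle e^{-\beta\,(W-\Delta F)\,-\,I} \right\rangle \;=\; 1,
```

where $I$ is the mutual information gained by the measurement; without feedback ($I=0$) it reduces to the ordinary Jarzynski equality $\langle e^{-\beta(W-\Delta F)}\rangle = 1$.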
[1] Jose J. Alonso, E. Lutz, and A. Romito. Thermodynamics of Weakly Measured Quantum Systems. Phys. Rev. Lett. 116 080403 (2016).
[2] M. Naghiloo, D. Tan, P. Harrington, J. Alonso, E. Lutz, A. Romito, and K. Murch. Thermodynamics along individual trajectories of a quantum bit. arXiv preprint arXiv:1703.05885 (2017).
[3] M. Naghiloo, J. Alonso, A. Romito, E. Lutz, and K. Murch. Information gain and loss for a quantum Maxwell’s demon. arXiv preprint arXiv:1802.07205 (2018).
Fractional quantum Hall states are predicted to host exotic anyonic excitations, which offer the exciting prospect of topologically-protected quantum computation. Mach-Zehnder interferometry has been suggested as a probe for the anyonic statistics. However, all experimental attempts to measure such an interference signal have failed to date, despite the high visibility of interference fringes in the integer quantum Hall case. In our work we have studied the relation between this null result and another recent surprising experimental finding, namely the detection of upstream neutral modes in virtually all fractional quantum Hall states (including, e.g., filling 1/3), not only in hole-like filling factors (such as 2/3). We have shown that the excitation of upstream modes acts as a ``which path’’ detector, which degrades the interference visibility in the Mach-Zehnder geometry exponentially with the total length of the interferometer arms, even when the lengths are exactly equal. We have also found how this which-path detection can lead to doubling of the Aharonov-Bohm periodicity in a Fabry-Perot geometry. This latter phenomenon can be used to experimentally quantify the effects of the neutral modes, and thus design better Mach-Zehnder setups which could overcome the neutral-induced suppression.
Solid state breakdown counters (SSBC) combine threshold properties of nuclear track detectors (NTDs) with the convenience of electronic event registration in real time. Their simple low-cost design, combined with high dE/dx thresholds, make them an attractive candidate for experiments searching for magnetic monopoles and other highly ionizing exotic particles. In one use case, the traditional NTDs could be “sandwiched” between two thin layers of segmented SSBCs. The SSBC would then provide an immediate feedback about the potential passage of a magnetic monopole, pinpointing the position of the track. That would drastically reduce the time needed to scan and analyze the NTD images. In this talk we describe the recently proposed variant of the SSBC that offers additional advantages. The prototype SSBCs are now being fabricated at the University of Alabama. They are anticipated to be deployed as a part of the MoEDAL experiment at the Large Hadron Collider. Furthermore, the SSBC offers a unique opportunity to search for magnetic monopoles that can’t reach the Earth due to the geomagnetic cutoff. The talk also describes the joint effort of the Aerospace Engineering and Physics departments of the University of Alabama to deploy the SSBCs in the geosynchronous orbit using CubeSat satellites.
We identify a phase-space minimum h in space plasmas that connects the energy of correlated plasma particles to an equivalent wave frequency. In particular, while there is no a priori reason to expect a single value of h across plasmas, we find a very similar value of h ≈ (7.5±0.3)×10^-22 J·s using several independent analytical and statistical methods: 1) solar wind plasma measurements; 2) various space plasmas typically residing in stationary states out of thermal equilibrium and spanning a broad range of physical properties; 3) the entropic limit emerging from statistical mechanics; 4) waiting-time distributions of explosive events in space plasmas. Finding a quasi-constant value for the phase-space minimum in a variety of different plasmas, similar to the classical Planck constant but 12 orders of magnitude larger, may be revealing a new type of quantization in many plasmas and correlated systems more generally.
The existence of dark matter, or at least of some interaction capable of mimicking its effects, is by now established beyond reasonable doubt. There is a huge spectrum of possibilities when it comes to answering the question of the nature of dark matter. One interesting possibility is that it is composed of very light scalar or pseudo-scalar particles, sometimes dubbed Weakly Interacting Slim Particles (WISPs). WISP dark matter can have masses ranging from $10^{-22}$ eV to a few eV, and models motivated by the strong-CP problem suggest that its coupling to ordinary baryonic matter is very weak. This implies that colliders are not necessarily the ideal experiments in which to search for these light particles. Here we report on the progress of an ongoing experiment aimed at searching for new interactions of nature (so-called “fifth forces”) mediated by WISPs. Our setup consists of a quantum optomechanical oscillator, a silicon nitride membrane, cooled near its ground state and placed within micrometric distance of an optically levitated silica microsphere in a high-finesse optical cavity. Motion of the membrane-microsphere system is monitored via a Pound-Drever-Hall scheme, and force and displacement sensitivities of up to $10^{-19}$ N/√Hz (microsphere) and $10^{-15}$ m/√Hz (membrane) are obtained. As we will show, the membrane-microsphere geometry presents several advantages in the search for sub-micron-range fifth forces, and our setup is capable of placing interesting constraints on WISP dark matter candidates. Furthermore, we will discuss opportunities in the emerging field of “table-top” fundamental physics experiments, lying at the cross-border of high-energy physics, quantum information and metrology.
The Advanced proton-driven plasma Wakefield Acceleration Experiment (AWAKE) [1,2] is a proof-of-principle experiment studying the generation of wakefields by proton bunches as well as the acceleration of electrons in these wakefields. Proton-driven plasma wakefield acceleration (PWFA) offers the potential of accelerating electrons to energies relevant for high-energy physics with gradients up to GV/m, thereby promising to decrease the length and cost of future colliders. AWAKE uses a ~12 cm-long, 400 GeV/c proton bunch provided by the CERN Super Proton Synchrotron. The sharp ionization front created by a 4.5 TW, 120 fs laser pulse co-propagating with the proton bunch creates a 10 m column of rubidium plasma with a density in the (1-10)×10^14 cm^-3 range. This ionization front provides the seed for the seeded self-modulation (SSM) [3], which radially modulates the proton bunch into a series of microbunches with a 1-3 mm periodicity. A probe electron beam is injected into the wakefields driven by the microbunches in order to be accelerated.
The very successful experimental program started with a study of proton self-modulation in 2017 and continued with this year's first attempt at electron acceleration. We will introduce the accelerator concept, present an overview of the experimental setup, describe the results of the ongoing experiments and give an outlook for further development of the acceleration method.
[1] A. Caldwell et al., Nucl. Instr. and Meth. in Phys. Res. A 829 3 (2016).
[2] E. Gschwendtner et al., Nucl. Instr. and Meth. in Phys. Res. A 829 76 (2016).
[3] P. Muggli et al., Plasma Physics and Controlled Fusion 60(1) 014046 (2017).
The search for the Lepton Flavor Violating decay $\mu \to e \gamma$ exploits the most intense continuous muon beams, which can currently deliver $\sim 10^8$ muons per second. In the next decade, accelerator upgrades are expected in various facilities, making it feasible to have continuous beams with an intensity of $10^9$ or even $10^{10}$ muons per second. We investigate the experimental limiting factors that will define the ultimate performance, and hence the sensitivity, in the search for $\mu \to e \gamma$ with a continuous beam at these extremely high rates. We then consider some conceptual detector designs and evaluate the corresponding sensitivity as a function of the beam intensity.
The accuracy of clocks has continuously improved, reaching $10^{-19}$ nowadays [1,2], allowing one to probe time dilation in gravitational fields down to the millimeter scale vertically at the Earth's surface. We are investigating the effect of the inhomogeneous evolution of background time on spatially coherent quantum objects. In our study we consider a timed Dicke-state excitation of nuclear isomer transitions of atoms in a crystal [3]. We find that photons from superradiant re-emission of the Dicke-state excitation are deflected by gravity. A similar effect, obtained by replacing the gravitational acceleration with a centrifugal acceleration, can enhance the deflection of the photons. Finally, we discuss feasibility questions.
[1] G. E. Marti et al., Phys. Rev. Lett. 120, 103201 (2018).
[2] S. L. Campbell et al., Science 358, 90 (2017).
[3] W.-T. Liao, S. Ahrens, Nature Photonics 9, 169 (2015).
Although many properties of vortex particles with orbital angular momentum (OAM) $\ell$ can be described within a model of a non-normalized Bessel beam, it does not allow one to go beyond the paraxial approximation, which is crucial for a proper study of spin-orbit effects and for scattering problems in atomic physics, nuclear and high-energy physics, especially when quantum interference and coherence play an important role. Accurate estimates of the non-paraxial effects require that the vortex wave packets be 3D-localized, described in a Lorentz-invariant way, and applicable beyond the paraxial regime. Despite the recent interest in relativistic electron vortices, such a model is still lacking. Here we develop a model of a packet that is Gaussian in momentum space, localized both in 3D space and in time, characterized by a mean 4-momentum, by a momentum uncertainty $\sigma \sim 1/\sigma_{\perp}$ which is a Lorentz scalar and vanishes, $\sigma \ll m$, in the paraxial regime, and by the OAM. We argue that this wave packet is a more adequate model for relativistic vortex electrons and calculate the non-paraxial corrections to observables like the energy, magnetic moment, etc. We find that, compared to the ordinary Gaussian beam, for which they are $\sim \lambda_c^2/\sigma_{\perp}^2 \ll 1$ ($\lambda_c$ is the Compton wavelength and $\sigma_{\perp}$ is the beam width), these corrections are $\ell$ times enhanced and can reach $10^{-3}$ for already available beams with $\ell > 10^3$ and $\sigma_{\perp} < 1$ nm. We discuss possible means of detecting these effects.
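The quoted order of magnitude can be checked with a back-of-the-envelope estimate. The snippet below is our own illustration (not the authors' calculation): it treats the leading non-paraxial correction simply as $\ell\,\lambda_c^2/\sigma_\perp^2$ with the electron Compton wavelength.

```python
# Order-of-magnitude estimate of the OAM-enhanced non-paraxial correction
# to the plane-wave cross section, assuming it scales as ell * (lambda_C / sigma_perp)^2.
LAMBDA_C = 2.43e-12  # electron Compton wavelength, metres

def nonparaxial_correction(sigma_perp, ell=1):
    """Rough relative size of the correction for beam width sigma_perp (m)."""
    return ell * (LAMBDA_C / sigma_perp) ** 2

if __name__ == "__main__":
    # A focused beam with sigma_perp ~ 1 nm and ell ~ 10^3 gives a
    # correction of order 10^-3, consistent with the estimate in the text.
    print(nonparaxial_correction(1e-9, ell=1000))
```

For an ordinary Gaussian beam (ell = 1) at the same focus the correction is of order 10^-6, which matches the microscope-beam figure quoted in the companion abstract above.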
We consider the problem of correctly identifying a malfunctioning quantum device that forms part of a network of N such devices. We first study the case of sources assumed to prepare identical quantum pure states, with the faulty source producing a different anomalous pure state. We show that the optimal probability of successful identification requires a global quantum measurement, and investigate several local measurement strategies whose performance is only slightly worse. We also address the case of quantum channels, where the malfunctioning channel is assumed to perform a known unitary, and show that in this case the use of entangled probes provides an improvement that even allows perfect identification for values of the unitary parameter that surpass a certain threshold. However, this advantage disappears for very large networks, where product-state probes yield the same performance. Finally, we find that for rank-one and rank-two Pauli channels, optimal identification can be achieved by product-state inputs and separable measurements for any size of network; for rank-three and general amplitude-damping channels, optimal identification requires entanglement with N ancillas. However, whereas for rank-three Pauli channels entanglement is advantageous for any size of network, for a general amplitude-damping channel the advantage of entanglement with ancillas disappears as the size of the network grows.
The project of the low-energy $e^+e^-$ collider ($\mu\mu$tron), operating near the muon-pair production threshold, is being developed at BINP (Novosibirsk). Construction of the collider is planned to start in 2019. The $\mu\mu$tron parameters and configuration (a luminosity of $8\times 10^{31}$ cm$^{-2}$s$^{-1}$, a center-of-mass energy spread of 400 keV, and beam collisions at a large crossing angle) allow experiments on the study of dimuonium properties to be performed. Dimuonium is the $\mu^+\mu^-$ bound state, which has not yet been observed. At the $\mu\mu$tron it will be possible to detect about 40 thousand dimuonium atoms per year ($10^7$ s). In this report we describe the physics program of the $\mu\mu$tron.
SHiP is a new general-purpose fixed-target facility, whose Technical Proposal has been reviewed by the CERN SPS Committee and by the CERN Research Board. The two boards recommended that the experiment proceed further to a Comprehensive Design phase in the context of the new CERN working group "Physics Beyond Colliders", aiming at presenting a CERN strategy for the European Strategy meeting of 2019. In its initial phase, the 400 GeV proton beam extracted from the SPS will be dumped on a heavy target with the aim of integrating 2×10^20 pot in 5 years. A dedicated detector, based on a long vacuum tank followed by a spectrometer and particle identification detectors, will allow probing a variety of models with light long-lived exotic particles and masses below O(10) GeV/c². The main focus will be the physics of the so-called Hidden Portals, i.e. searches for Dark Photons, Light Scalars and Pseudo-scalars, and Heavy Neutrinos. The sensitivity to Heavy Neutrinos will allow probing, for the first time, in the mass range between the kaon and the charm-meson mass, a coupling range for which Baryogenesis and active neutrino masses could also be explained. Another dedicated detector will allow the study of neutrino cross sections and angular distributions. ν_τ deep inelastic scattering cross sections will be measured with statistics 1000 times larger than currently available, allowing the extraction of the F4 and F5 structure functions, never measured so far, and new tests of lepton non-universality with sensitivity to BSM physics.
In the macroscopic limit, quantum mechanics reproduces the deterministic laws of motion associated with classical physics. Nevertheless, it is impossible to reconcile the uncertainty-limited statistics of quantum dynamics with the classical notion of causality as expressed by universal laws of motion. Here, I explain how the complex phases of Hilbert space describe a causality dominated by the action of transformations that describe the various effects of external forces associated with quantum measurement and quantum state preparation. It is shown that quantum randomness emerges as a result of universal causality relations which include the relation between objects and the macroscopic means of control. Importantly, this kind of randomness cannot be explained in terms of incomplete information about hypothetical elements of reality residing in the quantum object, because any meaningful definition of such realities would require a specific causal connection with the means of observation and control. However, quantum mechanics already provides a complete description of all possible causal connections based on an exchange of quantum coherence in interactions that tend to entangle the object with its observable effects in the outside world. As a result, there is no good motivation for the redundant introduction of elements of reality independent of the laws of causality that provide the only scientifically valid method by which a physical object can be known.
TBA
Linear physics is "easy" to solve. Whether in a purely quantum mechanical system, or when considering quantum fields, quadratic (i.e., linear) Hamiltonians give rise to well known and established phenomena. Exotic and trademark phenomena in quantum field theory in curved spacetime, such as the Hawking effect and the Unruh effect, are all consequences of linear physics.
Most physics is, in general, nonlinear. Nonlinearities appear in the most diverse areas of physics and have, so far, been treated with ad hoc (and often numerical) methods.
Here we present a novel approach to study nonlinearities with analytical tools. We have already successfully applied these tools to solve problems of interest in opto-mechanical systems, which are now being explored as potential probes for ultra-precise measurements of gravitational fields and curvature.
Our results can be extended to Hamiltonians of arbitrary form, therefore potentially offering a tool to explore nonlinear dynamics of any kind.
Applications and outlook are also discussed.
We propose a theoretical scheme in which a regularized scalar field theory emerges naturally from entangling an otherwise standard scalar field with an ancillary, non-dynamic field. Using suitable initial and final Gaussian states of the two-field system, it is possible to retain the Feynman-diagrammatic expansion of the standard theory with a modified "weak" propagator having a regularized Hadamard function, the ultraviolet behavior of which can be freely tuned through the choice of states. In particular, we discuss a sequence of states whose limiting Hadamard function vanishes, leading to the so-called Wheeler propagator—a quantum realization of the half-advanced/half-retarded potential of Wheeler-Feynman absorber theory. The Wheeler propagator example provides a simple illustration of how finite self-energies in $\phi^3$ and $\phi^4$ theories arise from this scheme.
Participants of the round table, 11 July 2018, main auditorium, 18:30-20:00 (in alphabetical order):
Convenor: Albert de Roeck, CERN, Switzerland.
A short review of the state of the art of searches for magnetic monopoles is presented. Theoretical scenarios predicting them and the motivation behind postulating the existence of monopoles are briefly highlighted. Various techniques for direct and indirect searches for monopoles are reviewed, while emphasis is given to the results of the searches and the exclusion limits set in cosmic-ray and collider experiments. Present and future experiments for magnetic monopole detection, such as ATLAS and MoEDAL at the LHC, are also discussed.
Partially based on https://arxiv.org/abs/1806.03607, as well as some on-going theoretical and experimental collaborations addressing the foundations and applications of quantum entanglement.
TBA
Current neutrino oscillation data tell us that behind the observed neutrino mixing pattern there should be at least an approximate mu-tau flavor symmetry. This suggests that there may exist an exact mu-tau flavor symmetry in the neutrino sector at a superhigh energy scale (e.g., the scale at which the seesaw mechanism works), which is then spontaneously broken by the renormalization-group evolution down to the electroweak scale. We shall report our latest studies in this respect and establish the connection between mu-tau symmetry breaking and the octant of theta(23) and the quadrant of delta in the standard parametrization of the PMNS matrix. We find that current experimental data can be interpreted in this way. Some model-building issues will also be addressed.
The T2K long-baseline neutrino oscillation experiment measures muon-neutrino disappearance and electron-neutrino appearance in accelerator-produced neutrino and antineutrino beams. We report on the analysis of our data from an exposure of 2.6×10^21 protons on target. Results for oscillation parameters, including the CP-violation parameter and the neutrino mass ordering, are shown.
Super-Kamiokande (SK), a 50 kton water Cherenkov detector in Japan, is observing both atmospheric and solar neutrinos and is searching for supernova (relic) neutrinos, proton decay, and dark-matter-like particles. The installation of new front-end electronics in 2008 marks the beginning of the 4th phase of SK (SK-IV).
A three-flavor oscillation analysis was conducted with the atmospheric neutrino data in order to study the mass hierarchy, the leptonic CP-violation term, and other oscillation parameters. In addition, the observation of solar neutrinos gives precise measurements of the energy spectrum and oscillation parameters. With more than 20 years of data, SK covers more than 1.5 solar activity cycles, which enables us to analyze a possible correlation between the solar neutrino flux and the 11-year activity cycle.
In this presentation, we overview the recent results from SK and give the prospects for the future SK-Gd project.
The 760 ton ICARUS T600 detector performed a successful three-year physics run at the underground LNGS laboratories, searching for atmospheric neutrino interactions and performing, with the CNGS neutrino beam from CERN, a sensitive search for LSND-like anomalous ν_e appearance, which contributed to constraining the allowed parameters to a narrow region around Δm² ~ eV², where all the experimental results can be coherently accommodated at 90% C.L. The T600 detector underwent a significant overhaul at CERN and has now been moved to Fermilab, where it will soon be exposed to the Booster Neutrino Beam to search for sterile neutrinos within the SBN program, devoted to definitively clarifying the open questions raised by the presently observed neutrino anomalies.
This contribution will address ICARUS achievements, its status and plans for the new run, and the ongoing analyses, also aimed at the next physics run at Fermilab.
The Telescope Array (TA) is the largest experiment in the northern hemisphere studying the origin of extremely-high-energy cosmic rays, one of the unsolved puzzles of nature. TA is a hybrid detector system consisting of a surface detector (SD) array and atmospheric fluorescence detectors (FDs). 507 SDs are arranged on a 1.2 km grid over an area of about 700 km$^2$. Three FD stations enclose this SD array, and each FD station views 108$^\circ$ in azimuth and 30$^\circ$ in elevation. TA has been operating for ten years. We summarize our recent results on the spectrum, anisotropy, and composition. Finally, we also introduce the TA low-energy extension (TALE) experiment and the TAx4 experiment as its higher-energy extension.
The subtle interplay between infrared singularities in quantum electrodynamics and perturbative quantum gravity and information-theoretic issues such as quantum entanglement between soft and hard degrees of freedom will be discussed. It will be argued that the inevitable loss of soft photons and gravitons in a scattering experiment leads to decoherence of the outgoing state, that this decoherence persists even when the incoming states are infrared-safe coherent states, and that mathematical issues in scattering theory with normalizable incoming wave packets distinguish between competing methods and require infrared-safe in-states.
We propose a model for a self-gravitating electromagnetic monopole in a string-inspired model in the presence of Kalb-Ramond torsion and a dilaton. The model includes a regularisation of the core of the monopole. We give arguments for the existence of a thin-shell structure inside the core and a bag-like structure of the monopole. The regularisation of the inner core involves a de Sitter metric and allows a determination of the thin-shell structure of the monopole. The monopole mass and charge are proportional to the torsion strength.
The existence of magnetic charges remains one of the great questions in high energy physics, and their search has gained momentum as recent models predict these may be observable at current colliders. They appear in field theories in two forms: the widely studied but heavily suppressed monopole with structure (soliton), and the less well covered point-like monopole. The latter was first proposed by Dirac as the source of a singular magnetic field, and in effect symmetrises Maxwell's equations. Following this line of research, this study analyses these sources as matter fields in an effective field theory, which carry spins 0, $\frac{1}{2}$ or 1. All three cases are currently under investigation by the MoEDAL collaboration at CERN, and the theoretical expressions for kinematic distributions proposed in this work serve as guides to these searches.
The cross section distributions in each case are derived from a U(1) invariant gauge theory. It is not assumed that, like the electron, the monopole's magnetic moment is generated through spin interactions at minimal coupling, as it may be quite large. Instead, the analytical expressions in the spin $\frac{1}{2}$ and $1$ cases are kept completely general through the inclusion of a phenomenological parameter $\kappa$, related to the gyromagnetic ratio $g_R=1+\kappa$. In fact, the inclusion of this parameter gives the effective theory a sense of validity in the high energy limit if the magnetic coupling scales with the particle's velocity $\beta=\frac{v}{c}$.
Highly-ionizing particles are predicted by several scenarios of Beyond the Standard Model physics. On the one hand, they can be massive long-lived charged particles, characterized by ionization much higher than that of any Standard Model particle with unit charge, due to their velocity being significantly below the speed of light. On the other hand, high ionization can come from multiple charge, predicted by models of multiply-charged particles, or from magnetic monopoles, which are characterized by a very high equivalent electric charge and consequently high ionization. An overview of recent ATLAS and CMS searches for highly-ionizing particles will be presented.
The full paper is available on arXiv: https://arxiv.org/abs/1710.07212
A physical theory comprises a mathematical formalism, which allows for predicting the outcomes of scientific experiments, and some ontological interpretation. In the case of quantum theory, the predictions are probabilistic and often conflict with proposed descriptions of the experiment in terms of classical information.
The emergence of apparently classical information during a measurement, i.e., a definite result, poses a conceptual problem for quantum theory. In what is called standard quantum mechanics, the measurement-update rule, commonly associated with a collapse, is a break with the otherwise unitary evolution governed by the Schrödinger equation. The formalism, however, provides no indication of when to apply this rule: It does not state what qualifies some interactions as measurements but not others (the measurement problem).
Everett's relative-state formalism seems to avoid the problem by postulating universal, unitary quantum theory. This, however, detaches the formalism from predicting outcomes of experiments, for which some sort of Born rule – i.e., a means to calculate probabilities of measurement results – is needed. This is not specified by the original relative-state formalism at all, but the use of the Born rule has been motivated by a many-worlds interpretation and decision-theoretic arguments.
We want to stress that universal, unitary quantum theory is a new type of formalism, fundamentally different from the measurement-update rule of standard quantum mechanics. It is not a new interpretation; the many-worlds interpretation is the best-known interpretation of the relative-state formalism. One can regard a generalised version of Bohmian mechanics as a different interpretation of that formalism.
We treat the relative-state formalism as a formalism different from the Born and measurement-update rules of standard quantum mechanics. We postulate an alternative "Born rule" motivated by the work of G. Hermann. Equipped with this "Born rule," the relative-state formalism reproduces the same probabilities as standard quantum mechanics for consecutive measurements on one quantum system – the same level of observation. But the two formalisms are inequivalent in the case of encapsulated observers – different levels of observation, Wigner's-friend-type experiments. The latter was first considered in D. Deutsch's version of the Wigner's-friend experiment.
An observer – the friend – performs a measurement on the quantum system emitted by the source. Both the system and the friend (or the friend's memory) are then jointly measured by a superobserver – Wigner.
Standard quantum mechanics suggests that, according to the friend, the state of the system collapses to the eigenvector associated with the observed measurement result. To Wigner, however, the joint quantum system supposedly evolves unitarily. Such a subjective-collapse model, in which each agent attributes a collapse merely to their own measurement, leads to seemingly contradictory predictions among the agents.
Descriptions of Wigner's-friend-type setups based on the relative-state formalism do not give rise to problematic predictions, and neither do objective-collapse models or any other version in which there is consensus on the application of the measurement-update rule.
The possibility of classical communication between the agents in a Wigner's-friend-type experiment is essential for the problematic predictions to give rise to an actual contradiction.
Quantum measurements can lead to results that seem counterintuitive when we attempt to assign a classical meaning to the corresponding quantum observable. This is particularly apparent for joint measurements, which reveal global properties of a system while maintaining ignorance about some local properties. In the pigeonhole paradox, three quantum pigeons placed in two holes appear to each be in a different hole when measured in pairs, seemingly violating the pigeonhole principle. Here we present progress on an optical quantum pigeonhole experiment and show that the violation holds at all measurement strengths, including in the case where the measurement is so weak that measurement back-action cannot be used to explain the paradox. Furthermore, we show how the results of the weak measurement follow a consistent sum rule which breaks down as the measurements become stronger and measurement back-action is more significant.
It is well known that certain quantum correlations like quantum steering exhibit a monogamous relationship. In this paper, we exploit the asymmetric nature of quantum steering and show that there exist states which exhibit a kind of polygamous correlation, where the state of one party, Alice, can be steered only by the joint effort of the other two parties, Bob and Charlie. This is not in contradiction with the existing literature, since, instead of looking at whether Alice can steer both Bob and Charlie simultaneously, we investigate whether there exist states such that Alice can only be steered by Bob and Charlie together.
As an example, we explicitly single out a particular set of $3$-qubit states which exhibit this polygamous relationship, and also provide a recipe to identify the complete set of such states. We also provide a possible application of such states to an information-theoretic task, which we term quantum key authentication (QKA). QKA can also be used in conjunction with other well-known cryptography protocols to improve their security, and we provide one such example with quantum private comparison (QPC).
The strong experimental evidence for the violation of Bell's inequalities is widely interpreted as the ultimate proof of the impossibility of describing Bell's polarization states of two entangled particles in terms of a local and realistic statistical model of hidden variables in which the observers are free to choose the settings of their measurements. In this paper we note, however, that Bell's theorem relies crucially on a fourth implicit theoretical assumption, in addition to the explicit hypotheses of locality, physical realism and 'free will'. Namely, we note that the proof of Bell's inequalities for generic models of hidden variables implicitly assumes that there exists an absolute reference frame of angular coordinates, with respect to which we can define the polarization properties of the hidden configurations of the pair of particles as well as the orientation of the measurement apparatus that tests them. This implicit additional assumption, however, is not required by any fundamental physical principle and, indeed, it may be incorrect if the hidden configuration of the pair of entangled particles breaks the rotational symmetry along an otherwise arbitrary direction. We explicitly show that by giving up this additional implicit assumption it is possible to build a local and realistic model of hidden variables for Bell's polarization states, which complies with the demands of 'free will' and fully reproduces the quantum mechanical predictions. The model presented here offers a new insight into the notion of quantum entanglement and the role of measurements in the dynamics of quantum systems. It may also help to develop new tools for performing numerical simulations of quantum systems.
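As background, any such hidden-variable model must reproduce the standard quantum-mechanical CHSH value. The following minimal sketch (textbook material, not part of the model in this abstract) evaluates the CHSH combination for the singlet-state correlation E(a, b) = -cos(a - b) at the usual optimal analyzer angles:

```python
import math

# Singlet-state correlation between polarizer/analyzer angles a and b
# (standard quantum-mechanical prediction).
def E(a, b):
    return -math.cos(a - b)

# Standard optimal CHSH settings.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; |S| reaches Tsirelson's bound 2*sqrt(2) ~ 2.828,
# exceeding the classical (local hidden variable) bound of 2.
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(round(abs(S), 6))  # -> 2.828427
```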
We consider a two-player coordination game, similar in spirit to the CHSH game, in which Alice and Bob attempt to maximize the area of a rectangle. Alice and Bob each have two random variables, and the rectangle's area is represented by a certain parameter, which is a function of the correlations between their random variables. We show that this parameter is a Bell parameter, i.e., the achievable bound using only classical correlations is smaller than the achievable bound using non-local quantum correlations (in the quantum case, the random variables are outcomes of quantum measurements). We continue by generalizing the parameter to the case in which Alice and Bob each have n random variables and wish to maximize a certain volume in n-dimensional space. We call this parameter a multiplicative Bell parameter and prove its Tsirelson (quantum) limit for a qubit. Finally, we investigate the case of local hidden variables and demonstrate, under specific assumptions, a reduction from the problem of finding the Bell bound to an integer programming problem. We then show that for any deterministic strategy of one of the players the Bell parameter is an n-variable harmonic function whose maximum saturates the integer programming bound for many values of n. We conjecture that this maximum can be computed efficiently, while the integer programming problem cannot be solved efficiently.
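The reduction to an enumeration over deterministic strategies can be illustrated on the ordinary CHSH parameter (a hedged sketch, not the multiplicative parameter of this abstract): the classical Bell bound is the maximum of the correlation combination over all deterministic local assignments of outcomes ±1.

```python
from itertools import product

# Brute-force the classical bound of the CHSH Bell parameter
# S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1)
# by enumerating all deterministic local strategies, i.e. all
# assignments of an outcome +/-1 to each of Alice's two settings
# (a0, a1) and Bob's two settings (b0, b1).
def chsh_classical_bound():
    best = float("-inf")
    for a0, a1, b0, b1 in product([-1, 1], repeat=4):
        s = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
        best = max(best, s)
    return best

print(chsh_classical_bound())  # -> 2 (quantum correlations reach 2*sqrt(2))
```

For n settings per player this enumeration grows exponentially, which is why casting the bound as an integer programming problem, as the abstract does, becomes relevant.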
TBA
We point out a fundamental problem that hinders the quantization of general relativity: quantum mechanics is formulated in terms of systems, typically limited in space but infinitely extended in time, while general relativity is formulated in terms of events, limited both in space and in time. Many of the problems faced while connecting the two theories stem from the difficulty in shoe-horning one formulation into the other. A solution is not presented, but a list of desiderata for a quantum theory based on events is laid out.
The uncertainty associated with the probing of a quantum state is expressed in terms of the effective abundance (measure) of possibilities for its collapse. New kinds of uncertainty limits, entailed by the quantum description of the physical system, arise in this manner.
Most thermodynamic systems live and die by the Boltzmann exponential; the standard occupation functions (Fermi-Dirac, Bose-Einstein, and Boltzmann) are defined by it. In discrete-energy systems, state degeneracy is usually of secondary importance, while in continuous-energy systems, density-of-states functions may dominate the Boltzmann factor at low energies but never at high. However, this need not be the case. Recently, a new type of degeneracy (supradegeneracy) has been proposed in which state degeneracy increases more quickly with energy than the Boltzmann exponential decays, thereby dominating it at high energies. The result is systems that display a form of population inversion at thermal equilibrium without the need for non-equilibrium pumping. No naturally occurring supradegenerate systems appear to exist; however, analysis indicates that man-made supradegenerate systems should be constructable. Some are predicted to have remarkable properties, including allowing tests of the limits of the second law of thermodynamics.
In this presentation, the essentials of supradegeneracy will be reviewed and second-law tests proposed. Laboratory experiments (currently in progress) will be described in which supradegeneracy is being investigated. These involve silicon that is differentially doped with p-type impurities near its valence-band edge, forming a suprathermal "energy ladder" up into the band gap. It is predicted that electrons can climb the ladder to suprathermal energy states (E >> kT) driven solely by thermal energy from the lattice. Should these silicon experiments demonstrate the effect, efforts will be made to build energy ladders across the entirety of narrow-gap semiconductors. This should allow new and sensitive tests of the absolute status of the second law.
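The mechanism described above can be seen in a toy calculation (an illustrative sketch with an assumed functional form, not taken from the abstract): if the degeneracy grows as g(E) = exp(E/E0) with E0 < kT, it outpaces the Boltzmann decay exp(-E/kT), so the equilibrium occupation n(E) ∝ g(E) exp(-E/kT) increases with energy.

```python
import math

# Toy supradegenerate occupation: degeneracy g(E) = exp(E / E0)
# times the Boltzmann factor exp(-E / kT). For E0 < kT the product
# grows with E, i.e. a population inversion at thermal equilibrium.
def occupation(E, kT=1.0, E0=0.5):
    return math.exp(E / E0) * math.exp(-E / kT)

# With E0 = 0.5 and kT = 1, occupation(E) is proportional to exp(E),
# so higher-energy states are more populated than lower-energy ones.
print(occupation(2.0) > occupation(1.0) > occupation(0.5))  # -> True
```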
Holographic duality shows increasing evidence of a deep relation between quantum entanglement in quantum field theory (QFT) and gravity. While the 'entanglement entropy' captures spacetime physics outside a black hole horizon, the 'complexity' is proposed to be dual to the inside of the horizon. In contrast with the considerable progress on 'holographic complexity', complexity in QFT itself is not yet well defined. In this talk, we propose a definition of complexity in QFT based on three axioms for complexity and general symmetry properties of QFT. We show that our proposal can unify other field-theoretic approaches and agrees with the holographic (gravity) results.
This talk will present our recent efforts towards the quantification and unification of quantum macroscopicity, coherence, and nonclassicality. It will first cover our size measure for macroscopic quantum superpositions based on the phase-space structure [1] and another measure based on the degree of disturbance by coarse-grained measurements [2]. We will then present a more recent result that unifies two well-known yet independently developed concepts, i.e., the quantum coherence and the nonclassicality of light [3]. The concept of quantum coherence was recently developed within the framework of quantum resource theories [4], while the notion of nonclassicality of light has been established since the 1960s based on the quantum theory of light [5]. Our orthogonalization process enables one to quantify the coherence of an arbitrary continuous-variable state in the coherent-state basis, which leads to the conclusion that the coherence and the nonclassicality are identical resources [3]. Finally (time permitting), the talk will briefly discuss experimental implementations of macroscopic quantum states in optical systems [6,7].
[1] C.-W. Lee and H. Jeong, "Quantification of macroscopic quantum superpositions within phase space," Phys. Rev. Lett. 106, 220401 (2011).
[2] H. Kwon, C.-Y. Park, K.-C. Tan, and H. Jeong, "Disturbance-based measure of macroscopic coherence," New J. Phys. 19, 043024 (2