Partikeldagarna 2019 will be held at Linköping University on October 2-3, 2019, in close connection with Fysikdagarna 2019, which takes place on October 3-5. The workshop is organized by the Particle and Astroparticle Physics Section of the Swedish Physical Society (Fysikersamfundet). The workshop starts on Wednesday at 10 am (registration with coffee) and ends on Thursday at 6 pm (the social dinner of Fysikdagarna takes place that evening and is included in the fee). In addition, an optional social dinner specifically for the participants of Partikeldagarna will be organized on Wednesday evening.
The aim of the meeting is to provide a status report on, and a professional discussion of, current particle and astroparticle physics research in Sweden. Partikeldagarna 2019 will follow the same structure as in previous years, with a mix of talks given by individual (preferably young) researchers presenting their work, group reports, and summaries from funding agencies and international organizations where Sweden is represented.
Partikeldagarna will also feature two invited speakers covering hot topics in particle and astroparticle physics.
Registration: two steps! (please indicate any food restrictions/preferences in both forms)
Abstract submission: from July 1st to September 5th at noon.
In order to submit an abstract and register, you will need a CERN Indico account. Contact the organizers for support if needed.
Payment: 900 SEK for the meeting (including a light lunch on October 2nd, dinner on October 3rd, and all coffee breaks) and 600 SEK for the social dinner on October 2nd. In other words, the dinner for Partikeldagarna (Wednesday) is optional, while the social dinner for Fysikdagarna (Thursday) is included in the fee.
Accommodation: see separate accommodation tab.
Location: Linköping University
Room: Planck, Fysikhuset
Organizers: Riccardo Catena (Chalmers), Arnaud Ferrari (Uppsala), Christian Ohm (KTH)
Blazars are very bright gamma-ray sources that are visible out to very large redshifts. Powered by material falling onto a supermassive black hole, blazars have jets that can emit gamma rays up to TeV energies. As these TeV photons propagate through space, they are expected to encounter ambient magnetic fields and the extragalactic background light (EBL). Magnetic fields can convert the photons into axion-like particles (ALPs), while the EBL attenuates the intrinsic spectrum via pair production. We studied the multi-wavelength behaviour of KUV 00311-1938, a blazar with an unknown redshift, finding that a simple EBL model can explain the observed emission. We subsequently used the EBL model to constrain the redshift of the source.
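As a hedged illustration of the attenuation step (not the analysis of the abstract), the sketch below applies the standard relation F_obs(E) = F_int(E) exp(-tau(E, z)) with a toy optical depth; the power-law index, normalization, and functional form of tau are invented assumptions, whereas real analyses use tabulated EBL models.

```python
import numpy as np

def toy_optical_depth(E_TeV, z):
    """Toy EBL optical depth: grows with energy and redshift.
    Purely illustrative; real work uses tabulated EBL models."""
    return 2.0 * z * (E_TeV / 0.5) ** 1.5

def observed_flux(E_TeV, z, N0=1e-11, index=2.0):
    """Intrinsic power law attenuated by pair production on the EBL."""
    intrinsic = N0 * E_TeV ** (-index)
    return intrinsic * np.exp(-toy_optical_depth(E_TeV, z))

# A larger assumed redshift suppresses the TeV end of the spectrum more,
# which is what lets the observed spectral shape constrain z.
E = np.logspace(-1, 1, 5)  # 0.1 to 10 TeV
for z in (0.2, 0.5):
    print(z, observed_flux(E, z))
```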
ALTO is a future very-high-energy (VHE) gamma-ray observatory, currently in the prototyping phase. The proposed design of the array consists of more than a thousand Water Cherenkov Detectors (WCDs), each coupled with a liquid scintillator. The observatory will be installed at an altitude of 4 to 5 km above sea level in the Southern Hemisphere. The WCDs sample the secondary particles in the extensive air showers generated by VHE gamma rays and cosmic rays, while the scintillators help tag the presence of muons. Preliminary simulation studies show that the scintillators improve the signal-over-background discrimination by 15 to 30%, depending on the energy range. The first phase of the prototype, comprising two WCDs and scintillators, has been taking data since February 2019. In the poster, I will present the simulation performance and some results from the prototype activities. Finally, I will also present the future steps of the project.
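To make the muon-tagging idea concrete, here is a minimal toy sketch: hadronic showers carry more muons than gamma-ray showers, so a cut on the scintillator muon count suppresses background at a modest signal cost. All rates and muon multiplicities below are invented and do not reproduce the 15-30% figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)
n_events = 100_000

# Invented mean muon counts per event seen by the scintillators:
# gamma-ray showers are muon-poor, hadronic showers muon-rich.
mu_gamma = rng.poisson(0.1, n_events)
mu_hadron = rng.poisson(3.0, n_events)

cut = 1  # keep events with at most this many tagged muons
eff_sig = np.mean(mu_gamma <= cut)   # gamma-ray efficiency
eff_bkg = np.mean(mu_hadron <= cut)  # hadron survival (background)

# Gain in S/sqrt(B) relative to applying no muon cut at all
gain = eff_sig / np.sqrt(eff_bkg)
print(f"signal eff = {eff_sig:.3f}, background eff = {eff_bkg:.3f}, "
      f"S/sqrt(B) gain = {gain:.2f}")
```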
We present methods for finding anomaly-free gauged U(1) Froggatt-Nielsen-type models using results from real algebraic geometry. These methods should be of general interest for model building beyond the Standard Model whenever rational charges are required. We consider models with a gauged U(1) flavour symmetry and one flavon, and provide several model examples based on different physical assumptions. Necessary conditions for these models to be free from gauge anomalies are derived, and we show that the field content of the Standard Model is not sufficient, so additional fields are needed. Two such extensions are considered: two Higgs doublets, and right-handed neutrinos providing Dirac masses. With these extensions, the fermion masses and mixings of the Standard Model, including neutrinos, can largely be explained.
Moreover, we show that the UV behaviour of these models is in general plagued by Landau poles. Two different UV completions are considered: through vector-like fermions and through Higgs doublets. In the fermionic completion, the gauge couplings generally develop Landau poles, while in a scalar completion this may be avoided, but the quartic couplings then generally blow up instead. Thus, in the generic case neither completion works, although the scalar completion might be saved by an appropriate choice of parameters in the scalar potential. This conclusion does not change if we allow the U(1) to be anomalous or global.
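For readers unfamiliar with the anomaly conditions referred to above, the sketch below checks the standard one-generation conditions for a U(1)_X acting on SM fermions, using exact rational arithmetic. The demonstration charge assignment is the familiar B-L example (anomaly-free once a right-handed neutrino is included), not one of the models of this work.

```python
from fractions import Fraction as F

def sm_generation(q, u, d, l, e, n):
    """One generation of left-handed Weyl fermions (plus nu_R), as
    (multiplicity, hypercharge Y, U(1)_X charge X); right-handed
    fields enter as conjugates with flipped charges."""
    return [
        (6, F(1, 6),  q),   # quark doublet Q (3 colors x 2 isospins)
        (3, F(-2, 3), -u),  # u_R written as u^c
        (3, F(1, 3),  -d),  # d_R written as d^c
        (2, F(-1, 2), l),   # lepton doublet L
        (1, F(1, 1),  -e),  # e_R written as e^c
        (1, F(0, 1),  -n),  # nu_R written as nu^c
    ]

def anomaly_coefficients(q, u, d, l, e, n):
    """All six anomaly coefficients must vanish for consistency."""
    fields = sm_generation(q, u, d, l, e, n)
    return {
        "SU(3)^2 U(1)": 2*q - u - d,
        "SU(2)^2 U(1)": 3*q + l,
        "Y^2 U(1)":  sum(m * Y**2 * X for m, Y, X in fields),
        "Y U(1)^2":  sum(m * Y * X**2 for m, Y, X in fields),
        "U(1)^3":    sum(m * X**3 for m, Y, X in fields),
        "grav U(1)": sum(m * X for m, Y, X in fields),
    }

# The familiar B - L assignment: anomaly-free with nu_R included.
charges = tuple(map(F, ("1/3", "1/3", "1/3", "-1", "-1", "-1")))
for name, value in anomaly_coefficients(*charges).items():
    print(f"{name:14s} = {value}")  # all six print 0
```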
At high densities and temperatures, the standard quantum field theoretical approach to particle physics must be modified. Temperature enters explicitly in observables, for instance in decay rates, and under certain conditions the results deviate significantly from the zero-temperature case. I have put together a collection of thermal decay rates covering scalars, pseudoscalars and fermions, thereby extending the existing literature. I aim to lay out the procedure of thermal calculations, detailing the explicit appearance of temperature in the two-point correlation function. I also highlight the interpretation of the thermal decay rate in comparison with its zero-temperature counterpart.
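As a minimal illustration of how temperature can enter a decay rate (this is the textbook result, not the calculation of the abstract), the sketch below evaluates the thermal factors for a scalar at rest decaying into two massless daughters: stimulated emission enhances bosonic final states by 1 + 2n_B(m/2), while Pauli blocking suppresses fermionic ones by 1 - 2n_F(m/2).

```python
import numpy as np

def n_B(E, T):
    """Bose-Einstein occupation number."""
    return 1.0 / np.expm1(E / T)

def n_F(E, T):
    """Fermi-Dirac occupation number."""
    return 1.0 / (np.exp(E / T) + 1.0)

def thermal_ratio(m, T, daughters="bosons"):
    """Gamma(T)/Gamma(0) for a scalar at rest decaying into two
    massless daughters, each carrying energy m/2."""
    E = m / 2.0
    if daughters == "bosons":
        return 1.0 + 2.0 * n_B(E, T)  # stimulated emission enhances
    return 1.0 - 2.0 * n_F(E, T)      # Pauli blocking suppresses

m = 1.0  # decaying particle mass, in the same units as T
for T in (0.1, 0.5, 2.0):
    print(T, thermal_ratio(m, T, "bosons"), thermal_ratio(m, T, "fermions"))
```

Both ratios approach 1 as T goes to zero, recovering the vacuum rate.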
It is well known that the spinor-helicity method can significantly simplify the calculation of both Feynman diagrams and scattering amplitudes. In this work, we attempt to simplify Feynman-diagram calculations further by recasting the spinor-helicity method as a flow method, which effectively provides a one-step route from a Feynman diagram to spinor inner products. The cases of massless QED and QCD will be discussed.
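For context (this is standard spinor-helicity kinematics, not the flow method of the talk), the sketch below builds angle brackets numerically from light-cone components and checks the identity |<ij>|^2 = 2 p_i . p_j for real massless momenta, in one common convention.

```python
import numpy as np

def angle_bracket(pi, pj):
    """<ij> from light-cone components, valid for real massless
    momenta with p^+ = p0 + p3 > 0 (one common convention)."""
    def plus(p):  return p[0] + p[3]
    def perp(p):  return p[1] + 1j * p[2]
    return (np.sqrt(plus(pj) / plus(pi)) * perp(pi)
            - np.sqrt(plus(pi) / plus(pj)) * perp(pj))

def minkowski_dot(p, q):
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

p1 = np.array([2.0, 1.0, 1.0, np.sqrt(2.0)])  # massless: E^2 = |p|^2
p2 = np.array([3.0, 0.0, 3.0, 0.0])

ang = angle_bracket(p1, p2)
# For real momenta, [ji] = <ij>* up to a convention-dependent sign,
# so s_12 = <12>[21] reduces to |<12>|^2 here.
print(abs(ang) ** 2, 2.0 * minkowski_dot(p1, p2))  # both ~ 6.0
```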
The sources and acceleration mechanisms of cosmic rays, with energies as high as 10^{20} eV, are not completely understood. Gamma-ray bursts (GRBs) have long been considered promising source candidates, yet so far show no evidence for a correlated neutrino signal that would prove this hypothesis. Previous IceCube analyses have searched for neutrinos in coincidence with the prompt phase of GRBs, which typically lasts 100 s or less. Here I will describe a search for neutrino correlations before and after this prompt phase, using an extended time window. Presently I am calculating the spatial and temporal coincidence of a list of GRBs with the neutrino detections made by IceCube, and comparing the results for different precursor and afterglow models hypothesised for short- and long-duration GRBs. These searches build on a methodology similar to the one used for the detection of longer time-scale transients such as blazar flares, applying it for the first time to short transients like GRBs.
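A bare-bones version of such a coincidence count might look as follows. The time window, angular radius, and event lists are invented placeholders, and the actual analysis uses likelihood methods with the full detector response rather than simple counting.

```python
import numpy as np

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in radians (all inputs in radians)."""
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.arccos(np.clip(cos_sep, -1.0, 1.0))

def count_coincidences(grbs, neutrinos, t_window=5000.0,
                       max_sep=np.radians(1.0)):
    """Count neutrinos within +/- t_window seconds and max_sep radians
    of any GRB; the extended window covers precursor/afterglow phases."""
    n = 0
    for t_grb, ra_g, dec_g in grbs:
        for t_nu, ra_n, dec_n in neutrinos:
            if (abs(t_nu - t_grb) < t_window
                    and angular_sep(ra_g, dec_g, ra_n, dec_n) < max_sep):
                n += 1
    return n

# Placeholder events: (time [s], RA [rad], Dec [rad])
grbs = [(1000.0, 1.0, 0.3), (50000.0, 2.0, -0.5)]
neutrinos = [(1200.0, 1.01, 0.31), (90000.0, 0.5, 0.0)]
print(count_coincidences(grbs, neutrinos))  # -> 1
```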
An important part of the ATLAS dark matter (DM) search programme consists of searches for new resonances (dark-matter mediators) decaying to hadronic final states. This talk will give an overview of one such analysis: the recently published search for low-mass dark-matter mediators decaying to jets, produced with an associated high-$p_T$ photon. Triggering on the associated photon significantly extends the region of ATLAS sensitivity to dark-matter mediators and generic resonances. Also covered is the reinterpretation of the resulting limits in the case of non-discovery.
Current estimates put dark matter at 26.8% of the energy-matter content of the universe, but very little is known about it other than its gravitational interactions. Efforts to learn more about dark matter include searching for it at high-energy particle colliders. The lack of information about the nature of dark matter makes this a complicated task, and many searches are performed in different channels and with different theoretical models. In my master thesis, I explore two such analyses performed in the ATLAS collaboration using data from the ATLAS detector at the Large Hadron Collider at CERN: the tW+MET final state and the tt̄+MET final state. I have made a generator-level study of the overlap between the signal regions used, and conclude that there is some. I have also compared the models used in these analyses, the 2HDM+a and the simplified spin-0 pseudoscalar model. Given the simplifications made in my study, however, more sophisticated approaches are needed before anything conclusive can be said.
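Schematically, such an overlap estimate amounts to counting generated events that pass both selections; the variables and thresholds below are invented stand-ins for the real signal-region definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Invented generator-level quantities per event
met = rng.exponential(150.0, n)    # missing transverse energy [GeV]
mt_w = rng.normal(120.0, 40.0, n)  # a transverse-mass-like variable [GeV]

# Stand-in signal-region definitions for the two analyses
in_sr_tw = (met > 200.0) & (mt_w > 140.0)
in_sr_tt = (met > 250.0)

both = in_sr_tw & in_sr_tt
print(f"SR(tW+MET): {in_sr_tw.mean():.3%} of events")
print(f"SR(tt+MET): {in_sr_tt.mean():.3%} of events")
print(f"overlap:    {both.sum() / max(in_sr_tw.sum(), 1):.1%} of tW SR events")
```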
Recently, searches for pair production of Higgs bosons in several final states have been carried out by the ATLAS experiment at the Large Hadron Collider (LHC). This study focuses on the search for non-resonant di-Higgs production decaying to a final state with two $b$-jets and two $\tau$-leptons, using 36.1 fb$^{-1}$ of data recorded by the ATLAS detector. The analysis of this process has already been performed: boosted decision trees (BDTs) are used to improve the separation of signal from background processes, with several variables that provide good discrimination between signal and background serving as inputs to the BDT. This study aims to unfold the BDT of that analysis and optimize a cut-based analysis, so that the gain from using the BDT can be estimated. Two variables, related to the invariant masses and angular distances of the Higgs boson decay products, are defined, and the optimal cuts are found to be $X_{m_{\tau\tau}m_{bb}}>1.8$ and $X_{\Delta R_{\tau\tau}\Delta R_{bb}} >4.0$. Upper limits on the SM $HH$ production cross section are then set by fitting $m_{HH}$ in the cut-based analysis. An expected limit of 0.78 pb, 23 times the SM prediction, is obtained when neglecting systematic uncertainties, compared to a limit of 15 times the SM prediction as recomputed using the BDT. Comparing the two results, the sensitivity is worsened by about 50% when the BDT is not used.
With strong constraints on vector-like quark (VLQ) masses from searches for the standard top-partner decays T->SM, it is necessary to consider non-standard decays of such partners. This talk considers models where the VLQ decays to a BSM (pseudo)scalar (S) and a top quark. The scalar is assumed to be fermiophobic and dominantly decays into two SM bosons. With dedicated analyses, we realistically quantify the sensitivity of the LHC to both the T and S masses, assuming both current and foreseen luminosities.
Experimental observations and advanced computer simulations in High Energy Physics (HEP) paved the way for the recent discoveries at the Large Hadron Collider (LHC) at CERN. Currently, Monte Carlo simulations account for a very significant share of the computational resources of the Worldwide LHC Computing Grid (WLCG).
Looking at recent trends in modern computer architectures, we see a significant shortfall in the expected growth in performance. Coupled with the increasing computing demand of the High-Luminosity LHC (HL-LHC) run, this makes it vital to address the shortfall with more efficient simulation.
The simulation software for the particle tracking algorithms of the LHC experiments predominantly relies on the Geant4 simulation toolkit. The Geant4 framework can be built either as dynamic or static libraries, the former being the more widely used approach. This study evaluates the impact of static versus dynamic linking, as well as of compiler optimization levels, on the simulation software's execution time.
Multiple versions of the most widely used compilers for UNIX-like systems have been used for these investigations. Both compiler optimization levels (e.g. O3 and O2 on GCC) and link-time optimization (LTO) have been studied. Initial results indicate that significant execution-time reductions can be obtained by switching from static to dynamic linking.
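A simple way to compare such builds is to time repeated runs of the same simulation under each configuration. The sketch below is a generic timing harness; the binary names, arguments, and build labels are hypothetical placeholders, not part of the study.

```python
import statistics
import subprocess
import time

def time_runs(cmd, n_runs=5):
    """Wall-clock time a command over several runs and report the
    median, which is less sensitive to outliers than the mean."""
    samples = []
    for _ in range(n_runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical builds of the same Geant4-based simulation:
builds = {
    "dynamic-O2":    ["./sim_dynamic_O2", "run.mac"],
    "static-O3-lto": ["./sim_static_O3_lto", "run.mac"],
}
for name, cmd in builds.items():
    print(f"{name}: {time_runs(cmd):.2f} s (median of 5 runs)")
```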
One overarching objective of science is to further our understanding of the universe, from its early stages to its current state and future evolution. This depends on gaining insight into the universe's most macroscopic components, such as galaxies and stars, as well as on describing its smallest components, namely elementary particles and nuclei and their interactions. It is clear that this endeavor requires combined expertise from the fields of astroparticle physics, particle physics and nuclear physics. Pursuing common scientific drivers also requires mastering challenges related to instrumentation (e.g. beams and detectors), data acquisition, selection and analysis, and making data and results available to the broader science communities. Joint work on, and recognition of, these "foundational" topics will help all communities grow towards their individual and common scientific goals. This contribution presents the work that various communities and experiments are doing to solve one of the many common challenges faced by particle physics and astrophysics: the need to deal with large, sometimes heterogeneous datasets and to derive insight from them in short periods of time.