Welcome to the website for the 2021 CAP Virtual Congress abstract submission and scheduling. The Program Committee is looking forward to offering an exciting and full slate of talks and sessions, along with important networking opportunities. NOTE: All scheduling is programmed in the Eastern time zone. Use the button in the top right of the page to select the time zone in which you want to view the schedule. As a reminder, the CAP office staff continue to work remotely from home in accordance with the current provincial recommended best practices; therefore, should you have any questions, please contact the CAP office by e-mail at programs@cap.ca. Robert Thomson, CAP President
Welcome to the CAP2021 Indico site which is being used for abstract submission and, later, congress scheduling.
The abstract system for the 2021 CAP Virtual Congress has now been re-opened for invited speaker abstracts and the submission of post-deadline poster abstracts, and will close at 23h59 EDT on March 31. Thinking of submitting an abstract and don't have an Indico account? Click on the Call for Abstracts link in the menu on the left for information on how to create one.
CONGRESS REGISTRATION is scheduled to open around May 1, 2021.
For many years physicists divided the world of quantum particles into two kingdoms – bosons and fermions. Anyons are particles with a sort of memory that fall outside those kingdoms. In recent decades theorists have produced an enormous literature about the possible occurrence and properties of anyons. It is only in the last few months, however, that clear experimental confirmation has appeared. I will review the theoretical background and describe the breakthrough experiments. Anyons will be central to a new and promising method of information processing: topological quantum computing.
Cooling atomic gases to ultracold temperatures revolutionized the field of atomic physics, connecting with and impacting many other areas in physics. Recent advances in producing ultracold molecules suggest similarly dramatic discoveries are on the horizon. I will review the physics of ultracold molecules, including our work bringing a new class of molecules to ultracold temperatures. Chemistry at these temperatures has a very different character than at room temperature. I will describe two striking effects: spin-dependent reactivity and molecular Feshbach resonances.
Over the past decade, significant progress has been made in the commercialization of quantum sensors based on ultra-cold atoms and matter-wave interferometry. Nowadays, the first absolute quantum gravimeters have reached the market and there is even a cold-atom machine on the International Space Station. Matter-wave interferometers utilize the wave nature of atoms and their interaction with laser light to create interference between different quantum-mechanical states. Compared to an optical interferometer, the roles of matter and light are reversed: light is used as the "optic" to split, reflect, and recombine matter-waves. The resulting interference contains precise information about the atom's motion, such as its acceleration, as well as the electromagnetic fields that permeate its environment. These atom interferometers can be designed as extremely sensitive instruments and have already led to breakthroughs in time-keeping, gravimetry, and tests of fundamental physics. In this talk, I will give an overview of laser-cooling and matter-wave interferometry and its applications as a versatile tool, for example, to test Einstein's Equivalence Principle, map the Earth's gravitational field, or aid future navigation systems.
One of the most famous tidbits of received wisdom about quantum mechanics is that “you can’t ask” which path a photon took in an interferometer once it reaches the screen, or in general, that only questions about the specific things you finally measure are well-posed at all. Much work over the past decades has aimed to chip away at this blanket renunciation, and investigate “quantum retrodiction.” Particularly in light of modern experiments in which we can trap and control individual quantum systems for an extended time, and quantum information protocols which rely on “postselection,” these become more and more timely issues.
All the same, the principal experiment I wish to tell you about addresses a century-old controversy: that of the tunneling time. Since the 1930s, and more heatedly since the 1980s, the question of how long a particle spends in a classically forbidden region on those occasions when quantum uncertainty permits it to appear on the far side has been a subject of debate. Using Bose-condensed rubidium atoms cooled to below a billionth of a degree above absolute zero, we have now measured just how long they spend inside an optical beam which acts as a “tunnel barrier” for them. I will describe these ongoing experiments, as well as proposals we are now refining to study exactly how long it would take to “collapse” an atom to be in the barrier.
I will also say a few words about a more recent experiment, which looks back at the common picture that when light slows down in glass, or a cloud of atoms, it is because the photons “get virtually absorbed” before being sent back along their way. It turns out that although it is possible to measure “the average time a photon spends as an atomic excitation,” there seems to be no prior work which directly addresses this, especially in the resonant situation. We carry out an experiment that lets us distinguish between the time spent by transmitted photons and by photons which are eventually absorbed, asking the question “how much time are atoms caused to spend in the excited state by photons which are not absorbed?”
We investigate the effect of coupling between translational and internal degrees of freedom of composite quantum particles on their localization in a random potential. We show that entanglement between the two degrees of freedom weakens localization due to the upper bound imposed on the inverse participation ratio by purity of a quantum state. We perform numerical calculations for a two-particle system bound by a harmonic force in a 1D disordered lattice and a rigid rotor in a 2D disordered lattice. We illustrate that the coupling has a dramatic effect on localization properties, even with a small number of internal states participating in quantum dynamics.
arXiv:2011.06279
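To make the bound referred to above explicit in standard notation (my notation, not necessarily that of the paper): let $\rho$ be the reduced centre-of-mass density matrix obtained by tracing out the internal states, with lattice-site populations $P_n=\rho_{nn}$. Then

$$ \mathrm{IPR} \,=\, \sum_n P_n^{\,2} \,=\, \sum_n \rho_{nn}^{\,2} \;\le\; \sum_{n,n'} \left|\rho_{nn'}\right|^{2} \,=\, \operatorname{Tr}\rho^{2}, $$

so the purity $\operatorname{Tr}\rho^{2}$ caps how strongly the centre-of-mass probability can concentrate on a few sites; entanglement with the internal degree of freedom lowers the purity and hence the attainable IPR, weakening localization.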
Hollow-core optical fibres provide μm-scale confinement of photons and atoms and reduce the power requirements for optical nonlinearities. This platform has opened tantalizing possibilities to study and engineer light-matter interactions in atomic ensembles. However, the purity, efficiency and nature of these interactions are contingent on the number, geometry and movement of atoms within the fibre. It is thus of interest to have a handle on loading atoms into, and their motion inside, the fibre.
Starting from ~ μK MOT (magneto-optical trap) clouds positioned a few mm above the fibre, optical dipole potentials have been used to guide matter into the hollow fibres. To study the effect of different experimental conditions on the loading process, we use parallel programs to re-create the trajectories of atoms into vertically oriented hollow fibres with core diameters of 7 μm and 30 μm. We make predictions about the effects on the loading efficiency of the initial MOT temperature and position, of different guiding optical potentials, and of higher-order waveguide modes excited in the fibre. We compare the results of these simulations with reported experiments. Additionally, atomic motion inside the hollow fibre is visualized in order to predict ensemble features such as atom-cloud length, atom-atom distances and position-velocity distributions. These play a role in determining how transversely-confined light couples with the ensemble. Lastly, as the attempted schemes of gravity-assisted loading from a vertically positioned MOT appear to yield efficiencies in the limited range of 0.01 to 1%, potential alternatives are explored in order to realize a more direct interfacing of atoms with the fibre.
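As a hedged illustration of this kind of trajectory simulation (a single classical atom in a red-detuned Gaussian guide beam plus gravity; the beam model and all parameters below are simplified placeholders, not the values or code used in the reported simulations):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sketch of a gravity-assisted loading trajectory: one classical atom falling
# from a MOT into a vertically oriented, red-detuned Gaussian guide beam whose
# focus sits at the fibre tip (z = 0). All numbers are illustrative placeholders.
kB = 1.380649e-23
m = 1.44e-25                   # 87Rb mass (kg)
U0 = kB * 100e-6               # guide-beam trap depth, ~100 uK (placeholder)
w0 = 15e-6                     # beam waist at the fibre tip (m, placeholder)
zR = np.pi * w0**2 / 780e-9    # Rayleigh range for 780 nm guide light
g = 9.81                       # gravity, pulling the atom towards z = 0

def beam_radius(z):
    return w0 * np.sqrt(1.0 + (z / zR) ** 2)

def acceleration(r, z):
    """Dipole force from U(r,z) = -U0 (w0/w)^2 exp(-2 r^2 / w^2), plus gravity."""
    w = beam_radius(z)
    U = -U0 * (w0 / w) ** 2 * np.exp(-2.0 * r**2 / w**2)
    dU_dr = U * (-4.0 * r / w**2)
    dw_dz = w0**2 * z / (zR**2 * w)
    dU_dz = U * (-2.0 / w + 4.0 * r**2 / w**3) * dw_dz
    return -dU_dr / m, -dU_dz / m - g

def rhs(t, y):
    r, z, vr, vz = y
    ar, az = acceleration(r, z)
    return [vr, vz, ar, az]

# release the atom 5 mm above the tip, slightly off-axis, with a small thermal kick
sol = solve_ivp(rhs, (0.0, 0.04), [5e-6, 5e-3, 0.01, 0.0], max_step=2e-5)
print("final radial offset and height:", sol.y[0, -1], sol.y[1, -1])
```

A full loading study would propagate many atoms drawn from the MOT temperature and position distributions, include the guided-mode profile below the fibre tip, and count the fraction of trajectories that enter the core.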
We model an atomic Bose-Einstein condensate (BEC) near an instability, looking for universal features. Instabilities are often associated with bifurcations where the classical field theory provided here by the Gross-Pitaevskii equation predicts that two or more solutions appear or disappear. Simple examples of such a situation can be realized in a BEC in a double well potential or in a BEC rotating in a ring trap. We analyze this problem using both Bogoliubov theory and exact diagonalization. The former describes elementary excitations which display complex frequencies near the bifurcation. We make connections to the description of bifurcations using catastrophe theory but modified to include field quantization.
Lieb-Robinson and related bounds set an upper limit on the speed at which information propagates in non-relativistic quantum systems. Experimentally, light-cone-like spreading has been observed for correlations in the Bose-Hubbard model (BHM) after a quantum quench. Using a two-particle irreducible (2PI) strong-coupling approach to out-of-equilibrium dynamics in the BHM we calculate both the group and phase velocities for the spreading of single-particle correlations in one, two, and three dimensions as a function of interaction strength in the Mott insulating phase. Our results are in quantitative agreement with recent measurements of the speed of spreading of single-particle correlations in both the one- and two-dimensional BHM realized with ultracold atoms. We demonstrate that there can be large differences between the phase and group velocities for the spreading of correlations and explore how the anisotropy in the velocity varies across the phase diagram of the BHM. Our results establish the 2PI strong-coupling approach as a powerful tool to study out-of-equilibrium dynamics in the BHM in dimensions greater than one.
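For reference, the two speeds compared in this work follow the textbook definitions from the dispersion relation $\omega(k)$ of the single-particle excitations:

$$ v_{p} = \frac{\omega(k)}{k}, \qquad v_{g} = \frac{\partial \omega(k)}{\partial k}, $$

so a strongly nonlinear dispersion in the Mott insulating phase can make the correlation front (set by the group velocity) and the phase fronts propagate at very different speeds.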
In recent years, multi-species trapped-ion systems have been investigated for the benefits they provide in quantum information processing experiments, such as sympathetic cooling and combining the long coherence time of one species with the ease of optical manipulation of the other. However, a large mass imbalance between the ions results in decoupling of their motion in the collective vibrational (phonon) modes that are used to mediate entanglement between ion-qubits. We theoretically and numerically investigate a scheme that introduces far off-resonant optical tweezers (tightly focused laser beams addressing individual ions) of controllable strength in a conventional Paul (RF and DC) trap. The tweezers enable site-dependent control over the trapping strength and manipulation of the phonon mode structure (eigenfrequencies and eigenvectors) of the trapped-ion system. The tweezers provide local control over the effective mass of the ion and hence minimize the motional decoupling. We demonstrate an algorithm to program the optical tweezer array to achieve a target set of phonon modes. Our work paves the way for high-efficiency sympathetic cooling and fast quantum gates in multi-species trapped-ion systems.
We acknowledge support from TQT (CFREF), the University of Waterloo, NSERC Discovery and NFRF grants, and Govt. of Ontario.
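The mode-engineering idea described above can be illustrated with a toy calculation (not the authors' method or parameters): add a site-dependent tweezer stiffness to the stiffness matrix of a small ion crystal and re-diagonalize to see how the phonon eigenfrequencies and eigenvectors shift.

```python
import numpy as np

# Toy illustration (not the authors' code) of how local tweezer "pinning" reshapes
# the axial phonon modes of a mixed-species two-ion crystal. K is the stiffness
# matrix: trap + nearest-neighbour Coulomb coupling + an optional per-ion tweezer
# stiffness k_tw; diagonalizing the mass-weighted K gives mode frequencies/vectors.

def axial_modes(masses, omega_trap, spacing, k_tw):
    e, k_C = 1.602e-19, 8.988e9
    n = len(masses)
    kc = 2.0 * k_C * e**2 / spacing**3            # Coulomb "spring" between neighbours
    K = np.diag(masses * omega_trap**2 + k_tw)
    for i in range(n - 1):
        K[i, i] += kc; K[i + 1, i + 1] += kc
        K[i, i + 1] -= kc; K[i + 1, i] -= kc
    Minv = np.diag(1.0 / np.sqrt(masses))
    w2, vecs = np.linalg.eigh(Minv @ K @ Minv)    # w2 = omega^2 of the normal modes
    return np.sqrt(w2) / (2 * np.pi), vecs

u = 1.6605e-27
masses = np.array([171.0, 138.0]) * u             # e.g. a Yb+/Ba+ pair (illustrative)
omega_trap = 2 * np.pi * np.array([1.0e6, 1.0e6]) # per-ion axial frequency (toy value)
no_tweezer = axial_modes(masses, omega_trap, 5e-6, np.zeros(2))
with_tweezer = axial_modes(masses, omega_trap, 5e-6, np.array([0.0, 2e-12]))
print(no_tweezer[0], with_tweezer[0])             # pinning one ion shifts and mixes the modes
```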
Many previous approaches to the firewall puzzle rely on the hypothesis that interior partner modes are embedded in the early radiation of a maximally entangled black hole. Quantum information theory, however, casts doubt on this folklore and suggests a different tale; the outgoing Hawking mode will be decoupled from the early radiation once an infalling observer, with finite positive energy, jumps into a black hole.
In this talk, I will provide counterarguments against current mainstream proposals and present an alternative resolution of the firewall puzzle which is consistent with predictions from quantum information theory. My proposal builds on the fact that interior operators can be constructed in a state-independent manner once an infalling observer is explicitly included as a part of the quantum system. Hence, my approach also resolves a version of the firewall puzzle for typical black hole microstates on an equal footing.
One of the cornerstones of Quantum Mechanics (QM), Heisenberg's Uncertainty Principle (HUP), establishes that it is not possible to simultaneously measure with arbitrary precision both the position and the momentum of a quantum system. This principle, however, does not prevent one from measuring the system's position alone with infinite precision. However, theories of Quantum Gravity, aiming to bridge between General Relativity and QM, predict the existence of a minimal observable length - a minimal uncertainty on the position, generally of the order of the Planck length. This prediction therefore contradicts the HUP, requiring a modification of the principle. This need gave rise to the Generalized Uncertainty Principle (GUP). In this talk, I will show how the GUP can change known aspects of standard QM, leading to ways to test Quantum Gravity.
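A commonly used parametrisation of the GUP (one standard form from the literature; the talk may use a different one) deforms the canonical commutator with a term quadratic in momentum,

$$ [\hat{x},\hat{p}] = i\hbar\left(1+\beta \hat{p}^{2}\right) \quad\Longrightarrow\quad \Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1+\beta\,(\Delta p)^{2}\right), $$

which implies a minimal position uncertainty $\Delta x_{\min}=\hbar\sqrt{\beta}$, usually taken to be of the order of the Planck length.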
Scalar-tensor gravity can be described as general relativity plus an effective imperfect fluid corresponding to the scalar field degree of freedom of this class of theories. A symmetry of electrovacuum Brans-Dicke gravity translates into a symmetry of the corresponding effective fluid. We present the formalism and an application to an anomaly in the limit of Brans-Dicke theory to Einstein gravity.
[Based on V. Faraoni & J. Côté, Phys. Rev. D 98, 084019 (2018); Phys. Rev. D 99, 064013 (2019)]
We study Quantum Gravity effects on the density of states in statistical mechanics and their implications for the critical temperature of a Bose-Einstein condensate and the fraction of bosons in its ground state. We also study the effects of compact extra dimensions on the critical temperature and the condensate fraction. We consider both neutral and charged bosons and show that the effects may just be measurable in current and future experiments.
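For orientation, the unmodified textbook results that such corrections perturb are the ideal-gas condensation temperature and condensate fraction in three dimensions,

$$ k_{B}T_{c} = \frac{2\pi\hbar^{2}}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}, \qquad \frac{N_{0}}{N} = 1-\left(\frac{T}{T_{c}}\right)^{3/2}, $$

with $n$ the number density; a modified density of states or compact extra dimensions change the mode counting and therefore shift both expressions.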
Loop Quantum Gravity (LQG) is one proposed approach to quantizing General Relativity. In previous literature, LQG effects have been applied to Bianchi II spaces, and here we numerically solve the resulting equations of motion using the fixed-step 6th-order Butcher-1 Runge-Kutta method. We also test, for a wide range of initial conditions, analytic transition rules for the Kasner exponents and show in which cases these transition rules hold.
Guided by the application of loop quantum gravity (LQG) to cosmological space-times and techniques developed therein, I will present an effective framework for vacuum spherically symmetric space-times. Stationary solutions of the effective theory give an LQG-corrected metric with a number of interesting properties, including curvature scalars that are bounded by the Planck scale and a minimal (non-zero) mass for black hole formation. Finally, the vacuum solution we derive is only valid down to some minimum (non-zero) radial coordinate; this necessitates the inclusion of matter fields to describe the full space-time and, in particular, to address the question of singularity resolution.
Using Loop Quantum Gravity corrections, one can study quantum gravity effects for a dust-gravity system, resulting in a Loop Quantum version of Oppenheimer-Snyder collapse. In this talk I will explain how this model is built up and the consequences of adding holonomy corrections to the classical theory. In particular, we see that, during black hole formation, there is a bounce when the energy density of the dust field reaches the Planck scale, and the matter starts expanding. This expansion eventually reaches the apparent horizon, at which point the horizon disappears and there is no longer a black hole.
Launched in 2016, our four-year Integrated Science program is intended for students who have a passion for science and who wish to dissolve the barriers between the traditional scientific disciplines. The highlight of the program is a second-semester, first-year “megacourse” that takes a radical approach to first-year science by asking four overarching questions: How did Earth evolve, what is energy, what is life, and how does my smartphone work? Students completing the megacourse receive credit equivalent to second-semester biology, calculus, chemistry, and physics. At the same time, students are introduced to astronomy, earth sciences, computer science, and statistics, scientific disciplines to which first-year students typically receive no exposure. Along with a quick overview of the course, we will explore how students are assessed to ensure that they are sufficiently competent in each of the four main disciplines, particularly in physics.
I will present and discuss recent progress in spectroscopic studies of neutron-rich nuclei near and beyond the neutron drip line, using the large-acceptance multi-purpose spectrometer SAMURAI at RIBF at RIKEN [1]. After a brief introduction on characteristic features of structures near and beyond the neutron drip line, we focus on the recent experimental results on the observation of $^{25-28}$O [2] beyond the neutron drip line, and the Coulomb and nuclear breakup of halo nuclei such as $^{6}$He and $^{19}$B [3]. Future perspectives on the spectroscopy of such extremely neutron-rich nuclei are also discussed.
[1] T. Nakamura, H. Sakurai, H. Watanabe, Prog. Part. Nucl. Phys. 97, 53 (2017).
[2] Y. Kondo, et al. Phys. Rev. Lett. 116, 102503 (2016).
[3] K.J. Cook, et al., Phys. Rev. Lett. 124, 212503 (2020).
Understanding the structure of complex many-body nuclei is one of the central challenges in nuclear physics. The conventional shell model is capable of explaining the structure of stable nuclei, but it starts to break down towards the drip lines and for rare isotopes. To explain the new trends in the shell model at the drip lines, it is essential to study these exotic nuclei. Halo nuclei are prime examples of some of the unusual characteristics of rare isotopes. The development of radioactive ion beam facilities has made it possible to explore different aspects of halo nuclei. $^{11}$Li is a two-neutron halo nucleus with a $^{9}$Li core. In this study, the resonance states of $^{11}$Li have been investigated through deuteron scattering off $^{11}$Li. The experiment was performed at the IRIS facility at TRIUMF with an $^{11}$Li beam accelerated to 7.3A MeV. The scattered deuterons were detected using silicon and CsI(Tl) detectors. The missing-mass technique was used to obtain the excitation spectrum. The resonance spectrum observed in inelastic scattering and the $^{11}$Li ground state observed in elastic scattering will be presented, showing the excited states seen for $^{11}$Li. Their characteristics will be discussed.
Neutron beta decay is a fundamental nuclear process that provides a means to perform precision measurements that test the limits of our present understanding of the weak interaction described by the Standard Model of particle physics and place constraints on physics beyond the Standard Model. The Nab experiment will measure $a$, the electron-neutrino angular correlation parameter, and $b$, the Fierz interference term. The Nab experiment implements large-area segmented silicon detectors to detect the proton momentum and electron energy, aiming to determine $a$ to a precision of $\delta a / a \sim 10^{-3}$ and $b$ to a precision of $\delta b = 3 \cdot 10^{-3}$. The Nab silicon detectors are being characterized with protons prior to the execution of the Nab experiment. This talk will present preliminary measurements of the electronic response of detector pixels.
The Gamma-Ray Infrastructure For Fundamental Investigations of Nuclei (GRIFFIN) is a state-of-the-art spectrometer designed for $\beta$-decay studies of exotic nuclei produced at the TRIUMF-ISAC facility. It provides unique research opportunities in the fields of nuclear structure, nuclear astrophysics, and fundamental interactions.
The spectrometer is composed of an array of 16 Compton-suppressed clover-type high-purity germanium (HPGe) detectors as its core, complemented by a powerful set of ancillary detectors comprising plastic scintillators for beta tagging, LN2-cooled Si(Li) detectors for conversion-electron measurements, and an array of eight LaBr$_3$(Ce) scintillators for lifetime measurements [1].
Innovative results using the GRIFFIN spectrometer have recently been published, including precision measurements of the superallowed Fermi $\beta$ emitter $^{62}$Ga, astrophysically motivated investigations of the $^{132}$In, $^{129}$Cd, and $^{129}$In nuclei, and the nuclear structure of $^{80}$Ge. An overview of future scientific opportunities together with the recent experiments will be provided.
References
[1] A.B. Garnsworthy et al., Nucl. Instrum. Methods Phys. Res. A 918, 9–29 (2019).
At TRIUMF, Canada’s particle accelerator centre, the TIGRESS Integrated Plunger (TIP) and its configurable detector systems have been used for charged-particle tagging and light-ion identification in Doppler-shift lifetime measurements using gamma-ray spectroscopy with the TIGRESS array of HPGe detectors. An experiment using these devices to measure the lifetime of the first $2^+$ state of $^{40}$Ca has been performed by projecting an $^{36}$Ar beam onto a $^\text{nat}$C target. Analysis of the experimental gamma-ray spectra confirmed the direct population of the first $2^+$ state. Since the centre-of-mass energy in the entrance channel was below the Coulomb barrier, the reaction mechanism is believed to be the transfer of one alpha particle from the $^{12}$C target to the $^{36}$Ar beam nucleus, rather than fusion-evaporation from a compound $^{48}$Cr nucleus. The low centre-of-mass energy resulted in the direct population of the $2^+$ state of $^{40}$Ca, which eliminated feeding cascades, and therefore restricted the decay kinetics predominantly to first order. Currently, Monte-Carlo simulations using the Geant4 framework are being developed to locate the precise beam spot and to verify the reaction mechanism. Simulations with the correct parameters are expected to reproduce the experimental energy spectra and angular distributions of alpha particles while providing a Doppler Shift Attenuation Method measurement of the lifetime of the first $2^+$ state in $^{40}$Ca. In the future, the observed reaction mechanism can be applied to N=Z radioactive beams to provide direct access to low-lying excited states of nuclei with (N+2) and (Z+2), enabling transition rate studies at the N=Z line far from stability. Results of analysis of the experimental data and simulations will be presented and discussed.
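For reference, Doppler-shift lifetime techniques such as DSAM rely on the standard kinematic relation between the $\gamma$-ray energy $E_{0}$ emitted at rest and the energy observed in the laboratory from a nucleus recoiling with velocity $\beta=v/c$ at angle $\theta$ with respect to the detector,

$$ E_{\gamma} = E_{0}\,\frac{\sqrt{1-\beta^{2}}}{1-\beta\cos\theta}, $$

so following how the shifted line shape or the shifted/unshifted intensity ratio evolves as the recoil slows in the target (DSAM) or travels a known distance before stopping (RDM) gives access to the lifetime of the emitting state.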
Neutron rich Mg isotopes far from stability belong to the island of inversion, a region where the single particle energy state description of the shell model breaks down and the predicted configuration of nuclear states becomes inverted. Nuclei in this region also exhibit collective behaviour in which multiple particle transitions and interactions play a significant role in the nuclear wavefunctions. This is seen through intruder states of opposite parity in highly excited nuclei approaching the island of inversion, and can be observed through electromagnetic transition strength measurements.
In-beam reaction experiments performed at TRIUMF, Canada's particle accelerator centre, allow for precision measurements of nuclei far from stability. Using TIGRESS in conjunction with the TIGRESS Integrated Plunger for charged particle detection, electromagnetic transition rates can be measured to probe nuclear wavefunctions and perform tests of theoretical models using the well-understood electromagnetic interaction.
The approved experiment will use TIGRESS and the TIGRESS Integrated Plunger to measure the lifetime of the first excited state in $^{28}$Mg, which, due to its relatively long lifetime, could not be precisely measured in a previous Doppler Shift Attenuation Method (DSAM) measurement. To become sensitive to longer-lived states, this experiment will use the Recoil Distance Method (RDM) to exploit the Doppler shift of gamma rays emitted in flight; comparison of the measured spectra with Monte Carlo simulations performed using the Geant4 framework allows the determination of a best-fit lifetime. Additionally, this experiment will aim to further investigate an anomalously long-lived, highly excited, negative-parity intruder state seen in the DSAM experiment, the lifetime of which is accessible through RDM experiments. Work done in preparation for this experiment, as well as future experiments to probe lifetimes of excited states in the island-of-inversion nuclei $^{30-32}$Mg, will be discussed.
Accelerator science is both a discipline in its own right within modern physics and a source of highly powerful tools for discovery and innovation in many other fields of scientific research. Accelerators support different disciplines of subatomic physics, materials science, and the life sciences, as well as applications in research and industry. The accelerator science community performs R&D to improve operational facilities and to prepare technologies for new facilities, unveiling new opportunities and unprecedented performance parameters. Having delivered nearly five decades of discovery, TRIUMF has a vibrant reputation globally as Canada's particle accelerator laboratory and a hub for particle accelerator physics and technology with a wide network of international connections. In collaboration with CLS, the Fedoruk Centre and Canadian universities, we want to grow the impact of accelerator science in Canada on society and its potential to address key issues facing society today.
Superconducting radiofrequency (SRF) cavities have been used for more than 50 years to increase the energy of charged particles. In Canada there are two accelerator centres which use SRF technology: TRIUMF and the Canadian Light Source (CLS). The CLS was the first light source to use an SRF cavity in a storage ring from the beginning of operations in 2004. TRIUMF began developing SRF technology in 2000, which led in 2006 to the commissioning of the ISAC-II heavy-ion superconducting linac for the post-acceleration of radioactive beams. More recently, the ARIEL SRF electron accelerator was installed at TRIUMF as a second driver for radioactive isotope production. In this talk, I will first give an overview of Canada's SRF infrastructure and the underlying concepts. Then, I will briefly present how performance has evolved globally since the early days. Nowadays, state-of-the-art niobium cavities reach fundamental limitations in terms of accelerating gradient (energy gain per unit length) and power dissipation. Increasing performance requires specialized chemical and surface treatments which must be tailored to specific cavity types, and exploring materials beyond bulk niobium. I will highlight recent research from TRIUMF and UVic, including results from testing new surface treatments on unique multimode coaxial resonators and materials science investigations using beta-detected nuclear magnetic resonance (beta-NMR) and muon spin rotation and relaxation (muSR).
The CLS2 is a concept design for a next-generation synchrotron light source to keep Canada at the forefront of scientific research that is uniquely available to researchers with access to such national infrastructure. Canada's research priorities in health and medicine, agriculture and food security, and advanced materials and industrial research will all be enabled by national access to a next-generation synchrotron. The CLS has provided critical research for Canada and the world, including COVID-19 research that can only be performed on a synchrotron, and all OECD countries are in the process of commissioning, investing in or planning a next-generation light source such as CLS2. The new concept presented here for a next-generation synchrotron is a world-leading design with the highest brightness in its class. The Conceptual Design Report currently being written is aimed at the government, industry and scientific community who will be the users of the facility, and is intended to engage them in the creation of a Technical Design Report and a project proposal to realise a future light source for Canada beyond the CLS, which is approaching end-of-life.
The Canadian scientific community lost its local source of neutron beams for materials research on March 31, 2018, with the closure of the National Research Universal reactor at Chalk River Laboratories. Furthermore, the dwindling global supply of neutrons has made it increasingly difficult for local scientists to access neutron beams. There is a growing demand for the development of new-generation facilities in Canada to address the drought which the local neutron user community is experiencing. A compact accelerator-based neutron source (CANS) offers an intense, pulsed source of neutrons with a capital cost significantly lower than spallation sources. Research and development for a prototype CANS at the University of Windsor is currently underway. This facility will serve three major beam lines: a neutron science station, a boron neutron capture therapy (BNCT) station, and a PET isotope station. An outline of the proposed parameters of the facility and the design strategy for the target-moderator-reflector assemblies for the neutron science and BNCT stations will be presented.
The Electron Cyclotron Resonance Ion Source (ECRIS) is a versatile and reliable source to charge-breed rare isotopes at TRIUMF's Isotope Separator and Accelerator (ISAC) facility. Significant research has been done by different groups worldwide to improve the efficiency and performance of the ECRIS as a charge state booster (CSB). The most recent of these efforts is the implementation of two-frequency heating on the ECRIS. At the ISAC facility at TRIUMF, a 14.5 GHz PHOENIX booster, which has been in operation since 2010, was recently upgraded to accommodate the two-frequency heating system using a single waveguide. The efficiency for charge breeding into a single charge state, which depends on the rare isotope that is being charge-bred, has been determined to be between 1% and 6%, and will be improved by the activities started at TRIUMF. The CSB and the corresponding beam transport lines are being investigated in terms of beam properties such as beam emittance from the extraction system and after the beam separation. A systematic investigation of the effect of the two-frequency heating technique on the intensity, emittance, and efficiency of the extracted beam is presently being conducted.
Recently, Kitaev materials have attracted great interest due to their potential to realize a quantum spin liquid ground state which hosts gapless Majorana excitations. In this talk, after a review of the physics of Kitaev materials, I will discuss the effects of static magnetic and electric fields on Kitaev's honeycomb model. Using the electric polarization operator appropriate for Kitaev materials, I will derive the effective Hamiltonian for the emergent Majorana fermions to second order in both the electric and magnetic fields. While individually each perturbation does not qualitatively alter the Kitaev spin liquid, the magneto-electric cross-term induces a finite chemical potential at each Dirac node, generating a Majorana-Fermi surface. I will argue that this gapless phase is stable and exhibits typical metallic phenomenology, such as linear-in-temperature heat capacity and a finite, but non-quantized, thermal Hall response. Finally, I will discuss the potential for realization of this, and related, physics in Kitaev materials such as RuCl$_3$.
I will discuss muon spin rotation (muSR), nuclear magnetic resonance (NMR) and thermodynamic measurements on several Mo3O8-based cluster Mott insulators consisting of a 1/6th-filled breathing Kagome lattice. Depending on sometimes subtle structural differences between these various materials, a number of different magnetic phases can be stabilized, including possible quantum spin liquids: a long-range entangled state with emergent fractionalized excitations.
The current and upcoming astroparticle physics program will help us understand the nature of the universe, possibly through the discovery of the nature of dark matter. The effort towards greater sensitivity to the small signals induced by the very rare events that direct dark matter experiments aim to detect turns into a continuous fight against radioactive backgrounds. There are various methods to reduce or mitigate background sources. These mainly include the selection of very radio-pure materials to build the experiment and the detectors, detector technologies able to discriminate signal from background events, and the choice of deep underground sites to locate the experiments. In this talk I will review the challenges for direct dark matter search experiments, along with current R&D efforts in detector technologies.
The NEWS-G collaboration aims to detect sub-GeV WIMPs using Spherical Proportional Counters (SPCs). During the past 6 years, the collaboration has developed a new 140 cm diameter detector. This detector - larger than the previous generation - is made from materials stringently selected for their radio-purity and is enclosed in a spherical shielding made of different layers of polyethylene and low-background lead. Finally, the inner surface of the detector was plated with half a millimeter of pure copper to reduce Pb-210-induced backgrounds. A new calibration method using a UV laser was also used, in addition to Ar-37, neutron, and gamma sources. The new detector performed a first measurement campaign at the Laboratoire Souterrain de Modane in France in 2019 before being moved and installed at SNOLAB. Here we present a summary of the work done on the data analysis of the first campaign. This presentation will be followed by a status update on the current installation and the first data taking of the experiment at SNOLAB.
DEAP-3600 is a direct dark matter search experiment located 2 km underground at SNOLAB. The experiment is located at this depth to shield the sensitive detector from cosmic rays. The experiment uses a liquid argon target to search for WIMP dark matter candidates. Liquid argon is chosen as a target material for three reasons: it has a good scintillation light yield, it is transparent to its own scintillation light, and the nature of its scintillation enables the significant reduction of some backgrounds via pulse shape discrimination. Approximately 3300 kg of liquid argon is used within the DEAP-3600 experiment. The liquid argon is contained within a hollow acrylic sphere with an inner radius of approximately 85 cm. The acrylic sphere is surrounded by 255 photomultiplier tubes (PMTs) to detect the scintillation light. A TPB wavelength shifter applied to the inside face of the acrylic vessel converts the 128 nm light produced by scintillating argon to 420 nm, where the PMTs are more sensitive.
This talk will give an update on the current status of the experiment and an overview of some of the recent analyses performed. Several hardware upgrades are scheduled for 2021; these upgrades will be described, along with future plans for the upgraded detector.
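As a hedged illustration of the pulse-shape discrimination mentioned above (the prompt-fraction discriminant commonly used for liquid argon; the window lengths and photon times below are placeholders, not DEAP-3600 analysis values):

```python
import numpy as np

# Illustrative sketch (not DEAP-3600 analysis code): the standard liquid-argon
# pulse-shape discriminant is the fraction of detected light arriving in a short
# "prompt" window, often called Fprompt. Nuclear recoils produce more prompt light
# than electron recoils, so a cut on Fprompt suppresses electron-recoil backgrounds.

def fprompt(photon_times_ns, prompt_window_ns=60.0, full_window_ns=10000.0, t0=0.0):
    t = np.asarray(photon_times_ns) - t0
    in_full = (t >= 0) & (t < full_window_ns)
    in_prompt = (t >= 0) & (t < prompt_window_ns)
    n_full = np.count_nonzero(in_full)
    return np.count_nonzero(in_prompt) / n_full if n_full else np.nan

# toy event dominated by slow (triplet, ~1.5 us) light, as expected for an electron recoil
rng = np.random.default_rng(0)
times = np.concatenate([rng.exponential(6.0, 50),       # fast singlet component
                        rng.exponential(1500.0, 200)])   # slow triplet component
print(f"Fprompt = {fprompt(times):.2f}")
```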
If one wishes to understand and successfully simulate the radiation damage of biological tissue, one first needs to understand the fundamental ionization processes of molecules in the gas or vapour phase. The latter problem has been addressed in a number of studies in recent years, but experimental data have remained scarce, and accurate cross-section predictions based on first-principles quantum-mechanical calculations remain challenging due to the complexity of the molecules of interest. There is thus a role to be played by simplified modelling - provided the models used can be shown to work for simpler systems for which reliable theoretical and experimental data are available for comparison.
We have recently developed one such model. It is based on the independent atom model (IAM) according to which a cross section, e.g., for electron removal from a molecule can be obtained from atomic cross sections for the same process. Instead of simply adding up the cross sections for all the atoms that make up the molecule we take geometric overlap into account, which arises when the atomic cross sections are pictured as circular disks surrounding the nuclei in the molecule. The overlapping areas are calculated using a pixel counting method (PCM) and, accordingly, we label our model IAM-PCM.
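A minimal sketch of the pixel-counting idea (illustrative only; the actual IAM-PCM sets each disk radius from the corresponding atomic cross section, $R_i=\sqrt{\sigma_i/\pi}$, and involves further details not captured here):

```python
import numpy as np

# Illustrative pixel-counting estimate of a geometrically screened cross section,
# in the spirit of the IAM-PCM (not the authors' implementation): each atom i
# contributes a disk of area sigma_i = pi*R_i^2 centred on its projected nuclear
# position; the molecular cross section is the area of the union of the disks,
# estimated by counting pixels covered by at least one disk.

def union_area(centres, radii, pixel=0.01):
    centres, radii = np.asarray(centres, float), np.asarray(radii, float)
    lo = (centres - radii[:, None]).min(axis=0)
    hi = (centres + radii[:, None]).max(axis=0)
    xs = np.arange(lo[0], hi[0], pixel)
    ys = np.arange(lo[1], hi[1], pixel)
    X, Y = np.meshgrid(xs, ys)
    covered = np.zeros_like(X, dtype=bool)
    for (cx, cy), r in zip(centres, radii):
        covered |= (X - cx)**2 + (Y - cy)**2 <= r**2
    return covered.sum() * pixel**2

# two overlapping "atomic" disks: the union area is less than the simple sum pi*(r1^2 + r2^2)
print(union_area([(0.0, 0.0), (1.0, 0.0)], [1.0, 0.8]))
```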
The IAM-PCM has been applied to a number of ion-impact collision problems with target systems ranging from relatively simple molecules, such as water and methane, to complex biomolecules, such as the RNA and DNA nucleobases [1], and also including atomic and molecular clusters [2].
In this talk, I will explain the model, present a selection of recent results and discuss what can be learned from them.
[1] H. J. Lüdde et al., J. Phys. B 52, 195203 (2019); Phys. Rev. A 101, 062709 (2020); Atoms 8, 59 (2020).
[2] H. J. Lüdde et al., Eur. Phys. J. B 91, 99 (2018).
An exciting frontier in quantum information science is the creation and manipulation of quantum systems that are built and controlled quanta by quanta. In this context, there is active research worldwide to achieve strong and coherent coupling between light and matter as the building block of complex quantum systems. Despite the range of physical behaviours accessible by these QED systems, the low-energy description is often masked by small fluctuations around the mean fields. In contrast, we describe our theory/experimental program towards novel forms of light-matter quantum systems, where highly correlated Rydberg material is strongly coupled to cavity fields. We call this new domain of strong coupling quantum optics, "many-body quantum electrodynamics." I describe our laboratory efforts towards the exploration of new physics for light-matter interaction, where locally gauged quantum materials are entirely driven by quantum optical fluctuation. Genuinely surprising phenomena may arise from the universal features of non-perturbative physics of many-body QED.
The study of plasmonics has the potential to reshape the physics of light-matter interactions in metallic nanohybrids and their applications to nanotechnology. Metallic nanohybrids are made of metallic nanoparticles (MNPs) and quantum emitters (QEs) such as quantum dots. Recently, there has been considerable interest in the light-matter interaction in nanoscale plasmonic nanohybrids. When external light falls on the QEs, electrons in the QEs are excited from the ground state to the excited states and electron-hole pairs are created in the QEs. Similarly, when the external light (photons) falls on the MNPs it modifies their plasmonic properties. There are free electrons on the surface of the MNPs, and these free electrons oscillate as charged waves on the surface. The quantized particles of the charged wave are called plasmons. When external light photons fall on the surface of the MNPs, there is an interaction between the photons and plasmons. This interaction produces a new type of quantized quasi-particle called the surface plasmon polariton (SPP). It is interesting to note that exciton energies and SPP energies can be modified by manipulating the size and shape of the QEs, and also by applying external fields such as control lasers, stress-strain fields, and magnetic fields.

Here we study the light-matter interaction in plasmonic nanohybrids made of an ensemble of metallic nanoparticles and quantum emitters. The study of linear and nonlinear plasmonics has the potential to reshape the physics of light-matter interactions and their applications to nanotechnology and nanomedicine. Further, we include the effect of the dipole-dipole interaction (DDI) on the light-matter interaction in plasmonic nanohybrids. We find that the SPP field also induces dipoles in the QEs and MNPs, which interact with each other via the anomalous dipole-dipole interaction. It is shown that the anomalous DDI is many times stronger than the classical DDI. Nonlinear plasmonic effects such as two-photon spectroscopy and the Kerr nonlinearity are also explored. Finally, we find that these nanohybrids can be used to fabricate nanosensors and nanoswitches for applications in nanotechnology and nanomedicine.
Spontaneous two-photon decay rates for the $1s2s\;^1S_0$ -- $1s^2\;^1S_0$ transition in helium and its isoelectronic sequence up to $Z$ = 10 are calculated, including the effects of finite nuclear mass. We use correlated variational wave functions in Hylleraas coordinates and pseudostate summations for intermediate states. The accuracy of previous work is improved by several orders of magnitude. Length and velocity gauge calculations agree to eight or more figures, demonstrating that the theoretical formulation correctly takes into account the three effects of (1) mass scaling, (2) mass polarization, and (3) radiation due to motion of the nucleus in the center-of-mass frame [1].
Algebraic relationships are derived and tested relating the expansion coefficients in powers of $\mu/M$, where $\mu/M$ is the ratio of the electron reduced mass to the nuclear mass. Astrophysical applications of two-photon transitions to the continuum emission around 400 $\mu$m in planetary nebulae will be briefly discussed.
[1] A. T. Bondy, D. C. Morton, and G.W.F. Drake, Phys.\ Rev.\ A {\bf 102}, 052807 (2020).
$^*$Research supported by NSERC and by SHARCNET.
Resonant laser ionization spectroscopy uses multiple lasers to excite atoms step-wise and is therefore a powerful tool for the study of high-energy atomic structure, such as Rydberg states and autoionizing states. At the laser ion source test stand (LIS-stand) at TRIUMF, resonant laser ionization spectroscopy is used to study complex atoms. The spectroscopy results not only provide efficient laser ionization schemes for on-line laser ion source beam delivery of these elements, but also information on Rydberg and autoionizing states. This also allows the ionization potentials of these elements to be refined and information on electron correlations to be extracted. An overview of the off-line resonant laser ionization spectroscopy at TRIUMF will be presented.
The $^{137}\mathrm{Ba}^+$ ion is a promising candidate for high-fidelity quantum computing. We generate barium atoms using laser ablation of a $\mathrm{BaCl}_2$ target. The flux of neutral atoms generated by ablation is then ionized near our ion trap-center, giving us trapped ions which we can then use for quantum computing. Laser ablation loading can be used to trap ions more quickly and with less added heat load than other common loading methods. Because of the relatively low abundance of the isotope of interest, a two-step photoionization technique is used, which gives us the ability to selectively load a desired isotope. In this talk, I discuss characterization of the ablation process for our $\mathrm{BaCl}_2$ targets, including typical fluences needed, preparation and lifetimes of ablation spots, and plume temperature estimates. We demonstrate loading of single $^{137}\mathrm{Ba}^+$ ions with high selectivity compared to its 11% natural abundance.
Hyper-Kamiokande is the next generation water-Cherenkov neutrino experiment, building on the success of its predecessor Super-Kamiokande. To match the increased precision and reduced statistical errors of the new detectors, improvements to event reconstruction and event selection are required to suppress backgrounds and reduce systematic errors. Machine learning has the potential to provide these enhancements to enable the precision measurements that Hyper-Kamiokande is aiming to perform. This talk provides an overview of the areas where machine learning is being explored for water Cherenkov detectors. Results using various network architectures are presented, along with comparisons to traditional methods and discussion of the challenges and future plans for applying machine learning techniques.
The neutrino, a fundamental particle, offers the potential to image parts of the universe never before seen and can provide an early warning for cosmic events. With their ability to carry information across the universe unperturbed, neutrinos offer a clear image of the cosmos and can provide insight into its nature with relative ease. Learning from successful neutrino telescopes such as IceCube, the Pacific Ocean Neutrino Explorer (P-ONE) will be built in the Cascadia Basin in the Pacific Ocean, supported by an international collaboration. Located 2660 meters below sea level, P-ONE will consist of 70 strings, each equipped with at least 20 sensitive photodetectors and 2 calibrators, in an infrastructure provided by Ocean Networks Canada. A key step in the data analysis pipeline is the reconstruction of the path of particles as they pass through the detector. Using simulated data, I will present my work on reconstructing muon tracks in this proposed detector through a likelihood framework.
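A minimal sketch of what such a likelihood-based track fit can look like (a toy timing model with made-up module positions and a deliberately simplified treatment of light propagation; this is not P-ONE reconstruction code):

```python
import numpy as np
from scipy.optimize import minimize

# Toy track fit: a straight muon track is parametrised by a reference point p,
# a direction d (from angles theta, phi) and a reference time t0. Each optical
# module at position x records a hit time t. The predicted time is the muon
# flight time to the point of closest approach plus the light travel time over
# the closest-approach distance; Gaussian time residuals define the likelihood.

C = 0.2998          # m/ns, vacuum speed of light
N_WATER = 1.35      # effective refractive index (assumed placeholder)

def predicted_time(params, x):
    px, py, pz, theta, phi, t0 = params
    p = np.array([px, py, pz])
    d = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    r = x - p
    s = r @ d                        # distance along the track to closest approach
    rho = np.linalg.norm(r - s * d)  # perpendicular distance to the module
    return t0 + s / C + rho * N_WATER / C

def neg_log_likelihood(params, hits_x, hits_t, sigma_t=3.0):
    res = hits_t - np.array([predicted_time(params, x) for x in hits_x])
    return 0.5 * np.sum((res / sigma_t) ** 2)

# generate a few toy hits from a known track, then fit them back
true = np.array([0.0, 0.0, 0.0, 0.6, 1.0, 100.0])
rng = np.random.default_rng(1)
hits_x = rng.uniform(-50, 50, size=(20, 3))
hits_t = np.array([predicted_time(true, x) for x in hits_x]) + rng.normal(0, 3.0, 20)
fit = minimize(neg_log_likelihood, true + rng.normal(0, 0.1, 6), args=(hits_x, hits_t))
print(fit.x)  # recovered track parameters
```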
A crucial task of the ATLAS calorimeter is the energy measurement of detected particles. In the liquid argon (LAr) calorimeter subdetector of ATLAS, electromagnetically and hadronically interacting particles are detected through LAr ionization. Special electronics convert the drifting electrons into a measurable current. The analytical technique presently used to extract energy from the measured current is known as optimal filtering. While this technique is sufficient for past and Run 3 pile-up conditions at the LHC, it has been shown to suffer some degradation of performance with the increased luminosity expected at the High-Luminosity LHC. This presentation will explore machine learning techniques as a substitute for optimal filtering, examining the strengths, weaknesses, and limitations of both energy reconstruction methods.
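For context, the optimal filtering reconstruction referred to above forms the signal amplitude as a fixed linear combination of the digitized samples $s_i$ (with pedestal $p$),

$$ A = \sum_{i} a_i \,(s_i - p), $$

where the coefficients $a_i$ are derived from the known pulse shape and the noise autocorrelation so as to minimize the variance due to electronic and pile-up noise, and the energy is proportional to $A$. (This is the generic textbook form of the method; details of the ATLAS implementation differ.) The machine learning approaches discussed here aim to replace this fixed linear filter with a learned, generally non-linear, mapping from samples to energy.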
The rare $K^+ \to \pi^+ \nu \bar{\nu}$ decay is an ideal probe for beyond the Standard Model (BSM) physics contributions to the flavor sector. It is heavily suppressed in the SM and its branching ratio is predicted, with remarkable precision for a second order weak process involving hadrons, to be $\left(8.4 \pm 1.0\right) \times 10^{-11}$.
The NA62 experiment at the CERN SPS is designed to study precisely the $K^+ \to \pi^+ \nu \bar{\nu}$ branching ratio. To reach the required signal sensitivity, the overall muon rejection factor must be of the order of $10^{7}$. Therefore, a redundant particle identification (PID) system composed of a Ring Imaging Cherenkov (RICH) detector, a set of three independent calorimeters, and a scintillator-based veto detector is employed.
Machine learning (ML) algorithms were developed to extract PID information directly from the calorimeter hit information, a departure from the previous approach where reconstructed quantities were used. High purity samples of muon, pion and electron single charged track decays were extracted from the NA62 data for the training and validation of the ML methods.
An architecture based on the ResNet-18 network achieved the best $\mu^+$/$\pi^+$ separation, with a muon rejection factor of the order of $10^{5}$ while keeping the pion acceptance around 90%, not including the RICH. The newly developed tool will be incorporated in the analysis of the data collected during the 2021 NA62 run.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) is expected to begin its fourth observing run in 2022, with a large projected improvement in detector sensitivity. This sensitivity boost increases the gravitational wave (GW) detection rate, but also increases the likelihood of GW events overlapping with transient, non-Gaussian detector noise, or glitches. This project aims to quantify how GW parameter estimation is affected by simultaneous glitch noise, particularly with regards to salvaging inspiral masses and sky location for electromagnetic follow-up of GW candidates.
We study the response of a static Unruh-DeWitt detector outside an exotic compact object (ECO) with a variety of (partially) reflective boundary conditions in 3+1 dimensions. The horizonless ECO, whose boundary is extremely close to the would-be event horizon, acts as a black hole mimicker. We find that the response rate is notably distinct from the black hole case, even when the ECO boundary is perfectly absorbing. For a (partially) reflective ECO boundary, we find resonance structures in the response rate that depend on the different locations of the ECO boundary and those of the detector. We provide a detailed analysis in connection with the ECO's vacuum mode structure and transfer function.
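For reference, the detector observable analysed in this kind of study is the standard Unruh-DeWitt response function (written here in a generic form; conventions and switching functions vary), built from the Wightman function $W$ of the field along the detector trajectory $x(\tau)$:

$$ \mathcal{F}(\Omega) \;=\; \int d\tau \int d\tau'\, e^{-i\Omega(\tau-\tau')}\, W\big(x(\tau),x(\tau')\big), $$

where $\Omega$ is the detector's energy gap. The ECO boundary condition enters through the field modes that determine $W$, which is why the response rate is sensitive to the reflectivity and location of the boundary.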
We derive Loop Quantum Gravity corrections to the Raychaudhuri equation in the interior of a Schwarzschild black hole and near the classical singularity for several schemes of quantization. We show that the resulting effective equation implies defocusing of geodesics due to the appearance of repulsive terms. This prevents the formation of conjugate points, renders the singularity theorems inapplicable, and leads to the resolution of the singularity for this spacetime.
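For orientation, the classical Raychaudhuri equation for a congruence of timelike geodesics with tangent $u^a$, expansion $\theta$, shear $\sigma_{ab}$ and twist $\omega_{ab}$ reads

$$ \frac{d\theta}{d\lambda} \;=\; -\frac{1}{3}\theta^{2} - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^{a}u^{b}; $$

focusing, and hence the formation of the conjugate points assumed in the singularity theorems, follows when the right-hand side is non-positive. The effective LQG corrections discussed in the talk add repulsive terms to this equation (their precise form is derived there, not reproduced here), which is what allows defocusing.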
A relativistic theory of gravity like general relativity produces phenomena differing fundamentally from Newton's theory. An example, analogous to electromagnetic induction, is gravitomagnetism, or the dragging of inertial frames by mass-energy currents. These effects have recently been confirmed by classical observations. Here we show, for the first time, that they can be observed by a quantum detector. We study the response function of Unruh-DeWitt detectors placed in a slowly rotating shell. We show that the response function picks up the presence of rotation even though the spacetime inside the shell is flat and the detector is locally inertial. The detector can distinguish between the static situation, when the shell is non-rotating, and the stationary case, when the shell rotates and the dragging of inertial frames, i.e., gravitomagnetic effects, arises. Moreover, it can do so when the detector is switched on for a finite time interval within which a light signal cannot travel to the shell and back to convey the presence of rotation.
Continuous waves from non-axisymmetric neutron stars are orders of magnitude weaker than transient events from black hole and neutron star collisions. Unlike a transient event, a continuous wave source will allow repeated observations. We will present results of all-sky searches for neutron stars and other sources carried out with the Falcon pipeline and discuss the interplay between detector artifacts and outliers produced by our searches.
Our current understanding of the core-collapse supernova explosion mechanism is incomplete, with multiple viable models for how the initial shock wave might be energized enough to lead to a successful explosion. Detection of a gravitational wave (GW) signal emitted in the initial few seconds after core-collapse would provide unique and crucial insight into this process. With the Advanced LIGO and Advanced Virgo gravitational wave detectors expected to soon approach their design sensitivity, we could potentially detect this GW emission from most core-collapse supernovae within our galaxy. But once identified, how well can we recover the signal from these detectors? Here we use the BayesWave algorithm to maximize our ability to accurately recover GW signals from core-collapse supernovae. Using the expected design sensitivity noise curves of the advanced global detector network, we inject and recover supernova waveforms modeled with different explosion mechanisms into simulated noise, tuning the algorithm to extract as much of the signal as possible. We report the preliminary results of this work, including how the reconstruction is affected by the model and what we can hope to learn from the next galactic supernova.
In this talk, I will consider the stability of asymptotically anti-de Sitter gravitational solitons. These are globally stationary, asymptotically (globally) AdS spacetimes with positive energy but without horizons. I will introduce my ongoing project investigating solutions of the linear wave equation in this class of backgrounds. I will provide analytical expressions for the behavior of the scalar field near the soliton bubble and at spatial infinity. The special BPS (supersymmetric) case will then be examined as an example of a solution where stable trapping occurs. This project is joint work with Dr. Hari K. Kunduri and Dr. Robie A. Hennigar.
The mass and spin properties of black hole binaries inferred from their gravitational-wave signatures reveal important clues about how these binaries form. For instance, stellar-mass black holes that evolved together from the same binary star will have spins that are preferentially aligned with their orbital angular momentum. Alternatively, if the black holes formed separately from each other and later became gravitationally bound, then there is no such preference for having aligned spins. Furthermore, it is known that the presence of misaligned spins induces a general relativistic precession of the orbital plane, imprinting unique structure onto the gravitational-wave signal. The fidelity with which gravitational-wave detectors can measure off-axis spins, or equivalently, precession, will therefore have important implications for the use of gravitational waves to study binary black hole formation channels. I will summarize a new study that examines how well the A+/AdV+ detector network will measure off-axis spin components, and report preliminary results comparing spin resolution between the fourth and fifth LIGO-Virgo observing runs using simulated detector noise and multiple sets of simulated signals distributed over the mass-spin parameter space.
Perturbation theory for gravitating quantum systems tends to fail at very late times (a type of perturbative breakdown known as secular growth). We argue that gravity is best treated as a medium/environment in such situations, where reliable late-time predictions can be made using tools borrowed from quantum optics. To show how this works, we study the explicit example of a qubit hovering just outside the event horizon of a Schwarzschild black hole (coupled to a real scalar field) and reliably extract the late-time behaviour for the qubit state. At very late times, the so-called Unruh-DeWitt detector is shown to asymptote to a thermal state at the Hawking temperature.
Many research efforts in physics rely on design, implementation, and execution of numerical studies. These studies are often the guiding torch of further experimental investigations, but they are rarely carried out with software development principles in mind. As a result, efficiency and verification measures are often not incorporated in the R&D process and this impairs the quality and confidence of technical reports generated based on them. While software development workflow is second nature to those trained in computer science and engineering disciplines, systematic training on it has not been a conventional component of physics programs.
In this talk, I share my experience in designing and teaching a new course on applications of machine learning in physical sciences. I introduce the course and its learning objectives, format, and outline. I then discuss my findings on the mechanisms that can be placed in course projects to better equip young researchers with team-oriented and effective software development practices. I will finally review some of the student feedback from the first offering of this course in 2020 and the resulting improvements I made to the 2021 offering.
Neutrons were applied to the study of medicine very quickly: a mere six years after their discovery, neutrons were first used for cancer therapy. Interest in neutron radiotherapy waxed and waned over the following decades. The last use of neutron-only therapy, treating cancer of the salivary glands, ceased several years ago.
There is, however, still interest in boron neutron capture therapy (BNCT) for certain brain tumours that have no good current treatment options. Boron neutron capture synovectomy (BNCS) for horribly disabling arthritis has also been explored. The idea behind BNCT is that if tumours can be loaded with boron, then reactions of that boron with neutrons result in heavy charged particles recoiling in the cell. This would be highly effective in killing cells. Ideally, there can be a large difference in the radiation dose delivered to healthy tissue and tumour because only the tumour would contain boron. Healthy tissue would be spared. However, the chemistry to be able to load enough boron into the tumour to make the treatment successful has been a challenge.
Of course, neutron activation has created a variety of radioisotopes that are used for both imaging and treatment of disease. 99mTc is still commonly used in imaging, and for decades was produced in Chalk River, Canada. 125I is used to treat prostate cancers and at any point in time a large percentage of the world’s supply of this agent is produced at McMaster University in Hamilton, Ontario.
Finally, in vivo neutron activation analysis (IVNAA) has been used since the 1960s to study levels of both essential and toxic elements in the body. IVNAA helped physicians understand the challenges of parenteral nutrition therapy for cancer patients and was the first technique to show bone loss in anorexia nervosa patients. IVNAA studies also led to legislative changes to permissible air levels of toxins such as cadmium. Canada is still at the forefront of this area of research: studies measuring people have taken place over the last decade at McMaster. The first in vivo studies of fluorine exposure showed that exposure in Ontario is low, and that the single biggest factor in exposure is tea drinking. Recent studies have examined aluminum levels in Northern Ontario miners exposed to McIntyre Powder. Early data show that inhaling the powder resulted in uptake of aluminum into the body and that the aluminum has persisted in the bones of miners for years or decades.
Isotopes of heavy and superheavy nuclei are typically produced in fusion-evaporation reactions. In these types of reactions, neighboring isotopes are often produced simultaneously, which makes it incredibly difficult to assign experimentally observed decay properties to specific isotopes. Presently, such assignments rely heavily on the use of excitation functions, cross-bombardment reactions, and the assumption that charged-particle emission does not occur. However, without direct experimental confirmation that a specific isotope or mixture of isotopes had been produced in a given reaction, it is possible that misassignments have been made. At Lawrence Berkeley National Laboratory, the recent addition of FIONA (For the Identification of Nuclide A) to the Berkeley Gas-filled Separator now allows a produced isotope or mixture of isotopes to be directly identified by mass number. Recent measurements were performed on the lightest mendelevium isotopes (A = 244-247) to confirm that previously reported decay properties had been correctly assigned to the appropriate isotope. These studies included the unambiguous identification of the new isotope 244Md. These and other recent results will be discussed; they highlight the necessity of mass-number identification for isotopes produced in fusion-evaporation reactions.
β-delayed neutron emission probabilities of exotic nuclei, along with nuclear masses and β-decay half-lives, are of key importance in the stellar nucleosynthesis of heavy elements via the rapid neutron-capture process (r-process). β-delayed neutron emission influences the final r-process abundance curve through the redistribution of material as neutron-rich nuclei decay towards stability, and by acting as a source of late-time neutrons which can be recaptured during the freeze-out phase. Obtaining a more complete description of this process is vital to developing a deeper understanding of observed elemental abundances.
New generations of radioactive beam facilities, with state-of-the-art detector systems, will reach previously inaccessible neutron-rich nuclei for which delayed neutron-emission becomes the dominant decay process. In parallel, cutting edge nuclear models are constantly advancing and the need for accurate nuclear data only grows.
Traditional measurement techniques have relied on the correlated detection of the parent ion and its subsequent decay products, including the neutron. Because they carry no charge, neutrons are intrinsically difficult to detect. Low detection efficiency imposes a severe loss of statistics in all experiments, thus requiring higher beam rates, larger detectors, or longer beam times, and each of these solutions presents difficulties of its own. However, storage rings can provide a complementary technique that allows the measurement of key nuclear properties without requiring the detection of the emitted neutron. The ILIMA program at FAIR will use heavy-ion detectors, such as the CsISiPHOS detector [1], installed in the ESR and CR to achieve this goal, among others.
Here, we investigate this technique and demonstrate how heavy-ion detection methods can provide complementary means to study β-delayed neutron emission.
[1] M. A. Najafi et al., Nucl. Instrum. Methods Phys. Res. A 836, 1-6 (2016).
Resonant laser ionization of atoms provides an efficient and selective means of ion source operation. It uses stepwise resonant excitation of an atom's valence electron into energetically high-lying Rydberg states or auto-ionizing levels. A resonant ionization laser ion source (RILIS) is particularly suited to providing beams of rare isotopes at radioactive isotope facilities like ISAC at TRIUMF.
The operational principle, current developments, application and science with TRIUMF's RILIS will be discussed.
The TRIUMF Ultra-Cold Advanced Neutron (TUCAN) collaboration is currently building a next-generation ultra-cold neutron source, with a neutron electric dipole moment (nEDM) measurement as the flagship experiment. The nEDM measurement is based on the Ramsey method of separated oscillating fields to measure the precession frequency of the neutron in combined magnetic and electric fields. The Ramsey method involves the pulsed application of an oscillating magnetic field to produce a π/2 flip of the neutron spins. This talk presents studies of the magnetic field pulse in the nEDM experiment using finite element simulations, with a focus on the suppression and inhomogeneity of the field caused by eddy currents. Further Monte Carlo simulations of the neutron spins are used to optimize the timing of the pulse and simulate the expected behaviour of the neutrons in the full Ramsey measurement.
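As a rough illustration of the Ramsey sequence described above, the following sketch (an idealized spin in the frame rotating at the drive frequency, with illustrative field values rather than TUCAN parameters) shows how the final polarization traces out Ramsey fringes as a function of the detuning between the oscillating field and the spin precession frequency:

import numpy as np

# Idealized Ramsey sequence (pi/2 pulse -- free precession -- pi/2 pulse) evaluated
# in the frame rotating at the drive frequency. Field values and times are
# illustrative, not TUCAN parameters; this is not the collaboration's simulation code.

def rotate(S, axis, angle):
    """Rodrigues rotation of the spin vector S about a (not necessarily unit) axis."""
    axis = axis / np.linalg.norm(axis)
    return (S * np.cos(angle)
            + np.cross(axis, S) * np.sin(angle)
            + axis * np.dot(axis, S) * (1.0 - np.cos(angle)))

def ramsey_sz(delta, w1=2.0 * np.pi * 1.0, T_free=100.0):
    """Final Sz after the Ramsey sequence, for detuning delta [rad/s]."""
    t_pulse = 0.5 * np.pi / w1                      # ideal pi/2 pulse duration
    pulse_axis = np.array([w1, 0.0, delta])         # effective field during the pulse
    S = np.array([0.0, 0.0, 1.0])                   # spin initially along the holding field
    S = rotate(S, pulse_axis, np.hypot(w1, delta) * t_pulse)   # first pi/2 pulse
    S = rotate(S, np.array([0.0, 0.0, 1.0]), delta * T_free)   # free precession
    S = rotate(S, pulse_axis, np.hypot(w1, delta) * t_pulse)   # second pi/2 pulse
    return S[2]

# Ramsey fringes: the central fringe has width ~1/T_free, which sets the frequency
# (and hence EDM) sensitivity of the measurement.
detunings = np.linspace(-0.3, 0.3, 601)
fringes = [ramsey_sz(d) for d in detunings]
print("on-resonance polarization:", ramsey_sz(0.0))   # ~ -1, i.e. fully flipped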
The study of neutron rich nuclei far from the valley of stability has become an increasingly important field of research within nuclear physics. One of the decay mechanisms that opens when the decay Q value becomes sufficiently large is that of beta-delayed neutron emission. This decay mode is important when studying the astrophysical r-process as it can have a direct effect on theoretical solar abundance calculations. In addition, by extracting data on the excited states of the nucleus via the neutron kinetic energies, structural information of nuclei can be obtained through beta-delayed neutron spectroscopy. The utilization of large-scale neutron detector arrays in future experiments is therefore imperative in order to study these beta-delayed neutron emitters.
The deuterated scintillator array, DESCANT, was designed to be coupled with the large-scale gamma-ray spectrometers GRIFFIN and TIGRESS at the TRIUMF ISAC-I and ISAC-II facilities, respectively. However, DESCANT was originally intended to be a neutron-tagging array for fusion-evaporation reactions, and a precise measurement of the neutron energy was not considered a priority over neutron detection efficiency. This limitation could be overcome through the use of thin plastic scintillators, possibly positioned in front of the DESCANT detectors, allowing for a more in-depth analysis of beta-delayed neutron emitters at the GRIFFIN decay station. Plastic scintillators are ideal for this enhancement due to their timing properties, customizability, and overall cost effectiveness. The energy of the neutrons can then be determined via the time-of-flight technique, significantly improving on the neutron-energy precision achievable with the existing setup. To investigate the viability of this augmentation, GEANT4 will be used to simulate and optimize the experimental design, the progress of which will be discussed.
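For reference, the time-of-flight conversion mentioned above amounts to turning a measured flight time over a known path length into a neutron kinetic energy. A minimal sketch with an illustrative 1 m flight path and 1 ns timing uncertainty (not the actual GRIFFIN/DESCANT geometry):

import numpy as np

# Minimal time-of-flight conversion sketch. The 1 m flight path and 1 ns timing
# uncertainty are illustrative values only, not the GRIFFIN/DESCANT geometry.
M_N = 939.565e6        # neutron rest-mass energy [eV]
C = 299792458.0        # speed of light [m/s]

def neutron_energy(t_ns, path_m=1.0):
    """Relativistic kinetic energy [eV] for a flight time t_ns [ns] over path_m [m]."""
    beta = path_m / (t_ns * 1.0e-9) / C
    return M_N * (1.0 / np.sqrt(1.0 - beta**2) - 1.0)

def energy_spread(t_ns, dt_ns=1.0, path_m=1.0):
    """Energy uncertainty from a timing uncertainty dt_ns, evaluated numerically."""
    return abs(neutron_energy(t_ns - 0.5 * dt_ns, path_m)
               - neutron_energy(t_ns + 0.5 * dt_ns, path_m))

# Example: a ~1 MeV neutron takes roughly 72 ns to cross 1 m
t = 72.0
print(f"E = {neutron_energy(t) / 1e6:.3f} MeV, dE = {energy_spread(t) / 1e6:.3f} MeV")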
The understanding of abundances of elements heavier than iron, originating from $r$-process nucleosynthesis in neutron star mergers and core-collapse supernovae, requires experimental information on the neutron-rich nuclei involved, from close to the neutron dripline to the line of stability. $\beta$-delayed neutron emission plays an important role in this process, shifting the decay chain to lower masses and increasing the neutron density in the environment. The $\beta$-delayed neutron branching ratio and the respective $\beta$-decay half-life are also important for improving theoretical models and for achieving more realistic models of the decay heat in fission reactors.
In order to provide new experimental data on half-lives and $\beta$-delayed neutron branching ratios, the BRIKEN campaign, based at the RIB facility of RIKEN, Japan, has since 2016 allowed the measurement of hundreds of nuclei with unknown or incomplete decay information. The use of a fragmentation facility such as RIKEN makes it possible to reach the most neutron-rich exotic nuclei, which were not accessible before. The Advanced Implantation Detector Array (AIDA), based on silicon DSSDs, was used to register implants and $\beta$-decays with high position resolution. Surrounding AIDA, a 4$\pi$ array of $^{3}$He neutron counters embedded in a polyethylene moderator matrix, together with two HPGe clovers inserted in this matrix, registered the neutrons and $\gamma$ rays emitted after the decays of nuclei in AIDA.
This contribution will report on the results of decay studies around the doubly-magic $^{78}$Ni region. The focus of our data analysis is the deformed neutron-rich Se and Br isotopes around N=60.
nEXO is a next-generation time projection chamber searching for neutrinoless double-beta decay in 5 tonnes of liquid xenon enriched in the isotope Xe-136. Interactions within LXe produce anti-correlated scintillation and ionization signals, which will be used to reconstruct the energy, position, and multiplicity of each event. Silicon photomultipliers (SiPMs) have been identified as the devices to detect the vacuum ultraviolet scintillation light for nEXO. SiPMs are silicon devices of ~1 cm^2 with single-photon sensitivity and a quantum efficiency of ~15% at 175 nm. A baseline characterization of the many SiPMs that will be distributed among the nEXO collaboration is necessary: the detector will employ tiles of SiPMs, organized into staves, yielding a photo-coverage area of ~4.5 m^2. The development of integrated SiPM tiles is advanced within the collaboration, requiring precise testing in conditions similar to their deployment. I will present the status of and plans for SiPM mass testing using an environmental test stand capable of measuring ~150 cm^2 of SiPMs at 168 K with quick turnaround between tile deployments, facilitating both a high rate of baseline SiPM characterization and precision testing of integrated tiles.
For 50 years, TRIUMF has stood at the frontier of scientific understanding as Canada’s particle accelerator centre. Driven by two made-in-Canada cutting edge accelerators - the world’s largest cyclotron, and our new high-power superconducting linear accelerator - we continue to ask the big questions about the origins of the universe and everything in it.
With over five decades of experience in the production of accelerator-based secondary particles for science, TRIUMF also ensures that Canada remains on the leading edge of supplying radioisotopes, neutrons, photons, and muons enabling fundamental science in the fields of nuclear, particle and astrophysics, as well as solid state and medical sciences and applications.
ISAC-TRIUMF is the only ISOL facility worldwide that routinely produces radioisotope beams (RIB) from secondary particle production targets under irradiation in the high-power regime in excess of 10 kW. TRIUMF's current flagship project ARIEL, the Advanced Rare IsotopE Laboratory, will add two new target stations providing isotopes to the existing experimental stations in ISAC I and ISAC II at keV and MeV energies, respectively. In addition to the operating 500 MeV, 50 kW proton driver from TRIUMF's cyclotron, ARIEL will make use of a 30 MeV, 100 kW electron beam from a newly installed superconducting linear accelerator. Together with an additional 200 m of RIB beamlines within the radioisotope distribution complex, this will give TRIUMF the unprecedented capability of delivering three RIBs to different experiments while simultaneously producing radioisotopes for medical applications, significantly enhancing the scientific output of the laboratory.
From its inception, the Life Sciences division at TRIUMF has leveraged the laboratory’s extensive particle accelerator expertise and infrastructure to develop novel technologies that help understand life at the molecular level. This includes novel technologies and research in particle beam therapy and biobetaNMR, but also prominently the production of short-lived (half-life <2 hr) positron emitting isotopes like F-18, C-11 and a number of emerging isotopes, including, but not limited to Ga-68, Zr-89, Cu-64 and cyclotron-produced Tc-99m. More recent efforts have focused on the development of various therapeutic isotopes: Alpha-emitting isotopes like Ac-225 for targeted alpha therapy (TAT), or Hg-197 for targeted radionuclide therapy (TRT) with an Auger emitter.
In order to better enable a new generation of scientists and experiments with a wider array of isotopes, TRIUMF is currently constructing the Institute for Advanced Medical Isotopes (IAMI). IAMI will be commissioned and ready for operation in early 2023. This facility will house a dedicated TR24 (24 MeV) cyclotron and several state-of-the-art laboratories for the development of radiopharmaceuticals from all accelerators on site. This presentation will provide an overview of the facility and the research planned to take place at IAMI. It will significantly increase British Columbia's and Canada's capacity for the sustainable and reliable production and distribution of medical isotopes critical for Canadian health research and clinical use, and ultimately allow Canada to maintain leadership in the realm of isotope production and application across the life sciences.
In this talk I will describe how combining ultrafast lasers and electron microscopes in novel ways makes it possible to directly ‘watch’ the time-evolving structure of condensed matter on the fastest timescales open to atomic motion. By combining such measurements with complementary (and more conventional) spectroscopic probes one can develop structure-property relationships for materials under even very far from equilibrium conditions and explore how light can be used to control the properties of materials.
I will give several examples of the remarkable new kinds of information that can be gleaned from such studies and describe how these opportunities emerge from the unique capabilities of the current generation of ultrafast electron microscopy instruments. For example, in diffraction mode it is possible to identify and separate lattice structural changes from valence charge density redistribution in materials on the ultrafast timescale and to identify novel photoinduced phases that have no equilibrium analogs. It is also possible to directly probe the strength of the coupling between electrons and phonons in materials across the entire Brillouin zone and to probe nonequilibrium phonon dynamics (or relaxation) in exquisite detail.
Accelerator Mass Spectrometry (AMS) provides high-sensitivity measurements (typically at or below 1 part in $10^{12}$) for rare, long-lived radioisotopes when isobars (other elements with the same mass number as the isotope of interest) can be eliminated. In AMS laboratories, established techniques are used for the removal of the interfering isobars of some light isotopes. However, for smaller, lower-energy AMS systems, separating the abundant isobars of many isotopes, such as sulfur-36 in measurements of chlorine-36, remains a challenge. For some heavy isotopes, such as strontium-90 and cesium-135,137, even high-energy accelerators are unable to separate the interfering isobars.
The Isobar Separator for Anions (ISA), which has been integrated into a second injection line of the 3 MeV tandem accelerator system at the A. E. Lalonde AMS Laboratory, will provide a universal way to measure rare radioisotopes without the interference of abundant isobars. The ISA is a radiofrequency quadrupole (RFQ) reaction cell system, including a DC deceleration region, a combined cooling and reaction cell, and a DC acceleration region. The deceleration region accepts a mass analyzed beam from the ion source (with energy 20-35 keV) and reduces the energy to a level that the reaction cell can accept. RFQ segments along the length of the cell create a potential well which limits the divergence of the traversing ions. DC rod offset voltages on these RFQ segments maintain a controlled ion velocity through the cell. The cell is filled with an inert cooling gas that has been experimentally selected to provide the lowest ion energy and the highest transmission, and with nitrogen dioxide, a reaction gas chosen to preferentially react with the interfering isobar. In the case of chlorine-36, the sulfur-36 isobar has been shown to be reduced by over $10^{6}$. Preliminary characterization of the ISA and its incoming and outgoing ion beams will be presented.
Negative Ion Source Development for Accelerator Mass Spectrometry
CJ Tiessen, WE Kieser, and XL Zhao
Accelerator mass spectrometry (AMS) is a highly sensitive technique used for the analysis of long-lived radioisotopes. While carbon-14 dating is the most well known application, AMS can be used to measure other isotopes such as beryllium-10, aluminum-26, iodine-129, and uranium-236 which are useful in geology, archeology, environmental tracer and chronology studies, nuclear waste monitoring, and nuclear forensics. The technique uses a combination of electrostatic analyzers, mass-separation magnets, electrostatic lenses, as well as a tandem accelerator. In the accelerator, an electron stripping gas canal is used to convert incoming negative ions to positive ions while simultaneously disintegrating molecular isobars. Ions from the samples are injected into the accelerator using a cesium sputter negative-ion source. The focus of this work is to model the electrodynamics within the ion source, including the effects of the more intense positive cesium ion beam and the sputtered sample negative-ion beam. Simulations using Integrated Engineering Software’s Lorentz 2E ion optics software will guide the design of a new ion source with the goal of increasing the emitted sample ion current while also improving the emittance of this beam. Following a short overview of the AMS system, details of the ion source, including the mutual space-charge interaction of the two beams, will be presented.
The radioactive decay of radon in the home is the leading cause of lung cancer in non-smoking Canadians (REF 1,2). Radon, produced by the decay of uranium and thorium minerals, may enter the home and accumulate in concentrations that exceed the national maximum guideline for indoor air of 200 Bq/m3. There is a critical need to develop a practical tool to assess an individual's exposure to radon and, eventually, one's lung cancer risk. An important opportunity is to use keratinizing tissues in the body (hair, nails) as archives of radon exposure. Lead from the environment, including the relatively long-lived (22-year half-life) 210Pb isotope produced by 222Rn decay, is sequestered in toenails.
In this project, we are using isotope ratio mass spectrometry to quantify the amount of 210Pb in a known amount of sample. This method has the advantage of providing a direct and relatively rapid count of the numbers of 210Pb atoms. The challenge is that the actual number of 210Pb atoms is very low and achieving reliable results requires high sensitivity methods specifically designed for the extraction of lead from the biological matrix. In the first stages of the project, we are using isotope dilution methods coupled with multiple collector inductively coupled plasma mass spectrometry (MC-ICPMS). Initial results demonstrate that femtogram quantities of lead can be measured.
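The isotope dilution quantification mentioned above rests on a simple mixing relation: spiking the sample with a known number of atoms of an enriched isotope and measuring the blended isotope ratio yields the number of analyte atoms. A minimal sketch with the generic equation and purely illustrative numbers (not measured values from this project):

def isotope_dilution_atoms(n_spike, r_spike, r_sample, r_mix):
    """
    Atoms of the analyte isotope (A) in the sample, from isotope dilution.

    n_spike  : atoms of the spike isotope (B) added to the sample
    r_spike  : A/B ratio of the spike material
    r_sample : A/B ratio of the un-spiked sample
    r_mix    : A/B ratio measured in the spiked mixture
    """
    return n_spike * (r_spike - r_mix) / (r_mix / r_sample - 1.0)

# Illustrative numbers only (a nearly pure spike, a sample containing almost no
# spike isotope, and a measured mixture ratio of 0.02):
n_atoms = isotope_dilution_atoms(n_spike=5.0e9, r_spike=1.0e-6,
                                 r_sample=1.0e6, r_mix=0.02)
mass_fg = n_atoms * 210.0 / 6.022e23 * 1.0e15    # convert atoms of 210Pb to femtograms
print(f"inferred atoms: {n_atoms:.2e}  (~{mass_fg:.0f} fg of 210Pb)")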
The next stage of the project involves the design and construction of a laser ablation ion source coupled to the Multiple Reflection Time-of-Flight mass spectrometer (MR-TOF MS) of the TITAN facility at TRIUMF. The laser ion source in combination with the MR-TOF MS offers high sensitivity and the ability to separate isobars of 210Pb. The laser beam, after passing through an optical telescope system and polarizers for pulse energy modulation, is focused to a small spot on the sample surface located in a high-vacuum chamber. The laser source thus enables spatial mapping of the 210Pb isotopic composition and allows one to map the accumulation of radon daughter products over the growth of the tissue. Ultimately, an accurate measurement of the number of accumulated atoms in an individual's biological tissue may serve as a personalized biodosimeter for radon.
Crowded soft-matter and biological systems organize locally into preferred motifs. Locally organized motifs in soft systems can, paradoxically, arise from a drive to maximize overall system entropy. Entropy-driven local order has been directly confirmed in model, synthetic colloidal systems; however, similar patterns of organization occur in crowded biological systems ranging from the contents of a cell to collections of cells. In biological settings, and in soft matter more broadly, it is unclear whether entropy generically promotes or inhibits local organization. Resolving this is difficult because entropic effects are intrinsically collective, complicating efforts to isolate them. Here, we employ minimal models that artificially restrict system entropy to show that entropy drives systems toward local organization, even when the model system entropy is below reasonable physical bounds. By establishing this bound, our results suggest that entropy generically promotes local organization in crowded soft and biological systems of rigid objects.
The symmetries of unconventional superconductors may be classified by the locations of their gap nodes. Recently, the role of spin-orbit coupling (SOC) has become important, as sufficiently strong SOC generates novel mixed-parity superconductivity. In this talk, I show that the nodal structure of unconventional superconductors may be determined by angle-dependent magneto-thermal conductivity measurements, provided the SOC is larger than the quasiparticle scattering rate. This effect is complementary to vortex-induced magneto-thermal oscillations identified previously, and is dominant in strongly anisotropic materials. As an application, I present results for the magneto-thermal conductivity of the "Rashba bilayer" YBa$_2$Cu$_3$O$_{6.5}$, which possesses a so-called "hidden spin-orbit coupling." We find that the SOC endows $\kappa_{xx}/T$ with a characteristic field-angle dependence that should be easily observed experimentally.
A state-preserving quantum counting algorithm is used to obtain coefficients of a Lanczos recursion from a single ground state wavefunction on the quantum computer. This is used to compute the continued fraction representation of an interacting Green's function for use in condensed matter, particle physics, and other areas. The wavefunction does not need to be re-prepared at each iteration. The quantum algorithm represents an exponential reduction in memory over known classical methods. An extension of the method to determining the ground state is also discussed.
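For context, the classical counterpart of this recursion is the standard Lanczos tridiagonalization, whose coefficients a_n, b_n feed directly into the continued-fraction form of the Green's function G(z) = 1/(z - a_0 - b_1^2/(z - a_1 - ...)). A minimal classical sketch with an arbitrary dense Hamiltonian follows (the quantum algorithm described above obtains analogous coefficients without re-preparing the wavefunction):

import numpy as np

# Classical reference implementation of the recursion described above: Lanczos
# coefficients (a_n, b_n) from a Hamiltonian H and a starting vector, and the
# continued-fraction Green's function G(z) = <v|(z - H)^(-1)|v>. The Hamiltonian
# here is an arbitrary random Hermitian matrix used only for illustration.

def lanczos_coefficients(H, v0, n_iter=40):
    a, b, basis = [], [], []
    v = v0 / np.linalg.norm(v0)
    for _ in range(n_iter):
        basis.append(v)
        w = H @ v
        a.append(np.vdot(v, w).real)
        for u in basis:                      # full reorthogonalization for stability
            w = w - np.vdot(u, w) * u
        beta = np.linalg.norm(w)
        b.append(beta)
        if beta < 1e-12:                     # Krylov space exhausted
            break
        v = w / beta
    return np.array(a), np.array(b)

def green_continued_fraction(z, a, b):
    """G(z) = 1/(z - a0 - b0^2/(z - a1 - b1^2/(...))), truncated at len(a) levels."""
    g = 1.0 / (z - a[-1])
    for n in range(len(a) - 2, -1, -1):
        g = 1.0 / (z - a[n] - b[n]**2 * g)
    return g

rng = np.random.default_rng(0)
M = rng.normal(size=(40, 40))
H = 0.5 * (M + M.T)                          # small random Hermitian "Hamiltonian"
v = rng.normal(size=40)
a, b = lanczos_coefficients(H, v)

z = 0.3 + 0.5j                               # frequency with a small broadening
vn = v / np.linalg.norm(v)
exact = np.vdot(vn, np.linalg.solve(z * np.eye(40) - H, vn))
print("continued fraction:", green_continued_fraction(z, a, b))
print("exact resolvent:   ", exact)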
Since the temperature of an object that cools decreases as it relaxes to thermal equilibrium, naively a hot object should take longer to cool than a warm one. Yet, some 2300 years ago, Aristotle observed that “to cool hot water quickly, begin by putting it in the sun.” In the 1960s, this counterintuitive phenomenon was rediscovered as the statement that “hot water can freeze faster than cold water” and has become known as the “Mpemba effect.” While many specific mechanisms have been proposed, no general consensus exists as to the underlying cause. Here we demonstrate the Mpemba effect in a controlled setting, the thermal quench of a colloidal system immersed in water, which serves as a heat bath. Our results are reproducible and agree quantitatively with calculations based on a recently proposed theoretical framework. By carefully choosing parameters, we observe cooling that is exponentially faster than that observed using typical parameters, in accord with the recently predicted strong Mpemba effect. We then show that similar phenomena can be observed when heating—these are the first observations of an inverse Mpemba effect. In this case, a cold system placed in a hot bath will reach equilibrium more quickly than a warm one placed in identical conditions. Our experiments give a physical picture of the generic conditions needed to accelerate relaxation to thermal equilibrium and support the idea that the Mpemba effect is not simply a scientific curiosity concerning how water freezes into ice—one of the many anomalous features of water—but rather the prototype for a wide range of anomalous relaxation phenomena that may have significant technological application.
Molecular self-assembly is one of the most important bottom-up fabrication strategies for producing two-dimensional networks at solid surfaces. The formation of complex two-dimensional (2-d) surface structures at the molecular scale relies on the self-assembly of functional organic molecules on solid substrates. Driven by an intricate equilibrium between molecule–molecule and molecule–substrate interactions, a number of different non-covalent molecular interactions can be used to generate stable 2-d geometric structures; in the case of halogen-terminated monomers, for example, halogen bonding can direct the assembly.
In addition to being the building blocks of self-assembled networks, halogen-terminated molecules can be activated on surfaces to form 2-d π-conjugated polymers. Ideally, the process follows a two-step procedure whereby the carbon-halogen bonds break to form organometallic structures, followed by covalent C-C coupling. These organic analogues of graphene, the only natural 2-d conjugated polymer, represent a promising new class of high-performance functional nanomaterials.
In this work we study the adsorption of a tribromo-substituted heterotriangulene molecule (TBTANGO) on the Au(111) and Ag(111) surfaces using room-temperature scanning tunneling microscopy in ultrahigh vacuum. The resultant two-dimensional molecular networks range from self-assembled networks held together by non-covalent Br⋅⋅⋅Br halogen bonds on Au(111), to organometallic networks with C-Ag-C linkages on Ag(111), to a π-conjugated polymer when the TBTANGO monomers are deposited directly onto a hot Au(111) surface.
Seawater spray and precipitation are two main sources of icing and ice accumulation in cold ocean regions, presenting a major challenge for shipping and operating maritime equipment [1].
There is a limited number of analytical techniques available to study seawater spray ice formation. MRI is known for its non-invasive capabilities in measurements of solid ice [2,3]. In this work, we investigated the potential of MRI as an analytical measurement technique for studies of seawater spray ice.
The signal detected with MRI/NMR comes from pockets of brine in the forming ice and from the unfrozen water, with the 1H NMR signal from the brine decreasing as the temperature drops and the brine freezes further. 3D MRI showed different freezing patterns and temperature gradients depending on the initial freezing temperature and the surface geometry. T1-T2 maps indicated strong changes in relaxation parameters as the freezing progresses, indicating a changing environment for the brine in the growing ice [4]. In a separate freezing series using Na NMR, the amount of sodium in the brine remained almost unchanged until the brine reached the eutectic temperature, after which sodium precipitation accelerated.
These measurements were done on an MRI scanner, with the freezing setup designed to fit a 4 cm i.d. RF probe inside a 2.4 T superconducting magnet. To explore the possibility of using NMR for freezing studies in a more open environment, we also used a portable, unilateral NMR device to characterize sea spray freezing on a cold surface. The device consisted of a flat 3-magnet array [5] with the sensitive volume (approx. 2 mm x 15 mm x 15 mm) located 1 cm away from the magnet surface. 1D-resolved 1H NMR measurements provided information on the brine concentration, T2, and diffusion over a range of temperatures.
The results provide information on the changing environment in brine in freezing sea sprays, with a potential for NMR studies both in the lab and in the field.
[1] A.R. Dehghani-Sanij, S.R. Dehghani, G.F. Naterer, Y.S. Muzychka, Ocean Eng. 143 (2017) 1–23.
[2] M.W. Hunter, R. Dykstra, M.H. Lim, T.G. Haskell, P.T. Callaghan, Appl. Magn. Reson. 36 (2009) 1–8.
[3] J.R. Brown, T.I. Brox, S.J. Vogt, J.D. Seymour, M.L. Skidmore, S.L. Codd, J. Magn. Reson. 225 (2012) 17–24.
[4] G. Wilbur, B. MacMillan, K.M. Bade, I. Mastikhin, J. Magn. Reson. 310 (2020) 106647.
[5] J.C. Garcia-Naranjo, I. Mastikhin, B.J. Colpitts, B.J. Balcom, J. Magn. Reson. 207 (2010) 337–344.
Many-body localization impedes the spread of information encoded in initial conditions, providing an intriguing counterpoint to continuing efforts to understand the approach of quantum systems to equilibrium and also opening the possibility of diverse non-equilibrium phases.
While much work in this area has focused on systems with a single degree of freedom per site, motivated by rapid developments in cold atom experiments we focus on the Fermi-Hubbard model, with both spin and charge degrees of freedom. To explore the spread of information between these in the presence of disorder, we compare the time dependence of the entanglement entropy with the time dependence of the charge and spin correlations. In addition, we rewrite the Hamiltonian in terms of charge- and spin-specific integrals of motion, allowing us to distinguish time scales associated with charge-charge, spin-spin, and charge-spin correlations.
In theories with extra dimensions, the standard QCD axion has excited states with higher mass. The axion of such theories, named the Kaluza-Klein (KK) axion, would have a significantly shorter decay time for higher mass states. This would allow for axion decays on Earth, even in the absence of a strong magnetic field. It would also mean that a fraction of heavier mass axions created in the Sun would remain gravitationally trapped in the Solar System, dominating the local density of axions.
NEWS-G is a dark matter direct detection collaboration that aims to detect low-mass WIMPs using a gaseous target detector. The detector is a gas-filled metallic sphere with a high-voltage electrode at its centre. While WIMP detection is its main purpose, it is also particularly well suited to KK axion detection. Since the rate of KK axion decays depends only on volume, not on mass, the use of a low-density target is an asset: it makes it possible to distinguish such decays from the background by identifying the separate capture locations of the two resulting photons.
This talk will cover arguments in favour of the existence of (solar) KK axions, and the work performed on data from NEWS-G detectors to set new constraints on the solar KK axion model.
DEAP-3600 is a direct detection dark matter experiment with single-phase liquid argon as the target material, searching for the nuclear recoil signal from the interaction of WIMPs, one of the most widely accepted dark matter candidates. In addition to this elastic interaction of WIMPs with target nuclei, theory predicts that the dark matter signal rate should vary over the course of a year, because the Earth's orbital motion around the Sun alternately adds to and subtracts from the Solar System's velocity through the galactic halo. This type of modulation is not expected in most of the known backgrounds, and the observation of such a modulation signal would extend the sensitivity of the WIMP search in DEAP-3600. The stability of the DEAP-3600 detector is being studied, which will lead to a measurement of the Ar-39 half-life and of the annual modulation of the dark matter signal. In this talk, the sensitivity studies of the detector will be presented.
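The annual modulation referred to above is conventionally parameterized as a rate R(t) = R0 + A cos(2π(t - t0)/1 yr); a minimal sketch of extracting the modulation amplitude from a binned rate time series, using synthetic data and illustrative parameters rather than DEAP-3600 results:

import numpy as np
from scipy.optimize import curve_fit

# Sketch of an annual-modulation fit, R(t) = R0 + A*cos(2*pi*(t - t0)/T_year),
# applied to synthetic data. All rates and parameters are made up for the example;
# they are not DEAP-3600 results.
T_YEAR = 365.25

def modulated_rate(t, R0, A, t0):
    return R0 + A * np.cos(2.0 * np.pi * (t - t0) / T_YEAR)

rng = np.random.default_rng(1)
t = np.arange(0.0, 3.0 * T_YEAR, 10.0)                    # 10-day bins over three years
expected = modulated_rate(t, R0=100.0, A=2.0, t0=152.0)   # peak in early June
counts = rng.poisson(expected).astype(float)              # Poisson counting fluctuations

popt, pcov = curve_fit(modulated_rate, t, counts, p0=[100.0, 1.0, 100.0],
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
print(f"fitted amplitude A = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f} (true value 2.0)")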
The Large Hadron Collider (LHC) at CERN supports a plethora of experiments aimed at improving our understanding of the universe by attempting to solve the many unanswered questions in physics, such as: What is the nature of dark matter? Why is electric charge quantized? Why do the free parameters of the Standard Model (SM) have their particular values? To date, the SM has been stringently tested at the LHC and completely validated by the recent discovery of the Higgs boson by the ATLAS and CMS experiments. However, no smoking-gun signal of new physics beyond the SM (BSM) has been detected at the LHC to date. The Monopole and Exotics Detector at the LHC (MoEDAL) is specifically dedicated to investigating various BSM scenarios through searches for highly ionizing particles, such as magnetic monopoles and multiply electrically charged particles, as avatars of new physics. To date, MoEDAL has taken data from $pp$ collisions at center-of-mass energies of $\sqrt{s}=8$ and $\sqrt{s}=13$ TeV, providing the world's best laboratory constraints on magnetic monopoles with magnetic charges ranging from two to five times the Dirac charge. During the ongoing Long Shutdown 2, the MoEDAL collaboration has been preparing the MoEDAL Apparatus for Penetrating Particles (MAPP) detector upgrade. The aim of the MAPP detector is to expand MoEDAL's physics program by including searches for new mini-ionizing particles (mIPs) and long-lived neutral particles (LLPs). The proposed placement of the MAPP detector is $\sim50$ m from the interaction point, in the UGC1 gallery, a generously sized cavern adjacent to the MoEDAL region at interaction point 8. This presentation provides a progress update on the MAPP detector, currently planned for deployment in phases throughout Run 3. A brief overview of the two subdetectors, MAPP-mCP and MAPP-LLP, is presented. Lastly, benchmark studies involving renormalizable portal interactions that couple a dark sector to the SM are presented for each subdetector to illustrate the performance capabilities of the MAPP detector in the upcoming Run 3.
For many years the SuperCDMS collaboration has been developing cryogenic low-threshold silicon and germanium detectors for dark matter searches. The recently developed gram-scale high-voltage eV-resolution (HVeV) detectors are designed to be operated with a high voltage bias (on the order of 100 V) to take advantage of the Neganov-Trofimov-Luke amplification to resolve individual electron-hole pairs. An improved version of the HVeV detector achieved a phonon energy resolution of 2.7 eV without the voltage assisted amplification. Background data with exposures on the order of 1 gram-day were acquired with this detector in an above-ground laboratory, without bias voltage (0 V) as well as at high voltages. We compare the 0 V data with high voltage data, in an attempt to understand the spectrum observed. The 0 V data were also used to set a nuclear recoil dark matter limit.
Polish up your Klingon (the title is a bit of a teaser; it is a Klingon translation of “Tell me more”)! Effective communication is key when it comes to talking about your research, presenting it at conferences, and writing papers, and even more so when trying to sell your ideas and research to funding agencies, politicians, and decision makers. Lorna Somers has perfected the art of storytelling, speaking at educational, arts, and charitable organizations throughout the world. “Tell me more!” is what we want our listeners to say when we are talking about our research. But they will not get there if they do not understand what we are talking about or why it is relevant and important! During this workshop, Lorna will teach us how to communicate effectively in different contexts and with different audiences.
The Belle II experiment operating at the SuperKEKB electron-positron collider is the first high energy collider experiment to use CsI(Tl) pulse shape discrimination (PSD) as a new method for improving particle identification. This novel technique employs the particle-dependent scintillation response of the CsI(Tl) crystals which comprise the electromagnetic calorimeter to identify electromagnetic vs. hadronic showers. The new dimension of calorimeter information introduced by PSD has allowed for significant improvements in neutral kaon vs. photon discrimination, an area critical for the Belle II flagship measurement of $\sin(2 \phi_1)$ using $B \rightarrow J/\psi K^0_L$. This talk will describe the implementation of PSD at Belle II including the development of the pulse shape characterization algorithms and new simulation methods to compute the CsI(Tl) scintillation response from the ionization dE/dx of the secondary particles produced in the crystals. The performance of PSD for $K^0_L$ vs photon separation will be presented and the significant improvement over traditional shower-shape approaches will be demonstrated. Ongoing studies exploring new directions for PSD at Belle II will also be presented including new methods of pulse shape characterization with machine learning as well as using PSD to enhance cluster finding and low momentum charged particle identification.
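As a schematic illustration of the pulse-shape idea described above, the sketch below models a CsI(Tl) pulse as fast plus slow exponential components and fits the fast-light fraction, which can serve as a shower-type discriminant; the time constants, amplitudes, and noise are illustrative, not Belle II calibration values:

import numpy as np
from scipy.optimize import curve_fit

# Schematic two-component CsI(Tl) pulse model: the fitted fraction of light in the
# fast component differs between electromagnetic and hadronic energy deposits and
# can serve as a shower-type discriminant. Time constants, amplitudes, and noise
# are illustrative, not Belle II calibration values.
def csi_pulse(t, amplitude, f_fast, tau_fast=0.6, tau_slow=3.5):
    """Scintillation intensity vs time (t in microseconds)."""
    fast = f_fast * np.exp(-t / tau_fast) / tau_fast
    slow = (1.0 - f_fast) * np.exp(-t / tau_slow) / tau_slow
    return amplitude * (fast + slow)

t = np.linspace(0.0, 10.0, 200)                             # sampled waveform times [us]
rng = np.random.default_rng(2)
waveform = csi_pulse(t, amplitude=1000.0, f_fast=0.4) + rng.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(lambda tt, A, f: csi_pulse(tt, A, f), t, waveform, p0=[800.0, 0.6])
print(f"fitted fast-component fraction: {popt[1]:.2f}")     # pulse-shape discriminant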
The electroweak production of a Z boson in association with two jets is measured using the full Run-II dataset of the ATLAS experiment. This EW-Zjj process is a fundamental process of the Standard Model (SM); it is sensitive to vector boson fusion Z boson production via the WWZ triple gauge vertex. The process is difficult to study, so an advanced methodology is employed to measure the EW-Zjj signal by exploiting its topological features. This methodology and the large dataset collected during Run-II have made it possible to measure the cross section of EW-Zjj differentially for the first time. The cross section is measured as a function of four observables: the invariant mass of the two-jet system, the rapidity interval spanned by the two jets, the signed azimuthal angle between the two jets, and the transverse momentum of the Z boson. The observed total fiducial cross section of EW-Zjj is 37.14 ± 3.47 (stat.) ± 5.79 (syst.) fb.
The techniques developed for this analysis can be applied to the measurement of other electroweak processes such as vector boson fusion Higgs production. EW-Zjj is also an important background for vector boson scattering processes that are of growing interest for searches of deviations from the SM. The differential cross sections themselves provide two avenues for testing the SM. First, the measurements are sufficiently precise as to distinguish between different state-of-the-art theoretical predictions. Knowledge gained here is applicable to other areas, such as Higgs physics. Second, the differential cross sections are used to test deviations from the SM attributed to higher order corrections in the WWZ vertex by exploiting the sensitivity of a parity-odd observable with an effective field theory approach.
Black holes are, without question, one of the most bizarre and mysterious phenomena predicted by Einstein’s theory of general relativity. They correspond to infinitely dense, compact regions in space and time, where gravity is so extreme that nothing, not even light, can escape from within. And, their existence raises some of the most challenging questions about the nature of space and time. Over the past few decades, astronomers have identified numerous tantalizing observations that suggested that black holes are real. This past April, the search for confirmation changed dramatically with the publication of the first image ever taken of a black hole, rendering tangible what was previously only the purview of theory and science fiction. I will describe how these observations were made, how the images were generated, how quantitative measurements were obtained, and what they all mean for gravity and black hole astronomy.
We propose a driving scheme in dynamic Atomic Force Microscopy (AFM) to maximize the time the tip spends near the surface during each oscillation cycle. Using a quantum description of the oscillator that employs a generalized Caldeira-Leggett model for dissipative oscillator-surface interaction, we predict large classical squeezing and a small amount of skewness of the probability distribution of the oscillator. Our model also predicts that a dissipative surface force may enhance quantum effects in the motion of a micro-mechanical oscillator that interacts with a surface.
The quest to engineer quantum computers of a useful scope faces many challenges that will require continued investigation of the physics underlying the devices. In this talk, we focus on trapped ion quantum computing. We discuss our efforts to implement quantum information processing with Ba+ ions and provide an overview of possible future benefits this ion could provide for quantum computing efforts, including architectures that are well suited for implementing quantum error correction, and exploration of exotic methods to encode quantum information more efficiently in quDits (multi-level versions of the more familiar two-level quBits). To this end, we present novel measurements related to an all-optical technique for isotope-selective ion production, and discuss why this technique may be critical for building quantum computing devices using the isotope Ba-133+.
We propose an efficient, nanoplasmonic method to selectively enhance the spontaneous emission rate of a quantum system by changing the polarization of an incident control field and exploiting the polarization dependence of the system's spontaneous emission rate. This differs from the usual Purcell enhancement of spontaneous emission rates in that it can be selectively turned on and off. Using a three-level system in a quantum dot placed between two silver nanoparticles and a linearly polarized, monochromatic driving field, we present a protocol for rapid quantum state initialization while maintaining long coherence times for control operations. This process increases the overall amount of time that a quantum system can be effectively utilized for quantum operations, and presents a key advance in quantum computing.
Time-resolved spectroscopy of multi-electron dynamics associated with the Xe giant plasmonic resonance is demonstrated by applying an attosecond in situ measurement method. The Xe giant resonance was first noticed through enhanced photoionization around 100 eV using synchrotron X-ray beams. Recently, this was revisited with high harmonic spectroscopy, where enhanced extreme ultraviolet (XUV) emission was measured above the photon energy of 90 eV. Although this is remarkable progress, achieved using a table-top XUV source with excellent coherence, we need phase information to understand electron interactions during the resonant excitation. To measure this, we introduce a weak field to perturb recollision electron trajectories during the XUV generation process. This modulates the emitted XUV beam, allowing us to determine emission times of each XUV frequency. Consequently, we observe a large group delay variation around 84 eV of the XUV spectrum, which coincides with the strong amplitude enhancement at the resonance. This reveals the time-dependent response of the resonance, showing a tail with a decay time of 200 as. Since the emission time is the frequency derivative of the spectral phase, this measurement corresponds to the full characterization of the X-ray pulse influenced by the resonance.
This is evidence that in situ methods can probe multi-electron correlations. Our measurement of the delay of the plasmonic resonance implies that in situ methods are a viable alternative to photoelectron streaking, utilizing recollision electrons as exquisitely sensitive probes to characterize ultrafast electron dynamics. Although the in situ method does not distinguish between the ionization and recombination steps of high harmonic generation, it is still valuable for simple approaches pursuing attosecond science. The application of in situ techniques, as demonstrated here in a many-body system, presents a new direction of strong-field attosecond physics where ultrafast many-body dynamics are measured and controlled by all-optical metrologies.
The study of many-body quantum systems undergoing non-equilibrium dynamics has received a lot of interest in the past few years. One way to characterize such systems is by monitoring non-analytic behavior of physical quantities that might occur as a function of time; this is precisely the aim of the theory of dynamical phase transitions. Another way is by looking at universal structures that generally form in many-body systems, as seen through the wavefunction. In this case, catastrophe theory is a framework which allows one to mathematically describe universal features of wavefunctions via a set of scaling exponents. We found strong evidence suggesting that these two theories are in fact related. By studying the transverse field Ising model with infinite-range interactions, which can be simulated in ultracold-atom and trapped-ion experiments, we were able to relate non-analyticities occurring at critical times to universal structures appearing in the wavefunction. More precisely, we numerically calculated a quantity called the Loschmidt rate function as a function of time, and found kinks occurring periodically in time that coincided with the universal structures of the time-evolved wavefunction, identified via scaling laws.
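A minimal numerical sketch of the Loschmidt rate function for an infinite-range transverse field Ising model follows (restricted to the maximal collective-spin sector, with an illustrative Hamiltonian convention and quench parameters; this is not the calculation behind the results above):

import numpy as np

# Minimal sketch of the Loschmidt rate function for an infinite-range transverse
# field Ising model, restricted to the maximal collective-spin sector and written
# as H = -(2J/N) Sz^2 - 2h Sx (an illustrative convention). System size and quench
# parameters are illustrative; this is not the code behind the results above.
N = 200
S = N / 2.0
m = np.arange(-S, S + 1.0)                       # Sz eigenvalues in the S = N/2 sector

Sz = np.diag(m)
c = np.sqrt(S * (S + 1.0) - m[:-1] * (m[:-1] + 1.0))   # <m+1|S+|m>
Sp = np.diag(c, k=-1)
Sx = 0.5 * (Sp + Sp.T)

def hamiltonian(h, J=1.0):
    return -(2.0 * J / N) * (Sz @ Sz) - 2.0 * h * Sx

# quench from a small to a large transverse field
E0, V0 = np.linalg.eigh(hamiltonian(h=0.1))
psi0 = V0[:, 0]                                  # pre-quench ground state
Ef, Vf = np.linalg.eigh(hamiltonian(h=1.5))
weights = (Vf.T @ psi0) ** 2                     # |<k|psi0>|^2 in the post-quench basis

times = np.linspace(0.0, 10.0, 400)
rate = []
for t in times:
    G = np.sum(weights * np.exp(-1j * Ef * t))   # Loschmidt amplitude <psi0|e^{-iHt}|psi0>
    rate.append(-np.log(np.abs(G) ** 2 + 1e-300) / N)
# kinks in rate(t) at critical times are the dynamical phase transitions of interest
print(f"maximum Loschmidt rate in the window: {max(rate):.3f}")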
High harmonic generation (HHG) in gases has become a method of choice among table-top extreme ultraviolet (XUV) sources. In order to generate higher photon energies from this process, many strategies can be implemented, including red-shifting and compressing the driver pulses. Here, we propose a new approach for inducing a red-shift in driver pulses and compressing them to few-cycle durations in a single stage. This method uses the recently discovered multidimensional solitary states (MDSS), pulses that result from the nonlinear Raman process in gas-filled hollow-core fibers. It has a few key advantages: 1) MDSS are created from relatively long pulses, making this method suitable for any laser source offering sub-picosecond pulses. 2) The MDSS pulses can be compressed by simple propagation through glass. 3) In contrast with commonly used optical parametric amplifiers, the induced red-shift can be modest, allowing a target XUV photon energy to be reached while minimizing the detrimental effects of long driver wavelengths on HHG efficiency.
To produce MDSS pulses, the output pulses of a Titanium-Sapphire system are stretched to 400 fs and coupled into a hollow-core fiber filled with nitrogen. In the fiber, intermodal nonlinear processes occur which lead to the red-shifted MDSS pulses. CaF2 plates inserted into the beam path then compress the pulses down to 12 fs, and these are sent to an argon-filled cell where HHG occurs. From this simple apparatus, the HHG spectrum is extended to cover the M$_{2,3}$ edge of cobalt and $10^9$ photons/second are generated at 60 eV. This significant XUV photon flux allows for the implementation of X-ray resonant magnetic scattering measurements on a cobalt/platinum ferromagnet, an example of a photon-hungry application. Although such a flux has previously been generated in a different gas from the direct output of the same laser system, the conversion efficiency of our approach is one order of magnitude higher, as it reaches the target photon energy by generating harmonics in argon, a gas that offers a high generation efficiency. Due to its simplicity and versatility, our approach can readily be adapted to different applications and could be particularly interesting for high-power Ytterbium laser systems offering sub-picosecond pulses.
Time-domain terahertz (THz) spectroscopy has been widely exploited in studying semiconductors, superconductors, topological insulators, and metal-organic frameworks. A high-sensitivity THz system can resolve weak spectroscopic features and a broadband system allows experimentalists to rely on additional spectral information to investigate novel phenomena in materials. In a standard configuration relying on difference frequency mixing to generate and detect THz radiation, the spectroscopy signal can be improved by increasing the nonlinear interaction length inside a nonlinear crystal. However, the accessible spectral bandwidth is then limited by phase-matching conditions. Here we demonstrate a time-resolved THz system relying on noncollinear THz generation and detection schemes in thick nonlinear crystals to perform high-sensitivity and broadband spectroscopy. This concept relies on a phase grating etched on the front surface of two 2-mm thick gallium phosphide (GaP) crystals (THz generation and detection crystals) to diffract the incident near-infrared pulses. Our scheme exploits the long interaction length in these crystals to improve the signal strength and dynamic range in the system. In addition, the noncollinear geometry yields optimizable phase-matching conditions to access a broad spectral bandwidth. We compare our results with those obtained with a traditional broadband collinear system using a pair of thin GaP crystals without gratings. The noncollinear geometry shows a significant increase of the maximum signal amplitude, by a factor of 20, while also achieving a large spectral bandwidth reaching up to 7 THz. We also achieve a dynamic range above 80 dB between 1.1 and 4.3 THz. Our concept could be extended to other nonlinear crystals besides GaP to improve THz generation and detection in different spectral regions. In conclusion, this work paves the way towards high-sensitivity THz spectroscopy over a broad bandwidth in low power experiments and could enable high-field THz generation above 3 THz.
In this research project, I calculated the high-harmonic spectrum from a 1D periodic potential. I investigated numerical methods for solving the 1D time-dependent Schrodinger equation for a particle in a double-well potential, as well as for determining its ground state. I used the Crank-Nicolson method [1], a finite-difference method that can be used to numerically solve second-order partial differential equations. Using this method, I calculated the time evolution of an electronic wave function in a harmonic potential; the code was benchmarked against analytic harmonic-oscillator wave functions. I extended this code by implementing the imaginary-time method [2] to determine the ground state of an electron in a double-well potential. The time-independent Schrodinger equation was then solved in the Bloch-state basis to calculate the band structure of two different 1D periodic potentials. The calculated dispersion relations are used to compute the high-harmonic spectrum, and the final results are compared with Ref. [3].
[1] Wachter, C. (2017). Numerical Solution of the Time-Dependent 1D-Schrodinger Equation using Absorbing Boundary Conditions (Bachelor Thesis, University of Graz, Austria). Retrieved from https://physik.uni-graz.at/~pep/Theses/BachelorThesis_Wachter_2017.pdf
[2] Williamson, A. (1996). Quantum Monte Carlo Calculations of Electronic Excitations. Retrieved from http://www.tcm.phy.cam.ac.uk/~ajw29/thesis/node27.html
[3] Wu, M. (2015). Attosecond Transient Absorption in Gases and High Harmonic Generation in Solids (Doctoral dissertation, Louisiana State University, USA). Retrieved from https://digitalcommons.lsu.edu/cgi/viewcontent.cgi?article=4320&context=gradschool_dissertations
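A minimal sketch of the Crank-Nicolson propagation and imaginary-time relaxation described in the abstract above, using atomic units and an illustrative harmonic benchmark potential rather than the project's double-well or periodic potentials:

import numpy as np

# Minimal Crank-Nicolson sketch (atomic units, hbar = m = 1): one time step solves
#   (I + i*dt/2*H) psi_new = (I - i*dt/2*H) psi_old,
# with H discretized by second-order finite differences. Replacing dt by -i*dtau
# gives the imaginary-time relaxation toward the ground state. The grid and the
# harmonic benchmark potential are illustrative, not the project's actual inputs.
x = np.linspace(-10.0, 10.0, 400)
dx = x[1] - x[0]
V = 0.5 * x**2

main_diag = 1.0 / dx**2 + V                      # kinetic + potential, main diagonal
off = -0.5 / dx**2 * np.ones(x.size - 1)
H = np.diag(main_diag) + np.diag(off, 1) + np.diag(off, -1)

def cn_step(psi, dt):
    A = np.eye(x.size) + 0.5j * dt * H
    B = np.eye(x.size) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

# real-time benchmark: the harmonic-oscillator ground state only acquires a phase
psi = np.exp(-x**2 / 2.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(200):
    psi = cn_step(psi, dt=0.01)
print("norm after real-time propagation:", np.sum(np.abs(psi)**2) * dx)

# imaginary-time relaxation from a displaced Gaussian toward the ground state
phi = np.exp(-(x - 2.0)**2).astype(complex)
for _ in range(1000):
    phi = cn_step(phi, dt=-0.1j)                 # dt -> -i*dtau
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dx)  # renormalize every step
E0 = np.real(np.vdot(phi, H @ phi) / np.vdot(phi, phi))
print("ground-state energy estimate:", E0, "(exact harmonic value: 0.5)")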
High harmonic generation (HHG) in solids is a decade-old field, and yet the mechanisms understood to lead to HHG still form an incomplete picture: they fail to capture real-space motion such as lateral tunneling ionization. We investigate high harmonic generation in solids theoretically using a localized basis of Wannier states. Wannier states are localized wavefunctions that overcome the infinite extent of Bloch states in real space. We develop a semi-classical model for interband generation, which allows the characterization of HHG in terms of classical trajectories. Our semi-classical approach is in quantitative agreement with quantum calculations. The success of the model completes the single-body picture for HHG in semiconductors, revealing a complete picture of the mechanisms shaping HHG. Both the ionization and recombination events are altered by real-space processes that are intuitively explained by the Wannier-Stark ladder. An electron tunnel-ionized by a strong electric field undergoes a diagonal transition on an energy-versus-position diagram. The angle of that transition depends on two competing terms: the reduced energy gap due to the Stark effect, which favours a more horizontal transition, and the dipole coupling matrix element, which favours a vertical transition. We find that, for recombination, the electron prefers to align in real space with its parent hole. The importance of our semi-classical theory extends beyond HHG; it enables the modeling of dynamic processes in solids with classical trajectories, such as coherent control and transport processes, potentially providing better scalability and a more intuitive understanding.
Computational skills are integral to physics research; they enable the operation of instruments, facilitate the analysis of data, and elucidate physical phenomena through simulation. The same can be said for physics curricula; not only does this reflect their importance in research, but incorporating computation into physics courses provides its own pedagogical value. Not surprisingly, many undergraduate programs include dedicated courses that teach introductory computer programming and/or computational methods. We have recently begun integrating computational activities into our second-year physics courses and have adopted a centralized approach: exercises alternate between multiple courses and are administered independently of the course instructors. Our goal is to provide regular, cohesive exposure to computation, contextualized to the students' courses, without overburdening one specific course/instructor. We will discuss the structure of these exercises, how they fit into the broader picture of computation in our program, and the overall impact on our students (both perception and proficiency).
Covid-19 project: What a physics instructor learned by working with engineering coop students to create open problems using WeBWorK
BCcampus has funded a number of projects to increase the use of Open Educational Resources (OER) in British Columbia, including initiatives to create Zero Textbook Cost (or low-cost) credentials. One of the major stumbling blocks to having all first-year textbooks in engineering programs be OER was the lack of a book on mechanics, both statics and dynamics, comparable to the commercially available books. Those textbooks contain more than 7000 problems as well as about 1000 worked-out examples that use high-quality 2D and 3D images, and they often come bundled with an online homework system.
UBC Mechanical Engineering had begun creating problems of this level of complexity using WeBWorK. WeBWorK is an open-source online homework system for delivering individualized homework problems over the web; it gives students instant feedback as to whether or not their answers are correct. WeBWorK has been used by the mathematics community for decades, but there are not many physics problems in the Open Problem Library (OPL), and very few (fewer than 100) of the type needed for first- and second-year engineering students.
Due to Covid-19 lockdowns this summer, I had spare time on my hands and the Federal Government was providing large subsidies for coop students. A small project at UBC with one engineering professor and two students turned into a larger project with six students. I, a physics instructor at a community college, supervised three of those students and worked with a professional graphic artist. The project has continued and I am now in my third semester supervising two students. In the January-April 2021 semester, I am teaching the course associated with this project: a first-year physics course in mechanics, both statics and dynamics, geared toward engineering students. This is an ongoing project and I invite others to join us in this work.
I will present what I learned about WeBWorK, supervising coop students to create questions, and working with a professor from a large university to create open educational resources.
Temporal diffusion spectroscopy (TDS) has been used to infer axon sizes using geometric models that assume axons are cylinders. A celery sample was imaged to test the importance of other geometric models. The vascular bundles and collenchyma tissue (~20 μm cells) in celery can be modeled as containing cylindrical cells, whereas the parenchyma cells are rounder and 3-4 times larger in diameter. We therefore imaged celery with TDS using oscillating gradient spin echo (OGSE) sequences to determine whether the spherical and cylindrical cell models infer significantly different cell sizes, and thus how important the geometrical model is.
A small section of a celery stalk was cut to fit inside a 15 mL sample tube filled with water. The image slice was chosen perpendicular to the length of the celery stalk. The sample was imaged using a 7T Bruker AvanceIII NMR system with Paravision 5.0 and a BGA6 gradient set with a maximum gradient strength of 430357 Hz/cm, and a 3.5 cm diameter birdcage RF coil. The 20 ms apodised cosine gradient pulses were applied with n = 1-20, in steps of 1; two different gradient strengths were used for each frequency, and the gradient pulses were separated by 24.52 ms. A 1 mm thick slice was acquired with the following imaging parameters: 2 averages, 2.56 cm FOV, TR = 1250 ms, TE = 50 ms, matrix 128 x 128, 200 μm in-plane resolution, acquisition time 26.67 minutes per scan (40 scans, 17.78 hours total).
The inferred diameters of cells in celery (14±6μm to 20±12μm) were not statistically different when using the two different geometric models. This is the first step toward understanding the importance of geometric models for TDS.
The authors wish to acknowledge funding from NSERC and Mitacs.
Positron Emission Tomography (PET) images of the brain reflect the level of brain molecular metabolism with low spatial resolution, while magnetic resonance imaging (MRI) brain images provide anatomical structure information with high spatial resolution. To combine the complementary molecular metabolism and spatial texture information, it is useful to fuse the two types of images. Traditional fusion methods are prone to color distortion of the PET image or an unclear texture of the MR image. A novel medical image fusion algorithm is proposed here. Firstly, the source image is processed with the nonsubsampled shearlet transform (NSST) to obtain a low-frequency sub-band and a series of high-frequency sub-bands, which effectively extracts the contour and texture details of the image. Secondly, a regional adaptive weighted fusion is adopted for the low-frequency sub-band, which helps preserve the color fidelity of the fused PET image, while the improved Laplacian gradient sum is used as the input excitation of a parameter-adaptive simplified pulse-coupled neural network (PA-SPCNN) to fuse the high-frequency sub-bands, which improves the clarity of the fused MRI texture. Finally, the inverse NSST is applied to the fused low-frequency and high-frequency sub-bands to obtain the fused image. The experimental results show that the proposed algorithm retains the basic information of the source images, showing the metabolic status of the functional image without losing the texture features of the structural image. The fusion algorithm achieves good results in both subjective and objective evaluation.
Tissue microstructure, such as axon diameters, can be inferred from MRI diffusion measurements either through relating models of the geometry of the tissue and MR parameters, or through directly relating MR measurements to tissue parameters. Some have implemented geometric models to infer axon diameters using temporal diffusion spectroscopy. In order to target smaller diameter axons, we have replaced the pulsed gradient spin echo sequence used in most temporal diffusion spectroscopy measurements with an oscillating gradient spin echo (OGSE) sequence. Here we use OGSE temporal diffusion spectroscopy to infer axon diameters in white matter tracts of the live mouse brain.
Axon diameters in the live mouse brain were inferred using oscillating gradient spin echo temporal diffusion spectroscopy. Two sets of five images were collected in less than 11 minutes from which the measurements were made. Diameters ranged from 4 to 12 μm in various white matter regions including the optic tract, corpus callosum, external capsule, dorsal hippocampal commissure and fasciculus retroflexus. Confirmation of axon diameters using electron microscopy remains to be done. The short imaging time suggests this is the first step toward a feasible imaging method for live animals and eventually for clinical applications.
The authors wish to acknowledge Rhonda Kelley for her help with animal care and imaging. The authors acknowledge funding from NSERC and Mitacs.
Magnetic Resonance Imaging (MRI) detects signal from hydrogen nuclei in biological tissue. MRI requires a homogeneous static magnetic field to generate artifact-free images. The subject is spatially encoded with magnetic field gradients. The signal is acquired in the frequency domain and the image is reconstructed by inverse Fourier transform. Objects with high magnetic susceptibility, such as MRI-safe metallic implants, distort the surrounding magnetic field. This leads to severe artifacts that appear as signal voids in the conventional MRI images, due to rapid intravoxel dephasing, in addition to misregistration of frequencies to position.
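As a concrete illustration of the Fourier encoding described above, the following minimal NumPy sketch shows the basic reconstruction step for a fully sampled Cartesian acquisition (the synthetic square object and the array sizes are illustrative placeholders, not data from this work):

```python
import numpy as np

def reconstruct(kspace: np.ndarray) -> np.ndarray:
    """Reconstruct a magnitude image from fully sampled, centred Cartesian
    k-space data by inverse 2D Fourier transform."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Illustrative round trip with a synthetic object standing in for acquired data
phantom = np.zeros((128, 128))
phantom[48:80, 48:80] = 1.0                      # simple square "object"
kspace = np.fft.fftshift(np.fft.fft2(phantom))   # simulated frequency-domain data
image = reconstruct(kspace)                      # recovers the object
```

Field inhomogeneity adds position-dependent phase during readout in frequency-encoded acquisitions, which is what produces the misregistration and signal voids described above.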
Pure phase encoding techniques with short encoding times are largely immune to magnetic field inhomogeneity artifacts. This is because the constant signal evolution times can be sufficiently short that no appreciable dephasing has occurred. High quality artifact-free MRI images were acquired with pure phase encoding techniques, from which the magnetic field distribution around the metal was derived. This approach was compared with conventional MRI methods, which failed to map the magnetic field in high susceptibility regions. Although it is challenging to apply the proposed method in a routine clinical MRI scan, the measured magnetic field distribution could enable the development of novel nonlinear encoding techniques, where the metal induced magnetic field distortion is exploited to provide spatial information.
Introduction: Intracellular pH (pHi) is a hallmark of altered cellular function in the tumour microenvironment and of its response to therapies. One of the main acid-extruding membrane transport proteins in cells is the Na+/H+ exchanger isoform 1 (NHE1). Chemical exchange saturation transfer (CEST) MRI can uniquely image pHi. In CEST-MRI, contrast is produced by exciting exchangeable tissue protons at their specific absorption frequency and observing the transfer of magnetization to bulk tissue water. Amine and amide concentration-independent detection (AACID) is a ratiometric approach that uses the distinctive sensitivity of amine and amide protons to CEST contrast. The AACID value is inversely related to tissue pHi. One way to achieve tumour acidification as a therapeutic strategy is by blocking the NHE1 transporter. Cariporide is a potent inhibitor of NHE1. We have shown that cariporide can selectively acidify U87MG glioma in mice. The goal of this study was to determine whether cariporide also selectively acidifies a rat C6 glioma tumour model immediately following injection, by mapping tumour pHi.
Methods: A 2 μL suspension of 10^6 C6 glioma cells was injected into the right frontal lobe of six 8-week-old male rats. To evaluate the effect of cariporide on tumour pHi, rats received an IP injection of the drug (6 mg/kg in 2 mL) two weeks after tumour implantation. They received the drug inside a 9.4T scanner to measure the change in pHi following injection.
Results: Starting five minutes after injection, CEST-MRI data were collected for 3 hours. For data analysis, we compared the first maximum change in AACID value post-injection with the pre-injection value. Approximately 60 minutes after injection, the average AACID value in the tumour significantly increased (p<0.05). The average AACID value in the tumour post-injection was 5.4% higher compared to pre-injection, corresponding to a 0.26-unit lower pHi. The average AACID value in contralateral tissue also increased in a similar way.
Conclusion: We did not observe selective tumour acidification following injection as was observed in the previous study. The reason for this discrepancy is currently unknown but may be related to potential differences in tumour vasculature that may limit the ability of cariporide to infiltrate the tumour. Future work includes increasing cariporide dose and modifying our quantification method to increase the temporal stability of the AACID measurement.
**Introduction:** MRI’s low sensitivity, caused by the use of nuclei with low gyromagnetic ratios or by low magnetic field strength, can presently be improved with expensive high-field MRI hardware and/or expensive enriched isotopes. We propose a new method that does not require any extra signal averaging or hardware to improve the quality of MRI images. We will use a significant k-space under-sampling acquisition method where only a certain percentage of the k-space points will be acquired per image, corresponding to the acceleration factor (AF); it follows that one can acquire ten under-sampled images in the same time as one fully-sampled image. By averaging each possible combination of images from the under-sampled set, a density decay curve can then be fitted and the images reconstructed using the Stretched-Exponential Model (SEM) combined with Compressed Sensing (CS) [1].
**Method:** 1H MR was performed on a resolution phantom using a low-field (0.074 T) MRI scanner and a home-built RF coil. Nine 2D fully-sampled k-spaces were acquired. Combinations of 2, 3, and 4 averages were carried out for each possible permutation, resulting in 14 k-spaces in total (2 combinations for 4 averages, etc.); these were retroactively under-sampled for three AFs (7, 10, 14). Three Cartesian sampling schemes (FGRE, x-Centric [2], and 8-sector FE Sectoral [3]) were used. The SNR attenuation is assumed to represent a decrease of the resonant isotope density in the phantom after diluting it with the non-resonant isotope. The resulting signal decay (density) curve was fitted using the Abascal method [1].
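A minimal sketch of the kind of stretched-exponential fit referred to above is given below; the functional form, the placeholder data, and the use of scipy's curve_fit are illustrative assumptions only and do not reproduce the Abascal reconstruction [1]:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(n, s0, tau, beta):
    """Stretched-exponential decay S(n) = S0 * exp(-(n/tau)**beta)."""
    return s0 * np.exp(-(n / tau) ** beta)

# Placeholder signal-vs-number-of-averages data standing in for measured values
n_avg = np.array([1, 2, 3, 4, 6, 9], dtype=float)
signal = np.array([1.00, 0.78, 0.64, 0.55, 0.43, 0.31])

# Fit the decay curve; p0 is an initial guess for (S0, tau, beta)
(p_s0, p_tau, p_beta), _ = curve_fit(stretched_exp, n_avg, signal, p0=(1.0, 3.0, 0.8))
```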
**Results:** The SNRs of the 9-average k-space image and the original image were 16 and 5, respectively. The SNRs of the three sampling schemes were 15 for FGRE, 19 for x-Centric, and 17 for FE Sectoral.
**Conclusion:** The improved SNR of the generated images for all sampling schemes demonstrates that the SEM equation can be adapted to fit the SNR decay dependence of the MR signal. Since this technique does not require extra hardware, the proposed method could be implemented on current MRI systems and yield improved images. Due to the CS-based reconstruction, higher AFs lead to more visible artefacts; these could be reduced by a Deep Learning-based correction after the fact [4].
**References:** [1] Abascal et al., IEEE Trans Med Imaging (2018); [2] Ouriadov et al., MRM (2017); [3] Khrapitchev et al., JMR (2006); [4] Duan et al., MRM (2019)
Magnetic resonance imaging (MRI) is widely used as a non-invasive diagnostic technique to visualize the internal structure of biological systems. MRI has limited spatial resolution and the microscopic behaviour within an image voxel cannot be visualized with qualitative images. Quantitative analysis of molecular diffusion provides insights into the microscopic structure beyond the MRI image resolution. It is challenging to analytically derive the MR diffusion signals for complex microscopic environments, particularly with susceptibility effects. In this work, an easy to use open-source Monte Carlo algorithm has been developed to simulate MR diffusion measurements under arbitrary conditions.
The self-diffusion of water molecules can be described by Brownian motion. The Monte Carlo method was applied to simulate Brownian motion in a user-defined microscopic environment. The fast simulation can be performed for any MRI experiment with a user-defined magnetic field distribution. The method has been applied to predict nanoparticle configurations: magnetic nanoparticles, serving as biosensors, distort the local magnetic field, leading to changes in the MR diffusion signals. The simulation agreed with the experimental results, and the nanoparticle concentration in water could be determined with MR diffusion measurements.
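A minimal sketch of the core of such a Monte Carlo calculation is given below for the simplest case of free diffusion in a constant gradient; the parameters, the constant-gradient waveform, and the absence of restricting geometry or a user-defined field map are simplifying assumptions for illustration only:

```python
import numpy as np

def mc_diffusion_signal(n_spins=10000, n_steps=1000, dt=1e-5,
                        D=2.0e-9, gamma=2.675e8, G=0.05, seed=0):
    """Monte Carlo estimate of the diffusion-attenuated MR signal magnitude
    for spins undergoing free Brownian motion along a constant gradient.

    Units: D in m^2/s, gamma in rad/s/T, G in T/m, dt in s.
    Each step adds a Gaussian displacement (variance 2*D*dt) and a phase
    increment gamma*G*x*dt; the signal is the ensemble average of exp(i*phase).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_spins)        # positions along the gradient axis (m)
    phase = np.zeros(n_spins)    # accumulated precession phase (rad)
    for _ in range(n_steps):
        x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_spins)
        phase += gamma * G * x * dt
    return np.abs(np.mean(np.exp(1j * phase)))

signal = mc_diffusion_signal()   # roughly 0.9 for these illustrative parameters
```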
We have developed an efficient, easy-to-use algorithm for rapid diffusion simulation in different microscopic environments with arbitrary magnetic fields. This simulation will be employed to optimize nanoparticle biosensor systems for a wide range of targets, including cancer cells and the COVID-19 virus.
In numerical relativity, marginally outer trapped surfaces (MOTSs) (often referred to as apparent horizons) are the main tool to locate and characterize black holes. For five decades it has been known that during a binary merger, the initial apparent horizons of the individual holes disappear inside a new joint MOTS that forms around them once they are sufficiently close together. However, the ultimate fate of those initial horizons has remained a subject of speculation. In this talk I will introduce new mathematical tools that can be used to locate and understand axisymmetric MOTSs. In particular, I will show that the MOTS equations can be rewritten as a pair of coupled second order equations that are closely related to geodesic equations and hence dubbed the MOTSodesic equations. Numerically, these are very easily solved and in the linked talks by KTB Chan, R Hennigar and S Muth they will be used to identify and study rich families of previously unknown MOTSs in a variety of black hole spacetimes, including both exact solutions and binary merger simulations. I will also show that the MOTS stability operator bears the same relation to MOTSodesics as the Jacobi deviation operator does to geodesics, and consider the implications.
The common picture of a binary black hole merger is the “pair of pants” diagram for the event horizon. However, in many circumstances, such as those encountered in numerical simulations, the event horizon may be ill-suited and it is more practical to work with quasi-local definitions of black hole boundaries, such as marginally outer trapped surfaces (MOTS). The analog of the pair of pants diagram for the apparent horizons remains to be fully understood. In this talk, I will discuss the complete picture for the merger of two axisymmetric black holes. I will begin by introducing new classes of MOTS present in Brill-Lindquist initial data. I will then discuss the role played by these and related surfaces in understanding the final fate of the apparent horizons of the initial two participants in the merger.
In the case of binary black hole mergers, the surface of most obvious interest, the Event Horizon, is often computationally difficult to locate. Instead, it is useful to turn to quasi-local characterizations of black hole boundaries, such as Marginally Outer Trapped Surfaces (MOTS), which are defined for a single time slice of the spacetime, and the outer-most of which is the apparent horizon. In this talk, I will describe ongoing work focused on understanding MOTS in the interior of a five-dimensional black hole; both static and rotating. Similar to the four-dimensional Schwarzschild case previously studied, we find examples of self-intersecting MOTS with an arbitrary number of self-intersections. This provides further support that self-intersecting behavior is rather generic. I will also discuss the second stage of our research, which is for a rotating 5D black hole spacetime. These two cases fit into a larger project involving exploration of the generality of self-intersecting behaviour in MOTS, within spacetimes of increasing diversity.
Despite the constant stream of black hole merger observations, black hole mergers are not yet fully understood. The phenomenon seems simple enough, but the details of how the two apparent horizons end up as one horizon are unclear due to the non-linear nature of the merger process. Recent numerical work has shown that there is a merger of self-intersecting Marginally Outer-Trapped Surfaces (MOTS) during the black hole merger. Subsequent papers have further investigated MOTS in a simpler, static scenario: that of a Schwarzschild black hole. Such cases require less machinery and can be solved with everyday computers. Those numerical calculations show an infinite number of self-intersecting MOTS hidden within the apparent horizon, as well as open surfaces (MOTOS). The importance of Schwarzschild MOTS should not be underestimated despite their relative simplicity, as such MOTS describe an extreme-mass-ratio black hole merger, where one of the black holes is far more massive than the other. In this talk, I will discuss the current understanding of black hole mergers as shown numerically, and my work investigating Schwarzschild MOTS in maximally-extended Kruskal-Szekeres coordinates.
The observation of supermassive black holes (SMBHs) of mass over a billion solar masses within the first billion years after the Big Bang challenges standard models of the growth of massive objects. Direct collapse black holes arising from a short-lived supermassive star phase have been proposed as a means to form the SMBHs in the required time. In this work we show that a weak cosmological magnetic field may be sufficient to allow direct collapse into very massive objects, overcoming the usual barrier of angular momentum. Dynamo action during the accretion phase enhances the effect of the magnetic field. I also review generally the four distinct modes of gravitational collapse with magnetic fields: strong field/strong coupling, weak field/strong coupling (emphasized here), strong field/weak coupling, and weak field/weak coupling.
One of the more exciting things to emerge from black hole thermodynamics in the past 10 years is the understanding that black holes can undergo a broad range of chemical-like phase transitions, including liquid-gas phase transitions, triple points, superfluid transitions, polymer-type transitions, and exhibit critical behaviour. It is even possible to consider black holes as the working material for heat engines. The efficiencies for a variety of black holes can be calculated and compared against each other.
In this talk I will discuss the connection between critical behaviour and the efficiency of black hole heat engines. I first consider static black holes, whose heat capacity at constant volume vanishes (Cv = 0). Using the near-critical expansion of the equation of state, the coefficients appearing in this expansion can be found from an engine cycle placed around the critical point on a P-V plot.
I will discuss the importance and applications of the simplifications made, along with how this result allows one to go from the near critical expansion of the equation of state directly to a conclusion about the behaviour of a heat engine near the critical point.
One of the more exciting things to emerge from black hole thermodynamics is that black holes can form the working material for heat engines. I explore the connection between the critical behaviour of black holes and their efficiency as heat engines over a range of dimensions and for a variety of theories of gravity.
I first show that their efficiency as heat engines near the critical point can be written in general dimensions in terms of the variables characterizing the geometry of the cycle and the critical exponents. Engines near the critical point approach the Carnot efficiency, with the rate of approach determined by the universality class of the black hole. I will specifically consider a broad range of charged black holes, Lovelock black holes, and black holes with isolated critical points. I will then discuss work in progress exploring this formalism for black holes whose specific heat at constant volume is nonzero, applying it to examples such as rotating black holes and STU black holes.
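For reference, the engine efficiency and the Carnot bound referred to above take their standard thermodynamic forms (quoted here for orientation; nothing black-hole-specific is assumed): $\eta = W/Q_H = 1 - Q_C/Q_H$, with the Carnot limit $\eta_C = 1 - T_C/T_H$ set by the temperatures of the hot and cold reservoirs between which the cycle operates.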
We examine the thermodynamics of a new class of asymptotically AdS black holes with non-constant curvature event horizons in Gauss-Bonnet Lovelock gravity, with the cosmological constant acting as thermodynamic pressure. We find that non-trivial curvature on the horizon can significantly affect their thermodynamic behaviour. We observe novel triple points in 6 dimensions between large and small uncharged black holes and thermal AdS. For charged black holes we find a continuous set of triple points whose range depends on the parameters in the horizon geometry. We also find new generalizations of massless and negative mass solutions previously observed in Einstein gravity.
Additional spatial dimensions compactified to submillimeter scales serve as an elegant solution to the hierarchy problem. As a consequence of the extra-dimensional theory, primordial black holes can be created by high-energy particle interactions in the early universe. While four-dimensional primordial black holes have been extensively studied, primordial black holes have received little attention in the context of extra dimensions. We adapt and extend previous analyses of four-dimensional primordial black holes in order to study the impact extra dimensions have on cosmology. We find new constraints on both extra-dimensional primordial black holes and the underlying extra-dimensional theories by combining analyses of Big Bang Nucleosynthesis, the Cosmic Microwave Background, the Cosmic X-ray Background, and galactic centre gamma-rays. With these constraints we explore to what extent extra-dimensional primordial black holes can comprise the dark matter in our universe.
Studies of atomic nuclei furthest from stability often reveal surprising phenomena such as exotic structures, highly-deformed shapes and rare modes of radioactive decay. Understanding the properties of the most exotic nuclei is crucial for constraining nuclear reaction rates in explosive astrophysical scenarios and explaining the elemental abundances of the stable and radioactive isotopes that they eject into the universe. These studies pose a significant experimental challenge that requires powerful rare-isotope production and accelerator facilities coupled with state-of-the-art detection systems. In this presentation, I will describe some of the more exotic modes of radioactivity that are relevant in neutron-deficient nuclei, what they can tell us about nucleosynthesis and I will present a novel detector called the Regina Cube for Multiple Particles that was designed and built at the University of Regina for experiments at TRIUMF with the GRIFFIN spectrometer.
Globular clusters contain some of the oldest stars in the universe and provide a key method of understanding the formation and evolution of galaxies. Unfortunately, there are a number of mysteries about the history of globular clusters. One of the most important is the existence of multiple populations, and evidence that the current generation of stars within globular clusters has been elementally polluted by the ashes of some unknown previous stellar event or events.
At present, the uncertainties in the stellar nuclear reaction rates are too high for astrophysical models to identify the polluting site or sites. Sensitivity studies have identified a number of important reaction rates, including $^{39}$K($p,\gamma$)$^{40}$Ca, along with the most important resonances which must be measured. Once these reaction rates have been determined, the polluting site can be identified.
In this talk we will present results from direct measurements of important resonance strengths in $^{39}$K($p,\gamma$)$^{40}$Ca performed with the DRAGON recoil separator at TRIUMF in Vancouver, Canada including the first direct measurement of the resonance predicted to dominate the reaction rate in the expected range of astrophysical temperatures.
The investigation of radiative capture reactions involving the fusion of hydrogen or helium is crucial for the understanding of stellar nucleosynthesis pathways as said reactions govern nucleosynthesis and energy generation in a large variety of astrophysical burning and explosive scenarios. However, direct measurements of the associated reaction cross sections at astrophysically relevant low energies are extremely challenging due to the vanishingly small cross sections in this energy regime. Additionally, many astrophysically important reactions involve radioactive isotopes, which pose challenges for beam production and background reduction.
One of the key aspirations in experimental nuclear astrophysics is the determination of the stellar origin of the cosmic γ-ray emitting isotope $^{26}$Al, which is still posing an experimental challenge.
The observation of the characteristic 1.809 MeV $\gamma$-ray signature throughout the interstellar medium, as well as isotopic excesses of $^{26}$Mg found in meteorites, provided evidence for the existence of $^{26}$Al in the early Solar System; however, its exact origin is still being discussed. Understanding the stellar nucleosynthesis of $^{26}$Al is complicated by the presence of a 0$^{+}$ isomer located 228.31 keV above the ground state. Since this level undergoes super-allowed $\beta^{+}$ decay directly into the $^{26}$Mg ground state, the emission of the 1.809 MeV $\gamma$-ray is bypassed, and the isomer does not contribute to the directly observed galactic $^{26}$Al abundance; it does, however, influence the $^{26}$Al:$^{27}$Al ratio in presolar grains. Thus, only by studying the reactions involved in the production and destruction of both $^{26}$Al and $^{26m}$Al can one identify how various astrophysical environments contribute to the $^{26}$Al $\gamma$-ray flux.
To date, the available experimental information on the rate of the $^{26m}$Al(${\it p}$,$\gamma$)$^{27}$Si reaction is rather limited and considerable uncertainties remain. In this contribution, I will present results obtained from a recent analysis of an inverse kinematics study performed with DRAGON (Detector of Recoils And Gammas Of Nuclear Reactions) using an isomeric $^{26m}$Al beam to investigate the 448 keV resonance in $^{26m}$Al(${\it p}$,$\gamma$). Additionally, a brief overview of other recent experimental activity at DRAGON will be presented.
Measurements of correlation parameters in nuclear β decay have a long history of helping shape our current understanding of the fundamental symmetries governing our universe: the standard model. A variety of observations indicate this model is incomplete, so scientists continue to search for what may lie beyond the standard model. Nuclear β decay continues to play an important role in this search for new physics, one that is complementary to other searches. To achieve the precision required to be competitive with the LHC, for example, elegant and sensitive techniques are required. The 6He Cyclotron Radiation Emission Spectroscopy (CRES) experiment under development at CENPA, the University of Washington, aims to make the world's most sensitive search for tensor components of the weak interaction through an energy-spectrum shape measurement of this pure Gamow-Teller decay. As demonstrated by Project 8, the energy of β-decay electrons emitted in a magnetic field can be measured to a precision of 15 eV. If the 6He ions are confined in a Penning trap to avoid wall effects, the ultimate precision on the Fierz interference parameter is estimated to be ΔbFierz = 10^-4. This talk will outline the 6He CRES experiment and our plans to observe the cyclotron radiation of electrons emitted from 6He confined in a Penning trap.
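For orientation, the CRES technique exploits the fact that the cyclotron frequency of an electron in a magnetic field $B$ depends on its total relativistic energy (standard relation, not specific to the 6He setup described above): $f_c = \frac{eB}{2\pi\,\gamma m_e} = \frac{eB}{2\pi\,(m_e + E_{\mathrm{kin}}/c^2)}$, so a precise measurement of the emitted radiation frequency in a known field determines the electron kinetic energy.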
The proton drip-line is not firmly established for heavy masses. Near N=82, the masses of neutron-deficient Yb and Tm isotopes were measured. In Tm (Z=69), the precise location of the drip-line could be determined, and for both isotopic chains the stabilizing effect of the N=82 shell was examined. These elements now represent the largest atomic numbers at which this shell closure has been directly probed through the 2-neutron separation energy.
These measurements were accomplished using the recently commissioned Multiple Reflection, Time-Of-Flight Mass Spectrometer at the TITAN facility. Its sensitivity and in-device beam purification capabilities permitted the determination of not only the ground-state masses, but also that of a long-lived isomer in $^{151}$Yb. In this presentation an overview of the measurement technique will be given before the scientific results are discussed.
The high-temperature superconductor YBa2Cu3O7 (YBCO) can be systematically disordered by irradiation with a He-ion beam to induce a metal-insulator transition (MIT). As a result, tunnel junctions exhibiting Josephson tunneling properties can be constructed in planar YBCO films using a He-ion microscope. We have used superconducting loops with disordered YBCO junctions to develop devices that together form fully recurrent neural networks.
Arrays of disordered loops with junctions in planar YBCO thin films can demonstrate both neuron-like and synapse-like properties. A different architectural approach has been taken by replacing individual synaptic connections with a disordered array of superconducting loops with Josephson junctions. The disordered array can be connected to neurons at its incoming and outgoing nodes to form a fully connected, recurrent neural network, and this demonstrates the properties of a synaptic memory. The advantage of this approach is that the available memory increases exponentially with the size of the array while still fully connecting all the neurons in the network. A neuron-like device is designed with disordered YBCO Josephson junctions that demonstrates leaky integrate-and-fire behaviour with spiking output and a dynamically varying threshold. I will discuss the designs, demonstrate them using equivalent-circuit simulations, and propose a collective synaptic network architecture that can also work with various other materials of interest.
I was a graduate student in Jules Carbotte's group in the early 1990s, during the heyday of high temperature superconductivity. At the time (for me), everything was new and everything was exciting and it felt as if we were about to learn something beautiful about the world. I recognize now how many of those feelings are tied to where I was and who I was working for. Indeed, I have much to thank Jules for: for helping to create a community of people working together on a common problem; for his infinite patience as a supervisor; and for the joy that physics gave him, which he shared freely with his students. In this talk, I will pay tribute to my former supervisor and mentor.
This talk will focus on an overview of Eliashberg theory, a formalism that Jules was very well known for. But I will also discuss some potential shortcomings of this framework, as time permits, from the weak coupling to the strong coupling limit, with polaron physics, and applicability to the hydride and superhydride materials.
A crucial challenge in engineering modern, integrated systems is to produce robust designs. However, quantifying the robustness of a design is less straightforward than quantifying the robustness of products. For products, in particular engineering materials, intuitive, plain-language terms of strong versus weak and brittle versus ductile take on precise, quantitative meaning in terms of stress–strain relationships. Here, we show that a “systems physics” framing of integrated system design produces stress–strain relationships in design space. From these stress–strain relationships, we find that both the mathematical and intuitive notions of strong versus weak and brittle versus ductile directly characterize the robustness of designs. We use this to show that the relative robustness of designs against changes in problem objectives has a simple graphical representation. This graphical representation, and its underlying stress–strain foundation, provide new metrics that can be applied to classes of designs to assess robustness from the feature level to the system level.
Graphene is among the most promising materials considered for next-generation gas sensing due to its properties including high mechanical strength and flexibility, high surface-to-volume ratio, large conductivity, and low electrical noise. While gas sensors based on graphene devices have already demonstrated high sensitivity, one of the most important figures of merit, selectivity, remains a challenge. In the last few years, however, surface functionalization emerged as a potential route to achieve selectivity. In this talk, we focus on experiments where we functionalized the surface of CVD graphene field-effect transistors (GFET) through thermal evaporation of metal-free phthalocyanines and copper phthalocyanines. We present and discuss sensitivity and selectivity results obtained when such sensors are exposed to volatile organic compounds such as ethanol, toluene, formaldehyde, and acetone. In general, the functionalized GFET presented enhanced selectivity for oxygen-containing molecules (formaldehyde and acetone).
Chemical warfare agents (CWAs) are potential threats to civil society and defence personnel. In recent years, many efforts have been made to develop scalable, rapid, and accurate detection systems that can identify trace amounts of CWAs. Here we report a graphene-based field-effect transistor (GFET) sensor able to detect 800 ppb of dimethyl methylphosphonate (DMMP), a simulant of the nerve agent sarin. We observe enhanced sensitivity when the GFET sensor is exposed to a few mW of UV light. Back-gate measurements performed before and during exposure to the analyte allow us to investigate the sensing mechanism while monitoring the induced changes in carrier concentration and mobility in graphene.
Graphene field effect transistors (GFETs) have an enormous potential for the development of next-generation gas sensors, but more efforts are required to improve their sensitivity and selectivity. In this talk we discuss UV illumination as a promising method to enhance the performance of GFETs for the detection and recognition of analytes such as ethanol, water vapor and dimethyl methylphosphonate (DMMP), a molecule with structural similarities to nerve agents such as sarin. We show that illuminating the devices in operando with a UV LED results in both improved sensitivity and selectivity. By monitoring the sensing response of the GFETs as a function of gate voltage, we directly demonstrate that a shift in the Dirac point due to the optical doping is associated with the increased sensitivity. Moreover, we discuss how the substrate and fabrication residues on the surface of the graphene sensors can play a role in modifying the sensing performance.
In this work we explore the use of multi-layer graphene (MLG) films grown by chemical vapor deposition for adaptive thermal camouflage. Using different ionic liquids, we tune the opto-electronic properties of MLG (150 – 200 layers) and investigate changes in optical reflectivity and emissivity in the infrared region (IR). We fabricate devices having a metallic back electrode supporting a porous membrane onto which we deposit the MLG. We use both non-stretchable polyethylene (PE), and stretchable polydimethylsiloxane (PDMS) as porous membranes. Using a thermal imaging system, we demonstrate that even when the device temperature is maintained higher than the environment, the MLG emissivity can be electrically controlled such that the device appears indistinguishable from the environment [1]. Moreover, we evaluate the performance of such devices based on flexible textiles towards developing a new material platform for defense applications.
The science and technology of alkanethiol self-assembled monolayers (SAMs) on gold and other solid surfaces is a subject of ongoing research driven by the fundamental interest and attractive practical applications. The structural organization of alkanethiol SAMs is dominated by the strong intermolecular interaction, manifested by the enhanced quality of SAMs formed by long chain alkanethiols. Thiol ligands cover a larger number of binding sites on nanostructured rather than atomically flat gold surfaces,$^{1, 2}$ ascribed to the presence of curved surfaces of nanoparticles and vertices of nanostructured surfaces. The observation of this effect on surfaces of compound semiconductors, such as GaAs, is highly challenging due to the problem with maintaining surface stoichiometry and controlling oxide formation on these materials. Thus, there is anecdotal evidence that formation of high-quality SAMs on compound semiconductors requires flat surfaces.
We have investigated the formation of mercaptohexadecanoic acid (MHDA) SAMs on digitally photocorroded (DIP) surfaces of (001) GaAs/Al$_{0.35}$Ga$_{0.65}$As nanoheterostructures (5 pairs of GaAs/AlGaAs, d$_{GaAs}$ = 12 nm, d$_{AlGaAs}$ = 10 nm). The DIP process allows etching with a step resolution of better than 0.1 nm, also making possible the in situ deposition of different SAMs on freshly etched surfaces. FTIR spectroscopy revealed increasing absorbance intensity and decreasing vibrational energy of the -CH$_2$ modes of MHDA SAMs formed on GaAs surfaces with increasing nano-scale roughness produced in an ammonia solution. The absorbance amplitude of 1.08 x 10$^{-2}$ (E$_{CH2}$ = 2919.6 cm$^{-1}$, FWHM = 20.3 cm$^{-1}$) observed for the SAM developed on the surface of the 5$^{th}$ GaAs layer was 11-fold greater than that on the surface of the 1$^{st}$ GaAs layer (E$_{CH2}$ = 2922.0 cm$^{-1}$, FWHM = 25 cm$^{-1}$), which suggests the formation of an excellent-quality MHDA SAM. Our results suggest the feasibility of attractive applications of the DIP process for the research of atomic-scale interfaces involving III-V semiconductors and the manufacturing of advanced sensors and nanodevices.
The National Research Council Canada (NRC) was contracted by Infrastructure Canada and the City of Toronto to improve the understanding of the performance of various catch basin covers under a range of conditions. A full-scale model roadway, 10.7 m long and 2.6 m wide, was built in the NRC's Coastal Wave Basin; the water depth in front of the catch basin was varied from 0.5 - 15 cm, the road grade from 0.5 - 10.0%, and the cross slope from 2.0 - 4.0%. Early in the test protocol it was noted that the capacitance wire wave gauges used to measure the water depth on the roadway provided inconsistent results. In this work we will compare water depth measurements from a manual point gauge, an acoustic sensor, and the capacitance wave probes. We will examine how each of the sensors is biased and the impact of those biases on the results.
Molybdenum possesses seven stable isotopes and the relative amounts of these isotopes are found to vary in nature. This is because physical and chemical processes can redistribute the isotopes in a system due to differences in their atomic masses. Specific processes can leave an “isotopic fingerprint” that may be recorded in the isotopic composition of the element in a given sample. The interpretation of these data can enable one to elucidate the source(s) and processes that may have affected the element. An important example of a potential Mo source is petroleum coke (PC). This is a by-product of the extraction of crude oil from the Oil Sands in northern Alberta and, although it is employed as a fuel source, it is not used as quickly as it is produced, allowing it to accumulate on site [1]. There is evidence that PC dust spread by wind contributes to an increase in polycyclic aromatic hydrocarbons (PAH) and polycyclic aromatic compounds (PAC) accumulating in lichen samples in forests in the Athabasca Oil Sands Region [2]. This could mean that trace metals are also deposited in forests or bodies of water. Natural sources from the surrounding oil sands and bitumen seeps mean that the Athabasca River contains relatively high concentrations of metals compared to its glacial counterparts. It is vital to distinguish between natural and anthropogenic inputs so that appropriate procedures can be put in place to minimize human impacts on the region. Concentration measurements of Mo in aqueous environments provide ambiguous results as they do not necessarily distinguish between natural and industrial sources. Isotope abundance data can provide additional information on the source and history of the material. This project measured the isotopic composition of Mo leached from PC. Isotope determination was done using the double spike method measured with a Neptune multi-collector ICP-MS. The δ97/95 values for three oven-dried PC leachate samples were determined to be +0.13 ‰, +0.34 ‰, and +0.81 ‰ with a 2σ uncertainty of 0.06 ‰. These samples had Mo concentrations of 3.16, 9.47, and 0.55 µg/L, respectively (uncertainty of 5 %). The isotopic composition of samples from this region gives a better understanding of the sources and sinks of Mo and can be combined with data from snow, lichens, and water to identify potential environmental concerns caused by PC distributed by wind.
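For context, the quoted δ97/95 values follow the standard delta notation for isotope amount ratios, expressing the per-mil deviation of the sample ratio from that of a reference standard (the specific reference standard is not named in the abstract): $\delta^{97/95}\mathrm{Mo} = \left[\frac{(^{97}\mathrm{Mo}/^{95}\mathrm{Mo})_{\mathrm{sample}}}{(^{97}\mathrm{Mo}/^{95}\mathrm{Mo})_{\mathrm{standard}}} - 1\right] \times 1000$ ‰.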
1. J. M. Robertson et al. Aqueous- and solid-phase molybdenum geochemistry of oil sands fluid petroleum coke deposits, Alberta, Canada. Chemosphere 217, (2019).
2. M. S. Landis et al. Source apportionment of an epiphytic lichen biomonitor to elucidate the sources and spatial distribution of polycyclic aromatic hydrocarbons in the Athabasca Oil Sands Region, Alberta, Canada. Science of the Total Environment 654, (2019).
High-power lasers are rapidly becoming standard tools in advanced manufacturing, mainly in the form of laser welding, laser cutting, and laser additive manufacturing. Of these applications, laser welding in the electric mobility sector---particularly in the manufacturing of battery packs---presents unique challenges. Weld depth needs to be precisely controlled, not only to ensure joint strength, but also to ensure the weld does not puncture into the lithium ion cell. In addition, these processes often involve highly reflective metals (such as copper), which have material properties that lead to unstable welds; this requires an unprecedented level of control to ensure weld quality and depth. To better monitor and control weld quality, we need fully in-line monitoring during the process. Inline Coherent Imaging (ICI) is a process monitoring technique that has been demonstrated to measure keyhole depth (down to 15 µm axial resolution) at high imaging rates (~200 kHz), even in high aspect ratio features. Due to its interferometric nature, coherent imaging has unparalleled sensitivity and dynamic range. However, it suffers from speckle noise, which degrades high-speed measurements by orders of magnitude, and from false interfaces that arise from unwanted interferences (“autocorrelation” peaks). These pose a significant challenge to quality assurance and closed-loop control, particularly in highly dynamic laser processing applications such as copper welding. To mitigate these problems, we have integrated a second, automatically synchronized imaging channel into a standard ICI system by exploiting a previously unused part of the imaging window. This “witness” image allows us to identify real signatures based on correlation and filter out the uncorrelated noise. Using this system, we have demonstrated the complete removal of autocorrelation artifacts and have increased the signal-to-noise ratio by a factor of two, with no loss of imaging rate or spatial resolution compared to standard ICI. When applied to imaging laser keyhole welding, the false interface detection rate is reduced from 10% to 0.15%, yielding improved tracking of the true morphology.
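The correlation-based rejection is conceptually simple; the sketch below illustrates the gating idea on per-A-line depth estimates from the two channels (the agreement tolerance and the averaging of the two channels are placeholder choices for illustration, not the actual ICI processing chain):

```python
import numpy as np

def gate_with_witness(main_depth: np.ndarray, witness_depth: np.ndarray,
                      tol_um: float = 20.0) -> np.ndarray:
    """Keep a depth estimate only where the main and witness channels agree
    to within tol_um; disagreeing points (speckle dropouts, autocorrelation
    artifacts) are marked invalid (NaN).  Inputs are depth traces in microns,
    one value per A-line; tolerance and averaging are illustrative choices."""
    agree = np.abs(main_depth - witness_depth) <= tol_um
    return np.where(agree, 0.5 * (main_depth + witness_depth), np.nan)
```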
In this talk, recent highlights and future prospects will be discussed.
As the most recently-discovered particle of the Standard Model, the Higgs boson is fundamental to our understanding of particle physics and is the focus of much attention at CERN's Large Hadron Collider (LHC). The Higgs boson’s couplings to other particles are predicted by the Standard Model (SM), so performing precise measurements of these couplings can probe for discrepancies and constrain theories beyond the SM. This talk will present recent work by the ATLAS experiment at CERN to characterize the Higgs boson by measuring its coupling to W bosons using data collected at the LHC from 2015-2018. It will highlight the first ATLAS observation of H->WW* decay in the vector boson fusion (VBF) production channel and its role in rigorously testing the SM.
In the Standard Model, the interactions between gauge bosons are completely specified and any deviations from this expectation would indicate the presence of new physics phenomena at unprobed energy scales. The study of the self-couplings of electroweak gauge bosons is therefore a powerful approach to searching for new physics phenomena. The large data samples collected by the ATLAS experiment at the LHC make it possible to now explore extremely rare processes involving the interaction between four gauge bosons.
In this talk I will discuss the search for evidence of one of these rare processes, namely, the vector boson scattering between a W boson and a photon, whose production cross-section has never before been measured by the ATLAS collaboration. Making a measurement of this electroweak process is challenging due to the presence of a large and irreducible background from processes involving the strong interaction, which are mismodelled at high di-jet mass where we expect the greatest sensitivity to VBS. I will discuss analysis techniques being used to make a measurement in the presence of this large and mismodelled background.
The phase-out of ozone-depleting substances has led to the release into the atmosphere of new generations of fluorinated coolants and propellants. These molecules contain C-F bonds, which make them strong absorbers in the mid-infrared spectral region. To properly assess the impact of these molecules on climate, their radiative forcing must be calculated from their experimental and/or theoretical absorption cross-sections.
The common way to obtain the data is through the acquisition of laboratory absorption spectra by Fourier transform spectroscopy. This process allows the study of the temperature and pressure dependence as well as the impact of hot bands and combination bands. However, the acquisition is time-consuming and not always straightforward.
A second method consists of calculating the vibrational band positions and intensities of the molecule by quantum mechanical calculations and simulating the cross-section spectra. The calculations can be carried out with ab initio or density functional theory methods. Although theoretical data can quickly estimate the radiative efficiency of a molecule, the results depend on the level of theory and still require empirical corrections to match their experimental counterparts. Nevertheless, theoretical calculations have proven to be an efficient tool for analyzing conformational populations and for providing data in spectral ranges that cannot easily be accessed experimentally.
Over the past few years, our group has analyzed the radiative properties of multiple molecules and extracted their radiative efficiency and global warming potential. In this talk, we will discuss recent results and findings. In particular, we will show that the best results come from a compilation of both experimental and theoretical results.
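As a rough illustration of how a radiative efficiency can be obtained from an absorption cross-section, the sketch below integrates the cross-section against a tabulated per-wavenumber instantaneous forcing kernel (for example a Pinnock-type function); the array names, the placeholder kernel values, and the unit bookkeeping are assumptions for illustration and would need to match the tabulation actually used:

```python
import numpy as np

def radiative_efficiency(wavenumber, cross_section, forcing_kernel):
    """Integrate an absorption cross-section against a per-wavenumber
    instantaneous radiative-forcing kernel (e.g. a Pinnock-type table).
    All three arrays must share the same wavenumber grid; the output
    units follow those of the kernel and cross-section."""
    return np.trapz(cross_section * forcing_kernel, wavenumber)

# Placeholder inputs on a coarse 0-2500 cm^-1 grid
nu = np.linspace(0.0, 2500.0, 251)             # wavenumber grid (cm^-1)
sigma = np.exp(-((nu - 1200.0) / 50.0) ** 2)   # synthetic C-F absorption band
kernel = np.ones_like(nu)                      # flat placeholder forcing kernel
re = radiative_efficiency(nu, sigma, kernel)
```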
Floquet theory is useful for understanding the behaviour of quantum systems subject to periodic fields. Ho et al. [Chem. Phys. Lett. 96, 464 (1983)] have presented an extension of Floquet theory to the case of systems in the presence of multiple periodic fields with different frequencies. However, unlike conventional Floquet theory, which is well-established, many-mode Floquet theory (MMFT) is somewhat controversial, with conflicting statements regarding its validity appearing in the literature. I will present our recent resolution of these discrepancies.
Joint work with Adam Poertner, supported by NSERC.
Laser resonance ionization (mass) spectroscopy in a hot-cavity environment is an ultra-sensitive means of performing laser spectroscopy on short-lived isotopes. Despite the non-Doppler-free nature of hot-cavity, in-source spectroscopy, this method allows the determination of atomic energy levels and, through the convergence of Rydberg series, of the first ionization potential. An overview of the ongoing in-source spectroscopy program with radioactive isotopes at TRIUMF - Canada's particle accelerator laboratory - will be given.
In this joint experimental and theoretical work [1], photoelectron emission from excited states of laser-dressed atomic helium is analyzed. We successfully demonstrate a method that is complementary to transient absorption (e.g. [2]) for the assignment of light-induced states (LIS). The experiment is carried out at DESY in Hamburg and uses the FLASH2 free-electron laser to produce an extreme ultraviolet (XUV) pulse to which the helium atom is subjected along with a temporally overlapping infrared (IR) pulse in the multi-photon ionization regime ($\approx$10$^{12}$ W/cm$^2$). Analysis of the experiment occurs at the reaction microscope (REMI) end station [3] at FLASH2. The XUV pulse is scanned over the energy range 20.4 eV to 24.6 eV, corresponding to excited states of helium. The resonant, electric dipole-allowed $n$P states corresponding to a first step of single XUV photon excitation are shown to lead to ionization, independent of whether or not the lasers temporally overlap. However, dipole-forbidden transitions to $n$S and $n$D states corresponding to multiphoton (XUV $\pm$ $n$IR) excitation are observed during temporal overlap. Studying photo-electron angular distributions (PADs) in cases where the ionization pathway of a LIS is difficult to resolve energetically allows for an unambiguous determination of the dominant LIS. The IR intensity and the relative polarization between the lasers are varied to control the ionization pathway. Numerical solutions of the time-dependent Schr\"odinger equation within a single-active-electron model with a local potential fully support the experimental findings in this project.
[1] S. Meister $\textit{et al}$., Phys. Rev. A $\bf{102}$ (2020) 062809; Phys. Rev. A $\bf{103}$ (2021) in press.
[2] S. Chen $\textit{et al}$., Phys. Rev. A $\bf{86}$ (2012) 063408.
[3] S. Meister $\textit{et al}$., Applied Sciences $\bf{10}$ (2020) 2953.
*work supported by NSERC, NSF, and XSEDE
The main advantage of hybrid PET-MR imaging systems is the ability to directly correlate anatomical with metabolic information. The bulk of commercially available PET-MR systems are quite large and expensive and are mostly used on humans rather than for preclinical animal studies. This has led to a knowledge gap in PET-MR imaging of the small animal models used in preclinical research. Our work takes advantage of a new imaging system developed by Cubresa called 'NuPET', an MR-compatible PET scanner placed around the subject while they are within the bore of an MR scanner. With this equipment we are attempting to demonstrate the selective activation of serotonergic neurons in living rodent brains. To specify which neurons are to be activated, we use Designer Receptors Exclusively Activated by Designer Drugs (DREADD) technology. These DREADDs are designer G-protein-coupled receptors. Neurons at the site of a stereotactic injection are transfected with a viral vector containing the proteins necessary to force expression of DREADDs in genetically modified rats. These may then be activated by administering the designer drug clozapine-N-oxide (CNO). This technique allows for precise spatiotemporal control of receptor signaling in vivo. Over two experiments (N=5, N=2) we have attempted to image the effect of DREADD-mediated excitation of 5-HT neurons in rats. Voxel-based analysis of the data thus far shows no confirmed statistically significant differences between rats given saline and those given CNO. Numerous methodological issues have been identified in the experimental design and are being addressed for a new trial of the technique and technology.
The authors wish to acknowledge funding from NSERC partnership grants, Mitacs, Cubresa, and Research Manitoba.
Calcium (Ca) is an essential mineral in the body that helps maintain healthy bone density. Dysregulation of Ca can result in serious health issues, and a reliable and efficient method of identifying changes in bone mineral balance can help provide early diagnosis of deteriorating bone health. The objective of this project is to investigate the application of naturally occurring Ca isotope abundance variations to understanding biological processes, including biomineralization. The kinetics underlying metabolic processes that involve Ca are mass dependent and will redistribute the abundances of the naturally occurring, stable Ca isotopes. Thus, a careful measurement of the Ca isotopic composition of the Ca pools in the body (i.e. bone, blood, and urine) can provide unique insight into the disruption of Ca metabolism. The extent of natural variation of stable Ca isotopes in human metabolism is small, with a relative natural variation of less than 0.5% in the 44Ca/40Ca isotope amount ratio. Reliable measurement of Ca isotopic composition has therefore remained very challenging, especially considering low Ca levels and significant procedural blank levels. The goal of this project was to develop a reliable and accurate analytical measurement procedure specifically for small amounts (approx. 1 µg) of Ca in biological materials.
In this study the extraction and isolation of calcium from a diverse set of biological matrices was optimized for low procedural blanks and for separation from matrix elements and isobaric interferences such as Na, Mg, K, Ti, Fe, and Ba. A 42Ca–48Ca double spike (DS) was applied to correct for potential isotopic fractionation during sample preparation and measurement. Ca isotope abundance analysis was performed using a multicollector thermal ionization mass spectrometer. The measurement procedure enabled processing of total Ca amounts of 1000 ng, with a total procedural blank of <10 ng, and enabled measurement of the Ca isotopic compositions of the reference materials NIST SRM 1400 (bone ash), NIST SRM 1486 (bone meal) and IAPSO (seawater).
Introduction: Inhaled hyperpolarized (HP) 129Xe magnetic resonance imaging (MRI) is a non-invasive imaging method currently used to measure lung structure and function [1]. Simultaneous ventilation/perfusion (V/P) lung measurements of functional gas exchange within the lungs can be obtained with this MRI approach because of the high solubility of xenon in lung tissue compared to other imaging gases. This measurement is possible due to the distinct and large range of chemical shifts (~200 ppm) of 129Xe residing within the barrier tissue and red blood cells (RBCs) (i.e., barrier- and RBC-phase xenon) compared to the gas phase. Therefore, 129Xe is a unique probe for exploring xenon within and beyond the lung, such as the lung parenchyma (barrier), RBCs, and even other organs such as the brain, heart and kidney.
[15O]water positron emission tomography (PET) is the gold standard imaging method for determining cerebral perfusion [2,3]. In this study, simultaneous 129Xe-based MRI and [15O]water PET images were collected and compared.
Methods: A 60 mL plastic syringe was used in which 30 mL of hyperpolarized 129Xe gas was dissolved in a [15O]water solution (30 mL). After dissolution, all leftover xenon gas was removed from the syringe. A turn-key spin-exchange polarizer system (Polarean 9820 129Xe polarizer) was used to obtain the hyperpolarized 129Xe gas. 129Xe dissolved-phase images were acquired in a 3T PET/MRI (Siemens Biograph mMR) scanner. [15O]water PET data were acquired simultaneously with the 129Xe MRI using the integrated PET system in the 3T PET/MRI.
Results: Two consecutive 2D axial 129Xe MRI images and two (2D and 3D) [15O]water PET images were acquired simultaneously. The 129Xe/PET images indicate that the diameter of the phantom is similar in the PET and MRI images. Both 129Xe images demonstrate a sufficient SNR level (80 and 10, respectively), suggesting that 3D 129Xe imaging is possible.
Conclusions: The results of this proof-of-concept study clearly indicate the feasibility of the simultaneous hyperpolarized 129Xe MRI and [15O]water PET measurements. This demonstration enables the next step, namely, in-vivo double tracer brain perfusion imaging which we plan to perform using a small animal model.
References:
1. Kaushik, S. S. et al., MRM (2016); 2. Fan, A. et al., JCBFM (2016); 3. Ssali, T. et al., JNM (2018).
Introduction: A great challenge in quantitative dynamic positron emission tomography (PET) imaging is determining the exact volumes of interest (VOI) with which one wants to work. They have a tremendous impact on the time-activity curves that are used to extract the pharmacokinetic coefficients. Since PET images are functional rather than anatomical, using a bijective relationship with a co-registered computed tomography (CT) image is neither the only nor necessarily the best possibility. In recent years, many publications have introduced ingenious methods to work directly with the PET images, ranging from machine learning algorithms to manual delineation of regions. These techniques have different uses, mainly in the hope of enabling easier and more efficient tumor delineation. In the case of dynamic images, the temporal aspect of the imaging procedure changes the methods that can be implemented. Furthermore, the need to delineate specific and precise functional sites renders the whole operation computationally and physically challenging, especially in the absence of a common, well-established methodology.
Methodology: In this project, a novel gradient-based segmentation approach was applied to pre-clinical dynamic PET images of rats. Fourteen different animals were used under similar pre-clinical conditions. The developed segmentation technique uses properties of the image itself, relying on known properties of the radiotracer used, in order to automatically segment the animal's kidney. The work was conducted using the mini-PET scanner at the Montreal Neurological Institute, according to the ethical guidelines of the University of Montréal and the Canadian Tri-Council.
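A minimal sketch of a gradient-based segmentation of a single 2D PET frame, using numpy and scipy; the threshold, the synthetic frame, and the largest-component heuristic are illustrative assumptions and do not reproduce the authors' implementation:

    import numpy as np
    from scipy import ndimage

    def gradient_segment(frame, grad_threshold):
        """Threshold the gradient magnitude of a 2D frame and keep the
        largest connected region (illustrative only)."""
        gy, gx = np.gradient(frame.astype(float))
        mask = np.hypot(gx, gy) > grad_threshold   # candidate boundary voxels
        filled = ndimage.binary_fill_holes(mask)   # close the region interior
        labels, n = ndimage.label(filled)
        if n == 0:
            return np.zeros_like(frame, dtype=bool)
        sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
        return labels == (np.argmax(sizes) + 1)    # largest connected component

    # Synthetic example: a bright rectangular "organ" plus noise.
    frame = np.zeros((64, 64)); frame[20:40, 25:45] = 5.0
    roi = gradient_segment(frame + 0.1 * np.random.rand(64, 64), grad_threshold=1.0)
    mean_activity = frame[roi].mean()   # one point of a time-activity curve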
Results: From the preliminary data, the proposed method delineates the volumes of interest in these pre-clinical images with a meaningful success rate. The respective time-activity curves follow the general pattern of the manual delineations done by experts, yet with non-negligible differences. So far, the proposed technique gives good results for 8 of the 14 rats, compared to 12 rats when using manual segmentation. The greatest strength of the algorithm is its ability to reproduce the same results regardless of the operator. The technique can also quantify movement in the organ of interest and works in spite of a large amount of Gaussian noise.
Keywords: Nuclear Medicine, pre-clinical dynamic PET imaging, pharmacokinetics
Magnetic Resonance Imaging (MRI) is a powerful imaging modality with excellent soft tissue contrast. Contrast agents such as iron oxide nanoparticles can be used to “tag” individual cells, distorting the magnetic field around them and allowing the imaging of single cells. Time-lapse MRI can be used to track the motion of tagged cells, providing insights into inflammatory diseases and cancer metastasis. Current methods have a very limited temporal resolution, resulting in a detection limit of 1 µm/s. In addition, manual cell counting is time-consuming and difficult.
In this work, a dictionary-learning-based technique has been developed to accelerate the MRI data acquisition and aid in the task of locating cells. Dictionary learning is a machine learning technique in which features of an image are ‘learned’ as atoms, and images can then be represented as a sparse combination of those atoms. The sparsity property can be used as a constraint in non-linear image reconstruction with data sampled below the Nyquist criterion. The undersampling improved the temporal resolution of the in-vivo measurements by approximately an order of magnitude. The dictionary atom coefficients provided information on the cell locations for feature detection.
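A minimal sketch of the dictionary-learning idea on 2D image patches using scikit-learn; the random test image, patch size, and sparsity settings are illustrative assumptions, not the reconstruction pipeline used in this work:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                 # stand-in for an MR image
    patches = extract_patches_2d(image, (8, 8))
    X = patches.reshape(len(patches), -1)
    means = X.mean(axis=1, keepdims=True)
    X = X - means                                # remove patch DC offsets

    # Learn dictionary atoms, then sparse-code each patch with a few atoms.
    dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(X).transform(X)             # sparse coefficients per patch
    approx = codes @ dico.components_ + means    # sparse approximation of patches
    recon = reconstruct_from_patches_2d(approx.reshape(patches.shape), image.shape)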
Magnetic resonance imaging (MRI) is widely used as a non-invasive diagnostic technique to visualize the internal structure of biological systems. Quantitative analysis of magnetic resonance signal lifetimes, i.e., relaxation times, can reveal molecular-scale information and has significance in studies of the brain, spinal cord, and articular cartilage, and in cancer discrimination. Determination of MR relaxation spectra (relaxometry) is an inherently ill-posed problem. Conventional methods to extract MR relaxation spectra are computationally intensive, require high-quality data, and generally lack spectrum peak-width information. A novel, computationally efficient signal analysis method based on neural networks (NN) has been developed to provide accurate, real-time, quantitative MR relaxation spectrum analysis.
Deep learning with NNs is a technique for solving complex nonlinear problems. NNs have been optimized to determine 1D and 2D MR relaxation spectra. Simulated signals with Rician noise were employed for training the neural networks. The network performance was evaluated with simulated and experimental data and compared with the traditional inverse Laplace transform (ILT) method; the NNs outperformed ILT. The 1D spectrum peak widths, generally considered unreliable with the traditional approach, could be determined accurately by the NNs, noise permitting.
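A minimal sketch of such a training setup, assuming a fixed grid of relaxation times and a simple fully connected regressor; the kernel, noise level, and network size are illustrative and are not those used in this work:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 64)           # sampling times, illustrative
    T2 = np.logspace(-2, 0, 30)             # fixed relaxation-time grid
    K = np.exp(-t[:, None] / T2[None, :])   # decay kernel: signal = K @ amplitudes

    def simulate(n, sigma=0.02):
        """Sparse random spectra -> multi-exponential decays with Rician noise."""
        A = rng.random((n, T2.size)) * (rng.random((n, T2.size)) < 0.1)
        A /= A.sum(axis=1, keepdims=True) + 1e-12
        S = A @ K.T
        noisy = np.hypot(S + sigma * rng.standard_normal(S.shape),
                         sigma * rng.standard_normal(S.shape))   # Rician noise
        return noisy, A

    X_train, y_train = simulate(20000)
    net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=50, random_state=0)
    net.fit(X_train, y_train)               # learn the signal -> spectrum mapping
    X_test, y_true = simulate(100)
    spectra_pred = net.predict(X_test)      # near real-time spectrum estimates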
The proposed exponential analysis method is not restricted to magnetic resonance. It is readily applicable to other areas involving exponential analysis, such as fluorescence decay and radioactive decay. The method could be extended to higher-dimensional spectra and adapted to solve other ill-posed problems.
Positron Emission Tomography (PET) imaging of the brain may become the most effective imaging technique for predicting Alzheimer's disease. However, the definition of brain structures in PET images is low and the lesion area is not easy to delineate, so the accuracy of traditional machine learning algorithms in predicting Alzheimer's disease from PET images is low. Deep learning algorithms can effectively improve the prediction accuracy. Here, a deep learning model based on multiple attention mechanisms is constructed for the prediction of Alzheimer's disease. First, in order to improve the images and reduce the loss of spatial information caused by convolution, a soft-attention mechanism was introduced by embedding non-local modules and CBAM modules into the prediction model, which effectively addresses the lack of detail information and the lack of connection between channels in PET images after deep convolution. Second, a split-attention mechanism was introduced, and the influence of feature maps of different sizes on the prediction accuracy was enhanced by using a grouped network. Finally, head-movement correction, image registration, skull stripping, Gaussian smoothing and other preprocessing steps were applied to the acquired PET images to effectively enhance the image features of Alzheimer's disease focal areas in brain PET images. The experimental results showed that the prediction accuracy, sensitivity and specificity of the model on the ADNI database for Alzheimer's disease were 90.5%, 86.1% and 93.5% respectively, providing more accurate diagnostic results than existing methods.
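For readers unfamiliar with channel attention, a minimal PyTorch sketch of a CBAM-style channel-attention block for 3D feature maps; the layer sizes are illustrative assumptions and this is not the authors' model:

    import torch
    import torch.nn as nn

    class ChannelAttention3D(nn.Module):
        """CBAM-style channel attention for 3D feature maps (illustrative)."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.avg_pool = nn.AdaptiveAvgPool3d(1)
            self.max_pool = nn.AdaptiveMaxPool3d(1)
            self.mlp = nn.Sequential(
                nn.Conv3d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels // reduction, channels, kernel_size=1))

        def forward(self, x):
            # A shared MLP maps average- and max-pooled descriptors to
            # per-channel weights that rescale the feature map.
            w = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
            return x * w

    features = torch.randn(2, 32, 16, 16, 16)     # (batch, channels, D, H, W)
    attended = ChannelAttention3D(32)(features)   # same shape, channel-reweighted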
Our universe is a dynamic, fascinating, and beautiful place. Yet, physics is sometimes perceived as being dry and lacking cultural engagement. To mitigate that perception, we are engaging two powerful partners: our physical universe and our local culture.
The talk will describe a scientific and cultural outreach program developed for underrepresented youth in Newfoundland and Labrador, which offers activities featuring female and Indigenous role models, engaging Indigenous story-telling, discussing science-related career opportunities, and emphasizing the diverse set of skills required in modern science. We’ll also discuss challenges and opportunities brought by the Covid-19 pandemic, and lessons learned about reaching remote communities and engaging teenagers online.
Undergraduate research activities, strong mentorship and peer support have been demonstrated to improve the experiences of students studying science, and over the last few years this community has grown on campus. The University of Winnipeg has a high proportion of Indigenous students and is uniquely situated to support and encourage Indigenous students in the sciences. This presentation will describe two programs: the Pathway to Graduate Studies (P2GS) program for junior students, and our chapter of the Canadian Indigenous Science and Engineering Society (.caISES), which is open to all Indigenous students studying science. These two programs offer a rich environment for research and scholarly success as well as a means to form a sense of community and belonging on campus. The P2GS program provides opportunities for junior Indigenous undergraduate students to upgrade their basic science skills, gain research experience in a university laboratory, and form a network of peers, graduate students and faculty, which will help increase their success and participation in natural sciences and engineering (NSE) graduate programs. The P2GS program brings students onto campus for four weeks each May. They split their time between science classes led by senior Indigenous students and research projects with a faculty mentor. The program has had a wide-ranging impact, with several of the participants crediting this experience with their decision to continue in NSE or to enroll in graduate school.
Our .caISES chapter meets at least monthly and hosts events which bridge culture and science. Students have also attended national and regional meetings of .caISES and of the American parent organization, AISES. The University of Winnipeg received the 2020 Stelvio J. Zanin Chapter of the Year award in our chapter's inaugural year.
The authors wish to acknowledge funding for the P2GS program from NSERC PromoScience and the University of Winnipeg, as well as University of Winnipeg scholarships and fundraising in support of .caISES.
We discuss the interaction of gravitational waves with matter, including plasma, and its implications for cosmology.
We present a study of the evolution of entanglement entropy of matter and geometry in quantum cosmology. We show that entanglement entropy increases rapidly as the Universe expands, and then saturates to a constant non-zero value. The saturation value of the entropy is a linear function of the energy E associated to the quantum state: S=γE. This result suggests a ‘First Law’ of matter-gravity entanglement entropy in quantum gravity.
We report a formal analogy between cosmology and earth science. The history of a closed universe is analogous to an equilibrium beach profile (i.e., the depth of the water as one recedes from a beach moving seaward). A beach profile reaches equilibrium in summer and in winter and is described by a variational principle that minimizes energy dissipation. The oceanography side of the analogy gains much needed information from the cosmology side.
[Based on V. Faraoni, Phys. Rev. Research 1, 033002.]
It has been shown beyond reasonable doubt that about 95% of the total energy budget of the universe is in its dark constituents, namely Dark Matter and Dark Energy. What constitutes Dark Matter and Dark Energy, however, remains to be satisfactorily understood, despite a number of promising candidates. An associated conundrum is that of coincidence: why are the Dark Matter and Dark Energy densities of the same order of magnitude at the present epoch? In an attempt to address these questions, we consider a quantum potential resulting from a quantum-corrected Raychaudhuri/Friedmann equation in the presence of a Bose-Einstein condensate (BEC) of light bosons. For a suitable and physically motivated macroscopic ground-state wavefunction of the BEC, we show that a unified picture of the cosmic dark sector can indeed emerge, which also resolves the coincidence issue. The effective density of the Dark Energy component turns out to be a cosmological constant. Furthermore, comparison with observed data gives an estimate of the mass of the constituent bosons in the BEC, which is well within the bounds predicted from other considerations.
False vacuum decay in quantum mechanical first order phase transitions is a phenomenon with wide implications in cosmology, and presents interesting theoretical challenges. In the standard approach, it is assumed that false vacuum decay proceeds through the formation of bubbles that nucleate at random positions in spacetime and subsequently expand. In this paper we investigate the presence of correlations between bubble nucleation sites using a recently proposed semi-classical stochastic description of vacuum decay. This procedure samples vacuum fluctuations, which are then evolved using lattice simulations. We compute the two-point function for bubble nucleation sites from an ensemble of simulations, demonstrating that nucleation sites cluster in a way that is qualitatively similar to peaks in random Gaussian fields. We comment on the implications for first order phase transitions during and after an inflationary era.
Cosmology presupposes that on scales of $10^{8}$ light years the universe is the same at every point and in every direction. This is observationally supported by the cosmic microwave background (CMB), which has a temperature of 2.7 Kelvin in all directions. However, there exist small perturbations on this symmetric background; for example, the CMB has temperature fluctuations of order 0.001 Kelvin. The study of these fluctuations is cosmological perturbation theory. In this talk, I will review the standard theory of cosmological perturbations, explain our framework, which differs from the standard method, and then generalize our framework to include a matter clock.
While Big Bang cosmology successfully explains much of the history of our universe, there are certain features it does not explain, for example the spatial flatness and uniformity of our universe. One widely studied explanation for these features is cosmological inflation. I will discuss the gravitational wave spectra generated by inflaton field configurations oscillating after inflation for E-Model, T-Model, and additional inflationary models. I will show that these gravitational wave spectra provide access to some inflation models beyond the reach of any planned cosmic microwave background (CMB) experiments, such as LiteBIRD, Simons Observatory, and CMB-S4. Specifically, while these experiments will be able to resolve a tensor-to-scalar ratio ($r$) down to $10^{-3}$, I show that gravitational wave background measurements have the potential to probe certain inflation models for $r$ values down to $10^{-10}$. Importantly, all the gravitational wave spectra from E- and T-model inflation lie in the MHz-GHz frequency range, motivating development of gravitational wave detectors in this range.
The purpose of this presentation is to examine the effects of electromagnetic energy injection into the early Universe from decaying sub-GeV dark vectors. Decay widths and energy spectra for the most prominent channels in the sub-GeV region are calculated for various dark vector models. The models include the dark photon, $U(1)_{A'}$, which kinetically mixes with the Standard Model photon; a dark vector boson which couples to the baryon-minus-lepton current, $U(1)_{B-L}$; and three dark vector bosons which couple to the difference of two lepton currents, $U(1)_{L_i - L_j}$ where $i, j = e, \mu, \tau$. Measurements from Big Bang Nucleosynthesis and the Cosmic Microwave Background are used to constrain the lifetime, mass and coupling constant of the dark vectors.
Much of what we know of the early universe comes from observations of the cosmic microwave background (CMB): a 13 billion-year-old field of microwave radiation that permeates the entire universe. Recent technological advances have made real the possibility of combining CMB measurements with other large data sets to extract hitherto inaccessible cosmological information. One such example is the novel technique of kinetic Sunyaev Zel’dovich (kSZ) tomography in which one combines a CMB temperature map with the positions and distances of galaxies across the sky, using statistical correlations between high-fidelity maps to reconstruct the velocity of those galaxies at the largest angular scales and over an appreciable fraction of the volume of the universe. At these scales and distances the motion of large-scale structure owes its statistical properties to the conditions of the early universe; measuring this velocity map would therefore provide an independent probe into the physics of that era.
The primary challenge in extracting a velocity map using kSZ tomography is characterizing the dominant sources of uncertainty introduced by non-idealities such as redshift measurement error, incomplete sky coverage, galactic and extragalactic contaminants, and confusion with other physical effects in the CMB. To this end we have designed a pipeline to perform the reconstruction using next-generation CMB and galaxy survey data that incorporates these contaminants into its design. We account for redshift errors and demonstrate that masking and other cut sky effects do not influence the reconstruction fidelity. We show that galactic processes do not contribute to the reconstruction noise, and estimate the impact of extragalactic sources that mimic the kSZ signal. We demonstrate that reconstruction is possible with data from next generation surveys and forecast how well the pipeline will perform with realistic contaminants. We estimate a strong signal-to-noise of the velocity map on large scales, making it a new data product on the cutting edge of cosmological research, as the field turns its attention to upcoming high-resolution datasets.
Experimentally-derived rates of selected charged-particle induced capture reactions are key ingredients in our global understanding of stellar nucleosynthesis. In particular, selected resonant proton and alpha capture reactions on medium-mass stable and radioactive targets are important for nucleosynthesis in a variety of scenarios such as classical novae and the $p$ and $rp$-processes, which form nuclei on the proton-rich side of stability. Select charged-particle reactions are also important for neutron capture processes, e.g. the $s$-process, where they can contribute to the neutron flux. In this talk, I will discuss my group's efforts to constrain important charged-particle capture reactions at both stable and rare-isotope beam facilities and using both direct and indirect measurement techniques. A particular emphasis will be placed on recent results related to the $s$-process neutron source ${}^{22}\mathrm{Ne}(\alpha,n){}^{25}\mathrm{Mg}$, as well as ongoing technical developments and anticipated future work at TRIUMF and the Texas A&M Cyclotron Institute.
The nuclear-polarized beam facility at TRIUMF-ISAC provides radioactive ion beams, highly polarized by laser collinear optical pumping, to several experimental stations. It has successfully delivered 8,9,11Li, most Na isotopes, and 31Mg over the last 20 years for studies in material science, biochemistry, nuclear physics, and fundamental symmetries. An overview of the polarizer facility will be presented and its future development and upgrade will be discussed.
Zinc-65 (Zn-65) is a radionuclide of interest in the fields of medicine and gamma-ray spectroscopy, within which its continued use as a tracer and common calibration source necessitates increasingly precise nuclear decay data. A Zn-65 dataset was obtained as part of the KDK ("potassium decay") experiment, whose apparatus consists of an inner X-ray detector and an efficient outer detector, the Modular Total Absorption Spectrometer (MTAS), to tag gamma rays. This setup allows for the discrimination of the electron-capture decays of Zn-65 to the ground (EC) and excited (EC*) states of Copper-65 (Cu-65) using an emerging technique for such a measurement, exploiting the high efficiency ($\sim$98%) of MTAS. Techniques used to obtain the ratio $\rho$ of EC to EC* decays are applicable to the main KDK analysis, which is making the first measurement of $\rho$ for Potassium-40, a common background in rare-event searches such as those for dark matter. The KDK instrumentation paper (under review by NIM) pre-print is available at arXiv:2012.15232. We present our current methodology and the analysis procedures developed to obtain a novel measurement of the electron-capture decays of Zinc-65.
Ion traps have long been recognized as superb precision tools for fundamental physics research.
In contemporary nuclear physics, they are widely employed to prepare, control and study short-lived radionuclides with high precision and accuracy. Over the last decade, Multi-Reflection Time-of-Flight (MR-ToF) mass separators have significantly gained in importance at radioactive ion beam (RIB) facilities due to their superb mass resolving powers of R = m/Δm > 10^5, achieved within a few milliseconds.
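Since the flight time of an ion of mass $m$ scales as $t \propto \sqrt{m}$ for a fixed flight path and kinetic energy, the resolving power follows from the measured time-of-flight spread as
$$ R = \frac{m}{\Delta m} = \frac{t}{2\,\Delta t}, $$
so accumulating long flight times over many revolutions between the mirrors is what pushes $R$ above $10^5$ within milliseconds.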
As a novel application of MR-ToF devices, we are currently developing the Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy (MIRACLS). In this approach, a fast ion beam bounces back and forth between the two electrostatic mirrors of an MR-ToF device such that the trapped ions are probed by a spectroscopy laser during each revolution. This boosts the experimental sensitivity by a factor of 30-600 compared to conventional, single-pass collinear laser spectroscopy (CLS). MIRACLS will hence provide access to 'exotic' radionuclides with very low production yields at RIB facilities.
While our initial work is focused on highly sensitive CLS for nuclear structure research, the novel experimental techniques developed within MIRACLS open unique opportunities for searches for new physics beyond the Standard Model of particle physics in radioactive molecules. The latter have recently been identified as unexplored, yet highly sensitive, probes for fundamental physics such as hitherto undiscovered permanent electric dipole moments (EDMs).
This talk will describe the MIRACLS concept, recent highlights as well as its potential for novel precision studies with radioactive molecules in the context of searches for new physics.
The isotopes of an element are nuclei with a fixed number of protons Z but a varying number of neutrons N. The question of how many neutrons a given element can have while remaining stable against neutron or proton emission, or in other words where the proton and neutron drip-lines lie, has been troubling not only nuclear physicists but also astrophysicists, since it can help answer fundamental questions like “Where do the stable elements of our universe come from?” Unfortunately, the experimental study of very exotic isotopes close to the drip-lines is often impossible due to their extremely short half-lives and low production yields.
To tackle the current lack of experimental data, we used the mass measurements of three neutron-deficient nuclei performed with TITAN’s MR-TOF mass spectrometer and known reaction energies to extract the masses of 20 nuclei on the border of nuclear stability or past it. With these new mass values, we determined the proton drip-line in the region around Z=78 and we compared our results with various theoretical models.
With the added benefit of the exotic character of these newly determined masses, we leapt into investigating possible Thomas-Ehrman shifts in the region. This unusual effect has been well established in some light and medium nuclei but never conclusively observed in heavier species.
Our calculation procedure as well as our results will be presented.
Mass measurement facilities are extremely important in furthering our understanding of nuclear structure away from the valley of stability, including aiding in the search for collective behaviors in exotic nuclei. TRIUMF’s Ion Trap for Atomic and Nuclear science (TITAN) is among the world’s premier precision trapping facilities, with the newly added Multiple-Reflection Time-of-Flight Mass Spectrometer (MR-ToF-MS) expanding its reach. The TITAN MR-ToF-MS was used in the measurement of neutron-rich iron isotopes around $N = 40$. These masses are critical in investigating a potential Island of Inversion at $N = 40$, which has been supported previously in the literature by the increased collectivity seen in this region. In total, the masses of $^{67-70}$Fe were measured, with $^{69}$Fe and $^{70}$Fe constituting first-time measurements and $^{67}$Fe and $^{68}$Fe resulting in improvements over current literature uncertainties. The impact of these mass measurements on the presence of a surfacing Island of Inversion will be discussed.
A polarized electron beam is being considered as an upgrade for the SuperKEKB accelerator. Having a polarized beam at Belle II opens a new precision electroweak physics program, as well as improving sensitivity to dark sector and lepton flavour violating processes. In order to achieve a polarized beam at SuperKEKB a variety of hardware and technical challenges are being studied. The limiting factor on the precision of these future measurements is expected to be the uncertainty in the beam polarization achieved at the interaction point. The average beam polarization can be measured with high precision by making use of the relationship between beam polarization and the kinematics of tau decays.
In order to develop the tau polarimetry measurement technique in preparation for a polarized electron beam at SuperKEKB, the data collected by BaBar is being analyzed. BaBar has enough data to make a polarization measurement with a sub-percent statistical uncertainty. This allows the dominant systematic uncertainties to be identified and studied, and the limiting factors for the precision of tau polarimetry to be established. As Belle II is similar in design to BaBar, it is expected that a similar or better level of precision can be achieved with sufficient data, further motivating the installation of a polarized beam. This presentation will be the first time the results using the BaBar data are presented publicly.
The ATLAS Experiment at CERN is a general-purpose particle physics detector that measures properties of particles created in high-energy proton-proton collisions fueled by CERN’s Large Hadron Collider (LHC). Searching for undiscovered particles is exciting, but there is still much to be learned about the particles that we know to exist in the Standard Model by making precision measurements of these particles. One area where increased precision is needed is the electroweak sector, where potential tension exists between theoretical predictions and the current best measurements on important properties such as the mass of the W-boson. In this talk, I will discuss our precision measurement of the transverse momentum of the Z-boson, a vital stepping stone to improving our W-boson mass measurement. I will explain how this difficult measurement has been made possible thanks to a unique reduced-background ATLAS dataset.
SuperKEKB is a high-luminosity e+e- collider with a circumference of 3 km located in Japan, which collides 7 GeV electrons with 4 GeV positrons for precision flavour studies, CP violation measurements, and searches for new physics. We aim to upgrade SuperKEKB with a polarized electron beam, which would enable high-precision neutral-current electroweak and other measurements. To polarize the electron beam at the interaction point (IP) in the longitudinal direction, a spin rotator must be designed and installed in the SuperKEKB High Energy Ring. The right-side rotator rotates the vertical spin to longitudinal at the IP; the left-side rotator rotates the spin back to vertical. We present the status of work on a spin rotator conceptual design based on replacing existing dipole magnets with rotator magnets on both sides of the IP. Each rotator magnet in this concept is a solenoid-dipole combined-function magnet with 6 quadrupoles on top of each section to compensate for the x-y plane coupling caused by the solenoid. This presentation will include the physics motivation, the conceptual design, and results of the BMAD accelerator simulation of this design, including spin-tracking results.
Since the discovery of the Higgs boson with a mass of about 125 GeV in 2012 by the ATLAS and CMS Collaborations, an important remaining question is whether this particle is part of an extended scalar sector as postulated by various extensions to the Standard Model. Many of these extensions predict additional Higgs bosons, motivating searches in an extended mass range. Here we report on a search for new heavy neutral Higgs bosons decaying into a pair of Z bosons in the $\ell^+\ell^- \ell^+\ell^-$ and $\ell^+\ell^- \nu\bar\nu$ final states, where $\ell$ stands for either an electron or a muon. The search uses proton-proton collision data at a centre-of-mass energy of 13 TeV collected from 2015 to 2018 by the ATLAS detector during Run 2 of the Large Hadron Collider, corresponding to an integrated luminosity of 139 fb$^{-1}$. Different mass ranges spanning from 200 GeV to 2000 GeV for the hypothetical resonances are considered, depending on the final state and model. In the absence of a significant observed excess, the results are interpreted as upper limits on the production cross section of a spin-0 or spin-2 resonance. The upper limits for the spin-0 resonance are translated to exclusion contours in the context of Type-I and Type-II two-Higgs-doublet models, and the limits for the spin-2 resonance are used to constrain the Randall-Sundrum model with an extra dimension giving rise to spin-2 graviton excitations.
Remarks and comments on issues of interest in cosmology, followed by questions, answers and discussion with a panel on a pre-distributed list of interesting challenges in cosmology.
The inertial confinement fusion scheme relies on the implosion of a deuterium-tritium pellet by means of tens of laser beams. At maximum compression, extreme thermodynamic conditions must be reached in order to trigger a thermonuclear wave. Laser-plasma interaction, over such large spatial and temporal scales, can only be described numerically with dedicated hydrodynamic codes. In the latter, only laser beam refraction and energy deposition due to inverse bremsstrahlung are accounted for. Alas, such a description is incomplete, as laser-plasma interaction may trigger a plethora of physical effects leading to the loss of laser energy. Chiefly, the coherent laser light may be scattered in different directions through wave-mixing processes such as Raman or Brillouin back- and side-scattering, cross-beam energy transfer, and collective scatterings.
Postponing the description of nonlinear kinetic effects, we recently developed a Monte-Carlo algorithm to describe any kind of convective wave-mixing process involving two [1] or more electromagnetic waves and one driven electrostatic wave. The laser beams, described by large bundles of rays, can undergo scattering by any kind of wave-coupling phenomenon. In the case of Brillouin backscattering, an incoming ray has a given probability to be scattered, as in a collision, in the backward direction by the driven acoustic wave. As all these scatterings are stimulated, the probability of ray deflection depends on the scattered-light amplitude. This non-linearity is addressed by means of a fixed-point iteration method: for a given hydrodynamic map, the raytracing is performed several times to estimate the light intensities in each cell, until the stationary solutions converge. To date, our method includes: 1°) Raman back- and side-scattering, 2°) Brillouin back- and side-scattering, 3°) the energy exchange between laser beams and scattered light, 4°) the collective scattering in which an electrostatic wave is shared by a cone of laser beams.
[1] A. Debayle et al., Phys. Plasmas 26, 092705 (2019)
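A minimal sketch of the fixed-point iteration used to converge stationary scattered-light intensities, with a toy single-cell gain/depletion rule standing in for a full raytracing pass; the gain, seed, and relaxation factor are illustrative assumptions only:

    def raytrace_once(I_laser, I_scattered, gain=2.0, depletion=0.5, seed=1e-3):
        """Toy stand-in for one raytracing pass: the scattered light grows with
        the product of pump and seed while the pump is correspondingly depleted."""
        new_scattered = seed + gain * I_laser * (I_scattered + seed)
        new_laser = max(I_laser - depletion * (new_scattered - I_scattered), 0.0)
        return new_laser, new_scattered

    I_laser, I_scattered = 1.0, 0.0
    for iteration in range(200):
        new_laser, new_scattered = raytrace_once(I_laser, I_scattered)
        residual = abs(new_scattered - I_scattered)
        # Under-relaxed fixed-point update stabilises the nonlinear coupling.
        I_laser += 0.3 * (new_laser - I_laser)
        I_scattered += 0.3 * (new_scattered - I_scattered)
        if residual < 1e-8:
            break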
At relativistic intensities, electrons can be driven close to the speed of light, facilitating exploration of a new regime of laser-plasma interactions and high-field science. These intense pulses can drive matter into extreme states of temperature and pressure, mimicking those typically found in astrophysical environments, and leading to the observation of new states of high-energy-density matter. Advancements in intense laser matter interactions have also led to a new generation of pulsed particle and radiation sources, each with ultrashort, femtosecond-scale duration inherited from the laser driver. These sources can be used to study ultrafast dynamic phenomena in dense materials, such as material phase transitions and electron-ion equilibration.
In this talk, I will discuss our recent work performing high-resolution X-ray spectroscopy of K-shell emission from high-intensity (I ∼10^{21} W/cm^2) laser experiments using the ALEPH laser at Colorado State University. Through measurements of K-shell fluorescence, electron emission and XUV spectroscopy of the plasma emission, we examine the generation and propagation of energetic electrons in thin foil and layered targets to elucidate the physics of high-intensity laser solid interactions. I will also discuss the generation of broadband hard X-ray sources through laser wakefield acceleration, generated by an intense laser pulse traveling through low-density plasma, and how these sources can be used to diagnose high-energy-density matter and phase transitions in dense materials.
Tajima and Dawson proposed the idea of laser-wakefield accelerators (LWFAs) in the late 1970s. LWFAs produce low-transverse-emittance, ultrashort electron bunches of a few femtoseconds duration, with the potential to drive free-electron lasers and compact X-ray and gamma-ray sources. Through the implementation of high-gradient quadrupole magnets, it is possible to focus and transport LWFA electron beams with minimal degradation over long distances. In this work, we examine the focusing of LWFA electron beams using a quadrupole triplet system, and the subsequent generation of collimated gamma-ray beams. We analyse the changes in electron beam divergence, charge and pointing stability with and without the quadrupole system. Copper autoradiography was performed to investigate the generation of intense gamma-ray beams through the propagation of focused electron beams through a lead converter. Finally, Monte Carlo simulations will be performed to investigate gamma-ray generation and the peak gamma-ray intensity.
The nonlinear behavior of absolute stimulated Raman scattering (SRS) near the quarter-critical density is investigated using one-dimensional (1D) Vlasov simulations with parameters relevant to ignition-scale direct-drive coronal plasmas. The Vlasov simulations show that a strong and stable Airy pattern is formed by the Raman light as it is generated near its cutoff density. This pattern self-consistently modulates the density profile below the quarter-critical density. The density modulation superimposed on the linear density profile changes the nature of SRS in the lower-density region from spatial (convective) amplification to temporal (absolute) growth. In addition, strong Langmuir decay instability (LDI) cascades produce daughter Langmuir waves (LWs) that seed SRS below the quarter-critical density. These effects act to broaden the spectrum of reflected light. More interestingly, collapse of the primary LWs is observed near their turning point, producing hot electrons. These observations provide a new explanation of hot electron generation and SRS scattered-light spectra for ignition-scale experiments.
Experiments performed at the National Ignition Facility (NIF) have provided evidence that stimulated Raman scattering (SRS) occurs at a level that poses a preheat risk for directly-driven inertial confinement fusion implosions [1]. To help investigate the mechanisms responsible for the generation of this SRS, recent experiments on the OMEGA EP laser (in which similar SRS signatures were observed) were analyzed using a new ray-tracing model. The model is able to explain the time-dependent scattered light spectra from these OMEGA EP experiments: It identifies SRS side-scatter and near backscatter from portions of each incident beam, where either the scattered electromagnetic wave, or the electron plasma wave, are generated in the direction parallel to contours of constant density, as the origin of the major spectral features. As similar effects are known to occur at the ignition scale (on the NIF) [2], it is suggested that the OMEGA EP platform could provide a good surrogate in which to develop SRS mitigation strategies.
This material is based upon work supported by the Natural Sciences and Engineering Research Council of Canada [RGPIN-2018-05787, RGPAS-2018-522497]
[1] M. Rosenberg et al., Phys. Rev. Lett. 120, 055001 (2018).
[2] P. Michel et al., Phys. Rev. E 99, 033203 (2019).
The suppression of turbulence in fusion plasmas, crucial to the success of next-generation tokamaks such as ITER, depends on a variety of physical mechanisms including the shearing of turbulent eddies via zonal flow and possibly the generation of intrinsic rotation. The turbulence exhibits interesting features such as avalanche structures and self-organisation, and its absence is associated with the formation of internal transport barriers. In order to successfully capture all of these effects in gyrokinetic simulation, one may need to allow for the inclusion of global effects (such as radial profile variation), as well as other often-neglected effects that are small in $\rho_\ast$. A careful numerical treatment is necessary to ensure that both the global and local physics are calculated accurately at reasonable expense.
To that end, we develop a novel approach to gyrokinetics where multiple flux-tube simulations are coupled together in a way that consistently incorporates global profile variation while allowing the use of Fourier basis functions, thus retaining spectral accuracy. By doing so, the need for Dirichlet boundary conditions typically employed in global gyrokinetic simulation, where fluctuations are nullified at the simulation boundaries, is obviated. This results in a smooth convergence to the local periodic limit as $\rho_\ast \rightarrow 0$. In addition, our scale-separated approach allows the use of transport-averaged sources and sinks, offering a more physically motivated alternative to the standard sources based on Krook-type operators.
Having implemented this approach in the flux-tube code $\texttt{stella}$, we study the role of transport barriers and avalanche formation in the transition region between the quiescent core and the turbulent pedestal, as well as the efficacy of intrinsic momentum generation by radial profile variation. Finally, we show that near-marginal plasmas can exhibit a radially localized Dimits shift, where strong coherent zonal flows give way to flows which are more turbulent and smaller scale.
The study of the high-confinement mode (H-mode) of tokamak operation plays an important role in optimizing conditions for fusion reactors. Many experimental techniques, including electrode biasing and resonant magnetic perturbations (RMP), have been developed to improve plasma confinement, to facilitate the transition from low to high confinement mode (L-H transition), and to study the transition mechanism. The H-mode is characterized by a rapid increase in plasma density in conjunction with a sudden drop in H-alpha emission, indicating an improvement in both particle and energy confinement. The Saskatchewan Torus-Modified (STOR-M) tokamak is a small tokamak with a major radius of 46 cm and a minor radius of 12 cm. The operational parameters during the biasing experiments are B_t (toroidal magnetic field) ~ 0.7 T, I_p (plasma current) ~ 25 kA, V_l (loop voltage) ~ 3 V, n_e (average density) ~ 1×10^13 cm-3, T_e (average electron temperature) ~ 100 eV, and τ_E (global energy confinement time) ~ 2 ms. Hydrogen plasma is used for the experiments. On the STOR-M tokamak, electrode biasing experiments have been carried out to induce a sheared electric field and suppress turbulence-induced transport. The electrode is placed at different radial locations, and the biasing voltage and polarity can be varied between shots. Biasing experiments with rectangular AC waveforms have also been carried out in the STOR-M tokamak. RMP experiments have been carried out in the STOR-M tokamak using (l = 2, n = 1) helical coils carrying a static current pulse. The resonant interaction between the plasma and the RMP suppresses magnetohydrodynamic (MHD) fluctuations and improves plasma confinement. The current work will focus on studies of the correlations between different electrostatic and/or magnetic fluctuating signals measured with various diagnostic probes.
There exists an unconventional class of waves known as thermal diffusion waves, or simply thermal waves, that are produced using sinusoidally time-varying heat sources; they can be used to determine the thermal conductivity of the medium. Recent advances have resulted in the construction of thermal wave resonator cavities (TWRCs) capable of sustaining quasi-standing thermal waves, which have been used to measure the thermal properties of solids, liquids, and gases. The success of TWRC diagnostic techniques with different forms of matter motivates the application of similar methods to magnetized plasmas, where heat transport processes are of particular importance to magnetic confinement fusion devices. Results are presented from experiments in a large linear magnetized plasma device using an electron temperature filament: a cerium hexaboride crystal cathode injects low-energy electrons along the magnetic field into the center of a pre-existing plasma, forming a hot electron filament embedded in a colder plasma that behaves as a thermal resonator. By oscillating the cathode voltage, we produce an oscillating heat source in the filament and demonstrate the stimulated excitation of thermal waves and the presence of a thermal resonance in the finite-length temperature filament. We have successfully used this technique to determine the thermal conductivity in the plasma. A theoretical model of the thermal wave dispersion relation and resonator is compared to Langmuir probe data from the experiment.
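For orientation, in the simplest (unmagnetized, classical diffusion) limit a heat source modulated at angular frequency $\omega$ in a medium of thermal diffusivity $\alpha = \kappa/(\rho c)$ drives a thermal wave of the standard form
$$ T(x,t) \propto e^{-x/\mu}\cos\!\left(\omega t - \frac{x}{\mu}\right), \qquad \mu = \sqrt{\frac{2\alpha}{\omega}}, $$
so the amplitude decay and phase lag versus distance, or the resonance condition of a cavity of fixed length, encode the thermal conductivity $\kappa$. The experiment described above compares the analogous dispersion relation for the magnetized filament to the Langmuir probe data.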
The effects of the modification of the electron distribution function in the nonlinear regime of the Buneman instability, and the effects of statistical noise, have been investigated using high-resolution Vlasov and Particle-in-Cell (PIC) simulations. It is shown that this modification is a result of electron trapping. In the nonlinear regime, electron trapping and the associated modification of the electron distribution function result in the excitation of waves moving in the direction opposite to the initial electron drift velocity (backward waves). In the PIC simulations, however, the modification of the velocity distribution function occurs due to high statistical noise even in the linear stage, so that the observed growth is inconsistent with linear theory.
Engineering of defects located in-grain or at grain boundaries is central to the development of functional materials and nanomaterials. While there has been a recent surge of interest in the formation, migration, and annihilation of defects during ion and plasma irradiation of bulk (3D) materials, the detailed behavior in low-dimensional materials remains mostly unexplored and especially difficult to assess experimentally. A new hyperspectral Raman imaging scheme providing high selectivity and diffraction-limited spatial resolution was adapted to examine plasma-induced damage in a polycrystalline graphene film grown by chemical vapor deposition on copper substrates and then transferred to silicon substrates. For experiments realized in nominally pure argon plasmas at low pressure, spatially resolved Raman spectroscopy conducted before and after each plasma treatment shows that defect generation in graphene films exposed to very low-energy (11 eV) ion bombardment follows a 0D defect curve, while the domain boundaries tend to develop as 1D defects. Surprisingly, and contrary to common expectations of plasma-surface interactions, damage generation at grain boundaries is slower than within the grains. Inspired by recent modeling studies, this behavior can be ascribed to a lattice reconstruction mechanism occurring preferentially at domain boundaries and induced by preferential atom migration and adatom-vacancy recombination. Further studies were carried out to compare the impact of different plasma environments promoting either positive argon ions, metastable argon species, or VUV photons on the damage formation dynamics. While most of the defect formation is due to knock-on collisions by 11-eV argon ions, the combination with VUV-photon or metastable-atom irradiation is found to have a very different impact. In the former case, the photons mainly act to clean the films of PMMA residues left over from the graphene transfer from copper to silicon substrates. In the latter, the surface de-excitation of metastable species first impedes the defect generation and then promotes it at higher lattice disorder. While this impediment can be linked to enhanced defect migration and self-healing at nanocrystallite boundaries in graphene, the effect vanishes in more heavily damaged films. Finally, these experiments were used as building blocks to examine the formation of chemically doped graphene films in such plasmas using argon mixed with traces of either N- or B-bearing gases.
Polycrystalline monolayer graphene films grown by chemical vapor deposition were exposed to a low-pressure inductively-coupled plasma operated in a gaseous mixture of argon and diborane. Optical emission spectroscopy and plasma sampling mass spectrometry reveal high B2H6 fragmentation, leading to significant populations of both boron and hydrogen species in the gas phase. X-ray photoelectron spectroscopy indicates the formation of a boron-containing layer at the surface and provides evidence of substitutional incorporation of boron atoms within the graphene lattice. To probe the plasma's influence on the graphene structure, hyperspectral Raman imaging (RIMA) is used to obtain qualitative as well as quantitative data on a macroscopic scale. Doping of the graphene domains by graphitic boron is then confirmed by hyperspectral Raman imaging of the domains. These results demonstrate that diborane-containing low-pressure plasmas are an efficient means for boron substitutional incorporation in graphene with minimal domain hydrogenation and defect generation.
Plasma Immersion Ion Implantation (PIII) is a versatile material processing technique [1,2] with many applications in semiconductor doping, micro- and nanofabrication [3], as well as the surface modification of metals for improved resistance against wear and corrosion. In PIII, a solid target is immersed in a plasma, and negative-polarity high-voltage pulses (typically 1-20 kV) are applied to the target. During the PIII pulse, electrons are expelled, resulting in a positive-ion sheath surrounding the target; ions in the sheath are implanted into the solid surface. PIII provides uniform ion implantation with high ion fluences across broad-area targets. The targets need not be planar, as the plasma is conformal to the immersed target. For precision PIII processing it is important to accurately predict the implanted ion concentrations. The P2I code was developed by Bradley, Steenkamp, and Risch [4,5] to accurately predict PIII sheath dynamics, ion implantation currents and the total delivered ion fluence. The P2I code is an efficient implementation of the numerical solution of Lieberman’s dynamic sheath model [2]. However, experiments typically show an increase in plasma density during high-voltage PIII pulses due to various effects, including secondary electrons ejected from the target. The increase in plasma density significantly affects the ion implantation current as well as the implanted ion concentrations.
To address these deficiencies, a new code (P3I) is being developed as an advanced version of the existing P2I code, which addresses these discrepancies with measurement by accounting for plasma density enhancement effects. In addition, due to the growing interest in the use of PIII to study ion bombardment of plasma-facing components for fusion applications, the P3I code will incorporate aspects of the Stangeby and McCracken scrape-off layer (SOL) model [6]. This talk will discuss the development of this new code for various PIII applications.
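A minimal sketch of the kind of Lieberman-type dynamic-sheath integration that such codes solve, balancing the Child-law conduction current against ion uncovering and the Bohm flux; all parameter values are illustrative and this is not the P2I/P3I implementation:

    import numpy as np
    from scipy.constants import e, epsilon_0, proton_mass
    from scipy.integrate import solve_ivp

    n0 = 1e16                  # bulk plasma density (m^-3), illustrative
    Te = 3.0                   # electron temperature (eV), illustrative
    M = 14 * proton_mass       # nitrogen ion mass, illustrative
    V0 = 10e3                  # pulse voltage magnitude (V), illustrative
    u_B = np.sqrt(e * Te / M)                        # Bohm speed
    s0 = np.sqrt(2 * epsilon_0 * V0 / (e * n0))      # initial matrix-sheath width

    def dsdt(t, s):
        # Sheath-edge speed from Child-law current = ion uncovering + Bohm flux.
        child = (4.0 / 9.0) * epsilon_0 * np.sqrt(2 * e / M) * V0**1.5 \
                / (e * n0 * s[0]**2)
        return [child - u_B]

    sol = solve_ivp(dsdt, (0.0, 20e-6), [s0], max_step=1e-8)
    J = e * n0 * (np.gradient(sol.y[0], sol.t) + u_B)   # implantation current density
    fluence = np.trapz(J, sol.t) / e                    # implanted ions per m^2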
References
[1] A. Anders, Handbook of Plasma Immersion Ion Implantation and Deposition, Wiley (2000).
[2] Michael A Lieberman and Alan J Lichtenberg. Principles of plasma discharges and materials processing. John Wiley & Sons, 2005.
[3] Marcel Risch and Michael P. Bradley, “Prospects for band gap engineering by plasma ion implantation”, Phys. Status Solidi C 6, No. S1, S210–S213 (2009). DOI 10.1002/pssc.200881279
[4] M.P. Bradley and C.J.T. Steenkamp, “Time-Resolved Ion and Electron Current Measurements in Pulsed Plasma Sheaths”, IEEE Trans. Plasma Sci. 34, 1156-1159 (2006).
[5] M. Risch and M. Bradley, “Predicted depth profiles for nitrogen-ion implantation into gallium arsenide”, phys. stat. sol. (c) 5, 939-942 (2008).
[6] P.C. Stangeby and G.M. McCracken, “Plasma boundary phenomena in tokamaks”, Nuclear Fusion, vol. 30, No. 7 (1990).
Plasma immersion ion implantation (PIII) is a versatile tool in the fields of materials processing, surface modification, and semiconductor manufacturing [1]. By immersing the target directly in the plasma, PIII offers many advantages over its predecessor, conventional ion implantation, including a simpler design, faster throughput and more uniform implantation over irregular objects [4]. When a negative-polarity high-voltage (NPHV) pulse is applied to the target, an ion sheath forms around it, and ions are accelerated through this plasma sheath and implanted into the target [5]. However, plasma immersion also introduces several complicating factors that challenge optimization. Foremost among them is maintaining constant bulk plasma parameters, specifically ion density and temperature, and appropriately correcting for the inevitable fluctuations that occur [2][3].
Experiments were performed at the University of Saskatchewan plasma physics laboratory on a PIII system with an Inductively Coupled Plasma (ICP) device. This experiment utilized two identical RF-compensated Langmuir probes at two different vertical positions above the biased target to study the perturbations both near the target and further away. The results indicate that electron density and plasma potential are very sensitive to the NPHV pulse, and the amplitudes of the perturbation increase with increasing pulse magnitude above 2 kV. Perturbation amplitudes relative to steady state values trend consistently for all pressures at the same power. However, perturbations are quelled significantly when power is increased, regardless of the pressure. Additionally, the electron temperature undergoes fluctuations whose relative amplitudes are generally smaller than those of density and plasma potential, but are consistent across pulse amplitude, power and pressure. Furthermore, the sheath recovery time was measured. It was shown that it predominantly depends on NPHV amplitude, rather than power or pressure. Finally, the velocity of a rarefaction wave that propagates away from the pulser is measured based on the time delay of the peak of these perturbations. These results will contribute to optimizing the process of PIII by quantifying corrections needed to current models used to calculate implantation doses that assume constant plasma parameters. In future studies they will serve as a basis for comparison against future laser-induced fluorescence diagnostic data.
References
[1] Bradley, M. P., Desautels, P. R., Hunter, D. and Risch, M. [2009], `Silicon electroluminescent device production via plasma ion implantation', Physica Status Solidi (C) Current Topics in Solid State Physics 6(S1), 6-9.
[2] Lieberman, M. and Lichtenberg, A. [2005], Principles of Plasma Discharges and Materials Processing, Wiley-Interscience.
[3] Risch, M. and Bradley, M. [2008], `Predicted depth profiles for nitrogen-ion implantation into gallium arsenide', Physica Status Solidi (C) Current Topics in Solid State Physics 5(4), 939-942.
[4] Risch, M. and Bradley, M. P. [2009], `Prospects for band gap engineering by plasma ion implantation', Physica Status Solidi (C) Current Topics in Solid State Physics 6(SUPPL. 1).
[5] Steenkamp, C. J. and Bradley, M. P. [2007], `Active charge/discharge IGBT modulator for Marx generator and plasma applications', IEEE Transactions on Plasma Science 35(2 III), 473-478.
The vast majority of dusty/complex plasma experiments have involved the suspension of charged, micron-sized particles in plasmas. The particles are suspended due to a delicate balance between gravitational and electrostatic forces. The addition of a magnetic field to these systems has a profound influence on both the surrounding plasma and the dusty plasma as the dynamics of first the electrons, then the ions, and finally the charged dust grains become influenced by the magnetic field. Since the mid-2000s, a number of experimental devices have been built around the world to explore the physics of dusty plasmas in strongly magnetized plasmas. One of these devices, the Magnetized Dusty Plasma Experiment (MDPX) device at Auburn University is a flexible, high magnetic field research instrument with a mission to serve as an open access, multi-user facility for the dusty plasma and basic plasma research communities. In particular, under conditions when the magnetic field is sufficiently large, B ≥ 1 T, a variety of emergent phenomena are observed including the formation of self-ordered plasma structure, specifically plasma filamentation along the magnetic field direction, as well as a new type of imposed spatial ordering of the dust particles. Recent three-dimensional fluid simulations suggest that both of these phenomena are strongly connected to differences in ion and electron transport parallel and perpendicular to the magnetic field. This presentation will provide an overview of recent experiments and the associated simulations.
This work is supported with funding from the U.S. Department of Energy and the National Science Foundation (Physics Division and EPSCoR Office).
The dependence of the mode-coupling instability threshold in two-dimensional complex plasma crystals on gas pressure and rf discharge power is studied. It is shown that for a given microparticle suspension at a given discharge power there exist two threshold pressures. Above a specific pressure $p_\mathrm{max}$, the monolayer is always in the crystal phase. Below a specific pressure $p_\mathrm{min}$, the crystalline monolayer undergoes the mode-coupling instability and the monolayer is in the fluid phase. In between $p_\mathrm{min}$ and $p_\mathrm{max}$, the monolayer remains in the fluid phase when the pressure is increased from below $p_\mathrm{min}$ until it reaches $p_\mathrm{max}$, where it recrystallises, while it remains in the crystal phase when the pressure is decreased from above $p_\mathrm{max}$ until it reaches $p_\mathrm{min}$. A simple self-consistent sheath model can explain the melting threshold as a function of pressure and rf power through the changes of the sheath electric field and the microparticle charges, leading to the crossing of the compressional in-plane phonon mode and the out-of-plane phonon mode.
Fixed-bias multi-needle Langmuir probes, consisting of several cylindrical probes biased to different potentials, can be used to measure plasma parameters on satellites without the need to sweep bias voltages. Compared to a single Langmuir probe whose voltage is varied periodically in time, fixed-bias probes enable measurements with a significantly higher sampling rate and, owing to the high orbital speed of satellites, a much higher spatial resolution when used to diagnose space plasma. The inference of plasma parameters from needle probes is typically based on Orbital Motion Limited (OML) theory, which assumes an infinitely long cylindrical probe and the absence of nearby objects. These assumptions are rarely satisfied in an actual experimental setup. In this study, three-dimensional kinetic simulations are used to compute the currents collected by needle probes on the Norsat-1 satellite and to create a synthetic data set, or solution library. This is then used to construct regression models to infer plasma densities and satellite floating potentials from four-tuples of collected currents. Two regression approaches are considered, based on radial basis functions (RBF) and deep-learning neural networks. Regression results and OML results are compared and assessed in cases where the assumptions made in OML theory are not fully satisfied. The use of regression techniques rather than purely analytic expressions is shown to lead to more accurate inference of plasma parameters in space than approaches based on analytic approximations.
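A minimal sketch of the solution-library regression idea, mapping a synthetic library of four-needle currents to plasma density with a radial-basis-function interpolator; the forward model below is a deliberately crude placeholder for the 3D kinetic simulations, and all values are illustrative:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(2)
    bias = np.array([2.5, 4.0, 5.5, 7.0])      # needle bias voltages (V), illustrative

    def forward_model(density, potential):
        """Crude placeholder for simulated currents of four fixed-bias needles."""
        return density[:, None] * 1e-13 * np.sqrt(
            np.maximum(bias - potential[:, None], 0.1))

    # Synthetic "solution library": plasma parameters -> 4-tuple of currents.
    density = rng.uniform(1e9, 1e12, 500)       # m^-3
    potential = rng.uniform(-2.0, 2.0, 500)     # floating potential (V)
    library = forward_model(density, potential)

    # Regression model: log-currents -> log-density.
    model = RBFInterpolator(np.log10(library), np.log10(density),
                            kernel='thin_plate_spline', smoothing=1e-6)

    # Inference on a new (simulated) measurement.
    test = forward_model(np.array([5e10]), np.array([0.5]))
    inferred_density = 10 ** model(np.log10(test))[0]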
Affiliation: University of Saskatchewan
Fusion and related plasma physics research enables the development of a new, safe and reliable, high-output fusion energy source. There are, however, multiple problems to address with fusion devices. One such problem is that of contaminating dust, produced by plasma-wall interactions within the reactor.
Dust generation from plasma-facing components (PFCs) is problematic for tokamaks as they approach suitable reactor conditions. Tungsten dust is especially detrimental in the plasma core due to the associated high-Z bremsstrahlung power losses. As tungsten is a primary candidate for PFC materials in large projects such as ITER, this remains a pressing issue. In order to better understand dust dynamics in tokamaks, a dust injection experiment has been developed for the Saskatchewan Torus-Modified (STOR-M). This experiment will utilize calibrated, spherical tungsten micro-particles. A known quantity of these tungsten micro-particles is to be injected into the STOR-M tokamak, with control over the position of the plume of dust particles. This will enable the study of dust dynamics and the effects of dust particles on the tokamak plasma within STOR-M.
In preparation for this experiment, a dust injector has been designed and built, based on the fast gas valve for the University of Saskatchewan Compact Torus Injector (USCTI). Additionally, an experimental test apparatus has been developed and used to characterize the dust injector.
Two dust injection schemes have been envisaged for STOR-M. The first disperses dust particles directly into the tokamak chamber, where a discharge is to commence around these particles. The second utilizes the USCTI to trap the tungsten particles in an accelerated plasmoid, in order to deliver dust particles to the core of the STOR-M plasma within a time scale of approximately 10 µs. Integration of the dust injector into the STOR-M system is currently underway.
References:
J. Roth et al., “Recent analysis of key plasma wall interactions issues for ITER”, Journal of Nuclear Materials 390-391, 1 (2009). doi:10.1016/j.jnucmat.2009.01.037.
R. D. Smirnov et al., “Tungsten dust impact on ITER-like plasma edge”, Physics of Plasmas 22, 012506 (2015). doi:10.1063/1.4905704.
S. I. Krasheninnikov, R. D. Smirnov, and D. L. Rudakov, “Dust in magnetic fusion devices”, Plasma Physics and Controlled Fusion 53(8), 083001 (2011). doi:10.1088/0741-3335/53/8/083001.
For nearly a century, Langmuir probes have been used to infer plasma densities and temperatures from current characteristics. In practically all cases, these inferences are based on analytic expressions obtained theoretically. Despite their limitations, analytic expressions continue to be used because of their relative simplicity, and because they can be used to construct fast inference techniques requiring only modest computing resources. With recent advances in computer technology and the development of sophisticated plasma simulation models, it is now possible to reproduce in silico the response of sensors to different plasma environments, while accounting for more physical processes and more detailed geometries than is possible analytically. However, even the fastest computers and the most advanced numerical models are unable to directly provide sufficiently fast inference algorithms for near-real-time data processing. One approach that we have been pursuing consists of using 3D kinetic simulations to calculate sensor responses for a range of plasma parameters of interest, constructing solution libraries; that is, sets of computed responses along with the corresponding plasma parameters. These sets can then be used to construct and test multivariate regression techniques with which selected plasma parameters can be inferred. In this talk I will present the general steps involved in the construction of solution libraries, the use of inference models based on regressions, and the assessment of these methods. I will also present an application of the method to actual space plasma measurements.
This work considers the use of spherical segmented Langmuir probes as a means to measure ionospheric plasma flow velocities. This is done by carrying out three-dimensional kinetic self-consistent Particle in Cell (PIC) simulations to compute the response of a probe to space plasma under a range of space environment conditions of relevance to satellites in low Earth orbit (LEO) at low and mid latitudes. Computed currents and corresponding plasma parameters, including densities, temperatures, and flow velocities are then used to construct a solution library which is used to construct regression-based inference techniques. Model inference skills can then be assessed directly from the synthetic data sets obtained from our solution library. The method is then applied to actual segmented Langmuir probes mounted on the Proba-2 satellite.
Flow-through Z-pinches were first discovered over 50 years ago, manifesting themselves as stable, pinch-like structures that persisted for 100 µs in the Newton-Marshall gun experiments at LANL in the late 1960s. Linear stability analysis performed by Uri Shumlak in the 1990s showed that when $dV_z/dr > 0.1\,k V_A$ the kink mode can be stabilized in a Z-pinch plasma. Experimental work over the last couple of decades has shown that Z-pinches can be stabilized when the sheared axial flow exceeds this threshold. Quasi-steady-state Z-pinches existed near the axis of the assembly region for 20-80 µs, while the instability growth time for these Z-pinches was about 10 ns. Recent results from the sheared-flow Z-pinch experiment at the University of Washington, FuZE, have shown it may be possible to achieve a thermonuclear fusion burn. The FuZE device achieved 10 µs long fusion burns along 30 cm of the Z-pinch plasma. Using adiabatic scaling relationships, it may be possible to build a Q=6 fusion reactor using the traditional Marshall gun approach. The formation and sustainment method relies on creating a neutral gas reserve that can be continuously ionized, supplying the stabilizing plasma flow to the Z-pinch throughout the current pulse. Creating the optimized neutral gas fill profile requires tedious experimentation. Fuse Energy Technologies will be studying the scaling towards a reactor by forming and sustaining flow-through Z-pinches using a new technique: the deflagration ionization process will be replaced with an array of plasma injectors. This novel technique will allow better control of the mass flow into the Z-pinch and may allow for better comparisons with the scaling relationships. Previous work and recent simulation and experimental results from the Fuse devices will be presented.
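For reference, the shear-flow stabilization criterion quoted above can be written explicitly (notation supplied here for clarity: $k$ is the axial wavenumber of the kink perturbation, $B$ the magnetic field and $\rho$ the mass density):
$$\frac{dV_z}{dr} > 0.1\,k\,V_A, \qquad V_A = \frac{B}{\sqrt{\mu_0\,\rho}}.$$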
Energy loss in magnetic confinement fusion is dominated by plasma turbulence --- turbulent transport can surpass all other mechanisms by several orders of magnitude. Instability, driven by the Ion Temperature Gradient (ITG) mode is a key contributor to such turbulence, and is the topic of this work. Simulating such small-scale, $\mathcal{O}(\mathrm{mm})$, turbulence over an $\mathcal{O}(\mathrm{m})$ tokamak is computationally intensive, particularly with the 6 or 7-D kinetic models used to clearly capture velocity-space effects, e.g. Landau damping. To mitigate computational demands, these models have often focused on thin annular flux-tubes, which reduces the radial domain to $\mathcal{O}(\mathrm{cm})$.
Fluid modeling (4-D), the approach adopted here, also provides a substantial decrease in computational demands. This permits numerous, diverse global (full-domain) investigations of both large-scale interactions with the equilibrium profiles/gradients and meso-scale mode-mode coupling. The former is especially important in X-point geometry, where the poloidal boundary is shaped to exhibit a discontinuity which can interact nonlocally with regions well inside the tokamak.
This project extensively characterizes global ITG behavior in realistic devices of both circular$^1$ and X-point geometry. In the linear growth phase, several distinct types of eigen-structure are found, described, and quantified. A thorough investigation of the poloidal mode spectra uncovered a significant shift in mode location with respect to resonant surfaces, which was unexpected and previously unreported. The possibility for such behavior was subsequently found within a previously published gyrokinetic model.$^{1,2}$ Notably, the linear investigations also clearly demonstrate the suppression of instability by localized neoclassical flows, indicating that they can play a significant role in transport barriers.
In the turbulent phase, the study focuses on the energy spectra, the nonlinear radial heat flux (from transport coefficients to its evolution and structures), and the behavior and traits of turbulent eddies. Even with broadly similar parameters, different X-point devices demonstrated a great diversity in their spectra and structures. Clear power-law relations, some common to all cases and some characteristic of particular devices, are detailed. Under certain conditions, interaction with the X-point was found to qualitatively affect the mode throughout the domain.
[1] J. Zielinski, M. Becoulet, A. I. Smolyakov, X. Garbet, G. T. A. Huijsmans, P. Beyer, and S. Benkadda, “Global ITG eigenmodes: from ballooning angle and radial shift to Reynolds stress and nonlinear saturation”, Physics of Plasmas 27, 072507 (2020).
[2] X. Garbet, Y. Asahi, P. Donnel, C. Ehrlacher, G. Dif-Pradalier, P. Ghendrih, V. Grandgirard, and Y. Sarazin, “Impact of poloidal convective cells on momentum flux in tokamaks”, New Journal of Physics 19, 015011 (2017).
Steep thermal gradients in a magnetized plasma can induce a variety of spontaneous low frequency excitations such as drift-Alfven waves and vortices. We present results from basic experiments on heat transport in magnetized plasmas with multiple heat sources in close proximity [1]. The experiments were carried out at the upgraded Large Plasma Device (LAPD) operated by the Basic Plasma Science Facility at the University of California, Los Angeles. The setup consists of three biased probe-mounted CeB6 crystal cathodes that inject low energy electrons along a strong magnetic field into a pre-existing cold afterglow plasma, forming three electron temperature filaments. A triangular spatial pattern is chosen for the thermal sources, and multiple axial and transverse probe measurements allow for determination of the cross-field mode patterns and axial filament length. When the three sources are placed within a few collisionless electron skin depths, a non-azimuthally symmetric wave pattern emerges due to the overlap of drift-Alfven modes forming around each filament. This leads to enhanced cross-field transport from nonlinear convective (E×B) chaotic mixing and rapid density and temperature profile collapse in the inner triangular region of the filaments. Steepened thermal gradients form in the outer triangular region, which spontaneously generates quasi-symmetric, higher azimuthal mode number drift-Alfven fluctuations. A steady-current model with emissive sheath boundary predicts the plasma potential and shear flow contribution from the sources. A statistical study of the fluctuations reveals amplitude distributions that are skewed, which is a signature of intermittency in the transport dynamics.
[1] R.D. Sydora, S. Karbashewski, B. Van Compernolle, M.J. Poulos, and J. Loughran, “Drift-Alfven fluctuations and transport in multiple interacting magnetized electron temperature filaments”, Journal of Plasma Physics 85(6), 905850612 (2019).
MBT for TBM (Topological Band Magnetism)
A.H. MacDonald, C. Lei, Shu Chen, O. Heinonen, and R.J. McQueeney
Physics Department, University of Texas at Austin 78712 USA
Bulk MnBi2Te4 and MnBi2Se4 are antiferromagnetic topological insulators [1], and also van der Waals compounds with weakly-coupled seven-atom-thick (septuple) layers. I will discuss the electronic, magnetic, and topological properties of thin films formed by flexibly stacked septuple layers from a theoretical point of view, with the goal of anticipating properties that are achievable using van der Waals epitaxy. Much of the theoretical analysis will be made using an attractively simplified model [2] that retains only Dirac cone degrees of freedom on both surfaces of each septuple layer. The model can be validated, and its parameters can be estimated, by comparing with ab initio density-functional theory (DFT) calculations. I will use the model to explain when thin films exhibit a quantized anomalous Hall effect (QAHE) and when they do not, and to relate the magnetic-configuration-dependent properties of thin films to the magnetic Weyl semimetal limit of the ferromagnetic configuration. MBT thin films can have gate-tunable transitions between topologically trivial and QAH states [3], and metamagnetic QAH states [4], including ones with perfectly compensated antiferromagnetic configurations [5]. I will comment on the magneto-electric [6], and magneto-optical [7] properties of these materials and how they relate to the topological magneto-electric effect, and on the potential role in spintronics.
[1] M.M. Otrokov et al., Highly ordered wide bandgap material for quantized anomalous Hall effect and magnetoelectric effects, 2D Mater. 4, 025082 (2017).
[2] C. Lei, S. Chen, and A.H. MacDonald, Magnetized topological insulator multilayers, Proc. Nat. Acad. Sci. 117, 27224 (2020).
[3] C. Lei and A.H. MacDonald, Gate-Tunable Quantum Anomalous Hall Effects in MnBi2Te4 Thin Films, arXiv:2101.07181.
[4] C. Lei, O. Heinonen, A.H. MacDonald, and R.J. McQueeney, Metamagnetism of few layer topological antiferromagnets, arXiv:2102.11405.
[5] C. Lei, O. Heinonen, R.J. McQueeney and A.H. MacDonald, Quantum Anomalous Hall Effect in Collinear Antiferromagnetic Thin Films, to be submitted.
[6] C. Lei and A.H. MacDonald, Spin and Orbital Magneto-electric Response in Magnetized Topological Insulator Thin Films, to be submitted.
[7] C. Lei and A.H. MacDonald, Magneto-Optical Kerr and Faraday Effects in MBT Thin Films, to be submitted.
We report the observation of a giant c-axis nonlinear anomalous Hall effect in the non-centrosymmetric Td phase of MoTe2 without intrinsic magnetic order. Here, application of an in-plane current generates a Hall field perpendicular to the layers. By measuring samples across different thicknesses and temperatures, we find that the nonlinear susceptibility obeys a universal scaling with sample conductivity that is indicative of extrinsic scattering mechanisms. Application of higher bias yields an extremely large anomalous Hall ratio and conductivity.
Magnetic atoms on superconductors induce an exchange coupling, which leads to states within the superconducting energy gap. These so-called Yu-Shiba-Rusinov (YSR) states can be probed by scanning tunneling spectroscopy at the atomic scale. Here, we investigate single magnetic adatoms on a superconducting Pb surface.
As YSR states are within the superconducting energy gap, their excitation by electrons requires a subsequent inelastic relaxation process. At strong tunnel coupling, thermal relaxation is not sufficiently fast and resonant Andreev processes become the dominant tunneling process [1]. We obtain direct evidence of these two transport regimes by inserting GHz radiation into the STM junction and analyzing the photon-assisted tunneling maps [2,3].
[1] M. Ruby, F. Pientka, Y. Peng, F. von Oppen, B. W. Heinrich, K. J. Franke, Phys. Rev. Lett. 115, 087001 (2015).
[2] O. Peters, N. Bogdanoff, S. Acero Gonzalez, L. Melischek, J. R. Simon, G. Reecht, C. B. Winkelmann, F. von Oppen, K. J. Franke, Nature Physics 16, 1222 (2020).
[3] S. Acero Gonzalez, L. Melischek, O. Peters, K. Flensberg, K. J. Franke, F. von Oppen, Phys. Rev. B 102, 045413 (2020).
Majorana bound states are zero-energy states predicted to emerge in topological superconductors and intense efforts seeking a definitive proof of their observation are still ongoing. A standard route to realize them involves antagonistic orders: a superconductor in proximity to a ferromagnet. Here, we show that this issue can be resolved using antiferromagnetic rather than ferromagnetic order. We propose to use a chain of antiferromagnetic skyrmions, in an otherwise collinear antiferromagnet, coupled to a bulk conventional superconductor as a novel platform capable of supporting Majorana bound states that are robust against disorder. Crucially, the collinear antiferromagnetic region neither suppresses superconductivity nor induces topological superconductivity, thus allowing for Majorana bound states localized at the ends of the chain. Our model introduces a new class of systems where topological superconductivity can be induced by editing antiferromagnetic textures rather than locally tuning material parameters, opening avenues for the conclusive observation of Majorana bound states.
[1] S. A. Díaz, J. Klinovaja, D. Loss, and S. Hoffman, arXiv:2102.03423.
Level attraction describes a mode coalescence that can take place in driven open systems. It indicates the development of an instability region in the energy spectrum of the system bounded by exceptional points [1]. This regime has recently been reported in a number of experiments on driven dissipative cavity magnonic systems [2].
Here, we present a framework for describing mode attraction in a variety of cavity magnonic systems in which the interaction between cavity photons and magnons is described in terms of a non-linear relaxation process. We show that the memory function for the photon mode in this approach is expressed through a non-equilibrium susceptibility of the magnonic bath. This allows us to consider a situation in which the bath is driven out of equilibrium, which is necessary to describe the attraction regime. The advantage of this approach is that the susceptibility of the bath can be calculated numerically using first-principles methods.
Using this framework, we demonstrate how mode attraction can appear in driven cavity magnonic systems for certain geometries. This includes non-linear and non-local interactions between cavity photons and magnon modes.
[1] N. R. Bernier et al, Phys. Rev. A 98, 023841 (2018).
[2] Y.-P. Wang and C.-M. Hu, J. Appl. Phys. 127, 130901 (2020).
Elementary excitations in highly entangled states such as quantum spin liquids may exhibit exotic statistics, different from those obeyed by fundamental bosons and fermions. Excitations called non-Abelian anyons are predicted to exist in a Kitaev spin liquid, the ground state of an exactly solvable model proposed by Kitaev. Material realization of this spin liquid has been the subject of intense research in recent years. The 4d honeycomb Mott insulator α−RuCl3 has emerged as a leading candidate, as it enters a field-induced magnetically disordered state in which a half-integer quantized thermal Hall conductivity was reported. I will present a microscopic theory of generic spin models, including Kitaev and other bond-dependent spin interactions responsible for disordered phases. Essential ingredients to engineer spin liquids, applications to materials, and the intriguing link to exotic multipolar orders in transition metals will also be discussed.
$\mathrm{Mn}_3\mathrm{X}$ compounds in which the magnetic $\mathrm{Mn}$ atoms form AB-stacked kagome lattices have received a tremendous amount of attention since the observation of the anomalous Hall effect in $\mathrm{Mn}_3\mathrm{Ge}$ and $\mathrm{Mn}_3\mathrm{Sn}$. Although the magnetic ground state has been known for some time to be an inverse triangular structure with an induced in-plane magnetic moment, there have been several controversies about the minimal magnetic Hamiltonian. We present a general symmetry-based model for these compounds that includes a previously unreported interplane Dzyaloshinskii-Moriya interaction, as well as anisotropic exchange interactions. The latter are shown to compete with the single-ion anisotropy which strongly affects the ground state configurations and elementary spin-wave excitations. Finally, we present the calculated elastic and inelastic neutron scattering intensities and point to experimental assessment of the types of magnetic anisotropy in these compounds that may be important.
The Ce3+ pseudospin-1/2 degrees of freedom in the pyrochlore magnet Ce2Zr2O7 are known to possess dipole-octupole (DO) character, making it a candidate for novel quantum spin liquid (QSL) ground states at low temperatures. We report new heat capacity (CP) measurements on Ce2Zr2O7, which can be extrapolated to zero temperature to account for the R·ln(2) entropy using a form appropriate to quantum spin ice. The measured CP rises sharply at low temperatures, initially plateauing near 0.08 K, before falling off towards zero above 3 K. Phenomenologically, the entropy recovered above T = 0.08 K is R·ln(2) less (R/2)·ln(3/2), the missing (R/2)·ln(3/2) being the Pauling spin-ice entropy. At higher temperatures, the same data set can be fit to the results of a numerical linked cluster (NLC) calculation that allows estimates for the terms in the XYZ Hamiltonian expected for such DO pyrochlore systems. This constrains possible exotic and ordered ground states, and clearly favours the realization of a U(1)π QSL state. NLC calculations of the magnetic susceptibility and dynamic structure factor agree with these results and provide further constraints on the experimentally-determined values of the exchange parameters.
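For orientation, the quoted entropy recovery corresponds to
$$\Delta S = R\ln 2 - \tfrac{R}{2}\ln\tfrac{3}{2} \approx 5.76 - 1.69 \approx 4.08~\mathrm{J\,mol^{-1}\,K^{-1}},$$
where $\tfrac{R}{2}\ln\tfrac{3}{2} \approx 1.69~\mathrm{J\,mol^{-1}\,K^{-1}}$ is Pauling's residual spin-ice entropy (numerical values added here for illustration).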
The physics of heavy 5d transition metal oxides can be remarkably different from that of their lighter 3d counterparts. In particular, the presence of strong spin-orbit coupling (SOC) effects can lead to the formation of exotic ground states such as spin-orbital Mott insulators, topological insulators, Weyl semimetals, and quantum spin liquids. In materials with an edge-sharing octahedral crystal structure, large SOC can also give rise to highly anisotropic, bond-dependent, Kitaev interactions. The first, and thus far the best, experimental realizations of Kitaev magnetism are honeycomb lattice materials: the 5d iridates A$_2$IrO$_3$ and the 4d halide $\alpha$-RuCl$_3$. However, there has recently been a growing interest in the search for Kitaev magnetism in other families of materials, such as the double perovskite iridates (A$_2$BIrO$_6$) and iridium halides (A$_2$IrX$_6$). In this talk I will describe what we can learn about these novel materials using synchrotron x-ray scattering and spectroscopy techniques, including Resonant Inelastic X-ray Scattering (RIXS) and X-ray Absorption Spectroscopy (XAS). By revealing detailed information about the crystal electric field splitting, SOC strength, and magnetic excitation spectrum, these techniques provide an ideal probe of spin-orbit-driven ground states and Kitaev magnetism.
We study an effective pseudo-spin model derived from microscopics for d$^2$ materials on various lattice geometries. We find that the interplay between electron-electron interactions and spin-orbit coupling generates intriguing multipole-multipole interactions. These interactions give rise to various multipolar phases, which we identify using computational techniques such as classical Monte Carlo and exact diagonalization. Potential applications and extensions of this theory will also be presented.
Condensed matter systems admit topological collective excitations above a trivial ground state, an example being Chern insulators formed by Dirac bosons with a gap at finite energies. However, in contrast to electrons, there is no particle-number conservation law for collective excitations. This gives rise to particle number-nonconserving many-body interactions whose influence on single-particle topology is an open issue of fundamental interest in the field of topological quantum materials.
Taking magnons in honeycomb-lattice ferromagnets as an example, we uncover topological magnon insulators that are stabilized by interactions through opening Chern-insulating gaps in the magnon spectrum. This can be traced back to the fact that the particle-number nonconserving interactions break the effective time-reversal symmetry of the harmonic theory. Hence, magnon-magnon interactions are a source of topology that can introduce chiral edge states, whose chirality depends on the magnetization direction. Importantly, interactions do not necessarily cause detrimental damping but can give rise to topological magnons with exceptionally long lifetimes. We identify two mechanisms of interaction-induced topological phase transitions and show that they cause unconventional sign reversals of transverse transport signals, in particular of the thermal Hall conductivity. Our results demonstrate that interactions can play an important role in generating nontrivial topology.
Reference: Alexander Mook, Kirill Plekhanov, Jelena Klinovaja, Daniel Loss, arXiv:2011.06543 (2020)
We argue that the usual magnetization $\vec{M}$, which represents a correlated property of 10$^{23}$ variables, but is summarized by a single variable, cannot diffuse; only the non-equilibrium spin accumulation magnetization $\vec{m}$, due to excitations, can diffuse. For transverse deviations from equilibrium this is consistent with work by Silsbee, Janossy, and Monod (1979), and by Zhang, Levy, and Fert (2002).
We examine the corresponding theory of longitudinal deviations for a ferromagnet using $M$ and the longitudinal spin accumulation $m$. If an initial longitudinal magnetic field $H$ has a frozen wave component that is suddenly removed, the system approaches equilibrium via two exponentially decaying coupled modes of $M$ and $m$, one of which includes diffusion. If the system in a slab geometry is subject to a time-oscillating spin current, the system approaches equilibrium via two spatially decaying modes, one associated with spatial decay away from each surface. We also explore the possibility that decay of $M$ directly to the lattice is negligible, so that decay of $M$ must be mediated through decay to $m$ and then to the lattice.
In recent years the prospects of quantum machine learning and quantum deep neural networks have gained prominence in the scientific community. By combining ideas from quantum computing with machine learning methodology, quantum neural networks (QNNs) promise new ways to interpret classical and quantum data sets. However, many of the proposed quantum neural network architectures exhibit a concentration of measure that leads to barren plateau phenomena. In this talk, I will show that, with high probability, entanglement between the visible and hidden units can lead to exponentially vanishing gradients. To overcome the gradient decay, our work introduces a new step in the process which we call quantum generative pre-training.
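As a minimal illustration of the barren-plateau effect mentioned above (not the specific visible/hidden-unit architecture of this work), the following sketch, assuming the open-source PennyLane library, estimates the variance of one gradient component of a random layered circuit as the number of qubits grows; the ansatz, observable and sample sizes are chosen arbitrarily for illustration.

import pennylane as qml
from pennylane import numpy as np

def grad_variance(n_qubits, n_layers=10, n_samples=50):
    """Variance of one gradient component over random parameter draws."""
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def cost(params):
        for layer in range(n_layers):
            for w in range(n_qubits):
                qml.Rot(*params[layer, w], wires=w)   # generic single-qubit rotation
            for w in range(n_qubits - 1):
                qml.CNOT(wires=[w, w + 1])            # entangling ladder
        return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

    grad = qml.grad(cost)
    samples = []
    for _ in range(n_samples):
        params = np.array(np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits, 3)),
                          requires_grad=True)
        samples.append(grad(params)[0, 0, 0])
    return np.var(samples)

for n in (2, 4, 6, 8):
    print(n, grad_variance(n))   # the gradient variance is expected to shrink as n grows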
Since many concepts in theoretical physics are well known to scientists in the form of equations, it is possible to identify such concepts in non-conventional applications of neural networks to physics.
In this talk we examine what is learned by convolutional neural networks, autoencoders or siamese networks in various physical domains. We find that these networks intrinsically learn physical concepts like order parameters, energies, or other conserved quantities.
In this talk, I will introduce a generalization of the earth mover's distance to the set of quantum states. The proposed distance recovers the Hamming distance for the vectors of the canonical basis and, more generally, the classical earth mover's distance for quantum states diagonal in the canonical basis. I will discuss some desirable properties of this distance, including a continuity bound for the von Neumann entropy and its insensitivity to local perturbations, and I will show how these properties make the distance suitable for learning quantum data using quantum generative adversarial networks.
Based on https://arxiv.org/abs/2009.04469 and https://arxiv.org/abs/2101.03037.
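To illustrate the classical limit mentioned above (the quantum distance reduces to the classical earth mover's distance for states diagonal in the canonical basis, and to the Hamming distance for basis vectors), here is a small Python sketch that computes the classical earth mover's distance over bit strings with the Hamming ground metric by solving the transport linear program; the example distributions are arbitrary.

import itertools
import numpy as np
from scipy.optimize import linprog

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def classical_emd(p, q, states):
    """Earth mover's distance between distributions over bit strings,
    with the Hamming distance as ground metric (a small transport LP)."""
    n = len(states)
    cost = np.array([[hamming(s, t) for t in states] for s in states], dtype=float)
    A_eq, b_eq = [], []
    for i in range(n):                      # row sums of the plan equal p
        row = np.zeros(n * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
        b_eq.append(p[i])
    for j in range(n):                      # column sums of the plan equal q
        col = np.zeros(n * n)
        col[j::n] = 1.0
        A_eq.append(col)
        b_eq.append(q[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun

states = ["".join(b) for b in itertools.product("01", repeat=3)]
p = np.zeros(8); p[states.index("000")] = 1.0   # point mass on 000
q = np.zeros(8); q[states.index("011")] = 1.0   # point mass on 011
print(classical_emd(p, q, states))              # 2.0, the Hamming distance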
In this presentation you'll see how to use TensorFlow Quantum to conduct large-scale research in QML. The presentation will be broken down into two major sections. First, you will follow along as we implement and scale up (beyond the authors' original size) some existing QML works from the literature in TensorFlow Quantum. We will focus on how to write effective TensorFlow Quantum code, visualization tools, and surrounding software that the TensorFlow ecosystem has curated that can be leveraged for QML. In the second half of the presentation we will review our recent work titled "Power of data in quantum machine learning" (https://arxiv.org/abs/2011.01938) and why we think developing an understanding of data is an important step to achieving quantum advantage in QML.
Despite an undeserved reputation for being hard to understand, the mathematics behind quantum computing is based on relatively straightforward linear algebra. This means that the equations governing quantum computing are intrinsically differentiable. This simple observation has remarkable consequences. In particular, many of the tools developed over the past decades for deep learning, such as gradient-based training algorithms, can be applied to quantum computers with little modification. In this talk, I will overview how these ideas can be explored using freely available open-source software and publicly accessible quantum computing platforms, enabling the discovery and optimization of new and interesting quantum computing algorithms.
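As one concrete illustration of this point, the minimal sketch below uses the open-source PennyLane library, which exposes quantum circuits as differentiable functions; the circuit and cost are chosen arbitrarily. Gradient descent then trains circuit parameters exactly as it trains neural-network weights.

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def cost(params):
    # A tiny parameterized circuit; the expectation value below is the "loss".
    qml.RY(params[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[1], wires=1)
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

opt = qml.GradientDescentOptimizer(stepsize=0.2)
params = np.array([0.1, 0.2], requires_grad=True)
for _ in range(100):
    params = opt.step(cost, params)   # standard gradient-based update
print(cost(params))                   # approaches the minimum value of -1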
In the distant future we expect to be using large-scale, nearly perfect quantum computers that aid in drug discovery, break RSA encryption, and outperform supercomputers in certain machine learning tasks. Today we have access to small quantum computers afflicted by noise and error. Somewhere between these two extremes lies a momentous event for the field known as quantum advantage: solving a computational problem of practical value, using a quantum computer in an essential manner. With what tools must we equip ourselves in order to reach quantum advantage as soon as possible? This talk will introduce quantum enhanced sampling, a tool for speeding up a critical component of many near-term quantum algorithms: estimation of quantities encoded in quantum operations. This helps to bridge the gap between several near-term quantum algorithms and their far-term counterparts. We will motivate the need for this tool through recent examples in quantum machine learning and quantum chemistry. Then we will give a pedagogical introduction to quantum enhanced sampling methods. Finally, we will show results demonstrating the performance of this method and will discuss the implications for near-term quantum computing.
Many important challenges in science and technology can be cast as optimization problems. When viewed in a statistical physics framework, these can be tackled by simulated annealing, where a gradual cooling procedure helps search for ground state solutions of a target Hamiltonian. While powerful, simulated annealing is known to have prohibitively slow sampling dynamics when the optimization landscape is rough or glassy. In this talk I will show that by generalizing the target distribution with a parameterized model, an analogous annealing framework based on the variational principle can be used to search for ground state solutions. Autoregressive models such as recurrent neural networks provide ideal parameterizations since they can be exactly sampled without slow dynamics even when the model encodes a rough landscape. We implement this procedure in the classical and quantum settings on several prototypical spin glass Hamiltonians, and find that it significantly outperforms traditional simulated annealing in the asymptotic limit, illustrating the potential power of this yet unexplored route to optimization.
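In symbols (notation introduced here for illustration), the annealing objective is the variational free energy of the model distribution $p_\lambda$,
$$F_\lambda(T) = \mathbb{E}_{x\sim p_\lambda}\!\left[H(x) + T\ln p_\lambda(x)\right], \qquad \nabla_\lambda F_\lambda = \mathbb{E}_{x\sim p_\lambda}\!\left[\big(H(x) + T\ln p_\lambda(x)\big)\,\nabla_\lambda \ln p_\lambda(x)\right],$$
minimized while $T$ is gradually lowered; because an autoregressive model provides exact samples and exact log-probabilities, both expectations can be estimated without Markov-chain dynamics.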
Control systems are vital in engineering, and machine learning is transforming data science; however, their basic constructs are expressed in terms of classical physics, which impedes generalizing to quantum control and quantum machine learning in a consistent way. We incorporate classical and quantum control and learning and their dependencies into a single conceptual framework. Then we discuss inconsistencies between current definitions of quantum control and quantum learning vs their descriptions achieved by generalizing classical versions using our framework. We illustrate our framework in the context of quantum-enhanced interferometric-phase estimation, which incorporates both control and machine learning.
Generating high-quality data (e.g. images or video) is one of the most exciting and challenging frontiers in unsupervised machine learning. Utilizing quantum computers in such tasks to potentially enhance conventional machine learning algorithms has emerged as a promising application, but poses big challenges due to the limited number of qubits and the level of gate noise in available devices. In this talk, we provide the first practical and experimental implementation of a quantum-classical generative algorithm capable of generating high-resolution images of handwritten digits with state-of-the-art gate-based quantum computers. In the second part of my talk, we focus on combinatorial optimization; another key candidate in the race for practical quantum advantage. Here we introduce a new family of quantum-enhanced optimizers and demonstrate how quantum generative models can find lower minima than those found by means of stand-alone state-of-the-art classical solvers. We illustrate our findings in the context of the portfolio optimization problem by constructing instances from the S&P 500 stock market index. We show that our quantum-inspired generative models based on tensor networks generalize to unseen candidates with lower cost function values than any of the candidates seen by the classical solvers. This is the first demonstration of the generalization capabilities of quantum generative models that brings real value in the context of an industrial-scale application.
Physics degree holders are among the most employable in the world, often doing everything from managing a research lab at a multi-million dollar corporation, to developing solutions to global problems in their own small startups. Science and technology employers know that with a physics training, a potential hire has acquired a broad problem-solving skill set that translates to almost any environment, as well as an ability to be self-guided and self-motivated so that they can teach themselves whatever is needed to be successful at achieving their goals. Therefore it's no surprise that the majority of physics graduates find employment in private-sector, industrial settings. At the same time, about 25% of graduating PhDs will take a permanent faculty position, yet academic careers are usually the only track to which students are exposed while earning their degrees.
In this talk, I will explore less-familiar (but more common!) career paths for physics graduates, and will provide information on resources to boost your career planning and job hunting skills.
IBM Watson is well known for industry-leading natural language processing that defeated defending champions on Jeopardy! and most recently, learned to debate complex topics with humans. Equally as exciting, though perhaps less publicized, IBM works with government, enterprise, and industry to apply machine learning to real world applications such as customer care. This talk offers a view into the ways AI is transforming the customer service landscape: expanding capacity to serve, improving user experience, saving humans time and organizations money.
As a trained experimental scientist, when it became clear that I needed to transition to industry I was left to find something that fit my skills. Data science offered that opportunity. I have worked in numerous industries including health care, fintech, oil and gas, and agriculture applying statistical knowledge and machine learning (ML) techniques. Knowledge from my physics degree set me up for success through learning how to solve complex problems while experience has taught me how to approach the problem with business and return-on-investment in mind. Now, at AltaML, I see use cases from many different industries in a single day. We apply ML to everything from safety to agriculture to robotics. I’ll provide some examples of how we’ve applied AI/ML to real-world problems to help people make better decisions.
Chilling an underground mining project becomes more costly as the depth increases. The air temperature increases as it descends due to auto-compression, and additional heat from the host rock, equipment and processes is inevitable. A move to battery-powered vehicles may allow for less air flow, pending legislation changes, but battery-powered vehicles and the charging process liberate heat. The susceptibility of a reduced air flow to additional heat becomes an issue if management intends to maintain the same level of activity. This paper discusses the cost-comparison results of a feasibility study for a planned mine expansion and prototype testing data for a patent-pending cryogenic chilling system. Cryogenic liquids store energy; effectively, the heat from the mine is converted to electricity.
Furthermore, compressed air can be produced whilst simultaneously chilling (5000 cfm produces 1.2 MW of chilling) and providing motive force: engines for equipment can be fueled by cryogenic liquids, and such a vehicle would produce cool, clean exhaust air, with roughly 1/3 of the energy going to motive power and 2/3 to chilling. The results obtained from a prototype system, approximately the size of a small spot chilling system, demonstrate conclusively that rapid response to heat added to the air flow is a key feature; Chilling on Demand™ can therefore overcome the issues of heat management at lower air flows. Since the chilling is delivered by a cryogenic fluid, extending the depth of a mine only requires an additional surface liquefier module and a longer pipe. And since the system can provide chilling on demand, it complements ventilation-on-demand (VOD) systems, increasing the economic benefit of installed VOD systems.
This talk aims to give an example of how a degree in physics can lead to an interesting industrial career in optical sensor development. A broad understanding of different physical laws and behaviors (mechanics, thermodynamics, electromagnetics, optics), combined with a practical grounding in electronics, programming and machining, provides an ideal skill set for developing optical instruments where complex interactions between different sub-systems must be understood and anticipated. I will describe how my university physics degrees led to a varied and interesting career developing satellite instruments for ozone monitoring and wildfire measurement, thermal and terahertz imaging cameras, magnetic tools for pipeline inspection and a laser-based instrument for disease diagnosis in exhaled breath. Along the way I will give a brief introduction to the inner workings of these various sensors.
This talk focuses on the responsibility of scientists to counter pseudo-scientific ideas in society, and reviews the factors that have led to a rise in popular anti-science sentiment. I will provide insights into how to communicate the ideas of science with the public, and I will give some examples of important environmental issues that are most commonly misconstrued by the general public, from a pro-science EcoModernist perspective.
TBD
Microfluidic technology has been used in many application areas including diagnostics, drug delivery and drug discovery. In drug discovery, microfluidic devices have been used to perform combinatorial experiments where several drug candidates can be exposed to biological materials such as protein drug targets, cells or small organisms simultaneously at various concentrations in order to determine a suitable drug candidate for further investigation. Small organisms such as C. elegans worms or Drosophila flies are ideal model organisms used in the drug discovery process to understand biological processes and to study human diseases at the molecular-genetic level. Nevertheless, these organisms are small and difficult to handle. Microfluidic devices provide the capability to handle these organisms one at a time but in a parallelized manner, allowing unprecedented capability to perform combinatorial analysis.
In this talk, I will present some of the unique microfluidic devices that we have developed in my laboratory to study and perform assays on these model organisms. First, I will describe the work that we have done to characterize the phenomenon of electrotaxis of C. elegans worms. I will demonstrate its use to immobilize the worm and sort it, as well as to measure its neuromuscular response. I will also describe devices to immobilize and image the neurons in Drosophila larvae. Finally, I will describe microinjection devices that are capable of immobilizing C. elegans and Drosophila and injecting biomolecules into precise regions within them. The versatility of these devices provides new capabilities to biophysicists, medical researchers and drug discovery scientists to study these organisms in great detail.
Non-invasive liquid biopsies offer hope for a point-of-care glimpse into the molecular hallmarks of disease, including drug resistance and drug targets. Among different types of liquid biopsy platforms, tumor-derived exosomes (EXs) are unique due to their role in intercellular tumor communication, serving as carriers of biological information. Exosomes are nanoscale extracellular vesicles (EVs) released from cells into body fluids, carrying cell cargos such as DNA and RNA reflective of their parental cells. They offer a unique opportunity to access biologically important aspects of disease complexity.
In the Mahshid Lab, we develop a new nanoplatform for molecular analysis of single EVs. We harness a nanopatterned fluidic device that incorporates SERS (surface-enhanced Raman spectroscopy) for molecular profiling of cancerous EVs. Using this approach, we were able to distinguish a library of peaks expressed in GBM (glioblastoma) EVs from two distinct glioblastoma cell lines (U373, U87) and compare them to those of non-cancerous glial EVs (NHA) and artificial homogeneous vesicles. In parallel, we develop a nanofluidic device with tunable confinement to trap EVs in a free-energy landscape that modulates vesicle dynamics in a manner dependent on EV size and charge. We show that the surface charge of particles can be measured by analysis of particle diffusion in the landscape. Since extracellular vesicles are representative of their parental cells, their surface charge and size can provide information about their parental cells. As proof of principle, we perform size and charge profiling of a population of EVs extracted from human glioblastoma astrocytoma (U373) and normal human astrocyte (NHA) cell lines.
Novel therapeutic strategies are urgently needed to control the SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) pandemic. This virus belongs to a larger class of coronaviruses currently circulating, which pose major threats to global public health. Here, I present the fabrication and characterization of Erythro-VLPs: Erythrocyte-Based Virus-Like Particles, i.e., red blood cell based proteoliposomes carrying the SARS-CoV-2 spike protein.
Erythrocytes can present antigens to the immune system when senescent cells are phagocytized in the spleen. This capacity, together with their high biocompatibility, makes red blood cells (RBCs) effective vehicles for the presentation of viral immunopathogens, such as the SARS-CoV-2 S-protein, to the immune system. The proteoliposomes were prepared by tuning the lipidomics and proteomics of the RBC membranes on a nanoscale. Epi-fluorescence and confocal microscopy, dynamic light scattering (DLS), and Molecular Dynamics (MD) simulations were used to characterize the liposomes and the insertion of the S-proteins. The protein density on the outer membrane was estimated to be 70 proteins/μm. The Erythro-VLPs have a well-defined size distribution of 222±6 nm and exhibit dose-dependent binding to ACE-2 (angiotensin converting enzyme 2) in biolayer interferometry assays.
We present direct experimental evidence of a pronounced immunological response in mice after 14 days, following two injections, and the production of antibodies was confirmed by ELISA. In addition, these antibodies were found to be specific to the S-protein RBD sub-domain, demonstrating that the protein is not hidden or conformationally altered by the developed protocol. This immunological response was observed in the absence of any adjuvant, which is usually required for protein-based vaccines.
The RBC platform that we present in this work can easily and rapidly be adapted to different viruses in the future by embedding the corresponding antigenic proteins and opens novel possibilities for therapeutics.
[1] Himbert et al., “Erythro-VLPs: Embedding SARS-CoV-2 spike proteins in red blood cell based proteoliposomes leads to pronounced antibody response in mouse models”, submitted.
As a result of the growing world-wide antibiotic resistance crisis, many currently existing antibiotics have been shown to be ineffective as bacteria develop resistance mechanisms. A limited number of potent antibiotics, such as polymyxin B, remain successful at suppressing microbial growth; however, they are deemed a last resort due to their high toxicity. Adverse side effects associated with polymyxin B treatment include nephrotoxicity, neurotoxicity, and hypersensitivity. Previous research has focused on the development of an effective drug delivery system that can inhibit bacterial growth while minimizing negative side effects. In particular, nanoparticles have been of interest as they can be conjugated to a drug of interest, allowing for effective drug transport to the target. Despite their potential, a nanoparticle-based antibiotic delivery system has yet to be established, due to the nanoparticles' lack of specificity and biocompatibility, which leads to rejection. Here, we present a novel antimicrobial drug delivery method that uses modified red blood cells (RBCs) encapsulating polymyxin B. These RBC-based antibiotics are made specific to certain bacteria through the addition of the corresponding antibodies to their cell membranes. We investigate whether this drug delivery system is both effective at inhibiting bacterial growth and selective, which is important to minimize the negative side effects seen with conventional polymyxin B treatment. This RBC-based platform is potentially advantageous compared to synthetic nanoparticle-based approaches because of its biocompatibility and bioavailability, resulting in a longer retention time in the human body.
During the Covid-19 pandemic, face masks have become the new norm, with their widespread use in public as part of a multi-barrier approach for infection control, including physical distancing, hand hygiene, and altered social behaviour. Masks provide benefits both to the mask wearer and to those in their proximity when they are worn by all individuals in a common area. The gold standard in personal protective equipment (PPE) remains the N95 respirator, made of synthetic materials with electrostatic properties that filter and retain more than 95% of aerosols, including those smaller than 1 µm. N95 respirators degrade during washing and disinfection, and as such are single-use disposable PPE. Similarly, surgical-style masks made of polypropylene and non-woven materials are unsuited to frequent washing/decontamination with heat or detergents. Owing to their disposable nature, most commercially available PPE such as the above are unsustainable for supplying the public due to supply interruptions, high cost over time, and a lack of aesthetic attributes (colour, pattern) to encourage use.
As a result, textile manufacturers and a new cottage industry of homemade mask makers, including volunteers making donated masks for vulnerable populations, can provide fabric face masks to the public. Recently, the U.S. CDC indicated that commercial manufacturers of face masks will require testing, although the conditions of such standards have yet to be outlined. Dr. Tam has also made recommendations for mask materials to elevate the quality of masks being worn by the public; however, these recommendations are not easily translated into actual mask construction and are based on very limited testing of fabric masks.
This presentation will discuss the development of a test apparatus for assessing mask efficacy by measuring the aerosols transmitted through the masks. Our laser-based system uses relatively inexpensive diode lasers to illuminate the exhaled particles, a webcam for data acquisition, and Python-based particle tracking software. We make the approximation that the intensity of the light scattered from a droplet is proportional to the size of the droplet, but we will be able to quantify the droplet size by analyzing the data with Mie scattering theory.
Reference:
https://advances.sciencemag.org/content/6/36/eabd3083?te=1&nl=running&emc=edit_ru_20200822
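As a rough sketch of the tracking step (the trackpy package is one example of Python-based particle tracking software, chosen here purely for illustration; the synthetic frame, feature diameter, and brightness threshold are invented):

import numpy as np
import trackpy as tp

# Synthetic stand-in for one webcam frame: a dark background with a few
# bright, roughly Gaussian "droplet" flashes (illustrative only).
rng = np.random.default_rng(1)
frame = rng.normal(5.0, 1.0, (240, 320))
yy, xx = np.mgrid[0:240, 0:320]
for cy, cx, amp in [(60, 80, 150.0), (120, 200, 90.0), (200, 150, 60.0)]:
    frame += amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.5 ** 2))

# Locate bright features; the integrated brightness ("mass") stands in for the
# scattered intensity, taken above as a proxy for droplet size.
features = tp.locate(frame, diameter=11, minmass=200)
print(features[["x", "y", "mass"]])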
Social distancing measures have been the main non-pharmaceutical intervention (NPI) against the COVID-19 pandemic. Numerous large-scale analyses have studied how these measures have affected human movement, finding sizeable drops in average mobility. Yet comparatively little attention has been paid to higher-order effects such as “superspreading events” which are known to be outsized drivers of pandemics. Networks with heterogeneous (high variance) distributions of contacts can dramatically accelerate spreading processes, even if the average number of contacts is low. This stresses the need to quantify higher-order effects, and the (in)ability of existing NPI to reduce them.
Here we assess this by applying tools of statistical physics to approximately 12 billion anonymized mobile phone traces from 2.33 million devices in the Chicago metropolitan area, from Jan.1 to Jun.30, 2020, covering the first wave of state- and city-level social distancing measures in the pandemic. To identify potential super-spreading events, we grid these data at a fine spatial and temporal resolution, revealing large, transient co-localizations of people which we term hotspot events. We then ask about the spatiotemporal distribution of these events and the mobility statistics of the people participating in them--both before and after the implementation of social distancing measures.
Encouragingly, we find that distancing policies heralded a dramatic rarefaction of people, reflected in an increase in the entropy of their spatial distribution. As a result, we observe a concomitant drop in hotspot event frequency, with the largest reduction occurring in the urban core. This, however, belies a more worrisome trend: though we observe a large average reduction in the amount people travel (as measured by individual radius of gyration), this fails to be true for the subset of users participating in hotspot events. These users display higher-than-average baseline mobility, which persists (and even increases) during the post-lockdown period.
Our findings indicate that though social distancing policies may succeed in reducing average mobility, their effectiveness in reducing the key driver of spreading processes on networks (the second moment of the degree distribution) may be more limited. This in turn suggests the need for additional NPI specifically targeted at “super-spreading events”.
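As a simple sketch of the two mobility statistics used above (radius of gyration and spatial entropy), computed here on invented traces rather than the actual mobile-phone data:

import numpy as np

def radius_of_gyration(points):
    """Root-mean-square distance of a device's visited locations from their centroid."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    return np.sqrt(((points - center) ** 2).sum(axis=1).mean())

def spatial_entropy(points, bins=50):
    """Shannon entropy of the distribution of visits over a regular spatial grid."""
    points = np.asarray(points, dtype=float)
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Illustrative traces (projected coordinates in km), not the actual Chicago data.
rng = np.random.default_rng(2)
home_bound = rng.normal(0.0, 0.5, size=(1000, 2))    # stays near one location
roaming = rng.normal(0.0, 8.0, size=(1000, 2))       # travels widely
for label, pts in [("home-bound", home_bound), ("roaming", roaming)]:
    print(label, radius_of_gyration(pts), spatial_entropy(pts))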
For a lot of us, the COVID-19 pandemic has meant dialing back, hunkering down, and holding off until things get back to normal. For some, though, it has meant ramping up and going the extra mile to get things back to normal. This presentation will attempt to tell a story that starts with a small group of students from Lakehead University and the Northern Ontario School of Medicine that aimed to redistribute and manufacture PPE as a STOPGAP solution to fill shortages in northern Ontario. The initiative grew into a network of doctors, students, professors, staff, and industrial partners working towards keeping people safe, now and into the future. What we discovered is not, in our opinion, as important as how we discovered it, and how a group of passionate people put their lives on hold to develop 3D printed face masks, make test equipment from aquarium parts and hot glue, meet doctors from SickKids hospital on the side of the highway to exchange filters, and partner with business people willing to risk everything to bring Ontario the ability to control its own supply of PPE. We are still working to overcome this challenge, and we hope the story of what we did will inspire others to find creative ways to overcome similar obstacles that may face us in the future.
This keynote will provide a high-level overview of the current state-of-the-art in quantum technologies and their applications to sensing, imaging and metrology. I will start with a brief historical view about how National Metrology Laboratories like NIST and NRC-Canada have used these technologies for years. I will then transition to some near-term commercial applications before returning to a long-term view of the future applications of these quantum technologies to National Metrology Laboratories, society, and basic science.
Single-photon detectors are being increasingly implemented in a variety of applications ranging from quantum information science to spectroscopy and remote sensing. These measurement techniques rely on the accurate detection of single photons at specific wavelengths. National metrology institutes worldwide, including the National Research Council Canada, have been developing characterization techniques and reference standards for such single-photon technologies. The implementation of quantum emitters as metrology single-photon source standards will enable the in-situ characterization of next-generation photonic technologies including quantum photonic integrated circuits, where single-photon sources, detectors, and other optical components for quantum communication and computation are fabricated on one on-chip platform. This presentation will discuss ongoing efforts in the development of characterization methodologies for single-photon technologies, including the need for consistency in the measurement of performance metrics for single-photon emitters.
Mechanical systems represent a fundamental building block in many areas of science and technology, from atomic-scale force sensing to quantum information transduction to kilometer-scale detection of infinitesimal spacetime distortions. All such applications benefit from improved readout sensitivity, and many seek new types of mechanical actuation. In this talk I will discuss our efforts to realize a tabletop, room-temperature optomechanical system capable of sensing the broadband (100Hz - 1MHz) quantum noise in the radiation force from incident laser light; this would represent a milestone toward optomechanically tuned squeezed light sources and mechanical sensitivities beyond the standard quantum limit. Time permitting, I will also discuss our progress toward creating a qualitatively different kind of optomechanical system in which light, even an average of a single photon in the apparatus, strongly tunes the spatial extent and effective mass of a mechanical mode.
After 3 decades of preparation, tools and procedures for reproducible fabrication of atom-perfect silicon structures have matured to a point where it has now become possible to build proto-devices while also planning viable atom-scale manufacturing. In the beginning, device complexity and production rates will be low while manufacturing costs are high, challenges that must be offset by the high value of select initial products. Inherent attributes including ultra high speed, ultra small size/weight/power, variance-free manufacture and routine access to some quantum effects are waiting to be harnessed.
A glimpse of our current capabilities will be shown by examples including structures we can make, unique electronic properties of those, chemical and electromagnetic sensing capabilities and fabrication automation through machine learning.
Near-term device objectives will be mentioned, such as a quantum metrological current standard, a quantum metrology-based standard thermometer capable of operating at unusually high temperatures, and a quantum random number generator that is uniquely portable owing to its low power consumption.
Collaborative work with Professor Konrad Walus, EE, UBC, that shows the unprecedented low power consumption of binary and analog atom-defined silicon circuitry will be briefly sketched.
Entanglement is the essential resource that defines the emerging paradigm of quantum-enabled devices. Here we confirm the long-standing prediction that a parametrically driven mechanical oscillator can entangle electromagnetic fields. We observe stationary emission of path-entangled microwave radiation from a micro-machined silicon nanostring oscillator, squeezing the joint field operators of two thermal modes by 3.40(37) dB below the vacuum level. This entanglement can be used to implement Quantum Illumination, a sensing technique that employs entangled photons to boost the detection of low-reflectivity objects in environments with bright thermal noise. The promised advantage over classically correlated radiation is particularly evident at low signal photon flux, a feature that makes the protocol potentially useful for non-invasive biomedical scanning or low-power short-range radar detection. In this work, we experimentally simulate quantum illumination at microwave frequencies. We generate entangled fields using a Josephson parametric converter at millikelvin temperatures to illuminate a room-temperature object at a distance of 1 meter in a proof-of-principle radar setup.
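For orientation, a squeezing level quoted in decibels corresponds to a joint-quadrature variance relative to vacuum of (a generic conversion, not a detail specific to this experiment)
\[
\frac{\langle\Delta\hat{X}_-^2\rangle}{\langle\Delta\hat{X}_-^2\rangle_{\mathrm{vac}}} = 10^{-S_{\mathrm{dB}}/10}, \qquad S_{\mathrm{dB}} = 3.40 \;\Rightarrow\; 10^{-0.340} \approx 0.46,
\]
i.e. the variance of the squeezed joint operator is reduced to roughly 46% of the vacuum value.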
The DND/CAF is faced with a rapidly evolving defence, safety, and security environment with the emergence of disruptive technologies such as quantum. It is expected that some disruptive technologies, quantum in particular, will have an impact in less than 5 years. Quantum-enabled technologies will have applicability across a wide array of defence applications, such as sensing (including position, navigation, and timing), communications, computing, and advanced materials. Canada has benefited from early, world-renowned strength in quantum technologies. As such, the DND/CAF Quantum Science and Technology Strategy (Strategy) leverages strong national and international partnerships and calls for coherence across departmental investments to accelerate the development of defence-relevant quantum technologies. Enabling sensitive Canadian technologies to develop beyond the laboratory is in the best interest of DND/CAF in order to be prepared for disruptions in the future operating environment. The Strategy also calls for increased internal quantum research capacity and human capital across the department to allow DND/CAF to be in a position to assess, advise, and benefit from allied efforts and face the challenges of the 21st century and beyond.
The National Research Council is launching The Internet of Things: Quantum Sensors Challenge Program in 2021. This program has seven years of funding and aims to develop a disruptive generation of quantum sensors that are orders of magnitude better than sensors that exist today. The program is structured to encourage collaborative research projects between the NRC and researchers in academia, industry, and other government departments. This talk will discuss program details and review the collaborative model.
The diamond Nitrogen-Vacancy (NV) centre is a defect which occurs in natural diamonds and can also be introduced artificially. Due to screening effects, the NV centre exhibits remarkably long spin coherence times. This means the diamond NV centre can be used for precision magnetometry, using Optically Detected Magnetic Resonance (ODMR) of the Zeeman splitting. This talk will review the history and basic physics of the diamond NV centre, and describe work toward a new compact diamond NV-centre magnetometer, with potential applications in geophysical sensing.
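For reference (standard NV ground-state physics, not a result specific to the magnetometer described above), with a bias field $B_{\parallel}$ along the NV axis and neglecting strain and hyperfine structure, the two ODMR resonances appear at
\[
f_{\pm} \simeq D \pm \gamma_{\mathrm{NV}} B_{\parallel}, \qquad D \approx 2.87\ \mathrm{GHz}, \quad \gamma_{\mathrm{NV}} \approx 28.0\ \mathrm{GHz/T},
\]
so the field component is recovered from the splitting as $B_{\parallel} = (f_{+} - f_{-})/(2\gamma_{\mathrm{NV}})$.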
We present a novel quantum multi-mode time-bin interferometer that is suitable for a wide range of optical signals and capable of being used for free-space quantum channels. Our design uses only reflective optics, with curved mirrors providing the one-to-one imaging system necessary for a multi-mode interferometer. The curved mirrors are ideal since, unlike lenses, their focal length depends only on the geometry of the mirror, allowing them to be used with a wide range of optical signals while avoiding chromatic effects. Furthermore, each curved mirror is placed in a cavity-like configuration with a flat mirror, creating a relatively small physical footprint. The small physical footprint allows the interferometer to be placed in a monolithic chassis built using additive manufacturing. Additive manufacturing enables nonconventional techniques, allowing flexure optomechanical components to be built into the monolithic chassis so that the interferometer can be aligned despite the reduced physical footprint. The monolithic chassis increases robustness and gives a predictable thermal expansion. In addition, the use of a low thermal expansion material, such as Invar, further increases the thermal tolerance of the interferometer, increasing the practicality of the device. Overall, this study advances the practicality of multi-mode time-bin interferometers for free-space quantum applications, further enabling the deployment of quantum technologies to bring about new applications and fundamental research.
Single acceptor dopants in Si, along with dangling bonds, are enabling technologies for atomic-scale charge- and spin-based devices [1]. Additionally, recent advances in hydrogen lithography have enabled the patterning of quantum dot based circuit elements with atomic precision [2]. We engineered a single acceptor coupled to a dangling bond wire on highly doped p-type H-Si(100) and characterized its electronic properties with scanning tunneling spectroscopy. The coupled entity has an electronic structure that behaves as a conductive wire from which the charge state of the dopant can be accessed and has a complex dependence on the dangling bond wire length. In addition, dI/dV mapping reveals features reminiscent of charging rings that are centered over the dopant and overlap with the wire [3]. This overlap varies with electric field and its tunability may augment the functionality of dangling bond based quantum devices.
References:
[1] A. Laucht et al., "Roadmap on quantum nanotechnologies", Nanotechnology, vol. 32, no. 16, p. 162003, 2021. Available: 10.1088/1361-6528/abb333
[2] T. Huff et al., "Binary atomic silicon logic", Nature Electronics, vol. 1, no. 12, pp. 636-643, 2018. Available: 10.1038/s41928-018-0180-3
[3] N. Turek, S. Godey, D. Deresmes and T. Mélin, "Ring charging of a single silicon dangling bond imaged by noncontact atomic force microscopy", Physical Review B, vol. 102, no. 23, 2020. Available: 10.1103/physrevb.102.235433
Quantum dots embedded in photonic nanowires are highly efficient single photon generators. Integrating such sources on-chip offers enhanced stability and miniaturization, both of which are important in many applications involving quantum information processing. We demonstrate the efficient coupling of quantum light generated in a III-V photonic nanowire to a silicon-based photonic integrated circuit. This hybrid quantum photonic integrated circuit is assembled through a “pick & place” approach using a nanomanipulator in a scanning electron microscope, where the nanowires are transferred individually from the growth substrate and carefully placed onto the photonic integrated circuit. The emission properties of on-chip nanowire QDs were measured using an all-fibre pump and collection technique. We demonstrate detected count rates of a million counts per second with single photon purities higher than 95%, thus showing that using nanowires with embedded QDs coupled to on-chip photonic structures is a viable route for the fabrication of stable single photon sources.
Optically active defects in solids---colour centres---are one of the most promising platforms for implementing quantum technologies. Their spin degrees of freedom serve as quantum memories that in some cases can operate at room temperature. They can be controlled with microwave fields and resonant optical excitation, but this control is hindered by the broadening of optical transitions from thermal phonons and by spectral diffusion. Furthermore, spin-qubit optical transitions are often outside the telecommunications wavelength band required for long-distance fiber optic transmission. Harnessing the coupling between mechanical degrees of freedom and spins has emerged as an alternative route for controlling spin qubits. However, connecting spin-mechanical interfaces to optical links to realize a spin-photon interface has remained a challenge. Here we demonstrate such an interface using a diamond optomechanical cavity that does not depend on optical transitions and can be applied to a wide range of spin qubits.
Our device consists of a diamond microdisk resonator studied in [Optica 3, 963-970 (2016)]. The microdisk is fabricated from optical-grade diamond that contains ensembles of NV centres. We use an optical mode at 1564 nm with a quality factor Qo = 150k. The mechanical mode which we use to couple to the NV spin state is a radial breathing mode at a frequency of around 2.1 GHz with a quality factor Qm = 4k. The device operates in the sideband-resolved regime, enabling optomechanical self-induced oscillations for sufficiently high optical input power of a blue-detuned laser. These oscillations can produce stresses of a few MPa, large enough to drive the electronic spins of NV centres. We use a standard diamond NV confocal microscope to initialize and read out the NV state. MW pulses transfer the population between the |-1⟩<->|0⟩ and |+1⟩<->|0⟩ states. We then apply the mechanical drive, which drives the |-1⟩<->|+1⟩ transition, for 0.7 us.
In our measurements, we observe a coinciding dip in the |+1⟩ population and a peak in the |-1⟩ population, verifying that the spins are being optomechanically driven. On calibrating this signal with the MW Rabi contrast and the background signal, we estimate a driving rate of 2π × 170 kHz and ~45% transfer of spin population between the |±1⟩ states. Feasible improvements in device geometry will increase the optomechanically induced driving rate by a few orders of magnitude, allowing for coherent control of NV spins using an optomechanical resonator.
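As a quick consistency check using only the figures quoted above (an illustrative estimate, not the authors' analysis), the optical linewidth is
\[
\kappa/2\pi = \frac{c/\lambda}{Q_o} \approx \frac{1.92\times10^{14}\ \mathrm{Hz}}{1.5\times10^{5}} \approx 1.3\ \mathrm{GHz} < \Omega_m/2\pi = 2.1\ \mathrm{GHz},
\]
consistent with operation in the sideband-resolved regime ($\kappa < \Omega_m$).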
Among solid state quantum emitter systems, semiconductor quantum dots are particularly attractive due to their high radiative quantum efficiencies [1], their strong optical coupling enabling fast [2] and arbitrary [3] qubit rotations, and their tunable emission in the range of standard telecommunication wavelengths. For applications such as quantum light sources and quantum nodes, it is essential to maximize the fidelity of the optical control process governing quantum state initialization and control. While the dephasing time tied to radiative relaxation is many orders of magnitude longer than control times achievable with subpicosecond laser pulses, resonant coupling of the electron-hole pair to phonons in the solid-state environment can still contribute to decoherence during the optical control process [2]. This decoherence channel is often referred to as excitation-induced dephasing since the impact on fidelity is dictated in part by the characteristics of the driving laser field. Here we report the demonstration of suppression of phonon-mediated decoherence through the application of frequency-swept laser pulses via adiabatic rapid passage in the strong-driving regime [4]. We also investigate the dependence of the threshold for decoherence suppression on the size and shape of the quantum dot. Our findings indicate that the use of telecom-compatible quantum dots leads to decoherence suppression at pulse areas comparable to a single Rabi oscillation period.
[1] Atature et al. Nat. Rev. Mater. 3, 38 (2018).
[2] Mathew et al. Phys. Rev. B 90, 035316 (2014).
[3] Mathew et al. Phys. Rev. B 84, 205322 (2011); Gamouras et al. J. Appl. Phys. 112, 014313 (2012); Gamouras et al. Nano Letters 13, 4666 (2013); Mathew et al. Phys. Rev. B 92, 155306 (2015).
[4] Ramachandran et al. Opt. Lett. 45, 6498 (2020).
In addition to being extremely sensitive sensors, nitrogen-vacancy (NV) centers in diamond are an ideal showcase of quantum technologies as they work in ambient conditions. Experiments with NV centers usually involve a bulky optical system, together with a wide assortment of signal generators and samplers that are challenging to synchronize. Here, we perform quantum control experiments on NV centers in a form that is much more accessible to a broader community. We achieve this by (i) miniaturizing the hardware components into a magnetometer the size of a Rubik’s cube and (ii) leveraging a commercial platform for control and readout. We interfaced all of the quantum magnetometer’s signal generation and readout components with a modular control platform, thus allowing it to fully operate the sensor. We will present room-temperature results including optically detected magnetic resonance, and Rabi and Ramsey oscillations of an ensemble of NV centers. In addition to democratizing complex experiments in quantum physics, our work paves the way for efficient prototyping of quantum sensors with commercial control solutions.
Les centres azote-lacune (NV) dans le diamant sont une vitrine idéale des technologies quantiques car ils fonctionnent dans des conditions ambiantes. Les expériences avec les centres NV impliquent généralement un système optique volumineux, ainsi qu'un large assortiment de générateurs de signaux et d'échantillonneurs, qu'il est difficile de synchroniser ensemble. Ici, nous réalisons des expériences de contrôle quantique sur des centres NV qui sont beaucoup plus accessibles à une plus large communauté. Nous y parvenons (i) en miniaturisant l’électronique dans un magnétomètre de la taille d'un cube Rubik et (ii) en exploitant une plateforme commerciale pour le contrôle et la lecture. Nous avons interfacé tous les composants de génération de signaux et de lecture du magnétomètre quantique avec une plateforme de contrôle modulaire, lui permettant ainsi d’opérer le capteur quantique. Nous présenterons les résultats obtenus à température ambiante, notamment la résonance magnétique détectée optiquement et les oscillations de Rabi et de Ramsey d'un ensemble de centres NV. En plus de démocratiser les expériences complexes en physique quantique, notre travail ouvre la voie à un prototypage efficace des senseurs quantiques avec des solutions de contrôle commerciales.
SBQuantum is building a Magnetic Intelligence Platform to extract additional information from magnetic fields. The platform uses nitrogen-vacancy diamond sensors to unlock the tensor information in the magnetic field before interpreting this data through a suite of proprietary algorithms for the detection and classification of magnetic anomalies. This presentation will walk through the history of SBQuantum which led to Magnetic Intelligence, and discuss potential applications of the platform solution as well as upcoming challenges to its deployment.
Ultra-weak light, known as biophotons, is emitted spontaneously by living organisms, but its origin, wavelength and underlying mechanisms have not yet been clearly identified, although energy metabolic processes seem to be involved. Moreover, neurons can emit photons, and there is strong experimental and theoretical evidence that myelinated axons can serve as photonic waveguides. Thus, it has been conjectured that biophotons are involved in neural communication. The main challenge of imaging biophotons is their low intensity, which requires detectors with high sensitivity and very low noise levels.
To enable the detection of ultra-weak biophoton signals, we use superconducting nanowire single-photon detectors (SNSPDs) where spectral filtering of blackbody radiation – achieved by spooling the input fibres – yields extremely low dark counts (on the order of 0.5 counts per minute). For our study, we have chosen tadpole and frog Xenopus brains as our models, since these conserve most of the essential cellular and molecular mechanisms of mammalian brains and are easy to manipulate.
In my talk, I will present our setup and results from our recent measurements of biophoton emission. I will also introduce a range of planned measurements, e.g. spectral and temporal characterization and the application of neural activity stimulators/inhibitors, and discuss some improvements to the experimental apparatus such as implementing fibre coupling to an array of SNSPDs, EMCCD cameras, and using different biological samples. These measurements are all aimed at our long-term goals of understanding how biophotons are generated in neurological cells and determining whether biophotons play a role in communication in the nervous system (beyond the current paradigm of electro-chemical signalling processes). This could open the door to the fascinating fundamental question of whether quantum phenomena, such as entanglement, play a role in higher-level functions of the brain, e.g., consciousness.
Quantum confinement and manipulation of charge carriers are critical for achieving practical devices for various quantum technologies such as quantum sensing. Atomically thin transition metal dichalcogenides (TMDCs) have attractive properties such as spin-valley locking, large spin-orbit coupling and high confinement energies, which provide a promising platform for novel quantum technologies. In this talk, we present the design and fabrication of electrostatically gated quantum structures based on fully encapsulated monolayer tungsten diselenide (WSe2) aimed at probing and measuring the properties of the confined single- and few-hole states in these structures. Furthermore, we demonstrate that local control gates successfully pinch off the current across the device at gate voltages consistent with their lithographic widths. Finally, we discuss the origins of the observed mesoscopic transport features related to the quantum dots through the WSe2 channel.
In microscopy, the imaging of light-sensitive materials has been a persistent problem, as the sample being studied may be altered or damaged by the illumination itself. Naturally, to avoid over-illuminating the sample, one can reduce the intensity of the classical light source; however, reducing the source intensity comes with a trade-off in noise and image quality. In recent years, it has been shown that using quantum illumination as a source for imaging schemes significantly reduces photon illumination of the sample while maintaining image quality. In fact, in our previous work [1] we combined two quantum imaging and detection schemes in a technique coined “interaction-free ghost-imaging” and achieved high-contrast images with low photon numbers.
A further limitation of all direct imaging schemes, whether they be quantum or classical, is the so-called diffraction limit. When measuring intensity directly, the maximal resolution is dictated by the size of optical apertures in the imaging system. Recent results [2] [3] have shown that performing phase-sensitive measurements, as opposed to intensity measurements, could increase resolution by several orders of magnitude.
We propose an experiment implementing the super-resolution technique in a quantum ghost-imaging scheme. We aim to show that the added benefits of low photon counts along with increased resolution show promise for imaging small light-sensitive objects such as biological cells, and further challenge the notion of how many photons are needed to form a visible image.
[1] Zhang, Y., Sit, A., Bouchard, F., Larocque, H., Grenapin, F., Cohen, E., Elitzur, A., Harden, J., Boyd, R., Karimi, E., “Interaction-free ghost-imaging of structured objects”, Optics Express 27, 2212-2224 (2019).
[2] Tsang, M., Nair, R., Lu X, “Quantum Theory of Superresolution for two Incoherent Optical Point Sources”, Physical Review X 6, 031033 (2016).
[3] Tham, W., Ferretti, H., Steinberg, A., “Beating Rayleigh’s Curse by Imaging Using Phase”, Physical Review Letters 118, 070801 (2017).
Non-classical light sources are an important tool for many quantum information processing applications such as quantum key distribution and linear optical quantum computing. Sources based on semiconductor quantum dots offer close to ideal performance in terms of efficiency and single photon purity. However, emission rates are limited by the radiative lifetime of the excitonic complexes. This limitation can be overcome by multiplexing independent quantum dot emitters. Here we propose an approach to deterministically integrate multiple single photon emitters within a single photonic structure based on bottom-up grown nanowires. We use selective-area vapour-liquid-solid epitaxy to incorporate five energy-tuned quantum dots in a single nanowire photonic waveguide, all of which are optimally coupled to the same optical mode. Each dot acts as an independent source of high purity single photons and the total emission rate is found to scale linearly with the number of embedded emitters. This result is an important step towards producing wavelength-multiplexed single photon sources where the emission rate is limited by the number of incorporated emitters.
As science probes ever more extreme facets of the universe, the role of nuclear theory in confronting fundamental questions in nature continues to deepen. Long considered a phenomenological field, breakthroughs in our understanding of nuclear and electroweak forces in nuclei are rapidly transforming modern nuclear theory into a true first-principles, or ab initio, discipline.
In particular, this allows us to attack some of the most exciting questions in physics beyond the standard model, such as the nature of dark matter and the nature of neutrino masses through a hypothetical process called neutrinoless double beta decay. We first address the $g_A$ quenching puzzle, which has challenged the field for over 50 years, then discuss rapid advances which now allow for converged calculations of neutrinoless double beta decay nuclear matrix elements for all major players in ongoing searches: $^{76}$Ge, $^{130}$Te, and $^{136}$Xe.
The discovery of the lepton-number-violating neutrinoless double-beta decay process will prove that neutrinos are Majorana fermions. The Large Enriched Germanium Experiment for Neutrinoless double-beta Decay (LEGEND) project will search for this decay in $^{76}$Ge. In its first phase — LEGEND-200 — 200 kg of $^{76}$Ge-enriched high-purity germanium detectors will be deployed in a liquid-argon cryostat. It is under construction at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. The first phase has a background goal of $< 0.6$ counts/(FWHM t y), which yields a $3\sigma$ half-life discovery sensitivity beyond $10^{27}$ years. The second phase — LEGEND-1000 — will comprise 1000 kg of enriched germanium detectors. It will be sited deep underground with SNOLAB as the preferred host. LEGEND-1000 will have a discovery sensitivity beyond $10^{28}$ years. In this talk, I will give an overview of the LEGEND project.
I will review the present and near-term future prospects for new cosmology results with 21cm probes. This is a technology-driven observational field and I will describe experimental challenges and enabling technology in parallel with the science.
Line Intensity Mapping has emerged as a powerful tool to probe the large-scale structure across a wide range of redshift, with the potential to shed light on dark energy at low redshift and the cosmic dawn and reionization process at high redshift. Multiple spectral lines, including the redshifted 21cm, CO, [CII], H-alpha, and Lyman-alpha emissions, are promising tracers in the intensity mapping regime, with several experiments on-going or in the planning. I will discuss results from current pilot programs, and prospects for the upcoming TIME experiment and the SPHEREx mission. I will illustrate how the use of cross-correlation between multiple line intensity maps will enable unique and insightful measurements, revealing for example the tomography of reionization and cosmological probes in the high redshift Universe.
Coming soon!
Neutrinos present a portal into understanding some of the most significant puzzles of modern physics, even as the nature of the neutrino is still mysterious. SNO+ is well positioned to examine some of those puzzles. Located 2 km underground in the Vale Creighton mine in Sudbury at the international facility SNOLAB, SNO+ is the largest liquid scintillator neutrino detector currently in operation. The depth of the experiment makes further measurements of solar neutrinos possible, while its location enables reactor neutrino measurements. The crowning measurement of the experiment is the search for neutrinoless double beta decay, which will probe the mass and nature of the neutrino itself. Throughout the filling period, data have been collected that are being used to evaluate the performance of the detector and to make some initial measurements of solar and reactor neutrino physics. Some of those first results will be presented, as well as updated results from the previously completed water phase. This presentation will also introduce the physics program of the experiment and give an update on the status of the experiment.
nEXO is a proposed next generation neutrinoless double beta decay experiment. The detector is a single-phase time projection chamber filled with 5 tonnes of liquid xenon enriched in $^{136}$Xe, designed for a half-life sensitivity of ~$10^{28}$ yr. Events in the detector will result in both ionization and scintillation signals, read out by separate electronic systems. Scaling up from the successful 200 kg EXO-200 to the 5 tonne nEXO detector will significantly increase the source mass as well as improve background discrimination through a monolithic detector design. The detector design and research progress on different components will be presented.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino oscillation experiment. DUNE’s main goal is to provide unprecedented sensitivity in the search for neutrino CP violation, to determine the neutrino mass hierarchy, and to make precision measurements of neutrino mixing parameters. DUNE will be sensitive to low-energy neutrinos coming from supernova bursts, bringing insight for both particle physicists and cosmologists. DUNE’s ambitious physics program also includes searches for proton decay and non-standard neutrino interactions. The experiment will utilize a new broadband high-intensity neutrino beam and a suite of Near Detectors at Fermilab, along with Far Detectors situated 1300 km from Fermilab at the Sanford Underground Research Facility. This presentation reviews DUNE’s extensive physics program and experimental design, as well as recent progress. The ongoing and future activities of the recent Canadian effort will be presented, with an emphasis on how researchers and students can contribute.
T2K and Super-Kamiokande (Super-K) in Japan represent the current generation of successful campaigns to understand the properties of neutrino mixing, using detectors whose physics reach also extends to studies of astrophysical neutrinos and searches for new physics through processes such as nucleon decay or dark matter annihilation. T2K utilizes Super-K as the far detector in a long-baseline neutrino experiment to study oscillations with accelerator-produced muon neutrino or antineutrino beams. This resulted in the discovery of the $\nu_\mu$ to $\nu_e$ oscillation channel and subsequent hints of CP violation in neutrino oscillations.
Hyper-Kamiokande (Hyper-K) is a next-generation experiment informed by the success of T2K and Super-K. It utilizes a water Cherenkov far detector, 8 times larger than Super-K, whose site construction is underway, and will benefit from a beam upgraded to 2.5 times the intensity of the T2K beam. An Intermediate (distance) Water Cherenkov Detector (IWCD) will help mitigate systematic uncertainties to a level commensurate with this unprecedented statistical precision, affording significant discovery potential for leptonic CP violation. In this talk, I will describe the status of the T2K, Super-K, and Hyper-K projects, and highlight planned Canadian contributions to the water Cherenkov detectors, including new photosensors, new methods of calibration, deep learning event reconstruction, and a prototype water Cherenkov test beam experiment (WCTE) at CERN.
In this talk, I will describe my efforts to understand the nature of the mysterious dark matter. I provide an overview of the general problem and then describe my current approach to it, which is to characterize the behavior of a proposed dark matter particle, the axion. I will give some insight into how I am using a range of tools -- model building, computation, and high energy astrophysics -- to get at the basic question of "what is the statistical mechanics of axion dark matter?"
I will discuss work that shows that the self-interaction should not be ignored and that the sign of the interaction makes a significant difference in the evolution of the system, both for QCD axions and fuzzy dark matter.
Dark matter could be a "thermal-ish" relic of freeze-in, where the dark matter is produced by extremely feeble interactions with Standard Model particles dominantly at low temperatures. In this talk, I will discuss how sub-MeV dark matter can be made through freeze-in, accounting for a dominant channel where the dark matter gets produced by the decay of plasmons (photons that have an in-medium mass in the primordial plasma of our Universe). I will also explain how the resulting non-thermal dark matter velocity distribution can impact cosmological observables.
The identity of dark matter remains a mystery, despite decades of theorizing and detection efforts. This includes the mechanism for its primordial production, its interactions with itself and with visible matter, and the very nature of dark matter, which could range from a Bose-Einstein condensate, to black holes, to a traditional particle. In this talk I will discuss new directions for dark matter theory and how to experimentally test these ideas. I will focus on two examples, one wherein short-range self-interactions of dark matter lead to the formation of neutron-star-like cores in dark matter halos, and another wherein dark matter has a spin quantum number larger than any particle in the Standard Model, being comprised of particle excitations of a so-called higher-spin field.
A worldwide search is underway for elastic scattering between massive dark matter and nuclei in underground laboratories. Asymmetric dark matter particles with masses above a few GeV could easily be captured in stars via the same process. It has long been known that this can lead to observational consequences, as the weakly-interacting particles act as an efficient heat conductor. This can affect neutrino fluxes, astero/helioseismology, and even change the main sequence lifetime of stars. Modelling this process is not straightforward, and typically makes use of approximations at the limit of their validity. I will present recent results based on the first full set of Monte-Carlo simulations of this process since the 1980s, and compare the standard analytic predictions with these more accurate numerical results in order to tease out what we know and don't know about dark matter heat conduction in stars.
Introduction: Three-dimensional transrectal ultrasound (3D TRUS) imaging is utilized in prostate cancer diagnosis and treatment, necessitating manual prostate segmentation which is time-consuming and difficult. The purpose of this work was to develop a generalizable and efficient deep learning approach for automatic prostate segmentation in 3D TRUS, trained using a diverse dataset of clinical images. Large and diverse datasets are rare in medical imaging, so this work also examines the performance of our method when trained with less diverse and smaller datasets.
Methods: Our training dataset consisted of 206 3D TRUS images acquired in biopsy & brachytherapy procedures using two acquisition methods (end-fire (EF) and side-fire (SF)), resliced at random planes resulting in 6,773 2D images used to train a 2D network. Our proposed 3D segmentation algorithm involved deep-learning prediction on 2D slices sampled radially around the approximate centre of the prostate, followed by reconstruction into a 3D surface. A modified U-Net and U-Net++ architecture were implemented for deep learning prediction, as the latter has been shown to perform well with small datasets. Our training dataset was split to train separate EF and SF networks. These split datasets were then reduced in size to 1000, 500, 250, and 100 2D images. Manual contours provided the ground truth for training and testing, with the testing set consisting of 20 EF and 20 SF 3D TRUS images unseen during training.
Results: For the full training set, the U-Net and U-Net++ performed with an equivalent median[Q1,Q3] Dice similarity coefficient (DSC) of 94.8[93.2,95.5]% and 94.7[92.6,95.4]%, respectively, higher than a 3D V-Net and state-of-the-art algorithms in the literature. When trained only on EF or SF images, the U-Net++ demonstrated equivalent performance to the network trained with the full dataset. When trained on EF and SF datasets of 1000, 500, 250, and 100 images, the U-Net++ performed with DSC of 93.7%, 93.9%, 93.2%, 90.1% [EF] and 90.3%, 90.3%, 89.2%, 81.0% [SF], respectively.
Conclusions: Our proposed algorithm provided fast (<1 s) and accurate 3D segmentations across clinically diverse 3D TRUS images, demonstrating generalizability, while strong performance with smaller datasets highlighted the efficiency of our approach, offering the potential for widespread use even when data are scarce.
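For readers unfamiliar with the metric reported in the results above, the sketch below shows a minimal Dice similarity coefficient (DSC) computation for binary segmentation masks; the arrays are synthetic placeholders, not the authors' data or pipeline.

```python
# Minimal sketch (not the authors' pipeline): Dice similarity coefficient for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Illustrative 3D masks (placeholders for reconstructed TRUS segmentations)
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.5
pred = truth.copy()
pred[:8] = ~pred[:8]                 # perturb a slab to mimic segmentation error
print(f"DSC = {100 * dice_coefficient(pred, truth):.1f}%")
```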
Dosimetry is an important part of radiation therapy, ensuring that the prescribed treatment is delivered to the patient and avoiding accidental overexposure of adjacent healthy tissue. This includes characterizing proton beams for proton therapy. However, patients in proton therapy facilities are typically also exposed to secondary neutron fields generated in all materials intercepted by the delivered proton beam. As the biological dose from neutrons is larger than that from protons, these neutron fields can, depending on the proton beam delivery, account for several percent of the overall dose to the patient outside the treated organ. While dosimeters measure the physical deposited dose, they typically do not give information on which type of particle the dose comes from. Consequently, the biological effect is not well determined if a significant mixed field is present, and the dosimeter cannot completely confirm that the treatment plan is correctly implemented.
As ionizing radiation causes light emission in optical fibres combined with scintillators (Radiation Induced Luminescence – RIL), fibre detectors can be used as dosimeters for radiation therapy with real-time response. Dosimeters constructed from fibres are extremely compact, providing superior spatial resolution and even the potential for in-vivo dosimetry. Here, we present a combination of several scintillator/fibre detectors that have different sensitivities to protons and neutrons. We tested fibre detectors made with Gd2O2S:Eu, Y2O3:Eu, Gd2O2S:Tb, Y2O2S:Eu, YVO4, IG260 and a pure PMMA fibre with 0-400 MeV neutrons and 223 MeV, 63 MeV, 36 MeV and 9 MeV protons. Such a fibre detector combination has the potential not just to measure the physical dose but also to estimate the biological dose.
Radiotherapy and chemotherapy are the gold standard for treating patients with cancer in the clinic but, despite modern advances, are limited by normal tissue toxicity. The use of nanomaterials, such as gold nanoparticles (GNPs), to improve radiosensitivity and act as drug delivery systems can mitigate toxicity while increasing deposited tumor dose. To expedite a quicker clinical translation, three-dimensional (3D) tumor spheroid models that can better approximate the tumor environment compared to a two-dimensional (2D) monolayer model have been used. We tested the uptake of 15 nm GNPs and 50 nm GNPs on a monolayer and on spheroids of two cancer cell lines, CAL-27 and HeLa, to evaluate the differences between a 2D and 3D model in similar conditions. The anticancer drug docetaxel (DTX), which can act as a radiosensitizer, was also utilized, informing the future potential of GNP-mediated combined therapeutics. The radiosensitization effects on monolayers vs spheroids with the different-sized GNPs were also elucidated. In the 2D monolayer model, the addition of DTX induced a small, non-significant increase of GNP uptake of approximately 20%, while in the 3D spheroid model, DTX increased uptake by between 50% and 200%, with CAL-27 having a much larger increase relative to HeLa. Further, the depth of penetration of 15 nm GNPs over 50 nm GNPs increased for both cancer spheroids. Measurement of the responses to radiation with GNPs yielded a large radiosensitization effect, with more of the cells on the periphery of the spheroid being affected. These results highlight the necessity to optimize GNP treatment conditions in a more realistic tumor-like environment. A 3D spheroid model can capture important details, such as different packing densities from different cancer cell lines and the introduction of an extracellular matrix, which are absent from a simple 2D monolayer model.
Effective local therapy is needed to avoid local progression of the tumor, which may further decrease the development of systemic metastases and increase the possibility for resection. Radiation therapy (RT) is frequently used to treat the tumor locally. One of the major issues in RT for treating cancer is the close proximity of adjacent organs at risk, resulting in treatment doses being limited by significant tissue toxicities, preventing the dose escalation necessary to guarantee local control. One of the currently adopted approaches to overcome this challenge is to add radiosensitizers to the current RT protocol to unlock the full potential of RT. In this talk, I will focus on gold nanoparticles (GNPs), docetaxel, and cisplatin as radiosensitizers. About half of cancer patients receive radiotherapy, and all of these patients would benefit from this type of novel approach.
Diacetylene molecules can self-assemble into crystals, with three-dimensional packing and separation between molecules dictated by the chemical groups on either side of the carbon-carbon triple bonds. When exposed to ionizing radiation, like photon, electron and proton beams used in radiotherapy applications, some diacetylene crystals undergo a radical solid-state polymerization reaction, resulting in a long polymer chain with alternating triple- and double- carbon bonds. The π-electrons along the conjugated chain undergo transitions between energy states, absorbing light in the UV-VIS in the process. This radiochromic material becomes deeply coloured, where the absorbance, or optical density, in the visible range of the spectrum is a function of the absorbed ionizing radiation dose. Thus, radiochromic materials have been used for several decades as two-dimensional films for quantitative measure of dose and have been more recently investigated for real-time in vivo dosimetry using optical fibres. Packing of diacetylene monomers within the crystal affects not only probability of polymer chain initiation, but also the rate at which polymerization takes place. Understanding the mechanism for this self-assembly and the effect of different side groups on behaviours relevant to dosimetric applications is of great interest. This talk will first discuss the implications of side group selection on usability of radiochromic material in real-time dosimetry, as illustrated in commercially available films to date. Secondly, we will explore challenges with current commercially available radiochromic materials and finally will consider how they can be improved to meet the required criteria for real-time use in patient dosimetry.
Objective: The dose distribution index (DDI) is a dose-volume parameter used in treatment planning evaluation. The DDI provides, in a single parameter, dosimetric estimates of target coverage and of the sparing of all organs-at-risk and the remaining healthy tissue in the treated organ. In this study, the DDI value was predicted by machine learning models using different algorithms.
Methods: The DDI was first calculated using its original formula by definition. Machine learning models were then trained to predict the DDI using the same dataset of 50 prostate volumetric modulated arc therapy (VMAT) plans from the Grand River Regional Cancer Centre, Kitchener, Ontario. Machine learning algorithms such as linear regression, tree regression, support vector machine (SVM) and Gaussian process regression (GPR) were used to predict the DDI value for each prostate VMAT treatment plan. To compare the performance of the machine learning algorithms, the root mean square error (RMSE), prediction speed and training time were determined.
Results: Comparing the RMSE values among all algorithms, only the DDI predicted by the medium and coarse tree regression algorithms showed relatively large RMSE values, in the range of 0.021 – 0.034. Other algorithms such as SVM and GPR all performed very well in predicting the DDI, with smaller RMSE values ranging from 0.0038 to 0.0193. Considering other factors such as prediction speed and training time, the squared-exponential GPR algorithm had the smallest RMSE value of 0.0038, a relatively high prediction speed of 4,100 observations per second and a short training time of 0.18 seconds.
Conclusion: It is concluded that the family of GPR algorithms performed best in predicting the dose distribution index. It is expected that the accuracy of DDI prediction will increase as more plan data are used to train such algorithms.
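As an illustration of the kind of model named above, the following is a minimal squared-exponential Gaussian process regression sketch in scikit-learn; the plan features, synthetic DDI values and train/test split are placeholder assumptions, not the study's data or code.

```python
# Minimal sketch (not the study's code): squared-exponential (RBF) Gaussian process
# regression for a dose distribution index (DDI), using scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))          # placeholder: 50 plans, 4 dose-volume features
y = 0.80 + 0.05 * X[:, 0] - 0.02 * X[:, 1] + 0.005 * rng.normal(size=50)  # synthetic DDI

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Squared-exponential kernel plus a small white-noise term
gpr = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel(1e-4),
                               normalize_y=True)
gpr.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, gpr.predict(X_test)))
print(f"RMSE on held-out plans: {rmse:.4f}")
```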
Objective: We built RT Bot, a chatbot with characterization for patients, the general public and radiation staff, to provide educational information regarding radiotherapy using artificial intelligence. The Bot was personalized by machine learning to detect the user’s temperament and intent in order to provide the best guidance to the user with a human-like response.
Methods: The Bot was developed using the IBM Watson Assistant functionalities on the IBM cloud. A dataset of information was prepared for the different user groups, including descriptions of all processes in radiotherapy, promotion of cancer screening, especially for high-fatality and common cancer sites, and basic cancer preventive measures such as how to maintain a healthy life with suitable diet and exercise. To ensure correct information can be understood and digested by users of different backgrounds (patient, general public and radiation staff), the Bot character was personalized through IBM Watson Assistant functionalities such as natural language understanding, entities and slots.
Results: The Bot can be operated in a front-end window on any Internet-of-Things device such as a smartphone, tablet, laptop or desktop. In the beginning, the Bot greets the user with an introduction. The user can then type in any text concerning their enquiry. The Bot usually begins by answering simple questions regarding radiotherapy and providing related information. If the Bot cannot understand the user’s wording, it will provide guidance to help the user.
Conclusion: A chatbot was built for interdisciplinary educational purposes for patients, the general public and radiation staff using artificial intelligence and machine learning. The Bot may be used by cancer centres or by organizations such as high schools, community centres, volunteer groups and charities that promote cancer preventive measures and screening for a healthy life, or that educate users about cancer and radiotherapy.
Purpose: A quantitative measure of delivered ionizing radiation is recommended for quality assurance and quality control purposes for patients undergoing radiotherapy treatments. Current dosimeters are not well suited for direct measurements due to atomic composition and size limitations. We are developing a fiber optic probe dosimeter based on radiochromic material for in vivo dosimetry. By measuring the change in optical absorption of the radiochromic sensor, we can quantify the absorbed dose of ionizing radiation delivered in real time. The radiation-sensitive material is composed of lithium 10,12-pentacosadiynoate (LiPCDA), which, upon exposure, polymerizes, resulting in an increased optical density. We observed that monomers of LiPCDA can have two distinct crystalline morphologies, with aspect ratios of 10:1 producing hair-like structures and 2:1 resulting in platelets, with polymerized absorbance peaks typically centred at 635 nm and 674 nm, respectively. We aim to characterize and compare the dose-response of the two crystal morphologies achieved through desiccation and Li+ concentration.
Method: The hair-like LiPCDA in commercial film was desiccated, producing crystals with an absorbance peak at 674 nm. Both materials were exposed to 50-3000 cGy using a clinical linear accelerator with a 6 MV X-ray beam; samples of varying Li+ concentration were exposed to 200-400 cGy. Absorbance spectra were collected for all samples, which were then imaged with a scanning electron microscope to compare their crystal morphology.
Results: Differences in crystal morphology were not observed when hair-like LiPCDA was desiccated. However, when the molar ratio of Li+ to PCDA was varied to produce crystals with either a 635 nm or a 677 nm absorbance peak, distinguishable crystal morphologies were observed. The platelet form is ~3x less sensitive to dose but has a more extensive dynamic range relative to the hair-like form.
Conclusion: Crystals can be preferentially grown and exhibit differing dose-response. The macrostructure effect on radiation sensitivity in the context of radiotherapy will be explored.
In less than five years, the field of gravitational wave astronomy has grown from a groundbreaking first discovery to revealing new populations of stellar remnants through distant cosmic collisions. I'll summarize recent results from LIGO-Virgo and their wide-reaching implications, give an overview of the instrumentation of the current Advanced LIGO detectors, and discuss prospects for the future of multi-messenger astrophysics with gravitational wave detectors on Earth and in space.
Very large neutrino telescopes are multipurpose instruments that can observe tens of thousands of neutrino interactions at energies well beyond those of man-made accelerators. This has made them unique experiments for studying neutrino properties and probing what might lie beyond the Standard Model. Exotic neutrino oscillations, new interactions and new force mediators are among these topics. In this talk I will present the newest neutrino oscillation results from IceCube, the future of this experiment and an exciting new opportunity for deploying a neutrino telescope in Canada: the Pacific Ocean Neutrino Experiment, P-ONE.
New developments toward creating a working two-way communication between brains and machines offer exciting possibilities, yet are often limited simply by the basic bio-compatibility of the materials employed in their construction. Traditional electrical engineering semiconductors and metals are often quite poor choices for use in a real living wet biological environment, and much recent effort has instead been devoted to developing soft, squishy bio-polymer interface materials that communicate via photons and not electrons. Inspired by the molecular mechanisms in our eyes that enable vision, photo-reversible azo visible dyes are incorporated into bio-polymers such as silk fibroin to provide a stable dynamic transduction layer between live neural cells and optical fibres. Sensing neural activity locally and selectively is achieved spectroscopically via subtle optical changes to the thin dye nano-layers at the fibre ends. Signalling back to a brain can be achieved by simple mechano-transduction via photo-mechanical layers, photo-chemical release of neurotransmitters from embedded artificial vesicles, or via light-reversible changes to surface energy and chemistry. Characterization of the structure and dynamics of these soft active nano-neuro-layers in situ is a key challenge, and results will be detailed from surface energy analysis, ‘underwater’ Visible Ellipsometry, and Neutron Reflectometry techniques we have developed at McGill and at Chalk River Laboratories.
Bilateral symmetry in animals commonly leads to a duality in the peripheral sensory apparatus. For example, two eyes, as commonly found in most vertebrates, provide a mechanism to encode information such that subsequent neural processing can create stereoscopic perception. Further, two ears lateralized to the sides of the head are important for sound source localization, a key ecological consideration. Recent evidence expands upon this duality and points to a novel biophysical principle, that of synchrony, doubly at play in the auditory periphery. By synchrony, we mean dynamics associated with weakly-coupled self-sustained (i.e., active) oscillators. This talk will discuss two facets by which this arises in the Anolis lizard. First, within a given inner ear, evidence suggests that the sensory cells acting as mechano-electrical transducers metabolically use energy to behave as limit-cycle oscillators. Further, these "hair cells" can couple together to form groups (or "clusters") that synchronize, effectively allowing them to greatly increase their sensitivity to low-level sounds. Second, by virtue of direct coupling between the tympana (i.e., "eardrums") via an interaural canal, the two ears can synchronize, possibly thereby allowing improvements in the localization of low-level sounds. Thus in essence, each eardrum is effectively and meaningfully driven from both sides, not just via sound fields external to the head. Taken together, these considerations illustrate a remarkable example by which collective active behavior can emerge mesoscopically to improve the ability of peripheral sensory systems to encode incident information.
When you look at a picture, neurons are excited within your eyes and your brain. Those neurons' activation patterns reflect your perception of the stimulus, and can be measured in neurophysiology experiments. Importantly, these neuronal responses are profoundly shaped by visual experience. In this presentation, I will discuss the nature of the brain's visual representations, and the mechanisms through which those representations are learned and refined by visual experience.
In the weakly electric fish Eigenmannia (glass knifefish), the high-frequency (200-600 Hz) electric organ discharge (EOD) is driven by high-frequency cholinergic synaptic input onto the electrocytes at their electroplaques. Assuming periodic release of ACh into the cylindrical synaptic gap, we numerically solve a one-dimensional reaction-diffusion model at 200 Hz and 500 Hz. The model includes the diffusion of ACh and its interactions with acetylcholinesterase (AChE) in the gap and with AChRs at the postsynaptic membrane. At 500 Hz a higher AChE/ACh ratio is needed to remove ACh from the cleft between consecutive ACh releases. Only a small fraction of the ACh molecules reaches the AChRs, and there are residual amounts of ACh molecules from the preceding release. Previous computational studies showed that the persistently present ACh should not impede high-frequency electrocyte firing, provided the cholinergic current is subthreshold for triggering firing. Our results suggest that the cholinergic current from the carry-over (persistent) activation of AChRs exceeding the firing threshold sets the upper limit for EOD frequency in Eigenmannia individuals, which is observed around 600 Hz.
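A minimal sketch of the type of calculation described above: an explicit finite-difference solution of a one-dimensional diffusion equation for ACh with first-order hydrolysis by AChE and periodic release at the presynaptic face. All parameter values are illustrative placeholders, receptor binding is omitted, and nothing here reproduces the authors' actual model.

```python
# Minimal sketch (placeholder parameters, not the values used in this work):
# 1D reaction-diffusion of ACh across a synaptic cleft with first-order hydrolysis
# and periodic release at the presynaptic face.
import numpy as np

D      = 4e-6    # cm^2/s, ACh diffusion coefficient (placeholder)
k_hyd  = 1e3     # 1/s, effective AChE hydrolysis rate (placeholder)
L      = 1e-4    # cm, cleft width (assumed 1 micron, placeholder)
f_eod  = 500.0   # Hz, release frequency
pulse  = 1.0     # concentration added per release event (arbitrary units)

N  = 21
dx = L / (N - 1)
dt = 0.2 * dx**2 / D                 # satisfies the explicit-scheme stability limit
period = 1.0 / f_eod

c = np.zeros(N)                      # ACh concentration profile across the cleft
t, t_end, next_release = 0.0, 3 * period, 0.0
while t < t_end:
    if t >= next_release:            # periodic ACh release at the presynaptic face
        c[0] += pulse
        next_release += period
    lap = np.empty(N)                # Laplacian with no-flux (reflecting) boundaries
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0]  = 2 * (c[1]  - c[0])  / dx**2
    lap[-1] = 2 * (c[-2] - c[-1]) / dx**2
    c = c + dt * (D * lap - k_hyd * c)   # diffusion + first-order hydrolysis by AChE
    t += dt

print("ACh remaining at the postsynaptic face before the next release:", c[-1])
```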
I will review some important challenges for theoretical cosmology, focusing on the trans-Planckian problem for inflation and the anisotropy problem for matter bounce and ekpyrosis, and I will discuss some recent work exploring particular aspects of these problems.
I will argue why we need to remain objective about the physics of the early universe and explore different scenarios. In particular, I will present a cosmological bounce model based on Cuscuton gravity that does not have any ghosts or curvature instabilities. I will then discuss whether a Cuscuton bounce can provide an alternative to inflation for generating near scale-invariant scalar perturbations. While a single-field Cuscuton bounce generically produces a strongly blue power spectrum (for a variety of initial/boundary conditions), scale-invariant entropy modes can be generated in a spectator field kinetically coupled to the primary field. Furthermore, this solution has no singularity, nor does it require an ad hoc matching condition. Tensor modes (or gravitational waves) in the Cuscuton bounce are also stable but, as in most bounce models, the produced spectrum is strongly blue and unobservable.
Different theories of the very early universe that can explain our observations of the cosmic microwave background are presented. The current paradigm - inflationary cosmology - has received much attention, but it is not the only theoretically viable explanation; indeed, several alternative scenarios exist. This raises the question: how can we discriminate between the various theories, both from a theoretical and an observational point of view? A few pathways to answering this question are discussed in this talk.
Despite their weak interactions, neutrinos can carry stupendous amounts of information about the cosmos, thanks to their small masses and large abundance. The highest-energy neutrinos can tell us about the largest particle accelerators in the Universe, and can probe energy scales larger than those available at the LHC. I review the ability of future neutrino telescopes, including IceCube-Gen2 and the Pacific Ocean Neutrino Experiment (P-ONE), to determine the precise flavour composition and source of astrophysical neutrinos above 100 TeV, in light of improved measurements of neutrino properties by JUNO, DUNE and Hyper-Kamiokande. Finally, I will discuss the ability of future neutrino telescopes to search for new physics such as neutrino decay, dark matter and microscopic black holes.
Recent unprecedented developments in astronomical observations have established the era of multi-messenger astronomy. Weakly interacting neutrinos play a fundamental role in the evolution of supernovae, neutron star mergers, and accretion disks around black holes. The byproducts of neutrino reactions with ejected matter, as well as their direct detection, provide extra insight into the physics of these systems' interiors. The analysis of such signals together with other multi-messengers will shed light on our understanding of related phenomena such as the synthesis of heavy elements and the mechanism of stellar explosions. In this talk, I shall discuss the connection between neutrinos and compact objects in the Galaxy, as well as at cosmological scales.
The Scintillating Bubble Chamber (SBC) experiment is a novel multipurpose technique optimized for the detection of low-energy nuclear recoils. Two semi-identical detectors are under development by the collaboration, aimed at studying dark matter interactions (SBC-SNOLAB) and reactor CEvNS interactions (SBC-CEvNS). This talk will review the detector strategies and the feasibility studies of the weak mixing angle, neutrino magnetic moment, and a light Z′ gauge boson mediator for different SBC-CEvNS configurations. Finally, we will highlight how world-leading sensitivities are achieved with a one-year exposure for a 10 kg chamber at 3 m from a 1 MW$_{th}$ research reactor or a 100 kg chamber at 30 m from a 2000 MW$_{th}$ power reactor.
NEWS-G (New Experiments With Spheres-Gas) is a rare event search experiment using Spherical Proportional Counters (SPCs). Primarily designed for the direct detection of dark matter, this technology also has appealing features for Coherent Elastic Neutrino-Nucleus Scattering (CE$\nu$NS) studies. CE$\nu$NS is a process predicted by the standard model and can be used as a tool to probe new physics and other applications, such as monitoring neutrino flux from nuclear reactors or sterile neutrino search.
The NEWS-G collaboration is studying the feasibility of detecting CE$\nu$NS at a nuclear reactor using an SPC. I will discuss the efforts made by the NEWS-G collaboration to assess the feasibility of such an experiment.
An overview of the latest results and Run 3 prospects for Heavy Neutrino searches at ATLAS will be discussed.
Presentation of the results of the EDI Survey.
The multiple interactions of light with biomolecules, cells and tissues enable established and emerging techniques and technologies used in cancer research and patient care. These approaches range from simple, point-of-care devices to complex, multifunctional platforms combined with complementary non-optical methods, including nanotechnologies, robotics, bioinformatics and machine learning. This seminar will use specific examples from current research to illustrate the biophysical and biological principles underlying the emerging fields of “onco-photonics” or “photo-oncology”.
In our presentation, we will offer an overview of aperture-type scanning near-field optical microscopy (SNOM), a family of nano-optical imaging techniques derived from scanning probe microscopy that are capable of subwavelength resolution, as well as the development of three-dimensional (3D) SNOM methods undertaken by our group to locally image the distribution of the electromagnetic radiation in the proximity of nanoparticles and nano-objects. We will discuss a few applications in which we took advantage of 3D-SNOM to design specific optical nanosystems for light harvesting. Specific case studies that will be presented include the design of plasmonic thin-film solar cells enhanced by random arrays of copper nanoparticles, and the use of 3D-SNOM for characterizing evanescent waveguides self-assembled from copper nanoparticles on thin films of graphene. In the final part of our talk, we will present near-field scanning thermoreflectance imaging (NeSTRI), a new pump-probe technique invented in our group, in which an aperture-type SNOM is used to contactlessly determine the thermal conductivity of inhomogeneous thin films at the nanoscale. These examples well represent the versatility of SNOM imaging and its potential for designing an even wider family of nano-optical devices.
I will present my group’s recent efforts to combine atomic ensembles with nanophotonic structures. I will describe our experiment in which photons emitted by a quantum dot embedded in a semiconductor nanowire are sent into an ensemble of laser-cooled caesium atoms confined inside a hollow-core photonic-crystal fibre to realize photon storage and single-photon wavelength conversion. Additionally, I will report on our progress in developing new types of mesoscopic optical cavities based on dichroic mirrors realized with chiral photonic crystal slabs and metasurfaces. This research was undertaken in part thanks to funding from the Canada First Research Excellence Fund.
Blister formation occurs when a laser pulse is focused through a transparent substrate onto a polymer thin film coated on the substrate. A pocket of expanding vapor forms beneath the film, which pushes the film upward locally. This process has been used for Laser-Induced Forward Transfer (LIFT) of materials. Most studies of blister formation and blister-based LIFT use linear absorption of nanosecond or picosecond lasers to obtain large target areas (~100s of µm$^2$). Here we achieve nanoscale blisters for the first time, through nonlinear absorption of femtosecond pulses.
We spin-coated polyimide films to a thickness of 1.3 µm. We used a Ti:Sapphire laser producing pulses of 45-fs duration at a central wavelength of 800 nm. We mounted samples onto a 3D motion stage, and focused single pulses of various energies through the glass substrate onto the polymer-glass interface. Since polyimide is transparent to 800 nm light, we used tightly focused (NA ≥ 0.4) femtosecond pulses to induce nonlinear absorption. We characterized samples after blister fabrication using atomic force microscopy (AFM). At intensities above 10$^{13}$ W/cm$^2$, interactions of the pulse with both the film and substrate must be considered. We model these interactions and find that the resulting blister volume is proportional to the energy deposited in the film.
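As a rough, illustrative estimate (the pulse energies themselves are not quoted above, and a diffraction-limited Gaussian focus is assumed), the peak intensity can be gauged from
$$I_{\rm peak} \sim \frac{E_p}{\tau\,\pi w_0^2}, \qquad w_0 \approx \frac{0.61\,\lambda}{\rm NA},$$
so that for $\lambda = 800$ nm, NA $= 0.95$ ($w_0 \approx 0.5$ µm) and $\tau = 45$ fs, pulse energies of only a few nJ already reach the 10$^{13}$ W/cm$^2$ regime in which film and substrate interactions must both be modelled.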
The use of 0.95 NA focusing led to a minimum structure diameter of 700 nm, smaller than the wavelength of the laser pulse. In the future, we propose the use of thinner films and shorter wavelengths to reach further into the nanoscale. This technique can be used for direct micro- and nano-fabrication, and potentially to LIFT sensitive materials on the nanoscale. It is a possible alternative to lithography, laser milling, and laser-based additive machining that also leaves the surface composition unchanged, since the laser energy is deposited beneath the film.
Currently, plasmonic nanofibers doped with semiconductor quantum dots, organic dye quantum emitters (QEs), and metallic nanoparticles (MNPs) have attracted much attention due to their wide range of applications, including waveguides, light sources, and optical sensors. These nanofibers doped with QEs and MNPs have been fabricated using a variety of metals and emitters. For example, Hu et al. [1] have studied the fabrication of a plasmonic random fiber from gold MNPs and pyrromethene dye molecules (QEs) embedded in a liquid-core optical fiber. They found that a narrower and sharper photoluminescence (PL) spectrum can be more easily obtained when there is greater overlap between the plasmonic resonance of the gold MNPs and the dye molecules. Here we have developed a theory of photoluminescence for plasmonic nanofibers [2]. When probe light propagates inside the nanofiber, it induces surface plasmon polaritons (SPPs) and electric dipoles in the metallic nanoparticles. These dipoles interact with each other via the dipole-dipole interaction (DDI) [3]. The energy of photonic bound states in the presence of the SPP and DDI fields is then calculated. We have demonstrated that the number of bound states can be controlled by changing the strength of the SPP and DDI couplings. An expression for the photoluminescence has been calculated using the density matrix method in the presence of the DDI coupling. We found that the intensity of the PL spectrum depends on a quantity called the quantum efficiency, which depends on the radiative and non-radiative decay rates. We have found that the quantum efficiency is enhanced when the exciton energy is in resonance with the bound photon energy. Further, we predict that the PL intensity is also enhanced due to the DDI coupling. The enhancement of the PL spectrum can be used to fabricate plasmonic nanosensors.
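In the standard textbook definition assumed here, the quantum efficiency is the fraction of decays that are radiative,
$$\eta = \frac{\Gamma_{\rm rad}}{\Gamma_{\rm rad} + \Gamma_{\rm nonrad}},$$
so mechanisms that enhance the radiative rate relative to the non-radiative one, such as the resonance with the bound photon energy described above, enhance $\eta$ and hence the PL intensity.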
[1] Hu, Z et al., Gold nanoparticle-based plasmonic random fiber laser. J. Opt. 2015, 17, 35001.
[2] Singh, M. R.; Brassem, G.; Yastrebov, S. G. Optical quantum yield in plasmonic nanowaveguide. Annalen der Physik, in press, 2021.
[3] Singh, M. R. The effect of the dipole–dipole interaction in electromagnetically induced transparency in polaritonic band gap materials. Journal of Modern Optics 2007, 54, 1739.
Liquid-phase exfoliation (LPE) is a low-cost and scalable technique for producing a wide range of van der Waals nanomaterials that can be incorporated into existing laboratory sample preparation and industrial material production. Liquid-phase exfoliated nanomaterials have the potential to produce devices quickly and at low cost, with colloidal dispersions easily adaptable to existing production methods. Other methods of nanomaterial production, such as chemical vapour deposition (CVD) and mechanical exfoliation, can be costly, time-consuming and require complicated equipment to produce comparatively small-area devices. Such methods are excellent for laboratory-scale samples to examine physical properties and produce proof-of-concept devices. However, LPE can bridge the gap towards real-world applications that require faster, easier and more cost-effective production methods. In this work, we investigate the saturable absorption and Kerr nonlinearity of graphene fabricated by LPE and CVD.
Thin films of LPE graphene were produced on BK7 glass substrates, and compared to CVD graphene transferred onto an identical substrate. Through careful consideration of the concentration of the graphene dispersion and the deposition methods, very thin films of graphene can be prepared. Atomic force microscopy (AFM) measurements showed an effective bilayer graphene thickness for the LPE samples. Z-scan measurements performed with 180 fs pulses at a wavelength of 1030 nm reveal that both LPE and CVD graphene display strong saturable absorption characteristics, with a nonlinear absorption coefficient (β) approaching -10$^4$ cm/GW and a Kerr nonlinearity (n$_2$) of -1 cm$^2$/GW. Such strong saturable absorption is ideal for devices such as mode-lockers for ultrafast pulsed lasers. The magnitude of the nonlinear absorption coefficient of the LPE graphene increases with pulse duration, τ, up to 10$^5$ cm/GW at around τ = 10 ps. These results pave the way for the use of LPE graphene in nonlinear optical applications such as frequency generation and mode locking.
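For context, β and n$_2$ values such as these are typically extracted by fitting the standard thin-sample Z-scan expressions (a textbook relation, not specific to this work); for the open-aperture trace, to first order in the nonlinear absorption,
$$T(z) \approx 1 - \frac{\beta I_0 L_{\rm eff}}{2\sqrt{2}\,\big(1 + z^2/z_0^2\big)}, \qquad L_{\rm eff} = \frac{1 - e^{-\alpha_0 L}}{\alpha_0},$$
where $I_0$ is the on-axis peak intensity at focus, $z_0$ the Rayleigh range and $\alpha_0$ the linear absorption coefficient; a negative β (saturable absorption) produces a transmittance peak at focus.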
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan began physics data taking in 2019. With a target integrated luminosity of 50 ab$^{-1}$, Belle II aims to record a data sample that is roughly 40-100 times larger than those of its predecessors, thus enabling uniquely high-precision studies of b-quark, c-quark, and tau-lepton physics. The experiment provides an interesting environment to search for a wide variety of dark sector particles, possible dark matter candidates, and other low-mass particles predicted by theories of physics beyond the Standard Model. In this talk, I will summarize recent Belle II physics results based on the initial data taking, and discuss future prospects for the experiment.
Belle II is a B-factory experiment at the SuperKEKB electron-positron collider located at the KEK laboratory in Tsukuba, Japan, operating near the Upsilon(4S) resonance at a centre-of-mass energy of 10.58 GeV. In this talk I will discuss our analysis searching for the ultra-rare charged lepton flavour violating (CLFV) decay $B^+ \to K^+ \tau e$. This decay is far below experimental sensitivity if we assume the decay rate predicted by the Standard Model. However, many extensions of the Standard Model, specifically those attempting to incorporate the recent “B physics anomalies”, predict much larger branching fractions which are potentially within the reach of experiments. Discovery of this mode would be explicit evidence of physics beyond the Standard Model, while a null result would allow us to place strict constraints on these models. A previous search was performed at BaBar in 2012, setting a 90% CL upper limit on the branching fraction of a few $\times 10^{-5}$. The much larger integrated luminosity at Belle II can be exploited to improve the analysis sensitivity by at least an order of magnitude. A brief overview of the Belle II experiment and the theoretical aspects of CLFV will be discussed, along with the current status and future potential of our analysis.
The Belle II experiment is a next-generation $B$-factory experiment located at the SuperKEKB $e^+e^-$ collider, with the focus on examining the decays of $B\bar{B}$ meson pairs. The Belle II experiment started data taking in March 2019. It has since reached a world-record instantaneous luminosity of $2.4\times10^{34}{\rm cm^{-2}s^{-1}}$, and has accumulated a total of $90.0\,{\rm fb^{-1}}$ to date. One of the main goals of the experiment is the precision measurement of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix elements. The $V_{ub}$ element of the CKM matrix describes the coupling strength between $u$ and $b$ quarks. The semileptonic $B$ meson decays of the type $B \to X_u \ell \nu$ play a critical role in the determination of $|V_{ub}|$. An inclusive untagged search for the $B \to X_u \ell \nu$ process at Belle II will be presented. Only the final state charged lepton is selected, while the final state meson and the companion $B$ meson in the event are not reconstructed. The final state neutrino cannot be detected and manifests as missing energy in the event. This decay is suppressed compared to the decay with a charm quark in the final state, $B \to X_c \ell \nu$, which is the main background for this mode. Because the up quark is lighter than the charm quark, the leptons in the $B \to X_u \ell \nu$ decay can reach higher energies. This is exploited in the analysis by extracting the $B \to X_u \ell \nu$ yield in the momentum endpoint region of the charged lepton, where the $B \to X_c \ell \nu$ contributions are negligible. Reconstruction and background suppression methods will be presented, leading to a discussion of the current results and of the prospects for this measurement with the Belle II experiment.
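The endpoint argument can be made quantitative with simple two-body kinematics in the $B$ rest frame (an illustrative estimate, neglecting the lepton mass and the small boost from the $\Upsilon(4S)$ decay):
$$p_\ell^{\rm max} \simeq \frac{m_B^2 - m_X^2}{2 m_B},$$
so with $m_B \approx 5.28$ GeV the charm background, for which $m_X \gtrsim m_D \approx 1.87$ GeV, cuts off near 2.3 GeV, while $B \to X_u \ell \nu$ leptons extend up to roughly 2.6 GeV, leaving a nearly charm-free window at the endpoint.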
Defect engineering plays an essential role in materials science and is of paramount importance in thin-film device fabrication. Novel experimental methods are needed to identify and quantify defects during film growth. The Debye temperature (DT) of a solid is a measure of its stiffness and so is sensitive to defect concentrations. The DT tends to decrease in the vicinity of the surface, such that the endpoint value found for the top atomic layer is known as the surface DT. In this collaborative project, we have used a suite of surface characterization techniques to identify and quantify defects at the surface and in the near-surface region of epitaxial films, compared with single crystals. We applied Rutherford Backscattering Spectroscopy (RBS, in random and channeling modes), Positron Annihilation Spectroscopy (PAS), and Low-Energy Electron Diffraction (LEED) to study defect density and distribution and to determine the surface DT of different epitaxially grown thin films (Si on sapphire, and Ge on Si(001)). RBS in channeling alignment was used to measure the defect distribution as a function of depth, which can be correlated with PAS measurements to give information about defect densities. These results were compared with the surface DT calculated from LEED patterns, which showed that the larger the concentration of defects in the epitaxial layer, the lower the surface DT. For example, the surface DTs of bulk Si(001), 1 μm Si on sapphire, and 0.6 μm Si on sapphire were 609 K, 574 K, and 535 K, respectively. However, the experimental uncertainties of the LEED-derived DT are large and depend on the diffraction peak index, the electron energy, and the inner potential assumed in the calculations. Overall, we found good agreement between the surface DT estimated from LEED, the defect densities estimated from RBS, and the PAS results.
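For context, the surface DT is conventionally extracted from the temperature dependence of the LEED spot intensity through the Debye–Waller factor (standard high-temperature expression, not specific to this work):
$$I(T) \propto e^{-2M}, \qquad 2M = \frac{3\hbar^2\,|\Delta\mathbf{k}|^2\, T}{m\, k_B\, \Theta_D^2},$$
where $\Delta\mathbf{k}$ is the momentum transfer of the diffracted beam and $m$ the atomic mass; $\Theta_D$ follows from the slope of $\ln I$ versus $T$, which is why the result depends on the diffraction peak index, the electron energy, and the assumed inner potential.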
The morphology of ice formed under flowing liquid water is a challenging free-boundary problem. A common case in nature is the formation of icicles, which grow as liquid water flows down the surface, freezing as it descends. Theories of icicle growth have always assumed a thin liquid coat over the entire icicle's surface. These theories predict the growth in length and mean diameter well, but have so far failed to explain how ripples form. The ripples that commonly wrap around icicles have been observed to depend solely on the presence of impurities in the source water, in concentrations as low as 20 ppm NaCl.
We present experimental observations of the flow and wetting behaviour of water on actively growing icicles using a fluorescent dye. Sodium fluorescein acts as both an indicator of liquid and an instability-triggering impurity. The water does not coat the entire icicle. Rather, it descends in rivulets, leaving trails of water or adding to liquid reservoirs already on the surface. The patches of water left on the surface are larger for higher concentrations and are distributed to match the ripples that form.
The wetting behaviour is affected by the ice's texture, surface chemistry, and topography. We examined these effects by growing icicles on cylinders of ice designed to isolate each effect. While ripples began to form on roughened and salt-doped ice, they only wrapped around the icicle to form a rib at a hard edge or near the tip. In those locations the water spreads over the entire circumference, which may encourage the ripple pattern to wrap around the icicle. This incomplete coverage appears to affect the morphology of the growing icicle and may be an important component of the mechanism of ripple formation.
The presence of impurities appears to trigger a feedback between the water distribution and the ice properties: the impurities cause variations in texture, chemistry, and shape, which in turn attract more water to those locations, providing more material to freeze.
Frustrated magnetic materials and strongly correlated electron systems are at the forefront of research in modern condensed matter physics and materials science. Despite almost three decades of investigations, the theoretical understanding of these fascinating systems remains incomplete. The most prominent theoretical frameworks used to tackle these systems take the form of an emergent gauge theory akin to the gauge theory that describes conventional electromagnetism.
Spin ice is an unusual substance in which the magnetic moments of individual atoms behave very similarly to the protons in conventional water ice — hence the name spin ice — failing to align even at very low temperatures and displaying the same residual entropy that Linus Pauling calculated for water ice and which is measured experimentally. Spin ices, which belong to the broad class of compounds called magnetic pyrochlores, actually have something in common with electromagnetic fields: both can be described by a gauge theory. Many aspects of conventional electromagnetism are sensitive to constraints from enclosure boundaries, such as the total internal reflection used in communication with optical fibers. It is then reasonable to wonder whether spin ices have similar sensitivities to boundary effects and confinement. Motivated by the recent experimental realizations of spin ice and other magnetic pyrochlore thin films, I will discuss in this talk some of the exotic physical phenomena that arise when considering spin ice thin films, such as a novel magnetic charge crystallization on the film surface while the bulk remains thermally disordered [1]. From a broader context, magnetic pyrochlore thin films offer a natural platform to study the confinement of emergent gauge fields describing strongly correlated systems and the evolution of nontrivial magnetic correlations as one moves from three- to two-dimensional spin textures [2]. Finally, I will discuss the consequences of open surfaces on the mechanism of order by disorder in thin films of the XY pyrochlore antiferromagnet. We find that a complex competition between multiple orders takes place as a function of temperature and film thickness. A gradient of ordering spreads over a long length scale inside the film, while the nature of the phase transitions is blurred between two- and three-dimensional critical phenomena [3]. Beyond the physics of films, this work may also pertain to near-surface effects in single crystals of rare-earth pyrochlore oxides.
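For reference, Pauling's counting argument, which carries over directly to spin ice, gives the residual entropy mentioned above: with $N$ spins there are $N/2$ tetrahedra, each allowing 6 of its 16 configurations, so
$$S_{\rm Pauling} \approx N k_B \ln\!\left[2\left(\tfrac{6}{16}\right)^{1/2}\right] = \frac{N k_B}{2}\ln\frac{3}{2} \approx 0.20\, N k_B,$$
in good agreement with the measured values in both water ice and spin ice.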
[1] L. D. C. Jaubert, T. Lin, T. S. Opel, P. C. W. Holdsworth and M. J. P. Gingras; Phys. Rev. Lett. 118, 207206 (2017).
[2] Étienne Lantagne-Hurtubise, Jeffrey G. Rau and Michel J. P. Gingras; Phys. Rev. X 8, 021053 (2018).
[3] L. D. C. Jaubert, J.G. Rau, P. C. W. Holdsworth and M. J. P. Gingras; unpublished.
The ability to control spin is important for probing many spin-related phenomena in the field of spintronics. Spin-orbit torque is an important example, in which spin flows across a magnetic interface and helps to control magnetization dynamics. As spin can be carried by electrons, spin-triplet pairs, Bogoliubov quasiparticles, magnons, spin superfluids, spinons, etc., studies of spin currents can have implications across many disciplines. In this talk, I first review the most common ways to generate spin flows and then concentrate on how spin can be controlled in insulating materials. In the first part of the talk, I will discuss a linear response theory based on the Luttinger approach of the gravitational scalar potential and apply this theory to magnon transport in antiferromagnetic insulators, ranging from collinear antiferromagnets [1,2,3] to breathing pyrochlore noncollinear antiferromagnets [4,5]. The theory also applies to noncollinear antiferromagnets, such as kagome, where we predict both the spin Nernst response [4] and the generation of nonequilibrium spin polarization [5] by temperature gradients; the latter effect constitutes the magnonic analogue of the Edelstein effect of electrons. In the second part of this talk, I will discuss spin superfluid transport in exchange-interaction-dominated three-sublattice antiferromagnets. The system in the long-wavelength regime is described by an SO(3)-invariant field theory (nonlinear sigma model). Additional corrections from Dzyaloshinskii-Moriya interactions or anisotropies can break the symmetry; however, the system still retains an approximate U(1) rotation symmetry. Thus, the power-law spatial decay signature of spin superfluidity is identified in a nonlocal-measurement setup in which the spin injection is described by the generalized spin-mixing conductance [6,7]. We suggest iron jarosites as promising material candidates for realizing our proposal. Both magnon and spin-superfluid flows are examples of spin transport with low dissipation, and as a result our studies pave the way for the creation of novel electronic devices for classical and even quantum information processing in which signals can propagate with almost no dissipation. If time permits, I will also discuss realizations of skyrmion lattices in noncollinear antiferromagnets.
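Schematically, the nonlocal signature referred to above distinguishes superfluid from diffusive spin transport by its distance dependence (a sketch of the standard expectation, with $L_0$ set by the interfacial spin-mixing conductances and $\lambda$ the magnon spin-diffusion length):
$$j_s^{\rm superfluid}(L) \sim \frac{1}{L + L_0} \qquad \text{versus} \qquad j_s^{\rm diffusive}(L) \sim e^{-L/\lambda},$$
where $L$ is the injector–detector separation.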
[1] V. Zyuzin, A.A. Kovalev, Phys. Rev. Lett. 117, 217203 (2016).
[2] Y. Shiomi, R. Takashima, E. Saitoh, Phys. Rev. B 96, 134425 (2017).
[3] B. Li, A.A. Kovalev, Phys. Rev. Lett. 125, 257201 (2020).
[4] B. Li, S. Sandhoefner, A.A. Kovalev, arXiv:1907.10567 (2019).
[5] B. Li, A. Mook, A. Raeliarijaona, A.A. Kovalev, arXiv:1910.00143 (2019).
[6] G. G. Baez Flores, A.A. Kovalev, M. van Schilfgaarde, K. D. Belashchenko, Phys. Rev. B 101, 224405 (2020).
[7] B. Li, A.A. Kovalev, arXiv:2011.09102.
In this talk I will discuss how one may view four-dimensional de Sitter space as a coherent Glauber-Sudarshan state in string theory. I will also discuss why a de Sitter space cannot exist as a vacuum state in string theory.
I will discuss different notions of nonrelativistic strings and their target space geometries. The first example comes from a self-contained corner of string theory dubbed nonrelativistic string theory, which is closely related to string theory in the discrete light-cone quantization. The appropriate spacetime geometry for nonrelativistic string theory is a stringy generalization of Newton-Cartan geometry. The second example involves sigma models at a Lifshitz point, which describe strings moving in bimetric spacetime. In the limit when the two metrics coincide, the relativistic sigma model that underlies string theory can be recovered. This study of Lifshitz-type sigma models also provides useful insights for constructing a quantum theory of membranes.
I will review the recently discovered ’t Hooft anomalies involving higher-form symmetries and discuss some of their implications for the dynamics of vector-like gauge theories.
Effective field theories (EFTs) are widely used to parameterize long-distance effects of unknown short-distance dynamics or possible new heavy particles. It is known that EFT parameters are not entirely arbitrary, and in particular must obey positivity constraints if causality and unitarity are satisfied at all scales. We systematically explore those constraints from the perspective of 2 to 2 scattering processes, and show that all EFT parameters, in units of the mass threshold M, are bounded from below and above: causality requires a sharp form of dimensional-analysis scaling.
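Schematically, the simplest such constraint follows from a twice-subtracted forward dispersion relation for a crossing-symmetric $2 \to 2$ amplitude with low-energy expansion $A(s, t=0) \supset c_2 s^2$ (shown here in its textbook form, ignoring IR subtleties for massless states):
$$c_2 = \frac{2}{\pi}\int_{M^2}^{\infty} \frac{ds}{s^3}\,{\rm Im}\,A(s,0) = \frac{2}{\pi}\int_{M^2}^{\infty} \frac{ds}{s^2}\,\sigma(s) > 0,$$
with $\sigma$ the total cross section via the optical theorem; positivity of the low-energy coefficient follows immediately, while two-sided bounds require additional input from crossing and unitarity away from the forward limit.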
I describe the first investigation of the holographic complexity conjectures for rotating black holes. Exploiting a simplification that occurs for equal-spinning odd dimensional black holes, I demonstrate a relationship between the complexity of formation and the thermodynamic volume associated with the black hole. This result suggests that it is thermodynamic volume and not entropy that governs the complexity of formation in both the Complexity Equals Volume and Complexity Equals Action proposals. This proposal reduces to known results involving the entropy in settings where the thermodynamic volume and entropy are not independent, but has much broader scope. Assuming the validity of a conjectured inequality for thermodynamic volume, this result suggests the complexity of formation is bounded from below by the entropy for large black holes.
Quantum computing is a rapidly growing field both in academia and industry. This is driving the need to expand traditional course offerings and degree programs to train the next generation of researchers and quantum scientists. Most programs have focused on graduate courses and research opportunities for students with a physics background. Laurier’s combination of physics and computer science within a single undergraduate department provided a unique opportunity to introduce a third-year undergraduate course in quantum computing. The course was designed to be open to all science majors who have the required mathematical background. This talk will describe the goals and framework used to build the course, the outcomes so far and the lessons learned along the way.
Prior research has found limitations in how students reason about uncertainty and measurement in introductory courses, with many students thinking point-like (a single measurement could be the true value) rather than set-like (a set of measurements estimates the parameter). Motivated by the question, "How does that intro-level reasoning influence student thinking about quantum mechanical measurement?", we conducted interviews and surveys to probe student reasoning about uncertainty and measurement across classical and quantum mechanical contexts. The work also aims to characterize the possible paradigms of student thinking about uncertainty and measurement across physics contexts, adding nuance to the point-like and set-like paradigms.
Medical x-ray imaging has revolutionized modern medicine. A necessary and critical component of a medical x-ray imaging system is the x-ray detector. Over the past 50 years, x-ray detectors have evolved from film-screen systems, to computed radiographic cassettes, culminating in flat-panel digital x-ray detectors that directly capture image data during patient examination, bypassing the need for an intermediate data readout between data acquisition and image viewing. This approach has increased the efficiency of medical imaging procedures, with the added benefit of improved image quality. Flat-panel detectors also enable cone-beam computed tomography, which is used in dentistry, interventional radiology, and radiation therapy treatment planning and verification. Flat-panel x-ray detectors used in clinical practice are energy integrators, which means that the image signal is proportional to the intensity of the x-ray beam. Recent technological innovations, largely developed to satisfy the needs of CERN’s particle collision experiments, have resulted in photon-counting x-ray imaging detectors. These detectors enable the identification of individual photon interactions at rates adequate for a wide range of medical applications. This technology may reduce the radiation dose of x-ray imaging procedures, and may enable new energy-based x-ray imaging methods that estimate the shape of medical x-ray spectra to provide new types of image contrast not possible with energy-integrating detectors. This talk will discuss the basic physics of photon-counting x-ray imaging and the key factors that need to be overcome to realize the full potential of this exciting new technology.
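The distinction can be summarized schematically (illustrative notation, not a specific detector model): an energy-integrating detector records $S_{\rm EI} \propto \int E\,\Phi(E)\,\eta(E)\,dE$, whereas a photon-counting detector records $S_{\rm PC} \propto \int_{E>E_{\rm th}} \Phi(E)\,\eta(E)\,dE$ for each counting threshold $E_{\rm th}$, where $\Phi(E)$ is the transmitted photon fluence per unit energy and $\eta(E)$ the detection efficiency; several thresholds therefore sample the shape of the spectrum rather than only its energy-weighted integral.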
For almost five decades, Magnetic Resonance Imaging (MRI) has been on a monotonic technological progression towards higher and higher magnetic field strength. This is largely because, as any physicist will tell you, nuclear magnetization, and therefore MR signal strength, scales with the applied field strength. Why then go backwards to a low magnetic field to explore advanced neuroimaging when “everyone knows” we should be using high magnetic fields in MRI?
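The underlying scaling is the equilibrium (Curie-law) nuclear magnetization for spin-1/2 protons (a textbook relation):
$$M_0 = \frac{N \gamma^2 \hbar^2 B_0}{4 k_B T},$$
where $N$ is the proton density and $\gamma$ the gyromagnetic ratio, so the available magnetization grows linearly with $B_0$; the overall SNR scaling is somewhat steeper, depending on whether coil or sample noise dominates.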
While a strong magnetic field confers many benefits – e.g. larger chemical shifts, stronger fMRI contrast, etc. – it is not a panacea. In contrast, MRI at lower magnetic field exhibits decreased spatial distortion due to lower susceptibility-induced field inhomogeneity, decreased RF heating of tissue, improved RF pulse B1 homogeneity, etc. We have recently explored the use of a novel MRI device (Synaptive Medical Inc, Toronto ON) that utilizes a cryogen-free, head-only 0.5-T magnet with a 16-channel fully digital receive chain and a 100 mT/m gradient set with a usable slew rate of 400 T/m/s. The introduction of modern spectrometer design to a 0.5-T magnet has permitted us to explore a range of advanced neuroimaging applications. This allows us to take advantage of all the benefits of low field while mitigating the drawbacks of decreased signal strength through improved T/R chain and gradient coil design.
Through this research we will explore a spectrum of advanced neuroimaging applications of this technology including:
• “Distortion Free” Diffusion Weighted Imaging, leveraging the high strength/slew gradient design;
• “Band Free” balanced SSFP imaging of the internal auditory canal, leveraging the improved field homogeneity and low RF specific absorption rate inherent to low magnetic field;
• Accelerated MR screening exams for use in stroke imaging, leveraging the high performance receive chain for generating excellent SNR images.
MRI provides exquisitely detailed images of brain and spinal cord anatomy and pathology. MR images are multi-planar, radiation-free, and have greater sensitivity and specificity than either CT or ultrasonography. Although an engineering challenge, the placement of MRI systems in the operating room will revolutionize neurosurgical care. Surgical navigation was repeatedly updated with iMR images able to detect brain shift resulting from CSF leakage. iMRI has also identified a significant number of patients who harboured unsuspected residual tumour at the end of surgery, thus sparing them the discomfort and expense of reoperation. Newer MRI techniques such as DTI and fMRI were brought into the operating room, allowing vital tracts and functional areas to be located during surgery despite the concomitant brain shift.
A 1.0 T conduction-cooled superconducting magnet has been designed for intraoperative MRI. The warm bore of the magnet is 700 mm in diameter and 1200 mm in length, and the magnet achieves high magnetic field homogeneity in a DSV of 300 mm. The total weight of the magnet is 1.8 tonnes. Two 4 K cryocoolers are used for cooling the magnet, which is installed in a mover and remains at field during movement. In this paper, the electromagnetic design, quench simulation, eddy-current simulation and test results of the magnet are presented.
Carbon monoxide (CO) has a bad reputation due to its potentially lethal consequences when inhaled at high concentrations by humans. However, at low doses CO exerts a broad spectrum of biological activities that result in a variety of beneficial actions, including anti-inflammatory, vasodilatory, anti-apoptotic and anti-proliferative effects [1].
Plasma can generate CO from the dissociation of CO2; in this context, non-equilibrium plasma at atmospheric pressure is an attractive in situ CO source since it is able to create CO at low doses from CO2 [2]. Moreover, plasma can be used for biomedical applications, and intense research is now being conducted on its potential therapeutic use for the treatment of pathologies such as cancer and skin wounds. Plasmas are very versatile as they possess the capacity to generate large amounts of reactive species combined with electric fields, photons and charged particles. However, the combination of plasma and CO for biomedical applications remains to be fully explored.
This presentation will focus on the challenge of developing a plasma reactor to generate controlled quantities of CO that can be used for therapeutic purposes. The reactor is based on a plasma jet configuration in which the discharge is produced in a coaxial dielectric barrier discharge (DBD) reactor equipped with a quartz capillary tube [3]. Helium with a small addition of CO2 flows through the device. To assess and quantify the production of CO by the plasma, we developed a system whereby mouse blood hemoglobin, a strong scavenger of CO, interacts with the plasma effluent. Once CO binds to hemoglobin, it forms carboxyhemoglobin (COHb), which can be easily and precisely quantified with a spectrophotometer. We will present the first results showing that indirect and direct plasma treatments have different effects on the production of CO and its binding to hemoglobin.
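A minimal sketch of the spectrophotometric quantification, assuming only the two dominant hemoglobin species contribute at the chosen wavelengths (an illustrative two-component Beer–Lambert model, not necessarily the exact protocol used here):
$$A(\lambda_i) = \big[\varepsilon_{\rm COHb}(\lambda_i)\, c_{\rm COHb} + \varepsilon_{\rm HbO_2}(\lambda_i)\, c_{\rm HbO_2}\big]\,\ell, \qquad i = 1, 2,$$
two absorbance measurements at wavelengths with known extinction coefficients $\varepsilon$ and path length $\ell$ give a linear system for the two concentrations, from which the COHb fraction follows.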
[1] R. Motterlini and L. E. Otterbein, Nat. Rev. Drug Discov., vol. 9, no. 9, pp. 728–743, Sep. 2010.
[2] E. Carbone and C. Douat, Plasma Med., vol. 8, no. 1, pp. 93–120, 2018.
[3] T. Darny, J.-M. Pouvesle, V. Puech, C. Douat, S. Dozias, and E. Robert, Plasma Sources Sci. Technol., vol. 26, no. 4, p. 045008, Mar. 2017.
Non-thermal plasma (NTP) is being increasingly considered for its many medical applications. Even though NTP comprises physical factors such as the electric field and charged particles, NTP is mostly recognized to induce biological responses through its production and delivery of reactive species such as reactive oxygen and nitrogen species (RONS). Precise tuning of RONS is an important issue for plasma medicine as different RONS compositions and concentrations can lead to different clinical outcomes. For example, in some situations NTP was found capable of inducing cell proliferation, thus promoting wound healing, while in other situations NTP was found to induce proliferation arrest, thus yielding anticancer effect [1]. This highlights the fact that NTP should not be considered as a simple drug with its dose defined as a single parameter. NTP can be better viewed as a vector to administer reactive molecules, hence making accessible molecules that cannot be administered via more stable solid or liquid states.
Different NTP devices thus possess different physical properties that produce various RONS, leading to distinctive biological responses. For example, varying the driving frequency or the plasma-forming gas can lead to different plasma properties and drastically change the outcome of the treatment. In this work, we use the convertible plasma jet to produce three different NTPs using the same plasma-forming gas and the same driving frequency [2]. Investigating the cytotoxic effect of NTP with an in vitro model of triple-negative breast cancer cells in suspension, we observed that the cytotoxicity not only depends on the discharge mode, but also that the cellular response to the addition of nitrogen or oxygen to the plasma-forming gas is modulated according to the discharge mode. This highlights the fact that fine-tuning of plasma parameters could become an essential step in future clinical NTP treatments.
The author acknowledges FRQNT, NSERC, MEDTEQ, Mitacs, TransMedTech, NexPlasmaGen Inc. for research funding.
References
[1] L. Boeckmann, M. Schäfer, T. Bernhardt, M.-L. Semmler, O. Jung et al. Applied Sciences, 10, 6898 (2020)
[2] J.-S. Boisvert, J. Lafontaine, A. Glory, S. Coulombe and P. Wong, IEEE Transactions on Radiation and Plasma Medical Sciences, 4, 644-654 (2020)