Welcome to the CAP2023 Indico site. This site is being used for abstract submission and congress scheduling. Post-deadline poster abstracts will be accepted until May 22, 2023. The Congress program can be seen by selecting "Timetable" in the left menu. Congress registration is now open using the link in the left menu.
Panel with Dr. Ania Harlick, Dr. Henry Shum, Dr. Nomaan X, Dr. Sabrina Leslie, and Dr. Sanjeev Seahra
Life as a physicist can take you in many directions; in academia alone, there are many different fields and positions one might end up in. On this panel, we hear from physicists from many walks of life and at different stages of their careers, for some insight into what the future might hold for you as a physicist and the obstacles that may arise. The panelists range from postdoc to head of department: a broad scope that should cover everyone's interests! The panelists will introduce themselves and their careers, and then participants will have the chance to ask questions of the panelist(s) of their choice.
This is a COVID-safe event; all participants must wear a face mask.
A presentation/workshop by Dr. Stephen Heard
As an author of a scientific paper, you face a bewildering array of options for publication. There are thousands of journals: some very general, and some narrow in scope; some well-known, and some obscure. The situation is complicated even further by the recent proliferation of “predatory” journals and by an escalation in publishing costs. Dr. Heard will discuss some of the factors one might consider in choosing a journal. Participants will then work in small groups to evaluate journals and consider a strategy for publishing a paper.
This is a COVID-safe event; all participants must wear a face mask.
Dr. Stephen Heard is a Professor of Biology at the University of New Brunswick, and the author of The Scientist’s Guide to Writing: How to Write More Easily and Effectively Throughout Your Scientific Career (Princeton University Press; 2nd ed. 2022). He has published over 90 scientific papers and served as a journal Associate Editor for more than 20 years.
While it is considered to be one of the most promising hints of new physics beyond the Standard Model, dark matter is as yet known only through its gravitational influence on astronomical and cosmological observables. I will discuss our current best evidence for dark matter’s existence as well as the constraints that astrophysical probes can place on its properties while highlighting some tantalizing anomalies that could indicate non-gravitational dark matter interactions. Future observations, along with synergies between astrophysical and experimental searches, have the potential to illuminate dark matter’s fundamental nature and its influence on the evolution of matter in the cosmos from the first stars and galaxies to today.
Hyperbolic lattices are a new form of synthetic quantum matter in which particles effectively hop on a discrete tiling of two-dimensional hyperbolic space, a non-Euclidean space of negative curvature. Hyperbolic tilings were studied by the British-Canadian geometer H.S.M. Coxeter and popularized through art by M.C. Escher. Recent experiments in circuit quantum electrodynamics and electric circuit networks have demonstrated the coherent propagation of wave-like excitations on hyperbolic lattices. In this talk, I will survey a few of the many exciting directions opened up by this new field, including generalizations of Bloch band theory for hyperbolic lattices, hyperbolic topological materials, and tabletop simulations of the AdS/CFT correspondence.
The Standard Model (SM) of particle physics has been very successful in describing the elementary particles and their interactions. The search for neutrinoless double-beta decay ($0\nu\beta\beta$) offers a way to probe for physics beyond the SM. Observation of $0\nu\beta\beta$ would unambiguously demonstrate violation of lepton number. It could also help explain the observed baryon asymmetry in the universe, validate the Majorana nature of neutrinos, and probe new mass generation mechanisms up to the GUT scale. The proposed nEXO experiment will search for $0\nu\beta\beta$ decay in $^{136}$Xe with a projected half-life sensitivity exceeding $10^{28}$ years at the $90\%$ confidence level. nEXO will employ a liquid xenon (LXe) Time Projection Chamber (TPC) filled with 5 tonnes of Xe enriched to $\sim90\%\;^{136}$Xe. In parallel, new avenues are being investigated for future upgrades to nEXO with the aim of suppressing backgrounds that obscure the $0\nu\beta\beta$ signal. One approach is the extraction and identification of the $\beta\beta$-decay daughter Ba ion, also known as Ba tagging, which would irrefutably classify an event as a $\beta\beta$ event. Groups at McGill University and TRIUMF are developing an accelerator-driven ion source to implant radioactive ions inside a volume of LXe for subsequent ion extraction using methods under development by other groups within the nEXO collaboration. In the first phase of this development, ions will be extracted using an electrostatic probe for subsequent identification using $\gamma$ spectroscopy. The motivation for the project, the experimental apparatus, and recent updates will be presented along with planned measurements.
Radioactivity in particulates contributes significantly to the background in ultra-low-background experiments. Alphas generated from dust produce degraded energy signals in the detector that mimic low-energy nuclear recoil events, a background for rare-event particle detectors, especially dark matter search experiments. A particulate cleaning station, which includes controlled gas flow over the material surface, a flowmeter, an optical microscope, and a profilometer to scan the surface, has been developed and used to study the dust-cleaning efficiency at various gas speeds. In this talk, the hardware of the system, the analysis technique, and the cleaning efficiency for different materials and dust sizes at various gas speeds will be presented.
LoLX is a small-scale R&D experiment, hosted at McGill University, which aims to study the properties of liquid xenon (LXe) scintillation light and characterize Cherenkov light emission in LXe with cutting-edge photo-detection technology. It supports next-generation rare-decay experiments, such as nEXO, which will search for neutrinoless double-beta decay in LXe. Interactions in nEXO produce scintillation light in the vacuum ultraviolet (VUV), and the photo-detection technology of choice is silicon photomultipliers (SiPMs), which have high efficiency in this region as well as exceptional gain.
The previous detector design included 96 Hamamatsu VUV4 SiPMs in a cylindrical geometry. Optical filters are used to separate Cherenkov and scintillation light produced by a radioactive beta source. In this talk we will present LoLX², the new cubic version of LoLX, which addresses a few issues encountered in its first iteration.
LoLX² will assess the performance of two types of SiPMs, Hamamatsu VUV4 and FBK HD3. It will deploy 40 of each type as well as a VUV-sensitive photomultiplier tube (PMT), which serves as a benchmark for SiPM photo-detection efficiency in VUV. We will give an overview of the new LoLX inner detector designed at TRIUMF, its assembly and the testing of the FBK HD3 SiPMs.
I present studies on a deep convolutional autoencoder originally designed to remove electronic noise from a p-type point contact high-purity germanium (HPGe) detector. With their intrinsic purity and excellent energy resolutions, HPGe detectors are suitable for a variety of rare event searches such as neutrinoless double-beta decay, dark matter candidates, and other exotic physics. However, noise from the readout electronics can make identifying events of interest more challenging. At lower energies, where the signal-to-noise ratio is small, distinguishing signals from backgrounds can be particularly difficult.
I demonstrate that a deep convolutional autoencoder can denoise pulses while preserving the underlying pulse shape well. Results show that a deep learning-based approach is more effective than traditional denoising methods. I also present several studies on how the use of this autoencoder can lead to better physics outcomes through improvements in the energy resolution and better background rejection. Finally, I highlight extensions of this research that our group is working on and show how our methods are broadly applicable to the particle astrophysics community.
nEXO is a proposed tonne-scale experiment which aims to search for neutrinoless double beta ($0\nu\beta\beta$) decay in the isotope $^{136}$Xe. The observation of $0\nu\beta\beta$ decay would demonstrate lepton number violation in weak processes and the Majorana nature of neutrinos. This would be an explicit signature of physics beyond the Standard Model and also may provide insight into the observed matter-antimatter asymmetry in the Universe. nEXO is being designed to investigate this rare decay with a projected half-life sensitivity that is greater than $10^{28}$ years at the 90% confidence level.
In order to reduce the impact of cosmogenic backgrounds, the experiment is anticipated to be located at SNOLAB, an underground laboratory two kilometres below the surface. The xenon-filled Inner Detector is designed to be located at the centre of a water tank to shield against radioactive backgrounds and to tag passing cosmogenic muons. This tank, 12.3 m in diameter and 12.8 m in height, filled with 1.5 kilotonnes of ultra-pure deionized water and instrumented with an array of 8-inch photomultiplier tubes (PMTs), constitutes the Outer Detector. The PMTs will be used to veto potential background events in the Inner Detector that may be introduced by spallation neutrons from passing cosmic muons and other secondary particles.
A calibration system is being developed for nEXO's Outer Detector. The aim of this system is to calibrate the timing properties of the PMT readout system and monitor the optical properties of the water. I will discuss the design implemented for calibrating the Outer Detector by analyzing the results of GPU-accelerated ray-tracing software (Chroma), as well as considering the different strategies currently used by other similar experiments.
The interpretation of experimental results in particle physics is complicated by the fact that essentially all experimental probes of short distance physics are complex multi-scale processes, and so our ability to interpret experiments depends on our ability to factorize the physics at different distance scales. A simple example is the factorization of hadronic cross sections into short-distance scattering amplitudes and long-distance parton distribution functions, but for more complex situations with additional scales the issue of factorization can be significantly more involved.
Effective Field Theory (EFT) is a general approach in which only the degrees of freedom relevant at a particular length scale are included in the theory, and it provides a systematically improvable approach to factorization. For collider physics, the appropriate EFT goes under the name of “Soft-Collinear Effective Theory” (SCET). In this talk I’ll present a recent, simple formalism for SCET and discuss its application to the study of power corrections to various processes.
The instability of the vacuum in the presence of a strong static electric field that creates charged pairs is Schwinger pair production. In this talk we describe the classical field theory of pair creation using non-Hermitian quantum mechanics. The Klein-Gordon equation in 1+1 dimensions in the presence of a constant electric field, with the ansatz $\phi(x,t) = e^{-\mathrm{i}\omega t}\phi_{\omega}(x)$, can be mapped to an effective time-independent Schr\"{o}dinger equation with a shifted inverted harmonic oscillator (IHO) potential. We address the question of implementing the appropriate long-distance physics (the boundary condition at infinity) for the IHO that describes pair production using the philosophy of point particle effective field theory (PPEFT). The point particle effective action describes the local interaction of the high-energy source. To leading order, it amounts to adding a complex Dirac delta function at large distances, which then fixes the appropriate boundary condition for the IHO wavefunction, in a renormalization group (RG) invariant way, that describes particle production. We derive Schwinger's pair production rate using the imaginary part of the point particle effective action, which renders the emission probability RG invariant.
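As a schematic of the mapping described above (written here for illustration only, assuming the gauge choice $A_0 = -Ex$ and charge $q$; conventions may differ from the authors'), the Klein-Gordon equation
$$\left[(\partial_t + \mathrm{i}qA_0)^2 - \partial_x^2 + m^2\right]\phi = 0, \qquad \phi(x,t) = e^{-\mathrm{i}\omega t}\phi_\omega(x),$$
reduces to
$$-\tfrac{1}{2}\,\phi_\omega''(x) - \tfrac{1}{2}(qE)^2\left(x + \tfrac{\omega}{qE}\right)^2\phi_\omega(x) = -\tfrac{m^2}{2}\,\phi_\omega(x),$$
i.e. a time-independent Schr\"{o}dinger equation with a shifted inverted harmonic oscillator potential and effective energy $-m^2/2$.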
The Eguchi-Hanson-AdS$_5$ family of spacetimes is a class of static, geodesically complete, asymptotically locally AdS$_5$ soliton solutions of the vacuum Einstein equations with a negative cosmological constant. They have negative mass and are parameterized by an integer $p \geq 3$, with a conformal boundary of spatial topology $L(p, 1)$. In this talk, I will introduce mode solutions of the scalar wave equation on this background and show that the geometry admits a normal mode spectrum. I will also discuss other geometric properties of these soliton spacetimes.
We study the quantum-classical Einstein equation from a Hamiltonian perspective where the classical gravitational phase space variables and matter state evolve self-consistently. Applied to cosmology, we show that the resulting equations with a quantized massive scalar field permit exact semiclassical static universes, where the curvature and cosmological constant $\Lambda$ arise as discrete values associated to the eigenstates of the scalar field. Linear stability analysis reveals stable and unstable modes that are functions of $\Lambda$, and independent of the size and curvature of the static universe. The unstable mode leads to an inflating "emergent" universe. We also show numerically that the classical and quantum-classical evolutions agree at late times.
In this talk, we present a novel approach to fully renormalize observables such as theoretical predictions for cross sections and decay rates in particle physics. While renormalization techniques have been utilized to absorb infinities, the theoretical expressions for observables are still not fully renormalized, as they contain dependence on arbitrary subtraction schemes and scales. We resolve this to achieve full renormalization based on a new principle, termed the Principle of Observable Effective Matching (POEM), to simultaneously gain both scale and scheme independence. We illustrate this with the example of the total cross section for electron-positron annihilation to hadrons, for which we utilize 3- and 4-loop MS-scheme expressions from perturbative Quantum Chromodynamics (pQCD). With POEM and a process termed Effective Dynamical Renormalization, we fully renormalize these expressions. We obtain a prediction of $1.052431^{+0.0006}_{-0.0006}$ at $Q = 31.6$ GeV, which is in excellent agreement with the experimental value of $R^{\mathrm{exp}}_{e^+e^-} = 1.0527^{+0.005}_{-0.005}$.
We really understand a phenomenon in science when we can use it to make something new. This interplay between fundamental science and new materials is particularly vibrant in the highly interrelated fields of biological physics and soft materials, where a confluence of experimental techniques and theoretical approaches meet to address fundamental questions. What is a gel? How do macromolecules, such as proteins and DNA, function in the crowded and confined environment of a cell? What role do motor-driven "active" processes play in transporting material in a cell? And what role do hydrodynamic interactions (the fluid-mediated interaction between macromolecules) play? I will try to paint a picture that motivates these questions.
Magnetic resonance imaging (MRI) is a non-invasive diagnostic tool that uses magnetic fields and radio waves to create detailed images of the body's internal structures. This lecture introduces MRI and explains the physical principles behind the formation of images from signals derived from the magnetic moments of 1H nuclei.
The lecture will discuss the fundamental concepts of MRI, including nuclear magnetic resonance (NMR). NMR is the underlying physical phenomenon that allows MRI to work and involves the interaction of magnetic fields with the atomic nuclei in the body's tissues.
Next, the lecture explains how strong magnetic fields are used in MRI. A magnetic field aligns the magnetic moments of the atomic nuclei in the body's tissues, which can then be excited with radio frequency (RF) fields. The excited nuclei then emit RF signals that are picked up by the MRI's detectors, known as radio frequency coils. Linear gradients in the magnetic field are used to spatially encode the detected RF signals so that images can be formed using the Fourier Transform.
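As a compact illustration of the encoding just described (a standard textbook relation, not specific to this lecture), the detected signal can be written as a Fourier transform of the spin density,
$$s(t) = \int \rho(\mathbf{r})\, e^{-\mathrm{i}\,2\pi\,\mathbf{k}(t)\cdot\mathbf{r}}\,\mathrm{d}\mathbf{r}, \qquad \mathbf{k}(t) = \frac{\gamma}{2\pi}\int_0^t \mathbf{G}(\tau)\,\mathrm{d}\tau,$$
so that an image of $\rho(\mathbf{r})$ is recovered by an inverse Fourier transform of the sampled k-space data.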
Different sources of contrast in images will be introduced, including T1, T2, and proton density. Examples of how each of these highlights different properties of the body's tissues will be presented.
Safety considerations for MRI will be briefly discussed. Finally, some applications of MRI in medicine will be presented, including its use in cancer, neurological disorders, and musculoskeletal injuries.
Overall, this lecture briefly introduces the physics involved in MRI image generation. It is intended for anyone interested in learning more about this important diagnostic tool.
Aside from the lightest elements, hydrogen, helium, and some lithium, which were formed in the big bang, the vast majority of the elements around us were (and are) formed in stars, through chains of nuclear reactions and decays. While the general picture of how the various elements are formed is mostly complete, constructing a detailed understanding of element formation remains an active area of research. This includes building an understanding of the origin of elements heavier than iron, formed mainly in chains of neutron captures and beta decays, such as the rapid and slow neutron capture processes. Other active areas of investigation include the formation of elements, both heavier and lighter than iron, in stellar explosions such as novae, x-ray bursts, and supernovae. These scenarios typically involve rapid chains of proton or alpha capture reactions followed by beta decays.
Forming a complete understanding of stellar nucleosynthesis requires complex modelling of stellar processes, and key ingredients in these models are the rates of the nuclear reactions involved. In turn, constraining these rates requires input from nuclear physics, including laboratory measurements of important reactions using stable and radioactive beam facilities. In this talk, I will discuss some of the forefront techniques used to understand stellar nuclear reactions through accelerator-based measurements, including both direct and indirect measurements. As illustrative examples, I will discuss the results of some recent experiments exploiting these techniques, as well as future efforts on the horizon. I will also present some of the latest developments in experiment design and detector technology, which will be applied to future measurements.
Bound-state $\beta$-decay ($\beta_b^-$-decay) is a radically transformative decay mode that can change the stability of a nucleus and generate temperature- and density-dependent decay rates. In this decay mode the $\beta$-electron is created directly in a bound atomic orbital of the daughter nucleus instead of being emitted into the continuum, so the decay channel is only significant in almost fully stripped ions under extreme astrophysical conditions. The $\beta_b^-$-decay of $^{205}\text{Tl}^{81+}$ could influence our understanding of the production of $^{205}\text{Pb}$, a short-lived radioactive (SLR, 17.3 Myr) nucleus that is fully produced by the s-process in stars. In the context of the early Solar System, SLRs are defined by half-lives of 0.1-100 Myr, and their abundance in meteorites can be used to constrain the formation of the Solar System [1]. Historically, it has been noted that thermal population of the 2.3 keV state of $^{205}\text{Pb}$ in stellar conditions could dramatically reduce the abundance of s-process $^{205}\text{Pb}$ by speeding up the EC-decay to $^{205}\text{Tl}$. This destruction of $^{205}\text{Pb}$ is potentially balanced by the $\beta_b^-$-decay of $^{205}\text{Tl}^{81+}$ [2]. Currently, a theoretical prediction for the half-life of fully stripped $^{205}\text{Tl}$ is used in stellar models, but given the importance of the $^{205}\text{Pb}$/$^{204}\text{Pb}$ chronometer, a measurement of the $\beta_b^-$-decay of $^{205}\text{Tl}^{81+}$ was conducted at the GSI Heavy Ion Facility in March 2020. A $^{205}\text{Tl}^{81+}$ beam was stored in the Experimental Storage Ring, and the growth of $^{205}\text{Pb}^{81+}$ daughters with storage time was directly attributable to the $\beta^-_b$-decay channel. The authors will report a preliminary measured half-life and detail how this half-life can be used to more accurately predict the $^{205}\text{Pb}$ abundance in the early Solar System.
[1] M. Lugaro, et al. Progress in Particle and Nuclear Physics, 102:1–47, 2018.
[2] K. Yokoi, et al. Astronomy and Astrophysics, 145:339–346, 1985.
Nuclear pairing, i.e., the tendency of nucleons to form pairs, has important consequences for the physics of neutron star crusts and heavy nuclei. While the pairing found in nuclei typically occurs between identical nucleons and in singlet states, recent investigations have shown that certain heavy nuclei can exhibit triplet and mixed-spin pairing correlations in their ground states. In this talk, I will present new investigations of the effect of nuclear deformation on these novel superfluids. Signatures of these pairing effects can be seen directly in nuclear experiments on spectroscopic quantities and two-particle transfer direct reaction cross sections. Indirectly, pairing correlations of nuclear superfluidity can be probed in cold-atom experiments utilizing Feshbach resonances. On that note, preliminary results on phenomenological investigations of $s$- and $p$-wave pairing in cold-atomic gases will also be discussed.
The slow (s) and rapid (r) neutron capture processes have long been considered to produce nearly the entirety of the elements above Fe. Under further scrutiny, when comparing expected s-process and r-process yields with spectroscopic data, inconsistencies in abundance arise in the $Z=40$ region. These differences are expected to be attributable to the intermediate (i) neutron capture process. Sensitivity studies have shown that the intermediate neutron-capture process follows reaction pathways through experimentally accessible neutron-rich nuclei, providing opportunities to constrain the neutron capture rates that define them. Of these exotic nuclei, $^{90}$Sr provides a strong case for obtaining new information on i-process abundances.
I will discuss the $\beta$-Oslo analysis of $^{91}$Sr to reduce uncertainties in the $^{90}$Sr(n,$\gamma$)$^{91}$Sr reaction, measured via the $\beta$-decay of $^{91}$Rb into $^{91}$Sr with the SuN total absorption spectrometer at the NSCL in 2018. By simultaneously measuring both $\gamma$-ray and excitation energies, a coincidence matrix was produced to perform the Oslo analysis, providing experimental information on the Nuclear Level Density (NLD) and $\gamma$-ray Strength Functions ($\gamma$SF), two critical components in limiting the uncertainty of the neutron capture cross section when it cannot be directly measured. This constrained uncertainty will allow us to better characterize the contribution of $^{90}$Sr to the i process and make progress in explaining observed abundances in suspected i-process stellar environments.
The impact of the coronavirus disease of 2019 (COVID-19) on secondary education continues to disrupt and profoundly affect student learning and success at post-secondary institutions. Many university instructors have noted and reported that a significant gap has emerged between course instructor expectations and students’ abilities for pandemic cohorts of students. Urgent consideration must be given to how best to support these and similarly affected incoming cohorts of undergraduate students. This talk examines and proposes resources to support instructors in teaching students post-COVID, and invites discussion and suggestions on the way forward.
Review sessions, designed to provide students additional practice and support for summative assessments, help with prioritizing course material. With a multitude of resources available online, the importance shifts to ensuring that students are engaged, inspired, aware of the level at which they will be tested, and able to assess their own knowledge. In courses that have problem-solving skills as one of the course outcomes, the ability to apply that knowledge is especially important.
We have explored the inclusion of various competitive setups that all emphasized learning and growth as the main goal, focused on the process and quality of work, gave everyone an equal chance of winning, and had no profound effect on students’ grades in the course. While the context was always light-hearted and fun, the content of the questions was tightly related to the course material, allowed for self-evaluation and self-reflection, and was fully intended to prepare students for the summative assessment.
All discussed activities were designed for electromagnetism courses intended for different audiences – from algebra-based service courses, through those intended for students in engineering, to senior physics majors, showing that the technique, supported by the appropriate content, can be employed at all learning levels.
Over the past decade, there has been a growing recognition in the physics community of the need for students in undergraduate physics programs to develop computational skills. Not only are computational skills utilized in a wide variety of careers, but they also teach students transferable skills such as problem solving, analysis, and critical thinking. While the value of these skills is generally acknowledged, the integration of coding activities into physics courses raises the question: how does engagement with computational activities enhance students’ learning of physics? This research project investigates the benefits and challenges of using computational exercises to learn the content delivered in undergraduate physics courses. In a second-year electricity and magnetism course, students wrote Python code to numerically compute vector derivatives for a variety of fields that were presented either visually or symbolically. Learning gains were investigated using pre- and post-quizzes. Additionally, interviews were conducted with students as they developed their code. This provided insights into their thought process, their confidence in their code, and their reconciliation of the computed results with their preconceptions of the divergence and curl of the vector fields explored.
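As an illustration of the kind of exercise described above (a minimal sketch using numpy, not the actual course material), the divergence and curl of a sampled 2D vector field can be estimated with finite differences:

import numpy as np

# Sample a 2D vector field F = (Fx, Fy) on a regular grid.
x = np.linspace(-2, 2, 101)
y = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(x, y, indexing="ij")

# Example field: F = (-Y, X), a pure rotation (zero divergence, curl_z = 2).
Fx, Fy = -Y, X

dFx_dx = np.gradient(Fx, x, axis=0)
dFx_dy = np.gradient(Fx, y, axis=1)
dFy_dx = np.gradient(Fy, x, axis=0)
dFy_dy = np.gradient(Fy, y, axis=1)

divergence = dFx_dx + dFy_dy   # numerical estimate of div F
curl_z = dFy_dx - dFx_dy       # numerical estimate of (curl F)_z

print(divergence.mean(), curl_z.mean())  # expect ~0 and ~2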
Multiple choice questions are a common teaching and evaluation tool supporting Peer Instruction (PI) pedagogy in large-enrolment introductory physics classes across Canadian universities. Unfortunately, the multiple-choice format limits the opportunities for students to formulate their own ideas. In addition, such questions often over-simplify the phenomena presented. Case studies based on open-ended and more realistic scenarios can provide a viable additional option for student collaboration in the classroom and beyond. This talk will present an example of such an activity: a case study exploring air resistance and the concept of terminal velocity. Air resistance is a topic that is often ignored in the introductory physics curriculum, despite being virtually unavoidable in real life. The case study explores the topic through the analysis of a real event: a historic 2012 fall from the stratosphere in which the skydiver broke the world records for the highest “freefall” and the highest manned balloon flight, as well as becoming the first person to break the sound barrier in “freefall”. The students were provided with a set of data of the skydiver's speed versus time and were asked a series of questions about the flight requiring them to analyze the data provided. While the full effect of such case studies on conceptual learning still needs to be formally evaluated, it is already clear that they have the potential to increase students' engagement with the material.
The Bomem DA3 series of Fourier transform (FT) spectrometers were the first commercially available research-grade instruments of their type. They were available for purchase from 1980-2000. This FT could achieve an ultimate resolution of 0.0025 cm$^{-1}$, with a resolvance of $10^6$ at any wavelength, impressive specifications even for a similar modern instrument. The scanning Michelson interferometer and on-board electronics were simple and robust, but the computer systems that controlled the instrument, collected the interferograms and processed the spectra have long been obsolete.
We inherited a DA3 FT from Dr. Anthony Merer at UBC and, with the help of colleagues in Lyon, France who run a similar instrument, our group has revived it with new methods for control of the mechanical components, and for new data collection and processing procedures. The instrument is now routinely used to record dispersed fluorescence spectra of metal-bearing molecules generated in our lab at UNB. The talk will focus on the revival of the DA3, ways in which we have obtained improved performance from it, and extensions to its capabilities not available in the 1980s.
A kW laser beam focused on metal creates a highly dynamic environment of considerable importance to automotive production and 3D additive manufacturing. For example, laser welding allows the use of non-traditional materials to reduce vehicle weight (for improved fuel efficiency) but provides no direct on-the-fly quality indicator. The dramatic increase in demand for electric vehicles has required the development of completely new manufacturing techniques (e.g., welding 100s of battery tabs) necessitating the development of new monitoring techniques to ensure compliance with stringent part quality requirements. Metal 3D printing promises custom part creation at the push of a button, but slow print speeds coupled with inconsistent part quality and expensive ex situ quality assurance (e.g., x-ray CT) have slowed widescale adoption. We developed inline coherent imaging, an interferometric imaging approach easily deployable in the field that can monitor laser processing operando at high speeds (>300 kHz) and high resolutions (< 10 micron) [1]. Recent work combines this approach with other in situ diagnostics (e.g., integrating sphere radiometry) to capture simultaneous depth and absorptance to reveal the microscopic origin of the highly efficient energy coupling from light to metal integral to laser welding [2]. Simultaneous capture of morphology through both inline coherent imaging and high-speed x-ray imaging (possible only with synchrotron-based light sources) definitively explains the supposed “noise” in optical depth imaging [in preparation]. In metal 3D printing, we track morphology layer by layer, providing an immediate check on surface roughness, recoater blade damage, and powder packing density [3]. Defects are corrected through closed-loop control before subsequent layer deposition.
[1] Webster et al., Optics Letters 39, 6217-6220 (2014).
[2] Allen et al., Procedia CIRP 111, 5-9 (2022).
[3] Fleming et al., Additive Manufacturing 32, 100978 (2020).
Performing in-situ ion beam analysis to determine metal oxide growth mechanisms poses challenges due to the incompatibility of an electrolyte solution with ultra-high vacuum (UHV). To circumvent this problem, a specialized in-situ cell was developed which isolates the liquid electrolyte from the UHV using a silicon wafer, preventing any contact between vacuum and liquid. This wafer is equipped with a thin (50-200 nm) Si$_3$N$_4$ window, which is then coated with the metal under investigation and inserted with the metal in contact with the electrolyte, isolating the electrolyte from the vacuum. As a result, the ion beam can pass through the Si$_3$N$_4$ window, interacting with the material on the opposite side. This technique allows electrochemical methods such as anodization and impedance measurements to be performed under UHV and in-situ with Rutherford backscattering spectroscopy (RBS).
Upon preliminary testing of magnetron-sputter-deposited thin film Ti, higher oxide growth during anodization was observed compared to literature values and ex-situ anodization studies. Exposure of a Ti/Si$_3$N$_4$ sample to a 1 MeV He$^+$ ion beam for 30 minutes with no applied potential showed the spontaneous formation of a continuous oxide layer. Next, the study focused on radiolysis product generation, after ruling out considerations such as charging from the ion beam on the Si$_3$N$_4$ surface facing vacuum. Radiolysis was performed using two separate alpha sources: $^{241}$Am at 0.525 Bq, 5.7 MeV, and $^{241}$Am/$^{247}$Cm/$^{244}$Pu at 0.525 Bq, 5.7 MeV. Ti samples with known oxide thickness were submerged in 0.27 M NaCl solution for various times, up to a maximum of 193 hours, with the nuclide source facing the solution. Additional covered and uncovered samples were created with similar setups as controls. Channeling RBS experiments were performed on the samples, and the resultant spectra were analyzed using SIMNRA to determine oxide growth as a function of incident alpha particles. Using linear regression analysis, the growth of titanium oxide as a result of alpha radiolysis in a conductive electrolyte was quantified. From these experiments, it was found that the formation of radiolysis products in an electrolyte solution contributed to the spontaneous oxide growth of Ti.
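For illustration only (hypothetical numbers, not the measured data from this work), the linear-regression step described above amounts to fitting oxide thickness against alpha exposure, e.g.:

import numpy as np

# Hypothetical values: exposure time (h) and oxide thickness (nm) extracted
# from RBS/SIMNRA fits; placeholders only, not the experimental results.
t = np.array([0.0, 24.0, 72.0, 144.0, 193.0])
thickness = np.array([5.0, 7.1, 11.8, 18.9, 23.7])

slope, intercept = np.polyfit(t, thickness, 1)   # least-squares linear fit
print(f"growth rate ~ {slope:.3f} nm/h, initial oxide ~ {intercept:.1f} nm")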
Appropriate characterization is vital for improvement of the electronic properties of semiconductors in photovoltaic devices, which will enable solar energy to compete with non-renewable energy sources. A critical electro-optical characterization tool for determining the carrier mobility in thin-film solar cells is offered by photo-carrier extraction by linearly increasing voltage (photo-CELIV) [1], in which a nano-second laser pulse is applied to a device followed by a linear extraction voltage ramp. The maximum intensity and extraction time of the transient offer information on the photocarrier mobility. In this presentation, we describe how a photo-CELIV apparatus has been integrated with a confocal optical microscope to extend the ability of photo-CELIV to obtain cross-sectional mobility profiles along the z-axis of a solar cell. High laser power density at the microscope’s focal plane leads to a drastic increase in non-geminate recombination, such that the concentration of charge carriers extracted from the microscope focal plane saturates. A model has been developed to analyze the photo-CELIV transients at each confocal plane and calculate the cross-sectional mobility profile based on discretization of the active layer into N slices of unknown mobility. To test this method, we apply it to a hydrogenated amorphous silicon (a-Si:H) solar cell, which is a well understood material, and thus well suited for testing this novel characterization technique. Comparison of our results with measurements of the hydrogen content profile shows very good correlation, allowing for direct confirmation of our obtained mobility profile.
[1] Juška et al, Phys. Rev. Lett., 2000, 84, 4946
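For context, the standard CELIV mobility relation from Ref. [1] (quoted here under the usual assumptions of that analysis, rather than the cross-sectional model developed in this work) is
$$\mu \simeq \frac{2\,d^2}{3\,A\,t_{\max}^2\left[1 + 0.36\,\Delta j / j(0)\right]},$$
where $d$ is the active-layer thickness, $A$ the voltage ramp rate, $t_{\max}$ the time of the extraction-current maximum, $\Delta j$ the height of the extraction peak, and $j(0)$ the capacitive displacement current.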
The nonlinear relationship between the form and function of physical structures in our built environment raises challenges for design. Modern design methods, such as topology optimization, provide structural solutions but obscure the relationship between the form of the solution and the formulation of the underlying design problem. Here, we show that embedding computational structure design in statistical physics provides unprecedented insight into the origin and organization of design features. We show how our "hyperoptimization" approach, a generalized, superset of molecular dynamics and standard simulated annealing optimization, surmounts known design problems including grayscale ambiguity, manufacturing inaccuracy, and artificially over-specified criteria in computational morphogenesis.
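As a point of reference for the baseline that the abstract generalizes, a minimal, generic simulated-annealing loop over a binary material layout might look like the following (illustrative only; not the authors' hyperoptimization code, and the objective is a placeholder):

import numpy as np

def anneal(objective, x0, steps=10000, T0=1.0, rng=np.random.default_rng(0)):
    # Generic simulated annealing: flip one design cell at a time and accept
    # moves with the Metropolis criterion under a linear cooling schedule.
    x, f = x0.copy(), objective(x0)
    for k in range(steps):
        T = T0 * (1.0 - k / steps)
        y = x.copy()
        i = rng.integers(x.size)
        y.flat[i] = 1 - y.flat[i]
        fy = objective(y)
        if fy < f or rng.random() < np.exp(-(fy - f) / max(T, 1e-9)):
            x, f = y, fy
    return x, f

# Toy usage: drive a 10x10 layout toward a target fill fraction of 30%.
x0 = np.ones((10, 10))
layout, best = anneal(lambda x: (x.mean() - 0.3) ** 2, x0)
print(best, layout.mean())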
Skyrmions are a topologically non-trivial magnetic state that has been observed in several different magnetic materials, such as the chiral cubic magnets Cu$_2$OSeO$_3$, FeGe, and MnSi. In these non-centrosymmetric systems, competition between the symmetric exchange interaction and the Dzyaloshinskii-Moriya interaction results in the formation of incommensurate spin textures, such as the vortex-like skyrmions. In metallic systems, such as FeGe, these skyrmions can be manipulated by small electrical currents, with motion occurring at very small current densities. This raises the possibility for them to be used in ultra-low-energy electronic applications, such as memory devices or stochastic computing. In this talk, I will introduce general features of the skyrmion state, and present recent work on skyrmions in thin lamellae of FeGe and Cu$_2$OSeO$_3$, investigated using soft X-ray scattering. In particular, I will discuss aspects of skyrmion metastability in these systems, as well as investigations of the current-induced motion of skyrmions in samples with variable thickness, and the prospects of this leading towards potential applications.
The properties of heavy 5d transition metal oxides, such as iridates and osmates, are often remarkably different from those of their lighter 3d counterparts. In particular, the presence of strong spin-orbit coupling (SOC) in these compounds can give rise to a variety of exotic quantum states, including spin-orbital Mott insulators, topological insulators, Weyl semimetals, and quantum spin liquids. In materials based on edge-sharing octahedral crystal structures, large SOC can also lead to unconventional magnetism, and a form of highly anisotropic, bond-directional Ising interaction known as the Kitaev interaction. The first, and best known, experimental realizations of Kitaev magnetism are honeycomb lattice materials: the 5d iridates A$_2$IrO$_3$ (A = Na, Li) and the 4d halide α-RuCl$_3$. These compounds have attracted considerable attention due to predictions of a Kitaev quantum spin liquid with exotic anyonic excitations. However, there has recently been growing interest in the search for Kitaev magnetism in other families of materials with different lattice geometries. In this talk, I will describe several candidates for Kitaev magnetism beyond the honeycomb lattice. This will include (1) potential face-centered-cubic (fcc) Kitaev systems, such as the double perovskite iridates (A$_2$BIrO$_6$) and iridium halides (A$_2$IrX$_6$), and (2) potential Kitaev chain systems in quasi-1D iridates.
We present the results of a finite-temperature study of a Heisenberg-Dzyaloshinskii-Moriya Hamiltonian on AB-stacked kagome bilayers. We develop an exact analytical coarse-graining procedure to map the microscopic Hamiltonian onto a generalized XY model on a triangular lattice. To leading order, the effective XY model includes both bilinear and biquadratic interactions. In a large portion of the parameter space, the biquadratic couplings dominate in the system, leading to two phase transitions: a high-temperature nematic, and a low-temperature Ising transition. In bilayer systems, these transitions are accompanied respectively by the binding/unbinding of half-integer or integer topological vortex defects. Furthermore, we show that when the ground state is incommensurate, thermal fluctuations change the nature of the low-temperature transition from continuous to first-order. These predictions are confirmed by the numerical Monte-Carlo finite-size analysis.
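Schematically (one common form of such an effective model, written here for illustration rather than quoted from this work), the coarse-grained Hamiltonian contains bilinear and biquadratic couplings of the effective angles $\theta_i$ on the triangular lattice,
$$H_{\mathrm{eff}} \simeq -\sum_{\langle ij\rangle}\left[\,J_1\cos(\theta_i-\theta_j) + J_2\cos 2(\theta_i-\theta_j)\,\right],$$
with a dominant $J_2$ favouring a nematic phase whose topological defects are half-integer vortices.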
The Heisenberg-compass model on a square lattice offers a simple example of a frustrated magnet that exhibits the phenomenon of order by disorder (ObD). In this system the ordering direction is selected by quantum zero-point fluctuations for much of the phase diagram, providing a minimal context to explore manifestations of ObD, such as the presence of a pseudo-Goldstone gap. We explore the Heisenberg-compass model by exact diagonalization on small clusters. By employing translation symmetries of the model, ground state properties, including energies and correlation functions, are studied for clusters of up to 25 spins. We find a phase diagram qualitatively consistent with the classical result, identifying the magnetic ordering pattern via the spin-spin correlation functions. The low-lying spectrum, specifically the evolution of the spin-wave gap, will be presented as a function of the Heisenberg and compass exchanges. We find good agreement between the exact diagonalization results and semi-classical expectations from non-linear spin-wave theory.
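For reference, one common convention for the square-lattice Heisenberg-compass Hamiltonian (the precise parameterization used in this work may differ) is
$$H = J\sum_{\langle ij\rangle}\mathbf{S}_i\cdot\mathbf{S}_j \;+\; K\sum_{i}\left( S_i^{x}S_{i+\hat{x}}^{x} + S_i^{y}S_{i+\hat{y}}^{y}\right),$$
where the compass term couples the $x$ ($y$) spin components on bonds along $\hat{x}$ ($\hat{y}$).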
Canadian Science Publishing welcomes all CAP attendees for a discussion about the future of science publishing and how publishing serves the research community. CSP will present a brief outlook on the changes within publishing companies (including our own not-for-profit structure) as well as the future of scientific publishing with particular focus on Open Access and Open Science. We will present for discussion models of agreements that make it possible for OA to be financially feasible for today’s researchers, and welcome feedback on how the Canadian Journal of Physics can better serve the physics research community. The second half of the session will be open for community discussion with representatives from CSP as well as the Editors in Chief of the Canadian Journal of Physics, Profs. Robert Mann and Marco Merkli.
As teachers we want our students to learn. As students we want high marks. What classroom practices build a bridge between these wants? We will explore the role of questions and student cognition, white boards, and low-cost hands-on experiences to promote learning about the abstract concepts of electric fields and magnetic fields.
The Large Hadron Collider restarted collisions at $\sqrt{s}$ = 13.6 TeV in 2022, marking the start of a planned four-year Run 3. The ATLAS experiment is now commissioning several upgraded detector systems to best take advantage of the new dataset, including the New Small Wheels and the phase-1 liquid argon electronics upgrade, to which Canadian physicists made substantial contributions. This talk will summarize the ongoing status of data-taking and commissioning of these systems, and highlight the first physics results from ATLAS utilizing the Run 3 dataset.
The top quark, being the heaviest elementary particle and the only quark that decays in its bare form, has the potential to reveal crucial information on particle dynamics. For example, a precise measurement of the top quark mass is needed to understand the vacuum structure, including its stability. Having the strongest coupling with the Higgs boson, the top quark can reveal information related to electroweak symmetry breaking in a unique way. BSM effects can affect the couplings of the top quark with the Higgs boson, as well as with other quarks and gauge bosons. Some of these lead to rare decays of the top quark with small branching fractions, but significantly larger than the corresponding SM predictions. With a very large cross section at TeV energies of pp collisions, the LHC at the end of its Run 3 is expected to produce close to half a million top quark-antiquark pairs. Such statistics are capable of probing some of the BSM scenarios of rare decays. A proper understanding of the role of CP symmetry in particle dynamics is another important issue. For example, the presence of a non-zero electric dipole moment of an elementary particle would indicate violation of CP symmetry beyond what the CKM quark-mixing structure prescribes. The top quark is an ideal system to probe this. In addition to top quark-antiquark pair production, sizeable production of single-top events and four-top events is expected at the LHC in its high-luminosity phase beyond Run 3.
In this talk, we shall review top quark studies in the light of the LHC, within and beyond the SM, summarising the experimental and theoretical results so far. Some emphasis will be given to the Flavour-Changing Neutral-Current (FCNC) interactions of the top quark, as this is one of our current interests.
As the heaviest known fundamental particle, the top quark plays a special role in many theories of new physics beyond the Standard Model. Reconstruction of top anti-top pair production to the best possible resolution is therefore crucial to enhancing our sensitivity to Beyond Standard Model effects in precision measurements and searches at the Large Hadron Collider (LHC), from improved mass resolutions for bump hunting to more diagonal unfolding matrices for differential cross-section measurements. As such, we’ve designed a deep neural network (TRecNet) that infers the four-vectors of the top and anti-top quarks from detector-level decay products in the semi-leptonic decay channel at ATLAS. The performance of TRecNet and several slight variations of the network are compared to traditional top reconstruction algorithms based on likelihood fits and are shown to improve upon both reconstruction efficiency and resolution.
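For readers unfamiliar with this type of approach, a minimal regression network of the same flavour could look like the sketch below (purely illustrative; not the TRecNet architecture, inputs, or training setup, and all shapes are hypothetical):

import numpy as np
from tensorflow import keras

# Toy stand-in: regress 8 targets (e.g. pt, eta, phi, m for top and anti-top)
# from a flat vector of detector-level features.
n_features = 60
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(8),          # two four-vectors
])
model.compile(optimizer="adam", loss="mse")

# Dummy data just to show the training call.
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.rand(1000, 8).astype("float32")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)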
We use the newly proposed Energy Mover’s Distance as a measure of jet isotropy to define new jet substructure observables for quark/gluon discrimination and identifying hadronically-decaying top quarks with large transverse momentum. We assess their effectiveness by comparing them with other classifiers. The quark/gluon study is conducted at hadron level while the top quark study is conducted at detector-level in events reconstructed with a simulated version of the ATLAS detector implemented in GEANT4.
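For orientation, the Energy Mover's Distance between two events $\mathcal{E}$ and $\mathcal{E}'$ is commonly defined in the literature (quoted here as background, not from this contribution) as
$$\mathrm{EMD}(\mathcal{E},\mathcal{E}') = \min_{\{f_{ij}\ge 0\}} \sum_{ij} f_{ij}\,\frac{\theta_{ij}}{R} \;+\; \left|\sum_i E_i - \sum_j E'_j\right|,$$
where $f_{ij}$ is the energy transported between particles $i$ and $j$, $\theta_{ij}$ their angular distance, and $R$ a radius parameter; isotropy observables then compare a jet to a suitably chosen uniform reference event.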
The $t$-channel single-top quark production process is observed for the first time at a centre-of-mass energy of 5.02 TeV using proton-proton collision data collected by the ATLAS detector at the Large Hadron Collider. The observation is made using an event selection optimized for the $l$+jets decay topology of the single-top process, which requires candidate events to have exactly one charged lepton (electron or muon), exactly two jets, exactly one of which must arise from a $b$-hadron decay, and a large transverse momentum imbalance; a multivariate discriminant is then used to separate the $t$-channel signal events from background events that satisfy the $l$+jets topology. Using a profile likelihood fit, we measure the production cross-sections of single-top quarks and antiquarks individually, the inclusive cross-section for the combined production, the ratio of single-top quark to antiquark production, and the CKM matrix element $V_{tb}$.
During the past few decades, many astronomical and cosmological studies have provided strong evidence for the existence of dark matter.
However, to this day, we do not have any hint of what dark matter is, which motivates taking every opportunity to probe this question.
One possible solution is to extend the Standard Model with a new U(1) gauge group.
This introduces a new mediator: a light boson, usually referred to as a dark photon A', that couples to the Standard Model through kinetic mixing.
On top of that, the ATOMKI collaboration has recently observed an anomaly [Phys. Rev. Lett. 116, 052501 (2016)] that might be explained by a 17 MeV dark photon.
For these reasons, we search for a low-mass dark photon decaying to an electron-positron pair using $387$ fb$^{-1}$ of data collected at a centre-of-mass energy of 10.58 GeV with the Belle II detector at the SuperKEKB $e^+e^-$ collider. We probe $e^+e^- \rightarrow \gamma_{ISR} [A' \rightarrow e^+e^- ]$ visible decays in a mass range from 10 MeV up to 200 MeV.
We present new preliminary results that set a $90\%$ CL upper limit on the kinetic coupling of the dark photon, and give the first results of a search in $e^+e^-$ collisions in the region below 20 MeV, a region also sensitive to the ATOMKI anomaly.
PICO-40L is a bubble chamber detector with a target material of superheated C$_3$F$_8$, located at the SNOLAB underground research facility outside Sudbury, Ontario. With its abundance of non-zero-spin fluorine nuclei in the detector target and its effective blindness to electron recoil interactions, it is projected to set world-leading exclusion limits in the spin-dependent dark matter interaction parameter space. Unlike previous generations of the PICO experiment, PICO-40L employs a "Right Side Up" design, with the target fluid above the chamber compression and expansion system, which is expected to eliminate a class of backgrounds seen in previous versions of the detector. PICO-40L is currently in the commissioning stage and is expected to start its year-long blinded data-taking run in the mid-to-late summer of this year. The analysis strategy, as well as results from the early commissioning runs, will be presented in this talk.
The DEAP-3600 experiment at SNOLAB primarily searches for Weakly Interacting Massive Particle (WIMP) dark matter candidates through interactions with argon nuclei. The detector consists of 3.3 tonnes of liquid argon housed in a spherical acrylic vessel which is viewed by 255 photomultiplier tubes. Data have been taken stably from November 2016 to March 2020 and the detector is currently undergoing hardware upgrades. DEAP-3600 achieved world-leading constraints on Planck-scale mass dark matter, and the most sensitive limit on the spin-independent WIMP-nucleon cross-section among the argon-based experiments. This talk presents the latest DEAP-3600 results demonstrating the background models, updates on the dark matter search, as well as other physics analyses and measurements.
Entropy production is a necessary ingredient for addressing the over-population of thermal relics. It is widely employed in particle physics models for explaining the origin of dark matter. A long-lived particle that decays to known particles while dominating the energy density of the universe plays the role of the dilutor. We point out the impact of its partial decay to dark matter on the primordial matter power spectrum. For the first time, we derive a stringent limit on the branching ratio of the dilutor to dark matter from large-scale structure observations using SDSS data. This offers a novel tool for testing models with a dark matter dilution mechanism. We apply it to the left-right symmetric model and show that it firmly excludes a large portion of the parameter space for right-handed neutrino warm dark matter.
The SNO+ Experiment is a versatile multipurpose neutrino detector situated at SNOLAB, with the primary goal of searching for neutrinoless double beta decay. After a successful operating phase as a water Cherenkov detector, the SNO+ target medium was switched to a liquid scintillator to increase the light yield of the detector, thereby enabling a much richer physics programme. In addition to ongoing measurements of reactor antineutrinos, solar neutrinos, geoneutrinos, supernova neutrinos, and other exotic phenomena, the SNO+ experiment is now preparing for a future phase capable of searching for neutrinoless double beta decay. After presenting an overview of the detector and recent preliminary results, the upcoming physics capabilities of the experiment will be discussed.
Quantum black holes are one of the main playgrounds of any theory of quantum gravity. Describing such objects is a principal goal of these theories. I will review the fundamentals of analyzing black holes in non-perturbative canonical quantum gravity and briefly present some of the models arising from this approach. I will also present a short overview of some of the phenomenological aspects of these black holes in the effective regime that are predicted by such models.
Following the techniques of canonical loop quantum gravity, a full Thiemann regularization is performed on the scalar constraint of classical general relativity. The regularized Hamiltonian is then considered for a general spherically-symmetric spacetime, without recourse to additional gauge-fixing conditions commonly imposed to aid in computing the radial holonomies. By investigating the form of the modified scalar constraint in various contexts, including cosmological and black hole spacetimes, we develop an effective framework for the dynamics of spherically-symmetric spacetimes endowed with an underlying discrete structure.
Caustics are regions of high intensity created generically by the natural focusing of waves, and are universally described by catastrophe theory. Each distinct class of catastrophe is uniquely described by its own diffraction pattern, the simplest two being the Airy and Pearcey functions. A more exotic form of logarithmic wave singularity occurs near event horizons, which have acoustic analogues in quantum fluids such as Bose-Einstein condensates where Hawking radiation can be simulated. In such systems logarithmic singularities are regulated by taking into account non-linear dispersive effects, and are properly described by an Airy-type wave function supplemented by a logarithmic phase term. We find the presence of additional sub-dominant waves not yet predicted near the horizon. Furthermore, the horizon and the caustic do not in general coincide; the finite spatial region between them delineates a broadened horizon. Our catastrophe theory motivated approach allows us to comment on the stability/universality of inter-atomic length scale corrections to the Hawking spectrum (analogous to Planck scale corrections for gravitational Hawking radiation).
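For concreteness, the two simplest catastrophe diffraction integrals mentioned above take the standard forms
$$\mathrm{Ai}(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{\,\mathrm{i}\left(t^3/3 + xt\right)}\,\mathrm{d}t, \qquad \mathrm{Pe}(x,y) = \int_{-\infty}^{\infty} e^{\,\mathrm{i}\left(t^4 + xt^2 + yt\right)}\,\mathrm{d}t,$$
describing the fold and cusp catastrophes, respectively.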
We use a novel approach to numerically calculate fast-oscillating integrals (FOIs) using Picard-Lefschetz theory. In this theory, analytic oscillatory integrals are converted into sums of convex integrals by deforming the integration domain in the complex plane. Feldbrugge, Pen, and Turok (2019) introduced a new numerical integrator to evaluate the interference effects near caustics in lenses in one dimension. Recent studies have also used this numerical integrator to study lensing of gravitational waves; however, one shortcoming of the integrator is that it is not optimal. In this work, we optimize the convergence to the desired contours in the complex plane and further generalize the algorithm to work for the random functions that appear in various physical applications, such as the scintillation of radio signals from astrophysical sources.
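To illustrate the basic idea in the simplest possible setting (a one-dimensional Fresnel integral with a known analytic answer; not the generalized integrator developed in this work), deforming the real axis onto the steepest-descent contour renders the oscillatory integrand convergent:

import numpy as np

# Evaluate I = \int exp(i x^2) dx by deforming onto the thimble x = u e^{i pi/4},
# where exp(i x^2) = exp(-u^2) becomes a rapidly convergent Gaussian.
u = np.linspace(-8, 8, 20001)
phase = np.exp(1j * np.pi / 4)
x = phase * u                       # deformed contour (Lefschetz thimble)
integrand = np.exp(1j * x**2)       # equals exp(-u^2) on the thimble
I = np.trapz(integrand, u) * phase  # include dx = e^{i pi/4} du Jacobian

print(I, np.sqrt(np.pi) * phase)    # both ~ 1.2533 + 1.2533j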
Magnetic resonance imaging (MRI) is well known as a non-invasive diagnostic imaging technique available to clinical medicine. MRI provides high-spatial-resolution images with flexible soft-tissue contrast, as its signal encoding is more complicated than that of other imaging modalities.
Machine learning, especially deep learning, has become a popular research topic for solving nonlinear problems. It has played an important role in many areas, from self-driving cars to ChatGPT. The MRI research community has embraced the opportunity and exploited this powerful tool in image classification/feature detection and signal processing/image reconstruction. However, diagnostic imaging presents different challenges compared to other digital image processing tasks such as computer vision. In this talk, I will present the capabilities and potential pitfalls of deep learning, focusing on applications in MRI.
Introduction: Hyperpolarized $^{129}$Xe lung MRI is an efficient technique used to investigate and assess pulmonary diseases. However, longitudinal observation of emphysema progression using hyperpolarized gas MRI-based Apparent Diffusion Coefficient (ADC) measurements can be problematic, as disease progression can lead to increasing unventilated lung areas, which likely excludes the largest ADC estimates. One solution to this problem is to combine static-ventilation and ADC measurements following the idea of $^3$He MRI ventilatory ADC (vADC). We have demonstrated this method, adapted for $^{129}$Xe MRI, to help overcome the above-mentioned shortcomings and provide an accurate assessment of emphysema progression.
Methods: Ten study subjects, who provided written informed consent to an ethics-board-approved study protocol, underwent spirometry and 3He/129Xe MRI scanning. 129Xe imaging was performed at 3.0T (MR750, GEHC, WI) using whole-body gradients (5G/cm maximum) and a commercial 129Xe quadrature-flex RF coil (MR Solutions, USA).1 Hyperpolarized 129Xe gas (polarization=35%) was obtained from a turn-key, spin-exchange polarizer system (Polarean-9820 129Xe polarizer). VDP was generated using deep learning (DL). We used a 2-D U-Net architecture for segmentation with ResNet-152 as the backbone network, trained on ImageNet and a low-resolution MRI dataset. The segmentation masks were compared to ground truths using the Dice similarity coefficient.
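For reference, the Dice similarity coefficient used above to score the segmentation masks is simply twice the mask overlap divided by the summed mask sizes; a minimal sketch (the array shapes and toy masks below are purely illustrative):

import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum() + eps)

# Toy masks standing in for a network prediction and the manual ground truth.
pred = np.zeros((128, 128)); pred[20:80, 30:90] = 1
truth = np.zeros((128, 128)); truth[25:85, 30:90] = 1
print(f"Dice = {dice_coefficient(pred, truth):.3f}")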
Results: Fig. 1 shows the acquired static-ventilation images (top panel), matched voxel-size unweighted (b=0) images (middle panel) and corresponding ADC maps (bottom panel) in coronal view for a representative study subject, demonstrating a good match between static-ventilation and matched-resolution unweighted slices. Table 1 shows the demographics, PFTs, mean VDP, ADC, and vADC estimates for all study subjects.
Discussion and Conclusion: In this proof-of-concept study, we showed that emphysema progression can potentially be quantified using pulmonary static-ventilation and diffusion-weighted hyperpolarized 129Xe images, utilizing the ventilatory ADC approach powered by DL segmentation.
INTRODUCTION: Inhaled hyperpolarized (HP) 129Xe magnetic resonance imaging (MRI) is a non-invasive imaging technique presently employed to assess lung structure and function1. It is possible to quantify the ventilation/perfusion (V/P) of the lungs simultaneously using this MRI technique because the solubility of xenon in lung tissues is higher than that of other imaging gases. This measurement is possible owing to the distinct and broad range of chemical shift frequencies (~200 ppm) of 129Xe when residing in lung tissue, brain tissue, and red blood cells as opposed to the gas phase.
[15O]-water positron emission tomography (PET) is the gold standard imaging method for determining cerebral perfusion2,3. In this study, simultaneous in-vivo 129Xe-based MRI and [15O]-water PET images were collected and compared.
METHODS: [15O]-water solution (30mL) contained in a 60mL plastic syringe was used to dissolve 30mL of the hyperpolarized 129Xe gas. Anesthesia was induced in rats with 5% isoflurane and oxygen and maintained at 2%. A 24g tail vein catheter was inserted for delivery of the [15O]-water / 129Xe mixture. Hyperpolarized 129Xe gas was obtained from a turn-key, spin-exchange polarizer system (Polarean 9800 129Xe polarizer). In-vivo PET imaging was obtained using a small-animal MRI-compatible PET insert (Cubresa Inc.). [15O]-water PET data were acquired simultaneously with 129Xe MRI using the integrated PET system in the 3T PET/MRI.
RESULTS: 2D axial 129Xe MRI images and [15O]-water PET images were acquired simultaneously, indicating that the diameter of the phantom measured from the PET and MRI images was similar. The 129Xe image demonstrates a sufficient SNR level (80). The anatomical-proton and [15O]-water-PET-perfusion images of the rat brain were also produced.
CONCLUSIONS: The results of this study clearly indicate the feasibility of simultaneous hyperpolarized 129Xe MRI and [15O]-water PET measurements. This demonstrates that 129Xe could serve as a non-radioactive, high-resolution imaging tool.
References:
1. Kaushik, S. S. et al. MRM (2016); 2. Fan, A. et al. JCBFM (2016); 3. Ssali, T. et al. JNM (2018).
Introduction: It has recently been shown1,2 that combining compressed sensing with the Stretched-Exponential Model (SEM) can significantly increase the SNR of accelerated/undersampled MR images. The reconstruction uses an exponentially decaying signal trend across a group of images, assumed to represent the decaying density of the resonant isotope in the lungs after each wash-out breath. This decaying signal trend can be induced artificially to enable the reconstruction: previous work used a specific averaging pattern1,2, but the signal decay can also result from decaying hyperpolarized (HP) xenon polarization in a set of back-to-back acquisitions.
Method: In-vitro MRI was performed at 74mT on a phantom with 45mL of HP 129Xe (35% polarization): 3 sets of 10 undersampled images each were acquired (acceleration factor of 7), only refilling the phantom with fresh hyperpolarized xenon gas between sets. To ensure adequate sampling of the centre of k-space, the Fast-Gradient-Recalled-Echo (FGRE) sequence was modified for centric-out trajectory in both phase-encode and readout directions.
Seven coronal slices (30mm) of 9 undersampled (AF=7) 2D human lung images were acquired at 3T with 1L of inhaled HP 129Xe (33% polarization, 30/70 129Xe/4He), all acquired in one breath-hold (1s/slice, 7s total scan time). The previously used averaging pattern was applied before the reconstruction, and the SNR was fitted to the SEM using the Abascal method.3
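As an aside, the stretched-exponential signal model referenced above has the generic form $S(n)=S_0\exp[-(n/\tau)^{\beta}]$; the sketch below fits that form to a synthetic decay (the numbers are hypothetical and this is not the Abascal reconstruction itself):

import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(n, s0, tau, beta):
    # Generic stretched-exponential decay across the image series.
    return s0 * np.exp(-(n / tau) ** beta)

n = np.arange(1, 11, dtype=float)                        # image index in the set
true_signal = stretched_exp(n, 100.0, 6.0, 0.8)          # hypothetical decay
noisy = true_signal + np.random.default_rng(1).normal(0.0, 1.0, n.size)

popt, _ = curve_fit(stretched_exp, n, noisy, p0=[noisy[0], 5.0, 1.0])
print("fitted S0, tau, beta:", popt)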
Results: The signal of the phantom images followed the expected exponential decay trend. The reconstructed human lung images showed roughly 5x higher SNR compared to the original non-averaged images.
Conclusion: Although the signal decay of the phantom images followed the expected trend, the reconstruction could not be performed: this was caused by unexpected low-frequency RF interference presenting as a consistent spike in k-space, which confused the reconstruction algorithm. The source of this interference and possible solutions are under investigation. The prospectively undersampled lung images show improved SNR within a single breath-hold: to remove the artefacts, a lung dataset will be assembled to train the artefact-removal neural network2 developed previously on undersampled lung reconstructions.
References:
1 Perron et al. ISMRM (2021); 2 Perron et al. ISMRM (2022); 3 Abascal et al. IEEE Trans Med Imaging (2018).
A key question of modern physics concerns how the bulk of the universe's visible mass emerges from the Standard Model (SM). Some of this mass is generated by Higgs boson couplings to matter fields in the SM, but of the constituents of atomic matter, this suffices to explain only the mass of electrons in its entirety. The overwhelming majority of atomic mass resides in the nucleus, which is composed of neutrons and protons (nucleons), bound together by the exchange of pions and other mesons at shorter ranges. As far as these nucleons and pions are concerned, the Higgs-generated mass component is only a small fraction. The overwhelming majority of the mass comes from the dynamical quark-gluon interactions of QCD, through a mechanism termed ``Emergent Hadronic Mass'' (EHM). Paradoxically, the study of the lightest pseudoscalar mesons, the pion and kaon, appears to hold the key to a further understanding of EHM and structure mechanisms. I will discuss the contributions the PionLT and KaonLT experiments at Jefferson Lab are expected to make towards the resolution of this puzzle, as well as the role of proposed future extensions of these measurements using the Jefferson Lab 22 GeV upgrade and the Electron-Ion Collider.
There are many open questions in the field of hadronic structure, as the properties of constituent quarks and gluons (e.g. spin and mass) do not explicitly add up to the properties of hadrons. The pion is a simple hadron, consisting of only two valence quarks (up and down), which makes it an ideal candidate for studies of hadronic structure. The exclusive pion electroproduction reaction with a ground-state nucleon, p(e, e’ π+)n, has been studied in detail at low momentum transfer (Q^2). Longitudinally polarized virtual photons dominate the cross-section of this reaction at low -t. A number of physical observables, such as form factors and Generalized Parton Distributions (GPDs), can be extracted from this cross-section using models. Experimental Hall C at Jefferson Lab is the only active facility in the world that can host high-precision studies of exclusive pion electroproduction reactions. The 12 GeV upgrade allows the extraction of the pion form factor at moderate Q^2 and gives a unique opportunity to study the higher-resonance pion electroproduction reaction p(e, e’ π+)Δ. This research aims to measure the separated cross-section of the ground-state reaction, as well as to perform the first measurement of the higher-resonance reaction. The comparison of the separated cross-sections of the two reactions will be invaluable for our understanding of hadronic structure.
The photoproduction mechanisms studied in the GlueX experiment allow the mapping of light mesons in unprecedented detail, with particular interest in exotic meson candidates. This is achieved by impinging an 8.2-8.8 GeV linearly polarized photon beam on a liquid hydrogen target. The measurement of the beam asymmetry $\Sigma$ will help constrain quasi-particle t-channel exchange processes using Regge theory. Understanding the photoproduction exchange mechanisms is a crucial ingredient in establishing hybrid and exotic photoproduced light meson states. $\Sigma$ is extracted from the azimuthal angular distribution between the meson production plane and the polarized photon beam. In particular, we will report results on the beam asymmetry measurements for $\eta$ in the reaction $\gamma$ p $\rightarrow$ $\eta$ $\Delta^+$. This reaction with a recoiling $\Delta^+$ will allow for comparison and validation of theoretical calculations and provide additional validation of the $\eta$ asymmetry with a recoiling proton. The different isospin of the $\Delta^+$ imposes additional restrictions that further constrain the allowed Regge exchanges.
The KaonLT/PionLT Collaboration probes hadron structure by measuring deep exclusive meson production reactions at Jefferson Lab. A set of high-momentum, high-resolution spectrometers in Hall C allows for precision measurements from which form factors and other observables can be extracted. One possible measurement is the beam spin asymmetry, which describes the fractional difference in cross-section between events caused by an electron of positive or negative helicity. This asymmetry is caused by interference between longitudinally and transversely polarized virtual photons, which makes it possible to extract a polarized interference cross-section $\sigma_{LT’}/\sigma_0$. In this work, the asymmetry is calculated in the transition regime where the strong force is still poorly understood (Q$^2$ between 2 and 5.5 GeV$^2$), for e + p → e’ + π$^+$ + n reaction data from the recent KaonLT experiment. The dependence of $\sigma_{LT’}/\sigma_0$ on the four-momentum transfer to the target, -t, is then determined, and the results are compared to two different classes of theoretical models. By comparing with predictions made using both Regge trajectories and Generalized Parton Distributions, the asymmetry helps determine how to best describe hadronic reactions in the transition regime, thus providing insight into the strong force.
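In the one-photon-exchange picture, the φ dependence behind this asymmetry follows a standard decomposition (conventions and normalizations vary between analyses and may differ from those used in this work):

$$\frac{d\sigma}{d\phi}\;\propto\;\sigma_{T}+\epsilon\,\sigma_{L}+\sqrt{2\epsilon(1+\epsilon)}\,\sigma_{LT}\cos\phi+\epsilon\,\sigma_{TT}\cos 2\phi+h\,\sqrt{2\epsilon(1-\epsilon)}\,\sigma_{LT'}\sin\phi,$$

so that the beam spin asymmetry formed from the two beam helicities $h=\pm 1$ is

$$A_{LT'}(\phi)=\frac{\sigma^{+}-\sigma^{-}}{\sigma^{+}+\sigma^{-}}=\frac{\sqrt{2\epsilon(1-\epsilon)}\,\sigma_{LT'}\sin\phi}{\sigma_{0}(\phi)}.$$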
Hadrons are typically described using "quenched" constituent quark models, which posit a Hamiltonian acting on the state space of the valence quarks, neglecting mixing of higher Fock states. In recent years, experimentalists have observed states which are not well characterized by these models, motivating quark modellers to examine the effects of unquenching. The resultant mass shifts throw the entire predicted spectrum into disagreement with observation, which may indicate that the leading-order effects of unquenching have been absorbed into the phenomenologically-measured parameters of the quenched Hamiltonian. We have calculated corrections to the spectrum using a formalism which estimates and compensates for the effects of this parameter renormalization, leaving small residual mass shifts which better reflect the observable effects of unquenching.
There is notable concern regarding the retention, academic success, and motivation of students in STEM courses, especially physics. Many factors can impact students’ persistence in STEM courses; however, students who do persist often find themselves underprepared for problem-solving within authentic settings. Problem solving is a highly valued 21st-century workforce skill in Canada (Hutchison, 2022) that recent graduates seem to lack (Cavanagh, Kay, Klein, & Meisinger, 2006; Deloitte & The Manufacturing Institute, 2011; Binkley et al., 2012; Finegold & Notabartolo, 2010). To positively impact undergraduate physics education, conversations are needed on ways to transform curricula to support diverse populations of students. There are increasing calls for using evidence-based teaching strategies to improve STEM instruction (Cooper et al., 2015). Prior studies have revealed that both contrasting cases and argumentation tasks can support deeper learning and problem-solving skills. Yet, students are seldom encouraged to justify or to explain their solutions. They rarely reflect on the appropriateness of their responses and consider alternative solutions. Studies suggest that appropriate scaffolds are needed for these instructional strategies to be successful. In this talk I describe how we have integrated contrasting cases, argumentation, and alternative forms of writing prompts (similarities and differences, invent a unifying statement, and argumentation) in introductory physics for non-science majors as well as in calculus-based physics. Results suggest that prompts for identifying similarities and differences within cases tended to promote identification of surface features irrelevant to solving the problems. However, argumentation prompts to evaluate competing theories tended to support deeper understanding of underlying physics principles and appropriate application of those principles.
What is most important for non-physics specialists to learn from an introductory physics course? How can course design and assessment support learning transferable skills, especially in large “lecture” classes? We will discuss preliminary results from a collaborative research-practice self-study partnership focusing on Sealfon’s implementation of learner-centered approaches in a 200-student first-semester algebra-based physics course with labs. The course design followed the two intentionalities of the Investigative Science Learning Environment (ISLE) approach (Brookes, Etkina, and Planinsic 2020): (1) We want students to learn physics by thinking like physicists; by engaging in knowledge-generating activities that mimic the actual practices of physics and using the reasoning tools that physicists use when constructing and applying knowledge. (2) The way in which students learn physics should enhance their well-being (via empowering versus authoritative teaching practices). In the lecture hall, students worked in pairs or small groups on knowledge-generating activities on their white boards (laminated sheets of paper). Sealfon regularly pulled a group’s whiteboard, displayed it to the class using a document camera, and discussed the activity with the class. In labs, students worked in small groups to design and conduct experiments to observe phenomena, propose explanations for patterns, and test ideas (hypotheses). Students completed reading and homework using the interactive Perusall platform with brief feedback provided by their teaching assistants. Nontraditional assessment elements included two-stage tests with an optional collaborative portion and an option to revise and resubmit problem-solving solutions on tests with oral quizzes given by teaching assistants.
Since the pandemic forced everything online, there have been rapid and significant changes to the way many of us teach and learn. As more options for in-person activities become available again, we need to consider which elements of learning in the online environment benefit students and are worth keeping. Beginning in Fall 2020, we distributed anonymous online surveys (Fall and Winter) related to students’ interests, motivations and preparedness to all students taking introductory-level physics courses at McMaster University. These students are taking physics courses aimed at either physical science students, life sciences students, or engineering students. While there is significant overlap in the content of these courses, the student cohorts differ by stream. Comparing results across years and between cohorts can provide us with insight into our students’ experiences under different learning conditions.
With the shift to online learning, lab kits containing simple, affordable equipment were made to replace the previous in-person labs for our Physics for Life Sciences course. In these home labs, students can perform the experiments and collect their own data independently, and are able to learn about data analysis, graphing and different physics concepts. With most labs now back in person, this year, we took the opportunity to compare the two lab modalities. All students completed two different labs on the same theme (kinematics), one at-home and one in our on-campus labs. Students were then asked (anonymously) about their experiences in the two different labs: what they liked, how they felt they learned, and any challenges they encountered.
I will share some of our results from these surveys as well as some of our plans going forward based on what we have learned so far.
Students who excel in mathematics and physics in high school often consider engineering or physics for university-level studies. But how do they make their choice? How can the education system better advise them to choose the career that is best for each one of them individually? We present results from a survey on how first-year university students choose between the physical sciences and engineering, and examine aspects of how recent high-school students understand the differences and similarities between science and engineering, and how their understanding factors into their choice of university study. Results from our survey may inform outreach and pedagogy for science and engineering, and thus foster greater attraction and retention of undergraduate students in STEM fields, and greater career sustainability and life satisfaction for our graduates.
The accurate measurement of time is of critical importance to society as it provides the means to synchronize events in our lives. The world has ever-increasing demands for precise time in fields such as automation, energy grids, smart cities, financial markets, fundamental research, and global positioning and navigation. Since the world moved to an atomic definition of time in the 1960s, caesium fountain clocks have provided the most accurate realization of the SI second. At the National Research Council, the NRC-FCs2 fountain clock has been operating as a primary frequency standard for Canada since 2020. It is used to contribute to the steering of International Atomic Time, as well as Canada’s official timescale. I will outline the design and performance of the clock, describe the current efforts to re-evaluate the systematic shifts that limit the uncertainty, and discuss the upcoming redefinition of the SI second.
This project focuses on the investigation of trap energy levels introduced by radiation damage in epitaxial p-type silicon. Using 6-inch wafers of various boron doping concentrations ($10^{13}$, $10^{14}$, $10^{15}$, $10^{16}$, and $10^{17}$ cm$^{-3}$) with a 50 µm epitaxial layer, multiple iterations of test structures consisting of Schottky and pn-junction diodes of different sizes and flavours are being fabricated at RAL and Carleton University.
In this talk, details on the diode fabrication and electrical measurements of the structures will be given. IV and CV scans of the fabricated test structures have been performed and cross-checked between institutes, the results of which will be presented. Furthermore, another focus of this talk will be the characterisation of trap parameters obtained from Deep-Level Transient Spectroscopy (DLTS) and supplemented by Thermal Admittance Spectroscopy (TAS). Spectra for unirradiated and irradiated diode samples will be shown and the trap parameters extracted from Arrhenius analyses will be listed. Lastly, DLTS and Charge Collection Efficiency (CCE) measurements conducted on samples before and after neutron irradiation will be evaluated and their results compared.
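For context, the Arrhenius analysis referred to above relies on the standard expression for the thermal emission rate of a trap (written here in its generic form, not specific to these devices):

$$e_{n}(T)=\sigma_{n}\,\langle v_{th}\rangle\,N_{c}\,\exp\!\left(-\frac{E_{a}}{k_{B}T}\right),\qquad \langle v_{th}\rangle\propto T^{1/2},\;\;N_{c}\propto T^{3/2},$$

so that a plot of $\ln(e_{n}/T^{2})$ versus $1/T$ yields the trap activation energy $E_{a}$ from the slope and an apparent capture cross-section $\sigma_{n}$ from the intercept.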
HELIX, the High Energy Light Isotope eXperiment, is a balloon-borne payload designed to measure the isotopic abundances of light cosmic-ray nuclei. Precise measurements of the 10Be isotope from 0.2 GeV/n to 10 GeV/n will help study the propagation processes of cosmic rays. These measurements will allow the refinement of propagation models, critical for interpreting excesses and unexpected fluxes reported by several space-borne instruments in recent years. Rare light isotopes will be observed by HELIX with the first in a series of long-duration balloon flights in the upcoming year. The instrument will undergo several tests and phases during its commissioning period, over which it may be disassembled and rebuilt. Knowing the position of components following each assembly is important to the measurements of the various detectors. The metrology of HELIX was thus studied to provide knowledge of the distances between specified points and planes of the experiment payload.
A Total Station, a device that provides precise optical measurements in surveying and construction, was used to create a position-tracking system for HELIX. This study tested the measurement protocol with various student-designed rigs using retroreflective dots as targets. The retroreflectors have been placed on the experimental payload and are now ready for use in virtual geometry reconstruction. The metrology procedure and code produced through this project will serve as a local positioning system for HELIX components and the output points will be used to update the geometry of the detectors in simulations.
The ocean sound speed profile directly affects how acoustic waves propagate in the ocean. As a result, knowledge of the sound speed profile is important in many underwater acoustic applications, including acoustic imaging, source localization, and underwater communication. Measurement of ocean sound speed can also provide an indirect measure of ocean temperature through the close dependence of sound speed on water temperature. Our presentation focuses on remote estimation of the ocean sound speed profile using an underwater acoustic pulse-echo method. We propose the use of a directional transmitter and a number of receivers offset at a comparatively small distance from the transmitter location. Sound is scattered by naturally occurring targets and these signals are detected by the receiving array. The arrival time and phase of the detected signals contain information on the location of these targets and, importantly, the sound speed through the water column. Sound speed estimates can be generated directly from the arrival-time data, but we propose the use of an inverse approach working with the phase of the signals, which allows for greater accuracy in the sound speed estimates. The viability of our approach is demonstrated through use of an acoustic model that generates simulated received signals for our system geometry. We are using the model analysis to guide the design of a prototype system for future field trials.
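As a toy illustration of the arrival-time side of such an inversion (the geometry, noise level, and uniform-sound-speed assumption below are all illustrative and not the proposed system), a least-squares fit can recover a scatterer position and an effective sound speed from bistatic travel times:

import numpy as np
from scipy.optimize import least_squares

c_true = 1490.0                                   # m/s, assumed uniform here
target_true = np.array([40.0, 120.0])             # scatterer (x, z) in metres
tx = np.array([0.0, 0.0])                         # transmitter at the origin
rx = np.array([[0.0, 0.0], [30.0, 0.0], [60.0, 0.0], [90.0, 0.0]])  # receivers

def travel_times(params):
    x, z, c = params
    target = np.array([x, z])
    out = np.linalg.norm(target - tx)                      # transmitter -> target
    back = np.linalg.norm(rx - target, axis=1)             # target -> receivers
    return (out + back) / c

t_meas = travel_times([*target_true, c_true])
t_meas = t_meas + np.random.default_rng(2).normal(0.0, 5e-6, t_meas.size)  # timing jitter

fit = least_squares(lambda p: travel_times(p) - t_meas, x0=[30.0, 100.0, 1500.0])
print("estimated (x, z, c):", fit.x)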
Phytoglycogen (PG) nanoparticles are hyperbranched, dendritic polymers of glucose that are produced as compact nanoparticles in the kernels of sweet corn. Our measurements of their structure, morphology, hydration and mechanical properties illustrate the unique properties of native PG nanoparticles: they are soft, porous, hairy and hydrated. These physical properties, combined with their digestibility and lack of toxicity, make PG ideal for a broad range of applications in personal care, nutrition and biomedicine. The properties of PG can also be tuned through chemical modification, such as controlled digestion using dilute acids or enzymes, or through covalent attachment of a variety of different chemical groups that can impart charge and hydrophobicity. In this talk, I will describe the properties of native PG particles and how these properties are modified by acid hydrolysis and covalent attachment of cationic groups, anionic groups, and groups that are both anionic and hydrophobic. These simple modifications have produced significant and sometimes dramatic changes to the physical properties of these soft colloidal nanoparticles, opening up new possibilities for applications of this sustainable nanotechnology.
This talk reviews recent studies of the dynamical and mechanical behaviour of nanocolloidal soft glassy materials using Rheo-XPCS, x-ray photon correlation spectroscopy with in situ rheology [1]. Rheo-XPCS allows for simultaneous studies of the mechanics and nanoscale dynamics of materials over a wide range of timescales from milliseconds to hours. As such, it is an outstanding tool to characterize the behaviour of glassy and other metastable soft systems under the influence of applied stress and strain. I will present several case studies of soft glasses composed of concentrated suspensions of charged silica nanoparticles, demonstrating (i) their stress relaxation and micro-structural dynamics in response to applied step strains below and above the macroscopic yielding transition, (ii) their macro- and micro-structural creep dynamics in response to applied shear stresses, and (iii) their ability to acquire micro-structural and mechanical memory in response to applied oscillatory strain histories. These studies provide insights into the nanoscale origins of non-equilibrium phenomena in driven soft glassy systems.
[1] R.L. Leheny, M.C. Rogers, K. Chen, S. Narayanan, and J.L. Harden, “Rheo-XPCS,” Curr. Opin. Colloid Interface Sci. 20, 261 (2015).
Classical experiments, typically performed using bulk continuous matter, can be applied to granular systems to better understand their properties and to explore the analogies between granular and bulk continuous systems. While the classic pendant drop experiment can be used to measure the interfacial tension between fluids, here we perform the granular version of the pendant drop experiment. The system consists of aggregates of adhesive, monodisperse, frictionless oil droplets in an aqueous solution. Depending on the system parameters, the properties of the aggregates resemble both liquid-like and solid-like systems.
Electrohydrodynamics of droplets immersed in an immiscible carrier fluid was first explored in a pioneering paper by G. I. Taylor who formulated the weakly conducting or leaky dielectric model and predicted the steady drop shape in the small-deformation limit. Contemporary literature in electrohydrodynamic studies focuses primarily on the deformations of single droplets. On the other hand, the collective behavior of many droplets shows a wide range of surprising phenomena. In the presence of a DC electric field, a multitude of unstable, chaotic, and turbulent behaviors are observed.
In this work, we use new substances for the continuous leaky-dielectric phase and the discrete dielectric phase. This opens up new possibilities for electrohydrodynamics experiments at lower threshold voltages. The lower voltage thresholds enable new electrorheology experiments to be conducted, the results of which will be reported.
Single photon sources play a critical role in many emerging applications in quantum information science. Single photon quantum computing [1] and single photon quantum cryptography [2] both rely heavily on high-brightness, high-indistinguishability single photon sources, where subsequent single photons are identical in all degrees of freedom. In order to maximize indistinguishability, the quantum emitter must be driven resonantly so that incoherent relaxation pathways are eliminated. This, however, necessitates an efficient method for separating the single photons from the scattered excitation light. We present a novel driving scheme called Notched Adiabatic Rapid Passage (NARP) [3] in which a frequency-swept optical pulse containing a spectral hole resonant with the quantum emitter is used. The frequency-swept nature of the pulse allows the scheme to retain the benefits of Adiabatic Rapid Passage (ARP), including robustness to variations in the properties of the pump laser and quantum emitter. It also enables the suppression of decoherence tied to electron-phonon coupling [4]. The spectral hole allows the single photons to be spectrally filtered from the scattered laser light. Together, this excitation scheme would enable <10$^{-8}$ scattered photons per single photon emission with a detection loss of 4%. We have demonstrated this scheme in a single semiconductor quantum dot.
[1] Madsen, L. S. et al. Quantum computational advantage with a programmable photonic processor. Nature, 606(7912), 75-81 (2022).
[2] Bozzio, M., Vyvlecka, M., Cosacchi, M. et al. Enhancing quantum cryptography with quantum dot single-photon sources. npj Quantum Inf 8, 104 (2022).
[3] Wilbur, G. R., Binai-Motlagh, A., Clarke, A., Ramachandran, A., Milson, N., Healey, J. P., O’Neal, S., Deppe, D. G., Hall, K. C. Notch-filtered Adiabatic Rapid Passage for Optically-Driven Quantum Light Sources. APL Photonics (in press) (2022).
[4] A. Ramachandran, G. R. Wilbur, S. O’Neal, D. G. Deppe, and K. C. Hall, "Suppression of decoherence tied to electron–phonon coupling in telecom-compatible quantum dots: low-threshold reappearance regime for quantum state inversion," Opt. Lett. 45, 6498-6501 (2020)
High-quality, uniform thin films of quantum materials are of extreme importance across many classes of device research. Minimizing energy consumption, while keeping flexibility in the deposition process, along with high structural stability, electrical and thermal conductivity, and optical transparency, is critical in designing a reactor for quantum-material thin-film growth. Ultra-thin films based on tungsten semi-carbide (W2C) are excellent candidates as quantum materials with startling properties, such as a theoretically predicted negative Poisson’s ratio.[1] However, chemical-vapour thin-film deposition (CVD) techniques have not been reported to yield bona fide W2C films, arguably because they operate under thermodynamic equilibrium conditions, where the stable phases are segregated tungsten and carbon, or the carbon-rich WC. Here, we report the synthesis of highly crystalline few-layer W2C, achieved using an ad-hoc designed remote plasma vapour deposition (RPVD) ultra-high-vacuum reactor. The reactor built by us for this study generates tungsten ions from a 13.56 MHz radio-frequency-biased 2” target, inductively coupled with hydrocarbon species from the ionisation of methane at ~10$^{-6}$ mbar (~10$^{-9}$ mbar base vacuum). The plasma thus generated is injected, by a 10-kV DC accelerating voltage, into a high-temperature furnace (900 °C) where the substrates are placed. X-ray diffractometry, scanning tunneling microscopy, and elemental analysis have confirmed few-layer W2C crystals in the deposits, with decreasing thickness in backstream-mode deposition upon the addition of varying amounts of Ar ions to the forward gas stream. A dramatic advantage of our high-vacuum RPVD deposition system rests in the high crystallinity of our deposits, whereas tungsten carbides without the W2C structure (i.e. WC, or amorphous) were obtained by CVD or less advanced plasma deposition systems.[2]
[1] Wu et al, Phys. Chem. Chem. Phys., 2018, 20, 18924
[2] Baklanov et al, Mater. Res. Express, 2020, 9, 016403
Thermal transport in low-dimensional systems such as nanowires is interesting for applications involving system design at the nanoscale, but the effects of changes such as the shape of a nanowire are not completely understood. In this work the behaviour of the thermal conductance of nanowires is investigated by introducing a single kink into an otherwise straight nanowire. The angle of this kink is varied to examine its effects on thermal transport. Kinked systems are constructed and simulated using molecular dynamics simulations, phonon Monte Carlo simulations, and classical solutions of the heat equation. The effects of lattice orientation within the kink are found to be significant, but an examination of the heat flux field reveals additional complexities. Details of transport modelling, the ratio of mean free path to characteristic system size, phonon reflections, and system specularity yield differences in thermal behaviour throughout the systems. Comparing the heat flux between phonon Monte Carlo simulations and classical solutions of the heat equation shows that, in systems where the phonon mean free path is large compared to the system dimensions (such as those in the Monte Carlo simulation), heat flow may be concentrated in a channel smaller than the dimensions of the system.
The ATLAS experiment recorded 140 fb$^{-1}$ in the LHC’s $\sqrt{s}$ = 13 TeV Run 2, and the analysis of this high-quality and well-understood dataset continues. Canadian physicists are involved in all aspects of data analysis, from the trigger systems to reconstruction to physics results. Recent results, including highlights from Higgs properties and precision measurements of the Standard Model, as well as searches for new physics, will be discussed.
Electron objects are used in a large fraction of ATLAS publications. Better identification implemented in the electron triggers would allow their transverse momentum thresholds to be lowered and their acceptance increased. In particular, analyses with many electrons in the final state, such as those studying the Higgs boson, the W boson or Beyond the Standard Model phenomena, can suffer from a large, sometimes dominant, fake or non-prompt electron background and would benefit from improved electron identification.
To address this problem, our group developed a convolutional neural network (CNN) to identify electrons in ATLAS. Our CNN shows a significant improvement in performance when compared to the algorithms currently used in ATLAS for electron identification. Our first iteration of the CNN is trained using a Monte Carlo simulation (MC) sample, and we aim to improve the performance even further by designing a real-data sample on which to train our CNN.
With that goal in mind, we study a real-data sample pure in background electrons and compare it to the MC we used for training the first CNN. We show that such a sample can be obtained by applying various trigger, transverse energy, and pseudo-rapidity cuts. We show that the distributions of the various high-level input variables differ between the two datasets, particularly at low transverse energy. We then find similar results when comparing the mean calorimeter images of each dataset.
We conclude that the low transverse energy region is imperfectly modelled by the MC and thus, training a CNN in real data should yield substantial improvements in performance.
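A minimal sketch of the kind of convolutional classifier described above is given below (PyTorch; the channel count, image dimensions, and layer sizes are placeholders and this is not the actual ATLAS network):

import torch
import torch.nn as nn

class ElectronCNN(nn.Module):
    def __init__(self, n_layers=4, eta_bins=7, phi_bins=11):
        # Channel count and image size are placeholders, not the ATLAS geometry.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_layers, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # prompt-electron vs. background logit

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ElectronCNN()
images = torch.randn(8, 4, 7, 11)                 # dummy calorimeter "images"
labels = torch.randint(0, 2, (8, 1)).float()      # dummy truth labels
loss = nn.BCEWithLogitsLoss()(model(images), labels)
loss.backward()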
The international CALICE collaboration is dedicated to detector R&D in calorimetry for new experiments. All project concepts now use high granularity to maximally profit from Particle Flow Algorithms and thus improve jet energy resolution, device versatility and response performance. A review of innovative analog or digital detector types, using technologies such as silicon, scintillators or resistive plate chambers, will be presented, as well as results from recent work realized in Canada.
The Belle II experiment, based at SuperKEKB, is collecting e+e- collision data at the Upsilon(4S) resonance energy. The Belle II physics program is enabled by the record (all-time high) luminosity of SuperKEKB, a metric that also incurs record-high beam background in the detector. Accurate simulation of physics events in the detector during collisions is vital to obtaining quality physics results.
The effects of beam background are currently represented in simulations by overlaying background data measured randomly during data taking. The large size of these background data samples is a technical problem; they are challenging to use on distributed computing grids. As Belle II approaches higher luminosity, saving and using such data samples will become unsustainable. An alternative scheme, in which data-like beam background samples are generated directly during simulation in lieu of stored data samples, is necessary to continue producing the quality simulations essential to the Belle II physics program.
The novel generative adversarial network (GAN) implemented for the Belle II electromagnetic calorimeter (ECL) is capable of simulating data-like background waveforms in the 8736 CsI(Tl) ECL crystals, which will mitigate this problem. GANs can be used in High Energy Physics (HEP) experiments as a novel simulation method to generate random yet accurate background waveforms on the fly from lightweight neural networks, which can be overlaid onto more complex physics simulations such as those coming from GEANT4. This talk will present the GAN designs at Belle II, their training framework, and the tests performed to determine their performance.
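To make the idea concrete, a minimal generator/discriminator training loop for one-dimensional waveform-like samples might look like the following (PyTorch; the toy pulse shapes, sizes, and architectures are illustrative and unrelated to the actual ECL networks):

import torch
import torch.nn as nn

WAVEFORM_LEN, LATENT_DIM, BATCH = 31, 16, 64      # illustrative sizes

G = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, WAVEFORM_LEN))
D = nn.Sequential(nn.Linear(WAVEFORM_LEN, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def toy_real_batch(n):
    # Stand-in for recorded background waveforms: noisy exponential pulses.
    t = torch.linspace(0.0, 1.0, WAVEFORM_LEN)
    amp = torch.rand(n, 1)
    return amp * torch.exp(-t / 0.3) + 0.02 * torch.randn(n, WAVEFORM_LEN)

for step in range(1000):
    real = toy_real_batch(BATCH)
    fake = G(torch.randn(BATCH, LATENT_DIM))

    # Discriminator step: push real waveforms toward 1, generated ones toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: make the discriminator label generated waveforms as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    opt_g.step()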
The analysis of collision events at the Large Hadron Collider (LHC) presents significant computational challenges, particularly due to the need for large amounts of Monte Carlo simulation to reduce statistical uncertainties in the simulated datasets. The most computationally intensive task in Monte Carlo detector simulation is the simulation of high-energy particles interacting with the calorimeter. In this work, we propose a novel approach that combines recent advancements in generative models and quantum annealing techniques to provide fast and efficient simulation of high-energy particle-calorimeter interactions. Our approach, the Quantum Variational Encoder (QVAE), utilizes a Variational Autoencoder (VAE) model with a Restricted Boltzmann Machine (RBM) prior implemented on an annealing Quantum Processing Unit (QPU). The quantum annealing QPU can generate a large number of samples from the latent space of a trained VAE model with high efficiency. We show the performance of the QVAE on simulated calorimetric cluster data. The promising evaluation results demonstrate the accuracy and reliability of our Quantum Variational Encoder. Furthermore, our proposed approach has the potential for significant improvement by extending it to use QPU samples during the training process, enhancing the computational efficiency even further.
In order to make new discoveries within the realm of particle physics, it is imperative that we are able to compare data collected using the ATLAS detector with theoretical predictions as well as results from other experiments. The process of correcting ATLAS data such that the effects of the detector are eliminated is known as unfolding. At present, commonly used unfolding methods require data to be binned and are typically performed with low dimensionality. With recent advances in machine learning, however, it has become possible to perform unfolding with unbinned, high-dimensional data. The method examined here, known as the OmniFold technique, utilizes iteratively trained neural networks to accomplish this task. In this presentation, the results of the first unbinned, 24-dimensional measurement with full uncertainties are shown. This measurement is performed using the full Run 2 proton-proton collision dataset recorded by the ATLAS detector and examines Z+jets events where the Z boson decays to two muons. Various observables related to the dimuon kinematics, track-jet kinematics and track-jet substructure are included in the unfolding. A select number of observables that may be derived after the unfolding are also examined.
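A single detector-level reweighting step in the spirit of OmniFold can be sketched with a classifier-derived likelihood ratio; the toy one-dimensional observable below is purely illustrative of the idea, not the ATLAS implementation:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
sim_reco = rng.normal(0.0, 1.0, size=(20000, 1))     # simulation, detector level
data_reco = rng.normal(0.3, 1.1, size=(20000, 1))    # "data", detector level

# Classify data vs. simulation at detector level.
X = np.vstack([sim_reco, data_reco])
y = np.concatenate([np.zeros(len(sim_reco)), np.ones(len(data_reco))])
clf = GradientBoostingClassifier().fit(X, y)

# Turn the classifier score into per-event weights w = p / (1 - p), an estimate
# of the data/simulation likelihood ratio; in OmniFold these weights are pulled
# back to particle level and the whole procedure is iterated.
p = clf.predict_proba(sim_reco)[:, 1]
weights = np.clip(p / (1.0 - p), 0.0, 50.0)

print("sim mean:", sim_reco[:, 0].mean())
print("reweighted sim mean:", np.average(sim_reco[:, 0], weights=weights))
print("data mean:", data_reco[:, 0].mean())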
DarkSide-20k, planned for construction at the LNGS underground laboratory in Italy, is a forthcoming detector that aims to use a liquid argon (LAr) target to detect the scattering of dark matter particles off argon atoms. The detector will collect an exposure of 200 tonne-years while keeping the instrumental background level in the WIMP search region of interest to a minimum.
At the center of the detector, a two-phase Liquid Argon Time Projection Chamber (LArTPC) with a 20-tonne active volume will be filled with low-radioactivity Underground Argon (UAr). The TPC barrel will be made up of eight gadolinium (Gd) loaded PMMA (acrylic) panels. The acrylic anode and cathode plates of the TPC will be coated with Clevios to realize the electrical potentials in the TPC, and with 1,1,4,4-tetraphenyl-1,3-butadiene (TPB) to wavelength-shift the 128 nm argon scintillation light to ≈420 nm, which is necessary for the Silicon Photomultiplier (SiPM)-based readout to detect the light.
The thermal vacuum evaporation method is the most common way to deposit TPB on acrylic time projection chambers. For DarkSide-20k, a system with multiple point sources to coat the TPC barrel is proposed, and in this talk I will present how using more than one point source can improve the uniformity of the TPB coatings. I will also discuss other important parameters that can affect the uniformity of the coatings.
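The geometric intuition for why multiple sources flatten the coating can be sketched with an ideal small-source (Knudsen cosine-law) model; the panel size, source height, and source positions below are illustrative and not the proposed DarkSide-20k evaporation system:

import numpy as np

def thickness_profile(x_panel, source_x, height=0.5):
    # Ideal small-area evaporation sources facing a parallel flat panel:
    # deposited thickness ~ cos(theta_source) * cos(theta_substrate) / r^2,
    # and for this parallel geometry both angles are equal with cos(theta) = h / r,
    # giving a h^2 / r^4 dependence.
    t = np.zeros_like(x_panel)
    for xs in source_x:
        r2 = (x_panel - xs) ** 2 + height ** 2
        t += height ** 2 / r2 ** 2
    return t / t.max()                       # normalize to the peak thickness

x = np.linspace(-0.5, 0.5, 201)              # a 1 m wide strip of panel
one_source = thickness_profile(x, [0.0])
two_sources = thickness_profile(x, [-0.25, 0.25])

# A simple uniformity figure of merit: minimum / maximum thickness.
print("1 source :", one_source.min())
print("2 sources:", two_sources.min())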
The Cryogenic Underground TEst facility (CUTE) is located 2 km underground at SNOLAB in Sudbury, Ontario. The response of cryogenic germanium and silicon semiconductor detectors is characterised through testing at CUTE prior to use in the Super Cryogenic Dark Matter Search (SuperCDMS) experiment. SNOLAB and CUTE together provide a low background environment for testing, shielded from cosmic rays and other interfering radioactive backgrounds. CUTE currently has two sources available within the facility for gamma calibration, used to characterise the high voltage detector response. iZIP detectors being tested in the coming years will need a neutron calibration source available to characterise their response.
Transporting radioactive sources within SNOLAB is a process requiring advance planning in order to notify other experiments about the possible presence of otherwise unaccounted-for radioactive sources. Due to the ever-changing nature of experimental work, this process can cause further delays when testing cannot be done promptly as needed. The CUTE neutron calibration system is built to solve this issue. The system uses a californium-252 source which is pulled by a motor through a tube located within the shield water tank, allowing the location of the neutron source to be controlled remotely. Testing of the system has begun, with implementation foreseen in September 2023. This talk will discuss the commissioning and applications of the neutron calibration system at CUTE.
The detection of dark matter (DM) is currently one of the leading challenges in particle physics. While many experiments attempt to detect dark matter in a variety of ways, the DEAP-3600 experiment uses roughly 3.3 tonnes of liquid argon in an attempt to detect the scintillation signal produced by a dark matter particle scattering on an argon nucleus. DEAP-3600 uses pulse shape discrimination to reject electromagnetic backgrounds by taking advantage of the difference in the time over which scintillation light is produced for various types of incident radiation and subsequently detected in the 255 photomultiplier tubes imaging the detector. The ability to understand and reject background interactions in the detector is key in ensuring a low-background dark matter search region.
In this talk, we discuss progress made on measurements benefiting current DM experiments like DEAP-3600 as well as future liquid argon detectors using Argon-1. Argon-1 is a modular single phase liquid argon detector located at Carleton University in Ottawa, Ontario, instrumented with two silicon photomultipliers (SiPMs) used to detect the scintillation light. We discuss pulse shape discrimination techniques employed using SiPMs, as well as studies on alpha particle quenching.
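As a concrete example of the kind of pulse-shape discrimination variable typically used in liquid argon (a prompt-fraction, or "Fprompt"-style, quantity), the sketch below uses illustrative window lengths and singlet/triplet lifetimes rather than the DEAP-3600 or Argon-1 analysis values:

import numpy as np

def f_prompt(waveform, prompt_ns=90.0, total_ns=10000.0, dt_ns=4.0):
    """Fraction of the total collected light arriving in a short prompt window."""
    n_prompt = int(prompt_ns / dt_ns)
    n_total = int(total_ns / dt_ns)
    return waveform[:n_prompt].sum() / waveform[:n_total].sum()

# Toy scintillation time profiles: argon light has a fast (~ns) singlet and a
# slow (~us) triplet component; nuclear recoils emit a larger singlet fraction.
t = np.arange(0.0, 10000.0, 4.0)
fast = np.exp(-t / 7.0);    fast /= fast.sum()
slow = np.exp(-t / 1500.0); slow /= slow.sum()
nuclear_like = 0.7 * fast + 0.3 * slow      # high Fprompt
electron_like = 0.3 * fast + 0.7 * slow     # low Fprompt

print(f_prompt(nuclear_like), f_prompt(electron_like))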
The Deep Underground Neutrino Experiment (DUNE) is an ambitious accelerator-based neutrino oscillation experiment that is not only able to resolve the mass hierarchy, but also has excellent potential to measure the charge-parity violating angle in the neutrino sector. DUNE will constrain systematic uncertainties by building a suite of detectors close to the neutrino source (near detector) and another at a distance of 1300 km away (far detector). In DUNE, the Near Detector suite consists of three main components. Here the focus will be on the Liquid Argon Near Detector (ND-LAr), which is built with a novel pixelated charge readout system. ND-LAr is a modular design of 35 identical LAr TPCs assembled in a 7 by 5 array. To prove the viability of this concept, a chain of prototypes has been constructed and tested with cosmic rays, from single modules (60 cm x 120 cm x 60 cm) to, ultimately, a 2x2 array of modules in the NuMI neutrino beam at Fermilab. With a pixel pitch of 4.4 mm in three of the four modules and a pitch of 3.8 mm in the fourth, there are more than 300k readout channels across the 8 drift volumes. This talk will describe the characteristics of these modules and their responses to incoming charged particles by studying the pixel performance and the particle track widths. These studies will help us understand the differences between the modules in terms of the respective drift fields applied within them and the charge collection efficiencies.
Hyper-Kamiokande (HK) is a next-generation neutrino detector that will require new detector technologies and percent-level calibration to achieve its full physics potential. To achieve this goal, a 50-ton scale Water Cherenkov Test Experiment (WCTE) has been proposed and is scheduled to be installed at the T9 test beam experimental area at CERN, with the run starting in the summer of 2024. To understand and characterize the T9 beam, several small detectors have been designed, including a Time-of-Flight (TOF) detector, Aerogel Cherenkov Threshold (ACT) detectors, hole counters, and hodoscopes. These detectors will be placed between the beam-target stage and the WCTE water tank. However, the presence of these intermediary materials will modify the momentum and position distribution of the incoming T9 beam. To study these modifications, a dedicated Geant4 simulation has been performed, and the results will be discussed in this talk. Overall, this simulation aims to improve the accuracy and effectiveness of the WCTE detector by providing a better understanding of the T9 beam and its interactions with intermediary materials.
The Hyper-Kamiokande project plans to measure the phenomenon of neutrino oscillations with unprecedented precision, at the 1% systematic uncertainty level or less. To do so, multiple water Cherenkov detectors will be deployed: near and far detectors, as well as a test experiment (WCTE) for the testing of new technologies and improvement of physics understanding. These detectors will use multi-photomultiplier tube (mPMT) modules, each of which consists of nineteen 3'' PMTs for the detection of Cherenkov radiation produced by the resultant charged particle from a neutrino interaction. These mPMTs are under development at multiple locations. A number of measurements have been done on the modules, including optical tests to understand light-collection capabilities before and after the inclusion of additional reflective material on the PMT cups, pressure tests to measure the amount of deflection of the mPMT components at various water depths, and mechanical tests on the gel that optically couples the PMTs to the acrylic dome covering the module. This presentation will discuss these measurements, as well as provide an overview of the mechanical components and electronics that comprise the modules.
There exists a large body of indirect evidence for the existence of Dark Matter (DM) but, to date, no direct evidence has been found. Because of this, the wide range of parameter space that could explain dark matter’s observed effects has given rise to a large number of models. One possible form of DM is strongly self-interacting DM, which includes Strongly Interacting Massive Particles (SIMPs), modeled after Quantum Chromodynamics (QCD). To narrow down possible models, direct detection of dark matter at accelerators is a high priority. Detecting or ruling out some possible DM models is part of the experimental program for the MoEDAL experiment located at the LHC. The MAPP extension to the MoEDAL experiment, now approved for Run 3, focuses on searching for milli-charged particles (mCPs) and long-lived particles (LLPs). In this talk, we will discuss meson-like SIMPs and their potential detectability at the MoEDAL-MAPP experiment. In order to model this DM, we construct a Lagrangian describing dark pions using an approach inspired by chiral perturbation theory, an effective field theory of QCD. In addition to strong self-interactions, our meson-like DM also couples to dark gauge fields. To couple our model to the Standard Model, we include a vector portal term which kinetically mixes our dark gauge fields with Standard Model gauge fields. As part of our model, we also include a Wess-Zumino-Witten term, which is important for controlling the overproduction of strongly self-interacting DM in the early universe. We focus on two processes: a Drell-Yan process involving a dark gauge field, which produces a pair of dark pions, and photofusion of two dark photons to three dark pions. Due to kinetic mixing, these dark pions will have an effective electric charge that is a small fraction of that of the electron.
The far-infrared spectrum of CD$_{3}$SH has been recorded from 60 to 450 cm$^{-1}$ at the FIR beamline of the Canadian Light Source in Saskatoon in order to explore the evolution of the torsional structure in climbing up the ladder of torsional states. So far, the torsion-rotation levels have been extensively mapped up to the third excited torsional state, and we hope to push the assignments further up to the v$_{t}$ = 4 state and beyond to where the ground torsional ladder is passing through the lower vibrational levels with the possibility of interesting torsion-vibration interactions. Here the torsional levels are high above the potential barrier to internal rotation and are essentially free rotor states following the parabolic curves of our “Universal Spectral Predictor”. We wish to explore how well the free rotor pattern can be modeled in order to gain predictive power for extrapolation up to higher states and potentially address the long-standing torsional problem of simultaneous global fitting of ground and vibrational states.
The authors fabricated a unique plasmonic structure using gold nanorods (GNRs) along the length of a tapered fiber using the well-known phenomenon of optical tweezing. The plasmonic structure, known as an optical fiber probe, was used to detect chemicals at low concentrations. The surface-enhanced Raman spectroscopy (SERS) technique was used to obtain the data for chemicals adsorbed on the probe. The fiber probe was manufactured using a dynamic etching process. We will present the Raman spectra of Rhodamine 6G (R6G) and Crystal Violet (CV), which are extensively used in the food and textile industries. Manufacturers use CV in aquaculture for its anti-parasitic and anti-microbial properties, which help prevent diseases and infections in fish and seafood farming. However, CV exhibits elevated toxicity, and its residue is strictly forbidden in food due to potential carcinogenic and mutagenic properties that pose a threat to both human and aquatic life. R6G is a synthetic dye commonly used in the food industry to provide colour to various food products such as candies, energy drinks, sauces and dressings, as well as in the textile industry, whose massive dye use is a major cause of water pollution. The manufactured probe is reliable, sensitive and compact. The “dip and dry” technique was used to adsorb analytes (R6G and CV) at different concentrations on the gold-nanorod-coated fiber probe. The results based on GNRs (aspect ratio 3.8 and longitudinal surface plasmon resonance (LSPR) wavelength 785 nm, Nanopartz, USA) show that the minimum concentrations detected for R6G and CV were 10$^{-12}$ M and 10$^{-11}$ M, respectively.
The authors demonstrated a passively Q-switched pulsed laser using an aqueous solution of colloidal gold nanorods (GNRs) and polyvinyl alcohol (PVA) as a saturable absorber (SA) in a fiber ring laser cavity. GNRs, due to their unique plasmonic and nonlinear properties, have the potential to generate ultrashort pulses. In addition, a tunable laser can be developed using a mixture of GNRs with different lengths (aspect ratio = length/diameter). However, the application of GNRs in developing high-power lasers is limited by their low damage threshold. The aqueous solution used in the experiment increased the damage threshold of the GNRs, and their shape remained intact after prolonged exposure to high power. The heat accumulated in the GNRs can dissipate into the surrounding medium in less time than is required to deform the shape of the GNRs responsible for the plasmonic properties. PVA provided stability to the solution and restricted the aggregation of GNRs, resulting in a uniform distribution of GNRs, which was examined in TEM images. The density of GNRs in the aqueous solution was increased to increase light absorption. Q-switched pulses were generated with a width, repetition rate and average power of 9.2 µs, 21.5 kHz, and 3.25 mW, respectively, at a 1560 nm central wavelength. The authors will present the design of the laser cavity, the process of preparing the SA, and experimental results, including TEM images showing the distribution of GNRs in PVA at different concentrations.
We will report on the development of efforts to create new spectroscopic reference data to help astronomers find exoplanets. Astronomers routinely observe spectra of the molecule iron hydride (FeH) in the atmospheres of M-class stars. By measuring the Doppler shifts of transitions in FeH, they can determine a star’s radial velocity. If a star has an exoplanet, the star and the exoplanet will orbit their common center of mass, inducing a periodic frequency wobble in the star’s FeH spectrum. However, M-class stars often have strong magnetic fields that modify the FeH spectrum through Zeeman splitting. While the infrared transition of interest (E$^4$Π - A$^4$Π, ~1600 nm) has been studied in detail, its response to a magnetic field has not been studied in the lab, although it has been observed in sunspots (A. Asensio Ramos et al., 2004). Hence astronomers require a laboratory FeH Zeeman spectrum to interpret their stellar observations when there is a non-negligible magnetic field.
We are creating FeH in the lab to study the response of the E$^4$Π and A$^4$Π states to controlled magnetic fields. Transitions involving E$^4$Π and A$^4$Π are accessed through new higher-lying states that we have recently discovered. One of the new states is accessible from the ground state, X$^4$Δ, via laser excitation spectroscopy in the green region (~ 510 nm) and fluoresces to both the E$^4$Π and A$^4$Π states. Thus, we can obtain information about the infrared transition indirectly through Zeeman spectroscopy of the visible transitions. The second new higher-lying state is assigned as the (2)$^4$Φ state. This state is also accessed from the ground state (~ 515 nm) and fluoresces very strongly in the red (~ 625 nm) to the previously observed C$^4$Φ state.
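For orientation, the leading-order splitting being mapped is the linear Zeeman shift of each magnetic sublevel (written here in its generic form; the effective g-factors of the FeH states involved depend on the details of their angular momentum coupling):

$$\Delta E = g_{\mathrm{eff}}\,\mu_{B}\,B\,M,$$

where $M$ is the magnetic quantum number, $\mu_{B}$ the Bohr magneton, and $B$ the applied field.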
Laser-cooled molecules exhibit several features that make them attractive virtual laboratories for probing new physics Beyond the Standard Model (BSM). Various proposed extensions to the Standard Model predict non-zero values for the electron's Electric Dipole Moment (eEDM). To date, no experiment has measured a non-zero eEDM; however measurements placing an upper bound on the value for the eEDM provide an experimental check on potential new physics theories. YbOH has recently been suggested as a molecule of interest in the search for BSM physics due to its large effective internal EM fields. Despite this interest, laboratory spectra of its isotopologue YbOD have remained elusive until now. We present our analysis of the first high-resolution LIF spectra of $^{174}$YbOD.
Discussion and networking opportunities for session close-out.
Hyperpolarized (HP) gas MRI was previously developed to provide a way to study whole lung ventilation, alveolar morphometry and gas-exchange, with the first demonstration of 129Xe MRI lung imaging nearly 30 years ago. In the ensuing decades, HP gas MRI research has demonstrated that inhaled HP gas lung MRI provides unique measurements for a number of pulmonary diseases including chronic obstructive pulmonary disease (COPD), cystic fibrosis, asthma, lung cancer, and COVID. This MRI approach allows for visualization and quantification of lung units that participate in ventilation, and differentiating them from non-ventilating regions. This non-invasive, rapidly acquired and radiation-free lung imaging method, provides direct, spatial measurements of lung structure, function, and gas exchange down to the alveoli and acinar ducts.
HP gas 129Xe MRI has recently received FDA approval, meaning that this imaging technique is now a clinical tool in the USA. It is expected that Canada will also approve this method in the near future. 129Xe lung MRI is extremely powerful and the only radiation-free tool for lung structure and function measurements. The proposed program and requested infrastructure will provide new tools for accurate lung damage assessment, therapy guidance, and evaluation of treatment outcomes. This is critical not only for the almost 6 million Canadians with asthma (3.8 million) and COPD (2.0 million), but also for the 3.5 million Canadians who were infected by COVID-19 in 2020-2022 and experienced lung damage as a result (up to 10% of those infected). They will require longitudinal observation of lung structure and function to best understand and manage the short- and long-term health effects and to determine the impact of treatment strategies. The new tools being developed here will enable this.
There are a number of benefits associated with 129Xe brain MRI. First, 129Xe MRI brain perfusion images show a larger area of the brain affected by stroke compared to traditional proton MRI, which can serve as an important and more accurate second stroke predictor, keeping in mind that stroke is the 3rd leading cause of death in Canada and the 10th largest contributor to disability-adjusted life years (the number of years lost due to ill health, disability or early death). As such, having these tools is particularly important for the management and prevention of stroke.
The brain is made of billions of cells called neurons, which are responsible for conducting electrical signals between the central nervous system and the rest of the body. The axon is the thread-like projection of the neuronal cell body and is usually insulated by the myelin sheath. The two hemispheres of the brain are connected by a white matter tract called the corpus callosum and the degeneration and dysfunction of axons within this brain region is indicative of many disorders, including Multiple Sclerosis. Such degeneration can be seen in the decreasing diameters of axons within the corpus callosum.
Current methods for measuring axon diameters require ex vivo tissue samples and electron microscopy analysis. Recently, Magnetic Resonance Imaging (MRI) has proven to be a useful tool for measuring axon diameters. Oscillating Gradient Spin Echo (OGSE) MRI pulse sequences can be used to probe micron-sized structures within a sample. This project investigated the use of OGSE sequences to measure axon diameters in the mouse corpus callosum. A CDI (Clostridioides difficile Infection) male mouse was anesthetized using isoflurane and perfused according to University of Winnipeg and Manitoba CACC protocols. Following sacrifice, the mouse brain, in skull, was isolated and then soaked in paraformaldehyde for 48 hours, followed by phosphate-buffered saline for another 48 hours prior to imaging. The mouse brain was then transferred to a holding tube filled with Fomblin, and the tube was placed inside a 21 cm horizontal-bore 7 Tesla Bruker magnet. Images were registered, ROIs were drawn in the corpus callosum, and axon diameters within the corpus callosum were inferred using custom-built MATLAB code.
Axon diameters in various regions of the corpus callosum were inferred to be 5.4±0.8 µm, 5.3±0.7 µm and 6±1 µm. MRI using OGSE pulse sequences can therefore probe micron-sized axons in fixed biological tissues. The next step is to reduce the uncertainty in the measurements.
The authors would like to acknowledge funding from NSERC and Mitacs, as well as assistance with animal care from Rhonda Kelly.
Approximately 1 in 6 people globally are affected by a neurological disorder. Previous research has linked numerous neurological disorders post-mortem to abnormalities in axon distribution and integrity within neural white matter tracts. It is therefore of high interest to investigate methods that could eventually measure axon diameters in white matter tracts in live brains, which would enable new clinical applications such as earlier diagnosis and the development of new treatments. Diffusion MRI is a method with the potential to infer microstructure in live brains using temporal diffusion spectroscopy (TDS). TDS, when used with certain pulse sequences such as Oscillating Gradient Spin Echo (OGSE), can be used to infer micron-scale axon diameters. To calibrate TDS with OGSE, an ex vivo mouse brain was imaged and analyzed in this project, and many substructures were studied to assess the differences within a mouse. The images were collected using a 7 T Bruker AvanceIII NMR system with Paravision 5.0 and were processed and analyzed using MATLAB. The mean inferred axon diameter in the corpus callosum ranged from 2.6 ± 3.4 μm to 5.6 ± 1.0 μm, with an average of 5.3 ± 0.2 μm. The next step is to increase the precision of the measurements, with the goal of measuring axon diameters in in vivo mouse brains.
The authors wish to acknowledge Rhonda Kelley for her help with animal care and imaging. The authors acknowledge funding from NSERC and Mitacs.
Magnetic resonance imaging (MRI) is a powerful imaging technique for diagnosing disease. One major drawback of currently available MR systems is the cost of the high-field (>1 T), general-use magnets that are the current clinical standard. Thus, interest has grown in developing smaller, low-field, and diagnosis-specific MR systems. These systems can reduce costs and increase the accessibility of MRI by allowing MR systems to be used as part of bedside care. To aid in the design of new MR systems, we have developed the Numeric Integrator for the Bloch Equations (NIBLEs), a Python 3 based tool for simulating varied MR sequences and magnet configurations, with the aim of identifying the optimal system characteristics for a given use case to serve as a starting point for the development of specialized MR systems.
The NIBLEs toolset is divided into two main components: a toolset for defining sample properties to be used in later simulations, and a toolset for simulating MR hardware and experiments. The sample toolset allows a user to build a sample composed of spatially distributed magnetization vectors with varied properties used to simulate proton density, chemical shift, and NMR relaxation times. Meanwhile, the main simulator allows users to script an MR pulse sequence using functions that define the applied magnetic fields from the constituent components of an MR system (e.g. gradient coils and radiofrequency coils). This script provides the applied field and sample properties to the core solver, which uses a Runge-Kutta algorithm to numerically solve the Bloch equations and reproduce the signal-space output of that experiment. Simulations will be presented to illustrate the capabilities of the NIBLEs solver in producing realistic results for NMR experiments and MRI protocols, including Turbo Spin Echo and Gradient Echo imaging.
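To make the core-solver idea concrete, the following is a minimal sketch, not the actual NIBLEs API, of integrating the Bloch equations for a single magnetization vector with a fixed-step fourth-order Runge-Kutta scheme; the function names, parameters and the pulse-free example field are illustrative assumptions only.

import numpy as np

# Minimal sketch (not the actual NIBLEs API): Bloch-equation integration
# for one magnetization vector with a fixed-step fourth-order Runge-Kutta scheme.

def bloch_rhs(t, M, B_func, T1, T2, M0=1.0):
    """dM/dt for M = [Mx, My, Mz] in an applied field B(t) (Tesla), with relaxation."""
    gamma = 2.675e8                      # proton gyromagnetic ratio (rad/s/T)
    dM = gamma * np.cross(M, B_func(t))  # precession about the applied field
    dM[0] -= M[0] / T2                   # transverse relaxation
    dM[1] -= M[1] / T2
    dM[2] -= (M[2] - M0) / T1            # longitudinal relaxation toward M0
    return dM

def rk4_step(t, M, dt, B_func, T1, T2):
    """Advance M by one classical Runge-Kutta step of size dt."""
    k1 = bloch_rhs(t, M, B_func, T1, T2)
    k2 = bloch_rhs(t + dt / 2, M + dt / 2 * k1, B_func, T1, T2)
    k3 = bloch_rhs(t + dt / 2, M + dt / 2 * k2, B_func, T1, T2)
    k4 = bloch_rhs(t + dt, M + dt * k3, B_func, T1, T2)
    return M + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: free precession in a 1 T static field after a 90-degree tip.
B_static = lambda t: np.array([0.0, 0.0, 1.0])
M = np.array([1.0, 0.0, 0.0])            # magnetization in the transverse plane
dt = 1e-10                               # time step (s), small relative to the Larmor period
for i in range(1000):
    M = rk4_step(i * dt, M, dt, B_static, T1=1.0, T2=0.1)

In practice such a solver would be driven by the scripted pulse-sequence fields and looped over every magnetization vector in the sample; the sketch only shows the numerical core.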
The weak mixing angle can be measured in parity-violating elastic electron-proton scattering. The aim of the P2 experiment is a very precise measurement of the weak mixing angle, with an accuracy of 0.15%, at a low four-momentum transfer of $Q^2 = 4.5\times10^{-3}$ GeV$^2$. In combination with existing measurements at the Z pole of comparable accuracy, this comprises a test of the Standard Model with sensitivity to new physics up to a mass scale of 50 TeV. In addition to the measurement using a liquid hydrogen target, other targets, such as carbon and lead, are considered for measuring parity-violating elastic electron scattering. The experiment will be built at the future MESA accelerator in Mainz. In this talk, the motivation and challenges for these measurements will be discussed.
High Voltage Monolithic Active Pixel Sensors (HVMAPS) are a new type of electron detector that combines the semiconductor sensor elements detecting high-energy particles with the readout electronics in a single element. The demand for fast, high-resolution and low-noise detectors by experiments conducted at the LHC initiated the development of pixel detectors with integrated readout, first developed at CERN in the 1980s [1]. Each pixel has its own integrated readout electronics. The manufacturing process provides high levels of customization, such as thickness and radiation length, thereby allowing control of the material budget for detectors where scattering could be an issue. HVMAPS have been used as detectors for the Mu3e experiment [2]. As thin as 50 microns, the latest version of the HVMAPS (MuPix Version 11) is an ideal electron detector for applications in the MOLLER experiment at Jefferson Lab [3]. The experiment proposes to measure the parity-violating asymmetry, $A_{PV}$, in polarized electron-electron scattering, thereby measuring the Weinberg angle to greater precision. This presentation outlines the use of HVMAPS in two aspects of the experiment: the Compton polarimeter and the main detectors, for tracking the path and position of electrons, respectively.
[1] P. Garrou, C. Bower, and P. Ramm, "Introduction to 3D Integration," in Handbook of 3D Integration, Vol. 1: Technology and Applications of 3D Integrated Circuits, Wiley-VCH, 2008.
[2] N. Berger et al. (Mu3e Collaboration), "The Mu3e experiment," Nucl. Phys. B Proc. Suppl. 248, 35–40 (2014).
[3] J. Mammei, "The MOLLER experiment," arXiv:1208.1260 (2012).
The TRIUMF Ultracold Advanced Neutron (TUCAN) source, when completed, will be a world-leading source of ultracold neutrons. The source is a unique combination of a spallation target coupled to a superfluid helium converter. A key component of the source is the liquid deuterium moderator, which surrounds the superfluid helium converter. The goal of the LD$_2$ moderator is to provide a high flux of cold neutrons into the superfluid, where they are then downscattered to become UCN. The liquid deuterium moderator has a 125 L volume, and experiences a heat load of 60 W for the design proton beam current of 40 $\mu$A. Cooling is provided by a distant cryocooler at higher elevation, in a thermosyphon loop with the moderator volume. The thermosyphon solution is unique in featuring no moving parts and single-phase (liquid) operation. This presentation will focus on the design of the liquid deuterium system, and the status and plans for this important component of the UCN source.
In order to search for physics beyond the Standard Model at the precision frontier, it is sometimes essential to account for Next-to-Next-to-Leading Order (NNLO) corrections in theoretical calculations. Using the covariant approach, we calculated the QED-type leptonic tensor up to the quadratic (one-loop-squared) NNLO (alpha-cubed) order, which can be used for processes such as electron-proton and muon-proton scattering relevant to the MOLLER (background studies) and MUSE experiments, respectively. Recently, we have applied this approach to the hard-photon bremsstrahlung process known as Bethe-Heitler. This is a 2→3 process in which an electron scatters off a proton with the emission of a hard photon, and it is an important example process in Quantum Electrodynamics (QED).
In this presentation, I will briefly review the covariant approach and present our latest results for the quadratic QED corrections to electron-proton scattering, along with the Bethe-Heitler process.
Performing measurements on anti-matter atoms is an alluring proposition for studying the symmetries between matter and anti-matter; however, it presents a number of technical challenges. The ALPHA group has met these challenges and successfully trapped large numbers of anti-hydrogen atoms, opening the door for many such measurements. The new ALPHA-g experiment has the ability to measure the gravitational force exerted by the Earth on these anti-hydrogen atoms, by counterbalancing this force with precisely controlled magnetic fields. By relaxing only the confinement along the gravitational axis, the anti-atoms are released into two “up” and “down” regions separated by tens of centimetres. Here they annihilate, and the ratio of counts in the two regions describes the overall – magnetic plus gravitational – bias.
Charged pions resulting from these annihilations are tracked in a time projection chamber; these tracks are fit and extrapolated back to a common annihilation vertex. Our ability to reconstruct the position of these annihilation vertices into the correct region was previously one of the limiting factors of the experiment. Here I present the steps taken to improve our position resolution beyond that necessary for the experiment.
Furthermore, due to the low number of anti-atoms produced and slow experiment timescale, cosmic rays produce a sizeable background in our time projection chamber. To mitigate this, a second plastic scintillator-based detector system was implemented, called the “barrel veto”. This was used to discriminate against the cosmic ray background based on event topology in the first data-taking run in 2022. It has the additional possibility of using time-of-flight to further identify background events. Here I present the usage of the barrel veto to reject the cosmic ray background, and demonstrate the overall effectiveness of the ALPHA-g detector system.
The ALPHA-g experiment at CERN aims to test the fundamental symmetry between matter and antimatter by precisely measuring the effect of Earth's gravity on antihydrogen atoms. To achieve this goal, the experiment uses a radial Time Projection Chamber (rTPC) as the primary detector for particle tracking. The rTPC provides a high spatial resolution of the antihydrogen annihilation vertices, which is crucial for a measurement of the interaction between antimatter and Earth's gravitational field. This presentation will discuss a simulation study of the rTPC's performance, which aims to quantify its spatial resolution, efficiency, and response to various experimental parameters. The study highlights the essential role of simulations in understanding systematics for future precision measurements. Specifically, the results demonstrate the importance of simulation studies in optimizing the performance of the rTPC and lay the groundwork for future investigations of the detector's tracking and vertex reconstruction algorithms.
The ways Physics and Astronomy are traditionally taught in Canadian universities typically ignore millennia of knowledges of Indigenous Peoples. This is by design, as textbooks and curricula tend to build upon a European view of the growth of science, physics, and astronomy that centers one perspective. However, we can build a more diverse and improved curriculum by considering Indigenous methodologies, knowledges and stories in our research and teaching. In this talk I will reflect on experiences teaching an Indigenous-centric astronomy course that considered Indigenous methods and the intersection of professional astronomy and colonization today. I will also offer insights into how physicists and astronomers can move to support inclusion and celebration of Indigenous knowledges in the classroom.
nEXO is a planned next-generation neutrinoless double beta decay experiment, designed to be located at SNOLAB in Sudbury, Ontario, Canada. Within the international nuclear and astroparticle physics communities, we strive to be a leader and role model in the areas of Diversity, Equity, and Inclusion while drawing inspiration from the trailblazers who came before us. In 2018, nEXO wrote and adopted its Code of Conduct and created a standing Code of Conduct committee. In 2020, nEXO founded its Diversity, Equity, and Inclusion Committee. The nEXO-DEI committee has created a mentorship program, started an internal DEI lecture series, initiated an internal newsletter and information hub, and begun surveying our own collaboration on ways that we can improve our culture. This talk outlines the work of these groups, the progress they have made, and where the future of DEI in the nEXO collaboration is headed.
Undergraduate research activities, strong mentorship and peer support have been demonstrated to improve the experiences of students studying science. This is especially important for Indigenous students, for whom the transition from a high school setting, where students feel comfortable and may be embedded in a robust Indigenous community, to university can be isolating and challenging. UWinnipeg has a large population of Indigenous students and is uniquely situated to support and encourage Indigenous students in the sciences. This presentation will describe the suite of programs at UWinnipeg, namely the Pathway to Graduate Studies (P2GS) program for junior students and the Indigenous Summer Scholars Program (ISSP) for students towards the end of their degree. In 2022, UWinnipeg helped develop a pilot program at UWindsor. These programs offer a rich environment for research and scholarly success and a means to form a sense of community and belonging on campus. The P2GS program provides an opportunity for first- and second-year undergraduate students to upgrade their basic science skills, gain research experience in a university laboratory, and form a network of peers, graduate students and faculty. The ISSP matches senior undergraduate students with a research mentor to work on an independent research project. Both programs are deeply connected to and supported by the UWinnipeg chapter of the Canadian Indigenous Science and Engineering Society. I will also invite discussion about how these programs may be used as a model at other post-secondary institutions.
The authors thank NSERC PromoScience for funding.
The Truth and Reconciliation Commission of Canada called on post-secondary institutions to “integrate Indigenous knowledge and teaching methods into classrooms” (TRC, 2015). At the University of Windsor, there is a broad initiative to include Indigenous knowledge and ways of knowing in as many courses and programs as possible. I present the first attempt at indigenizing and decolonizing the physics curriculum at the University of Windsor. This involves the development of a brand-new second-year elective entitled 'History of Astronomy'. Throughout the development of the course, the goal of 'two-eyed seeing' was kept in mind. I will report on the challenges encountered and how we tried to meet them.
TRIUMF, Canada’s Particle Accelerator Centre, delivers beams for fundamental science and a wide range of accelerator-based applications.
World-leading in radioisotope beam production, TRIUMF-ISAC is the only ISOL facility routinely operating targets under particle irradiation in the high-power regime in excess of 10 kW. TRIUMF's current flagship project, ARIEL (the Advanced Rare IsotopE Laboratory), is adding two new target stations providing isotopes to the existing experimental stations in ISAC-I and ISAC-II at keV and MeV energies, respectively. In addition to the operating 500 MeV, 50 kW proton driver from TRIUMF's cyclotron, ARIEL will make use of a 35 MeV, 100 kW electron beam from a new TRIUMF-designed and -built superconducting linear accelerator. Together with an additional 200 m of RIB beamlines within the radioisotope distribution complex, this will give TRIUMF the unprecedented capability of delivering three RIB beams to different experiments while simultaneously producing radioisotopes for medical applications, significantly enhancing the scientific output of the laboratory.
To cope with the increased occupancy and radiation dose expected at the High-Luminosity LHC, the ATLAS experiment will replace its current Inner Detector with the Inner Tracker (ITk), containing all-silicon pixel and strip sub-detectors. The strip detector will be built from modules, each consisting of one or two n+-in-p sensors, one or two PCB hybrids containing the front-end electronics, and one powerboard. The sensors in the barrel region of the cylindrical ITk use simple rectangular strips, while those in the circular endcaps use a radial strip layout.
To validate the expected performance of the ITk strip detector, a series of testbeam campaigns has been performed over several years at the DESY-II electron accelerator. Beam particles are tracked by EUDET telescopes, consisting of six high-resolution pixel detectors, plus an additional fast detector to improve timing resolution. Tracks are reconstructed with a spatial resolution of several microns, and compared to hits in the module under test. To evaluate the end-of-life performance of the ITk, modules from different regions of the detector have been built using sensors and/or front-end electronics irradiated to the maximum dose expected in the HL-LHC, plus a 50% safety factor, and measured in the testbeam to assess charge collection, signal efficiency, and noise occupancy. The results of this analysis give confidence in the detector meeting specifications across its operational lifetime.
The ABCStar (ATLAS Binary Chip – Star Version) is a front-end readout chip for the silicon-strips portion of the ATLAS Inner Tracker (ITk) upgrade. These radiation-hard application specific integrated circuits (ASICs) are implemented in a commercial 130 nm CMOS process and are intended to handle the high rate of collision data at the High Luminosity LHC (HL-LHC), and last throughout the lifetime of the detector. Over 350,000 ABCStar ASICs need to be extensively tested to ensure that chips used for sensor module assembly follow all design specifications.
Conventionally, electronics for particle physics experiments have been tested using custom equipment in dedicated research facilities that allow for extensive research and experimentation. Rather than duplicating this approach, Carleton partnered with DA-Integrated, a specialist ASIC testing company in Canada, to implement an industrial-standard wafer testing program for the first time in a particle physics detector project. By leveraging their expertise and infrastructure we were able to obtain large improvements in throughput compared to existing approaches, without compromising test coverage or data collection. In addition, the enhanced wafer testing capabilities at DA-Integrated allowed for a detailed investigation of the digital performance of the ABCStar under different duty cycles and supplied voltages. These results were used to determine the operational window of the ABCStar to prevent data loss in the detector.
Production probing of the ABCStar is underway, with Carleton set to test half of the ABCStar ASICs required for ITk. Collaborating with DA-Integrated has bridged the methodological, technical and semantic gap between research facilities and the semiconductor testing industry. This will open new possibilities for ASIC testing in future particle physics projects.
New neutron sources are needed both in Canada and internationally as access to reactor-based neutrons shrinks. Compact Accelerator-based Neutron Sources (CANS) offer the possibility of an intense source of pulsed neutrons with a capital cost significantly lower than spallation sources. In an effort to close the neutron gap in Canada, a prototype Canadian compact accelerator-based neutron source (PC-CANS) is proposed for installation at the University of Windsor. The PC-CANS is envisaged to serve two neutron science instruments, a boron neutron capture therapy (BNCT) station, and a beamline for fluorine-18 radioisotope production for positron emission tomography (PET). To serve these diverse applications of neutron beams, a linear accelerator solution is selected that will provide 10 MeV protons with a peak current of 20 mA within a 5% duty cycle. The accelerator is based on an RFQ and DTL, with a post-DTL pulsed kicker system to simultaneously deliver macro-pulses to each end-station. This study compares the performance of various DTL solutions, including Alvarez, CH, and APF structures.
The Off-Line Ion Sources (OLIS) facility is part of TRIUMF's world-class Isotope Separator and Accelerator (ISAC) complex, specializing in nuclear and particle physics research. Delivery of stable beams from OLIS and rare isotope beams from ISAC, and eventually ARIEL (the Advanced Rare Isotope Laboratory), to various experiments with the desired intensity and quality requires complex tuning of many independent parameters over a lengthy, manual procedure.
Here we present first results of tuning the OLIS beamline using Bayesian optimization, a state-of-the-art machine learning algorithm for maximizing black-box functions. It takes advantage of probabilistic modelling using Gaussian processes, with an iterative method (an acquisition function) for selecting sample points in the search for the best solution. We have shown that the working model performs as well as human operators in minimizing beam loss over a section of beamline.
Our AI-driven method has far-reaching implications for automated tuning of the entire ISAC-I/II and ARIEL beamline complexes for rare and stable isotope beam transport.
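As a rough illustration of the approach, and not the actual OLIS/TRIUMF control software, the sketch below runs a Gaussian-process Bayesian-optimization loop with an expected-improvement acquisition function over a toy stand-in for the measured beam transmission; the objective function, parameter ranges and all names are assumptions for demonstration only.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.stats import norm

# Toy stand-in for the measured beam transmission; the real objective is a
# machine measurement, not an analytic function.
def beam_transmission(settings):
    return -np.sum((settings - 0.3) ** 2)

bounds = np.array([[0.0, 1.0]] * 3)       # three normalized optics settings (illustrative)
rng = np.random.default_rng(0)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 3))   # initial random samples
y = np.array([beam_transmission(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(25):                       # iterative Bayesian-optimization loop
    gp.fit(X, y)                          # probabilistic surrogate model
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 3))
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)    # expected improvement
    x_next = candidates[np.argmax(ei)]    # next setting to try on the "machine"
    X = np.vstack([X, x_next])
    y = np.append(y, beam_transmission(x_next))

print("best settings found:", X[np.argmax(y)])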
Intricate periodic and aperiodic ordered phases have been discovered in various soft matter systems such as supramolecular assemblies, surfactant solutions and block copolymers, underscoring the universality of emergent order in condensed matter. Theoretical studies of block copolymer systems have been successful, revealing that the formation of complex ordered phases can be regulated by several mechanisms, including conformational asymmetry, copolymer architecture and the variety of the polymeric components. However, extending this success to non-polymeric soft matter systems is not straightforward, and the emergence of complex ordered phases in soft matter remains an unsolved problem. We tackle this challenging problem by developing molecularly-informed Landau theory and density functional theory for various soft matter systems. In particular, we have demonstrated that the proposed theoretical framework can describe the emergence of complex ordered phases such as the network phases and the Frank-Kasper phases. Our study provides an initial step towards a generic theoretical framework for understanding the universality of phase behaviour involving complex ordered phases in various soft matter systems.
Recent experimental and theoretical studies have shown that many ordered structures, ranging in complexity from simple lamellae to complex Frank-Kasper (FK) phases, can be formed from diblock copolymers. In many of the experimental studies the polymeric samples used are polydisperse, whereas most theoretical studies have examined monodisperse systems. It is therefore desirable to conduct theoretical studies of the phase behaviour of polydisperse block copolymer systems. In our study, the molecular weight distribution of AB diblock copolymers is modelled as a four-component blend. Self-consistent field theory is used to study the effects of the shape of the molecular weight distribution (MWD). It is found that the width and skewness of the MWD, as well as conformational asymmetry, all have significant effects on the formation of the FK phases. The theoretical results provide insight into regulating block copolymer phase behaviour via designed molecular weight distributions and shed light on the formation mechanisms of the FK phases.
Many soft matter theoretical problems can be reformulated as the minimization of a cost function, in which field-based physical properties (the target functions) are adjusted to reach the minimum. The neural-network approach approximates the target functions by feed-forward neural networks, and machine-learning techniques adjust the network parameters to produce an approximation to the desired solutions. The physical properties, such as the free energy, together with boundary conditions, etc., are modelled in the cost function. The decoupling between the function approximator and the sampling space allows for further incorporation of the weighted Monte Carlo method. The algorithm is demonstrated here by solving a few classical theoretical problems in soft matter.
AI and machine learning – specifically neural network (NN) based approaches – have become an indispensable tool in many areas of physics research. Nevertheless, there is still much to learn about NNs at the fundamental level and for application specific methodologies. In this talk, I will discuss some of the work we have done both using physics applications to study how neural networks learn and using neural networks to study physics applications. Both areas of research center on using NNs to solve partial differential equations (PDEs). On the one hand, simple physical systems described by PDEs yield clean and well-posed problems that are useful for analyzing the training process. On the other hand, using NNs to solve PDE descriptions of systems such as biomolecules in nanoconfinement is a promising alternative to standard simulation approaches. I will also discuss some current and potential approaches that combine machine learning and simulation techniques to achieve significant efficiency gains.
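As a concrete, if simplified, illustration of the network-based PDE-solving idea described in the two abstracts above (and not the specific models used in those works), the following PyTorch sketch trains a small network on the 1D Poisson problem $-u''(x)=\pi^2\sin(\pi x)$ with $u(0)=u(1)=0$, whose exact solution is $u=\sin(\pi x)$; the architecture, hyperparameters and loss weighting are arbitrary illustrative choices.

import torch

# Minimal physics-informed-network sketch for -u''(x) = pi^2 sin(pi x) on [0, 1]
# with u(0) = u(1) = 0; a toy stand-in for the PDE problems discussed above.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)           # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = (torch.pi ** 2) * torch.sin(torch.pi * x)
    pde_loss = ((-d2u - f) ** 2).mean()                  # PDE residual term

    xb = torch.tensor([[0.0], [1.0]])
    bc_loss = (net(xb) ** 2).mean()                      # boundary-condition term

    loss = pde_loss + bc_loss                            # combined cost function
    opt.zero_grad()
    loss.backward()
    opt.step()

The cost function combines the PDE residual at randomly sampled collocation points with a boundary penalty, which is the same structure as the free-energy-plus-constraints cost described in the preceding abstract.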
Nanothermometry is a powerful tool that allows for controlling temperature at the nanoscale and thus finds applications in research fields ranging from biomedicine to high-power microelectronics. Typical nanothermometry techniques employ secondary nanothermometers, where each nanosensor must be individually calibrated, ideally both off- and in-situ. Here we utilize fluorescent nanodiamonds co-hosting germanium-vacancy and silicon-vacancy centers and a machine learning multi-feature linear regression (ML-MFR) algorithm to overcome this resource-expensive calibration requirement. By leveraging the temperature-dependent spectroscopic features of the diamond color centers (intensity, zero-phonon line wavelength, emission linewidth, etc.), we show that the MFR model yields more accurate temperature predictions than those produced, traditionally, by monitoring any one of these temperature-dependent observables individually. We observe nanoscale temperature readings with accuracy and resolution improved by factors of ~1.3-10.1x and ~1.2-8.3x, respectively. Importantly, the MFR algorithm does so without the need to calibrate every single nanothermometer prior to its use. The method is general, as it is suitable for any nanothermometry technique that uses nanosensors with at least two temperature-dependent observables, without requiring prior knowledge of the type of dependence. Furthermore, this approach is attractive for practical scenarios where calibration prior to deployment is difficult or unfeasible, as the models can be pre-trained on similar nanosensors. This study demonstrates the practical benefits of a machine learning approach to nanothermometry that is applicable to a wide variety of research fields.
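The multi-feature regression idea can be sketched as follows with synthetic data standing in for the measured spectroscopic features; the feature-temperature dependences, noise levels and library calls shown are illustrative assumptions, not the calibration data or model of this work.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-spectrum features of one emitter class:
# columns = [intensity ratio, ZPL wavelength shift (nm), linewidth (nm)].
rng = np.random.default_rng(1)
T_true = rng.uniform(295, 400, size=500)                     # temperatures (K)
features = np.column_stack([
    0.002 * T_true + rng.normal(0, 0.01, T_true.size),       # feature 1 vs T (assumed)
    0.015 * T_true + rng.normal(0, 0.05, T_true.size),       # feature 2 vs T (assumed)
    0.008 * T_true + rng.normal(0, 0.03, T_true.size),       # feature 3 vs T (assumed)
])

X_train, X_test, T_train, T_test = train_test_split(features, T_true, test_size=0.3)

model = LinearRegression().fit(X_train, T_train)             # multi-feature regression
T_pred = model.predict(X_test)
rmse = np.sqrt(np.mean((T_pred - T_test) ** 2))
print(f"temperature prediction RMSE: {rmse:.2f} K")

Because the regression pools several temperature-dependent observables, its prediction error is generally lower than a single-feature calibration of comparable noise, which is the benefit described above.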
The high-entropy oxide (HEO) Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$O is synthesized by annealing equimolar mixtures of the parent binary oxides MgO, CoO, CuO, NiO, and ZnO to 1000 K and quenching to 295 K. X-ray diffraction shows that the HEO crystallizes in a single-phase rocksalt structure. The cations randomly occupy the $(000)$ site, while the oxygen sublattice is ordered. Lattice dynamical (LD) studies on amorphous Si ($\alpha$-Si) have shown that structural disorder can induce localized phonon modes (`locons') beyond a high-frequency mobility edge in the vibrational density of states (VDOS). Locons are characterized by eigenvectors that decay exponentially and a participation ratio $\mbox{(PR)}<0.1$. We have used the General Utility Lattice Program to study the optical properties and phonon localization in the HEO. Previous LD studies of the elastic constants of ternary and quaternary oxides have obtained satisfactory agreement with experiment by neglecting cation-cation interactions and modelling cation-oxygen and oxygen-oxygen bonds with the Buckingham potential. Polarization effects are modelled using a shell model (SM) for oxygen, with all cations treated as point charges. In this work, we instead treat every atom with the SM: new Buckingham parameters for the binary oxides were obtained by fitting to experimental crystal structures, dielectric constants, and phonon frequencies. Agreement between the simulated VDOS and inelastic neutron scattering data is reasonable and improves upon existing models, which use a combination of the Buckingham potential and the point-charge approximation. Phonon mode localization was studied by calculating the PR for a 4096-atom supercell of the HEO. Despite the strongly disordered cation sublattice, it was determined that only 0.5% of modes are locons. This is roughly 10% of the number of locons in a similarly sized cluster of $\alpha$-Si. Furthermore, it was shown that the number of locons increases if sulfur atoms are randomly substituted into the oxygen sublattice.
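For reference, one common convention for the participation ratio used in such locon counts is $\mathrm{PR} = (\sum_i |e_i|^2)^2 / (N \sum_i |e_i|^4)$ for an eigenvector with per-atom amplitudes $e_i$ over $N$ atoms; the short sketch below (illustrative only, and not tied to the GULP output format) computes it and the resulting locon fraction.

import numpy as np

def participation_ratio(eigvec):
    """Participation ratio of one phonon eigenvector with shape (N_atoms, 3);
    PR < 0.1 is taken here as the locon criterion."""
    w = np.sum(np.abs(eigvec) ** 2, axis=1)        # per-atom weight |e_i|^2
    w = w / w.sum()                                # normalize so the weights sum to 1
    n_atoms = len(w)
    return 1.0 / (n_atoms * np.sum(w ** 2))

def fraction_of_locons(eigvecs, threshold=0.1):
    """Fraction of localized modes given eigvecs of shape (n_modes, N_atoms, 3)."""
    prs = np.array([participation_ratio(v) for v in eigvecs])
    return np.mean(prs < threshold)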
Understanding and controlling the liquid-to-crystal transformation is a central topic for numerous natural phenomena and technological applications. The first step of crystallization is the birth of critical nuclei. The size and structure of critical nuclei, and the rate at which they appear and grow, are fundamental parameters for understanding and controlling crystallization. Although nucleation rates can be measured experimentally in a few systems, the very small nucleus size (nm) and lifetimes that are either too short or too long make it extremely difficult to understand and describe the microscopic mechanism of nucleation, which remains elusive. To this end, computer simulation techniques provide, in principle, a suitable tool to dig deeper into this process. At least three main methods are available to obtain crystal nucleation rates via molecular dynamics simulation: 1) the mean lifetime method, 2) enhanced-sampling methods and 3) the seeding method.
Classical Nucleation Theory (CNT) is one of the most well-known models describing the nucleation process. This theory assumes that the formation of crystal nuclei takes place as a result of thermal fluctuations in a supercooled liquid (SCL). If an embryo overcomes a certain threshold size, it becomes a critical nucleus that spontaneously grows until it meets other growing crystals and the liquid solidifies. According to this theory, the interplay between the supercooled liquid/nucleus interfacial free energy, γ, and the difference between the chemical potentials of the crystal phase and the supercooled liquid describes the thermodynamics of crystal nucleation. The third key property is the effective diffusion coefficient, which controls the atomic transport rate at the liquid/crystal interface. The independent determination of these three quantities allows CNT calculations and comparison with experimentally determined or simulated nucleation rates. Owing to the scarcity of direct measurements of these properties, the validity and accuracy of the CNT have often been questioned. In this work, we were able to deeply supercool zinc selenide (ZnSe) and determine spontaneous homogeneous steady-state nucleation rates by molecular dynamics (MD) simulations using the mean lifetime method. At moderate supercoolings, where the nucleation rates are much smaller, we used the seeding method to compute the nucleation rates within the classical nucleation theory formalism, $J_{\rm CNT}$, without any fitting parameter, using the physical properties obtained from the MD simulations: the melting temperature, density, melting enthalpy, diffusion coefficient, and critical nucleus size, combined with two expressions for the thermodynamic driving force. The values of γ calculated from the CNT expression using the MD simulation data, via both the seeding method and the mean lifetime method at moderate and deep supercoolings, show a weak temperature dependence, in line with the Diffuse Interface Theory. The values of γ extrapolated from the spontaneous nucleation regime to the seeding nucleation region cover the range of values of γ calculated via the seeding method and the CNT formalism. Finally, the $J_{\rm CNT}$ values extrapolated from moderate to deep supercoolings are in good agreement with $J_{\rm MD}$. These results confirm the validity of the CNT.
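For orientation, the standard CNT expressions typically used in such seeding analyses (for a spherical nucleus; the symbol conventions here are generic rather than the authors' own) are $r^{*} = \dfrac{2\gamma}{\rho_s |\Delta\mu|}$, $\Delta G^{*} = \dfrac{16\pi\gamma^{3}}{3\,\rho_s^{2}\,|\Delta\mu|^{2}}$, and $J_{\rm CNT} = \rho_L\, Z\, f^{+}\exp\!\left(-\dfrac{\Delta G^{*}}{k_B T}\right)$, where $\gamma$ is the liquid/nucleus interfacial free energy, $\Delta\mu$ the chemical-potential difference between the crystal and the supercooled liquid, $\rho_s$ and $\rho_L$ the crystal and liquid number densities, $Z$ the Zeldovich factor, and $f^{+}$ the attachment rate of particles to the critical nucleus.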
Understanding the kinetics and thermodynamics of the crystallization processes involved in carbon-rich materials is a critical knowledge gap that hinders a realistic assessment of the risks and benefits of potential climate-change-mitigation strategies [1]. Toward this end, we investigated the thermal and aqueous stabilities of single-phase and multi-phase mixtures of calcium carbonate and magnesian carbonate, including both laboratory-synthesized and biogenic sources. Building on earlier work from our group [2], we use a suite of materials characterization techniques (including infrared spectroscopy, differential scanning calorimetry, and thermogravimetric analyses) to track changes to the solid phases over time after thermal treatments and/or exposure to water-based solutions. Our results are framed in the context of how to design experiments that help to identify - and ultimately reduce - the uncertainties and risks associated with climate mitigation strategies that rely on controlling carbonate mineral formation.
[1] Basic Energy Sciences Roundtable: Foundational Science for Carbon Dioxide Removal Technologies (Brochure). United States: 2022. Web. doi:10.2172/1868525.
[2] B. Gao and K. M. Poduska. Solids 2022. 3(4) 684-696. doi: 10.3390/solids3040042.
The p-doping of organic semiconductors (OSCs) to tune their electronic structure for opto-electronic applications is typically done by adding strong molecular acceptors as dopants to initiate charge transfer. I will summarize the current understanding of the phenomena observed upon molecularly p-doping conjugated polymers (CPs) and molecules (COMs), where two competing scenarios have been identified [1]: (i) integer charge transfer between the OSC and the dopant, forming ion pairs (IPAs), and (ii) fractional charge transfer, where ground-state charge transfer complexes (CPXs) between the OSC and dopant are formed. For prototypical OSCs such as poly(3-hexylthiophene) (P3HT) [2] and various oligothiophenes of different chain lengths [3], I will present recent findings on the role of microstructure, dopant strength, and conjugation length in the respective doping scenarios, from which chemical design strategies for improved molecular dopants have emerged and are being tested to suppress CPX formation [4].
[1] Salzmann et al., Acc. Chem. Res. 49, 370 (2016); [2] Hase et al., J. Phys. Chem. C 122, 25893 (2018), Hase et al., J. Phys. Mater. 6, 014004 (2023); [3] Liu et al., Angew. Chem. Int. Ed. 59, 7146 (2020); [4] Charoughchi et al., submitted.
We discuss a systematic error in time-resolved optical conductivity measurements that becomes important at high pump intensities. We show that common optical nonlinearities can distort the photoconductivity depth profile, and by extension distort the photoconductivity spectrum. We show evidence that this distortion is present in existing measurements on $\text{K}_{3}\text{C}_{60}$, and describe how it may create the appearance of photoinduced superconductivity where none exists. Similar errors may emerge in other pump-probe spectroscopy measurements, and we discuss how to correct for them.
What is a quantum black hole? How does it form and how long does it last? I will provide an answer to these questions via an effective equation that describes gravitational collapse of dust with quantum corrections. Solving this equation reveals that black holes end in a shock wave after a time of order mass squared.
I will discuss a class of time-dependent, asymptotically flat and spherically symmetric metrics that model gravitational collapse in quantum gravity, developed by myself and the other listed authors. Motivating the work was the intuition that quantum gravity should not exhibit curvature singularities; indeed, the metrics lead to singularity resolution, with horizon formation and evaporation following a matter bounce. A noteworthy result is that we recover the Hawking evaporation time, of order $M^3$, for the lifetime of the black hole.
In general, black holes interact with external matter and fields. A four-dimensional static black hole within a static external axisymmetric gravitational field can be described by a Weyl solution of the Einstein equations. These results can be extended to higher dimensions using the generalized Weyl form. Various studies have been devoted to investigating the properties of distorted black holes, including a distorted five-dimensional Schwarzschild-Tangherlini black hole, a distorted five-dimensional Reissner-Nordström black hole, and a distorted black ring. In this talk, we consider five-dimensional Weyl solutions, which are characterized by two independent axially symmetric harmonic functions in three-dimensional flat space. Using this method, we investigate distortions of a vacuum five-dimensional black hole with a “bubble” (the black hole exterior has nontrivial topology).
In recent years, with the progress in gravitational wave astronomy and subsequent importance of binary black hole mergers, there has been an increased focus on numerical simulations of these events. However, the most common surface of interest in black holes—the event horizon—is difficult to track numerically, as it is defined teleologically from future boundary conditions. Instead, the focus is on the quasi-local alternative to the event horizon, marginally outer trapped surfaces (MOTSs)—the apparent horizon being the outermost of these (in most cases). Our group has previously discussed the self-intersecting MOTSs we have found inside the apparent horizons of various black hole spacetimes. However (and unsurprisingly given how rotation often complicates analysis), the formalism used to calculate these MOTS for static black hole geometries does not hold when considering rotating black holes. This talk will focus on a formalism we have developed which generalizes the previous methods to rotating black holes (of arbitrary dimension) and provide an example of using this formalism to calculate the “MOTSodesic” equations—an analogue of the geodesic equations for MOTSs—in the Kerr spacetime. Applications of this method will be discussed in the talk by Kam To Billy Chan.
Self-intersecting marginally outer-trapped surfaces (MOTSs) have been found to play a vital role in binary black hole merger processes through numerical simulations [Pook-Kolb et al., arXiv:1903.05626]. Such exotic MOTSs can also be found in analytical black hole solutions, such as the simplest (Schwarzschild) black hole [Booth et al., arXiv:2005.05350]. Ongoing work continues to investigate the physical implications of the self-intersecting behaviour in spherically symmetric spacetimes [Hennigar et al., arXiv:2111.09373]. The previous techniques for finding self-intersecting MOTSs are restricted to non-rotating spacetimes. This talk makes use of the extension of the MOTS-finding method to rotating (axisymmetric) spacetimes in any dimension that was developed in [Booth et al., arXiv:2210.15685] and is presented at this congress in the talk by Sarah Muth. Such spacetimes are astrophysically relevant, as black holes found in our universe are expected to carry angular momentum. It is shown that the MOTSs found bear many similarities with those found in spherically symmetric spacetimes, while exhibiting previously unseen behaviours.
For the last few decades and especially since the first detection of gravitational waves, black hole mergers have been a core research area in general relativity. However, the process by which two black hole horizons merge is only now starting to be well-understood. In numerical studies of apparent horizon evolution, self-intersecting marginally outer-trapped surfaces (MOTS) were found and play a key role [Pook-Kolb et. al. arXiv:1903.05626]. Later an infinite number of self-intersecting MOTSs were found in Painleve-Gullstrand slices of the Schwarzschild solution [Booth et. al., arXiv:2005.05350]. Further work has shown that their existence is robust and not simply an artifact of that coordinate system [Hennigar et. al., arXiv:2111.09373]. This talk presents results found when examining the maximal extension to the Schwarzschild black hole in Kruskal-Szekeres coordinates. In this system, two separate universes dynamically connect through a worm-hole and pass through a moment of time-symmetry before the worm-hole pinches off and they disconnect. In these time slices, self-intersecting MOTS are found which, among other things, straddle the Einstein-Rosen bridge extending into both universes. Of particular interest is the behavior around the moment of time symmetry, as this provides insight into how MOTS evolve in numerical solutions which start from time-symmetric initial data.
The notion of wave-particle duality is fundamental in the quantum mechanical description of matter. This duality asserts that matter sometimes behaves like a particle and sometimes behaves like a wave, called de Broglie waves. Recent advances in methods to coherently manipulate de Broglie waves of atoms have enabled a new generation of atom interferometers with unique capability to address outstanding fundamental science challenges. The method has emerged as a tool capable of addressing a diverse set of questions in gravitational physics and quantum physics, and as a technology for advanced sensors for navigation and for measurement of the Earth’s gravitational field. We will discuss an experiment consisting of a tungsten mass and atomic wave packets separated by about 25 cm. In this experiment, the relative phase of the interfering wave packets is shown to depend on the gravitational interaction in a way which is analogous to the so-called Aharonov-Bohm effect for charged particles. We will describe the relevance of these results to observation of quantum superpositions of Newtonian gravitational fields. Future science and technology applications will also be described, including the detection of dark matter, detection of gravitational waves at frequencies below 1 Hz and satellite geodesy.
Session I: Biopolymers in Confined Environments
I will present a unique quantitative single-molecule imaging platform called CLiC (Convex Lens-induced Confinement) which enables simultaneous measurements of the size, mRNA-payload, and dynamic properties of mRNA-based therapies and vaccines in controlled, cell-like conditions (Kamanzi et al, ACS Nano 2021). Here, we apply single-molecule biophysics to help characterize and understand the mechanisms of action of emerging classes of therapeutics and vaccines. By isolating and imaging freely diffusing particles in solution as well as during reagent-exchange, such as in response to a change in solution pH, we can emulate and explore dynamics in a controlled setting which are relevant to understanding complex dynamics inside cells and as well as inside manufacturing devices. Over the long term and in collaboration with health scientists, we are working towards correlating detailed multi-scale data sets, including single-particle measurements made in vitro as well as in cells and tissues, with genomic and proteomic analyses of the same samples, as well as clinical results, to create a through-line of understanding of drug/vaccine effectiveness from the microscopic to clinical scale. Our inspiration is to innovate and use nanoscale tools to obtain new biophysical insights into how and why medicines/vaccines work to enable and optimize their rational design and engineering. This talk builds off our recent publication in ACS Nano (Kamanzi et al, 2021) which established our measurement platform, and describes our ongoing collaboration with health scientists and unpublished data sets on single-particle dynamics and mRNA-LNP properties acquired at our new labs in MSL-UBC during the pandemic.
Even though dilute (unentangled) polymer solutions cannot act as gel-like sieving media, it has been shown that they can be used to separate DNA molecules in capillary electrophoresis. The separation then comes from sporadic and independent polyelectrolyte-polymer collisions. Here we explore such collisions in nanochannels (i.e., channels that are smaller than the normal size of the polymers), a situation where a charged analyte is forced to migrate "through" isolated uncharged molecules during electrophoresis. We use Langevin dynamics (LD) simulations to investigate the nature of these collisions and their effect on the net movement of both polymer chains. We identify several types of collisions, including some that are unique to nanochannels. These results suggest a few potential applications for the analysis of biomolecules.
In recent years, nanofluidic devices have proven extremely useful for characterizing the physical behaviour of biopolymers such as DNA confined to narrow channels and micron-sized cavities. Insight gleaned from experiments using nanochannels is valuable for applications such as optical mapping of elongated DNA. Likewise, studies of multiple DNA molecules in nanocavities have provided insight into the confinement-enhanced entropic force that tends to induce polymer segregation, an effect that likely contributes to segregation of chromosomes in replicating prokaryotes. While simple theoretical models can be used to explain the basic aspects of such behaviour, the experiments are often carried out under conditions where the system lies outside clearly defined scaling regimes. This can give rise to pronounced quantitative discrepancies between theory and experiment. In such cases, computer simulations can provide an effective means to bridge the divide between the theoretical predictions and experimental results. In this talk, I present the results of recent Monte Carlo simulation studies, most of which have been inspired by recent experiments on confined DNA. I examine the effects of channel shape and width on the tendency for single semiflexible chains to form structures such as backfolds and knots. I also examine polymer segregation behaviour of two-chain systems confined to channels or elongated cavities. A key aspect of this work is the explicit calculation of the variation of the configurational free energy with respect to some relevant system parameter such as knot size or inter-polymer overlap. Theoretical treatments typically employ analytical approximations of free energy functions. While the explicitly calculated free energy functions yield qualitatively similar scaling behaviour, the discrepancies between calculated and theoretical scaling exponents can be appreciable. Quantifying these discrepancies should be of value in the interpretation of experimental results.
Innovation in materials science and engineering resides in our ability to control the structure of materials at the nanoscale in order to design advanced materials with outstanding functional properties (electrical, optical, magnetic, photocatalytic, etc.). One of the most powerful means to arrange matter at the nanoscale is to use laser produced plasmas due to their exceptional ability to provide simultaneously ions and neutral atoms with various energies in a non-equilibrium environment. Moreover, the possibility to perform growth in a reactive environment such as oxygen or to operate in a double-beam configuration offers an additional flexibility to control the stoichiometry of oxide materials, the dopant content and the surface quality. In this presentation, we will focus on the use of pulsed laser deposition for the growth of various oxide materials in the form of thin films, including undoped and doped vanadium dioxide and titanium oxide. They are exploited for the development of the next generation of photonic devices or for advanced environmental applications such as water treatment.
The field of nanotechnology has rapidly expanded over the past few decades due to the unique physical, chemical, mechanical, and electrical properties of nanoscale materials. Today, nanomaterials are applied in numerous fields, including catalysis, drug delivery, and microelectronics, among others. Plasma-based methods have shown great potential for use in the synthesis of nanomaterials via bottom-up or top-down approaches. The plasma-liquid system is a relatively novel field of research that has shown high efficiency in synthesizing nanomaterials. In this system, the plasma is either i) generated in a gas phase that is in contact with the liquid or ii) generated directly in the liquid (with or without bubble assistance).
In this communication, we will focus on the production of nanomaterials using in-liquid discharges, more particularly those eroding the electrodes in a controlled way. First, we will show that the produced particles are highly sensitive to both the electrode nature and the liquid composition. In a second part, we will introduce a novel plasma-liquid system in which a spark discharge is used to generate plasma in a liquid that is in contact with another liquid (a combination of type (i) and type (ii) systems). After a brief review of the discharge's electrical and optical characteristics, we will present the synthesis conditions, i.e. those leading to a spark discharge between a pin electrode immersed in a dielectric liquid (heptane) and the surface of a conductive solution (water + metal salts). This configuration guarantees an interaction between the high-density plasma (spark in liquid heptane) and the solution containing metal ions, and we have used it here to synthesize metal nanoparticles as well as binary and ternary nanoalloys.
Since its inception in 1999, Plasmionique has been carrying out collaborative research with Canadian universities, national laboratories, and international groups and companies. Such collaborations have allowed Plasmionique to remain at the forefront of technological development and to fulfill its mission of proliferating and commercializing plasma technology as an environmentally clean substitute for many challenging problems related to Advanced Surface Engineering, Material Synthesis, and Thin Film Processing. In this talk, we will present some examples to highlight the diversity of applications that plasma technology can offer. Topics discussed include the synthesis of various allotropes of carbon such as CNTs [1], graphene [2] and diamond [3]; surface engineering of forestry products [4]; thin film synthesis of multiferroic materials using conventional and hybrid [5] PVD techniques for memory [6] and neuromorphic engineering applications [7]; biomaterial surface engineering for deposition of antibacterial coatings [8]; DLC hard coatings on implants for protection against corrosion and erosion [9]; implantation of short-lived ß-emitting radioisotopes in medical implants [10]; and controlling the corrosion rate of biodegradable materials.
[1] J. B. Kpetsu et al., Nanoscale Res. Lett. 5, 539–544 (2010).
[2] P. Vachon et al., J. Phys. D: Appl. Phys. 54, 295202 (2021).
[3] A. Sarkissian et al., CAP/COMP/CASCA 2004 Congress, Winnipeg, June 13, 2004.
[4] S. Babaei et al., Plasma Process. Polym. 17, e2000091 (2020).
[5] D. Benetti et al., Sci. Rep. 7, 2503 (2017), doi:10.1038/s41598-017-02284-0.
[6] F. Ambriz-Vargas et al., Appl. Phys. Lett. 110, 093106 (2017).
[7] G. Kolhatkar et al., ACS Appl. Electron. Mater. 1, 828–835 (2019).
[8] L. Bonilla-Gameros et al., Nanomedicine: Nanotechnology, Biology and Medicine 24, 102142 (2020).
[9] G. Morand et al., Surf. Interface Anal. 53, 658–671 (2021).
[10] F. Marion et al., Plasma Sources Sci. Technol. 18, 015014 (2009).
Nanocomposite (NC) thin films are widely studied due to the multifunctional properties they can develop (optical, electrical, mechanical). Many deposition methods are under development, with particular interest in processes operating at atmospheric pressure, such as the dielectric barrier discharge (DBD).
Recently, a new process for injecting nanoparticles into plasmas has been developed [1]. This method consists of synthesizing the nanoparticles prior to their injection into the plasma in a low-frequency pulsed injection regime. However, the impact of the pulsed liquid injection on the DBD physics is still an open question.
This work aims to study a pulsed-liquid-assisted DBD deposition process. In contrast with the continuous nebulization of solutions, pulsed injection causes a sudden increase in the quantity of precursor, present as droplets, in the inter-dielectric space, with average droplet velocities in the 10 m/s range. We observed that the discharge stability is modified depending on the process parameters (injection times, pulse frequency, continuous gas flow rate, etc.). These parameters are also critical for the transport and evaporation of the droplets, and hence for the thin film deposition (here ppHMDSO). For example, by varying the different parameters of the pulsed-liquid-assisted DBD, we observe that the deposit can consist of different phases (liquid and solid), depending on the residence time of the aerosol and the thickness of the deposited layer.
[1] Kahn, M., Champouret, Y., Clergereaux, R., Vahlas, C. & Mingotaud, A.-F. Process for the preparation of nanoparticles. (2016).
Nitrogen-doped graphene, or N-graphene, is a promising material for a wide range of applications such as supercapacitors, optoelectronic devices, and biosensors. Nitrogen plasmas have proved to be an excellent route to generate N-graphene from polycrystalline monolayer graphene films grown by chemical vapor deposition (CVD). In this study, CVD graphene has been exposed to a low-frequency Townsend dielectric barrier discharge operated in nitrogen at atmospheric pressure. In such conditions, the discharge is weakly ionized, and the neutral gas temperature is close to 300 K. In addition, plasma-graphene interactions are dominated by plasma-generated N atoms and metastable N2(A) states, with the latter acting as a 6 eV energy reservoir. To investigate the mechanisms of nitrogen incorporation by the plasma-based process, Hyperspectral Raman IMAging (RIMA) and X-ray Photoelectron Spectroscopy (XPS) have been performed over different processing times. Clear defect generation is observed from the Raman signature, with a transition towards amorphization for longer discharge exposure times. Thanks to the high spatial resolution of RIMA, different Raman dynamics can be seen at the grain domains (GDs) versus at the grain boundaries (GBs) of CVD graphene. It is found that there is selective nitrogen incorporation at GDs, a feature linked to preferential healing of plasma-generated defects near GBs. N-uptake is further discussed using the model proposed by Robert-Bigras et al., in which defect generation plays a critical role in the N-incorporation kinetics.
Uncovering the nature of dark matter is one of the most important goals of particle physics. Light bosonic particles, such as the dark photon, are well-motivated candidates: they are generally long lived, weakly interacting, and naturally produced in the early universe. The LAMPOST (Light A' Multilayer Periodic Optical SNSPD Target) experiment searches for dark photon dark matter in the eV mass range via coherent conversion of dark photons to photons in a multilayer dielectric haloscope; the converted photons are subsequently collected with a superconducting nanowire single-photon detector (SNSPD).
In this talk, I will report on the recent progress of the LAMPOST experiment. In a prototype experiment, we achieve efficient photon detection with a dark count rate (DCR) of $\sim 6 \times 10^{-6}$ counts/s. We find no evidence for dark photon dark matter in the mass range of $\sim$ 0.7-0.8 eV with kinetic mixing $\epsilon \geq 10^{-12}$, improving existing limits in this mass range. I will also show recent progress in the experimental design and the performance of SNSPDs, and how these could allow us to probe significant new parameter space for dark photon and axion dark matter in the meV to 10 eV mass range.
SENSEI (Sub-Electron Noise Skipper Experimental Instrument) is a direct detection dark matter experiment with detectors operating at Fermilab and at the SNOLAB underground facility. The experiment consists of silicon Skipper-CCD sensors that make multiple non-destructive measurements of the charge contained in each pixel, reducing the readout noise to a level that allows for resolution of single electrons. This low energy threshold, along with low rates of events which may contain up to four electrons, results in competitive sensitivity for low-mass dark matter candidates which interact with electrons over a wide range of dark matter masses. This presentation will give an overview of the SENSEI experiment and the current status after the successful commissioning of the first batch of science-grade sensors at SNOLAB.
In the presence of radiation from bright astrophysical sources at radio frequencies, axion dark matter can undergo stimulated decay to two nearly back-to-back photons, meaning that bright sources could have counterimages in other parts of the sky. The counterimages will be spectrally distinct from backgrounds, taking the form of a narrow radio line centered at half the axion mass with a spectral width determined by Doppler broadening in the dark matter halo. The morphology of these images can be nontrivial, with blurring due to the geometry of the source and image as well as spatial smearing due to the galactic kinematics of axion dark matter. I will show that the axion decay-induced counterimages of galactic sources may be bright enough to be detectable with ongoing observations from the FAST radio telescope as well as archival data from CHIME and other radio surveys.
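To make the expected signal shape concrete (an illustrative estimate with generic halo parameters, not numbers from the talk): each photon from the decay $a \to \gamma\gamma$ carries energy $m_a c^2/2$, so the line appears at frequency $\nu \simeq m_a c^2 / 2h$, with a fractional Doppler width set by the halo velocity dispersion, $\Delta\nu/\nu \sim \sigma_v/c \sim 10^{-3}$. For example, a hypothetical $m_a c^2 = 2\ \mu\mathrm{eV}$ axion would give a line near 242 MHz with a width of order 200 kHz.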
The Super Cryogenic Dark Matter Search (SuperCDMS) is a direct detection experiment optimized for low-mass dark matter searches. Composed of silicon and germanium crystal bolometers, the experiment utilizes transition-edge sensor (TES) technology to measure the small heat signals that result from particle interactions with the bulk crystal. While the experiment is small compared to ton-scale experiments, the low energy threshold of these detectors enables searches for low-mass dark matter. More recently, a gram-scale SuperCDMS prototype detector (HVeV) was developed, achieving eV-scale resolution and resolving single electron-hole pair events thanks to the high voltage (HV) applied across the detectors, which amplifies ionizing events. Traditional direct detection searches have relied on dark matter nuclear recoils as their signal. Electron-recoil dark matter (ERDM) is another avenue through which dark matter could interact with the Standard Model, and it has gained interest recently in searches for light dark matter (LDM) candidates. In this talk, I will present recent updates from Run 4 of the HVeV program at NEXUS (Northwestern EXperimental Underground Site) at Fermilab, probing ERDM candidates such as dark photons and axion-like particles, as well as generic LDM electron-scattering signals.
The inclusion of thermodynamic pressure has been one of the major developments in black hole thermodynamics in recent years. By incorporating pressure, black holes are now known to exhibit behaviour corresponding to that seen in a broad variety of chemical systems, including liquid-gas type transitions, reentrant phase behaviour, polymer-like transitions, superfluid phase behaviour, and more. Consequently the subject has come to be known as Black Hole Chemistry. While black hole triple points — analogous to the ice-water-steam triple point — were discovered several years ago, only recently has it been shown that black holes can have multicritical points, in which four or more phases coalesce at a single temperature and pressure. This phenomenon — seen in colloids and polymers — can take place in Einstein gravity for multiply rotating black holes, as well as in charged black holes in non-linear electrodynamics, Lovelock gravity, and Generalized Quasitopological Gravity. I will describe how multicriticality arises in black holes and how the Gibbs Phase Rule governs such behaviour.
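As a reminder of the counting behind that statement (standard thermodynamics, recalled here for context): the Gibbs phase rule gives the number of free intensive variables as $F = C - P + 2$, so, loosely speaking, a one-component system ($C = 1$) can have at most three phases meeting at an isolated point ($P = 3$, $F = 0$), the ordinary triple point. Having four or more black hole phases coalesce therefore requires additional independent thermodynamic variables, which is the role played by the extra rotation parameters or higher-curvature couplings mentioned above.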
Relativistic quantum metrology is a framework that not only accounts for both relativistic and quantum effects when performing measurements and estimations, but further improves upon classical estimation protocols by exploiting quantum relativistic properties of a given system.
Here I present recent developments in the Fisher information analysis associated with black hole spacetimes. I review recent work in relativistic quantum metrology that examined the Fisher information for estimating thermal parameters in (2+1)-dimensional AdS and static BTZ black hole spacetimes. Treating Unruh-DeWitt detectors coupled to a massless scalar field as probes in an open quantum systems framework, I extend these recent results to the (2+1)-dimensional rotating black hole spacetime. We find that varying the angular momentum of the BTZ black hole leads to dramatic changes in the Fisher information for appropriate black hole and detector parameters.
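For orientation, the central quantity here is the Fisher information; in its classical form, for a parameter $\theta$ inferred from detector outcomes $x$ with likelihood $p(x|\theta)$, it reads $F(\theta) = \sum_x p(x|\theta)\,[\partial_\theta \ln p(x|\theta)]^2$, and the Cramér-Rao bound $\mathrm{Var}(\hat{\theta}) \geq 1/[N F(\theta)]$ for $N$ independent probes shows that larger Fisher information translates directly into tighter estimates of, e.g., the black hole temperature or angular momentum. (This is the textbook definition, stated here for readers new to quantum metrology; the analysis in the talk may use its quantum generalization.)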
During simulations of a binary black hole collision, the final (post-merger) black hole horizon exhibits a decaying oscillation. There is also an observable gravitational wave signal from this black hole ringdown. Then it is natural to think that the oscillation generates the gravitational wave signal. However, this is not the case. By definition the black hole horizon (either event or apparent) cannot send signals to infinity. Instead, both the ringdown and signal must correlate with evolving, near-horizon gravitational fields which send signals both out to infinity as well as into the black hole. What then can an evolving horizon geometry tell us about the surrounding spacetime? Quantifying an answer to this question requires not only the Einstein equations but also a careful consideration of “physically reasonable” initial and boundary conditions. In this talk I will discuss recent progress that we have made on this problem.
We derive a "classical-quantum" approximation scheme for a broad class of bipartite quantum systems. In this approximation, one subsystem's evolution is governed by classical equations of motion with quantum corrections, and the other subsystem evolves quantum mechanically with equations of motion informed by the classical degrees of freedom. Similar approximations are common when discussing the backreaction of quantum fields on curved spacetime, as in Hawking radiation around black holes or the generation of primordial perturbations in inflation. We derive an estimate for the growth rate of entanglement between the subsystems, which allows us to predict the "scrambling time": the amount of time required for the subsystems to become significantly entangled. We illustrate the general formalism by numerically studying the fully quantum, fully classical, and classical-quantum dynamics of a system of two oscillators with non-linear coupling.
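One common way to make the scrambling time quantitative (a standard convention, stated here for orientation rather than taken from the paper): track the entanglement entropy of one subsystem's reduced state, $S_A(t) = -\mathrm{Tr}[\rho_A(t) \ln \rho_A(t)]$ with $\rho_A = \mathrm{Tr}_B\, \rho$, which vanishes for an initial product state; the scrambling time is then roughly the time at which $S_A$ grows to order unity, a marker of when a classical-quantum description of the pair is expected to break down.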
Marginally outer trapped tubes are one of the essential tools for understanding the dynamical evolution of black holes. In this talk, I will present a new symplectic formalism that applies to various spacetimes containing a BH. This framework allows the study of charges, flux laws, and higher multipole moments. All of this is directly linked to the study of gravitational waves.
This talk reviews selected topics of the Fundamental Symmetries sector of Nuclear Physics from the theoretical point of view. It focuses on three themes of interest to the Canadian experimental community: electric dipole moments, neutral currents, and beta decays. Lepton flavor violation, where major new international experiments are about to come online, will also be discussed.
Experimental tests of fundamental symmetries using nuclei and other particles subject to the strong nuclear force have led to the discovery of parity (P) violation and the discovery of charge-parity (CP) violation. It is believed that additional sources of CP-violation may be needed to explain the apparent scarcity of antimatter in the observable universe. A particularly sensitive and unambiguous signature of both time-reversal- (T) and CP-violation would be the existence of an electric dipole moment (EDM). The next generation of EDM searches in a variety of complementary systems (neutrons, atoms, and molecules) will have unprecedented sensitivity to physics beyond the Standard Model. This talk will focus on current and planned experiments that use radioactive isotopes with pear-shaped nuclei. This uncommon nuclear structure significantly amplifies the observable effect of T, P, & CP-violation originating within the nuclear medium when compared to isotopes with relatively undeformed nuclei such as Mercury-199. Certain isotopes of Radium (Ra) and Protactinium (Pa) are expected to have greatly enhanced sensitivity to symmetry violations and will be produced in abundance at the Facility for Rare Isotope Beams currently operating at Michigan State University. I will describe the current status of ongoing searches and the prospects for next-generation searches for time-reversal violation, possibly using radioactive molecules to further enhance the new-physics sensitivity in the FRIB era.
The MOLLER experiment at Jefferson Lab aims for an ultra-precise determination of the weak mixing angle $\sin^2\theta_W$ by measuring the parity-violating asymmetry $A_{\rm PV}$ in polarized electron-electron (Moller) scattering. For the approved 88-calendar-week run, the proposed accuracy on $A_{\rm PV}$ is 0.7 parts per billion, corresponding to an overall relative measurement accuracy of 2.4% for the electron's weak charge and 0.1% for the weak mixing angle. The measurement will enhance our understanding of fundamental symmetries of the electroweak interaction and provide a powerful search for physics Beyond the Standard Model. MOLLER represents a 4th-generation parity violation experiment at Jefferson Lab and has an experienced collaboration working closely with an integrated lab management team. The project is fully funded and on schedule for assembly in Hall A starting in early 2025. This talk will give an introduction to the MOLLER physics motivations and reach, present details of the apparatus and experimental techniques, and conclude with a brief progress update on status and plans.
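To see how a few-percent weak-charge measurement yields a 0.1% weak-mixing-angle determination (tree-level relations only, radiative corrections omitted): the measured asymmetry $A_{\rm PV} = (\sigma_R - \sigma_L)/(\sigma_R + \sigma_L)$ is, at leading order, proportional to the electron's weak charge $Q_W^e = 1 - 4\sin^2\theta_W$. Because $Q_W^e$ is numerically small ($\approx 0.05$), the error propagation $\delta(\sin^2\theta_W) = \delta Q_W^e/4 = (Q_W^e/4)\,(\delta Q_W^e/Q_W^e)$ suppresses the fractional uncertainty: a 2.4% measurement of $Q_W^e$ corresponds to $\delta(\sin^2\theta_W) \approx 0.0003$, i.e. roughly 0.1% of $\sin^2\theta_W \approx 0.24$.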
The MOLLER (Measurement Of a Lepton Lepton Electroweak Reaction) experiment, in preparation at Jefferson Lab, aims to constrain physics beyond the Standard Model using parity-violating Moller scattering at 11 GeV. The parity-violating asymmetry between the cross-sections for right- and left-handed helicity beam electrons scattered from the atomic electrons in a liquid hydrogen target is expected to be 35.6 ppb and MOLLER aims for 0.73 ppb precision. The measured asymmetry will be used to determine the weak charge of the electron to a fractional accuracy of 2.4%. Among the most challenging aspects of the experiment will be the detection of the small asymmetry in the detector signal. To prepare for production running, we must fully characterize the MOLLER main detector system through a combination of simulation and beam tests. This talk will provide an overview of the main detector system with a focus on radiation testing of our integrating electronics.
Topological semimetals display a range of novel transport phenomena, including enormous magnetoresistance and mobilities. Using Raman spectroscopy, we have uncovered a novel mechanism by which phonons play a central role in these phenomena. Specifically, we demonstrate that the phonon-electron scattering time far exceeds the phonon-phonon scattering time. As such, the momentum and energy typically lost to the lattice are returned to the electron bath. I will also briefly discuss the key material properties that make this discovery possible.
Spectroscopic techniques have made remarkable progress in the past decades and have played a critical role in advancing our understanding of quantum and topological materials. However, the interpretation of spectroscopic data and the information extraction process can be highly nontrivial. In this symposium talk, we introduce machine learning as an auxiliary technique for various experiments that can lead to an improved understanding of quantum phenomena. We show that it can enhance the identification of nuanced magnetic effects at topological insulator interfaces [1], and can directly be used to predict materials’ topological classes using simple spectral indicators [2]. Beyond resolution improvement or classification, we show that machine learning can also be used to predict materials properties that are challenging to obtain by conventional methods, such as the phonon density-of-states [3] and phonon dispersion relations [4], or to extract hidden information in time-resolved data [5]. We highlight the importance of the representations and envision a few more challenging problems that can benefit from machine learning [6], from strongly correlated systems to finding topological materials that are ready for room-temperature devices.
[1] https://aip.scitation.org/doi/10.1063/5.0078814
[2] https://onlinelibrary.wiley.com/doi/abs/10.1002/adma.202204113
[3] https://onlinelibrary.wiley.com/doi/10.1002/advs.202004214
[4] https://arxiv.org/abs/2301.02197
[5] https://onlinelibrary.wiley.com/doi/10.1002/adma.202206997
[6] https://aip.scitation.org/doi/10.1063/5.0049111
Topological semimetals can host novel fermionic particles whose intriguing interactions and many-body phases can be studied experimentally. I will discuss the particularly exciting class of Rarita-Schwinger-Weyl semimetals hosting spin-3/2 electrons with linear dispersion at a four-fold band crossing point, realized experimentally in quantum materials in the last years. I will combine symmetry considerations, perturbative renormalization group analysis, and mean-field theory to discern several exotic interacting phases that are prone to emerge in the strongly correlated regime.
One intriguing feature of 2D hyperbolic-lattice models is that their band theories live in hypertoric Brillouin zones. The high-dimensional band structures and the dimensional mismatch between real and momentum spaces present uncharted territory beyond Euclidean topological phases. This work investigates topological phases exhibited by hyperbolic Haldane models, which are generalizations of the graphene Haldane model to various regular hyperbolic lattices. A comprehensive symmetry analysis is performed to constrain the multiple independent first and second Chern numbers arising from the high-dimensional bands. Our extensive analysis of both real- and momentum-space models shows frequent occurrence of topological gaps, most of which are characterized by first Chern numbers of 1 and some by 2. Importantly, the numerically computed first Chern numbers respect the predicted symmetry constraints and agree with real-space topological markers, implying a direct connection to observables such as the number of chiral edge modes. With our large repertoire of models, we further demonstrate that the topology of hyperbolic Haldane models is trivialized by strong curvature of the underlying lattices.
Opening remarks
Dr. Phil Kaye graduated in the first PhD cohort from Waterloo’s Institute for Quantum Computing in 2007. From 2004 to 2018, he served in a variety of roles with the Government of Canada’s Communications Security Establishment, primarily as a trusted advisor on the impacts of quantum technologies. From 2004 to 2010, he was the Program Reporter for the Canadian Institute for Advanced Research’s Quantum Information Processing Program. In 2007, Phil co-authored a seminal textbook on quantum algorithms (“An Introduction to Quantum Computing”, Kaye, Laflamme, Mosca, 2007). From 2018 to 2020, he worked for D-Wave Systems as Program Director, Corporate Affairs. In 2019 he co-founded and chaired Quantum Industry Canada (QIC), a consortium representing over 24 Canadian quantum technology companies. Presently, Phil is leading NRC’s Applied Quantum Computing Challenge program. In his spare time, Phil pilots an airplane that he built in his garage, plays the guitar and composes music.
Dr. Rezaee earned his PhD in Physics from the University of Tennessee, Knoxville, in 2015, and subsequently held postdoc positions at Texas A&M University and the University of Ottawa. While at the University of Ottawa, Dr. Rezaee played a crucial role in establishing the Joint Centre for Extreme Photonics (JCEP) lab, a collaborative initiative with the NRC. His academic pursuits revolve around ground-breaking quantum light sources, diamond-based quantum sensing, and quantum simulation. Beyond academia, Dr. Rezaee has founded three quantum hardware startups in both Canada and the USA. While doing so, he also successfully completed the Creative Destruction Lab-QML programs at the Rotman School of Business and participated in Y Combinator's W19 cohort. Driven by an unwavering passion for quantum technologies, Dr. Rezaee continues to help bridge the gap between cutting-edge science and transformative commercial ventures in his current role as Mitacs' national team lead for quantum technologies.
Dr. Nipun Vats is the Assistant Deputy Minister, Science and Research Sector, at the Department of Innovation, Science and Economic Development Canada. In this role, he is responsible for leading the development of federal policy and investments in post-secondary research.
He has held a variety of positions within the Canadian federal government, including in the Privy Council Office and the Department of Finance, and as Secretary to a National Advisory Panel on Sustainable Energy Science and Technology. He has also served as the lead federal official in the successful negotiation of the Canadian Free Trade Agreement.
In this talk, we will discuss a model of quantum gravity in which dynamical spacetime arises as a collective phenomenon of underlying quantum matter. In the model, the pattern of entanglement formed across local Hilbert spaces determines the dimension, topology and geometry of an emergent spacetime. After discussing the general structure of the model, we will describe the dynamics of a semi-classical solution that describes a (3+1)-dimensional de Sitter-like spacetime with the Lorentzian signature. Small fluctuations around the semi-classical solution include the propagating gapless graviton.
Session II: Superbiomolecular Assemblies
Nucleic acids are the most basic molecules of life, being tasked with storing and transmitting genetic information in all living organisms. Both DNA and RNA are composed of fundamental building blocks that each include a nucleobase (A, G, C, T/U), sugar ([deoxy]ribose), and phosphate moiety. To enhance nucleic acid programmability and stability, and aid the formation of functional 3D shapes, nucleotides are commonly modified in nature. Indeed, DNA nucleobases are methylated to control gene expression, while the identification of over 130 distinct modifications in RNA has led to the emerging field of epitranscriptomics. Furthermore, the ease of synthesis of nucleic acids functionalized at any nucleobase, sugar, or phosphate site, as well as the ability of modifications to impact pairing, chemical stability, conformation, and interactions with proteins, has led to the development of a wealth of unique modifications with far-reaching applications. For example, modified nucleic acids have been designed for medicinal uses such as drugs, vaccines, bioprobes, antimicrobials and tissue engineering, as well as for nanomaterials to build nanowires, nanomachines and nanorobots. Unfortunately, the lack of known structure–function relationships for a range of modified nucleic acids raises questions such as why nature introduces modifications and how modifications can be used to their full potential in valued applications. This talk will provide a survey of some of the recent topics of interest in my lab that use computer modeling to gain a fundamental understanding of the diverse chemistry of modified nucleic acids. The information gained from computer simulations fills knowledge gaps by providing a greater understanding of the role of nucleic acid modifications in nature and improving the design of original modified nucleotides for novel applications.
Hypoxia is a characteristic pathophysiological property of advanced solid tumours which influences aggressiveness and resistance to treatment. Real-time measurement of tumour oxygenation is thus vital for stratifying treatment plans by hypoxic severity and monitoring variations in partial pressure of oxygen (pO$_2$) caused by high energy X-ray and other photonic therapies. Azobenzene photoswitches present a novel form of oxygen sensing predicated on their photophysical properties. Upon irradiation with light, azobenzenes undergo reversible geometric isomerization between stable trans and metastable cis isomers. The rate of cis-trans thermal relaxation is a first-order process sensitive to the molecular environment, which translates into sensor functionalities. In this work, we investigate a novel bio-inspired azobenzene photoswitch for oxygen sensitivity in solution.
A biomimetic material, FePc(PAP)$_2$ was synthesized by coordination of 4-phenylazopyridine (PAP) to iron (II) phthalocyanine (FePc). As a model system for oxygen sensing, FePc is capable of binding oxygen similarly to heme-porphyrin in blood. Solutions were purged with argon gas or flowed with oxygen gas to modulate pO$_2$. Isomerization kinetics were measured by pump-probe isomerization spectroscopy, wherein photoisomerization was initiated by irradiation with a 600 mW 365 nm LED, and then recovery of the $\pi$-$\pi^*$ absorption band was monitored as a function of time with a spectrophotometer. In addition, density functional theory (DFT) was used to theoretically calculate the activation barrier between cis and trans isomers, which can be related to isomerization rates in the presence or absence of oxygen.
To validate our experimental setup, the cis-trans isomerization rates of methyl orange and methyl red photoswitches were measured and found to be insensitive to oxygen, as expected from the literature. As a control, the isomerization lifetime of PAP was found to be several hours under ambient oxygenation. DFT calculations predict that the isomerization rate of a PAP-porphyrin system is an order of magnitude faster than that of PAP. As a proof-of-principle demonstration of a new molecular sensor for evaluating tumour oxygenation, the isomerization rates of PAP and FePc(PAP)$_2$ were experimentally and computationally determined as a function of oxygen concentration and will be reported in this work.
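For orientation, the first-order thermal relaxation and the DFT barrier mentioned above are connected by standard kinetics (generic relations, with symbols defined here rather than taken from the abstract): the cis population decays as $c(t) = c_0\, e^{-t/\tau}$, so the recovering trans $\pi$-$\pi^*$ absorbance follows $A(t) = A_\infty - (A_\infty - A_0)\, e^{-t/\tau}$, and the lifetime is related to the isomerization free-energy barrier $\Delta G^\ddagger$ through the Eyring relation $1/\tau = (k_B T/h)\, e^{-\Delta G^\ddagger / RT}$. Because the dependence on the barrier is exponential, even a modest oxygen-induced change in $\Delta G^\ddagger$ should translate into a readily measurable change in $\tau$, which is the basis of the proposed sensing scheme.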
Understanding and controlling structural organization mechanisms is a key challenge in producing synthetic materials that mimic the complexity seen in organisms. This talk will present recent advances in controlling the local organization of colloidal building blocks, and a "pre-assembly" approach to produce hierarchically-structured materials.
A low-temperature plasma (LTP) is being advanced as an alternative radiation source that offers unique chemical properties owing to the variety of reactive plasma species (RPS), such as radicals, electrons, and excited species, delivered to and formed in media upon exposure. Our current research explores the possibility of implementing DNA and its damage as a probe for specific plasma diagnostics such as RPS formation and transient local heating. Both LTP characteristics have been analyzed based upon the detection of plasma-induced strand breaks and DNA denaturation. Our previous studies showed that DNA can be utilized as a probe for RPS, particularly for reactive oxygen and nitrogen species that cause strand breaks in aqueous DNA. Moreover, the yield of strand breaks can be varied by tuning plasma parameters because of DNA’s susceptibility to all RPS. Recently we observed previously undetected DNA denaturation in addition to the DNA strand breaks present upon plasma irradiation. Thus, our primary focus has been to determine whether DNA denaturation, known to occur during heating, may be a reliable indicator of the plasma’s elevated gas temperature. In parallel, we performed measurements of the LTP gas temperature using a conventional temperature sensor. Surprisingly, we observed denaturation for combinations of plasma parameters that form a jet with a temperature well below the thermal decomposition temperature of DNA. To understand this effect, we implemented a physics-guided neural network model to predict the formation of strand breaks and denaturation and their yields for a given combination of LTP parameters. Using predictive modeling, we obtained the evolution of these two types of DNA damage as a function of voltage (and power), frequency, flow rate, and irradiation time. Based on our findings, we suggested that the denaturation of DNA can be attributed to transient local heating of the aqueous DNA (“hotspots”), while bulk heating was not observed.
Within the past decade micro-plasma jets in contact with liquids have been the focus of international research. They have shown great potential in applications ranging from surface treatment to medicine. To be able to control these jets for precise application, a fundamental understanding of the underlying processes is required. For this, detailed diagnostics need to be performed, which are challenged by the plasma jet’s high gradients, multiphase transport processes and interfaces of plasma and liquid or solid.
Most conventional plasma diagnostics fail in cases of non-equilibrium processes at atmospheric pressure. Ultrafast laser spectroscopy, however, permits the diagnosis of fundamental plasma properties, such as the reduced electric field or flow properties and gas composition, at timescales much shorter than collisional processes.
The talk presents current developments in the field of ultrafast laser diagnostics and the challenges associated with single-shot measurements.
A compromise between the information gained from single-shot measurements and the high signal-to-noise ratio of averaged measurements can be achieved through data post-processing or advanced averaging methods.
Thomson scattering (TS), the elastic scattering of light photons by charged particles, is a powerful diagnostic for the measurement of electron properties (density and temperature) in low-temperature plasmas (LTP). It is in fact one of the few diagnostics capable of simultaneously providing electron density ($n_e$) and electron temperature ($T_e$) information at the nanosecond timescale. As a result of the implementation of this diagnostic, many insights have been gained on electron kinetics in diverse low-temperature discharges. In most situations, TS in LTP is encountered in the non-collective (or incoherent) regime, meaning that scattering signals from individual charged particles are added together. Besides, because ions are generally in thermal equilibrium with the neutrals constituting the background gas, TS essentially gives information about the hot electrons. However, for high-density plasmas (typically $n_e > 10^{17}$ cm$^{-3}$), the collective (or coherent) TS regime is generally observed. In the collective regime, light photons are scattered off plasma waves (instead of individual charged particles). In such a configuration two different spectral features are observed: electron and ion features, which result from scattering off the so-called electron plasma waves (EPW) and ion acoustic waves (IAW), respectively. While the ion feature is observed near the probe laser spectral location, the electron feature is observed far from it. On the other hand, scattering off IAW results in stronger collected signals than scattering off EPW. Probing simultaneously the electron and ion features of a high-density plasma would in principle provide a plethora of information about the plasma conditions: $n_e$, $T_e$, $T_i$ (ion temperature), $Z$ (average charge state), $v_{ei}$ (electron-ion relative drift velocity) and $V$ (fluid velocity).
We show through forward modeling the feasibility of implementing such a diagnostic for laser-produced tin droplet plasmas generated during the ablation of 30-80 µm tin droplets by a 10 ns Nd:YAG laser emitting at 1064 nm. Such plasmas are currently employed as extreme ultraviolet light sources (at 13.5 nm ± 1%) for the semiconductor industry.
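The boundary between the two regimes discussed above is conventionally quantified by the scattering parameter (standard notation, recalled here for reference): $\alpha = 1/(k \lambda_{De})$, where $k \approx 2 k_i \sin(\theta/2)$ is the scattering wave vector for probe wavenumber $k_i$ and scattering angle $\theta$, and $\lambda_{De} = \sqrt{\epsilon_0 k_B T_e/(n_e e^2)}$ is the electron Debye length. For $\alpha \ll 1$ the scattering is non-collective (incoherent), while for $\alpha \gtrsim 1$ it is collective, producing the EPW and IAW features. Since $\lambda_{De} \propto \sqrt{T_e/n_e}$, the high densities of laser-produced tin plasmas naturally push the measurement toward the collective regime.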
Non-equilibrium effects are ubiquitous in laboratory plasmas and need to be considered to optimize reactor performance for specific applications. In the low-temperature plasma (LTP) community, there are ongoing discussions on how to define reaction mechanisms and verify them. Such efforts would enable a move toward predictive modelling and accelerate innovation. In this contribution, we will discuss a couple of cases illustrating different non-equilibrium effects that play a direct role in the yield of (reactive) plasma species. While external electric and magnetic fields come first to mind for controlling non-equilibrium plasma properties, we will focus more specifically on flow and wall effects. These effects are usually hard to quantify and generate additional challenges for constructing and validating a plasma chemistry model. A better understanding (and/or control) of them would allow significant steps forward in the development of predictive models. The current state of the art will be outlined and steps toward the definition of reaction mechanisms discussed.
High energy neutrinos from cosmic sources are one of the most exciting subjects of study in particle physics. They allow access to energy ranges otherwise unobtainable, and since neutrinos point back to their origin, they offer deep insights into the sources of the highest-energy processes in the universe.
The P-ONE collaboration is aiming to construct a large scale ocean based neutrino observatory in the Canadian Pacific Ocean to provide new capabilities and added active volume to the existing, highly successful observatories around the world. We will report on the progress of designing and constructing a prototype sensor array for deployment in the ocean in the coming years.
Advanced LIGO and Advanced Virgo have confidently detected dozens of gravitational wave (GW) signals from colliding black holes and neutron stars. As these GW detectors improve and more are added to the global network, the expected rate of detected events will increase (with the cube of the sensitive range) and our ability to constrain the properties, including likely sky location, will improve. I will discuss the challenges for extracting a high expected rate of GW signals from noisy gravitational wave detector data and an emerging suite of machine learning methods developed to better distinguish true astrophysical signals from non-stationary LIGO detector noise. I’ll give my perspective on the implications for GW candidate alerts and future multi-messenger discoveries during the next international GW network observing run (expected to start in May 2023).
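To make that scaling concrete (illustrative arithmetic only): for a roughly uniform source population, the surveyed volume and hence the expected detection rate scale as $N \propto R^3$ with the sensitive range $R$, so a 50% improvement in range yields about $1.5^3 \approx 3.4$ times more detections per unit observing time, while doubling the range yields about 8 times more.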
The planned upgrade of the Large Hadron Collider to quadruple the luminosity requires a substantial corresponding upgrade to the ATLAS detector in order to continue to keep up with the challenging experimental conditions that high luminosity imposes. Canada is participating in a wide range of these planned upgrades, with a particular focus on a new silicon strip detector and upgraded electronics for the liquid argon calorimeter. Recent progress and achievements will be discussed, as well as prospects for the physics reach of the upgraded detector.
Quantum chemistry has been identified as one of the prime applications for quantum computers. At present, the majority of quantum algorithm developments have the Noisy Intermediate Scale Quantum (NISQ) architecture in mind, for which it is important to design quantum circuits with low circuit depth to minimize noise and error propagation. In this talk, I will present a modular circuit that achieves short circuit depths while allowing for a quantum chemical interpretation in terms of resonating valence bond structures. I will discuss applications to small molecular systems. Joint work with Ehsan Ghasempouri and Gerhard Dueck.
The search for the invisible dark matter particle is complicated due to the uncertainties in its distribution in our Galaxy. An accurate determination of the dark matter phase space distribution in the Solar neighborhood is crucial for the correct analysis and interpretation of data from dark matter direct detection experiments. Massive satellites such as the Large Magellanic Cloud can impact the dark matter halo of the Milky Way, and boost the dark matter velocity distribution in the Solar neighborhood. I will present the local dark matter distribution of Milky Way-like galaxies extracted from state-of-the-art cosmological simulations, and discuss their implications for direct dark matter searches. I will also discuss how the dark matter component of the Large Magellanic Cloud can alter the results.
I will give an overview of pseudo-Dirac dark matter, a scenario in which a small Majorana mass splits charged Dirac dark matter into two nearly degenerate states. A longtime favourite of model-builders, this dark matter candidate has a rich phenomenology that has yet to be fully characterized. I will discuss a few mechanisms for producing this kind of dark matter in the early universe, and will show various ways in which this candidate would manifest itself in the subsequent cosmology, in astrophysical systems, and in terrestrial experiments.
Antimatter and gravity are subjects of two of the biggest mysteries in physics: How can we explain the observed excess of matter over antimatter in the universe? And, how can the theories of gravity and quantum mechanics be unified? Antihydrogen, as the simplest purely antimatter atomic system, is a natural candidate for experimentally testing some fundamental theories related to these questions. For example, CPT (Charge-Parity-Time) symmetry predicts that the spectra of hydrogen and antihydrogen should be identical. Because the hydrogen spectrum is one of the best understood in physics, similar measurements of antihydrogen can provide a precise test of this symmetry. In addition, because antihydrogen is electrically neutral it can be used as a probe of the gravitational interaction between matter and antimatter. If the weak equivalence principle in general relativity holds, then the gravitational mass of antimatter should be identical to that of matter but so far there have been no direct free-fall style experiments to test this.
The ALPHA antihydrogen experiment at CERN’s Antiproton Decelerator has made major strides in the trapping and spectroscopy of antihydrogen. In recent years, the ALPHA collaboration has turned its attention toward the weak equivalence principle with the construction of a new apparatus, known as ALPHA-g, that aims to measure the gravitational acceleration of antihydrogen. In this experiment, antihydrogen atoms are magnetically confined and then allowed to escape up or down. The up-down balance of atoms that escape will allow a measurement of the gravitational acceleration of antihydrogen. ALPHA-g has been successfully commissioned and the first measurement campaign was completed in 2022. This talk will discuss the details of the ALPHA-g apparatus, the experimental methodology, and the latest results of the experiment.
The TRIUMF UltraCold Advanced Neutron (TUCAN) Collaboration is developing a new ultracold neutron (UCN) source for installation at TRIUMF. High-energy neutrons will be produced by directing protons from the TRIUMF cyclotron onto a tungsten target. The neutrons will undergo moderation in two steps to reduce their energy, first in a heavy-water moderator and then in a liquid-deuterium moderator. The moderated neutrons then enter a superfluid helium volume where they will be converted into UCN through superthermal processes. The goal is for the source to be the world's highest-density UCN source, surpassing current UCN source densities by at least one order of magnitude.
As UCN can be stored in material containers for hundreds of seconds, they are ideal for experiments on the fundamental properties of neutrons. To take advantage of this, the first experiment planned for this UCN source is a measurement of the neutron electric dipole moment (nEDM). For this experiment, UCN will be confined to a material bottle where they will precess at a rate that is proportional to their electric and magnetic dipole moments and the applied fields. By precisely measuring the difference in the precession frequency between parallel and anti-parallel field configurations, the nEDM can be determined. According to current models we expect to detect $1.43\times 10^{6}$ UCN per measurement cycle, which should allow us to reach our target statistical accuracy of $1\times 10^{-27}$ e-cm in $400$ measurement days. This is approximately 20 times more precise than the current world's best measurement of $1.8\times 10^{-26}$ e-cm (90% CL), performed by the nEDM Collaboration at the Paul Scherrer Institut with $1.5\times 10^{4}$ UCN per cycle, and is competitive with their planned future experiment n2EDM, which anticipates $1.21\times 10^{5}$ UCN per measurement cycle.
I will describe the planned UCN source and nEDM experiment, as well as the current status of the efforts.
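The statistical reach quoted above follows from the standard shot-noise estimate for a Ramsey-type EDM measurement (generic formula; apart from the UCN numbers quoted above, the symbols are not taken from the abstract): $\sigma_d \simeq \hbar / (2 \alpha E T \sqrt{N})$, where $E$ is the applied electric field, $T$ the free-precession time per cycle, $\alpha$ the visibility of the Ramsey fringes, and $N$ the total number of detected UCN. Since $N$ grows linearly with the number of measurement cycles, the sensitivity improves as the square root of the measurement time, which is how $1.43\times10^{6}$ UCN per cycle accumulated over 400 measurement days can reach the $10^{-27}$ e-cm level.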
The pursuit of fundamental interactions requires ever increasing precision in theory and experiment. Ion-trapping techniques have been deployed and pioneered to investigate radioactive nuclides at the TITAN-TRIUMF facility. Experiments include precision mass spectrometry of superallowed $\beta$ emitters to investigate isospin symmetry and to test the unitarity of the quark-mixing matrix. To further these studies, a redesigned Penning-trap system has been commissioned to achieve precisions as low as $\delta m/m \sim 10^{-10}$. In this talk, I will contextualize the new Penning trap and other technical developments for studies of fundamental symmetries.
Majorana zero modes appear at the edges of topological superconducting wires as part of the bulk-boundary correspondence in these systems. Thanks to topology, Majoranas are robust against weak perturbations and this makes them promising candidates for qubit building blocks. In Majorana-based qubits quantum information is stored non-locally, avoiding many sources of decoherence. In such qubits logical operations amount to coupling and exchanging Majoranas in space. This means that in order to perform quantum operations we need to learn how to manipulate Majorana modes while reducing diabatic errors.
In this talk we will discuss Majorana chains, how to improve their reliability and how to manipulate them safely on a wire.
Topological nanowires, i.e., topological materials confined in one dimension (1D), hold great promise for robust and scalable quantum computing and low-dissipation interconnect applications, which could transform current computing technologies. To realize this promise, research on topological nanowires must continue to improve their synthesis and properties.
In this talk, I will discuss my group’s efforts to develop a high throughput and precision synthesis method to fabricate 1D topological systems (APL Materials 10, 080904 (2022)). I will highlight our studies on topological crystalline insulator SnTe nanowires and topological metal MoP nanowires and discuss their potential applications. Using SnTe nanowires as weak links in Josephson junction devices, we discover a novel superconducting phase (npj Quantum Materials 6, 61 (2021)). With MoP nanowires, we show that the resistivity scaling of MoP nanowires is superior to those of the state-of-the-art Cu interconnects and Cu alternative metals, presenting MoP as a breakthrough metal for the low-resistance interconnect applications (Advanced Materials doi:10.1002/adma.202208965 (2023)).
The scaling of interconnect wiring in integrated circuits leads to increasing resistivity of Cu wires and degrades the chip power-performance significantly. Current research on alternative interconnect conductors is largely limited to conventional metals for mitigating the growing line resistance. Here we explore topological conductors as a potential solution. Using CoSi and NbAs as examples, we find that, through the dominant surface-state conduction, the resistivity in topological semimetals reduces with decreasing feature sizes in the nanometer scale. This trend holds even in the presence of mild disorder and grain boundaries, in sharp contrast to conventional metals. We will present detailed first-principles calculations and report experimental evidence for unconventional resistivity scaling in CoSi thin films, showing resistivity significantly below that of ideal bulk single crystals. We will conclude with a set of guidelines to screen topological semimetals for beyond-Cu interconnects and a list of key next steps.
Acknowledgements: Hsin Lin, Ion Garate, Gengchiau Liang, Nick Lanzillo, Utkarsh Bajpai, Shang-Wei Lien, Yi-Hsin Tu, Sushant Kumar, Ravishankar Sundararaman, Christian Lavoie, Oki Gunawan, Asir Khan, Guy Cohen, John Bruley, Vesna Stanic, Jean Jordan-Sweet, Peter Kerns, Teodor Todorov, Nathan Marchack, Cheng-Yi Huang, Chuang-Han Hsu, Tay-Rong Chang, Arun Bansil.
Weyl semimetals (WSMs) are materials whose low-energy excitations are Weyl fermions. Since its first observation in 2015, much work has gone into understanding the various properties of the WSM, most notably the Fermi arc -- a surface projection of the Berry flux connecting the WSM's zero-energy points. Here, we study the effects of tunnelling on the band structure and Fermi arc of a time-reversal broken WSM. When coupled to a simple non-magnetic parabolic band, the WSM's chiral arc state lowers in energy and forms, together with a previously extended state, a noticeable spin-dependent asymmetry in the interface spectrum in the vicinity of the Weyl nodes reminiscent of tunnelling in a Dirac cone. We study these effects with a lattice model which we solve numerically on a finite sample and analytically using an ansatz on an infinite sample, with both continuum and lattice frameworks. Our model agrees very well with the numerical simulation as it accurately describes the behaviour of the chiral state, from its energy asymmetry to the spin canting at the interface. We also find that the tunnelling effectively increases the Fermi arc length, allowing for the presence of interface states beyond the bare Weyl nodes in agreement with previous work. These additional states may also carry current along the interface and we propose methods to detect them experimentally.
A quantum physicist by training, Dr. Martin Laforest spent his career ensuring quantum technologies have a disruptive, yet positive impact on industry and society. Martin is currently managing partner for Quantacet, an early stage, quantum-focused investment fund and the director of Quantum Strategy for ACET, a Sherbrooke-based deep tech incubator offering specific mentoring tailored to quantum enterprises. Martin also serves as a technical advisor for DistriQ, Sherbrooke’s quantum innovation hub. Before moving to Sherbrooke, Martin was the senior product manager for ISARA Corporation, a quantum-safe security company. Martin also spent eight years promoting the impacts of quantum technologies to students, governments, companies and investors for the Institute for Quantum Computing at the University of Waterloo where he also received his PhD.
Udson Mendes has a PhD in Physics from the University of Campinas in Brazil. He was a postdoctoral fellow at the Laboratoire Pierre Aigrain at the École Normale Supérieure de Paris, and then at the Institut Quantique at the Université de Sherbrooke. Dr. Mendes’ research has been focusing on the development of quantum technologies ranging from quantum hardware to quantum algorithms. Since joining CMC, he created the world’s first cost-sharing fabrication service for superconducting devices and helped to train over 170 high qualified personnel in CMC’s quantum workshops. Moreover, Dr. Mendes leverages his expertise to lead a team of quantum scientists working on applications ranging from cybersecurity to protein design to cancer diagnosis.
Nicholas obtained his Ph.D. in Physics from Simon Fraser University, where his research focused on the numerical and phenomenological modeling of impurities in superconductors. In 2021, he joined Photonic, a full-stack quantum computing company based in British Columbia, as its first Quantum Software Engineer, leading the development of a laboratory measurement and control system. Realizing the need to take a simulation-driven approach to quantum computer design, he went on to establish and lead the Quantum Software and Simulations team. His team, comprising several Software Engineers and Quantum Software Engineers, develops a diverse range of simulation and design tools to accelerate the development of Photonic's quantum computers.
Catalina holds an MSc in Electronics from Los Andes University and an Engineering Diploma from IMT Atlantique in France, with a research focus on autonomous systems. She’s currently Quantum Community Manager at Xanadu, where she helps build the community around PennyLane. Today she’s working with professors from around the world, helping them include quantum programming in their courses. In the past, Catalina worked at IBM, where she was an IBM Quantum Ambassador.
Session III: Active Soft Matter
A remarkable diversity of morphologies exists among flagellated bacteria and, more broadly, motile microorganisms. To understand some of the consequences of these design choices, we numerically simulate the swimming motion of flagellated bacteria and model squirmers using a boundary element method. We show that interactions with solid surfaces bounding their fluid environment are particularly sensitive to parameters such as the cell body aspect ratio, the length, number and placement of flagella on the cell, and the effective stiffness of the flagellar hook. The behaviour of a bacteria-like swimmer near surfaces can be tuned by choosing particular configurations and varying the motor torque. We then characterize the interaction of swimmers with neutrally buoyant spherical particles in unbounded fluid. Interestingly, we find that large particles (e.g., 10 times the radius of the swimmer) can have a larger net displacement due to an encounter with a swimmer than smaller particles at the same impact parameter. This has implications for the effective enhancement of the diffusion coefficient of suspended particles in a bacteria-laden fluid. Based on numerical results, we estimate the effective diffusivity of a particle in a dilute bath of swimmers and show that there is a non-monotonic dependence on particle radius. Similarly, we show that the effective diffusivity of a swimmer scattering in a suspension of particles varies non-monotonically with particle radius. As with interactions with a planar surface, the details are highly dependent on the chosen swimmer, allowing the enhancement of diffusion to affect particles of a specific size more or less selectively.
The material state of embryonic tissues emerges from the collective interactions of cells. Most tissues are soft active materials that can flow or deform. This deformability is shown to be important for proper embryonic development. However, cell and tissue mechanics are experimentally difficult to probe in developing animals. Here, I will discuss our research developing computational and theoretical models to investigate how tissue material properties affect cellular functions and coordination. I will present verifiable mathematical models and predictions that we developed for various developmental processes.
Colloids are mesoscopic particles that enable a systematic study of inter-particle interactions in soft materials. The depletion interaction is an attractive effective interaction that can be tuned by polymer additives, while the amplitude and frequency of an external electric field can be used to tune the dipolar interaction. Using these two interactions simultaneously, we create multi-tunable colloids in which weak depletion results in increased crystalline order while stronger depletion increases disorder and results in novel gel states [1]. With these “dipolar-depletion” gels, we examine the onset of irreversibility and find strategies to accelerate aging.
[1] Shivani Semwal, Cassandra Clowe-Coish, Ivan Saika-Voivod, Anand Yethiraj, “Tunable colloids with dipolar and depletion interactions: towards field-switchable crystals and gels.”, Physical Review X 12, 041021 (2022).
Burst-mode ultrafast laser-materials treatments use high-repetition-rate (>MHz) delivery of femtosecond laser pulses. This takes advantage of characteristically tiny residual heat left in a substrate through individual femtosecond-laser-matter interaction. At the same time, the approach opens the door to manipulating the accumulation of that same tiny heat from rapid repetition. This mode of fluence-delivery can, for instance, transition brittle materials like glasses to ductile states, then cut aggressively while ductile and not susceptible to fracture, before the material naturally returns to its brittle state.
In solid dielectrics, isolated sub-picosecond laser pulses first create a limited plasma through nonlinear ionization, then increase that plasma through collisional ionization. In burst mode, the hypothesis is that some residual ionization persists for a few nanoseconds, meaning that subsequent pulses need not re-initiate dielectric breakdown. Instead, they see linear absorption in a state comparable to a metal or semiconductor. In effect, the plasma is ‘simmered’ continuously throughout a burst, controlling the mode and amount of absorption.
We report studies of the persistence of the plasma state in fused silica within a burst of ~60 pulses, each of 300 fs duration, arriving with an intra-burst repetition rate of 200 MHz (5ns separation). We measure -- pulse-by-pulse during the burst -- the partition of energy into specular scattering, diffuse scattering, transmission through the sample, and absorption of laser energy. With this, we determine the decay of the plasma created by one pulse, until the arrival of the next pulse 5 ns later, and we characterize the subsequent re-growth of the plasma.
In this picture, the absorption of any given pulse depends on the recent history of irradiation. The material response is therefore non-local in time, which we can frame as a material susceptibility that depends on the intra-burst repetition rate.
Filamentary plasma structures aligned with magnetic fields are ubiquitous in various space and laboratory plasma environments. In numerous magnetic confinement devices, such coherent structures, called blobs or blob-filaments, are intermittently formed in the boundary layer region of the device and transported across magnetic field lines through ExB convective motion. These structures can be much more efficient at transporting particles and energy than standard diffusive processes; it is therefore important to understand their propagation and stability. The magnetized plasma pressure filaments are often created in pairs or bundles; therefore, filament-filament interaction is important for purposes of estimating their lifetime. One other feature of these structures is the presence of internal steep pressure gradients, with density and temperature gradient scale lengths on the order of the cross-field filament size. This provides a free energy source for driving spontaneous low frequency excitations such as drift waves and vortices. It is the purpose of this study to understand the nonlinear saturated state of small scale (few electron skin depths) magnetized plasma pressure filaments that undergo drift wave turbulence driven by their internal pressure gradients. Experiments were designed to form controlled plasma pressure filament structures within a large linear magnetized plasma device; for this purpose the upgraded Large Plasma Device (LAPD) operated by the Basic Plasma Science Facility at UCLA was used. The setup consists of single or multiple biased probe-mounted cerium hexaboride (CeB6) crystal cathodes that inject low energy electrons along a strong magnetic field into a pre-existing cold afterglow plasma, thus forming plasma pressure filaments. Langmuir probes inserted in the plasma measure the low frequency (~10-20 kHz) gradient-driven fluctuations. A statistical study of the fluctuations reveals amplitude distributions that are skewed, which is a signature of intermittency in the transport dynamics. Large amplitude temperature fluctuation bursts have been analyzed and are related to spatiotemporal structures which propagate azimuthally and radially outward from the filaments. Details on the time scales of density, temperature and vorticity mixing in the interacting filaments will be presented along with fluid and kinetic simulation modeling results.
Plasma flow and acceleration in a magnetic nozzle with a converging-diverging magnetic configuration are important for applications in electric propulsion and fusion systems such as open mirrors and tokamak divertors. We report on some features of plasma acceleration in the magnetic nozzle that have been revealed in recent analytical and computational studies. A non-monotonic magnetic field with a local maximum is necessary for forming the quasineutral accelerating potential structure, with a unique velocity profile entirely determined by the magnetic field. The explicit form of the solution can be obtained in terms of the Lambert function. The fluid model has been further extended to include the effects of warm ions with anisotropic ion pressure. It is shown that the perpendicular ion pressure enhances plasma acceleration due to the mirror force. The kinetic effects have been investigated using a quasineutral hybrid model with kinetic ions and isothermal Boltzmann electrons. It is shown that in the cold-ion limit the velocity profile agrees well with the analytical theory. Full kinetic simulations, including the ions and electrons within a quasi-two-dimensional paraxial model, further confirmed these results. Further generalization includes the role of the induced azimuthal magnetic field and plasma rotation, i.e., coupling with Alfven wave dynamics. It is shown that the inhomogeneous magnetic field couples the axial plasma flow with the evolution of the azimuthal magnetic field and plasma rotation, resembling the problem of magnetically driven flow in astrophysical jets and winds. The role of the Alfven, slow, and fast magnetosonic point singularities in plasma acceleration is discussed.
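To illustrate the type of closed-form solution referred to above, here is a minimal sketch assuming cold ions, isothermal Boltzmann electrons, and quasineutral paraxial flow (the actual model in the talk may differ in detail). Flux conservation along a field line gives $n u / B = \mathrm{const}$, Boltzmann electrons give $n \propto e^{e\phi/T_e}$, and cold-ion energy conservation gives $\tfrac{1}{2} m_i u^2 + e\phi = \mathrm{const}$; combining these yields $M e^{-M^2/2} \propto B$ for the Mach number $M = u/c_s$. Requiring the smooth transonic solution to pass through $M = 1$ at the field maximum $B_m$ then gives $M^2 = -W\!\left(-(B/B_m)^2/e\right)$, with the principal branch $W_0$ describing the subsonic flow upstream of the magnetic throat and the $W_{-1}$ branch the supersonic flow downstream, so the velocity profile is indeed fixed entirely by the magnetic field profile.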
Long-lived particles (LLPs) are well-motivated signatures that can appear in many models of physics beyond the Standard Model. The ability to detect LLPs at current accelerator-based experiments is limited, as they may decay outside the tracking acceptance of these experiments, especially for LLPs with masses above the GeV scale and lifetimes near the limit set by Big Bang Nucleosynthesis, ∼10$^7$–10$^8$ m. In order to directly detect the decays of LLPs across a broad range of masses and lifetimes, the MATHUSLA experiment is proposed for the HL-LHC at CERN, to be located on the surface above the CMS experiment, with a decay volume of 100 m x 100 m x 30 m instrumented with plastic scintillators and SiPM readout. LLPs that decay within this volume are reconstructed by tracking their decay products and finding a displaced vertex. This talk presents the physics case and development progress of the MATHUSLA experiment.
Silicon photomultiplier (SiPM) technology has displaced photomultiplier tubes in the design of next-generation experiments in particle physics. This presentation will focus on astroparticle physics experiments that will use liquid argon or liquid xenon with SiPM photo-detectors for rare-event searches such as those for dark matter, neutrinoless double beta decay, solar neutrinos, supernova neutrinos, and coherent elastic neutrino-nucleus scattering. The photo-detector requirements for these experiments will be discussed, including ultraviolet photon detection efficiency (either direct sensitivity or with a wavelength-shifter), low radioactivity, and low noise rates to enable low thresholds. This talk will also feature the latest developments in photon-to-digital converter (PDC) technology, where signals from each photodiode are digitized in situ, and its proposed applications in future experiments.
The MoEDAL experiment, deployed at IP8 on the LHC ring, was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles and long-lived massive charged SUSY particles. We shall report on our search at the LHC's Run-2 for magnetic monopoles and dyons produced in p-p collisions and photon fusion, and detail our most recent result in this arena: the search for magnetic monopoles produced via the Schwinger mechanism in Pb-Pb collisions, recently published in Nature. The MoEDAL detector will be reinstalled for the LHC's Run-3 to continue the search for electrically and magnetically charged HIPs. As part of this effort we will initiate the search for massive, very long-lived SUSY particles to which MoEDAL has a competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), approved by CERN's Research Board, is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as 10$^{-3}$e (where e is the electron charge). Also, the MAPP detector in conjunction with MoEDAL's trapping detector gives us a unique sensitivity to extremely long-lived charged particles. MAPP also has some sensitivity to long-lived neutral particles. Additionally, we will very briefly present the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). We envisage that this detector will be deployed in the UGC1 gallery near IP8. This phase of the experiment is designed to maximize MoEDAL-MAPP's sensitivity to very long-lived neutral messengers of physics beyond the Standard Model.
One of the exciting new frontiers in cosmology and structure formation is the Epoch of Reionization (EoR), a period when the radiation from the early stars and galaxies ionized almost all the gas in the Universe. This epoch forms an important evolutionary link between the smooth matter distribution at early times and the highly complex structures seen today. Gaining insight into this epoch has been quite challenging because the current generation of telescopes is only able to probe the tail end of this process. Fortunately, a whole slew of instruments that have been specifically designed to study the high-redshift Universe (JWST, ALMA, Roman Space Telescope, HERA, SKA, CCAT-p, SPHEREx) are about to come online. This will unleash a flood of observational data that will usher the study of the EoR into a new, high-precision era. It is, therefore, imperative that theoretical/numerical models achieve sufficient accuracy and physical fidelity to meaningfully interpret these new data. In this talk, I will introduce the THESAN simulation framework, which is designed to efficiently leverage current and upcoming high-redshift observations to constrain the physics of reionization. The multi-scale nature of the process is tackled by coupling large-volume (~100s Mpc) simulations, designed to study the large-scale statistical properties of the intergalactic medium (IGM) undergoing reionization, with high-resolution (~10 pc) simulations that zoom in on single galaxies, which are ideal for predicting the resolved properties of the sources responsible for it. I will briefly discuss applications from the first set of papers, including predictions for high-redshift galaxy properties, the galaxy-IGM connection, Ly-α transmission and the back-reaction of reionization on galaxy formation. I will then highlight the potential for using line intensity mapping of spectral lines originating from the interstellar medium (ISM) of galaxies and the 21 cm emission from the neutral hydrogen gas in the Universe to constrain galaxy formation and cosmology. I will finish by highlighting how this numerical framework, coupled with accurate observational predictions, promises important and potentially transformative changes in our understanding of the primitive Universe.
The multimessenger binary neutron star merger GW170817 and subsequent LIGO-Virgo gravitational-wave discoveries are shedding new light on the ultra-dense matter inside neutron stars. With densities and pressures several times greater than those in atomic nuclei, neutron star cores harbour the most extreme matter in the Universe. Its composition remains an open question: does it consist entirely of hadrons, like neutrons and protons, or does a more exotic state, like quark matter, prevail at the highest densities? I will describe what gravitational-wave observations are revealing about the neutron star interior, and how future-generation observatories will revolutionize our understanding of ultra-dense matter.
Just over 65 years ago Burbidge, Burbidge, Fowler, and Hoyle (B2FH) charted the initial roadmap for nuclear astrophysics. This seminal work recognized that explaining the origins of the heavy elements such as lead, gold, and uranium requires at least two types of neutron capture nucleosynthesis processes with each having distinct astrophysical sites. At the time of B2FH the rapid neutron capture process (r-process) showed itself to be related to explosive astrophysical events largely via the signature of exotic, neutron-rich nuclei in the Solar abundances. Fast forward to today and we have now seen heavy element formation in the act via the impact of lanthanide elements on the observed light curve from the GW170817 merger of two neutron stars. Therefore, nowadays nucleosynthesis studies have several distinct types of observational information to assimilate, presenting the opportunity to make big leaps in our understanding of r-process sites. However, this requires careful consideration of the nuclear physics uncertainties associated with the vastly uncharted territory of neutron-rich nuclei. With both experimental and theoretical efforts providing key inputs for theoretical r-process studies, in this talk I will discuss how nuclear physics campaigns will play a central role in deciphering observables of heavy element production over the next decade.
Justin Furlotte is a Data Scientist with Fiddlehead Technology in Moncton, New Brunswick. His academic background includes a BSc in Mathematics-Physics from the University of New Brunswick, followed by an MSc in Mathematics at the University of British Columbia, where he researched the quantum Hall effect and quantum lattice systems. Justin has also previously worked with a thermal analysis company called C-Therm Technologies in Fredericton, New Brunswick, performing computational physics as a Research Scientist. Today, Justin still uses mathematics on a regular basis and spends most of his time creating statistical forecasts from the sales data of large clients in the food and beverage industry.
With a career path spanning over 20 years across several continents, I followed and adapted to the opportunities as they arose. Beginning from an applied physics base, I have worked in defence; solved product heat-treatment problems using first-principles physics; destroyed products; destroyed plasma-coating machines, then designed and built them; and developed thin-film coating material solutions from cutting tools to solar panels to erosion-corrosion coatings in gas turbine engines. The path to success has been a constant battle of destruction and redesign to test new concepts and bring them to market.
This process of creative destruction got me interested in the innovation process and teams, then management, then complex systems.
Now I am involved in two very different but complementary start-up companies: one in plasma physics that helps other companies grow, and one leveraging bleeding-edge developments in theoretical and applied physics to apply a general physics solution to any problem. The interesting part is how the different experiences have required different physics skill sets, but there is one core feature from my physics training I have needed every step of the way - this I will share with you.
Today we stand at an inflection point for society, one where we can accelerate the innovative growth for humanity with a minimum of harm or hit a Fermi bottleneck. Let’s focus on the former but be mindful of the latter.
Medical physicists are health care professionals with specialized training in the medical applications of physics. Their work often involves the use of x-rays, ultrasound, magnetic and electric fields, infra-red and ultraviolet light, heat and lasers in diagnosis and therapy. Medical physicists work in hospital diagnostic imaging departments, cancer treatment facilities, hospital-based research establishments, universities, government, and industry. The majority of medical physicists work in radiation therapy as clinical medical physicists and their role within this particular field will be highlighted. The education pathway to becoming a medical physicist will be discussed, including CAMPEP requirements for certification.
Building on our accurate measurement of the asymmetry of the $\beta$ direction with respect to the polarization of decaying $^{37}$K [B. Fenker et al., PRL 120, 062502], we plan further measurements of the momenta of the recoiling progeny nucleus in coincidence. $^{37}$K's decay to its isobaric analog state has sensitivity to unknown physics similar to that of neutron decay, while a nuclear structure feature, whereby the d$_{3/2}$ unpaired proton naturally produces a tiny nuclear magnetic moment, keeps known higher-order corrections small. The angular distribution of the outgoing leptons is predicted from their helicity combined with angular momentum conservation, and we have realized that one of our experiments would be the most direct measurement of the $\nu$ helicity since the 1958 Brookhaven measurement. Adding $\gamma$-ray detection with high-Z GAGG scintillators enables our search for a time-reversal-breaking correlation of $\beta$, $\nu$, and $\gamma$ momenta in radiative $\beta$ decay, sensitive to a hypothetical dark strongly interacting sector. Time-reversal-breaking interactions in the final nucleus in isospin-hindered $\beta$ decay compete with the Coulomb interaction instead of the strong interaction, potentially enhancing sensitivity by a factor of 1000 and making this approach complementary to neutron EDM and neutron resonance time-reversal tests. We are beginning a program to measure isospin breaking in isospin-hindered $^{47}$K and $^{45}$K decay to determine our sensitivity. $^{47}$K decay, since parent and progeny are near closed shells so there are few final states, may exhaust the expected matrix element size in analog-antianalog isospin mixing.
The nEXO experiment is a proposed next-generation liquid xenon detector to search for neutrinoless double beta decay (0νββ) of $^{136}$Xe. The experiment will use a 5-tonne monolithic single-phase liquid xenon time projection chamber, with the xenon enriched to 90% in $^{136}$Xe. Ionization electrons and scintillation photons from energy deposits in the Xe will be recorded by a segmented anode plane and a large SiPM array. This talk will present recent progress in the detector design, improved modelling of the signal readout, and the development of a deep-neural-network-based data analysis architecture to improve signal/background separation. These developments result in a 90% CL 0νββ half-life sensitivity of $1.35\times10^{28}$ yr in 10 years of data taking.
Probing electroweak physics at low energies plays an important role in the search for physics beyond the Standard Model. The exchange of Z bosons between an atom's electrons and quarks induces an incredibly small atomic transition which can be probed via an atomic parity-violation (APV) experiment. APV measurements are sensitive searches for leptoquarks and additional neutral gauge bosons and provide complementary results to higher-energy experiments. APV effects scale with the proton number roughly as $Z^3$. The extraction of electroweak physics from the observed signal requires atomic theory which is currently only available for alkali configurations. This makes neutral francium an ideal candidate for such experiments. Our goal is to measure APV effects using $10^6$–$10^7$ laser-trapped neutral francium atoms at ultracold temperatures. To this end, we have established an online neutral atom trap at the ISAC radioactive beam facility at TRIUMF in Vancouver. In this talk, I will discuss our progress towards an APV experiment in francium with a look at our recent observation of the highly forbidden 7s-8s magnetic dipole transition and our new detection scheme, bringing the observation of APV into reach.
Funding is provided by NSERC and by TRIUMF via NRC, and by the Universities of Manitoba and Maryland.
Several high-profile quantum algorithms have been superseded in the last decade, meaning that a better classical algorithm has been found. This has been especially pronounced in the field of quantum machine learning. While some methods have known bounds and are definitively faster than classical algorithms, the practicalities of finding a good working quantum algorithm capable of effecting real change remain a main goal of the field. I will review the current status of the field and summarize the open issues and the future outlook.
While photons are poised to play a key role for a wide range of quantum technologies, several experimental barriers still need to be overcome for most practical applications. In this talk, I will first present a brief overview of the advantages and drawbacks of using photons for quantum information, before discussing some important research directions currently being pursued in order to address these challenges.
Artificial intelligence plays an increasing role in many situations in our everyday lives. Its immense power finds applications in various fields, recently also including the field of quantum many-body physics. Artificial neural networks have led to improved numerical studies of qubit systems, increasing our understanding of these systems, which build the foundation of quantum computers and quantum simulators. In this talk, I will summarize recent breakthroughs achieved with artificial neural networks in quantum physics and provide an outlook of what to expect in the near future.
Resource theories provide a unifying framework to characterize the usefulness of quantum objects with respect to specified tasks. In this talk I will present the main ideas, showing that such a framework is quite general, and seemingly different phenomena can be all described within it. I will also chart some promising directions for future developments in this area of quantum information.
While the qubit is the basic unit in most quantum information devices and applications, there is another class of quantum systems that offers infinitely many states per degree of freedom. Bosonic systems, in this respect, are everywhere and provide many practical advantages. In this short talk, I will introduce the basics and features of bosonic quantum technologies, and try to convince you that we should go beyond qubits in the coming decade.
The quantum computing stack is the sequence of transformations that must be performed for the high-level description of a quantum algorithm to be executed on a (concrete or hypothetical) quantum computer. In this talk I will discuss old and new developments in the field and comment on the role that the quantum stack can play in the advent of practical quantum computing.
Coherent scattering of photons in a dilute vapour of alkali atoms provides a strong link between the quantum information stored in the photonic and collective spin Hilbert spaces. In our lab we are looking at the mapping of photonic quantum states into and out of collective spins. By continuously scattering, we are creating highly correlated beams exhibiting EPR entanglement as well as quadrature and intensity squeezing below the standard quantum limit.
Session IV: Physical bioenergetics: Energy fluxes, budgets, and constraints in cells
Bacteria are often assumed to allocate cellular resources to maximize their exponential growth rate. This postulate, derived from studies of Escherichia coli, is commonly interpreted as an economic principle, in which the cell balances supply of and demand for “metabolic currencies” such as amino acids during steady-state growth. However, testing these predictions has been a major experimental challenge. Here, we show that Bacillus subtilis, another model bacterial organism, deviates from this growth maximization paradigm. To this end, we modulated the rate of rRNA and ribosome synthesis by controlling the cellular GTP concentration. In nutrient-limited conditions, perturbations to ribosome production always reduced the growth rate. In stark contrast, under inhibition of translation with antibiotics, increased ribosome production led to faster growth. Using proteomics and LC/MS, we trace this submaximal growth to a reduction in GTP level upon translation inhibition, which leads to overproduction of metabolic enzymes at the expense of ribosomal proteins. We conclude that different organisms follow organism-specific resource allocation principles, perhaps as a consequence of evolution.
Perturbation experiments—where the response of a system of interest is observed after exposure to drugs or disruptions—are commonly used to identify interactions in biochemical reaction networks. However, it is often the case that the data is only analysed for its deterministic averages, and analysis techniques also rely on specific knowledge of each perturbation’s targets. We use constraints on interaction topology between the correlation and variation of molecular responses in two-component systems to analyse large-scale drug perturbation studies, in the absence of specific knowledge of the perturbations. We further show how analysis of variability in deterministic molecular responses is affected by non-linearity, stochasticity, and finite-sampling of perturbations.
Drug resistance is a global health threat that is undermining the advances of modern medicine. Non-genetic forms of drug resistance have been established over the last two decades to play an important role in drug resistance. However, the interplay between non-genetic and genetic forms of drug resistance is largely unknown, as are the evolutionary dynamics in fluctuating drug conditions.
Recently, we have shown using deterministic models and stochastic simulations that non-genetic drug resistance enhances the survival of a cell population undergoing drug treatment, while hindering the genetic evolution of drug resistance due to competition between non-genetically and genetically resistant subpopulations. This effect is enhanced in fluctuating drug conditions compared to constant drug conditions.
We are testing these predictions in evolution experiments on genetically engineered yeast harbouring synthetic drug resistance gene circuits. Synthetic resistance gene circuits are well characterized, mimic natural gene networks, and allow gene expression mean and “noise” (i.e., cell-to-cell variability among genetically identical cells) to be precisely controlled and quantified. Preliminary results from these evolution experiments in fluctuating drug conditions demonstrate that gene expression evolves to optimize growth rates, and, counterintuitively, that expression noise levels are reduced in fluctuating compared to constant drug conditions.
Overall, these investigations on quantitative model systems are enhancing our fundamental understanding of drug resistance evolution, which is essential to prolong and extend our armamentarium against drug-resistant infections.
The ionosphere, an ionized part of the Earth's atmosphere, affects radio waves passing through it. The ionosphere and structures within it can cause disruptions in communication, position, navigation, and timing (CPNT) systems that rely on radio signals. These effects are scale dependent and driven by plasma turbulence (irregularities) in the ionosphere. The impact of ionospheric plasma turbulence on CPNT systems is significant and ranges from correctable/manageable errors to catastrophic failure of CPNT systems. For most applications, these effects can be broadly categorized as deterministic or stochastic. This talk will outline the impact of plasma turbulence on CPNT systems and methods to mitigate some of these detrimental effects on these critical systems. The talk will also discuss the use of these impacted signals in fundamental research on plasma turbulence and ionospheric irregularities.
Plasma Immersion Ion Implantation (PIII) is a powerful high-fluence ion implantation technique in which the target to be implanted is immersed in a plasma containing the desired ion species. PIII finds a wide variety of applications in semiconductor processing. A more recent area of application for our PIII technology is treatment of candidate materials for Plasma Facing Components (PFCs) intended for use in plasma fusion devices such as the ITER tokamak. PIII can be used to simulate the high fluence ion bombardment encountered in plasma fusion devices, and therefore provides a useful tool for PFC testing. This talk will discuss various fundamental aspects of PIII which are relevant to the PFC testing problem, and present recent results in this area.
Plasma Immersion Ion Implantation (PIII) consists of immersing in a plasma a target (or electrode) biased with negative high-voltage (HV) pulses in order to drive ions into the target and change the target surface structure and/or composition. This process has broad applications in the field of materials processing as well as semiconductor manufacturing. Improving PIII operational efficiency depends on precise control of the ion fluence, which itself relies on rigorous empirical knowledge of the plasma behaviour during the HV pulses and in close proximity (~1 mm) to the electrode surface. The aim of this research is to study the behaviour of plasma parameters (electron density, electron and ion temperature, plasma potential and ion velocity) in a low-temperature inductively coupled plasma (ICP) chamber used for PIII. In order to obtain spatially resolved information with minimal plasma disturbance, Laser-Induced Fluorescence (LIF) was chosen to study the ion velocity distribution function. LIF measurements of the ion velocity distribution function during PIII have never been made, and will provide crucial insight into poorly known plasma dynamics. By monitoring the ion velocity in the region around the pulsed electrode, technologies such as semiconductor processing may become more efficient, less wasteful, and even more precise.
Ion temperature measurements were made in the bulk plasma during steady-state operation for a range of power and pressure values (350-500 W and 0.8-2 mTorr). It was found that the ion temperature increases with increasing pressure. This is counter-intuitive, since increased pressure means more neutral gas particles, which would imply more collisions between ions and neutrals and thereby a lower ion temperature. LIF was also used to perform spatially resolved measurements of the ion velocity distribution function in the vicinity of the pulsed HV electrode in order to measure the average ion velocity near the electrode, deduce the sheath structure and measure the Bohm velocity, $C_s$. It was found that the ion velocity reaches $C_s$ at 2 mm from the electrode surface. This falls within the theoretical estimate of the sheath length based on the electron density and temperature measured by means of Langmuir probes. This result is significant since the ion velocity and sheath length are essential parameters in an ICP used for PIII. Future experiments will focus on time-resolved measurements of the plasma under PIII-relevant conditions.
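For reference, the standard definitions that set the scales quoted above (textbook expressions, not results specific to this experiment) are

$$ C_s = \sqrt{\frac{k_B T_e}{m_i}}, \qquad \lambda_D = \sqrt{\frac{\epsilon_0 k_B T_e}{n_e e^2}}, $$

with the sheath thickness typically a few $\lambda_D$ for a floating surface and, for a strongly negatively biased electrode, of order $\lambda_D (2V_0/T_e)^{3/4}$ according to the Child-law estimate.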
Plasma discharges contain two distinct zones with different physical properties, namely the quasi-neutral bulk plasma and the sheath, where quasi-neutrality does not hold, separated by an intermediate transition zone called the pre-sheath [1]. In particular, the sheath has a strong impact on the entire gas discharge since it is where the plasma interacts with the boundaries. The plasma-sheath transition is still a subject of active research today, mainly due to its complex structure [2,3]. Modelling an entire plasma discharge including the dynamics of the sheaths is crucial to understand the behaviour of different plasmas, such as nano-particle creation in sputtering magnetron discharges [4], also observed in the coldest regions of tokamaks [5].
In this context, a new and reliable numerical model for low-temperature plasma discharges including the sheaths is currently under development. Although kinetic models and Particle-In-Cell (PIC) methods [6,7] are often preferred for their fidelity, they are limited by numerical constraints on the simulation time and memory requirements due to the high number of macro-particles necessary to accurately simulate high-density plasmas and the sheaths.
Fluid approaches are limited by the accuracy of the model itself, but are less demanding in computational resources and are still capable of giving insight into the main physical phenomena.
In this work, we focus on a 1D plasma fluid model adapted for the simulation of medium- to high-pressure ($10^{-1}$–$10^{2}$ Pa) direct-current (DC) argon discharges.
In particular, a non-quasi-neutral drift-diffusion model of two charged species (ions and electrons) was developed, aiming at correctly modelling the sheaths. The results are compared with PIC simulation outputs. Our results emphasize for the first time the importance of the ion temperature profile when ion collisionality in the sheaths is not negligible.
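To make the structure of such a model concrete, here is a minimal, self-contained sketch of a 1D non-quasi-neutral two-species drift-diffusion solver coupled to Poisson's equation. All parameters, boundary conditions, the explicit time stepping and the crude wall treatment are illustrative placeholders, not the model described above.

```python
# Minimal illustrative sketch of a 1D non-quasi-neutral two-species
# drift-diffusion model coupled to Poisson's equation, advanced with an
# explicit time step. All values are placeholders, NOT the model above.
import numpy as np

nx, L = 201, 0.02                      # grid points, domain length [m]
x = np.linspace(0.0, L, nx); dx = x[1] - x[0]
e, eps0 = 1.602e-19, 8.854e-12
mu_e, D_e = 30.0, 1.0                  # electron mobility [m^2/(V s)], diffusivity [m^2/s]
mu_i, D_i = 0.1, 0.005                 # ion mobility, diffusivity (placeholders)
ne = np.full(nx, 1e16)                 # electron density [m^-3]
ni = np.full(nx, 1e16)                 # ion density [m^-3]
dt = 1e-12                             # explicit time step [s]

def solve_poisson(ne, ni, V_left=-100.0, V_right=0.0):
    """Solve d2(phi)/dx2 = -e(ni - ne)/eps0 with Dirichlet walls (dense solve)."""
    A = np.zeros((nx, nx))
    b = -e * (ni - ne) / eps0 * dx**2
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = V_left, V_right
    for j in range(1, nx - 1):
        A[j, j - 1], A[j, j], A[j, j + 1] = 1.0, -2.0, 1.0
    return np.linalg.solve(A, b)

def flux(n, mu, D, E, sign):
    """Drift-diffusion particle flux at cell faces (centred differences)."""
    n_face = 0.5 * (n[1:] + n[:-1])
    E_face = 0.5 * (E[1:] + E[:-1])
    return sign * mu * n_face * E_face - D * np.diff(n) / dx

for step in range(200):                         # a few explicit steps
    phi = solve_poisson(ne, ni)
    E = -np.gradient(phi, dx)
    Ge = flux(ne, mu_e, D_e, E, -1.0)           # electrons drift against E
    Gi = flux(ni, mu_i, D_i, E, +1.0)           # ions drift along E
    ne[1:-1] -= dt * np.diff(Ge) / dx           # continuity, no volume sources
    ni[1:-1] -= dt * np.diff(Gi) / dx
    ne[[0, -1]] = ni[[0, -1]] = 1e12            # crude absorbing-wall proxy
```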
References
[1] Langmuir I, Proceedings of the National Academy of Sciences 14 627–637 (1928)
[2] Hershkowitz N, Physics of Plasmas 12 055502 (2005)
[3] Riemann K, Plasma Sources Science and Technology 18 014006 (2008)
[4] Arnas C et al., Phys. Plasmas 26, 053706 (2019)
[5] Arnas C et al., Plasma Physics and Controlled Fusion 52 124007 (2010)
[6] Hagelaar G et al., Journal of Applied Physics 93, 67 (2003)
[7] Sahu R et al., Phys. Plasmas 27, 113505 (2020)
Microwave plasmas are widely studied: they have characteristics that make them unique, can be generated at both low and high pressures, have relatively high densities of charged particles, and can be produced in different cavity geometries. Pulsing a microwave discharge has been shown to be beneficial for multiple applications; indeed, power interruption reduces gas heating and creates one more tuning parameter for the plasma. To investigate microwave plasma parameters, most OES diagnostic methods rely on a collisional-radiative model. These kinds of models assume an apparent steady state of the plasma to determine key plasma parameters from optical emission spectroscopy measurements. With pulsed plasmas, the steady-state hypothesis cannot always be made, so collisional-radiative models cannot be used to study them. A method relying on line trapping of argon 4p-4s transitions was developed to determine 4s argon level densities without assuming steady state. To verify this method, both OAS measurements and the line-ratio method were performed on a surface-wave plasma at pressures ranging from 500 mTorr to atmospheric pressure. This method was then used to study two pulsed microwave plasmas: a time-reversal plasma and a pulsed Tiago torch.
Cancer incidence is on the rise in Canada, and metastasis is often associated with lowered life expectancy. Bone, especially the spine, is a common site of metastasis for breast, lung and prostate cancers. Treatments for these tumors rely on heavy doses of chemotherapeutic agents and invasive surgical procedures, which usually extend onto healthy tissue. This difficult procedure often requires bone reconstruction and grafting, and also carries a high risk of open-wound infection. Cold plasma treatment promises to be a novel therapy to aid surgical intervention. While empirical plasma medicine shows promising results, the reaction mechanisms between plasma and tissues, the proper treatment dosage and the reactive species composition needed to reach hormesis are still largely unknown. Therefore, a plasma-bio interaction platform which combines a 3D-bioprinted tissue model with an automated cold plasma source is proposed. To ensure biocompatibility of the treatment, highly sensitive diagnostic techniques are necessary. By exploiting the thermo-optic effect in a fibre Bragg grating, measurements of the shift in the reflected wavelength when exposed to a plasma source were used to estimate the temperature. This technique, coupled to the plasma jet, brings a novel approach to temperature characterization. It shows that the effluent reaches a maximum temperature of about 40 °C while interacting with a dielectric surface. Similarly, colorimetric assays for nitrite and hydrogen peroxide detection have also confirmed that these long-lived species can be tailored through the electric pulse duration, the distance, the duration of treatment and the surrounding conditions. These results, combined with promising 2D in vitro treatment of the MDA-MB-231 breast cancer cell line, show great potential toward tailoring the plasma for personalized medicine.
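For context, the fibre Bragg grating temperature reading relies on the standard relation (quoted here in its textbook form, not as a result specific to this work)

$$ \frac{\Delta\lambda_B}{\lambda_B} = (\alpha_\Lambda + \alpha_n)\,\Delta T, $$

where $\lambda_B = 2 n_{\mathrm{eff}}\Lambda$ is the Bragg wavelength, $\alpha_\Lambda$ is the thermal expansion coefficient of the fibre and $\alpha_n$ its thermo-optic coefficient; for silica fibres near 1550 nm this corresponds to a sensitivity of roughly 10 pm/°C.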
The neutron lifetime from beta decay, $\tau$, is an important quantity for predictions in particle physics and cosmology. It is used to verify the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, the weak-force quark mixing matrix in the Standard Model, and for evaluating the abundances of light elements such as helium-4 created during big bang nucleosynthesis. Furthermore, there is a 3.6 σ discrepancy between neutron lifetime results from beam experiments ($\tau_{beam}$ = 887.7 ± 1.2 ± 1.9 s) and ultracold neutron (UCN) trap experiments ($\tau_{trap}$ = 877.75 ± 0.28 (+0.22/−0.16) s). The measurements should agree, since beam experiments measure daughter particles from beta decay, and trap experiments measure surviving neutrons. The discrepancy may be evidence of physics beyond the Standard Model or an undiscovered systematic effect. A more precise value of the neutron lifetime from beam or trap experiments provides a tighter constraint on the predictions in particle physics and cosmology that depend on it. PENeLOPE (Precision Experiment on the Neutron Lifetime Operating with Proton Extraction), developed by the Technical University of Munich, Germany, is a UCN magneto-gravitational trap experiment with a goal of determining the neutron lifetime to a precision of 0.1 s. In this presentation, I will briefly discuss the motivation for the measurement, how UCN are trapped in PENeLOPE, and how the experiment cycle of PENeLOPE is optimized to reach a sensitivity of 0.1 s.
Cosmological dark matter remains an important unsolved problem in physics. Direct detection using liquid argon offers exciting discovery potential down to the “neutrino fog”, with sensitivity to spin-independent WIMP-nucleon cross sections below $10^{-48}\,\mbox{cm}^{2}$. A program of phased deployment of ever more sensitive detectors will be described, including the upgraded DEAP-3600 experiment at SNOLAB, the DarkSide-20k experiment at the Gran Sasso Laboratory, and ARGO, a future multi-hundred-tonne detector at SNOLAB. We discuss control of important backgrounds throughout the program and highlight some of the technologies used to reduce those backgrounds, including surface coatings, low-radon assembly, and readout and electronics.
In the fall of 2019, the NEWS-G experiment used its latest detector, a 140 cm diameter Spherical Proportional Counter (SPC), to search for low-mass dark matter at the Laboratoire souterrain de Modane (LSM) in France. SPCs are metallic spheres filled with gas, with a high-voltage anode at the centre that attracts and amplifies ionization charges coming from atomic recoils. With the sphere filled with pure methane, hydrogen was used as the target to produce new limits on the proton spin-dependent cross-section around masses of 1 GeV.
This talk will first introduce the NEWS-G experiment and describe the commissioning at the LSM with the shielding used, the SPC detection principle and the new multi-anode sensor. It will then focus on the calibrations using a UV laser and argon-37, as well as the background discrimination methods to remove alpha-induced events and spurious pulses coming from the electronics. Finally, it will explain the profile likelihood ratio method that was used in order to derive constraints on WIMP mass and cross-section.
Measurement of the charged-pion branching ratio to electrons versus muons, Re/μ, is extremely sensitive to a wide variety of new-physics effects. The precision of the SM prediction for Re/μ is ~1 part in 10^4, 15 times more precise than the current experimental result. A next-generation experiment, PIONEER, aims at reducing the precision gap between theory and experiment, testing lepton flavor universality at an unprecedented level and probing new-physics mass scales up to the PeV range. Additionally, PIONEER aims at a 3- to 10-fold improvement in the pion beta decay, π+ → π0e+ν(γ), measurement, which determines |V_ud| in a theoretically pristine manner. This measurement would shed new light on existing tensions in CKM matrix unitarity.
PIONEER will use a combination of new detector technologies based on an LGAD silicon tracking target and a deep, high-solid-angle-coverage LXe calorimeter featuring excellent energy and time resolution. I'll discuss PIONEER's detector concept and goals in light of previous experimental designs and achievements.
An update of the R&D associated with upgrading the SuperKEKB e+e− collider with polarized electron beams is presented. The Chiral Belle physics program enables a set of unique precision measurements using the Belle II detector. It includes measurements of $\sin^2\theta_W$ via separate left-right asymmetry ($A_{LR}$) measurements in $e^+e^−$ annihilations to pairs of electrons, muons, taus, charm and b-quarks at 10 GeV, which yield a precision matching that of the LEP/SLC world average and uniquely probe the running of $\sin^2\theta_W$ with high precision. It will also provide the highest-precision measurements of neutral-current universality ratios, and precision measurements of tau lepton properties, including the tau g-2, as probes for new physics. After reviewing developments on the physics potential, this presentation will report on developments related to the provision of the polarized source, the new components of the accelerator lattice that rotate the electron spin from transverse to longitudinal at the interaction point, and polarimetry of the electron beam.
After reviewing some key hints and puzzles from the early universe, I will introduce recent joint work with Neil Turok suggesting a rigid and predictive new approach to addressing them. Our universe seems to be dominated by radiation at early times, and positive vacuum energy at late times. Taking the symmetry and analyticity properties of such a spacetime seriously leads to a new formula for the gravitational entropy of our universe, and a picture in which the Big Bang may be regarded as a kind of mirror.
I will explain how this line of thought suggests new explanations for a number of observed properties of the universe, including: its homogeneity, isotropy and flatness; the arrow of time (i.e. the fact that entropy increases away from the bang); the nature of dark matter (which, in this picture, is a right-handed neutrino, radiated from the early universe like Hawking radiation from a black hole); the origin of the primordial perturbations; and even the existence of three generations of standard model fermions. I will discuss some observational predictions that will be tested in the coming decade, and some key open questions.
Van Roosbroeck’s equations constitute a versatile tool to determine the dynamics of electrons under time- and space-dependent perturbations. Extensively utilized in ordinary semiconductors, their potential to model devices made from topological materials remains untapped. In this talk, we will adapt van Roosbroeck’s equations to theoretically study the bulk response of a Weyl semimetal to an ultrafast and spatially localized light pulse in the presence of a quantizing magnetic field. We predict a transient oscillatory photovoltage that originates from the chiral anomaly. The oscillations take place at the plasma frequency (THz range) and are damped by intervalley scattering and dielectric relaxation. Our results illustrate the ability of van Roosbroeck’s equations to unveil the interplay between electronic band topology and ultrafast carrier dynamics in microelectronic devices.
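For readers less familiar with them, van Roosbroeck's equations in their standard semiconductor form are (quoted here for orientation; the adaptation to Weyl semimetals discussed in the talk would augment these with anomaly-related terms, which are not written out here)

$$ \frac{\partial n}{\partial t} = \frac{1}{q}\nabla\cdot\mathbf{J}_n + G - R, \qquad \mathbf{J}_n = q\mu_n n\,\mathbf{E} + qD_n\nabla n, $$
$$ \frac{\partial p}{\partial t} = -\frac{1}{q}\nabla\cdot\mathbf{J}_p + G - R, \qquad \mathbf{J}_p = q\mu_p p\,\mathbf{E} - qD_p\nabla p, $$
$$ \nabla\cdot(\varepsilon\nabla\phi) = -q\,(p - n + N_D^+ - N_A^-), \qquad \mathbf{E} = -\nabla\phi. $$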
A quantum computer attains computational advantage when outperforming the best classical computers running the best-known algorithms on well-defined tasks. We report quantum computational advantage using Borealis, a photonic processor offering dynamic programmability on all gates implemented. We carry out Gaussian boson sampling (GBS) on 216 squeezed modes entangled with three-dimensional connectivity, using a time-multiplexed and photon-number-resolving architecture. On average, it would take more than 9,000 years for the best available algorithms and supercomputers to produce, using exact methods, a single sample from the programmed distribution, whereas Borealis requires only 36 μs. This runtime advantage is over 50 million times as extreme as that reported from earlier photonic machines. Ours constitutes a very large GBS experiment, registering events with up to 219 photons and a mean photon number of 125. This work is a critical milestone on the path to a practical quantum computer, validating key technological features of photonics as a platform for this goal.
Tissue material properties can change drastically during embryonic development, reminiscent of rigidity transitions in physics. However, measuring the transitions or learning how to control the transitions is challenging experimentally. Theoretical and computational models provide new powerful tools to offer hypotheses on how to control the transitions. In this talk, I will introduce background on a commonly used tissue model, vertex models. I will highlight recent studies on the role of collective tissue mechanics in development and disease. I will then present our research on developing computational models to study the tissue material properties and their impact on cellular functions and coordination thereof.
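As background for that discussion, the energy functional of the simplest 2D vertex model (a commonly used form in the literature, given here for orientation rather than as the specific model of the talk) is

$$ E = \sum_{\alpha} \left[ K_A\,(A_\alpha - A_0)^2 + K_P\,(P_\alpha - P_0)^2 \right], $$

where $A_\alpha$ and $P_\alpha$ are the area and perimeter of cell $\alpha$, and $K_A$, $K_P$, $A_0$, $P_0$ are model parameters; in this class of models a rigidity transition has been reported as the target shape index $p_0 = P_0/\sqrt{A_0}$ crosses a value near 3.81.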
Many physics research innovations have made their way into very successful commercial products that have transformed our day-to-day lives. In recent years, more and more researchers are looking into how their fundamental and/or applied research results can be commercialized. In this talk I will present the various phases of the transition from research lab to industry, including the skill sets necessary for success in physics careers in industry and in the commercialization of a product. I will also share some of the success stories of our Thin Films & Photonics Research Group (GCMP).
Mike has a Bachelor of Science degree from the University of New Brunswick. His post-graduate work focussed primarily on molecular physics and included both a Masters degree from the University of New Brunswick and a PhD from the University of Waterloo. After graduation, Mike's career diversified, working first in astrophysics during a post-doctoral fellowship at NASA's Jet Propulsion Lab. Following this position, Mike moved into the field of nuclear physics in Chalk River, Ontario, with Bubble Technology Industries (BTI) as a research scientist. Mike joined Green Imaging Technologies (GIT) in September 2015 as a Principal Research Scientist. Mike works closely with GIT's research team innovating new NMR techniques and tools for their clients. In this talk, Mike will discuss how he ended up with such a diversified career in physics, share some of the things he has learned from over 20 years of experience working in physics research and development, and talk about his current work in NMR for the energy industry.
Chaired by Daniel Cluff
Patrick Reid, Moltex Clean Energy
Justin Furlotte, Fiddlehead Technology
Troy vom Braucke, GP Plasma
Jonathan Dysart, Horizon Health
Daniel Cluff, Deep Mining
Pandurang Ashrit, Universite de Moncton
Michael Dick, Green Imaging Technologies
Neutron beta decay is a fundamental nuclear process that provides a means to perform precision measurements that test the limits of our present understanding of the weak interaction described by the Standard Model of particle physics and puts constraints on physics beyond the Standard Model. The Nab experiment will measure 'a', the electron-neutrino angular correlation parameter, to a precision of $\delta a/a \sim 10^{-3}$, and 'b', the Fierz interference term, to a precision of $\delta b = 3\times10^{-3}$. The Nab experiment implements large-area segmented silicon detectors to measure the proton momentum and the electron energy to reconstruct a and b. The Nab silicon detectors were characterized with proton and electron sources prior to installation into the Nab experiment at the SNS at ORNL. This talk will present an overview and status of the Nab experiment and focus on preliminary measurements of the electronic response of the Nab detector pixels and the reconstructed energies of the incident radiation using proton and electron sources under various experimental conditions, performed at the University of Manitoba. The reconstructed proton energy was measured while varying the detector temperature, the observed pixel location, the detector bias voltage, and the proton accelerating potential, respectively. The proton rates in neighbouring detector pixels, during an incremental deflection of the proton beam across the pixel boundary, were also measured.
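For orientation, the unpolarized beta-decay distribution from which these parameters are defined (the standard Jackson-Treiman-Wyld form, quoted here for reference rather than as the Nab analysis formula) is

$$ d\Gamma \propto F(E_e)\,p_e E_e\,(E_0 - E_e)^2 \left[ 1 + a\,\frac{\mathbf{p}_e\cdot\mathbf{p}_\nu}{E_e E_\nu} + b\,\frac{m_e}{E_e} \right] dE_e\, d\Omega_e\, d\Omega_\nu, $$

so that 'a' is reconstructed from the electron-neutrino correlation (inferred from the proton momentum and electron energy) and 'b' distorts the electron energy spectrum.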
In the past few years, the prospect of probing fundamental symmetries with radioactive molecules has generated significant interest in the field of low-energy, high-precision tests of the Standard Model of particle physics. Indeed, tailored molecules containing short-lived radioactive atoms are predicted to be especially sensitive to violations of fundamental symmetries such as a permanent electric dipole moment (EDM) of an electron, nuclear spin-dependent parity violation, or nuclear moments violating both parity (P) and time reversal symmetry (T).
In light of these intriguing physics opportunities, our RadMol collaboration aims to establish a new laboratory dedicated to fundamental physics research using radioactive molecules. Coupled to TRIUMF's radioactive ion beam facilities ISAC and ARIEL, this laboratory will strongly benefit from TRIUMF's unique capabilities in rare-isotope production, especially in its upcoming multi-beam operation. The initial science focus of RadMol will be on molecular electric dipole moments with unprecedented sensitivity to nuclear time-reversal breaking Schiff moments. This talk will present the experimental program of our new radioactive molecule laboratory at TRIUMF, including the most recent results from molecular beam development at TRIUMF.
Low-energy precision electroweak physics tests are advocated as part of the search for physics beyond the Standard Model. We are working towards a measurement of atomic parity violation (APV) in francium (Z = 87), the heaviest alkali, in a magneto-optical trap (MOT) online to ISAC at TRIUMF. The transition of interest in Fr is between the 7S and 8S states, where the parity-violating (PV) observable will be the interference between a parity-conserving “Stark-induced” E1 amplitude, created by applying a dc electric field to mix S and P states, and the vastly weaker PV amplitude. The presence of an M1 amplitude poses additional challenges, as it can also interfere with the Stark-induced E1 amplitude and mimic a PV signal. Using a cavity with nearly 4000× power buildup, we observed the faint M1 transition, which is about 13 orders of magnitude weaker than an allowed E1 transition. To characterize it to higher precision, we are deploying a highly efficient detection scheme involving bursts of light from a cycling transition. I will report on these developments and review the M1 results obtained so far.
This work is supported by NSERC, NRC, University of Manitoba, and University of Maryland
ALPHA-g completed a successful run in 2022 in pursuit of measuring the gravitational mass of antihydrogen. The apparatus was designed to test whether antimatter follows Einstein's Weak Equivalence Principle (WEP), whereby the acceleration due to gravity that a body experiences is independent of its structure or composition. A measurement of the gravitational mass of antimatter has never been made before, as previous experiments used charged particles and were therefore dominated by electromagnetic forces. The ALPHA-g apparatus uses electrically neutral antihydrogen atoms produced in a vertical Penning-Malmberg trap and confined in a magnetic-minimum trap. By measuring the antihydrogen annihilation positions after a controlled magnetic release of the atoms, the gravitational mass of antihydrogen can be determined. Annihilation positions are reconstructed using a radial time projection chamber (rTPC) surrounding the trapping volume. To accurately determine vertical annihilation positions, precise detector calibrations are needed.
A laser calibration system was developed and used to gather drift time data in the rTPC, which results in vertical position information, and can be used to monitor changes in pressure, temperature, and magnetic field. In particular, we can calculate the Lorentz angle which is then used in reconstruction to accurately determine the annihilation positions. Simulations are also required to determine the expected drift time and Lorentz angle. Using Geant4 and Garfield++ toolkits, we can simulate these observables from electrons drifting through the gas portion of the ALPHA-g detector. In this talk I will discuss the laser calibration system for the rTPC and the results of the drift time and Lorentz angle data taken over the course of the 2022 run period. I will further discuss how these results are used in the reconstruction of antihydrogen annihilations by comparing with simulation, and how this calibration will be implemented in future ALPHA-g measurements.
Ultracold neutrons (UCNs) are a powerful tool for probing fundamental physics, enabling precision measurements in a variety of research areas, such as beta decay, electric dipole moments, and gravitational quantum states. To advance these experimental efforts it is necessary to develop new, high-density UCN sources capable of providing order-of-magnitude improvements in statistical sensitivity. The TRIUMF UltraCold Advanced Neutron (TUCAN) collaboration is building a new spallation-driven superthermal UCN source using superfluid helium, which will enable a new generation of UCN-based precision experiments. The performance of this source will depend on the storage lifetime of UCNs in the superfluid volume, which is expected to have a temperature-dependence given by $\tau^{-1} = BT^7$. In this talk, I will present the results of experimental efforts to measure this dependence using a prototype UCN source.
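As a toy illustration of how the $\tau^{-1} = BT^7$ dependence could be extracted from storage-lifetime measurements (synthetic placeholder data, not TUCAN results; the constant loss rate gamma0 is an assumption added to represent temperature-independent losses):

```python
# Illustrative sketch: extract B from storage-lifetime data by fitting the
# phonon-upscattering law 1/tau = B*T^7 (plus an assumed constant loss rate
# gamma0). The data below are synthetic placeholders, NOT TUCAN measurements.
import numpy as np
from scipy.optimize import curve_fit

def inverse_lifetime(T, B, gamma0):
    """Total loss rate: temperature-dependent upscattering plus constant losses."""
    return B * T**7 + gamma0

T = np.array([0.9, 1.0, 1.1, 1.2, 1.3])            # temperature [K]
tau = np.array([55.0, 38.0, 26.0, 18.0, 13.0])     # storage lifetime [s] (fake)
popt, pcov = curve_fit(inverse_lifetime, T, 1.0 / tau, p0=[0.01, 0.001])
B_fit, gamma0_fit = popt
print(f"B = {B_fit:.3e} s^-1 K^-7, gamma0 = {gamma0_fit:.3e} s^-1")
```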
Motivated by fundamental symmetry tests, a non-zero measurement of a permanent electric dipole moment (EDM) would represent a clear signal of the violation of CP symmetry. The imbalance between matter and antimatter observed in our Universe is believed to arise from such violations, although the amount present in the Standard Model (SM) is insufficient. Many extensions to the SM predict EDMs much larger than the SM itself (<<$10^{-30}$ e cm) that could be within experimental reach. Experimentally, EDMs of nuclei in atoms or molecules are only accessible through the Schiff moment, which measures the difference between the charge and dipole distributions. To relate the Schiff moment to the underlying EDM, a nuclear structure model must be used. To date, the upper limit on the EDM of $^{199}$Hg remains the most stringent.
In order to guide the nuclear structure models required for the calculation of the Schiff moment of $^{199}$Hg, we have undertaken detailed inelastic scattering measurements on $^{198,200}$Hg to map the distribution of both E2 and E3 strength in these nuclei, since the Schiff moment is proportional to the product of the nuclear deformation parameters $\beta_2\beta_3$. Performing such an experiment on $^{199}$Hg is challenging; therefore, several experiments on $^{198,200}$Hg were performed at the Maier-Leibnitz Laboratorium of the Ludwig-Maximilians Universität München. A 22 MeV deuteron beam bombarded targets of the compound $^{198,200}$Hg$^{32}$S, and the scattered particles were separated using the quadrupole three-dipole (Q3D) magnetic spectrograph. Very high-statistics data sets were collected from this reaction, resulting in the observation of a considerable number of new states. The cross-section angular distributions are used to provide information on the spins and parities, and ultimately will be used to determine the excitation matrix elements.
Details of the analysis of the $^{198}Hg(d,d’)$ reaction to date will be given.
[1] T. E. Chupp, P. Fierlinger, M. J. Ramsey-Musolf, and J. T. Singh, Electric Dipole Moments of Atoms, Molecules, Nuclei, and Particles, Rev. Mod. Phys. 91, 015001 (2019). https://doi.org/10.1103/RevModPhys.91.015001
Over the last years, artificial neural networks have been explored as powerful and systematically tuneable ansatz to represent quantum wave functions. Such numerical models can tomographically reconstruct quantum states and operator expectation values from a finite amount of measurements. At the same time, artificial neural networks can find the ground state wave function of a given Hamiltonian via variational energy minimization.
While both approaches experience individual limitations, combining them leads to significant enhancements in the variational ground state search by naturally finding an improved network initialization from a limited amount of measurement data. Additional specific modifications of the network model and its implementation can further optimize the performance of variational simulations for quantum many-body systems, providing significant insights into their behaviour.
In this talk, I will discuss the representation of quantum states with artificial neural networks and demonstrate achievable enhancements by adapting network models, optimization procedures, and data generation processes.
Viewing neural quantum state tomography (NQST) as a flexible method for capturing classical snapshots of experimentally prepared quantum states opens doors to many applications of it in quantum simulation. In this talk we first review "Neural Error Mitigation" (Nat Mach Intell 4, 2022) for improving predictions of various observables obtained via quantum simulation of quantum states of interest in quantum chemistry and quantum electrodynamics. We then show that incorporating classical shadow tomography in NQST significantly improves its learning of complex quantum states, and numerically demonstrate this advantage through case studies in atomic and condensed-matter physics.
In the past couple of years, machine learning has permeated many areas of physics and found numerous applications in condensed matter and chemistry. In particular, we have witnessed remarkable progress toward developing computational methods using neural networks as variational estimators. Variational representations of quantum states abound and have successfully been used to guess ground-state properties of quantum many-body systems. Some are based on partial physical insight (Jastrow, Gutzwiller projected, and fractional quantum Hall states, for instance), and others operate as a black box that may contain information about the underlying structure of entanglement and correlations (tensor networks) and offer the advantage of a large set of variational parameters that can be efficiently optimized. However, using variational approaches to study excited states and, in particular, calculating the excitation spectrum, remains a challenge.
In this talk, I present two variational methods to calculate the dynamical properties and spectral functions of quantum many-body systems in the frequency domain: The first one consists of encoding the Green's function of the problem in the form of a neural network. We introduce a natural gradient descent approach to solve linear systems of equations and use Monte Carlo to obtain the dynamical correlation function. The second approach is based on a Chebyshev expansion of the spectral function and a neural network representation for the wave functions. The Chebyshev moments are obtained by recursively applying the Hamiltonian and projecting on the space of variational states. I will present results for the one-dimensional and two-dimensional Heisenberg model on the square lattice and compare to those obtained by other methods.
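To make the Chebyshev-expansion idea concrete, here is a minimal numerical sketch that uses exact state vectors on a tiny tight-binding chain in place of the neural-network ansatz described in the talk (a kernel-polynomial reconstruction with a Jackson kernel; illustrative only, not the authors' implementation, and the system, parameters, and function names are placeholders):

```python
# Minimal sketch of a Chebyshev (kernel-polynomial) expansion of a spectral
# function, using exact state vectors on a tiny tight-binding chain instead of
# the neural-network ansatz discussed in the talk. Illustrative only.
import numpy as np

def chebyshev_spectral_function(H, psi0, n_moments=200, n_omega=400):
    """A(omega) = <psi0| delta(omega - H) |psi0> from Chebyshev moments."""
    # Rescale the spectrum into (-1, 1); exact diagonalization is used only
    # for this rescaling, which is fine for a toy-sized Hilbert space.
    evals = np.linalg.eigvalsh(H)
    a = 0.5 * (evals.max() - evals.min()) / 0.99
    b = 0.5 * (evals.max() + evals.min())
    Ht = (H - b * np.eye(len(H))) / a

    # Chebyshev recursion |t_n> = 2 Ht |t_{n-1}> - |t_{n-2}>, moments mu_n = <psi0|t_n>
    t_prev, t_curr = psi0.copy(), Ht @ psi0
    mu = np.zeros(n_moments)
    mu[0], mu[1] = np.vdot(psi0, t_prev).real, np.vdot(psi0, t_curr).real
    for n in range(2, n_moments):
        t_next = 2.0 * (Ht @ t_curr) - t_prev
        mu[n] = np.vdot(psi0, t_next).real
        t_prev, t_curr = t_curr, t_next

    # Jackson kernel to damp Gibbs oscillations
    n = np.arange(n_moments); N = n_moments
    g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
         + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

    # Reconstruct on the rescaled interval, then map back to physical omega
    xs = np.linspace(-0.99, 0.99, n_omega)
    cheb = np.cos(np.outer(np.arange(n_moments), np.arccos(xs)))   # T_n(x)
    A = g[0] * mu[0] * cheb[0] + 2.0 * np.einsum('n,n,nw->w', g[1:], mu[1:], cheb[1:])
    A /= np.pi * np.sqrt(1.0 - xs**2)
    return a * xs + b, A / a

# Example: local spectral function on the first site of a 6-site chain
L_sites = 6
H = -1.0 * (np.eye(L_sites, k=1) + np.eye(L_sites, k=-1))
psi0 = np.zeros(L_sites); psi0[0] = 1.0
omega, A = chebyshev_spectral_function(H, psi0)
```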
References:
1. “Chebyshev expansion of spectral functions using restricted Boltzmann machines”, D. Hendry, Hongwei Chen, Phillip Weinberg, A. E. Feiguin, Phys. Rev. B 104, 205130 (2021).
2. “A machine learning approach to dynamical properties of quantum many-body systems”, Douglas Hendry, Adrian E. Feiguin, Phys. Rev. B 100, 245123 (2019).
3. “Systematic improvement of neural network quantum states using a Lanczos recursion”, Hongwei Chen, Douglas Hendry, Phillip Weinberg, Adrian E. Feiguin, NeurIPS 2022 (accepted), arXiv:2206.14307.
4. “Neural network representation for minimally entangled typical thermal states”, Douglas Hendry, Hongwei Chen, Adrian Feiguin, Phys. Rev. B 106, 165111 (2022).
SUPPORT: NSF Grant No. DMR-2120501
Pre-Recorded
The University of Waterloo's graduate program in Quantum Information has been delivered as a collaboration between the Institute for Quantum Computing (IQC) and seven depart